Proceedings of the 43rd Annual Meeting of the ACL, pages 215–222, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Question Answering as Question-Biased Term Extraction: A New Approach toward Multilingual QA Yutaka Sasaki Department of Natural Language Processing ATR Spoken Language Communication Research Laboratories 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288 Japan [email protected] Abstract This paper regards Question Answering (QA) as Question-Biased Term Extraction (QBTE). This new QBTE approach liberates QA systems from the heavy burden imposed by question types (or answer types). In conventional approaches, a QA system analyzes a given question and determines the question type, and then it selects answers from among answer candidates that match the question type. Consequently, the output of a QA system is restricted by the design of the question types. The QBTE directly extracts answers as terms biased by the question. To confirm the feasibility of our QBTE approach, we conducted experiments on the CRL QA Data based on 10-fold cross validation, using Maximum Entropy Models (MEMs) as an ML technique. Experimental results showed that the trained system achieved 0.36 in MRR and 0.47 in Top5 accuracy. 1 Introduction The conventional Question Answering (QA) architecture is a cascade of the following building blocks: Question Analyzer analyzes a question sentence and identifies the question types (or answer types). Document Retriever retrieves documents related to the question from a large-scale document set. Answer Candidate Extractor extracts answer candidates that match the question types from the retrieved documents. Answer Selector ranks the answer candidates according to the syntactic and semantic conformity of each answer with the question and its context in the document. Typically, question types consist of named entities, e.g., PERSON, DATE, and ORGANIZATION, numerical expressions, e.g., LENGTH, WEIGHT, SPEED, and class names, e.g., FLOWER, BIRD, and FOOD. The question type is also used for selecting answer candidates. For example, if the question type of a given question is PERSON, the answer candidate extractor lists only person names that are tagged as the named entity PERSON. The conventional QA architecture has a drawback in that the question-type system restricts the range of questions that can be answered by the system. It is thus problematic for QA system developers to carefully design and build an answer candidate extractor that works well in conjunction with the questiontype system. This problem is particularly difficult when the task is to develop a multilingual QA system to handle languages that are unfamiliar to the developer. Developing high-quality tools that can extract named entities, numerical expressions, and class names for each foreign language is very costly and time-consuming. Recently, some pioneering studies have investigated approaches to automatically construct QA components from scratch by applying machine learning techniques to training data (Ittycheriah et al., 2001a)(Ittycheriah et al., 2001b)(Ng et al., 2001) (Pasca and Harabagiu)(Suzuki et al., 2002)(Suzuki 215 Table 1: Number of Questions in Question Types of CRL QA Data # of Questions # of Question Types Example 1-9 74 AWARD, CRIME, OFFENSE 10-50 32 PERCENT, N PRODUCT, YEAR PERIOD 51-100 6 COUNTRY, COMPANY, GROUP 100-300 3 PERSON, DATE, MONEY Total 115 et al., 2003) (Zukerman and Horvitz, 2001)(Sasaki et al., 2004). 
These approaches still suffer from the problem of preparing an adequate amount of training data specifically designed for a particular QA system because each QA system uses its own questiontype system. It is very typical in the course of system development to redesign the question-type system in order to improve system performance. This inevitably leads to revision of a large-scale training dataset, which requires a heavy workload. For example, assume that you have to develop a Chinese or Greek QA system and have 10,000 pairs of question and answers. You have to manually classify the questions according to your own questiontype system. In addition, you have to annotate the tags of the question types to large-scale Chinese or Greek documents. If you wanted to redesign the question type ORGANIZATION to three categories, COMPANY, SCHOOL, and OTHER ORGANIZATION, then the ORGANIZATION tags in the annotated document set would need to be manually revisited and revised. To solve this problem, this paper regards Question Answering as Question-Biased Term Extraction (QBTE). This new QBTE approach liberates QA systems from the heavy burden imposed by question types. Since it is a challenging as well as a very complex and sensitive problem to directly extract answers without using question types and only using features of questions, correct answers, and contexts in documents, we have to investigate the feasibility of this approach: how well can answer candidates be extracted, and how well are answer candidates ranked? In response, this paper employs the machine learning technique Maximum Entropy Models (MEMs) to extract answers to a question from documents based on question features, document features, and the combined features. Experimental results show the performance of a QA system that applies MEMs. 2 Preparation 2.1 Training Data Document Set Japanese newspaper articles of The Mainichi Newspaper published in 1995. Question/Answer Set We used the CRL1 QA Data (Sekine et al., 2002). This dataset comprises 2,000 Japanese questions with correct answers as well as question types and IDs of articles that contain the answers. Each question is categorized as one of 115 hierarchically classified question types. The document set is used not only in the training phase but also in the execution phrase. Although the CRL QA Data contains question types, the information of question types are not used for the training. This is because more than the 60% of question types have fewer than 10 questions as examples (Table 1). This means it is very unlikely that we can train a QA system that can handle this 60% due to data sparseness. 2 Only for the purpose of analyzing experimental results in this paper do we refer to the question types of the dataset. 2.2 Learning with Maximum Entropy Models This section briefly introduces the machine learning technique Maximum Entropy Models and describes how to apply MEMs to QA tasks. 2.2.1 Maximum Entropy Models Let X be a set of input symbols and Y be a set of class labels. A sample (x, y) is a pair of input x={x1,..., xm} (xi ∈X) and output y ∈Y. 1Presently, National Institute of Information and Communications Technology (NICT), Japan 2A machine learning approach to hierarchical question analysis was reported in (Suzuki et al., 2003), but training and maintaining an answer extractor for question types of fine granularity is not an easy task. 
The Maximum Entropy Principle (Berger et al., 1996) is to find a model $p^* = \arg\max_{p \in C} H(p)$, that is, a probability model $p(y|x)$ that maximizes the entropy $H(p)$. Given data $(x^{(1)}, y^{(1)}), \ldots, (x^{(n)}, y^{(n)})$, let $\bigcup_k (x^{(k)} \times \{y^{(k)}\}) = \{\langle \tilde{x}_1, \tilde{y}_1 \rangle, \ldots, \langle \tilde{x}_i, \tilde{y}_i \rangle, \ldots, \langle \tilde{x}_m, \tilde{y}_m \rangle\}$. This means that we enumerate all pairs of an input symbol and a label and represent them as $\langle \tilde{x}_i, \tilde{y}_i \rangle$ using index $i$ ($1 \leq i \leq m$). In this paper, the feature function $f_i$ is defined as follows: $f_i(x, y) = 1$ if $\tilde{x}_i \in x$ and $y = \tilde{y}_i$, and $f_i(x, y) = 0$ otherwise. We use all combinations of input symbols in $x$ and class labels as features (or feature functions) of MEMs. With Lagrange multipliers $\lambda = \lambda_1, \ldots, \lambda_m$, the dual function of $H$ is $\Psi(\lambda) = -\sum_x \tilde{p}(x) \log Z_\lambda(x) + \sum_i \lambda_i \tilde{p}(f_i)$, where $Z_\lambda(x) = \sum_y \exp\big(\sum_i \lambda_i f_i(x, y)\big)$, and $\tilde{p}(x)$ and $\tilde{p}(f_i)$ indicate the empirical distributions of $x$ and $f_i$ in the training data. The dual optimization problem $\lambda^* = \arg\max_\lambda \Psi(\lambda)$ can be solved efficiently as an unconstrained optimization problem. As a result, the probabilistic model $p^* = p_{\lambda^*}$ is obtained as $p_{\lambda^*}(y|x) = \frac{1}{Z_\lambda(x)} \exp\big(\sum_i \lambda_i f_i(x, y)\big)$.

2.2.2 Applying MEMs to QA Question analysis is a classification problem that classifies questions into different question types. Answer candidate extraction is also a classification problem that classifies words into answer types (i.e., question types), such as PERSON, DATE, and AWARD. Answer selection is likewise a classification problem that classifies answer candidates as positive or negative. Therefore, we can apply machine learning techniques to generate classifiers that work as components of a QA system. In the QBTE approach, these three components, i.e., question analysis, answer candidate extraction, and answer selection, are integrated into one classifier. To successfully carry out this goal, we have to extract features that reflect properties of correct answers to a question in the context of articles.

3 QBTE Model 1 This section presents a framework, QBTE Model 1, to construct a QA system from question-answer pairs based on the QBTE approach. When a user gives a question, the framework finds answers to the question in the following two steps. Document Retrieval retrieves the top N articles or paragraphs from a large-scale corpus. QBTE creates input data by combining the question features and document features, evaluates the input data, and outputs the top M answers.3 Since this paper focuses on QBTE, it uses a simple idf method in document retrieval. Let $w_i$ be words and $w_1, w_2, \ldots, w_m$ be a document. Question Answering in the QBTE Model 1 involves directly classifying words $w_i$ in the document as answer words or non-answer words. That is, given input $x^{(i)}$ for $w_i$, its class label is selected from among {I, O, B} as follows: I: if the word is in the middle of the answer word sequence; O: if the word is not in the answer word sequence; B: if the word is the start word of the answer word sequence. The class labeling scheme in our experiments is IOB2 (Sang, 2000), which is a variation of IOB (Ramshaw and Marcus, 1995). The input $x^{(i)}$ of each word is defined as described below.

3.1 Feature Extraction This paper employs three groups of features as features of input data: • Question Feature Set (QF); • Document Feature Set (DF); • Combined Feature Set (CF), i.e., combinations of question and document features. 3In this paper, M is set to 5.

3.1.1 Question Feature Set (QF) A Question Feature Set (QF) is a set of features extracted only from the question sentence. This feature set is defined as belonging to the question sentence.
The following are elements of a Question Feature Set: qw: an enumeration of the word n-grams (1 ≤ n ≤N), e.g., given question “What is CNN?”, the features are {qw:What, qw:is, qw:CNN, qw:What-is, qw:is-CNN } if N = 2, qq: interrogative words (e.g., who, where, what, how many), qm1: POS1 of words in the question, e.g., given “What is CNN?”, { qm1:wh-adv, qm1:verb, qm1:noun } are features, qm2: POS2 of words in the question, qm3: POS3 of words in the question, qm4: POS4 of words in the question. POS1-POS4 indicate part-of-speech (POS) of the IPA POS tag set generated by the Japanese morphological analyzer ChaSen. For example, “Tokyo” is analyzed as POS1 = noun, POS2 = propernoun, POS3 = location, and POS4 = general. This paper used up to 4-grams for qw. 3.1.2 Document Feature Set (DF) Document Feature Set (DF) is a feature set extracted only from a document. Using only DF corresponds to unbiased Term Extraction (TE). For each word wi, the following features are extracted: dw–k,. . .,dw+0,. . .,dw+k: k preceding and following words of the word wi, e.g., { dw–1:wi−1, dw+0:wi, dw+1:wi+1} if k = 1, dm1–k,. . .,dm1+0,. . .,dm1+k: POS1 of k preceding and following words of the word wi, dm2–k,. . .,dm2+0,. . .,dm2+k: POS2 of k preceding and following words of the word wi, dm3–k,. . .,dm3+0,. . .,dm3+k: POS3 of k preceding and following words of the word wi, dm4–k,. . .,dm4+0,. . .,dm4+k: POS4 of k preceding and following words of the word wi. In this paper, k is set to 3 so that the window size is 7. 3.1.3 Combined Feature Set (CF) Combined Feature Set (CF) contains features created by combining question features and document features. QBTE Model 1 employs CF. For each word wi, the following features are created. cw–k,. . .,cw+0,. . .,cw+k: matching results (true/false) between each of dw–k,...,dw+k features and any qw feature, e.g., cw–1:true if dw–1:President and qw: President, cm1–k,. . .,cm1+0,. . .,cm1+k: matching results (true/false) between each of dm1–k,...,dm1+k features and any POS1 in qm1 features, cm2–k,. . .,cm2+0,. . .,cm2+k: matching results (true/false) between each of dm2–k,...,dm2+k features and any POS2 in qm2 features, cm3–k,. . .,cm3+0,. . .,cm3+k: matching results (true/false) between each of dm3–k,...,dm3+k features and any POS3 in qm3 features, cm4–k,. . .,cm4+0,. . .,cm4+k: matching results (true/false) between each of dm4–k,...,dm4+k features and any POS4 in qm4 features, cq–k,. . .,cq+0,. . .,cq+k: combinations of each of dw–k,...,dw+k features and qw features, e.g., cq–1:President&Who is a combination of dw– 1:President and qw:Who. 3.2 Training and Execution The training phase estimates a probabilistic model from training data (x(1),y(1)),...,(x(n),y(n)) generated from the CRL QA Data. The execution phase evaluates the probability of y′(i) given inputx′(i) using the the probabilistic model. Training Phase 1. Given question q, correct answer a, and document d. 2. Annotate ⟨A⟩and ⟨/A⟩right before and after answer a in d. 3. Morphologically analyze d. 4. For d = w1, ..., ⟨A⟩, wj, ..., wk, ⟨/A⟩, ..., wm, extract features as x(1),...,x(m). 5. Class label y(i) = B if wi follows ⟨A⟩, y(i) = I if wi is inside of ⟨A⟩and ⟨/A⟩, and y(i) = O otherwise. 218 Table 2: Main Results with 10-fold Cross Validation Correct Answer Rank MRR Top5 1 2 3 4 5 Exact match 453 139 68 35 19 0.28 0.36 Partial match 684 222 126 80 48 0.43 0.58 Ave. 0.355 0.47 Manual evaluation 578 188 86 55 34 0.36 0.47 6. Estimate pλ∗from (x(1),y(1)),...,(x(n),y(n)) using Maximum Entropy Models. 
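As an illustration of the training phase above, the following Python sketch renders steps 2–5 (answer marking, IOB2 labelling, and feature construction) in simplified form. It is not the paper's implementation: whitespace tokenization stands in for ChaSen, only a reduced version of the qw/dw/cw features is built (no POS features), and step 6 is assumed to be handled by any off-the-shelf Maximum Entropy or logistic-regression trainer.

```python
# A minimal sketch of steps 2-5 of the training phase: mark the answer span
# in the document, assign IOB2 labels, and build simplified per-word features.
# Whitespace tokenization stands in for ChaSen, and only a reduced version of
# the QF/DF/CF feature sets is shown.

def iob2_label_and_features(question, answer, document, window=3):
    q_words = question.split()
    d_words = document.split()
    a_words = answer.split()

    # Step 2: locate the answer span (first occurrence) instead of <A>...</A> tags.
    start = -1
    for i in range(len(d_words) - len(a_words) + 1):
        if d_words[i:i + len(a_words)] == a_words:
            start = i
            break

    samples = []
    for i, w in enumerate(d_words):
        # Step 5: IOB2 labels -- B for the first answer word, I inside, O otherwise.
        if start != -1 and i == start:
            label = "B"
        elif start != -1 and start < i < start + len(a_words):
            label = "I"
        else:
            label = "O"

        # Step 4: features. qw features depend only on the question,
        # dw features on a +/- window of surrounding document words,
        # cw features on whether a window word also occurs in the question.
        features = {f"qw:{qw}" for qw in q_words}
        for k in range(-window, window + 1):
            j = i + k
            dw = d_words[j] if 0 <= j < len(d_words) else "<PAD>"
            features.add(f"dw{k:+d}:{dw}")
            features.add(f"cw{k:+d}:{dw in q_words}")
        samples.append((sorted(features), label))
    return samples

if __name__ == "__main__":
    data = iob2_label_and_features(
        "Who founded the company ?",
        "John Smith",
        "The company was founded by John Smith in 1990 .",
    )
    for feats, label in data:
        print(label, feats[:3], "...")
```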
The execution phase extracts answers from retrieved documents as Term Extraction, biased by the question. Execution Phase 1. Given question q and paragraph d. 2. Morphologically analyze d. 3. For wi of d = w1, ..., wm, create input data x′(i) by extracting features. 4. For each y′(j) ∈Y, compute pλ ∗(y′(j)|x′(i)), which is a probability of y′(j) given x′(i). 5. For each x′(i), y′(j) with the highest probability is selected as the label of wi. 6. Extract word sequences that start with the word labeled B and are followed by words labeled I from the labeled word sequence of d. 7. Rank the top M answers according to the probability of the first word. This approach is designed to extract only the most highly probable answers. However, pin-pointing only answers is not an easy task. To select the top five answers, it is necessary to loosen the condition for extracting answers. Therefore, in the execution phase, we only give label O to a word if its probability exceeds 99%, otherwise we give the second most probable label. As a further relaxation, word sequences that include B inside the sequences are extracted for answers. This is because our preliminary experiments indicated that it is very rare for two answer candidates to be adjacent in Question-Biased Term Extraction, unlike an ordinary Term Extraction task. 4 Experimental Results We conducted 10-fold cross validation using the CRL QA Data. The output is evaluated using the Top5 score and MRR. Top5 Score shows the rate at which at least one correct answer is included in the top 5 answers. MRR (Mean Reciprocal Rank) is the average reciprocal rank (1/n) of the highest rank n of a correct answer for each question. Judgment of whether an answer is correct is done by both automatic and manual evaluation. Automatic evaluation consists of exact matching and partial matching. Partial matching is useful for absorbing the variation in extraction range. A partial match is judged correct if a system’s answer completely includes the correct answer or the correct answer completely includes a system’s answer. Table 2 presents the experimental results. The results show that a QA system can be built by using our QBTE approach. The manually evaluated performance scored MRR=0.36 and Top5=0.47. However, manual evaluation is costly and time-consuming, so we use automatic evaluation results, i.e., exact matching results and partial matching results, as a pseudo lowerbound and upper-bound of the performances. Interestingly, the manual evaluation results of MRR and Top5 are nearly equal to the average between exact and partial evaluation. To confirm that the QBTE ranks potential answers to the higher rank, we changed the number of paragraphs retrieved from a large corpus from N = 1, 3, 5 to 10. Table 3 shows the results. Whereas the performances of Term Extraction (TE) and Term Extraction with question features (TE+QF) significantly degraded, the performance of the QBTE (CF) did not severely degrade with the larger number of retrieved paragraphs. 219 Table 3: Answer Extraction from Top N documents Feature set Top N paragraphs Match Correct Answer Rank MRR Top5 1 2 3 4 5 1 Exact 102 109 80 71 62 0.11 0.21 Partial 207 186 155 153 121 0.21 0.41 3 Exact 65 63 55 53 43 0.07 0.14 TE (DF) Partial 120 131 112 108 94 0.13 0.28 5 Exact 51 38 38 36 36 0.05 0.10 Partial 99 80 89 81 75 0.10 0.21 10 Exact 29 17 19 22 18 0.03 0.07 Partial 59 38 35 49 46 0.07 0.14 1 Exact 120 105 94 63 80 0.12 0. 
23 Partial 207 198 175 126 140 0.21 0 .42 TE (DF) 3 Exact 65 68 52 58 57 0.07 0.15 + Partial 119 117 111 122 106 0.13 0.29 QF 5 Exact 44 57 41 35 31 0.05 0.10 Partial 91 104 71 82 63 0.10 0.21 10 Exact 28 42 30 28 26 0.04 0.08 Partial 57 68 57 56 45 0.07 0.14 1 Exact 453 139 68 35 19 0.28 0.36 Partial 684 222 126 80 48 0.43 0.58 3 Exact 403 156 92 52 43 0.27 0.37 QBTE (CF) Partial 539 296 145 105 92 0.42 0.62 5 Exact 381 153 92 59 50 0.26 0.37 Partial 542 291 164 122 102 0.40 0.61 10 Exact 348 128 92 65 57 0.24 0.35 Partial 481 257 173 124 102 0.36 0.57 5 Discussion Our approach needs no question type system, and it still achieved 0.36 in MRR and 0.47 in Top5. This performance is comparable to the results of SAIQAII (Sasaki et al., 2004) (MRR=0.4, Top5=0.55) whose question analysis, answer candidate extraction, and answer selection modules were independently built from a QA dataset and an NE dataset, which is limited to eight named entities, such as PERSON and LOCATION. Since the QA dataset is not publicly available, it is not possible to directly compare the experimental results; however we believe that the performance of the QBTE Model 1 is comparable to that of the conventional approaches, even though it does not depend on question types, named entities, or class names. Most of the partial answers were judged correct in manual evaluation. For example, for “How many times bigger ...?”, “two times” is a correct answer but “two” was judged correct. Suppose that “John Kerry” is a prepared correct answer in the CRL QA Data. In this case, “Senator John Kerry” would also be correct. Such additions and omissions occur because our approach is not restricted to particular extraction units, such as named entities or class names. The performance of QBTE was affected little by the larger number of retrieved paragraphs, whereas the performances of TE and TE + QF significantly degraded. This indicates that QBTE Model 1 is not mere Term Extraction with document retrieval but Term Extraction appropriately biased by questions. Our experiments used no information about question types given in the CRL QA Data because we are seeking a universal method that can be used for any QA dataset. Beyond this main goal, as a reference, The Appendix shows our experimental results classified into question types without using them in the training phase. The results of automatic evaluation of complete matching are in Top5 (T5), and MRR and partial matching are in Top5 (T5’) and MRR’. It is interesting that minor question types were correctly answered, e.g., SEA and WEAPON, for which there was only one training question. We also conducted an additional experiment, as a reference, on the training data that included question types defined in the CRL QA Data; the questiontype of each question is added to the qw feature. The performance of QBTE from the first-ranked paragraph showed no difference from that of experiments shown in Table 2. 220 6 Related Work There are two previous studies on integrating QA components into one using machine learning/statistical NLP techniques. Echihabi et al. (Echihabi et al., 2003) used Noisy-Channel Models to construct a QA system. In this approach, the range of Term Extraction is not trained by a data set but selected from answer candidates, e.g., named entities and noun phrases, generated by a decoder. Lita et al. (Lita and Carbonell, 2004) share our motivation to build a QA system only from question-answer pairs without depending on the question types. 
Their method finds clusters of questions and defines how to answer questions in each cluster. However, their approach is to find snippets, i.e., short passages including answers, not exact answers extracted by Term Extraction. 7 Conclusion This paper described a novel approach to extracting answers to a question using probabilistic models constructed from only question-answer pairs. This approach requires no question type system, no named entity extractor, and no class name extractor. To the best of our knowledge, no previous study has regarded Question Answering as Question-Biased Term Extraction. As a feasibility study, we built a QA system using Maximum Entropy Models on a 2000-question/answer dataset. The results were evaluated by 10-fold cross validation, which showed that the performance is 0.36 in MRR and 0.47 in Top5. Since this approach relies on a morphological analyzer, applying the QBTE Model 1 to QA tasks of other languages is our future work. Acknowledgment This research was supported by a contract with the National Institute of Information and Communications Technology (NICT) of Japan entitled, “A study of speech dialogue translation technology based on a large corpus”. References Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra: A Maximum Entropy Approach to Natural Language Processing, Computational Linguistics, Vol. 22, No. 1, pp. 39–71 (1996). Abdessamad Echihabi and Daniel Marcu: A NoisyChannel Approach to Question Answering, Proc. of ACL-2003, pp. 16-23 (2003). Abraham Ittycheriah, Martin Franz, Wei-Jing Zhu, and Adwait Ratnaparkhi: Question Answering Using Maximum-Entropy Components, Proc. of NAACL2001 (2001). Abraham Ittycheriah, Martin Franz, Wei-Jing Zhu, and Adwait Ratnaparkhi: IBM’s Statistical Question Answering System – TREC-10, Proc. of TREC-10 (2001). Lucian Vlad Lita and Jaime Carbonell: Instance-Based Question Answering: A Data-Driven Approach: Proc. of EMNLP-2004, pp. 396–403 (2004). Hwee T. Ng, Jennifer L. P. Kwan, and Yiyuan Xia: Question Answering Using a Large Text Database: A Machine Learning Approach: Proc. of EMNLP-2001, pp. 67–73 (2001). Marisu A. Pasca and Sanda M. Harabagiu: High Performance Question/Answering, Proc. of SIGIR-2001, pp. 366–374 (2001). Lance A. Ramshaw and Mitchell P. Marcus: Text Chunking using Transformation-Based Learning, Proc. of WVLC-95, pp. 82–94 (1995). Erik F. Tjong Kim Sang: Noun Phrase Recognition by System Combination, Proc. of NAACL-2000, pp. 55– 55 (2000). Yutaka Sasaki, Hideki Isozaki, Jun Suzuki, Kouji Kokuryou, Tsutomu Hirao, Hideto Kazawa, and Eisaku Maeda, SAIQA-II: A Trainable Japanese QA System with SVM, IPSJ Journal, Vol. 45, NO. 2, pp. 635-646, 2004. (in Japanese) Satoshi Sekine, Kiyoshi Sudo, Yusuke Shinyama, Chikashi Nobata, Kiyotaka Uchimoto, and Hitoshi Isahara, NYU/CRL QA system, QAC question analysis and CRL QA data, in Working Notes of NTCIR Workshop 3 (2002). Jun Suzuki, Yutaka Sasaki, and Eisaku Maeda: SVM Answer Selection for Open-Domain Question Answering, Proc. of Coling-2002, pp. 974–980 (2002). Jun Suzuki, Hirotoshi Taira, Yutaka Sasaki, and Eisaku Maeda: Directed Acyclic Graph Kernel, Proc. of ACL 2003 Workshop on Multilingual Summarization and Question Answering - Machine Learning and Beyond, pp. 61–68, Sapporo (2003). Ingrid Zukerman and Eric Horvitz: Using Machine Learning Techniques to Interpret WH-Questions, Proc. of ACL-2001, Toulouse, France, pp. 547–554 (2001). 221 Appendix: Analysis of Evaluation Results w.r.t. 
Question Type — Results of QBTE from the firstranked paragraph (NB: No information about these question types was used in the training phrase.) Question Type #Qs MRR T5 MRR’ T5’ GOE 36 0.30 0.36 0.41 0.53 GPE 4 0.50 0.50 1.00 1.00 N EVENT 7 0.76 0.86 0.76 0.86 EVENT 19 0.17 0.21 0.41 0.53 GROUP 74 0.28 0.35 0.45 0.62 SPORTS TEAM 15 0.28 0.40 0.45 0.73 BROADCAST 1 0.00 0.00 0.00 0.00 POINT 2 0.00 0.00 0.00 0.00 DRUG 2 0.00 0.00 0.00 0.00 SPACESHIP 4 0.88 1.00 0.88 1.00 ACTION 18 0.22 0.22 0.30 0.44 MOVIE 6 0.50 0.50 0.56 0.67 MUSIC 8 0.19 0.25 0.56 0.62 WATER FORM 3 0.50 0.67 0.50 0.67 CONFERENCE 17 0.14 0.24 0.46 0.65 SEA 1 1.00 1.00 1.00 1.00 PICTURE 1 0.00 0.00 0.00 0.00 SCHOOL 21 0.10 0.10 0.33 0.43 ACADEMIC 5 0.20 0.20 0.37 0.60 PERCENT 47 0.35 0.43 0.43 0.55 COMPANY 77 0.45 0.55 0.57 0.70 PERIODX 1 1.00 1.00 1.00 1.00 RULE 35 0.30 0.43 0.49 0.69 MONUMENT 2 0.00 0.00 0.25 0.50 SPORTS 9 0.17 0.22 0.40 0.67 INSTITUTE 26 0.38 0.46 0.53 0.69 MONEY 110 0.33 0.40 0.48 0.63 AIRPORT 4 0.38 0.50 0.44 0.75 MILITARY 4 0.00 0.00 0.25 0.25 ART 4 0.25 0.50 0.25 0.50 MONTH PERIOD 6 0.06 0.17 0.06 0.17 LANGUAGE 3 1.00 1.00 1.00 1.00 COUNTX 10 0.33 0.40 0.38 0.60 AMUSEMENT 2 0.00 0.00 0.00 0.00 PARK 1 0.00 0.00 0.00 0.00 SHOW 3 0.78 1.00 1.11 1.33 PUBLIC INST 19 0.18 0.26 0.34 0.53 PORT 3 0.17 0.33 0.33 0.67 N COUNTRY 8 0.28 0.38 0.32 0.50 NATIONALITY 4 0.50 0.50 1.00 1.00 COUNTRY 84 0.45 0.60 0.51 0.67 OFFENSE 9 0.23 0.44 0.23 0.44 CITY 72 0.41 0.50 0.53 0.65 N FACILITY 4 0.25 0.25 0.38 0.50 FACILITY 11 0.20 0.36 0.25 0.55 TIMEX 3 0.00 0.00 0.00 0.00 TIME TOP 2 0.00 0.00 0.50 0.50 TIME PERIOD 8 0.12 0.12 0.48 0.75 TIME 13 0.22 0.31 0.29 0.38 ERA 3 0.00 0.00 0.33 0.33 PHENOMENA 5 0.50 0.60 0.60 0.80 DISASTER 4 0.50 0.75 0.50 0.75 OBJECT 5 0.47 0.60 0.47 0.60 CAR 1 1.00 1.00 1.00 1.00 RELIGION 5 0.30 0.40 0.30 0.40 WEEK PERIOD 4 0.05 0.25 0.55 0.75 WEIGHT 12 0.21 0.25 0.31 0.42 PRINTING 6 0.17 0.17 0.38 0.50 Question Type #Q MRR T5 MRR’ T5’ RANK 7 0.18 0.29 0.54 0.71 BOOK 6 0.31 0.50 0.47 0.67 AWARD 9 0.17 0.33 0.34 0.56 N LOCATION 2 0.10 0.50 0.10 0.50 VEGETABLE 10 0.31 0.50 0.34 0.60 COLOR 5 0.20 0.20 0.20 0.20 NEWSPAPER 7 0.61 0.71 0.61 0.71 WORSHIP 8 0.47 0.62 0.62 0.88 SEISMIC 1 0.00 0.00 1.00 1.00 N PERSON 72 0.30 0.39 0.43 0.60 PERSON 282 0.18 0.21 0.46 0.55 NUMEX 19 0.32 0.32 0.35 0.47 MEASUREMENT 1 0.00 0.00 0.00 0.00 P ORGANIZATION 3 0.33 0.33 0.67 0.67 P PARTY 37 0.30 0.41 0.43 0.57 GOVERNMENT 37 0.50 0.54 0.53 0.57 N PRODUCT 41 0.25 0.37 0.37 0.56 PRODUCT 58 0.24 0.34 0.44 0.69 WAR 2 0.75 1.00 0.75 1.00 SHIP 7 0.26 0.43 0.40 0.57 N ORGANIZATION 20 0.14 0.25 0.28 0.55 ORGANIZATION 23 0.08 0.13 0.20 0.30 SPEED 1 0.00 0.00 1.00 1.00 VOLUME 5 0.00 0.00 0.18 0.60 GAMES 8 0.28 0.38 0.34 0.50 POSITION TITLE 39 0.20 0.28 0.30 0.44 REGION 22 0.17 0.23 0.46 0.64 GEOLOGICAL 3 0.42 0.67 0.42 0.67 LOCATION 2 0.00 0.00 0.50 0.50 EXTENT 22 0.04 0.09 0.13 0.18 CURRENCY 1 0.00 0.00 0.00 0.00 STATION 3 0.50 0.67 0.50 0.67 RAILROAD 1 0.00 0.00 0.25 1.00 PHONE 1 0.00 0.00 0.00 0.00 PROVINCE 36 0.30 0.33 0.45 0.50 N ANIMAL 3 0.11 0.33 0.22 0.67 ANIMAL 10 0.26 0.50 0.31 0.60 ROAD 1 0.00 0.00 0.50 1.00 DATE PERIOD 9 0.11 0.11 0.33 0.33 DATE 130 0.24 0.32 0.41 0.58 YEAR PERIOD 34 0.22 0.29 0.38 0.59 AGE 22 0.34 0.45 0.44 0.59 MULTIPLICATION 9 0.39 0.44 0.56 0.67 CRIME 4 0.75 0.75 0.75 0.75 AIRCRAFT 2 0.00 0.00 0.25 0.50 MUSEUM 3 0.33 0.33 0.33 0.33 DISEASE 18 0.29 0.50 0.43 0.72 FREQUENCY 13 0.18 0.31 0.19 0.38 WEAPON 1 1.00 1.00 1.00 1.00 MINERAL 18 0.16 0.22 0.25 0.39 METHOD 29 0.39 0.48 0.48 0.62 ETHNIC 3 0.42 
0.67 0.75 1.00 NAME 5 0.20 0.20 0.40 0.40 SPACE 4 0.50 0.50 0.50 0.50 THEORY 1 0.00 0.00 0.00 0.00 LANDFORM 5 0.13 0.40 0.13 0.40 TRAIN 2 0.17 0.50 0.17 0.50 2000 0.28 0.36 0.43 0.58 222 | 2005 | 27 |
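As a reading aid for the evaluation tables above, the following sketch shows one way the Top5 score and MRR defined in Section 4 can be computed from per-question ranked answer lists. The exact-match judgment is an assumption for illustration; the partial-match rows would instead use a containment test between system and gold answers.

```python
# Top5 and MRR over a set of questions, each with a ranked list of system
# answers and a gold answer. Exact string match is assumed for judging.

def top5_and_mrr(results):
    """results: list of (ranked_answers, gold_answer) pairs, one per question."""
    top5_hits = 0
    reciprocal_ranks = []
    for ranked, gold in results:
        rank = next((i + 1 for i, a in enumerate(ranked[:5]) if a == gold), None)
        if rank is not None:
            top5_hits += 1
            reciprocal_ranks.append(1.0 / rank)
        else:
            reciprocal_ranks.append(0.0)
    n = len(results)
    return top5_hits / n, sum(reciprocal_ranks) / n

if __name__ == "__main__":
    demo = [
        (["Tokyo", "Kyoto", "Osaka"], "Kyoto"),  # correct at rank 2 -> RR 0.5
        (["1995", "1996"], "1990"),              # no correct answer -> RR 0.0
    ]
    print(top5_and_mrr(demo))  # (0.5, 0.25)
```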
Proceedings of the 43rd Annual Meeting of the ACL, pages 223–230, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Exploring and Exploiting the Limited Utility of Captions in Recognizing Intention in Information Graphics∗ Stephanie Elzer1 and Sandra Carberry2 and Daniel Chester2 and Seniz Demir2 and Nancy Green3 and Ingrid Zukerman4 and Keith Trnka2 1Dept. of Computer Science, Millersville University, Millersville, PA 17551 2Dept. of Computer Science, University of Delaware, Newark, DE 19716 3Dept. of Mathematical Sciences, Univ. of NC at Greensboro, Greensboro, NC 27402 4School of CS & Software Engrg, Monash Univ., Clayton, Victoria 3800 Australia Abstract This paper presents a corpus study that explores the extent to which captions contribute to recognizing the intended message of an information graphic. It then presents an implemented graphic interpretation system that takes into account a variety of communicative signals, and an evaluation study showing that evidence obtained from shallow processing of the graphic’s caption has a significant impact on the system’s success. This work is part of a larger project whose goal is to provide sight-impaired users with effective access to information graphics. 1 Introduction Language research has posited that a speaker or writer executes a speech act whose intended meaning he expects the listener to be able to deduce, and that the listener identifies the intended meaning by reasoning about the observed signals and the mutual beliefs of author and interpreter (Grice, 1969; Clark, 1996). But as noted by Clark (Clark, 1996), language is more than just words. It is any “signal” (or lack of signal when one is expected), where a signal is a deliberate action that is intended to convey a message. Although some information graphics are only intended to display data values, the overwhelming majority of the graphics that we have examined (taken ∗Authors can be reached via email as follows: [email protected], [email protected], {carberry, chester, demir, trnka}@cis.udel.edu, [email protected]. 1998 1999 2000 2001 1000 1500 2000 2500 3000 personal filings Local bankruptcy Figure 1: Graphic from a 2001 Local Newspaper from newspaper, magazine, and web articles) appear to have some underlying goal or intended message, such as the graphic in Figure 1 whose communicative goal is ostensibly to convey the sharp increase in local bankruptcies in the current year compared with the previous decreasing trend. Applying Clark’s view of language, it is reasonable to presume that the author of an information graphic expects the viewer to deduce from the graphic the message that the graphic was intended to convey, by reasoning about the graphic itself, the salience of entities in the graphic, and the graphic’s caption. This paper adopts Clark’s view of language as any deliberate signal that is intended to convey a message. Section 3 investigates the kinds of signals used in information graphics. Section 4 presents a corpus study that investigates the extent to which captions capture the message of the graphic, illustrates the issues that would arise in trying to fully understand such captions, and proposes shallow processing of the caption to extract evidence from it. Section 5 then describes how evidence obtained from a variety of communicative signals, including shallow processing of the graphic’s caption, is used in a probabilistic system for hypothesizing the intended message of the graphic. 
Section 6 presents an eval223 10 5 15 0−6 80+ 65−79 7−19 35−49 80+ 65−79 50−64 35−49 10 5 15 20−34 7−19 0−6 20−34 50−64 (a) (b) Figure 2: Two Alternative Graphs from the Same Data uation showing the system’s success, with particular attention given to the impact of evidence from shallow processing of the caption, and Section 7 discusses future work. Although we believe that our findings are extendible to other kinds of information graphics, our current work focuses on bar charts. This research is part of a larger project whose goal is a natural language system that will provide effective access to information graphics for individuals with sight impairments, by inferring the intended message underlying the graphic, providing an initial summary of the graphic that includes the intended message along with notable features of the graphic, and then responding to follow-up questions from the user. 2 Related Work Our work is related to efforts on graph summarization. (Yu et al., 2002) used pattern recognition techniques to summarize interesting features of automatically generated graphs of time-series data from a gas turbine engine. (Futrelle and Nikolakis, 1995) developed a constraint grammar for parsing vectorbased visual displays and producing representations of the elements comprising the display. The goal of Futrelle’s project is to produce a graphic that summarizes one or more graphics from a document (Futrelle, 1999). The summary graphic might be a simplification of a graphic or a merger of several graphics from the document, along with an appropriate summary caption. Thus the end result of summarization will itself be a graphic. The long range goal of our project, on the other hand, is to provide alternative access to information graphics via an initial textual summary followed by an interactive followup component for additional information. The intended message of the graphic will be an important component of the initial summary, and hypothesizing it is the goal of our current work. 3 Evidence about the Intended Message The graphic designer has many alternative ways of designing a graphic; different designs contain different communicative signals and thus convey different communicative intents. For example, consider the two graphics in Figure 2. The graphic in Figure 2a conveys that average doctor visits per year is U-shaped by age; it starts out high when one is very young, decreases into middle age, and then rises again as one ages. The graphic in Figure 2b presents the same data; but instead of conveying a trend, this graphic seems to convey that the elderly and the young have the highest number of doctor visits per year. These graphics illustrate how choice of design affects the message that the graphic conveys. Following the AutoBrief work (Kerpedjiev and Roth, 2000) (Green et al., 2004) on generating graphics that fulfill communicative goals, we hypothesize that the designer chooses a design that best facilitates the perceptual and cognitive tasks that are most important to conveying his intended message, subject to the constraints imposed by competing tasks. By perceptual tasks we mean tasks that can be performed by simply viewing the graphic, such as finding the top of a bar in a bar chart; by cognitive tasks we mean tasks that are done via mental computations, such as computing the difference between two numbers. 
Thus one source of evidence about the intended message is the relative difficulty of the perceptual tasks that the viewer would need to perform in order to recognize the message. For example, determining 224 the entity with maximum value in a bar chart will be easiest if the bars are arranged in ascending or descending order of height. We have constructed a set of rules, based on research by cognitive psychologists, that estimate the relative difficulty of performing different perceptual tasks; these rules have been validated by eye-tracking experiments and are presented in (Elzer et al., 2004). Another source of evidence is entities that have been made salient in the graphic by some kind of focusing device, such as coloring some elements of the graphic, annotations such as an asterisk, or an arrow pointing to a particular location in a graphic. Entities that have been made salient suggest particular instantiations of perceptual tasks that the viewer is expected to perform, such as comparing the heights of two highlighted bars in a bar chart. And lastly, one would expect captions to help convey the intended message of an information graphic. The next section describes a corpus study that we performed in order to explore the usefulness of captions and how we might exploit evidence from them. 4 A Corpus Study of Captions Although one might suggest relying almost exclusively on captions to interpret an information graphic, (Corio and Lapalme, 1999) found in a corpus study that captions are often very general. The objective of their corpus study was to categorize the kinds of information in captions so that their findings could be used in forming rules for generating graphics with captions. Our project is instead concerned with recognizing the intended message of an information graphic. To investigate how captions might be used in a system for understanding information graphics, we performed a corpus study in which we analyzed the first 100 bar charts from our corpus of information graphics; this corpus contains a variety of bar charts from different publication venues. The following subsections present the results of this corpus study. 4.1 Do Captions Convey the Intended Message? Our first investigation explored the extent to which captions capture the intended message of an information graphic. We extracted the first 100 graphics Category # Category-1: Captures intention (mostly) 34 Category-2: Captures intention (somewhat) 15 Category-3: Hints at intention 7 Category-4: No contribution to intention 44 Figure 3: Analysis of 100 Captions on Bar Charts from our corpus of bar charts. The intended message of each bar chart had been previously annotated by two coders. The coders were asked to identify 1) the intended message of the graphic using a list of 12 high-level intentions (see Section 5 for examples) and 2) the instantiation of the parameters. For example, if the coder classified the intended message of a graphic as Change-trend, the coder was also asked to identify where the first trend began, its general slope (increasing, decreasing, or stable), where the change in trend occurred, the end of the second trend, and the slope of the second trend. If there was disagreement between the coders on either the intention or the instantiation of the parameters, we utilized consensus-based annotation (Ang et al., 2002), in which the coders discussed the graphic to try to come to an agreement. 
As observed by (Ang et al., 2002), this allowed us to include the “harder” or less obvious graphics in our study, thus lowering our expected system performance. We then examined the caption of each graphic, and determined to what extent the caption captured the graphic’s intended message. Figure 3 shows the results. 44% of the captions in our corpus did not convey to any extent the message of the information graphic. The following categorizes the purposes that these captions served, along with an example of each: • general heading (8 captions): “UGI Monthly Gas Rates” on a graphic conveying a recent spike in home heating bills. • reference to dependent axis (15 captions): “Lancaster rainfall totals for July” on a graphic conveying that July-02 was the driest of the previous decade. • commentary relevant to graphic (4 captions): “Basic performers: One look at the best performing stocks in the Standard&Poor’s 500 index this year shows that companies with basic businesses are rewarding investors” on a 225 graphic conveying the relative rank of different stocks, some of which were basic businesses and some of which were not. This type of information was classified as deductive by (Corio and Lapalme, 1999) since it draws a conclusion from the data depicted in the graphic. • commentary extending message of graphic (8 captions): “Profits are getting squeezed” on a graphic conveying that Southwest Airlines net income is estimated to increase in 2003 after falling the preceding three years. Here the commentary does not draw a conclusion from the data in the graphic but instead supplements the graphic’s message. However this type of caption would probably fall into the deductive class in (Corio and Lapalme, 1999). • humor (7 captions): “The Sound of Sales” on a graphic conveying the changing trend (downward after years of increase) in record album sales. This caption has nothing to do with the change-trend message of the graphic, but appears to be an attempt at humor. • conclusion unwarranted by graphic (2 captions): “Defense spending declines” on a graphic that in fact conveys that recent defense spending is increasing. Slightly over half the captions (56%) contributed to understanding the graphic’s intended message. 34% were judged to convey most of the intended message. For example, the caption “Tennis players top nominees” appeared on a graphic whose intended message is to convey that more tennis players were nominated for the 2003 Laureus World Sports Award than athletes from any other sport. Since we argue that captions alone are insufficient for interpreting information graphics, in the few cases where it was unclear whether a caption should be placed in Category-1 or Category-2, we erred on the side of over-rating the contribution of a caption to the graphic’s intended message. For example, consider the caption “Chirac is riding high in the polls” which appeared on a graphic conveying that there has been a steady increase in Chirac’s approval ratings from 55% to about 75%. Although this caption does not fully capture the communicative intention of the graphic (since it does not capture the steady increase conveyed by the graphic), we placed it in the first category since one might argue that riding high in the polls would suggest both high and improving ratings. 
15% of the captions were judged to convey only part of the graphic’s intended message; an example is “Drug spending for young outpace seniors” that appears on a graphic whose intended message appears to be that there is a downward trend by age for increased drug spending; we classified the caption in Category-2 since the caption fails to capture that the graphic is talking about percent increases in drug spending, not absolute drug spending, and that the graphic conveys the downward trend for increases in drug spending by age group, not just that increases for the young were greater than for the elderly. 7% of the captions were judged to only hint at the graphic’s message. An example is “GM’s Money Machine” which appeared on a graphic whose intended message was a contrast of recent performance against the previous trend — ie., that although there had been a steady decrease in the percentage of GM’s overall income produced by its finance unit, there was now a substantial increase in the percentage provided by the finance unit. Since the term money machine is a colloquialism that suggests making a lot of money, the caption was judged to hint at the graphic’s intended message. 4.2 Understanding Captions For the 49 captions in Category 1 or 2 (where the caption conveyed at least some of the message of the graphic), we examined how well the caption could be parsed and understood by a natural language system. We found that 47% were fragments (for example, “A Growing Biotech Market”), or involved some other kind of ill-formedness (for example, “Running tops in sneaker wear in 2002” or “More seek financial aid”1). 16% would require extensive domain knowledge or analogical reasoning to understand. One example is “Chirac is riding high in the polls” which would require understanding the meaning of riding high in the polls. Another example is “Bad Moon Rising”; here the verb rising suggests that something is increasing, but the 1Here we judge the caption to be ill-formed due to the ellipsis since More should be More students. 226 system would need to understand that a bad moon refers to something undesirable (in this case, delinquent loans). 4.3 Simple Evidence from Captions Although our corpus analysis showed that captions can be helpful in understanding the message conveyed by an information graphic, it also showed that full understanding of a caption would be problematic; moreover, once the caption was understood, we would still need to relate it to the information extracted from the graphic itself, which appears to be a difficult problem. Thus we began investigating whether shallow processing of the caption might provide evidence that could be effectively combined with other evidence obtained from the graphic itself. Our analysis provided the following observations: • Verbs in a caption often suggest the kind of message being conveyed by the graphic. An example from our corpus is “Boating deaths decline”; the verb decline suggests that the graphic conveys a decreasing trend. Another example from our corpus is “American Express total billings still lag”; the verb lag suggests that the graphic conveys that some entity (in this case American Express) is ranked behind some others. • Adjectives in a caption also often suggest the kind of message being conveyed by the graphic. An example from our corpus is “Air Force has largest percentage of women”; the adjective largest suggests that the graphic is conveying an entity whose value is largest. Adjectives derived from verbs function similarly to verbs. 
An example from our corpus is “Soaring Demand for Servers” which is the caption on a graphic that conveys the rapid increase in demand for servers. Here the adjective soaring is derived from the verb soar, and suggests that the graphic is conveying a strong increase. • Nouns in a caption often refer to an entity that is a label on the independent axis. When this occurs, the caption brings the entity into focus and suggests that it is part of the intended message of the graphic. An example from our corpus is “Germans miss their marks” where the graphic displays a bar chart that is intended to convey that Germans are the least happy with the Euro. Words that usually appear as verbs, but are used in the caption as a noun, may function similarly to verbs. An example is “Cable On The Rise”; in this caption, rise is used as a noun, but suggests that the graphic is conveying an increase. 5 Utilizing Evidence We developed and implemented a probabilistic framework for utilizing evidence from a graphic and its caption to hypothesize the graphic’s intended message. To identify the intended message of a new information graphic, the graphic is first given to a Visual Extraction Module (Chester and Elzer, 2005) that is responsible for recognizing the individual components of a graphic, identifying the relationship of the components to one another and to the graphic as a whole, and classifying the graphic as to type (bar chart, line graph, etc.); the result is an XML file that describes the graphic and all of its components. Next a Caption Processing Module analyzes the caption. To utilize verb-related evidence from captions, we identified a set of verbs that would indicate each category of high-level goal2, such as recover for Change-trend and beats for Relative-difference; we then extended the set of verbs by examining WordNet for verbs that were closely related in meaning, and constructed a verb class for each set of closely related verbs. Adjectives such as more and most were handled in a similar manner. The Caption Processing Module applies a part-of-speech tagger and a stemmer to the caption in order to identify nouns, adjectives, and the root form of verbs and adjectives derived from verbs. The XML representation of the graphic is augmented to indicate any independent axis labels that match nouns in the caption, and the presence of a verb or adjective class in the caption. The Intention Recognition Module then analyzes the XML file to build the appropriate Bayesian network; the current system is limited to bar charts, but 2As described in the next paragraph, there are 12 categories of high-level goals. 227 the principles underlying the system should be extendible to other kinds of information graphics. The network is described in (Elzer et al., 2005). Very briefly, our analysis of simple bar charts has shown that the intended message can be classified into one of 12 high-level goals; examples of such goals include: • Change-trend: Viewer to believe that there is a <slope-1> trend from <param1> to <param2> and a significantly different <slope-2> trend from <param3> to <param4> • Relative-difference: Viewer to believe that the value of element <param1> is <comparison> the value of element <param2> where <comparison> is greater-than, less-than, or equal-to. 
Each category of high-level goal is represented by a node in the network (whose parent is the top-level goal node), and instances of these goals (ie., goals with their parameters instantiated) appear as children with inhibitory links (Huber et al., 1994) capturing their mutual exclusivity. Each goal is broken down further into subtasks (perceptual or cognitive) that the viewer would need to perform in order to accomplish the goal of the parent node. The network is built dynamically when the system is presented with a new information graphic, so that nodes are added to the network only as suggested by the graphic. For example, low-level nodes are added for the easiest primitive perceptual tasks and for perceptual tasks in which a parameter is instantiated with a salient entity (such as an entity colored differently from others in the graphic or an entity that appears as a noun in the caption), since the graphic designer might have intended the viewer to perform these tasks; then higher-level goals that involve these tasks are added, until eventually a link is established to the top-level goal node. Next evidence nodes are added to the network to capture the kinds of evidence noted in Sections 3 and 4.3. For example, evidence nodes are added to the network as children of each low-level perceptual task; these evidence nodes capture the relative difficulty (categorized as easy, medium, hard, or impossible) of performing the perceptual task as estimated by our effort estimation rules mentioned in Section 3, whether a parameter in the task refers to an entity that is salient in the graphic, and whether a parameter in the task refers to an entity that is a noun in the caption. An evidence node, indicating for each verb class whether that verb class appears in the caption (either as a verb, or as an adjective derived from a verb, or as a noun that can also serve as a verb) is added as a child of the top level goal node. Adjectives such as more and most that provide evidence are handled in a similar manner. In a Bayesian network, conditional probability tables capture the conditional probability of a child node given the value of its parent(s). For example, the network requires the conditional probability of an entity appearing as a noun in the caption given that recognizing the intended message entails performing a particular perceptual task involving that entity. Similarly, the network requires the conditional probability, for each class of verb, that the verb class appears in the caption given that the intended message falls into a particular intention category. These probabilities are learned from our corpus of graphics, as described in (Elzer et al., 2005). 6 Evaluation In this paper, we are particularly interested in whether shallow processing of captions can contribute to recognizing the intended message of an information graphic. As mentioned earlier, the intended message of each information graphic in our corpus of bar charts had been previously annotated by two coders. To evaluate our approach, we used leave-one-out cross validation. We performed a series of experiments in which each graphic in the corpus is selected once as the test graphic, the probability tables in the Bayesian network are learned from the remaining graphics, and the test graphic is presented to the system as a test case. 
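The leave-one-out procedure just described can be rendered schematically as follows; learn_probability_tables and top_hypothesis are hypothetical stand-ins for the Bayesian-network learning and inference of Section 5, and the success test encodes the criterion stated immediately below.

```python
# Leave-one-out cross validation over the annotated bar-chart corpus.
# learn_probability_tables and top_hypothesis are hypothetical stand-ins for
# the Bayesian-network learner and inference described in Section 5.

def leave_one_out(corpus, learn_probability_tables, top_hypothesis):
    successes = 0
    for i, test_graphic in enumerate(corpus):
        training_set = corpus[:i] + corpus[i + 1:]
        model = learn_probability_tables(training_set)
        hypothesis, probability = top_hypothesis(model, test_graphic)
        # Success criterion used in the evaluation: the top-rated hypothesis
        # must match the annotated intended message and its probability must
        # exceed 50%.
        if hypothesis == test_graphic["intended_message"] and probability > 0.5:
            successes += 1
    return successes / len(corpus)

if __name__ == "__main__":
    # Toy demonstration with a dummy learner that always predicts the same message.
    corpus = [{"intended_message": "increasing-trend"},
              {"intended_message": "relative-difference"}]
    acc = leave_one_out(
        corpus,
        learn_probability_tables=lambda training: None,
        top_hypothesis=lambda model, g: ("increasing-trend", 0.9),
    )
    print(acc)  # 0.5
```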
The system was judged to fail if either its top-rated hypothesis did not match the intended message that was assigned to the graphic by the coders or the probability rating of the system’s top-rated hypothesis did not exceed 50%. Overall success was then computed by averaging together the results of the whole series of experiments. Each experiment consisted of two parts, one in 228 Diner’s Club Discover American Express Mastercard Visa 400 600 200 Total credit card purchases per year in billions Figure 4: A Graphic from Business Week3 which captions were not taken into account in the Bayesian network and one in which the Bayesian network included evidence from captions. Our overall accuracy without the caption evidence was 64.5%, while the inclusion of caption evidence increased accuracy to 79.1% for an absolute increase in accuracy of 14.6% and a relative improvement of 22.6% over the system’s accuracy without caption evidence. Thus we conclude that shallow processing of a caption provides evidence that can be effectively utilized in a Bayesian network to recognize the intended message of an information graphic. Our analysis of the results provides some interesting insights on the role of elements of the caption. There appear to be two primary functions of verbs. The first is to reflect what is in the data, thereby strengthening the message that would be recognized without the caption. One example from our corpus is a graphic with the caption “Legal immigration to the U.S. has been rising for decades”. Although the early part of the graphic displays a change from decreasing immigration to a steadily increasing immigration trend, most of the graphic focuses on the decades of increasing immigration and the caption strengthens increasing trend in immigration as the intended message of the graphic. If we do not include the caption, our system hypothesizes an increasing trend message with a probability of 66.4%; other hypotheses include an intended message that emphasizes the change in trend with a probability of 15.3%. However, when the verb increasing from the caption is taken into account, the probability of increasing trend in immigration being the intended message rises to 97.9%. 3This is a slight variation of the graphic from Business Week. In the Business Week graphic, the labels sometimes apThe second function of a verb is to focus attention on some aspect of the data. For example, consider the graphic in Figure 4. Without a caption, our system hypothesizes that the graphic is intended to convey the relative rank in billings of different credit card issuers and assigns it a probability of 72.7%. Other possibilities have some probability assigned to them. For example, the intention of conveying that Visa has the highest billings is assigned a probability of 26%. Suppose that the graphic had a caption of “Billings still lag”; if the verb lag is taken into account, our system hypothesizes an intended message of conveying the credit card issuer whose billings are lowest, namely Diner’s Club; the probability assigned to this intention is now 88.4%, and the probability assigned to the intention of conveying the relative rank of different credit card issuers drops to 7.8%. This is because the verb class containing lag appeared in our corpus as part of the caption for graphics whose message conveyed an entity with a minimum value, and not with graphics whose message conveyed the relative rank of all the depicted entities. 
On the other hand, if the caption is “American Express total billings still lag” (which is the caption associated with the graphic in our corpus), then we have two pieces of evidence from the caption — the verb lag, and the noun American Express which matches a label. In this case, the probabilities change dramatically; the hypothesis that the graphic is intended to convey the rank of American Express (namely third behind Visa and Mastercard) is assigned a probability of 76% and the probability drops to 24% that the graphic is intended to convey that Diner’s Club has the lowest billings. This is not surprising. The presence of the noun American Express in the caption makes that entity salient and is very strong evidence that the intended message places an emphasis on American Express, thus significantly affecting the probabilities of the different hypotheses. On the other hand, the verb class containing lag occurred both in the caption of graphics whose message was judged to convey the entity with the minimum value and in the caption of graphics pear on the bars and sometimes next to them, and the heading for the dependent axis appears in the empty white space of the graphic instead of below the values on the horizontal axis as we show it. Our vision system does not yet have heuristics for recognizing non-standard placement of labels and axis headings. 229 that conveyed an entity ranked behind some others. Therefore, conveying the entity with minimum value is still assigned a non-negligible probability. 7 Future Work It is rare that a caption contains more than one verb class; when it does happen, our current system by default uses the first one that appears. We need to examine how to handle the occurrence of multiple verb classes in a caption. Occasionally, labels in the graphic appear differently in the caption. An example is DJIA (for Dow Jones Industrial Average) that occurs in one graphic as a label but appears as Dow in the caption. We need to investigate resolving such coreferences. We currently limit ourselves to recognizing what appears to be the primary communicative intention of an information graphic; in the future we will also consider secondary intentions. We will also extend our work to other kinds of information graphics such as line graphs and pie charts, and to complex graphics, such as grouped and composite bar charts. 8 Summary To our knowledge, our project is the first to investigate the problem of understanding the intended message of an information graphic. This paper has focused on the communicative evidence present in an information graphic and how it can be used in a probabilistic framework to reason about the graphic’s intended message. The paper has given particular attention to evidence provided by the graphic’s caption. Our corpus study showed that about half of all captions contain some evidence that contributes to understanding the graphic’s message, but that fully understanding captions is a difficult problem. We presented a strategy for extracting evidence from a shallow analysis of the caption and utilizing it, along with communicative signals from the graphic itself, in a Bayesian network that hypothesizes the intended message of an information graphic, and our results demonstrate the effectiveness of our methodology. Our research is part of a larger project aimed at providing alternative access to information graphics for individuals with sight impairments. References J. Ang, R. Dhillon, A. Krupski, E. Shriberg, and A. Stolcke. 2002. 
Prosody-based automatic detection of annoyance and frustration in human-computer dialog. In Proc. of the Int’l Conf. on Spoken Language Processing (ICSLP). D. Chester and S. Elzer. 2005. Getting computers to see information graphics so users do not have to. To appear in Proc. of the 15th Int’l Symposium on Methodologies for Intelligent Systems. H. Clark. 1996. Using Language. Cambridge University Press. M. Corio and G. Lapalme. 1999. Generation of texts for information graphics. In Proc. of the 7th European Workshop on Natural Language Generation, 49–58. S. Elzer, S. Carberry, N. Green, and J. Hoffman. 2004. Incorporating perceptual task effort into the recognition of intention in information graphics. In Proceedings of the 3rd Int’l Conference on Diagrams, LNAI 2980, 255–270. S. Elzer, S. Carberry, I. Zukerman, D. Chester, N. Green, S. Demir. 2005. A probabilistic framework for recognizing intention in information graphics. To appear in Proceedings of the Int’l Joint Conf. on AI (IJCAI). R. Futrelle and N. Nikolakis. 1995. Efficient analysis of complex diagrams using constraint-based parsing. In Proc. of the Third International Conference on Document Analysis and Recognition. R. Futrelle. 1999. Summarization of diagrams in documents. In I. Mani and M. Maybury, editors, Advances in Automated Text Summarization. MIT Press. Nancy Green, Giuseppe Carenini, Stephan Kerpedjiev, Joe Mattis, Johanna Moore, and Steven Roth. Autobrief: an experimental system for the automatic generation of briefings in integrated text and information graphics. International Journal of Human-Computer Studies, 61(1):32–70, 2004. H. P. Grice. 1969. Utterer’s Meaning and Intentions. Philosophical Review, 68:147–177. M. Huber, E. Durfee, and M. Wellman. 1994. The automated mapping of plans for plan recognition. In Proc. of Uncertainty in AI, 344–351. S. Kerpedjiev and S. Roth. 2000. Mapping communicative goals into conceptual tasks to generate graphics in discourse. In Proc. of Int. Conf. on Intelligent User Interfaces, 60–67. J. Yu, J. Hunter, E. Reiter, and S. Sripada. 2002. Recognising visual patterns to communicate gas turbine time-series data. In ES2002, 105–118. 230 | 2005 | 28 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 231–238, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Scaling up from Dialogue to Multilogue: some principles and benchmarks Jonathan Ginzburg and Raquel Fern´andez Dept of Computer Science King’s College, London The Strand, London WC2R 2LS UK {ginzburg,raquel}@dcs.kcl.ac.uk Abstract The paper considers how to scale up dialogue protocols to multilogue, settings with multiple conversationalists. We extract two benchmarks to evaluate scaled up protocols based on the long distance resolution possibilities of nonsentential utterances in dialogue and multilogue in the British National Corpus. In light of these benchmarks, we then consider three possible transformations to dialogue protocols, formulated within an issue-based approach to dialogue management. We show that one such transformation yields protocols for querying and assertion that fulfill these benchmarks. 1 Introduction The development of dialogue systems in which a human agent interacts using natural language with a computational system is by now a flourishing domain (see e.g. (NLE, 2003)), buttressed by an increasing theoretical and experimental literature on the properties of dialogue (see e.g. recent work in the SEMDIAL and SIGDIAL conferences). In contrast, the development of multilogue systems, in which conversation with 3 or more participants ensue—is still in its early stages, as is the theoretical and experimental study of multilogue. The fundamental issue in tackling multilogue is: how can mechanisms motivated for dialogue (e.g. information states, protocols, update rules etc) be scaled up to multilogue? In this paper we extract from a conversational corpus, the British National Corpus (BNC), several benchmarks that characterize dialogue and multilogue interaction. These are based on the resolution possibilities of non-sentential utterances (NSUs). We then use these benchmarks to evaluate certain general transformations whose application to a dialogue interaction system yield a system appropriate for multilogue. There are of course various plausible views of the relation between dialogue and multilogue. One possible approach to take is to view multilogue as a sequence of dialogues. Something like this approach seems to be adopted in the literature on communication between autonomous software agents. However, even though many situations considered in multiagent systems do involve more than two agents, most interaction protocols are designed only for two participants at a time. This is the case of the protocol specifications provided by FIPA (Foundation for Intelligent Physical Agents) for agent communication language messages (FIPA, 2003). The FIPA interaction protocols (IP) are most typically designed for two participants, an initiator and a responder . Some IPs permit the broadcasting of a message to a group of addressees, and the reception of multiple responses by the original initiator (see most particularly the Contract Net IP). However, even though more than two agents participate in the communicative process, as (Dignum and Vreeswijk, 2003) point out, such conversations can not be considered multilogue, but rather a number of parallel dialogues. The Mission Rehearsal Exercise (MRE) Project (Traum and Rickel, 2002), one of the largest multilogue systems developed hitherto, is a virtual reality environment where multiple partners (including humans and other autonomous agents) engage in multi-conversation situations. 
The MRE is underpinned by an approach to the modelling of interaction in terms of obligations that different utterance types bring about originally proposed for dialogue (see e.g. (Matheson et al. , 2000)). In particular, this includes a model of the grounding process (Clark, 1996) that involves recognition and construction of common ground units (CGUs) (see (Traum, 2003)). Modelling of obligations and grounding becomes more complex when considering multilogue situations. The model of grounding implemented in the MRE project can only be used in cases where there is a single initiator and responder. It is not clear what the model should be for 231 multiple addressees: should the contents be considered grounded when any of the addressees has acknowledged them? Should evidence of understanding be required from every addressee? Since their resolution is almost wholly reliant on context, non sentential utterances provide a large testbed concerning the structure of both dialogue and multilogue. In section 2 we present data from the British National Corpus (BNC) concerning the resolution of NSUs in dialogue and multilogue. The main focus of this data is with the distance between antecedent and fragment. We use this to extract certain benchmarks concerning multilogue interaction. Thus, acknowledgement and acceptance markers (e.g. ‘mmh’, ‘yeah’) are resolved with reference to an utterance (assertion) which they ground (accept). The data we provide shows that acknowledgements in multilogue, as in dialogue, are adjacent to their antecedent. This provides evidence that, in general, a single addressee serves to signal grounding. In contrast, BNC data indicates the prevalence in multilogue of short answers that are resolved using material from an antecedent question located several turns back, whereas in dialogue short answers are generally adjacent to their antecedent. This provides evidence against reducing querying interaction in multilogue to a sequence of dialogues. We show that long distance short answers are a stable phenomenon for multilogue involving both small (≤5 persons) and large (> 5 persons) groups, despite the apparently declining interactivity with increasing group size flagged in experimental work (see (Fay et al., 2000)). In section 3 we sketch the basic principles of issue based dialogue management which we use as a basis for our subsequent investigations of multilogue interaction. This will include information states and formulation of protocols for querying and assertion in dialogue. In section 4 we consider three possible transformations on dialogue protocols into multilogue protocols. These transformations are entirely general in nature and could be applied to protocols stated in whatever specification language. We evaluate the protocols that are generated by these transformations with reference to the benchmarks extracted in section 2. In particular, we show that one such transformation, dubbed Add Side Participants(ASP), yields protocols for querying and assertion that fulfill these benchmarks. Finally, section 5 provides some conclusions and pointers to future work. 2 Long Distance Resolution of NSUs in Dialogue and Multilogue: some benchmarks The work we present in this paper is based on empirical evidence provided by corpus data extracted from the British National Corpus (BNC). 2.1 The Corpus Our current corpus is a sub-portion of the BNC conversational transcripts consisting of 14,315 sentences. 
The corpus was created by randomly excerpting a 200-speakerturn section from 54 BNC files. Of these files, 29 are transcripts of conversations between two dialogue participants, and 25 files are multilogue transcripts. A total of 1285 NSUs were found in our sub-corpus. Table 1 shows the raw counts of NSUs found in the dialogue and multilogue transcripts, respectively. NSUs BNC files Dialogue 709 29 Multilogue 576 25 Total 1285 54 Table 1: Total of NSUs in Dialogue and Multilogue All NSUs encountered within the corpus were classified according to the NSU typology presented in (Fern´andez and Ginzburg, 2002). Additionally, the distance from their antecedent was measured.1 Table 2 shows the distribution of NSU categories and their antecedent separation distance. The classes of NSU which feature in our discussion below are boldfaced. The BNC annotation includes tagging of units approximating to sentences, as identified by the CLAWS segmentation scheme (Garside, 1987). Each sentence unit is assigned an identifier number. By default it is assumed that sentences are non-overlapping and that their numeration indicates temporal sequence. When this is not the case because speakers overlap, the tagging scheme encodes synchronous speech by means of an alignment map used to synchronize points within the transcription. However, even though information about simultaneous speech is available, overlapping sentences are annotated with different sentence numbers. In order to be able to measure the distance between the NSUs encountered and their antecedents, all instances were tagged with the sentence number of their antecedent utterance. The distance we report is therefore measured in terms of sentence numbers. It should however be noted that taking into account synchronous speech would not change the data reported in Table 2 in any significant 1This classification was done by one expert annotator. To assess its reliability a pilot study of the taxonomy was performed using two additional non-expert coders. These annotated 50 randomly selected NSUs (containing a minimum of 2 instances of each NSU class, as labelled by the expert annotator.). The agreement achieved by the three coders is reasonably good, yielding a kappa score κ = 0.76. We also assessed the accuracy of the coders’ choices in choosing the antecedent utterance using the expert annotator’s annotation as a gold standard. Given this, one coder’s accuracy was 92%, whereas the other coder’s was 96%. 232 Distance NSU Class Example Total 1 2 3 4 5 6 >6 Acknowledgment Mm mm. 595 578 15 2 Short Answer Ballet shoes. 188 104 21 17 5 5 8 28 Affirmative Answer Yes. 109 104 4 1 Clarification Ellipsis John? 92 76 13 2 1 Repeated Ack. His boss, right. 86 81 2 3 Rejection No. 50 49 1 Factual Modifier Brilliant! 27 23 2 1 1 Repeated Aff. Ans. Very far, yes. 26 25 1 Helpful Rejection No, my aunt. 24 18 5 1 Check Question Okay? 22 15 7 Filler ... a cough. 18 16 1 1 Bare Mod. Phrase On the desk. 16 11 4 1 Sluice When? 11 10 1 Prop. Modifier Probably. 11 10 1 Conjunction Phrase Or a mirror. 10 5 4 1 Total 1285 1125 82 26 9 7 8 28 Percentage 100 87.6 6.3 2 0.6 0.5 0.6 2.1 Table 2: NSUs sorted by Class and Distance way, as manual examination of all NSUs at more than distance 3 reveals that the transcription portion between antecedent and NSU does not contain any completely synchronous sentences in such cases. In the examples throughout the paper we shall use italics to indicate speech overlap. When italics are not used, utterances take place sequentially. 
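Given this annotation scheme, tabulating separation distances of the kind reported in Table 2 reduces to simple bookkeeping over sentence numbers. The following sketch illustrates that computation on a handful of invented records; it is not the annotation tooling used for the study, and the field names are ours.

from collections import Counter

# Each annotated NSU is assumed to record its own sentence number and the
# sentence number of its antecedent (invented example records).
nsus = [
    {"cls": "Ack",      "sent": 102, "antecedent": 101},
    {"cls": "ShortAns", "sent": 230, "antecedent": 223},
    {"cls": "ShortAns", "sent": 231, "antecedent": 223},
    {"cls": "AffAns",   "sent": 310, "antecedent": 309},
]

def distance(nsu):
    return nsu["sent"] - nsu["antecedent"]

# Distance distribution per NSU class and overall, as in Table 2.
by_class = Counter((nsu["cls"], distance(nsu)) for nsu in nsus)
overall = Counter(distance(nsu) for nsu in nsus)

print(by_class)
total = sum(overall.values())
for d in sorted(overall):
    print(f"distance {d}: {overall[d]} ({100 * overall[d] / total:.1f}%)")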
2.2 NSU-Antecedent Separation Distance The last row in Table 2 shows the distribution of NSUantecedent separation distances as percentages of the total of NSUs found. This allows us to see that about 87% of NSUs have a distance of 1 sentence (i.e. the antecedent was the immediately preceding sentence), and that the vast majority (about 96%) have a distance of 3 sentences or less. Although the proportion of NSUs found in dialogue and multilogue is roughly the same (see Table 1 above), when taking into account the distance of NSUs from their antecedent, the proportion of long distance NSUs in multilogue increases radically: the longer the distance, the higher the proportion of NSUs that were found in multilogue. In fact, as Table 3 shows, NSUs that have a distance of 7 sentences or more appear exclusively in multilogue transcripts. These differences are significant (χ2 = 62.24, p ≤0.001). Adjacency of grounding and affirmation utterances The data in table 2 highlights a fundamental characteristic of the remaining majoritarian classes of NSUs, Ack(nowledgements), Affirmative Answer, CE (clarification ellipsis), Repeated Ack(nowledgements), and Rejection. These are used either in grounding interaction, or to affirm/reject propositions.2 The overwhelming adjacency to their antecedent underlines the locality of these interactions. Long distance potential for short answers One striking result exhibited in Table 2 is the uneven distribution of long distance NSUs across categories. With a few exceptions, NSUs that have a distance of 3 sentences or more are exclusively short answers. Not only is the long distance phenomenon almost exclusively restricted to short answers, but the frequency of long distance short answers stands in strong contrast to the other NSUs classes; indeed, over 44% of short answers have more than distance 1, and over 24% have distance 4 or more, like the last answer in the following example: (1) Allan: How much do you think? Cynthia: Three hundred pounds. Sue: More. Cynthia: A thousand pounds. Allan: More. Unknown: <unclear> Allan: Eleven hundred quid apparently. [BNC, G4X] Long distance short answers primarily a multilogue effect Table 4 shows the total number of short answers found in dialogue and multilogue respectively, and the proportions sorted by distance over those totals: From this it emerges that short answers are more common in multilogue than in dialogue—134(71%) v. 2Acknowledgements and acceptances are, in principle, distinct acts: the former involves indication that an utterance has been understood, whereas the latter that an assertion is accepted. In practice, though, acknowledgements in the form of NSUs commonly simultaneously signal acceptances. Given this, corpus studies of NSUs (e.g. (Fern´andez and Ginzburg, 2002)) often conflate the two. 233 Distance 1 2 3 4 5 6 >6 Dialogue 658 (59%) 37 (45%) 11 (45%) 1 (12%) 1 (14%) 1 (13%) 0 (0%) Multilogue 467 (41%) 45 (55%) 15 (55%) 8 (88%) 6 (86%) 7 (87%) 28 (100%) Table 3: NSUs in dialogue and multilogue sorted by distance Short Answers Total # 1 2 3 > 3 Dialogue 54 82 9 9 0 Multilogue 134 44 11 8 37 Table 4: % over the totals found in dialogue and multilogue 54(29%). Also, the distance pattern exhibited by these two groups is strikingly different: Only 18% of short answers found in dialogue have a distance of more than 1 sentence, with all of them having a distance of at most 3, like the short answer in (2). (2) Malcolm: [...] cos what’s three hundred and sixty divided by seven? Anon 1: I don’t know. 
Malcolm: Yes I don’t know either! Anon 1: Fifty four point fifty one point four. [BNC, KND] This dialogue/multilogue asymmetry argues against reductive views of multilogue as sequential dialogue. Long Distance short answers and group size As Table 4 shows, all short answers at more than distance 3 appear in multilogues. Following (Fay et al., 2000), we distinguish between small groups (those with 3 to 5 participants) and large groups (those with more than 5 participants). The size of the group is determined by the amount of participants that are active when a particular short answer is uttered. We consider active participants those that have made a contribution within a window of 30 turns back from the turn where the short answer was uttered. Table 5 shows the distribution of long distance short answers (distance > 3) in small and large groups respectively. This indicates that long distance short answers are significantly more frequent in large groups (χ2 = 22.17, p ≤0.001), though still reasonably common in small groups. A pragmatic account correlating group size and frequency of long distance short answers is offered in the final paragraph of section 3. Group Size d > 3 d ≤3 Total ≤5 20 73 93 (21.5%) (78.5%) > 5 26 15 41 (63%) (37%) Table 5: Long distance short answers in small and large groups Large group multilogues in the corpus are all transcripts of tutorials, training sessions or seminars, which exhibit a rather particular structure. The general pattern involves a question being asked by the tutor or session leader, the other participants then taking turns to answer that question. The tutor or leader acts as turn manager. She assigns the turn explicitly usually by addressing the participants by their name without need to repeat the question under discussion. An example is shown in (3): (3) Anon1: How important is those three components and what value would you put on them [...] Anon3: Tone forty five. Body language thirty . Anon1: Thank you. Anon4: Oh. Anon1: Melanie. Anon5: twenty five. Anon1: Yes. Anon5: Tone of voice twenty five. [BNC, JYM] Small group multilogues on the other hand have a more unconstrained structure: after a question is asked, the participants tend to answer freely. Answers by different participants can follow one after the other without explicit acknowledgements nor turn management, like in (4):. (4) Anon 1: How about finance then? <pause> Unknown 1: Corruption Unknown 2: Risk <pause dur=30> Unknown 3: Wage claims <pause dur=18> 2.3 Two Benchmarks of multilogue The data we have seen above leads in particular to the following two benchmarks protocols for querying, assertion, and grounding interaction in multilogue: (5) a. Multilogue Long Distance short answers (MLDSA): querying protocols for multilogue must license short answers an unbounded number of turns from the original query. b. Multilogue adjacency of grounding/acceptance (MAG): assertion and grounding protocols for multilogue should license grounding/clarification/acceptance moves only adjacently to their antecedent utterance. MLDSA and MAG have a somewhat different status: whereas MLDSA is a direct generalization from the data, MAG is a negative constraint, posited given the paucity of positive instances. As such MAG is more open to doubt and we shall treat it as such in the sequel. 
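The significance figures reported in this section come from standard chi-square tests over contingency tables of raw counts. As an illustration, the sketch below runs such a test on the dialogue/multilogue distance counts from Table 3; it assumes SciPy is available, and since binning choices and any corrections affect the statistic, it is not guaranteed to reproduce the published values exactly.

from scipy.stats import chi2_contingency

# Raw NSU counts by distance (1..6, >6) for dialogue vs. multilogue,
# taken from Table 3.
dialogue   = [658, 37, 11, 1, 1, 1, 0]
multilogue = [467, 45, 15, 8, 6, 7, 28]

chi2, p, dof, expected = chi2_contingency([dialogue, multilogue])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2g}")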
234 3 Issue based Dialogue Management: basic principles In this section we outline some of the basic principles of Issue-based Dialogue Management, which we use as a basis for our subsequent investigations of multilogue interaction. Information States We assume information states of the kind developed in the KoS framework (e.g. (Ginzburg, 1996, forthcoming), (Larsson, 2002)) and implemented in systems such as GODIS, IBIS, and CLARIE (see e.g. (Larsson, 2002; Purver, 2004)). On this view each dialogue participant’s view of the common ground, their Dialogue Gameboard (DGB), is structured by a number of attributes including the following three: FACTS: a set of facts representing the shared assumptions of the CPs, LatestMove: the most recent grounded move, and QUD (‘questions under discussion’): a partially ordered set—often taken to be structured as a stack—consisting of the currently discussable questions. Querying and Assertion Both querying and assertion involve a question becoming maximal in the querier/asserter’s QUD:3 the posed question q for a query where q is posed, the polar question p? for an assertion where p is asserted. Roughly, the responder can subsequently either choose to start a discussion (of q or p?) or, in the case of assertion, to update her FACTS structure with p. A dialogue participant can downdate q/p? from QUD when, as far as her (not necessarily public) goals dictate, sufficient information has been accumulated in FACTS. The querying/assertion protocols (in their most basic form) are summarized as follows: (6) querying assertion LatestMove = Ask(A,q) LatestMove = Assert(A,p) A: push q onto QUD; A: push p? onto QUD; release turn; release turn B: push q onto QUD; B: push p? onto QUD; take turn; take turn; make max-qud–specific; Option 1: Discuss p? utterance4 take turn. Option 2: Accept p LatestMove = Accept(B,p) B: increment FACTS with p; pop p? from QUD; A: increment FACTS with p; pop p? from QUD; Following (Larsson, 2002; Cooper, 2004), one can 3In other words, pushed onto the stack, if one assumes QUD is a stack. 4An utterance whose content is either a proposition p About max-qud or a question q1 on which max-qud Depends. For the latter see footnote 7. If one assumes QUD to be a stack, then ‘max-qud–specific’ will in this case reduce to ‘q–specific’. But the more general formulation will be important below. decompose interaction protocols into conversational update rules—functions from DGBs into DGBs using Type Theory with Records (TTR). This allows simple interfacing with the grammar, a Constraint-based Grammar closely modelled on HPSG but formulated in TTR (see (Ginzburg, forthcoming)). Grounding Interaction Grounding an utterance u : T (‘the sign associated with u is of type T’) is modelled as involving the following interaction. (a) Addressee B tries to anchor the contextual parameters of T. If successful, B acknowledges u (directly, gesturally or implicitly) and responds to the content of u. (b) If unsuccessful, B poses a Clarification Request (CR), that arises via utterance coercion (see (Ginzburg and Cooper, 2001)). For reasons of space we do not formulate an explicit protocol here— the structure of such a protocol resembles the assertion protocol. Our subsequent discussion of assertion can be modified mutatis mutandis to grounding. NSU Resolution We assume the account of NSU resolution developed in (Ginzburg and Sag, 2000). 
The essential idea they develop is that NSUs get their main predicates from context, specifically via unification with the question that is currently under discussion, an entity dubbed the maximal question under discussion (MAXQUD). NSU resolution is, consequently, tied to conversational topic, viz. the MAX-QUD.5 Distance effects in dialogue short answers If one assumes QUD to be a stack, this affords the potential for non adjacent short answers in dialogue. These, as discussed in section 2, are relatively infrequent. Two commonly observed dialogue conditions will jointly enforce adjacency between short answers and their interrogative antecedents: (a) Questions have a simple, one phrase answer. (b) Questions can be answered immediately, without preparatory or subsequent discussion. For multilogue (or at least certain genres thereof), both these conditions are less likely to be maintained: different CPs can supply different answers, even assuming that relative to each CP there is a simple, one phrase answer. The more CPs there are in a conversation, the smaller their common ground and the more likely the need for clarificatory interaction. A pragmatic account of this type of the frequency of adjacency in dialogue short answers seems clearly preferable to any actual mechanism that would rule out long distance short answers. These can be perfectly felicitous—see e.g. example (1) above which 5The resolution of NSUs, on the approach of (Ginzburg and Sag, 2000), involves one other parameter, an antecedent subutterance they dub the salient-utterance (SAL-UTT). This plays a role similar to the role played by the parallel element in higher order unification–based approaches to ellipsis resolution (see e.g. (Pulman, 1997). For current purposes, we limit attention to the MAX-QUD as the nucleus of NSU resolution. 235 would work fine if the turn uttered by Sue had been uttered by Allan instead. Moreover such a pragmatic account leads to the expectation that the frequency of long distance antecedents is correlated with group size, as indeed indicated by the data in table 5. 4 Scaling up Protocols (Goffman, 1981) introduced the distinction between ratified participants and overhearers in a conversation. Within the former are located the speaker and participants whom she takes into account in her utterance design— the intended addressee(s) of a given utterance, as well as side participants. In this section we consider three possible principles of protocol extension, each of which can be viewed as adding roles for participants from one of Goffman’s categories. We evaluate the protocol that results from the application of each such principle relative to the benchmarks we introduced in section 2.3. Seen in this light, the final principle we consider, Add Side Participants (ASP), arguably, yields the best results. Nonetheless, these three principles would appear to be complementary—the most general protocol for multilogue will involve, minimally, application of all three.6 We state the principles informally and framework independently as transformations on operational construals of the protocols. In a more extended presentation we will formulate these as functions on TTR conversational update rules. The simplest principle is Add Overhearers (AOV). This involves adding participants who merely observe the interaction. They keep track of facts concerning a particular interaction, but their context is not facilitated for them to participate: (7) Given a dialogue protocol π, add roles C1,. .. 
,Cn where each Ci is a silent participant: given an utterance u0 classified as being of type T0, Ci updates Ci.DGB.FACTS with the proposition u0 : T0. Applying AOV yields essentially multilogues which are sequences of dialogues. A special case of this are moderated multilogues, where all dialogues involve a designated individual (who is also responsible for turn assignment.). Restricting scaling up to applications of AOV is not sufficient since inter alia this will not fulfill the MLDSA benchmark. A far stronger principle is Duplicate Responders (DR): (8) Given a dialogue protocol π, add roles C1,. .. ,Cn which duplicate the responder role. 6We thank an anonymous reviewer for ACL for convincing us of this point. Applying DR to the querying protocol yields the following protocol: (9) Querying with multiple responders 1. LatestMove = Ask(A,q) 2. A: push q onto QUD; release turn 3. Resp1: push q onto QUD; take turn; make max-qud– specific utterance; release turn 4. Resp2: push q onto QUD; take turn; make max-qud– specific utterance; release turn 5. . . . 6. Respn: push q onto QUD; take turn; make max-qud– specific utterance; release turn This yields interactions such as (4) above. The querying protocol in (9) licenses long distance short answers, so satisfies the MLDSA benchmark. On the other hand, the contextual updates it enforces will not enable it to deal with the following (constructed) variant on (4), in other words does not afford responders to comment on previous responders, as opposed to the original querier: (10) A: Who should we invite for the conference? B: Svetlanov. C: No (=Not Svetlanov), Zhdanov D: No (= Not Zhdanov, ̸= Not Svetlanov), Gergev Applying DR to the assertion protocol will yield the following protocol: (11) Assertion with multiple responders 1. LatestMove = Assert(A,p) 2. A: push p? onto QUD; release turn 3. Resp1: push p? onto QUD; take turn; ⟨Option 1: Discuss p?, Option 2: Accept p ⟩ 4. Resp2: push p? onto QUD; take turn; ⟨Option 1: Discuss p?, Option 2: Accept p ⟩ 5. . . . 6. Respn: push p? onto QUD; take turn; ⟨Option 1: Discuss p?, Option 2: Accept p ⟩ One arguable problem with this protocol—equally applicable to the corresponding DRed grounding protocol—is that it licences long distance acceptance and is, thus, inconsistent with the MAG benchmark. On the other hand, it is potentially useful for interactions where there is explicitly more than one direct addressee. A principle intermediate between AOV and DR is Add Side Participants (ASP): (12) Given a dialogue protocol π, add roles C1,...,Cn, which effect the same contextual update as the interaction initiator. Applying ASP to the dialogue assertion protocol yields the following protocol: (13) Assertion for a conversation involving {A,B,C1,. . . ,Cn} 236 1. LatestMove = Assert(A,p) 2. A: push p? onto QUD; release turn 3. Ci: push p? onto QUD; 4. B: push p? onto QUD; take turn; ⟨Option 1: Accept p, Option 2: Discuss p?⟩ (14) 1. LatestMove = Accept(B,p) 2. B: increment FACTS with p; pop p? from QUD; 3. Ci:increment FACTS with p; pop p? from QUD; 4. A: increment FACTS with p; pop p? from QUD; This protocol satisfies the MAG benchmark in that acceptance is strictly local. This is because it enforces communal acceptance—acceptance by one CP can count as acceptance by all other addressees of an assertion. 
There is an obvious rational motivation for this, given the difficulty of a CP constantly monitoring an entire audience (when this consists of more than one addressee) for acceptance signals—it is well known that the effect of visual access on turn taking is highly significant (Dabbs and Ruback, 1987). It also enforces quick reaction to an assertion—anyone wishing to dissent from p must get their reaction in early i.e. immediately following the assertion since further discussion of p? is not countenanced if acceptance takes place. The latter can happen of course as a consequence of a dissenter not being quick on their feet; on this protocol to accommodate such cases would require some type of backtracking. Applying ASP to the dialogue querying protocol yields the following protocol: (15) Querying for a conversation involving { A,B,C1,. . . ,Cn} 1. LatestMove = Ask(A,q) 2. A: push q onto QUD; release turn 3. Ci: push q onto QUD; 4. B: push q onto QUD; take turn; make max-qud– specific utterance. This improves on the DR generated protocol because it does allow responders to comment on previous responders—the context is modified as in the dialogue protocol. Nonetheless, as it stands, this protocol won’t fully deal with examples such as (4)—the issue introduced by each successive participant takes precedence given that QUD is assumed to be a stack. This can be remedied by slightly modifying this latter assumption: we will assume that when a question q is pushed onto QUD it doesn’t subsume all existing questions in QUD, but rather only those on which q does not depend:7 (16) q is QUDmod(dependence) maximal iff for any q0 in QUD such that ¬Depend(q, q1): q ≻q0. 7 The notion of dependence we assume here is one common in work on questions, e.g. (Ginzburg and Sag, 2000), intuitively corresponding to the notion of ‘is a subquestion of’. q1 depends on q2 iff any proposition p such that p resolves q2 also satisfies p is about q1. This is conceptually attractive because it reinforces that the order in QUD has an intuitive semantic basis. One effect this has is to ensure that any polar question p? introduced into QUD, whether by an assertion or by a query, subsequent to a wh-question q on which p? depends does not subsume q. Hence, q will remain accessible as an antecedent for NSUs, as long as no new unrelated topic has been introduced. Assuming this modification to QUD is implemented in the above ASP–generated protocols, both MLDSA and MAG benchmarks are fulfilled. 5 Conclusions and Further Work In this paper we consider how to scale up dialogue protocols to multilogue, settings with multiple conversationalists. We have extracted two benchmarks, MLDSA and MAG, to evaluate scaled up protocols based on the long distance resolution possibilities of NSUs in dialogue and multilogue in the BNC. MLDSA, the requirement that multilogue protocols license long distance short answers, derives from the statistically significant increase in frequency of long distance short answers in multilogue as opposed to dialogue. MAG, the requirement that multilogue protocols enforce adjacency of acceptance and grounding interaction, derives from the overwhelming locality of acceptance/grounding interaction in multilogue, as in dialogue. In light of these benchmarks, we then consider three possible transformations to dialogue protocols formulated within an issue-based approach to dialogue management. 
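To recapitulate the mechanics of section 4 in concrete terms, the sketch below gives one possible rendering of the ASP-generated assertion protocol together with the dependence-conditioned notion of QUD maximality in (16), expressed as information-state updates. It is a toy illustration rather than the CLARIE implementation: the DGB is reduced to FACTS and QUD, the dependence relation is supplied as a user-defined predicate, and grounding and turn management are ignored; the class and function names are ours.

# Toy information-state sketch of the ASP assertion protocol with a
# dependence-conditioned QUD (cf. (13), (14) and (16)).  Illustrative only.

class DGB:
    def __init__(self, depends):
        self.facts = set()
        self.qud = []            # most recently raised question first
        self.depends = depends   # depends(q1, q2): q1 is a subquestion of q2

    def push(self, q):
        # Every conversational participant performs the same push (ASP).
        self.qud.insert(0, q)

    def accessible(self):
        """Questions not outranked by any later, independent question:
        the QUD-mod(dependence) maxima of (16)."""
        maxima = []
        for i, q0 in enumerate(self.qud):
            later = self.qud[:i]
            if all(self.depends(q, q0) for q in later):
                maxima.append(q0)
        return maxima

    def accept(self, p):
        """Communal acceptance: one addressee's acceptance updates FACTS
        and downdates p? for all participants sharing this DGB view."""
        self.facts.add(p)
        self.qud = [q for q in self.qud if q != ("whether", p)]


def depends(q1, q2):
    # Hypothetical dependence test: a polar question about a candidate
    # answer depends on the wh-question it addresses.
    return q1[0] == "whether" and q2[0] == "who"

dgb = DGB(depends)
dgb.push(("who", "should we invite"))        # A asks q; all CPs push q
dgb.push(("whether", "invite Svetlanov"))    # B asserts p; all CPs push p?
print(dgb.accessible())   # both p? and the wh-question remain accessible,
                          # so a later short answer can still resolve to q
dgb.accept("invite Svetlanov")               # C accepts p: communal update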
Each transformation can be intuited as adding roles that correspond to distinct categories of an audience originally suggested by Goffman. The three transformations would appear to be complementary—it seems reasonable to assume that application of all three (in some formulation) will be needed for wide coverage of multilogue. MLDSA and MAG can be fulfilled within an approach that combines the Add Side Participants transformation on protocols with an independently motivated modification of the structure of QUD from a canonical stack to a stack where maximality is conditioned by issue dependence. With respect to long distance short answers our account licences their occurrence in dialogue, as in multilogue. We offer a pragmatic account for their low frequency in dialogue, which indeed generalizes to explain a statistically significant correlation we observe between their increased incidence and increasing active participant size. We plan to carry out more detailed work, both corpus–based and experimental, in order to evaluate the status of MAG and, correspondingly to assess just how local acceptance and grounding interaction really are. We also intend to implement multilogue protocols in CLARIE so it can simulate multilogue. We will then evaluate its ability to process NSUs from the BNC. 237 Acknowledgements We would like to thank three anonymous ACL reviewers for extremely useful comments, which in particular forced us to rethink some key issues. We would also like to thank Pat Healey, Shalom Lappin, Richard Power, and Matt Purver for discussion, and Zoran Macura and Yo Sato for help in assessing the NSU taxonomy. Earlier versions of this work were presented at colloquia at ITRI, Brighton, and at the Universit´e Paris, 7. The research described here is funded by grant number RES-000-230065 from the Economic and Social Research Council of the United Kingdom. References Special issue on best practice in spoken language dialogue systems engineering. 2003. Natural Language Engineering. Herbert Clark. 1996. Using Language. Cambridge University Press, Cambridge. Robin Cooper. 2004. A type theoretic approach to information state update in issue based dialogue management. Invited paper, Catalog’04, the 8th Workshop on the Semantics and Pragmatics of Dialogue, Pompeu Fabra University, Barcelona. James Dabbs and R. Barry Ruback. 1987 Dimensions of group process: amount and structure of vocal interaction. Advances in Experimental Social Psychology 20, pages 123–169. Frank P.M. Dignum and Gerard A.W. Vreeswijk. 2003. Towards a testbed for multi-party dialogues. In Proceedings of the first International Joint Conference on Autonomous Agents and Multi-agent Systems (AAMAS 2003). Nicholas Fay, Simon Garrod, and Jean Carletta. 2000. Group discussion as interactive dialogue or serial monologue. Psychological Science, pages 481–486. Raquel Fern´andez and Jonathan Ginzburg. 2002. Nonsentential utterances: A corpus study. Traitement automatique des languages. Dialogue, 43(2):13–42. FIPA. 2003. The foundation for intelligent physical agents. interaction protocol specifications. http://www.fipa.org. Roger Garside. 1987. The CLAWS word-tagging system, In Roger Garside et al. editors, The computational analysis of English: a corpus-based approach, Longman, Harlow, pages 30–41. Jonathan Ginzburg and Robin Cooper. 2001. Resolving ellipsis in clarification. In Proceedings of the 39th Meeting of the Association for Computational Linguistics, Toulouse. Jonathan Ginzburg and Ivan A. Sag. 2000. 
Interrogative Investigations: the form, meaning and use of English Interrogatives. Number 123 in CSLI Lecture Notes. CSLI Publications, Stanford: California. Jonathan Ginzburg. (forthcoming). Semantics and Interaction in Dialogue CSLI Publications and University of Chicago Press. Jonathan Ginzburg. 1996. Interrogatives: Questions, facts, and dialogue. In Shalom Lappin, editor, Handbook of Contemporary Semantic Theory. Blackwell, Oxford. Erving Goffman 1981 Forms of Talk. University of Pennsylvania Press, Philadelphia. Staffan Larsson. 2002. Issue based Dialogue Management. Ph.D. thesis, Gothenburg University. Colin Matheson and Massimo Poesio and David Traum. 2000. Modelling Grounding and Discourse Obligations Using Update Rules. Proceedings of NAACL 2000, Seattle. Stephen Pulman. 1997. Focus and higher order unification. Linguistics and Philosophy, 20. Matthew Purver. 2004. The Theory and Use of Clarification in Dialogue. Ph.D. thesis, King’s College, London. David Traum and Jeff Rickel. 2002. Embodied agents for multi-party dialogue in immersive virtual world. In Proceedings of the first International Joint Conference on Autonomous Agents and Multi-agent Systems (AAMAS 2002), pages 766–773. David Traum. 2003. Semantics and pragmatics of questions and answers for dialogue agents. In H. Bunt, editor, Proceedings of the 5th International Workshop on Computational Semantics, pages 380–394, Tilburg. ITK, Tilburg University. 238 | 2005 | 29 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 18–25, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Logarithmic Opinion Pools for Conditional Random Fields Andrew Smith Division of Informatics University of Edinburgh United Kingdom [email protected] Trevor Cohn Department of Computer Science and Software Engineering University of Melbourne, Australia [email protected] Miles Osborne Division of Informatics University of Edinburgh United Kingdom [email protected] Abstract Recent work on Conditional Random Fields (CRFs) has demonstrated the need for regularisation to counter the tendency of these models to overfit. The standard approach to regularising CRFs involves a prior distribution over the model parameters, typically requiring search over a hyperparameter space. In this paper we address the overfitting problem from a different perspective, by factoring the CRF distribution into a weighted product of individual “expert” CRF distributions. We call this model a logarithmic opinion pool (LOP) of CRFs (LOP-CRFs). We apply the LOP-CRF to two sequencing tasks. Our results show that unregularised expert CRFs with an unregularised CRF under a LOP can outperform the unregularised CRF, and attain a performance level close to the regularised CRF. LOP-CRFs therefore provide a viable alternative to CRF regularisation without the need for hyperparameter search. 1 Introduction In recent years, conditional random fields (CRFs) (Lafferty et al., 2001) have shown success on a number of natural language processing (NLP) tasks, including shallow parsing (Sha and Pereira, 2003), named entity recognition (McCallum and Li, 2003) and information extraction from research papers (Peng and McCallum, 2004). In general, this work has demonstrated the susceptibility of CRFs to overfit the training data during parameter estimation. As a consequence, it is now standard to use some form of overfitting reduction in CRF training. Recently, there have been a number of sophisticated approaches to reducing overfitting in CRFs, including automatic feature induction (McCallum, 2003) and a full Bayesian approach to training and inference (Qi et al., 2005). These advanced methods tend to be difficult to implement and are often computationally expensive. Consequently, due to its ease of implementation, the current standard approach to reducing overfitting in CRFs is the use of a prior distribution over the model parameters, typically a Gaussian. The disadvantage with this method, however, is that it requires adjusting the value of one or more of the distribution’s hyperparameters. This usually involves manual or automatic tuning on a development set, and can be an expensive process as the CRF must be retrained many times for different hyperparameter values. In this paper we address the overfitting problem in CRFs from a different perspective. We factor the CRF distribution into a weighted product of individual expert CRF distributions, each focusing on a particular subset of the distribution. We call this model a logarithmic opinion pool (LOP) of CRFs (LOP-CRFs), and provide a procedure for learning the weight of each expert in the product. The LOPCRF framework is “parameter-free” in the sense that it does not involve the requirement to adjust hyperparameter values. LOP-CRFs are theoretically advantageous in that their Kullback-Leibler divergence with a given distribution can be explicitly represented as a function of the KL-divergence with each of their expert distributions. 
This provides a well-founded framework for designing new overfitting reduction schemes: 18 look to factorise a CRF distribution as a set of diverse experts. We apply LOP-CRFs to two sequencing tasks in NLP: named entity recognition and part-of-speech tagging. Our results show that combination of unregularised expert CRFs with an unregularised standard CRF under a LOP can outperform the unregularised standard CRF, and attain a performance level that rivals that of the regularised standard CRF. LOP-CRFs therefore provide a viable alternative to CRF regularisation without the need for hyperparameter search. 2 Conditional Random Fields A linear chain CRF defines the conditional probability of a state or label sequence s given an observed sequence o via1: p(s|o) = 1 Z(o) exp T+1 ∑ t=1 ∑ k λk fk(st−1,st,o,t) ! (1) where T is the length of both sequences, λk are parameters of the model and Z(o) is the partition function that ensures (1) represents a probability distribution. The functions fk are feature functions representing the occurrence of different events in the sequences s and o. The parameters λk can be estimated by maximising the conditional log-likelihood of a set of labelled training sequences. The log-likelihood is given by: L (λ) = ∑ o,s ˜p(o,s)log p(s|o;λ) = ∑ o,s ˜p(o,s) " T+1 ∑ t=1 λ ·f(s,o,t) # −∑ o ˜p(o)logZ(o;λ) where ˜p(o,s) and ˜p(o) are empirical distributions defined by the training set. At the maximum likelihood solution the model satisfies a set of feature constraints, whereby the expected count of each feature under the model is equal to its empirical count on the training data: 1In this paper we assume there is a one-to-one mapping between states and labels, though this need not be the case. E ˜p(o,s)[ fk]−Ep(s|o)[ fk] = 0, ∀k In general this cannot be solved for the λk in closed form so numerical routines must be used. Malouf (2002) and Sha and Pereira (2003) show that gradient-based algorithms, particularly limited memory variable metric (LMVM), require much less time to reach convergence, for some NLP tasks, than the iterative scaling methods (Della Pietra et al., 1997) previously used for log-linear optimisation problems. In all our experiments we use the LMVM method to train the CRFs. For CRFs with general graphical structure, calculation of Ep(s|o)[ fk] is intractable, but for the linear chain case Lafferty et al. (2001) describe an efficient dynamic programming procedure for inference, similar in nature to the forward-backward algorithm in hidden Markov models. 3 Logarithmic Opinion Pools In this paper an expert model refers a probabilistic model that focuses on modelling a specific subset of some probability distribution. The concept of combining the distributions of a set of expert models via a weighted product has previously been used in a range of different application areas, including economics and management science (Bordley, 1982), and NLP (Osborne and Baldridge, 2004). In this paper we restrict ourselves to sequence models. Given a set of sequence model experts, indexed by α, with conditional distributions pα(s|o) and a set of non-negative normalised weights wα, a logarithmic opinion pool 2 is defined as the distribution: pLOP(s|o) = 1 ZLOP(o) ∏ α [pα(s|o)]wα (2) with wα ≥0 and ∑α wα = 1, and where ZLOP(o) is the normalisation constant: ZLOP(o) = ∑ s ∏ α [pα(s|o)]wα (3) 2Hinton (1999) introduced a variant of the LOP idea called Product of Experts, in which expert distributions are multiplied under a uniform weight distribution. 
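Concretely, equations (2) and (3) define a weighted geometric mean of the expert distributions, renormalised over label sequences. The sketch below illustrates this in a toy setting where the candidate sequences can be enumerated explicitly; in a LOP-CRF the normalisation is instead computed with forward-backward, and the probabilities and weights shown are invented.

import numpy as np

# Toy LOP over three candidate label sequences and two experts.
# Rows: experts; columns: candidate sequences s (invented probabilities).
p_experts = np.array([
    [0.7, 0.2, 0.1],   # expert 1: p_1(s | o)
    [0.4, 0.4, 0.2],   # expert 2: p_2(s | o)
])
w = np.array([0.6, 0.4])   # non-negative weights summing to one

# Equations (2)/(3): weighted product of expert distributions, renormalised.
unnorm = np.prod(p_experts ** w[:, None], axis=0)
p_lop = unnorm / unnorm.sum()
print(p_lop)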
19 The weight wα encodes our confidence in the opinion of expert α. Suppose that there is a “true” conditional distribution q(s | o) which each pα(s | o) is attempting to model. Heskes (1998) shows that the KL divergence between q(s | o) and the LOP, can be decomposed into two terms: K(q, pLOP) = E −A (4) = ∑ α wαK (q, pα)−∑ α wαK (pLOP, pα) This tells us that the closeness of the LOP model to q(s | o) is governed by a trade-off between two terms: an E term, which represents the closeness of the individual experts to q(s | o), and an A term, which represents the closeness of the individual experts to the LOP, and therefore indirectly to each other. Hence for the LOP to model q well, we desire models pα which are individually good models of q (having low E) and are also diverse (having large A). 3.1 LOPs for CRFs Because CRFs are log-linear models, we can see from equation (2) that CRF experts are particularly well suited to combination under a LOP. Indeed, the resulting LOP is itself a CRF, the LOP-CRF, with potential functions given by a log-linear combination of the potential functions of the experts, with weights wα. As a consequence of this, the normalisation constant for the LOP-CRF can be calculated efficiently via the usual forward-backward algorithm for CRFs. Note that there is a distinction between normalisation constant for the LOP-CRF, ZLOP as given in equation (3), and the partition function of the LOP-CRF, Z. The two are related as follows: pLOP(s|o) = 1 ZLOP(o) ∏ α [pα(s|o)]wα = 1 ZLOP(o) ∏ α Uα(s|o) Zα(o) wα = ∏α [Uα(s|o)]wα ZLOP(o)∏α [Zα(o)]wα where Uα = exp∑T+1 t=1 ∑k λαk fαk(st−1,st,o,t) and so logZ(o) = logZLOP(o)+∑ α wα logZα(o) This relationship will be useful below, when we describe how to train the weights wα of a LOP-CRF. In this paper we will use the term LOP-CRF weights to refer to the weights wα in the weighted product of the LOP-CRF distribution and the term parameters to refer to the parameters λαk of each expert CRF α. 3.2 Training LOP-CRFs In our LOP-CRF training procedure we first train the expert CRFs unregularised on the training data. Then, treating the experts as static pre-trained models, we train the LOP-CRF weights wα to maximise the log-likelihood of the training data. This training process is “parameter-free” in that neither stage involves the use of a prior distribution over expert CRF parameters or LOP-CRF weights, and so avoids the requirement to adjust hyperparameter values. The likelihood of a data set under a LOP-CRF, as a function of the LOP-CRF weights, is given by: L(w) = ∏ o,s pLOP(s|o;w) ˜p(o,s) = ∏ o,s 1 ZLOP(o;w) ∏ α pα(s|o)wα ˜p(o,s) After taking logs and rearranging, the loglikelihood can be expressed as: L (w) = ∑ o,s ˜p(o,s)∑ α wα log pα(s|o) −∑ o ˜p(o)logZLOP(o;w) = ∑ α wα ∑ o,s ˜p(o,s)log pα(s|o) + ∑ α wα ∑ o ˜p(o)logZα(o) −∑ o ˜p(o)logZ(o;w) For the first two terms, the quantities that are multiplied by wα inside the (outer) sums are independent of the weights, and can be evaluated once at the 20 beginning of training. The third term involves the partition function for the LOP-CRF and so is a function of the weights. It can be evaluated efficiently as usual for a standard CRF. Taking derivatives with respect to wβ and rearranging, we obtain: ∂L (w) ∂wβ = ∑ o,s ˜p(o,s)log pβ(s|o) + ∑ o ˜p(o)logZβ(o) −∑ o ˜p(o)EpLOP(s|o) ∑ t logUβt(o,s) where Uβt(o,s) is the value of the potential function for expert β on clique t under the labelling s for observation o. 
In a way similar to the representation of the expected feature count in a standard CRF, the third term may be re-written as: −∑ o ∑ t ∑ s′,s′′ pLOP(st−1 = s′,st = s′′,o)logUβt(s′,s′′,o) Hence the derivative is tractable because we can use dynamic programming to efficiently calculate the pairwise marginal distribution for the LOP-CRF. Using these expressions we can efficiently train the LOP-CRF weights to maximise the loglikelihood of the data set.3 We make use of the LMVM method mentioned earlier to do this. We will refer to a LOP-CRF with weights trained using this procedure as an unregularised LOP-CRF. 3.2.1 Regularisation The “parameter-free” aspect of the training procedure we introduced in the previous section relies on the fact that we do not use regularisation when training the LOP-CRF weights wα. However, there is a possibility that this may lead to overfitting of the training data. In order to investigate this, we develop a regularised version of the training procedure and compare the results obtained with each. We 3We must ensure that the weights are non-negative and normalised. We achieve this by parameterising the weights as functions of a set of unconstrained variables via a softmax transformation. The values of the log-likelihood and its derivatives with respect to the unconstrained variables can be derived from the corresponding values for the weights wα. use a prior distribution over the LOP-CRF weights. As the weights are non-negative and normalised we use a Dirichlet distribution, whose density function is given by: p(w) = Γ(∑α θα) ∏α Γ(θα) ∏ α wθα−1 α where the θα are hyperparameters. Under this distribution, ignoring terms that are independent of the weights, the regularised loglikelihood involves an additional term: ∑ α (θα −1)logwα We assume a single value θ across all weights. The derivative of the regularised log-likelihood with respect to weight wβ then involves an additional term 1 wβ (θ −1). In our experiments we use the development set to optimise the value of θ. We will refer to a LOP-CRF with weights trained using this procedure as a regularised LOP-CRF. 4 The Tasks In this paper we apply LOP-CRFs to two sequence labelling tasks in NLP: named entity recognition (NER) and part-of-speech tagging (POS tagging). 4.1 Named Entity Recognition NER involves the identification of the location and type of pre-defined entities within a sentence and is often used as a sub-process in information extraction systems. With NER the CRF is presented with a set of sentences and must label each word so as to indicate whether the word appears outside an entity (O), at the beginning of an entity of type X (B-X) or within the continuation of an entity of type X (I-X). All our results for NER are reported on the CoNLL-2003 shared task dataset (Tjong Kim Sang and De Meulder, 2003). For this dataset the entity types are: persons (PER), locations (LOC), organisations (ORG) and miscellaneous (MISC). The training set consists of 14,987 sentences and 204,567 tokens, the development set consists of 3,466 sentences and 51,578 tokens and the test set consists of 3,684 sentences and 46,666 tokens. 21 4.2 Part-of-Speech Tagging POS tagging involves labelling each word in a sentence with its part-of-speech, for example noun, verb, adjective, etc. For our experiments we use the CoNLL-2000 shared task dataset (Tjong Kim Sang and Buchholz, 2000). This has 48 different POS tags. 
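The B-X/I-X/O labelling scheme can be made concrete with a short example. The snippet below converts hypothetical entity-span annotations into per-token labels; it is an illustration of the scheme only, and is not part of the CoNLL distribution or of the feature extraction used here.

def spans_to_bio(tokens, spans):
    """spans: list of (start, end_exclusive, entity_type) token index triples."""
    labels = ["O"] * len(tokens)
    for start, end, etype in spans:
        labels[start] = f"B-{etype}"
        for i in range(start + 1, end):
            labels[i] = f"I-{etype}"
    return labels

tokens = ["U.N.", "official", "Ekeus", "heads", "for", "Baghdad", "."]
spans = [(0, 1, "ORG"), (2, 3, "PER"), (5, 6, "LOC")]
print(list(zip(tokens, spans_to_bio(tokens, spans))))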
In order to make training time manageable4, we collapse the number of POS tags from 48 to 5 following the procedure used in (McCallum et al., 2003). In summary: • All types of noun collapse to category N. • All types of verb collapse to category V. • All types of adjective collapse to category J. • All types of adverb collapse to category R. • All other POS tags collapse to category O. The training set consists of 7,300 sentences and 173,542 tokens, the development set consists of 1,636 sentences and 38,185 tokens and the test set consists of 2,012 sentences and 47,377 tokens. 4.3 Expert sets For each task we compare the performance of the LOP-CRF to that of the standard CRF by defining a single, complex CRF, which we call a monolithic CRF, and a range of expert sets. The monolithic CRF for NER comprises a number of word and POS tag features in a window of five words around the current word, along with a set of orthographic features defined on the current word. These are based on those found in (Curran and Clark, 2003). Examples include whether the current word is capitalised, is an initial, contains a digit, contains punctuation, etc. The monolithic CRF for NER has 450,345 features. The monolithic CRF for POS tagging comprises word and POS features similar to those in the NER monolithic model, but over a smaller number of orthographic features. The monolithic model for POS tagging has 188,448 features. Each of our expert sets consists of a number of CRF experts. Usually these experts are designed to 4See (Cohn et al., 2005) for a scaling method allowing the full POS tagging task with CRFs. focus on modelling a particular aspect or subset of the distribution. As we saw earlier, the aim here is to define experts that model parts of the distribution well while retaining mutual diversity. The experts from a particular expert set are combined under a LOP-CRF and the weights are trained as described previously. We define our range of expert sets as follows: • Simple consists of the monolithic CRF and a single expert comprising a reduced subset of the features in the monolithic CRF. This reduced CRF models the entire distribution rather than focusing on a particular aspect or subset, but is much less expressive than the monolithic model. The reduced model comprises 24,818 features for NER and 47,420 features for POS tagging. • Positional consists of the monolithic CRF and a partition of the features in the monolithic CRF into three experts, each consisting only of features that involve events either behind, at or ahead of the current sequence position. • Label consists of the monolithic CRF and a partition of the features in the monolithic CRF into five experts, one for each label. For NER an expert corresponding to label X consists only of features that involve labels B-X or IX at the current or previous positions, while for POS tagging an expert corresponding to label X consists only of features that involve label X at the current or previous positions. These experts therefore focus on trying to model the distribution of a particular label. • Random consists of the monolithic CRF and a random partition of the features in the monolithic CRF into four experts. This acts as a baseline to ascertain the performance that can be expected from an expert set that is not defined via any linguistic intuition. 
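All of the above expert sets are obtained by subsetting or partitioning the features of the monolithic CRF. As an illustration of the bookkeeping involved, the sketch below partitions a small list of invented feature identifiers into positional, label and random experts; the naming conventions are ours and the real feature templates are considerably richer.

import random
from collections import defaultdict

# Hypothetical feature identifiers of the form (offset, label, description).
features = [
    (-2, "B-PER", "word=Mr."), (-1, "B-PER", "word=John"),
    (0, "B-PER", "is-capitalised"), (+1, "O", "pos=VBD"),
    (+2, "O", "word=yesterday"), (0, "B-LOC", "contains-digit"),
]

def positional_experts(feats):
    # Features behind, at, or ahead of the current sequence position.
    buckets = defaultdict(list)
    for f in feats:
        key = "behind" if f[0] < 0 else "at" if f[0] == 0 else "ahead"
        buckets[key].append(f)
    return buckets

def label_experts(feats):
    # B-X and I-X collapse onto the same label expert X; O has its own.
    buckets = defaultdict(list)
    for f in feats:
        buckets[f[1].split("-")[-1]].append(f)
    return buckets

def random_experts(feats, k=4, seed=0):
    # Shuffle, then deal features round-robin into k experts.
    rng = random.Random(seed)
    shuffled = feats[:]
    rng.shuffle(shuffled)
    return {i: shuffled[i::k] for i in range(k)}

print(sorted(positional_experts(features)))
print(sorted(label_experts(features)))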
5 Experiments To compare the performance of LOP-CRFs trained using the procedure we described previously to that of a standard CRF regularised with a Gaussian prior, we do the following for both NER and POS tagging:
These results show that LOP-CRFs with unregularised weights can lead to performance improvements that equal or exceed those achieved from a conventional regularisation approach using a Gaussian prior. The important difference, however, is that the LOP-CRF approach is "parameter-free" in the sense that each expert CRF in the LOP-CRF is unregularised and the LOP weight training is also unregularised. We are therefore not required to search a hyperparameter space. As an illustration, to obtain our best results for the POS tagging regularised monolithic model, we re-trained using 15 different values of the Gaussian prior variance. With the LOP-CRF we trained each expert CRF and the LOP weights only once. As an illustration of a typical weight distribution resulting from the training procedure, the positional LOP-CRF for POS tagging attaches weight 0.45 to the monolithic model and roughly equal weights to the other three experts.

Expert set          Development set   Test set
Monolithic unreg.        88.33          81.87
Monolithic reg.          89.84          83.98
Simple                   90.26          84.22∗
Positional               90.35          84.71∗
Label                    89.30          83.27
Random                   88.84          83.06
Table 2: F scores for NER unregularised LOP-CRFs

Expert set          Development set   Test set
Monolithic unreg.        97.92          97.65
Monolithic reg.          98.02          97.84
Simple                   98.31∗         98.12∗
Positional               98.03          97.81
Label                    97.99          97.77
Random                   97.99          97.76†
Table 3: Accuracies for POS tagging unregularised LOP-CRFs

6.3 LOP-CRFs with uniform weights
By training LOP-CRF weights using the procedure we introduce in this paper, we allow the weights to take on non-uniform values. This corresponds to letting the opinion of some experts take precedence over others in the LOP-CRF's decision making. An alternative, simpler, approach would be to combine the experts under a LOP with uniform weights, thereby avoiding the weight training stage. We would like to ascertain whether this approach will significantly reduce the LOP-CRF's performance. As an illustration, Table 4 gives accuracies for LOP-CRFs with uniform weights for POS tagging. A similar pattern is observed for NER.

Expert set    Development set   Test set
Simple             98.30          98.12
Positional         97.97          97.79
Label              97.85          97.73
Random             97.82          97.74
Table 4: Accuracies for POS tagging uniform LOP-CRFs

Comparing these values to those in Tables 2 and 3, we can see that in general LOP-CRFs with uniform weights, although still performing significantly better than the unregularised monolithic CRF, generally underperform LOP-CRFs with trained weights. This suggests that the choice of weights can be important, and justifies the weight training stage.
6.4 LOP-CRFs with regularised weights
To investigate whether unregularised training of the LOP-CRF weights leads to overfitting, we train the LOP-CRF with regularisation using a Dirichlet prior. The results we obtain show that in most cases a LOP-CRF with regularised weights achieves an almost identical performance to that with unregularised weights, and suggest that there is little to be gained by weight regularisation. This is probably due to the fact that in our LOP-CRFs the number of experts, and therefore weights, is generally small and so there is little capacity for overfitting. We conjecture that although other choices of expert set may comprise many more experts than in our examples, the numbers are likely to be relatively small in comparison to, for example, the number of parameters in the individual experts. We therefore suggest that any overfitting effect is likely to be limited.
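For readers unfamiliar with opinion pools, the following sketch shows how a weighted logarithmic opinion pool combines expert opinions for a single decision: p_LOP(y) is proportional to the product of p_i(y) raised to the weight w_i. A full LOP-CRF pools the experts' distributions over whole label sequences, so this per-position marginal view, with invented probabilities, illustrates only the combination rule, not the model's inference.

import math

def lop_combine(expert_dists, weights):
    """Weighted logarithmic opinion pool over one decision:
    p_LOP(y) is proportional to prod_i p_i(y) ** w_i, renormalised."""
    labels = expert_dists[0].keys()
    log_scores = {y: sum(w * math.log(d[y])
                         for d, w in zip(expert_dists, weights))
                  for y in labels}
    z = math.log(sum(math.exp(s) for s in log_scores.values()))
    return {y: math.exp(s - z) for y, s in log_scores.items()}

# Toy usage mirroring the weight pattern reported above: 0.45 on the
# monolithic expert, the remainder shared equally (distributions invented).
monolithic   = {"N": 0.70, "V": 0.20, "O": 0.10}
pos_expert_1 = {"N": 0.50, "V": 0.30, "O": 0.20}
pos_expert_2 = {"N": 0.60, "V": 0.10, "O": 0.30}
pos_expert_3 = {"N": 0.40, "V": 0.40, "O": 0.20}
pooled = lop_combine([monolithic, pos_expert_1, pos_expert_2, pos_expert_3],
                     [0.45, 0.55 / 3, 0.55 / 3, 0.55 / 3])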
6.5 Choice of Expert Sets We can see from Tables 2 and 3 that the performance of a LOP-CRF varies with the choice of expert set. For example, in our tasks the simple and positional expert sets perform better than those for the label and random sets. For an explanation here, we refer back to our discussion of equation (5). We conjecture that the simple and positional expert sets achieve good performance in the LOP-CRF because they consist of experts that are diverse while simultaneously being reasonable models of the data. The label expert set exhibits greater diversity between the experts, because each expert focuses on modelling a particular label only, but each expert is a relatively 24 poor model of the entire distribution and the corresponding LOP-CRF performs worse. Similarly, the random experts are in general better models of the entire distribution but tend to be less diverse because they do not focus on any one aspect or subset of it. Intuitively, then, we want to devise experts that provide diverse but accurate views on the data. The expert sets we present in this paper were motivated by linguistic intuition, but clearly many choices exist. It remains an important open question as to how to automatically construct expert sets for good performance on a given task, and we intend to pursue this avenue in future research. 7 Conclusion and future work In this paper we have introduced the logarithmic opinion pool of CRFs as a way to address overfitting in CRF models. Our results show that a LOPCRF can provide a competitive alternative to conventional regularisation with a prior while avoiding the requirement to search a hyperparameter space. We have seen that, for a variety of types of expert, combination of expert CRFs with an unregularised standard CRF under a LOP with optimised weights can outperform the unregularised standard CRF and rival the performance of a regularised standard CRF. We have shown how these advantages a LOPCRF provides have a firm theoretical foundation in terms of the decomposition of the KL-divergence between a LOP-CRF and a target distribution, and how this provides a framework for designing new overfitting reduction schemes in terms of constructing diverse experts. In this work we have considered training the weights of a LOP-CRF using pre-trained, static experts. In future we intend to investigate cooperative training of LOP-CRF weights and the parameters of each expert in an expert set. Acknowledgements We wish to thank Stephen Clark, our colleagues in Edinburgh and the anonymous reviewers for many useful comments. References R. F. Bordley. 1982. A multiplicative formula for aggregating probability assessments. Management Science, (28):1137– 1148. T. Cohn, A. Smith, and M. Osborne. 2005. Scaling conditional random fields using error-correcting codes. In Proc. ACL 2005. J. Curran and S. Clark. 2003. Language independent NER using a maximum entropy tagger. In Proc. CoNLL-2003. S. Della Pietra, Della Pietra V., and J. Lafferty. 1997. Inducing features of random fields. In IEEE PAMI, volume 19(4), pages 380–393. L. Gillick and S. Cox. 1989. Some statistical issues in the comparison of speech recognition algorithms. In International Conference on Acoustics, Speech and Signal Processing, volume 1, pages 532–535. T. Heskes. 1998. Selecting weighting factors in logarithmic opinion pools. In Advances in Neural Information Processing Systems 10. G. E. Hinton. 1999. Product of experts. In ICANN, volume 1, pages 1–6. J. Lafferty, A. McCallum, and F. Pereira. 2001. 
Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML 2001.
R. Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proc. CoNLL-2002.
A. McCallum and W. Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proc. CoNLL-2003.
A. McCallum, K. Rohanimanesh, and C. Sutton. 2003. Dynamic conditional random fields for jointly labeling multiple sequences. In NIPS-2003 Workshop on Syntax, Semantics and Statistics.
A. McCallum. 2003. Efficiently inducing features of conditional random fields. In Proc. UAI 2003.
M. Osborne and J. Baldridge. 2004. Ensemble-based active learning for parse selection. In Proc. NAACL 2004.
F. Peng and A. McCallum. 2004. Accurate information extraction from research papers using conditional random fields. In Proc. HLT-NAACL 2004.
Y. Qi, M. Szummer, and T. P. Minka. 2005. Bayesian conditional random fields. In Proc. AISTATS 2005.
F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. In Proc. HLT-NAACL 2003.
E. F. Tjong Kim Sang and S. Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Proc. CoNLL-2000.
E. F. Tjong Kim Sang and F. De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proc. CoNLL-2003.
Proceedings of the 43rd Annual Meeting of the ACL, pages 239–246, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Implications for Generating Clarification Requests in Task-oriented Dialogues Verena Rieser Department of Computational Linguistics Saarland University Saarbr¨ucken, D-66041 [email protected] Johanna D. Moore School of Informatics University of Edinburgh Edinburgh, EH8 9LW, GB [email protected] Abstract Clarification requests (CRs) in conversation ensure and maintain mutual understanding and thus play a crucial role in robust dialogue interaction. In this paper, we describe a corpus study of CRs in task-oriented dialogue and compare our findings to those reported in two prior studies. We find that CR behavior in task-oriented dialogue differs significantly from that in everyday conversation in a number of ways. Moreover, the dialogue type, the modality and the channel quality all influence the decision of when to clarify and at which level of the grounding process. Finally we identify formfunction correlations which can inform the generation of CRs. 1 Introduction Clarification requests in conversation ensure and maintain mutual understanding and thus play a significant role in robust and efficient dialogue interaction. From a theoretical perspective, the model of grounding explains how mutual understanding is established. According to Clark (1996), speakers and listeners ground mutual understanding on four levels of coordination in an action ladder, as shown in Table 1. Several current research dialogue systems can detect errors on different levels of grounding (Paek and Horvitz, 2000; Larsson, 2002; Purver, 2004; Level Speaker S Listener L Convers. S is proposing activity α L is considering proposal α Intention S is signalling that p L is recognizing that p Signal S is presenting signal σ L is identifying signal σ Channel S is executing behavior β L is attending to behavior β Table 1: Four levels of grounding Schlangen, 2004). However, only the work of Purver (2004) addresses the question of how the source of the error affects the form the CR takes. In this paper, we investigate the use of formfunction mappings derived from human-human dialogues to inform the generation of CRs. We identify the factors that determine which function a CR should take and identify function-form correlations that can be used to guide the automatic generation of CRs. In Section 2, we discuss the classification schemes used in two recent corpus studies of CRs in human-human dialogue, and assess their applicability to the problem of generating CRs. Section 3 describes the results we obtained by applying the classification scheme of Rodriguez and Schlangen (2004) to the Communicator Corpus (Bennett and Rudnicky, 2002). Section 4 draws general conclusions for generating CRs by comparing our results to those of (Purver et al., 2003) and (Rodriguez and Schlangen, 2004). Section 5 describes the correlations between function and form features that are present in the corpus and their implications for generating CRs. 239 Attr. 
Value Category Example form non Non-Reprise “What did you say?” wot Conventional “Sorry?” frg Reprise Fragment “Edinburgh?” lit Literal Reprise “You want a flight to Edinburgh?” slu Reprise Sluice “Where?” sub Wh-substituted Reprise “You want a flight where?” gap Gap “You want a flight to...?” fil Gap Filler “...Edinburgh?” other Other x readings cla Clausal “Are you asking/asserting that X?” con Constituent “What do you mean by X?” lex Lexical “Did you utter X?” corr Correction “Did you intend to utter X instead?” other Other x Table 2: CR classification scheme by PGH 2 CR Classification Schemes We now discuss two recently proposed classification schemes for CRs, and assess their usefulness for generating CRs in a spoken dialogue system (SDS). 2.1 Purver, Ginzburg and Healey (PGH) Purver, Ginzburg and Healey (2003) investigated CRs in the British National Corpus (BNC) (Burnard, 2000). In their annotation scheme, a CR can take seven distinct surface forms and four readings, as shown in Table 2. The examples for the form feature are possible CRs following the statement “I want a flight to Edinburgh”. The focus of this classification scheme is to map semantic readings to syntactic surface forms. The form feature is defined by its relation to the problematic utterance, i.e., whether a CR reprises the antecedent utterance and to what extent. CRs may take the three different readings as defined by Ginzburg and Cooper (2001), as well as a fourth reading which indicates a correction. Although PGH report good coverage of the scheme on their subcorpus of the BNC (99%), we found their classification scheme to to be too coarsegrained to prescribe the form that a CR should take. As shown in example 1, Reprise Fragments (RFs), which make up one third of the BNC, are ambiguous in their readings and may also take several surface forms. (1) I would like to book a flight on Monday. (a) Monday? frg, con/cla (b) Which Monday? frg, con (c) Monday the first? frg, con (d) The first of May? frg, con (e) Monday the first or Monday the eighth? frg, (exclusive) con RFs endorse literal repetitions of part of the problematic utterance (1.a); repetitions with an additional question word (1.b); repetition with further specification (1.c); reformulations (1.d); and alternative questions (1.e)1. In addition to being too general to describe such differences, the classification scheme also fails to describe similarities. As noted by (Rodriguez and Schlangen, 2004), PGH provide no feature to describe the extent to which an RF repeats the problematic utterance. Finally, some phenomena cannot be described at all by the four readings. For example, the readings do not account for non-understanding on the pragmatic level. Furthermore the readings may have several problem sources: the clausal reading may be appropriate where the CR initiator failed to recognise the word acoustically as well as when he failed to resolve the reference. Since we are interested in generating CRs that indicate the source of the error, we need a classification scheme that represents such information. 2.2 Rodriguez and Schlangen (R&S) Rodriguez and Schlangen (2004) devised a multidimensional classification scheme where form and 1Alternative questions would be interpreted as asking a polar question with an exclusive reading. 240 function are meta-features taking sub-features as attributes. The function feature breaks down into the sub-features source, severity, extent, reply and satisfaction. 
The sources that might have caused the problem map to the levels as defined by Clark (1996). These sources can also be of different severity. The severity can be interpreted as describing the set of possible referents: asking for repetition indicates that no interpretation is available (cont-rep); asking for confirmation means that the CR initiator has some kind of hypothesis (cont-conf). The extent of a problem describes whether the CR points out a problematic element in the problem utterance. The reply represents the answer the addressee gives to the CR. The satisfaction of the CR-initiator is indicated by whether he renews the request for clarification or not. The meta-feature form describes how the CR is lingustically realised. It describes the sentence’s mood, whether it is grammatically complete, the relation to the antecedent, and the boundary tone. According to R&S’s classification scheme our illustrative example would be annotated as follows2: (2) I would like to book a flight on Monday. (a) Monday? mood: decl completeness: partial rel-antecedent: repet source: acous/np-ref severity: cont-repet extent: yes (b) Which Monday? mood: wh-question completeness: partial rel-antecedent: addition source: np-ref severity: cont-repet extent: yes (c) Monday the first? mood: decl completeness: partial rel-antecedent: addition source: np-ref severity: cont-conf extent: yes (d) The first of May? mood: decl completeness: partial 2The source features answer and satisfaction are ignored as they depend on how the dialogue continues. The interpretation of the source is dependent on the reply to the CR. Therefore all possible interpretations are listed. rel-antecedent: reformul source: np-ref severity: cont-conf extent: yes (d) Monday the first or Monday the eighth? mood: alt-q completeness: partial rel-antecedent: addition source: np-ref severity: cont-repet extent: yes In R&S’s classification scheme, ambiguities about CRs having different sources cannot be resolved entirely as example (2.a) shows. However, in contrast to PGH, the overall approach is a different one: instead of explaining causes of CRs within a theoretic-semantic model (as the three different readings of Ginzburg and Cooper (2001) do), they infer the interpretation of the CR from the context. Ambiguities get resolved by the reply of the addressee and the satisfaction of the CR initiator indicates the “mutually agreed interpretation” . R&S’s multi-dimensional CR description allows the fine-grained distinctions needed to generate natural CRs to be made. For example, PGH’s general category of RFs can be made more specific via the values for the feature relation to antecedent. In addition, the form feature is not restricted to syntax; it includes features such as intonation and coherence, which are useful for generating the surface form of CRs. Furthermore, the multi-dimensional function feature allows us to describe information relevant to generating CRs that is typically available in dialogue systems, such as the level of confidence in the hypothesis and the problem source. 3 CRs in the Communicator Corpus 3.1 Material and Method Material: We annotated the human-human travel reservation dialogues available as part of the Carnegie Mellon Communicator Corpus (Bennett and Rudnicky, 2002) because we were interested in studying naturally occurring CRs in task-oriented dialogue. In these dialogues, an experienced travel agent is making reservations for trips that people in the Carnegie Mellon Speech Group were taking in the upcoming months. 
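To make the multi-dimensional scheme concrete, the record below shows one way such an annotation could be represented programmatically. The field names follow R&S's form and function sub-features; the data structure itself is illustrative and not part of their scheme or of any annotation tool.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CRAnnotation:
    """One clarification request, described along the two meta-features."""
    # form sub-features
    mood: str                            # e.g. "decl", "wh-question", "alt-q"
    completeness: str                    # e.g. "partial", "complete"
    rel_antecedent: str                  # e.g. "repet", "addition", "reformul"
    boundary_tone: Optional[str] = None  # e.g. "rising", "falling"
    # function sub-features
    source: str = "np-ref"               # grounding-level source, e.g. "acous"
    severity: str = "cont-conf"          # ask to confirm vs. ask to repeat
    extent: bool = True                  # does the CR point out a specific element?
    reply: Optional[str] = None          # filled in from the addressee's answer
    satisfaction: Optional[str] = None   # filled in from the CR initiator's reaction

# Example (2c) above, "Monday the first?":
example_2c = CRAnnotation(mood="decl", completeness="partial",
                          rel_antecedent="addition", source="np-ref",
                          severity="cont-conf", extent=True)

Representing the reply and satisfaction features as optional fields reflects the point made above: they can only be filled in once the dialogue continues, and it is the addressee's reply and the initiator's reaction that settle the mutually agreed interpretation.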
The corpus comprises 31 dialogues of transcribed telephone speech, with 2098 dialogue turns and 19395 words. 241 form: distance-src: 1 | 2 | 3 | 4 | 5 | more mood: none | decl | polar-q | wh-q | alt-q | imp | other form: none | particle | partial | complete relation-antecedent: none | add | repet | repet-add | reformul | indep boundary-tone: none | rising | falling | no-appl function: source: n none | acous | lex | parsing | np-ref | deitic-ref | act-ref | int+eval | relevance | belief | ambiguity | scr-several o extent: none | fragment | whole severity: none | cont-conf | cont-rep | cont-disamb | no-react answer: n none | ans-repet | ans-y/n | ans-reformul | ans-elab | ans-w-defin | no-react o satisfaction: none | happy-yes | happy-no | happy-ambig Figure 1: CR classification scheme Annotation Scheme: Our annotation scheme, shown in Figure 1, is an extention of the R&S scheme described in the previous section. R&S’s scheme was devised for and tested on the Bielefeld Corpus of German task-oriented dialogues about joint problem solving.3 To annotate the Communicator Corpus we extended the scheme in the following ways. First, we found the need to distinguish CRs that consist only of newly added information, as in example 3, from those that add information while also repeating part of the utterance to be clarified, as in 4. We augmented the scheme to allow two distinct values for the form feature relation-antecedent, add for cases like 3 and repet-add for cases like 4. (3) Cust: What is the last flight I could come back on? Agent: On the 29th of March? (4) Cust: I’ll be returning on Thursday the fifth. Agent: The fifth of February? To the function feature source we added the values belief to cover CRs like 5 and ambiguity refinement to cover CRs like 6. (5) Agent: You need a visa. Cust: I do need one? Agent: Yes you do. (6) Agent: Okay I have two options ...with Hertz . . . if not they do have a lower rate with Budget and that is fifty one dollars. Cust: Per day? Agent: Per day um mm. Finally, following Gabsdil (2003) we introduced an additional value for severity, cont-disamb, to 3http://sfb360.uni-bielefeld.de cover CRs that request disambiguation when more than one interpretation is available. Method: We first identified turns containing CRs, and then annotated them with form and function features. It is not always possible to identify CRs from the utterance alone. Frequently, context (e.g., the reaction of the addressee) or intonation is required to distinguish a CR from other feedback strategies, such as positive feedback. See (Rieser, 2004) for a detailed discussion. The annotation was only performed once. The coding scheme is a slight variation of R&S, which has been shown relaiable with Kappa of 0.7 for identifying source. 3.2 Forms and Functions of CRs in the Communicator Corpus The human-human dialogues in the Communicator Corpus contain 98 CRs in 2098 dialogue turns (4.6%). Forms: The frequencies for the values of the individual form features are shown in Table 3. The most frequent type of CRs were partial declarative questions, which combine the mood value declarative and the completeness value partial.4 These account for 53.1% of the CRs in the corpus. Moreover, four of the five most frequent surface forms of CRs in the Communicator Corpus differ only in the value for the feature relation-antecedent. 
They are partial declaratives with rising boundary tone, that either reformulate (7.1%) the problematic utterance, repeat 4Declarative questions cover “all cases of non-interrogative word-order, i.e., both declarative sentences and fragments” (Rodriguez and Schlangen, 2004). 242 Feature Value Freq. (%) Mood declarative 65 polar 21 wh-question 7 other 7 Completeness partial 58 complete 38 other 4 Relation antecedent rep-add 27 independent 21 reformulation 19 repetition 18 addition 10 other 5 Boundary tone rising 74 falling 22 other 4 Table 3: Distribution of values for the form features the problematic constituent (11.2%), add only new information (7.1%), or repeat the problematic constituent and add new information (10.2%). The fifth most frequent type is conventional CRs (10.2%).5 Functions: The distributions of the function features are given in Figure 4. The most frequent source of problems was np-reference. Next most frequent were acoustic problems, possibly due to the poor channel quality. Third were CRs that enquire about intention. As indicated by the feature extent, almost 80% of CRs point out a specific element of the problematic utterance. The features severity and answer illustrate that most of the time CRs request confirmation of an hypothesis (73.5%) with a yesno-answer (64.3%). The majority of the provided answers were satisfying, which means that the addressee tends to interpret the CR correctly and answers collaboratively. Only 6.1% of CRs failed to elicit a response. 4 CRs in Task-oriented Dialogue 4.1 Comparison In order to determine whether there are differences as regards CRs between task-oriented dialogues and everyday conversations, we compared our results to those of PGH’s study on the BNC and those of R&S 5Conventional forms are “Excuse me?”, “Pardon?”, etc. Feature Value Freq. (%) Source np-reference 40 acoustic 31 intention 8 belief 6 ambiguity 4 contact 4 others 3 relevance 2 several 2 Extent yes 80 no 20 Severity confirmation 73 repetition 20 other 7 Answer y/n answer 64 other 15 elaboration 13 no reaction 6 Table 4: Distribution of values for the function features on the Bielefeld Corpus. The BNC contains a 10 million word sub-corpus of English dialogue transcriptions about topics of general interest. PGH analysed a portion consisting of ca. 10,600 turns, ca. 150,000 words. R&S annotated 22 dialogues from the Bielefeld Corpus, consisting of ca. 3962 turns, ca. 36,000 words. The major differences in the feature distributions are listed in Table 5. We found that there are no significant differences between the feature distributions for the Communicator and Bielefeld corpora, but that the differences between Communicator and BNC, and Bielefeld and BNC are significant at the levels indicated in Table 5 using Pearson’s χ2. The differences between dialogues of different types suggest that there is a different grounding strategy. In task-oriented dialogues we see a tradeoff between avoiding misunderstanding and keeping the conversation as efficient as possible. The hypothesis that grounding in task-oriented dialogues is more cautious is supported by the following facts (as shown by the figures in Table 5): • CRs are more frequent in task-oriented dialogues. • The overwhelming majority of CRs directly follow the problematic utterance. 
243 Corpus Feature Communicator Bielefeld BNC CRs 98 230 418 frequency 4.6% 5.8%*** 3.9% distance-src=1 92.8%* 94.8%*** 84.4% no-react 6.1%* 8.7%** 17.0% cont-conf 73.5%*** 61.7%*** 46.6% partial 58.2%** 76.5%*** 42.4% independent 21.4%*** 9.6%*** 44.2% cont-rep 19.8%*** 14.8%*** 39.5% y/n-answer 64.3% 44.8% n/a Table 5: Comparison of CR forms in everyday vs. taskoriented corpora (* denotes p < .05, ** is p < .01, *** is p < .005.) • CRs in everyday conversation fail to elicit a response nearly three times as often.6 • Even though dialogue participants seem to have strong hypotheses, they frequently confirm them. Although grounding is more cautious in taskoriented dialogues, the dialogue participants try to keep the dialogue as efficient as possible: • Most CRs are partial in form. • Most of the CRs point out one specific element (with only a minority being independent as shown in Table 5). Therefore, in task-oriented dialogues, CRs locate the understanding problem directly and give partial credit for what was understood. • In task-oriented dialogues, the CR-initiator asks to confirm an hypothesis about what he understood rather than asking the other dialogue participant to repeat her utterance. • The addressee prefers to give a short y/n answer in most cases. Comparing error sources in the two task-oriented corpora, we found a number of differences as shown in Table 6. In particular: 6Another factor that might account for these differences is that the BNC contains multi-party conversations, and questions in multi-party conversations may be less likely to receive responses. Furthermore, due to the poor recording quality of the BNC, many utterances are marked as “not interpretable”, which could also lower the response rate. Corpus Feature Communicator Bielefeld Significance contact 4.1% 0 inst n/a acoustic 30.6% 11.7% *** lexical 1 inst 1 inst n/a parsing 1 inst 0 inst n/a np-ref 39.8% 24.4% ** deict-ref 1 inst 27.4% *** ambiguity 4.1% not eval. n/a belief 6.1% not eval. n/a relevance 2.1% not eval. n/a intention 8.2% 22.2% ** several 2.0% 14.3% *** Table 6: Comparison of CR problem sources in task-oriented corpora • Dialogue type: Belief and ambiguity refinement do not seem to be a source of problems in joint problem solving dialogues, as R&S did not include them in their annotation scheme. For CRs in information seeking these features need to be added to explain quite frequent phenomena. As shown in Table 6, 10.2% of CRs were in one of these two classes. • Modality: Deictic reference resolution causes many more understanding difficulties in dialogues where people have a shared point of view than in telephone communication (Bielefeld: most frequent problem source; Communicator: one instance detected). Furthermore, in the Bielefeld Corpus, people tend to formulate more fragmentary sentences. In environments where people have a shared point of view, complete sentences can be avoided by using nonverbal communication channels. Finally, we see that establishing contact is more of a problem when speech is the only modality available. • Channel quality: Acoustic problems are much more likely in the Communicator Corpus. These results indicate that the decision process for grounding needs to consider the modality, the domain, and the communication channel. Similar extensions to the grounding model are suggested by (Traum, 1999). 244 4.2 Consequences for Generation The similarities and differences detected can be used to give recommendations for generating CRs. 
In terms of when to initiate a CR, we can state that clarification should not be postponed, and immediate, local management of uncertainty is critical. This view is also supported by observations of how non-native speakers handle non-understanding (Paek, 2003). Furthermore, for task-oriented dialogues the system should present an hypothesis to be confirmed, rather than ask for repetition. Our data suggests that, when they are confronted with uncertainty, humans tend to build up hypotheses from the dialogue history and from their world knowledge. For example, when the customer specified a date without a month, the travel agent would propose the most reasonable hypothesis instead of asking a wh-question. It is interesting to note that Skantze (2003) found that users are more satisfied if the system “hides” its recognition problem by asking a task-related question to help to confirm the hypothesis, rather than explicitly indicating non-understanding. 5 Correlations between Function and Form: How to say it? Once the dialogue system has decided on the function features, it must find a corresponding surface form to be generated. Many forms are indeed related to the function as shown in Table 7, where we present a significance analysis using Pearson’s χ2 (with Yates correction). Source: We found that the relation to the antecedent seems to distinguish fairly reliably between CRs clarifying reference and those clarifying acoustic understanding. In the Communicator Corpus, for acoustic problems the CR-initiator tends to repeat the problematic part literally, while reference problems trigger a reformulation or a repetition with addition. For both problem sources, partial declarative questions are preferred. These findings are also supported by R&S. For the first level of non-understanding, the inability to establish contact, complete polar questions with no relation to the antecedent are formulated, e.g., ”Are you there?”. Severity: The severity indicates how much was understood, i.e., whether the CR initiator asks to confirm an hypothesis or to repeat the antecedent utterance. The severity of an error strongly correlates with the sentence mood. Declarative and polar questions, which take up material from the problematic utterance, ask to confirm an hypothesis. Wh-questions, which are independent, reformulations or repetitions with additions (e.g., whsubstituted reprises) of the problematic utterance usually prompt for repetition, as do imperatives. Alternative questions prompt the addressee to disambiguate the hypothesis. Answer: By definition, certain types of question prompt for certain answers. Therefore, the feature answer is closely linked to the sentence mood of the CR. As polar questions and declarative questions generally enquire about a proposition, i.e., an hypothesis or belief, they tend to receive yes/no answers, but repetitions are also possible. Whquestions, alternative questions and imperatives tend to get answers providing additional information (i.e., reformulations and elaborations). Extent: The function feature extent is logically independent from the form feature completeness, although they are strongly correlated. Extent is a binary feature indicating whether the CR points out a specific element or concerns the whole utterance. Most fragmentary declarative questions and fragmentary polar questions point out a specific element, especially when they are not independent but stand in some relation to the antecedent utterance. 
Independent complete imperatives address the whole previous utterance. The correlations found in the Communicator Corpus are fairly consistent with those found in the Bielefeld Corpus, and thus we believe that the guidelines for generating CRs in task-oriented dialogues may be language independent, at least for German and English. 6 Summary and Future Work In this paper we presented the results of a corpus study of naturally occurring CRs in task-oriented dialogue. Comparing our results to two other studies, one of a task-oriented corpus and one of a cor245 Function Form source severity extent answer mood χ2(24) = 112.20 p < 0.001 χ2(5) = 30.34 p < 0.001 χ2(5) = 24.25 df = p < 0.005 χ2(5) = 25.19 p < 0.001 bound-tone indep. indep. indep. indep. rel-antec χ2(24) = 108.23 p < 0.001 χ2(4) = 11.69 p < 0.005 χ2(4) = 42.58 p < 0.001 indep. complete χ2(7) = 27.39 p < 0.005 indep. χ2(1) = 27.39 p < 0.001 indep. Table 7: Significance analysis for form/function correlations. pus of everyday conversation, we found no significant differences in frequency of CRs and distribution of forms in the two task-oriented corpora, but many significant differences between CRs in taskoriented dialogue and everyday conversation. Our findings suggest that in task-oriented dialogues, humans use a cautious, but efficient strategy for clarification, preferring to present an hypothesis rather than ask the user to repeat or rephrase the problematic utterance. We also identified correlations between function and form features that can serve as a basis for generating more natural sounding CRs, which indicate a specific problem with understanding. In current work, we are studying data collected in a wizard-of-oz study in a multi-modal setting, in order to study clarification behavior in multi-modal dialogue. Acknowledgements The authors would like thank Kepa Rodriguez, Oliver Lemon, and David Reitter for help and discussion. References Christina L. Bennett and Alexander I. Rudnicky. 2002. The Carnegie Mellon Communicator Corpus. In Proceedings of the International Conference of Spoken Language Processing (ICSLP02). Lou Burnard. 2000. The British National Corpus Users Reference Guide. Technical report, Oxford Universiry Computing Services. Herbert Clark. 1996. Using Language. Cambridge University Press. Malte Gabsdil. 2003. Clarification in Spoken Dialogue Systems. Proceedings of the 2003 AAAI Spring Symposium. Workshop on Natural Language Generation in Spoken and Written Dialogue. Jonathan Ginzburg and Robin Cooper. 2001. Resolving Ellipsis in Clarification. In Proceedings of the 39th meeting of the Association for Computational Linguistics. Staffan Larsson. 2002. Issue-based Dialogue Management. Ph.D. thesis, Goteborg University. Tim Paek and Eric Horvitz. 2000. Conversation as Action Under Uncertainty. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence. Tim Paek. 2003. Toward a Taxonomy of Communication Errors. In ISCA Tutorial and Research Workshop on Error Handling in Spoken Dialogue Systems. Matthew Purver, Jonathan Ginzburg, and Patrick Healey. 2003. On the Means for Clarification in Dialogue. In R. Smith and J. van Kuppevelt, editors, Current and New Directions in Discourse and Dialogue. Matthew Purver. 2004. CLARIE: The Clarification Engine. In Proceedings of the Eighth Workshop on Formal Semantics and Dialogue. Verena Rieser. 2004. Fragmentary Clarifications on Several Levels for Robust Dialogue Systems. Master’s thesis, School of Informatics, University of Edinburgh. Kepa J. 
Rodriguez and David Schlangen. 2004. Form, Intonation and Function of Clarification Requests in German Task-oriented Spoken Dialogues. In Proceedings of the Eighth Workshop on Formal Semantics and Dialogue.
David Schlangen. 2004. Causes and Strategies for Requesting Clarification in Dialogue. In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue.
Gabriel Skantze. 2003. Exploring Human Error Handling Strategies: Implications for Spoken Dialogue Systems. In ISCA Tutorial and Research Workshop on Error Handling in Spoken Dialogue Systems.
David R. Traum. 1999. Computational Models of Grounding in Collaborative Systems. In Proceedings of the AAAI Fall Symposium on Psychological Models of Communication.
Proceedings of the 43rd Annual Meeting of the ACL, pages 247–254, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Towards Finding and Fixing Fragments: Using ML to Identify Non-Sentential Utterances and their Antecedents in Multi-Party Dialogue David Schlangen Department of Linguistics University of Potsdam P.O. Box 601553 D-14415 Potsdam — Germany [email protected] Abstract Non-sentential utterances (e.g., shortanswers as in “Who came to the party?”— “Peter.”) are pervasive in dialogue. As with other forms of ellipsis, the elided material is typically present in the context (e.g., the question that a short answer answers). We present a machine learning approach to the novel task of identifying fragments and their antecedents in multiparty dialogue. We compare the performance of several learning algorithms, using a mixture of structural and lexical features, and show that the task of identifying antecedents given a fragment can be learnt successfully (f(0.5) = .76); we discuss why the task of identifying fragments is harder (f(0.5) = .41) and finally report on a combined task (f(0.5) = .38). 1 Introduction Non-sentential utterances (NSUs) as in (1) are pervasive in dialogue: recent studies put the proportion of such utterances at around 10% across different types of dialogue (Fern´andez and Ginzburg, 2002; Schlangen and Lascarides, 2003). (1) a. A: Who came to the party? B: Peter. (= Peter came to the party.) b. A: I talked to Peter. B: Peter Miller? (= Was it Peter Miller you talked to?) c. A: Who was this? Peter Miller? (= Was this Peter Miller? Such utterances pose an obvious problem for natural language processing applications, namely that the intended information (in (1-a)-B a proposition) has to be recovered from the uttered information (here, an NP meaning) with the help of information from the context. While some systems that automatically resolve such fragments have recently been developed (Schlangen and Lascarides, 2002; Fern´andez et al., 2004a), they have the drawback that they require “deep” linguistic processing (full parses, and also information about discourse structure) and hence are not very robust. We have defined a well-defined subtask of this problem, namely identifying fragments (certain kinds of NSUs, see below) and their antecedents (in multi-party dialogue, in our case), and present a novel machine learning approach to it, which we hypothesise will be useful for tasks such as automatic meeting summarisation.1 The remainder of this paper is structured as follows. In the next section we further specify the task and different possible approaches to it. We then describe the corpus we used, some of its characteristics with respect to fragments, and the features we extracted from it for machine learning. Section 4 describes our experimental settings and reports the results. After a comparison to related work in Section 5, we close with a conclusion and some further 1(Zechner and Lavie, 2001) describe a related task, linking questions and answers, and evaluate its usefulness in the context of automatic summarisation; see Section 5. 247 work that is planned. 2 The Tasks As we said in the introduction, the main task we want to tackle is to align (certain kinds of) NSUs and their antecedents. Now, what characterises this kind of NSU, and what are their antecedents? In the examples from the introduction, the NSUs can be resolved simply by looking at the previous utterance, which provides the material that is elided in them. 
In reality, however, the situation is not that simple, for three reasons: First, it is of course not always the previous utterance that provides this material (as illustrated by (2), where utterance 7 is resolved by utterance 1); in our data the average distance in fact is 2.5 utterances (see below). (2) 1 B: [. . . ] What else should be done ? 2 C: More intelligence . 3 More good intelligence . 4 Right . 5 D: Intelligent intelligence . 6 B: Better application of face and voice recognition . 7 C: More [. . . ] intermingling of the agencies , you know . [ from NSI 20011115 ] Second, it’s not even necessarily a single utterance that does this–it might very well be a span of utterances, or something that has to be inferred from such spans (parallel to the situation with pronouns, as discussed empirically e.g. in (Strube and M¨uller, 2003)). (3) shows an example where a new topic is broached by using an NSU. It is possible to analyse this as an answer to the question under discussion “what shall we organise for the party?”, as (Fern´andez et al., 2004a) would do; a question, however, which is only implicitly posed by the previous discourse, and hence this is an example of an NSU that does not have an overt antecedent. (3) [after discussing a number of different topics] 1 D: So, equipment. 2 I can bring [. . . ] [ from NSI 20011211 ] Lastly, not all NSUs should be analysed as being the result of ellipsis: backchannels for example (like the “Right” in utterance 4 in (2) above) seem to directly fulfil their discourse function without any need for reconstruction.2 To keep matters simple, we concentrate in this paper on NSUs of a certain kind, namely those that a) do not predominantly have a discourse-management function (like for example backchannels), but rather convey messages (i.e., propositions, questions or requests)—this is what distinguishes fragments from other NSUs—and b) have individual utterances as antecedents. In the terminology of (Schlangen and Lascarides, 2003), fragments of the latter type are resolution-via-identity-fragments, where the elided information can be identified in the context and need not be inferred (as opposed to resolution-viainference-fragments). Choosing only this special kind of NSUs poses the question whether this subgroup is distinguished from the general group of fragments by criteria that can be learnt; we will return to this below when we analyse the errors made by the classifier. We have defined two approaches to this task. One is to split the task into two sub-tasks: identifying fragments in a corpus, and identifying antecedents for fragments. These steps are naturally performed sequentially to handle our main task, but they also allow the fragment classification decision to come from another source—a language-model used in an automatic speech recognition system, for example— and to use only the antecedent-classifier. The other approach is to do both at the same time, i.e. to classify pairs of utterances into those that combine a fragment and its antecedent and those that don’t. We report the results of our experiments with these tasks below, after describing the data we used. 
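Schematically, the two set-ups can be contrasted as in the sketch below. The two predicate arguments stand in for the trained classifiers introduced in Section 4, and the six-utterance look-back anticipates the accessibility window defined in Section 3.2; the sketch is illustrative rather than the code actually used.

def sequential_setup(utterances, is_fragment, is_antecedent):
    """Two-stage approach: first identify fragments, then search the
    recent context for an utterance classified as its antecedent."""
    links = []
    for i, u in enumerate(utterances):
        if not is_fragment(u):
            continue
        for a in reversed(utterances[max(0, i - 6):i]):   # closest candidates first
            if is_antecedent(a, u):
                links.append((a, u))
                break
    return links

def joint_setup(pairs, is_fragment_antecedent_pair):
    """Single-stage approach: classify candidate (antecedent, fragment)
    pairs directly and keep those labelled positive."""
    return [(a, b) for (a, b) in pairs if is_fragment_antecedent_pair(a, b)]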
3 Corpus, Features, and Data Creation 3.1 Corpus As material we have used six transcripts from the “NIST Meeting Room Pilot Corpus” (Garofolo et al., 2004), a corpus of recordings and transcriptions of multi-party meetings.3 Those six transcripts con2The boundaries are fuzzy here, however, as backchannels can also be fragmental repetitions of previous material, and sometimes it is not clear how to classify a given utterance. A similar problem of classifying fragments is discussed in (Schlangen, 2003) and we will not go further into this here. 3We have chosen a multi-party setting because we are ultimately interested in automatic summarisation of meetings. In this paper here, however, we view our task as a “stand-alone task”. Some of the problems resulting in the presence of many 248 average distance α – β (utterances): 2.5 α declarative 159 (52%) α interrogative 140 (46%) α unclassfd. 8 (2%) β declarative 235 (76%) β interrogative (23%) β unclassfd. 2 (0.7%) α being last in their turn 142 (46%) β being first in their turn 159 (52%) Table 1: Some distributional characteristics. (α denotes antecedent, β fragment.) sist of 5,999 utterances, among which we identified 307 fragment–antecedent pairs.4,5 With 5.1% this is a lower rate than that reported for NSUs in other corpora (see above); but note that as explained above, we are actually only looking at a sub-class of all NSUs here. For these pairs we also annotated some more attributes, which are summarised in Table 1. Note that the average distance is slightly higher than that reported in (Schlangen and Lascarides, 2003) for (2-party) dialogue (1.8); this is presumably due to the presence of more speakers who are able to reply to an utterance. Finally, we automatically annotated all utterances with part-of-speech tags, using TreeTagger (Schmid, 1994), which we’ve trained on the switchboard corpus of spoken language (Godfrey et al., 1992), because it contains, just like our corpus, speech disfluencies.6 We now describe the creation of the data we used for training. We first describe the data-sets for the different tasks, and then the features used to represent the events that are to be classified. 3.2 Data Sets Data creation for the fragment-identification task (henceforth simply fragment-task) was straightforspeakers are discussed below. 4We have used the MMAX tool (M¨uller and Strube, 2001)) for the annotation. 5To test the reliability of the annotation scheme, we had a subset of the data annotated by two annotators and found a satisfactory κ-agreement (Carletta, 1996) of κ = 0.81. 6The tagger is available free for academic research from http://www.ims.uni-stuttgart.de/projekte/ corplex/TreeTagger/DecisionTreeTagger.html. ward: for each utterance, a number of features was derived automatically (see next section) and the correct class (fragment / other) was added. (Note that none of the manually annotated attributes were used.) This resulted in a file with 5,999 data points for classification. Given that there were 307 fragments, this means that in this data-set there is a ratio positives (fragments) vs. negatives (non-fragments) for the classifier of 1:20. To address this imbalance, we also ran the experiments with balanced data-sets with a ratio of 1:5. The other tasks, antecedent-identification (antecedent-task) and antecedent-fragmentidentification (combined-task) required the creation of data-sets containing pairs. For this we created an “accessibility window” going back from each utterance. 
Specifically, we included for each utterance a) all previous utterances of the same speaker from the same turn; and b) the three last utterances of every speaker, but only until one speaker took the turn again and up to a maximum of 6 previous utterances. To illustrate this method, given example (2) it would form pairs with utterance 7 as fragment-candidate and all of utterances 6–2, but not 1, because that violates condition b) (it is the second turn of speaker B). In the case of (2), this exclusion would be a wrong decision, since 1 is in fact the antecedent for 7. In general, however, this dynamic method proved good at capturing as many antecedents as possible while keeping the number of data points manageable. It captured 269 antecedent-fragment pairs, which had an average distance of 1.84 utterances. The remaining 38 pairs which it missed had an average distance of 7.27 utterances, which means that to capture those we would have had to widen the window considerably. E.g., considering all previous 8 utterances would capture an additional 25 pairs, but at the cost of doubling the number of data points. We hence chose the approach described here, being aware of the introduction of a certain bias. As we have said, we are trying to link utterances, one a fragment, the other its antecedent. The notion of utterance is however less well-defined than one might expect, and the segmentation of continuous speech into utterances is a veritable research problem on its own (see e.g. (Traum and Heeman, 1997)). Often it is arguable whether a prepositional 249 Structural features dis distance α – β, in utterances sspk same speaker yes/no nspk number speaker changes (= # turns) iqu number of intervening questions alt α last utterance in its turn? bft β first utterance in its turn? Lexical / Utterance-based features bvb (tensed) verb present in β? bds disfluency present in β? aqm α contains question mark awh α contains wh word bpr ratio of polar particles (yes, no, maybe, etc..) / other in β apr ratio of polar particles in α lal length of α lbe length of β nra ratio nouns / non-nouns in α nra ratio nouns / non-nouns in β rab ratio nouns in β that also occur in α rap ratio words in β that also occur in α god google similarity (see text) Table 2: The Features phrase for example should be analysed as an adjunct (and hence as not being an utterance on its own) or as a fragment. In our experiments, we have followed the decision made by the transcribers of the original corpus, since they had information (e.g. about pauses) which was not available to us. For the antecedent-task, we include only pairs where β (the second utterance in the pair) is a fragment—since the task is to identify an antecedent for already identified fragments. This results in a data-set with 1318 data points (i.e., we created on average 4 pairs per fragment). This data-set is sufficiently balanced between positives and negatives, and so we did not create another version of it. The data for the combined-task, however, is much bigger, as it contains pairs for all utterances. It consists of 26,340 pairs, i.e. a ratio of roughly 1:90. For this reason we also used balanced data-sets for training, where the ratio was adjusted to 1:25. 3.3 Features Table 2 lists the features we have used to represent the utterances. (In this table, and in this section, we denote the candidate for being a fragment with β and the candidate for being β’s antecedent with α.) 
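Before turning to the individual features, the pair-creation procedure described in Section 3.2 can be sketched as follows. The utterance objects, with speaker and turn attributes, are a simplified stand-in for the corpus representation, and the stopping condition encodes one reading of "until one speaker took the turn again" that reproduces the behaviour on example (2).

def accessibility_window(utterances, i, max_back=6, per_speaker=3):
    """Collect antecedent candidates for utterances[i]: (a) all earlier
    utterances of the same speaker within the same turn, and (b) the last
    per_speaker utterances of every speaker, stopping once some speaker
    is encountered again in an earlier turn, and never going back more
    than max_back utterances."""
    current = utterances[i]
    candidates = []
    per_speaker_count = {}
    last_turn_of = {}
    for j in range(i - 1, max(i - 1 - max_back, -1), -1):
        prev = utterances[j]
        if prev.speaker in last_turn_of and last_turn_of[prev.speaker] != prev.turn:
            break   # walking backwards, we have crossed into that speaker's previous turn
        last_turn_of[prev.speaker] = prev.turn
        same_turn = (prev.speaker == current.speaker and prev.turn == current.turn)
        count = per_speaker_count.get(prev.speaker, 0)
        if same_turn or count < per_speaker:
            candidates.append(prev)
            per_speaker_count[prev.speaker] = count + 1
    return candidates

Applied to example (2), with utterance 7 as the fragment candidate, this yields utterances 6-2 but not 1, as described above.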
We have defined a number of structural features, which give information about the (discourse-)structural relation between α and β. The rationale behind choosing them should be clear; iqu for example indicates in a weak way whether there might have been a topic change, and high nspk should presumably make an antecedent relation between α and β less likely. We have also used some lexical or utterance-based features, which describe lexical properties of the individual utterances and lexical relations between them which could be relevant for the tasks. For example, the presence of a verb in β is presumably predictive for its being a fragment or not, as is the length. To capture a possible semantic relationship between the utterances, we defined two features. The more direct one, rab, looks at verbatim re-occurrences of nouns from α in β, which occur for example in check-questions as in (4) below.
(4) A: I saw Peter. B: Peter? (= Who is this Peter you saw?)
Less direct semantic relations are intended to be captured by god, the second semantic feature we use. (The name is short for google distance, which indicates its relatedness to the feature used by (Poesio et al., 2004); it is, however, a measure of similarity, not distance, as the definition below makes clear.) It is computed as follows: for each pair (x, y) of nouns from α and β, Google is called (via the Google API) with a query for x, for y, and for x and y together. The similarity then is the average ratio of pair vs. individual term:

Google Similarity(x, y) = 1/2 * ( hits(x, y) / hits(x) + hits(x, y) / hits(y) )

We now describe the experiments we performed and their results.
4 Experiments and Results
4.1 Experimental Setup
For the learning experiments, we used three classifiers on all data-sets for the three tasks:
• SLIPPER (Simple Learner with Iterative Pruning to Produce Error Reduction) (Cohen and Singer, 1999), which is a rule learner which combines the separate-and-conquer approach with confidence-rated boosting. It is unique among the classifiers that we have used in that it can make use of "set-valued" features, e.g. strings; we have run this learner both with only the features listed above and with the utterances (and POS-tags) as an additional feature.
• TIMBL (Tilburg Memory-Based Learner) (Daelemans et al., 2003), which implements a memory-based learning algorithm (IB1) which predicts the class of a test data point by looking at its distance to all examples from the training data, using some distance metric. In our experiments, we have used the weighted-overlap method, which assigns weights to all features.
• MAXENT, Zhang Le's C++ implementation of maximum entropy modelling (Berger et al., 1996). In our experiments, we used L-BFGS parameter estimation.
We also implemented a naïve Bayes classifier and ran it on the fragment-task, with a data-set consisting only of the strings and POS-tags. To determine the contribution of all features, we used an iterative process similar to the one described in (Kohavi and John, 1997; Strube and Müller, 2003): we start with training a model using a baseline set of features, and then add each remaining feature individually, recording the gain (w.r.t. the f-measure (f(0.5), to be precise)), and choosing the best-performing feature, incrementally until no further gain is recorded. All individual training- and evaluation-steps are performed using 8-fold cross-validation (given the small number of positive instances, more folds would have made the number of instances in the test sets too small).
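The iterative selection procedure just described can be sketched as follows; train_and_score stands in for training one of the classifiers above under 8-fold cross-validation and returning f(0.5), and the example call at the bottom is purely illustrative.

def greedy_forward_selection(baseline, candidates, train_and_score):
    """Start from the baseline feature set, repeatedly add the single
    remaining feature with the largest gain in the score, and stop as
    soon as no addition improves on the current best."""
    selected = list(baseline)
    best_score = train_and_score(selected)
    remaining = [f for f in candidates if f not in selected]
    while remaining:
        score, feature = max((train_and_score(selected + [f]), f)
                             for f in remaining)
        if score <= best_score:
            break
        selected.append(feature)
        remaining.remove(feature)
        best_score = score
    return selected, best_score

# e.g. for the antecedent task, starting from the dis baseline:
# features, score = greedy_forward_selection(
#     ["dis"], ["sspk", "nspk", "iqu", "awh", "god", "lal", "lbe"], cv_f05)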
The baselines were as follows: for the fragmenttask, we used bvb and lbe as baseline, i.e. we let the classifier know the length of the candidate and whether the candidate contains a verb or not. For the antecedent-task we tested a very simple baseline, containing only of one feature, the distance between α and β (dis). The baseline for the combinedtask, finally, was a combination of those two baselines, i.e. bvb+lbe+dis. The full feature-set for the fragment-task was lbe, bvb, bpr, nrb, bft, bds (since for this task there was no α to compute features of), for the two other tasks it was the complete set shown in Table 2. 8Available from http://homepages.inf.ed.ac.uk/ s0450736/maxent toolkit.html. 4.2 Results The Tables 3–5 show the results of the experiments. The entries are roughly sorted by performance of the classifier used; for most of the classifiers and datasets for each task we show the performance for baseline, intermediate feature set(s), and full feature-set, for the rest we only show the best-performing setting. We also indicate whether a balanced or unbalanced data set was used. I.e., the first three lines in Table 3 report on MaxEnt on a balanced data set for the fragment-task, giving results for the baseline, baseline+nrb+bft, and the full feature-set. We begin with discussing the fragment task. As Table 3 shows, the three main classifiers perform roughly equivalently. Re-balancing the data, as expected, boosts recall at the cost of precision. For all settings (i.e., combinations of data-sets, feature-sets and classifier), except re-balanced maxent, the baseline (verb in β yes/no, and length of β) already has some success in identifying fragments, but adding the remaining features still boosts the performance. Having available the string (condition s.s; slipper with set valued features) interestingly does not help SLIPPER much. Overall the performance on this task is not great. Why is that? An analysis of the errors made shows two problems. Among the false negatives, there is a high number of fragments like “yeah” and “mhm”, which in their particular context were answers to questions, but that however occur much more often as backchannels (true negatives). The classifier, without having information about the context, can of course not distinguish between these cases, and goes for the majority decision. Among the false positives, we find utterances that are indeed non-sentential, but for which no antecedent was marked (as in (3) above), i.e., which are not fragments in our narrow sense. It seems, thus, that the required distinctions are not ones that can be reliably learnt from looking at the fragments alone. The antecedent-task was handled more satisfactorily, as Table 4 shows. For this task, a na¨ıve baseline (“always take previous utterance”) preforms relatively well already; however, all classifiers were able to improve on this, with a slight advantage for the maxent model (f(0.5) = 0.76). As the entry for MaxEnt shows, adding to the baseline-features 251 Data Set Cl. 
Recall Precision f(0.5) f(1.0) f(2.0) B; bl m 0.00 0.00 0.00 0.00 0.00 B; bl+nrb+bft m 36.39 31.16 0.31 0.33 0.35 B; all m 40.61 44.10 0.43 0.42 0.41 UB; all m 22.13 65.06 0.47 0.33 0.25 B; bl t 31.77 21.20 0.22 0.24 0.28 B; bl+nrb+bpr+bds t 42.18 41.26 0.41 0.42 0.42 B; all t 44.54 32.74 0.34 0.37 0.41 UB; bl+nrb t 26.22 59.05 0.47 0.36 0.29 B; bl s 21.07 16.95 0.17 0.18 0.20 B; bl+nrb+bft+bds s 36.37 49.28 0.46 0.41 0.38 B; all s 36.67 43.31 0.42 0.40 0.38 UB; bl+nrb s 28.28 57.88 0.48 0.38 0.31 B s.s 32.57 42.96 0.40 0.36 0.34 B b 55.62 19.75 0.23 0.29 0.41 UB b 66.50 20.00 0.23 0.31 0.45 Table 3: Results for the fragment task. (Cl. = classifier used, where s = slipper, s.s = slipper + set-valued features, t = timbl, m = maxent, b = naive bayes; UB/B = (un)balanced training data.) Data Set Cl. Recall Precision f(0.5) f(1.0) f(2.0) dis=1 44.95 44.81 0.45 0.45 0.45 UB; bl m 0 0 0.0 0.0 0.0 UB; bl+awh m 43.21 52.90 0.50 0.47 0.45 UB; bl+awh+god m 36.98 75.31 0.62 0.50 0.41 UB; bl+awh+god+lbe+lal+iqu+nra+buh m 64.26 80.39 0.76 0.71 0.67 UB; all m 58.16 73.57 0.69 0.64 0.60 UB; bl s 0.00 0.00 0.00 0.00 0.00 UB; bl+aqm s 36.65 78.44 0.63 0.49 0.41 UB; bl+aqm+rab+iqu+lal s 49.72 79.75 0.71 0.61 0.54 UB; all s 49.43 72.57 0.66 0.58 0.52 UB; bl t 0 0 0.0 0.0 0.0 UB; bl+aqm t 36.98 73.58 0.61 0.49 0.41 UB; bl+aqm+awh+rab+iqu t 46.41 77.65 0.68 0.58 0.50 UB; all t 60.57 58.74 0.59 0.60 0.60 Table 4: Results for the antecedent task. Data Set Cl. Recall Precision f(0.5) f(1.0) f(2.0) B; bl m 0.00 0.00 0.00 0.00 0.00 B; bl+rap m 5.83 40.91 0.18 0.10 0.07 B; bl+rap+god m 7.95 55.83 0.25 0.14 0.10 B; bl+rap+god+nspk m 11.70 49.15 0.30 0.19 0.14 B; bl+rap+god+nspk+alt+awh+nra+lal m 20.27 50.02 0.38 0.28 0.23 B; all m 23.29 43.79 0.36 0.30 0.25 UB; bl+rap+god+nspk+iqu+nra+bds+rab+awh m 13.01 54.87 0.33 0.21 0.15 B; bl s 0.00 0.00 0.00 0.00 0.00 B; bl+god s 11.80 35.60 0.25 0.17 0.13 B; bl+god+bds s 14.44 46.98 0.32 0.22 0.17 B; all s 17.78 41.96 0.32 0.24 0.20 UB; bl+alt+bds+god+sspk+rap s 11.37 56.34 0.31 0.19 0.13 B; bl t 0.00 0.00 0.00 0.00 0.00 B; bl+god t 17.20 29.09 0.25 0.21 0.19 B; all t 17.87 19.97 0.19 0.19 0.18 UB; bl+god+iqu+rab t 14.24 41.63 0.29 0.21 0.16 B; bl+rab+buh s.s 8.63 54.20 0.26 0.15 0.10 Table 5: Results for the combined task. 252 information about whether α is a question or not already boost the performance considerably. An analysis of the predictions of this model then indeed shows that it already captures cases of question and answer pairs quite well. Adding the similarity feature god then gives the model information about semantic relatedness, which, as hypothesised, captures elaboration-type relations (as in (1-b) and (1-c) above). Structural information (iqu) further improves the model; however, the remaining features only seem to add interfering information, for performance using the full feature-set is worse. If one of the problems of the fragment-task was that information about the context is required to distinguish fragments and backchannels, then the hope could be that in the combined-task the classifier would able to capture these cases. However, the performance of all classifiers on this task is not satisfactory, as Table 5 shows; in fact, it is even slightly worse than the performance on the fragment task alone. We speculate that instead of of cancelling out mistakes in the other part of the task, the two goals (let β be a fragment, and α a typical antecedent) interfere during optimisation of the rules. 
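The f(0.5), f(1.0) and f(2.0) columns in these tables follow van Rijsbergen's Fβ combination of precision and recall (β < 1 favours precision, β > 1 favours recall); this reading can be checked against the table rows themselves. A minimal implementation:

def f_beta(precision, recall, beta):
    """F_beta = (1 + beta**2) * P * R / (beta**2 * P + R)."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# One row of Table 3 (recall 40.61, precision 44.10) as a check:
p, r = 0.4410, 0.4061
scores = {b: round(f_beta(p, r, b), 2) for b in (0.5, 1.0, 2.0)}
# -> {0.5: 0.43, 1.0: 0.42, 2.0: 0.41}, matching that row's 0.43 / 0.42 / 0.41 entries.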
To summarise, we have shown that the task of identifying the antecedent of a given fragment is learnable, using a feature-set that combines structural and lexical features; in particular, the inclusion of a measure of semantic relatedness, which was computed via queries to an internet search engine, proved helpful. The task of identifying (resolutionvia-identity) fragments, however, is hindered by the high number of non-sentential utterances which can be confused with the kinds of fragments we are interested in. Here it could be helpful to have a method that identifies and filters out backchannels, presumably using a much more local mechanism (as for example proposed in (Traum, 1994)). Similarly, the performance on the combined task is low, also due to a high number of confusions of backchannels and fragments. We discuss an alternative set-up below. 5 Related Work To our knowledge, the tasks presented here have so far not been studied with a machine learning approach. The closest to our problem is (Fern´andez et al., 2004b), which discusses classifying certain types of fragments, namely questions of the type “Who?”, “When?”, etc. (sluices). However, that paper does not address the task of identifying those in a corpus (which in any case should be easier than our fragment-task, since those fragments cannot be confused with backchannels). Overlapping from another direction is the work presented in (Zechner and Lavie, 2001), where the task of aligning questions and answers is tackled. This subsumes the task of identifying questionantecedents for short-answers, but again is presumably somewhat simpler than our general task, because questions are easier to identify. The authors also evaluate the use of the alignment of questions and answers in a summarisation system, and report an increase in summary fluency, without a compromise in informativeness. This is something we hope to be able to show for our tasks as well. There are also similarities, especially of the antecedent task, to the pronoun resolution task (see e.g. (Strube and M¨uller, 2003; Poesio et al., 2004)). Interestingly, our results for the antecedent task are close to those reported for that task. The problem of identifying the units in need of an antecedent, however, is harder for us, due to the problem of there being a large number of non-sentential utterances that cannot be linked to a single utterance as antecedent. In general, this seems to be the main difference between our task and the ones mentioned here, which concentrate on more easily identified markables (questions, sluices, and pronouns). 6 Conclusions and Further Work We have presented a machine learning approach to the task of identifying fragments and their antecedents in multi-party dialogue. This represents a well-defined subtask of computing discourse structure, which to our knowledge has not been studied so far. We have shown that the task of identifying the antecedent of a given fragment is learnable, using features that provide information about the structure of the discourse between antecedent and fragment, and about semantic closeness. The other tasks, identifying fragments and the combined tasks, however, did not perform as well, mainly because of a high rate of confusions between general non-sentential utterances and frag253 ments (in our sense). 
In future work, we will try a modified approach, where the detection of fragments is integrated with a classification of utterances as backchannels, fragments, or full sentences, and where the antecedent task only ranks pairs, leaving open the possibility of excluding a supposed fragment by using contextual information. Lastly, we are planning to integrate our classifier into a processing pipeline after the pronoun resolution step, to see whether this would improve both our performance and the quality of automatic meeting summarisations.9 References Adam L. Berger, Stephen Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. Jean Carletta. 1996. Assessing agreement on classification tasks: the kappa statistic. Computational Linguistics, 22(2):249–254. William Cohen and Yoram Singer. 1999. A simple, fast, and effective rule learner. In Proceedings of the Sixteenth National Conference on Artificial Intelligence (AAAI-99), Orlando, Florida, July. AAAI. Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2003. TiMBL: Tilburg memory based learner, version 5.0, reference guide. ILC Technical Report 03-10, Induction of Linguistic Knowledge; Tilburg University. Available from http://ilk.uvt.nl/downloads/pub/... papers/ilk0310.pdf. Raquel Fern´andez and Jonathan Ginzburg. 2002. Nonsentential utterances in dialogue: A corpus-based study. In Kristiina Jokinen and Susan McRoy, editors, Proceedings of the Third SIGdial Workshop on Discourse and Dialogue, pages 15–26, Philadelphia, USA, July. ACL Special Interest Group on Dialog. Raquel Fern´andez, Jonathan Ginzburg, Howard Gregory, and Shalom Lappin. 2004a. Shards: Fragment resolution in dialogue. In H. Bunt and R. Muskens, editors, Computing Meaning, volume 3. Kluwer. Raquel Fern´andez, Jonathan Ginzburg, and Shalom Lappin. 2004b. Classifying ellipsis in dialogue: A machine learning approach. In Proceedings of COLING 2004, Geneva, Switzerland, August. John S. Garofolo, Christophe D. Laprun, Martial Michel, Vincent M. Stanford, and Elham Tabassi. 2004. The NITS 9Acknowledgements: We would like to acknowledge helpful discussions with Jason Baldridge and Michael Strube during the early stages of the project, and helpful comments from the anonymous reviewers. meeting room pilot corpus. In Proceedings of the International Language Resources Conference (LREC04), Lisbon, Portugal, May. J.J. Godfrey, E. C. Holliman, and J. McDaniel. 1992. SWITCHBOARD: Telephone speech corpus for research and devlopment. In Proceedings of the IEEE Conference on Acoustics, Speech, and Signal Processing, pages 517–520, San Francisco, USA, March. Ron Kohavi and George H. John. 1997. Wrappers for feature selection. Artificial Intelligence Journal, 97(1–2):273–324. Christoph M¨uller and Michael Strube. 2001. MMAX: A Tool for the Annotation of Multi-modal Corpora. In Proceedings of the 2nd IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems, pages 45–50, Seattle, USA, August. Massimo Poesio, Rahul Mehta, Axel Maroudas, and Janet Hitzeman. 2004. Learning to resolve bridging references. In Proceedings of the 42nd annual meeting of the Association for Computational Linguistics, pages 144–151, Barcelona, Spain, July. David Schlangen and Alex Lascarides. 2002. Resolving fragments using discourse information. 
In Johan Bos, Mary Ellen Foster, and Colin Matheson, editors, Proceedings of the 6th International Workshop on Formal Semantics and Pragmatics of Dialogue (EDILOG 2002), pages 161–168, Edinburgh, September. David Schlangen and Alex Lascarides. 2003. The interpretation of non-sentential utterances in dialogue. In Alexander Rudnicky, editor, Proceedings of the 4th SIGdial workshop on Discourse and Dialogue, Sapporo, Japan, July. David Schlangen. 2003. A Coherence-Based Approach to the Interpretation of Non-Sentential Utterances in Dialogue. Ph.D. thesis, School of Informatics, University of Edinburgh, Edinburgh, UK. Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of the International Conference on New Methods in Language Processing, Manchester, UK. Michael Strube and Christoph Müller. 2003. A machine learning approach to pronoun resolution in spoken dialogue. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Sapporo, Japan. D. Traum and P. Heeman. 1997. Utterance units in spoken dialogue. In E. Maier, M. Mast, and S. LuperFoy, editors, Dialogue Processing in Spoken Language Systems, Lecture Notes in Artificial Intelligence. Springer-Verlag. David R. Traum. 1994. A Computational Theory of Grounding in Natural Language Conversation. Ph.D. thesis, Computer Science, University of Rochester, Rochester, USA, December. Klaus Zechner and Anton Lavie. 2001. Increasing the coherence of spoken dialogue summaries by cross-speaker information linking. In Proceedings of the NAACL Workshop on Automatic Summarisation, Pittsburgh, USA, June. 254
Proceedings of the 43rd Annual Meeting of the ACL, pages 255–262, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Scaling Phrase-Based Statistical Machine Translation to Larger Corpora and Longer Phrases Chris Callison-Burch Colin Bannard University of Edinburgh 2 Buccleuch Place Edinburgh EH8 9LW {chris,colin}@linearb.co.uk Josh Schroeder Linear B Ltd. 39 B Cumberland Street Edinburgh EH3 6RA [email protected] Abstract In this paper we describe a novel data structure for phrase-based statistical machine translation which allows for the retrieval of arbitrarily long phrases while simultaneously using less memory than is required by current decoder implementations. We detail the computational complexity and average retrieval times for looking up phrase translations in our suffix array-based data structure. We show how sampling can be used to reduce the retrieval time by orders of magnitude with no loss in translation quality. 1 Introduction Statistical machine translation (SMT) has an advantage over many other statistical natural language processing applications in that training data is regularly produced by other human activity. For some language pairs very large sets of training data are now available. The publications of the European Union and United Nations provide gigbytes of data between various language pairs which can be easily mined using a web crawler. The Linguistics Data Consortium provides an excellent set of off the shelf Arabic-English and Chinese-English parallel corpora for the annual NIST machine translation evaluation exercises. The size of the NIST training data presents a problem for phrase-based statistical machine translation. Decoders such as Pharaoh (Koehn, 2004) primarily use lookup tables for the storage of phrases and their translations. Since retrieving longer segments of human translated text generally leads to better translation quality, participants in the evaluation exercise try to maximize the length of phrases that are stored in lookup tables. The combination of large corpora and long phrases means that the table size can quickly become unwieldy. A number of groups in the 2004 evaluation exercise indicated problems dealing with the data. Coping strategies included limiting the length of phrases to something small, not using the entire training data set, computing phrases probabilities on disk, and filtering the phrase table down to a manageable size after the testing set was distributed. We present a data structure that is easily capable of handling the largest data sets currently available, and show that it can be scaled to much larger data sets. In this paper we: • Motivate the problem with storing enumerated phrases in a table by examining the memory requirements of the method for the NIST data set • Detail the advantages of using long phrases in SMT, and examine their potential coverage • Describe a suffix array-based data structure which allows for the retrieval of translations of arbitrarily long phrases, and show that it requires far less memory than a table • Calculate the computational complexity and average time for retrieving phrases and show how this can be sped up by orders of magnitude with no loss in translation accuracy 2 Related Work Koehn et al. 
(2003) compare a number of different approaches to phrase-based statistical machine 255 length num uniq (mil) average # translations avg trans length 1 .88 8.322 1.37 2 16.5 1.733 2.35 3 42.6 1.182 3.44 4 58.7 1.065 4.58 5 65.0 1.035 5.75 6 66.4 1.022 6.91 7 65.8 1.015 8.07 8 64.3 1.012 9.23 9 62.2 1.010 10.4 10 59.9 1.010 11.6 Table 1: Statistics about Arabic phrases in the NIST2004 large data track. translation including the joint probability phrasebased model (Marcu and Wong, 2002) and a variant on the alignment template approach (Och and Ney, 2004), and contrast them to the performance of the word-based IBM Model 4 (Brown et al., 1993). Most relevant for the work presented in this paper, they compare the effect on translation quality of using various lengths of phrases, and the size of the resulting phrase probability tables. Tillmann (2003) further examines the relationship between maximum phrase length, size of the translation table, and accuracy of translation when inducing block-based phrases from word-level alignments. Venugopal et al. (2003) and Vogel et al. (2003) present methods for achieving better translation quality by growing incrementally larger phrases by combining smaller phrases with overlapping segments. 3 Scaling to Long Phrases Table 1 gives statistics about the Arabic-English parallel corpus used in the NIST large data track. The corpus contains 3.75 million sentence pairs, and has 127 million words in English, and 106 million words in Arabic. The table shows the number of unique Arabic phrases, and gives the average number of translations into English and their average length. Table 2 gives estimates of the size of the lookup tables needed to store phrases of various lengths, based on the statistics in Table 1. The number of unique entries is calculated as the number unique length entries (mil) words (mil) memory (gigs) including alignments 1 7.3 10 .1 .11 2 36 111 .68 .82 3 86 412 2.18 2.64 4 149 933 4.59 5.59 5 216 1,645 7.74 9.46 6 284 2,513 11.48 14.07 7 351 3,513 15.70 19.30 8 416 4,628 20.34 25.05 9 479 5,841 25.33 31.26 10 539 7,140 30.62 37.85 Table 2: Estimated size of lookup tables for the NIST-2004 Arabic-English data length coverage length coverage 1 93.5% 6 4.70% 2 73.3% 7 2.95% 3 37.1% 8 2.14% 4 15.5% 9 1.99% 5 8.05% 10 1.49% Table 3: Lengths of phrases from the training data that occur in the NIST-2004 test set phrases times the average number of translations. The number of words in the table is calculated as the number of unique phrases times the phrase length plus the number of entries times the average translation length. The memory is calculated assuming that each word is represented with a 4 byte integer, that each entry stores its probability as an 8 byte double and that each word alignment is stored as a 2 byte short. Note that the size of the table will vary depending on the phrase extraction technique. Table 3 gives the percent of the 35,313 word long test set which can be covered using only phrases of the specified length or greater. The table shows the efficacy of using phrases of different lengths. The table shows that while the rate of falloff is rapid, there are still multiple matches of phrases of length 10. The longest matching phrase was one of length 18. There is little generalization in current SMT implementations, and consequently longer phrases generally lead to better translation quality. 256 3.1 Why use phrases? 
Statistical machine translation made considerable advances in translation quality with the introduction of phrase-based translation. By increasing the size of the basic unit of translation, phrase-based machine translation does away with many of the problems associated with the original word-based formulation of statistical machine translation (Brown et al., 1993), in particular: • The Brown et al. (1993) formulation doesn’t have a direct way of translating phrases; instead they specify a fertility parameter which is used to replicate words and translate them individually. • With units as small as words, a lot of reordering has to happen between languages with different word orders. But the distortion parameter is a poor explanation of word order. Phrase-based SMT overcomes the first of these problems by eliminating the fertility parameter and directly handling word-to-phrase and phrase-tophrase mappings. The second problem is alleviated through the use of multi-word units which reduce the dependency on the distortion parameter. Less word re-ordering need occur since local dependencies are frequently captured. For example, common adjective-noun alternations are memorized. However, since this linguistic information is not encoded in the model, unseen adjective noun pairs may still be handled incorrectly. By increasing the length of phrases beyond a few words, we might hope to capture additional non-local linguistic phenomena. For example, by memorizing longer phrases we may correctly learn case information for nouns commonly selected by frequently occurring verbs; we may properly handle discontinuous phrases (such as French negation, some German verb forms, and English verb particle constructions) that are neglected by current phrasebased models; and we may by chance capture some agreement information in coordinated structures. 3.2 Deciding what length of phrase to store Despite the potential gains from memorizing longer phrases, the fact remains that as phrases get longer length coverage length coverage 1 96.3% 6 21.9% 2 94.9% 7 11.2% 3 86.1% 8 6.16% 4 65.6% 9 3.95% 5 40.9% 10 2.90% Table 4: Coverage using only repeated phrases of the specified length there is a decreasing likelihood that they will be repeated. Because of the amount of memory required to store a phrase table, in current implementations a choice is made as to the maximum length of phrase to store. Based on their analysis of the relationship between translation quality and phrase length, Koehn et al. (2003) suggest limiting phrase length to three words or less. This is entirely a practical suggestion for keeping the phrase table to a reasonable size, since they measure minor but incremental improvement in translation quality up to their maximum tested phrase length of seven words.1 Table 4 gives statistics about phrases which occur more than once in the English section of the Europarl corpus (Koehn, 2002) which was used in the Koehn et al. (2003) experiments. It shows that the percentage of words in the corpus that can be covered by repeated phrases falls off rapidly at length 6, but that even phrases up to length 10 are able to cover a non-trivial portion of the corpus. This draws into question the desirability of limiting phrase retrieval to length three. The decision concerning what length of phrases to store in the phrase table seems to boil down to a practical consideration: one must weigh the likelihood of retrieval against the memory needed to store longer phrases. 
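To make the memory side of this trade-off concrete, the sketch below (an illustration built on the storage assumptions stated in Section 3 above, namely 4-byte word identifiers and 8-byte probabilities, with word alignments omitted; not the authors' code) reproduces the cumulative size estimates of Table 2 from the per-length statistics of Table 1.

```python
# Per-length statistics from Table 1: length -> (unique phrases, average
# translations per phrase, average translation length); counts in millions.
TABLE1 = {
    1: (0.88, 8.322, 1.37),  2: (16.5, 1.733, 2.35),
    3: (42.6, 1.182, 3.44),  4: (58.7, 1.065, 4.58),
    5: (65.0, 1.035, 5.75),  6: (66.4, 1.022, 6.91),
    7: (65.8, 1.015, 8.07),  8: (64.3, 1.012, 9.23),
    9: (62.2, 1.010, 10.4), 10: (59.9, 1.010, 11.6),
}

def table_size(max_len, int_bytes=4, double_bytes=8):
    """Cumulative lookup-table size (entries, words, gigabytes), excluding
    word alignments, for phrases of up to max_len words."""
    entries = words = 0.0
    for length in range(1, max_len + 1):
        uniq, avg_trans, avg_trans_len = TABLE1[length]
        e = uniq * avg_trans                      # entries contributed
        entries += e
        words += uniq * length + e * avg_trans_len
    gigs = (words * int_bytes + entries * double_bytes) * 1e6 / 2**30
    return entries, words, gigs

for n in (3, 8):
    e, w, g = table_size(n)
    print(f"max len {n}: {e:.0f}M entries, {w:.0f}M words, {g:.2f} GB")
# -> roughly 86M entries, 412M words, 2.2 GB for length 3, and
#    416M entries, 4628M words, 20.3 GB for length 8 (cf. Table 2).
```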
We present a data structure where this is not a consideration. Our suffix arraybased data structure allows the retrieval of arbitrarily long phrases, while simultaneously requiring far less memory than the standard table-based representation. 1While the improvements to translation quality reported in Koehn et al. (2003) are minor, their evaluation metric may not have been especially sensitive to adding longer phrases. They used the Bleu evaluation metric (Papineni et al., 2002), but capped the n-gram precision at 4-grams. 257 0 1 2 3 4 5 6 7 8 9 spain declined to confirm that spain declined to aid morocco declined to confirm that spain declined to aid morocco to confirm that spain declined to aid morocco confirm that spain declined to aid morocco that spain declined to aid morocco spain declined to aid morocco declined to aid morocco to aid morocco aid morocco morocco spain declined to confirm that spain declined aid to morocco 0 1 2 3 4 5 6 8 7 9 s[0] s[1] s[2] s[3] s[4] s[5] s[6] s[7] s[8] s[9] Initialized, unsorted Suffix Array Suffixes denoted by s[i] Corpus Index of words: Figure 1: An initialized, unsorted suffix array for a very small corpus 4 Suffix Arrays The suffix array data structure (Manber and Myers, 1990) was introduced as a space-economical way of creating an index for string searches. The suffix array data structure makes it convenient to compute the frequency and location of any substring or ngram in a large corpus. Abstractly, a suffix array is an alphabetically-sorted list of all suffixes in a corpus, where a suffix is a substring running from each position in the text to the end. However, rather than actually storing all suffixes, a suffix array can be constructed by creating a list of references to each of the suffixes in a corpus. Figure 1 shows how a suffix array is initialized for a corpus with one sentence. Each index of a word in the corpus has a corresponding place in the suffix array, which is identical in length to the corpus. Figure 2 shows the final state of the suffix array, which is as a list of the indices of words in the corpus that corresponds to an alphabetically sorted list of the suffixes. The advantages of this representation are that it is compact and easily searchable. The total size of the suffix array is a constant amount of memory. Typically it is stored as an array of integers where the array is the same length as the corpus. Because it is organized alphabetically, any phrase can be quickly located within it using a binary search algorithm. Yamamoto and Church (2001) show how to use suffix arrays to calculate a number of statistics that are interesting in natural language processing applications. They demonstrate how to calculate term fre8 3 6 1 9 5 0 4 7 2 to aid morocco to confirm that spain declined to aid morocco morocco spain declined to aid morocco declined to confirm that spain declined to aid morocco declined to aid morocco confirm that spain declined to aid morocco aid morocco that spain declined to aid morocco spain declined to confirm that spain declined to aid morocco Sorted Suffix Array Suffixes denoted by s[i] s[0] s[1] s[2] s[3] s[4] s[5] s[6] s[7] s[8] s[9] Figure 2: A sorted suffix array and its corresponding suffixes quency / inverse document frequency (tf / idf) for all n-grams in very large corpora, as well as how to use these frequencies to calculate n-grams with high mutual information and residual inverse document frequency. 
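The toy corpus of Figures 1 and 2 makes the construction easy to reproduce. The sketch below is ours, not the paper's implementation: it builds the suffix array by sorting start positions and then locates a phrase with two binary searches, yielding exactly the order 8 3 6 1 9 5 0 4 7 2 shown in Figure 2.

```python
from bisect import bisect_left, bisect_right

corpus = "spain declined to confirm that spain declined to aid morocco".split()

# The suffix array is the list of start positions, sorted by the suffix
# beginning at each position.
suffix_array = sorted(range(len(corpus)), key=lambda i: corpus[i:])
print(suffix_array)  # [8, 3, 6, 1, 9, 5, 0, 4, 7, 2], as in Figure 2

def occurrences(phrase):
    """Corpus positions of `phrase`, found via two binary searches.
    (For clarity the truncated suffixes are materialised here; a real
    implementation would compare against the corpus in place.)"""
    words = phrase.split()
    keys = [corpus[i:i + len(words)] for i in suffix_array]
    lo, hi = bisect_left(keys, words), bisect_right(keys, words)
    return [suffix_array[k] for k in range(lo, hi)]

print(occurrences("declined to"))   # [6, 1]: two occurrences
print(len(occurrences("spain")))    # 2
```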
Here we show how to apply suffix arrays to parallel corpora to calculate phrase translation probabilities. 4.1 Applied to parallel corpora In order to adapt suffix arrays to be useful for statistical machine translation we need a data structure with the following elements: • A suffix array created from the source language portion of the corpus, and another created from the target language portion of the corpus, • An index that tells us the correspondence between sentence numbers and positions in the source and target language corpora, • An alignment a for each sentence pair in the parallel corpus, where a is defined as a subset of the Cartesian product of the word positions in a sentence e of length I and a sentence f of length J: a ⊆{(i, j) : i = 1...I; j = 1...J} • A method for extracting the translationally equivalent phrase for a subphrase given an aligned sentence pair containing that subphrase. The total memory usage of the data structure is thus the size of the source and target corpora, plus the size of the suffix arrays (identical in length to the 258 corpora), plus the size of the two indexes that correlate sentence positions with word positions, plus the size of the alignments. Assuming we use ints to represent words and indices, and shorts to represent word alignments, we get the following memory usage: 2 ∗num words in source corpus ∗sizeof(int)+ 2 ∗num words in target corpus ∗sizeof(int)+ 2 ∗number sentence pairs ∗sizeof(int)+ number of word alignments ∗sizeof(short) The total amount of memory required to store the NIST Arabic-English data using this data structure is 2 ∗105,994,774 ∗sizeof(int)+ 2 ∗127,450,473 ∗sizeof(int)+ 2 ∗3,758,904 ∗sizeof(int)+ 92,975,229 ∗sizeof(short) Or just over 2 Gigabytes. 4.2 Calculating phrase translation probabilities In order to produce a set of phrase translation probabilities, we need to examine the ways in which they are calculated. We consider two common ways of calculating the translation probability: using the maximum likelihood estimator (MLE) and smoothing the MLE using lexical weighting. The maximum likelihood estimator for the probability of a phrase is defined as p( ¯f|¯e) = count( ¯f, ¯e) P ¯f count( ¯f, ¯e) (1) Where count( ¯f, ¯e) gives the total number of times the phrase ¯f was aligned with the phrase ¯e in the parallel corpus. We define phrase alignments as follows. A substring ¯e consisting of the words at positions l...m is aligned with the phrase ¯f by way of the subalignment s = a ∩{(i, j) : i = l...m, j = 1...J} The aligned phrase ¯f is the subphrase in f which spans from min(j) to max(j) for j|(i, j) ∈s. The procedure for generating the counts that are used to calculate the MLE probability using our suffix array-based data structures is: 1. Locate all the suffixes in the English suffix array which begin with the phrase ¯e. Since the suffix array is sorted alphabetically we can easily find the first occurrence s[k] and the last occurrence s[l]. The length of the span in the suffix array l−k+1 indicates the number of occurrences of ¯e in the corpus. Thus the denominator P ¯f count( ¯f, ¯e) can be calculated as l −k + 1. 2. For each of the matching phrases s[i] in the span s[k]...s[l], look up the value of s[i] which is the word index w of the suffix in the English corpus. Look up the sentence number that includes w, and retrieve the corresponding sentences e and f, and their alignment a. 3. Use a to extract the target phrase ¯f that aligns with the phrase ¯e that we are searching for. Increment the count for < ¯f, ¯e >. 4. 
Calculate the probability for each unique matching phrase ¯f using the formula in Equation 1. A common alternative formulation of the phrase translation probability is to lexically weight it as follows: plw( ¯f|¯e, s) = n Y i=1 1 |{i|(i, j) ∈s}| X ∀(i,j)∈s p(fj|ei) (2) Where n is the length of ¯e. In order to use lexical weighting we would need to repeat steps 1-4 above for each word ei in ¯e. This would give us the values for p(fj|ei). We would further need to retain the subphrase alignment s in order to know the correspondence between the words (i, j) ∈s in the aligned phrases, and the total number of foreign words that each ei is aligned with (|{i|(i, j) ∈s}|). Since a phrase alignment < ¯f, ¯e > may have multiple possible word-level alignments, we retain a set of alignments S and take the maximum: 259 p( ¯f|¯e, S) = p( ¯f|¯e) ∗arg max s∈S plw( ¯f|¯e, s) (3) Thus our suffix array-based data structure can be used straightforwardly to look up all aligned translations for a given phrase and calculate the probabilities on-the-fly. In the next section we turn to the computational complexity of constructing phrase translation probabilities in this way. 5 Computational Complexity Computational complexity is relevant because there is a speed-memory tradeoff when adopting our data structure. What we gained in memory efficiency may be rendered useless if the time it takes to calculate phrase translation probabilities is unreasonably long. The computational complexity of looking up items in a hash table, as is done in current tablebased data structures, is extremely fast. Looking up a single phrase can be done in unit time, O(1). The computational complexity of our method has the following components: • The complexity of finding all occurrences of the phrase in the suffix array • The complexity of retrieving the associated aligned sentence pairs given the positions of the phrase in the corpus • The complexity of extracting all aligned phrases using our phrase extraction algorithm • The complexity of calculating the probabilities given the aligned phrases The methods we use to execute each of these, and their complexities are as follow: • Since the array is sorted, finding all occurrences of the English phrase is extremely fast. We can do two binary searches: one to find the first occurrence of the phrase and a second to find the last. The computational complexity is therefore bounded by O(2 log(n)) where n is the length of the corpus. • We use a similar method to look up the sentences ei and fi and word-level alignment ai phrase freq O time (ms) respect for the dead 3 80 24 since the end of the cold war 19 240 136 the parliament 1291 4391 1117 of the 290921 682550 218369 Table 5: Examples of O and calculation times for phrases of different frequencies that are associated with the position wi in the corpus of each phrase occurrence ¯ei. The complexity is O(k ∗2 log(m)) where k is the number of occurrences of ¯e and m is the number of sentence pairs in the parallel corpus. • The complexity of extracting the aligned phrase for a single occurrence of ¯ei is O(2 log(|ai|) to get the subphrase alignment si, since we store the alignments in a sorted array. The complexity of then getting ¯fi from si is O(length(¯fi)). • The complexity of summing over all aligned phrases and simultaneously calculating their probabilities is O(k). Thus we have a total complexity of: O(2 log(n) + k ∗2 log(m) (4) + ¯e1... 
¯ek X ai, ¯fi| ¯ei (2 log(|ai|) + length(¯fi)) + k) (5) for the MLE estimation of the translation probabilities for a single phrase. The complexity is dominated by the k terms in the equation, when the number of occurrences of the phrase in the corpus is high. Phrases with high frequency may cause excessively long retrieval time. This problem is exacerbated when we shift to a lexically weighted calculation of the phrase translation probability. The complexity will be multiplied across each of the component words in the phrase, and the component words themselves will be more frequent than the phrase. Table 5 shows example times for calculating the translation probabilities for a number of phrases. For frequent phrases like of the these times get unacceptably long. While our data structure is perfect for 260 overcoming the problems associated with storing the translations of long, infrequently occurring phrases, it in a way introduces the converse problem. It has a clear disadvantage in the amount of time it takes to retrieve commonly occurring phrases. In the next section we examine the use of sampling to speed up the calculation of translation probabilities for very frequent phrases. 6 Sampling Rather than compute the phrase translation probabilities by examining the hundreds of thousands of occurrences of common phrases, we instead sample from a small subset of the occurrences. It is unlikely that we need to extract the translations of all occurrences of a high frequency phrase in order to get a good approximation of their probabilities. We instead cap the number of occurrences that we consider, and thus give a maximum bound on k in Equation 5. In order to determine the effect of different levels of sampling, we compare the translation quality against cumulative retrieval time for calculating the phrase translation probabilities for all subphrases in an evaluation set. We translated a held out set of 430 German sentences with 50 words or less into English. The test sentences were drawn from the 01/17/00 proceedings of the Europarl corpus. The remainder of the corpus (1 million sentences) was used as training data to calculate the phrase translation probabilities. We calculated the translation quality using Bleu’s modified n-gram precision metric (Papineni et al., 2002) for n-grams of up to length four. The framework that we used to calculate the translation probabilities was similar to that detailed in Koehn et al. (2003). That is: ˆe = arg max eI 1 p(eI 1|fI 1) (6) = arg max eI 1 pLM(eI 1) ∗ (7) IY i=1 p(¯fi|¯ei)d(ai −bi−1)plw(¯fi|¯ei, a) (8) Where pLM is a language model probability and d is a distortion probability which penalizes movement. Table 6 gives a comparison of the translation quality under different levels of sampling. While the acsample size time quality unlimited 6279 sec .290 50000 1051 sec .289 10000 336 sec .291 5000 201 sec .289 1000 60 sec .288 500 35 sec .288 100 10 sec .288 Table 6: A comparison of retrieval times and translation quality when the number of translations is capped at various sample sizes curacy fluctuates very slightly it essentially remains uniformly high for all levels of sampling. 
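A minimal sketch of the sampling step follows; it is our illustration, and the function and argument names are ours rather than the system's. Given the suffix-array span matching a source phrase, at most a fixed number of occurrences are examined when collecting translation counts.

```python
import random
from collections import Counter

def sampled_translation_probs(positions, extract_aligned_phrase, max_samples=100):
    """Estimate p(f|e) for one source phrase from at most `max_samples` of its
    occurrences.  `positions` is the list of corpus positions found via the
    suffix array; `extract_aligned_phrase` maps a position to the aligned
    target phrase, or None if no consistent phrase can be extracted there."""
    if len(positions) > max_samples:
        positions = random.sample(positions, max_samples)
    counts = Counter()
    for pos in positions:
        target = extract_aligned_phrase(pos)
        if target is not None:
            counts[target] += 1
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()} if total else {}
```

With max_samples set to 100, Table 6 suggests essentially no loss in translation quality relative to examining all occurrences.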
There are a number of possible reasons for the fact that the quality does not decrease: • The probability estimates under sampling are sufficiently good that the most probable translations remain unchanged, • The interaction with the language model probability rules out the few misestimated probabilities, or • The decoder tends to select longer or less frequent phrases which are not affected by the sampling. While the translation quality remains essentially unchanged, the cumulative time that it takes to calculate the translation probabilities for all subphrases in the 430 sentence test set decreases radically. The total time drops by orders of magnitude from an hour and a half without sampling down to a mere 10 seconds with a cavalier amount of sampling. This suggests that the data structure is suitable for deployed SMT systems and that no additional caching need be done to compensate for the structure’s computational complexity. 7 Discussion The paper has presented a super-efficient data structure for phrase-based statistical machine translation. We have shown that current table-based methods are unwieldily when used in conjunction with large data sets and long phrases. We have contrasted this with our suffix array-based data structure which provides 261 a very compact way of storing large data sets while simultaneously allowing the retrieval of arbitrarily long phrases. For the NIST-2004 Arabic-English data set, which is among the largest currently assembled for statistical machine translation, our representation uses a very manageable 2 gigabytes of memory. This is less than is needed to store a table containing phrases with a maximum of three words, and is ten times less than the memory required to store a table with phrases of length eight. We have further demonstrated that while computational complexity can make the retrieval of translation of frequent phrases slow, the use of sampling is an extremely effective countermeasure to this. We demonstrated that calculating phrase translation probabilities from sets of 100 occurrences or less results in nearly no decrease in translation quality. The implications of the data structure presented in this paper are significant. The compact representation will allow us to easily scale to parallel corpora consisting of billions of words of text, and the retrieval of arbitrarily long phrases will allow experiments with alternative decoding strategies. These facts in combination allow for an even greater exploitation of training data in statistical machine translation. References Peter Brown, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer. 1993. The mathematics of machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311, June. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT/NAACL. Philipp Koehn. 2002. Europarl: A multilingual corpus for evaluation of machine translation. Unpublished Draft. Philipp Koehn. 2004. Pharaoh: A beam search decoder for phrase-based statistical machine translation models. In Proceedings of AMTA. Udi Manber and Gene Myers. 1990. Suffix arrays: A new method for on-line string searches. In The First Annual ACM-SIAM Symposium on Dicrete Algorithms, pages 319–327. Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proceedings of EMNLP. Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. 
Computational Linguistics, 30(4):417–450, December. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of ACL. Christoph Tillmann. 2003. A projection extension algorithm for statistical machine translation. In Proceedings of EMNLP. Ashish Venugopal, Stephan Vogel, and Alex Waibel. 2003. Effective phrase translation extraction from alignment models. In Proceedings of ACL. Stephan Vogel, Ying Zhang, Fei Huang, Alicia Tribble, Ashish Venugopal, Bing Zhao, and Alex Waibel. 2003. The CMU statistical machine translation system. In Proceedings of MT Summit 9. Mikio Yamamoto and Kenneth Church. 2001. Using suffix arrays to compute term frequency and document frequency for all substrings in a corpus. Computational Linguistics, 27(1):1–30. 262
Proceedings of the 43rd Annual Meeting of the ACL, pages 263–270, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics A Hierarchical Phrase-Based Model for Statistical Machine Translation David Chiang Institute for Advanced Computer Studies (UMIACS) University of Maryland, College Park, MD 20742, USA [email protected] Abstract We present a statistical phrase-based translation model that uses hierarchical phrases— phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntaxbased translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrasebased model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system. 1 Introduction The alignment template translation model (Och and Ney, 2004) and related phrase-based models advanced the previous state of the art by moving from words to phrases as the basic unit of translation. Phrases, which can be any substring and not necessarily phrases in any syntactic theory, allow these models to learn local reorderings, translation of short idioms, or insertions and deletions that are sensitive to local context. They are thus a simple and powerful mechanism for machine translation. The basic phrase-based model is an instance of the noisy-channel approach (Brown et al., 1993),1 in which the translation of a French sentence f into an 1Throughout this paper, we follow the convention of Brown et al. of designating the source and target languages as “French” and “English,” respectively. The variables f and e stand for source and target sentences; f j i stands for the substring of f from position i to position j inclusive, and similarly for e j i . English sentence e is modeled as: arg max e P(e | f) = arg max e P(e, f) (1) = arg max e (P(e) × P( f | e)) (2) The translation model P(f | e) “encodes” e into f by the following steps: 1. segment e into phrases ¯e1 · · · ¯eI, typically with a uniform distribution over segmentations; 2. reorder the ¯ei according to some distortion model; 3. translate each of the ¯ei into French phrases according to a model P( ¯f | ¯e) estimated from the training data. Other phrase-based models model the joint distribution P(e, f) (Marcu and Wong, 2002) or made P(e) and P(f | e) into features of a log-linear model (Och and Ney, 2002). But the basic architecture of phrase segmentation (or generation), phrase reordering, and phrase translation remains the same. Phrase-based models can robustly perform translations that are localized to substrings that are common enough to have been observed in training. But Koehn et al. (2003) find that phrases longer than three words improve performance little, suggesting that data sparseness takes over for longer phrases. Above the phrase level, these models typically have a simple distortion model that reorders phrases independently of their content (Och and Ney, 2004; Koehn et al., 2003), or not at all (Zens and Ney, 2004; Kumar et al., 2005). But it is often desirable to capture translations whose scope is larger than a few consecutive words. 263 Consider the following Mandarin example and its English translation: (3) ³2 Aozhou Australia / shi is yu with Bei North é Han Korea you have ¦¤ bangjiao dipl. rels. 
„ de that p shaoshu few ý¶ guojia countries K zhiyi one of ‘Australia is one of the few countries that have diplomatic relations with North Korea’ If we count zhiyi, lit. ‘of-one,’ as a single token, then translating this sentence correctly into English requires reversing a sequence of five elements. When we run a phrase-based system, Pharaoh (Koehn et al., 2003; Koehn, 2004a), on this sentence (using the experimental setup described below), we get the following phrases with translations: (4) [Aozhou] [shi] [yu] [Bei Han] [you] [bangjiao]1 [de shaoshu guojia zhiyi] [Australia] [is] [dipl. rels.]1 [with] [North Korea] [is] [one of the few countries] where we have used subscripts to indicate the reordering of phrases. The phrase-based model is able to order “diplomatic. . .Korea” correctly (using phrase reordering) and “one. . .countries” correctly (using a phrase translation), but does not accomplish the necessary inversion of those two groups. A lexicalized phrase-reordering model like that in use in ISI’s system (Och et al., 2004) might be able to learn a better reordering, but simpler distortion models will probably not. We propose a solution to these problems that does not interfere with the strengths of the phrasebased approach, but rather capitalizes on them: since phrases are good for learning reorderings of words, we can use them to learn reorderings of phrases as well. In order to do this we need hierarchical phrases that consist of both words and subphrases. For example, a hierarchical phrase pair that might help with the above example is: (5) ⟨yu 1 you 2 , have 2 with 1 ⟩ where 1 and 2 are placeholders for subphrases. This would capture the fact that Chinese PPs almost always modify VP on the left, whereas English PPs usually modify VP on the right. Because it generalizes over possible prepositional objects and direct objects, it acts both as a discontinuous phrase pair and as a phrase-reordering rule. Thus it is considerably more powerful than a conventional phrase pair. Similarly, (6) ⟨1 de 2 , the 2 that 1 ⟩ would capture the fact that Chinese relative clauses modify NPs on the left, whereas English relative clauses modify on the right; and (7) ⟨1 zhiyi, one of 1 ⟩ would render the construction zhiyi in English word order. These three rules, along with some conventional phrase pairs, suffice to translate the sentence correctly: (8) [Aozhou] [shi] [[[yu [Bei Han]1 you [bangjiao]2] de [shaoshu guojia]3] zhiyi] [Australia] [is] [one of [the [few countries]3 that [have [dipl. rels.]2 with [North Korea]1]]] The system we describe below uses rules like this, and in fact is able to learn them automatically from a bitext without syntactic annotation. It translates the above example almost exactly as we have shown, the only error being that it omits the word ‘that’ from (6) and therefore (8). These hierarchical phrase pairs are formally productions of a synchronous context-free grammar (defined below). A move to synchronous CFG can be seen as a move towards syntax-based MT; however, we make a distinction here between formally syntax-based and linguistically syntax-based MT. A system like that of Yamada and Knight (2001) is both formally and linguistically syntax-based: formally because it uses synchronous CFG, linguistically because the structures it is defined over are (on the English side) informed by syntactic theory (via the Penn Treebank). 
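To make the mechanics of these rules concrete, the following toy sketch (ours, not the system described in this paper) treats each rule's English side as a template with numbered gaps and realises derivation (8) from rules (5)–(7) plus ordinary phrase pairs.

```python
# A derivation node is either a plain string (the target side of a
# conventional phrase pair) or (target_template, children), where children
# are listed in gap-number order.
def realize(node):
    """Flatten a derivation node into an English string."""
    if isinstance(node, str):
        return node
    template, children = node
    out = template
    for i, child in enumerate(children, start=1):
        out = out.replace(f"[{i}]", realize(child))
    return out

# Rule (7): <X1 zhiyi, one of X1>;  rule (6): <X1 de X2, the X2 that X1>;
# rule (5): <yu X1 you X2, have X2 with X1>;  plus lexical phrase pairs.
derivation = ("one of [1]",
              [("the [2] that [1]",
                [("have [2] with [1]",
                  ["North Korea", "diplomatic relations"]),
                 "few countries"])])

print("Australia is " + realize(derivation))
# -> Australia is one of the few countries that have
#    diplomatic relations with North Korea
```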
Our system is formally syntaxbased in that it uses synchronous CFG, but not necessarily linguistically syntax-based, because it induces a grammar from a parallel text without relying on any linguistic annotations or assumptions; the result sometimes resembles a syntactician’s grammar but often does not. In this respect it resembles Wu’s 264 bilingual bracketer (Wu, 1997), but ours uses a different extraction method that allows more than one lexical item in a rule, in keeping with the phrasebased philosophy. Our extraction method is basically the same as that of Block (2000), except we allow more than one nonterminal symbol in a rule, and use a more sophisticated probability model. In this paper we describe the design and implementation of our hierarchical phrase-based model, and report on experiments that demonstrate that hierarchical phrases indeed improve translation. 2 The model Our model is based on a weighted synchronous CFG (Aho and Ullman, 1969). In a synchronous CFG the elementary structures are rewrite rules with aligned pairs of right-hand sides: (9) X →⟨γ, α, ∼⟩ where X is a nonterminal, γ and α are both strings of terminals and nonterminals, and ∼is a one-to-one correspondence between nonterminal occurrences in γ and nonterminal occurrences in α. Rewriting begins with a pair of linked start symbols. At each step, two coindexed nonterminals are rewritten using the two components of a single rule, such that none of the newly introduced symbols is linked to any symbols already present. Thus the hierarchical phrase pairs from our above example could be formalized in a synchronous CFG as: X →⟨yu X 1 you X 2 , have X 2 with X 1 ⟩ (10) X →⟨X 1 de X 2 , the X 2 that X 1 ⟩ (11) X →⟨X 1 zhiyi, one of X 1 ⟩ (12) where we have used boxed indices to indicate which occurrences of X are linked by ∼. Note that we have used only a single nonterminal symbol X instead of assigning syntactic categories to phrases. In the grammar we extract from a bitext (described below), all of our rules use only X, except for two special “glue” rules, which combine a sequence of Xs to form an S: S →⟨S 1 X 2 , S 1 X 2 ⟩ (13) S →⟨X 1 , X 1 ⟩ (14) These give the model the option to build only partial translations using hierarchical phrases, and then combine them serially as in a standard phrase-based model. For a partial example of a synchronous CFG derivation, see Figure 1. Following Och and Ney (2002), we depart from the traditional noisy-channel approach and use a more general log-linear model. The weight of each rule is: (15) w(X →⟨γ, α⟩) = Y i φi(X →⟨γ, α⟩)λi where the φi are features defined on rules. For our experiments we used the following features, analogous to Pharaoh’s default feature set: • P(γ | α) and P(α | γ), the latter of which is not found in the noisy-channel model, but has been previously found to be a helpful feature (Och and Ney, 2002); • the lexical weights Pw(γ | α) and Pw(α | γ) (Koehn et al., 2003), which estimate how well the words in α translate the words in γ;2 • a phrase penalty exp(1), which allows the model to learn a preference for longer or shorter derivations, analogous to Koehn’s phrase penalty (Koehn, 2003). The exceptions to the above are the two glue rules, (13), which has weight one, and (14), which has weight (16) w(S →⟨S 1 X 2 , S 1 X 2 ⟩) = exp(−λg) the idea being that λg controls the model’s preference for hierarchical phrases over serial combination of phrases. Let D be a derivation of the grammar, and let f(D) and e(D) be the French and English strings generated by D. 
Let us represent D as a set of triples ⟨r, i, j⟩, each of which stands for an application of a grammar rule r to rewrite a nonterminal that spans f(D)j i on the French side.3 Then the weight of D 2This feature uses word alignment information, which is discarded in the final grammar. If a rule occurs in training with more than one possible word alignment, Koehn et al. take the maximum lexical weight; we take a weighted average. 3This representation is not completely unambiguous, but is sufficient for defining the model. 265 ⟨S 1 , S 1 ⟩⇒⟨S 2 X 3 , S 2 X 3 ⟩ ⇒⟨S 4 X 5 X 3 , S 4 X 5 X 3 ⟩ ⇒⟨X 6 X 5 X 3 , X 6 X 5 X 3 ⟩ ⇒⟨Aozhou X 5 X 3 , Australia X 5 X 3 ⟩ ⇒⟨Aozhou shi X 3 , Australia is X 3 ⟩ ⇒⟨Aozhou shi X 7 zhiyi, Australia is one of X 7 ⟩ ⇒⟨Aozhou shi X 8 de X 9 zhiyi, Australia is one of the X 9 that X 8 ⟩ ⇒⟨Aozhou shi yu X 1 you X 2 de X 9 zhiyi, Australia is one of the X 9 that have X 2 with X 1 ⟩ Figure 1: Example partial derivation of a synchronous CFG. is the product of the weights of the rules used in the translation, multiplied by the following extra factors: (17) w(D) = Y ⟨r,i,j⟩∈D w(r)× plm(e)λlm ×exp(−λwp|e|) where plm is the language model, and exp(−λwp|e|), the word penalty, gives some control over the length of the English output. We have separated these factors out from the rule weights for notational convenience, but it is conceptually cleaner (and necessary for polynomial-time decoding) to integrate them into the rule weights, so that the whole model is a weighted synchronous CFG. The word penalty is easy; the language model is integrated by intersecting the English-side CFG with the language model, which is a weighted finitestate automaton. 3 Training The training process begins with a word-aligned corpus: a set of triples ⟨f, e, ∼⟩, where f is a French sentence, e is an English sentence, and ∼is a (manyto-many) binary relation between positions of f and positions of e. We obtain the word alignments using the method of Koehn et al. (2003), which is based on that of Och and Ney (2004). This involves running GIZA++ (Och and Ney, 2000) on the corpus in both directions, and applying refinement rules (the variant they designate “final-and”) to obtain a single many-to-many word alignment for each sentence. Then, following Och and others, we use heuristics to hypothesize a distribution of possible derivations of each training example, and then estimate the phrase translation parameters from the hypothesized distribution. To do this, we first identify initial phrase pairs using the same criterion as previous systems (Och and Ney, 2004; Koehn et al., 2003): Definition 1. Given a word-aligned sentence pair ⟨f, e, ∼⟩, a rule ⟨f j i , e j′ i′ ⟩is an initial phrase pair of ⟨f, e, ∼⟩iff: 1. fk ∼ek′ for some k ∈[i, j] and k′ ∈[i′, j′]; 2. fk / ek′ for all k ∈[i, j] and k′ < [i′, j′]; 3. fk / ek′ for all k < [i, j] and k′ ∈[i′, j′]. Next, we form all possible differences of phrase pairs: Definition 2. The set of rules of ⟨f, e, ∼⟩is the smallest set satisfying the following: 1. If ⟨f j i , ej′ i′ ⟩is an initial phrase pair, then X →⟨f j i , ej′ i′ ⟩ is a rule. 2. If r = X →⟨γ, α⟩is a rule and ⟨f j i , ej′ i′ ⟩is an initial phrase pair such that γ = γ1 f j i γ2 and α = α1ej′ i′ α2, then X →⟨γ1X k γ2, α1X k α2⟩ is a rule, where k is an index not used in r. 
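As an illustration of Definitions 1 and 2, here is a simplified sketch; it is ours, not the system's extraction code. It returns only the tightest target span for each source span (as in filter 1 below) and subtracts a single nonterminal, whereas the system allows two. The word-aligned example at the end, including its alignment points, is constructed by us for illustration.

```python
def initial_phrase_pairs(f_len, e_len, links, max_f_len=10):
    """Initial phrase pairs (i, j, i2, j2) per Definition 1, spans inclusive.
    `links` is a set of (f_pos, e_pos) alignment points."""
    pairs = []
    for i in range(f_len):
        for j in range(i, min(f_len, i + max_f_len)):
            inside = [(k, k2) for (k, k2) in links if i <= k <= j]
            if not inside:                        # condition 1
                continue
            i2 = min(k2 for _, k2 in inside)
            j2 = max(k2 for _, k2 in inside)      # condition 2 holds by construction
            # condition 3: no link from outside f[i..j] lands inside e[i2..j2]
            if all(not (i2 <= k2 <= j2) for (k, k2) in links if not i <= k <= j):
                pairs.append((i, j, i2, j2))
    return pairs

def subtract(outer, inner, f, e):
    """One application of Definition 2 with a single nonterminal."""
    (i, j, i2, j2), (a, b, a2, b2) = outer, inner
    if not (i <= a <= b <= j and i2 <= a2 <= b2 <= j2) or (a, b) == (i, j):
        return None
    if b == a:        # filter 3 below: subtracted source span longer than one word
        return None
    return (tuple(f[i:a] + ["X1"] + f[b + 1:j + 1]),
            tuple(e[i2:a2] + ["X1"] + e[b2 + 1:j2 + 1]))

f = "yu Bei Han you bangjiao".split()
e = "have diplomatic relations with North Korea".split()
links = {(0, 3), (1, 4), (2, 5), (3, 0), (4, 1), (4, 2)}
pairs = initial_phrase_pairs(len(f), len(e), links)
assert (0, 4, 0, 5) in pairs and (1, 2, 4, 5) in pairs
print(subtract((0, 4, 0, 5), (1, 2, 4, 5), f, e))
# -> (('yu', 'X1', 'you', 'bangjiao'),
#     ('have', 'diplomatic', 'relations', 'with', 'X1'))
```

Subtracting the pair for bangjiao as a second nonterminal would turn this rule into rule (10) above.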
The above scheme generates a very large number of rules, which is undesirable not only because it makes training and decoding very slow, but also 266 because it creates spurious ambiguity—a situation where the decoder produces many derivations that are distinct yet have the same model feature vectors and give the same translation. This can result in nbest lists with very few different translations or feature vectors, which is problematic for the algorithm we use to tune the feature weights. Therefore we filter our grammar according to the following principles, chosen to balance grammar size and performance on our development set: 1. If there are multiple initial phrase pairs containing the same set of alignment points, we keep only the smallest. 2. Initial phrases are limited to a length of 10 on the French side, and rule to five (nonterminals plus terminals) on the French right-hand side. 3. In the subtraction step, f j i must have length greater than one. The rationale is that little would be gained by creating a new rule that is no shorter than the original. 4. Rules can have at most two nonterminals, which simplifies the decoder implementation. Moreover, we prohibit nonterminals that are adjacent on the French side, a major cause of spurious ambiguity. 5. A rule must have at least one pair of aligned words, making translation decisions always based on some lexical evidence. Now we must hypothesize weights for all the derivations. Och’s method gives equal weight to all the extracted phrase occurences. However, our method may extract many rules from a single initial phrase pair; therefore we distribute weight equally among initial phrase pairs, but distribute that weight equally among the rules extracted from each. Treating this distribution as our observed data, we use relativefrequency estimation to obtain P(γ | α) and P(α | γ). 4 Decoding Our decoder is a CKY parser with beam search together with a postprocessor for mapping French derivations to English derivations. Given a French sentence f, it finds the best derivation (or n best derivations, with little overhead) that generates ⟨f, e⟩ for some e. Note that we find the English yield of the highest-probability single derivation (18) e arg max D s.t. f(D) = f w(D) and not necessarily the highest-probability e, which would require a more expensive summation over derivations. We prune the search space in several ways. First, an item that has a score worse than β times the best score in the same cell is discarded; second, an item that is worse than the bth best item in the same cell is discarded. Each cell contains all the items standing for X spanning f j i . We choose b and β to balance speed and performance on our development set. For our experiments, we set b = 40, β = 10−1 for X cells, and b = 15, β = 10−1 for S cells. We also prune rules that have the same French side (b = 100). The parser only operates on the French-side grammar; the English-side grammar affects parsing only by increasing the effective grammar size, because there may be multiple rules with the same French side but different English sides, and also because intersecting the language model with the English-side grammar introduces many states into the nonterminal alphabet, which are projected over to the French side. Thus, our decoder’s search space is many times larger than a monolingual parser’s would be. 
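The two cell-level criteria used to keep this enlarged search space manageable, a relative threshold β and a histogram limit b, can be sketched as follows. This is an illustrative reconstruction; the chart-item representation is ours.

```python
from collections import namedtuple

Item = namedtuple("Item", "lhs span score")   # hypothetical chart-item record

def prune_cell(items, b=40, beta=1e-1):
    """Keep an item only if it scores at least beta times the best score in its
    cell (threshold pruning) and is among the b highest-scoring items
    (histogram pruning); b=40, beta=1e-1 are the X-cell settings used here."""
    items = sorted(items, key=lambda it: it.score, reverse=True)
    if not items:
        return []
    threshold = items[0].score * beta
    return [it for it in items[:b] if it.score >= threshold]
```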
To reduce this effect, we apply the following heuristic when filling a cell: if an item falls outside the beam, then any item that would be generated using a lowerscoring rule or a lower-scoring antecedent item is also assumed to fall outside the beam. This heuristic greatly increases decoding speed, at the cost of some search errors. Finally, the decoder has a constraint that prohibits any X from spanning a substring longer than 10 on the French side, corresponding to the maximum length constraint on initial rules during training. This makes the decoding algorithm asymptotically linear-time. The decoder is implemented in Python, an interpreted language, with C++ code from the SRI Language Modeling Toolkit (Stolcke, 2002). Using the settings described above, on a 2.4 GHz Pentium IV, it takes about 20 seconds to translate each sentence (average length about 30). This is faster than our 267 Python implementation of a standard phrase-based decoder, so we expect that a future optimized implementation of the hierarchical decoder will run at a speed competitive with other phrase-based systems. 5 Experiments Our experiments were on Mandarin-to-English translation. We compared a baseline system, the state-of-the-art phrase-based system Pharaoh (Koehn et al., 2003; Koehn, 2004a), against our system. For all three systems we trained the translation model on the FBIS corpus (7.2M+9.2M words); for the language model, we used the SRI Language Modeling Toolkit to train a trigram model with modified Kneser-Ney smoothing (Chen and Goodman, 1998) on 155M words of English newswire text, mostly from the Xinhua portion of the Gigaword corpus. We used the 2002 NIST MT evaluation test set as our development set, and the 2003 test set as our test set. Our evaluation metric was BLEU (Papineni et al., 2002), as calculated by the NIST script (version 11a) with its default settings, which is to perform case-insensitive matching of n-grams up to n = 4, and to use the shortest (as opposed to nearest) reference sentence for the brevity penalty. The results of the experiments are summarized in Table 1. 5.1 Baseline The baseline system we used for comparison was Pharaoh (Koehn et al., 2003; Koehn, 2004a), as publicly distributed. We used the default feature set: language model (same as above), p( ¯f | ¯e), p(¯e | ¯f), lexical weighting (both directions), distortion model, word penalty, and phrase penalty. We ran the trainer with its default settings (maximum phrase length 7), and then used Koehn’s implementation of minimumerror-rate training (Och, 2003) to tune the feature weights to maximize the system’s BLEU score on our development set, yielding the values shown in Table 2. Finally, we ran the decoder on the test set, pruning the phrase table with b = 100, pruning the chart with b = 100, β = 10−5, and limiting distortions to 4. These are the default settings, except for the phrase table’s b, which was raised from 20, and the distortion limit. Both of these changes, made by Koehn’s minimum-error-rate trainer by default, improve performance on the development set. Rank Chinese English 1 . 3 „ the 14 ( in 23 „ ’s 577 X 1 „ X 2 the X 2 of X 1 735 X 1 „ X 2 the X 2 X 1 763 X 1 K one of X 1 1201 X 1 ;ß president X 1 1240 X 1 ŽC $ X 1 2091 Êt X 1 X 1 this year 3253 ~K X 1 X 1 percent 10508 ( X 1 under X 1 28426 ( X 1 M before X 1 47015 X 1 „ X 2 the X 2 that X 1 1752457 X 1 X 2 have X 2 with X 1 Figure 2: A selection of extracted rules, with ranks after filtering for the development set. All have X for their left-hand sides. 
5.2 Hierarchical model We ran the training process of Section 3 on the same data, obtaining a grammar of 24M rules. When filtered for the development set, the grammar has 2.2M rules (see Figure 2 for examples). We then ran the minimum-error rate trainer with our decoder to tune the feature weights, yielding the values shown in Table 2. Note that λg penalizes the glue rule much less than λpp does ordinary rules. This suggests that the model will prefer serial combination of phrases, unless some other factor supports the use of hierarchical phrases (e.g., a better language model score). We then tested our system, using the settings described above.4 Our system achieves an absolute improvement of 0.02 over the baseline (7.5% relative), without using any additional training data. This difference is statistically significant (p < 0.01).5 See Table 1, which also shows that the relative gain is higher for higher n-grams. 4Note that we gave Pharaoh wider beam settings than we used on our own decoder; on the other hand, since our decoder’s chart has more cells, its b limits do not need to be as high. 5We used Zhang’s significance tester (Zhang et al., 2004), which uses bootstrap resampling (Koehn, 2004b); it was modified to conform to NIST’s current definition of the BLEU brevity penalty. 268 BLEU-n n-gram precisions System 4 1 2 3 4 5 6 7 8 Pharaoh 0.2676 0.72 0.37 0.19 0.10 0.052 0.027 0.014 0.0075 hierarchical 0.2877 0.74 0.39 0.21 0.11 0.060 0.032 0.017 0.0084 +constituent 0.2881 0.73 0.39 0.21 0.11 0.062 0.032 0.017 0.0088 Table 1: Results on baseline system and hierarchical system, with and without constituent feature. Features System Plm(e) P(γ|α) P(α|γ) Pw(γ|α) Pw(α|γ) Word Phr λd λg λc Pharaoh 0.19 0.095 0.030 0.14 0.029 −0.20 0.22 0.11 — — hierarchical 0.15 0.036 0.074 0.037 0.076 −0.32 0.22 — 0.09 — +constituent 0.11 0.026 0.062 0.025 0.029 −0.23 0.21 — 0.11 0.20 Table 2: Feature weights obtained by minimum-error-rate training (normalized so that absolute values sum to one). Word = word penalty; Phr = phrase penalty. Note that we have inverted the sense of Pharaoh’s phrase penalty so that a positive weight indicates a penalty. 5.3 Adding a constituent feature The use of hierarchical structures opens the possibility of making the model sensitive to syntactic structure. Koehn et al. (2003) mention German ⟨es gibt, there is⟩as an example of a good phrase pair which is not a syntactic phrase pair, and report that favoring syntactic phrases does not improve accuracy. But in our model, the rule (19) X →⟨es gibt X 1 , there is X 1 ⟩ would indeed respect syntactic phrases, because it builds a pair of Ss out of a pair of NPs. Thus, favoring subtrees in our model that are syntactic phrases might provide a fairer way of testing the hypothesis that syntactic phrases are better phrases. This feature adds a factor to (17), (20) c(i, j) = 1 if f j i is a constituent 0 otherwise as determined by a statistical tree-substitutiongrammar parser (Bikel and Chiang, 2000), trained on the Penn Chinese Treebank, version 3 (250k words). Note that the parser was run only on the test data and not the (much larger) training data. Rerunning the minimum-error-rate trainer with the new feature yielded the feature weights shown in Table 2. Although the feature improved accuracy on the development set (from 0.314 to 0.322), it gave no statistically significant improvement on the test set. 
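As an illustration of how the constituent feature in (20) could be computed, the sketch below collects the span of every node of a source parse once per sentence and turns the feature into a set-membership test. The tree encoding and the 0-based, right-exclusive indexing are assumptions made for this sketch, not the format used by the Bikel and Chiang (2000) parser.

```python
def constituent_spans(tree):
    """Collect the (i, j) spans covered by every node of a parse tree.
    A tree is assumed to be either a terminal string or a
    (label, [children]) pair; spans are 0-based and right-exclusive."""
    spans = set()

    def walk(node, i):
        if isinstance(node, str):          # terminal: covers one word
            return i + 1
        _label, children = node
        j = i
        for child in children:
            j = walk(child, j)
        spans.add((i, j))
        return j

    walk(tree, 0)
    return spans

def constituent_feature(spans):
    """Return c(i, j): 1 if the source span is a constituent, else 0."""
    return lambda i, j: 1.0 if (i, j) in spans else 0.0

# Toy example with an invented three-word sentence:
tree = ("S", [("NP", ["w1"]), ("VP", [("V", ["w2"]), ("NP", ["w3"])])])
c = constituent_feature(constituent_spans(tree))
assert c(1, 3) == 1.0   # the VP span is a constituent
assert c(0, 2) == 0.0   # this span crosses the NP/VP boundary
```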
6 Conclusion Hierarchical phrase pairs, which can be learned without any syntactically-annotated training data, improve translation accuracy significantly compared with a state-of-the-art phrase-based system. They also facilitate the incorporation of syntactic information, which, however, did not provide a statistically significant gain. Our primary goal for the future is to move towards a more syntactically-motivated grammar, whether by automatic methods to induce syntactic categories, or by better integration of parsers trained on annotated data. This would potentially improve both accuracy and efficiency. Moreover, reducing the grammar size would allow more ambitious training settings. The maximum initial phrase length is currently 10; preliminary experiments show that increasing this limit to as high as 15 does improve accuracy, but requires more memory. On the other hand, we have successfully trained on almost 30M+30M words by tightening the initial phrase length limit for part of the data. Streamlining the grammar would allow further experimentation in these directions. In any case, future improvements to this system will maintain the design philosophy proven here, that ideas from syntax should be incorporated into statistical translation, but not in exchange for the strengths of the phrase-based approach. 269 Acknowledgements I would like to thank Philipp Koehn for the use of the Pharaoh software; and Adam Lopez, Michael Subotin, Nitin Madnani, Christof Monz, Liang Huang, and Philip Resnik. This work was partially supported by ONR MURI contract FCPO.810548265 and Department of Defense contract RD-02-5700. S. D. G. References A. V. Aho and J. D. Ullman. 1969. Syntax directed translations and the pushdown assembler. Journal of Computer and System Sciences, 3:37–56. Daniel M. Bikel and David Chiang. 2000. Two statistical parsing models applied to the Chinese Treebank. In Proceedings of the Second Chinese Language Processing Workshop, pages 1–6. Hans Ulrich Block. 2000. Example-based incremental synchronous interpretation. In Wolfgang Wahlster, editor, Verbmobil: Foundations of Speech-to-Speech Translation, pages 411–417. Springer-Verlag, Berlin. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19:263–311. Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University Center for Research in Computing Technology. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL 2003, pages 127–133. Philipp Koehn. 2003. Noun Phrase Translation. Ph.D. thesis, University of Southern California. Philipp Koehn. 2004a. Pharaoh: a beam search decoder for phrase-based statistical machine translation models. In Proceedings of the Sixth Conference of the Association for Machine Translation in the Americas, pages 115–124. Philipp Koehn. 2004b. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 388–395. Shankar Kumar, Yonggang Deng, and William Byrne. 2005. A weighted finite state transducer translation template model for statistical machine translation. Natural Language Engineering. To appear. Daniel Marcu and William Wong. 2002. 
A phrasebased, joint probability model for statistical machine translation. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 133–139. Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the ACL, pages 440–447. Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of the 40th Annual Meeting of the ACL, pages 295–302. Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30:417–449. Franz Josef Och, Ignacio Thayer, Daniel Marcu, Kevin Knight, Dragos Stefan Munteanu, Quamrul Tipu, Michel Galley, and Mark Hopkins. 2004. Arabic and Chinese MT at USC/ISI. Presentation given at NIST Machine Translation Evaluation Workshop. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the ACL, pages 160–167. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. B: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the ACL, pages 311–318. Andreas Stolcke. 2002. SRILM – an extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing, volume 2, pages 901–904. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23:377–404. Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proceedings of the 39th Annual Meeting of the ACL, pages 523–530. Richard Zens and Hermann Ney. 2004. Improvements in phrase-based statistical machine translation. In Proceedings of HLT-NAACL 2004, pages 257–264. Ying Zhang, Stephan Vogel, and Alex Waibel. 2004. Interpreting BLEU/NIST scores: How much improvement do we need to have a better system? In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC), pages 2051– 2054. 270 | 2005 | 33 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 271–279, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Dependency Treelet Translation: Syntactically Informed Phrasal SMT Chris Quirk, Arul Menezes Colin Cherry Microsoft Research University of Alberta One Microsoft Way Edmonton, Alberta Redmond, WA 98052 Canada T6G 2E1 {chrisq,arulm}@microsoft.com [email protected] Abstract We describe a novel approach to statistical machine translation that combines syntactic information in the source language with recent advances in phrasal translation. This method requires a source-language dependency parser, target language word segmentation and an unsupervised word alignment component. We align a parallel corpus, project the source dependency parse onto the target sentence, extract dependency treelet translation pairs, and train a tree-based ordering model. We describe an efficient decoder and show that using these treebased models in combination with conventional SMT models provides a promising approach that incorporates the power of phrasal SMT with the linguistic generality available in a parser. 1. Introduction Over the past decade, we have witnessed a revolution in the field of machine translation (MT) toward statistical or corpus-based methods. Yet despite this success, statistical machine translation (SMT) has many hurdles to overcome. While it excels at translating domain-specific terminology and fixed phrases, grammatical generalizations are poorly captured and often mangled during translation (Thurmair, 04). 1.1. Limitations of string-based phrasal SMT State-of-the-art phrasal SMT systems such as (Koehn et al., 03) and (Vogel et al., 03) model translations of phrases (here, strings of adjacent words, not syntactic constituents) rather than individual words. Arbitrary reordering of words is allowed within memorized phrases, but typically only a small amount of phrase reordering is allowed, modeled in terms of offset positions at the string level. This reordering model is very limited in terms of linguistic generalizations. For instance, when translating English to Japanese, an ideal system would automatically learn largescale typological differences: English SVO clauses generally become Japanese SOV clauses, English post-modifying prepositional phrases become Japanese pre-modifying postpositional phrases, etc. A phrasal SMT system may learn the internal reordering of specific common phrases, but it cannot generalize to unseen phrases that share the same linguistic structure. In addition, these systems are limited to phrases contiguous in both source and target, and thus cannot learn the generalization that English not may translate as French ne…pas except in the context of specific intervening words. 1.2. Previous work on syntactic SMT1 The hope in the SMT community has been that the incorporation of syntax would address these issues, but that promise has yet to be realized. One simple means of incorporating syntax into SMT is by re-ranking the n-best list of a baseline SMT system using various syntactic models, but Och et al. (04) found very little positive impact with this approach. However, an n-best list of even 16,000 translations captures only a tiny fraction of the ordering possibilities of a 20 word sentence; re-ranking provides the syntactic model no opportunity to boost or prune large sections of that search space. 
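To put "tiny fraction" in perspective: counting only permutations of the words, and ignoring lexical choice entirely, a 20-word sentence already admits 20! orderings, so a 16,000-item n-best list covers on the order of 10^-14 of that space.

```python
from math import factorial

orderings = factorial(20)        # 2,432,902,008,176,640,000 word orders
print(16_000 / orderings)        # ≈ 6.6e-15
```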
Inversion Transduction Grammars (Wu, 97), or ITGs, treat translation as a process of parallel parsing of the source and target language via a synchronized grammar. To make this process 1 Note that since this paper does not address the word alignment problem directly, we do not discuss the large body of work on incorporating syntactic information into the word alignment process. 271 computationally efficient, however, some severe simplifying assumptions are made, such as using a single non-terminal label. This results in the model simply learning a very high level preference regarding how often nodes should switch order without any contextual information. Also these translation models are intrinsically word-based; phrasal combinations are not modeled directly, and results have not been competitive with the top phrasal SMT systems. Along similar lines, Alshawi et al. (2000) treat translation as a process of simultaneous induction of source and target dependency trees using headtransduction; again, no separate parser is used. Yamada and Knight (01) employ a parser in the target language to train probabilities on a set of operations that convert a target language tree to a source language string. This improves fluency slightly (Charniak et al., 03), but fails to significantly impact overall translation quality. This may be because the parser is applied to MT output, which is notoriously unlike native language, and no additional insight is gained via source language analysis. Lin (04) translates dependency trees using paths. This is the first attempt to incorporate large phrasal SMT-style memorized patterns together with a separate source dependency parser and SMT models. However the phrases are limited to linear paths in the tree, the only SMT model used is a maximum likelihood channel model and there is no ordering model. Reported BLEU scores are far below the leading phrasal SMT systems. MSR-MT (Menezes & Richardson, 01) parses both source and target languages to obtain a logical form (LF), and translates source LFs using memorized aligned LF patterns to produce a target LF. It utilizes a separate sentence realization component (Ringger et al., 04) to turn this into a target sentence. As such, it does not use a target language model during decoding, relying instead on MLE channel probabilities and heuristics such as pattern size. Recently Aue et al. (04) incorporated an LF-based language model (LM) into the system for a small quality boost. A key disadvantage of this approach and related work (Ding & Palmer, 02) is that it requires a parser in both languages, which severely limits the language pairs that can be addressed. 2. Dependency Treelet Translation In this paper we propose a novel dependency treebased approach to phrasal SMT which uses treebased ‘phrases’ and a tree-based ordering model in combination with conventional SMT models to produce state-of-the-art translations. Our system employs a source-language dependency parser, a target language word segmentation component, and an unsupervised word alignment component to learn treelet translations from a parallel sentence-aligned corpus. We begin by parsing the source text to obtain dependency trees and word-segmenting the target side, then applying an off-the-shelf word alignment component to the bitext. The word alignments are used to project the source dependency parses onto the target sentences. 
From this aligned parallel dependency corpus we extract a treelet translation model incorporating source and target treelet pairs, where a treelet is defined to be an arbitrary connected subgraph of the dependency tree. A unique feature is that we allow treelets with a wildcard root, effectively allowing mappings for siblings in the dependency tree. This allows us to model important phenomena, such as not … ne…pas. We also train a variety of statistical models on this aligned dependency tree corpus, including a channel model and an order model. To translate an input sentence, we parse the sentence, producing a dependency tree for that sentence. We then employ a decoder to find a combination and ordering of treelet translation pairs that cover the source tree and are optimal according to a set of models that are combined in a log-linear framework as in (Och, 03). This approach offers the following advantages over string-based SMT systems: Instead of limiting learned phrases to contiguous word sequences, we allow translation by all possible phrases that form connected subgraphs (treelets) in the source and target dependency trees. This is a powerful extension: the vast majority of surface-contiguous phrases are also treelets of the tree; in addition, we gain discontiguous phrases, including combinations such as verb-object, article-noun, adjective-noun etc. regardless of the number of intervening words. 272 Another major advantage is the ability to employ more powerful models for reordering source language constituents. These models can incorporate information from the source analysis. For example, we may model directly the probability that the translation of an object of a preposition in English should precede the corresponding postposition in Japanese, or the probability that a pre-modifying adjective in English translates into a post-modifier in French. 2.1. Parsing and alignment We require a source language dependency parser that produces unlabeled, ordered dependency trees and annotates each source word with a partof-speech (POS). An example dependency tree is shown in Figure 1. The arrows indicate the head annotation, and the POS for each candidate is listed underneath. For the target language we only require word segmentation. To obtain word alignments we currently use GIZA++ (Och & Ney, 03). We follow the common practice of deriving many-to-many alignments by running the IBM models in both directions and combining the results heuristically. Our heuristics differ in that they constrain manyto-one alignments to be contiguous in the source dependency tree. A detailed description of these heuristics can be found in Quirk et al. (04). 2.2. Projecting dependency trees Given a word aligned sentence pair and a source dependency tree, we use the alignment to project the source structure onto the target sentence. Oneto-one alignments project directly to create a target tree isomorphic to the source. Many-to-one alignments project similarly; since the ‘many’ source nodes are connected in the tree, they act as if condensed into a single node. In the case of one-to-many alignments we project the source node to the rightmost2 of the ‘many’ target words, and make the rest of the target words dependent on it. 2 If the target language is Japanese, leftmost may be more appropriate. Unaligned target words3 are attached into the dependency structure as follows: assume there is an unaligned word tj in position j. 
Let i < j and k > j be the target positions closest to j such that ti depends on tk or vice versa: attach tj to the lower of ti or tk. If all the nodes to the left (or right) of position j are unaligned, attach tj to the left-most (or right-most) word that is aligned. The target dependency tree created in this process may not read off in the same order as the target string, since our alignments do not enforce phrasal cohesion. For instance, consider the projection of the parse in Figure 1 using the word alignment in Figure 2a. Our algorithm produces the dependency tree in Figure 2b. If we read off the leaves in a left-to-right in-order traversal, we do not get the original input string: de démarrage appears in the wrong place. A second reattachment pass corrects this situation. For each node in the wrong order, we reattach it to the lowest of its ancestors such that it is in the correct place relative to its siblings and parent. In Figure 2c, reattaching démarrage to et suffices to produce the correct order. 3 Source unaligned nodes do not present a problem, with the exception that if the root is unaligned, the projection process produces a forest of target trees anchored by a dummy root. startup properties and options Noun Noun Conj Noun Figure 1. An example dependency tree. startup properties and options propriétés et options de démarrage (a) Word alignment. startup properties and options propriétés de démarrage et options (b) Dependencies after initial projection. startup properties and options propriétés et options de démarrage (c) Dependencies after reattachment step. Figure 2. Projection of dependencies. 273 2.3. Extracting treelet translation pairs From the aligned pairs of dependency trees we extract all pairs of aligned source and target treelets along with word-level alignment linkages, up to a configurable maximum size. We also keep treelet counts for maximum likelihood estimation. 2.4. Order model Phrasal SMT systems often use a model to score the ordering of a set of phrases. One approach is to penalize any deviation from monotone decoding; another is to estimate the probability that a source phrase in position i translates to a target phrase in position j (Koehn et al., 03). We attempt to improve on these approaches by incorporating syntactic information. Our model assigns a probability to the order of a target tree given a source tree. Under the assumption that constituents generally move as a whole, we predict the probability of each given ordering of modifiers independently. That is, we make the following simplifying assumption (where c is a function returning the set of nodes modifying t): ∏ ∈ = T t T S t c order T S T order ) , | )) ( ( P( ) , |) ( P( Furthermore, we assume that the position of each child can be modeled independently in terms of a head-relative position: ) , |) , ( P( ) , | )) ( ( P( ) ( T S t m pos T S t c order t c m∏ ∈ = Figure 3a demonstrates an aligned dependency tree pair annotated with head-relative positions; Figure 3b presents the same information in an alternate tree-like representation. We currently use a small set of features reflecting very local information in the dependency tree to model P(pos(m,t) | S, T): • The lexical items of the head and modifier. • The lexical items of the source nodes aligned to the head and modifier. • The part-of-speech ("cat") of the source nodes aligned to the head and modifier. • The head-relative position of the source node aligned to the source modifier. 
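As a concrete sketch of the order model's inputs, the code below computes head-relative positions and assembles the per-modifier feature vector listed above. The dictionary-based interfaces (`src_of`, `cat_of`, `src_positions`) and the feature names are illustrative assumptions, not the original implementation.

```python
def head_relative_positions(head_index, modifier_indices):
    """Head-relative positions: pre-modifiers are numbered -1, -2, ...
    outward from the head, post-modifiers +1, +2, ..."""
    pre = sorted(m for m in modifier_indices if m < head_index)
    post = sorted(m for m in modifier_indices if m > head_index)
    positions = {}
    for k, m in enumerate(reversed(pre), start=1):
        positions[m] = -k
    for k, m in enumerate(post, start=1):
        positions[m] = +k
    return positions

def order_features(t_head, t_mod, src_of, cat_of, src_positions):
    """Feature vector used to predict pos(m, t) for one target modifier."""
    return {
        "lex(m)":           t_mod,
        "lex(h)":           t_head,
        "lex(src(m))":      src_of[t_mod],
        "lex(src(h))":      src_of[t_head],
        "cat(src(m))":      cat_of[src_of[t_mod]],
        "cat(src(h))":      cat_of[src_of[t_head]],
        "position(src(m))": src_positions[src_of[t_mod]],
    }
```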
4 As an example, consider the children of propriété in Figure 3. The head-relative positions 4 One can also include features of siblings to produce a Markov ordering model. However, we found that this had little impact in practice. of its modifiers la and Cancel are -1 and +1, respectively. Thus we try to predict as follows: P(pos(m1) = -1 | lex(m1)="la", lex(h)="propriété", lex(src(m1))="the", lex(src(h)="property", cat(src(m1))=Determiner, cat(src(h))=Noun, position(src(m1))=-2) · P(pos(m2) = +1 | lex(m2)="Cancel", lex(h)="propriété", lex(src(m2))="Cancel", lex(src(h))="property", cat(src(m2))=Noun, cat(src(h))=Noun, position(src(m2))=-1) The training corpus acts as a supervised training set: we extract a training feature vector from each of the target language nodes in the aligned dependency tree pairs. Together these feature vectors are used to train a decision tree (Chickering, 02). The distribution at each leaf of the DT can be used to assign a probability to each possible target language position. A more detailed description is available in (Quirk et al., 04). 2.5. Other models Channel Models: We incorporate two distinct channel models, a maximum likelihood estimate (MLE) model and a model computed using Model-1 word-to-word alignment probabilities as in (Vogel et al., 03). The MLE model effectively captures non-literal phrasal translations such as idioms, but suffers from data sparsity. The wordthe-2 Cancel-1 property-1 uses these-1 settings+1 la-1 propriété-1 Cancel+1 utilise ces-1 paramètres+1 (a) Head annotation representation uses property-1 settings+1 the-2 Cancel-1 these-1 la-1 Cancel+1 ces-1 propriété-1 paramètres+1 utilise (b) Branching structure representation. Figure 3. Aligned dependency tree pair, annotated with head-relative positions 274 to-word model does not typically suffer from data sparsity, but prefers more literal translations. Given a set of treelet translation pairs that cover a given input dependency tree and produce a target dependency tree, we model the probability of source given target as the product of the individual treelet translation probabilities: we assume a uniform probability distribution over the decompositions of a tree into treelets. Target Model: Given an ordered target language dependency tree, it is trivial to read off the surface string. We evaluate this string using a trigram model with modified Kneser-Ney smoothing. Miscellaneous Feature Functions: The log-linear framework allows us to incorporate other feature functions as ‘models’ in the translation process. For instance, using fewer, larger treelet translation pairs often provides better translations, since they capture more context and allow fewer possibilities for search and model error. Therefore we add a feature function that counts the number of phrases used. We also add a feature that counts the number of target words; this acts as an insertion/deletion bonus/penalty. 3. Decoding The challenge of tree-based decoding is that the traditional left-to-right decoding approach of string-based systems is inapplicable. Additional challenges are posed by the need to handle treelets—perhaps discontiguous or overlapping— and a combinatorially explosive ordering space. Our decoding approach is influenced by ITG (Wu, 97) with several important extensions. First, we employ treelet translation pairs instead of single word translations. 
Second, instead of modeling rearrangements as either preserving source order or swapping source order, we allow the dependents of a node to be ordered in any arbitrary manner and use the order model described in section 2.4 to estimate probabilities. Finally, we use a log-linear framework for model combination that allows any amount of other information to be modeled. We will initially approach the decoding problem as a bottom up, exhaustive search. We define the set of all possible treelet translation pairs of the subtree rooted at each input node in the following manner: A treelet translation pair x is said to match the input dependency tree S iff there is some connected subgraph S’ that is identical to the source side of x. We say that x covers all the nodes in S’ and is rooted at source node s, where s is the root of matched subgraph S’. We first find all treelet translation pairs that match the input dependency tree. Each matched pair is placed on a list associated with the input node where the match is rooted. Moving bottomup through the input dependency tree, we compute a list of candidate translations for the input subtree rooted at each node s, as follows: Consider in turn each treelet translation pair x rooted at s. The treelet pair x may cover only a portion of the input subtree rooted at s. Find all descendents s' of s that are not covered by x, but whose parent s'' is covered by x. At each such node s'' look at all interleavings of the children of s'' specified by x, if any, with each translation t' from the candidate translation list5 of each child s'. Each such interleaving is scored using the models previously described and added to the candidate translation list for that input node. The resultant translation is the best scoring candidate for the root input node. As an example, see the example dependency tree in Figure 4a and treelet translation pair in 4b. This treelet translation pair covers all the nodes in 4a except the subtrees rooted at software and is. 5 Computed by the previous application of this procedure to s' during the bottom-up traversal. installed software is on the computer your (a) Example input dependency tree. installed on computer your votre ordinateur sur installés (b) Example treelet translation pair. Figure 4. Example decoder structures. 275 We first compute (and cache) the candidate translation lists for the subtrees rooted at software and is, then construct full translation candidates by attaching those subtree translations to installés in all possible ways. The order of sur relative to installés is fixed; it remains to place the translated subtrees for the software and is. Note that if c is the count of children specified in the mapping and r is the count of subtrees translated via recursive calls, then there are (c+r+1)!/(c+1)! orderings. Thus (1+2+1)!/(1+1)! = 12 candidate translations are produced for each combination of translations of the software and is. 3.1. Optimality-preserving optimizations Dynamic Programming Converting this exhaustive search to dynamic programming relies on the observation that scoring a translation candidate at a node depends on the following information from its descendents: the order model requires features from the root of a translated subtree, and the target language model is affected by the first and last two words in each subtree. 
Therefore, we need to keep the best scoring translation candidate for a given subtree for each combination of (head, leading bigram, trailing bigram), which is, in the worst case, O(V5), where V is the vocabulary size. The dynamic programming approach therefore does not allow for great savings in practice because a trigram target language model forces consideration of context external to each subtree. Duplicate elimination To eliminate unnecessary ordering operations, we first check that a given set of words has not been previously ordered by the decoder. We use an order-independent hash table where two trees are considered equal if they have the same tree structure and lexical choices after sorting each child list into a canonical order. A simpler alternate approach would be to compare bags-ofwords. However since our possible orderings are bound by the induced tree structure, we might overzealously prune a candidate with a different tree structure that allows a better target order. 3.2. Lossy optimizations The following optimizations do not preserve optimality, but work well in practice. N-best lists Instead of keeping the full list of translation candidates for a given input node, we keep a topscoring subset of the candidates. While the decoder is no longer guaranteed to find the optimal translation, in practice the quality impact is minimal with a list size ≥ 10 (see Table 5.6). Variable-sized n-best lists: A further speedup can be obtained by noting that the number of translations using a given treelet pair is exponential in the number of subtrees of the input not covered by that pair. To limit this explosion we vary the size of the n-best list on any recursive call in inverse proportion to the number of subtrees uncovered by the current treelet. This has the intuitive appeal of allowing a more thorough exploration of large treelet translation pairs (that are likely to result in better translations) than of smaller, less promising pairs. Pruning treelet translation pairs Channel model scores and treelet size are powerful predictors of translation quality. Heuristically pruning low scoring treelet translation pairs before the search starts allows the decoder to focus on combinations and orderings of high quality treelet pairs. • Only keep those treelet translation pairs with an MLE probability above a threshold t. • Given a set of treelet translation pairs with identical sources, keep those with an MLE probability within a ratio r of the best pair. • At each input node, keep only the top k treelet translation pairs rooted at that node, as ranked first by size, then by MLE channel model score, then by Model 1 score. The impact of this optimization is explored in Table 5.6. Greedy ordering The complexity of the ordering step at each node grows with the factorial of the number of children to be ordered. This can be tamed by noting that given a fixed pre- and post-modifier count, our order model is capable of evaluating a single ordering decision independently from other ordering decisions. One version of the decoder takes advantage of this to severely limit the number of ordering possibilities considered. Instead of considering all interleavings, it considers each potential modifier position in turn, greedily picking the most 276 probable child for that slot, moving on to the next slot, picking the most probable among the remaining children for that slot and so on. The complexity of greedy ordering is linear, but at the cost of a noticeable drop in BLEU score (see Table 5.4). 
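A minimal sketch of the greedy ordering strategy just described, as a sanity check of the idea rather than the actual decoder (the slot list, the `position_prob` callback, and the flat child representation are all assumptions):

```python
def greedy_order(head, children, slots, position_prob):
    """Fill each head-relative slot in turn with the most probable
    as-yet-unplaced child.  `slots` is the fixed list of positions to
    fill (e.g. [-2, -1, +1]) and must have one entry per child;
    `position_prob(child, pos)` is assumed to return the order model's
    estimate of P(pos(child) = pos | S, T)."""
    remaining = list(children)
    placed = {}
    for pos in slots:
        best = max(remaining, key=lambda c: position_prob(c, pos))
        placed[best] = pos
        remaining.remove(best)
    pre = [c for c in sorted(placed, key=placed.get) if placed[c] < 0]
    post = [c for c in sorted(placed, key=placed.get) if placed[c] > 0]
    return pre + [head] + post
```

Each slot costs one pass over the remaining children, so the number of order-model evaluations in this sketch grows quadratically in the number of children rather than factorially, as with exhaustive interleaving.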
Under default settings our system tries to decode a sentence with exhaustive ordering until a specified timeout, at which point it falls back to greedy ordering. 4. Experiments We evaluated the translation quality of the system using the BLEU metric (Papineni et al., 02) under a variety of configurations. We compared against two radically different types of systems to demonstrate the competitiveness of this approach: • Pharaoh: A leading phrasal SMT decoder (Koehn et al., 03). • The MSR-MT system described in Section 1, an EBMT/hybrid MT system. 4.1. Data We used a parallel English-French corpus containing 1.5 million sentences of Microsoft technical data (e.g., support articles, product documentation). We selected a cleaner subset of this data by eliminating sentences with XML or HTML tags as well as very long (>160 characters) and very short (<40 characters) sentences. We held out 2,000 sentences for development testing and parameter tuning, 10,000 sentences for testing, and 250 sentences for lambda training. We ran experiments on subsets of the training data ranging from 1,000 to 300,000 sentences. Table 4.1 presents details about this dataset. 4.2. Training We parsed the source (English) side of the corpus using NLPWIN, a broad-coverage rule-based parser developed at Microsoft Research able to produce syntactic analyses at varying levels of depth (Heidorn, 02). For the purposes of these experiments we used a dependency tree output with part-of-speech tags and unstemmed surface words. For word alignment, we used GIZA++, following a standard training regimen of five iterations of Model 1, five iterations of the HMM Model, and five iterations of Model 4, in both directions. We then projected the dependency trees and used the aligned dependency tree pairs to extract treelet translation pairs and train the order model as described above. The target language model was trained using only the French side of the corpus; additional data may improve its performance. Finally we trained lambdas via Maximum BLEU (Och, 03) on 250 held-out sentences with a single reference translation, and tuned the decoder optimization parameters (n-best list size, timeouts etc) on the development test set. Pharaoh The same GIZA++ alignments as above were used in the Pharaoh decoder. We used the heuristic combination described in (Och & Ney, 03) and extracted phrasal translation pairs from this combined alignment as described in (Koehn et al., 03). Except for the order model (Pharaoh uses its own ordering approach), the same models were used: MLE channel model, Model 1 channel model, target language model, phrase count, and word count. Lambdas were trained in the same manner (Och, 03). MSR-MT MSR-MT used its own word alignment approach as described in (Menezes & Richardson, 01) on the same training data. MSR-MT does not use lambdas or a target language model. 5. Results We present BLEU scores on an unseen 10,000 sentence test set using a single reference translation for each sentence. Speed numbers are the end-to-end translation speed in sentences per minute. All results are based on a training set size of 100,000 sentences and a phrase size of 4, except Table 5.2 which varies the phrase size and Table 5.3 which varies the training set size. English French Training Sentences 570,562 Words 7,327,251 8,415,882 Vocabulary 72,440 80,758 Singletons 38,037 39,496 Test Sentences 10,000 Words 133,402 153,701 Table 4.1 Data characteristics 277 Results for our system and the comparison systems are presented in Table 5.1. 
Pharaoh monotone refers to Pharaoh with phrase reordering disabled. The difference between Pharaoh and the Treelet system is significant at the 99% confidence level under a two-tailed paired t-test. BLEU Score Sents/min Pharaoh monotone 37.06 4286 Pharaoh 38.83 162 MSR-MT 35.26 453 Treelet 40.66 10.1 Table 5.1 System comparisons Table 5.2 compares Pharaoh and the Treelet system at different phrase sizes. While all the differences are statistically significant at the 99% confidence level, the wide gap at smaller phrase sizes is particularly striking. We infer that whereas Pharaoh depends heavily on long phrases to encapsulate reordering, our dependency treebased ordering model enables credible performance even with single-word ‘phrases’. We conjecture that in a language pair with large-scale ordering differences, such as English-Japanese, even long phrases are unlikely to capture the necessary reorderings, whereas our tree-based ordering model may prove more robust. Max. size Treelet BLEU Pharaoh BLEU 1 37.50 23.18 2 39.84 32.07 3 40.36 37.09 4 (default) 40.66 38.83 5 40.71 39.41 6 40.74 39.72 Table 5.2 Effect of maximum treelet/phrase size Table 5.3 compares the same systems at different training corpus sizes. All of the differences are statistically significant at the 99% confidence level. Noting that the gap widens at smaller corpus sizes, we suggest that our tree-based approach is more suitable than string-based phrasal SMT when translating from English into languages or domains with limited parallel data. We also ran experiments varying different system parameters. Table 5.4 explores different ordering strategies, Table 5.5 looks at the impact of discontiguous phrases and Table 5.6 looks at the impact of decoder optimizations such as treelet pruning and n-best list size. Ordering strategy BLEU Sents/min No order model (monotone) 35.35 39.7 Greedy ordering 38.85 13.1 Exhaustive (default) 40.66 10.1 Table 5.4 Effect of ordering strategies BLEU Score Sents/min Contiguous only 40.08 11.0 Allow discontiguous 40.66 10.1 Table 5.5 Effect of allowing treelets that correspond to discontiguous phrases BLEU Score Sents/min Pruning treelets Keep top 1 28.58 144.9 … top 3 39.10 21.2 … top 5 40.29 14.6 … top 10 (default) 40.66 10.1 … top 20 40.70 3.5 Keep all 40.29 3.2 N-best list size 1-best 37.28 175.4 5-best 39.96 79.4 10-best 40.42 23.3 20-best (default) 40.66 10.1 50-best 39.39 3.7 Table 5.6 Effect of optimizations 6. Discussion We presented a novel approach to syntacticallyinformed statistical machine translation that leverages a parsed dependency tree representation of the source language via a tree-based ordering model and treelet phrase extraction. We showed that it significantly outperforms a leading phrasal SMT system over a wide range of training set sizes and phrase sizes. Constituents vs. dependencies: Most attempts at 1k 3k 10k 30k 100k 300k Pharaoh 17.20 22.51 27.70 33.73 38.83 42.75 Treelet 18.70 25.39 30.96 35.81 40.66 44.32 Table 5.3 Effect of training set size on treelet translation and comparison system 278 syntactic SMT have relied on a constituency analysis rather than dependency analysis. While this is a natural starting point due to its wellunderstood nature and commonly available tools, we feel that this is not the most effective representation for syntax in MT. 
Dependency analysis, in contrast to constituency analysis, tends to bring semantically related elements together (e.g., verbs become adjacent to all their arguments) and is better suited to lexicalized models, such as the ones presented in this paper. 7. Future work The most important contribution of our system is a linguistically motivated ordering approach based on the source dependency tree, yet this paper only explores one possible model. Different model structures, machine learning techniques, and target feature representations all have the potential for significant improvements. Currently we only consider the top parse of an input sentence. One means of considering alternate possibilities is to build a packed forest of dependency trees and use this in decoding translations of each input sentence. As noted above, our approach shows particular promise for language pairs such as EnglishJapanese that exhibit large-scale reordering and have proven difficult for string-based approaches. Further experimentation with such language pairs is necessary to confirm this. Our experience has been that the quality of GIZA++ alignments for such language pairs is inadequate. Following up on ideas introduced by (Cherry & Lin, 03) we plan to explore ways to leverage the dependency tree to improve alignment quality. References Alshawi, Hiyan, Srinivas Bangalore, and Shona Douglas. Learning dependency translation models as collections of finite-state head transducers. Computational Linguistics, 26(1):45–60, 2000. Aue, Anthony, Arul Menezes, Robert C. Moore, Chris Quirk, and Eric Ringger. Statistical machine translation using labeled semantic dependency graphs. TMI 2004. Charniak, Eugene, Kevin Knight, and Kenji Yamada. Syntax-based language models for statistical machine translation. MT Summit 2003. Cherry, Colin and Dekang Lin. A probability model to improve word alignment. ACL 2003. Chickering, David Maxwell. The WinMine Toolkit. Microsoft Research Technical Report: MSR-TR2002-103. Ding, Yuan and Martha Palmer. Automatic learning of parallel dependency treelet pairs. IJCNLP 2004. Heidorn, George. (2000). “Intelligent writing assistance”. In Dale et al. Handbook of Natural Language Processing, Marcel Dekker. Koehn, Philipp, Franz Josef Och, and Daniel Marcu. Statistical phrase based translation. NAACL 2003. Lin, Dekang. A path-based transfer model for machine translation. COLING 2004. Menezes, Arul and Stephen D. Richardson. A bestfirst alignment algorithm for automatic extraction of transfer mappings from bilingual corpora. DDMT Workshop, ACL 2001. Och, Franz Josef and Hermann Ney. A systematic comparison of various statistical alignment models, Computational Linguistics, 29(1):19-51, 2003. Och, Franz Josef. Minimum error rate training in statistical machine translation. ACL 2003. Och, Franz Josef, et al. A smorgasbord of features for statistical machine translation. HLT/NAACL 2004. Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. ACL 2002. Quirk, Chris, Arul Menezes, and Colin Cherry. Dependency Tree Translation. Microsoft Research Technical Report: MSR-TR-2004-113. Ringger, Eric, et al. Linguistically informed statistical models of constituent structure for ordering in sentence realization. COLING 2004. Thurmair, Gregor. Comparing rule-based and statistical MT output. Workshop on the amazing utility of parallel and comparable corpora, LREC, 2004. 
Vogel, Stephan, Ying Zhang, Fei Huang, Alicia Tribble, Ashish Venugopal, Bing Zhao, and Alex Waibel. The CMU statistical machine translation system. MT Summit 2003. Wu, Dekai. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403, 1997. Yamada, Kenji and Kevin Knight. A syntax-based statistical translation model. ACL, 2001. 279 | 2005 | 34 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 280–289, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics QARLA:A Framework for the Evaluation of Text Summarization Systems Enrique Amig´o, Julio Gonzalo, Anselmo Pe˜nas, Felisa Verdejo Departamento de Lenguajes y Sistemas Inform´aticos Universidad Nacional de Educaci´on a Distancia c/Juan del Rosal, 16 - 28040 Madrid - Spain {enrique,julio,anselmo,felisa}@lsi.uned.es Abstract This paper presents a probabilistic framework, QARLA, for the evaluation of text summarisation systems. The input of the framework is a set of manual (reference) summaries, a set of baseline (automatic) summaries and a set of similarity metrics between summaries. It provides i) a measure to evaluate the quality of any set of similarity metrics, ii) a measure to evaluate the quality of a summary using an optimal set of similarity metrics, and iii) a measure to evaluate whether the set of baseline summaries is reliable or may produce biased results. Compared to previous approaches, our framework is able to combine different metrics and evaluate the quality of a set of metrics without any a-priori weighting of their relative importance. We provide quantitative evidence about the effectiveness of the approach to improve the automatic evaluation of text summarisation systems by combining several similarity metrics. 1 Introduction The quality of an automatic summary can be established mainly with two approaches: Human assessments: The output of a number of summarisation systems is compared by human judges, using some set of evaluation guidelines. Proximity to a gold standard: The best automatic summary is the one that is closest to some reference summary made by humans. Using human assessments has some clear advantages: the results of the evaluation are interpretable, and we can trace what a system is doing well, and what is doing poorly. But it also has a couple of serious drawbacks: i) different human assessors reach different conclusions, and ii) the outcome of a comparative evaluation exercise is not directly reusable for new techniques, i.e., a summarisation strategy developed after the comparative exercise cannot be evaluated without additional human assessments made from scratch. Proximity to a gold standard, on the other hand, is a criterion that can be automated (see Section 6), with the advantages of i) being objective, and ii) once gold standard summaries are built for a comparative evaluation of systems, the resulting testbed can iteratively be used to refine text summarisation techniques and re-evaluate them automatically. This second approach, however, requires solving a number of non-trivial issues. For instance, (i) How can we know whether an evaluation metric is good enough for automatic evaluation?, (ii) different users produce different summaries, all of them equally good as gold standards, (iii) if we have several metrics which test different features of a summary, how can we combine them into an optimal test?, (iv) how do we know if our test bed 280 Figure 1: Illustration of some of the restrictions on Q, K is reliable, or the evaluation outcome may change by adding, for instance, additional gold standards? In this paper, we introduce a probabilistic framework, QARLA, that addresses such issues. 
Given a set of manual summaries and another set of baseline summaries per task, together with a set of similarity metrics, QARLA provides quantitative measures to (i) select and combine the best (independent) metrics (KING measure), (ii) apply the best set of metrics to evaluate automatic summaries (QUEEN measure), and (iii) test whether evaluating with that test-bed is reliable (JACK measure). 2 Formal constraints on any evaluation framework based on similarity metrics We are looking for a framework to evaluate automatic summarisation systems objectively using similarity metrics to compare summaries. The input of the framework is: • A summarisation task (e.g. topic oriented, informative multi-document summarisation on a given domain/corpus). • A set T of test cases (e.g. topic/document set pairs for the example above) • A set of summaries M produced by humans (models), and a set of automatic summaries A (peers), for every test case. • A set X of similarity metrics to compare summaries. An evaluation framework should include, at least: • A measure QM,X(a) ∈[0, 1] that estimates the quality of an automatic summary a, using the similarity metrics in X to compare the summary with the models in M. With Q, we can compare the quality of automatic summaries. • A measure KM,A(X) ∈[0, 1] that estimates the suitability of a set of similarity metrics X for our evaluation purposes. With K, we can choose the best similarity metrics. Our main assumption is that all manual summaries are equally optimal and, while they are likely to be different, the best similarity metric is the one that identifies and uses the features that are common to all manual summaries, grouping and separating them from the automatic summaries. With these assumption in mind, it is useful to think of some formal restrictions that any evaluation framework Q, K must hold. We will consider the following ones (see illustrations in Figure 1): (1) Given two automatic summaries a, a′ and a similarity measure x, if a is more distant to all manual summaries than a′, then a cannot be better 281 than a′. Formally: ∀m ∈M.x(a, m) < x(a′, m) → QM,x(a) ≤QM,x(a′) (2) A similarity metric x is better when it is able to group manual summaries more closely, while keeping them more distant from automatic summaries: (∀m, m′ ∈M.x(m, m′) > x′(m, m′) ∧∀m ∈ M, a ∈Ax(a, m) < x′(a, m)) →KM,A(x) > KM,A(x′) (3) If x is a perfect similarity metric, the quality of a manual summary cannot be zero: KM,A(x) = 1 → ∀m ∈M.QM,x(m) > 0 (4) The quality of a similarity metric or a summary should not be dependent on scale issues. In general, if x′ = f(x) with f being a growing monotonic function, then KM,A(x) = KM,A(x′) and QM,x(a) = QM,x′(a) . (5) The quality of a similarity metric should not be sensitive to repeated elements in A, i.e. KM,A∪{a}(x) = KM,A∪{a,a}(x). (6) A random metric x should have KM,A(x) = 0. (7) A non-informative (constant) metric x should have KM,A(x) = 0. 3 QARLA evaluation framework 3.1 QUEEN: Estimation of the quality of an automatic summary We are now looking for a function QM,x(a) that estimates the quality of an automatic summary a ∈ A, given a set of models M and a similarity metric x. An obvious first attempt would be to compute the average similarity of a to all model summaries in M in a test sample. But such a measure depends on scale properties: metrics producing larger similarity values will produce larger Q values; and, depending on the scale properties of x, this cannot be solved just by scaling the final Q value. 
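A toy illustration of this scale problem, with invented similarity values: ranking two peers a and a′ by average similarity to two models is not even stable under a growing monotonic rescaling of the metric (here, a square root), so no normalisation of the final score can repair it.

```python
avg = lambda v: sum(v) / len(v)

sims_a  = [0.60, 0.60]    # x(a, m1), x(a, m2)
sims_a2 = [1.00, 0.25]    # x(a', m1), x(a', m2)

print(avg(sims_a), avg(sims_a2))                      # 0.600 < 0.625  -> a' preferred
print(avg([s ** 0.5 for s in sims_a]),
      avg([s ** 0.5 for s in sims_a2]))               # 0.775 > 0.750  -> a preferred
```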
A probabilistic measure that solves this problem and satisfies all the stated formal constraints is: QUEENx,M(a) ≡P(x(a, m) ≥x(m′, m′′)) which defines the quality of an automatic summary a as the probability over triples of manual summaries m, m′, m′′ that a is closer to a model than the other two models to each other. This measure draws from the way in which some formal restrictions on Q are stated (by comparing similarity values), and is inspired in the QARLA criterion introduced in (Amigo et al., 2004). Figure 2: Summaries quality in a similarity metric space Figure 2 illustrates some of the features of the QUEEN estimation: • Peers which are very far from the set of models all receive QUEEN = 0. In other words, QUEEN does not distinguish between very poor automatic summarisation strategies. While this feature reduces granularity of the ranking produced by QUEEN, we find it desirable, because in such situations, the values returned by a similarity measure are probably meaningless. • The value of QUEEN is maximised for the peers that “merge” with the models. For QUEEN values between 0.5 and 1, peers are effectively merged with the models. • An ideal metric (that puts all models together) would give QUEEN(m) = 1 for all models, and QUEEN(a) = 0 for all peers that are not put together with the models. This is a reasonable boundary condition saying that, if we can distinguish between models and peers perfectly, then all peers are poor emulations of human summarising behaviour. 3.2 Generalisation of QUEEN to metric sets It is desirable, however, to have the possibility of evaluating summaries with respect to several metrics together. Let us imagine, for instance, that the best metric turns out to be a ROUGE (Lin and Hovy, 2003a) variant that only considers unigrams to compute similarity. Now consider a summary 282 which has almost the same vocabulary as a human summary, but with a random scrambling of the words which makes it unreadable. Even if the unigram measure is the best hint of similarity to human performance, in this case it would produce a high similarity value, while any measure based on 2-grams, 3-grams or on any simple syntactic property would detect that the summary is useless. The issue is, therefore, how to find informative metrics, and then how to combine them into an optimal single quality estimation for automatic summaries. The most immediate way of combining metrics is via some weighted linear combination. But our example suggests that this is not the optimal way: the unigram measure would take the higher weight, and therefore it would assign a fair amount of credit to a summary that can be strongly rejected with other criteria. Alternatively, we can assume that a summary is better if it is closer to the model summaries according to all metrics. We can formalise this idea by introducing a universal quantifier on the variable x in the QUEEN formula. In other words, QUEENX,M(a) can be defined as the probability, measured over M ×M ×M, that for every metric in X the automatic summary a is closer to a model than two models to each other. QUEENX,M(a) ≡P(∀x ∈X.x(a, m) ≥x(m′, m′′)) We can think of the generalised QUEEN measure as a way of using a set of tests (every similarity metric in X) to falsify the hypothesis that a given summary a is a model. If, for every comparison of similarities between a, m, m′, m′′, there is at least one test that a does not pass, then a is rejected as a model. 
This generalised measure is not affected by the scale properties of every individual metric, i.e. it does not require metric normalisation and it is not affected by metric weighting. In addition, it still satisfies the properties enumerated for its singlemetric counterpart. Of course, the quality ranking provided by QUEEN is meaningless if the similarity metric x does not capture the essential features of the models. Therefore, we need to estimate the quality of similarity metrics in order to use QUEEN effectively. 3.3 KING: estimation of the quality of a similarity metric Now we need a measure KM,A(x) that estimates the quality of a similarity metric x to evaluate automatic summaries (peers) by comparison to human-produced models. In order to build a suitable K estimation, we will again start from the hypothesis that the best metric is the one that best characterises human summaries as opposed to automatic summaries. Such a metric should identify human summaries as closer to each other, and more distant to peers (second constraint in Section 2). By analogy with QUEEN, we can try (for a single metric): KM,A(x) ≡P(x(a, m) < x(m′, m′′)) = 1 −(QUEENx,M(a)) which is the probability that two models are closer to each other than a third model to a peer, and has smaller values when the average QUEEN value of peers decreases. The generalisation of K to metric sets would be simply: KM,A(X) ≡1 −(QUEENX,M(a))) This measure, however, does not satisfy formal conditions 3 and 5. Condition 3 is violated because, given a limited set of models, the K measure grows with a large number of metrics in X, eventually reaching K = 1 (perfect metric set). But in this situation, QUEEN(m) becomes 0 for all models, because there will always exist a metric that breaks the universal quantifier condition over x. We have to look, then, for an alternative formulation for K. The best K should minimise QUEEN(a), but having the quality of the models as a reference. A direct formulation can be: KM,A(X) = P(QUEEN(m) > QUEEN(a)) According to this formula, the quality of a metric set X is the probability that the quality of a 283 model is higher than the quality of a peer according to this metric set. This formula satisfies all formal conditions except 5 (KM,A∪{a}(x) = KM,A∪{a,a}(x)), because it is sensitive to repeated peers. If we add a large set of identical (or very similar peers), K will be biased towards this set. We can define a suitable K that satisfies condition 5 if we apply a universal quantifier on a. This is what we call the KING measure: KINGM,A(X) ≡ P(∀a ∈A.QUEENM,X(m) > QUEENM,X(a)) KING is the probability that a model is better than any peer in a test sample. In terms of a quality ranking, it is the probability that a model gets a better ranking than all peers in a test sample. Note that KING satisfies all restrictions because it uses QUEEN as a quality estimation for summaries; if QUEEN is substituted for a different quality measure, some of the properties might not hold any longer. Figure 3: Metrics quality representation Figure 3 illustrates the behaviour of the KING measure in boundary conditions. The leftmost figure represents a similarity metric which mixes models and peers randomly. Therefore, P(QUEEN(m) > QUEEN(a)) ≈0.5. As there are seven automatic summaries, KING = P(∀a ∈ A, QUEEN(m) > QUEEN(a)) ≈0.57 ≈0 The rightmost figure represents a metric which is able to group models and separate them from peers. In this case, QUEEN(a) = 0 for all peers, and then KING(x) = 1. 
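The definitions of QUEEN and KING above can be estimated directly by enumeration. The sketch below assumes each metric is a callable `x(summary1, summary2)` returning a similarity score; whether a model is excluded from its own triples, how degenerate triples are handled, and how estimates are pooled across test cases are details the formulas leave to the implementation and that this sketch glosses over.

```python
from itertools import product

def queen(a, models, metrics):
    """QUEEN_{X,M}(a): probability over model triples (m, m', m'') that
    a is at least as close to m as m' is to m'', under every metric."""
    triples = list(product(models, repeat=3))
    hits = sum(
        all(x(a, m) >= x(m1, m2) for x in metrics)
        for m, m1, m2 in triples
    )
    return hits / len(triples)

def king(models, peers, metrics):
    """KING_{M,A}(X): probability that a model's QUEEN value beats the
    QUEEN value of every peer, estimated over a single test case."""
    peer_q = [queen(a, models, metrics) for a in peers]
    model_q = [queen(m, models, metrics) for m in models]
    hits = sum(all(qm > qa for qa in peer_q) for qm in model_q)
    return hits / len(models)
```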
3.4 JACK:Reliability of the peers set Once we detect a difference in quality between two summarisation systems, the question is now whether this result is reliable. Would we get the same results using a different test set (different examples, different human summarisers (models) or different baseline systems)? The first step is obviously to apply statistical significance tests to the results. But even if they give a positive result, it might be insufficient. The problem is that the estimation of the probabilities in KING, QUEEN assumes that the sample sets M, A are not biased. If M, A are biased, the results can be statistically significant and yet unreliable. The set of examples and the behaviour of human summarisers (models) should be somehow controlled either for homogeneity (if the intended profile of examples and/or users is narrow) or representativity (if it is wide). But how to know whether the set of automatic summaries is representative and therefore is not penalising certain automatic summarisation strategies? Our goal is, therefore, to have some estimation JACK(X, M, A) of the reliability of the test set to compute reliable QUEEN, KING measures. We can think of three reasonable criteria for this estimation: 1. All other things being equal, if the elements of A are more heterogeneous, we are enhancing the representativeness of A (we have a more diverse set of (independent) automatic summarization strategies represented), and therefore the reliability of the results should be higher. Reversely, if all automatic summarisers employ similar strategies, we may end up with a biased set of peers. 2. All other things being equal, if the elements of A are closer to the model summaries in M, the reliability of the results should be higher. 3. Adding items to A should not reduce its reliability. A possible formulation for JACK which satisfies that criteria is: JACK(X, M, A) ≡P(∃a, a′ ∈A.QUEEN(a) > 0 ∧QUEEN(a′) > 0 ∧∀x ∈X.x(a, a′) ≤x(a, m)) i.e. the probability over all model summaries m of finding a couple of automatic summaries a, a′ 284 which are closer to each other than to m according to all metrics. This measure satisfies all three constraints: it can be enlarged by increasing the similarity of the peers to the models (the x(m, a) factor in the inequality) or decreasing the similarity between automatic summaries (the x(a, a′) factor in the inequality). Finally, adding elements to A can only increase the chances of finding a pair of automatic summaries satisfying the condition in JACK. Figure 4: JACK values Figure 4 illustrates how JACK works: in the leftmost part of the figure, peers are grouped together and far from the models, giving a low JACK value. In the rightmost part of the figure, peers are distributed around the set of models, closely surrounding them, receiving a high JACK value. 4 A Case of Study In order to test the behaviour of our evaluation framework, we have applied it to the ISCORPUS described in (Amigo et al., 2004). The ISCORPUS was built to study an Information Synthesis task, where a (large) set of relevant documents has to be studied to give a brief, well-organised answer to a complex need for information. This corpus comprises: • Eight topics extracted from the CLEF Spanish Information Retrieval test set, slightly reworded to move from a document retrieval task (find documents about hunger strikes in...) into an Information Synthesis task (make a report about major causes of hunger strikes in...). 
• One hundred relevant documents per topic taken from the CLEF EFE 1994 Spanish newswire collection. • M: Manual extractive summaries for every topic made by 9 different users, with a 50sentence upper limit (half the number of relevant documents). • A: 30 automatic reports for every topic made with baseline strategies. The 10 reports with highest sentence overlap with the manual summaries were selected as a way to increase the quality of the baseline set. We have considered the following similarity metrics: ROUGESim: ROUGE is a standard measure to evaluate summarisation systems based on n-gram recall. We have used ROUGE-1 (only unigrams with lemmatization and stop word removal), which gives good results with standard summaries (Lin and Hovy, 2003a). ROUGE can be turned into a similarity metric ROUGESim simply by considering only one model when computing its value. SentencePrecision: Given a reference and a contrastive summary, the number of fragments of the contrastive summary which are also in the reference summary, in relation to the size of the reference summary. SentenceRecall: Given a reference and a contrastive summary, the number of fragments of the reference summary which are also in the contrastive summary, in relation to the size of the contrastive summary. DocSim: The number of documents used to select fragments in both summaries, in relation to the size of the contrastive summary. VectModelSim: Derived from the Euclidean distance between vectors of relative word frequencies representing both summaries. NICOS (key concept overlap): Same as VectModelSim, but using key-concepts (manually identified by the human summarisers after producing the summary) instead of all nonempty words. 285 TruncatedVectModeln: Same as VectModelSim, but using only the n more frequent terms in the reference summary. We have used 10 variants of this measure with n = 1, 8, 64, 512. 4.1 Quality of Similarity Metric Sets Figure 5 shows the quality (KING values averaged over the eight ISCORPUS topics) of every individual metric. The rightmost part of the figure also shows the quality of two metric sets: • The first one ({ROUGESim, VectModelSim, TruncVectModel.1}) is the metric set that maximises KING, using only similarity metrics that do not require manual annotation (i.e. excluding NICOS) or can only be applied to extractive summaries (i.e. DocSim, SentenceRecall and SentencePrecision). • The second one ({ TruncVectModel.1, ROUGESim, DocSim, VectModelSim }) is the best combination considering all metrics. The best result of individual metrics is obtained by ROUGESim (0.39). All other individual metrics give scores below 0.31. Both metric sets, on the other, are better than ROUGESim alone, confirming that metric combination is feasible to improve system evaluation. The quality of the best metric set (0.47) is 21% better than ROUGESim. 4.2 Reliability of the test set The 30 automatic summaries (baselines) per topic were built with four different classes of strategies: i) picking up the first sentence from assorted subsets of documents, ii) picking up first and second sentences from assorted documents, iii) picking up first, second or third sentences from assorted documents, and iv) picking up whole documents with different algorithms to determine which are the most representative documents. Figure 6 shows the reliability (JACK) of every subset, and the reliability of the whole set of automatic summaries, computed with the best metric set. 
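A rough rendering of the JACK reliability estimate from Section 3.4, again reusing the `queen` helper and a simple similarity function as stand-ins for the metric set used in the experiments above; the inequality follows the formula as stated in the text.

```python
from itertools import combinations

def jack(models, peers, metrics):
    """JACK: probability over models m of finding peers a, a' with
    QUEEN(a) > 0, QUEEN(a') > 0 and x(a, a') <= x(a, m) for every metric x."""
    if not models:
        return 0.0
    q = {a: queen(a, models, metrics) for a in peers}
    hits = 0
    for m in models:
        found = any(
            q[a] > 0 and q[b] > 0
            and all(x(a, b) <= x(a, m) for x in metrics)
            for a, b in combinations(peers, 2)
        )
        hits += int(found)
    return hits / len(models)
```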
Note that the individual subsets are all below 0.2, while the reliability of the full set of peers goes up to 0.57. That means that the condition in JACK is satisfied for more than half of the models. This value would probably be higher if state-of-the-art summarisation techniques were represented in the set of peers. 5 Testing the predictive power of the framework The QARLA probabilistic framework is designed to evaluate automatic summarisation systems and, at the same time, similarity metrics conceived as well to evaluate summarisation systems. Therefore, testing the validity of the QARLA proposal implies some kind of meta-meta-evaluation, something which seems difficult to design or even to define. It is relatively simple, however, to perform some simple cross-checkings on the ISCORPUS data to verify that the qualitative information described above is reasonable. This is the test we have implemented: If we remove a model m from M, and pretend it is the output of an automatic summariser, we can evaluate the peers set A and the new peer m using M′ = M\{m} as the new model set. If the evaluation metric is good, the quality of the new peer m should be superior to all other peers in A. What we have to check, then, is whether the average quality of a human summariser on all test cases (8 topics in ISCORPUS) is superior to the average quality of any automatic summariser. We have 9 human subjects in the ISCORPUS test bed; therefore, we can repeat this test nine times. With this criterion, we can compare our quality measure Q with state-of-the-art evaluation measures such as ROUGE variants. Table 1 shows the results of applying this test on ROUGE1, ROUGE-2, ROUGE-3, ROUGE-4 (as stateof-the-art references) and QUEEN(ROUGESim), QUEEN(Best Metric Combination) as representatives of the QARLA framework. Even if the test is very limited by the number of topics, it confirms the potential of the framework, with the highest KING metric combination doubling the performance of the best ROUGE measure (6/9 versus 3/9 correct detections). 286 Figure 5: Quality of similarity metrics Figure 6: Reliability of ISCORPUS peer sets Evaluation criterion human summarisers ranked first ROUGE-1 3/9 ROUGE-2 2/9 ROUGE-3 1/9 ROUGE-4 1/9 QUEEN(ROUGESim) 4/9 QUEEN(Best Metric Combination) 6/9 Table 1: Results of the test of identifying the manual summariser 287 6 Related work and discussion 6.1 Application of similarity metrics to evaluate summaries Both in Text Summarisation and Machine Translation, the automatic evaluation of systems consists of computing some similarity metric between the system output and a human model summary. Systems are then ranked in order of decreasing similarity to the gold standard. When there are more than one reference items, similarity is calculated over a pseudo-summary extracted from every model. BLEU (Papineni et al., 2001) and ROUGE (Lin and Hovy, 2003a) are the standard similarity metrics used in Machine Translation and Text Summarisation. Generating a pseudo-summary from every model, the results of a evaluation metric might depend on the scale properties of the metric regarding different models; our QUEEN measure, however, does not depend on scales. Another problem of the direct application of a single evaluation metric to rank systems is how to combine different metrics. The only way to do this is by designing an algebraic combination of the individual metrics into a new combined metric, i.e. by deciding the weight of each individual metric beforehand. 
In our framework, however, it is not necessary to prescribe how similarity metrics should be combined, not even to know which ones are individually better indicators. 6.2 Meta-evaluation of similarity metrics The question of how to know which similarity metric is best to evaluate automatic summaries/translations has been addressed by • comparing the quality of automatic items with the quality of manual references (Culy and Riehemann, 2003; Lin and Hovy, 2003b). If the metric does not identify that the manual references are better, then it is not good enough for evaluation purposes. • measuring the correlation between the values given by different metrics (Coughlin, 2003). • measuring the correlation between the rankings generated by each metric and rankings generated by human assessors. (Joseph P. Turian and Melamed, 2003; Lin and Hovy, 2003a). The methodology which is closest to our framework is ORANGE (Lin, 2004), which evaluates a similarity metric using the average ranks obtained by reference items within a baseline set. As in our framework, ORANGE performs an automatic meta-evaluation, there is no need for human assessments, and it does not depend on the scale properties of the metric being evaluated (because changes of scale preserve rankings). The ORANGE approach is, indeed, closely related to the original QARLA measure introduced in (Amigo et al., 2004). Our KING, QUEEN, JACK framework, however, has a number of advantages over ORANGE: • It is able to combine different metrics, and evaluate the quality of metric sets, without any a-priori weighting of their relative importance. • It is not sensitive to repeated (or very similar) baseline elements. • It provides a mechanism, JACK, to check whether a set X, M, A of metrics, manual and baseline items is reliable enough to produce a stable evaluation of automatic summarisation systems. Probably the most significant improvement over ORANGE is the ability of KING, QUEEN, JACK to combine automatically the information of different metrics. We believe that a comprehensive automatic evaluation of a summary must necessarily capture different aspects of the problem with different metrics, and that the results of every individual metric should not be combined in any prescribed algebraic way (such as a linear weighted combination). Our framework satisfies this condition. An advantage of ORANGE, however, is that it does not require a large number of gold standards to reach stability, as in the case of QARLA. Finally, it is interesting to compare the rankings produced by QARLA with the output of human assessments, even if the philosophy of QARLA is not considering human assessments as the gold standard for evaluation. Our initial tests on DUC 288 Figure 7: KING vs Pearson correlation with manual rankings in DUC for 1024 metrics combinations test beds are very promising, reaching Pearson correlations of 0.9 and 0.95 between human assessments and QUEEN values for DUC 2004 tasks 2 and 5 (Over and Yen, 2004), using metric sets with highest KING values. The figure 7 shows how Pearson correlation grows up with higher KING values for 1024 metric combinations. Acknowledgments We are indebted to Ed Hovy, Donna Harman, Paul Over, Hoa Dang and Chin-Yew Lin for their inspiring and generous feedback at different stages in the development of QARLA. We are also indebted to NIST for hosting Enrique Amig´o as a visitor and for providing the DUC test beds. This work has been partially supported by the Spanish government, project R2D2 (TIC-2003-7180). References E. 
Amigo, V. Peinado, J. Gonzalo, A. Peñas, and F. Verdejo. 2004. An empirical study of information synthesis task. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL), Barcelona, July.
Deborah Coughlin. 2003. Correlating Automated and Human Assessments of Machine Translation Quality. In Proceedings of MT Summit IX, New Orleans, LA.
Christopher Culy and Susanne Riehemann. 2003. The Limits of N-Gram Translation Evaluation Metrics. In Proceedings of MT Summit IX, New Orleans, LA.
Joseph P. Turian, Luke Shen, and I. Dan Melamed. 2003. Evaluation of Machine Translation and its Evaluation. In Proceedings of MT Summit IX, New Orleans, LA.
C. Lin and E. H. Hovy. 2003a. Automatic Evaluation of Summaries Using N-gram Co-occurrence Statistics. In Proceedings of the 2003 Language Technology Conference (HLT-NAACL 2003).
Chin-Yew Lin and Eduard Hovy. 2003b. The Potential and Limitations of Automatic Sentence Extraction for Summarization. In Dragomir Radev and Simone Teufel, editors, HLT-NAACL 2003 Workshop: Text Summarization (DUC03), Edmonton, Alberta, Canada, May 31 – June 1. Association for Computational Linguistics.
C. Lin. 2004. Orange: a Method for Evaluating Automatic Metrics for Machine Translation. In Proceedings of the 20th International Conference on Computational Linguistics (COLING'04), Geneva, August.
P. Over and J. Yen. 2004. An introduction to DUC 2004: intrinsic evaluation of generic news text summarization systems. In Proceedings of the DUC 2004 Document Understanding Workshop, Boston.
K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318, Philadelphia, July.
Proceedings of the 43rd Annual Meeting of the ACL, pages 290–297, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Supervised and Unsupervised Learning for Sentence Compression Jenine Turner and Eugene Charniak Department of Computer Science Brown Laboratory for Linguistic Information Processing (BLLIP) Brown University Providence, RI 02912 {jenine|ec}@cs.brown.edu Abstract In Statistics-Based Summarization - Step One: Sentence Compression, Knight and Marcu (Knight and Marcu, 2000) (K&M) present a noisy-channel model for sentence compression. The main difficulty in using this method is the lack of data; Knight and Marcu use a corpus of 1035 training sentences. More data is not easily available, so in addition to improving the original K&M noisy-channel model, we create unsupervised and semi-supervised models of the task. Finally, we point out problems with modeling the task in this way. They suggest areas for future research. 1 Introduction Summarization in general, and sentence compression in particular, are popular topics. Knight and Marcu (henceforth K&M) introduce the task of statistical sentence compression in Statistics-Based Summarization - Step One: Sentence Compression (Knight and Marcu, 2000). The appeal of this problem is that it produces summarizations on a small scale. It simplifies general compression problems, such as text-to-abstract conversion, by eliminating the need for coherency between sentences. The model is further simplified by being constrained to word deletion: no rearranging of words takes place. Others have performed the sentence compression task using syntactic approaches to this problem (Mani et al., 1999) (Zajic et al., 2004), but we focus exclusively on the K&M formulation. Though the problem is simpler, it is still pertinent to current needs; generation of captions for television and audio scanning services for the blind (Grefenstette, 1998), as well as compressing chosen sentences for headline generation (Angheluta et al., 2004) are examples of uses for sentence compression. In addition to simplifying the task, K&M’s noisy-channel formulation is also appealing. In the following sections, we discuss the K&M noisy-channel model. We then present our cleaned up, and slightly improved noisy-channel model. We also develop unsupervised and semi-supervised (our term for a combination of supervised and unsupervised) methods of sentence compression with inspiration from the K&M model, and create additional constraints to improve the compressions. We conclude with the problems inherent in both models. 2 The Noisy-Channel Model 2.1 The K&M Model The K&M probabilistic model, adapted from machine translation to this task, is the noisy-channel model. In machine translation, one imagines that a string was originally in English, but that someone adds some noise to make it a foreign string. Analogously, in the sentence compression model, the short string is the original sentence and someone adds noise, resulting in the longer sentence. Using this framework, the end goal is, given a long sentence l, to determine the short sentence s that maximizes 290 P(s | l). By Bayes Rule, P(s | l) = P(l | s)P(s) P(l) (1) The probability of the long sentence, P(l) can be ignored when finding the maximum, because the long sentence is the same in every case. P(s) is the source model: the probability that s is the original sentence. P(l | s) is the channel model: the probability the long sentence is the expanded version of the short. 
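A minimal sketch of the decision rule in Equation 1: each candidate compression is scored by the product of a source probability and a channel probability (here in log space), and P(l) is dropped because it is constant across candidates. The two probability hooks and the toy numbers are placeholders, not the models used by K&M or by this paper.

```python
def best_compression(long_sentence, candidates, source_logprob, channel_logprob):
    """Pick argmax_s P(s) * P(l | s); P(l) is ignored since it is fixed for a given l."""
    def score(s):
        return source_logprob(s) + channel_logprob(long_sentence, s)
    return max(candidates, key=score)

# Toy usage with dummy log-probability hooks (illustrative numbers only).
src = lambda s: -0.5 * len(s.split())                       # stand-in for a language model
chan = lambda l, s: -0.2 * (len(l.split()) - len(s.split()))  # stand-in for the expansion model
print(best_compression("buy large toys", ["buy large toys", "buy toys"], src, chan))
```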
This framework independently models the grammaticality of s (with P(s)) and whether s is a good compression of l (P(l | s)). The K&M model uses parse trees for the sentences. These allow it to better determine the probability of the short sentence and to obtain alignments from the training data. In the K&M model, the sentence probability is determined by combining a probabilistic context free grammar (PCFG) with a word-bigram score. The joint rules used to create the compressions are generated by aligning the nodes of the short and long trees in the training data to determine expansion probabilities (P(l | s)). Recall that the channel model tries to find the probability of the long string with respect to the short string. It obtains these probabilities by aligning nodes in the parsed parallel training corpus, and counting the nodes that align as “joint events.” For example, there might be S →NP VP PP in the long sentence and S →NP VP in the short sentence; we count this as one joint event. Non-compressions, where the long version is the same as the short, are also counted. The expansion probability, as used in the channel model, is given by Pexpand(l | s) = count(joint(l, s)) count(s) (2) where count(joint(l, s)) is the count of alignments of the long rule and the short. Many compressions do not align exactly. Sometimes the parses do not match, and sometimes there are deletions that are too complex to be modeled in this way. In these cases sentence pairs, or sections of them, are ignored. The K&M model creates a packed parse forest of all possible compressions that are grammatical with respect to the Penn Treebank (Marcus et al., 1993). Any compression given a zero expansion probability according to the training data is instead assigned a very small probability. A tree extractor (Langkilde, 2000) collects the short sentences with the highest score for P(s | l). 2.2 Our Noisy-Channel Model Our starting implementation is intended to follow the K&M model fairly closely. We use the same 1067 pairs of sentences from the Ziff-Davis corpus, with 32 used as testing and the rest as training. The main difference between their model and ours is that instead of using the rather ad-hoc K&M language model, we substitute the syntax-based language model described in (Charniak, 2001). We slightly modify the channel model equation to be P(l | s) = Pexpand(l | s)Pdeleted, where Pdeleted is the probability of adding the deleted subtrees back into s to get l. We determine this probability also using the Charniak language model. We require an extra parameter to encourage compression. We create a development corpus of 25 sentences from the training data in order to adjust this parameter. That we require a parameter to encourage compression is odd as K&M required a parameter to discourage compression, but we address this point in the penultimate section. Another difference is that we only generate short versions for which we have rules. If we have never before seen the long version, we leave it alone, and in the rare case when we never see the long version as an expansion of itself, we allow only the short version. We do not use a packed tree structure, because we make far fewer sentences. Additionally, as we are traversing the list of rules to compress the sentences, we keep the list capped at the 100 compressions with the highest Pexpand(l | s). We eventually truncate the list to the best 25, still based upon Pexpand(l | s). 
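A sketch of how the expansion probabilities of Equation 2 could be tabulated from aligned joint events, assuming the alignment step has already produced (long rule, short rule) pairs written as plain strings; the helper and example pairs are illustrative.

```python
from collections import Counter, defaultdict

def expansion_table(joint_events):
    """P_expand(l | s) = count(joint(l, s)) / count(s), from aligned (long, short) rule pairs."""
    joint = Counter(joint_events)                       # counts of (long_rule, short_rule) joint events
    short_counts = Counter(s for _, s in joint_events)  # count(s), including non-compressions
    table = defaultdict(dict)
    for (l, s), c in joint.items():
        table[s][l] = c / short_counts[s]
    return table

pairs = [("S -> NP VP PP", "S -> NP VP"),
         ("S -> NP VP", "S -> NP VP"),
         ("S -> NP VP", "S -> NP VP")]
print(expansion_table(pairs)["S -> NP VP"])   # {'S -> NP VP PP': 0.33..., 'S -> NP VP': 0.66...}
```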
2.3 Special Rules One difficulty in the use of training data is that so many compressions cannot be modeled by our simple method. The rules it does model, immediate constituent deletion, as in taking out the ADVP , of S →ADVP , NP VP ., are certainly common, but many good deletions are more structurally complicated. One particular type of rule, such as NP(1) → 291 NP(2) CC NP(3), where the parent has at least one child with the same label as itself, and the resulting compression is one of the matching children, such as, here, NP(2). There are several hundred rules of this type, and it is very simple to incorporate into our model. There are other structures that may be common enough to merit adding, but we limit this experiment to the original rules and our new “special rules.” 3 Unsupervised Compression One of the biggest problems with this model of sentence compression is the lack of appropriate training data. Typically, abstracts do not seem to contain short sentences matching long ones elsewhere in a paper, and we would prefer a much larger corpus. Despite this lack of training data, very good results were obtained both by the K&M model and by our variant. We create a way to compress sentences without parallel training data, while sticking as closely to the K&M model as possible. The source model stays the same, and we still pay a probability cost in the channel model for every subtree deleted. However, the way we determine Pexpand(l | s) changes because we no longer have a parallel text. We create joint rules using only the first section (0.mrg) of the Penn Treebank. We count all probabilistic context free grammar (PCFG) expansions, and then match up similar rules as unsupervised joint events. We change Equation 2 to calculate Pexpand(s | l) without parallel data. First, let us define svo (shorter version of) to be: r1 svo r2 iff the righthand side of r1 is a subsequence of the righthand side of r2. Then define Pexpand(l | s) = count(l) P l′s.t. s svo l′ count(l′) (3) This is best illustrated by a toy example. Consider a corpus with just 7 rules: 3 instances of NP →DT JJ NN and 4 instances of NP →DT NN. P(NP →DT JJ NN | NP →DT JJ NN) = 1. To determine this, you divide the count of NP →DT JJ NN = 3 by all the possible long versions of NP → DT JJ NN = 3. P(NP →DT JJ NN | NP →DT NN) = 3/7. The count of NP →DT JJ NN = 3, and the possible long versions of NP →DT NN are itself (with count of 3) and NP →DT JJ NN (with count of 4), yielding a sum of 7. Finally, P(NP →DT NN | NP →DT NN) = 4/7. The count of NP →DT NN = 4, and since the short (NP →DT NN) is the same as above, the count of the possible long versions is again 7. In this way, we approximate Pexpand(l | s) without parallel data. Since some of these “training” pairs are likely to be fairly poor compressions, due to the artificiality of the construction, we restrict generation of short sentences to not allow deletion of the head of any subtree. None of the special rules are applied. Other than the above changes, the unsupervised model matches our supervised version. As will be shown, this rule is not constraining enough and allows some poor compressions, but it is remarkable that any sort of compression can be achieved without training data. Later, we will describe additional constraints that help even more. 
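A sketch of the unsupervised estimate in Equation 3, reproducing the toy example from the text: the right-hand side of the short rule must be a subsequence of the right-hand side of the long rule (the svo relation), and the denominator sums the counts of every rule the short rule could expand into.

```python
from collections import Counter

def is_subsequence(short_rhs, long_rhs):
    """True if short_rhs is a subsequence of long_rhs (the 'svo' relation)."""
    it = iter(long_rhs)
    return all(sym in it for sym in short_rhs)

def unsup_expand_prob(long_rule, short_rule, rule_counts):
    """P_expand(l | s) = count(l) / sum of counts of rules l' such that s svo l'."""
    lhs, short_rhs = short_rule
    denom = sum(c for (lhs2, rhs), c in rule_counts.items()
                if lhs2 == lhs and is_subsequence(short_rhs, rhs))
    return rule_counts.get(long_rule, 0) / denom if denom else 0.0

# The toy corpus from the text: 3 x "NP -> DT JJ NN" and 4 x "NP -> DT NN".
counts = Counter({("NP", ("DT", "JJ", "NN")): 3, ("NP", ("DT", "NN")): 4})
print(unsup_expand_prob(("NP", ("DT", "JJ", "NN")), ("NP", ("DT", "JJ", "NN")), counts))  # 1.0
print(unsup_expand_prob(("NP", ("DT", "JJ", "NN")), ("NP", ("DT", "NN")), counts))        # 3/7
print(unsup_expand_prob(("NP", ("DT", "NN")), ("NP", ("DT", "NN")), counts))              # 4/7
```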
4 Semi-Supervised Compression Because the supervised version tends to do quite well, and its main problem is that the model tends to pick longer compressions than a human would, it seems reasonable to incorporate the unsupervised version into our supervised model, in the hope of getting more rules to use. In generating new short sentences, if we have compression probabilities in the supervised version, we use those, including the special rules. The only time we use an unsupervised compression probability is when there is no supervised version of the unsupervised rule. 5 Additional Constraints Even with the unsupervised constraint from section 3, the fact that we have artificially created our joint rules gives us some fairly ungrammatical compressions. Adding extra constraints improves our unsupervised compressions, and gives us better performance on the supervised version as well. We use a program to label syntactic arguments with the roles they are playing (Blaheta and Charniak, 2000), and the rules for complement/adjunct distinction given by (Collins, 1997) to never allow deletion of the complement. Since many nodes that should not 292 be deleted are not labeled with their syntactic role, we add another constraint that disallows deletion of NPs. 6 Evaluation As with Knight and Marcu’s (2000) original work, we use the same 32 sentence pairs as our Test Corpus, leaving us with 1035 training pairs. After adjusting the supervised weighting parameter, we fold the development set back into the training data. We presented four judges with nine compressed versions of each of the 32 long sentences: A humangenerated short version, the K&M version, our first supervised version, our supervised version with our special rules, our supervised version with special rules and additional constraints, our unsupervised version, our supervised version with additional constraints, our semi-supervised version, and our semisupervised version with additional constraints. The judges were asked to rate the sentences in two ways: the grammaticality of the short sentences on a scale from 1 to 5, and the importance of the short sentence, or how well the compressed version retained the important words from the original, also on a scale from 1 to 5. The short sentences were randomly shuffled across test cases. The results in Table 1 show compression rates, as well as average grammar and importance scores across judges. There are two main ideas to take away from these results. First, we can get good compressions without paired training data. Second, we achieved a good boost by adding our additional constraints in two of the three versions. Note that importance is a somewhat arbitrary distinction, since according to our judges, all of the computer-generated versions do as well in importance as the human-generated versions. 6.1 Examples of Results In Figure 1, we give four examples of most compression techniques in order to show the range of performance that each technique spans. In the first two examples, we give only the versions with constraints, because there is little or no difference between the versions with and without constraints. Example 1 shows the additional compression obtained by using our special rules. Figure 2 shows the parse trees of the original pair of short and long versions. The relevant expansion is NP →NP1 , PP in the long version and simply NP1 in the short version. 
The supervised version that includes the special rules learned this particular common special joint rule from the training data and could apply it to the example case. This supervised version compresses better than either version of the supervised noisy-channel model that lacks these rules. The unsupervised version does not compress at all, whereas the semi-supervised version is identical with the better supervised version. Example 2 shows how unsupervised and semisupervised techniques can be used to improve compression. Although the final length of the sentences is roughly the same, the unsupervised and semisupervised versions are able to take the action of deleting the parenthetical. Deleting parentheses was never seen in the training data, so it would be extremely unlikely to occur in this case. The unsupervised version, on the other hand, sees both PRN → lrb NP rrb and PRN →NP in its training data, and the semi-supervised version capitalizes on this particular unsupervised rule. Example 3 shows an instance of our initial supervised versions performing far worse than the K&M model. The reason is that currently our supervised model only generates compressions that it has seen before, unlike the K&M model, which generates all possible compressions. S →S , NP VP . never occurs in the training data, and so a good compression does not exist. The unsupervised and semi-supervised versions do better in this case, and the supervised version with the added constraints does even better. Example 4 gives an example of the K&M model being outperformed by all of our other models. 7 Problems with Noisy Channel Models of Sentence Compression To this point our presentation has been rather normal; we draw inspiration from a previous paper, and work at improving on it in various ways. We now deviate from the usual by claiming that while the K&M model works very well, there is a technical problem with formulating the task in this way. We start by making our noisy channel notation a 293 original: Many debugging features, including user-defined break points and variable-watching and message-watching windows, have been added. human: Many debugging features have been added. K&M: Many debugging features, including user-defined points and variable-watching and message-watching windows, have been added. supervised: Many features, including user-defined break points and variable-watching and windows, have been added. super (+ extra rules, constraints): Many debugging features have been added. unsuper (+ constraints): Many debugging features, including user-defined break points and variable-watching and message-watching windows, have been added. semi-supervised (+ constraints): Many debugging features have been added. original: Also, Trackstar supports only the critical path method (CPM) of project scheduling. human: Trackstar supports the critical path method of project scheduling. K&M: Trackstar supports only the critical path method (CPM) of scheduling. supervised: Trackstar supports only the critical path method (CPM) of scheduling. super (+ extra rules, constraints): Trackstar supports only the critical path method (CPM) of scheduling. unsuper (+ constraints): Trackstar supports only the critical path method of project scheduling. semi-supervised (+ constraints): Trackstar supports only the critical path method of project scheduling. original: The faster transfer rate is made possible by an MTI-proprietary data buffering algorithm that off-loads lock-manager functions from the Q-bus host, Raimondi said. 
human: The algorithm off-loads lock-manager functions from the Q-bus host. K&M: The faster rate is made possible by a MTI-proprietary data buffering algorithm that off-loads lock-manager functions from the Q-bus host, Raimondi said. supervised: Raimondi said. super (+ extra rules): Raimondi said. super (+ extra rules, constraints): The faster transfer rate is made possible by an MTI-proprietary data buffering algorithm, Raimondi said. unsuper (+ constraints): The faster transfer rate is made possible, Raimondi said. semi-supervised (+ constraints): The faster transfer rate is made possible, Raimondi said. original: The SAS screen is divided into three sections: one for writing programs, one for the system’s response as it executes the program, and a third for output tables and charts. human: The SAS screen is divided into three sections. K&M: The screen is divided into one super (+ extra rules): SAS screen is divided into three sections: one for writing programs, and a third for output tables and charts. super (+ extra rules, constraints): The SAS screen is divided into three sections. unsupervised: The screen is divided into sections: one for writing programs, one for the system’s response as it executes program, and third for output tables and charts. unsupervised (+ constraints): Screen is divided into three sections: one for writing programs, one for the system’s response as it executes program, and a third for output tables and charts. semi-supervised: The SAS screen is divided into three sections: one for writing programs, one for the system’s response as it executes the program, and a third for output tables and charts. semi-super (+ constraints): The screen is divided into three sections: one for writing programs, one for the system’s response as it executes the program, and a third for output tables and charts. Figure 1: Compression Examples 294 compression rate grammar importance humans 53.33% 4.96 3.73 K&M 70.37% 4.57 3.85 supervised 79.85% 4.64 3.97 supervised with extra rules 67.41% 4.57 3.66 supervised with extra rules and constraints 68.44% 4.77 3.76 unsupervised 79.11% 4.38 3.93 unsupervised with constraints 77.93% 4.51 3.88 semi-supervised 81.19% 4.79 4.18 semi-supervised with constraints 79.56% 4.75 4.16 Table 1: Experimental Results short: (S (NP (JJ Many) (JJ debugging) (NNS features)) (VP (VBP have) (VP (VBN been) (VP (VBN added))))(. .)) long: (S (NP (NP (JJ Many) (JJ debugging) (NNS features))(, ,) (PP (VBG including) (NP (NP (JJ user-defined)(NN break)(NNS points) (CC and)(NN variable-watching)) (CC and)(NP (JJ message-watching) (NNS windows))))(, ,)) (VP (VBP have) (VP (VBN been) (VP (VBN added))))(. .)) Figure 2: Joint Trees for special rules bit more explicit: arg maxsp(s, L = s | l, L = l) = (4) arg maxsp(s, L = s)p(l, L = l | s, L = s) Here we have introduced explicit conditioning events L = l and L = s to state that that the sentence in question is either the long version or the short version. We do this because in order to get the equation that K&M (and ourselves) start with, it is necessary to assume the following p(s, L = s) = p(s) (5) p(l, L = l | s, L = s) = p(l | s) (6) This means we assume that the probability of, say, s as a short (compressed) sentence is simply its probability as a sentence. This will be, in general, false. One would hope that real compressed sentences are more probable as a member of the set of compressed sentences than they are as simply a member of all English sentences. 
However, neither K&M, nor we, have a large enough body of compressed and original sentences from which to create useful language models, so we both make this simplifying assumption. At this point it seems like a reasonable choice root vp vb buy np nns toys root vp vb buy np jj large nns toys Figure 3: A compression example — trees A and B respectively to make. In fact, it compromises the entire enterprise. To see this, however, we must descend into more details. Let us consider a simplified version of a K&M example, but as reinterpreted for our model: how the noisy channel model assigns a probability of the compressed tree (A) in Figure 3 given the original tree B. We compute the probabilities p(A) and p(B | A) as follows (Figure 4): We have divided the probabilities up according to whether they are contributed by the source or channel models. Those from the source 295 p(A) p(B | A) p(s →vp | H(s)) p(s →vp | s →vp) p(vp →vb np | H(vp)) p(vp →vb np | vp →vb np) p(np →nns | H(np)) p(np →jj nns | np →nns) p(vb →buy | H(vb)) p(vb →buy | vb →buy) p(nns →toys | H(nns)) p(nns →toys | nns →toys) p(jj →large | H(jj)) Figure 4: Source and channel probabilities for compressing B into A p(B) p(B | B) p(s →vp | H(s)) p(s →vp | s →vp) p(vp →vb np | H(vp)) p(vp →vb np | vp →vb np) p(np →jj nns | H(np)) p(np →jj nns | np →jj nns) p(vb →buy | H(vb)) p(vb →buy | vb →buy) p(nns →toys | H(nns)) p(nns →toys | nns →toys) p(jj →large | H(jj)) p(jj →large | jj →large) Figure 5: Source and channel probabilities for leaving B as B model are conditioned on, e.g. H(np) the history in terms of the tree structure around the noun-phrase. In a pure PCFG this would only include the label of the node. In our language model it includes much more, such as parent and grandparent heads. Again, following K&M, contrast this with the probabilities assigned when the compressed tree is identical to the original (Figure 5). Expressed like this it is somewhat daunting, but notice that if all we want is to see which probability is higher (the compressed being the same as the original or truly compressed) then most of these terms cancel, and we get the rule, prefer the truly compressed if and only if the following ratio is greater than one. p(np →nns | H(np)) p(np →jj nns | H(np)) p(np →jj nns | np →nns) p(np →jj nns | np →jj nns) (7) 1 p(jj →large | jj →large) In the numerator are the unmatched probabilities that go into the compressed sentence noisy channel probability, and in the denominator are those for when the sentence does not undergo any change. We can make this even simpler by noting that because tree-bank pre-terminals can only expand into words p(jj →large | jj →large) = 1. Thus the last fraction in Equation 7 is equal to one and can be ignored. For a compression to occur, it needs to be less desirable to add an adjective in the channel model than in the source model. In fact, the opposite occurs. The likelihood of almost any constituent deletion is far lower than the probability of the constituents all being left in. This seems surprising, considering that the model we are using has had some success, but it makes intuitive sense. There are far fewer compression alignments than total alignments: identical parts of sentences are almost sure to align. So the most probable short sentence should be very barely compressed. Thus we add a weighting factor to compress our supervised version further. K&M also, in effect, weight shorter sentences more strongly than longer ones based upon their language model. 
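A small numeric illustration of the argument above; all probabilities below are made up purely for illustration, and the weighting factor is likewise hypothetical, since the paper does not report its value. The point is only that the ratio in Equation 7 tends to fall below one, so without a preference for shorter outputs the uncompressed tree wins.

```python
# Hypothetical probabilities for the "buy large toys" example (illustrative only).
p_src_np_nns    = 0.15   # p(np -> nns     | H(np))   source model, compressed tree
p_src_np_jj_nns = 0.25   # p(np -> jj nns  | H(np))   source model, uncompressed tree
p_chan_add_jj   = 0.02   # p(np -> jj nns  | np -> nns)      channel: expansion that adds jj
p_chan_keep     = 0.90   # p(np -> jj nns  | np -> jj nns)   channel: no change

ratio = (p_src_np_nns / p_src_np_jj_nns) * (p_chan_add_jj / p_chan_keep)
print(ratio)                  # ~0.013: far below 1, so the compressed tree loses
alpha = 100.0                 # hypothetical weighting factor favouring shorter outputs
print(ratio * alpha > 1.0)    # with the weight, the compressed tree can win
```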
In their papers on sentence compression, they give an example similar to our “buy large toys” example. The equation they get for the channel probabilities in their example is similar to the channel probabilities we give in Figures 3 and 4. However their source probabilities are different. K&M did not have a true syntax-based language model to use as we have. Thus they divided the language model into two parts. Part one assigns probabilities to the grammar rules using a probabilistic contextfree grammar, while part two assigns probabilities to the words using a bi-gram model. As they acknowledge in (Knight and Marcu, 2002), the word bigram probabilities are also included in the PCFG probabilities. So in their versions of Figures 3 and 4 they have both p(toys | nns) (from the PCFG) and p(toys | buy) for the bigram probability. In this model, the probabilities do not sum to one, because they pay the probabilistic price for guessing the word “toys” twice, based upon two different conditioning events. Based upon this language model, they prefer shorter sentences. To reiterate this section’s argument: A noisy channel model is not by itself an appropriate model for sentence compression. In fact, the most likely short sentence will, in general, be the same length as the long sentence. We achieve compression by weighting to give shorter sentences more likelihood. In fact, what is really required is some model that takes “utility” into account, using a utility model 296 in which shorter sentences are more useful. Our term giving preference to shorter sentences can be thought of as a crude approximation to such a utility. However, this is clearly an area for future research. 8 Conclusion We have created a supervised version of the noisychannel model with some improvements over the K&M model. In particular, we learned that adding an additional rule type improved compression, and that enforcing some deletion constraints improves grammaticality. We also show that it is possible to perform an unsupervised version of the compression task, which performs remarkably well. Our semisupervised version, which we hoped would have good compression rates and grammaticality, had good grammaticality but lower compression than desired. We would like to come up with a better utility function than a simple weighting parameter for our supervised version. The unsupervised version probably can also be further improved. We achieved much success using syntactic labels to constrain compressions, and there are surely other constraints that can be added. However, more training data is always the easiest cure to statistical problems. If we can find much larger quantities of training data we could allow for much richer rule paradigms that relate compressed to original sentences. One example of a rule we would like to automatically discover would allow us to compress all of our design goals or (NP (NP (DT all)) (PP (IN of) (NP (PRP$ our) (NN design) (NNS goals))))} to all design goals or (NP (DT all) (NN design) (NNS goals)) In the limit such rules blur the distinction between compression and paraphrase. 9 Acknowledgements This work was supported by NSF grant IIS0112435. We would like to thank Kevin Knight and Daniel Marcu for their clarification and test sentences, and Mark Johnson for his comments. References Roxana Angheluta, Rudradeb Mitra, Xiuli Jing, and Francine-Marie Moens. 2004. K.U.Leuven summarization system at DUC 2004. In Document Understanding Conference. Don Blaheta and Eugene Charniak. 2000. 
Assigning function tags to parsed text. In Proceedings of the North American Chapter of the Association for Computational Linguistics, pages 234–240.
Eugene Charniak. 2001. Immediate-head parsing for language models. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics. The Association for Computational Linguistics.
Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, San Francisco. Morgan Kaufmann.
Gregory Grefenstette. 1998. Producing intelligent telegraphic text reduction to provide an audio scanning service for the blind. In Working Notes of the AAAI Spring Symposium on Intelligent Text Summarization, pages 111–118.
Kevin Knight and Daniel Marcu. 2000. Statistics-based summarization – step one: sentence compression. In Proceedings of the 17th National Conference on Artificial Intelligence, pages 703–710.
Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence, 139(1):91–107.
Irene Langkilde. 2000. Forest-based statistical sentence generation. In Proceedings of the 1st Annual Meeting of the North American Chapter of the Association for Computational Linguistics.
Inderjeet Mani, Barbara Gates, and Eric Bloedorn. 1999. Improving summaries by revising them. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics. The Association for Computational Linguistics.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330.
David Zajic, Bonnie Dorr, and Richard Schwartz. 2004. BBN/UMD at DUC 2004: Topiary. In Document Understanding Conference.
Proceedings of the 43rd Annual Meeting of the ACL, pages 298–305, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Digesting Virtual “Geek” Culture: The Summarization of Technical Internet Relay Chats Liang Zhou and Eduard Hovy University of Southern California Information Sciences Institute 4676 Admiralty Way Marina del Rey, CA 90292-6695 {liangz, hovy} @isi.edu Abstract This paper describes a summarization system for technical chats and emails on the Linux kernel. To reflect the complexity and sophistication of the discussions, they are clustered according to subtopic structure on the sub-message level, and immediate responding pairs are identified through machine learning methods. A resulting summary consists of one or more mini-summaries, each on a subtopic from the discussion. 1 Introduction The availability of many chat forums reflects the formation of globally dispersed virtual communities. From them we select the very active and growing movement of Open Source Software (OSS) development. Working together in a virtual community in non-collocated environments, OSS developers communicate and collaborate using a wide range of web-based tools including Internet Relay Chat (IRC), electronic mailing lists, and more (Elliott and Scacchi, 2004). In contrast to conventional instant message chats, IRCs convey engaging and focused discussions on collaborative software development. Even though all OSS participants are technically savvy individually, summaries of IRC content are necessary within a virtual organization both as a resource and an organizational memory of activities (Ackerman and Halverson, 2000). They are regularly produced manually by volunteers. These summaries can be used for analyzing the impact of virtual social interactions and virtual organizational culture on software/product development. The emergence of email thread discussions and chat logs as a major information source has prompted increased interest in thread summarization within the Natural Language Processing (NLP) community. One might assume a smooth transition from text-based summarization to email and chat-based summarizations. However, chat falls in the genre of correspondence, which requires dialogue and conversation analysis. This property makes summarization in this area even more difficult than traditional summarization. In particular, topic “drift” occurs more radically than in written genres, and interpersonal and pragmatic content appears more frequently. Questions about the content and overall organization of the summary must be addressed in a more thorough way for chat and other dialogue summarization systems. In this paper we present a new system that clusters sub-message segments from correspondences according to topic, identifies the sub-message segment containing the leading issue within the topic, finds immediate responses from other participants, and consequently produces a summary for the entire IRC. Other constructions are possible. One of the two baseline systems described in this paper uses the timeline and dialogue structure to select summary content, and is quite effective. We use the term chat loosely in this paper. Input IRCs for our system is a mixture of chats and 298 emails that are indistinguishable in format observed from the downloaded corpus (Section 3). In the following sections, we summarize previous work, describe the email/chat data, intramessage clustering and summary extraction process, and discuss the results and future work. 
2 Previous and Related Work There are at least two ways of organizing dialogue summaries: by dialogue structure and by topic. Newman and Blitzer (2002) describe methods for summarizing archived newsgroup conversations by clustering messages into subtopic groups and extracting top-ranked sentences per subtopic group based on the intrinsic scores of position in the cluster and lexical centrality. Due to the technical nature of our working corpus, we had to handle intra-message topic shifts, in which the author of a message raises or responds to multiple issues in the same message. This requires that our clustering component be not message-based but submessage-based. Lam et al. (2002) employ an existing summarizer for single documents using preprocessed email messages and context information from previous emails in the thread. Rambow et al. (2004) show that sentence extraction techniques are applicable to summarizing email threads, but only with added email-specific features. Wan and McKeown (2004) introduce a system that creates overview summaries for ongoing decision-making email exchanges by first detecting the issue being discussed and then extracting the response to the issue. Both systems use a corpus that, on average, contains 190 words and 3.25 messages per thread, much shorter than the ones in our collection. Galley et al. (2004) describe a system that identifies agreement and disagreement occurring in human-to-human multi-party conversations. They utilize an important concept from conversational analysis, adjacent pairs (AP), which consists of initiating and responding utterances from different speakers. Identifying APs is also required by our research to find correspondences from different chat participants. In automatic summarization of spoken dialogues, Zechner (2001) presents an approach to obtain extractive summaries for multi-party dialogues in unrestricted domains by addressing intrinsic issues specific to speech transcripts. Automatic question detection is also deemed important in this work. A decision-tree classifier was trained on question-triggering words to detect questions among speech acts (sentences). A search heuristic procedure then finds the corresponding answers. Ries (2001) shows how to use keyword repetition, speaker initiative and speaking style to achieve topical segmentation of spontaneous dialogues. 3 Technical Internet Relay Chats GNUe, a meta-project of the GNU project1–one of the most famous free/open source software projects–is the case study used in (Elliott and Scacchi, 2004) in support of the claim that, even in virtual organizations, there is still the need for successful conflict management in order to maintain order and stability. The GNUe IRC archive is uniquely suited for our experimental purpose because each IRC chat log has a companion summary digest written by project participants as part of their contribution to the community. This manual summary constitutes gold-standard data for evaluation. 3.1 Kernel Traffic2 Kernel Traffic is a collection of summary digests of discussions on GNUe development. Each digest summarizes IRC logs and/or email messages (later referred to as chat logs) for a period of up to two weeks. A nice feature is that direct quotes and hyperlinks are part of the summary. Each digest is an extractive overview of facts, plus the author’s dramatic and humorous interpretations. 3.2 Corpus Download The complete Linux Kernel Archive (LKA) consists of two separate downloads. 
The Kernel Traffic (summary digests) are in XML format and were downloaded by crawling the Kernel Traffic site. The Linux Kernel Archives (individual IRC chat logs) are downloaded from the archive site. We matched the summaries with their respective chat logs based on subject line and publication dates. 3.3 Observation on Chat Logs 1 http://www.gnu.org 2 http://kt.hoser.ca/kernel-traffic/index.html 299 Upon initial examination of the chat logs, we found that many conventional assumptions about chats in general do not apply. For example, in most instant-message chats, each exchange usually consists of a small number of words in several sentences. Due to the technical nature of GNUe, half of the chat logs contain in-depth discussions with lengthy messages. One message might ask and answer several questions, discuss many topics in detail, and make further comments. This property, which we call subtopic structure, is an important difference from informal chat/interpersonal banter. Figure 1 shows the subtopic structure and relation of the first 4 messages from a chat log, produced manually. Each message is represented horizontally; the vertical arrows show where participants responded to each other. Visual inspection reveals in this example there are three distinctive clusters (a more complex cluster and two smaller satellite clusters) of discussions between participants at sub-message level. 3.4 Observation on Summary Digests To measure the goodness of system-produced summaries, gold standards are used as references. Human-written summaries usually make up the gold standards. The Kernel Traffic (summary digests) are written by Linux experts who actively contribute to the production and discussion of the open source projects. However, participantproduced digests cannot be used as reference summaries verbatim. Due to the complex structure of the dialogue, the summary itself exhibits some discourse structure, necessitating such reader guidance phrases such as “for the … question,” “on the … subject,” “regarding …,” “later in the same thread,” etc., to direct and refocus the reader’s attention. Therefore, further manual editing and partitioning is needed to transform a multi-topic digest into several smaller subtopic-based gold-standard reference summaries (see Section 6.1 for the transformation). 4 Fine-grained Clustering To model the subtopic structure of each chat message, we apply clustering at the sub-message level. 4.1 Message Segmentation First, we look at each message and assume that each participant responds to an ongoing discussion by stating his/her opinion on several topics or issues that have been discussed in the current chat log, but not necessarily in the order they were discussed. Thus, topic shifts can occur sequentially within a message. Messages are partitioned into multi-paragraph segments using TextTiling, which reportedly has an overall precision of 83% and recall of 78% (Hearst, 1994). 4.2 Clustering After distinguishing a set of message segments, we cluster them. When choosing an appropriate clustering method, because the number of subtopics under discussion is unknown, we cannot make an assumption about the total number of resulting clusters. Thus, nonhierarchical partitioning methods cannot be used, and we must use a hierarchical method. 
These methods can be either agglomerative, which begin with an unclustered data set and perform N – 1 pairwise joins, or divisive, which add all objects to a single cluster, and then perform N – 1 divisions to create a hierarchy of smaller clusters, where N is the total number of items to be clustered (Frakes and Baeza-Yates, 1992). Ward’s Method Hierarchical agglomerative clustering methods are commonly used and we employ Ward’s method (Ward and Hook, 1963), in which the text segment pair merged at each stage is the one that minimizes the increase in total within-cluster variance. Each cluster is represented by an L-dimensional vector (xi1, xi2, …, xiL) where each xik is the word’s tf • idf score. If mi is the number of objects in the cluster, the squared Euclidean distance between two segments i and j is: € dij 2 = (xik K=1 L ∑ −x jk)2 Figure 1. An example of chat subtopic structure and relation between correspondences. 300 When two segments are joined, the increase in variance Iij is expressed as: € Iij = mim j mi + m j dij 2 Number of Clusters The process of joining clusters continues until the combination of any two clusters would destabilize the entire array of currently existing clusters produced from previous stages. At each stage, the two clusters xik and xjk are chosen whose combination would cause the minimum increase in variance Iij, expressed as a percentage of the variance change from the last round. If this percentage reaches a preset threshold, it means that the nearest two clusters are much further from each other compared to the previous round; therefore, joining of the two represents a destabilizing change, and should not take place. Sub-message segments from resulting clusters are arranged according to the sequence the original messages were posted and the resulting subtopic structures are similar to the one shown in Figure 1. 5 Summary Extraction Having obtained clusters of message segments focused on subtopics, we adopt the typical summarization paradigm to extract informative sentences and segments from each cluster to produce subtopic-based summaries. If a chat log has n clusters, then the corresponding summary will contain n mini-summaries. All message segments in a cluster are related to the central topic, but to various degrees. Some are answers to questions asked previously, plus further elaborative explanations; some make suggestions and give advice where they are requested, etc. From careful analysis of the LKA data, we can safely assume that for this type of conversational interaction, the goal of the participants is to seek help or advice and advance their current knowledge on various technical subjects. This kind of interaction can be modeled as one probleminitiating segment and one or more corresponding problem-solving segments. We envisage that identifying corresponding message segment pairs will produce adequate summaries. This analysis follows the structural organization of summaries from Kernel Traffic. Other types of discussions, at least in part, require different discourse/summary organization. These corresponding pairs are formally introduced below, and the methods we experimented with for identifying them are described. 5.1 Adjacent Response Pairs An important conversational analysis concept, adjacent pairs (AP), is applied in our system to identify initiating and responding correspondences from different participants in one chat log. Adjacent pairs are considered fundamental units of conversational organization (Schegloff and Sacks, 1973). 
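Returning to the clustering step just described, a minimal sketch of the Ward-style agglomeration over segment vectors; representing each cluster by its centroid and stopping on a relative increase in merge cost are simplifying assumptions, and the threshold value is illustrative.

```python
import numpy as np

def ward_increase(ci, cj):
    """Increase in variance I_ij = (m_i * m_j / (m_i + m_j)) * ||centroid_i - centroid_j||^2."""
    mi, mj = len(ci), len(cj)
    d2 = float(np.sum((np.mean(ci, axis=0) - np.mean(cj, axis=0)) ** 2))
    return (mi * mj) / (mi + mj) * d2

def cluster_segments(vectors, rel_threshold=2.0):
    """Greedy agglomeration over segment tf-idf vectors; stop when the cheapest merge
    costs more than `rel_threshold` times the previous cheapest merge (illustrative rule)."""
    clusters = [[v] for v in vectors]
    prev_cost = None
    while len(clusters) > 1:
        (i, j), cost = min(
            (((i, j), ward_increase(np.array(clusters[i]), np.array(clusters[j])))
             for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda t: t[1])
        if prev_cost is not None and prev_cost > 0 and cost / prev_cost > rel_threshold:
            break                      # merging now would destabilise the current clustering
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
        prev_cost = cost
    return clusters
```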
An adjacent pair is said to consist of two parts that are ordered, adjacent, and produced by different speakers (Galley et al., 2004). In our email/chat (LKA) corpus a physically adjacent message, following the timeline, may not directly respond to its immediate predecessor. Discussion participants read the current live thread and decide what he/she would like to correspond to, not necessarily in a serial fashion. With the added complication of subtopic structure (see Figure 1) the definition of adjacency is further violated. Due to its problematic nature, a relaxation on the adjacency requirement is used in extensive research in conversational analysis (Levinson, 1983). This relaxed requirement is adopted in our research. Information produced by adjacent correspondences can be used to produce the subtopic-based summary of the chat log. As described in Section 4, each chat log is partitioned, at sub-message level, into several subtopic clusters. We take the message segment that appears first chronologically in the cluster as the topic-initiating segment in an adjacent pair. Given the initiating segment, we need to identify one or more segments from the same cluster that are the most direct and relevant responses. This process can be viewed equivalently as the informative sentence extraction process in conventional text-based summarization. 5.2 AP Corpus and Baseline We manually tagged 100 chat logs for adjacent pairs. There are, on average, 11 messages per chat log and 3 segments per message (This is considerably larger than threads used in previous research). Each chat log has been clustered into one or more bags of message segments. The message segment that appears earliest in time in a cluster 301 was marked as the initiating segment. The annotators were provided with this segment and one other segment at a time, and were asked to decide whether the current message segment is a direct answer to the question asked, the suggestion that was requested, etc. in the initiating segment. There are 1521 adjacent response pairs; 1000 were used for training and 521 for testing. Our baseline system selects the message segment (from a different author) immediately following the initiating segment. It is quite effective, with an accuracy of 64.67%. This is reasonable because not all adjacent responses are interrupted by messages responding to different earlier initiating messages. In the following sections, we describe two machine learning methods that were used to identify the second element in an adjacent response pair and the features used for training. We view the problem as a binary classification problem, distinguishing less relevant responses from direct responses. Our approach is to assign a candidate message segment c an appropriate response class r. 5.3 Features Structural and durational features have been demonstrated to improve performance significantly in conversational text analysis tasks. Using them, Galley et al. (2004) report an 8% increase in speaker identification. Zechner (2001) reports excellent results (F > .94) for inter-turn sentence boundary detection when recording the length of pause between utterances. In our corpus, durational information is nonexistent because chats and emails were mixed and no exact time recordings beside dates were reported. So we rely solely on structural and lexical features. For structural features, we count the number of messages between the initiating message segment and the responding message segment. Lexical features are listed in Table 1. 
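As an illustration, the structural feature and the lexical overlap features could be assembled for a candidate pair roughly as follows; the exact overlap and ratio definitions, the segment representation, and the tech-word lexicon (described next) are our assumptions rather than details given in the paper.

```python
def pair_features(init_seg, cand_seg, stopwords, tech_words):
    """Features for one (initiating segment, candidate response) pair."""
    w1, w2 = set(init_seg["tokens"]), set(cand_seg["tokens"])
    overlap = w1 & w2
    content_overlap = overlap - stopwords
    union = w1 | w2
    return {
        # structural: how many messages lie between the two segments
        "msg_distance": abs(cand_seg["msg_id"] - init_seg["msg_id"]),
        # lexical features of Table 1
        "n_overlap": len(overlap),
        "n_content_overlap": len(content_overlap),
        "ratio_overlap": len(overlap) / max(1, len(union)),
        "ratio_content_overlap": len(content_overlap) / max(1, len(union - stopwords)),
        "n_tech_overlap": len(overlap & tech_words),
    }
```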
The tech words are words that are uncommon in conventional literature and unique to Linux discussions.

5.4 Maximum Entropy

Maximum entropy has been proven to be an effective method in various natural language processing applications (Berger et al., 1996). For training and testing, we used YASMET.3 To estimate P(r | c) in the exponential form, we have:

P_\lambda(r \mid c) = \frac{1}{Z_\lambda(c)} \exp\Big( \sum_i \lambda_{i,r} \, f_{i,r}(c, r) \Big)

where Z_\lambda(c) is a normalizing constant and the feature function for feature f_i and response class r is defined as:

f_{i,r}(c, r') = 1 if f_i > 0 and r' = r; 0 otherwise.

\lambda_{i,r} is the feature-weight parameter for feature f_i and response class r. Then, to determine the best class r for the candidate message segment c, we have:

r^* = \arg\max_r P(r \mid c).

5.5 Support Vector Machine

Support vector machines (SVMs) have been shown to outperform other existing methods (naïve Bayes, k-NN, and decision trees) in text categorization (Joachims, 1998). Their advantages are robustness and the elimination of the need for feature selection and parameter tuning. SVMs find the hyperplane that separates the positive and negative training examples with maximum margin. Finding this hyperplane can be translated into an optimization problem of finding a set of coefficients \alpha_i^* of the weight vector \vec{w} for document d_i of class y_i \in \{+1, -1\}:

\vec{w} = \sum_i \alpha_i^* y_i \vec{d}_i, \quad \alpha_i > 0.

Testing data are classified depending on the side of the hyperplane they fall on. We used the LIBSVM4 package for training and testing.

3 http://www.fjoch.com/YASMET.html
4 http://www.csie.ntu.edu.tw/~cjlin/libsvm/

• number of overlapping words
• number of overlapping content words
• ratio of overlapping words
• ratio of overlapping content words
• number of overlapping tech words

Table 1. Lexical features.

Feature sets           baseline   MaxEnt    SVM
Structural             64.67%     61.22%    71.79%
Lexical                           62.24%    72.22%
Structural + Lexical              72.61%    72.79%

Table 2. Accuracy on identifying APs.

5.6 Results

Entries in Table 2 show the accuracies achieved using the machine learning models and feature sets.

5.7 Summary Generation

After responding message segments are identified, we couple them with their respective initiating segment to form a mini-summary based on their subtopic. Each initiating segment has zero or more responding segments. We also observed zero responses in human-written summaries, where participants initiated some question or concern but others failed to follow up on the discussion. The AP process is repeated for each cluster created previously. One or more subtopic-based mini-summaries make up one final summary for each chat log. Figure 2 shows an example. For longer chat logs, the length of the final summary is arbitrarily averaged at 35% of the original.

6 Summary Evaluation

To evaluate the goodness of the system-produced summaries, a set of reference summaries is used for comparison. In this section, we describe the manual procedure used to produce the reference summaries, and the performance of our system and two baseline systems.

6.1 Reference Summaries

Kernel Traffic digests are participant-written summaries of the chat logs. Each digest mixes the summary writer's own narrative comments with direct quotes (citing the authors) from the chat log. As observed in Section 3.4, subtopics are intermingled in each digest. Authors use key phrases to link the contents of each subtopic throughout the text. In Figure 3, we show an example of such a digest. Discussion participants' names are in italics and subtopics are in bold.
In this example, the conversation was started by Benjamin Reed with two questions: 1) asking for conventions for writing /proc drivers, and 2) asking about the status of sysctl. The summary writer indicated that Linus Torvalds replied to both questions and used the phrase “for the … question, he added…” to highlight the answer to the second question. As the diSubtopic 1: Benjamin Reed: I wrote a wireless ethernet driver a while ago... Are driver writers recommended to use that over extending /proc or is it deprecated? Linus Torvalds: Syscyl is deprecated. It’s useful in one way only ... Subtopic 2: Benjamin Reed: I am a bit uncomfortable ... wondering for a while if there are guidelines on … Linus Torvalds: The thing to do is to create ... Subtopic 3: Marcin Dalecki: Are you just blind to the never-ending format/ compatibility/ … problems the whole idea behind /proc induces inherently? Figure 2. A system-produced summary. Benjamin Reed wrote a wireless Ethernet driver that used /proc as its interface. But he was a little uncomfortable … asked if there were any conventions he should follow. He added, “and finally, what’s up with sysctl? …” Linus Torvalds replied with: “the thing to do is to create a …[program code]. The /proc/drivers/ directory is already there, so you’d basically do something like … [program code].” For the sysctl question, he added “sysctl is deprecated. ...” Marcin Dalecki flamed Linus: “Are you just blind to the never-ending format/compatibility/… problems the whole idea behind /proc induces inherently? …[example]” Figure 3. An original Kernel Traffic digest. Mini 1: Benjamin Reed wrote a wireless Ethernet driver that used /proc as its interface. But he was a little uncomfortable … and asked if there were any conventions he should follow. Linus Torvalds replied with: the thing to do is to create a …[program code]. The /proc/drivers/ directory is already there, so you’d basically do something like … [program code]. Marcin Dalecki flamed Linus: Are you just blind to the never-ending format/ compatibility/ … problems the whole idea behind /proc induces inherently? …[example] Mini 2: Benjamin Reed: and finally, what’s up with sysctl? ... Linus Torvalds replied: sysctl is deprecated. ... Figure 4. A reference summary reproduced from a summary digest. 303 gest goes on, Marcin Dalecki only responded to the first question with his excited commentary. Since our system-produced summaries are subtopic-based and partitioned accordingly, if we use unprocessed Kernel Traffic as references, the comparison would be rather complicated and would increase the level of inconsistency in future assessments. We manually reorganized each summary digest into one or more mini-summaries by subtopic (see Figure 4.) Examples (usually kernel stats) and programs are reduced to “[example]” and “[program code].” Quotes (originally in separate messages but merged by the summary writer) that contain multiple topics are segmented and the participant’s name is inserted for each segment. We follow clues like “to answer … question” to pair up the main topics and their responses. 6.2 Summarization Results We evaluated 10 chat logs. On average, each contains approximately 50 multi-paragraph tiles (partitioned by TextTile) and 5 subtopics (clustered by the method from Section 4). A simple baseline system takes the first sentence from each email in the sequence that they were posted, based on the assumption that people tend to put important information in the beginning of texts (Position Hypothesis). 
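A sketch of this first baseline (the message fields and NLTK's sentence splitter are our assumptions, not details from the paper):

```python
from nltk.tokenize import sent_tokenize

def position_baseline(chat_log):
    """Baseline 1: the first sentence of every message, in posting order."""
    summary = []
    for msg in sorted(chat_log, key=lambda m: m["date"]):
        sentences = sent_tokenize(msg["text"])
        if sentences:
            summary.append((msg["author"], sentences[0]))
    return summary
```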
A second baseline system was built based on constructing and analyzing the dialogue structure of each chat log. Participants often quote portions of previously posted messages in their responses. These quotes link most of the messages from a chat log. The message segment that immediately follows the quote is automatically paired with the quote itself and added to the summary and sorted according to the timeline. Segments that are not quoted in later messages are labeled as less relevant and discarded. A resulting baseline summary is an inter-connected structure of segments that quoted and responded to one another. Figure 5 is a shortened summary produced by this baseline for the ongoing example. The summary digests from Kernel Traffic mostly consist of direct snippets from original messages, thus making the reference summaries extractive even after rewriting. This makes it possible to conduct an automatic evaluation. A computerized procedure calculates the overlap between reference and system-produced summary units. Since each system-produced summary is a set of mini-summaries based on subtopics, we also compared the subtopics against those appearing in reference summaries (precision = 77.00%, recall = 74.33 %, F = 0.7566). Recall Precision F-measure Baseline1 30.79% 16.81% .2175 Baseline2 63.14% 36.54% .4629 Summary 52.57% 52.14% .5235 System Topic-summ 52.57% 63.66% .5758 Table 3 shows the recall, precision, and Fmeasure from the evaluation. From manual analysis on the results, we notice that the original digest writers often leave large portions of the discussion out and focus on a few topics. We think this is because among the participants, some are Linux veterans and others are novice programmers. Digest writers recognize this difference and reflect it in their writings, whereas our system does not. The entry “Topic-summ” in the table shows systemproduced summaries being compared only against the topics discussed in the reference summaries. 6.3 Discussion A recall of 30.79% from the simple baseline reassures us the Position Hypothesis still applies in conversational discussions. The second baseline performs extremely well on recall, 63.14%. It shows that quoted message segments, and thereby derived dialogue structure, are quite indicative of where the important information resides. Systems built on these properties are good summarization systems and hard-to-beat baselines. The system described in this paper (Summary) shows an Fmeasure of .5235, an improvement from .4629 of the smart baseline. It gains from a high precision because less relevant message segments are identified and excluded from the adjacent response pairs, [0|0] Benjamin Reed: “I wrote an … driver … /proc …” [0|1] Benjamin Reed: “… /proc/ guideline …” [0|2] Benjamin Reed: “… syscyl …” [1|0] Linus Torvalds responds to [0|0, 0|1, 0|2]: “the thing to do is …” “sysctl is deprecated … “ Figure 5. A short example from Baseline 2. Table 3. Summary of results. 304 leaving mostly topic-oriented segments in summaries. There is a slight improvement when assessing against only those subtopics appeared in the reference summaries (Topic-summ). This shows that we only identified clusters on their information content, not on their respective writers’ experience and reliability of knowledge. In the original summary digests, interactions and reactions between participants are sometimes described. Digest writers insert terms like “flamed”, “surprised”, “felt sorry”, “excited”, etc. 
To analyze social and organizational culture in a virtual environment, we need not only information extracts (implemented so far) but also passages that reveal the personal aspect of the communications. We plan to incorporate opinion identification into the current system in the future. 7 Conclusion and Future Work In this paper we have described a system that performs intra-message topic-based summarization by clustering message segments and classifying topicinitiating and responding pairs. Our approach is an initial step in developing a framework that can eventually reflect the human interactions in virtual environments. In future work, we need to prioritize information according to the perceived knowledgeability of each participant in the discussion, in addition to identifying informative content and recognizing dialogue structure. While the approach to the detection of initiating-responding pairs is quite effective, differentiating important and nonimportant topic clusters is still unresolved and must be explored. References M. S. Ackerman and C. Halverson. 2000. Reexaming organizational memory. Communications of the ACM, 43(1), 59–64. A. Berger, S. Della Pietra, and V. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. M. Elliott and W. Scacchi. 2004. Free software development: cooperation and conflict in a virtual organizational culture. S. Koch (ed.), Free/Open Source Software Development, IDEA publishing, 2004. W. B. Frakes and R. Baeza-Yates. 1992. Information retrieval: data structures & algorithms. Prentice Hall. M. Galley, K. McKeown, J. Hirschberg, and E. Shriberg. 2004. Identifying agreement and disagreement in conversational speech: use of Bayesian networks to model pragmatic dependencies. In the Proceedings of ACL-04. M. A. Hearst. 1994. Multi-paragraph segmentation of expository text. In the Proceedings of ACL 1994. T. Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. In Proceedings of the ECML, pages 137–142. D. Lam and S. L. Rohall. 2002. Exploiting e-mail structure to improve summarization. Technical Paper at IBM Watson Research Center #20–02. S. Levinson. 1983. Pragmatics. Cambridge University Press. P. Newman and J. Blitzer. 2002. Summarizing archived discussions: a beginning. In Proceedings of Intelligent User Interfaces. O. Rambow, L. Shrestha, J. Chen and C. Laurdisen. 2004. Summarizing email threads. In Proceedings of HLT-NAACL 2004: Short Papers. K. Ries. 2001. Segmenting conversations by topic, initiative, and style. In Proceedings of SIGIR Workshop: Information Retrieval Techniques for Speech Applications 2001: 51–66. E. A. Schegloff and H. Sacks. 1973. Opening up closings. Semiotica, 7-4:289–327. S. Wan and K. McKeown. 2004. Generating overview summaries of ongoing email thread discussions. In Proceedings of COLING 2004. J. H. Ward Jr. and M. E. Hook. 1963. Application of an hierarchical grouping procedure to a problem of grouping profiles. Educational and Psychological Measurement, 23, 69–81. K. Zechner. 2001. Automatic generation of concise summaries of spoken dialogues in unrestricted domains. In Proceedings of SIGIR 2001. 305 | 2005 | 37 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 306–313, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Lexicalization in Crosslinguistic Probabilistic Parsing: The Case of French Abhishek Arun and Frank Keller School of Informatics, University of Edinburgh 2 Buccleuch Place, Edinburgh EH8 9LW, UK [email protected], [email protected] Abstract This paper presents the first probabilistic parsing results for French, using the recently released French Treebank. We start with an unlexicalized PCFG as a baseline model, which is enriched to the level of Collins’ Model 2 by adding lexicalization and subcategorization. The lexicalized sister-head model and a bigram model are also tested, to deal with the flatness of the French Treebank. The bigram model achieves the best performance: 81% constituency F-score and 84% dependency accuracy. All lexicalized models outperform the unlexicalized baseline, consistent with probabilistic parsing results for English, but contrary to results for German, where lexicalization has only a limited effect on parsing performance. 1 Introduction This paper brings together two strands of research that have recently emerged in the field of probabilistic parsing: crosslinguistic parsing and lexicalized parsing. Interest in parsing models for languages other than English has been growing, starting with work on Czech (Collins et al., 1999) and Chinese (Bikel and Chiang, 2000; Levy and Manning, 2003). Probabilistic parsing for German has also been explored by a range of authors (Dubey and Keller, 2003; Schiehlen, 2004). In general, these authors have found that existing lexicalized parsing models for English (e.g., Collins 1997) do not straightforwardly generalize to new languages; this typically manifests itself in a severe reduction in parsing performance compared to the results for English. A second recent strand in parsing research has dealt with the role of lexicalization. The conventional wisdom since Magerman (1995) has been that lexicalization substantially improves performance compared to an unlexicalized baseline model (e.g., a probabilistic context-free grammar, PCFG). However, this has been challenged by Klein and Manning (2003), who demonstrate that an unlexicalized model can achieve a performance close to the state of the art for lexicalized models. Furthermore, Bikel (2004) provides evidence that lexical information (in the form of bi-lexical dependencies) only makes a small contribution to the performance of parsing models such as Collins’s (1997). The only previous authors that have directly addressed the role of lexicalization in crosslinguistic parsing are Dubey and Keller (2003). They show that standard lexicalized models fail to outperform an unlexicalized baseline (a vanilla PCFG) on Negra, a German treebank (Skut et al., 1997). They attribute this result to two facts: (a) The Negra annotation assumes very flat trees, which means that Collins-style head-lexicalization fails to pick up the relevant information from non-head nodes. (b) German allows flexible word order, which means that standard parsing models based on context free grammars perform poorly, as they fail to generalize over different positions of the same constituent. As it stands, Dubey and Keller’s (2003) work does not tell us whether treebank flatness or word order flexibility is responsible for their results: for English, the annotation scheme is non-flat, and the word order is non-flexible; lexicalization improves performance. 
For German, the annotation scheme is flat and the word order is flexible; lexicalization fails to improve performance. The present paper provides the missing piece of evidence by applying probabilistic parsing models to French, a language with non-flexible word order (like English), but with a treebank with a flat annotation scheme (like German). Our results show that French patterns with English: a large increase of parsing performance can be obtained by using a lexicalized model. We conclude that the failure to find a sizable effect of lexicalization in German can be attributed to the word order flexibility of that language, rather than to the flatness of the annotation in the German treebank. The paper is organized as follows: In Section 2, we give an overview of the French Treebank we use for our experiments. Section 3 discusses its annotation scheme and introduces a set of tree transformations that we apply. Section 4 describes the pars306 <NP> <w lemma="eux" ei="PROmp" ee="PRO-3mp" cat="PRO" subcat="3mp">eux</w> </NP> Figure 1: Word-level annotation in the French Treebank: eux ‘they’ (cat: POS tag, subcat: subcategory, ei, ee: inflection) ing models, followed by the results for the unlexicalized baseline model in Section 6 and for a range of lexicalized models in Section 5. Finally, Section 7 provides a crosslinguistic comparison involving data sets of the same size extracted from the French, English, and German treebanks. 2 The French Treebank 2.1 Annotation Scheme The French Treebank (FTB; Abeill´e et al. 2000) consists of 20,648 sentences extracted from the daily newspaper Le Monde, covering a variety of authors and domains (economy, literature, politics, etc.).1 The corpus is formatted in XML and has a rich morphosyntactic tagset that includes part-of-speech tag, ‘subcategorization’ (e.g., possessive or cardinal), inflection (e.g., masculine singular), and lemma information. Compared to the Penn Treebank (PTB; Marcus et al. 1993), the POS tagset of the French Treebank is smaller (13 tags vs. 36 tags): all punctuation marks are represented as the single PONCT tag, there are no separate tags for modal verbs, whwords, and possessives. Also verbs, adverbs and prepositions are more coarsely defined. On the other hand, a separate clitic tag (CL) for weak pronouns is introduced. An example for the word-level annotation in the FTB is given in Figure 1 The phrasal annotation of the FTB differs from that for the Penn Treebank in several aspects. There is no verb phrase: only the verbal nucleus (VN) is annotated. A VN comprises the verb and any clitics, auxiliaries, adverbs, and negation associated with it. This results in a flat syntactic structure, as in (1). (1) (VN (V sont) (ADV syst´ematiquement) (V arrˆet´es)) ‘are systematically arrested’ The noun phrases (NPs) in the FTB are also flat; a noun is grouped together with any associated determiners and prenominal adjectives, as in example (2). Note that postnominal adjectives, however, are adjoined to the NP in an adjectival phrase (AP). 1The French Treebank was developed at Universit´e Paris 7. A license can be obtained by emailing Anne Abeill´e (abeille@ linguist.jussieu.fr). 
<w compound="yes" lemma="d’entre" ei="P" ee="P" cat="P"> <w catint="P">d’</w> <w catint="P">entre</w> </w> Figure 2: Annotation of compounds in the French Treebank: d’entre ‘between’ (catint: compoundinternal POS tag) (2) (NP (D des) (A petits) (N mots) (AP (ADV tr`es) (A gentils))) ‘small, very gentle words’ Unlike the PTB, the FTB annotates coordinated phrases with the syntactic tag COORD (see the left panel of Figure 3 for an example). The treatment of compounds is also different in the FTB. Compounds in French can comprise words which do not exist otherwise (e.g., insu in the compound preposition `a l’insu de ‘unbeknownst to’) or can exhibit sequences of tags otherwise ungrammatical (e.g., `a la va vite ‘in a hurry’: Prep + Det + finite verb + adverb). To account for these properties, compounds receive a two-level annotation in the FTB: a subordinate level is added for the constituent parts of the compound (both levels use the same POS tagset). An example is given in Figure 2. Finally, the FTB differs from the PTB in that it does not use any empty categories. 2.2 Data Sets The version of the FTB made available to us (version 1.4, May 2004) contains numerous errors. Two main classes of inaccuracies were found in the data: (a) The word is present but morphosyntactic tags are missing; 101 such cases exist. (b) The tag information for a word (or a part of a compound) is present but the word (or compound part) itself is missing. There were 16,490 instances of this error in the dataset. Initially we attempted to correct the errors, but this proved too time consuming, and we often found that the errors cannot be corrected without access to the raw corpus, which we did not have. We therefore decided to remove all sentences with errors, which lead to a reduced dataset of 10,552 sentences. The remaining data set (222,569 words at an average sentence length of 21.1 words) was split into a training set, a development set (used to test the parsing models and to tune their parameters), and a test set, unseen during development. The training set consisted of the first 8,552 sentences in the corpus, with the following 1000 sentences serving as the development set and the final 1000 sentences forming the test set. All results reported in this paper were obtained on the test set, unless stated otherwise. 307 3 Tree Transformations We created a number of different datasets from the FTB, applying various tree transformation to deal with the peculiarities of the FTB annotation scheme. As a first step, the XML formatted FTB data was converted to PTB-style bracketed expressions. Only the POS tag was kept and the rest of the morphological information for each terminal was discarded. For example, the NP in Figure 1 was transformed to: (3) (NP (PRO eux)) In order to make our results comparable to results from the literature, we also transformed the annotation of punctuation. In the FTB, all punctuations is tagged uniformly as PONCT. We reassigned the POS for punctuation using the PTB tagset, which differentiates between commas, periods, brackets, etc. Compounds have internal structure in the FTB (see Section 2.1). We created two separate data sets by applying two alternative tree transformation to make FTB compounds more similar to compounds in other annotation schemes. The first was collapsing the compound by concatenating the compound parts using an underscore and picking up the cat information supplied at the compound level. 
For example, the compound in Figure 2 results in: (4) (P d’ entre) This approach is similar to the treatment of compounds in the German Negra treebank (used by Dubey and Keller 2003), where compounds are not given any internal structure (compounds are mostly spelled without spaces or apostrophes in German). The second approach is expanding the compound. Here, the compound parts are treated as individual words with their own POS (from the catint tag), and the suffix Cmp is appended the POS of the compound, effectively expanding the tagset.2 Now Figure 2 yields: (5) (PCmp (P d’) (P entre)). This approach is similar to the treatment of compounds in the PTB (except hat the PTB does not use a separate tag for the mother category). We found that in the FTB the POS tag of the compound part is sometimes missing (i.e., the value of catint is blank). In cases like this, the missing catint was substituted with the cat tag of the compound. This heuristic produces the correct POS for the subparts of the compound most of the time. 2An alternative would be to retain the cat tag of the compound. The effect of this decision needs to be investigated in future work. XP X COORD C XP X XP X C XP X XP X C X Figure 3: Coordination in the FTB: before (left) and after transformation (middle); coordination in the PTB (right) As mentioned previously, coordinate structures have their own constituent label COORD in the FTB annotation. Existing parsing models (e.g., the Collins models) have coordination-specific rules, presupposing that coordination is marked up in PTB format. We therefore created additional datasets where a transformation is applied that raises coordination. This is illustrated in Figure 3. Note that in the FTB annotation scheme, a coordinating conjunction is always followed by a syntactic category. Hence the resulting tree, though flatter, is still not fully compatible with the PTB treatment of coordination. 4 Probabilistic Parsing Models 4.1 Probabilistic Context-Free Grammars The aim of this paper is to further explore the crosslinguistic role of lexicalization by applying lexicalized parsing models to the French Treebank parsing accuracy. Following Dubey and Keller (2003), we use a standard unlexicalized PCFG as our baseline. In such a model, each context-free rule RHS → LHS is annotated with an expansion probability P(RHS|LHS). The probabilities for all the rules with the same left-hand side have to sum up to one and the probability of a parse tree T is defined as the product of the probabilities of each rule applied in the generation of T. 4.2 Collins’ Head-Lexicalized Models A number of lexicalized models can then be applied to the FTB, comparing their performance to the unlexicalized baseline. We start with Collins’ Model 1, which lexicalizes a PCFG by associating a word w and a POS tag t with each non-terminal X in the tree. Thus, a non-terminal is written as X(x) where x = ⟨w,t⟩and X is constituent label. Each rule now has the form: P(h) →Ln(ln)...L1(l1)H(h)R1(r1)...Rm(rm) (1) Here, H is the head-daughter of the phrase, which inherits the head-word h from its parent P. L1 ...Ln and R1 ...Rn are the left and right sisters of H. Either n or m may be zero, and n = m for unary rules. 308 The addition of lexical heads leads to an enormous number of potential rules, making direct estimation of P(RHS|LHS) infeasible because of sparse data. 
Therefore, the generation of the RHS of a rule given the LHS is decomposed into three steps: first the head is generated, then the left and right sisters are generated by independent 0th-order Markov processes. The probability of a rule is thus defined as: P(RHS|LHS) = P(Ln(ln)...L1(l1)H(h),R1(r1)...Rm(rm)|P(h)) = Ph(H|P,h)×∏m+1 i=1 Pr(Ri(ri)|P,h,H,d(i)) ×∏n+1 i=1 Pl(Li(li)|P,h,H,d(i)) (2) Here, Ph is the probability of generating the head, Pl and Pr are the probabilities of generating the left and right sister respectively. Lm+1(lm+1) and Rm+1(rm+1) are defined as stop categories which indicate when to stop generating sisters. d(i) is a distance measure, a function of the length of the surface string between the head and the previously generated sister. Collins’ Model 2 further refines the initial model by incorporating the complement/adjunct distinction and subcategorization frames. The generative process is enhanced to include a probabilistic choice of left and right subcategorization frames. The probability of a rule is now: Ph(H|P,h)×Plc(LC|P,H,h)×Prc(RC|P,H,h) ×∏m+1 i=1 Pr(Ri(ri)|P,h,H,d(i),RC) ×∏n+1 i=1 Pl(Li(li)|P,h,H,d(i),LC) (3) Here, LC and RC are left and right subcat frames, multisets specifying the complements that the head requires in its left or right sister. The subcat requirements are added to the conditioning context. As complements are generated, they are removed from the appropriate subcat multiset. 5 Experiment 1: Unlexicalized Model 5.1 Method This experiment was designed to compare the performance of the unlexicalized baseline model on four different datasets, created by the tree transformations described in Section 3: compounds expanded (Exp), compounds contracted (Cont), compounds expanded with coordination raised (Exp+CR), and compounds contracted with coordination raised (Cont+CR). We used BitPar (Schmid, 2004) for our unlexicalized experiments. BitPar is a parser based on a bit-vector implementation of the CKY algorithm. A grammar and lexicon were read off our training set, along with rule frequencies and frequencies for lexical items, based on which BitPar computes the rule Model LR LP CBs 0CB ≤2CB Tag Cov Exp 59.97 58.64 1.74 39.05 73.23 91.00 99.20 Exp+CR 60.75 60.57 1.57 40.77 75.03 91.08 99.09 Cont 64.19 64.61 1.50 46.74 76.80 93.30 98.48 Cont+CR 66.11 65.55 1.39 46.99 78.95 93.22 97.94 Table 1: Results for unlexicalized models (sentences ≤40 words); each model performed its own POS tagging. probabilities using maximum likelihood estimation. A frequency distribution for POS tags was also read off the training set; this distribution is used by BitPar to tag unknown words in the test data. All models were evaluated using standard Parseval measures of labeled recall (LR), labeled precision (LP), average crossing brackets (CBs), zero crossling brackets (0CB), and two or less crossing brackets (≤2CB). We also report tagging accuracy (Tag), and coverage (Cov). 5.2 Results The results for the unlexicalized model are shown in Table 1 for sentences of length ≤40 words. We find that contracting compounds increases parsing performance substantially compared to expanding compounds, raising labeled recall from around 60% to around 64% and labeled precision from around 59% to around 65%. The results show that raising coordination is also beneficial; it increases precision and recall by 1–2%, both for expanded and for nonexpanded compounds. 
Note that these results were obtained by uniformly applying coordination raising during evaluation, so as to make all models comparable. For the Exp and Cont models, the parsed output and the gold standard files were first converted by raising coordination and then the evaluation was performed. 5.3 Discussion The disappointing performance obtained for the expanded compound models can be partly attributed to the increase in the number of grammar rules (11,704 expanded vs. 10,299 contracted) and POS tags (24 expanded vs. 11 contracted) associated with that transformation. However, a more important point observation is that the two compound models do not yield comparable results, since an expanded compound has more brackets than a contracted one. We attempted to address this problem by collapsing the compounds for evaluation purposes (as described in Section 3). For example, (5) would be contracted to (4). However, this approach only works if we are certain that the model is tagging the right words as compounds. Un309 fortunately, this is rarely the case. For example, the model outputs: (6) (NCmp (N jours) (N commerc¸ants)) But in the gold standard file, jours and commerc¸ants are two distinct NPs. Collapsing the compounds therefore leads to length mismatches in the test data. This problem occurs frequently in the test set, so that such an evaluation becomes pointless. 6 Experiment 2: Lexicalized Models 6.1 Method Parsing We now compare a series of lexicalized parsing models against the unlexicalized baseline established in the previous experiment. Our is was to test if French behaves like English in that lexicalization improves parsing performance, or like German, in that lexicalization has only a small effect on parsing performance. The lexicalized parsing experiments were run using Dan Bikel’s probabilistic parsing engine (Bikel, 2002) which in addition to replicating the models described by Collins (1997) also provides a convenient interface to develop corresponding parsing models for other languages. Lexicalization requires that each rule in a grammar has one of the categories on its right hand side annotated as the head. These head rules were constructed based on the FTB annotation guidelines (provided along with the dataset), as well as by using heuristics, and were optimized on the development set. Collins’ Model 2 incorporates a complement/adjunct distinction and probabilities over subcategorization frames. Complements were marked in the training phase based on argument identification rules, tuned on the development set. Part of speech tags are generated along with the words in the models; parsing and tagging are fully integrated. To achieve this, Bikel’s parser requires a mapping of lexical items to orthographic/morphological word feature vectors. The features implemented (capitalization, hyphenation, inflection, derivation, and compound) were again optimized on the development set. Like BitPar, Bikel’s parser implements a probabilistic version of the CKY algorithm. As with normal CKY, even though the model is defined in a top-down, generative manner, decoding proceeds bottom-up. To speed up decoding, the algorithm implements beam search. Collins uses a beam width of 104, while we found that a width of 105 gave us the best coverage vs. parsing speed trade-off. 
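The head rules themselves are not reproduced in the paper; the sketch below only shows the mechanics of Collins-style head finding, with an illustrative (guessed) priority table using FTB category names.

```python
# Illustrative priorities only -- not the head table actually used for the FTB.
HEAD_RULES = {
    "SENT":  ("left",  ["VN", "VPinf", "VPpart", "NP"]),
    "VN":    ("right", ["V"]),
    "NP":    ("left",  ["N", "PRO"]),
    "PP":    ("left",  ["P"]),
    "AP":    ("left",  ["A"]),
    "AdvP":  ("left",  ["ADV"]),
}

def find_head(parent, daughters):
    """Return the index of the head daughter for the rule parent -> daughters."""
    direction, priorities = HEAD_RULES.get(parent, ("left", []))
    indices = list(range(len(daughters)))
    if direction == "right":
        indices = indices[::-1]
    for label in priorities:            # scan by category priority, then by position
        for i in indices:
            if daughters[i] == label:
                return i
    return indices[0]                   # fallback: leftmost (or rightmost) daughter
```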
Label FTB PTB Negra Label FTB PTB Negra SENT 5.84 2.22 4.55 VPpart 2.51 – – Ssub 4.41 – – VN 1.76 – – Sint 3.44 – – PP 2.10 2.03 3.08 Srel 3.92 – – NP 2.45 2.20 3.08 VP – 2.32 2.59 AdvP 2.24 – 2.08 VPinf 3.07 – – AP 1.34 – 2.22 Table 2: Average number of daughter nodes per constituents in three treebanks Flatness As already pointed out in Section 2.1, the FTB uses a flat annotation scheme. This can be quantified by computing the average number of daughters for each syntactic category in the FTB, and comparing them with the figures available for PTB and Negra (Dubey and Keller, 2003). This is done in Table 2. The absence of sentence-internal VPs explains the very high level of flatness for the sentential category SENT (5.84 daughters), compared to the PTB (2.44), and even to Negra, which is also very flat (4.55 daughters). The other sentential categories Ssub (subordinate clauses), Srel (relative clause), and Sint (interrogative clause) are also very flat. Note that the FTB uses VP nodes only for nonfinite subordinate clauses: VPinf (infinitival clause) and VPpart (participle clause); these categories are roughly comparable in flatness to the VP category in the PTB and Negra. For NP, PPs, APs, and AdvPs the FTB is roughly as flat as the PTB, and somewhat less flat than Negra. Sister-Head Model To cope with the flatness of the FTB, we implemented three additional parsing models. First, we implemented Dubey and Keller’s (2003) sister-head model, which extends Collins’ base NP model to all syntactic categories. This means that the probability function Pr in equation (2) is no longer conditioned on the head but instead on its previous sister, yielding the following definition for Pr (and by analogy Pl): Pr(Ri(ri)|P,Ri−1(ri−1),d(i)) (4) Dubey and Keller (2003) argue that this implicitly adds binary branching to the grammar, and therefore provides a way of dealing with flat annotation (in Negra and in the FTB, see Table 2). Bigram Model This model, inspired by the approach of Collins et al. (1999) for parsing the Prague Dependency Treebank, builds on Collins’ Model 2 by implementing a 1st order Markov assumption for the generation of sister non-terminals. The latter are now conditioned, not only on their head, but also on the previous sister. The probability function for Pr (and by analogy Pl) is now: Pr(Ri(ri)|P,h,H,d(i),Ri−1,RC) (5) 310 Model LR LP CBs 0CB ≤2CB Tag Cov Model 1 80.35 79.99 0.78 65.22 89.46 96.86 99.68 Model 2 80.49 79.98 0.77 64.85 90.10 96.83 99.68 SisterHead 80.47 80.56 0.78 64.96 89.34 96.85 99.57 Bigram 81.15 80.84 0.74 65.21 90.51 96.82 99.46 BigramFlat 80.30 80.05 0.77 64.78 89.13 96.71 99.57 Table 3: Results for lexicalized models (sentences ≤40 words); each model performed its own POS tagging; all lexicalized models used the Cont+CR data set The intuition behind this approach is that the model will learn that the stop symbol is more likely to follow phrases with many sisters. Finally, we also experimented with a third model (BigramFlat) that applies the bigram model only for categories with high degrees of flatness (SENT, Srel, Ssub, Sint, VPinf, and VPpart). 6.2 Results Constituency Evaluation The lexicalized models were tested on the Cont+CR data set, i.e., compounds were contracted and coordination was raised (this is the configuration that gave the best performance in Experiment 1). Table 3 shows that all lexicalized models achieve a performance of around 80% recall and precision, i.e., they outperform the best unlexicalized model by at least 14% (see Table 1). 
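To make the difference between the variants explicit, the conditioning contexts used when generating the next sister (Equations 2–5) can be written as history tuples. This is a schematic rendering only; the actual parser also uses back-off levels that are not shown here.

```python
def next_sister_history(model, parent, head, headword, dist, prev_sister, subcat):
    """Conditioning context for generating the next right sister R_i."""
    if model == "model1":        # Eq. (2)
        return (parent, head, headword, dist)
    if model == "model2":        # Eq. (3): adds the remaining subcat frame
        return (parent, head, headword, dist, subcat)
    if model == "sister-head":   # Eq. (4): previous sister instead of the head
        return (parent, prev_sister, dist)
    if model == "bigram":        # Eq. (5): head and previous sister
        return (parent, head, headword, dist, prev_sister, subcat)
    raise ValueError(f"unknown model: {model}")
```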
This is consistent with what has been reported for English on the PTB. Collins’ Model 2, which adds the complement/adjunct distinction and subcategorization frames achieved only a very small improvement over Collins’ Model 1, which was not statistically significant using a χ2 test. It might well be that the annotation scheme of the FTB does not lend itself particularly well to the demands of Model 2. Moreover, as Collins (1997) mentions, some of the benefits of Model 2 are already captured by inclusion of the distance measure. A further small improvement was achieved using Dubey and Keller’s (2003) sister-head model; however, again the difference did not reach statistical significance. The bigram model, however, yielded a statistically significant improvement over Collins’ Model 1 (recall χ2 = 3.91, df = 1, p ≤.048; precision χ2 = 3.97, df = 1, p ≤.046). This is consistent with the findings of Collins et al. (1999) for Czech, where the bigram model upped dependency accuracy by about 0.9%, as well as for English where Charniak (2000) reports an increase in F-score of approximately 0.3%. The BigramFlat model, which applies the bigram model to only those labels which have a high degree of flatness, performs Model LR LP CBs 0CB ≤2CB Tag Cov Exp+CR 65.50 64.76 1.49 42.36 77.48 100.0 97.83 Cont+CR 69.35 67.93 1.34 47.43 80.25 100.0 96.97 Model1 81.51 81.43 0.78 64.60 89.25 98.54 99.78 Model2 81.69 81.59 0.78 63.84 89.69 98.55 99.78 SisterHead 81.08 81.56 0.79 64.35 89.57 98.51 99.57 Bigram 81.78 81.91 0.78 64.96 89.12 98.81 99.67 BigramFlat 81.14 81.19 0.81 63.37 88.80 98.80 99.67 Table 4: Results for lexicalized and unlexicalized models (sentences ≤40 words) with correct POS tags supplied; all lexicalized models used the Cont+CR data set at roughly the same level as Model 1. The models in Tables 1 and 3 implemented their own POS tagging. Tagging accuracy was 91–93% for BitPar (unlexicalized models) and around 96% for the word-feature enhanced tagging model of the Bikel parser (lexicalized models). POS tags are an important cue for parsing. To gain an upper bound on the performance of the parsing models, we reran the experiments by providing the correct POS tag for the words in the test set. While BitPar always uses the tags provided, the Bikel parser only uses them for words whose frequency is less than the unknown word threshold. As Table 4 shows, perfect tagging increased parsing performance in the lexicalized models by around 3%. This shows that the poor POS tagging performed by BitPar is one of the reasons of the poor performance of the lexicalized models. The impact of perfect tagging is less drastic on the lexicalized models (around 1% increase). However, our main finding, viz., that lexicalized models outperform unlexicalized models considerable on the FTB, remains valid, even with perfect tagging.3 Dependency Evaluation We also evaluated our models using dependency measures, which have been argued to be more annotation-neutral than Parseval. Lin (1995) notes that labeled bracketing scores are more susceptible to cascading errors, where one incorrect attachment decision causes the scoring algorithm to count more than one error. The gold standard and parsed trees were converted into dependency trees using the algorithm described by Lin (1995). Dependency accuracy is defined as the ratio of correct dependencies over the total number of dependencies in a sentence. (Note that this is an unlabeled dependency measure.) 
Dependency accuracy and constituency F-score are shown 3It is important to note that the Collins model has a range of other features that set it apart from a standard unlexicalized PCFG (notably Markovization), as discussed in Section 4.2. It is therefore likely that the gain in performance is not attributable to lexicalization alone. 311 Model Dependency F-score Cont+CR 73.09 65.83 Model 2 83.96 80.23 SisterHead 84.00 80.51 Bigram 84.20 80.99 Table 5: Dependency vs. constituency scores for lexicalized and unlexicalized models in Table 5 for the most relevant FTB models. (Fscore is computed as the geometric mean of labeled recall and precision.) Numerically, dependency accuracies are higher than constituency F-scores across the board. However, the effect of lexicalization is the same on both measures: for the FTB, a gain of 11% in dependency accuracy is observed for the lexicalized model. 7 Experiment 3: Crosslinguistic Comparison The results reported in Experiments 1 and 2 shed some light on the role of lexicalization for parsing French, but they are not strictly comparable to the results that have been reported for other languages. This is because the treebanks available for different languages typically vary considerably in size: our FTB training set was about 8,500 sentences large, while the standard training set for the PTB is about 40,000 sentences in size, and the Negra training set used by Dubey and Keller (2003) comprises about 18,600 sentences. This means that the differences in the effect of lexicalization that we observe could be simply due to the size of the training set: lexicalized models are more susceptible to data sparseness than unlexicalized ones. We therefore conducted another experiment in which we applied Collins’ Model 2 to subsets of the PTB that were comparable in size to our FTB data sets. We combined sections 02–05 and 08 of the PTB (8,345 sentences in total) to form the training set, and the first 1,000 sentences of section 23 to form our test set. As a baseline model, we also run an unlexicalized PCFG on the same data sets. For comparison with Negra, we also include the results of Dubey and Keller (2003): they report the performance of Collins’ Model 1 on a data set of 9,301 sentences and a test set of 1,000 sentences, which are comparable in size to our FTB data sets.4 The results of the crosslinguistic comparison are shown in Table 6.5 We conclude that the effect of 4Dubey and Keller (2003) report only F-scores for the reduced data set (see their Figure 1); the other scores were provided by Amit Dubey. No results for Model 2 are available. 5For this experiments, the same POS tagging model was applied to the PTB and the FTB data, which is why the FTB figCorpus Model LR LP CBs 0CB ≤2CB FTB Cont+CR 66.11 65.55 1.39 46.99 78.95 Model 2 79.20 78.58 0.83 63.33 89.23 PTB Unlex 72.79 75.23 2.54 31.56 58.98 Model 2 86.43 86.79 1.17 57.80 82.44 Negra Unlex 69.64 67.27 1.12 54.21 82.84 Model 1 68.33 67.32 0.83 60.43 88.78 Table 6: The effect of lexicalization on different corpora for training sets of comparable size (sentences ≤40 words) lexicalization is stable even if the size of the training set is held constant across languages: For the FTB we find that lexicalization increases F-score by around 13%. Also for the PTB, we find an effect of lexicalization of about 14%. For the German Negra treebank, however, the performance of the lexicalized and the unlexicalized model are almost indistinguishable. 
(This is true for Collins’ Model 1; note that Dubey and Keller (2003) do report a small improvement for the lexicalized sister-head model.) 8 Related Work We are not aware of any previous attempts to build a probabilistic, treebank-trained parser for French. However, there is work on chunking for French. The group who built the French Treebank (Abeill´e et al., 2000) used a rule-based chunker to automatically annotate the corpus with syntactic structures, which were then manually corrected. They report an unlabeled recall/precision of 94.3/94.2% for opening brackets and 92.2/91.4% for closing brackets, and a label accuracy of 95.6%. This result is not comparable to our results for full parsing. Giguet and Vergne (1997) present use a memorybased learner to predict chunks and dependencies between chunks. The system is evaluated on texts from Le Monde (different from the FTB texts). Results are only reported for verb-object dependencies, for which recall/precision is 94.04/96.39%. Again, these results are not comparable to ours, which were obtained using a different corpus, a different dependency scheme, and for a full set of dependencies. 9 Conclusions In this paper, we provided the first probabilistic, treebank-trained parser for French. In Experiment 1, we established an unlexicalized baseline model, which yielded a labeled precision and recall of about 66%. We experimented with a number of tree transformation that take account of the peculiarities of the annotation of the French Treeures are slightly lower than in Table 3. 312 bank; the best performance was obtained by raising coordination and contracting compounds (which have internal structure in the FTB). In Experiment 2, we explored a range of lexicalized parsing models, and found that lexicalization improved parsing performance by up to 15%: Collins’ Models 1 and 2 performed at around 80% LR and LP. No significant improvement could be achieved by switching to Dubey and Keller’s (2003) sister-head model, which has been claimed to be particularly suitable for treebanks with flat annotation, such as the FTB. A small but significant improvement (to 81% LR and LP) was obtained by a bigram model that combines features of the sister-head model and Collins’ model. These results have important implications for crosslinguistic parsing research, as they allow us to tease apart language-specific and annotationspecific effects. Previous work for English (e.g., Magerman, 1995; Collins, 1997) has shown that lexicalization leads to a sizable improvement in parsing performance. English is a language with nonflexible word order and with a treebank with a nonflat annotation scheme (see Table 2). Research on German (Dubey and Keller, 2003) showed that lexicalization leads to no sizable improvement in parsing performance for this language. German has a flexible word order and a flat treebank annotation, both of which could be responsible for this counterintuitive effect. The results for French presented in this paper provide the missing piece of evidence: they show that French behaves like English in that it shows a large effect of lexicalization. Like English, French is a language with non-flexible word order, but like the German Treebank, the French Treebank has a flat annotation. We conclude that Dubey and Keller’s (2003) results for German can be attributed to a language-specific factor (viz., flexible word order) rather than to an annotation-specific factor (viz., flat annotation). 
We confirmed this claim in Experiment 3 by showing that the effects of lexicalization observed for English, French, and German are preserved if the size of the training set is kept constant across languages. An interesting prediction follows from the claim that word order flexibility, rather than flatness of annotation, is crucial for lexicalization. A language which has a flexible word order (like German), but a non-flat treebank (like English) should show no effect of lexicalization, i.e., lexicalized models are predicted not to outperform unlexicalized ones. In future work, we plan to test this prediction for Korean, a flexible word order language whose treebank (Penn Korean Treebank) has a non-flat annotation. References Abeill´e, Anne, Lionel Clement, and Alexandra Kinyon. 2000. Building a treebank for French. In Proceedings of the 2nd International Conference on Language Resources and Evaluation. Athens. Bikel, Daniel M. 2002. Design of a multi-lingual, parallelprocessing statistical parsing engine. In Proceedings of the 2nd International Conference on Human Language Technology Research. Morgan Kaufmann, San Francisco. Bikel, Daniel M. 2004. A distributional analysis of a lexicalized statistical parsing model. In Dekang Lin and Dekai Wu, editors, Proceedings of the Conference on Empirical Methods in Natural Language Processing. Barcelona, pages 182–189. Bikel, Daniel M. and David Chiang. 2000. Two statistical parsing models applied to the Chinese treebank. In Proceedings of the 2nd ACL Workshop on Chinese Language Processing. Hong Kong. Charniak, Eugene. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics. Seattle, WA, pages 132–139. Collins, Michael. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and the 8th Conference of the European Chapter of the Association for Computational Linguistics. Madrid, pages 16–23. Collins, Michael, Jan Hajiˇc, Lance Ramshaw, and Christoph Tillmann. 1999. A statistical parser for Czech. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. University of Maryland, College Park. Dubey, Amit and Frank Keller. 2003. Probabilistic parsing for German using sister-head dependencies. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Sapporo, pages 96–103. Giguet, Emmanuel and Jacques Vergne. 1997. From part-ofspeech tagging to memory-based deep syntactic analysis. In Proceedings of the International Workshop on Parsing Technologies. Boston, pages 77–88. Klein, Dan and Christopher Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Sapporo. Levy, Roger and Christopher Manning. 2003. Is it harder to parse Chinese, or the Chinese treebank? In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Sapporo. Lin, Dekang. 1995. A dependency-based method for evaluating broad-coverage parsers. In Proceedings of the International Joint Conference on Artificial Intelligence. Montreal, pages 1420–1425. Magerman, David. 1995. Statistical decision-tree models for parsing. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Cambridge, MA, pages 276–283. Marcus, Mitchell P., Beatrice Santorini, and Mary Ann Marcinkiewicz. 
1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics 19(2):313–330. Schiehlen, Michael. 2004. Annotation strategies for probabilistic parsing in German. In Proceedings of the 20th International Conference on Computational Linguistics. Geneva. Schmid, Helmut. 2004. Efficient parsing of highly ambiguous context-free grammars with bit vectors. In Proceedings of the 20th International Conference on Computational Linguistics. Geneva. Skut, Wojciech, Brigitte Krenn, Thorsten Brants, and Hans Uszkoreit. 1997. An annotation scheme for free word order languages. In Proceedings of the 5th Conference on Applied Natural Language Processing. Washington, DC. 313 | 2005 | 38 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 314–321, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics What to do when lexicalization fails: parsing German with suffix analysis and smoothing Amit Dubey University of Edinburgh [email protected] Abstract In this paper, we present an unlexicalized parser for German which employs smoothing and suffix analysis to achieve a labelled bracket F-score of 76.2, higher than previously reported results on the NEGRA corpus. In addition to the high accuracy of the model, the use of smoothing in an unlexicalized parser allows us to better examine the interplay between smoothing and parsing results. 1 Introduction Recent research on German statistical parsing has shown that lexicalization adds little to parsing performance in German (Dubey and Keller, 2003; Beil et al., 1999). A likely cause is the relative productivity of German morphology compared to that of English: German has a higher type/token ratio for words, making sparse data problems more severe. There are at least two solutions to this problem: first, to use better models of morphology or, second, to make unlexicalized parsing more accurate. We investigate both approaches in this paper. In particular, we develop a parser for German which attains the highest performance known to us by making use of smoothing and a highly-tuned suffix analyzer for guessing part-of-speech (POS) tags from the input text. Rather than relying on smoothing and suffix analysis alone, we also utilize treebank transformations (Johnson, 1998; Klein and Manning, 2003) instead of a grammar induced directly from a treebank. The organization of the paper is as follows: Section 2 summarizes some important aspects of our treebank corpus. In Section 3 we outline several techniques for improving the performance of unlexicalized parsing without using smoothing, including treebank transformations, and the use of suffix analysis. We show that suffix analysis is not helpful on the treebank grammar, but it does increase performance if used in combination with the treebank transformations we present. Section 4 describes how smoothing can be incorporated into an unlexicalized grammar to achieve state-of-the-art results in German. Rather using one smoothing algorithm, we use three different approaches, allowing us to compare the relative performance of each. An error analysis is presented in Section 5, which points to several possible areas of future research. We follow the error analysis with a comparison with related work in Section 6. Finally we offer concluding remarks in Section 7. 2 Data The parsing models we present are trained and tested on the NEGRA corpus (Skut et al., 1997), a handparsed corpus of German newspaper text containing approximately 20,000 sentences. It is available in several formats, and in this paper, we use the Penn Treebank (Marcus et al., 1993) format of NEGRA. The annotation used in NEGRA is similar to that used in the English Penn Treebank, with some differences which make it easier to annotate German syntax. German’s flexible word order would have required an explosion in long-distance dependencies (LDDs) had annotation of NEGRA more closely resembled that of the Penn Treebank. The NEGRA designers therefore chose to use relatively flat trees, encoding elements of flexible word order us314 ing grammatical functions (GFs) rather than LDDs wherever possible. 
To illustrate flexible word order, consider the sentences Der Mann sieht den Jungen (‘The man sees the boy’) and Den Jungen sieht der Mann. Despite the fact that the subject and object are swapped in the second sentence, the meaning of both is essentially the same. (Pragmatically speaking, the second sentence has a slightly different meaning; a better translation might be: ‘It is the boy the man sees.’) The two possible word orders are disambiguated by the use of the nominative case for the subject (marked by the article der) and the accusative case for the object (marked by den) rather than their position in the sentence. Whenever the subject appears after the verb, the non-standard position may be annotated using a long-distance dependency (LDD). However, as mentioned above, this information can also be retrieved from the grammatical function of the respective noun phrases: the GFs of the two NPs above would be ‘subject’ and ‘accusative object’ regardless of their position in the sentence. These labels may therefore be used to recover the underlying dependencies without having to resort to LDDs. This is the approach used in NEGRA. It does have limitations: it is only possible to use GF labels instead of LDDs when all the nodes of interest are dominated by the same parent. To maximize cases where all necessary nodes are dominated by the same parent, NEGRA uses flat ‘dependency-style’ rules. For example, there is no VP node when there is no overt auxiliary verb. Under the NEGRA annotation scheme, the first sentence above would have a rule S → NP-SB VVFIN NP-OA and the second, S → NP-OA VVFIN NP-SB, where SB denotes subject and OA denotes accusative object. 3 Parsing with Grammatical Functions 3.1 Model As explained above, this paper focuses on unlexicalized grammars. In particular, we make use of probabilistic context-free grammars (PCFGs; Booth (1969)) for our experiments. A PCFG assigns each context-free rule LHS → RHS a conditional probability Pr(RHS | LHS). If a parser were to be given POS tags as input, this would be the only distribution required. However, in this paper we are concerned with the more realistic problem of accepting text as input. Therefore, the parser also needs a probability distribution Pw(w | LHS) to generate words. The probability of a tree is calculated by multiplying the probabilities of all the rules and words generated in the derivation of the tree. The rules are simply read out from the treebank, and the probabilities are estimated from the frequency of rules in the treebank. More formally: Pr(RHS | LHS) = c(LHS → RHS) / c(LHS) (1) The probabilities of words given tags are similarly estimated from the frequency of word–tag co-occurrences: Pw(w | LHS) = c(LHS, w) / c(LHS) (2) To handle unseen or infrequent words, all words whose frequency falls below a threshold Ω are grouped together in an ‘unknown word’ token, which is then treated like an additional word. For our experiments, we use Ω = 10. We consider several variations of this simple model by changing both Pr and Pw. In addition to the standard formulation in Equation (1), we consider two alternative variants of Pr. The first is a Markov context-free rule (Magerman, 1995; Charniak, 2000). A rule may be turned into a Markov rule by first binarizing it, then making independence assumptions on the new binarized rules. Binarizing the rule A → B1 ⋯ Bn results in a number of smaller rules A → B1 A⟨B1⟩, A⟨B1⟩ → B2 A⟨B1B2⟩, …, A⟨B1⋯Bn−1⟩ → Bn. Binarization does not change the probability of the rule: P(B1 ⋯ Bn | A) = ∏_{i=1}^{n} P(Bi | A, B1, …, B_{i−1}). Making the 2nd order Markov assumption ‘forgets’ everything earlier than the 2 previous sisters. A rule would now be in the form A⟨B_{i−2}B_{i−1}⟩ → Bi A⟨B_{i−1}Bi⟩, and the probability would be: P(B1 ⋯ Bn | A) = ∏_{i=1}^{n} P(Bi | A, B_{i−2}, B_{i−1}).
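To make the estimation concrete, the following is a minimal Python sketch of how the relative-frequency estimate of Equation (1) can be read off treebank trees, and of how a rule can be binarized under the 2nd order Markov assumption. It is an illustration only, not the implementation used in the paper: the tree representation ((label, children) tuples with words as childless leaves) and all names are assumptions of the sketch.

from collections import defaultdict

def estimate_pcfg(trees):
    """Relative-frequency estimate of Pr(RHS | LHS), as in Equation (1)."""
    rule_counts = defaultdict(int)
    lhs_counts = defaultdict(int)
    def visit(node):
        label, children = node
        if not children:                      # a word leaf: nothing to read off
            return
        rhs = tuple(child[0] for child in children)
        rule_counts[(label, rhs)] += 1        # c(LHS -> RHS)
        lhs_counts[label] += 1                # c(LHS)
        for child in children:
            visit(child)
    for tree in trees:
        visit(tree)
    return {rule: count / lhs_counts[rule[0]] for rule, count in rule_counts.items()}

def markovize(lhs, rhs, order=2):
    """Binarize LHS -> B1 ... Bn, keeping only the last `order` sisters in new labels."""
    rules, history, current = [], [], lhs
    for i, b in enumerate(rhs):
        if i == len(rhs) - 1:
            rules.append((current, (b,)))     # final sister closes the rule
        else:
            history = (history + [b])[-order:]
            new_label = lhs + "<" + ",".join(history) + ">"
            rules.append((current, (b, new_label)))
            current = new_label
    return rules

Reading off the binarized rules produced by markovize, rather than the original flat rules, and re-estimating by relative frequency gives the Markov-rule variant of Pr.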
The other rule type we consider is linear precedence/immediate dominance (LP/ID) rules (Gazdar et al., 1985). If a context-free rule can be thought of as a LHS token with an ordered list of tokens on the RHS, then an LP/ID rule can be thought of as a LHS token with a multiset of tokens on the RHS together with some constraints on the possible orders of tokens on the RHS. Uszkoreit (1987) argues that LP/ID rules with violatable ‘soft’ constraints are suitable for modelling some aspects of German word order. This makes a probabilistic formulation of LP/ID rules ideal: probabilities act as soft constraints. Our treatment of probabilistic LP/ID rules generates children one constituent at a time, conditioning upon the parent and a multiset of previously generated children. Formally, the probability of the rule is approximated as: P(B1 ⋯ Bn | A) = ∏_{i=1}^{n} P(Bi | A, {Bj : j < i}). In addition to the two additional formulations of the Pr distribution, we also consider one variant of the Pw distribution, which includes the suffix analysis. It is important to clarify that we only change the handling of uncommon and unknown words; those which occur often are handled as normal. Previous work has suggested different choices for Pw in the face of unknown words: Schiehlen (2004) suggests using a different unknown word token for capitalized versus uncapitalized unknown words (German orthography dictates that all common nouns are capitalized) and Levy and Manning (2004) consider inspecting the last letter of the unknown word to guess the part-of-speech (POS) tags. Both of these models are relatively impoverished when compared to the approaches to handling unknown words which have been proposed in the POS tagging literature. Brants (2000) describes a POS tagger with a highly tuned suffix analyzer which considers both capitalization and suffixes up to 10 letters long. This tagger was developed with German in mind, but neither it nor any other advanced POS tagger morphology analyzer has ever been tested with a full parser. Therefore, we take the novel step of integrating this suffix analyzer into the parser for the second Pw distribution. 3.2 Treebank Re-annotation Automatic treebank transformations are an important step in developing an accurate unlexicalized parser (Johnson, 1998; Klein and Manning, 2003). Most of our transformations focus upon one part of the NEGRA treebank in particular: the GF labels. Below is a list of GF re-annotations we utilise: Coord GF In NEGRA, a co-ordinated accusative NP rule might look like NP-OA → NP-CJ KON NP-CJ. KON is the POS tag for a conjunct, and CJ denotes that the function of the NP is a coordinate sister. Such a rule hides an important fact: the two co-ordinate sisters are also accusative objects. The Coord GF re-annotation would therefore replace the above rule with NP-OA → NP-OA KON NP-OA. NP case German articles and pronouns are strongly marked for case. However, the grammatical function of all articles is usually NK, meaning noun kernel. To allow case markings in articles and pronouns to ‘communicate’ with the case labels on the GFs of NPs, we copy these GFs down into the POS tags of articles and pronouns. For example, a rule like NP-OA → ART-NK NN-NK would be replaced by NP-OA → ART-OA NN-NK. A similar improvement has been independently noted by Schiehlen (2004). PP case Prepositions determine the case of the NP they govern. While the case is often unambiguous (i.e. für ‘for’ always takes an accusative NP), at times the case may be ambiguous.
For instance, in ‘in’ may take either an accusative or dative NP. We use the labels -OA, -OD, etc. for unambiguous prepositions, and introduce new categories AD (accusative/dative ambiguous) and DG (dative/genitive ambiguous) for the ambiguous categories. For example, a rule such as PP P ART-NK NN-NK is replaced with PP P-AD ART-AD NN-NK if it is headed by the preposition in. SBAR marking German subordinate clauses have a different word order than main clauses. While subordinate clauses can usually be distinguished from main clauses by their GF, there are some GFs which are used in both cases. This transformation adds an SBAR category to explicitly disambiguate these 316 No suffix With suffix F-score F-score Normal rules 66.3 66.2 LP/ID rules 66.5 66.6 Markov rules 69.4 69.1 Table 1: Effect of rule type and suffix analysis. cases. The transformation does not add any extra nonterminals, rather it replaces rules such as S KOUS NP V NP (where KOUS is a complementizer POS tag) with SBAR KOUS NP V NP. S GF One may argue that, as far as syntactic disambiguation is concerned, GFs on S categories primarily serve to distinguish main clauses from subordinate clauses. As we have explicitly done this in the previous transformation, it stands to reason that the GF tags on S nodes may therefore be removed without penalty. If the tags are necessary for semantic interpretation, presumably they could be re-inserted using a strategy such as that of Blaheta and Charniak (2000) The last transformation therefore removes the GF of S nodes. 3.3 Method To allow comparisons with earlier work on NEGRA parsing, we use the same split of training, development and testing data as used in Dubey and Keller (2003). The first 18,602 sentences are used as training data, the following 1,000 form the development set, and the last 1,000 are used as the test set. We remove long-distance dependencies from all sets, and only consider sentences of length 40 or less for efficiency and memory concerns. The parser is given untagged words as input to simulate a realistic parsing task. A probabilistic CYK parsing algorithm is used to compute the Viterbi parse. We perform two sets of experiments. In the first set, we vary the rule type, and in the second, we report the additive results of the treebank reannotations described in Section 3.2. The three rule types used in the first set of experiments are standard CFG rules, our version of LP/ID rules, and 2nd order Markov CFG rules. The second battery of experiments was performed on the model with Markov rules. In both cases, we report PARSEVAL labeled No suffix With suffix F-score F-score GF Baseline 69.4 69.1 +Coord GF 70.2 71.5 +NP case 71.1 72.4 +PP case 71.0 72.7 +SBAR 70.9 72.6 +S GF 71.3 73.1 Table 2: Effect of re-annotation and suffix analysis with Markov rules. bracket scores (Magerman, 1995), with the brackets labeled by syntactic categories but not grammatical functions. Rather than reporting precision and recall of labelled brackets, we report only the F-score, i.e. the harmonic mean of precision and recall. 3.4 Results Table 1 shows the effect of rule type choice, and Table 2 lists the effect of the GF re-annotations. From Table 1, we see that Markov rules achieve the best performance, ahead of both standard rules as well as our formulation of probabilistic LP/ID rules. In the first group of experiments, suffix analysis marginally lowers performance. However, a different pattern emerges in the second set of experiments. 
Suffix analysis consistently does better than the simpler word generation probability model. Looking at the treebank transformations with suffix analysis enabled, we find the coordination re-annotation provides the greatest benefit, boosting performance by 2.4 to 71.5. The NP and PP case re-annotations together raise performance by 1.2 to 72.7. While the SBAR annotation slightly lowers performance, removing the GF labels from S nodes increased performance to 73.1. 3.5 Discussion There are two primary results: first, although LP/ID rules have been suggested as suitable for German’s flexible word order, it appears that Markov rules actually perform better. Second, adding suffix analysis provides a clear benefit, but only after the inclusion of the Coord GF transformation. While the SBAR transformation slightly reduces performance, recall that we argued the S GF transformation only made sense if the SBAR transformation is already in place. To test if this was indeed the case, we re-ran the final experiment, but excluded the SBAR transformation. We did indeed find that applying S GF without the SBAR transformation reduced performance. 4 Smoothing & Search With the exception of DOP models (Bod, 1995), it is uncommon to smooth unlexicalized grammars. This is in part for the sake of simplicity: unlexicalized grammars are interesting because they are simple to estimate and parse, and adding smoothing makes both estimation and parsing nearly as complex as with fully lexicalized models. However, because lexicalization adds little to the performance of German parsing models, it is therefore interesting to investigate the impact of smoothing on unlexicalized parsing models for German. Parsing an unsmoothed unlexicalized grammar is relatively efficient because the grammar constrains the search space. As a smoothed grammar does not have a constrained search space, it is necessary to find other means to make parsing faster. Although it is possible to efficiently compute the Viterbi parse (Klein and Manning, 2002) using a smoothed grammar, the most common approach to increase parsing speed is to use some form of beam search (cf. Goodman (1998)), a strategy we follow here. 4.1 Models We experiment with three different smoothing models: the modified Witten-Bell algorithm employed by Collins (1999), the modified Kneser-Ney algorithm of Chen and Goodman (1998), and the smoothing algorithm used in the POS tagger of Brants (2000). All are variants of linear interpolation, and are used with 2nd order Markovization. Under this regime, the probability of adding the ith child to A → B1 ⋯ Bn is estimated as
P(Bi | A, B_{i−1}, B_{i−2}) = λ1 P(Bi | A, B_{i−1}, B_{i−2}) + λ2 P(Bi | A, B_{i−1}) + λ3 P(Bi | A) + λ4 P(Bi)
The models differ in how the λ’s are estimated. For both the Witten-Bell and Kneser-Ney algorithms, the λ’s are a function of the context A, B_{i−2}, B_{i−1}. By contrast, in Brants’ algorithm the λ’s are constant for all possible contexts. As both the Witten-Bell and Kneser-Ney variants are fairly well known, we do not describe them further. However, as Brants’ approach (to our knowledge) has not been used elsewhere, and because it needs to be modified for our purposes, we show the version of the algorithm we use in Figure 1.
λ1 = λ2 = λ3 = 0
for each trigram x_{i−2} x_{i−1} x_i with c(x_i, x_{i−1}, x_{i−2}) > 0
    d3 = (c(x_i, x_{i−1}, x_{i−2}) − 1) / (c(x_{i−1}, x_{i−2}) − 1) if c(x_{i−1}, x_{i−2}) ≠ 1, otherwise 0
    d2 = (c(x_i, x_{i−1}) − 1) / (c(x_{i−1}) − 1) if c(x_{i−1}) ≠ 1, otherwise 0
    d1 = (c(x_i) − 1) / (N − 1)
    if d3 = max(d1, d2, d3) then λ3 += c(x_i, x_{i−1}, x_{i−2})
    elseif d2 = max(d1, d2, d3) then λ2 += c(x_i, x_{i−1}, x_{i−2})
    else λ1 += c(x_i, x_{i−1}, x_{i−2})
    end
λ1 = λ1 / (λ1 + λ2 + λ3); λ2 = λ2 / (λ1 + λ2 + λ3); λ3 = λ3 / (λ1 + λ2 + λ3)
Figure 1: Smoothing estimation based on the Brants (2000) approach for POS tagging.
4.2 Method The purpose of this experiment is not only to improve parsing results, but also to investigate the overall effect of smoothing on parse accuracy. Therefore, we do not simply report results with the best model from Section 3. Rather, we re-do each modification in Section 3 with both search strategies (Viterbi and beam) in the unsmoothed case, and with all three smoothing algorithms with beam search. The beam has a variable width, which means an arbitrary number of edges may be considered, as long as their probability is within 4 × 10^{−3} of the best edge in a given span.
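For readers who prefer running code to the pseudocode of Figure 1, the following Python function is a direct transcription of that procedure. The count dictionaries (trigram, bigram and unigram frequencies over the Markovized histories) and the total count N are assumed to be precomputed and mutually consistent; the names are illustrative only.

def brants_lambdas(tri_counts, bi_counts, uni_counts, N):
    """Deleted-interpolation weights estimated as in Figure 1 (after Brants, 2000).
    tri_counts[(x2, x1, x0)], bi_counts[(x1, x0)], uni_counts[x0] are raw
    frequencies, with x0 the current item and x1, x2 the two previous ones."""
    lam1 = lam2 = lam3 = 0.0
    for (x2, x1, x0), c in tri_counts.items():
        if c <= 0:
            continue
        hist_bi = bi_counts.get((x2, x1), 0)          # c(x_{i-1}, x_{i-2})
        d3 = (c - 1) / (hist_bi - 1) if hist_bi != 1 else 0.0
        bi = bi_counts.get((x1, x0), 0)               # c(x_i, x_{i-1})
        hist_uni = uni_counts.get(x1, 0)              # c(x_{i-1})
        d2 = (bi - 1) / (hist_uni - 1) if hist_uni != 1 else 0.0
        d1 = (uni_counts.get(x0, 0) - 1) / (N - 1)    # c(x_i) over the total count
        best = max(d1, d2, d3)
        if d3 == best:
            lam3 += c
        elif d2 == best:
            lam2 += c
        else:
            lam1 += c
    total = lam1 + lam2 + lam3
    return (lam1 / total, lam2 / total, lam3 / total) if total > 0 else (1/3, 1/3, 1/3)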
4.3 Results Table 3 summarizes the results. The best result in each column is italicized, and the overall best result is shown in bold.
No Smoothing (Viterbi) / No Smoothing (Beam) / Brants (Beam) / Kneser-Ney (Beam) / Witten-Bell (Beam)
GF Baseline 69.1 70.3 72.3 72.6 72.3
+Coord GF 71.5 72.7 75.2 75.4 74.5
+NP case 72.4 73.3 76.0 76.1 75.6
+PP case 72.7 73.2 76.1 76.2 75.7
+SBAR 72.6 73.1 76.3 76.0 75.3
+S GF Removal 73.1 72.6 75.7 75.3 75.1
Table 3: Effect of various smoothing algorithms.
The column titled Viterbi reproduces the second column of Table 2, whereas the column titled Beam shows the result of re-annotation using beam search, but no smoothing. The best result with beam search is 73.3, slightly higher than without beam search. Among smoothing algorithms, the Brants approach yields the highest results, of 76.3, with the modified Kneser-Ney algorithm close behind, at 76.2. The modified Witten-Bell algorithm achieved an F-score of 75.7. 4.4 Discussion Overall, the best-performing model, using Brants smoothing, achieves a labelled bracketing F-score of 76.2, higher than earlier results reported by Dubey and Keller (2003) and Schiehlen (2004). It is surprising that the Brants algorithm performs favourably compared to the better-known modified Kneser-Ney algorithm. This might be due to the heritage of the two algorithms. Kneser-Ney smoothing was designed for language modelling, where there are tens of thousands or hundreds of thousands of tokens having a Zipfian distribution. With all transformations included, the nonterminals of our grammar did have a Zipfian marginal distribution, but there were only several hundred tokens. The Brants algorithm was specifically designed for distributions with fewer tokens. Also surprising is the fact that each smoothing algorithm reacted differently to the various treebank transformations. It is obvious that the choice of search and smoothing algorithm adds bias to the final result. However, our results indicate that the choice of search and smoothing algorithm also adds a degree of variance as improvements are added to the parser. This is worrying: at times in the literature, details of search or smoothing are left out (e.g. Charniak (2000)). Given the degree of variance due to search and smoothing, it raises the question of whether it is in fact possible to reproduce such results without the necessary details. (As an anonymous reviewer pointed out, it is not always straightforward to reproduce statistical parsing results even when the implementation details are given (Bikel, 2004).) 5 Error Analysis While it is uncommon to offer an error analysis for probabilistic parsing, Levy and Manning (2003) argue that a careful error classification can reveal possible improvements. Although we leave the implementation of any improvements to future research, we do discuss several common errors. Because the parser with Brants smoothing performed best, we use that as the basis of our error analysis. First, we found that POS tagging errors had a strong effect on parsing results. This is surprising, given that the parser is able to assign POS tags with a high degree of accuracy. POS tagging results are comparable to the best stand-alone POS taggers, achieving results of 97.1% on the test set, matching the performance of the POS tagger described by Brants (2000). When GF labels are included (e.g. considering ART-SB instead of just ART), tagging accuracy falls to 90.1%. To quantify the effect of POS tagging errors, we re-parsed with correct POS tags (rather than letting the parser guess the tags), and found that labelled bracket F-scores increase from 76.3 to 85.2.
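The labelled bracket scores quoted here and throughout are PARSEVAL-style figures over (category, span) constituents. As a point of reference only, the short Python sketch below computes such an F-score from gold and predicted bracket multisets; it is a simplified illustration, not the exact evaluation script used for the experiments.

from collections import Counter

def labelled_bracket_fscore(gold_brackets, predicted_brackets):
    """F-score over labelled brackets, each a (category, start, end) tuple."""
    gold = Counter(gold_brackets)
    predicted = Counter(predicted_brackets)
    matched = sum((gold & predicted).values())        # multiset intersection
    precision = matched / sum(predicted.values()) if predicted else 0.0
    recall = matched / sum(gold.values()) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: one correct bracket out of two predicted and two gold gives F = 0.5
# labelled_bracket_fscore([("S", 0, 5), ("NP", 0, 2)], [("S", 0, 5), ("NP", 1, 2)])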
A manual inspection of 100 sentences found that GF mislabelling can account for at most two-thirds of the mistakes due to POS tags. Over one third was due to genuine POS tagging errors. The most common problem was verb mistagging: they are either confused with adjectives (both take the common -en suffix), or the tense was incorrect. Mistagged verbs are a serious problem: they entail that an entire clause is parsed incorrectly. Verb mistagging is also a problem for other languages: Levy and Manning (2003) describe a similar problem in Chinese for noun/verb ambiguity. This problem might be alleviated by using a more detailed model of morphology than our suffix analyzer provides. To investigate pure parsing errors, we manually examined 100 sentences which were incorrectly parsed, but which nevertheless were assigned the correct POS tags. Incorrect modifier attachment accounted for 39% of all parsing errors (of which 77% are due to PP attachment alone). Misparsed coordination was the second most common problem, accounting for 15% of all mistakes. Another class of error appears to be due to Markovization. The boundaries of VPs are sometimes incorrect, with the parser attaching dependents directly to the S node rather than the VP. In the most extreme cases, the VP had no verb, with the main verb heading a subordinate clause. 6 Comparison with Previous Work Table 4 lists the result of the best model presented here against the earlier work on NEGRA parsing described in Dubey and Keller (2003) and Schiehlen (2004).
Model LB F-score
This paper 76.3
Dubey and Keller (2003) 74.1
Schiehlen (2004) 71.1
Table 4: Comparison with previous work.
Dubey and Keller use a variant of the lexicalized Collins (1999) model to achieve a labelled bracketing F-score of 74.1%. Schiehlen presents a number of unlexicalized models. The best model on labelled bracketing achieves an F-score of 71.8%. The work of Schiehlen is particularly interesting as he also considers a number of transformations to improve the performance of an unlexicalized parser. Unlike the work presented here, Schiehlen does not attempt to perform any suffix or morphological analysis of the input text. However, he does suggest a number of treebank transformations. One such transformation is similar to one we proposed here, the NP case transformation. His implementation is different from ours: he annotates the case of pronouns and common nouns, whereas we focus on articles and pronouns (articles and pronouns are more strongly marked for case than common nouns). The remaining transformations we present are different from those Schiehlen describes; it is possible that an even better parser may result if all the transformations were combined. Schiehlen also makes use of a morphological analyzer tool. While this includes more complete information about German morphology, our suffix analysis model allows us to integrate morphological ambiguities into the parsing system by means of lexical generation probabilities. Levy and Manning (2004) also present work on the NEGRA treebank, but are primarily interested in long-distance dependencies, and therefore do not report results on local dependencies, as we do here. 7 Conclusions In this paper, we presented the best-performing parser for German, as measured by labelled bracket scores.
The high performance was due to three factors: (i) treebank transformations (ii) an integrated model of morphology in the form of a suffix analyzer and (iii) the use of smoothing in an unlexicalized grammar. Moreover, there are possible paths for improvement: lexicalization could be added to the model, as could some of the treebank transformations suggested by Schiehlen (2004). Indeed, the suffix analyzer could well be of value in a lexicalized model. While we only presented results on the German NEGRA corpus, there is reason to believe that the techniques we presented here are also important to other languages where lexicalization provides little benefit: smoothing is a broadly-applicable technique, and if difficulties with lexicalization are due to sparse lexical data, then suffix analysis provides a useful way to get more information from lexical elements which were unseen while training. In addition to our primary results, we also provided a detailed error analysis which shows that PP attachment and co-ordination are problematic for our parser. Furthermore, while POS tagging is highly accurate, the error analysis also shows it does 320 have surprisingly large effect on parsing errors. Because of the strong impact of POS tagging on parsing results, we conjecture that increasing POS tagging accuracy may be another fruitful area for future parsing research. References Franz Beil, Glenn Carroll, Detlef Prescher, Stefan Riezler, and Mats Rooth. 1999. Inside-Outside Estimation of a Lexicalized PCFG for German. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, University of Maryland, College Park. Daniel M. Bikel. 2004. Intricacies of Collins’ Parsing Model. Computational Linguistics, 30(4). Don Blaheta and Eugene Charniak. 2000. Assigning function tags to parsed text. In Proceedings of the 1st Conference of the North American Chapter of the ACL (NAACL), Seattle, Washington., pages 234–240. Rens Bod. 1995. Enriching Linguistics with Statistics: Performance Models of Natural Language. Ph.D. thesis, University of Amsterdam. Taylor L. Booth. 1969. Probabilistic Representation of Formal Languages. In Tenth Annual IEEE Symposium on Switching and Automata Theory, pages 74–81. Thorsten Brants. 2000. TnT: A statistical part-of-speech tagger. In Proceedings of the 6th Conference on Applied Natural Language Processing, Seattle. Eugene Charniak. 2000. A Maximum-Entropy-Inspired Parser. In Proceedings of the 1st Conference of North American Chapter of the Association for Computational Linguistics, pages 132–139, Seattle, WA. Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Center for Research in Computing Technology, Harvard University. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Amit Dubey and Frank Keller. 2003. Parsing German with Sister-head Dependencies. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 96–103, Sapporo, Japan. Gerald Gazdar, Ewan Klein, Geoffrey Pullum, and Ivan Sag. 1985. Generalized Phase Structure Grammar. Basil Blackwell, Oxford, England. Joshua Goodman. 1998. Parsing inside-out. Ph.D. thesis, Harvard University. Mark Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24(4):613–632. Dan Klein and Christopher D. Manning. 2002. A* Parsing: Fast Exact Viterbi Parse Selection. 
Technical Report dbpubs/2002-16, Stanford University. Dan Klein and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423–430, Sapporo, Japan. Roger Levy and Christopher D. Manning. 2003. Is it Harder to Parse Chinese, or the Chinese Treebank? In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Roger Levy and Christopher D. Manning. 2004. Deep Dependencies from Context-Free Statistical Parsers: Correcting the Surface Dependency Approximation. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics. David M. Magerman. 1995. Statistical Decision-Tree Models for Parsing. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 276–283, Cambridge, MA. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Micheal Schiehlen. 2004. Annotation Strategies for Probabilistic Parsing in German. In Proceedings of the 20th International Conference on Computational Linguistics. Wojciech Skut, Brigitte Krenn, Thorsten Brants, and Hans Uszkoreit. 1997. An annotation scheme for free word order languages. In Proceedings of the 5th Conference on Applied Natural Language Processing, Washington, DC. Hans Uszkoreit. 1987. Word Order and Constituent Structure in German. CSLI Publications, Stanford, CA. 321 | 2005 | 39 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 26–33, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Supersense Tagging of Unknown Nouns using Semantic Similarity James R. Curran School of Information Technologies University of Sydney NSW 2006, Australia [email protected] Abstract The limited coverage of lexical-semantic resources is a significant problem for NLP systems which can be alleviated by automatically classifying the unknown words. Supersense tagging assigns unknown nouns one of 26 broad semantic categories used by lexicographers to organise their manual insertion into WORDNET. Ciaramita and Johnson (2003) present a tagger which uses synonym set glosses as annotated training examples. We describe an unsupervised approach, based on vector-space similarity, which does not require annotated examples but significantly outperforms their tagger. We also demonstrate the use of an extremely large shallow-parsed corpus for calculating vector-space semantic similarity. 1 Introduction Lexical-semantic resources have been applied successful to a wide range of Natural Language Processing (NLP) problems ranging from collocation extraction (Pearce, 2001) and class-based smoothing (Clark and Weir, 2002), to text classification (Baker and McCallum, 1998) and question answering (Pasca and Harabagiu, 2001). In particular, WORDNET (Fellbaum, 1998) has significantly influenced research in NLP. Unfortunately, these resource are extremely timeconsuming and labour-intensive to manually develop and maintain, requiring considerable linguistic and domain expertise. Lexicographers cannot possibly keep pace with language evolution: sense distinctions are continually made and merged, words are coined or become obsolete, and technical terms migrate into the vernacular. Technical domains, such as medicine, require separate treatment since common words often take on special meanings, and a significant proportion of their vocabulary does not overlap with everyday vocabulary. Burgun and Bodenreider (2001) compared an alignment of WORDNET with the UMLS medical resource and found only a very small degree of overlap. Also, lexicalsemantic resources suffer from: bias towards concepts and senses from particular topics. Some specialist topics are better covered in WORDNET than others, e.g. dog has finer-grained distinctions than cat and worm although this does not reflect finer distinctions in reality; limited coverage of infrequent words and senses. Ciaramita and Johnson (2003) found that common nouns missing from WORDNET 1.6 occurred every 8 sentences in the BLLIP corpus. By WORDNET 2.0, coverage has improved but the problem of keeping up with language evolution remains difficult. consistency when classifying similar words into categories. For instance, the WORDNET lexicographer file for ionosphere (location) is different to exosphere and stratosphere (object), two other layers of the earth’s atmosphere. These problems demonstrate the need for automatic or semi-automatic methods for the creation and maintenance of lexical-semantic resources. Broad semantic classification is currently used by lexicographers to organise the manual insertion of words into WORDNET, and is an experimental precursor to automatically inserting words directly into the WORDNET hierarchy. Ciaramita and Johnson (2003) call this supersense tagging and describe a multi-class perceptron tagger, which uses WORDNET’s hierarchical structure to create many annotated training instances from the synset glosses. 
This paper describes an unsupervised approach to supersense tagging that does not require annotated sentences. Instead, we use vector-space similarity to retrieve a number of synonyms for each unknown common noun. The supersenses of these synonyms are then combined to determine the supersense. This approach significantly outperforms the multi-class perceptron on the same dataset based on WORDNET 1.6 and 1.7.1. 26 LEX-FILE DESCRIPTION act acts or actions animal animals artifact man-made objects attribute attributes of people and objects body body parts cognition cognitive processes and contents communication communicative processes and contents event natural events feeling feelings and emotions food foods and drinks group groupings of people or objects location spatial position motive goals object natural objects (not man-made) person people phenomenon natural phenomena plant plants possession possession and transfer of possession process natural processes quantity quantities and units of measure relation relations between people/things/ideas shape two and three dimensional shapes state stable states of affairs substance substances time time and temporal relations Table 1: 25 noun lexicographer files in WORDNET 2 Supersenses There are 26 broad semantic classes employed by lexicographers in the initial phase of inserting words into the WORDNET hierarchy, called lexicographer files (lexfiles). For the noun hierarchy, there are 25 lex-files and a file containing the top level nodes in the hierarchy called Tops. Other syntactic classes are also organised using lex-files: 15 for verbs, 3 for adjectives and 1 for adverbs. Lex-files form a set of coarse-grained sense distinctions within WORDNET. For example, company appears in the following lex-files in WORDNET 2.0: group, which covers company in the social, commercial and troupe fine-grained senses; and state, which covers companionship. The names and descriptions of the noun lex-files are shown in Table 1. Some lex-files map directly to the top level nodes in the hierarchy, called unique beginners, while others are grouped together as hyponyms of a unique beginner (Fellbaum, 1998, page 30). For example, abstraction subsumes the lex-files attribute, quantity, relation, communication and time. Ciaramita and Johnson (2003) call the noun lex-file classes supersenses. There are 11 unique beginners in the WORDNET noun hierarchy which could also be used as supersenses. Ciaramita (2002) has produced a miniWORDNET by manually reducing the WORDNET hierarchy to 106 broad categories. Ciaramita et al. (2003) describe how the lex-files can be used as root nodes in a two level hierarchy with the WORDNET synsets appearing directly underneath. Other alternative sets of supersenses can be created by an arbitrary cut through the WORDNET hierarchy near the top, or by using topics from a thesaurus such as Roget’s (Yarowsky, 1992). These topic distinctions are coarser-grained than WORDNET senses, which have been criticised for being too difficult to distinguish even for experts. Ciaramita and Johnson (2003) believe that the key sense distinctions are still maintained by supersenses. They suggest that supersense tagging is similar to named entity recognition, which also has a very small set of categories with similar granularity (e.g. location and person) for labelling predominantly unseen terms. Supersense tagging can provide automated or semiautomated assistance to lexicographers adding words to the WORDNET hierarchy. 
Once this task is solved successfully, it may be possible to insert words directly into the fine-grained distinctions of the hierarchy itself. Clearly, this is the ultimate goal, to be able to insert new terms into lexical resources, extending the structure where necessary. Supersense tagging is also interesting for many applications that use shallow semantics, e.g. information extraction and question answering. 3 Previous Work A considerable amount of research addresses structurally and statistically manipulating the hierarchy of WORDNET and the construction of new wordnets using the concept structure from English. For lexical FreeNet, Beeferman (1998) adds over 350 000 collocation pairs (trigger pairs) extracted from a 160 million word corpus of broadcast news using mutual information. The co-occurrence window was 500 words which was designed to approximate average document length. Caraballo and Charniak (1999) have explored determining noun specificity from raw text. They find that simple frequency counts are the most effective way of determining the parent-child ordering, achieving 83% accuracy over types of vehicle, food and occupation. The other measure they found to be successful was the entropy of the conditional distribution of surrounding words given the noun. Specificity ordering is a necessary step for building a noun hierarchy. However, this approach clearly cannot build a hierarchy alone. For instance, entity is less frequent than many concepts it subsumes. This suggests it will only be possible to add words to an existing abstract structure rather than create categories right up to the unique beginners. Hearst and Sch¨utze (1993) flatten WORDNET into 726 categories using an algorithm which attempts to minimise the variance in category size. These categories are used to label paragraphs with topics, effectively repeating Yarowsky’s (1992) experiments using the their categories rather than Roget’s thesaurus. Sch¨utze’s (1992) 27 WordSpace system was used to add topical links, such as between ball, racquet and game (the tennis problem). Further, they also use the same vector-space techniques to label previously unseen words using the most common class assigned to the top 20 synonyms for that word. Widdows (2003) uses a similar technique to insert words into the WORDNET hierarchy. He first extracts synonyms for the unknown word using vector-space similarity measures based on Latent Semantic Analysis and then searches for a location in the hierarchy nearest to these synonyms. This same technique as is used in our approach to supersense tagging. Ciaramita and Johnson (2003) implement a supersense tagger based on the multi-class perceptron classifier (Crammer and Singer, 2001), which uses the standard collocation, spelling and syntactic features common in WSD and named entity recognition systems. Their insight was to use the WORDNET glosses as annotated training data and massively increase the number of training instances using the noun hierarchy. They developed an efficient algorithm for estimating the model over hierarchical training data. 4 Evaluation Ciaramita and Johnson (2003) propose a very natural evaluation for supersense tagging: inserting the extra common nouns that have been added to a new version of WORDNET. They use the common nouns that have been added to WORDNET 1.7.1 since WORDNET 1.6 and compare this evaluation with a standard cross-validation approach that uses a small percentage of the words from their WORDNET 1.6 training set for evaluation. 
Their results suggest that the WORDNET 1.7.1 test set is significantly harder because of the large number of abstract category nouns, e.g. communication and cognition, that appear in the 1.7.1 data, which are difficult to classify. Our evaluation will use exactly the same test sets as Ciaramita and Johnson (2003). The WORDNET 1.7.1 test set consists of 744 previously unseen nouns, the majority of which (over 90%) have only one sense. The WORDNET 1.6 test set consists of several cross-validation sets of 755 nouns randomly selected from the BLLIP training set used by Ciaramita and Johnson (2003). They have kindly supplied us with the WORDNET 1.7.1 test set and one cross-validation run of the WORDNET 1.6 test set. Our development experiments are performed on the WORDNET 1.6 test set with one final run on the WORDNET 1.7.1 test set. Some examples from the test sets are given in Table 2 with their supersenses. 5 Corpus We have developed a 2 billion word corpus, shallowparsed with a statistical NLP pipeline, which is by far the WORDNET 1.6 WORDNET 1.7.1 NOUN SUPERSENSE NOUN SUPERSENSE stock index communication week time fast food food buyout act bottler group insurer group subcompact artifact partner person advancer person health state cash flow possession income possession downside cognition contender person discounter artifact cartel group trade-off act lender person billionaire person planner artifact Table 2: Example nouns and their supersenses largest NLP processed corpus described in published research. The corpus consists of the British National Corpus (BNC), the Reuters Corpus Volume 1 (RCV1), and most of the Linguistic Data Consortium’s news text collected since 1987: Continuous Speech Recognition III (CSR-III); North American News Text Corpus (NANTC); the NANTC Supplement (NANTS); and the ACQUAINT Corpus. The components and their sizes including punctuation are given in Table 3. The LDC has recently released the English Gigaword corpus which includes most of the corpora listed above. CORPUS DOCS. SENTS. WORDS BNC 4 124 6.2M 114M RCV1 806 791 8.1M 207M CSR-III 491 349 9.3M 226M NANTC 930 367 23.2M 559M NANTS 942 167 25.2M 507M ACQUAINT 1 033 461 21.3M 491M Table 3: 2 billion word corpus statistics We have tokenized the text using the Grok-OpenNLP tokenizer (Morton, 2002) and split the sentences using MXTerminator (Reynar and Ratnaparkhi, 1997). Any sentences less than 3 words or more than 100 words long were rejected, along with sentences containing more than 5 numbers or more than 4 brackets, to reduce noise. The rest of the pipeline is described in the next section. 6 Semantic Similarity Vector-space models of similarity are based on the distributional hypothesis that similar words appear in similar contexts. This hypothesis suggests that semantic similarity can be measured by comparing the contexts each word appears in. In vector-space models each headword is represented by a vector of frequency counts recording the contexts that it appears in. The key parameters are the context extraction method and the similarity measure used to compare context vectors. Our approach to 28 vector-space similarity is based on the SEXTANT system described in Grefenstette (1994). Curran and Moens (2002b) compared several context extraction methods and found that the shallow pipeline and grammatical relation extraction used in SEXTANT was both extremely fast and produced high-quality results. 
SEXTANT extracts relation tuples (w, r, w′) for each noun, where w is the headword, r is the relation type and w′ is the other word.
RELATION DESCRIPTION
adj noun–adjectival modifier relation
dobj verb–direct object relation
iobj verb–indirect object relation
nn noun–noun modifier relation
nnprep noun–prepositional head relation
subj verb–subject relation
Table 4: Grammatical relations from SEXTANT
The efficiency of the SEXTANT approach makes the extraction of contextual information from over 2 billion words of raw text feasible. We describe the shallow pipeline in detail below. Curran and Moens (2002a) compared several different similarity measures and found that Grefenstette’s weighted JACCARD measure performed the best:
Σ min(wgt(w1, ∗r, ∗w′), wgt(w2, ∗r, ∗w′)) / Σ max(wgt(w1, ∗r, ∗w′), wgt(w2, ∗r, ∗w′)) (1)
where wgt(w, r, w′) is the weight function for relation (w, r, w′). Curran and Moens (2002a) introduced the TTEST weight function, which is used in collocation extraction. Here, the t-test compares the joint and product probability distributions of the headword and context:
(p(w, r, w′) − p(∗, r, w′) p(w, ∗, ∗)) / √(p(∗, r, w′) p(w, ∗, ∗)) (2)
where ∗ indicates a global sum over that element of the relation tuple. JACCARD and TTEST produced better quality synonyms than existing measures in the literature, so we use Curran and Moens’ configuration for our supersense tagging experiments. 6.1 Part of Speech Tagging and Chunking Our implementation of SEXTANT uses a maximum entropy POS tagger designed to be very efficient, tagging at around 100 000 words per second (Curran and Clark, 2003), trained on the entire Penn Treebank (Marcus et al., 1994). The only similar performing tool is the Trigrams ‘n’ Tags tagger (Brants, 2000), which uses a much simpler statistical model. Our implementation uses a maximum entropy chunker which has similar feature types to Koeling (2000) and is also trained on chunks extracted from the entire Penn Treebank using the CoNLL 2000 script. Since the Penn Treebank separates PPs and conjunctions from NPs, they are concatenated to match Grefenstette’s table-based results, i.e. SEXTANT always prefers noun attachment. 6.2 Morphological Analysis Our implementation uses morpha, the Sussex morphological analyser (Minnen et al., 2001), which is implemented using lex grammars for both affix splitting and generation. morpha has wide coverage – nearly 100% against the CELEX lexical database (Minnen et al., 2001) – and is very efficient, analysing over 80 000 words per second. morpha often maintains sense distinctions between singular and plural nouns; for instance, spectacles is not reduced to spectacle, but it fails to do so in other cases: glasses is converted to glass. This inconsistency is problematic when using morphological analysis to smooth vector-space models. However, morphological smoothing still produces better results in practice. 6.3 Grammatical Relation Extraction After the raw text has been POS tagged and chunked, the grammatical relation extraction algorithm is run over the chunks. This consists of five passes over each sentence that first identify noun and verb phrase heads and then collect grammatical relations between each common noun and its modifiers and verbs. A global list of grammatical relations generated by each pass is maintained across the passes. The global list is used to determine if a word is already attached.
Once all five passes have been completed this association list contains all of the nounmodifier/verb pairs which have been extracted from the sentence. The types of grammatical relation extracted by SEXTANT are shown in Table 4. For relations between nouns (nn and nnprep), we also create inverse relations (w′, r′, w) representing the fact that w′ can modify w. The 5 passes are described below. Pass 1: Noun Pre-modifiers This pass scans NPs, left to right, creating adjectival (adj) and nominal (nn) pre-modifier grammatical relations (GRs) with every noun to the pre-modifier’s right, up to a preposition or the phrase end. This corresponds to assuming right-branching noun compounds. Within each NP only the NP and PP heads remain unattached. Pass 2: Noun Post-modifiers This pass scans NPs, right to left, creating post-modifier GRs between the unattached heads of NPs and PPs. If a preposition is encountered between the noun heads, a prepositional noun (nnprep) GR is created, otherwise an appositional noun (nn) GR is created. This corresponds to assuming right-branching PP attachment. After this phrase only the NP head remains unattached. Tense Determination The rightmost verb in each VP is considered the head. A 29 VP is initially categorised as active. If the head verb is a form of be then the VP becomes attributive. Otherwise, the algorithm scans the VP from right to left: if an auxiliary verb form of be is encountered the VP becomes passive; if a progressive verb (except being) is encountered the VP becomes active. Only the noun heads on either side of VPs remain unattached. The remaining three passes attach these to the verb heads as either subjects or objects depending on the voice of the VP. Pass 3: Verb Pre-Attachment This pass scans sentences, right to left, associating the first NP head to the left of the VP with its head. If the VP is active, a subject (subj) relation is created; otherwise, a direct object (dobj) relation is created. For example, antigen is the subject of represent. Pass 4: Verb Post-Attachment This pass scans sentences, left to right, associating the first NP or PP head to the right of the VP with its head. If the VP was classed as active and the phrase is an NP then a direct object (dobj) relation is created. If the VP was classed as passive and the phrase is an NP then a subject (subj) relation is created. If the following phrase is a PP then an indirect object (iobj) relation is created. The interaction between the head verb and the preposition determine whether the noun is an indirect object of a ditransitive verb or alternatively the head of a PP that is modifying the verb. However, SEXTANT always attaches the PP to the previous phrase. Pass 5: Verb Progressive Participles The final step of the process is to attach progressive verbs to subjects and objects (without concern for whether they are already attached). Progressive verbs can function as nouns, verbs and adjectives and once again a na¨ıve approximation to the correct attachment is made. Any progressive verb which appears after a determiner or quantifier is considered a noun. Otherwise, it is a verb and passes 3 and 4 are repeated to attach subjects and objects. Finally, SEXTANT collapses the nn, nnprep and adj relations together into a single broad noun-modifier grammatical relation. Grefenstette (1994) claims this extractor has a grammatical relation accuracy of 75% after manually checking 60 sentences. 
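Putting Equations (1) and (2) together with the extracted tuples, the Python sketch below scores the similarity of two headwords from their (relation, word) context counts. The count dictionaries and the grand total are assumed to be precomputed from the relation tuples described in this section, negative t-test weights are simply clipped to zero for the sketch, and all names are illustrative rather than part of any released system.

from math import sqrt

def ttest_weight(counts, ctx_totals, w_total, grand_total, ctx):
    """t-test weight for (w, r, w'): joint versus product probabilities (Equation 2)."""
    p_joint = counts.get(ctx, 0) / grand_total      # p(w, r, w')
    p_ctx = ctx_totals.get(ctx, 0) / grand_total    # p(*, r, w')
    p_w = w_total / grand_total                     # p(w, *, *)
    denom = sqrt(p_ctx * p_w)
    return (p_joint - p_ctx * p_w) / denom if denom > 0 else 0.0

def weighted_jaccard(counts1, counts2, ctx_totals, grand_total):
    """Weighted JACCARD similarity of two headwords' context vectors (Equation 1)."""
    total1, total2 = sum(counts1.values()), sum(counts2.values())
    numerator = denominator = 0.0
    for ctx in set(counts1) | set(counts2):
        w1 = max(0.0, ttest_weight(counts1, ctx_totals, total1, grand_total, ctx))
        w2 = max(0.0, ttest_weight(counts2, ctx_totals, total2, grand_total, ctx))
        numerator += min(w1, w2)
        denominator += max(w1, w2)
    return numerator / denominator if denominator > 0 else 0.0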
7 Approach Our approach uses voting across the known supersenses of automatically extracted synonyms, to select a supersense for the unknown nouns. This technique is similar to Hearst and Sch¨utze (1993) and Widdows (2003). However, sometimes the unknown noun does not appear in our 2 billion word corpus, or at least does not appear frequently enough to provide sufficient contextual information to extract reliable synonyms. In these cases, our SUFFIX EXAMPLE SUPERSENSE -ness remoteness attribute -tion, -ment annulment act -ist, -man statesman person -ing, -ion bowling act -ity viscosity attribute -ics, -ism electronics cognition -ene, -ane, -ine arsine substance -er, -or, -ic, -ee, -an mariner person -gy entomology cognition Table 5: Hand-coded rules for supersense guessing fall-back method is a simple hand-coded classifier which examines the unknown noun and makes a guess based on simple morphological analysis of the suffix. These rules were created by inspecting the suffixes of rare nouns in WORDNET 1.6. The supersense guessing rules are given in Table 5. If none of the rules match, then the default supersense artifact is assigned. The problem now becomes how to convert the ranked list of extracted synonyms for each unknown noun into a single supersense selection. Each extracted synonym votes for its one or more supersenses that appear in WORDNET 1.6. There are many parameters to consider: • how many extracted synonyms to use; • how to weight each synonym’s vote; • whether unreliable synonyms should be filtered out; • how to deal with polysemous synonyms. The experiments described below consider a range of options for these parameters. In fact, these experiments are so quick to run we have been able to exhaustively test many combinations of these parameters. We have experimented with up to 200 voting extracted synonyms. There are several ways to weight each synonym’s contribution. The simplest approach would be to give each synonym the same weight. Another approach is to use the scores returned by the similarity system. Alternatively, the weights can use the ranking of the extracted synonyms. Again these options have been considered below. A related question is whether to use all of the extracted synonyms, or perhaps filter out synonyms for which a small amount of contextual information has been extracted, and so might be unreliable. The final issue is how to deal with polysemy. Does every supersense of each extracted synonym get the whole weight of that synonym or is it distributed evenly between the supersenses like Resnik (1995)? Another alternative is to only consider unambiguous synonyms with a single supersense in WORDNET. A disadvantage of this similarity approach is that it requires full synonym extraction, which compares the unknown word against a large number of words when, in 30 SYSTEM WN 1.6 WN 1.7.1 Ciaramita and Johnson baseline 21% 28% Ciaramita and Johnson perceptron 53% 53% Similarity based results 68% 63% Table 6: Summary of supersense tagging accuracies fact, we want to calculate the similarity to a small number of supersenses. This inefficiency could be reduced significantly if we consider only very high frequency words, but even this is still expensive. 8 Results We have used the WORDNET 1.6 test set to experiment with different parameter settings and have kept the WORDNET 1.7.1 test set as a final comparison of best results with Ciaramita and Johnson (2003). The experiments were performed by considering all possible configurations of the parameters described above. 
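For concreteness, before those options are enumerated, the Python sketch below shows one such configuration of the basic voting step: the top-ranked extracted synonyms vote for their supersenses, weighted by similarity score, with unreliable synonyms filtered out and the hand-coded suffix rules as a fall-back. The three helper functions are stand-ins for the synonym extractor, the WORDNET supersense lookup and the guessing rules described above, and the cutoffs shown are the ones reported as performing best here; the code itself is an illustration, not the system used to produce the results.

def tag_supersense(noun, extract_synonyms, supersenses_of, guess_by_suffix,
                   k=50, min_freq=5, min_contexts=5):
    """Assign a supersense to an unknown noun by SCORE-weighted voting.

    extract_synonyms(noun) -> list of (synonym, score, freq, n_contexts),
    supersenses_of(word)   -> list of supersense labels (may be empty),
    guess_by_suffix(noun)  -> fall-back supersense from the hand-coded rules.
    """
    votes = {}
    for synonym, score, freq, n_contexts in extract_synonyms(noun)[:k]:
        if freq < min_freq or n_contexts < min_contexts:
            continue                              # filter unreliable synonyms
        for supersense in supersenses_of(synonym):
            votes[supersense] = votes.get(supersense, 0.0) + score
    if votes:
        return max(votes, key=votes.get)          # highest-voted supersense wins
    return guess_by_suffix(noun)                  # fall back to the suffix rules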
The following voting options were considered for each supersense of each extracted synonym: the initial voting weight for a supersense could either be a constant (IDENTITY) or the similarity score (SCORE) of the synonym. The initial weight could then be divided by the number of supersenses to share out the weight (SHARED). The weight could also be divided by the rank (RANK) to penalise supersenses further down the list. The best performance on the 1.6 test set was achieved with the SCORE voting, without sharing or ranking penalties. The extracted synonyms are filtered before contributing to the vote with their supersense(s). This filtering involves checking that the synonym’s frequency and number of contexts are large enough to ensure it is reliable. We have experimented with a wide range of cutoffs and the best performance on the 1.6 test set was achieved using a minimum cutoff of 5 for the synonym’s frequency and the number of contexts it appears in. The next question is how many synonyms are considered. We considered using just the nearest unambiguous synonym, and the top 5, 10, 20, 50, 100 and 200 synonyms. All of the top performing configurations used 50 synonyms. We have also experimented with filtering out highly polysemous nouns by eliminating words with two, three or more synonyms. However, such a filter turned out to make little difference. Finally, we need to decide when to use the similarity measure and when to fall-back to the guessing rules. This is determined by looking at the frequency and number of attributes for the unknown word. Not surprisingly, the similarity system works better than the guessing rules if it has any information at all. The results are summarised in Table 6. The accuracy of the best-performing configurations was 68% on the WORDNET 1.6 WORDNET 1.7.1 SUPERSENSE N P R F N P R F Tops 2 0 0 0 1 50 100 67 act 84 60 74 66 86 53 73 61 animal 16 69 56 62 5 33 60 43 artifact 134 61 86 72 129 57 76 65 attribute 32 52 81 63 16 44 69 54 body 8 88 88 88 5 50 40 44 cognition 31 56 45 50 41 70 34 46 communication 66 80 56 66 57 58 44 50 event 14 83 36 50 10 80 40 53 feeling 8 70 88 78 1 0 0 0 food 29 91 69 78 12 67 67 67 group 27 75 22 34 26 50 4 7 location 43 81 30 44 13 40 15 22 motive 0 0 0 0 1 0 0 0 object 17 73 47 57 13 75 23 35 person 155 76 89 82 207 81 86 84 phenomenon 3 100 100 100 9 0 0 0 plant 11 80 73 76 0 0 0 0 possession 9 100 22 36 16 78 44 56 process 2 0 0 0 9 50 11 18 quantity 12 80 33 47 5 0 0 0 relation 2 100 50 67 0 0 0 0 shape 1 0 0 0 0 0 0 0 state 21 48 48 48 28 50 39 44 substance 24 58 58 58 44 63 73 67 time 5 100 60 75 10 36 40 38 Overall 756 68 68 68 744 63 63 63 Table 7: Breakdown of results by supersense WORDNET 1.6 test set with several other parameter combinations described above performing nearly as well. On the previously unused WORDNET 1.7.1 test set, our accuracy is 63% using the best system on the WORDNET 1.6 test set. By optimising the parameters on the 1.7.1 test set we can increase that to 64%, indicating that we have not excessively over-tuned on the 1.6 test set. Our results significantly outperform Ciaramita and Johnson (2003) on both test sets even though our system is unsupervised. The large difference between our 1.6 and 1.7.1 test set accuracy demonstrates that the 1.7.1 set is much harder. Table 7 shows the breakdown in performance for each supersense. The columns show the number of instances of each supersense with the precision, recall and f-score measures as percentages. 
The most frequent supersenses in both test sets were person, attribute and act. Of the frequent categories, person is the easiest supersense to get correct in both the 1.6 and 1.7.1 test sets, followed by food, artifact and substance. This is not surprising since these concrete words tend to have very fewer other senses, well constrained contexts and a relatively high frequency. These factors are conducive for extracting reliable synonyms. These results also support Ciaramita and Johnson’s view that abstract concepts like communication, cognition and state are much harder. We would expect the location 31 supersense to perform well since it is quite concrete, but unfortunately our synonym extraction system does not incorporate proper nouns, so many of these words were classified using the hand-built classifier. Also, in the data from Ciaramita and Johnson all of the words are in lower case, so no sensible guessing rules could help. 9 Other Alternatives and Future Work An alternative approach worth exploring is to create context vectors for the supersense categories themselves and compare these against the words. This has the advantage of producing a much smaller number of vectors to compare against. In the current system, we must compare a word against the entire vocabulary (over 500 000 headwords), which is much less efficient than a comparison against only 26 supersense context vectors. The question now becomes how to construct vectors of supersenses. The most obvious solution is to sum the context vectors across the words which have each supersense. However, our early experiments suggest that this produces extremely large vectors which do not match well against the much smaller vectors of each unseen word. Also, the same questions arise in the construction of these vectors. How are words with multiple supersenses handled? Our preliminary experiments suggest that only combining the vectors for unambiguous words produces the best results. One solution would be to take the intersection between vectors across words for each supersense (i.e. to find the common contexts that these words appear in). However, given the sparseness of the data this may not leave very large context vectors. A final solution would be to consider a large set of the canonical attributes (Curran and Moens, 2002a) to represent each supersense. Canonical attributes summarise the key contexts for each headword and are used to improve the efficiency of the similarity comparisons. There are a number of problems our system does not currently handle. Firstly, we do not include proper names in our similarity system which means that location entities can be very difficult to identify correctly (as the results demonstrate). Further, our similarity system does not currently incorporate multi-word terms. We overcome this by using the synonyms of the last word in the multi-word term. However, there are 174 multi-word terms (23%) in the WORDNET 1.7.1 test set which we could probably tag more accurately with synonyms for the whole multi-word term. Finally, we plan to implement a supervised machine learner to replace the fallback method, which currently has an accuracy of 37% on the WORDNET 1.7.1 test set. We intend to extend our experiments beyond the Ciaramita and Johnson (2003) set to include previous and more recent versions of WORDNET to compare their difficulty, and also perform experiments over a range of corpus sizes to determine the impact of corpus size on the quality of results. 
We would like to move onto the more difficult task of insertion into the hierarchy itself and compare against the initial work by Widdows (2003) using latent semantic analysis. Here the issue of how to combine vectors is even more interesting since there is the additional structure of the WORDNET inheritance hierarchy and the small synonym sets that can be used for more fine-grained combination of vectors. 10 Conclusion Our application of semantic similarity to supersense tagging follows earlier work by Hearst and Sch¨utze (1993) and Widdows (2003). To classify a previously unseen common noun our approach extracts synonyms which vote using their supersenses in WORDNET 1.6. We have experimented with several parameters finding that the best configuration uses 50 extracted synonyms, filtered by frequency and number of contexts to increase their reliability. Each synonym votes for each of its supersenses from WORDNET 1.6 using the similarity score from our synonym extractor. Using this approach we have significantly outperformed the supervised multi-class perceptron Ciaramita and Johnson (2003). This paper also demonstrates the use of a very efficient shallow NLP pipeline to process a massive corpus. Such a corpus is needed to acquire reliable contextual information for the often very rare nouns we are attempting to supersense tag. This application of semantic similarity demonstrates that an unsupervised methods can outperform supervised methods for some NLP tasks if enough data is available. Acknowledgements We would like to thank Massi Ciaramita for supplying his original data for these experiments and answering our queries, and to Stephen Clark and the anonymous reviewers for their helpful feedback and corrections. This work has been supported by a Commonwealth scholarship, Sydney University Travelling Scholarship and Australian Research Council Discovery Project DP0453131. References L. Douglas Baker and Andrew McCallum. 1998. Distributional clustering of words for text classification. In Proceedings of the 21st annual international ACM SIGIR conference on Research and Development in Information Retrieval, pages 96–103, Melbourne, Australia. Doug Beeferman. 1998. Lexical discovery with an enriched semantic network. In Proceedings of the Workshop on Usage 32 of WordNet in Natural Language Processing Systems, pages 358–364, Montr´eal, Qu´ebec, Canada. Thorsten Brants. 2000. TnT - a statistical part-of-speech tagger. In Proceedings of the 6th Applied Natural Language Processing Conference, pages 224–231, Seattle, WA USA. Anita Burgun and Olivier Bodenreider. 2001. Comparing terms, concepts and semantic classes in WordNet and the Unified Medical Language System. In Proceedings of the Workshop on WordNet and Other Lexical Resources: Applications, Extensions and Customizations, pages 77–82, Pittsburgh, PA USA. Sharon A. Caraballo and Eugene Charniak. 1999. Determining the specificity of nouns from text. In Proceedings of the Joint ACL SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 63–70, College Park, MD USA. Massimiliano Ciaramita and Mark Johnson. 2003. Supersense tagging of unknown nouns in WordNet. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 168–175, Sapporo, Japan. Massimiliano Ciaramita, Thomas Hofmann, and Mark Johnson. 2003. Hierarchical semantic classification: Word sense disambiguation with world knowledge. 
In Proceedings of the 18th International Joint Conference on Artificial Intelligence, Acapulco, Mexico. Massimiliano Ciaramita. 2002. Boosting automatic lexical acquisition with morphological information. In Proceedings of the Workshop on Unsupervised Lexical Acquisition, pages 17–25, Philadelphia, PA, USA. Stephen Clark and David Weir. 2002. Class-based probability estimation using a semantic hierarchy. Computational Linguistics, 28(2):187–206, June. Koby Crammer and Yoram Singer. 2001. Ultraconservative online algorithms for multiclass problems. In Proceedings of the 14th annual Conference on Computational Learning Theory and 5th European Conference on Computational Learning Theory, pages 99–115, Amsterdam, The Netherlands. James R. Curran and Stephen Clark. 2003. Investigating GIS and smoothing for maximum entropy taggers. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics, pages 91–98, Budapest, Hungary. James R. Curran and Marc Moens. 2002a. Improvements in automatic thesaurus extraction. In Proceedings of the Workshop on Unsupervised Lexical Acquisition, pages 59– 66, Philadelphia, PA, USA. James R. Curran and Marc Moens. 2002b. Scaling context space. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 231–238, Philadelphia, PA, USA. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA USA. Gregory Grefenstette. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers, Boston, MA USA. Marti A. Hearst and Hinrich Sch¨utze. 1993. Customizing a lexicon to better suit a computational task. In Proceedings of the Workshop on Acquisition of Lexical Knowledge from Text, pages 55–69, Columbus, OH USA. Rob Koeling. 2000. Chunking with maximum entropy models. In Proceedings of the 4th Conference on Computational Natural Language Learning and of the 2nd Learning Language in Logic Workshop, pages 139–141, Lisbon, Portugal. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1994. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313–330. Guido Minnen, John Carroll, and Darren Pearce. 2001. Applied morphological processing of English. Natural Language Engineering, 7(3):207–223. Tom Morton. 2002. Grok tokenizer. Grok OpenNLP toolkit. Marius Pasca and Sanda M. Harabagiu. 2001. The informative role of WordNet in open-domain question answering. In Proceedings of the Workshop on WordNet and Other Lexical Resources: Applications, Extensions and Customizations, pages 138–143, Pittsburgh, PA USA. Darren Pearce. 2001. Synonymy in collocation extraction. In Proceedings of the Workshop on WordNet and Other Lexical Resources: Applications, Extensions and Customizations, pages 41–46, Pittsburgh, PA USA. Philip Resnik. 1995. Using information content to evaluate semantic similarity. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, pages 448–453, Montreal, Canada. Jeffrey C. Reynar and Adwait Ratnaparkhi. 1997. A maximum entropy approach to identifying sentence boundaries. In Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 16–19, Washington, D.C. USA. Hinrich Sch¨utze. 1992. Context space. In Intelligent Probabilistic Approaches to Natural Language, number FS-92-04 in Fall Symposium Series, pages 113–120, Stanford University, CA USA. Dominic Widdows. 2003. 
Unsupervised methods for developing taxonomies by combining syntactic and statistical information. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 276–283, Edmonton, Alberta Canada. David Yarowsky. 1992. Word-sense disambiguation using statistical models of Roget’s categories trained on large corpora. In Proceedings of the 14th international conference on Computational Linguistics, pages 454–460, Nantes, France. 33 | 2005 | 4 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 322–329, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Detecting Errors in Discontinuous Structural Annotation Markus Dickinson Department of Linguistics The Ohio State University [email protected] W. Detmar Meurers Department of Linguistics The Ohio State University [email protected] Abstract Consistency of corpus annotation is an essential property for the many uses of annotated corpora in computational and theoretical linguistics. While some research addresses the detection of inconsistencies in positional annotation (e.g., partof-speech) and continuous structural annotation (e.g., syntactic constituency), no approach has yet been developed for automatically detecting annotation errors in discontinuous structural annotation. This is significant since the annotation of potentially discontinuous stretches of material is increasingly relevant, from treebanks for free-word order languages to semantic and discourse annotation. In this paper we discuss how the variation n-gram error detection approach (Dickinson and Meurers, 2003a) can be extended to discontinuous structural annotation. We exemplify the approach by showing how it successfully detects errors in the syntactic annotation of the German TIGER corpus (Brants et al., 2002). 1 Introduction Annotated corpora have at least two kinds of uses: firstly, as training material and as “gold standard” testing material for the development of tools in computational linguistics, and secondly, as a source of data for theoretical linguists searching for analytically relevant language patterns. Annotation errors and why they are a problem The high quality annotation present in “gold standard” corpora is generally the result of a manual or semi-automatic mark-up process. The annotation thus can contain annotation errors from automatic (pre-)processes, human post-editing, or human annotation. The presence of errors creates problems for both computational and theoretical linguistic uses, from unreliable training and evaluation of natural language processing technology (e.g., van Halteren, 2000; Kvˇetˇon and Oliva, 2002, and the work mentioned below) to low precision and recall of queries for already rare linguistic phenomena. Investigating the quality of linguistic annotation and improving it where possible thus is a key issue for the use of annotated corpora in computational and theoretical linguistics. Illustrating the negative impact of annotation errors on computational uses of annotated corpora, van Halteren et al. (2001) compare taggers trained and tested on the Wall Street Journal (WSJ, Marcus et al., 1993) and the Lancaster-Oslo-Bergen (LOB, Johansson, 1986) corpora and find that the results for the WSJ perform significantly worse. They report that the lower accuracy figures are caused by inconsistencies in the WSJ annotation and that 44% of the errors for their best tagging system were caused by “inconsistently handled cases.” Turning from training to evaluation, Padro and Marquez (1998) highlight the fact that the true accuracy of a classifier could be much better or worse than reported, depending on the error rate of the corpus used for the evaluation. Evaluating two taggers on the WSJ, they find tagging accuracy rates for am322 biguous words of 91.35% and 92.82%. Given the estimated 3% error rate of the WSJ tagging (Marcus et al., 1993), they argue that the difference in performance is not sufficient to establish which of the two taggers is actually better. 
In sum, corpus annotation errors, especially errors which are inconsistencies, can have a profound impact on the quality of the trained classifiers and the evaluation of their performance. The problem is compounded for syntactic annotation, given the difficulty of evaluating and comparing syntactic structure assignments, as known from the literature on parser evaluation (e.g., Carroll et al., 2002). The idea that variation in annotation can indicate annotation errors has been explored to detect errors in part-of-speech (POS) annotation (van Halteren, 2000; Eskin, 2000; Dickinson and Meurers, 2003a) and syntactic annotation (Dickinson and Meurers, 2003b). But, as far as we are aware, the research we report on here is the first approach to error detection for the increasing number of annotations which make use of more general graph structures for the syntactic annotation of free word order languages or the annotation of semantic and discourse properties. Discontinuous annotation and its relevance The simplest kind of annotation is positional in nature, such as the association of a part-of-speech tag with each corpus position. On the other hand, structural annotation such as that used in syntactic treebanks (e.g., Marcus et al., 1993) assigns a syntactic category to a contiguous sequence of corpus positions. For languages with relatively free constituent order, such as German, Dutch, or the Slavic languages, the combinatorial potential of the language encoded in constituency cannot be mapped straightforwardly onto the word order possibilities of those languages. As a consequence, the treebanks that have been created for German (NEGRA, Skut et al., 1997; VERBMOBIL, Hinrichs et al., 2000; TIGER, Brants et al., 2002) have relaxed the requirement that constituents have to be contiguous. This makes it possible to syntactically annotate the language data as such, i.e., without requiring postulation of empty elements as placeholders or other theoretically motivated changes to the data. We note in passing that discontinuous constituents have also received some support in theoretical linguistics (cf., e.g., the articles collected in Huck and Ojeda, 1987; Bunt and van Horck, 1996). Discontinuous constituents are strings of words which are not necessarily contiguous, yet form a single constituent with a single label, such as the noun phrase Ein Mann, der lacht in the German relative clause extraposition example (1) (Brants et al., 2002).1 (1) Ein a Mann man kommt comes , , der who lacht laughs ‘A man who laughs comes.’ In addition to their use in syntactic annotation, discontinuous structural annotation is also relevant for semantic and discourse-level annotation— essentially any time that graph structures are needed to encode relations that go beyond ordinary tree structures. Such annotations are currently employed in the mark-up for semantic roles (e.g., Kingsbury et al., 2002) and multi-word expressions (e.g., Rayson et al., 2004), as well as for spoken language corpora or corpora with multiple layers of annotation which cross boundaries (e.g., Blache and Hirst, 2000). In this paper, we present an approach to the detection of errors in discontinuous structural annotation. We focus on syntactic annotation with potentially discontinuous constituents and show that the approach successfully deals with the discontinuous syntactic annotation found in the TIGER treebank (Brants et al., 2002). 
2 The variation n-gram method Our approach builds on the variation n-gram algorithm introduced in Dickinson and Meurers (2003a,b). The basic idea behind that approach is that a string occurring more than once can occur with different labels in a corpus, which we refer to as variation. Variation is caused by one of two reasons: i) ambiguity: there is a type of string with multiple possible labels and different corpus occurrences of that string realize the different options, or ii) error: the tagging of a string is inconsistent across comparable occurrences. 1The ordinary way of marking a constituent with brackets is inadequate for discontinuous constituents, so we instead boldface and underline the words belonging to a discontinuous constituent. 323 The more similar the context of a variation, the more likely the variation is an error. In Dickinson and Meurers (2003a), contexts are composed of words, and identity of the context is required. The term variation n-gram refers to an n-gram (of words) in a corpus that contains a string annotated differently in another occurrence of the same n-gram in the corpus. The string exhibiting the variation is referred to as the variation nucleus. 2.1 Detecting variation in POS annotation In Dickinson and Meurers (2003a), we explore this idea for part-of-speech annotation. For example, in the WSJ corpus the string in (2) is a variation 12gram since off is a variation nucleus that in one corpus occurrence is tagged as a preposition (IN), while in another it is tagged as a particle (RP).2 (2) to ward off a hostile takeover attempt by two European shipping concerns Once the variation n-grams for a corpus have been computed, heuristics are employed to classify the variations into errors and ambiguities. The first heuristic encodes the basic fact that the label assignment for a nucleus is dependent on the context: variation nuclei in long n-grams are likely to be errors. The second takes into account that natural languages favor the use of local dependencies over non-local ones: nuclei found at the fringe of an n-gram are more likely to be genuine ambiguities than those occurring with at least one word of surrounding context. Both of these heuristics are independent of a specific corpus, annotation scheme, or language. We tested the variation error detection method on the WSJ and found 2495 distinct3 nuclei for the variation n-grams between the 6-grams and the 224grams. 2436 of these were actual errors, making for a precision of 97.6%, which demonstrates the value of the long context heuristic. 57 of the 59 genuine ambiguities were fringe elements, confirming that fringe elements are more indicative of a true ambiguity. 2To graphically distinguish the variation nucleus within a variation n-gram, the nucleus is shown in grey. 3Being distinct means that each corpus position is only taken into account for the longest variation n-gram it occurs in. 2.2 Detecting variation in syntactic annotation In Dickinson and Meurers (2003b), we decompose the variation n-gram detection for syntactic annotation into a series of runs with different nucleus sizes. This is needed to establish a one-to-one relation between a unit of data and a syntactic category annotation for comparison. Each run detects the variation in the annotation of strings of a specific length. 
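To make the variation n-gram idea concrete, here is a small brute-force Python sketch for positional (POS) annotation: for every word n-gram and every nucleus position inside it, it records the set of tags observed across the corpus and reports those with more than one tag. It ignores the efficiency measures of the actual method, and all names are illustrative.

```python
from collections import defaultdict

def variation_ngrams(tagged_corpus, n):
    """Find variation n-grams in POS annotation.

    tagged_corpus: list of (word, tag) pairs
    Returns {(word n-gram, nucleus index): tags} for nuclei whose tag
    varies across identical n-grams.
    """
    seen = defaultdict(set)
    for i in range(len(tagged_corpus) - n + 1):
        window = tagged_corpus[i:i + n]
        words = tuple(w for w, _ in window)
        for j, (_, tag) in enumerate(window):
            seen[(words, j)].add(tag)
    return {key: tags for key, tags in seen.items() if len(tags) > 1}
```

In these terms, the fringe heuristic amounts to preferring nuclei whose index lies strictly between 0 and n-1, i.e. nuclei with at least one word of context on each side.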
By performing such runs for strings from length 1 to the length of the longest constituent in the corpus, the approach ensures that all strings which are analyzed as a constituent somewhere in the corpus are compared to the annotation of all other occurrences of that string. For example, the variation 4-gram from a year earlier appears 76 times in the WSJ, where the nucleus a year is labeled noun phrase (NP) 68 times, and 8 times it is not annotated as a constituent and is given the special label NIL. An example with two syntactic categories involves the nucleus next Tuesday as part of the variation 3-gram maturity next Tuesday, which appears three times in the WSJ. Twice it is labeled as a noun phrase (NP) and once as a prepositional phrase (PP). To be able to efficiently calculate all variation nuclei of a treebank, in Dickinson and Meurers (2003b) we make use of the fact that a variation necessarily involves at least one constituent occurrence of a nucleus and calculate the set of nuclei for a window of length i by first finding the constituents of that length. Based on this set, we then find nonconstituent occurrences of all strings occurring as constituents. Finally, the variation n-grams for these variation nuclei are obtained in the same way as for POS annotation. In the WSJ, the method found 34,564 variation nuclei, up to size 46; an estimated 71% of the 6277 non-fringe distinct variation nuclei are errors. 3 Discontinuous constituents In Dickinson and Meurers (2003b), we argued that null elements need to be ignored as variation nuclei because the variation in the annotation of a null element as the nucleus is largely independent of the local environment. For example, in (3) the null element *EXP* (expletive) can be annotated a. as a sentence (S) or b. as a relative/subordinate clause 324 (SBAR), depending on the properties of the clause it refers to. (3) a. For cities losing business to suburban shopping centers , it *EXP* may be a wise business investment [S * to help * keep those jobs and sales taxes within city limits] . b. But if the market moves quickly enough , it *EXP* may be impossible [SBAR for the broker to carry out the order] because the investment has passed the specified price . We found that removing null elements as variation nuclei of size 1 increased the precision of error detection to 78.9%. Essentially, null elements represent discontinuous constituents in a formalism with a context-free backbone (Bies et al., 1995). Null elements are coindexed with a non-adjacent constituent; in the predicate argument structure, the constituent should be interpreted where the null element is. To be able to annotate discontinuous material without making use of inserted null elements, some treebanks have instead relaxed the definition of a linguistic tree and have developed more complex graph annotations. An error detection method for such corpora thus does not have to deal with the problems arising from inserted null elements discussed above, but instead it must function appropriately even if constituents are discontinuously realized. A technique such as the variation n-gram method is applicable to corpora with a one-to-one mapping between the text and the annotation. For corpora with positional annotation—e.g., part-ofspeech annotated corpora—the mapping is trivial given that the annotation consists of one-toone correspondences between words (i.e., tokens) and labels. 
For corpora annotated with more complex structural information—e.g., syntacticallyannotated corpora—the one-to-one mapping is obtained by considering every interval (continuous string of any length) which is assigned a category label somewhere in the corpus. While this works for treebanks with continuous constituents, a one-to-one mapping is more complicated to establish for syntactic annotation involving discontinuous constituents (NEGRA, Skut et al., 1997; TIGER, Brants et al., 2002). In order to apply the variation n-gram method to discontinuous constituents, we need to develop a technique which is capable of comparing labels for any set of corpus positions, instead of for any interval. 4 Extending the variation n-gram method To extend the variation n-gram method to handle discontinuous constituents, we first have to define the characteristics of such a constituent (section 4.1), in other words our units of data for comparison. Then, we can find identical non-constituent (NIL) strings (section 4.2) and expand the context into variation n-grams (section 4.3). 4.1 Variation nuclei: Constituents For traditional syntactic annotation, a variation nucleus is defined as a contiguous string with a single label; this allows the variation n-gram method to be broken down into separate runs, one for each constituent size in the corpus. For discontinuous syntactic annotation, since we are still interested in comparing cases where the nucleus is the same, we will treat two constituents as having the same size if they consist of the same number of words, regardless of the amount of intervening material, and we can again break the method down into runs of different sizes. The intervening material is accounted for when expanding the context into n-grams. A question arises concerning the word order of elements in a constituent. Consider the German example (4) (M¨uller, 2004). (4) weil because der the Mann mannom der the Frau womandat das the Buch bookacc gab. gave ‘because the man gave the woman the book.’ The three arguments of the verb gab (’give’) can be permuted in all six possible ways and still result in a well-formed sentence. It might seem, then, that we would want to allow different permutations of nuclei to be treated as identical. If das Buch der Frau gab is a constituent in another sentence, for instance, it should have the same category label as der Frau das Buch gab. Putting all permutations into one equivalence class, however, amounts to stating that all order325 ings are always the same. But even “free word order” languages are more appropriately called free constituent order; for example, in (4), the argument noun phrases can be freely ordered, but each argument noun phrase is an atomic unit, and in each unit the determiner precedes the noun. Since we want our method to remain data-driven and order can convey information which might be reflected in an annotation system, we keep strings with different orders of the same words distinct, i.e., ordering of elements is preserved in our method. 4.2 Variation nuclei: Non-constituents The basic idea is to compare a string annotated as a constituent with the same string found elsewhere— whether annotated as a constituent or not. So we need to develop a method for finding all string occurrences not analyzed as a constituent (and assign them the special category label NIL). Following Dickinson and Meurers (2003b), we only look for non-constituent occurrences of those strings which also occur at least once as a constituent. 
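A minimal sketch of this comparison step is given below: every string that occurs as a constituent is mapped to the set of category labels it receives, occurrences without a constituent analysis contribute the special label NIL, and strings with more than one label are reported as variation nuclei. Word order is preserved by using tuples; how the NIL occurrences are found is the subject of the next section, and all names here are illustrative rather than taken from the authors' system.

```python
from collections import defaultdict

def variation_nuclei(constituents, nil_occurrences):
    """Collect label variation for strings occurring as constituents.

    constituents:    list of (words, label) for every labelled constituent,
                     where words is a tuple preserving surface order
    nil_occurrences: list of word tuples found elsewhere without a
                     constituent analysis (labelled NIL)
    """
    labels = defaultdict(set)
    for words, label in constituents:
        labels[words].add(label)
    for words in nil_occurrences:
        if words in labels:          # only strings also seen as a constituent
            labels[words].add("NIL")
    return {words: ls for words, ls in labels.items() if len(ls) > 1}
```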
But do we need to look for discontinuous NIL strings or is it sufficient to assume only continuous ones? Consider the TIGER treebank examples (5). (5) a. in on diesem this Punkt point seien are sich SELF Bonn Bonn und and London London nicht not einig agreed . . ‘Bonn and London do not agree on this point.’ b. in on diesem this Punkt point seien are sich SELF Bonn Bonn und and London London offensichtlich clearly nicht einig not agreed . . In example (5a), sich einig (’SELF agree’) forms an adjective phrase (AP) constituent. But in example (5b), that same string is not analyzed as a constituent, despite being in a nearly identical sentence. We would thus like to assign the discontinuous string sich einig in (5b) the label NIL, so that the labeling of this string in (5a) can be compared to its occurrence in (5b). In consequence, our approach should be able to detect NIL strings which are discontinuous—an issue which requires special attention to obtain an algorithm efficient enough to handle large corpora. Use sentence boundary information The first consideration makes use of the fact that syntactic annotation by its nature respects sentence boundaries. In consequence, we never need to search for NIL strings that span across sentences.4 Use tries to store constituent strings The second consideration concerns how we calculate the NIL strings. To find every non-constituent string in the corpus, discontinuous or not, which is identical to some constituent in the corpus, a basic approach would first generate all possible strings within a sentence and then test to see which ones occur as a constituent elsewhere in the corpus. For example, if the sentence is Nobody died when Clinton lied, we would see if any of the 31 subsets of strings occur as constituents (e.g., Nobody, Nobody when, Clinton lied, Nobody when lied, etc.). But such a generate and test approach clearly is intractable given that it generates generates 2n −1 potential matches for a sentence of n words. We instead split the task of finding NIL strings into two runs through the corpus. In the first, we store all constituents in the corpus in a trie data structure (Fredkin, 1960), with words as nodes. In the second run through the corpus, we attempt to match the strings in the corpus with a path in the trie, thus identifying all strings occurring as constituents somewhere in the corpus. Filter out unwanted NIL strings The final consideration removes “noisy” NIL strings from the candidate set. Certain NIL strings are known to be useless for detecting annotation errors, so we should remove them to speed up the variation n-gram calculations. Consider example (6) from the TIGER corpus, where the continuous constituent die Menschen is annotated as a noun phrase (NP). (6) Ohne without diese these Ausgaben, expenses so according to die the Weltbank, world bank seien are die Menschen the people totes dead Kapital capital ‘According to the world bank, the people are dead capital without these expenses.’ 4This restriction clearly is syntax specific and other topological domains need to be identified to make searching for NIL strings tractable for other types of discontinuous annotation. 326 Our basic method of finding NIL strings would detect another occurrence of die Menschen in the same sentence since nothing rules out that the other occurrence of die in the sentence (preceding Weltbank) forms a discontinuous NIL string with Menschen. 
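The two-pass, trie-based search for candidate strings described above might be sketched as follows. For brevity it enumerates all (continuous or discontinuous) within-sentence matches by depth-first search; separating out occurrences that are themselves constituents, and the overlap filter discussed in the text, are omitted, and the worst case remains expensive for pathological sentences. This is an illustration of the idea, not the authors' implementation.

```python
def build_trie(constituent_strings):
    """First pass: store every constituent string (a tuple of words)
    as a path in a word-level trie."""
    trie = {}
    for words in constituent_strings:
        node = trie
        for w in words:
            node = node.setdefault(w, {})
        node["$END$"] = True              # marks a complete constituent string
    return trie

def find_candidate_strings(sentence, trie):
    """Second pass: find occurrences (continuous or discontinuous) of the
    stored strings inside one sentence; returns matched position tuples."""
    matches = []
    def extend(node, start, positions):
        if "$END$" in node and positions:
            matches.append(tuple(positions))
        for i in range(start, len(sentence)):   # skipped words become gaps
            nxt = node.get(sentence[i])
            if nxt is not None:
                extend(nxt, i + 1, positions + [i])
    extend(trie, 0, [])
    return matches
```

Restricting the search to single sentences, as the text argues, is what keeps this enumeration tractable in practice.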
Comparing a constituent with a NIL string that contains one of the words of the constituent clearly goes against the original motivation for wanting to find discontinuous strings, namely that they show variation between different occurrences of a string. To prevent such unwanted variation, we eliminate occurrences of NIL-labeled strings that overlap with identical constituent strings from consideration. 4.3 Variation n-grams The more similar the context surrounding a variation nucleus, the more likely it is for a variation in its annotation to be an error. For detecting errors in traditional syntactic annotation (see section 2.2), the context consists of the elements to the left and the right of the nucleus. When nuclei can be discontinuous, however, there can also be internal context, i.e., elements which appear between the words forming a discontinuous variation nucleus. As in our earlier work, an instance of the a priori algorithm is used to expand a nucleus into a longer n-gram by stepwise adding context elements. Where previously it was possible to add an element to the left or the right, we now also have the option of adding it in the middle—as part of the new, internal context. But depending on how we fill in the internal context, we can face a serious tractability problem. Given a nucleus with j gaps within it, we need to potentially expand it in j + 2 directions, instead of in just 2 directions (to the right and to the left). For example, the potential nucleus was werden appears as a verb phrase (VP) in the TIGER corpus in the string was ein Seeufer werden; elsewhere in the corpus was and werden appear in the same sentence with 32 words between them. The chances of one of the middle 32 elements matching something in the internal context of the VP is relatively high, and indeed the twenty-sixth word is ein. However, if we move stepwise out from the nucleus in order to try to match was ein Seeufer werden, the only options are to find ein directly to the right of was or Seeufer directly to the left of werden, neither of which occurs, thus stopping the search. In conclusion, we obtain an efficient application of the a priori algorithm by expanding the context only to elements which are adjacent to an element already in the n-gram. Note that this was already implicitly assumed for the left and the right context. There are two other efficiency-related issues worth mentioning. Firstly, as with the variation nucleus detection, we limit the n-grams expansion to sentences only. Since the category labels do not represent cross-sentence dependencies, we gain no new information if we find more context outside the sentence, and in terms of efficiency, we cut off what could potentially be a very large search space.5 Secondly, the methods for reducing the number of variation nuclei discussed in section 4.2 have the consequence of also reducing the number of possible variation n-grams. For example, in a test run on the NEGRA corpus we allowed identical strings to overlap; this generated a variation nucleus of size 63, with 16 gaps in it, varying between NP and NIL within the same sentence. Fifteen of the gaps can be filled in and still result in variation. The filter for unwanted NIL strings described in the previous section eliminates the NIL value from consideration. Thus, there is no variation and no tractability problem in constructing n-grams. 4.3.1 Generalizing the n-gram context So far, we assumed that the context added around variation nuclei consists of words. 
Given that treebanks generally also provide part-of-speech information for every token, we experimented with partof-speech tags as a less restrictive kind of context. The idea is that it should be possible to find more variation nuclei with comparable contexts if only the part-of-speech tags of the surrounding words have to be identical instead of the words themselves. As we will see in section 5, generalizing n-gram contexts in this way indeed results in more variation n-grams being found, i.e., increased recall. 4.4 Adapting the heuristics To determine which nuclei are errors, we can build on the two heuristics from previous research (Dick5Note that similar sentences which were segmented differently could potentially cause varying n-gram strings not to be found. We propose to treat this as a separate sentence segmentation error detection phase in future work. 327 inson and Meurers, 2003a,b)—trust long contexts and distrust the fringe—with some modification, given that we have more fringe areas to deal with for discontinuous strings. In addition to the right and the left fringe, we also need to take into account the internal context in a way that maintains the nonfringe heuristic as a good indicator for errors. As a solution that keeps internal context on a par with the way external context is treated in our previous work, we require one word of context around every terminal element that is part of the variation nucleus. As discussed below, this heuristic turns out to be a good predictor of which variations are annotation errors; expanding to the longest possible context, as in Dickinson and Meurers (2003a), is not necessary. 5 Results on the TIGER Corpus We ran the variation n-grams error detection method for discontinuous syntactic constituents on v. 1 of TIGER (Brants et al., 2002), a corpus of 712,332 tokens in 40,020 sentences. The method detected a total of 10,964 variation nuclei. From these we sampled 100 to get an estimate of the number of errors in the corpus which concern variation. Of these 100, 13 variation nuclei pointed to an error; with this point estimate of .13, we can derive a 95% confidence interval of (0.0641, 0.1959),6 which means that we are 95% confident that the true number of variation-based errors is between 702 and 2148. The effectiveness of a method which uses context to narrow down the set of variation nuclei can be judged by how many of these variation errors it finds. Using the non-fringe heuristic discussed in the previous section, we selected the shortest non-fringe variation n-grams to examine. Occurrences of the same strings within larger n-grams were ignored, so as not to artificially increase the resulting set of ngrams. When the context is defined as identical words, we obtain 500 variation n-grams. Sampling 100 of these and labeling for each position whether it is an error or an ambiguity, we find that 80 out of the 100 samples point to at least one token error. The 95% confidence interval for this point estimate of .80 is 6The 95% confidence interval was calculated using the standard formula of p±1.96 q p(1−p) n , where p is the point estimate and n the sample size. (0.7216, 0.8784), so we are 95% confident that the true number of error types is between 361 and 439. Note that this precision is comparable to the estimates for continuous syntactic annotation in Dickinson and Meurers (2003b) of 71% (with null elements) and 78.9% (without null elements). 
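The confidence intervals reported in this section follow the normal-approximation formula of footnote 6, p plus or minus 1.96 times the square root of p(1-p)/n. As a quick check, the following few lines reproduce the intervals quoted in the text; the function name is ours.

```python
import math

def confidence_interval(p, n, z=1.96):
    """95% interval: p +/- z * sqrt(p * (1 - p) / n)."""
    half = z * math.sqrt(p * (1 - p) / n)
    return round(p - half, 4), round(p + half, 4)

print(confidence_interval(0.13, 100))  # (0.0641, 0.1959): sampled variation nuclei
print(confidence_interval(0.80, 100))  # (0.7216, 0.8784): word-context n-grams
print(confidence_interval(0.52, 100))  # (0.4221, 0.6179): POS-context n-grams
```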
When the context is defined as identical parts of speech, as described in section 4.3.1, we obtain 1498 variation n-grams. Again sampling 100 of these, we find that 52 out of the 100 point to an error. And the 95% confidence interval for this point estimate of .52 is (0.4221, 0.6179), giving a larger estimated number of errors, between 632 and 926. Context Precision Errors Word 80% 361–439 POS 52% 632–926 Figure 1: Accuracy rates for the different contexts Words convey more information than part-ofspeech tags, and so we see a drop in precision when using part-of-speech tags for context, but these results highlight a very practical benefit of using a generalized context. By generalizing the context, we maintain a precision rate of approximately 50%, and we substantially increase the recall of the method. There are, in fact, likely twice as many errors when using POS contexts as opposed to word contexts. Corpus annotation projects willing to put in some extra effort thus can use this method of finding variation n-grams with a generalized context to detect and correct more errors. 6 Summary and Outlook We have described the first method for finding errors in corpora with graph annotations. We showed how the variation n-gram method can be extended to discontinuous structural annotation, and how this can be done efficiently and with as high a precision as reported for continuous syntactic annotation. Our experiments with the TIGER corpus show that generalizing the context to part-of-speech tags increases recall while keeping precision above 50%. The method can thus have a substantial practical benefit when preparing a corpus with discontinuous annotation. Extending the error detection method to handle 328 discontinuous constituents, as we have done, has significant potential for future work given the increasing number of free word order languages for which corpora and treebanks are being developed. Acknowledgements We are grateful to George Smith and Robert Langner of the University of Potsdam TIGER team for evaluating the variation we detected in the samples. We would also like to thank the three ACL reviewers for their detailed and helpful comments, and the participants of the OSU CLippers meetings for their encouraging feedback. References Ann Bies, Mark Ferguson, Karen Katz and Robert MacIntyre, 1995. Bracketing Guidelines for Treebank II Style Penn Treebank Project. University of Pennsylvania. Philippe Blache and Daniel Hirst, 2000. Multi-level annotation for spoken-language corpora. In Proceedings of ICSLP-00. Beijing, China. Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolfgang Lezius and George Smith, 2002. The TIGER Treebank. In Proceedings of TLT-02. Sozopol, Bulgaria. Harry Bunt and Arthur van Horck (eds.), 1996. Discontinuous Constituency. Mouton de Gruyter, Berlin and New York. John Carroll, Anette Frank, Dekang Lin, Detlef Prescher and Hans Uszkoreit (eds.), 2002. Proceedings of the LREC Workshop “Beyond PARSEVAL. Towards Improved Evaluation Measures for Parsing Systems”, Las Palmas, Gran Canaria. Markus Dickinson and W. Detmar Meurers, 2003a. Detecting Errors in Part-of-Speech Annotation. In Proceedings of EACL-03. Budapest, Hungary. Markus Dickinson and W. Detmar Meurers, 2003b. Detecting Inconsistencies in Treebanks. In Proceedings of TLT-03. V¨axj¨o, Sweden. Eleazar Eskin, 2000. Automatic Corpus Correction with Anomaly Detection. In Proceedings of NAACL-00. Seattle, Washington. Edward Fredkin, 1960. Trie Memory. CACM, 3(9):490–499. 
Erhard Hinrichs, Julia Bartels, Yasuhiro Kawata, Valia Kordoni and Heike Telljohann, 2000. The T¨ubingen Treebanks for Spoken German, English, and Japanese. In Wolfgang Wahlster (ed.), Verbmobil: Foundations of Speech-to-Speech Translation, Springer, Berlin, pp. 552–576. Geoffrey Huck and Almerindo Ojeda (eds.), 1987. Discontinuous Constituency. Academic Press, New York. Stig Johansson, 1986. The Tagged LOB Corpus: Users’ Manual. Norwegian Computing Centre for the Humanities, Bergen. Paul Kingsbury, Martha Palmer and Mitch Marcus, 2002. Adding Semantic Annotation to the Penn TreeBank. In Proceedings of HLT-02. San Diego. Pavel Kvˇetˇon and Karel Oliva, 2002. Achieving an Almost Correct PoS-Tagged Corpus. In Petr Sojka, Ivan Kopeˇcek and Karel Pala (eds.), TSD 2002. Springer, Heidelberg, pp. 19–26. M. Marcus, Beatrice Santorini and M. A. Marcinkiewicz, 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Stefan M¨uller, 2004. Continuous or Discontinuous Constituents? A Comparison between Syntactic Analyses for Constituent Order and Their Processing Systems. Research on Language and Computation, 2(2):209–257. Lluis Padro and Lluis Marquez, 1998. On the Evaluation and Comparison of Taggers: the Effect of Noise in Testing Corpora. In COLING/ACL-98. Paul Rayson, Dawn Archer, Scott Piao and Tony McEnery, 2004. The UCREL Semantic Analysis System. In Proceedings of the Workshop on Beyond Named Entity Recognition: Semantic labelling for NLP tasks. Lisbon, Portugal, pp. 7–12. Wojciech Skut, Brigitte Krenn, Thorsten Brants and Hans Uszkoreit, 1997. An Annotation Scheme for Free Word Order Languages. In Proceedings of ANLP-97. Washington, D.C. Hans van Halteren, 2000. The Detection of Inconsistency in Manually Tagged Text. In Anne Abeill´e, Thorsten Brants and Hans Uszkoreit (eds.), Proceedings of LINC-00. Luxembourg. Hans van Halteren, Walter Daelemans and Jakub Zavrel, 2001. Improving Accuracy in Word Class Tagging through the Combination of Machine Learning Systems. Computational Linguistics, 27(2):199–229. 329 | 2005 | 40 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 330–337, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics High Precision Treebanking — Blazing Useful Trees Using POS Information — Takaaki Tanaka,† Francis Bond,† Stephan Oepen,‡ Sanae Fujita† † {takaaki, bond, fujita}@cslab.kecl.ntt.co.jp ‡ [email protected] † NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation ‡ Universitetet i Oslo and CSLI, Stanford Abstract In this paper we present a quantitative and qualitative analysis of annotation in the Hinoki treebank of Japanese, and investigate a method of speeding annotation by using part-of-speech tags. The Hinoki treebank is a Redwoods-style treebank of Japanese dictionary definition sentences. 5,000 sentences are annotated by three different annotators and the agreement evaluated. An average agreement of 65.4% was found using strict agreement, and 83.5% using labeled precision. Exploiting POS tags allowed the annotators to choose the best parse with 19.5% fewer decisions. 1 Introduction It is important for an annotated corpus that the markup is both correct and, in cases where variant analyses could be considered correct, consistent. Considerable research in the field of word sense disambiguation has concentrated on showing that the annotation of word senses can be done correctly and consistently, with the normal measure being interannotator agreement (e.g. Kilgariff and Rosenzweig, 2000). Surprisingly, few such studies have been carried out for syntactic annotation, with the notable exceptions of Brants et al. (2003, p 82) for the German NeGra Corpus and Civit et al. (2003) for the Spanish Cast3LB corpus. Even such valuable and widely used corpora as the Penn TreeBank have not been verified in this way. We are constructing the Hinoki treebank as part of a larger project in cognitive and computational linguistics ultimately aimed at natural language understanding (Bond et al., 2004). In order to build the initial syntactic and semantic models, we are treebanking the dictionary definition sentences of the most familiar 28,000 words of Japanese and building an ontology from the results. Arguably the most common method in building a treebank still is manual annotation, annotators (often linguistics students) marking up linguistic properties of words and phrases. In some semi-automated treebank efforts, annotators are aided by POS taggers or phrase-level chunkers, which can propose mark-up for manual confirmation, revision, or extension. As computational grammars and parsers have increased in coverage and accuracy, an alternate approach has become feasible, in which utterances are parsed and the annotator selects the best parse Carter (1997); Oepen et al. (2002) from the full analyses derived by the grammar. We adopted the latter approach. There were four main reasons. The first was that we wanted to develop a precise broad-coverage grammar in tandem with the treebank, as part of our research into natural language understanding. Treebanking the output of the parser allows us to immediately identify problems in the grammar, and improving the grammar directly improves the quality of the treebank in a mutually beneficial feedback loop (Oepen et al., 2004). The second reason is that we wanted to annotate to a high level of detail, marking not only dependency and constituent structure but also detailed semantic relations. 
By using a Japanese grammar (JACY: Siegel and Bender, 2002) based on a monostratal theory of grammar (HPSG: Pollard and Sag, 1994) we could simultaneously annotate syntactic and semantic structure without overburdening the annota330 tor. The third reason was that we expected the use of the grammar to aid in enforcing consistency — at the very least all sentences annotated are guaranteed to have well-formed parses. The flip side to this is that any sentences which the parser cannot parse remain unannotated, at least unless we were to fall back on full manual mark-up of their analyses. The final reason was that the discriminants can be used to update the treebank when the grammar changes, so that the treebank can be improved along with the grammar. This kind of dynamic, discriminant-based treebanking was pioneered in the Redwoods treebank of English (Oepen et al., 2002), so we refer to it as Redwoods-style treebanking. In the next section, we give some more details about the Hinoki Treebank and the data used to evaluate the parser (§ 2). This is followed by a brief discussion of treebanking using discriminants (§ 3), and an extension to seed the treebanking using existing markup (§ 4). Finally we present the results of our evaluation (§ 5), followed by some discussion and outlines for future research. 2 The Hinoki Treebank The Hinoki treebank currently consists of around 95,000 annotated dictionary definition and example sentences. The dictionary is the Lexeed Semantic Database of Japanese (Kasahara et al., 2004), which consists of all words with a familiarity greater than or equal to five on a scale of one to seven. This gives 28,000 words, divided into 46,347 different senses. Each sense has a definition sentence and example sentence written using only these 28,000 familiar words (and some function words). Many senses have more than one sentence in the definition: there are 81,000 defining sentences in all. The data used in our evaluation is taken from the first sentence of the definitions of all words with a familiarity greater than six (9,854 sentences). The Japanese grammar JACY was extended until the coverage was over 80% (Bond et al., 2004). For evaluation of the treebanking we selected 5,000 of the sentences that could be parsed, and divided them into five 1,000 sentence sets (A–E). Definition sentences tend to vary widely in form depending on the part of speech of the word being defined — each set was constructed with roughly the same distribution of defined words, as well as having roughly the same length (the average was 9.9, ranging from 9.5–10.4). A (simplified) example of an entry (Sense 2 of k¯aten “curtain: any barrier to communication or vision”), and a syntactic view of its parse are given in Figure 1. There were 6 parses for this definition sentence. The full parse is an HPSG sign, containing both syntactic and semantic information. A view of the semantic information is given in Figure 21. UTTERANCE NP VP N PP V NP DET N CASE-P
aru monogoto o kakusu mono a certain stuff ACC hide thing Curtain2: “a thing that hides something” Figure 1: Syntactic View of the Definition of 2 k¯aten “curtain” ⟨h0, x2 {h0 : proposition(h5) h1 : aru(e1, x1, u0) “a certain” h1 : monogoto(x1) “stuff” h2 : u def(x1, h1, h6) h5 : kakusu(e2, x2, x1) “hide” h3 : mono(x2) “thing” h4 : u def(x2, h3, h7)}⟩ Figure 2: Semantic View of the Definition of 2 k¯aten “curtain” The semantic view shows some ambiguity has been resolved that is not visible in the purely syntactic view. In Japanese, relative clauses can have gapped and non-gapped readings. In the gapped reading (selected here), mono “thing” is the subject of kakusu “hide”. In the non-gapped reading there is some unspecified relation between the thing and the verb phrase. This is similar to the difference in the two readings of the day he knew in English: “the day that he knew about” (gapped) vs “the day on which he knew (something)” (non-gapped). 1The semantic representation used is Minimal Recursion Semantics (Copestake et al., Forthcoming). The figure shown here hides some of the detail of the underspecified scope. 331 Such semantic ambiguity is resolved by selecting the correct derivation tree that includes the applied rules in building the tree, as shown in Figure 3. In the next phase of the Hinoki project, we are concentrating on acquiring an ontology from these semantic representations and using it to improve the parse selection (Bond et al., 2004). 3 Treebanking Using Discriminants Selection among analyses in our set-up is done through a choice of elementary discriminants, basic and mostly independent contrasts between parses. These are (relatively) easy to judge by annotators. The system selects features that distinguish between different parses, and the annotator selects or rejects the features until only one parse is left. In a small number of cases, annotation may legitimately leave more than one parse active (see below). The system we used for treebanking was the [incr tsdb()] Redwoods environment2 (Oepen et al., 2002). The number of decisions for each sentence is proportional to the log of the number of parses. The number of decisions required depends on the ambiguity of the parses and the length of the input. For Hinoki, on average, the number of decisions presented to the annotator was 27.5. However, the average number of decisions needed to disambiguate each sentence was only 2.6, plus an additional decision to accept or reject the selected parses3. In general, even a sentence with 100 parses requires only around 5 decisions and 1,000 parses only around 7 decisions. A graph of parse results versus number of decisions presented and required is given in Figure 6. The primary data stored in the treebank is the derivation tree: the series of rules and lexical items the parser used to construct the parse. This, along with the grammar, can be combined to rebuild the complete HPSG sign. The annotators task is to select the appropriate derivation tree or trees. The possible derivation trees for 2 k¯aten “curtain” are shown in Figure 3. Nodes in the trees indicate applied rules, simplified lexical types or words. We 2The [incr tsdb()] system, Japanese and English grammars and the Redwoods treebank of English are available from the Deep Linguistic Processing with HPSG Initiative (DELPH-IN: http://www.delph-in.net/). 3This average is over all sentences, even non-ambiguous ones, which only require a decision as to whether to accept or reject. 
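The following is a small, self-contained sketch of the discriminant mechanism described in this section: each discriminant is associated with the set of parses it occurs in, an accept decision keeps exactly those parses, a reject decision removes them, and only discriminants that still distinguish among the remaining parses are offered next. It is an illustration of the idea, not the [incr tsdb()] implementation, and the `decide` callback stands in for the annotator.

```python
def disambiguate(parse_ids, discriminants, decide):
    """Narrow a parse forest by accepting or rejecting discriminants.

    parse_ids:     iterable of parse identifiers
    discriminants: dict name -> set of parse ids exhibiting that property
    decide:        callable(name) -> True (accept) or False (reject)
    """
    active = set(parse_ids)
    decisions = []
    while len(active) > 1:
        # discriminants that still split the remaining parses
        open_ds = {d: ids & active for d, ids in discriminants.items()
                   if 0 < len(ids & active) < len(active)}
        if not open_ds:
            break                        # residual ambiguity may be kept
        name = sorted(open_ds)[0]        # the annotator would choose freely here
        accepted = decide(name)
        decisions.append((name, accepted))
        if accepted:
            active &= open_ds[name]      # accept: keep parses with this property
        else:
            active -= open_ds[name]      # reject: rule those parses out
    return active, decisions
```

With largely independent binary discriminants, each decision removes a sizeable fraction of the remaining parses, which is why the number of decisions needed grows roughly with the logarithm of the number of parses, as noted above.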
will use it as an example to explain the annotation process. Figure 3 also displays POS tag from a separate tagger, shown in typewriter font.4 This example has two major sources of ambiguity. One is lexical: aru “a certain/have/be” is ambiguous between a reading as a determiner “a certain” (det-lex) and its use as a verb of possession “have” (aru-verb-lex). If it is a verb, this gives rise to further structural ambiguity in the relative clause, as discussed in Section 2. Reliable POS tags can thus resolve some ambiguity, although not all. Overall, this five-word sentence has 6 parses. The annotator does not have to examine every tree but is instead presented with a range of 9 discriminants, as shown in Figure 4, each local to some segment of the utterance (word or phrase) and thus presenting a contrast that can be judged in isolation. Here the first column shows deduced status of discriminants (typically toggling one discriminant will rule out others), the second actual decisions, the third the discriminating rule or lexical type, the fourth the constituent spanned (with a marker showing segmentation of daughters, where it is unambiguous), and the fifth the parse trees which include the rule or lexical type. D A Rules / Lexical Types Subtrees / Lexical items Parse Trees ? ? rel-cl-sbj-gap
2,4,6
? ? rel-clause 1,3,5
- ? rel-cl-sbj-gap 3,4
- ? rel-clause 5,6
+ ? hd-specifier 1,2
? ? subj-zpro
2,4,6 - ? subj-zpro 5,6 - ? aru-verb-lex 3–6 + + det-lex 1,2 +: positive decision -: negative decision ?: indeterminate / unknown Figure 4: Discriminants (marked after one is selected). D : deduced decisions, A : actual decisions After selecting a discriminant, the system recalculates the discriminant set. Those discriminants which can be deduced to be incompatible with the decisions are marked with ‘−’, and this information is recorded. The tool then presents to the annotator 4The POS markers used in our experiment are from the ChaSen POS tag set (http://chasen.aist-nara.ac. jp/), we show simplified transliterated tag names. 332 NP-frag rel-cl-sbj-gap hd-complement N hd-complement V hd-specifier DET N CASE-P ! " " "$# # # % % % & & &(' ' ' " " " adnominal noun particle verb noun a certain thing ACC hide thing Tree #1 NP-frag rel-clause hd-complement N hd-complement subj-zpro hd-specifier V DET N CASE-P ! " " "$# # # % % % & & &(' ' ' " " " adnominal noun particle verb noun a certain thing ACC hide thing Tree #2 NP-frag rel-cl-sbj-gap hd-complement N hd-complement V rel-cl-sbj-gap V N CASE-P ! " " ")# # # % % % & & &*' ' ' " " " verb noun particle verb noun exist thing ACC hide thing Tree #3 NP-frag rel-clause hd-complement N hd-complement subj-zpro rel-cl-sbj-gap V V N CASE-P ! " " "$# # # % % % & & &(' ' ' " " " verb noun particle verb noun exist thing ACC hide thing Tree #4 NP-frag rel-cl-sbj-gap hd-complement N hd-complement V rel-clause subj-zpro V N CASE-P ! " " ")# # # % % % & & &*' ' ' " " " verb noun particle verb noun exist thing ACC hide thing Tree #5 NP-frag rel-clause hd-complement N hd-complement subj-zpro rel-clause V subj-zpro V N CASE-P ! " " ")# # # % % % & & &*' ' ' " " " verb noun particle verb noun exist thing ACC hide thing Tree #6 Figure 3: Derivation Trees of the Definition of +, 2 k¯aten “curtain” only those discriminants which still select between the remaining parses, marked with ‘?’. In this case the desired parse can be selected with a minimum of two decisions. If the first decision is that -/. aru is a determiner (det-lex), it eliminates four parses, leaving only three discriminants (corresponding to trees #1 and #2 in Figure 3) to be decided on in the second round of decisions. Selecting mono “thing” as the gapped subject of 0 kakusu “hide” (rel-cl-sbj-gap) resolves the parse forest to the single correct derivation tree #1 in Figure 3. The annotator also has the option of leaving some ambiguity in the treebank. For example, the verbal noun 1 2, ¯opun “open” is defined with the single word 354 aku/hiraku “open”. This word however, has two readings: aku which is intransitive and hiraku which is transitive. As 1 2, ¯opun “open” can be either transitive or intransitive, both parses are in fact correct! In such cases, the annotators were instructed to leave both parses. Finally, the annotator has the option of rejecting all the parses presented, if none have the correct syntax and semantics. This decision has to be made even for sentences with a unique parse. 4 Using POS Tags to Blaze the Trees Sentences in the Lexeed dictionary were already part-of-speech tagged so we investigated exploiting this information to reduce the number of decisions the annotators had to make. More generally, there are many large corpora with a subset of the information we desire already available. For example, the Kyoto Corpus (Kurohashi and Nagao, 2003) has part of speech information and dependency information, but not the detailed information available from an HPSG analysis. 
However, the existing information can be used to blaze5 trees in the parse forest: that is to select or reject certain discriminants based on existing information. Because other sources of information may not be entirely reliable, or the granularity of the information may be different from the granularity in our 5In forestry, to blaze is to mark a tree, usually by painting and/or cutting the bark, indicating those to be cut or the course of a boundary, road, or trail. 333 treebank, we felt it was important that the blazes be defeasible. The annotator can always reject the blazed decisions and retag the sentence. In [incr tsdb()], it is currently possible to blaze using POS information. The criteria for the blazing depend on both the grammar used to make the treebank and the POS tag set. The system matches the tagged POS against the grammar’s lexical hierarchy, using a one-to-many mapping of parts of speech to types of the grammar and a subsumption-based comparison. It is thus possible to write very general rules. Blazes can be positive to accept a discriminant or negative to reject it. The blaze markers are defined to be a POS tag, and then a list of lexical types and a score. The polarity of the score determines whether to accept or reject. The numerical value allows the use of a threshold, so that only those markers whose absolute value is greater than a threshold will be used. The threshold is currently set to zero: all blaze markers are used. Due to the nature of discriminants, having two positively marked but competing discriminants for the same word will result in no trees satisfying the conditions. Therefore, it is important that only negative discriminants should be used for more general lexical types. Hinoki uses 13 blaze markers at present, a simplified representation of them is shown in Figure 5. E.g. if ⟨verb-aux, v-stem-lex, -1.0⟩was a blaze marker, then any sentence with a verb that has two non-auxiliary entries (e.g. hiraku/aku vt and vi) would be eliminated. The blaze set was derived from a conservative inspection of around 1,000 trees from an earlier round of annotation of similar data, identifying high-frequency contrasts in lexical ambiguity that can be confidently blazed from the POS granularity available for Lexeed. POS tags Lexical Types in the Grammar Score verb-aux v-stem-lex −1.0 verb-main aspect-stem-lex −1.0 noun verb-stem-lex −1.0 adnominal noun mod-lex-l 0.9 det-lex 0.9 conjunction n conj-p-lex 0.9 v-coord-end-lex 0.9 adjectival-noun noun-lex −1.0 Figure 5: Some Blaze Markers used in Hinoki For the example shown in Figures 3 and 4, the blaze markers use the POS tagging of the determiner -6. aru to mark it as det-lex. This eliminates four parses and six discriminants leaving only three to be presented to the annotator. On average, marking blazes reduced the average number of blazes presented per sentence from 27.5 to 23.8 (a reduction of 15.6%). A graphical view of number of discriminants versus parse ambiguity is shown in Figure 6. 5 Measuring Inter-Annotator Agreement Lacking a task-oriented evaluation scenario at this point, inter-annotator agreement is our core measure of annotation consistency in Hinoki. All trees (and associated semantics) in Hinoki are derived from a computational grammar and thus should be expected to demonstrate a basic degree of internal consistency. On the other hand, the use of the grammar exposes large amounts of ambiguity to annotators that might otherwise go unnoticed. 
It is therefore not a priori clear whether the Redwoods-style approach to treebank construction as a general methodology results in a high degree of internal consistency or a comparatively low one.

                      α–β    β–γ    γ–α    Average
Parse Agreement       63.9   68.2   64.2   65.4
Reject Agreement       4.8    3.0    4.1    4.0
Parse Disagreement    17.5   19.2   17.9   18.2
Reject Disagreement   13.7    9.5   13.8   12.4

Table 1: Exact Match Inter-annotator Agreement

Table 1 quantifies inter-annotator agreement in terms of the harshest possible measure, the proportion of sentences for which two annotators selected the exact same parse or both decided to reject all available parses. Each set was annotated by three annotators (α, β, γ). They were all native speakers of Japanese with a high score in a Japanese proficiency test (Amano and Kondo, 1998) but no linguistic training. The average annotation speed was 50 sentences an hour. In around 19 per cent of the cases annotators chose to not fully disambiguate, keeping two or even three active parses; for these we scored j/(i + j), with j being the number of identical pairs in the cross-product of active parses, and i the number of mismatches. One annotator keeping {1, 2, 3}, for example, and another {3, 4} would be scored as 1/6. In addition to leaving residual ambiguity, annotators opted to reject all available parses in some eight per cent of cases, usually indicating opportunities for improvement of the underlying grammar. The Parse Agreement figures (65.4%) in Table 1 are those sentences where both annotators chose one or more parses, and they showed non-zero agreement. This figure is substantially above the published figure of 52% for NeGra (Brants et al., 2003). Parse Disagreement is where both chose parses, but there was no agreement. Reject Agreement shows the proportion of sentences for which both annotators found no suitable analysis. Finally, Reject Disagreement is those cases where one annotator found no suitable parses, but the other selected one or more. The striking contrast between the comparatively high exact match ratios (over a random choice baseline of below seven per cent; κ = 0.628) and the low agreement between annotators on which structures to reject completely suggests that the latter type of decision requires better guidelines, ideally tests that can be operationalized. To obtain both a more fine-grained measure and also be able to compare to related work, we computed a labeled precision f-score over derivation trees. Note that our inventory of labels is large, as they correspond in granularity to structures of the grammar: close to 1,000 lexical and 120 phrase types. As there is no 'gold' standard in contrasting two annotations, our labeled constituent measure F is the harmonic mean of standard labeled precision P (Black et al., 1991; Civit et al., 2003) applied in both 'directions': for a pair of annotators α and β, F is defined as:

F = 2 · P(α, β) · P(β, α) / (P(α, β) + P(β, α))

As found in the discussion of exact match inter-annotator agreement over the entire treebank, there are two fundamentally distinct types of decisions made by annotators, viz. (a) elimination of unwanted ambiguity and (b) the choice of keeping at least one analysis or rejecting the entire item. Of these, only (b) applies to items that are assigned only one parse by the grammar, hence we omit unambiguous items from our labeled precision measures (a little more than twenty per cent of the total) to exclude trivial agreement from the comparison.
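A small sketch of the two agreement scores discussed above (hypothetical helper names): the fractional credit for partially disambiguated items follows the {1, 2, 3} vs. {3, 4} example, and F is the harmonic mean of labeled precision taken in both directions.

```python
from itertools import product

def exact_match_score(parses_a, parses_b):
    """Identical pairs over the cross-product of the two annotators' active parses."""
    pairs = list(product(parses_a, parses_b))
    identical = sum(1 for x, y in pairs if x == y)
    return identical / len(pairs)

def mutual_f(precision_ab, precision_ba):
    """Harmonic mean of labeled precision applied in both 'directions'."""
    return 2 * precision_ab * precision_ba / (precision_ab + precision_ba)

print(exact_match_score({1, 2, 3}, {3, 4}))   # -> 0.1666..., the 1/6 case
print(round(mutual_f(96.5, 96.1), 2))         # two per-direction precisions (made-up values)
```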
In the same spirit, to eliminate noise hidden in pairs of items where one or both annotators opted for multiple valid parses, we further reduced the comparison set to those pairs where both annotators opted for exactly one active parse. Intersecting both conditions for pairs of annotators leaves us with subsets of around 2,500 sentences each, for which we record F values ranging from 95.1 to 97.4; see Table 2. When broken down by pairs of annotators and sets of 1,000 items each, which have been annotated in strict sequential order, F scores in Table 2 confirm that: (a) inter-annotator agreement is stable, all three annotators appear to have performed equally (well); (b) with growing experience, there is a slight increase in F scores over time, particularly when taking into account that set E exhibits a noticeably higher average ambiguity rate (1208 parses per item) than set D (820 average parses); and (c) Hinoki inter-annotator agreement compares favorably to results reported for the German NeGra (Brants, 2000) and Spanish Cast3LB (Civit et al., 2003) treebanks, both of which used manual mark-up seeded from automated POS tagging and chunking. Compared to the 92.43 per cent labeled F score reported by Brants (2000), Hinoki achieves an 'error' (i.e. disagreement) rate of less than half, even though our structures are richer in information and should probably be contrasted with the 'edge label' F score for NeGra, which is 88.53 per cent. At the same time, it is unknown to what extent results are influenced by differences in text genre, i.e. average sentence length of our dictionary definitions is noticeably shorter than for the NeGra newspaper corpus. In addition, our measure is computed only over a subset of the corpus (those trees that can be parsed and that had multiple parses which were not rejected). If we recalculate over all 5,000 sentences, including rejected sentences (F measure of 0) and those with no ambiguity (F measure of 1), then the average F measure is 83.5, slightly worse than the score for NeGra. However, the annotation process itself identifies which sentences are problematic and how to improve the agreement: improve the grammar so that fewer sentences need to be rejected and then update the annotation. The Hinoki treebank is, by design, dynamic, so we expect to continue to improve the grammar and annotation continuously over the project's lifetime.

Test    α–β            β–γ            γ–α            Average
Set     #      F       #      F       #      F       F
A       507    96.03   516    96.22   481    96.24   96.19
B       505    96.79   551    96.40   511    96.57   96.58
C       489    95.82   517    95.15   477    95.42   95.46
D       454    96.83   477    96.86   447    97.40   97.06
E       480    95.15   497    96.81   484    96.57   96.51
        2435   96.32   2558   96.28   2400   96.47   96.36

Table 2: Inter-Annotator Agreement as Mutual Labeled Precision F-Score

Test    Annotator Decisions          Blazed
Set     α        β        γ          Decisions
A       2,659    2,606    3,045      416
B       2,848    2,939    2,253      451
C       1,930    2,487    2,882      468
D       2,254    2,157    2,347      397
E       1,769    2,278    1,811      412

Table 3: Number of Decisions Required

5.1 The Effects of Blazing

Table 3 shows the number of decisions per annotator, including revisions, and the number of decisions that can be done automatically by the part-of-speech blazed markers. The test sets where the annotators used the blazes are shown underlined. The final decision to accept or reject the parses was not included, as it must be made for every sentence. The blazed test sets require far fewer annotator decisions.
In order to evaluate the effect of the blazes, we compared the average number of decisions per sentence for the test sets in which some annotators used blazes and some did not (B–D). The average number of decisions went from 2.63 to 2.11, a substantial reduction of 19.5%. Similarly, the time required to annotate an utterance was reduced from 83 seconds per sentence to 70, a speed up of 15.7%. We did not include A and E, as there was variation in difficulty between test sets, and it is well known that annotators improve (at least in speed of annotation) over time. Research on other projects has shown that it is normal for learning curve differences to swamp differences in tools (Wallis, 2003, p. 65). The number of decisions against the number of parses is shown in Figure 6, both with and without the blazes.

6 Discussion

Annotators found the rejections the most time consuming. If a parse was eliminated, they often redid the decision process several times to be sure they had not eliminated the correct parse in error, which was very time consuming.

[Figure 6: Number of Decisions versus Number of Parses (Test Sets B–D). Selected and presented discriminants, with and without blazes, plotted against the number of readings (log scale).]

This shows that the most important consideration for the success of treebanking in this manner is the quality of the grammar. Fortunately, treebanking offers direct feedback to the grammar developers. Rejected sentences identify which areas need to be improved, and because the treebank is dynamic, it can be improved when we improve the analyses in the grammar. This is a notable improvement over semi-automatically constructed treebanks, such as the Penn Treebank, where many inconsistencies remain (around 4,500 types estimated by Dickinson and Meurers, 2003) and the treebank does not allow them to be identified automatically or easily updated. Because we are simultaneously using the semantic output of the grammar in building an ontology, and the syntax and semantics are tightly coupled, the knowledge acquisition provides a further route for feedback. Extracting an ontology from the semantic representations revealed many issues with the semantics that had previously been neglected. Our top priority for further work within Hinoki is to improve the grammar so as to both increase the cover and decrease the number of results with no acceptable parses. This will allow us to treebank a higher proportion of sentences, with even higher precision. For more general work on treebank construction, we would like to investigate (1) using other information for blazes (syntactic constituents, dependencies, translation data) and marking blazes automatically using confidence scores from existing POS taggers or parsers, (2) other agreement measures (for example agreement over the semantic representations), (3) presenting discriminants based on the semantic representations.

7 Conclusions

We conducted an experiment to measure inter-annotator agreement for the Hinoki corpus. Three annotators marked up 5,000 sentences. Sentence agreement was an unparalleled 65.4%. The method used identifies problematic annotations as a byproduct, and allows the treebank to be improved as its underlying grammar improves. We also presented a method to speed up the annotation by exploiting existing part-of-speech tags.
This led to a decrease in the number of annotation decisions of 19.5%. Acknowledgments The authors would like to thank the other members of the NTT Machine Translation Research Group, as well as Timothy Baldwin and Dan Flickinger. This research was supported by the research collaboration between the NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation and CSLI, Stanford University. References Anne Abeill´e, editor. Treebanks: Building and Using Parsed Corpora. Kluwer Academic Publishers, 2003. Shigeaki Amano and Tadahisa Kondo. Estimation of mental lexicon size with word familiarity database. In International Conference on Spoken Language Processing, volume 5, pages 2119–2122, 1998. Ezra Black, Steven Abney, Dan Flickinger, Claudia Gdaniec, Ralph Grishman, Philip Harrison, Donald Hindle, Robert Ingria, Fred Jelinek, Judith Klavans, Mark Lieberman, and Tomek Strzalkowski. A procedure for quantitatively comparing the syntactic coverage of English. In Proceedings of the Speech and Natural Language Workshop, pages 306– 311, Pacific Grove, CA, 1991. Morgan Kaufmann. Francis Bond, Sanae Fujita, Chikara Hashimoto, Kaname Kasahara, Shigeko Nariyama, Eric Nichols, Akira Ohtani, Takaaki Tanaka, and Shigeaki Amano. The Hinoki treebank: A treebank for text understanding. In Proceedings of the First International Joint Conference on Natural Language Processing (IJCNLP-04), pages 554–559, Hainan Island, 2004. Thorsten Brants. Inter-annotator agreement for a German newspaper corpus. In Proceedings of the 2nd International Conference on Language Resources and Evaluation (LREC 2000), Athens, Greece, 2000. Thorsten Brants, Wojciech Skut, and Hans Uszkoreit. Syntactic annotation of a German newspaper corpus. In Abeill´e (2003), chapter 5, pages 73–88. David Carter. The TreeBanker: a tool for supervised training of parsed corpora. In ACL Workshop on Computational Environments for Grammar Development and Linguistic Engineering, Madrid, 1997. (http://xxx.lanl.gov/ abs/cmp-lg/9705008). Montserrat Civit, Alicia Ageno, Borja Navarro, N´uria Bufi, and Maria Antonia Mart´ı. Qualitative and quantitative analysis of annotators’ agreement in the development of Cast3LB. In Proceedings of the Second Workshop on Treebanks and Linguistic Theories, V¨axj¨o, Sweeden, 2003. Ann Copestake, Daniel P. Flickinger, Carl Pollard, and Ivan A. Sag. Minimal Recursion Semantics. An introduction. Journal of Research in Language and Computation, Forthcoming. Markus Dickinson and W. Detmar Meurers. Detecting inconsistencies in treebanks. In Proceedings of the Second Workshop on Treebanks and Linguistic Theories, V¨axj¨o, Sweeden, 2003. Kaname Kasahara, Hiroshi Sato, Francis Bond, Takaaki Tanaka, Sanae Fujita, Tomoko Kanasugi, and Shigeaki Amano. Construction of a Japanese semantic lexicon: Lexeed. SIG NLC-159, IPSJ, Tokyo, 2004. (in Japanese). Adam Kilgariff and Joseph Rosenzweig. Framework and results for English SENSEVAL. Computers and the Humanities, 34 (1–2):15–48, 2000. Special Issue on SENSEVAL. Sadao Kurohashi and Makoto Nagao. Building a Japanese parsed corpus — while improving the parsing system. In Abeill´e (2003), chapter 14, pages 249–260. Stephan Oepen, Dan Flickinger, and Francis Bond. Towards holistic grammar engineering and testing — grafting treebank maintenance into the grammar revision cycle. In Beyond Shallow Analyses — Formalisms and Statistical Modeling for Deep Analysis (Workshop at IJCNLP-2004), Hainan Island, 2004. (http://www-tsujii.is.s. u-tokyo.ac.jp/bsa/). 
Stephan Oepen, Kristina Toutanova, Stuart Shieber, Christoper D. Manning, Dan Flickinger, and Thorsten Brant. The LinGO redwoods treebank: Motivation and preliminary applications. In 19th International Conference on Computational Linguistics: COLING-2002, pages 1253–7, Taipei, Taiwan, 2002. Carl Pollard and Ivan A. Sag. Head Driven Phrase Structure Grammar. University of Chicago Press, Chicago, 1994. Melanie Siegel and Emily M. Bender. Efficient deep processing of Japanese. In Proceedings of the 3rd Workshop on Asian Language Resources and International Standardization at the 19th International Conference on Computational Linguistics, Taipei, 2002. Sean Wallis. Completing parsed corpora: From correction to evolution. In Abeill´e (2003), chapter 4, pages 61–71. 337 | 2005 | 41 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 338–345, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics A Dynamic Bayesian Framework to Model Context and Memory in Edit Distance Learning: An Application to Pronunciation Classification Karim Filali and Jeff Bilmes∗ Departments of Computer Science & Engineering and Electrical Engineering University of Washington Seattle, WA 98195, USA {karim@cs,bilmes@ee}.washington.edu Abstract Sitting at the intersection between statistics and machine learning, Dynamic Bayesian Networks have been applied with much success in many domains, such as speech recognition, vision, and computational biology. While Natural Language Processing increasingly relies on statistical methods, we think they have yet to use Graphical Models to their full potential. In this paper, we report on experiments in learning edit distance costs using Dynamic Bayesian Networks and present results on a pronunciation classification task. By exploiting the ability within the DBN framework to rapidly explore a large model space, we obtain a 40% reduction in error rate compared to a previous transducer-based method of learning edit distance. 1 Introduction Edit distance (ED) is a common measure of the similarity between two strings. It has a wide range of applications in classification, natural language processing, computational biology, and many other fields. It has been extended in various ways; for example, to handle simple (Lowrance and Wagner, 1975) or (constrained) block transpositions (Leusch et al., 2003), and other types of block operations (Shapira and Storer, 2003); and to measure similarity between graphs (Myers et al., 2000; Klein, 1998) or automata (Mohri, 2002). ∗This material was supported by NSF under Grant No. ISS0326276. Another important development has been the use of data-driven methods for the automatic learning of edit costs, such as in (Ristad and Yianilos, 1998) in the case of string edit distance and in (Neuhaus and Bunke, 2004) for graph edit distance. In this paper we revisit the problem of learning string edit distance costs within the Graphical Models framework. We apply our method to a pronunciation classification task and show significant improvements over the standard Levenshtein distance (Levenshtein, 1966) and a previous transducer-based learning algorithm. In section 2, we review a stochastic extension of the classic string edit distance. We present our DBNbased edit distance models in section 3 and show results on a pronunciation classification task in section 4. In section 5, we discuss the computational aspects of using our models. We end with our conclusions and future work in section 6. 2 Stochastic Models of Edit Distance Let sm 1 = s1s2...sm be a source string over a source alphabet A, and m the length of the string. sj i is the substring si...sj and sj i is equal to the empty string, ϵ, when i > j. Likewise, tn 1 denotes a target string over a target alphabet B, and n the length of tn 1. A source string can be transformed into a target string through a sequence of edit operations. We write ⟨s, t⟩((s, t) ̸= (ϵ, ϵ)) to denote an edit operation in which the symbol s is replaced by t. If s=ϵ and t̸=ϵ, ⟨s, t⟩is an insertion. If s̸=ϵ and t=ϵ, ⟨s, t⟩ is a deletion. When s̸=ϵ, t̸=ϵ and s̸=t, ⟨s, t⟩is a substitution. In all other cases, ⟨s, t⟩is an identity. 
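A tiny sketch of the edit-operation vocabulary just defined: an operation is a pair ⟨s, t⟩ with the empty string standing in for ϵ, classified by which side is empty. Purely illustrative.

```python
EPS = ""

def classify(op):
    s, t = op
    if (s, t) == (EPS, EPS):
        raise ValueError("<eps, eps> is not a legal edit operation")
    if s == EPS:
        return "insertion"
    if t == EPS:
        return "deletion"
    return "substitution" if s != t else "identity"

print([classify(op) for op in [(EPS, "a"), ("b", EPS), ("a", "b"), ("a", "a")]])
# -> ['insertion', 'deletion', 'substitution', 'identity']
```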
The string edit distance, d(s_1^m, t_1^n), between s_1^m and t_1^n is defined as the minimum weighted sum of the number of deletions, insertions, and substitutions required to transform s_1^m into t_1^n (Wagner and Fischer, 1974). A O(m · n) Dynamic Programming (DP) algorithm exists to compute the ED between two strings. The algorithm is based on the following recursion:

d(s_1^i, t_1^j) = min{ d(s_1^{i−1}, t_1^j) + γ(⟨s_i, ϵ⟩),
                       d(s_1^i, t_1^{j−1}) + γ(⟨ϵ, t_j⟩),
                       d(s_1^{i−1}, t_1^{j−1}) + γ(⟨s_i, t_j⟩) }

with d(ϵ, ϵ) = 0 and γ : {⟨s, t⟩ | (s, t) ≠ (ϵ, ϵ)} → ℝ+ a cost function. When γ maps non-identity edit operations to unity and identities to zero, string ED is often referred to as the Levenshtein distance. To learn the edit distance costs from data, Ristad and Yianilos (1998) use a generative model (henceforth referred to as the RY model) based on a memoryless transducer of string pairs. Below we summarize their main idea and introduce our notation, which will be useful later on. We are interested in modeling the joint probability P(S_1^m = s_1^m, T_1^n = t_1^n | θ) of observing the source/target string pair (s_1^m, t_1^n) given model parameters θ. S_i (resp. T_i), 1 ≤ i ≤ m, is a random variable (RV) associated with the event of observing a source (resp. target) symbol at position i. (We follow the convention of using capital letters for random variables and lowercase letters for instantiations of random variables.) To model the edit operations, we introduce a hidden RV, Z, that takes values in ((A ∪ ϵ) × (B ∪ ϵ)) \ {(ϵ, ϵ)}. Z can be thought of as a random vector with two components, Z^(s) and Z^(t). We can then write the joint probability P(s_1^m, t_1^n | θ) as

P(s_1^m, t_1^n | θ) = Σ_{z_1^ℓ : v(z_1^ℓ) = ⟨s_1^m, t_1^n⟩, max(m,n) ≤ ℓ ≤ m+n} P(Z_1^ℓ = z_1^ℓ, s_1^m, t_1^n | θ)    (1)

where v(z_1^ℓ) is the yield of the sequence z_1^ℓ: the string pair output by the transducer. Equation 1 says that the probability of a particular pair of strings is equal to the sum of the probabilities of all possible ways to generate the pair by concatenating the edit operations z_1 ... z_ℓ. If we make the assumption that there is no dependence between edit operations, we call our model memoryless. P(Z_1^ℓ, s_1^m, t_1^n | θ) can then be factored as Π_i P(Z_i, s_1^m, t_1^n | θ). In addition, we call the model context-independent if we can write Q(z_i) = P(Z_i = z_i, s_1^m, t_1^n | θ), 1 < i < ℓ, where z_i = ⟨z_i^(s), z_i^(t)⟩, in the form

Q(z_i) ∝  f_ins(t_{b_i})            if z_i^(s) = ϵ, z_i^(t) = t_{b_i}
          f_del(s_{a_i})            if z_i^(s) = s_{a_i}, z_i^(t) = ϵ
          f_sub(s_{a_i}, t_{b_i})   if (z_i^(s), z_i^(t)) = (s_{a_i}, t_{b_i})
          0                         otherwise                               (2)

where Σ_z Q(z) = 1; a_i = Σ_{j=1}^{i−1} 1{z_j^(s) ≠ ϵ} (resp. b_i) is the index of the source (resp. target) string generated up to the ith edit operation; and f_ins, f_del, and f_sub are functions mapping to [0, 1]. Context independence is not to be taken here to mean Z_i does not depend on s_{a_i} or t_{b_i}. It depends on them through the global context which forces Z_1^ℓ to generate (s_1^m, t_1^n). The RY model is memoryless and context-independent (MCI). Equation 2 also implicitly enforces the consistency constraint that the pair of symbols output, (z_i^(s), z_i^(t)), agrees with the actual pair of symbols, (s_{a_i}, t_{b_i}), that needs to be generated at step i in order for the total yield, v(z_1^ℓ), to equal the string pair. The RY stochastic model is similar to the one introduced earlier by Bahl and Jelinek (1975). The difference is that the Bahl model is memoryless and context-dependent (MCD); the f functions are now indexed by s_{a_i} (or t_{a_i}, or both) such that Σ_z Q_{s_{a_i}}(z) = 1 ∀ s_{a_i}.
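A direct implementation of the DP recursion above for the weighted string edit distance, with the cost function γ supplied as a parameter (Levenshtein costs by default).

```python
def edit_distance(s, t, gamma=None):
    if gamma is None:
        gamma = lambda a, b: 0 if a == b else 1   # Levenshtein costs
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + gamma(s[i - 1], "")      # deletion  <s_i, eps>
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + gamma("", t[j - 1])      # insertion <eps, t_j>
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + gamma(s[i - 1], ""),
                          d[i][j - 1] + gamma("", t[j - 1]),
                          d[i - 1][j - 1] + gamma(s[i - 1], t[j - 1]))
    return d[m][n]

print(edit_distance("kitten", "sitting"))   # -> 3
```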
In general, context dependence can be extended to include up to the whole source (and/or target) string, sai−1 1 , sai, sm ai+1. Several other types of dependence can be exploited as will be discussed in section 3. Both the Ristad and the Bahl transducer models give exponentially smaller probability to longer strings and edit sequences. Ristad presents an alternate explicit model of the joint probability of the length of the source and target strings. In this parametrization the probability of the length of an edit sequence does not necessarily decrease geometrically. A similar effect can be achieved by modeling the length of the hidden edit sequence explicitly (see section 3). 3 DBNs for Learning Edit Distance Dynamic Bayesian Networks (DBNs), of which Hidden Markov Models (HMMs) are the most fa2By convention, sai = ϵ for ai > m. Likewise, tbi = ϵ if bi > n. f ins(ϵ) = f del(ϵ) = f sub(ϵ, ϵ) = 0. This takes care of the case when we are past the end of a string. 339 mous representative, are well suited for modeling stochastic temporal processes such as speech and neural signals. DBNs belong to the larger family of Graphical Models (GMs). In this paper, we restrict ourselves to the class of DBNs and use the terms DBN and GM interchangeably. For an example in which Markov Random Fields are used to compute a context-sensitive edit distance see (Wei, 2004).3 There is a large body of literature on DBNs and algorithms associated with them. To briefly define a graphical model, it is a way of representing a (factored) probability distribution using a graph. Nodes of the graph correspond to random variables; and edges to dependence relations between the variables.4 To do inference or parameter learning using DBNs, various generic exact or approximate algorithms exist (Lauritzen, 1996; Murphy, 2002; Bilmes and Bartels, 2003). In this section we start by introducing a graphical model for the MCI transducer then present four additional classes of DBN models: context-dependent, memory (where an edit operation can depend on past operations), direct (HMM-like), and length models (in which we explicitly model the length of the sequence of edits to avoid the exponential decrease in likelihood of longer sequences). A few other models are discussed in section 4.2. 3.1 Memoryless Context-independent Model Fig. 1 shows a DBN representation of the memoryless context-independent transducer model (section 2). The graph represents a template which consists, in general, of three parts: a prologue, a chunk, and an epilogue. The chunk is repeated as many times as necessary to model sequences of arbitrary length. The product of unrolling the template is a Bayesian Network organized into a given number of frames. The prologue and the epilogue often differ from the chunk because they model boundary conditions, such as ensuring that the end of both strings is reached at or before the last frame. Associated with each node is a probability function that maps the node’s parent values to the values the node can take. We will refer to that function as a 3While the Markov Edit Distance introduced in the paper takes local statistical dependencies into account, the edit costs are still fixed and not corpus-driven. 4The concept of d-separation is useful to read independence relations encoded by the graph (Lauritzen, 1996). Figure 1: DBN for the memory-less transducer model. Unshaded nodes are hidden nodes with probabilistic dependencies with respect to their parents. 
Nodes with stripes are deterministic hidden nodes, i.e., they take a unique value for each configuration of their parents. Filled nodes are observed (they can be either stochastic or deterministic). The graph template is divided into three frames. The center frame is repeated m + n −2 times to yield a graph with a total of m+n frames, the maximum number of edit operations needed to transform sm 1 into tn 1. Outgoing light edges mean the parent is a switching variable with respect to the child: depending on the value of the switching RV, the child uses different CPTs and/or a different parent set. conditional probability table (CPT). Common to all the frames in fig. 1, are position RVs, a and b, which encode the current positions in the source and target strings resp.; source and target symbols, s and t; the hidden edit operation, Z; and consistency nodes sc and tc, which enforce the consistency constraint discussed in section 2. Because of symmetry we will explain the upper half of the graph involving the source string unless the target half is different. We drop subscripts when the frame number is clear from the context. In the first frame, a and b are observed to have value 1, the first position in both strings. a and b determine the value of the symbols s and t. Z takes a random value ⟨z(s), z(t)⟩. sc has the fixed observed value 1. The only configurations of its parents, Z and s, that satisfy P(sc = 1|s, z) > 0 are such that (Z(s) = s) or (Z(s) = ϵ and Z ̸= ⟨ϵ, ϵ⟩). This is the consistency constraint in equation 2. In the following frame, the position RV a2 depends on a1 and Z1. If Z1 is an insertion (i.e. Z(s) 1 = ϵ : the source symbol in the first frame is 340 not output), then a2 retains the same value as a1; otherwise a2 is incremented by 1 to point to the next symbol in the source string. The end RV is an indicator of when we are past the end of both source and target strings (a > m and b > n). end is also a switching parent of Z; when end = 0, the CPT of Z is the same as described above: a distribution over edit operations. When end = 1, Z takes, with probability 1, a fixed value outside the range of edit operations but consistent with s and t. This ensures 1) no “null” state (⟨ϵ, ϵ⟩) is required to fill in the value of Z until the end of the graph is reached; our likelihoods and model parameters therefore do not become dependent on the amount of “null” padding; and 2) no probability mass is taken from the other states of Z as is the case with the special termination symbol # in the original RY model. We found empirically that the use of either a null or an end state hurts performance to a small but significant degree. In the last frame, two new nodes make their appearance. send and tend ensure we are at or past the end of the two strings (the RV end only checks that we are past the end). That is why send depends on both a and Z. If a > m, send (observed to be 1) is 1 with probability 1. If a < m, then P(send=1) = 0 and the whole sequence Zℓ 1 has zero probability. If a = m, then send only gets probability greater than zero if Z is not an insertion. This ensures the last source symbol is indeed consumed. Note that we can obtain the equivalent of the total edit distance cost by using Viterbi inference and adding a costi variable as a deterministic child of the random variable Zi : in each frame the cost is equal to costi−1 plus 0 when Zi is an identity, or plus 1 otherwise. 3.2 Context-dependent Model Adding context dependence in the DBN framework is quite natural. In fig. 
2, we add edges from si, sprevi, and snexti to Zi. The sc node is no longer required because we can enforce the consistency constraint via the CPT of Z given its parents. snexti is an RV whose value is set to the symbol at the ai+1 position of the string, i.e., snexti=sai+1. Likewise, sprevi = sai−1. The Bahl model (1975) uses a dependency on si only. Note that si−1 is not necessarily equal to sai−1. Conditioning on si−1 induces an Figure 2: Context-dependent model. indirect dependence on whether there was an insertion in the previous step because si−1 = si might be correlated with the event Z(s) i−1 = ϵ. 3.3 Memory Model Memory models are another easy extension of the basic model as fig. 3 shows. Depending on whether the variable Hi−1 linking Zi−1 to Zi is stochastic or deterministic, there are several models that can be implemented; for example, a latent factor memory model when H is stochastic. The cardinality of H determines how much the information from one frame to the other is “summarized.” With a deterministic implementation, we can, for example, specify the usual P(Zi|Zi−1) memory model when H is a simple copy of Z or have Zi depend on the type of edit operation in the previous frame. Figure 3: Memory model. Depending on the type of dependency between Zi and Hi, the model can be latent variable based or it can implement a deterministic dependency on a function of Zi 3.4 Direct Model The direct model in fig. 4 is patterned on the classic HMM, where the unrolled length of graph is the same as the length of the sequence of observations. The key feature of this model is that we are required 341 to consume a target symbol per frame. To achieve that, we introduce two RVs, ins, with cardinality 2, and del, with cardinality at most m. The dependency of del on ins is to ensure the two events never happen concomitantly. At each frame, a is incremented either by the value of del in the case of a (possibly block) deletion or by zero or one depending on whether there was an insertion in the previous frame. An insertion also forces s to take value ϵ. Figure 4: Direct model. In essence the direct model is not very different from the context-dependent model in that here too we learn the conditional probabilities P(ti|si) (which are implicit in the CD model). 3.5 Length Model While this model (fig. 5) is more complex than the previous ones, much of the network structure is “control logic” necessary to simulate variable length-unrolling of the graph template. The key idea is that we have a new stochastic hidden RV, inclen, whose value added to that of the RV inilen determines the number of edit operations we are allowed. A counter variable, counter is used to keep track of the frame number and when the required number is reached, the RV atReqLen is triggered. If at that point we have just reached the end of one of the strings while the end of the other one is reached in this frame or a previous one, then the variable end is explained (it has positive probability). Otherwise, the entire sequence of edit operations up to that point has zero probability. 4 Pronunciation Classification In pronunciation classification we are given a lexicon, which consists of words and their corresponding canonical pronunciations. We are also provided with surface pronunciations and asked to find the most likely corresponding words. Formally, for each Figure 5: Length unrolling model. surface form, tn 1, we need to find the set of words ˆW s.t. ˆW = argmaxwP(w|tn 1). 
There are several ways we could model the probability P(w|tn 1). One way is to assume a generative model whereby a word w and a surface pronunciation tn 1 are related via an underlying canonical pronunciation sm 1 of w and a stochastic process that explains the transformation from sm 1 to tn 1. This is summarized in equation 3. C(w) denotes the set of canonical pronunciations of w. ˆW = argmax w X sm 1 ∈C(w) P(w|sm 1 )P(sm 1 , tn 1) (3) If we assume uniform probabilities P(w|sm 1 ) (sm 1 ∈C(w)) and use the max approximation in place of the sum in eq. 3 our classification rule becomes ˆW = {w| ˆS∩C(w)̸=∅, ˆS=argmax sm 1 P(sm 1 , tn 1)} (4) It is straightforward to create a DBN to model the joint probability P(w, sm 1 , tn 1) by adding a word RV and a canonical pronunciation RV on top of any of the previous models. There are other pronunciation classification approaches with various emphases. For example, Rentzepopoulos and Kokkinakis (1996) use HMMs to convert phoneme sequences to their most likely orthographic forms in the absence of a lexicon. 4.1 Data We use Switchboard data (Godfrey et al., 1992) that has been hand annotated in the context of the Speech Transcription Project (STP) described in (Greenberg et al., 1996). Switchboard consists of spontaneous informal conversations recorded over the 342 phone. Because of the informal non-scripted nature of the speech and the variety of speakers, the corpus presents much variety in word pronunciations, which can significantly deviate from the prototypical pronunciations found in a lexicon. Another source of pronunciation variability is the noise introduced during the annotation of speech segments. Even when the phone labels are mostly accurate, the start and end time information is not as precise and it affects how boundary phones get aligned to the word sequence. As a reference pronunciation dictionary we use a lexicon of the 2002 Switchboard speech recognition evaluation. The lexicon contains 40000 entries, but we report results on a reduced dictionary5 with 5000 entries corresponding to only those words that appear in our train and test sets. Ristad and Yianilos use a few additional lexicons, some of which are corpus-derived. We did reproduce their results on the different types of lexicons. For testing we randomly divided STP data into 9495 training words (corresponding to 9545 pronunciations) and 912 test words (901 pronunciations). For the Levenshtein and MCI results only, we performed ten-fold cross validation to verify we did not pick a non-representative test set. Our models are implemented using GMTK, a general-purpose DBN tool originally created to explore different speech recognition models (Bilmes and Zweig, 2002). As a sanity check, we also implemented the MCI model in C following RY’s algorithm. The error rate is computed by calculating, for each pronunciation form, the fraction of words that are correctly hypothesized and averaging over the test set. For example if the classifier returns five words for a given pronunciation, and two of the words are correct, the error rate is 3/5*100%. Three EM iterations are used for training. Additional iterations overtrained our models. 4.2 Results Table 1 summarizes our results using DBN based models. The basic MCI model does marginally better than the Levenshtein edit distance. This is consistent with the finding in RY: their gains come from the joint learning of the probabilities P(w|sm 1 ) and P(sm 1 , tn 1). 
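A sketch of the classification rule in eq. 4: score every canonical pronunciation in the lexicon against the surface form and return the words whose canonical form achieves the best score. The toy lexicon and the use of negative Levenshtein distance as a stand-in for the learned score P(s, t) are assumptions for illustration, not the paper's trained DBN scores.

```python
def classify(surface, lexicon, score):
    best = max(score(canon, surface)
               for canons in lexicon.values() for canon in canons)
    return {w for w, canons in lexicon.items()
            if any(score(c, surface) == best for c in canons)}

def neg_levenshtein(s, t):
    d = [[i + j if i * j == 0 else 0 for j in range(len(t) + 1)]
         for i in range(len(s) + 1)]
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (s[i - 1] != t[j - 1]))
    return -d[len(s)][len(t)]

# Phone sequences are lists of (made-up) phone symbols.
lexicon = {"and": [["ae", "n", "d"], ["ax", "n"]],
           "an":  [["ae", "n"]],
           "at":  [["ae", "t"]]}
print(classify(["ae", "n"], lexicon, neg_levenshtein))   # -> {'an'}
```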
Specifically, the word model accounts for much of their gains over the Levenshtein dis5Equivalent to the E2 lexicon in RY. tance. We use uniform priors and the simple classification rule in eq. 4. We feel it is more compelling that we are able to significantly improve upon standard edit distance and the MCI model without using any lexicon or word model. Memory Models Performance improves with the addition of a direct dependence of Zi on Zi−1. The biggest improvement (27.65% ER) however comes from conditioning on Z(t) i−1, the target symbol that is hypothesized in the previous step. There was no gain when conditioning on the type of edit operation in the previous frame. Context Models Interestingly, the exact opposite from the memory models is happening here when we condition on the source context (versus conditioning on the target context). Conditioning on si gets us to 21.70%. With si, si−1 we can further reduce the error rate to 20.26%. However, when we add a third dependency, the error rate worsens to 29.32%, which indicates a number of parameters too high for the given amount of training data. Backoff, interpolation, or state clustering might all be appropriate strategies here. Position Models Because in the previous models, when conditioning on the past, boundary conditions dictate that we use a different CPT in the first frame, it is fair to wonder whether part of the gain we witness is due to the implicit dependence on the source-target string position. The (small) improvement due to conditioning on bi indicates there is such dependence. Also, the fact that the target position is more informative than the source one is likely due to the misalignments we observed in the phonetically transcribed corpus, whereby the first or last phones would incorrectly be aligned with the previous or next word resp. I.e., the model might be learning to not put much faith in the start and end positions of the target string, and thus it boosts deletion and insertion probabilities at those positions. We have also conditioned on coarser-grained positions (beginning, middle, and end of string) but obtained the same results as with the fine-grained dependency. Length Models Modeling length helps to a small extent when it is added to the MCI and MCD models. Belying the assumption motivating this model, we found that the distribution over the RV inclen (which controls how much the edit sequence extends 343 beyond the length of the source string) is skewed towards small values of inclen. This indicates on that insertions are rare when the source string is longer than the target one and vice-versa for deletions. Direct Model The low error rate obtained by this model reflects its similarity to the context-dependent model. From the two sets of results, it is clear that source string context plays a crucial role in predicting canonical pronunciations from corpus ones. We would expect additional gains from modeling context dependencies across time here as well. Model Zi Dependencies % Err rate Lev none 35.97 Baseline none 35.55 Memory Zi−1 30.05 editOperationType(Zi−1) 36.16 stochastic binary Hi−1 33.87 Z(s) i−1 29.62 Z(t) i−1 27.65 Context si 21.70 ti 32.06 si, si−1 20.26 ti, ti−1 28.21 si, si−1, sai+1 29.32 si, sai+1 (sai−1 in last frame) 23.14 si, sai−1 (sai+1 in first frame) 23.15 Position ai 33.80 bi 31.06 ai, bi 34.17 Mixed bi,si 22.22 Z(t) i−1,si 24.26 Length none 33.56 si 20.03 Direct none 23.70 Table 1: DBN based model results summary. 
When we combine the best position-dependent or memory models with the context-dependent one, the error rate decreases (from 31.31% to 25.25% when conditioning on bi and si; and from 28.28% to 25.75% when conditioning on z(t) i−1 and si) but not to the extent conditioning on si alone decreases error rate. Not shown in table 1, we also tried several other models, which although they are able to produce reasonable alignments (in the sense that the Levenshtein distance would result in similar alignments) between two given strings, they have extremely poor discriminative ability and result in error rates higher than 90%. One such example is a model in which Zi depends on both si and ti. It is easy to see where the problem lies with this model once one considers that two very different strings might still get a higher likelihood than more similar pair because, given s and t s.t. s ̸= t, the probability of identity is obviously zero and that of insertion or deletion can be quite high; and when s = t, the probability of insertion (or deletion) is still positive. We observe the same non-discriminative behavior when we replace, in the MCI model, Zi with a hidden RV Xi, where Xi takes as values one of the four edit operations. 5 Computational Considerations The computational complexity of inference in a graphical model is related to the state space of the largest clique (maximal complete subgraph) in the graph. In general, finding the smallest such clique is NP-complete (Arnborg et al., 1987). In the case of the MCI model, however, it is not difficult to show that the smallest such clique contains all the RVs within a frame and the complexity of doing inference is order O(mn · max(m, n)). The reason there is a complexity gap is that the source and target position variables are indexed by the frame number and we do not exploit the fact that even though we arrive at a given source-target position pair along different edit sequence paths at different frames, the position pair is really the same regardless of its frame index. We are investigating generic ways of exploiting this constraint. In practice, however, state space pruning can significantly reduce the running time of DBN inference. Ukkonen (1985) reduces the complexity of the classic edit distance to O(d·max(m, n)), where d is the edit distance. The intuition there is that, assuming a small edit distance, the most likely alignments are such that the source position does not diverge too much from the target position. The same intuition holds in our case: if the source and the target position do not get too far out of sync, then at each step, only a small fraction of the m · n possible sourcetarget position configurations need be considered. The direct model, for example, is quite fast in practice because we can restrict the cardinality of the del RV to a constant c (i.e. we disallow long-span deletions, which for certain applications is a reasonable restriction) and make inference linear in n with a running time constant proportional to c2. 344 6 Conclusion We have shown how the problem of learning edit distance costs from data can be modeled quite naturally using Dynamic Bayesian Networks even though the problem lacks the temporal or order constraints that other problems such as speech recognition exhibit. This gives us confidence that other important problems such as machine translation can benefit from a Graphical Models perspective. 
Machine translation presents a fresh set of challenges because of the large combinatorial space of possible alignments between the source string and the target. There are several extensions to this work that we intend to implement or have already obtained preliminary results on. One is simple and block transposition. Another natural extension is modeling edit distance of multiple strings. It is also evident from the large number of dependency structures that were explored that our learning algorithm would benefit from a structure learning procedure. Maximum likelihood optimization might, however, not be appropriate in this case, as exemplified by the failure of some models to discriminate between different pronunciations. Discriminative methods have been used with significant success in training HMMs. Edit distance learning could benefit from similar methods. References S. Arnborg, D. G. Corneil, and A. Proskurowski. 1987. Complexity of finding embeddings in a k-tree. SIAM J. Algebraic Discrete Methods, 8(2):277–284. L. R. Bahl and F. Jelinek. 1975. Decoding for channels with insertions, deletions, and substitutions with applications to speech recognition. Trans. on Information Theory, 21:404–411. J. Bilmes and C. Bartels. 2003. On triangulating dynamic graphical models. In Uncertainty in Artificial Intelligence: Proceedings of the 19th Conference, pages 47–56. Morgan Kaufmann. J. Bilmes and G. Zweig. 2002. The Graphical Models Toolkit: An open source software system for speech and time-series processing. Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing. J. J. Godfrey, E. C. Holliman, and J. McDaniel. 1992. SWITCHBOARD: Telephone speech corpus for research and development. In ICASSP, volume 1, pages 517–520. S. Greenberg, J. Hollenback, and D. Ellis. 1996. Insights into spoken language gleaned from phonetic transcription of the switchboard corpus. In ICSLP, pages S24– 27. P. N. Klein. 1998. Computing the edit-distance between unrooted ordered trees. In Proceedings of 6th Annual European Symposium, number 1461, pages 91–102. S.L. Lauritzen. 1996. Graphical Models. Oxford Science Publications. G. Leusch, N. Ueffing, and H. Ney. 2003. A novel string-to-string distance measure with applications to machine translation evaluation. In Machine Translation Summit IX, pages 240–247. V. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. Sov. Phys. Dokl., 10:707–710. R. Lowrance and R. A. Wagner. 1975. An extension to the string-to-string correction problem. J. ACM, 22(2):177–183. M. Mohri. 2002. Edit-distance of weighted automata. In CIAA, volume 2608 of Lecture Notes in Computer Science, pages 1–23. Springer. K. Murphy. 2002. Dynamic Bayesian Networks: Representation, Inference and Learning. Ph.D. thesis, U.C. Berkeley, Dept. of EECS, CS Division. R. Myers, R.C. Wison, and E.R. Hancock. 2000. Bayesian graph edit distance. IEEE Trans. on Pattern Analysis and Machine Intelligence, 22:628–635. M. Neuhaus and H. Bunke. 2004. A probabilistic approach to learning costs for graph edit distance. In ICPR, volume 3, pages 389–393. P. A. Rentzepopoulos and G. K. Kokkinakis. 1996. Efficient multilingual phoneme-to-grapheme conversion based on hmm. Comput. Linguist., 22(3):351–376. E. S. Ristad and P. N. Yianilos. 1998. Learning string edit distance. Trans. on Pattern Recognition and Machine Intelligence, 20(5):522–532. D. Shapira and J. A. Storer. 2003. Large edit distance with multiple block operations. 
In SPIRE, volume 2857 of Lecture Notes in Computer Science, pages 369–377. Springer. E. Ukkonen. 1985. Algorithms for approximate string matching. Inf. Control, 64(1-3):100–118. R. A. Wagner and M. J. Fischer. 1974. The string-tostring correction problem. J. ACM, 21(1):168–173. J. Wei. 2004. Markov edit distance. Trans. on Pattern Analysis and Machine Intelligence, 26(3):311–321. 345 | 2005 | 42 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 346–353, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Learning Stochastic OT Grammars: A Bayesian approach using Data Augmentation and Gibbs Sampling Ying Lin∗ Department of Linguistics University of California, Los Angeles Los Angeles, CA 90095 [email protected] Abstract Stochastic Optimality Theory (Boersma, 1997) is a widely-used model in linguistics that did not have a theoretically sound learning method previously. In this paper, a Markov chain Monte-Carlo method is proposed for learning Stochastic OT Grammars. Following a Bayesian framework, the goal is finding the posterior distribution of the grammar given the relative frequencies of input-output pairs. The Data Augmentation algorithm allows one to simulate a joint posterior distribution by iterating two conditional sampling steps. This Gibbs sampler constructs a Markov chain that converges to the joint distribution, and the target posterior can be derived as its marginal distribution. 1 Introduction Optimality Theory (Prince and Smolensky, 1993) is a linguistic theory that dominates the field of phonology, and some areas of morphology and syntax. The standard version of OT contains the following assumptions: • A grammar is a set of ordered constraints ({Ci : i = 1, · · · , N}, >); • Each constraint Ci is a function: Σ∗ → {0, 1, · · · }, where Σ∗is the set of strings in the language; ∗The author thanks Bruce Hayes, Ed Stabler, Yingnian Wu, Colin Wilson, and anonymous reviewers for their comments. • Each underlying form u corresponds to a set of candidates GEN(u). To obtain the unique surface form, the candidate set is successively filtered according to the order of constraints, so that only the most harmonic candidates remain after each filtering. If only 1 candidate is left in the candidate set, it is chosen as the optimal output. The popularity of OT is partly due to learning algorithms that induce constraint ranking from data. However, most of such algorithms cannot be applied to noisy learning data. Stochastic Optimality Theory (Boersma, 1997) is a variant of Optimality Theory that tries to quantitatively predict linguistic variation. As a popular model among linguists that are more engaged with empirical data than with formalisms, Stochastic OT has been used in a large body of linguistics literature. In Stochastic OT, constraints are regarded as independent normal distributions with unknown means and fixed variance. As a result, the stochastic constraint hierarchy generates systematic linguistic variation. For example, consider a grammar with 3 constraints, C1 ∼N(µ1, σ2), C2 ∼N(µ2, σ2), C3 ∼N(µ3, σ2), and 2 competing candidates for a given input x: p(.) C1 C2 C3 x ∼ y1 .77 0 0 1 x ∼ y2 .23 1 1 0 Table 1: A Stochastic OT grammar with 1 input and 2 outputs 346 The probabilities p(.) are obtained by repeatedly sampling the 3 normal distributions, generating the winning candidate according to the ordering of constraints, and counting the relative frequencies in the outcome. As a result, the grammar will assign nonzero probabilities to a given set of outputs, as shown above. The learning problem of Stochastic OT involves fitting a grammar G ∈RN to a set of candidates with frequency counts in a corpus. 
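A sketch of how a Stochastic OT grammar generates output frequencies, as described above: each evaluation samples one value per constraint from N(µ_i, σ²), ranks the constraints by the sampled values, and picks the candidate that survives the successive filtering. The ranking values used in the demo are arbitrary illustrations, not values fitted to the .77/.23 data.

```python
import random

def evaluate(ranking, candidates):
    """candidates: {name: [violations of C1..CN]}; higher sampled value = ranked higher."""
    order = sorted(range(len(ranking)), key=lambda i: -ranking[i])
    pool = dict(candidates)
    for c in order:
        best = min(v[c] for v in pool.values())
        pool = {k: v for k, v in pool.items() if v[c] == best}
        if len(pool) == 1:
            break
    return next(iter(pool))

def output_frequencies(mus, candidates, sigma=1.0, trials=20000):
    counts = {k: 0 for k in candidates}
    for _ in range(trials):
        ranking = [random.gauss(mu, sigma) for mu in mus]
        counts[evaluate(ranking, candidates)] += 1
    return {k: round(v / trials, 3) for k, v in counts.items()}

candidates = {"y1": [0, 0, 1], "y2": [1, 1, 0]}   # violation profiles from Table 1
print(output_frequencies([1.0, 0.5, 0.0], candidates))
```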
For example, if the learning data is the above table, we need to find an estimate of G = (µ1, µ2, µ3) (up to translation by an additive constant) so that the following ordering relations hold with certain probabilities:

max{C1, C2} > C3   with probability .77
max{C1, C2} < C3   with probability .23        (1)

The current method for fitting Stochastic OT models, used by many linguists, is the Gradual Learning Algorithm (GLA) (Boersma and Hayes, 2001). GLA looks for the correct ranking values by using the following heuristic, which resembles gradient descent. First, an input-output pair is sampled from the data; second, an ordering of the constraints is sampled from the grammar and used to generate an output; and finally, the means of the constraints are updated so as to minimize the error. The updating is done by adding or subtracting a "plasticity" value that goes to zero over time. The intuition behind GLA is that it does "frequency matching", i.e. looking for a better match between the output frequencies of the grammar and those in the data. As it turns out, GLA does not work in all cases (two examples are included in the experiment section; see 6.3), and its lack of formal foundations has been questioned by a number of researchers (Keller and Asudeh, 2002; Goldwater and Johnson, 2003). However, considering the broad range of linguistic data that has been analyzed with Stochastic OT, it seems unadvisable to reject this model because of the absence of theoretically sound learning methods. Rather, a general solution is needed to evaluate Stochastic OT as a model for linguistic variation. In this paper, I introduce an algorithm for learning Stochastic OT grammars using Markov chain Monte-Carlo methods. Within a Bayesian framework, the learning problem is formalized as finding the posterior distribution of ranking values (G) given the information on constraint interaction based on input-output pairs (D). The posterior contains all the information needed for linguists' use: for example, if there is a grammar that will generate the exact frequencies as in the data, such a grammar will appear as a mode of the posterior. In computation, the posterior distribution is simulated with MCMC methods because the likelihood function has a complex form, thus making a maximum-likelihood approach hard to perform. Such problems are avoided by using the Data Augmentation algorithm (Tanner and Wong, 1987) to make computation feasible: to simulate the posterior distribution G ∼ p(G|D), we augment the parameter space and simulate a joint distribution (G, Y) ∼ p(G, Y|D). It turns out that by setting Y as the value of constraints that observe the desired ordering, simulating from p(G, Y|D) can be achieved with a Gibbs sampler, which constructs a Markov chain that converges to the joint posterior distribution (Geman and Geman, 1984; Gelfand and Smith, 1990). I will also discuss some issues related to efficiency in implementation.

2 The difficulty of a maximum-likelihood approach

Naturally, one may consider "frequency matching" as estimating the grammar based on the maximum-likelihood criterion. Given a set of constraints and candidates, the data may be compiled in the form of (1), on which the likelihood calculation is based. As an example, given the grammar and data set in Table 1, the likelihood of d = "max{C1, C2} > C3" can be written as

P(d | µ1, µ2, µ3) = 1 − ∫_{−∞}^{0} ∫_{−∞}^{0} (1 / (2πσ²)) exp{ −(f_xy · Σ · f_xy^T) / 2 } dx dy

where f_xy = (x − µ1 + µ3, y − µ2 + µ3), and Σ is the identity covariance matrix. The integral sign follows from the fact that both C1 − C3 and C2 − C3 are normal, since each constraint is independently normally distributed. If we treat each data as independently generated by the grammar, then the likelihood will be a product of such integrals (multiple integrals if many constraints are interacting). One may attempt to maximize such a likelihood function using numerical
The integral sign follows from the fact that both C1 −C2, C2 −C3 are normal, since each constraint is independently normally distributed. If we treat each data as independently generated by the grammar, then the likelihood will be a product of such integrals (multiple integrals if many constraints are interacting). One may attempt to maximize such a likelihood function using numerical 347 methods3, yet it appears to be desirable to avoid likelihood calculations altogether. 3 The missing data scheme for learning Stochastic OT grammars The Bayesian approach tries to explore p(G|D), the posterior distribution. Notice if we take the usual approach by using the relationship p(G|D) ∝ p(D|G) · p(G), we will encounter the same problem as in Section 2. Therefore we need a feasible way of sampling p(G|D) without having to derive the closed-form of p(D|G). The key idea here is the so-called “missing data” scheme in Bayesian statistics: in a complex modelfitting problem, the computation can sometimes be greatly simplified if we treat part of the unknown parameters as data and fit the model in successive stages. To apply this idea, one needs to observe that Stochastic OT grammars are learned from ordinal data, as seen in (1). In other words, only one aspect of the structure generated by those normal distributions — the ordering of constraints — is used to generate outputs. This observation points to the possibility of treating the sample values of constraints ⃗y = (y1, y2, · · · , yN) that satisfy the ordering relations as missing data. It is appropriate to refer to them as “missing” because a language learner obviously cannot observe real numbers from the constraints, which are postulated by linguistic theory. When the observed data are augmented with missing data and become a complete data model, computation becomes significantly simpler. This type of idea is officially known as Data Augmentation (Tanner and Wong, 1987). More specifically, we also make the following intuitive observations: • The complete data model consists of 3 random variables: the observed ordering relations D, the grammar G, and the missing samples of constraint values Y that generate the ordering D. • G and Y are interdependent: – For each fixed d, values of Y that respect d can be obtained easily once G is given: we just sample from p(Y |G) and only keep 3Notice even computing the gradient is non-trivial. those that observe d. Then we let d vary with its frequency in the data, and obtain a sample of p(Y |G, D); – Once we have the values of Y that respect the ranking relations D, G becomes independent of D. Thus, sampling G from p(G|Y, D) becomes the same as sampling from p(G|Y ). 4 Gibbs sampler for the joint posterior — p(G, Y |D) The interdependence of G and Y helps design iterative algorithms for sampling p(G, Y |D). In this case, since each step samples from a conditional distribution (p(G|Y, D) or p(Y |G, D)), they can be combined to form a Gibbs sampler (Geman and Geman, 1984). In the same order as described in Section 3, the two conditional sampling steps are implemented as follows: 1. Sample an ordering relation d according to the prior p(D), which is simply normalized frequency counts; sample a vector of constraint values y = {y1, · · · , yN} from the normal distributions N(µ(t) 1 , σ2), · · · , N(µ(t) N , σ2) such that y observes the ordering in d; 2. Repeat Step 1 and obtain M samples of missing data: y1, · · · , yM; sample µ(t+1) i from N(P j yj i /M, σ2/M). 
The grammar G = (µ1, · · · , µN), and the superscript (t) represents a sample of G in iteration t. As explained in 3, Step 1 samples missing data from p(Y|G, D), and Step 2 is equivalent to sampling from p(G|Y, D), by the conditional independence of G and D given Y. The normal posterior distribution N(Σ_j y_i^j / M, σ²/M) is derived by using p(G|Y) ∝ p(Y|G)p(G), where p(Y|G) is normal, and p(G) ∼ N(µ0, σ0) is chosen to be a noninformative prior with σ0 → ∞. M (the number of missing data) is not a crucial parameter. In our experiments, M is set to the total number of observed forms (other choices of M, e.g. M = 1, lead to more or less the same running time). Although it may seem that σ²/M is small for a large M and does not play a significant role in the sampling of µ_i^(t+1), the variance of the sampling distribution is a necessary ingredient of the Gibbs sampler (as required by the proof in Geman and Geman, 1984). Under fairly general conditions (Geman and Geman, 1984), the Gibbs sampler iterates these two steps until it converges to a unique stationary distribution. In practice, convergence can be monitored by calculating cross-sample statistics from multiple Markov chains with different starting points (Gelman and Rubin, 1992). After the simulation is stopped at convergence, we will have obtained a perfect sample of p(G, Y|D). These samples can be used to derive our target distribution p(G|D) by simply keeping all the G components, since p(G|D) is a marginal distribution of p(G, Y|D). Thus, the sampling-based approach gives us the advantage of doing inference without performing any integration.

5 Computational issues in implementation

In this section, I will sketch some key steps in the implementation of the Gibbs sampler. Particular attention is paid to sampling p(Y|G, D), since a direct implementation may require an unrealistic running time.

5.1 Computing p(D) from linguistic data

The prior probability p(D) determines the number of samples (missing data) that are drawn under each ordering relation. The following example illustrates how the ordering D and p(D) are calculated from data collected in a linguistic analysis. Consider a data set that contains 2 inputs and a few outputs, each associated with an observed frequency in the lexicon:

           C1   C2   C3   C4   C5   Freq.
x1   y11   0    1    0    1    0    4
     y12   1    0    0    0    0    3
     y13   0    1    1    0    1    0
     y14   0    0    1    0    0    0
x2   y21   1    1    0    0    0    3
     y22   0    0    1    1    1    0

Table 2: A Stochastic OT grammar with 2 inputs

The three ordering relations (corresponding to 3 attested outputs) and p(D) are computed as follows:

Ordering Relation D                                              p(D)
C1 > max{C2, C4};  max{C3, C5} > C4;  C3 > max{C2, C4}           .4
max{C2, C4} > C1;  max{C2, C3, C5} > C1;  C3 > C1                .3
max{C3, C4, C5} > max{C1, C2}                                    .3

Table 3: The ordering relations D and p(D) computed from Table 2.

Here each ordering relation has several conjuncts, and the number of conjuncts is equal to the number of competing candidates for each given input. These conjuncts need to hold simultaneously because each winning candidate needs to be more harmonic than all other competing candidates. The probabilities p(D) are obtained by normalizing the frequencies of the surface forms in the original data. This will have the consequence of placing more weight on lexical items that occur frequently in the corpus.

5.2 Sampling p(Y|G, D) under complex ordering relations

A direct implementation of p(Y|G, d) is straightforward: 1) first obtain N samples from N Gaussian distributions; 2) check each conjunct to see if the ordering relation is satisfied.
If so, then keep the sample; if not, discard the sample and try again. However, this can be highly inefficient in many cases. For example, if m constraints appear in the ordering relation d and the sample is rejected, the N −m random numbers for constraints not appearing in d are also discarded. When d has several conjuncts, the chance of rejecting samples for irrelevant constraints is even greater. In order to save the generated random numbers, the vector Y can be decomposed into its 1-dimensional components (Y1, Y2, · · · , YN). The problem then becomes sampling p(Y1, · · · , YN|G, D). Again, we may use conditional sampling to draw yi one at a time: we keep yj̸=i and d fixed6, and draw yi so that d holds for y. There are now two cases: if d holds regardless of yi, then any sample from N(µ(t) i , σ2) will do; otherwise, we will need to draw yi from a truncated 6Here we use yj̸=i for all components of y except the i-th dimension. 349 normal distribution. To illustrate this idea, consider an example used earlier where d=“max{c1, c2} > c3”, and the initial sample and parameters are (y(0) 1 , y(0) 2 , y(0) 3 ) = (µ(0) 1 , µ(0) 2 , µ(0) 3 ) = (1, −1, 0). Sampling dist. Y1 Y2 Y3 p(Y1|µ1, Y1 > y3) 2.3799 -1.0000 0 p(Y2|µ2) 2.3799 -0.7591 0 p(Y3|µ3, Y3 < y1) 2.3799 -0.7591 -1.0328 p(Y1|µ1) -1.4823 -0.7591 -1.0328 p(Y2|µ2, Y2 > y3) -1.4823 2.1772 -1.0328 p(Y3|µ3, Y3 < y2) -1.4823 2.1772 1.0107 Table 4: Conditional sampling steps for p(Y |G, d) = p(Y1, Y2, Y3|µ1, µ2, µ3, d) Notice that in each step, the sampling density is either just a normal, or a truncated normal distribution. This is because we only need to make sure that d will continue to hold for the next sample y(t+1), which differs from y(t) by just 1 constraint. In our experiment, sampling from truncated normal distributions is realized by using the idea of rejection sampling: to sample from a truncated normal7 πc(x) = 1 Z(c) ·N(µ, σ)·I{x>c}, we first find an envelope density function g(x) that is easy to sample directly, such that πc(x) is uniformly bounded by M · g(x) for some constant M that does not depend on x. It can be shown that once each sample x from g(x) is rejected with probability r(x) = 1 − πc(x) M·g(x), the resulting histogram will provide a perfect sample for πc(x). In the current work, the exponential distribution g(x) = λ exp {−λx} is used as the envelope, with the following choices for λ and the rejection ratio r(x), which have been optimized to lower the rejection rate: λ = c + √ c + 4σ2 2σ2 r(x) = exp ½(x + c)2 2 + λ0(x + c) −σ2λ2 0 2 ¾ Putting these ideas together, the final version of Gibbs sampler is constructed by implementing Step 1 in Section 4 as a sequence of conditional sampling steps for p(Yi|Yj̸=i, d), and combining them 7Notice the truncated distribution needs to be re-normalized in order to be a proper density. with the sampling of p(G|Y, D). Notice the order in which Yi is updated is fixed, which makes our implementation an instance of the systematic-scan Gibbs sampler (Liu, 2001). This implementation may be improved even further by utilizing the structure of the ordering relation d, and optimizing the order in which Yi is updated. 5.3 Model identifiability Identifiability is related to the uniqueness of solution in model fitting. Given N constraints, a grammar G ∈RN is not identifiable because G + C will have the same behavior as G for any constant C = (c0, · · · , c0). To remove translation invariance, in Step 2 the average ranking value is subtracted from G, such that P i µi = 0. 
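A sketch of the truncated-normal sampler is given below. It follows the exponential-envelope rejection scheme just described, but uses the standard optimal rate for the standardized problem (the equations above give the corresponding quantities on the original scale); function names are illustrative.

```python
import math
import random

def trunc_normal_above(mu, sigma, c, rng=random):
    """Draw x ~ N(mu, sigma^2) conditioned on x > c, by rejection sampling with
    a translated-exponential envelope, as sketched in Section 5.2."""
    a = (c - mu) / sigma                             # standardized truncation point
    lam = (a + math.sqrt(a * a + 4.0)) / 2.0         # envelope rate (always >= a)
    while True:
        z = a - math.log(1.0 - rng.random()) / lam   # proposal from lam * exp(-lam (z - a))
        if rng.random() <= math.exp(-0.5 * (z - lam) ** 2):
            return mu + sigma * z                    # accept and map back to N(mu, sigma^2)

def trunc_normal_below(mu, sigma, c, rng=random):
    """Draw x ~ N(mu, sigma^2) conditioned on x < c, by symmetry."""
    return -trunc_normal_above(-mu, sigma, -c, rng)
```

For instance, the draw from p(Y1 | µ1, Y1 > y3) in the first row of Table 4 corresponds to trunc_normal_above(mu_1, sigma, y_3).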
Another problem related to identifiability arises when the data contains the so-called “categorical domination”, i.e., there may be data of the following form: c1 > c2 with probability 1. In theory, the mode of the posterior tends to infinity and the Gibbs sampler will not converge. Since having categorical dominance relations is a common practice in linguistics, we avoid this problem by truncating the posterior distribution8 by I|µ|<K, where K is chosen to be a positive number large enough to ensure that the model be identifiable. The role of truncation/renormalization may be seen as a strong prior that makes the model identifiable on a bounded set. A third problem related to identifiability occurs when the posterior has multiple modes, which suggests that multiple grammars may generate the same output frequencies. This situation is common when the grammar contains interactions between many constraints, and greedy algorithms like GLA tend to find one of the many solutions. In this case, one can either introduce extra ordering relations or use informative priors to sample p(G|Y ), so that the inference on the posterior can be done with a relatively small number of samples. 5.4 Posterior inference Once the Gibbs sampler has converged to its stationary distribution, we can use the samples to make var8The implementation of sampling from truncated normals is the same as described in 5.2. 350 ious inferences on the posterior. In the experiments reported in this paper, we are primarily interested in the mode of the posterior marginal9 p(µi|D), where i = 1, · · · , N. In cases where the posterior marginal is symmetric and uni-modal, its mode can be estimated by the sample median. In real linguistic applications, the posterior marginal may be a skewed distribution, and many modes may appear in the histogram. In these cases, more sophisticated non-parametric methods, such as kernel density estimation, can be used to estimate the modes. To reduce the computation in identifying multiple modes, a mixture approximation (by EM algorithm or its relatives) may be necessary. 6 Experiments 6.1 Ilokano reduplication The following Ilokano grammar and data set, used in (Boersma and Hayes, 2001), illustrate a complex type of constraint interaction: the interaction between the three constraints: ∗COMPLEX-ONSET, ALIGN, and IDENTBR([long]) cannot be factored into interactions between 2 constraints. For any given candidate to be optimal, the constraint that prefers such a candidate must simultaneously dominate the other two constraints. Hence it is not immediately clear whether there is a grammar that will assign equal probability to the 3 candidates. /HRED-bwaja/ p(.) ∗C-ONS AL IBR bu:.bwa.ja .33 1 0 1 bwaj.bwa.ja .33 2 0 0 bub.wa.ja .33 0 1 0 Table 5: Data for Ilokano reduplication. Since it does not address the problem of identifiability, the GLA does not always converge on this data set, and the returned grammar does not always fit the input frequencies exactly, depending on the choice of parameters10. In comparison, the Gibbs sampler converges quickly11, regardless of the parameters. The result suggests the existence of a unique grammar that will 9Note G = (µ1, · · · , µN), and p(µi|D) is a marginal of p(G|D). 10B &H reported results of averaging many runs of the algorithm. Yet there appears to be significant randomness in each run of the algorithm. 11Within 1000 iterations. assign equal probabilities to the 3 candidates. The posterior samples and histograms are displayed in Figure 1. 
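The posterior inference of Section 5.4 amounts to summarizing such samples; a minimal sketch is given below (the bandwidth and grid choices are illustrative, and nothing more sophisticated than a crude kernel estimate is attempted).

```python
import numpy as np

def marginal_summaries(samples, bandwidth=0.25, grid_size=200):
    """Summarize post-convergence Gibbs output, one constraint at a time.

    samples -- array of shape (T, N); row t is the grammar drawn at iteration t.
    Returns per-constraint medians and crude kernel-density mode estimates.
    """
    samples = np.asarray(samples, dtype=float)
    medians = np.median(samples, axis=0)       # adequate for symmetric, uni-modal marginals
    modes = np.empty(samples.shape[1])
    for i, col in enumerate(samples.T):        # Gaussian-kernel density on a grid
        grid = np.linspace(col.min(), col.max(), grid_size)
        dens = np.exp(-0.5 * ((grid[:, None] - col[None, :]) / bandwidth) ** 2).sum(axis=1)
        modes[i] = grid[np.argmax(dens)]
    return medians, modes
```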
Using the median of the marginal posteriors, the estimated grammar generates an exact fit to the frequencies in the input data. 0 200 400 600 800 1000 −2 −1.5 −1 −0.5 0 0.5 1 1.5 2 2.5 −2 −1 0 1 2 0 50 100 150 200 250 300 350 Figure 1: Posterior marginal samples and histograms for Experiment 2. 6.2 Spanish diminutive suffixation The second experiment uses linguistic data on Spanish diminutives and the analysis proposed in (ArbisiKelm, 2002). There are 3 base forms, each associated with 2 diminutive suffixes. The grammar consists of 4 constraints: ALIGN(TE,Word,R), MAX-OO(V), DEP-IO and BaseTooLittle. The data presents the problem of learning from noise, since no Stochastic OT grammar can provide an exact fit to the data: the candidate [ubita] violates an extra constraint compared to [liri.ito], and [ubasita] violates the same constraint as [liryosito]. Yet unlike [lityosito], [ubasita] is not observed. Input Output Freq. A M D B /uba/ [ubita] 10 0 1 0 1 [ubasita] 0 1 0 0 0 /mar/ [marEsito] 5 0 0 1 0 [marsito] 5 0 0 0 1 /liryo/ [liri.ito] 9 0 1 0 0 [liryosito] 1 1 0 0 0 Table 6: Data for Spanish diminutive suffixation. In the results found by GLA, [marEsito] always has a lower frequency than [marsito] (See Table 7). This is not accidental. Instead it reveals a problematic use of heuristics in GLA12: since the constraint B is violated by [ubita], it is always demoted whenever the underlying form /uba/ is encountered during learning. Therefore, even though the expected 12Thanks to Bruce Hayes for pointing out this problem. 351 model assigns equal values to µ3 and µ4 (corresponding to D and B, respectively), µ3 is always less than µ4, simply because there is more chance of penalizing D rather than B. This problem arises precisely because of the heuristic (i.e. demoting the constraint that prefers the wrong candidate) that GLA uses to find the target grammar. The Gibbs sampler, on the other hand, does not depend on heuristic rules in its search. Since modes of the posterior p(µ3|D) and p(µ4|D) reside in negative infinity, the posterior is truncated by Iµi<K, with K = 6, based on the discussion in 5.3. Results of the Gibbs sampler and two runs of GLA13 are reported in Table 7. Input Output Obs Gibbs GLA1 GLA2 /uba/ [ubita] 100% 95% 96% 96% [ubasita] 0% 5% 4% 4% /mar/ [marEsito] 50% 50% 38% 45% [marsito] 50% 50% 62% 55% /liryo/ [liri.ito] 90% 95% 96% 91.4% [liryosito] 10% 5% 4% 8.6% Table 7: Comparison of Gibbs sampler and GLA 7 A comparison with Max-Ent models Previously, problems with the GLA14 have inspired other OT-like models of linguistic variation. One such proposal suggests using the more well-known Maximum Entropy model (Goldwater and Johnson, 2003). In Max-Ent models, a grammar G is also parameterized by a real vector of weights w = (w1, · · · , wN), but the conditional likelihood of an output y given an input x is given by: p(y|x) = exp{P i wifi(y, x)} P z exp{P i wifi(z, x)} (2) where fi(y, x) is the violation each constraint assigns to the input-output pair (x, y). Clearly, Max-Ent is a rather different type of model from Stochastic OT, not only in the use of constraint ordering, but also in the objective function (conditional likelihood rather than likelihood/posterior). However, it may be of interest to compare these two types of models. Using the same 13The two runs here both use 0.002 and 0.0001 as the final plasticity. The initial plasticity and the iterations are set to 2 and 1.0e7. 
Slightly better fits can be found by tuning these parameters, but the observation remains the same. 14See (Keller and Asudeh, 2002) for a summary. data as in 6.2, results of fitting Max-Ent (using conjugate gradient descent) and Stochastic OT (using Gibbs sampler) are reported in Table 8: Input Output Obs SOT ME MEsm /uba/ [ubita] 100% 95% 100% 97.5% [ubasita] 0% 5% 0% 2.5% /mar/ [marEsito] 50% 50% 50% 48.8% [marsito] 50% 50% 50% 51.2% /liryo/ [liri.ito] 90% 95% 90% 91.4% [liryosito] 10% 5% 10% 8.6% Table 8: Comparison of Max-Ent and Stochastic OT models It can be seen that the Max-Ent model, in the absence of a smoothing prior, fits the data perfectly by assigning positive weights to constraints B and D. A less exact fit (denoted by MEsm) is obtained when the smoothing Gaussian prior is used with µi = 0, σ2 i = 1. But as observed in 6.2, an exact fit is impossible to obtain using Stochastic OT, due to the difference in the way variation is generated by the models. Thus it may be seen that Max-Ent is a more powerful class of models than Stochastic OT, though it is not clear how the Max-Ent model’s descriptive power is related to generative linguistic theories like phonology. Although the abundance of well-behaved optimization algorithms has been pointed out in favor of Max-Ent models, it is the author’s hope that the MCMC approach also gives Stochastic OT a similar underpinning. However, complex Stochastic OT models often bring worries about identifiability, whereas the convexity property of Max-Ent may be viewed as an advantage15. 8 Discussion From a non-Bayesian perspective, the MCMC-based approach can be seen as a randomized strategy for learning a grammar. Computing resources make it possible to explore the entire space of grammars and discover where good hypotheses are likely to occur. In this paper, we have focused on the frequently visited areas of the hypothesis space. It is worth pointing out that the Graduate Learning Algorithm can also be seen from this perspective. An examination of the GLA shows that when the plasticity term is fixed, parameters found by GLA also form a Markov chain G(t) ∈RN, t = 1, 2, · · · . Therefore, assuming the model is identifiable, it 15Concerns about identifiability appear much more frequently in statistics than in linguistics. 352 seems possible to use GLA in the same way as the MCMC methods: rather than forcing it to stop, we can run GLA until it reaches stationary distribution, if it exists. However, it is difficult to interpret the results found by this “random walk-GLA” approach: the stationary distribution of GLA may not be the target distribution — the posterior p(G|D). To construct a Markov chain that converges to p(G|D), one may consider turning GLA into a real MCMC algorithm by designing reversible jumps, or the Metropolis algorithm. But this may not be easy, due to the difficulty in likelihood evaluation (including likelihood ratio) discussed in Section 2. In contrast, our algorithm provides a general solution to the problem of learning Stochastic OT grammars. Instead of looking for a Markov chain in RN, we go to a higher dimensional space RN × RN, using the idea of data augmentation. By taking advantage of the interdependence of G and Y , the Gibbs sampler provides a Markov chain that converges to p(G, Y |D), which allows us to return to the original subspace and derive p(G|D) — the target distribution. Interestingly, by adding more parameters, the computation becomes simpler. 9 Future work This work can be extended in two directions. 
First, it would be interesting to consider other types of OT grammars, in connection with the linguistics literature. For example, the variances of the normal distribution are fixed in the current paper, but they may also be treated as unknown parameters (Nagy and Reynolds, 1997). Moreover, constraints may be parameterized as mixture distributions, which represent other approaches to using OT for modeling linguistic variation (Anttila, 1997). The second direction is to introduce informative priors motivated by linguistic theories. It is found through experimentation that for more sophisticated grammars, identifiability often becomes an issue: some constraints may have multiple modes in their posterior marginal, and it is difficult to extract modes in high dimensions16. Therefore, use of priors is needed in order to make more reliable inferences. In addition, priors also have a linguistic appeal, since 16Notice that posterior marginals do not provide enough information for modes of the joint distribution. current research on the “initial bias” in language acquisition can be formulated as priors (e.g. Faithfulness Low (Hayes, 2004)) from a Bayesian perspective. Implementing these extensions will merely involve modifying p(G|Y, D), which we leave for future work. References Anttila, A. (1997). Variation in Finnish Phonology and Morphology. PhD thesis, Stanford University. Arbisi-Kelm, T. (2002). An analysis of variability in Spanish diminutive formation. Master’s thesis, UCLA, Los Angeles. Boersma, P. (1997). How we learn variation, optionality, probability. In Proceedings of the Institute of Phonetic Sciences 21, pages 43–58, Amsterdam. University of Amsterdam. Boersma, P. and Hayes, B. P. (2001). Empirical tests of the Gradual Learning Algorithm. Linguistic Inquiry, 32:45–86. Gelfand, A. and Smith, A. (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85(410). Gelman, A. and Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7:457–472. Geman, S. and Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. on Pattern Analysis and Machine Intelligence, 6(6):721–741. Goldwater, S. and Johnson, M. (2003). Learning OT constraint rankings using a Maximum Entropy model. In Spenader, J., editor, Proceedings of the Workshop on Variation within Optimality Theory, Stockholm. Hayes, B. P. (2004). Phonological acquisition in optimality theory: The early stages. In Kager, R., Pater, J., and Zonneveld, W., editors, Fixing Priorities: Constraints in Phonological Acquisition. Cambridge University Press. Keller, F. and Asudeh, A. (2002). Probabilistic learning algorithms and Optimality Theory. Linguistic Inquiry, 33(2):225–244. Liu, J. S. (2001). Monte Carlo Strategies in Scientific Computing. Number 33 in Springer Statistics Series. SpringerVerlag, Berlin. Nagy, N. and Reynolds, B. (1997). Optimality theory and variable word-final deletion in Faetar. Language Variation and Change, 9. Prince, A. and Smolensky, P. (1993). Optimality Theory: Constraint Interaction in Generative Grammar. Forthcoming. Tanner, M. and Wong, W. H. (1987). The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82(398). 353 | 2005 | 43 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 354–362, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Contrastive Estimation: Training Log-Linear Models on Unlabeled Data∗ Noah A. Smith and Jason Eisner Department of Computer Science / Center for Language and Speech Processing Johns Hopkins University, Baltimore, MD 21218 USA {nasmith,jason}@cs.jhu.edu Abstract Conditional random fields (Lafferty et al., 2001) are quite effective at sequence labeling tasks like shallow parsing (Sha and Pereira, 2003) and namedentity extraction (McCallum and Li, 2003). CRFs are log-linear, allowing the incorporation of arbitrary features into the model. To train on unlabeled data, we require unsupervised estimation methods for log-linear models; few exist. We describe a novel approach, contrastive estimation. We show that the new technique can be intuitively understood as exploiting implicit negative evidence and is computationally efficient. Applied to a sequence labeling problem—POS tagging given a tagging dictionary and unlabeled text—contrastive estimation outperforms EM (with the same feature set), is more robust to degradations of the dictionary, and can largely recover by modeling additional features. 1 Introduction Finding linguistic structure in raw text is not easy. The classical forward-backward and inside-outside algorithms try to guide probabilistic models to discover structure in text, but they tend to get stuck in local maxima (Charniak, 1993). Even when they avoid local maxima (e.g., through clever initialization) they typically deviate from human ideas of what the “right” structure is (Merialdo, 1994). One strategy is to incorporate domain knowledge into the model’s structure. Instead of blind HMMs or PCFGs, one could use models whose features ∗This work was supported by a Fannie and John Hertz Foundation fellowship to the first author and NSF ITR grant IIS0313193 to the second author. The views expressed are not necessarily endorsed by the sponsors. The authors also thank three anonymous ACL reviewers for helpful comments, colleagues at JHU CLSP (especially David Smith and Roy Tromble) and Miles Osborne for insightful feedback, and Eric Goldlust and Markus Dreyer for Dyna language support. are crafted to pay attention to a range of domainspecific linguistic cues. Log-linear models can be so crafted and have already achieved excellent performance when trained on annotated data, where they are known as “maximum entropy” models (Ratnaparkhi et al., 1994; Rosenfeld, 1994). Our goal is to learn log-linear models from unannotated data. Since the forward-backward and inside-outside algorithms are instances of Expectation-Maximization (EM) (Dempster et al., 1977), a natural approach is to construct EM algorithms that handle log-linear models. Riezler (1999) did so, then resorted to an approximation because the true objective function was hard to normalize. Stepping back from EM, we may generally envision parameter estimation for probabilistic modeling as pushing probability mass toward the training examples. We must consider not only where the learner pushes the mass, but also from where the mass is taken. In this paper, we describe an alternative to EM: contrastive estimation (CE), which (unlike EM) explicitly states the source of the probability mass that is to be given to an example.1 One reason is to make normalization efficient. 
Indeed, CE generalizes EM and other practical techniques used to train log-linear models, including conditional estimation (for the supervised case) and Riezler’s approximation (for the unsupervised case). The other reason to use CE is to improve accuracy. CE offers an additional way to inject domain knowledge into unsupervised learning (Smith and Eisner, 2005). CE hypothesizes that each positive example in training implies a domain-specific set of examples which are (for the most part) degraded (§2). This class of implicit negative evidence provides the source of probability mass for the observed example. We discuss the application of CE to loglinear models in §3. 1Not to be confused with contrastive divergence minimization (Hinton, 2003), a technique for training products of experts. 354 We are particularly interested in log-linear models over sequences, like the conditional random fields (CRFs) of Lafferty et al. (2001) and weighted CFGs (Miyao and Tsujii, 2002). For a given sequence, implicit negative evidence can be represented as a lattice derived by finite-state operations (§4). Effectiveness of the approach on POS tagging using unlabeled data is demonstrated (§5). We discuss future work (§6) and conclude (§7). 2 Implicit Negative Evidence Natural language is a delicate thing. For any plausible sentence, there are many slight perturbations of it that will make it implausible. Consider, for example, the first sentence of this section. Suppose we choose one of its six words at random and remove it; on this example, odds are two to one that the resulting sentence will be ungrammatical. Or, we could randomly choose two adjacent words and transpose them; none of the results are valid conversational English. The learner we describe here takes into account not only the observed positive example, but also a set of similar but deprecated negative examples. 2.1 Learning setting Let ⃗x = ⟨x1, x2, ...⟩, be our observed example sentences, where each xi ∈X, and let y∗ i ∈Y be the unobserved correct hidden structure for xi (e.g., a POS sequence). We seek a model, parameterized by ⃗θ, such that the (unknown) correct analysis y∗ i is the best analysis for xi (under the model). If y∗ i were observed, a variety of training criteria would be available (see Tab. 1), but y∗ i is unknown, so none apply. Typically one turns to the EM algorithm (Dempster et al., 1977), which locally maximizes Y i p X = xi | ⃗θ = Y i X y∈Y p X = xi, Y = y | ⃗θ (1) where X is a random variable over sentences and Y a random variable over analyses (notation is often abbreviated, eliminating the random variables). An often-used alternative to EM is a class of socalled Viterbi approximations, which iteratively find the probabilistically-best ˆy and then, on each iteration, solve a supervised problem (see Tab. 1). joint likelihood (JL) Y i p xi, y∗ i | ⃗θ conditional likelihood (CL) Y i p y∗ i | xi, ⃗θ classification accuracy (Juang and Katagiri, 1992) X i δ(y∗ i , ˆy(xi)) expected classification accuracy (Klein and Manning, 2002) X i p y∗ i | xi, ⃗θ negated boosting loss (Collins, 2000) − X i p y∗ i | xi, ⃗θ −1 margin (Crammer and Singer, 2001) γ s.t. ∥⃗θ∥≤1; ∀i, ∀y ̸= y∗ i , ⃗θ · (⃗f(xi, y∗ i ) −⃗f(xi, y)) ≥γ expected local accuracy (Altun et al., 2003) Y i Y j p ℓj(Y ) = ℓj(y∗ i ) | xi, ⃗θ Table 1: Various supervised training criteria. All functions are written so as to be maximized. None of these criteria are available for unsupervised estimation because they all depend on the correct label, y∗. 
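Equation 1 can be made concrete with a brute-force sketch: for each sentence, sum the joint score over every possible label sequence. Enumeration is exponential in sentence length, so this is only meant to fix notation on tiny examples (real implementations use the forward algorithm), and joint_prob is a placeholder for whatever parametric model is being trained.

```python
import itertools
import math

def marginal_log_likelihood(sentences, labels, joint_prob):
    """Brute-force version of Eq. 1: for each sentence x, sum the joint score
    p(x, y | theta) over every label sequence y of the same length."""
    total = 0.0
    for x in sentences:
        total += math.log(sum(joint_prob(x, y)
                              for y in itertools.product(labels, repeat=len(x))))
    return total
```

As Table 2 will make explicit, the estimation criteria considered below differ only in which sets the numerator and denominator sums range over.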
2.2 A new approach: contrastive estimation Our approach instead maximizes Y i p Xi = xi | Xi ∈N(xi), ⃗θ (2) where the “neighborhood” N(xi) ⊆X is a set of implicit negative examples plus the example xi itself. As in EM, p(xi | ..., ⃗θ) is found by marginalizing over hidden variables (Eq. 1). Note that the x′ ∈N(xi) are not treated as hard negative examples; we merely seek to move probability mass from them to the observed x. The neighborhood of x, N(x), contains examples that are perturbations of x. We refer to the mapping N : X →2X as the neighborhood function, and the optimization of Eq. 2 as contrastive estimation (CE). CE seeks to move probability mass from the neighborhood of an observed xi to xi itself. The learner hypothesizes that good models are those which discriminate an observed example from its neighborhood. Put another way, the learner assumes not only that xi is good, but that xi is locally optimal in example space (X), and that alternative, similar examples (from the neighborhood) are inferior. Rather than explain all of the data, the model must only explain (using hidden variables) why the 355 observed sentence is better than its neighbors. Of course, the validity of this hypothesis will depend on the form of the neighborhood function. Consider, as a concrete example, learning natural language syntax. In Smith and Eisner (2005), we define a sentence’s neighborhood to be a set of slightly-altered sentences that use the same lexemes, as suggested at the start of this section. While their syntax is degraded, the inferred meaning of any of these altered sentences is typically close to the intended meaning, yet the speaker chose x and not one of the other x′ ∈N(x). Why? Deletions are likely to violate subcategorization requirements, and transpositions are likely to violate word order requirements—both of which have something to do with syntax. x was the most grammatical option that conveyed the speaker’s meaning, hence (we hope) roughly the most grammatical option in the neighborhood N(x), and the syntactic model should make it so. 3 Log-Linear Models We have not yet specified the form of our probabilistic model, only that it is parameterized by ⃗θ ∈Rn. Log-linear models, which we will show are a natural fit for CE, assign probability to an (example, label) pair (x, y) according to p x, y | ⃗θ def = 1 Z ⃗θ u x, y | ⃗θ (3) where the “unnormalized score” u(x, y | ⃗θ) is u x, y | ⃗θ def = exp ⃗θ · ⃗f(x, y) (4) The notation above is defined as follows. ⃗f : X × Y →Rn ≥0 is a nonnegative vector feature function, and ⃗θ ∈Rn are the corresponding feature weights (the model’s parameters). Because the features can take any form and need not be orthogonal, log-linear models can capture arbitrary dependencies in the data and cleanly incorporate them into a model. Z(⃗θ) (the partition function) is chosen so that P (x,y) p(x, y | ⃗θ) = 1; i.e., Z(⃗θ) = P (x,y) u(x, y | ⃗θ). u is typically easy to compute for a given (x, y), but Z may be much harder to compute. All the objective functions in this paper take the form Y i P (x,y)∈Ai p x, y | ⃗θ P (x,y)∈Bi p x, y | ⃗θ (5) likelihood criterion Ai Bi joint {(xi, y∗ i )} X × Y conditional {(xi, y∗ i )} {xi} × Y marginal (a l`a EM) {xi} × Y X × Y contrastive {xi} × Y N(xi) × Y Table 2: Supervised (upper box) and unsupervised (lower box) estimation with log-linear models in terms of Eq. 5. where Ai ⊂Bi (for each i). 
For log-linear models this is simply Y i P (x,y)∈Ai u x, y | ⃗θ P (x,y)∈Bi u x, y | ⃗θ (6) So there is no need to compute Z(⃗θ), but we do need to compute sums over A and B. Tab. 2 summarizes some concrete examples; see also §3.1–3.2. We would prefer to choose an objective function such that these sums are easy. CE focuses on choosing appropriate small contrast sets Bi, both for efficiency and to guide the learner. The natural choice for Ai (which is usually easier to sum over) is the set of (x, y) that are consistent with what was observed (partially or completely) about the ith training example, i.e., the numerator P (x,y)∈Ai p(x, y | ⃗θ) is designed to find p(observation i | ⃗θ). The idea is to focus the probability mass within Bi on the subset Ai where the i the training example is known to be. It is possible to build log-linear models where each xi is a sequence.2 In this paper, each model is a weighted finite-state automaton (WFSA) where states correspond to POS tags. The parameter vector ⃗θ ∈Rn specifies a weight for each of the n transitions in the automaton. y is a hidden path through the automaton (determining a POS sequence), and x is the string it emits. u(x, y | ⃗θ) is defined by applying exp to the total weight of all transitions in y. This is an example of Eqs. 4 and 6 where fj(x, y) is the number of times the path y takes the jth transition. The partition function Z(⃗θ) of the WFSA is found by adding up the u-scores of all paths through the WFSA. For a k-state WFSA, this equates to solving a linear system of k equations in k variables (Tarjan, 1981). But if the WFSA contains cycles this infinite sum may diverge. Alternatives to exact com2These are exemplified by CRFs (Lafferty et al., 2001), which can be viewed alternately as undirected dynamic graphical models with a chain topology, as log-linear models over entire sequences with local features, or as WFSAs. Because “CRF” implies CL estimation, we use the term “WFSA.” 356 putation, like random sampling (see, e.g., Abney, 1997), will not help to avoid this difficulty; in addition, convergence rates are in general unknown and bounds difficult to prove. We would prefer to sum over finitely many paths in Bi. 3.1 Parameter estimation (supervised) For log-linear models, both CL and JL estimation (Tab. 1) are available. In terms of Eq. 5, both set Ai = {(xi, y∗ i )}. The difference is in B: for JL, Bi = X × Y, so summing over Bi is equivalent to computing the partition function Z(⃗θ). Because that sum is typically difficult, CL is preferred; Bi = {xi} × Y for xi, which is often tractable. For sequence models like WFSAs it is computed using a dynamic programming algorithm (the forward algorithm for WFSAs). Klein and Manning (2002) argue for CL on grounds of accuracy, but see also Johnson (2001). See Tab. 2; other contrast sets Bi are also possible. When Bi contains only xi paired with the current best competitor (ˆy) to y∗ i , we have a technique that resembles maximum margin training (Crammer and Singer, 2001). Note that ˆy will then change across training iterations, making Bi dynamic. 3.2 Parameter estimation (unsupervised) The difference between supervised and unsupervised learning is that in the latter case, Ai is forced to sum over label sequences y because they weren’t observed. In the unsupervised case, CE maximizes LN ⃗θ = log Y i X y∈Y u xi, y | ⃗θ X (x,y)∈N(xi)×Y u x, y | ⃗θ (7) In terms of Eq. 5, A = {xi}×Y and B = N(xi)×Y. EM’s objective function (Eq. 
1) is a special case where N(xi) = X, for all i, and the denominator becomes Z(⃗θ). An alternative is to restrict the neighborhood to the set of observed training examples rather than all possible examples (Riezler, 1999; Johnson et al., 1999; Riezler et al., 2000): Y i " u xi | ⃗θ ,X j u xj | ⃗θ # (8) Viewed as a CE method, this approach (though effective when there are few hypotheses) seems misguided; the objective says to move mass to each example at the expense of all other training examples. Another variant is conditional EM. Let xi be a pair (xi,1, xi,2) and define the neighborhood to be N(xi) = {¯x = (¯x1, xi,2)}. This approach has been applied to conditional densities (Jebara and Pentland, 1998) and conditional training of acoustic models with hidden variables (Valtchev et al., 1997). Generally speaking, CE is equivalent to some kind of EM when N(·) is an equivalence relation on examples, so that the neighborhoods partition X. Then if q is any fixed (untrained) distribution over neighborhoods, CE equates to running EM on the model defined by p′ x, y | ⃗θ def = q (N(x)) · p x, y | N(x), ⃗θ (9) CE may also be viewed as an importance sampling approximation to EM, where the sample space X is replaced by N(xi). We will demonstrate experimentally that CE is not just an approximation to EM; it makes sense from a modeling perspective. In §4, we will describe neighborhoods of sequences that can be represented as acyclic lattices built directly from an observed sequence. The sum over Bi is then the total u-score in our model of all paths in the neighborhood lattice. To compute this, intersect the WFSA and the lattice, obtaining a new acyclic WFSA, and sum the u-scores of all its paths (Eisner, 2002) using a simple dynamic programming algorithm akin to the forward algorithm. The sum over Ai may be computed similarly. CE with lattice neighborhoods is not confined to the WFSAs of this paper; when estimating weighted CFGs, the key algorithm is the inside algorithm for lattice parsing (Smith and Eisner, 2005). 3.3 Numerical optimization To maximize the neighborhood likelihood (Eq. 7), we apply a standard numerical optimization method (L-BFGS) that iteratively climbs the function using knowledge of its value and gradient (Liu and Nocedal, 1989). The partial derivative of LN with respect to the jth feature weight θj is ∂LN ∂θj = X i E⃗θ [fj | xi] −E⃗θ [fj | N(xi)] (10) This looks similar to the gradient of log-linear likelihood functions on complete data, though the expectation on the left is in those cases replaced by an observed feature value fj(xi, y∗ i ). In this paper, the 357 natural language is a delicate thing a. DEL1WORD: natural language is a delicate thing language is a delicate thing is a delicate thing ?:ε ? ? b. TRANS1: natural language a delicate thing is delicate is is a natural a is a delicate thing language language delicate thing :x x2 1 x2 x1 : : x x 2 3 : x x 3 2 : x x m m−1 xm−1:xm ? ? ... (Each bigram xi+1 i in the sentence has an arc pair (xi : xi+1, xi+1 : xi).) c. DEL1SUBSEQ: natural language is a delicate thing language is is a a a delicate thing ?:ε ?:ε ?:ε ? ? ? ? ε ε Figure 1: A sentence and three lattices representing some of its neighborhoods. The transducer used to generate each neighborhood lattice (via composition with the sentence, followed by determinization and minimization) is shown to its right. expectations in Eq. 10 are computed by the forwardbackward algorithm generalized to lattices. 
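For a toy instance in which the neighborhood can be enumerated, Eq. 7 and its gradient (Eq. 10) can be computed directly; the sketch below does exactly that. Here feats and neighborhood are placeholders (the neighborhood is assumed to contain the observed sentence itself, as required), and a practical implementation replaces all four sums with the lattice-based forward-backward computations just described.

```python
import itertools
import numpy as np

def neighborhood_objective(theta, sentences, labels, feats, neighborhood):
    """Brute-force L_N(theta) of Eq. 7 and its gradient (Eq. 10) for a toy
    log-linear model with u(x, y) = exp(theta . f(x, y)).

    feats(x, y) returns a feature vector; neighborhood(x) returns a list of
    sequences that includes x itself."""
    theta = np.asarray(theta, dtype=float)
    value, grad = 0.0, np.zeros_like(theta)
    for x in sentences:
        num_z, num_f = 0.0, np.zeros_like(theta)   # sums over {x} x Y
        den_z, den_f = 0.0, np.zeros_like(theta)   # sums over N(x) x Y
        for xp in neighborhood(x):
            for y in itertools.product(labels, repeat=len(xp)):
                f = np.asarray(feats(xp, y), dtype=float)
                u = np.exp(theta @ f)
                den_z += u
                den_f += u * f
                if tuple(xp) == tuple(x):
                    num_z += u
                    num_f += u * f
        value += np.log(num_z) - np.log(den_z)
        grad += num_f / num_z - den_f / den_z      # E[f | x] - E[f | N(x)]
    return value, grad
```

The returned value and gradient are exactly what L-BFGS consumes at each iteration.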
We emphasize that the function LN is not globally concave; our search will lead only to a local optimum.3 Therefore, as with all unsupervised statistical learning, the bias in the initialization of ⃗θ will affect the quality of the estimate and the performance of the method. In future we might wish to apply techniques for avoiding local optima, such as deterministic annealing (Smith and Eisner, 2004). 4 Lattice Neighborhoods We next consider some non-classical neighborhood functions for sequences. When X = Σ+ for some symbol alphabet Σ, certain kinds of neighborhoods have natural, compact representations. Given an input string x = ⟨x1, x2, ..., xm⟩, we write xj i for the substring ⟨xi, xi+1, ..., xj⟩and xm 1 for the whole string. Consider first the neighborhood consisting of all sequences generated by deleting a single symbol from the m-length sequence xm 1 : DEL1WORD(xm 1 ) = n xℓ−1 1 xm ℓ+1 | 1 ≤ℓ≤m o ∪{xm 1 } This set consists of m + 1 strings and can be compactly represented as a lattice (see Fig. 1a). Another 3Without any hidden variables, LN is globally concave. neighborhood involves transposing any pair of adjacent words: TRANS1(xm 1 ) = n xℓ−1 1 xℓ+1xℓxm ℓ+2 | 1 ≤ℓ< m o ∪{xm 1 } This set can also be compactly represented as a lattice (Fig. 1b). We can combine DEL1WORD and TRANS1 by taking their union; this gives a larger neighborhood, DELORTRANS1.4 The DEL1SUBSEQ neighborhood allows the deletion of any contiguous subsequence of words that is strictly smaller than the whole sequence. This lattice is similar to that of DEL1WORD, but adds some arcs (Fig. 1c); the size of this neighborhood is O(m2). A final neighborhood we will consider is LENGTH, which consists of Σm. CE with the LENGTH neighborhood is very similar to EM; it is equivalent to using EM to estimate the parameters of a model defined by Eq. 9 where q is any fixed (untrained) distribution over lengths. When the vocabulary Σ is the set of words in a natural language, it is never fully known; approximations for defining LENGTH = Σm include using observed Σ from the training set (as we do) or adding a special OOV symbol. 4In general, the lattices are obtained by composing the observed sequence with a small FST and determinizing and minimizing the result; the relevant transducers are shown in Fig. 1. 358 30 40 50 60 70 80 90 100 0.1 1 10 % correct tags smoothing parameter 0 8 12K 24K 48K 96K sel. oracle sel. oracle sel. oracle sel. oracle CRF (supervised) 100.0 99.8 99.8 99.5 HMM (supervised) 99.3 98.5 97.9 97.2 LENGTH 74.9 77.4 78.7 81.5 78.3 81.3 78.9 79.3 DELORTR1 70.8 70.8 78.6 78.6 78.3 79.1 75.2 78.8 TRANS1 72.7 72.7 77.2 77.2 78.1 79.4 74.7 79.0 EM 49.5 52.9 55.5 58.0 59.4 60.9 60.9 62.1 DEL1WORD 55.4 55.6 58.6 60.3 59.9 60.2 59.9 60.4 DEL1SSQ 53.0 53.3 55.0 56.7 55.3 55.4 57.3 58.7 random expected 35.2 35.1 35.1 35.1 ambiguous words 6,244 12,923 25,879 51,521 Figure 2: Percent ambiguous words tagged correctly in the 96K dataset, as the smoothing parameter (λ in the case of EM, σ2 in the CE cases) varies. The model selected from each criterion using unlabeled development data is circled in the plot. Dataset size is varied in the table at right, which shows models selected using unlabeled development data (“sel.”) and using an oracle (“oracle,” the highest point on a curve). Across conditions, some neighborhood roughly splits the difference between supervised models and EM. 5 Experiments We compare CE (using neighborhoods from §4) with EM on POS tagging using unlabeled data. 
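Before turning to the comparison, the neighborhoods of §4 can be spelled out as plain sets of token tuples, which is convenient for checking small examples by hand; the experiments use the finite-state lattice encodings above, which avoid materializing each neighbor separately, so the enumeration below is only an illustration.

```python
def del1word(x):
    """DEL1WORD: x together with every string obtained by deleting one word."""
    x = tuple(x)
    return {x} | {x[:i] + x[i + 1:] for i in range(len(x))}

def trans1(x):
    """TRANS1: x together with every transposition of adjacent words."""
    x = tuple(x)
    return {x} | {x[:i] + (x[i + 1], x[i]) + x[i + 2:] for i in range(len(x) - 1)}

def delortrans1(x):
    """DELORTRANS1: the union of the two neighborhoods above."""
    return del1word(x) | trans1(x)

def del1subseq(x):
    """DEL1SUBSEQ: x together with deletions of any contiguous proper subsequence."""
    x = tuple(x)
    out = {x}
    for i in range(len(x)):
        for j in range(i + 1, len(x) + 1):
            if j - i < len(x):                  # strictly smaller than the whole string
                out.add(x[:i] + x[j:])
    return out
```

Applied to the six-word example sentence of Fig. 1, del1word returns seven strings and trans1 returns six, matching the counts given in §4.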
5.1 Comparison with EM Our experiments are inspired by those in Merialdo (1994); we train a trigram tagger using only unlabeled data, assuming complete knowledge of the tagging dictionary.5 In our experiments, we varied the amount of data available (12K–96K words of WSJ), the heaviness of smoothing, and the estimation criterion. In all cases, training stopped when the relative change in the criterion fell below 10−4 between steps (typically ≤100 steps). For this corpus and tag set, on average, a tagger must decide between 2.3 tags for a given token. The generative model trained by EM was identical to Merialdo’s: a second-order HMM. We smoothed using a flat Dirichlet prior with single parameter λ for all distributions (λ-values from 0 to 10 were tested).6 The model was initialized uniformly. The log-linear models trained by CE used the same feature set, though the feature weights are no longer log-probabilities and there are no sum-to-one constraints. In addition to an unsmoothed trial, we tried diagonal Gaussian priors (quadratic penalty) with σ2 ranging from 0.1 to 10. The models were initialized with all θj = 0. Unsupervised model selection. For each (crite5Without a tagging dictionary, tag names are interchangeable and cannot be evaluated on gold-standard accuracy. We address the tagging dictionary assumption in §5.2. 6This is equivalent to add-λ smoothing within every M step. rion, dataset) pair, we selected the smoothing trial that gave the highest estimation criterion score on a 5K-word development set (also unlabeled). Results. The plot in Fig. 2 shows the Viterbi accuracy of each criterion trained on the 96K-word dataset as smoothing was varied; the table shows, for each (criterion, dataset) pair the performance of the selected λ or σ2 and the one chosen by an oracle. LENGTH, TRANS1, and DELORTRANS1 are consistently the best, far out-stripping EM. These gains dwarf the performance of EM on over 1.1M words (66.6% as reported by Smith and Eisner (2004)), even when the latter uses improved search (70.0%). DEL1WORD and DEL1SUBSEQ, on the other hand, are poor, even worse than EM on larger datasets. An important result is that neighborhoods do not succeed by virtue of approximating log-linear EM; if that were so, we would expect larger neighborhoods (like DEL1SUBSEQ) to out-perform smaller ones (like TRANS1)—this is not so. DEL1SUBSEQ and DEL1WORD are poor because they do not give helpful classes of negative evidence: deleting a word or a short subsequence often does very little damage. Put another way, models that do a good job of explaining why no word or subsequence should be deleted do not do so using the familiar POS categories. The LENGTH neighborhood is as close to loglinear EM as it is practical to get. The inconsistencies in the LENGTH curve (Fig. 2) are notable and also appeared at the other training set sizes. Believing this might be indicative of brittleness in Viterbi label selection, we computed the expected 359 DELORTRANS1 TRANS1 LENGTH EM words in trigram trigram + spelling trigram trigram + spelling trigram trigram + spelling trigram tagging dict. sel. oracle sel. oracle sel. oracle sel. oracle sel. oracle sel. oracle sel. oracle random expected ambiguous words ave. tags/token all train & dev. 78.3 90.1 80.9 91.1 90.4 90.4 88.7 90.9 87.8 90.4 87.1 91.9 78.0 84.4 69.5 13,150 2.3 1st 500 sents. 
72.3 84.8 80.2 90.8 80.8 82.9 88.1 90.1 68.1 78.3 76.9 83.2 77.2 80.5 60.5 13,841 3.7 count ≥2 69.5 81.3 79.5 90.3 77.0 78.6 78.7 90.1 65.3 75.2 73.3 73.8 70.1 70.9 56.6 14,780 4.4 count ≥3 65.0 77.2 78.3 89.8 71.7 73.4 78.4 89.5 62.8 72.3 73.2 73.6 66.5 66.5 51.0 15,996 5.5 Table 3: Percent of all words correctly tagged in the 24K dataset, as the tagging dictionary is diluted. Unsupervised model selection (“sel.”) and oracle model selection (“oracle”) across smoothing parameters are shown. Note that we evaluated on all words (unlike Fig. 3) and used 17 coarse tags, giving higher scores than in Fig. 2. accuracy of the LENGTH models; the same “dips” were present. This could indicate that the learner was trapped in a local maximum, suggesting that, since other criteria did not exhibit this behavior, LENGTH might be a bumpier objective surface. It would be interesting to measure the bumpiness (sensitivity to initial conditions) of different contrastive objectives.7 5.2 Removing knowledge, adding features The assumption that the tagging dictionary is completely known is difficult to justify. While a POS lexicon might be available for a new language, certainly it will not give exhaustive information about all word types in a corpus. We experimented with removing knowledge from the tagging dictionary, thereby increasing the difficulty of the task, to see how well various objective functions could recover. One means to recovery is the addition of features to the model—this is easy with log-linear models but not with classical generative models. We compared the performance of the best neighborhoods (LENGTH, DELORTRANS1, and TRANS1) from the first experiment, plus EM, using three diluted dictionaries and the original one, on the 24K dataset. A diluted dictionary adds (tag, word) entries so that rare words are allowed with any tag, simulating zero prior knowledge about the word. “Rare” might be defined in different ways; we used three definitions: words unseen in the first 500 sentences (about half of the 24K training corpus); singletons (words with count ≤1); and words with count ≤2. To allow more trials, we projected the original 45 tags onto a coarser set of 17 (e.g., 7A reviewer suggested including a table comparing different criterion values for each learned model (i.e., each neighborhood evaluated on each other neighborhood). This table contained no big surprises; we note only that most models were the best on their own criterion, and among unsupervised models, LENGTH performed best on the CL criterion. RB∗→ADV). To take better advantage of the power of loglinear models—specifically, their ability to incorporate novel features—we also ran trials augmenting the model with spelling features, allowing exploitation of correlations between parts of the word and a possible tag. Our spelling features included all observed 1-, 2-, and 3-character suffixes, initial capitalization, containing a hyphen, and containing a digit. Results. Fig. 3 plots tagging accuracy (on ambiguous words) for each dictionary on the 24K dataset. The x-axis is the smoothing parameter (λ for EM, σ2 for CE). Note that the different plots are not comparable, because their y-axes are based on different sets of ambiguous words. So that models under different dilution conditions could be compared, we computed accuracy on all words; these are shown in Tab. 3. 
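The spelling features are simple word-internal predicates; one way to extract them is sketched below (feature naming is illustrative, and the restriction of suffix features to suffixes actually observed in the data is omitted).

```python
def spelling_features(word):
    """Word-internal features from Section 5.2, as illustrative string names."""
    feats = [f"suffix{k}={word[-k:].lower()}" for k in (1, 2, 3) if len(word) >= k]
    if word[:1].isupper():
        feats.append("init-cap")
    if "-" in word:
        feats.append("contains-hyphen")
    if any(ch.isdigit() for ch in word):
        feats.append("contains-digit")
    return feats
```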
The reader will notice that there is often a large gap between unsupervised and oracle model selection; this draws attention to a need for better unsupervised regularization and model selection techniques. Without spelling features, all models perform worse as knowledge is removed. But LENGTH suffers most substantially, relative to its initial performance. Why is this? LENGTH (like EM) requires the model to explain why a given sentence was seen instead of some other sentence of the same length. One way to make this explanation is to manipulate emission weights (i.e., for (tag, word) features): the learner can construct a good class-based unigram model of the text (where classes are tags). This is good for the LENGTH objective, but not for learning good POS tag sequences. In contrast, DELORTRANS1 and TRANS1 do not allow the learner to manipulate emission weights for words not in the sentence. The sentence’s goodness must be explained in a way other than by the words it contains: namely through the POS tags. To 360 check this intuition, we built local normalized models p(word | tag) from the parameters learned by TRANS1 and LENGTH. For each tag, these were compared by KL divergence to the empirical lexical distributions (from labeled data). For the ten tags accounting for 95.6% of the data, LENGTH more closely matched the empirical lexical distributions. LENGTH is learning a correct distribution, but that distribution is not helpful for the task. The improvement from adding spelling features is striking: DELORTRANS1 and TRANS1 recover nearly completely (modulo the model selection problem) from the diluted dictionaries. LENGTH sees far less recovery. Hence even our improved feature sets cannot compensate for the choice of neighborhood. This highlights our argument that a neighborhood is not an approximation to log-linear EM; LENGTH tries very hard to approximate log-linear EM but requires a good dictionary to be on par with the other criteria. Good neighborhoods, rather, perform well in their own right. 6 Future Work Foremost for future work is the “minimally supervised” paradigm in which a small amount of labeled data is available (see, e.g., Clark et al. (2003)). Unlike well-known “bootstrapping” approaches (Yarowsky, 1995), EM and CE have the possible advantage of maintaining posteriors over hidden labels (or structure) throughout learning; bootstrapping either chooses, for each example, a single label, or remains completely agnostic. One can envision a mixed objective function that tries to fit the labeled examples while discriminating unlabeled examples from their neighborhoods.8 Regardless of how much (if any) data are labeled, the question of good smoothing techniques requires more attention. Here we used a single zero-mean, constant-variance Gaussian prior for all parameters. Better performance might be achieved by allowing different variances for different feature types. This 8Zhu and Ghahramani (2002) explored the semi-supervised classification problem for spatially-distributed data, where some data are labeled, using a Boltzmann machine to model the dataset. For them, the Markov random field is over labeling configurations for all examples, not, as in our case, complex structured labels for a particular example. Hence their B (Eq. 5), though very large, was finite and could be sampled. 
All train & development words are in the tagging dictionary: 40 45 50 55 60 65 70 75 80 85 Tagging dictionary taken from the first 500 sentences: 40 45 50 55 60 65 70 75 80 85 Tagging dictionary contains words with count ≥2: 40 45 50 55 60 65 70 75 80 85 Tagging dictionary contains words with count ≥3: 40 45 50 55 60 65 70 75 80 85 40 45 50 55 60 65 70 75 80 85 0.1 1 10 smoothing parameter 0 8 50 DELORTRANS1 ■ ■ TRANS1 □ □ LENGTH △ ▽ EM trigram model × trigram + spelling Figure 3: Percent ambiguous words tagged correctly (with coarse tags) on the 24K dataset, as the dictionary is diluted and with spelling features. Each graph corresponds to a different level of dilution. Models selected using unlabeled development data are circled. These plots (unlike Tab. 3) are not comparable to each other because each is measured on a different set of ambiguous words. 361 leads to a need for more efficient tuning of the prior parameters on development data. The effectiveness of CE (and different neighborhoods) for dependency grammar induction is explored in Smith and Eisner (2005) with considerable success. We introduce there the notion of designing neighborhoods to guide learning for particular tasks. Instead of guiding an unsupervised learner to match linguists’ annotations, the choice of neighborhood might be made to direct the learner toward hidden structure that is helpful for error-correction tasks like spelling correction and punctuation restoration that may benefit from a grammatical model. Wang et al. (2002) discuss the latent maximum entropy principle. They advocate running EM many times and selecting the local maximum that maximizes entropy. One might do the same for the local maxima of any CE objective, though theoretical and experimental support for this idea remain for future work. 7 Conclusion We have presented contrastive estimation, a new probabilistic estimation criterion that forces a model to explain why the given training data were better than bad data implied by the positive examples. We have shown that for unsupervised sequence modeling, this technique is efficient and drastically outperforms EM; for POS tagging, the gain in accuracy over EM is twice what we would get from ten times as much data and improved search, sticking with EM’s criterion (Smith and Eisner, 2004). On this task, with certain neighborhoods, contrastive estimation suffers less than EM does from diminished prior knowledge and is able to exploit new features—that EM can’t—to largely recover from the loss of knowledge. References S. P. Abney. 1997. Stochastic attribute-value grammars. Computational Linguistics, 23(4):597–617. Y. Altun, M. Johnson, and T. Hofmann. 2003. Investigating loss functions and optimization methods for discriminative learning of label sequences. In Proc. of EMNLP. E. Charniak. 1993. Statistical Language Learning. MIT Press. S. Clark, J. R. Curran, and M. Osborne. 2003. Bootstrapping POS taggers using unlabelled data. In Proc. of CoNLL. M. Collins. 2000. Discriminative reranking for natural language parsing. In Proc. of ICML. K. Crammer and Y. Singer. 2001. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2(5):265–92. A. Dempster, N. Laird, and D. Rubin. 1977. Maximum likelihood estimation from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1–38. J. Eisner. 2002. Parameter estimation for probabilistic finitestate transducers. In Proc. of ACL. G. E. Hinton. 2003. 
Training products of experts by minimizing contrastive divergence. Technical Report GCNU TR 2000-004, University College London. T. Jebara and A. Pentland. 1998. Maximum conditional likelihood via bound maximization and the CEM algorithm. In Proc. of NIPS. M. Johnson, S. Geman, S. Canon, Z. Chi, and S. Riezler. 1999. Estimators for stochastic “unification-based” grammars. In Proc. of ACL. M. Johnson. 2001. Joint and conditional estimation of tagging and parsing models. In Proc. of ACL. B.-H. Juang and S. Katagiri. 1992. Discriminative learning for minimum error classification. IEEE Trans. Signal Processing, 40:3043–54. D. Klein and C. D. Manning. 2002. Conditional structure vs. conditional estimation in NLP models. In Proc. of EMNLP. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of ICML. D. C. Liu and J. Nocedal. 1989. On the limited memory method for large scale optimization. Mathematical Programming B, 45(3):503–28. A. McCallum and W. Li. 2003. Early results for namedentity extraction with conditional random fields. In Proc. of CoNLL. B. Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20(2):155–72. Y. Miyao and J. Tsujii. 2002. Maximum entropy estimation for feature forests. In Proc. of HLT. A. Ratnaparkhi, S. Roukos, and R. T. Ward. 1994. A maximum entropy model for parsing. In Proc. of ICSLP. S. Riezler, D. Prescher, J. Kuhn, and M. Johnson. 2000. Lexicalized stochastic modeling of constraint-based grammars using log-linear measures and EM training. In Proc. of ACL. S. Riezler. 1999. Probabilistic Constraint Logic Programming. Ph.D. thesis, Universit¨at T¨ubingen. R. Rosenfeld. 1994. Adaptive Statistical Language Modeling: A Maximum Entropy Approach. Ph.D. thesis, CMU. F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. In Proc. of HLT-NAACL. N. A. Smith and J. Eisner. 2004. Annealing techniques for unsupervised statistical language learning. In Proc. of ACL. N. A. Smith and J. Eisner. 2005. Guiding unsupervised grammar induction using contrastive estimation. In Proc. of IJCAI Workshop on Grammatical Inference Applications. R. E. Tarjan. 1981. A unified approach to path problems. Journal of the ACM, 28(3):577–93. V. Valtchev, J. J. Odell, P. C. Woodland, and S. J. Young. 1997. MMIE training of large vocabulary speech recognition systems. Speech Communication, 22(4):303–14. S. Wang, R. Rosenfeld, Y. Zhao, and D. Schuurmans. 2002. The latent maximum entropy principle. In Proc. of ISIT. D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proc. of ACL. X. Zhu and Z. Ghahramani. 2002. Towards semi-supervised classification with Markov random fields. Technical Report CMU-CALD-02-106, Carnegie Mellon University. 362 | 2005 | 44 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 363–370, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling Jenny Rose Finkel, Trond Grenager, and Christopher Manning Computer Science Department Stanford University Stanford, CA 94305 {jrfinkel, grenager, mannning}@cs.stanford.edu Abstract Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. We use this technique to augment an existing CRF-based information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. This technique results in an error reduction of up to 9% over state-of-the-art systems on two established information extraction tasks. 1 Introduction Most statistical models currently used in natural language processing represent only local structure. Although this constraint is critical in enabling tractable model inference, it is a key limitation in many tasks, since natural language contains a great deal of nonlocal structure. A general method for solving this problem is to relax the requirement of exact inference, substituting approximate inference algorithms instead, thereby permitting tractable inference in models with non-local structure. One such algorithm is Gibbs sampling, a simple Monte Carlo algorithm that is appropriate for inference in any factored probabilistic model, including sequence models and probabilistic context free grammars (Geman and Geman, 1984). Although Gibbs sampling is widely used elsewhere, there has been extremely little use of it in natural language processing.1 Here, we use it to add non-local dependencies to sequence models for information extraction. Statistical hidden state sequence models, such as Hidden Markov Models (HMMs) (Leek, 1997; Freitag and McCallum, 1999), Conditional Markov Models (CMMs) (Borthwick, 1999), and Conditional Random Fields (CRFs) (Lafferty et al., 2001) are a prominent recent approach to information extraction tasks. These models all encode the Markov property: decisions about the state at a particular position in the sequence can depend only on a small local window. It is this property which allows tractable computation: the Viterbi, Forward Backward, and Clique Calibration algorithms all become intractable without it. However, information extraction tasks can benefit from modeling non-local structure. As an example, several authors (see Section 8) mention the value of enforcing label consistency in named entity recognition (NER) tasks. In the example given in Figure 1, the second occurrence of the token Tanjug is mislabeled by our CRF-based statistical NER system, because by looking only at local evidence it is unclear whether it is a person or organization. The first occurrence of Tanjug provides ample evidence that it is an organization, however, and by enforcing label consistency the system should be able to get it right. 
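As a preview of the inference procedure developed in §2, the sketch below implements Gibbs resampling of one label at a time with a simple linear cooling schedule. Here cond_prob stands in for the model's local conditional P_M(s_i | s_-i, o) — the quantity §2 shows is easy to compute in any Markov sequence model — and the initialization, schedule constants, and numerical details (e.g., working in log space) are illustrative.

```python
import random

def annealed_gibbs(obs, states, cond_prob, sweeps=1000, rng=random):
    """Simulated-annealing Gibbs sampling for a sequence labeling model.

    cond_prob(i, seq, obs) -> {state: P_M(s_i = state | s_-i, obs)}, the model's
    local conditional.  The temperature c falls linearly from 1 toward 0, so
    early sweeps explore and late sweeps climb toward the most likely sequence."""
    seq = [rng.choice(states) for _ in obs]          # random initialization
    for t in range(sweeps):
        c = max(1.0 - t / sweeps, 0.01)              # illustrative linear cooling schedule
        for i in range(len(obs)):
            weights = {s: p ** (1.0 / c) for s, p in cond_prob(i, seq, obs).items()}
            z = sum(weights.values())
            r, acc = rng.random() * z, 0.0
            for s, w in weights.items():             # draw the new label for position i
                acc += w
                if r <= acc:
                    seq[i] = s
                    break
    return seq
```

With the cooling disabled (c held at 1), the same loop yields samples from the model's conditional distribution rather than an approximate maximizer.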
We show how to incorporate constraints of this form into a CRF model by using Gibbs sampling instead of the Viterbi algorithm as our inference procedure, and demonstrate that this technique yields significant improvements on two established IE tasks. 1Prior uses in NLP of which we are aware include: Kim et al. (1995), Della Pietra et al. (1997) and Abney (1997). 363 the news agency Tanjug reported . . . airport , Tanjug said . Figure 1: An example of the label consistency problem excerpted from a document in the CoNLL 2003 English dataset. 2 Gibbs Sampling for Inference in Sequence Models In hidden state sequence models such as HMMs, CMMs, and CRFs, it is standard to use the Viterbi algorithm, a dynamic programming algorithm, to infer the most likely hidden state sequence given the input and the model (see, e.g., Rabiner (1989)). Although this is the only tractable method for exact computation, there are other methods for computing an approximate solution. Monte Carlo methods are a simple and effective class of methods for approximate inference based on sampling. Imagine we have a hidden state sequence model which defines a probability distribution over state sequences conditioned on any given input. With such a model M we should be able to compute the conditional probability PM(s|o) of any state sequence s = {s0, . . . , sN} given some observed input sequence o = {o0, . . . , oN}. One can then sample sequences from the conditional distribution defined by the model. These samples are likely to be in high probability areas, increasing our chances of finding the maximum. The challenge is how to sample sequences efficiently from the conditional distribution defined by the model. Gibbs sampling provides a clever solution (Geman and Geman, 1984). Gibbs sampling defines a Markov chain in the space of possible variable assignments (in this case, hidden state sequences) such that the stationary distribution of the Markov chain is the joint distribution over the variables. Thus it is called a Markov Chain Monte Carlo (MCMC) method; see Andrieu et al. (2003) for a good MCMC tutorial. In practical terms, this means that we can walk the Markov chain, occasionally outputting samples, and that these samples are guaranteed to be drawn from the target distribution. Furthermore, the chain is defined in very simple terms: from each state sequence we can only transition to a state sequence obtained by changing the state at any one position i, and the distribution over these possible transitions is just PG(s(t)|s(t−1)) = PM(s(t) i |s(t−1) −i , o). (1) where s−i is all states except si. In other words, the transition probability of the Markov chain is the conditional distribution of the label at the position given the rest of the sequence. This quantity is easy to compute in any Markov sequence model, including HMMs, CMMs, and CRFs. One easy way to walk the Markov chain is to loop through the positions i from 1 to N, and for each one, to resample the hidden state at that position from the distribution given in Equation 1. By outputting complete sequences at regular intervals (such as after resampling all N positions), we can sample sequences from the conditional distribution defined by the model. This is still a gravely inefficient process, however. Random sampling may be a good way to estimate the shape of a probability distribution, but it is not an efficient way to do what we want: find the maximum. 
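To make the walk over the chain concrete, the position-by-position resampling loop described above can be sketched as follows. This is an illustrative sketch rather than the authors' implementation: the `conditional` callback is assumed to be supplied by the underlying sequence model and to return the distribution of Equation 1 for position i, given the current labels at all other positions and the (fixed) observation sequence.

    import random

    def gibbs_sweeps(states, init_sequence, conditional, num_sweeps, seed=0):
        # Walk the Gibbs Markov chain; conditional(seq, i) returns
        # P_M(s_i | s_-i, o) as a list of probabilities over `states`,
        # with the observation sequence o already fixed by the caller.
        rng = random.Random(seed)
        seq = list(init_sequence)
        samples = []
        for _ in range(num_sweeps):
            for i in range(len(seq)):
                probs = conditional(seq, i)
                seq[i] = rng.choices(states, weights=probs, k=1)[0]
            samples.append(list(seq))   # emit one sample per full sweep
        return samples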
However, we cannot just transition greedily to higher probability sequences at each step, because the space is extremely non-convex. We can, however, borrow a technique from the study of non-convex optimization and use simulated annealing (Kirkpatrick et al., 1983). Geman and Geman (1984) show that it is easy to modify a Gibbs Markov chain to do annealing; at time t we replace the distribution in (1) with PA(s(t)|s(t−1)) = PM(s(t) i |s(t−1) −i , o)1/ct P j PM(s(t) j |s(t−1) −j , o)1/ct (2) where c = {c0, . . . , cT } defines a cooling schedule. At each step, we raise each value in the conditional distribution to an exponent and renormalize before sampling from it. Note that when c = 1 the distribution is unchanged, and as c →0 the distribution 364 Inference CoNLL Seminars Viterbi 85.51 91.85 Gibbs 85.54 91.85 Sampling 85.51 91.85 85.49 91.85 85.51 91.85 85.51 91.85 85.51 91.85 85.51 91.85 85.51 91.85 85.51 91.86 Mean 85.51 91.85 Std. Dev. 0.01 0.004 Table 1: An illustration of the effectiveness of Gibbs sampling, compared to Viterbi inference, for the two tasks addressed in this paper: the CoNLL named entity recognition task, and the CMU Seminar Announcements information extraction task. We show 10 runs of Gibbs sampling in the same CRF model that was used for Viterbi. For each run the sampler was initialized to a random sequence, and used a linear annealing schedule that sampled the complete sequence 1000 times. CoNLL performance is measured as per-entity F1, and CMU Seminar Announcements performance is measured as per-token F1. becomes sharper, and when c = 0 the distribution places all of its mass on the maximal outcome, having the effect that the Markov chain always climbs uphill. Thus if we gradually decrease c from 1 to 0, the Markov chain increasingly tends to go uphill. This annealing technique has been shown to be an effective technique for stochastic optimization (Laarhoven and Arts, 1987). To verify the effectiveness of Gibbs sampling and simulated annealing as an inference technique for hidden state sequence models, we compare Gibbs and Viterbi inference methods for a basic CRF, without the addition of any non-local model. The results, given in Table 1, show that if the Gibbs sampler is run long enough, its accuracy is the same as a Viterbi decoder. 3 A Conditional Random Field Model Our basic CRF model follows that of Lafferty et al. (2001). We choose a CRF because it represents the state of the art in sequence modeling, allowing both discriminative training and the bi-directional flow of probabilistic information across the sequence. A CRF is a conditional sequence model which represents the probability of a hidden state sequence given some observations. In order to facilitate obtaining the conditional probabilities we need for Gibbs sampling, we generalize the CRF model in a Feature NER TF Current Word ✓ ✓ Previous Word ✓ ✓ Next Word ✓ ✓ Current Word Character n-gram all length ≤6 Current POS Tag ✓ Surrounding POS Tag Sequence ✓ Current Word Shape ✓ ✓ Surrounding Word Shape Sequence ✓ ✓ Presence of Word in Left Window size 4 size 9 Presence of Word in Right Window size 4 size 9 Table 2: Features used by the CRF for the two tasks: named entity recognition (NER) and template filling (TF). way that is consistent with the Markov Network literature (see Cowell et al. 
(1999)): we create a linear chain of cliques, where each clique, c, represents the probabilistic relationship between an adjacent pair of states2 using a clique potential φc, which is just a table containing a value for each possible state assignment. The table is not a true probability distribution, as it only accounts for local interactions within the clique. The clique potentials themselves are defined in terms of exponential models conditioned on features of the observation sequence, and must be instantiated for each new observation sequence. The sequence of potentials in the clique chain then defines the probability of a state sequence (given the observation sequence) as PCRF(s|o) ∝ N Y i=1 φi(si−1, si) (3) where φi(si−1, si) is the element of the clique potential at position i corresponding to states si−1 and si.3 Although a full treatment of CRF training is beyond the scope of this paper (our technique assumes the model is already trained), we list the features used by our CRF for the two tasks we address in Table 2. During training, we regularized our exponential models with a quadratic prior and used the quasi-Newton method for parameter optimization. As is customary, we used the Viterbi algorithm to infer the most likely state sequence in a CRF. 2CRFs with larger cliques are also possible, in which case the potentials represent the relationship between a subsequence of k adjacent states, and contain |S|k elements. 3To handle the start condition properly, imagine also that we define a distinguished start state s0. 365 The clique potentials of the CRF, instantiated for some observation sequence, can be used to easily compute the conditional distribution over states at a position given in Equation 1. Recall that at position i we want to condition on the states in the rest of the sequence. The state at this position can be influenced by any other state that it shares a clique with; in particular, when the clique size is 2, there are 2 such cliques. In this case the Markov blanket of the state (the minimal set of states that renders a state conditionally independent of all other states) consists of the two neighboring states and the observation sequence, all of which are observed. The conditional distribution at position i can then be computed simply as PCRF(si|s−i, o) ∝φi(si−1, si)φi+1(si, si+1) (4) where the factor tables F in the clique chain are already conditioned on the observation sequence. 4 Datasets and Evaluation We test the effectiveness of our technique on two established datasets: the CoNLL 2003 English named entity recognition dataset, and the CMU Seminar Announcements information extraction dataset. 4.1 The CoNLL NER Task This dataset was created for the shared task of the Seventh Conference on Computational Natural Language Learning (CoNLL),4 which concerned named entity recognition. The English data is a collection of Reuters newswire articles annotated with four entity types: person (PER), location (LOC), organization (ORG), and miscellaneous (MISC). The data is separated into a training set, a development set (testa), and a test set (testb). The training set contains 945 documents, and approximately 203,000 tokens. The development set has 216 documents and approximately 51,000 tokens, and the test set has 231 documents and approximately 46,000 tokens. We evaluate performance on this task in the manner dictated by the competition so that results can be properly compared. Precision and recall are evaluated on a per-entity basis (and combined into an F1 score). 
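As a rough illustration of this per-entity scoring (a sketch, not the official CoNLL scorer), entities can be represented as (type, start, end) spans read off the guessed and gold label sequences and compared by exact match:

    def entity_f1(gold_entities, guessed_entities):
        # Each argument is an iterable of (entity_type, start, end) spans;
        # only exact matches of type and both boundaries count as correct.
        gold, guess = set(gold_entities), set(guessed_entities)
        tp = len(gold & guess)
        precision = tp / len(guess) if guess else 0.0
        recall = tp / len(gold) if gold else 0.0
        if precision + recall == 0.0:
            return 0.0
        return 2 * precision * recall / (precision + recall)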
There is no partial credit; an incorrect entity 4Available at http://cnts.uia.ac.be/conll2003/ner/. boundary is penalized as both a false positive and as a false negative. 4.2 The CMU Seminar Announcements Task This dataset was developed as part of Dayne Freitag’s dissertation research Freitag (1998).5 It consists of 485 emails containing seminar announcements at Carnegie Mellon University. It is annotated for four fields: speaker, location, start time, and end time. Sutton and McCallum (2004) used 5-fold cross validation when evaluating on this dataset, so we obtained and used their data splits, so that results can be properly compared. Because the entire dataset is used for testing, there is no development set. We also used their evaluation metric, which is slightly different from the method for CoNLL data. Instead of evaluating precision and recall on a per-entity basis, they are evaluated on a per-token basis. Then, to calculate the overall F1 score, the F1 scores for each class are averaged. 5 Models of Non-local Structure Our models of non-local structure are themselves just sequence models, defining a probability distribution over all possible state sequences. It is possible to flexibly model various forms of constraints in a way that is sensitive to the linguistic structure of the data (e.g., one can go beyond imposing just exact identity conditions). One could imagine many ways of defining such models; for simplicity we use the form PM(s|o) ∝ Y λ∈Λ θ#(λ,s,o) λ (5) where the product is over a set of violation types Λ, and for each violation type λ we specify a penalty parameter θλ. The exponent #(λ, s, o) is the count of the number of times that the violation λ occurs in the state sequence s with respect to the observation sequence o. This has the effect of assigning sequences with more violations a lower probability. The particular violation types are defined specifically for each task, and are described in the following two sections. This model, as defined above, is not normalized, and clearly it would be expensive to do so. This 5Available at http://nlp.shef.ac.uk/dot.kom/resources.html. 366 PER LOC ORG MISC PER 3141 4 5 0 LOC 6436 188 3 ORG 2975 0 MISC 2030 Table 3: Counts of the number of times multiple occurrences of a token sequence is labeled as different entity types in the same document. Taken from the CoNLL training set. PER LOC ORG MISC PER 1941 5 2 3 LOC 0 167 6 63 ORG 22 328 819 191 MISC 14 224 7 365 Table 4: Counts of the number of times an entity sequence is labeled differently from an occurrence of a subsequence of it elsewhere in the document. Rows correspond to sequences, and columns to subsequences. Taken from the CoNLL training set. doesn’t matter, however, because we only use the model for Gibbs sampling, and so only need to compute the conditional distribution at a single position i (as defined in Equation 1). One (inefficient) way to compute this quantity is to enumerate all possible sequences differing only at position i, compute the score assigned to each by the model, and renormalize. Although it seems expensive, this computation can be made very efficient with a straightforward memoization technique: at all times we maintain data structures representing the relationship between entity labels and token sequences, from which we can quickly compute counts of different types of violations. 
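The conditional distribution of this penalty model at a single position can be sketched as below. This mirrors the enumerate-and-renormalize view described above and is only illustrative: `violation_counts` and `penalties` are assumed hooks (a function returning the count of each violation type in a candidate labeling, and a map from violation type to its penalty parameter θ), and a real implementation would maintain the counts incrementally rather than recomputing them for every candidate.

    import math

    def nonlocal_conditional(states, seq, i, obs, violation_counts, penalties):
        # Unnormalized score of Equation 5 for each candidate state at position i,
        # renormalized over the candidates.  violation_counts(seq, obs) returns
        # {violation_type: count}; penalties[v] is the parameter theta_v (> 0).
        scores = []
        for s in states:
            candidate = seq[:i] + [s] + seq[i + 1:]        # seq is a list of labels
            counts = violation_counts(candidate, obs)
            log_score = sum(c * math.log(penalties[v]) for v, c in counts.items())
            scores.append(math.exp(log_score))
        z = sum(scores)
        return [sc / z for sc in scores]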
5.1 CoNLL Consistency Model Label consistency structure derives from the fact that within a particular document, different occurrences of a particular token sequence are unlikely to be labeled as different entity types. Although any one occurrence may be ambiguous, it is unlikely that all instances are unclear when taken together. The CoNLL training data empirically supports the strength of the label consistency constraint. Table 3 shows the counts of entity labels for each pair of identical token sequences within a document, where both are labeled as an entity. Note that inconsistent labelings are very rare.6 In addition, we also 6A notable exception is the labeling of the same text as both organization and location within the same document. This is a consequence of the large portion of sports news in the CoNLL want to model subsequence constraints: having seen Geoff Woods earlier in a document as a person is a good indicator that a subsequent occurrence of Woods should also be labeled as a person. However, if we examine all cases of the labelings of other occurrences of subsequences of a labeled entity, we find that the consistency constraint does not hold nearly so strictly in this case. As an example, one document contains references to both The China Daily, a newspaper, and China, the country. Counts of subsequence labelings within a document are listed in Table 4. Note that there are many offdiagonal entries: the China Daily case is the most common, occurring 328 times in the dataset. The penalties used in the long distance constraint model for CoNLL are the Empirical Bayes estimates taken directly from the data (Tables 3 and 4), except that we change counts of 0 to be 1, so that the distribution remains positive. So the estimate of a PER also being an ORG is 5 3151; there were 5 instance of an entity being labeled as both, PER appeared 3150 times in the data, and we add 1 to this for smoothing, because PER-MISC never occured. However, when we have a phrase labeled differently in two different places, continuing with the PER-ORG example, it is unclear if we should penalize it as PER that is also an ORG or an ORG that is also a PER. To deal with this, we multiply the square roots of each estimate together to form the penalty term. The penalty term is then multiplied in a number of times equal to the length of the offending entity; this is meant to “encourage” the entity to shrink.7 For example, say we have a document with three entities, Rotor Volgograd twice, once labeled as PER and once as ORG, and Rotor, labeled as an ORG. The likelihood of a PER also being an ORG is 5 3151, and of an ORG also being a PER is 5 3169, so the penalty for this violation is ( q 5 3151 × q 5 3151)2. The likelihood of a ORG being a subphrase of a PER is 2 842. So the total penalty would be 5 3151 × 5 3169 × 2 842. dataset, so that city names are often also team names. 7While there is no theoretical justification for this, we found it to work well in practice. 367 5.2 CMU Seminar Announcements Consistency Model Due to the lack of a development set, our consistency model for the CMU Seminar Announcements is much simpler than the CoNLL model, the numbers where selected due to our intuitions, and we did not spend much time hand optimizing the model. Specifically, we had three constraints. The first is that all entities labeled as start time are normalized, and are penalized if they are inconsistent. The second is a corresponding constraint for end times. 
The last constraint attempts to consistently label the speakers. If a phrase is labeled as a speaker, we assume that the last word is the speaker’s last name, and we penalize for each occurrance of that word which is not also labeled speaker. For the start and end times the penalty is multiplied in based on how many words are in the entity. For the speaker, the penalty is only multiplied in once. We used a hand selected penalty of exp −4.0. 6 Combining Sequence Models In the previous section we defined two models of non-local structure. Now we would like to incorporate them into the local model (in our case, the trained CRF), and use Gibbs sampling to find the most likely state sequence. Because both the trained CRF and the non-local models are themselves sequence models, we simply combine the two models into a factored sequence model of the following form PF(s|o) ∝PM(s|o)PL(s|o) (6) where M is the local CRF model, L is the new nonlocal model, and F is the factored model.8 In this form, the probability again looks difficult to compute (because of the normalizing factor, a sum over all hidden state sequences of length N). However, since we are only using the model for Gibbs sampling, we never need to compute the distribution explicitly. Instead, we need only the conditional probability of each position in the sequence, which can be computed as PF (si|s−i, o) ∝PM(si|s−i, o)PL(si|s−i, o). (7) 8This model double-generates the state sequence conditioned on the observations. In practice we don’t find this to be a problem. CoNLL Approach LOC ORG MISC PER ALL B&M LT-RMN – – – – 80.09 B&M GLT-RMN – – – – 82.30 Local+Viterbi 88.16 80.83 78.51 90.36 85.51 NonLoc+Gibbs 88.51 81.72 80.43 92.29 86.86 Table 5: F1 scores of the local CRF and non-local models on the CoNLL 2003 named entity recognition dataset. We also provide the results from Bunescu and Mooney (2004) for comparison. CMU Seminar Announcements Approach STIME ETIME SPEAK LOC ALL S&M CRF 97.5 97.5 88.3 77.3 90.2 S&M Skip-CRF 96.7 97.2 88.1 80.4 90.6 Local+Viterbi 96.67 97.36 83.39 89.98 91.85 NonLoc+Gibbs 97.11 97.89 84.16 90.00 92.29 Table 6: F1 scores of the local CRF and non-local models on the CMU Seminar Announcements dataset. We also provide the results from Sutton and McCallum (2004) for comparison. At inference time, we then sample from the Markov chain defined by this transition probability. 7 Results and Discussion In our experiments we compare the impact of adding the non-local models with Gibbs sampling to our baseline CRF implementation. In the CoNLL named entity recognition task, the non-local models increase the F1 accuracy by about 1.3%. Although such gains may appear modest, note that they are achieved relative to a near state-of-the-art NER system: the winner of the CoNLL English task reported an F1 score of 88.76. In contrast, the increases published by Bunescu and Mooney (2004) are relative to a baseline system which scores only 80.9% on the same task. Our performance is similar on the CMU Seminar Announcements dataset. We show the per-field F1 results that were reported by Sutton and McCallum (2004) for comparison, and note that we are again achieving gains against a more competitive baseline system. For all experiments involving Gibbs sampling, we used a linear cooling schedule. For the CoNLL dataset we collected 200 samples per trial, and for the CMU Seminar Announcements we collected 100 samples. 
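For concreteness, the per-position combination of Equation 7, together with the annealing exponent of Equation 2, can be sketched as follows (again an illustrative sketch rather than the authors' code). The two input lists are the local CRF conditional of Equation 4 and the non-local conditional, both over the same state set; the result can be plugged directly into the Gibbs sweep sketched in Section 2.

    def factored_conditional(crf_cond, nonlocal_cond, temperature=1.0):
        # Multiply the local and non-local conditionals for one position
        # (Equation 7), raise to 1/c_t for annealing (Equation 2), renormalize.
        # Assumes at least one state receives nonzero mass from both models.
        combined = [(p * q) ** (1.0 / temperature)
                    for p, q in zip(crf_cond, nonlocal_cond)]
        z = sum(combined)
        return [c / z for c in combined]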
We report the average of all trials, and in all cases we outperform the baseline with greater than 95% confidence, using the standard t-test. The trials had low standard deviations - 0.083% and 0.007% and high minimun F-scores - 86.72%, and 92.28% 368 - for the CoNLL and CMU Seminar Announcements respectively, demonstrating the stability of our method. The biggest drawback to our model is the computational cost. Taking 100 samples dramatically increases test time. Averaged over 3 runs on both Viterbi and Gibbs, CoNLL testing time increased from 55 to 1738 seconds, and CMU Seminar Announcements testing time increases from 189 to 6436 seconds. 8 Related Work Several authors have successfully incorporated a label consistency constraint into probabilistic sequence model named entity recognition systems. Mikheev et al. (1999) and Finkel et al. (2004) incorporate label consistency information by using adhoc multi-stage labeling procedures that are effective but special-purpose. Malouf (2002) and Curran and Clark (2003) condition the label of a token at a particular position on the label of the most recent previous instance of that same token in a prior sentence of the same document. Note that this violates the Markov property, but is achieved by slightly relaxing the requirement of exact inference. Instead of finding the maximum likelihood sequence over the entire document, they classify one sentence at a time, allowing them to condition on the maximum likelihood sequence of previous sentences. This approach is quite effective for enforcing label consistency in many NLP tasks, however, it permits a forward flow of information only, which is not sufficient for all cases of interest. Chieu and Ng (2002) propose a solution to this problem: for each token, they define additional features taken from other occurrences of the same token in the document. This approach has the added advantage of allowing the training procedure to automatically learn good weightings for these “global” features relative to the local ones. However, this approach cannot easily be extended to incorporate other types of non-local structure. The most relevant prior works are Bunescu and Mooney (2004), who use a Relational Markov Network (RMN) (Taskar et al., 2002) to explicitly models long-distance dependencies, and Sutton and McCallum (2004), who introduce skip-chain CRFs, which maintain the underlying CRF sequence model (which (Bunescu and Mooney, 2004) lack) while adding skip edges between distant nodes. Unfortunately, in the RMN model, the dependencies must be defined in the model structure before doing any inference, and so the authors use crude heuristic part-of-speech patterns, and then add dependencies between these text spans using clique templates. This generates a extremely large number of overlapping candidate entities, which then necessitates additional templates to enforce the constraint that text subsequences cannot both be different entities, something that is more naturally modeled by a CRF. Another disadvantage of this approach is that it uses loopy belief propagation and a voted perceptron for approximate learning and inference – ill-founded and inherently unstable algorithms which are noted by the authors to have caused convergence problems. In the skip-chain CRFs model, the decision of which nodes to connect is also made heuristically, and because the authors focus on named entity recognition, they chose to connect all pairs of identical capitalized words. 
They also utilize loopy belief propagation for approximate learning and inference. While the technique we propose is similar mathematically and in spirit to the above approaches, it differs in some important ways. Our model is implemented by adding additional constraints into the model at inference time, and does not require the preprocessing step necessary in the two previously mentioned works. This allows for a broader class of long-distance dependencies, because we do not need to make any initial assumptions about which nodes should be connected, and is helpful when you wish to model relationships between nodes which are the same class, but may not be similar in any other way. For instance, in the CMU Seminar Announcements dataset, we can normalize all entities labeled as a start time and penalize the model if multiple, nonconsistent times are labeled. This type of constraint cannot be modeled in an RMN or a skip-CRF, because it requires the knowledge that both entities are given the same class label. We also allow dependencies between multi-word phrases, and not just single words. Additionally, our model can be applied on top of a pre-existing trained sequence model. As such, our method does not require complex training procedures, and can 369 instead leverage all of the established methods for training high accuracy sequence models. It can indeed be used in conjunction with any statistical hidden state sequence model: HMMs, CMMs, CRFs, or even heuristic models. Third, our technique employs Gibbs sampling for approximate inference, a simple and probabilistically well-founded algorithm. As a consequence of these differences, our approach is easier to understand, implement, and adapt to new applications. 9 Conclusions We have shown that a constraint model can be effectively combined with an existing sequence model in a factored architecture to successfully impose various sorts of long distance constraints. Our model generalizes naturally to other statistical models and other tasks. In particular, it could in the future be applied to statistical parsing. Statistical context free grammars provide another example of statistical models which are restricted to limiting local structure, and which could benefit from modeling nonlocal structure. Acknowledgements This work was supported in part by the Advanced Researchand Development Activity (ARDA)’s Advanced Question Answeringfor Intelligence (AQUAINT) Program. Additionally, we would like to that our reviewers for their helpful comments. References S. Abney. 1997. Stochastic attribute-value grammars. Computational Linguistics, 23:597–618. C. Andrieu, N. de Freitas, A. Doucet, and M. I. Jordan. 2003. An introduction to MCMC for machine learning. Machine Learning, 50:5–43. A. Borthwick. 1999. A Maximum Entropy Approach to Named Entity Recognition. Ph.D. thesis, New York University. R. Bunescu and R. J. Mooney. 2004. Collective information extraction with relational Markov networks. In Proceedings of the 42nd ACL, pages 439–446. H. L. Chieu and H. T. Ng. 2002. Named entity recognition: a maximum entropy approach using global information. In Proceedings of the 19th Coling, pages 190–196. R. G. Cowell, A. Philip Dawid, S. L. Lauritzen, and D. J. Spiegelhalter. 1999. Probabilistic Networks and Expert Systems. Springer-Verlag, New York. J. R. Curran and S. Clark. 2003. Language independent NER using a maximum entropy tagger. In Proceedings of the 7th CoNLL, pages 164–167. S. Della Pietra, V. Della Pietra, and J. Lafferty. 1997. 
Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19:380–393. J. Finkel, S. Dingare, H. Nguyen, M. Nissim, and C. D. Manning. 2004. Exploiting context for biomedical entity recognition: from syntax to the web. In Joint Workshop on Natural Language Processing in Biomedicine and Its Applications at Coling 2004. D. Freitag and A. McCallum. 1999. Information extraction with HMMs and shrinkage. In Proceedings of the AAAI-99 Workshop on Machine Learning for Information Extraction. D. Freitag. 1998. Machine learning for information extraction in informal domains. Ph.D. thesis, Carnegie Mellon University. S. Geman and D. Geman. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transitions on Pattern Analysis and Machine Intelligence, 6:721–741. M. Kim, Y. S. Han, and K. Choi. 1995. Collocation map for overcoming data sparseness. In Proceedings of the 7th EACL, pages 53–59. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. 1983. Optimization by simulated annealing. Science, 220:671–680. P. J. Van Laarhoven and E. H. L. Arts. 1987. Simulated Annealing: Theory and Applications. Reidel Publishers. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th ICML, pages 282–289. Morgan Kaufmann, San Francisco, CA. T. R. Leek. 1997. Information extraction using hidden Markov models. Master’s thesis, U.C. San Diego. R. Malouf. 2002. Markov models for language-independent named entity recognition. In Proceedings of the 6th CoNLL, pages 187–190. A. Mikheev, M. Moens, and C. Grover. 1999. Named entity recognition without gazetteers. In Proceedings of the 9th EACL, pages 1–8. L. R. Rabiner. 1989. A tutorial on Hidden Markov Models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286. C. Sutton and A. McCallum. 2004. Collective segmentation and labeling of distant entities in information extraction. In ICML Workshop on Statistical Relational Learning and Its connections to Other Fields. B. Taskar, P. Abbeel, and D. Koller. 2002. Discriminative probabilistic models for relational data. In Proceedings of the 18th Conference on Uncertianty in Artificial Intelligence (UAI-02), pages 485–494, Edmonton, Canada. 370 | 2005 | 45 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 371–378, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Unsupervised Learning of Field Segmentation Models for Information Extraction Trond Grenager Computer Science Department Stanford University Stanford, CA 94305 [email protected] Dan Klein Computer Science Division U.C. Berkeley Berkeley, CA 94709 [email protected] Christopher D. Manning Computer Science Department Stanford University Stanford, CA 94305 [email protected] Abstract The applicability of many current information extraction techniques is severely limited by the need for supervised training data. We demonstrate that for certain field structured extraction tasks, such as classified advertisements and bibliographic citations, small amounts of prior knowledge can be used to learn effective models in a primarily unsupervised fashion. Although hidden Markov models (HMMs) provide a suitable generative model for field structured text, general unsupervised HMM learning fails to learn useful structure in either of our domains. However, one can dramatically improve the quality of the learned structure by exploiting simple prior knowledge of the desired solutions. In both domains, we found that unsupervised methods can attain accuracies with 400 unlabeled examples comparable to those attained by supervised methods on 50 labeled examples, and that semi-supervised methods can make good use of small amounts of labeled data. 1 Introduction Information extraction is potentially one of the most useful applications enabled by current natural language processing technology. However, unlike general tools like parsers or taggers, which generalize reasonably beyond their training domains, extraction systems must be entirely retrained for each application. As an example, consider the task of turning a set of diverse classified advertisements into a queryable database; each type of ad would require tailored training data for a supervised system. Approaches which required little or no training data would therefore provide substantial resource savings and extend the practicality of extraction systems. The term information extraction was introduced in the MUC evaluations for the task of finding short pieces of relevant information within a broader text that is mainly irrelevant, and returning it in a structured form. For such “nugget extraction” tasks, the use of unsupervised learning methods is difficult and unlikely to be fully successful, in part because the nuggets of interest are determined only extrinsically by the needs of the user or task. However, the term information extraction was in time generalized to a related task that we distinguish as field segmentation. In this task, a document is regarded as a sequence of pertinent fields, and the goal is to segment the document into fields, and to label the fields. For example, bibliographic citations, such as the one in Figure 1(a), exhibit clear field structure, with fields such as author, title, and date. Classified advertisements, such as the one in Figure 1(b), also exhibit field structure, if less rigidly: an ad consists of descriptions of attributes of an item or offer, and a set of ads for similar items share the same attributes. In these cases, the fields present a salient, intrinsic form of linguistic structure, and it is reasonable to hope that field segmentation models could be learned in an unsupervised fashion. 
In this paper, we investigate unsupervised learning of field segmentation models in two domains: bibliographic citations and classified advertisements for apartment rentals. General, unconstrained induction of HMMs using the EM algorithm fails to detect useful field structure in either domain. However, we demonstrate that small amounts of prior knowledge can be used to greatly improve the learned model. In both domains, we found that unsupervised methods can attain accuracies with 400 unlabeled examples comparable to those attained by supervised methods on 50 labeled examples, and that semi-supervised methods can make good use of small amounts of labeled data. 371 (a) AUTH Pearl AUTH , AUTH J. DATE ( DATE 1988 DATE ) DATE . TTL Probabilistic TTL Reasoning TTL in TTL Intelligent TTL Systems TTL : TTL Networks TTL of TTL Plausible TTL Inference TTL . PUBL Morgan PUBL Kaufmann PUBL . (b) SIZE Spacious SIZE 1 SIZE Bedroom SIZE apt SIZE . FEAT newly FEAT remodeled FEAT , FEAT gated FEAT , FEAT new FEAT appliance FEAT , FEAT new FEAT carpet FEAT , NBRHD near NBRHD public NBRHD transportion NBRHD , NBRHD close NBRHD to NBRHD 580 NBRHD freeway NBRHD , RENT $ RENT 500.00 RENT Deposit CONTACT (510)655-0106 (c) RB No , , PRP it VBD was RB n’t NNP Black NNP Monday . . Figure 1: Examples of three domains for HMM learning: the bibliographic citation fields in (a) and classified advertisements for apartment rentals shown in (b) exhibit field structure. Contrast these to part-of-speech tagging in (c) which does not. 2 Hidden Markov Models Hidden Markov models (HMMs) are commonly used to represent a wide range of linguistic phenomena in text, including morphology, parts-ofspeech (POS), named entity mentions, and even topic changes in discourse. An HMM consists of a set of states S, a set of observations (in our case words or tokens) W, a transition model specifying P(st|st−1), the probability of transitioning from state st−1 to state st, and an emission model specifying P(w|s) the probability of emitting word w while in state s. For a good tutorial on general HMM techniques, see Rabiner (1989). For all of the unsupervised learning experiments we fit an HMM with the same number of hidden states as gold labels to an unannotated training set using EM.1 To compute hidden state expectations efficiently, we use the Forward-Backward algorithm in the standard way. Emission models are initialized to almost-uniform probability distributions, where a small amount of noise is added to break initial symmetry. Transition model initialization varies by experiment. We run the EM algorithm to convergence. Finally, we use the Viterbi algorithm with the learned parameters to label the test data. All baselines and experiments use the same tokenization, normalization, and smoothing techniques, which were not extensively investigated. Tokenization was performed in the style of the Penn Treebank, and tokens were normalized in various ways: numbers, dates, phone numbers, URLs, and email 1EM is a greedy hill-climbing algorithm designed for this purpose, but it is not the only option; one could also use coordinate ascent methods or sampling methods. addresses were collapsed to dedicated tokens, and all remaining tokens were converted to lowercase. Unless otherwise noted, the emission models use simple add-λ smoothing, where λ was 0.001 for supervised techniques, and 0.2 for unsupervised techniques. 3 Datasets and Evaluation The bibliographic citations data is described in McCallum et al. 
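The token normalization and add-λ smoothing described in Section 2 might look roughly as follows; the paper does not give the exact patterns or placeholder token names, so both are assumptions made here for illustration.

    import re

    # Illustrative stand-ins for the collapsed token classes; patterns are rough.
    NORMALIZERS = [
        (re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"), "*email*"),
        (re.compile(r"^(https?://|www\.)\S+$", re.I), "*url*"),
        (re.compile(r"^\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$"), "*phone*"),
        (re.compile(r"^\d{1,2}/\d{1,2}(/\d{2,4})?$"), "*date*"),
        (re.compile(r"^\d+([.,]\d+)*$"), "*num*"),
    ]

    def normalize(token):
        for pattern, placeholder in NORMALIZERS:
            if pattern.match(token):
                return placeholder
        return token.lower()

    def smoothed_emission(count, state_total, vocab_size, lam):
        # Add-lambda estimate of P(w|s); lam = 0.001 (supervised), 0.2 (unsupervised).
        return (count + lam) / (state_total + lam * vocab_size)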
(1999), and is distributed at http://www.cs.umass.edu/~mccallum/. It consists of 500 hand-annotated citations, each taken from the reference section of a different computer science research paper. The citations are annotated with 13 fields, including author, title, date, journal, and so on. The average citation has 35 tokens in 5.5 fields. We split this data, using its natural order, into a 300document training set, a 100-document development set, and a 100-document test set. The classified advertisements data set is novel, and consists of 8,767 classified advertisements for apartment rentals in the San Francisco Bay Area downloaded in June 2004 from the Craigslist website. It is distributed at http://www.stanford.edu/~grenager/. 302 of the ads have been labeled with 12 fields, including size, rent, neighborhood, features, and so on. The average ad has 119 tokens in 8.7 fields. The annotated data is divided into a 102-document training set, a 100-document development set, and a 100-document test set. The remaining 8465 documents form an unannotated training set. In both cases, all system development and parameter tuning was performed on the development set, 372 size rent features restrictions neighborhood utilities available contact photos roomates other address author title editor journal booktitle volume pages publisher location tech institution date DT JJ NN NNS NNP PRP CC MD VBD VB TO IN (a) (b) (c) Figure 2: Matrix representations of the target transition structure in two field structured domains: (a) classified advertisements (b) bibliographic citations. Columns and rows are indexed by the same sequence of fields. Also shown is (c) a submatrix of the transition structure for a part-of-speech tagging task. In all cases the column labels are the same as the row labels. and the test set was only used once, for running final experiments. Supervised learning experiments train on documents selected randomly from the annotated training set and test on the complete test set. Unsupervised learning experiments also test on the complete test set, but create a training set by first adding documents from the test set (without annotation), then adding documents from the annotated training set (without annotation), and finally adding documents from the unannotated training set. Thus if an unsupervised training set is larger than the test set, it fully contains the test set. To evaluate our models, we first learn a set of model parameters, and then use the parameterized model to label the sequence of tokens in the test data with the model’s hidden states. We then compare the similarity of the guessed sequence to the humanannotated sequence of gold labels, and compute accuracy on a per-token basis.2 In evaluation of supervised methods, the model states and gold labels are the same. For models learned in a fully unsupervised fashion, we map each model state in a greedy fashion to the gold label to which it most often corresponds in the gold data. There is a worry with this kind of greedy mapping: it increasingly inflates the results as the number of hidden states grows. To keep the accuracies meaningful, all of our models have exactly the same number of hidden states as gold labels, and so the comparison is valid. 2This evaluation method is used by McCallum et al. (1999) but otherwise is not very standard. Compared to other evaluation methods for information extraction systems, it leads to a lower penalty for boundary errors, and allows long fields also contribute more to accuracy than short ones. 
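One way to read the greedy mapping and per-token scoring described above is sketched below (an illustration, not the authors' evaluation code): each hidden state is mapped to the gold label it most often coincides with on the test tokens, and accuracy is then computed token by token under that mapping.

    from collections import Counter, defaultdict

    def greedy_state_mapping(guessed_states, gold_labels):
        # Map each hidden state to the gold label it most often co-occurs with.
        cooc = defaultdict(Counter)
        for state, label in zip(guessed_states, gold_labels):
            cooc[state][label] += 1
        return {s: counts.most_common(1)[0][0] for s, counts in cooc.items()}

    def token_accuracy(guessed_states, gold_labels):
        mapping = greedy_state_mapping(guessed_states, gold_labels)
        correct = sum(mapping[s] == g for s, g in zip(guessed_states, gold_labels))
        return correct / len(gold_labels)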
4 Unsupervised Learning Consider the general problem of learning an HMM from an unlabeled data set. Even abstracting away from concrete search methods and objective functions, the diversity and simultaneity of linguistic structure is already worrying; in Figure 1 compare the field structure in (a) and (b) to the parts-ofspeech in (c). If strong sequential correlations exist at multiple scales, any fixed search procedure will detect and model at most one of these levels of structure, not necessarily the level desired at the moment. Worse, as experience with part-of-speech and grammar learning has shown, induction systems are quite capable of producing some uninterpretable mix of various levels and kinds of structure. Therefore, if one is to preferentially learn one kind of inherent structure over another, there must be some way of constraining the process. We could hope that field structure is the strongest effect in classified ads, while parts-of-speech is the strongest effect in newswire articles (or whatever we would try to learn parts-of-speech from). However, it is hard to imagine how one could bleach the local grammatical correlations and long-distance topical correlations from our classified ads; they are still English text with part-of-speech patterns. One approach is to vary the objective function so that the search prefers models which detect the structures which we have in mind. This is the primary way supervised methods work, with the loss function relativized to training label patterns. However, for unsupervised learning, the primary candidate for an objective function is the data likelihood, and we don’t have another suggestion here. Another approach is to inject some prior knowledge into the 373 search procedure by carefully choosing the starting point; indeed smart initialization has been critical to success in many previous unsupervised learning experiments. The central idea of this paper is that we can instead restrict the entire search domain by constraining the model class to reflect the desired structure in the data, thereby directing the search toward models of interest. We do this in several ways, which are described in the following sections. 4.1 Baselines To situate our results, we provide three different baselines (see Table 1). First is the most-frequentfield accuracy, achieved by labeling all tokens with the same single label which is then mapped to the most frequent field. This gives an accuracy of 46.4% on the advertisements data and 27.9% on the citations data. The second baseline method is to presegment the unlabeled data using a crude heuristic based on punctuation, and then to cluster the resulting segments using a simple Na¨ıve Bayes mixture model with the Expectation-Maximization (EM) algorithm. This approach achieves an accuracy of 62.4% on the advertisements data, and 46.5% on the citations data. As a final baseline, we trained a supervised firstorder HMM from the annotated training data using maximum likelihood estimation. With 100 training examples, supervised models achieve an accuracy of 74.4% on the advertisements data, and 72.5% on the citations data. With 300 examples, supervised methods achieve accuracies of 80.4 on the citations data. The learning curves of the supervised training experiments for different amounts of training data are shown in Figure 4. Note that other authors have achieved much higher accuracy on the the citation dataset using HMMs trained with supervision: McCallum et al. 
(1999) report accuracies as high as 92.9% by using more complex models and millions of words of BibTeX training data. 4.2 Unconstrained HMM Learning From the supervised baseline above we know that there is some first-order HMM over |S| states which captures the field structure we’re interested in, and we would like to find such a model without supervision. As a first attempt, we try fitting an unconstrained HMM, where the transition function is ini1 2 3 4 5 6 7 8 9 10 11 12 (a) Classified Advertisements 1 2 3 4 5 6 7 8 9 10 11 12 (b) Citations Figure 3: Matrix representations of typical transition models learned by initializing the transition model uniformly. tialized randomly, to the unannotated training data. Not surprisingly, the unconstrained approach leads to predictions which poorly align with the desired field segmentation: with 400 unannotated training documents, the accuracy is just 48.8% for the advertisements and 49.7% for the citations: better than the single state baseline but far from the supervised baseline. To illustrate what is (and isn’t) being learned, compare typical transition models learned by this method, shown in Figure 3, to the maximumlikelihood transition models for the target annotations, shown in Figure 2. Clearly, they aren’t anything like the target models: the learned classified advertisements matrix has some but not all of the desired diagonal structure, and the learned citations matrix has almost no mass on the diagonal, and appears to be modeling smaller scale structure. 4.3 Diagonal Transition Models To adjust our procedure to learn larger-scale patterns, we can constrain the parametric form of the transition model to be P(st|st−1) = σ + (1−σ) |S| if st = st−1 (1−σ) |S| otherwise where |S| is the number of states, and σ is a global free parameter specifying the self-loop probability: 374 (a) Classified advertisements (b) Bibliographic citations Figure 4: Learning curves for supervised learning and unsupervised learning with a diagonal transition matrix on (a) classified advertisements, and (b) bibliographic citations. Results are averaged over 50 runs. the probability of a state transitioning to itself. (Note that the expected mean field length for transition functions of this form is 1 1−σ.) This constraint provides a notable performance improvement: with 400 unannotated training documents the accuracy jumps from 48.8% to 70.0% for advertisements and from 49.7% to 66.3% for citations. The complete learning curves for models of this form are shown in Figure 4. We have tested training on more unannotated data; the slope of the learning curve is leveling out, but by training on 8000 unannotated ads, accuracy improves significantly to 72.4%. On the citations task, an accuracy of approximately 66% can be achieved either using supervised training on 50 annotated citations, or unsupervised training using 400 unannotated citations. 3 Although σ can easily be reestimated with EM (even on a per-field basis), doing so does not yield 3We also tested training on 5000 additional unannotated citations collected from papers found on the Internet. Unfortunately the addition of this data didn’t help accuracy. This probably results from the fact that the datasets were collected from different sources, at different times. Figure 5: Unsupervised accuracy as a function of the expected mean field length 1 1−σ for the classified advertisements dataset. Each model was trained with 500 documents and tested on the development set. Results are averaged over 50 runs. 
better models.4 On the other hand, model accuracy is not very sensitive to the exact choice of σ, as shown in Figure 5 for the classified advertisements task (the result for the citations task has a similar shape). For the remaining experiments on the advertisements data, we use σ = 0.9, and for those on the citations data, we use σ = 0.5. 4.4 Hierarchical Mixture Emission Models Consider the highest-probability state emissions learned by the diagonal model, shown in Figure 6(a). In addition to its characteristic content words, each state also emits punctuation and English function words devoid of content. In fact, state 3 seems to have specialized entirely in generating such tokens. This can become a problem when labeling decisions are made on the basis of the function words rather than the content words. It seems possible, then, that removing function words from the field-specific emission models could yield an improvement in labeling accuracy. One way to incorporate this knowledge into the model is to delete stopwords, which, while perhaps not elegant, has proven quite effective in the past. A better founded way of making certain words unavailable to the model is to emit those words from all states with equal probability. This can be accomplished with the following simple hierarchical mixture emission model Ph(w|s) = αPc(w) + (1 −α)P(w|s) where Pc is the common word distribution, and α is 4While it may be surprising that disallowing reestimation of the transition function is helpful here, the same has been observed in acoustic modeling (Rabiner and Juang, 1993). 375 State 10 Most Common Words 1 . $ no ! month deposit , pets rent available 2 , . room and with in large living kitchen 3 . a the is and for this to , in 4 [NUM1] [NUM0] , bedroom bath / - . car garage 5 , . and a in - quiet with unit building 6 - . [TIME] [PHONE] [DAY] call [NUM8] at (a) State 10 Most Common Words 1 [NUM2] bedroom [NUM1] bath bedrooms large sq car ft garage 2 $ no month deposit pets lease rent available year security 3 kitchen room new , with living large floors hardwood fireplace 4 [PHONE] call please at or for [TIME] to [DAY] contact 5 san street at ave st # [NUM:DDD] francisco ca [NUM:DDDD] 6 of the yard with unit private back a building floor Comm. *CR* . , and - the in a / is with : of for to (b) Figure 6: Selected state emissions from a typical model learned from unsupervised data using the constrained transition function: (a) with a flat emission model, and (b) with a hierarchical emission model. a new global free parameter. In such a model, before a state emits a token it flips a coin, and with probability α it allows the common word distribution to generate the token, and with probability (1−α) it generates the token from its state-specific emission model (see Vaithyanathan and Dom (2000) and Toutanova et al. (2001) for more on such models). We tuned α on the development set and found that a range of values work equally well. We used a value of 0.5 in the following experiments. We ran two experiments on the advertisements data, both using the fixed transition model described in Section 4.3 and the hierarchical emission model. First, we initialized the emission model of Pc to a general-purpose list of stopwords, and did not reestimate it. This improved the average accuracy from 70.0% to 70.9%. Second, we learned the emission model of Pc using EM reestimation. 
Although this method did not yield a significant improvement in accuracy, it learns sensible common words: Figure 6(b) shows a typical emission model learned with this technique. Unfortunately, this technique does not yield improvements on the citations data. 4.5 Boundary Models Another source of error concerns field boundaries. In many cases, fields are more or less correct, but the boundaries are off by a few tokens, even when punctuation or syntax make it clear to a human reader where the exact boundary should be. One way to address this is to model the fact that in this data fields often end with one of a small set of boundary tokens, such as punctuation and new lines, which are shared across states. To accomplish this, we enriched the Markov process so that each field s is now modeled by two states, a non-final s−∈S−and a final s+ ∈S+. The transition model for final states is the same as before, but the transition model for non-final states has two new global free parameters: λ, the probability of staying within the field, and µ, the probability of transitioning to the final state given that we are staying in the field. The transition function for nonfinal states is then P(s′|s−) = (1 −µ)(λ + (1−λ) |S−| ) if s′ = s− µ(λ + (1−λ) |S−| ) if s′ = s+ (1−λ) |S−| if s′ ∈S−\s− 0 otherwise. Note that it can bypass the final state, and transition directly to other non-final states with probability (1 −λ), which models the fact that not all field occurrences end with a boundary token. The transition function for non-final states is then P(s′|s+) = σ + (1−σ) |S−| if s′ = s− (1−σ) |S−| if s′ ∈S−\s− 0 otherwise. Note that this has the form of the standard diagonal function. The reason for the self-loop from the final state back to the non-final state is to allow for field internal punctuation. We tuned the free parameters on the development set, and found that σ = 0.5 and λ = 0.995 work well for the advertisements domain, and σ = 0.3 and λ = 0.9 work well for the citations domain. In all cases it works well to set µ = 1 −λ. Emissions from non-final states are as 376 Ads Citations Baseline 46.4 27.9 Segment and cluster 62.4 46.5 Supervised 74.4 72.5 Unsup. (learned trans) 48.8 49.7 Unsup. (diagonal trans) 70.0 66.3 + Hierarchical (learned) 70.1 39.1 + Hierarchical (given) 70.9 62.1 + Boundary (learned) 70.4 64.3 + Boundary (given) 71.9 68.2 + Hier. + Bnd. (learned) 71.0 — + Hier. + Bnd. (given) 72.7 — Table 1: Summary of results. For each experiment, we report percentage accuracy on the test set. Supervised experiments use 100 training documents, and unsupervised experiments use 400 training documents. Because unsupervised techniques are stochastic, those results are averaged over 50 runs, and differences greater than 1.0% are significant at p=0.05% or better according to the t-test. The last 6 rows are not cumulative. before (hierarchical or not depending on the experiment), while all final states share a boundary emission model. Note that the boundary emissions are not smoothed like the field emissions. We tested both supplying the boundary token distributions and learning them with reestimation during EM. In experiments on the advertisements data we found that learning the boundary emission model gives an insignificant raise from 70.0% to 70.4%, while specifying the list of allowed boundary tokens gives a significant increase to 71.9%. 
When combined with the given hierarchical emission model from the previous section, accuracy rises to 72.7%, our best unsupervised result on the advertisements data with 400 training examples. In experiments on the citations data we found that learning boundary emission model hurts accuracy, but that given the set of boundary tokens it boosts accuracy significantly: increasing it from 66.3% to 68.2%. 5 Semi-supervised Learning So far, we have largely focused on incorporating prior knowledge in rather general and implicit ways. As a final experiment we tested the effect of adding a small amount of supervision: augmenting the large amount of unannotated data we use for unsupervised learning with a small amount of annotated data. There are many possible techniques for semisupervised learning; we tested a particularly simple one. We treat the annotated labels as observed variables, and when computing sufficient statistics in the M-step of EM we add the observed counts from the Figure 7: Learning curves for semi-supervised learning on the citations task. A separate curve is drawn for each number of annotated documents. All results are averaged over 50 runs. annotated documents to the expected counts computed in the E-step. We estimate the transition function using maximum likelihood from the annotated documents only, and do not reestimate it. Semi-supervised results for the citations domain are shown in Figure 7. Adding 5 annotated citations yields no improvement in performance, but adding 20 annotated citations to 300 unannotated citations boosts performance greatly from 65.2% to 71.3%. We also tested the utility of this approach in the classified advertisement domain, and found that it did not improve accuracy. We believe that this is because the transition information provided by the supervised data is very useful for the citations data, which has regular transition structure, but is not as useful for the advertisements data, which does not. 6 Previous Work A good amount of prior research can be cast as supervised learning of field segmentation models, using various model families and applied to various domains. McCallum et al. (1999) were the first to compare a number of supervised methods for learning HMMs for parsing bibliographic citations. The authors explicitly claim that the domain would be suitable for unsupervised learning, but they do not present experimental results. McCallum et al. (2000) applied supervised learning of Maximum Entropy Markov Models (MEMMs) to the domain of parsing Frequently Asked Question (FAQ) lists into their component field structure. More recently, Peng and McCallum (2004) applied supervised learning of Conditional Random Field (CRF) sequence models to the problem of parsing the head377 ers of research papers. There has also been some previous work on unsupervised learning of field segmentation models in particular domains. Pasula et al. (2002) performs limited unsupervised segmentation of bibliographic citations as a small part of a larger probabilistic model of identity uncertainty. However, their system does not explicitly learn a field segmentation model for the citations, and encodes a large amount of hand-supplied information about name forms, abbreviation schemes, and so on. More recently, Barzilay and Lee (2004) defined content models, which can be viewed as field segmentation models occurring at the level of discourse. They perform unsupervised learning of these models from sets of news articles which describe similar events. 
The fields in that case are the topics discussed in those articles. They consider a very different set of applications from the present work, and show that the learned topic models improve performance on two discourse-related tasks: information ordering and extractive document summarization. Most importantly, their learning method differs significantly from ours; they use a complex and special purpose algorithm, which is difficult to adapt, while we see our contribution to be a demonstration of the interplay between model family and learned structure. Because the structure of the HMMs they learn is similar to ours it seems that their system could benefit from the techniques of this paper. Finally, Blei and Moreno (2001) use an HMM augmented by an aspect model to automatically segment documents, similar in goal to the system of Hearst (1997), but using techniques more similar to the present work. 7 Conclusions In this work, we have examined the task of learning field segmentation models using unsupervised learning. In two different domains, classified advertisements and bibliographic citations, we showed that by constraining the model class we were able to restrict the search space of EM to models of interest. We used unsupervised learning methods with 400 documents to yield field segmentation models of a similar quality to those learned using supervised learning with 50 documents. We demonstrated that further refinements of the model structure, including hierarchical mixture emission models and boundary models, produce additional increases in accuracy. Finally, we also showed that semi-supervised methods with a modest amount of labeled data can sometimes be effectively used to get similar good results, depending on the nature of the problem. While there are enough resources for the citation task that much better numbers than ours can be and have been obtained (with more knowledge and resource intensive methods), in domains like classified ads for lost pets or used bicycles unsupervised learning may be the only practical option. In these cases, we find it heartening that the present systems do as well as they do, even without field-specific prior knowledge. 8 Acknowledgements We would like to thank the reviewers for their consideration and insightful comments. References R. Barzilay and L. Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of HLT-NAACL 2004, pages 113– 120. D. Blei and P. Moreno. 2001. Topic segmentation with an aspect hidden Markov model. In Proceedings of the 24th SIGIR, pages 343–348. M. A. Hearst. 1997. TextTiling: Segmenting text into multiparagraph subtopic passages. Computational Linguistics, 23(1):33–64. A. McCallum, K. Nigam, J. Rennie, and K. Seymore. 1999. A machine learning approach to building domain-specific search engines. In IJCAI-1999. A. McCallum, D. Freitag, and F. Pereira. 2000. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of the 17th ICML, pages 591–598. Morgan Kaufmann, San Francisco, CA. H. Pasula, B. Marthi, B. Milch, S. Russell, and I. Shpitser. 2002. Identity uncertainty and citation matching. In Proceedings of NIPS 2002. F. Peng and A. McCallum. 2004. Accurate information extraction from research papers using Conditional Random Fields. In Proceedings of HLT-NAACL 2004. L. R. Rabiner and B.-H. Juang. 1993. Fundamentals of Speech Recognition. Prentice Hall. L. R. Rabiner. 1989. 
A tutorial on Hidden Markov Models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286. K. Toutanova, F. Chen, K. Popat, and T. Hofmann. 2001. Text classification in a hierarchical mixture model for small training sets. In CIKM ’01: Proceedings of the tenth international conference on Information and knowledge management, pages 105–113. ACM Press. S. Vaithyanathan and B. Dom. 2000. Model-based hierarchical clustering. In UAI-2000. 378 | 2005 | 46 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 379–386, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics A Semantic Approach to IE Pattern Induction Mark Stevenson and Mark A. Greenwood Department of Computer Science University of Sheffield Sheffield, S1 4DP, UK marks,[email protected] Abstract This paper presents a novel algorithm for the acquisition of Information Extraction patterns. The approach makes the assumption that useful patterns will have similar meanings to those already identified as relevant. Patterns are compared using a variation of the standard vector space model in which information from an ontology is used to capture semantic similarity. Evaluation shows this algorithm performs well when compared with a previously reported document-centric approach. 1 Introduction Developing systems which can be easily adapted to new domains with the minimum of human intervention is a major challenge in Information Extraction (IE). Early IE systems were based on knowledge engineering approaches but suffered from a knowledge acquisition bottleneck. For example, Lehnert et al. (1992) reported that their system required around 1,500 person-hours of expert labour to modify for a new extraction task. One approach to this problem is to use machine learning to automatically learn the domain-specific information required to port a system (Riloff, 1996). Yangarber et al. (2000) proposed an algorithm for learning extraction patterns for a small number of examples which greatly reduced the burden on the application developer and reduced the knowledge acquisition bottleneck. Weakly supervised algorithms, which bootstrap from a small number of examples, have the advantage of requiring only small amounts of annotated data, which is often difficult and time-consuming to produce. However, this also means that there are fewer examples of the patterns to be learned, making the learning task more challenging. Providing the learning algorithm with access to additional knowledge can compensate for the limited number of annotated examples. This paper presents a novel weakly supervised algorithm for IE pattern induction which makes use of the WordNet ontology (Fellbaum, 1998). Extraction patterns are potentially useful for many language processing tasks, including question answering and the identification of lexical relations (such as meronomy and hyponymy). In addition, IE patterns encode the different ways in which a piece of information can be expressed in text. For example, “Acme Inc. fired Jones”, “Acme Inc. let Jones go”, and “Jones was given notice by his employers, Acme Inc.” are all ways of expressing the same fact. Consequently the generation of extraction patterns is pertinent to paraphrase identification which is central to many language processing problems. We begin by describing the general process of pattern induction and an existing approach, based on the distribution of patterns in a corpus (Section 2). We then introduce a new algorithm which makes use of WordNet to generalise extraction patterns (Section 3) and describe an implementation (Section 4). Two evaluation regimes are described; one based on the identification of relevant documents and another which aims to identify sentences in a corpus which 379 are relevant for a particular IE task (Section 5). Results on each of these evaluation regimes are then presented (Sections 6 and 7). 
2 Extraction Pattern Learning We begin by outlining the general process of learning extraction patterns, similar to one presented by (Yangarber, 2003). 1. For a given IE scenario we assume the existence of a set of documents against which the system can be trained. The documents are unannotated and may be either relevant (contain the description of an event relevant to the scenario) or irrelevant although the algorithm has no access to this information. 2. This corpus is pre-processed to generate the set of all patterns which could be used to represent sentences contained in the corpus, call this set S. The aim of the learning process is to identify the subset of S representing patterns which are relevant to the IE scenario. 3. The user provides a small set of seed patterns, Sseed, which are relevant to the scenario. These patterns are used to form the set of currently accepted patterns, Sacc, so Sacc ←Sseed. The remaining patterns are treated as candidates for inclusion in the accepted set, these form the set Scand(= S −Sacc). 4. A function, f, is used to assign a score to each pattern in Scand based on those which are currently in Sacc. This function assigns a real number to candidate patterns so ∀c ϵ Scand, f(c, Sacc) 7→ℜ. A set of high scoring patterns (based on absolute scores or ranks after the set of patterns has been ordered by scores) are chosen as being suitable for inclusion in the set of accepted patterns. These form the set Slearn. 5. The patterns in Slearn are added to Sacc and removed from Scand, so Sacc ←Sacc ∪Slearn and Scand ←Sacc −Slearn 6. If a suitable set of patterns has been learned then stop, otherwise go to step 4 2.1 Document-centric approach A key choice in the development of such an algorithm is step 4, the process of ranking the candidate patterns, which effectively determines which of the candidate patterns will be learned. Yangarber et al. (2000) chose an approach motivated by the assumption that documents containing a large number of patterns already identified as relevant to a particular IE scenario are likely to contain further relevant patterns. This approach, which can be viewed as being document-centric, operates by associating confidence scores with patterns and relevance scores with documents. Initially seed patterns are given a maximum confidence score of 1 and all others a 0 score. Each document is given a relevance score based on the patterns which occur within it. Candidate patterns are ranked according to the proportion of relevant and irrelevant documents in which they occur, those found in relevant documents far more than in irrelevant ones are ranked highly. After new patterns have been accepted all patterns’ confidence scores are updated, based on the documents in which they occur, and documents’ relevance according to the accepted patterns they contain. This approach has been shown to successfully acquire useful extraction patterns which, when added to an IE system, improved its performance (Yangarber et al., 2000). However, it relies on an assumption about the way in which relevant patterns are distributed in a document collection and may learn patterns which tend to occur in the same documents as relevant ones whether or not they are actually relevant. For example, we could imagine an IE scenario in which relevant documents contain a piece of information which is related to, but distinct from, the information we aim to extract. 
If patterns expressing this information were more likely to occur in relevant documents than irrelevant ones the documentcentric approach would also learn the irrelevant patterns. Rather than focusing on the documents matched by a pattern, an alternative approach is to rank patterns according to how similar their meanings are to those which are known to be relevant. This semantic-similarity approach avoids the problem which may be present in the document-centric approach since patterns which happen to co-occur in the same documents as relevant ones but have different meanings will not be ranked highly. We now go on to describe a new algorithm which implements this approach. 380 3 Semantic IE Pattern Learning For these experiments extraction patterns consist of predicate-argument structures, as proposed by Yangarber (2003). Under this scheme patterns consist of triples representing the subject, verb and object (SVO) of a clause. The first element is the “semantic” subject (or agent), for example “John” is a clausal subject in each of these sentences “John hit Bill”, “Bill was hit by John”, “Mary saw John hit Bill”, and “John is a bully”. The second element is the verb in the clause and the third the object (patient) or predicate. “Bill” is a clausal object in the first three example sentences and “bully” in the final one. When a verb is being used intransitively, the pattern for that clause is restricted to only the first pair of elements. The filler of each pattern element can be either a lexical item or semantic category such as person name, country, currency values, numerical expressions etc. In this paper lexical items are represented in lower case and semantic categories are capitalised. For example, in the pattern COMPANY+fired+ceo, fired and ceo are lexical items and COMPANY a semantic category which could match any lexical item belonging to that type. The algorithm described here relies on identifying patterns with similar meanings. The approach we have developed to do this is inspired by the vector space model which is commonly used in Information Retrieval (Salton and McGill, 1983) and language processing in general (Pado and Lapata, 2003). Each pattern can be represented as a set of pattern element-filler pairs. For example, the pattern COMPANY+fired+ceo consists of three pairs: subject COMPANY, verb fired and object ceo. Each pair consists of either a lexical item or semantic category, and pattern element. Once an appropriate set of pairs has been established a pattern can be represented as a binary vector in which an element with value 1 denotes that the pattern contains a particular pair and 0 that it does not. 3.1 Pattern Similarity The similarity of two pattern vectors can be compared using the measure shown in Equation 1. Here ⃗a and⃗b are pattern vectors, ⃗bT the transpose of⃗b and Patterns Matrix labels a. chairman+resign 1. subject chairman b. ceo+quit 2. subject ceo c. chairman+comment 3. verb resign 4. verb quit 5. verb comment Similarity matrix Similarity values 1 0.95 0 0 0 0.95 1 0 0 0 0 0 1 0.9 0.1 0 0 0.9 1 0.1 0 0 0.1 0.1 1 sim(⃗a,⃗b) = 0.925 sim(⃗a, ⃗c) = 0.55 sim(⃗b, ⃗c) = 0.525 Figure 1: Similarity scores and matrix for an example vector space formed from three patterns W a matrix that lists the similarity between each of the possible pattern element-filler pairs. 
sim(⃗a,⃗b) = ⃗aW ⃗bT |⃗a||⃗b| (1) The semantic similarity matrix W contains information about the similarity of each pattern elementfiller pair stored as non-negative real numbers and is crucial for this measure. Assume that the set of patterns, P, consists of n element-filler pairs denoted by p1, p2, ...pn. Each row and column of W represents one of these pairs and they are consistently labelled. So, for any i such that 1 ≤i ≤n, row i and column i are both labelled with pair pi. If wij is the element of W in row i and column j then the value of wij represents the similarity between the pairs pi and pj. Note that we assume the similarity of two element-filler pairs is symmetric, so wij = wji and, consequently, W is a symmetric matrix. Pairs with different pattern elements (i.e. grammatical roles) are automatically given a similarity score of 0. Diagonal elements of W represent the self-similarity between pairs and have the greatest values. Figure 1 shows an example using three patterns, chairman+resign, ceo+quit and chairman+comment. This shows how these patterns are represented as vectors and gives a sample semantic similarity matrix. It can be seen that the first pair of patterns are the most similar using the proposed measure. The measure in Equation 1 is similar to the cosine metric, commonly used to determine the similarity of documents in the vector space model approach 381 to Information Retrieval. However, the cosine metric will not perform well for our application since it does not take into account the similarity between elements of a vector and would assign equal similarity to each pair of patterns in the example shown in Figure 1.1 The semantic similarity matrix in Equation 1 provides a mechanism to capture semantic similarity between lexical items which allows us to identify chairman+resign and ceo+quit as the most similar pair of patterns. 3.2 Populating the Matrix It is important to choose appropriate values for the elements of W. We chose to make use of the research that has concentrated on computing similarity between pairs of lexical items using the WordNet hierarchy (Resnik, 1995; Jiang and Conrath, 1997; Patwardhan et al., 2003). We experimented with several of the measures which have been reported in the literature and found that the one proposed by Jiang and Conrath (1997) to be the most effective. The similarity measure proposed by Jiang and Conrath (1997) relies on a technique developed by Resnik (1995) which assigns numerical values to each sense in the WordNet hierarchy based upon the amount of information it represents. These values are derived from corpus counts of the words in the synset, either directly or via the hyponym relation and are used to derive the Information Content (IC) of a synset c thus IC(c) = −log(Pr(c)). For two senses, s1 and s2, the lowest common subsumer, lcs(s1, s2), is defined as the sense with the highest information content (most specific) which subsumes both senses in the WordNet hierarchy. Jiang and Conrath used these elements to calculate the semantic distance between a pair or words, w1 and w2, according to this formula (where senses(w) is the set 1The cosine metric for a pair of vectors is given by the calculation a.b |a||b|. Substituting the matrix multiplication in the numerator of Equation 1 for the dot product of vectors ⃗a and ⃗b would give the cosine metric. Note that taking the dot product of a pair of vectors is equivalent to multiplying by the identity matrix, i.e. ⃗a.⃗b = ⃗aI ⃗bT . 
Under our interpretation of the similarity matrix, W, this equates to each pattern element-filler pair being identical to itself but not similar to anything else. of all possible WordNet senses for word w): ARGMAX s1 ϵ senses(w1), s2 ϵ senses(w2) IC(s1)+IC(s2)−2×IC(lcs(s1, s2)) (2) Patwardhan et al. (2003) convert this distance metric into a similarity measure by taking its multiplicative inverse. Their implementation was used in the experiments described later. As mentioned above, the second part of a pattern element-filler pair can be either a lexical item or a semantic category, such as company. The identifiers used to denote these categories, i.e. COMPANY, do not appear in WordNet and so it is not possible to directly compare their similarity with other lexical items. To avoid this problem these tokens are manually mapped onto the most appropriate node in the WordNet hierarchy which is then used for similarity calculations. This mapping process is not particularly time-consuming since the number of named entity types with which a corpus is annotated is usually quite small. For example, in the experiments described in this paper just seven semantic classes were sufficient to annotate the corpus. 3.3 Learning Algorithm This pattern similarity measure can be used to create a weakly supervised approach to pattern acquisition following the general outline provided in Section 2. Each candidate pattern is compared against the set of currently accepted patterns using the measure described in Section 3.1. We experimented with several techniques for ranking candidate patterns based on these scores, including using the best and average score, and found that the best results were obtained when each candidate pattern was ranked according to its score when compared against the centroid vector of the set of currently accepted patterns. We also experimented with several schemes for deciding which of the scored patterns to accept (a full description would be too long for this paper) resulting in a scheme where the four highest scoring patterns whose score is within 0.95 of the best pattern are accepted. Our algorithm disregards any patterns whose corpus occurrences are below a set threshold, α, since these may be due to noise. In addition, a second 382 threshold, β, is used to determine the maximum number of documents in which a pattern can occur since these very frequent patterns are often too general to be useful for IE. Patterns which occur in more than β × C, where C is the number of documents in the collection, are not learned. For the experiments in this paper we set α to 2 and β to 0.3. 4 Implementation A number of pre-processing stages have to be applied to documents in order for the set of patterns to be extracted before learning can take place. Firstly, items belonging to semantic categories are identified by running the text through the named entity identifier in the GATE system (Cunningham et al., 2002). The corpus is then parsed, using a version of MINIPAR (Lin, 1999) adapted to process text marked with named entities, to produce dependency trees from which SVO-patterns are extracted. Active and passive voice is taken into account in MINIPAR’s output so the sentences “COMPANY fired their C.E.O.” and “The C.E.O. was fired by COMPANY” would yield the same triple, COMPANY+fire+ceo. The indirect object of ditransitive verbs is not extracted; these verbs are treated like transitive verbs for the purposes of this analysis. 
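Before turning to the implementation, the scoring and acceptance machinery of Sections 3.1 and 3.3 can be summarised in a short sketch. This is our own illustration rather than the authors' code; in particular, reading "within 0.95 of the best pattern" as a ratio of scores is our assumption.

import numpy as np

def pattern_similarity(a, b, W):
    # Equation 1: a and b are pattern vectors over element-filler pairs,
    # W is the semantic similarity matrix.
    return float(a @ W @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def learning_iteration(accepted, candidates, W, window=0.95, max_new=4):
    # One iteration of the bootstrapping procedure of Section 2: candidates
    # are scored against the centroid of the currently accepted patterns
    # and the top-scoring ones are moved into the accepted set.
    centroid = np.mean(list(accepted.values()), axis=0)
    scores = {p: pattern_similarity(centroid, v, W) for p, v in candidates.items()}
    best = max(scores.values())
    ranked = sorted(scores, key=scores.get, reverse=True)
    learned = [p for p in ranked if scores[p] >= window * best][:max_new]
    for p in learned:
        accepted[p] = candidates.pop(p)
    return learned

In this sketch the candidate set is assumed to have already been filtered with the frequency thresholds α and β described in Section 3.3.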
An implementation of the algorithm described in Section 3 was completed in addition to an implementation of the document-centric algorithm described in Section 2.1. It is important to mention that this implementation is not identical to the one described by Yangarber et al. (2000). Their system makes some generalisations across pattern elements by grouping certain elements together. However, there is no difference between the expressiveness of the patterns learned by either approach and we do not believe this difference has any effect on the results of our experiments. 5 Evaluation Various approaches have been suggested for the evaluation of automatic IE pattern acquisition. Riloff (1996) judged the precision of patterns learned by reviewing them manually. Yangarber et al. (2000) developed an indirect method which allowed automatic evaluation. In addition to learning a set of patterns, their system also notes the relevance of documents based on the current set of accepted patterns. Assuming the subset of documents relevant to a particular IE scenario is known, it is possible to use these relevance judgements to determine how accurately a given set of patterns can discriminate the relevant documents from the irrelevant. This evaluation is similar to the “text-filtering” sub-task used in the sixth Message Understanding Conference (MUC-6) (1995) in which systems were evaluated according to their ability to identify the documents relevant to the extraction task. The document filtering evaluation technique was used to allow comparison with previous studies. Identifying the document containing relevant information can be considered as a preliminary stage of an IE task. A further step is to identify the sentences within those documents which are relevant. This “sentence filtering” task is a more fine-grained evaluation and is likely to provide more information about how well a given set of patterns is likely to perform as part of an IE system. Soderland (1999) developed a version of the MUC-6 corpus in which events are marked at the sentence level. The set of patterns learned by the algorithm after each iteration can be compared against this corpus to determine how accurately they identify the relevant sentences for this extraction task. 5.1 Evaluation Corpus The evaluation corpus used for the experiments was compiled from the training and testing corpus used in MUC-6, where the task was to extract information about the movements of executives from newswire texts. A document is relevant if it has a filled template associated with it. 590 documents from a version of the MUC-6 evaluation corpus described by Soderland (1999) were used. After the pre-processing stages described in Section 4, the MUC-6 corpus produced 15,407 pattern tokens from 11,294 different types. 10,512 patterns appeared just once and these were effectively discarded since our learning algorithm only considers patterns which occur at least twice (see Section 3.3). The document-centric approach benefits from a large corpus containing a mixture of relevant and irrelevant documents. We provided this using a subset of the Reuters Corpus Volume I (Rose et al., 2002) which, like the MUC-6 corpus, consists of newswire 383 COMPANY+appoint+PERSON COMPANY+elect+PERSON COMPANY+promote+PERSON COMPANY+name+PERSON PERSON+resign PERSON+depart PERSON+quit Table 1: Seed patterns for extraction task texts. 
3000 documents relevant to the management succession task (identified using document metadata) and 3000 irrelevant documents were used to produce the supplementary corpus. This supplementary corpus yielded 126,942 pattern tokens and 79,473 types with 14,576 of these appearing more than once. Adding the supplementary corpus to the data set used by the document-centric approach led to an improvement of around 15% on the document filtering task and over 70% for sentence filtering. It was not used for the semantic similarity algorithm since there was no benefit. The set of seed patterns listed in Table 1 are indicative of the management succession extraction task and were used for these experiments. 6 Results 6.1 Document Filtering Results for both the document and sentence filtering experiments are reported in Table 2 which lists precision, recall and F-measure for each approach on both evaluations. Results from the document filtering experiment are shown on the left hand side of the table and continuous F-measure scores for the same experiment are also presented in graphical format in Figure 2. While the document-centric approach achieves the highest F-measure of either system (0.83 on the 33rd iteration compared against 0.81 after 48 iterations of the semantic similarity approach) it only outperforms the proposed approach for a few iterations. In addition the semantic similarity approach learns more quickly and does not exhibit as much of a drop in performance after it has reached its best value. Overall the semantic similarity approach was found to be significantly better than the document-centric approach (p < 0.001, Wilcoxon Signed Ranks Test). Although it is an informative evaluation, the document filtering task is limited for evaluating IE pat0 20 40 60 80 100 120 Iteration 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 F-measure Semantic Similarity Document-centric Figure 2: Evaluating document filtering. tern learning. This evaluation indicates whether the set of patterns being learned can identify documents containing descriptions of events but does not provide any information about whether it can find those events within the documents. In addition, the set of seed patterns used for these experiments have a high precision and low recall (Table 2). We have found that the distribution of patterns and documents in the corpus means that learning virtually any pattern will help improve the F-measure. Consequently, we believe the sentence filtering evaluation to be more useful for this problem. 6.2 Sentence Filtering Results from the sentence filtering experiment are shown in tabular format in the right hand side of Table 22 and graphically in Figure 3. The semantic similarity algorithm can be seen to outperform the document-centric approach. This difference is also significant (p < 0.001, Wilcoxon Signed Ranks Text). The clear difference between these results shows that the semantic similarity approach can indeed identify relevant sentences while the documentcentric method identifies patterns which match relevant documents, although not necessarily relevant sentences. 2The set of seed patterns returns a precision of 0.81 for this task. The precision is not 1 since the pattern PERSON+resign matches sentences describing historical events (“Jones resigned last year.”) which were not marked as relevant in this corpus following MUC guidelines. 
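Both filtering evaluations reduce to a simple relevance check over documents or sentences. The sketch below reflects our reading of the evaluation, in which an item counts as retrieved if any accepted pattern matches it; the data layout is hypothetical.

def filtering_scores(accepted_patterns, corpus):
    # corpus: list of (patterns_in_item, is_relevant) pairs, where an item
    # is a document or a sentence depending on the evaluation.
    tp = fp = fn = 0
    for patterns, relevant in corpus:
        matched = any(p in accepted_patterns for p in patterns)
        tp += int(matched and relevant)
        fp += int(matched and not relevant)
        fn += int(not matched and relevant)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f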
384 Document Filtering Sentence Filtering Number of Document-centric Semantic similarity Document-centric Semantic similarity Iterations P R F P R F P R F P R F 0 1.00 0.26 0.42 1.00 0.26 0.42 0.81 0.10 0.18 0.81 0.10 0.18 20 0.75 0.68 0.71 0.77 0.78 0.77 0.30 0.29 0.29 0.61 0.49 0.54 40 0.72 0.96 0.82 0.70 0.93 0.80 0.40 0.67 0.51 0.47 0.64 0.55 60 0.65 0.96 0.78 0.68 0.96 0.80 0.32 0.70 0.44 0.42 0.73 0.54 80 0.56 0.96 0.71 0.61 0.98 0.76 0.18 0.71 0.29 0.37 0.89 0.52 100 0.56 0.96 0.71 0.58 0.98 0.73 0.18 0.73 0.28 0.28 0.92 0.42 120 0.56 0.96 0.71 0.58 0.98 0.73 0.17 0.75 0.28 0.26 0.95 0.41 Table 2: Comparison of the different approaches over 120 iterations 0 20 40 60 80 100 120 Iteration 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 F-measure Semantic Similarity Document-centric Figure 3: Evaluating sentence filtering. The precision scores for the sentence filtering task in Table 2 show that the semantic similarity algorithm consistently learns more accurate patterns than the existing approach. At the same time it learns patterns with high recall much faster than the document-centric approach, by the 120th iteration the pattern set covers almost 95% of relevant sentences while the document-centric approach covers only 75%. 7 Discussion The approach to IE pattern acquisition presented here is related to other techniques but uses different assumptions regarding which patterns are likely to be relevant to a particular extraction task. Evaluation has showed that the semantic generalisation approach presented here performs well when compared to a previously reported document-centric method. Differences between the two approaches are most obvious when the results of the sentence filtering task are considered and it seems that this is a more informative evaluation for this problem. The semantic similarity approach has the additional advantage of not requiring a large corpus containing a mixture of documents relevant and irrelevant to the extraction task. This corpus is unannotated, and so may not be difficult to obtain, but is nevertheless an additional requirement. The best score recorded by the proposed algorithm on the sentence filtering task is an F-measure of 0.58 (22nd iteration). While this result is lower than those reported for IE systems based on knowledge engineering approaches these results should be placed in the context of a weakly supervised learning algorithm which could be used to complement manual approaches. These results could be improved by manual filtering the patterns identified by the algorithm. The learning algorithm presented in Section 3 includes a mechanism for comparing two extraction patterns using information about lexical similarity derived from WordNet. This approach is not restricted to this application and could be applied to other language processing tasks such as question answering, paraphrase identification and generation or as a variant of the vector space model commonly used in Information Retrieval. In addition, Sudo et al. (2003) proposed representations for IE patterns which extends the SVO representation used here and, while they did not appear to significantly improve IE, it is expected that it will be straightforward to extend the vector space model to those pat385 tern representations. 
One of the reasons for the success of the approach described here is the appropriateness of WordNet which is constructed on paradigmatic principles, listing the words which may be substituted for one another, and is consequently an excellent resource for this application. WordNet is also a generic resource not associated with a particular domain which means the learning algorithm can make use of that knowledge to acquire patterns for a diverse range of IE tasks. This work represents a step towards truly domain-independent IE systems. Employing a weakly supervised learning algorithm removes much of the requirement for a human annotator to provide example patterns. Such approaches are often hampered by a lack of information but the additional knowledge in WordNet helps to compensate. Acknowledgements This work was carried out as part of the RESuLT project funded by the EPSRC (GR/T06391). Roman Yangarber provided advice on the reimplementation of the document-centric algorithm. We are also grateful for the detailed comments provided by the anonymous reviewers of this paper. References H. Cunningham, D. Maynard, K. Bontcheva, and V. Tablan. 2002. GATE: an Architecture for Development of Robust HLT. In Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics (ACL-02), pages 168–175, Philadelphia, PA. C. Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database and some of its Applications. MIT Press, Cambridge, MA. J. Jiang and D. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of International Conference on Research in Computational Linguistics, Taiwan. W. Lehnert, C. Cardie, D. Fisher, J. McCarthy, E. Riloff, and S. Soderland. 1992. University of Massachusetts: Description of the CIRCUS System used for MUC-4. In Proceedings of the Fourth Message Understanding Conference (MUC-4), pages 282–288, San Francisco, CA. D. Lin. 1999. MINIPAR: a minimalist parser. In Maryland Linguistics Colloquium, University of Maryland, College Park. MUC. 1995. Proceedings of the Sixth Message Understanding Conference (MUC-6), San Mateo, CA. Morgan Kaufmann. S. Pado and M. Lapata. 2003. Constructing semantic space models from parsed corpora. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03), pages 128–135, Sapporo, Japan. S. Patwardhan, S. Banerjee, and T. Pedersen. 2003. Using measures of semantic relatedness for word sense disambiguation. In Proceedings of the Fourth International Conferences on Intelligent Text Processing and Computational Linguistics, pages 241–257, Mexico City. P. Resnik. 1995. Using Information Content to evaluate Semantic Similarity in a Taxonomy. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI-95), pages 448–453, Montreal, Canada. E. Riloff. 1996. Automatically generating extraction patterns from untagged text. In Thirteenth National Conference on Artificial Intelligence (AAAI-96), pages 1044–1049, Portland, OR. T. Rose, M. Stevenson, and M. Whitehead. 2002. The Reuters Corpus Volume 1 - from Yesterday’s news to tomorrow’s language resources. In LREC-02, pages 827–832, La Palmas, Spain. G. Salton and M. McGill. 1983. Introduction to Modern Information Retrieval. McGraw-Hill, New York. S. Soderland. 1999. Learning Information Extraction Rules for Semi-structured and free text. Machine Learning, 31(1-3):233–272. K. Sudo, S. Sekine, and R. Grishman. 2003. 
An Improved Extraction Pattern Representation Model for Automatic IE Pattern Acquisition. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03), pages 224–231. R. Yangarber, R. Grishman, P. Tapanainen, and S. Huttunen. 2000. Automatic acquisition of domain knowledge for information extraction. In Proceedings of the 18th International Conference on Computational Linguistics (COLING 2000), pages 940–946, Saarbr¨ucken, Germany. R. Yangarber. 2003. Counter-training in the discovery of semantic patterns. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03), pages 343–350, Sapporo, Japan. 386 | 2005 | 47 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 387–394, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Word Sense Disambiguation vs. Statistical Machine Translation Marine CARPUAT Dekai WU1 [email protected] [email protected] Human Language Technology Center HKUST Department of Computer Science University of Science and Technology Clear Water Bay, Hong Kong Abstract We directly investigate a subject of much recent debate: do word sense disambigation models help statistical machine translation quality? We present empirical results casting doubt on this common, but unproved, assumption. Using a state-ofthe-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system, we find that word sense disambiguation does not yield significantly better translation quality than the statistical machine translation system alone. Error analysis suggests several key factors behind this surprising finding, including inherent limitations of current statistical MT architectures. 1 Introduction Word sense disambiguation or WSD, the task of determining the correct sense of a word in context, is a much studied problem area with a long and honorable history. Recent years have seen steady accuracy gains in WSD models, driven in particular by controlled evaluations such as the Senseval series of workshops. Word sense disambiguation is often assumed to be an intermediate task, which should then help higher level applications such as machine 1The authors would like to thank the Hong Kong Research Grants Council (RGC) for supporting this research in part through grants RGC6083/99E, RGC6256/00E, and DAG03/04.EG09, and several anonymous reviewers for insights and suggestions. translation or information retrieval. However, WSD is usually performed and evaluated as a standalone task, and to date there have been very few efforts to integrate the learned WSD models into full statistical MT systems. An energetically debated question at conferences over the past year is whether even the new stateof-the-art word sense disambiguation models actually have anything to offer to full statistical machine translation systems. Among WSD circles, this can sometimes elicit responses that border on implying that even asking the question is heretical. In efforts such as Senseval we tend to regard the construction of WSD models as an obviously correct, if necessarily simplified, approach that will eventually lead to essential disambiguation components within larger applications like machine translation. There is no question that the word sense disambiguation perspective has led to numerous insights in machine translation, even of the statistical variety. It is often simply an unstated assumption that any full translation system, to achieve full performance, will sooner or later have to incorporate individual WSD components. However, in some translation architectures and particularly in statistical machine translation (SMT), the translation engine already implicitly factors in many contextual features into lexical choice. From this standpoint, SMT models can be seen as WSD models in their own right, albeit with several major caveats. But typical statistical machine translation models only rely on a local context to choose among lexical translation candidates, as discussed in greater detail later. 
It is therefore often assumed that dedicated WSD-based lexical choice models, which can incor387 porate a wider variety of context features, can make better predictions than the “weaker” models implicit in statistical MT, and that these predictions will help the translation quality. Nevertheless, this assumption has not been empirically verified, and we should not simply assume that WSD models can contribute more than what the SMT models perform. It may behoove us to take note of the sobering fact that, perhaps analogously, WSD has yet to be conclusively shown to help information retrieval systems after many years of attempts. In this work, we propose to directly investigate whether word sense disambiguation—at least as it is typically currently formulated—is useful for statistical machine translation. We tackle a real Chinese to English translation task using a state-of-the-art supervised WSD system and a typical SMT model. We show that the unsupervised SMT model, trained on parallel data without any manual sense annotation, yields higher BLEU scores than the case where the SMT model makes use of the lexical choice predictions from the supervised WSD model, which are more expensive to create. The reasons for the surprising difficulty of improving over the translation quality of the SMT model are then discussed and analyzed. 2 Word sense disambiguation vs. statistical machine translation We begin by examining the respective strengths and weaknesses of dedicated WSD models versus full SMT models, that could be expected to be relevant to improving lexical choice. 2.1 Features Unique to WSD Dedicated WSD is typically cast as a classification task with a predefined sense inventory. Sense distinctions and granularity are often manually predefined, which means that they can be adapted to the task at hand, but also that the translation candidates are limited to an existing set. To improve accuracy, dedicated WSD models typically employ features that are not limited to the local context, and that include more linguistic information than the surface form of words. This often requires several stages of preprocessing, such as part-of-speech tagging and/or parsing. (Preprocessor domain can be an issue, since WSD accuracy may suffer from domain mismatches between the data the preprocessors were trained on, and the data they are applied to.) For example, a typical dedicated WSD model might employ features as described by Yarowsky and Florian (2002) in their “feature-enhanced naive Bayes model”, with position-sensitive, syntactic, and local collocational features. The feature set made available to the WSD model to predict lexical choices is therefore much richer than that used by a statistical MT model. Also, dedicated WSD models can be supervised, which yields significantly higher accuracies than unsupervised. For the experiments described in this study we employed supervised training, exploiting the annotated corpus that was produced for the Senseval-3 evaluation. 2.2 Features Unique to SMT Unlike lexical sample WSD models, SMT models simultaneously translate complete sentences rather than isolated target words. The lexical choices are made in a way that heavily prefers phrasal cohesion in the output target sentence, as scored by the language model. That is, the predictions benefit from the sentential context of the target language. This has the general effect of improving translation fluency. The WSD accuracy of the SMT model depends critically on the phrasal cohesion of the target language. 
As we shall see, this phrasal cohesion property has strong implications for the utility of WSD. In other work (forthcoming), we investigated the inverse question of evaluating the Chinese-toEnglish SMT model on word sense disambiguation performance, using standard WSD evaluation methodology and datasets from the Senseval-3 Chinese lexical sample task. We showed the accuracy of the SMT model to be significantly lower than that of all the dedicated WSD models considered, even after adding the lexical sample data to the training set for SMT to allow for a fair comparison. These results highlight the relative strength, and the potential hoped-for advantage of dedicated supervised WSD models. 388 3 The WSD system The WSD system used for the experiments is based on the model that achieved the best performance, by a large margin, on the Senseval-3 Chinese lexical sample task (Carpuat et al., 2004). 3.1 Classification model The model consists of an ensemble of four voting models combined by majority vote. The first voting model is a naive Bayes model, since Yarowsky and Florian (2002) found this model to be the most accurate classifier in a comparative study on a subset of Senseval-2 English lexical sample data. The second voting model is a maximum entropy model (Jaynes, 1978), since Klein and Manning (2002) found that this model yielded higher accuracy than naive Bayes in a subsequent comparison of WSD performance. (Note, however, that a different subset of either Senseval-1 or Senseval-2 English lexical sample data was used for their comparison.) The third voting model is a boosting model (Freund and Schapire, 1997), since has consistently turned in very competitive scores on related tasks such as named entity classification (Carreras et al., 2002) . Specifically, an AdaBoost.MH model was used (Schapire and Singer, 2000), which is a multiclass generalization of the original boosting algorithm, with boosting on top of decision stump classifiers (i.e., decision trees of depth one). The fourth voting model is a Kernel PCA-based model (Wu et al., 2004). Kernel Principal Component Analysis (KPCA) is a nonlinear kernel method for extracting nonlinear principal components from vector sets where, conceptually, the n-dimensional input vectors are nonlinearly mapped from their original space Rn to a high-dimensional feature space F where linear PCA is performed, yielding a transform by which the input vectors can be mapped nonlinearly to a new set of vectors (Sch¨olkopf et al., 1998). WSD can be performed by a Nearest Neighbor Classifier in the high-dimensional KPCA feature space. (Carpuat et al., 2004) showed that KPCAbased WSD models achieve close accuracies to the best individual WSD models, while having a significantly different bias. All these classifiers have the ability to handle large numbers of sparse features, many of which may be irrelevant. Moreover, the maximum entropy and boosting models are known to be well suited to handling features that are highly interdependent. The feature set used consists of position-sensitive, syntactic, and local collocational features, as described by Yarowsky and Florian (2002). 3.2 Lexical choice mapping model Ideally, we would like the WSD model to predict English translations given Chinese target words in context. Such a model requires Chinese training data annotated with English senses, but such data is not available. Instead, the WSD system was trained using the Senseval-3 Chinese lexical sample task data. 
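The majority-vote combination of Section 3.1 is straightforward; the skeleton below is our own illustration, with a hypothetical common predict() interface standing in for the four concrete classifiers.

from collections import Counter

class VotingWSD:
    # Combines the naive Bayes, maximum entropy, boosting and KPCA-based
    # models by majority vote over their predicted senses.
    def __init__(self, naive_bayes, maxent, boosting, kpca_knn):
        self.models = [naive_bayes, maxent, boosting, kpca_knn]

    def predict(self, features):
        votes = [m.predict(features) for m in self.models]
        counts = Counter(votes)
        top = max(counts.values())
        # Tie-breaking by the earliest vote is an arbitrary choice of this sketch.
        for v in votes:
            if counts[v] == top:
                return v

# e.g. with stand-in component classifiers:
class _Fixed:
    def __init__(self, sense): self.sense = sense
    def predict(self, features): return self.sense

ensemble = VotingWSD(_Fixed('s1'), _Fixed('s1'), _Fixed('s2'), _Fixed('s3'))
assert ensemble.predict({}) == 's1'

As noted above, these component models were trained on the Senseval-3 Chinese lexical sample data rather than on Chinese text annotated with English senses.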
(This is suboptimal, but reflects the difficulties that arise when considering a real translation task; we cannot assume that sense-annotated data will always be available for all language pairs.) The Chinese lexical sample task includes 20 target words. For each word, several senses are defined using the HowNet knowledge base. There are an average of 3.95 senses per target word type, ranging from 2 to 8. Only about 37 training instances per target word are available. For the purpose of Chinese to English translation, the WSD model should predict English translations instead of HowNet senses. Fortunately, HowNet provides English glosses. This allows us to map each HowNet sense candidate to a set of English translations, converting the monolingual Chinese WSD system into a translation lexical choice model. We further extended the mapping to include any significant translation choice considered by the SMT system but not in HowNet. 4 The SMT system To build a representative baseline statistical machine translation system, we restricted ourselves to making use of freely available tools, since the potential contribution of WSD should be easier to see against this baseline. Note that our focus here is not on the SMT model itself; our aim is to evaluate the impact of WSD on a real Chinese to English statistical machine translation task. 389 Table 1: Example of the translation candidates before and after mapping for the target word “4” (lu) HowNet Sense ID HowNet glosses HowNet glosses + improved translations 56520 distance distance 56521 sort sort 56524 Lu Lu 56525, 56526, 56527, 56528 path, road, route, way path, road, route, way, circuit, roads 56530, 56531, 56532 line, means, sequence line, means, sequence, lines 56533, 56534 district, region district, region 4.1 Alignment model The alignment model was trained with GIZA++ (Och and Ney, 2003), which implements the most typical IBM and HMM alignment models. Translation quality could be improved using more advanced hybrid phrasal or tree models, but this would interfere with the questions being investigated here. The alignment model used is IBM-4, as required by our decoder. The training scheme consists of IBM-1, HMM, IBM-3 and IBM-4, following (Och and Ney, 2003). The training corpus consists of about 1 million sentences from the United Nations Chinese-English parallel corpus from LDC. This corpus was automatically sentence-aligned, so the training data does not require as much manual annotation as for the WSD model. 4.2 Language model The English language model is a trigram model trained on the Gigaword newswire data and on the English side of the UN and Xinhua parallel corpora. The language model is also trained using a publicly available software, the CMU-Cambridge Statistical Language Modeling Toolkit (Clarkson and Rosenfeld, 1997). 4.3 Decoding The ISI ReWrite decoder (Germann, 2003), which implements an efficient greedy decoding algorithm, is used to translate the Chinese sentences, using the alignment model and language model previously described. Notice that very little contextual information is available to the SMT models. Lexical choice during decoding essentially depends on the translation probabilities learned for the target word, and on the English language model scores. 5 Experimental method 5.1 Test set selection We extracted the Chinese sentences from the NIST MTEval-04 test set that contain any of the 20 target words from the Senseval-3 Chinese lexical sample target set. 
For a couple of targets, no instances were available from the test set. The resulting test set contains a total of 175 sentences, which is smaller than typical MT evaluation test sets, but slightly larger than the one used for the Senseval Chinese lexical sample task. 5.2 Integrating the WSD system predictions with the SMT model There are numerous possible ways to integrate the WSD system predictions with the SMT model. We choose two different straightforward approaches, which will help analyze the effect of the different components of the SMT system, as we will see in Section 6.5. 5.2.1 Using WSD predictions for decoding In the first approach, we use the WSD sense predictions to constrain the set of English sense candidates considered by the decoder for each of the target words. Instead of allowing all the word translation candidates from the translation model, when we use the WSD predictions we override the translation model and force the decoder to choose the best translation from the predefined set of glosses that maps to the HowNet sense predicted by the WSD model. 390 Table 2: Translation quality with and without the WSD model Translation System BLEU score SMT 0.1310 SMT + WSD for postprocessing 0.1253 SMT + WSD for decoding 0.1239 SMT + WSD for decoding with improved translation candidates 0.1232 5.2.2 Using WSD predictions for postprocessing In the second approach, we use the WSD predictions to postprocess the output of the SMT system: in each output sentence, the translation of the target word chosen by the SMT model is directly replaced by the WSD prediction. When the WSD system predicts more than one candidate, a unique translation is randomly chosen among them. As discussed later, this approach can be used to analyze the effect of the language model on the output. It would also be interesting to use the gold standard or correct sense of the target words instead of the WSD model predictions in these experiments. This would give an upper-bound on performance and would quantify the effect of WSD errors. However, we do not have a corpus which contains both sense annotation and multiple reference translations: the MT evaluation corpus is not annotated with the correct senses of Senseval target words, and the Senseval corpus does not include English translations of the sentences. 6 Results 6.1 Even state-of-the-art WSD does not help BLEU score Table 2 summarizes the translation quality scores obtained with and without the WSD model. Using our WSD model to constrain the translation candidates given to the decoder hurts translation quality, as measured by the automated BLEU metric (Papineni et al., 2002). Note that we are evaluating on only difficult sentences containing the problematic target words from the lexical sample task, so BLEU scores can be expected to be on the low side. 6.2 WSD still does not help BLEU score with improved translation candidates One could argue that the translation candidates chosen by the WSD models do not help because they are only glosses obtained from the HowNet dictionary. They consist of the root form of words only, while the SMT model can learn many more translations for each target word, including inflected forms and synonyms. In order to avoid artificially penalizing the WSD system by limiting its translation candidates to the HowNet glosses, we expand the translation set using the bilexicon learned during translation model training. 
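Concretely, this override can be viewed as a lookup performed before decoding each sentence: the predicted HowNet sense is mapped to its gloss set and only those translations are exposed to the decoder. The sketch below is our own, and the object names and interfaces are hypothetical.

def constrain_candidates(target_word, sentence, wsd_model, sense_to_glosses):
    # Returns the translation candidates the decoder is allowed to consider
    # for this occurrence of target_word, overriding the translation model.
    sense = wsd_model.predict(target_word, sentence)   # predicted HowNet sense id
    return sense_to_glosses[target_word][sense]

# e.g., condensing the grouped sense ids of Table 1 for the target word lu:
lu_map = {'lu': {'56525': ['path', 'road', 'route', 'way'],
                 '56530': ['line', 'means', 'sequence']}}

class _FixedWSD:                                       # stand-in predictor for illustration
    def predict(self, target_word, sentence): return '56525'

assert constrain_candidates('lu', '...', _FixedWSD(), lu_map) == ['path', 'road', 'route', 'way']

The decoder then chooses among the returned candidates as it normally would, with the language model driving the final lexical choice.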
For each target word, we consider the English words that are given a high translation probability, and manually map each of these English words to the sense categories defined for the Senseval model. At decoding time, the set of translation candidates considered by the language model is therefore larger, and closer to that considered by the pure SMT system. The results in Table 2 show that the improved translation candidates do not help BLEU score. The translation quality obtained with SMT alone is still better than when the improved WSD Model is used. The simpler approach of using WSD predictions in postprocessing yields better BLEU score than the decoding approach, but still does not outperform the SMT model. 6.3 WSD helps translation quality for very few target words If we break down the test set and evaluate the effect of the WSD per target word, we find that for all but two of the target words WSD either hurts the BLEU score or does not help it, which shows that the decrease in BLEU is not only due to a few isolated target words for which the Senseval sense distinctions 391 are not helpful. 6.4 The “language model effect” Error analysis revealed some surprising effects. One particularly dismaying effect is that even in cases where the WSD model is able to predict a better target word translation than the SMT model, to use the better target word translation surprisingly often still leads to a lower BLEU score. The phrasal coherence property can help explain this surprising effect we observed. The translation chosen by the SMT model will tend to be more likely than the WSD prediction according to the language model; otherwise, it would also have been predicted by SMT. The translation with the higher language model probability influences the translation of its neighbors, thus potentially improving BLEU score, while the WSD prediction may not have been seen occurring within phrases often enough, thereby lowering BLEU score. For example, we observe that the WSD model sometimes correctly predicts “impact” as a better translation for “àâ” (chongji), where the SMT model selects “shock”. In these cases, some of the reference translations also use “impact”. However, even when the WSD model constrains the decoder to select “impact” rather than “shock”, the resulting sentence translation yields a lower BLEU score. This happens because the SMT model does not know how to use “impact” correctly (if it did, it would likely have chosen “impact” itself). Forcing the lexical choice “impact” simply causes the SMT model to generate phrases such as “against Japan for peace constitution impact” instead of “against Japan for peace constitution shocks”. This actually lowers BLEU score, because of the n-gram effects. 6.5 Using WSD predictions in postprocessing does not help BLEU score either In the postprocessing approach, decoding is done before knowing the WSD predictions, which eliminates the “language model effect”. Even in these conditions, the SMT model alone is still the best performing system. The postprocessing approach also outperforms the integrated decoding approach, which shows that the language model is not able to make use of the WSD predictions. 
One could expect that letting the Table 3: BLEU scores per target word: WSD helps for very few target words Target word SMT SMT + WSD ²º bawo 0.1482 0.1484 Ý bao 0.1891 0.1891 aî cailiao 0.0863 0.0863 àâ chongji 0.1396 0.1491 0 difang 0.1233 0.1083 I fengzi 0.1404 0.1402 ÙÄ huodong 0.1365 0.1465 lao 0.1153 0.1136 4 lu 0.1322 0.1208 åu qilai 0.1104 0.1082 qian 0.1948 0.1814 Bñ tuchu 0.0975 0.0989 ÏÄ yanjiu 0.1089 0.1089 äÄ zhengdong 0.1267 0.1251 zhou 0.0825 0.0808 decoder choose among the WSD translations also yields a better translation of the context. This is indeed the case, but for very few examples only: for instance the target word “0” (difang) is better used in the integrated decoding ouput “the place of local employment” , than in the postprocessing output “the place employment situation”. Instead, the majority of cases follow the pattern illustrated by the following example where the target word is “” (lao): the SMT system produces the best output (“the newly elected President will still face old problems”), the postprocessed output uses the fluent sentence with a different translation (“the newly elected President will still face outdated problems”), while the translation is not used correctly with the decoding approach (“the newly elected President will face problems still to be outdated”). 6.6 BLEU score bias The “language model effect” highlights one of the potential weaknesses of the BLEU score. BLEU penalizes for phrasal incoherence, which in the present study means that it can sometimes sacrifice adequacy for fluency. However, the characteristics of BLEU are by 392 no means solely responsible for the problems with WSD that we observed. To doublecheck that n-gram effects were not unduly impacting our study, we also evaluated using BLEU-1, which gave largely similar results as the standard BLEU-4 scores reported above. 7 Related work Most translation disambiguation tasks are defined similarly to the Senseval Multilingual lexical sample tasks. In Senseval-3, the English to Hindi translation disambigation task was defined identically to the English lexical sample task, except that the WSD models are expected to predict Hindi translations instead of WordNet senses. This differs from our approach which consists of producing the translation of complete sentences, and not only of a predefined set of target words. Brown et al. (1991) proposed a WSD algorithm to disambiguate English translations of French target words based on the single most informative context feature. In a pilot study, they found that using this WSD method in their French-English SMT system helped translation quality, manually evaluated using the number of acceptable translations. However, this study is limited to the unrealistic case of words that have exactly two senses in the other language. Most previous work has focused on the distinct problem of exploiting various bilingual resources (e.g., parallel or comparable corpora, or even MT systems) to help WSD. The goal is to achieve accurate WSD with minimum amounts of annotated data. Again, this differs from our objective which consists of using WSD to improve performance on a full machine translation task, and is measured in terms of translation quality. For instance, Ng et al. (2003) showed that it is possible to use word aligned parallel corpora to train accurate supervised WSD models. 
The objective is different; it is not possible for us to use this method to train our WSD model without undermining the question we aim to investigate: we would need to use the SMT model to word-align the parallel sentences, which could too strongly bias the predictions of the WSD model towards those of the SMT model, instead of combining predictive information from independent sources as we aim to study here. Other work includes Li and Li (2002) who propose a bilingual bootstrapping method to learn a translation disambiguation WSD model, and Diab (2004) who exploited large amounts of automatically generated noisy parallel data to learn WSD models in an unsupervised bootstrapping scheme. 8 Conclusion The empirical study presented here argues that we can expect that it will be quite difficult, at the least, to use standard WSD models to obtain significant improvements to statistical MT systems, even when supervised WSD models are used. This casts significant doubt on a commonly-held, but unproven, assumption to the contrary. We have presented empirically based analysis of the reasons for this surprising finding. We have seen that one major factor is that the statistical MT model is sufficiently accurate so that within the training domain, even the state-of-the-art dedicated WSD model is only able to improve on its lexical choice predictions in a relatively small proportion of cases. A second major factor is that even when the dedicated WSD model makes better predictions, current statistical MT models are unable to exploit this. Under this interpretation of our results, the dependence on the language model in current SMT architectures is excessive. One could of course argue that drastically increasing the amount of training data for the language model might overcome the problems from the language model effect. Given combinatorial problems, however, there is no way at present of telling whether the amount of data needed to achieve this is realistic, particularly for translation across many different domains. On the other hand, if the SMT architecture cannot make use of WSD predictions, even when they are in fact better than the SMT’s lexical choices, then perhaps some alternative model striking a different balance of adequacy and fluency is called for. Ultimately, after all, WSD is a method of compensating for sparse data. Thus it may be that the present inability of WSD models to help improve accuracy of SMT systems stems not from an inherent weakness of dedicated WSD models, but rather from limitations of present-day SMT architectures. 393 To further test this, our experiments could be tried on other statistical MT models. For example, the WSD model’s predictions could be employed in a Bracketing ITG translation model such as Wu (1996) or Zens et al. (2004), or alternatively they could be incorporated as features for reranking in a maximum-entropy SMT model (Och and Ney, 2002), instead of using them to constrain the sentence translation hypotheses as done here. However, the preceding discussion argues that it is doubtful that this would produce significantly different results, since the inherent problem from the “language model effect” would largely remain, causing sentence translations that include the WSD’s preferred lexical choices to be discounted. For similar reasons, we suspect our findings may also hold even for more sophisticated statistical MT models that rely heavily on n-gram language models. 
A more grammatically structured statistical MT model that less ngram oriented, such as the ITG based “grammatical channel” translation model (Wu and Wong, 1998), might make more effective use of the WSD model’s predictions. References Peter Brown, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer. Word-sense disambiguation using statistical methods. In Proceedings of 29th meeting of the Association for Computational Linguistics, pages 264–270, Berkeley, California, 1991. Marine Carpuat, Weifeng Su, and Dekai Wu. Augmenting ensemble classification for word sense disambiguation with a Kernel PCA model. In Proceedings of Senseval-3, Third International Workshop on Evaluating Word Sense Disambiguation Systems, Barcelona, July 2004. SIGLEX, Association for Computational Linguistics. Xavier Carreras, Llu´ıs M`arques, and Llu´ıs Padr´o. Named entity extraction using AdaBoost. In Dan Roth and Antal van den Bosch, editors, Proceedings of CoNLL-2002, pages 167– 170, Taipei, Taiwan, 2002. Philip Clarkson and Ronald Rosenfeld. Statistical language modeling using the CMU-Cambridge toolkit. In Proceedings of Eurospeech ’97, pages 2707–2710, Rhodes, Greece, 1997. Mona Diab. Relieving the data acquisition bottleneck in word sense disambiguation. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, 2004. Yoram Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Journal of Computer and System Sciences, 55(1), pages 119–139, 1997. Ulrich Germann. Greeedy decoding for statistical machine translation in almost linear time. In Proceedings of HLTNAACL-2003. Edmonton, AB, Canada, 2003. E.T. Jaynes. Where do we Stand on Maximum Entropy? MIT Press, Cambridge MA, 1978. Dan Klein and Christopher D. Manning. Conditional structure versus conditional estimation in NLP models. In Proceedings of EMNLP-2002, Conference on Empirical Methods in Natural Language Processing, pages 9–16, Philadelphia, July 2002. SIGDAT, Association for Computational Linguistics. Cong Li and Hang Li. Word translation disambiguation using bilingual bootstrapping. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 343–351, 2002. Hwee Tou Ng, Bin Wang, and Yee Seng Chan. Exploiting parallel texts for word sense disambiguation: An empirical study. In Proceedings of ACL-03, Sapporo, Japan, pages 455–462, 2003. Franz Och and Hermann Ney. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of ACL-02, Philadelphia, 2002. Franz Josef Och and Hermann Ney. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–52, 2003. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 2002. Robert E. Schapire and Yoram Singer. BoosTexter: A boostingbased system for text categorization. Machine Learning, 39(2):135–168, 2000. Bernhard Sch¨olkopf, Alexander Smola, and Klaus-Rober M¨uller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5), 1998. Dekai Wu and Hongsing Wong. Machine translation with a stochastic grammatical channel. In Proceedings of COLINGACL’98, Montreal,Canada, August 1998. Dekai Wu, Weifeng Su, and Marine Carpuat. A Kernel PCA method for superior word sense disambiguation. 
In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, Barcelona, July 2004. Dekai Wu. A polynomial-time algorithm for statistical machine translation. In Proceedings of 34th Annual Meeting of the Association for Computational Linguistics, Santa Cruz, California, June 1996. David Yarowsky and Radu Florian. Evaluating sense disambiguation across diverse parameter spaces. Natural Language Engineering, 8(4):293–310, 2002. Richard Zens, Hermann Ney, Taro Watanabe, and Eiichiro Sumita. Reordering constraints for phrase-based statistical machine translation. In Proceedings of COLING-2004, Geneva, Switzerland, August 2004.
Proceedings of the 43rd Annual Meeting of the ACL, pages 395–402, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Word Sense Disambiguation Using Label Propagation Based Semi-Supervised Learning Zheng-Yu Niu, Dong-Hong Ji Institute for Infocomm Research 21 Heng Mui Keng Terrace 119613 Singapore {zniu, dhji}@i2r.a-star.edu.sg Chew Lim Tan Department of Computer Science National University of Singapore 3 Science Drive 2 117543 Singapore [email protected] Abstract Shortage of manually sense-tagged data is an obstacle to supervised word sense disambiguation methods. In this paper we investigate a label propagation based semisupervised learning algorithm for WSD, which combines labeled and unlabeled data in learning process to fully realize a global consistency assumption: similar examples should have similar labels. Our experimental results on benchmark corpora indicate that it consistently outperforms SVM when only very few labeled examples are available, and its performance is also better than monolingual bootstrapping, and comparable to bilingual bootstrapping. 1 Introduction In this paper, we address the problem of word sense disambiguation (WSD), which is to assign an appropriate sense to an occurrence of a word in a given context. Many methods have been proposed to deal with this problem, including supervised learning algorithms (Leacock et al., 1998), semi-supervised learning algorithms (Yarowsky, 1995), and unsupervised learning algorithms (Sch¨utze, 1998). Supervised sense disambiguation has been very successful, but it requires a lot of manually sensetagged data and can not utilize raw unannotated data that can be cheaply acquired. Fully unsupervised methods do not need the definition of senses and manually sense-tagged data, but their sense clustering results can not be directly used in many NLP tasks since there is no sense tag for each instance in clusters. Considering both the availability of a large amount of unlabelled data and direct use of word senses, semi-supervised learning methods have received great attention recently. Semi-supervised methods for WSD are characterized in terms of exploiting unlabeled data in learning procedure with the requirement of predefined sense inventory for target words. They roughly fall into three categories according to what is used for supervision in learning process: (1) using external resources, e.g., thesaurus or lexicons, to disambiguate word senses or automatically generate sense-tagged corpus, (Lesk, 1986; Lin, 1997; McCarthy et al., 2004; Seo et al., 2004; Yarowsky, 1992), (2) exploiting the differences between mapping of words to senses in different languages by the use of bilingual corpora (e.g. parallel corpora or untagged monolingual corpora in two languages) (Brown et al., 1991; Dagan and Itai, 1994; Diab and Resnik, 2002; Li and Li, 2004; Ng et al., 2003), (3) bootstrapping sensetagged seed examples to overcome the bottleneck of acquisition of large sense-tagged data (Hearst, 1991; Karov and Edelman, 1998; Mihalcea, 2004; Park et al., 2000; Yarowsky, 1995). As a commonly used semi-supervised learning method for WSD, bootstrapping algorithm works by iteratively classifying unlabeled examples and adding confidently classified examples into labeled dataset using a model learned from augmented labeled dataset in previous iteration. It can be found that the affinity information among unlabeled examples is not fully explored in this bootstrapping process. 
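As a rough illustration of this bootstrapping (self-training) loop, the sketch below grows the labeled set from a handful of seeds. The 1-NN base learner, the per-class growth size b, and the distance-based selection rule are illustrative assumptions, not the exact procedure of any system cited here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def bootstrap(X_labeled, y_labeled, X_unlabeled, b=1, max_iter=100):
    """Simplified self-training: label the unlabeled pool with a 1-NN model,
    then move the b unlabeled points closest to the labeled set (per class)
    into the labeled data, and repeat."""
    X_l = np.asarray(X_labeled, dtype=float)
    y_l = np.asarray(y_labeled)
    X_u = np.asarray(X_unlabeled, dtype=float)
    for _ in range(max_iter):
        if len(X_u) == 0:
            break
        clf = KNeighborsClassifier(n_neighbors=1).fit(X_l, y_l)
        preds = clf.predict(X_u)
        dist, _ = clf.kneighbors(X_u, n_neighbors=1)   # distance to nearest labeled point
        dist = dist.ravel()
        chosen = []
        for c in np.unique(y_l):                       # b most confident points per class
            idx = np.where(preds == c)[0]
            chosen.extend(idx[np.argsort(dist[idx])][:b].tolist())
        if not chosen:
            break
        chosen = np.array(sorted(set(chosen)))
        X_l = np.vstack([X_l, X_u[chosen]])            # augment the labeled set
        y_l = np.concatenate([y_l, preds[chosen]])
        X_u = np.delete(X_u, chosen, axis=0)
    return KNeighborsClassifier(n_neighbors=1).fit(X_l, y_l)
```

Note that each newly labeled point is decided by looking only at the current labeled set, which is exactly the local-consistency behaviour discussed next.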
Bootstrapping is based on a local consistency assumption: examples close to labeled examples within same class will have same labels, which is also the assumption underlying many supervised learning algorithms, such as kNN. Recently a promising family of semi-supervised learning algorithms are introduced, which can effectively combine unlabeled data with labeled data 395 in learning process by exploiting cluster structure in data (Belkin and Niyogi, 2002; Blum et al., 2004; Chapelle et al., 1991; Szummer and Jaakkola, 2001; Zhu and Ghahramani, 2002; Zhu et al., 2003). Here we investigate a label propagation based semisupervised learning algorithm (LP algorithm) (Zhu and Ghahramani, 2002) for WSD, which works by representing labeled and unlabeled examples as vertices in a connected graph, then iteratively propagating label information from any vertex to nearby vertices through weighted edges, finally inferring the labels of unlabeled examples after this propagation process converges. Compared with bootstrapping, LP algorithm is based on a global consistency assumption. Intuitively, if there is at least one labeled example in each cluster that consists of similar examples, then unlabeled examples will have the same labels as labeled examples in the same cluster by propagating the label information of any example to nearby examples according to their proximity. This paper is organized as follows. First, we will formulate WSD problem in the context of semisupervised learning in section 2. Then in section 3 we will describe LP algorithm and discuss the difference between a supervised learning algorithm (SVM), bootstrapping algorithm and LP algorithm. Section 4 will provide experimental results of LP algorithm on widely used benchmark corpora. Finally we will conclude our work and suggest possible improvement in section 5. 2 Problem Setup Let X = {xi}n i=1 be a set of contexts of occurrences of an ambiguous word w, where xi represents the context of the i-th occurrence, and n is the total number of this word’s occurrences. Let S = {sj}c j=1 denote the sense tag set of w. The first l examples xg(1 ≤g ≤l) are labeled as yg (yg ∈S) and other u (l+u = n) examples xh(l+1 ≤h ≤n) are unlabeled. The goal is to predict the sense of w in context xh by the use of label information of xg and similarity information among examples in X. The cluster structure in X can be represented as a connected graph, where each vertex corresponds to an example, and the edge between any two examples xi and xj is weighted so that the closer the vertices in some distance measure, the larger the weight associated with this edge. The weights are defined as follows: Wij = exp(− d2 ij σ2 ) if i ̸= j and Wii = 0 (1 ≤i, j ≤n), where dij is the distance (ex. Euclidean distance) between xi and xj, and σ is used to control the weight Wij. 3 Semi-supervised Learning Method 3.1 Label Propagation Algorithm In LP algorithm (Zhu and Ghahramani, 2002), label information of any vertex in a graph is propagated to nearby vertices through weighted edges until a global stable stage is achieved. Larger edge weights allow labels to travel through easier. Thus the closer the examples, more likely they have similar labels (the global consistency assumption). In label propagation process, the soft label of each initial labeled example is clamped in each iteration to replenish label sources from these labeled data. Thus the labeled data act like sources to push out labels through unlabeled data. 
With this push from labeled examples, the class boundaries will be pushed through edges with large weights and settle in gaps along edges with small weights. If the data structure fits the classification goal, then the LP algorithm can use these unlabeled data to help learn the classification plane.

Let Y^0 ∈ N^{n×c} represent the initial soft labels attached to vertices, where Y^0_{ij} = 1 if y_i is s_j and 0 otherwise. Let Y^0_L be the top l rows of Y^0 and Y^0_U be the remaining u rows. Y^0_L is consistent with the labeling in the labeled data, and the initialization of Y^0_U can be arbitrary. Optimally, we expect the value of W_{ij} across different classes to be as small as possible and the value of W_{ij} within the same class to be as large as possible. This makes label propagation stay within the same class. In later experiments, we set σ as the average distance between labeled examples from different classes. Define the n×n probability transition matrix T by T_{ij} = P(j → i) = W_{ij} / Σ_{k=1}^{n} W_{kj}, where T_{ij} is the probability of jumping from example x_j to example x_i. Compute the row-normalized matrix T̄ by T̄_{ij} = T_{ij} / Σ_{k=1}^{n} T_{ik}. This normalization maintains the class probability interpretation of Y.

Figure 1: Classification result on two-moon pattern dataset. (a) Two-moon pattern dataset with two labeled points, (b) classification result by SVM, (c) labeling procedure of bootstrapping algorithm, (d) ideal classification.

Then the LP algorithm is defined as follows:
1. Initially set t = 0, where t is the iteration index;
2. Propagate the labels by Y^{t+1} = T̄ Y^t;
3. Clamp the labeled data by replacing the top l rows of Y^{t+1} with Y^0_L. Repeat from step 2 until Y^t converges;
4. Assign x_h (l+1 ≤ h ≤ n) the label s_ĵ, where ĵ = argmax_j Y_{hj}.

This algorithm has been shown to converge to a unique solution, Ŷ_U = lim_{t→∞} Y^t_U = (I − T̄_{uu})^{−1} T̄_{ul} Y^0_L (Zhu and Ghahramani, 2002). We can see that this solution can be obtained without iteration and that the initialization of Y^0_U is not important, since Y^0_U does not affect the estimation of Ŷ_U. I is the u×u identity matrix. T̄_{uu} and T̄_{ul} are acquired by splitting the matrix T̄ after the l-th row and the l-th column into four sub-matrices.

3.2 Comparison between SVM, Bootstrapping and LP

For WSD, SVM is one of the state-of-the-art supervised learning algorithms (Mihalcea et al., 2004), while bootstrapping is one of the state-of-the-art semi-supervised learning algorithms (Li and Li, 2004; Yarowsky, 1995). For comparing LP with SVM and bootstrapping, let us consider a dataset with the two-moon pattern shown in Figure 1(a). The upper moon consists of 9 points, while the lower moon consists of 13 points. There is only one labeled point in each moon, and the other 20 points are unlabeled. The distance metric is Euclidean distance.

Figure 2: Classification result of LP on two-moon pattern dataset. (a) Minimum spanning tree of this dataset. The convergence process of the LP algorithm with t varying from 1 to 100 is shown from (b) to (f).
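The following NumPy sketch illustrates the procedure just defined: the affinity matrix W, the normalized transition matrix, the clamped propagation of steps 1 to 4, and the equivalent closed-form solution. It is a minimal illustration with our own variable names, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def label_propagation(X_l, y_l, X_u, sigma, tol=1e-6, max_iter=1000):
    """Label propagation (Zhu and Ghahramani, 2002): labeled rows are clamped
    after every propagation step; sigma can be set to the average distance
    between labeled examples of different classes, as in the text."""
    X = np.vstack([X_l, X_u])
    l, n = len(X_l), len(X_l) + len(X_u)
    classes = np.unique(y_l)

    # Affinity matrix: W_ij = exp(-d_ij^2 / sigma^2), with W_ii = 0
    D = cdist(X, X, metric="euclidean")
    W = np.exp(-(D ** 2) / sigma ** 2)
    np.fill_diagonal(W, 0.0)

    # T_ij = W_ij / sum_k W_kj (column-normalized), then row-normalize to T_bar
    T = W / W.sum(axis=0, keepdims=True)
    T_bar = T / T.sum(axis=1, keepdims=True)

    # Initial soft labels: one-hot rows for labeled data, zeros elsewhere
    Y = np.zeros((n, len(classes)))
    Y[np.arange(l), np.searchsorted(classes, y_l)] = 1.0
    Y0_L = Y[:l].copy()

    # Steps 1-3: propagate and clamp until convergence
    for _ in range(max_iter):
        Y_next = T_bar @ Y
        Y_next[:l] = Y0_L
        if np.abs(Y_next - Y).max() < tol:
            Y = Y_next
            break
        Y = Y_next

    # Equivalent closed form: Y_U = (I - T_uu)^(-1) T_ul Y0_L
    Y_u = np.linalg.solve(np.eye(n - l) - T_bar[l:, l:], T_bar[l:, :l] @ Y0_L)

    # Step 4: assign each unlabeled point the argmax label
    return classes[Y_u.argmax(axis=1)]
```

The iterative loop and the closed form yield the same Ŷ_U, which is why the initialization of Y^0_U does not matter.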
We can see that the points in one moon should be more similar to each other than the points across the moons. Figure 1(b) shows the classification result of SVM. Vertical line denotes classification hyperplane, which has the maximum separating margin with respect to the labeled points in two classes. We can see that SVM does not work well when labeled data can not reveal the structure (two moon pattern) in each class. The reason is that the classification hyperplane was learned only from labeled data. In other words, the coherent structure (two-moon pattern) in unlabeled data was not explored when inferring class boundary. Figure 1(c) shows bootstrapping procedure using kNN (k=1) as base classifier with user-specified parameter b = 1 (the number of added examples from unlabeled data into classified data for each class in each iteration). Termination condition is that the distance between labeled and unlabeled points is more than inter-class distance (the distance between A0 and B0). Each arrow in Figure 1(c) represents one classification operation in each iteration for each class. After eight iterations, A1 ∼A8 were tagged 397 as +1, and B1 ∼B8 were tagged as −1, while A9 ∼A10 and B9 ∼B10 were still untagged. Then at the ninth iteration, A9 was tagged as +1 since the label of A9 was determined only by labeled points in kNN model: A9 is closer to any point in {A0 ∼A8} than to any point in {B0 ∼B8}, regardless of the intrinsic structure in data: A9 ∼A10 and B9 ∼B10 are closer to points in lower moon than to points in upper moon. In other words, bootstrapping method uses the unlabeled data under a local consistency based strategy. This is the reason that two points A9 and A10 are misclassified (shown in Figure 1(c)). From above analysis we see that both SVM and bootstrapping are based on a local consistency assumption. Finally we ran LP on a connected graph-minimum spanning tree generated for this dataset, shown in Figure 2(a). A, B, C represent three points, and the edge A −B connects the two moons. Figure 2(b)- 2(f) shows the convergence process of LP with t increasing from 1 to 100. When t = 1, label information of labeled data was pushed to only nearby points. After seven iteration steps (t = 7), point B in upper moon was misclassified as −1 since it first received label information from point A through the edge connecting two moons. After another three iteration steps (t=10), this misclassified point was retagged as +1. The reason of this self-correcting behavior is that with the push of label information from nearby points, the value of YB,+1 became higher than YB,−1. In other words, the weight of edge B −C is larger than that of edge B −A, which makes it easier for +1 label of point C to travel to point B. Finally, when t ≥12 LP converged to a fixed point, which achieved the ideal classification result. 4 Experiments and Results 4.1 Experiment Design For empirical comparison with SVM and bootstrapping, we evaluated LP on widely used benchmark corpora - “interest”, “line” 1 and the data in English lexical sample task of SENSEVAL-3 (including all 57 English words ) 2. 1Available at http://www.d.umn.edu/∼tpederse/data.html 2Available at http://www.senseval.org/senseval3 Table 1: The upper two tables summarize accuracies (averaged over 20 trials) and paired t-test results of SVM and LP on SENSEVAL-3 corpus with percentage of training set increasing from 1% to 100%. 
The lower table lists the official result of baseline (using most frequent sense heuristics) and top 3 systems in ELS task of SENSEVAL-3. Percentage SVM LPcosine LPJS 1% 24.9±2.7% 27.5±1.1% 28.1±1.1% 10% 53.4±1.1% 54.4±1.2% 54.9±1.1% 25% 62.3±0.7% 62.3±0.7% 63.3±0.9% 50% 66.6±0.5% 65.7±0.5% 66.9±0.6% 75% 68.7±0.4% 67.3±0.4% 68.7±0.3% 100% 69.7% 68.4% 70.3% Percentage SVM vs. LPcosine SVM vs. LPJS p-value Sign. p-value Sign. 1% 8.7e-004 ≪ 8.5e-005 ≪ 10% 1.9e-006 ≪ 1.0e-008 ≪ 25% 9.2e-001 ∼ 3.0e-006 ≪ 50% 1.9e-006 ≫ 6.2e-002 ∼ 75% 7.4e-013 ≫ 7.1e-001 ∼ 100% Systems Baseline htsa3 IRST-Kernels nusels Accuracy 55.2% 72.9% 72.6% 72.4% We used three types of features to capture contextual information: part-of-speech of neighboring words with position information, unordered single words in topical context, and local collocations (as same as the feature set used in (Lee and Ng, 2002) except that we did not use syntactic relations). For SVM, we did not perform feature selection on SENSEVAL-3 data since feature selection deteriorates its performance (Lee and Ng, 2002). When running LP on the three datasets, we removed the features with occurrence frequency (counted in both training set and test set) less than 3 times. We investigated two distance measures for LP: cosine similarity and Jensen-Shannon (JS) divergence (Lin, 1991). For the three datasets, we constructed connected graphs following (Zhu et al., 2003): two instances u, v will be connected by an edge if u is among v’s k nearest neighbors, or if v is among u’s k nearest neighbors as measured by cosine or JS distance measure. For “interest” and “line” corpora, k is 10 (following (Zhu et al., 2003)), while for SENSEVAL-3 data, k is 5 since the size of dataset for each word in SENSEVAL-3 is much less than that of “interest” and “line” datasets. 398 4.2 Experiment 1: LP vs. SVM In this experiment, we evaluated LP and SVM 3 on the data of English lexical sample task in SENSEVAL-3. We used l examples from training set as labeled data, and the remaining training examples and all the test examples as unlabeled data. For each labeled set size l, we performed 20 trials. In each trial, we randomly sampled l labeled examples for each word from training set. If any sense was absent from the sampled labeled set, we redid the sampling. We conducted experiments with different values of l, including 1% × Nw,train, 10% × Nw,train, 25%×Nw,train, 50%×Nw,train, 75%× Nw,train, 100% × Nw,train (Nw,train is the number of examples in training set of word w). SVM and LP were evaluated using accuracy 4 (fine-grained score) on test set of SENSEVAL-3. We conducted paired t-test on the accuracy figures for each value of l. Paired t-test is not run when percentage= 100%, since there is only one paired accuracy figure. Paired t-test is usually used to estimate the difference in means between normal populations based on a set of random paired observations. {≪, ≫}, {<, >}, and ∼correspond to pvalue ≤0.01, (0.01, 0.05], and > 0.05 respectively. ≪(or ≫) means that the performance of LP is significantly better (or significantly worse) than SVM. < (or >) means that the performance of LP is better (or worse) than SVM. ∼means that the performance of LP is almost as same as SVM. Table 1 reports the average accuracies and paired t-test results of SVM and LP with different sizes of labled data. It also lists the official results of baseline method and top 3 systems in ELS task of SENSEVAL-3. 
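For concreteness, here is a rough sketch of the graph construction described in Section 4.1: symmetrized k-nearest-neighbor connectivity under either cosine or Jensen-Shannon distance. Treating the feature vectors as nonnegative count vectors that can be normalized into distributions for the JS divergence is our simplifying assumption.

```python
import numpy as np

def cosine_distance(p, q, eps=1e-12):
    return 1.0 - np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q) + eps)

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two nonnegative feature vectors,
    normalized here so that they can be read as distributions."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(np.where(a > 0, a * np.log((a + eps) / (b + eps)), 0.0))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def knn_graph(X, k=5, metric=js_divergence):
    """Symmetrized k-NN connectivity: u and v are connected if either one
    is among the other's k nearest neighbors under the chosen distance."""
    n = len(X)
    D = np.array([[metric(X[i], X[j]) if i != j else np.inf
                   for j in range(n)] for i in range(n)])
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        A[i, np.argsort(D[i])[:k]] = True
    return A | A.T
```

With k = 10 for the “interest” and “line” corpora and k = 5 for SENSEVAL-3, as stated above, the resulting adjacency marks which pairs of instances are treated as neighbors in the graph.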
From Table 1, we see that with a small labeled dataset (percentage of labeled data ≤ 10%), LP performs significantly better than SVM. When the percentage of labeled data increases from 50% to 75%, the performance of LPJS and SVM becomes almost the same, while LPcosine performs significantly worse than SVM.

3 We used linear SVMlight, available at http://svmlight.joachims.org/.
4 If there are multiple sense tags for an instance in the training set or test set, then only the first tag is considered as the correct answer. Furthermore, if the answer of an instance in the test set is “U”, then this instance is removed from the test set.

Table 2: Accuracies from (Li and Li, 2004) and average accuracies of LP with c × b labeled examples on “interest” and “line” corpora. Major is a baseline method in which they always choose the most frequent sense. MB-D denotes monolingual bootstrapping with decision list as base classifier, MB-B represents monolingual bootstrapping with ensemble of Naive Bayes as base classifier, and BB is bilingual bootstrapping with ensemble of Naive Bayes as base classifier.

  Accuracies from (Li and Li, 2004)
  Ambiguous words   Major   MB-D    MB-B    BB
  interest          54.6%   54.7%   69.3%   75.5%
  line              53.5%   55.6%   54.1%   62.7%

  Our results
  Ambiguous words   #labeled examples   LPcosine     LPJS
  interest          4×15=60             80.2±2.0%    79.8±2.0%
  line              6×15=90             60.3±4.5%    59.4±3.9%

4.3 Experiment 2: LP vs. Bootstrapping

Li and Li (2004) used the “interest” and “line” corpora as test data. For the word “interest”, they used its four major senses. For comparison with their results, we took the reduced “interest” corpus (constructed by retaining the four major senses) and the complete “line” corpus as evaluation data. In their algorithm, c is the number of senses of the ambiguous word, and b (b = 15) is the number of examples added into the classified data for each class in each iteration of bootstrapping. c × b can be considered as the size of the initial labeled data in their bootstrapping algorithm. We ran LP with 20 trials on the reduced “interest” corpus and the complete “line” corpus. In each trial, we randomly sampled b labeled examples for each sense of “interest” or “line” as labeled data. The rest served as both unlabeled data and test data. Table 2 summarizes the average accuracies of LP on the two corpora. It also lists the accuracies of the monolingual bootstrapping algorithm (MB) and the bilingual bootstrapping algorithm (BB) on the “interest” and “line” corpora. We can see that LP performs much better than MB-D and MB-B on both the “interest” and “line” corpora, while the performance of LP is comparable to BB on these two corpora.

4.4 An Example: Word “use”

For investigating the reason for LP outperforming SVM and monolingual bootstrapping, we used the data for the word “use” in the English lexical sample task of SENSEVAL-3 as an example (26 examples in the training set and 14 examples in the test set in total). For data

Figure 3: Comparison of sense disambiguation results between SVM, monolingual bootstrapping and LP on word “use”.
(a) only one labeled example for each sense of word “use” as training data before sense disambiguation (◦and ⊲denote the unlabeled examples in SENSEVAL-3 training set and test set respectively, and other five symbols (+, ×, △, ⋄, and ∇) represent the labeled examples with different sense tags sampled from SENSEVAL-3 training set.), (b) ground-truth result, (c) classification result on SENSEVAL-3 test set by SVM (accuracy= 3 14 = 21.4%), (d) classified data after bootstrapping, (e) classification result on SENSEVAL-3 training set and test set by 1NN (accuracy= 6 14 = 42.9% ), (f) classification result on SENSEVAL-3 training set and test set by LP (accuracy= 10 14 = 71.4% ). visualization, we conducted unsupervised nonlinear dimensionality reduction5 on these 40 feature vectors with 210 dimensions. Figure 3 (a) shows the dimensionality reduced vectors in two-dimensional space. We randomly sampled only one labeled example for each sense of word “use” as labeled data. The remaining data in training set and test set served as unlabeled data for bootstrapping and LP. All of these three algorithms are evaluated using accuracy on test set. From Figure 3 (c) we can see that SVM misclassi5We used Isomap to perform dimensionality reduction by computing two-dimensional, 39-nearest-neighbor-preserving embedding of 210-dimensional input. Isomap is available at http://isomap.stanford.edu/. fied many examples from class + into class × since using only features occurring in training set can not reveal the intrinsic structure in full dataset. For comparison, we implemented monolingual bootstrapping with kNN (k=1) as base classifier. The parameter b is set as 1. Only b unlabeled examples nearest to labeled examples and with the distance less than dinter−class (the minimum distance between labeled examples with different sense tags) will be added into classified data in each iteration till no such unlabeled examples can be found. Firstly we ran this monolingual bootstrapping on this dataset to augment initial labeled data. The resulting classified data is shown in Figure 3 (d). Then a 1NN model was learned on this classified data and we used this model to perform classification on the remaining unlabeled data. Figure 3 (e) reports the final classification result by this 1NN model. We can see that bootstrapping does not perform well since it is susceptible to small noise in dataset. For example, in Figure 3 (d), the unlabeled example B 6 happened to be closest to labeled example A, then 1NN model tagged example B with label ⋄. But the correct label of B should be + as shown in Figure 3 (b). This error caused misclassification of other unlabeled examples that should have label +. In LP, the label information of example C can travel to B through unlabeled data. Then example A will compete with C and other unlabeled examples around B when determining the label of B. In other words, the labels of unlabeled examples are determined not only by nearby labeled examples, but also by nearby unlabeled examples. Using this classification strategy achieves better performance than the local consistency based strategy adopted by SVM and bootstrapping. 4.5 Experiment 3: LPcosine vs. LPJS Table 3 summarizes the performance comparison between LPcosine and LPJS on three datasets. We can see that on SENSEVAL-3 corpus, LPJS per6In the two-dimensional space, example B is not the closest example to A. 
The reason is that: (1) A is not close to most of nearby examples around B, and B is not close to most of nearby examples around A; (2) we used Isomap to maximally preserve the neighborhood information between any example and all other examples, which caused the loss of neighborhood information between a few example pairs for obtaining a globally optimal solution. 400 Table 3: Performance comparison between LPcosine and LPJS and the results of three model selection criteria are reported in following two tables. In the lower table, < (or >) means that the average value of function H(Qcosine) is lower (or higher) than H(QJS), and it will result in selecting cosine (or JS) as distance measure. Qcosine (or QJS) represents a matrix using cosine similarity (or JS divergence). √and × denote correct and wrong prediction results respectively, while ◦means that any prediction is acceptable. LPcosine vs. LPJS Data p-value Significance SENSEVAL-3 (1%) 1.1e-003 ≪ SENSEVAL-3 (10%) 8.9e-005 ≪ SENSEVAL-3 (25%) 9.0e-009 ≪ SENSEVAL-3 (50%) 3.2e-010 ≪ SENSEVAL-3 (75%) 7.7e-013 ≪ SENSEVAL-3 (100%) interest 3.3e-002 > line 8.1e-002 ∼ H(D) H(W) H(YU) Data cos. vs. JS cos. vs. JS cos. vs. JS SENSEVAL-3 (1%) > (√) > (√) < (×) SENSEVAL-3 (10%) < (×) > (√) < (×) SENSEVAL-3 (25%) < (×) > (√) < (×) SENSEVAL-3 (50%) > (√) > (√) > (√) SENSEVAL-3 (75%) > (√) > (√) > (√) SENSEVAL-3 (100%) < (◦) > (◦) < (◦) interest < (√) > (×) < (√) line > (◦) > (◦) > (◦) forms significantly better than LPcosine, but their performance is almost comparable on “interest” and “line” corpora. This observation motivates us to automatically select a distance measure that will boost the performance of LP on a given dataset. Cross-validation on labeled data is not feasible due to the setting of semi-supervised learning (l ≪u). In (Zhu and Ghahramani, 2002; Zhu et al., 2003), they suggested a label entropy criterion H(YU) for model selection, where Y is the label matrix learned by their semi-supervised algorithms. The intuition behind their method is that good parameters should result in confident labeling. Entropy on matrix W (H(W)) is a commonly used measure for unsupervised feature selection (Dash and Liu, 2000), which can be considered here. Another possible criterion for model selection is to measure the entropy of c × c inter-class distance matrix D calculated on labeled data (denoted as H(D)), where Di,j represents the average distance between the ith class and the j-th class. We will investigate three criteria, H(D), H(W) and H(YU), for model selection. The distance measure can be automatically selected by minimizing the average value of function H(D), H(W) or H(YU) over 20 trials. Let Q be the M × N matrix. Function H(Q) can measure the entropy of matrix Q, which is defined as (Dash and Liu, 2000): Si,j = exp (−α ∗Qi,j), (1) H(Q) = − M X i=1 N X j=1 (Si,j log Si,j + (1 −Si,j) log (1 −Si,j)), (2) where α is positive constant. The possible value of α is −ln 0.5 ¯I , where ¯I = 1 MN P i,j Qi,j. S is introduced for normalization of matrix Q. For SENSEVAL3 data, we calculated an overall average score of H(Q) by P w Nw,test P w Nw,test H(Qw). Nw,test is the number of examples in test set of word w. H(D), H(W) and H(YU) can be obtained by replacing Q with D, W and YU respectively. Table 3 reports the automatic prediction results of these three criteria. From Table 3, we can see that using H(W) can consistently select the optimal distance measure when the performance gap between LPcosine and LPJS is very large (denoted by ≪or ≫). 
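The criterion of equations (1) and (2) above is straightforward to compute; the sketch below is a direct transcription of those equations (the clipping guard against log 0 and the function names are ours).

```python
import numpy as np

def entropy_criterion(Q, eps=1e-12):
    """H(Q) of Dash and Liu (2000) as used here for model selection:
    S_ij = exp(-alpha * Q_ij) with alpha = -ln(0.5) / mean(Q), and
    H(Q) = -sum_ij [ S_ij log S_ij + (1 - S_ij) log (1 - S_ij) ]."""
    Q = np.asarray(Q, dtype=float)
    alpha = -np.log(0.5) / Q.mean()
    S = np.clip(np.exp(-alpha * Q), eps, 1.0 - eps)   # guard the log terms
    return float(-np.sum(S * np.log(S) + (1.0 - S) * np.log(1.0 - S)))

# The distance measure is then chosen by the lower average criterion value
# over trials, e.g. comparing entropy_criterion(W_cosine) against
# entropy_criterion(W_js); the same function applies to D and Y_U.
```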
But H(D) and H(YU) fail to find the optimal distance measure when only very few labeled examples are available (percentage of labeled data ≤10%). H(W) measures the separability of matrix W. Higher value of H(W) means that distance measure decreases the separability of examples in full dataset. Then the boundary between clusters is obscured, which makes it difficult for LP to locate this boundary. Therefore higher value of H(W) results in worse performance of LP. When labeled dataset is small, the distances between classes can not be reliably estimated, which results in unreliable indication of the separability of examples in full dataset. This is the reason that H(D) performs poorly on SENSEVAL-3 corpus when the percentage of labeled data is less than 25%. For H(YU), small labeled dataset can not reveal intrinsic structure in data, which may bias the estimation of YU. Then labeling confidence (H(YU)) can not properly indicate the performance of LP. This may interpret the poor performance of H(YU) on SENSEVAL-3 data when percentage ≤25%. 401 5 Conclusion In this paper we have investigated a label propagation based semi-supervised learning algorithm for WSD, which fully realizes a global consistency assumption: similar examples should have similar labels. In learning process, the labels of unlabeled examples are determined not only by nearby labeled examples, but also by nearby unlabeled examples. Compared with semi-supervised WSD methods in the first and second categories, our corpus based method does not need external resources, including WordNet, bilingual lexicon, aligned parallel corpora. Our analysis and experimental results demonstrate the potential of this cluster assumption based algorithm. It achieves better performance than SVM when only very few labeled examples are available, and its performance is also better than monolingual bootstrapping and comparable to bilingual bootstrapping. Finally we suggest an entropy based method to automatically identify a distance measure that can boost the performance of LP algorithm on a given dataset. It has been shown that one sense per discourse property can improve the performance of bootstrapping algorithm (Li and Li, 2004; Yarowsky, 1995). This heuristics can be integrated into LP algorithm by setting weight Wi,j = 1 if the i-th and j-th instances are in the same discourse. In the future we may extend the evaluation of LP algorithm and related cluster assumption based algorithms using more benchmark data for WSD. Another direction is to use feature clustering technique to deal with data sparseness and noisy feature problem. Acknowledgements We would like to thank anonymous reviewers for their helpful comments. Z.Y. Niu is supported by A*STAR Graduate Scholarship. References Belkin, M., & Niyogi, P.. 2002. Using Manifold Structure for Partially Labeled Classification. NIPS 15. Blum, A., Lafferty, J., Rwebangira, R., & Reddy, R.. 2004. Semi-Supervised Learning Using Randomized Mincuts. ICML-2004. Brown P., Stephen, D.P., Vincent, D.P., & Robert, Mercer.. 1991. Word Sense Disambiguation Using Statistical Methods. ACL-1991. Chapelle, O., Weston, J., & Sch¨olkopf, B. 2002. Cluster Kernels for Semisupervised Learning. NIPS 15. Dagan, I. & Itai A.. 1994. Word Sense Disambiguation Using A Second Language Monolingual Corpus. Computational Linguistics, Vol. 20(4), pp. 563596. Dash, M., & Liu, H.. 2000. Feature Selection for Clustering. PAKDD(pp. 110– 121). Diab, M., & Resnik. P.. 2002. 
An Unsupervised Method for Word Sense Tagging Using Parallel Corpora. ACL-2002(pp. 255–262). Hearst, M.. 1991. Noun Homograph Disambiguation using Local Context in Large Text Corpora. Proceedings of the 7th Annual Conference of the UW Centre for the New OED and Text Research: Using Corpora, 24:1, 1–41. Karov, Y. & Edelman, S.. 1998. Similarity-Based Word Sense Disambiguation. Computational Linguistics, 24(1): 41-59. Leacock, C., Miller, G.A. & Chodorow, M.. 1998. Using Corpus Statistics and WordNet Relations for Sense Identification. Computational Linguistics, 24:1, 147–165. Lee, Y.K. & Ng, H.T.. 2002. An Empirical Evaluation of Knowledge Sources and Learning Algorithms for Word Sense Disambiguation. EMNLP-2002, (pp. 41-48). Lesk M.. 1986. Automated Word Sense Disambiguation Using Machine Readable Dictionaries: How to Tell a Pine Cone from an Ice Cream Cone. Proceedings of the ACM SIGDOC Conference. Li, H. & Li, C.. 2004. Word Translation Disambiguation Using Bilingual Bootstrapping. Computational Linguistics, 30(1), 1-22. Lin, D.K.. 1997. Using Syntactic Dependency as Local Context to Resolve Word Sense Ambiguity. ACL-1997. Lin, J. 1991. Divergence Measures Based on the Shannon Entropy. IEEE Transactions on Information Theory, 37:1, 145–150. McCarthy, D., Koeling, R., Weeds, J., & Carroll, J.. 2004. Finding Predominant Word Senses in Untagged Text. ACL-2004. Mihalcea R.. 2004. Co-training and Self-training for Word Sense Disambiguation. CoNLL-2004. Mihalcea R., Chklovski, T., & Kilgariff, A.. 2004. The SENSEVAL-3 English Lexical Sample Task. SENSEVAL-2004. Ng, H.T., Wang, B., & Chan, Y.S.. 2003. Exploiting Parallel Texts for Word Sense Disambiguation: An Empirical Study. ACL-2003, pp. 455-462. Park, S.B., Zhang, B.T., & Kim, Y.T.. 2000. Word Sense Disambiguation by Learning from Unlabeled Data. ACL-2000. Sch¨utze, H.. 1998. Automatic Word Sense Discrimination. Computational Linguistics, 24:1, 97–123. Seo, H.C., Chung, H.J., Rim, H.C., Myaeng. S.H., & Kim, S.H.. 2004. Unsupervised Word Sense Disambiguation Using WordNet Relatives. Computer, Speech and Language, 18:3, 253–273. Szummer, M., & Jaakkola, T.. 2001. Partially Labeled Classification with Markov Random Walks. NIPS 14. Yarowsky, D.. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. ACL-1995, pp. 189-196. Yarowsky, D.. 1992. Word Sense Disambiguation Using Statistical Models of Roget’s Categories Trained on Large Corpora. COLING-1992, pp. 454-460. Zhu, X. & Ghahramani, Z.. 2002. Learning from Labeled and Unlabeled Data with Label Propagation. CMU CALD tech report CMU-CALD-02-107. Zhu, X., Ghahramani, Z., & Lafferty, J.. 2003. Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions. ICML-2003. 402 | 2005 | 49 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 34–41, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Learning Semantic Classes for Word Sense Disambiguation Upali S. Kohomban Wee Sun Lee Department of Computer Science National University of Singapore Singapore, 117584 {upalisat,leews}@comp.nus.edu.sg Abstract Word Sense Disambiguation suffers from a long-standing problem of knowledge acquisition bottleneck. Although state of the art supervised systems report good accuracies for selected words, they have not been shown to be promising in terms of scalability. In this paper, we present an approach for learning coarser and more general set of concepts from a sense tagged corpus, in order to alleviate the knowledge acquisition bottleneck. We show that these general concepts can be transformed to fine grained word senses using simple heuristics, and applying the technique for recent SENSEVAL data sets shows that our approach can yield state of the art performance. 1 Introduction Word Sense Disambiguation (WSD) is the task of determining the meaning of a word in a given context. This task has a long history in natural language processing, and is considered to be an intermediate task, success of which is considered to be important for other tasks such as Machine Translation, Language Understanding, and Information Retrieval. Despite a long history of attempts to solve WSD problem by empirical means, there is not any clear consensus on what it takes to build a high performance implementation of WSD. Algorithms based on Supervised Learning, in general, show better performance compared to unsupervised systems. But they suffer from a serious drawback: the difficulty of acquiring considerable amounts of training data, also known as knowledge acquisition bottleneck. In the typical setting, supervised learning needs training data created for each and every polysemous word; Ng (1997) estimates an effort of 16 personyears for acquiring training data for 3,200 significant words in English. Mihalcea and Chklovski (2003) provide a similar estimate of an 80 person-year effort for creating manually labelled training data for about 20,000 words in a common English dictionary. Two basic approaches have been tried as solutions to the lack of training data, namely unsupervised systems and semi-supervised bootstrapping techniques. Unsupervised systems mostly work on knowledge-based techniques, exploiting sense knowledge encoded in machine-readable dictionary entries, taxonomical hierarchies such as WORDNET (Fellbaum, 1998), and so on. Most of the bootstrapping techniques start from a few ‘seed’ labelled examples, classify some unlabelled instances using this knowledge, and iteratively expand their knowledge using information available within newly labelled data. Some others employ hierarchical relatives such as hypernyms and hyponyms. In this work, we present another practical alternative: we reduce the WSD problem to a one of finding generic semantic class of a given word instance. We show that learning such classes can help relieve the problem of knowledge acquisition bottleneck. 1.1 Learning senses as concepts As the semantic classes we propose learning, we use WORDNET lexicographer file identifiers corre34 sponding to each fine-grained sense. By learning these generic classes, we show that we can reuse training data, without having to rely on specific training data for each word. 
This can be done because the semantic classes are common to words unlike senses; for learning the properties of a given class, we can use the data from various words. For instance, the noun crane falls into two semantic classes ANIMAL and ARTEFACT. We can expect the words such as pelican and eagle (in the bird sense) to have similar usage patterns to those of ANIMAL sense of crane, and to provide common training examples for that particular class. For learning these classes, we can make use of any training example labelled with WORDNET senses for supervised WSD, as we describe in section 3.1. Once the classification is done for an instance, the resulting semantic classes can be transformed into finer grained senses using some heuristical mapping, as we show in the next sub section. This would not guarantee a perfect conversion because such a mapping can miss some finer senses, but as we show in what follows, this problem in itself does not prevent us from attaining good performance in a practical WSD setting. 1.2 Information loss in coarse grained senses As an empirical verification of the hypothesis that we can still build effective fine-grained sense disambiguators despite the loss of information, we analyzed the performance of a hypothetical coarse grained classifier that can perform at 100% accuracy. As the general set of classes, we used WORDNET unique beginners, of which there are 25 for nouns, and 15 for verbs. To simulate this classifier on SENSEVAL English all-words tasks’ data (Edmonds and Cotton, 2001; Snyder and Palmer, 2004), we mapped the finegrained senses from official answer keys to their respective beginners. There is an information loss in this mapping, because each unique beginner can typically include more than one sense. To see how this ‘classifier’ fares in a fine-grained task, we can map the ‘answers’ back to WORDNET fine-grained senses by picking up the sense with the lowest sense number that falls within each unique beginner. In principal, this is the most likely sense within the class, because WORDNET senses are said to be
Figure 1: Performance of a hypothetical coarsegrained classifier, output mapped to fine-grained senses, on SENSEVAL English all-words tasks. ordered in descending order of frequency. Since this sense is not necessarily the same as the original sense of the instance, the accuracy of the finegrained answers will be below 100%. Figure 1 shows the performance of this transformed fine-grained classifier (CG) for nouns and verbs with SENSEVAL-2 and 3 English all words task data (marked as S2 and S3 respectively), along with the baseline WORDNET first sense (BL), and the best-performer classifiers at each SENSEVAL excercise (CL), SMUaw (Mihalcea, 2002) and GAMBL-AW (Decadt et al., 2004) respectively. There is a considerable difference in terms of improvement over baseline, between the state-of-theart systems and the hypothetical optimal coarsegrained system. This shows us that there is an improvement in performance that we can attain over the state-of-the-art, if we can create a classifier for even a very coarse level of senses, with sufficiently high accuracy. We believe that the chances for such a high accuracy in a coarse-grained sense classifier is better, for several reasons: • previously reported good performance for coarse grained systems (Yarowsky, 1992) • better availability of data, due to the possibility of reusing data created for different words. For instance, labelled data for the noun ‘crane’ is not found in SEMCOR corpus at all, but there are more than 1000 sample instances for the concept ANIMAL, and more than 9000 for ARTEFACT. 35 • higher inter-annotator agreement levels and lower corpus/genre dependencies in training/testing data due to coarser senses. 1.3 Overall approach Basically, we assume that we can learn the ‘concepts’, in terms of WORDNET unique beginners, using a set of data labelled with these concepts, regardless of the actual word that is labelled. Hence, we can use a generic data set that is large enough, where various words provide training examples for these concepts, instead of relying upon data from the examples of the same word that is being classified. Unfortunately, simply labelling each instance with its semantic class and then using standard supervised learning algorithms did not work well. This is probably because the effectiveness of the feature patterns often depend on the actual word being disambiguated and not just its semantic class. For example, the phrase ‘run the newspaper’ effectively indicates that ‘newspaper’ belongs to the semantic class GROUP. But ‘run the tape’ indicates that ‘tape’ belongs to the semantic class ARTEFACT. The collocation ‘run the’ is effective for indicating the GROUP sense only for ‘newspaper’ and closely related words such as ‘department’ or ‘school’. In this experiment, we use a k-nearest neighbor classifier. In order to allow training examples of different words from the same semantic class to effectively provide information for each other, we modify the distance between instances in a way that makes the distance between instances of similar words smaller. This is described in Section 3. The rest of the paper is organized as follows: In section 2, we discuss several related work. We proceed on to a detailed description of our system in section 3, and discuss the empirical results in section 4, showing that our representation can yield state of the art performance. 2 Related Work Using generic classes as word senses has been done several times in WSD, in various contexts. 
Resnik (1997) described a method to acquire a set of conceptual classes for word senses, employing selectional preferences, based on the idea that certain linguistic predicates constraint the semantic interpretation of underlying words into certain classes. The method he proposed could acquire these constraints from a raw corpus automatically. Classification proposed by Levin (1993) for English verbs remains a matter of interest. Although these classes are based on syntactic properties unlike those in WORDNET, it has been shown that they can be used in automatic classifications (Stevenson and Merlo, 2000). Korhonen (2002) proposed a method for mapping WORDNET entries into Levin classes. WSD System presented by Crestan et al. (2001) in SENSEVAL-2 classified words into WORDNET unique beginners. However, their approach did not use the fact that the primes are common for words, and training data can hence be reused. Yarowsky (1992) used Roget’s Thesaurus categories as classes for word senses. These classes differ from those mentioned above, by the fact that they are based on topical context rather than syntax or grammar. 3 Basic Design of the System The system consists of three classifiers, built using local context, part of speech and syntax-based relationships respectively, and combined with the mostfrequent sense classifier by using weighted majority voting. Our experiments (section 4.3) show that building separate classifiers from different subsets of features and combining them works better than building one classifier by concatenating the features together. For training and testing, we used publicly available data sets, namely SEMCOR corpus (Miller et al., 1993) and SENSEVAL English all-words task data. In order to evaluate the systems performance in vivo, we mapped the outputs of our classifier to the answers given in the key. Although we face a penalty here due to the loss of granularity, this approach allows a direct comparison of actual usability of our system. 3.1 Data As training corpus, we used Brown-1 and Brown2 parts of SEMCOR corpus; these parts have all of their open-class words tagged with corresponding WORDNET senses. A part of the training corpus was set aside as the development corpus. This part was selected by randomly selecting a portion of multi36 class words (600 instances for each part of speech) from the training data set. As labels, the semantic class (lexicographic file number) was extracted from the sense key of each instance. Testing data sets from SENSEVAL-2 and SENSEVAL-3 English all-words tasks were used as testing corpora. 3.2 Features The feature set we selected was fairly simple; As we understood from our initial experiments, widewindow context features and topical context were not of much use for learning semantic classes from a multi-word training data set. Instead of generalizing, wider context windows add to noise, as seen from validation experiments with held-out data. Following are the features we used: 3.2.1 Local context This is a window of n words to the left, and n words to the right, where n ∈{1, 2, 3} is a parameter we selected via cross validation.1 Punctuation marks were removed and all words were converted into lower case. The feature vector was calculated the same way for both nouns and verbs. The window did not exceed the boundaries of a sentence; when there were not enough words to either side of the word within the window, the value NULL was used to fill the remaining positions. 
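A minimal sketch of this local-context extraction, assuming whitespace-tokenized sentences and keeping hyphenated tokens whole as the text specifies; the “companion” example in the next paragraph comes out as [at, his, drinking, through, bleary, tear-filled] with this function.

```python
import string

def local_context(tokens, target_index, n=3):
    """Window of n words to each side of the target token: punctuation
    tokens removed, words lower-cased, and 'NULL' padding used when the
    sentence boundary is reached."""
    kept = [(i, t.lower()) for i, t in enumerate(tokens)
            if not all(ch in string.punctuation for ch in t)]
    pos = next(p for p, (i, _) in enumerate(kept) if i == target_index)
    words = [w for _, w in kept]
    left = words[max(0, pos - n):pos]
    right = words[pos + 1:pos + 1 + n]
    return (["NULL"] * (n - len(left)) + left
            + right + ["NULL"] * (n - len(right)))
```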
For instance, for the noun ‘companion’ in sentence (given with POS tags) ‘Henry/NNP peered/VBD doubtfully/RB at/IN his/PRP$ drinking/NN companion/NN through/IN bleary/JJ ,/, tearfilled/JJ eyes/NNS ./.’ the local context feature vector is [at, his, drinking, through, bleary, tear-filled], for window size n = 3. Notice that we did not consider the hyphenated words as two words, when the data files had them annotated as a single token. 3.2.2 Part of speech This consists of parts of speech for a window of n words to both sides of word (excluding the word 1Validation results showed that a window of two words to both sides yields the best performance for both local context and POS features. n = 2 is the size we used in actual evaluation. Feature Example Value nouns Subject - verb [art] represents a culture represent Verb - object He sells his [art] sell Adjectival modifiers the ancient [art] of runes ancient Prepositional connectors academy of folk [art] academy of Post-nominal modifiers the [art] of fishing of fishing verbs Subject - verb He [sells] his art he Verb - object He [sells] his art art Infinitive connector He will [sell] his art he Adverbial modifier He can [paint] well well Words in split infinitives to boldly [go] boldly Table 1: Syntactic relations used as features. The target word is shown inside [brackets] itself), with quotation signs and punctuation marks ignored. For SEMCOR files, existing parts of speech were used; for SENSEVAL data files, parts of speech from the accompanying Penn-Treebank parsed data files were aligned with the XML data files. The value vector is calculated the same way as the local context, with the same constraint on sentence boundaries, replacing vacancies with NULL. As an example, for the sentence we used in the previous example, the part-of-speech vector with context size n = 3 for the verb peered is [NULL, NULL, NNP, RB, IN, PRP$]. 3.2.3 Syntactic relations with the word The words that hold several kinds of syntactic relations with the word under consideration were selected. We used Link Grammar parser due to Sleator and Temperley (1991) because of the informationrich parse results produced by it. Sentences in SEMCOR corpus files and the SENSEVAL files were parsed with Link parser, and words were aligned with links. A given instance of a word can have more than one syntactic features present. Each of these features was considered as a binary feature, and a vector of binary values was constructed, of which each element denoted a unique feature found in the test set of the word. Each syntactic pattern feature falls into either of two types collocation or relation: Collocation features Collocation features are such features that connect the word under consideration to another word, with a preposition or an infinitive in between — for instance, the phrase ‘art of change-ringing’ for the word art. For these features, the feature value consists of two words, which are connected to the given word either from left or 37 from right, in a given order. For the above example, the feature value is [∼.of.change-ringing], where ∼denotes the placeholder for word under consideration. Relational features Relational features represent more direct grammatical relationships, such as subject-verb or noun-adjective, the word under consideration has with surrounding words. When encoding the feature value, we specified the relation type and the value of the feature in the given instance. 
For instance, in the phrase ‘Henry peered doubtfully’, the adverbial modifier feature for the verb ‘peered’ is encoded as [adverb-mod doubtfully]. A description of the relations for each part of speech is given in the table 1. 3.3 Classifier and instance weighting The classifier we used was TiMBL, a memory based learner due to Daelemans et al. (2003). One reason for this choice was that memory based learning has shown to perform well in previous word sense disambiguation tasks, including some best performers in SENSEVAL, such as (Hoste et al., 2001; Decadt et al., 2004; Mihalcea and Faruque, 2004). Another reason is that TiMBL supported exemplar weights, a necessary feature for our system for the reasons we describe in the next section. One of the salient features of our system is that it does not consider every example to be equally important. Due to the fact that training instances from different instances can provide confusing examples, as shown in section 1.3, such an approach cannot be trusted to give good performance; we verified this by our own findings through empirical evaluations as shown in section 4.2. 3.3.1 Weighting instances with similarity We use a similarity based measure to assign weights to training examples. In the method we use, these weights are used to adjust the distances between the test instance and the example instances. The distances are adjusted according to the formula ∆E(X, Y ) = ∆(X, Y ) ewX + ϵ , where ∆E(X, Y ) is the adjusted distance between instance Y and example X, ∆(X, Y ) is the original distance, ewX is the exemplar weight of instance X. The small constant ϵ is added to avoid division by zero. There are various schemes used to measure intersense similarity. Our experiments showed that the measure defined by Jiang and Conrath (1997) (JCn) yields best results. Results for various weighting schemes are discussed in section 4.2. 3.3.2 Instance weighting explained The exemplar weights were derived from the following method: 1. pick a labelled example e, and extract its sense se and semantic class ce. 2. if the class ce is a candidate for the current test word w, i.e. w has any senses that fall into ce, find out the most frequent sense of w, sce w , within ce. We define the most frequent sense within a class as the sense that has the lowest WORDNET sense number within that class. If none of the senses of w fall into ce, we ignore that example. 3. calculate the relatedness measure between se and sce w , using whatever the similarity metric being considered. This is the exemplar weight for example e. In the implementation, we used freely available WordNet::Similarity package (Pedersen et al., 2004). 2 3.4 Classifier optimization A part of SEMCOR corpus was used as a validation set (see section 3.1). The rest was used as training data in validation phase. In the preliminary experiments, it was seen that the generally recommended classifier options yield good enough performance, although variations of switches could improve performance slightly in certain cases. Classifier options were selected by a search over the available option space for only three basic classifier parameters, namely, number of nearest neighbors, distance metric and feature weighting scheme. 2WordNet::Similarity is a perl package available freely under GNU General Public Licence. http://wnsimilarity.sourceforge.net. 38 Classifier Senseval-2 Senseval-3 Baseline 0.617 0.627 POS 0.616 0.614 Local context 0.627 0.633 Synt. 
Pat 0.620 0.612 Concatenated 0.609 0.611 Combined 0.631 0.643 Table 2: Results of baseline, individual, and combined classifiers: recall measures for nouns and verbs combined. 4 Results In what follows, we present the results of our experiments in various test cases.3 We combined the three classifiers and the WORDNET first-sense classifier through simple majority voting. For evaluating the systems with SENSEVAL data sets, we mapped the outputs of our classifiers to WORDNET senses by picking the most-frequent sense (the one with the lowest sense number) within each of the class. This mapping was used in all tests. For all evaluations, we used SENSEVAL official scorer. We could use the setting only for nouns and verbs, because the similarity measures we used were not defined for adjectives or adverbs, due to the fact that hypernyms are not defined for these two parts of speech. So we list the initial results only for nouns and verbs. 4.1 Individual classifiers vs. combination We evaluated the results of the individual classifiers before combination. Only local context classifier could outperform the baseline in general, although there is a slight improvement with the syntactic pattern classifier on SENSEVAL-2 data. The results are given in the table 2, together with the results of voted combination, and baseline WORDNET first sense. Classifier shown as ‘concatenated’ is a single classifier trained from all of these feature vectors concatenated to make a single vector. Concatenating features this way does not seem to improve performance. Although exact reasons for this are not clear, this is consistent with pre3Note that the experiments and results are reported for SENSEVAL data for comparison purposes, and were not involved in parameter optimization, which was done with the development sample. Senseval-2 Senseval-3 No similarity used 0.608 0.599 Resnik 0.540 0.522 JCn 0.631 0.643 Table 3: Effect of different similarity schemes on recall, combined results for nouns and verbs Senseval-2 Senseval-3 SM 0.631 0.643 GW 0.634 0.649 LW 0.641 0.650 Table 4: Improvement of performance with classifier weighting. Combined results for nouns and verbs with voting schemes Simple Majority (SM), Global classifier weights (GW) and local weights (LW). vious observations (Hoste et al., 2001; Decadt et al., 2004) that combining classifiers, each using different features, can yield good performance. 4.2 Effect of similarity measure Table 3 shows the effect of JCn and Resnik similarity measures, along with no similarity weighting, for the combined classifier. It is clear that proper similarity measure has a major impact on the performance, with Resnik measure performing worse than the baseline. 4.3 Optimizing the voting process Several voting schemes were tried for combining classifiers. Simple majority voting improves performance over baseline. However, previously reported results such as (Hoste et al., 2001) and (Decadt et al., 2004) have shown that optimizing the voting process helps improve the results. We used a variation of Weighted Majority Algorithm (Littlestone and Warmuth, 1994). The original algorithm was formulated for binary classification tasks; however, our use of it for multi-class case proved to be successful. We used the held-out development data set for adjusting classifier weights. Originally, all classifiers have the same weight of 1. With each test instance, the classifier builds the final output considering the weights. 
If this output turns out to be wrong, the classifiers that contributed to the wrong answer get their weights reduced by some factor. We could ad39 Senseval-2 Senseval-3 System 0.777 0.806 Baseline 0.756 0.783 Table 5: Coarse grained results just the weights locally or globally; In global setting, the weights were adjusted using a random sample of held-out data, which contained different words. These weights were used for classifying all words in the actual test set. In local setting, each classifier weight setting was optimized for individual words that were present in test sets, by picking up random samples of the same word from SEMCOR .4 Table 4 shows the improvements with each setting. Coarse grained (at semantic-class level) results for the same system are shown in table 5. Baseline figures reported are for the most-frequent class. 4.4 Final results on SENSEVAL data Here, we list the performance of the system with adjectives and adverbs added for the ease of comparison. Due to the facts mentioned at the beginning of this section, our system was not applicable for these parts of speech, and we classified all instances of these two POS types with their most frequent sense. We also identified the multi-word phrases from the test documents. These phrases generally have a unique sense in WORDNET ; we marked all of them with their first sense without classifying them. All the multiple-class instances of nouns and verbs were classified and converted to WORDNET senses by the method described above, with locally optimized classifier voting. The results of the systems are shown in tables 7 and 8. Our system’s results in both cases are listed as Simil-Prime, along with the baseline WORDNET first sense (including multi-word phrases and ‘U’ answers), and the two best performers’ results reported.5 These results compare favorably with the official results reported in both tasks. 4Words for which there were no samples in SEMCOR were classified using a weight of 1 for all classifiers. 5The differences of the baseline figures from the previously reported figures are clearly due to different handling of multiword phrases, hyphenated words, and unknown words in each system. We observed by analyzing the answer keys that even better baseline figures are technically possible, with better techniques to identify these special cases. Senseval-2 Senseval-3 Micro Average < 0.0001 < 0.0001 Macro Average 0.0073 0.0252 Table 6: One tailed paired t-test significance levels of results: P(T ⩽t) System Recall SMUaw (Mihalcea, 2002) 0.690 Simil-Prime 0.664 Baseline (WORDNET first sense) 0.648 CNTS-Antwerp (Hoste et al., 2001) 0.636 Table 7: Results for SENSEVAL-2 English all words data for all parts of speech and fine grained scoring. Significance of results To verify the significance of these results, we used one-tailed paired t-test, using results of baseline WORDNET first sense and our system as pairs. Tests were done both at microaverage level and macro-average level, (considering test data set as a whole and considering per-word average). Null hypothesis was that there is no significant improvement over the baseline. Both settings yield good significance levels, as shown in table 6. 5 Conclusion and Future Work We analyzed the problem of Knowledge Acquisition Bottleneck in WSD, proposed using a general set of semantic classes as a trade-off, and discussed why such a system is promising. Our formulation allowed us to use training examples from words different from the actual word being classified. 
This makes the available labelled data reusable for different words, relieving the above problem. In order to facilitate learning, we introduced a technique based on word sense similarity. The generic classes we learned can be mapped to System Recall Simil-Prime 0.661 GAMBL-AW-S (Decadt et al., 2004) 0.652 SenseLearner (Mihalcea and Faruque, 2004) 0.646 Baseline (WORDNET first sense) 0.642 Table 8: Results for SENSEVAL-3 English all words data for all parts of speech and fine grained scoring. 40 finer grained senses with simple heuristics. Through empirical findings, we showed that our system can attain state of the art performance, when applied to standard fine-grained WSD evaluation tasks. In the future, we hope to improve on these results: Instead of using WORDNET unique beginners, using more natural semantic classes based on word usage would possibly improve the accuracy, and finding such classes would be a worthwhile area of research. As seen from our results, selecting correct similarity measure has an impact on the final outcome. We hope to work on similarity measures that are more applicable in our task. 6 Acknowledgements Authors wish to thank the three anonymous reviewers for their helpful suggestions and comments. References E. Crestan, M. El-B`eze, and C. De Loupy. 2001. Improving wsd with multi-level view of context monitored by similarity measure. In Proceeding of SENSEVAL-2: Second International Workshop on Evaluating Word Sense Disambiguation Systems, Toulouse, France. Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2003. TiMBL: Tilburg Memory Based Learner, version 5.0, reference guide. Technical report, ILK 03-10. Bart Decadt, V´eronique Hoste, Walter Daelemans, and Antal Van den Bosch. 2004. GAMBL, genetic algorithm optimization of memory-based wsd. In Senseval-3: Third Intl. Workshop on the Evaluation of Systems for the Semantic Analysis of Text. P. Edmonds and S. Cotton. 2001. Senseval-2: Overview. In Proc. of the Second Intl. Workshop on Evaluating Word Sense Disambiguation Systems (Senseval-2). C. Fellbaum. 1998. WordNet: An Electronic Lexical Database. The MIT Press, Cambridge, MA. V´eronique Hoste, Anne Kool, and Walter Daelmans. 2001. Classifier optimization and combination in English all words task. In Proceeding of SENSEVAL-2: Second International Workshop on Evaluating Word Sense Disambiguation Systems. J. Jiang and D. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of International Conference on Research in Computational Linguistics. Anna Korhonen. 2002. Assigning verbs to semantic classes via wordnet. In Proceedings of the COLING Workshop on Building and Using Semantic Networks. Beth Levin. 1993. English Verb Classes and Alternations. University of Chicago Press, Chicago, IL. N Littlestone and M.K. Warmuth. 1994. The weighted majority algorithm. Information and Computation, 108(2):212–261. Rada Mihalcea and Tim Chklovski. 2003. Open Mind Word Expert: Creating large annotated data collections with web users’ help. In Proceedings of the EACL 2003 Workshop on Linguistically Annotated Corpora. Rada Mihalcea and Ehsanul Faruque. 2004. Senselearner: Minimally supervised word sense disambiguation for all words in open text. In Senseval-3: Third Intl. Workshop on the Evaluation of Systems for the Semantic Analysis of Text. Rada Mihalcea. 2002. Bootstrapping large sense tagged corpora. In Proc. of the 3rd Intl. Conference on Languages Resources and Evaluations. G. Miller, C. Leacock, T. 
Randee, and R. Bunker. 1993. A semantic concordance. In Proc. of the 3rd DARPA Workshop on Human Language Technology. Hwee Tou Ng. 1997. Getting serious about word sense disambiguation. In Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How?, pages 1–7. T. Pedersen, S. Patwardhan, and J. Michelizzi. 2004. Wordnet::Similarity - Measuring the relatedness of concepts. In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI-04). P. Resnik. 1997. Selectional preference and sense disambiguation. In Proc. of ACL Siglex Workshop on Tagging Text with Lexical Semantics, Why, What and How? D. Sleator and D. Temperley. 1991. Parsing English with a Link Grammar. Technical report, Carnegie Mellon University Computer Science CMU-CS-91-196. B. Snyder and M. Palmer. 2004. The English all-words task. In Senseval-3: Third Intl. Workshop on the Evaluation of Systems for the Semantic Analysis of Text. Suzanne Stevenson and Paola Merlo. 2000. Automatic lexical acquisition based on statistical distributions. In Proc. of the 17th conf. on Computational linguistics. David Yarowsky. 1992. Word-sense disambiguation using statistical models of Roget’s categories trained on large corpora. In Proceedings of COLING-92, pages 454–460. 41 | 2005 | 5 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 403–410, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Domain Kernels for Word Sense Disambiguation Alfio Gliozzo and Claudio Giuliano and Carlo Strapparava ITC-irst, Istituto per la Ricerca Scientifica e Tecnologica I-38050, Trento, ITALY {gliozzo,giuliano,strappa}@itc.it Abstract In this paper we present a supervised Word Sense Disambiguation methodology, that exploits kernel methods to model sense distinctions. In particular a combination of kernel functions is adopted to estimate independently both syntagmatic and domain similarity. We defined a kernel function, namely the Domain Kernel, that allowed us to plug “external knowledge” into the supervised learning process. External knowledge is acquired from unlabeled data in a totally unsupervised way, and it is represented by means of Domain Models. We evaluated our methodology on several lexical sample tasks in different languages, outperforming significantly the state-of-the-art for each of them, while reducing the amount of labeled training data required for learning. 1 Introduction The main limitation of many supervised approaches for Natural Language Processing (NLP) is the lack of available annotated training data. This problem is known as the Knowledge Acquisition Bottleneck. To reach high accuracy, state-of-the-art systems for Word Sense Disambiguation (WSD) are designed according to a supervised learning framework, in which the disambiguation of each word in the lexicon is performed by constructing a different classifier. A large set of sense tagged examples is then required to train each classifier. This methodology is called word expert approach (Small, 1980; Yarowsky and Florian, 2002). However this is clearly unfeasible for all-words WSD tasks, in which all the words of an open text should be disambiguated. On the other hand, the word expert approach works very well for lexical sample WSD tasks (i.e. tasks in which it is required to disambiguate only those words for which enough training data is provided). As the original rationale of the lexical sample tasks was to define a clear experimental settings to enhance the comprehension of WSD, they should be considered as preceding exercises to all-words tasks. However this is not the actual case. Algorithms designed for lexical sample WSD are often based on pure supervision and hence “data hungry”. We think that lexical sample WSD should regain its original explorative role and possibly use a minimal amount of training data, exploiting instead external knowledge acquired in an unsupervised way to reach the actual state-of-the-art performance. By the way, minimal supervision is the basis of state-of-the-art systems for all-words tasks (e.g. (Mihalcea and Faruque, 2004; Decadt et al., 2004)), that are trained on small sense tagged corpora (e.g. SemCor), in which few examples for a subset of the ambiguous words in the lexicon can be found. Thus improving the performance of WSD systems with few learning examples is a fundamental step towards the direction of designing a WSD system that works well on real texts. In addition, it is a common opinion that the performance of state-of-the-art WSD systems is not satisfactory from an applicative point of view yet. 403 To achieve these goals we identified two promising research directions: 1. Modeling independently domain and syntagmatic aspects of sense distinction, to improve the feature representation of sense tagged examples (Gliozzo et al., 2004). 2. 
Leveraging external knowledge acquired from unlabeled corpora. The first direction is motivated by the linguistic assumption that syntagmatic and domain (associative) relations are both crucial to represent sense distictions, while they are basically originated by very different phenomena. Syntagmatic relations hold among words that are typically located close to each other in the same sentence in a given temporal order, while domain relations hold among words that are typically used in the same semantic domain (i.e. in texts having similar topics (Gliozzo et al., 2004)). Their different nature suggests to adopt different learning strategies to detect them. Regarding the second direction, external knowledge would be required to help WSD algorithms to better generalize over the data available for training. On the other hand, most of the state-of-the-art supervised approaches to WSD are still completely based on “internal” information only (i.e. the only information available to the training algorithm is the set of manually annotated examples). For example, in the Senseval-3 evaluation exercise (Mihalcea and Edmonds, 2004) many lexical sample tasks were provided, beyond the usual labeled training data, with a large set of unlabeled data. However, at our knowledge, none of the participants exploited this unlabeled material. Exploring this direction is the main focus of this paper. In particular we acquire a Domain Model (DM) for the lexicon (i.e. a lexical resource representing domain associations among terms), and we exploit this information inside our supervised WSD algorithm. DMs can be automatically induced from unlabeled corpora, allowing the portability of the methodology among languages. We identified kernel methods as a viable framework in which to implement the assumptions above (Strapparava et al., 2004). Exploiting the properties of kernels, we have defined independently a set of domain and syntagmatic kernels and we combined them in order to define a complete kernel for WSD. The domain kernels estimate the (domain) similarity (Magnini et al., 2002) among contexts, while the syntagmatic kernels evaluate the similarity among collocations. We will demonstrate that using DMs induced from unlabeled corpora is a feasible strategy to increase the generalization capability of the WSD algorithm. Our system far outperforms the state-ofthe-art systems in all the tasks in which it has been tested. Moreover, a comparative analysis of the learning curves shows that the use of DMs allows us to remarkably reduce the amount of sense-tagged examples, opening new scenarios to develop systems for all-words tasks with minimal supervision. The paper is structured as follows. Section 2 introduces the notion of Domain Model. In particular an automatic acquisition technique based on Latent Semantic Analysis (LSA) is described. In Section 3 we present a WSD system based on a combination of kernels. In particular we define a Domain Kernel (see Section 3.1) and a Syntagmatic Kernel (see Section 3.2), to model separately syntagmatic and domain aspects. In Section 4 our WSD system is evaluated in the Senseval-3 English, Italian, Spanish and Catalan lexical sample tasks. 2 Domain Models The simplest methodology to estimate the similarity among the topics of two texts is to represent them by means of vectors in the Vector Space Model (VSM), and to exploit the cosine similarity. More formally, let C = {t1, t2, . . . , tn} be a corpus, let V = {w1, w2, . . . 
, wk} be its vocabulary, let T be the k × n term-by-document matrix representing C, such that ti,j is the frequency of word wi into the text tj. The VSM is a k-dimensional space Rk, in which the text tj ∈C is represented by means of the vector ⃗tj such that the ith component of ⃗tj is ti,j. The similarity among two texts in the VSM is estimated by computing the cosine among them. However this approach does not deal well with lexical variability and ambiguity. For example the two sentences “he is affected by AIDS” and “HIV is a virus” do not have any words in common. In the 404 VSM their similarity is zero because they have orthogonal vectors, even if the concepts they express are very closely related. On the other hand, the similarity between the two sentences “the laptop has been infected by a virus” and “HIV is a virus” would turn out very high, due to the ambiguity of the word virus. To overcome this problem we introduce the notion of Domain Model (DM), and we show how to use it in order to define a domain VSM in which texts and terms are represented in a uniform way. A DM is composed by soft clusters of terms. Each cluster represents a semantic domain, i.e. a set of terms that often co-occur in texts having similar topics. A DM is represented by a k×k′ rectangular matrix D, containing the degree of association among terms and domains, as illustrated in Table 1. MEDICINE COMPUTER SCIENCE HIV 1 0 AIDS 1 0 virus 0.5 0.5 laptop 0 1 Table 1: Example of Domain Matrix DMs can be used to describe lexical ambiguity and variability. Lexical ambiguity is represented by associating one term to more than one domain, while variability is represented by associating different terms to the same domain. For example the term virus is associated to both the domain COMPUTER SCIENCE and the domain MEDICINE (ambiguity) while the domain MEDICINE is associated to both the terms AIDS and HIV (variability). More formally, let D = {D1, D2, ..., Dk′} be a set of domains, such that k′ ≪k. A DM is fully defined by a k×k′ domain matrix D representing in each cell di,z the domain relevance of term wi with respect to the domain Dz. The domain matrix D is used to define a function D : Rk →Rk′, that maps the vectors ⃗tj expressed into the classical VSM, into the vectors ⃗t′ j in the domain VSM. D is defined by1 D(⃗tj) = ⃗tj(IIDFD) = ⃗t′ j (1) 1In (Wong et al., 1985) the formula 1 is used to define a Generalized Vector Space Model, of which the Domain VSM is a particular instance. where IIDF is a k × k diagonal matrix such that iIDF i,i = IDF(wi), ⃗tj is represented as a row vector, and IDF(wi) is the Inverse Document Frequency of wi. Vectors in the domain VSM are called Domain Vectors (DVs). DVs for texts are estimated by exploiting the formula 1, while the DV ⃗w′ i, corresponding to the word wi ∈V is the ith row of the domain matrix D. To be a valid domain matrix such vectors should be normalized (i,e. ⟨⃗w′ i, ⃗w′ i⟩= 1). In the Domain VSM the similarity among DVs is estimated by taking into account second order relations among terms. For example the similarity of the two sentences “He is affected by AIDS” and “HIV is a virus” is very high, because the terms AIDS, HIV and virus are highly associated to the domain MEDICINE. A DM can be estimated from hand made lexical resources such as WORDNET DOMAINS (Magnini and Cavagli`a, 2000), or by performing a term clustering process on a large corpus. 
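To make the mapping of equation 1 concrete, the sketch below applies it to the toy domain matrix of Table 1. The IDF values are invented purely for illustration, and NumPy is used only for convenience.

```python
import numpy as np

# Toy domain matrix D from Table 1 (rows: HIV, AIDS, virus, laptop;
# columns: MEDICINE, COMPUTER SCIENCE).
D = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
idf = np.diag([2.0, 2.0, 1.0, 1.5])      # invented IDF weights

def to_domain_space(t):
    """Equation 1: map a term-frequency row vector t from the classical
    VSM into the domain VSM, D(t) = t (I_IDF D)."""
    return t @ idf @ D

# "HIV is a virus" vs. "he is affected by AIDS": no words in common, yet
# their domain vectors are close because every term points to MEDICINE.
t1 = np.array([1.0, 0.0, 1.0, 0.0])      # HIV, virus
t2 = np.array([0.0, 1.0, 0.0, 0.0])      # AIDS
v1, v2 = to_domain_space(t1), to_domain_space(t2)
print(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))   # ~0.98
```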
We think that the second methodology is more attractive, because it allows us to automatically acquire DMs for different languages. In this work we propose the use of Latent Semantic Analysis (LSA) to induce DMs from corpora. LSA is an unsupervised technique for estimating the similarity among texts and terms in a corpus. LSA is performed by means of a Singular Value Decomposition (SVD) of the term-by-document matrix T describing the corpus. The SVD algorithm can be exploited to acquire a domain matrix D from a large corpus C in a totally unsupervised way. SVD decomposes the term-by-document matrix T into three matrixes T ≃VΣk′UT where Σk′ is the diagonal k × k matrix containing the highest k′ ≪k eigenvalues of T, and all the remaining elements set to 0. The parameter k′ is the dimensionality of the Domain VSM and can be fixed in advance2. Under this setting we define the domain matrix DLSA as DLSA = INV p Σk′ (2) where IN is a diagonal matrix such that iN i,i = 1 q ⟨⃗w′ i, ⃗w′ i⟩, ⃗w′ i is the ith row of the matrix V√Σk′.3 2It is not clear how to choose the right dimensionality. In our experiments we used 50 dimensions. 3When DLSA is substituted in Equation 1 the Domain VSM 405 3 Kernel Methods for WSD In the introduction we discussed two promising directions for improving the performance of a supervised disambiguation system. In this section we show how these requirements can be efficiently implemented in a natural and elegant way by using kernel methods. The basic idea behind kernel methods is to embed the data into a suitable feature space F via a mapping function φ : X →F, and then use a linear algorithm for discovering nonlinear patterns. Instead of using the explicit mapping φ, we can use a kernel function K : X × X →R, that corresponds to the inner product in a feature space which is, in general, different from the input space. Kernel methods allow us to build a modular system, as the kernel function acts as an interface between the data and the learning algorithm. Thus the kernel function becomes the only domain specific module of the system, while the learning algorithm is a general purpose component. Potentially any kernel function can work with any kernel-based algorithm. In our system we use Support Vector Machines (Cristianini and Shawe-Taylor, 2000). Exploiting the properties of the kernel functions, it is possible to define the kernel combination schema as KC(xi, xj) = n X l=1 Kl(xi, xj) p Kl(xj, xj)Kl(xi, xi) (3) Our WSD system is then defined as combination of n basic kernels. Each kernel adds some additional dimensions to the feature space. In particular, we have defined two families of kernels: Domain and Syntagmatic kernels. The former is composed by both the Domain Kernel (KD) and the Bag-ofWords kernel (KBoW ), that captures domain aspects (see Section 3.1). The latter captures the syntagmatic aspects of sense distinction and it is composed by two kernels: the collocation kernel (KColl) and is equivalent to a Latent Semantic Space (Deerwester et al., 1990). The only difference in our formulation is that the vectors representing the terms in the Domain VSM are normalized by the matrix IN, and then rescaled, according to their IDF value, by matrix IIDF. Note the analogy with the tf idf term weighting schema (Salton and McGill, 1983), widely adopted in Information Retrieval. the Part of Speech kernel (KP oS) (see Section 3.2). The WSD kernels (K′ W SD and KW SD) are then defined by combining them (see Section 3.3). 
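Before turning to the individual kernels, here is a minimal sketch of the combination schema of equation 3: each component kernel is normalised by the geometric mean of its self-similarities before the sum, so that kernels on very different scales contribute comparably. The placeholder component kernel is ours; the real system plugs in K_D, K_BoW, K_Coll and K_PoS, defined below.

```python
import math

def combine_kernels(kernels, x_i, x_j):
    """Equation 3: sum of the component kernels, each one normalised by
    sqrt(K(x_i, x_i) * K(x_j, x_j))."""
    total = 0.0
    for k in kernels:
        norm = math.sqrt(k(x_i, x_i) * k(x_j, x_j))
        if norm > 0.0:                    # guard against empty feature vectors
            total += k(x_i, x_j) / norm
    return total

def linear_kernel(a, b):
    """Placeholder component kernel over sparse feature dictionaries."""
    return sum(v * b.get(f, 0.0) for f, v in a.items())

x1 = {"virus": 1.0, "laptop": 1.0}
x2 = {"virus": 2.0, "infected": 1.0}
print(combine_kernels([linear_kernel], x1, x2))   # 2 / sqrt(2 * 5)
```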
3.1 Domain Kernels In (Magnini et al., 2002), it has been claimed that knowing the domain of the text in which the word is located is a crucial information for WSD. For example the (domain) polysemy among the COMPUTER SCIENCE and the MEDICINE senses of the word virus can be solved by simply considering the domain of the context in which it is located. This assumption can be modeled by defining a kernel that estimates the domain similarity among the contexts of the words to be disambiguated, namely the Domain Kernel. The Domain Kernel estimates the similarity among the topics (domains) of two texts, so to capture domain aspects of sense distinction. It is a variation of the Latent Semantic Kernel (Shawe-Taylor and Cristianini, 2004), in which a DM (see Section 2) is exploited to define an explicit mapping D : Rk →Rk′ from the classical VSM into the Domain VSM. The Domain Kernel is defined by KD(ti, tj) = ⟨D(ti), D(tj)⟩ p ⟨D(ti), D(tj)⟩⟨D(ti), D(tj)⟩ (4) where D is the Domain Mapping defined in equation 1. Thus the Domain Kernel requires a Domain Matrix D. For our experiments we acquire the matrix DLSA, described in equation 2, from a generic collection of unlabeled documents, as explained in Section 2. A more traditional approach to detect topic (domain) similarity is to extract Bag-of-Words (BoW) features from a large window of text around the word to be disambiguated. The BoW kernel, denoted by KBoW , is a particular case of the Domain Kernel, in which D = I, and I is the identity matrix. The BoW kernel does not require a DM, then it can be applied to the “strictly” supervised settings, in which an external knowledge source is not provided. 3.2 Syntagmatic kernels Kernel functions are not restricted to operate on vectorial objects ⃗x ∈Rk. In principle kernels can be defined for any kind of object representation, as for 406 example sequences and trees. As stated in Section 1, syntagmatic relations hold among words collocated in a particular temporal order, thus they can be modeled by analyzing sequences of words. We identified the string kernel (or word sequence kernel) (Shawe-Taylor and Cristianini, 2004) as a valid instrument to model our assumptions. The string kernel counts how many times a (noncontiguous) subsequence of symbols u of length n occurs in the input string s, and penalizes noncontiguous occurrences according to the number of gaps they contain (gap-weighted subsequence kernel). Formally, let V be the vocabulary, the feature space associated with the gap-weighted subsequence kernel of length n is indexed by a set I of subsequences over V of length n. The (explicit) mapping function is defined by φn u(s) = X i:u=s(i) λl(i), u ∈V n (5) where u = s(i) is a subsequence of s in the positions given by the tuple i, l(i) is the length spanned by u, and λ ∈]0, 1] is the decay factor used to penalize non-contiguous subsequences. The associate gap-weighted subsequence kernel is defined by kn(si, sj) = ⟨φn(si), φn(sj)⟩= X u∈V n φn(si)φn(sj) (6) We modified the generic definition of the string kernel in order to make it able to recognize collocations in a local window of the word to be disambiguated. In particular we defined two Syntagmatic kernels: the n-gram Collocation Kernel and the ngram PoS Kernel. The n-gram Collocation kernel Kn Coll is defined as a gap-weighted subsequence kernel applied to sequences of lemmata around the word l0 to be disambiguated (i.e. l−3, l−2, l−1, l0, l+1, l+2, l+3). 
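A brute-force sketch of the gap-weighted subsequence kernel of equations 5 and 6 follows; it enumerates index tuples explicitly, which is adequate for the seven-token windows used here (a production implementation would use the dynamic-programming recursion described by Shawe-Taylor and Cristianini). The example windows are invented.

```python
from itertools import combinations
from collections import defaultdict

def gap_weighted_features(seq, n, lam):
    """Explicit feature map of equation 5: for each subsequence u of length n,
    sum lam ** span over all of its (possibly non-contiguous) occurrences,
    where span is the length l(i) covered by the occurrence."""
    phi = defaultdict(float)
    for idx in combinations(range(len(seq)), n):
        u = tuple(seq[i] for i in idx)
        phi[u] += lam ** (idx[-1] - idx[0] + 1)
    return phi

def gap_weighted_kernel(s, t, n=2, lam=0.5):
    """Equation 6: inner product of the two feature maps."""
    phi_s, phi_t = gap_weighted_features(s, n, lam), gap_weighted_features(t, n, lam)
    return sum(v * phi_t[u] for u, v in phi_s.items() if u in phi_t)

w1 = ["infected", "by", "a", "virus"]
w2 = ["infected", "with", "the", "virus"]
print(gap_weighted_kernel(w1, w2))   # only the sparse bigram (infected, virus) matches
```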
This formulation allows us to estimate the number of common (sparse) subsequences of lemmata (i.e. collocations) between two examples, in order to capture syntagmatic similarity. In analogy we defined the PoS kernel Kn P oS, by setting s to the sequence of PoSs p−3, p−2, p−1, p0, p+1, p+2, p+3, where p0 is the PoS of the word to be disambiguated. The definition of the gap-weighted subsequence kernel, provided by equation 6, depends on the parameter n, that represents the length of the subsequences analyzed when estimating the similarity among sequences. For example, K2 Coll allows us to represent the bigrams around the word to be disambiguated in a more flexible way (i.e. bigrams can be sparse). In WSD, typical features are bigrams and trigrams of lemmata and PoSs around the word to be disambiguated, then we defined the Collocation Kernel and the PoS Kernel respectively by equations 7 and 84. KColl(si, sj) = p X l=1 Kl Coll(si, sj) (7) KP oS(si, sj) = p X l=1 Kl P oS(si, sj) (8) 3.3 WSD kernels In order to show the impact of using Domain Models in the supervised learning process, we defined two WSD kernels, by applying the kernel combination schema described by equation 3. Thus the following WSD kernels are fully specified by the list of the kernels that compose them. Kwsd composed by KColl, KP oS and KBoW K′ wsd composed by KColl, KP oS, KBoW and KD The only difference between the two systems is that K′ wsd uses Domain Kernel KD. K′ wsd exploits external knowledge, in contrast to Kwsd, whose only available information is the labeled training data. 4 Evaluation and Discussion In this section we present the performance of our kernel-based algorithms for WSD. The objectives of these experiments are: • to study the combination of different kernels, • to understand the benefits of plugging external information using domain models, • to verify the portability of our methodology among different languages. 4The parameters p and λ are optimized by cross-validation. The best results are obtained setting p = 2, λ = 0.5 for KColl and λ →0 for KP oS. 407 4.1 WSD tasks We conducted the experiments on four lexical sample tasks (English, Catalan, Italian and Spanish) of the Senseval-3 competition (Mihalcea and Edmonds, 2004). Table 2 describes the tasks by reporting the number of words to be disambiguated, the mean polysemy, and the dimension of training, test and unlabeled corpora. Note that the organizers of the English task did not provide any unlabeled material. So for English we used a domain model built from a portion of BNC corpus, while for Spanish, Italian and Catalan we acquired DMs from the unlabeled corpora made available by the organizers. #w pol # train # test # unlab Catalan 27 3.11 4469 2253 23935 English 57 6.47 7860 3944 Italian 45 6.30 5145 2439 74788 Spanish 46 3.30 8430 4195 61252 Table 2: Dataset descriptions 4.2 Kernel Combination In this section we present an experiment to empirically study the kernel combination. The basic kernels (i.e. KBoW , KD, KColl and KP oS) have been compared to the combined ones (i.e. Kwsd and K′ wsd) on the English lexical sample task. The results are reported in Table 3. The results show that combining kernels significantly improves the performance of the system. KD KBoW KP oS KColl Kwsd K′ wsd F1 65.5 63.7 62.9 66.7 69.7 73.3 Table 3: The performance (F1) of each basic kernel and their combination for English lexical sample task. 4.3 Portability and Performance We evaluated the performance of K′ wsd and Kwsd on the lexical sample tasks described above. 
The results are showed in Table 4 and indicate that using DMs allowed K′ wsd to significantly outperform Kwsd. In addition, K′ wsd turns out the best systems for all the tested Senseval-3 tasks. Finally, the performance of K′ wsd are higher than the human agreement for the English and Spanish tasks5. Note that, in order to guarantee an uniform application to any language, we do not use any syntactic information provided by a parser. 4.4 Learning Curves The Figures 1, 2, 3 and 4 show the learning curves evaluated on K′ wsd and Kwsd for all the lexical sample tasks. The learning curves indicate that K′ wsd is far superior to Kwsd for all the tasks, even with few examples. The result is extremely promising, for it demonstrates that DMs allow to drastically reduce the amount of sense tagged data required for learning. It is worth noting, as reported in Table 5, that K′ wsd achieves the same performance of Kwsd using about half of the training data. % of training English 54 Catalan 46 Italian 51 Spanish 50 Table 5: Percentage of sense tagged examples required by K′ wsd to achieve the same performance of Kwsd with full training. 5 Conclusion and Future Works In this paper we presented a supervised algorithm for WSD, based on a combination of kernel functions. In particular we modeled domain and syntagmatic aspects of sense distinctions by defining respectively domain and syntagmatic kernels. The Domain kernel exploits Domain Models, acquired from “external” untagged corpora, to estimate the similarity among the contexts of the words to be disambiguated. The syntagmatic kernels evaluate the similarity between collocations. We evaluated our algorithm on several Senseval3 lexical sample tasks (i.e. English, Spanish, Italian and Catalan) significantly improving the state-otthe-art for all of them. In addition, the performance 5It is not clear if the inter-annotator-agreement can be considerated the upper bound for a WSD system. 408 MF Agreement BEST Kwsd K′ wsd DM+ English 55.2 67.3 72.9 69.7 73.3 3.6 Catalan 66.3 93.1 85.2 85.2 89.0 3.8 Italian 18.0 89.0 53.1 53.1 61.3 8.2 Spanish 67.7 85.3 84.2 84.2 88.2 4.0 Table 4: Comparative evaluation on the lexical sample tasks. Columns report: the Most Frequent baseline, the inter annotator agreement, the F1 of the best system at Senseval-3, the F1 of Kwsd, the F1 of K′ wsd, DM+ (the improvement due to DM, i.e. K′ wsd −Kwsd). 0.5 0.55 0.6 0.65 0.7 0.75 0 0.2 0.4 0.6 0.8 1 F1 Percentage of training set K'wsd K wsd Figure 1: Learning curves for English lexical sample task. 0.65 0.7 0.75 0.8 0.85 0.9 0 0.2 0.4 0.6 0.8 1 F1 Percentage of training set K'wsd K wsd Figure 2: Learning curves for Catalan lexical sample task. of our system outperforms the inter annotator agreement in both English and Spanish, achieving the upper bound performance. We demonstrated that using external knowledge 0.25 0.3 0.35 0.4 0.45 0.5 0.55 0.6 0.65 0 0.2 0.4 0.6 0.8 1 F1 Percentage of training set K'wsd K wsd Figure 3: Learning curves for Italian lexical sample task. 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0 0.2 0.4 0.6 0.8 1 F1 Percentage of training set K'wsd K wsd Figure 4: Learning curves for Spanish lexical sample task. inside a supervised framework is a viable methodology to reduce the amount of training data required for learning. In our approach the external knowledge is represented by means of Domain Models automat409 ically acquired from corpora in a totally unsupervised way. 
Experimental results show that the use of Domain Models allows us to reduce the amount of training data, opening an interesting research direction for all those NLP tasks for which the Knowledge Acquisition Bottleneck is a crucial problem. In particular we plan to apply the same methodology to Text Categorization, by exploiting the Domain Kernel to estimate the similarity among texts. In this implementation, our WSD system does not exploit syntactic information produced by a parser. For the future we plan to integrate such information by adding a tree kernel (i.e. a kernel function that evaluates the similarity among parse trees) to the kernel combination schema presented in this paper. Last but not least, we are going to apply our approach to develop supervised systems for all-words tasks, where the quantity of data available to train each word expert classifier is very low. Acknowledgments Alfio Gliozzo and Carlo Strapparava were partially supported by the EU project Meaning (IST-200134460). Claudio Giuliano was supported by the EU project Dot.Kom (IST-2001-34038). We would like to thank Oier Lopez de Lacalle for useful comments. References N. Cristianini and J. Shawe-Taylor. 2000. An introduction to Support Vector Machines. Cambridge University Press. B. Decadt, V. Hoste, W. Daelemens, and A. van den Bosh. 2004. Gambl, genetic algorithm optimization of memory-based wsd. In Proc. of Senseval-3, Barcelona, July. S. Deerwester, S. Dumais, G. Furnas, T. Landauer, and R. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society of Information Science. A. Gliozzo, C. Strapparava, and I. Dagan. 2004. Unsupervised and supervised exploitation of semantic domains in lexical disambiguation. Computer Speech and Language, 18(3):275–299. B. Magnini and G. Cavagli`a. 2000. Integrating subject field codes into WordNet. In Proceedings of LREC2000, pages 1413–1418, Athens, Greece, June. B. Magnini, C. Strapparava, G. Pezzulo, and A. Gliozzo. 2002. The role of domain information in word sense disambiguation. Natural Language Engineering, 8(4):359–373. R. Mihalcea and P. Edmonds, editors. 2004. Proceedings of SENSEVAL-3, Barcelona, Spain, July. R. Mihalcea and E. Faruque. 2004. Senselearner: Minimally supervised WSD for all words in open text. In Proceedings of SENSEVAL-3, Barcelona, Spain, July. G. Salton and M.H. McGill. 1983. Introduction to modern information retrieval. McGraw-Hill, New York. J. Shawe-Taylor and N. Cristianini. 2004. Kernel Methods for Pattern Analysis. Cambridge University Press. S. Small. 1980. Word Expert Parsing: A Theory of Distributed Word-based Natural Language Understanding. Ph.D. Thesis, Department of Computer Science, University of Maryland. C. Strapparava, A. Gliozzo, and C. Giuliano. 2004. Pattern abstraction and term similarity for word sense disambiguation: Irst at senseval-3. In Proc. of SENSEVAL-3 Third International Workshop on Evaluation of Systems for the Semantic Analysis of Text, pages 229–234, Barcelona, Spain, July. S.K.M. Wong, W. Ziarko, and P.C.N. Wong. 1985. Generalized vector space model in information retrieval. In Proceedings of the 8th ACM SIGIR Conference. D. Yarowsky and R. Florian. 2002. Evaluating sense disambiguation across diverse parameter space. Natural Language Engineering, 8(4):293–310. 410 | 2005 | 50 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 411–418, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Improving Name Tagging by Reference Resolution and Relation Detection Heng Ji Ralph Grishman Department of Computer Science New York University New York, NY, 10003, USA [email protected] [email protected] Abstract Information extraction systems incorporate multiple stages of linguistic analysis. Although errors are typically compounded from stage to stage, it is possible to reduce the errors in one stage by harnessing the results of the other stages. We demonstrate this by using the results of coreference analysis and relation extraction to reduce the errors produced by a Chinese name tagger. We use an N-best approach to generate multiple hypotheses and have them re-ranked by subsequent stages of processing. We obtained thereby a reduction of 24% in spurious and incorrect name tags, and a reduction of 14% in missed tags. 1 Introduction Systems which extract relations or events from a document typically perform a number of types of linguistic analysis in preparation for information extraction. These include name identification and classification, parsing (or partial parsing), semantic classification of noun phrases, and coreference analysis. These tasks are reflected in the evaluation tasks introduced for MUC-6 (named entity, coreference, template element) and MUC-7 (template relation). In most extraction systems, these stages of analysis are arranged sequentially, with each stage using the results of prior stages and generating a single analysis that gets enriched by each stage. This provides a simple modular organization for the extraction system. Unfortunately, each stage also introduces a certain level of error into the analysis. Furthermore, these errors are compounded – for example, errors in name recognition may lead to errors in parsing. The net result is that the final output (relations or events) may be quite inaccurate. This paper considers how interactions between the stages can be exploited to reduce the error rate. For example, the results of coreference analysis or relation identification may be helpful in name classification, and the results of relation or event extraction may be helpful in coreference. Such interactions are not easily exploited in a simple sequential model … if name classification is performed at the beginning of the pipeline, it cannot make use of the results of subsequent stages. It may even be difficult to use this information implicitly, by using features which are also used in later stages, because the representation used in the initial stages is too limited. To address these limitations, some recent systems have used more parallel designs, in which a single classifier (incorporating a wide range of features) encompasses what were previously several separate stages (Kambhatla, 2004; Zelenko et al., 2004). This can reduce the compounding of errors of the sequential design. However, it leads to a very large feature space and makes it difficult to select linguistically appropriate features for particular analysis tasks. Furthermore, because these decisions are being made in parallel, it becomes much harder to express interactions between the levels of analysis based on linguistic intuitions. 
411 In order to capture these interactions more explicitly, we have employed a sequential design in which multiple hypotheses are forwarded from each stage to the next, with hypotheses being reranked and pruned using the information from later stages. We shall apply this design to show how named entity classification can be improved by ‘feedback’ from coreference analysis and relation extraction. We shall show that this approach can capture these interactions in a natural and efficient manner, yielding a substantial improvement in name identification and classification. 2 Prior Work A wide variety of trainable models have been applied to the name tagging task, including HMMs (Bikel et al., 1997), maximum entropy models (Borthwick, 1999), support vector machines (SVMs), and conditional random fields. People have spent considerable effort in engineering appropriate features to improve performance; most of these involve internal name structure or the immediate local context of the name. Some other named entity systems have explored global information for name tagging. (Borthwick, 1999) made a second tagging pass which uses information on token sequences tagged in the first pass; (Chieu and Ng, 2002) used as features information about features assigned to other instances of the same token. Recently, in (Ji and Grishman, 2004) we proposed a name tagging method which applied an SVM based on coreference information to filter the names with low confidence, and used coreference rules to correct and recover some names. One limitation of this method is that in the process of discarding many incorrect names, it also discarded some correct names. We attempted to recover some of these names by heuristic rules which are quite language specific. In addition, this singlehypothesis method placed an upper bound on recall. Traditional statistical name tagging methods have generated a single name hypothesis. BBN proposed the N-Best algorithm for speech recognition in (Chow and Schwartz, 1989). Since then NBest methods have been widely used by other researchers (Collins, 2002; Zhai et al., 2004). In this paper, we tried to combine the advantages of the prior work, and incorporate broader knowledge into a more general re-ranking model. 3 Task and Terminology Our experiments were conducted in the context of the ACE Information Extraction evaluations, and we will use the terminology of these evaluations: entity: an object or a set of objects in one of the semantic categories of interest mention: a reference to an entity (typically, a noun phrase) name mention: a reference by name to an entity nominal mention: a reference by a common noun or noun phrase to an entity relation: one of a specified set of relationships between a pair of entities The 2004 ACE evaluation had 7 types of entities, of which the most common were PER (persons), ORG (organizations), and GPE (‘geo-political entities’ – locations which are also political units, such as countries, counties, and cities). There were 7 types of relations, with 23 subtypes. Examples of these relations are “the CEO of Microsoft” (an employ-exec relation), “Fred’s wife” (a family relation), and “a military base in Germany” (a located relation). In this paper we look at the problem of identifying name mentions in Chinese text and classifying them as persons, organizations, or GPEs. Because Chinese has neither capitalization nor overt word boundaries, it poses particular problems for name identification. 
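Purely to illustrate the terminology above (these dataclasses are ours, not the authors' representation), mentions, entities and relations can be pictured as follows.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Mention:
    text: str      # surface string in the document
    kind: str      # "name", "nominal" or "pronoun"
    etype: str     # ACE entity type: "PER", "ORG", "GPE", ...

@dataclass
class Entity:
    etype: str
    mentions: List[Mention] = field(default_factory=list)   # coreferring mentions

@dataclass
class Relation:
    rtype: str     # e.g. "employ-exec", "family", "located"
    arg1: Mention
    arg2: Mention

# "the CEO of Microsoft": an employ-exec relation between a nominal PER
# mention and an ORG name mention.
rel = Relation("employ-exec",
               Mention("CEO", "nominal", "PER"),
               Mention("Microsoft", "name", "ORG"))
```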
4 Baseline System 4.1 Baseline Name Tagger Our baseline name tagger consists of a HMM tagger augmented with a set of post-processing rules. The HMM tagger generally follows the Nymble model (Bikel et al, 1997), but with multiple hypotheses as output and a larger number of states (12) to handle name prefixes and suffixes, and transliterated foreign names separately. It operates on the output of a word segmenter from Tsinghua University. Within each of the name class states, a statistical bigram model is employed, with the usual oneword-per-state emission. The various probabilities involve word co-occurrence, word features, and class probabilities. Then it uses A* search decoding to generate multiple hypotheses. Since these probabilities are estimated based on observations 412 seen in a corpus, “back-off models” are used to reflect the strength of support for a given statistic, as for the Nymble system. We also add post-processing rules to correct some omissions and systematic errors using name lists (for example, a list of all Chinese last names; lists of organization and location suffixes) and particular contextual patterns (for example, verbs occurring with people’s names). They also deal with abbreviations and nested organization names. The HMM tagger also computes the margin – the difference between the log probabilities of the top two hypotheses. This is used as a rough measure of confidence in the top hypothesis (see sections 5.3 and 6.2, below). The name tagger used for these experiments identifies the three main ACE entity types: Person (PER), Organization (ORG), and GPE (names of the other ACE types are identified by a separate component of our system, not involved in the experiments reported here). 4.2 Nominal Mention Tagger Our nominal mention tagger (noun group recognizer) is a maximum entropy tagger trained on the Chinese TreeBank from the University of Pennsylvania, supplemented by list matching. 4.3 Reference Resolver Our baseline reference resolver goes through two successive stages: first, coreference rules will identify some high-confidence positive and negative mention pairs, in training data and test data; then the remaining samples will be used as input of a maximum entropy tagger. The features used in this tagger involve distance, string matching, lexical information, position, semantics, etc. We separate the task into different classifiers for different mention types (name / noun / pronoun). Then we incorporate the results from the relation tagger to adjust the probabilities from the classifiers. Finally we apply a clustering algorithm to combine them into entities (sets of coreferring mentions). 4.4 Relation Tagger The relation tagger uses a k-nearest-neighbor algorithm. For both training and test, we consider all pairs of entity mentions where there is at most one other mention between the heads of the two mentions of interest1. Each training / test example consists of the pair of mentions and the sequence of intervening words. Associated with each training example is either one of the ACE relation types or no relation at all. We defined a distance metric between two examples based on whether the heads of the mentions match whether the ACE types of the heads of the mentions match (for example, both are people or both are organizations) whether the intervening words match To tag a test example, we find the k nearest training examples (where k = 3) and use the distance to weight each neighbor, then select the most common class in the weighted neighbor set. 
To provide a crude measure of the confidence of our relation tagger, we define two thresholds, Dnear and Dfar. If the average distance d to the nearest neighbors d < Dnear, we consider this a definite relation. If Dnear < d < Dfar, we consider this a possible relation. If d > Dfar, the tagger assumes that no relation exists (regardless of the class of the nearest neighbor). 5 Information from Coreference and Relations Our system is processing a document consisting of multiple sentences. For each sentence, the name recognizer generates multiple hypotheses, each of which is an NE tagging of the entire sentence. The names in the hypothesis, plus the nouns in the categories of interest constitute the mention set for that hypothesis. Coreference resolution links these mentions, assigning each to an entity. In symbols: Si is the i-th sentence in the document. Hi is the hypotheses set for Si hij is the j-th hypothesis in Si Mij is the mention set for hij mijk is the k-th mention in Mij eijk is the entity which mijk belongs to according to the current reference resolution results 5.1 Coreference Features For each mention we compute seven quantities based on the results of name tagging and reference resolution: 1 This constraint is relaxed for parallel structures such as “mention1, mention2, [and] mention3….”; in such cases there can be more than one intervening mention. 413 CorefNumijk is the number of mentions in eijk WeightSumijk is the sum of all the link weights between mijk and other mentions in eijk , 0.8 for name-name coreference; 0.5 for apposition; 0.3 for other name-nominal coreference FirstMentionijk is 1 if mijk is the first name mention in the entity; otherwise 0 Headijk is 1 if mijk includes the head word of name; otherwise 0 Withoutidiomijk is 1 if mijk is not part of an idiom; otherwise 0 PERContextijk is the number of PER context words around a PER name such as a title or an action verb involving a PER ORGSuffixijk is 1 if ORGmijk includes a suffix word; otherwise 0 The first three capture evidence of the correctness of a name provided by reference resolution; for example, a name which is coreferenced with more other mentions is more likely to be correct. The last four capture local or name-internal evidence; for instance, that an organization name includes an explicit, organization-indicating suffix. We then compute, for each of these seven quantities, the sum over all mentions k in a sentence, obtaining values for CorefNumij, WeightSumij, etc.: CorefNum CorefNum ij ijk k = ∑ etc. Finally, we determine, for a given sentence and hypothesis, for each of these seven quantities, whether this quantity achieves the maximum of its values for this hypothesis: BestCorefNumij ≡ CorefNumij = maxq CorefNumiq etc. We will use these properties of the hypothesis as features in assessing the quality of a hypothesis. 5.2 Relation Word Clusters In addition to using relation information for reranking name hypotheses, we used the relation training corpus to build word clusters which could more directly improve name tagging. Name taggers rely heavily on words in the immediate context to identify and classify names; for example, specific job titles, occupations, or family relations can be used to identify people names. Such words are learned individually from the name tagger’s training corpus. If we can provide the name tagger with clusters of related words, the tagger will be able to generalize from the examples in the training corpus to other words in the cluster. 
The set of ACE relations includes several involving employment, social, and family relations. We gathered the words appearing as an argument of one of these relations in the training corpus, eliminated low-frequency terms and manually edited the ten resulting clusters to remove inappropriate terms. These were then combined with lists (of titles, organization name suffixes, location suffixes) used in the baseline tagger. 5.3 Relation Features Because the performance of our relation tagger is not as good as our coreference resolver, we have used the results of relation detection in a relatively simple way to enhance name detection. The basic intuition is that a name which has been correctly identified is more likely to participate in a relation than one which has been erroneously identified. For a given range of margins (from the HMM), the probability that a name in the first hypothesis is correct is shown in the following table, for names participating and not participating in a relation: Margin In Relation(%) Not in Relation(%) <4 90.7 55.3 <3 89.0 50.1 <2 86.9 42.2 <1.5 81.3 28.9 <1.2 78.8 23.1 <1 75.7 19.0 <0.5 66.5 14.3 Table 1 Probability of a name being correct Table 1 confirms that names participating in relations are much more likely to be correct than names that do not participate in relations. We also see, not surprisingly, that these probabilities are strongly affected by the HMM hypothesis margin (the difference in log probabilities) between the first hypothesis and the second hypothesis. So it is natural to use participation in a relation (coupled with a margin value) as a valuable feature for reranking name hypotheses. Let mijk be the k-th name mention for hypothesishij of sentence; then we define: 414 Inrelationijk = 1 if mijk is in a definite relation = 0 if mijk is in a possible relation = -1 if mijk is not in a relation Inrelation Inrelation ij ijk k = ∑ Mostrelated Inrelation Inrelation ij ij q iq ≡ = ( max ) Finally, to capture the interaction with the margin, we let zi = the margin for sentence Si and divide the range of values of zi into six intervals Mar1, … Mar6. And we define the hypothesis ranking information: FirstHypothesisij = 1 if j =1; otherwise 0. We will use as features for ranking hij the conjunction of Mostrelatedij, zi ∈ Marp (p = 1, …, 6), and FirstHypothesisij . 6 Using the Information from Coreference and Relations 6.1 Word Clustering based on Relations As we described in section 5.2, we can generate word clusters based on relation information. If a word is not part of a relation cluster, we consider it an independent (1-word) cluster. The Nymble name tagger (Bikel et al., 1999) relies on a multi-level linear interpolation model for backoff. We extended this model by adding a level from word to cluster, so as to estimate more reliable probabilities for words in these clusters. Table 2 shows the extended backoff model for each of the three probabilities used by Nymble. Transition Probability First-Word Emission Probability Non-First-Word Emission Probability P(NC2|NC1, <w1, f1>) P(<w2,f2>| NC1, NC2) P(<w2,f2>| <w1,f1>, NC2) P(<Cluster2,f2>| NC1, NC2) P(<Cluster2,f2>| <w1,f1>, NC2) P(NC2|NC1, <Cluster1, f1>) P(<Cluster2,f2>| <+begin+, other>, NC2) P(<Cluster2,f2>| <Cluster1,f1>, NC2) P(NC2|NC1) P(<Cluster2, f2>|NC2) P(NC2) P(Cluster2|NC2) * P(f2|NC2) 1/#(name classes) 1/#(cluster) * 1/#(word features) Table2 Extended Backoff Model 6.2 Pre-pruning by Margin The HMM tagger produces the N best hypotheses for each sentence. 
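Similarly, the relation-participation features of section 5.3 can be sketched in code. The thresholds Dnear and Dfar and the margin intervals Mar1..Mar6 are taken as given inputs, and the cross-hypothesis Mostrelated indicator would be computed from the returned Inrelation sums in exactly the same way as the Best* coreference indicators above.

```python
def inrelation_value(avg_distance, d_near, d_far):
    """Map the average k-NN distance of a name mention to the Inrelation
    score defined above: +1 definite, 0 possible, -1 no relation."""
    if avg_distance < d_near:
        return 1
    if avg_distance < d_far:
        return 0
    return -1

def relation_features(mention_values, margin, margin_bins, is_first_hypothesis):
    """Per-hypothesis relation features: the Inrelation sum, the margin
    interval the sentence falls into, and the first-hypothesis flag.
    `margin_bins` is a list of (low, high) pairs standing in for Mar1..Mar6."""
    bin_id = next((i for i, (lo, hi) in enumerate(margin_bins)
                   if lo <= margin < hi), len(margin_bins) - 1)
    return {
        "Inrelation": sum(mention_values),
        "MarginBin": bin_id,
        "FirstHypothesis": int(is_first_hypothesis),
    }
```

The actual re-ranking features are conjunctions of the Mostrelated indicator, the margin interval and the first-hypothesis flag, which can be formed by simply concatenating the corresponding keys.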
2 In order to decide when we need to rely on global (coreference and relation) information for name tagging, we want to have some assessment of the confidence that the name tagger has in the first hypothesis. In this paper, we use the margin for this purpose. A large margin indicates greater confidence that the first hypothesis is correct.3 So if the margin of a sentence is above a threshold, we select the first hypothesis, dropping the others and by-passing the reranking. 6.3 Re-ranking based on Coreference We described in section 5.1, above, the coreference features which will be used for reranking the hypotheses after pre-pruning. A maximum entropy model for re-ranking these hypotheses is then trained and applied as follows: Training 1. Use K-fold cross-validation to generate multiple name tagging hypotheses for each document in the training data Dtrain (in each of the K iterations, we use K-1 subsets to train the HMM and then generate hypotheses from the Kth subset). 2. For each document d in Dtrain, where d includes n sentences S1…Sn For i = 1…n, let m = the number of hypotheses for Si (1) Pre-prune the candidate hypotheses using the HMM margin (2) For each hypothesis hij, j = 1…m (a) Compare hij with the key, set the prediction Valueij “Best” or “Not Best” (b) Run the Coreference Resolver on hij and the best hypothesis for each of the other sentences, generate entity results for each candidate name in hij (c) Generate a coreference feature vector Vij for hij (d) Output Vij and Valueij 2 We set different N = 5, 10, 20 or 30 for different margin ranges, by crossvalidation checking the training data about the ranking position of the best hypothesis for each sentence. With this N, optimal reranking (selecting the best hypothesis among the N best) would yield Precision = 96.9 Recall = 94.5 F = 95.7 on our test corpus. 3 Similar methods based on HMM margins were used by (Scheffer et al., 2001). 415 3. Train Maxent Re-ranking system on all Vij and Valueij Test 1. Run the baseline name tagger to generate multiple name tagging hypotheses for each document in the test data Dtest 2. For each document d in Dtest, where d includes n sentences S1…Sn (1) Initialize: Dynamic input of coreference resolver H = {hi-best | i = 1…n, hi-best is the current best hypothesis for Si} (2) For i = 1…n, assume m = the number of hypotheses for Si (a) Pre-prune the candidate hypotheses using the HMM margin (b) For each hypothesis hij, j = 1…m • hi-best = hij • Run the Coreference Resolver on H, generate entity results for each name candidate in hij • Generate a coreference feature vector Vij for hij • Run Maxent Re-ranking system on Vij, produce Probij of “Best” value (c) hi-best = the hypothesis with highest Probij of “Best” value, update H and output hi-best 6.4 Re-ranking based on Relations From the above first-stage re-ranking by coreference, for each hypothesis we got the probability of its being the best one. By using these results and relation information we proceed to a second-stage re-ranking. As we described in section 5.3, the information of “in relation or not” can be used together with margin as another important measure of confidence. In addition, we apply the mechanism of weighted voting among hypotheses (Zhai et al., 2004) as an additional feature in this second-stage re-ranking. This approach allows all hypotheses to vote on a possible name output. A recognized name is considered correct only when it occurs in more than 30 percent of the hypotheses (weighted by their probability). 
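The test procedure of section 6.3 can be condensed into the loop below. The coreference resolver, the feature extractor and the maxent scorer are passed in as placeholder callables; these are hypothetical interfaces standing in for the real components, and the hypothesis lists are assumed to have already been pre-pruned by the HMM margin.

```python
def rerank_by_coreference(sentences, resolve_coref, featurize, prob_best,
                          margin_threshold):
    """`sentences` is a list of dicts, each holding the pre-pruned N-best
    name "hypotheses" of one sentence and its HMM "margin".  Returns the
    selected hypothesis for every sentence."""
    # H: initialize with the HMM's first hypothesis for every sentence.
    best = [s["hypotheses"][0] for s in sentences]
    for i, sent in enumerate(sentences):
        if sent["margin"] > margin_threshold:
            continue                          # confident: keep hypothesis 1
        top_prob, top_hyp = -1.0, best[i]
        for hyp in sent["hypotheses"]:
            trial = list(best)
            trial[i] = hyp                    # plug the candidate into H
            entities = resolve_coref(trial)   # document-level coreference
            p = prob_best(featurize(hyp, entities))  # P("Best" | features)
            if p > top_prob:
                top_prob, top_hyp = p, hyp
        best[i] = top_hyp                     # update H before moving on
    return best
```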
In our experiments we use the probability produced by the HMM, probij , for hypothesishij . We normalize this probability weight as: W prob prob ij ij iq q = ∑ exp( ) exp( ) For each name mention mijk inhij , we define: Occur m q ijk ( ) = 1 if mijk occurs in hq = 0 otherwise Then we count its voting value as follows: Votingijk is 1 if W Occur m iq q ijk q × ∑ ( ) >0.3; otherwise 0. The voting value of hij is: Voting Voting ij ijk k = ∑ Finally we define the following voting feature: BestVoting Voting Voting ij ij q iq ≡ = ( max ) This feature is used, together with the features described at the end of section 5.3 and the probability score from the first stage, for the secondstage maxent re-ranking model. One appeal of the above two re-ranking algorithms is its flexibility in incorporating features into a learning model: essentially any coreference or relation features which might be useful in discriminating good from bad structures can be included. 7 System Pipeline Combining all the methods presented above, the flow of our final system is shown in figure 1. 8 Evaluation Results 8.1 Training and Test Data We took 346 documents from the 2004 ACE training corpus and official test set, including both broadcast news and newswire, as our blind test set. To train our name tagger, we used the Beijing University Insititute of Computational Linguistics corpus – 2978 documents from the People’s Daily in 1998 – and 667 texts in the training corpus for the 2003 & 2004 ACE evaluation. Our reference resolver is trained on these 667 ACE texts. The relation tagger is trained on 546 ACE 2004 texts, from which we also extracted the relation clusters. The test set included 11715 names: 3551 persons, 5100 GPEs and 3064 organizations. 416 Figure 1 System Flow 8.2 Overall Performance Comparison Table 3 shows the performance of the baseline system; Table 4 is the system with relation word clusters; Table 5 is the system with both relation clusters and re-ranking based on coreference features; and Table 6 is the whole system with second-stage re-ranking using relations. The results indicate that relation word clusters help to improve the precision and recall of most name types. Although the overall gain in F-score is small (0.7%), we believe further gain can be achieved if the relation corpus is enlarged in the future. The re-ranking using the coreference features had the largest impact, improving precision and recall consistently for all types. Compared to our system in (Ji and Grishman, 2004), it helps to distinguish the good and bad hypotheses without any loss of recall. The second-stage re-ranking using the relation participation feature yielded a small further gain in F score for each type, improving precision at a slight cost in recall. The overall system achieves a 24.1% relative reduction on the spurious and incorrect tags, and 14.3% reduction in the missing rate over a state-ofthe-art baseline HMM trained on the same material. Furthermore, it helps to disambiguate many name type errors: the number of cases of type confusion in name classification was reduced from 191 to 102. 
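Stepping back to the voting feature of section 6.4 before the result tables, the weight normalization and the 30 percent occurrence test translate directly into code. Name mentions are assumed to be hashable tuples (say, span plus type) so that occurrence in another hypothesis can be checked by set membership, and the HMM scores are treated as log probabilities, which makes the exponential normalization a softmax.

```python
import math

def voting_features(hypotheses, log_probs, threshold=0.3):
    """`hypotheses` is the N-best list for one sentence, each hypothesis a
    set of name mentions; `log_probs` holds the HMM score of each.
    Returns each hypothesis's Voting count and its BestVoting indicator."""
    # Softmax-normalize the scores into weights W_iq (shifting by the
    # maximum only guards against overflow; the weights are unchanged).
    m = max(log_probs)
    exps = [math.exp(lp - m) for lp in log_probs]
    z = sum(exps)
    weights = [e / z for e in exps]

    votings = []
    for names in hypotheses:
        count = 0
        for name in names:
            support = sum(w for other, w in zip(hypotheses, weights)
                          if name in other)
            if support > threshold:        # weighted occurrence above 30%
                count += 1
        votings.append(count)

    best = max(votings)
    return votings, [int(v == best) for v in votings]
```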
Name Precision Recall F PER 88.6 89.2 88.9 GPE 88.1 84.9 86.5 ORG 88.8 87.3 88.0 ALL 88.4 86.7 87.5 Table 3 Baseline Name Tagger Name Precision Recall F PER 89.4 90.1 89.7 GPE 88.9 85.8 89.4 ORG 88.7 87.4 88.0 ALL 89.0 87.4 88.2 Table 4 Baseline + Word Clustering by Relation Name Precision Recall F PER 90.1 91.2 90.5 GPE 89.7 86.8 88.2 ORG 90.6 89.8 90.2 ALL 90.0 88.8 89.4 Table 5 Baseline + Word Clustering by Relation + Re-ranking by Coreference Name Precision Recall F PER 90.7 91.0 90.8 GPE 91.2 86.9 89.0 ORG 91.7 89.1 90.4 ALL 91.2 88.6 89.9 Table 6 Baseline + Word Clustering by Relation + Re-ranking by Coreference + Re-ranking by Relation In order to check how robust these methods are, we conducted significance testing (sign test) on the 346 documents. We split them into 5 folders, 70 documents in each of the first four folders and 66 in the fifth folder. We found that each enhancement (word clusters, coreference reranking, relation reranking) produced an improvement in F score for each folder, allowing us to reject the hypothesis that these improvements were random at a 95% confidence level. The overall F-measure improvements (using all enhancements) for the 5 folders were: 2.3%, 1.6%, 2.1%, 3.5%, and 2.1%. HMM Name Tagger, word clustering based on relations, pruned by margin Multiple name hypotheses Maxent Re-ranking by coreference Single name hypothesis Post-processing by heuristic rules Input Nominal Mention Tagger Nominal Mentions Relation Tagger Maxent Re-ranking by relation Coreference Resolver 417 9 Conclusion This paper explored methods for exploiting the interaction of analysis components in an information extraction system to reduce the error rate of individual components. The ACE task hierarchy provided a good opportunity to explore these interactions, including the one presented here between reference resolution/relation detection and name tagging. We demonstrated its effectiveness for Chinese name tagging, obtaining an absolute improvement of 2.4% in F-measure (a reduction of 19% in the (1 – F) error rate). These methods are quite low-cost because we don’t need any extra resources or components compared to the baseline information extraction system. Because no language-specific rules are involved and no additional training resources are required, we expect that the approach described here can be straightforwardly applied to other languages. It should also be possible to extend this re-ranking framework to other levels of analysis in information extraction –- for example, to use event detection to improve name tagging; to incorporate subtype tagging results to improve name tagging; and to combine name tagging, reference resolution and relation detection to improve nominal mention tagging. For Chinese (and other languages without overt word segmentation) it could also be extended to do character-based name tagging, keeping multiple segmentations among the N-Best hypotheses. Also, as information extraction is extended to capture cross-document information, we should expect further improvements in performance of the earlier stages of analysis, including in particular name identification. For some levels of analysis, such as name tagging, it will be natural to apply lattice techniques to organize the multiple hypotheses, at some gain in efficiency. Acknowledgements This research was supported by the Defense Advanced Research Projects Agency under Grant N66001-04-1-8920 from SPAWAR San Diego, and by the National Science Foundation under Grant 03-25657. 
This paper does not necessarily reflect the position or the policy of the U.S. Government. References Daniel M. Bikel, Scott Miller, Richard Schwartz, and Ralph Weischedel. 1997. Nymble: a highperformance Learning Name-finder. Proc. Fifth Conf. on Applied Natural Language Processing, Washington, D.C. Andrew Borthwick. 1999. A Maximum Entropy Approach to Named Entity Recognition. Ph.D. Dissertation, Dept. of Computer Science, New York University. Hai Leong Chieu and Hwee Tou Ng. 2002. Named Entity Recognition: A Maximum Entropy Approach Using Global Information. Proc.: 17th Int’l Conf. on Computational Linguistics (COLING 2002), Taipei, Taiwan. Yen-Lu Chow and Richard Schwartz. 1989. The N-Best Algorithm: An efficient Procedure for Finding Top N Sentence Hypotheses. Proc. DARPA Speech and Natural Language Workshop Michael Collins. 2002. Ranking Algorithms for NamedEntity Extraction: Boosting and the Voted Perceptron. Proc. ACL 2002 Heng Ji and Ralph Grishman. 2004. Applying Coreference to Improve Name Recognition. Proc. ACL 2004 Workshop on Reference Resolution and Its Applications, Barcelona, Spain N. Kambhatla. 2004. Combining Lexical, Syntactic, and Semantic Features with Maximum Entropy Models for Extracting Relations. Proc. ACL 2004. Tobias Scheffer, Christian Decomain, and Stefan Wrobel. 2001. Active Hidden Markov Models for Information Extraction. Proc. Int’l Symposium on Intelligent Data Analysis (IDA-2001). Dmitry Zelenko, Chinatsu Aone, and Jason Tibbets. 2004. Binary Integer Programming for Information Extraction. ACE Evaluation Meeting, September 2004, Alexandria, VA. Lufeng Zhai, Pascale Fung, Richard Schwartz, Marine Carpuat, and Dekai Wu. 2004. Using N-best Lists for Named Entity Recognition from Chinese Speech. Proc. NAACL 2004 (Short Papers) 418 | 2005 | 51 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 419–426, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Extracting Relations with Integrated Information Using Kernel Methods Shubin Zhao Ralph Grishman Department of Computer Science New York University 715 Broadway, 7th Floor, New York, NY 10003 [email protected] [email protected] Abstract Entity relation detection is a form of information extraction that finds predefined relations between pairs of entities in text. This paper describes a relation detection approach that combines clues from different levels of syntactic processing using kernel methods. Information from three different levels of processing is considered: tokenization, sentence parsing and deep dependency analysis. Each source of information is represented by kernel functions. Then composite kernels are developed to integrate and extend individual kernels so that processing errors occurring at one level can be overcome by information from other levels. We present an evaluation of these methods on the 2004 ACE relation detection task, using Support Vector Machines, and show that each level of syntactic processing contributes useful information for this task. When evaluated on the official test data, our approach produced very competitive ACE value scores. We also compare the SVM with KNN on different kernels. 1 Introduction Information extraction subsumes a broad range of tasks, including the extraction of entities, relations and events from various text sources, such as newswire documents and broadcast transcripts. One such task, relation detection, finds instances of predefined relations between pairs of entities, such as a Located-In relation between the entities Centre College and Danville, KY in the phrase Centre College in Danville, KY. The ‘entities’ are the individuals of selected semantic types (such as people, organizations, countries, …) which are referred to in the text. Prior approaches to this task (Miller et al., 2000; Zelenko et al., 2003) have relied on partial or full syntactic analysis. Syntactic analysis can find relations not readily identified based on sequences of tokens alone. Even ‘deeper’ representations, such as logical syntactic relations or predicate-argument structure, can in principle capture additional generalizations and thus lead to the identification of additional instances of relations. However, a general problem in Natural Language Processing is that as the processing gets deeper, it becomes less accurate. For instance, the current accuracy of tokenization, chunking and sentence parsing for English is about 99%, 92%, and 90% respectively. Algorithms based solely on deeper representations inevitably suffer from the errors in computing these representations. On the other hand, low level processing such as tokenization will be more accurate, and may also contain useful information missed by deep processing of text. Systems based on a single level of representation are forced to choose between shallower representations, which will have fewer errors, and deeper representations, which may be more general. Based on these observations, Zhao et al. (2004) proposed a discriminative model to combine information from different syntactic sources using a kernel SVM (Support Vector Machine). We showed that adding sentence level word trigrams as global information to local dependency context boosted the performance of finding slot fillers for 419 management succession events. 
This paper describes an extension of this approach to the identification of entity relations, in which syntactic information from sentence tokenization, parsing and deep dependency analysis is combined using kernel methods. At each level, kernel functions (or kernels) are developed to represent the syntactic information. Five kernels have been developed for this task, including two at the surface level, one at the parsing level and two at the deep dependency level. Our experiments show that each level of processing may contribute useful clues for this task, including surface information like word bigrams. Adding kernels one by one continuously improves performance. The experiments were carried out on the ACE RDR (Relation Detection and Recognition) task with annotated entities. Using SVM as a classifier along with the full composite kernel produced the best performance on this task. This paper will also show a comparison of SVM and KNN (k-Nearest-Neighbors) under different kernel setups. 2 Kernel Methods Many machine learning algorithms involve only the dot product of vectors in a feature space, in which each vector represents an object in the object domain. Kernel methods (Muller et al., 2001) can be seen as a generalization of feature-based algorithms, in which the dot product is replaced by a kernel function (or kernel) Ψ(X,Y) between two vectors, or even between two objects. Mathematically, as long as Ψ(X,Y) is symmetric and the kernel matrix formed by Ψ is positive semi-definite, it forms a valid dot product in an implicit Hilbert space. In this implicit space, a kernel can be broken down into features, although the dimension of the feature space could be infinite. Normal feature-based learning can be implemented in kernel functions, but we can do more than that with kernels. First, there are many wellknown kernels, such as polynomial and radial basis kernels, which extend normal features into a high order space with very little computational cost. This could make a linearly non-separable problem separable in the high order feature space. Second, kernel functions have many nice combination properties: for example, the sum or product of existing kernels is a valid kernel. This forms the basis for the approach described in this paper. With these combination properties, we can combine individual kernels representing information from different sources in a principled way. Many classifiers can be used with kernels. The most popular ones are SVM, KNN, and voted perceptrons. Support Vector Machines (Vapnik, 1998; Cristianini and Shawe-Taylor, 2000) are linear classifiers that produce a separating hyperplane with largest margin. This property gives it good generalization ability in high-dimensional spaces, making it a good classifier for our approach where using all the levels of linguistic clues could result in a huge number of features. Given all the levels of features incorporated in kernels and training data with target examples labeled, an SVM can pick up the features that best separate the targets from other examples, no matter which level these features are from. In cases where an error occurs in one processing result (especially deep processing) and the features related to it become noisy, SVM may pick up clues from other sources which are not so noisy. This forms the basic idea of our approach. 
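As a small, self-contained illustration of the combination property relied on here (a sum of valid kernels is again a valid kernel, so kernels over different information sources can simply be added), the sketch below builds a Gram matrix from two toy kernels and trains an SVM through a precomputed-kernel interface. The toy kernels, the four examples and the use of scikit-learn are purely illustrative; they are not the kernels or the SVM implementation used in this paper.

```python
import numpy as np
from sklearn.svm import SVC

def k_word(x, y):
    """Toy surface-level kernel: number of shared tokens (a dot product
    of binary bag-of-words vectors, hence a valid kernel)."""
    return float(len(set(x["tokens"]) & set(y["tokens"])))

def k_type(x, y):
    """Toy argument-type kernel: 1 if the entity-type pairs agree."""
    return float(x["types"] == y["types"])

def gram_matrix(examples, kernels):
    """Gram matrix of the summed kernel."""
    n = len(examples)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = sum(k(examples[i], examples[j]) for k in kernels)
    return K

examples = [
    {"tokens": ("ceo", "of", "microsoft"), "types": ("PER", "ORG")},
    {"tokens": ("president", "of", "france"), "types": ("PER", "GPE")},
    {"tokens": ("head", "of", "ibm"), "types": ("PER", "ORG")},
    {"tokens": ("base", "in", "germany"), "types": ("FAC", "GPE")},
]
labels = [1, 0, 1, 0]   # 1 = some target relation holds, 0 = it does not

K_train = gram_matrix(examples, [k_word, k_type])
clf = SVC(kernel="precomputed").fit(K_train, labels)
```

At test time the same kernels are evaluated between each test example and the training examples to form the rectangular kernel matrix handed to the classifier.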
Therefore under this scheme we can overcome errors introduced by one processing level; more particularly, we expect accurate low level information to help with less accurate deep level information. 3 Related Work Collins et al. (1997) and Miller et al. (2000) used statistical parsing models to extract relational facts from text, which avoided pipeline processing of data. However, their results are essentially based on the output of sentence parsing, which is a deep processing of text. So their approaches are vulnerable to errors in parsing. Collins et al. (1997) addressed a simplified task within a confined context in a target sentence. Zelenko et al. (2003) described a recursive kernel based on shallow parse trees to detect personaffiliation and organization-location relations, in which a relation example is the least common subtree containing two entity nodes. The kernel matches nodes starting from the roots of two subtrees and going recursively to the leaves. For each pair of nodes, a subsequence kernel on their child nodes is invoked, which matches either contiguous or non-contiguous subsequences of node. Compared with full parsing, shallow parsing is more reliable. But this model is based solely on the out420 put of shallow parsing so it is still vulnerable to irrecoverable parsing errors. In their experiments, incorrectly parsed sentences were eliminated. Culotta and Sorensen (2004) described a slightly generalized version of this kernel based on dependency trees. Since their kernel is a recursive match from the root of a dependency tree down to the leaves where the entity nodes reside, a successful match of two relation examples requires their entity nodes to be at the same depth of the tree. This is a strong constraint on the matching of syntax so it is not surprising that the model has good precision but very low recall. In their solution a bag-of-words kernel was used to compensate for this problem. In our approach, more flexible kernels are used to capture regularization in syntax, and more levels of syntactic information are considered. Kambhatla (2004) described a Maximum Entropy model using features from various syntactic sources, but the number of features they used is limited and the selection of features has to be a manual process.1 In our model, we use kernels to incorporate more syntactic information and let a Support Vector Machine decide which clue is crucial. Some of the kernels are extended to generate high order features. We think a discriminative classifier trained with all the available syntactic features should do better on the sparse data. 4 Kernel Relation Detection 4.1 ACE Relation Detection Task ACE (Automatic Content Extraction)2 is a research and development program in information extraction sponsored by the U.S. Government. The 2004 evaluation defined seven major types of relations between seven types of entities. The entity types are PER (Person), ORG (Organization), FAC (Facility), GPE (Geo-Political Entity: countries, cities, etc.), LOC (Location), WEA (Weapon) and VEH (Vehicle). Each mention of an entity has a mention type: NAM (proper name), NOM (nominal) or 1 Kambhatla also evaluated his system on the ACE relation detection task, but the results are reported for the 2003 task, which used different relations and different training and test data, and did not use hand-annotated entities, so they cannot be readily compared to our results. 
2Task description: http://www.itl.nist.gov/iad/894.01/tests/ace/ ACE guidelines: http://www.ldc.upenn.edu/Projects/ACE/ PRO (pronoun); for example George W. Bush, the president and he respectively. The seven relation types are EMP-ORG (Employment/Membership/Subsidiary), PHYS (Physical), PER-SOC (Personal/Social), GPE-AFF (GPEAffiliation), Other-AFF (Person/ORG Affiliation), ART (Agent-Artifact) and DISC (Discourse). There are also 27 relation subtypes defined by ACE, but this paper only focuses on detection of relation types. Table 1 lists examples of each relation type. Type Example EMP-ORG the CEO of Microsoft PHYS a military base in Germany GPE-AFF U.S. businessman PER-SOC a spokesman for the senator DISC many of these people ART the makers of the Kursk Other-AFF Cuban-American people Table 1. ACE relation types and examples. The heads of the two entity arguments in a relation are marked. Types are listed in decreasing order of frequency of occurrence in the ACE corpus. Figure 1 shows a sample newswire sentence, in which three relations are marked. In this sentence, we expect to find a PHYS relation between Hezbollah forces and areas, a PHYS relation between Syrian troops and areas and an EMP-ORG relation between Syrian troops and Syrian. In our approach, input text is preprocessed by the Charniak sentence parser (including tokenization and POS tagging) and the GLARF (Meyers et al., 2001) dependency analyzer produced by NYU. Based on treebank parsing, GLARF produces labeled deep dependencies between words (syntactic relations such as logical subject and logical object). It handles linguistic phenomena like passives, relatives, reduced relatives, conjunctions, etc. Figure 1. Example sentence from newswire text 4.2 Definitions In our model, kernels incorporate information from PHYS PHYS EMP-ORG That's because Israel was expected to retaliate against Hezbollah forces in areas controlled by Syrian troops. 421 tokenization, parsing and deep dependency analysis. A relation candidate R is defined as R = (arg1, arg2, seq, link, path), where arg1 and arg2 are the two entity arguments which may be related; seq=(t1, t2, …, tn) is a token vector that covers the arguments and intervening words; link=(t1, t2, …, tm) is also a token vector, generated from seq and the parse tree; path is a dependency path connecting arg1 and arg2 in the dependency graph produced by GLARF. path can be empty if no such dependency path exists. The difference between link and seq is that link only retains the “important” words in seq in terms of syntax. For example, all noun phrases occurring in seq are replaced by their heads. Words and constituent types in a stop list, such as time expressions, are also removed. A token T is defined as a string triple, T = (word, pos, base), where word, pos and base are strings representing the word, part-of-speech and morphological base form of T. Entity is a token augmented with other attributes, E = (tk, type, subtype, mtype), where tk is the token associated with E; type, subtype and mtype are strings representing the entity type, subtype and mention type of E. The subtype contains more specific information about an entity. For example, for a GPE entity, the subtype tells whether it is a country name, city name and so on. Mention type includes NAM, NOM and PRO. 
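One possible rendering of these definitions as Python dataclasses is given below (it also anticipates the dependency token and dependency arc introduced in the next paragraphs). The field names follow the paper; the concrete types and defaults are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Token:
    word: str
    pos: str
    base: str          # morphological base form

@dataclass
class Arc:
    w: str             # current token
    dw: str            # token connected to w by the dependency
    label: str         # role label, e.g. OBJ, SBJ, A-POS
    e: int             # direction: 0 = forward, 1 = backward

@dataclass
class DepToken(Token):
    dseq: List[Arc] = field(default_factory=list)  # dependency arcs on this token

@dataclass
class Entity:
    tk: DepToken
    type: str          # e.g. LOC, ORG, PER
    subtype: str       # e.g. Region, Government
    mtype: str         # NAM, NOM or PRO

@dataclass
class RelationExample:
    arg1: Entity
    arg2: Entity
    seq: List[Token]   # tokens covering the arguments and the words between
    link: List[Token]  # seq with syntactically unimportant words removed
    path: List[Arc]    # dependency chain from arg1 to arg2 (may be empty)
```

For the "areas controlled by Syrian troops" example discussed below, arg1 would carry type LOC and subtype Region, arg2 type ORG and subtype Government, and path the two-arc OBJ/SBJ chain through "controlled".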
It is worth pointing out that we always treat an entity as a single token: for a nominal, it refers to its head, such as boys in the two boys; for a proper name, all the words are connected into one token, such as Bashar_Assad. So in a relation example R whose seq is (t1, t2, …, tn), it is always true that arg1=t1 and arg2=tn. For names, the base form of an entity is its ACE type (person, organization, etc.). To introduce dependencies, we define a dependency token to be a token augmented with a vector of dependency arcs, DT=(word, pos, base, dseq), where dseq = (arc1, ... , arcn ). A dependency arc is ARC = (w, dw, label, e), where w is the current token; dw is a token connected by a dependency to w; and label and e are the role label and direction of this dependency arc respectively. From now on we upgrade the type of tk in arg1 and arg2 to be dependency tokens. Finally, path is a vector of dependency arcs, path = (arc1 , ... , arcl ), where l is the length of the path and arci (1≤i≤l) satisfies arc1.w=arg1.tk, arci+1.w=arci.dw and arcl.dw=arg2.tk. So path is a chain of dependencies connecting the two arguments in R. The arcs in it do not have to be in the same direction. Figure 2. Illustration of a relation example R. The link sequence is generated from seq by removing some unimportant words based on syntax. The dependency links are generated by GLARF. Figure 2 shows a relation example generated from the text “… in areas controlled by Syrian troops”. In this relation example R, arg1 is ((“areas”, “NNS”, “area”, dseq), “LOC”, “Region”, “NOM”), and arg1.dseq is ((OBJ, areas, in, 1), (OBJ, areas, controlled, 1)). arg2 is ((“troops”, “NNS”, “troop”, dseq), “ORG”, “Government”, “NOM”) and arg2.dseq = ((A-POS, troops, Syrian, 0), (SBJ, troops, controlled, 1)). path is ((OBJ, areas, controlled, 1), (SBJ, controlled, troops, 0)). The value 0 in a dependency arc indicates forward direction from w to dw, and 1 indicates backward direction. The seq and link sequences of R are shown in Figure 2. Some relations occur only between very restricted types of entities, but this is not true for every type of relation. For example, PER-SOC is a relation mainly between two person entities, while PHYS can happen between any type of entity and a GPE or LOC entity. 4.3 Syntactic Kernels In this section we will describe the kernels designed for different syntactic sources and explain the intuition behind them. We define two kernels to match relation examples at surface level. Using the notation just defined, we can write the two surface kernels as follows: 1) Argument kernel troops areas controlled by A-POS OBJ arg1 arg2 SBJ OBJ path in seq link areas controlled by Syrian troops COMP 422 where KE is a kernel that matches two entities, KT is a kernel that matches two tokens. I(x, y) is a binary string match operator that gives 1 if x=y and 0 otherwise. Kernel Ψ1 matches attributes of two entity arguments respectively, such as type, subtype and lexical head of an entity. This is based on the observation that there are type constraints on the two arguments. For instance PER-SOC is a relation mostly between two person entities. So the attributes of the entities are crucial clues. Lexical information is also important to distinguish relation types. For instance, in the phrase U.S. president there is an EMP-ORG relation between president and U.S., while in a U.S. businessman there is a GPE-AFF relation between businessman and U.S. 
2) Bigram kernel where Operator <t1, t2> concatenates all the string elements in tokens t1 and t2 to produce a new token. So Ψ2 is a kernel that simply matches unigrams and bigrams between the seq sequences of two relation examples. The information this kernel provides is faithful to the text. 3) Link sequence kernel where min_len is the length of the shorter link sequence in R1 and R2. Ψ3 is a kernel that matches token by token between the link sequences of two relation examples. Since relations often occur in a short context, we expect many of them have similar link sequences. 4) Dependency path kernel where ) .' , . ( )) .' , . ( e arc e arc I dw arc dw arc K j i j i T × Intuitively the dependency path connecting two arguments could provide a high level of syntactic regularization. However, a complete match of two dependency paths is rare. So this kernel matches the component arcs in two dependency paths in a pairwise fashion. Two arcs can match only when they are in the same direction. In cases where two paths do not match exactly, this kernel can still tell us how similar they are. In our experiments we placed an upper bound on the length of dependency paths for which we computed a non-zero kernel. 5) Local dependency where ) .' , . ( )) .' , . ( e arc e arc I dw arc dw arc K j i j i T × This kernel matches the local dependency context around the relation arguments. This can be helpful especially when the dependency path between arguments does not exist. We also hope the dependencies on each argument may provide some useful clues about the entity or connection of the entity to the context outside of the relation example. 4.4 Composite Kernels Having defined all the kernels representing shallow and deep processing results, we can define composite kernels to combine and extend the individual kernels. 1) Polynomial extension This kernel combines the argument kernel Ψ1 and link kernel Ψ3 and applies a second-degree polynomial kernel to extend them. The combination of Ψ1 and Ψ3 covers the most important clues for this task: information about the two arguments and the word link between them. The polynomial extension is equivalent to adding pairs of features as ), arg . , arg . ( ) , ( 2 1 2 ,1 2 1 1 i i i E R R K R R ∑ = = ψ + + = ) . , . ( ) . , . ( ) , ( 2 1 2 1 2 1 type E type E I tk E tk E K E E K T E ) . , . ( ) . , . ( 2 1 2 1 mtype E mtype E I subtype E subtype E I + + = ) . , . ( ) , ( 2 1 2 1 word T word T I T T KT ) . , . ( ) . , . ( 2 1 2 1 base T base T I pos T pos T I + ), . , . ( ) , ( 2 1 2 1 2 seq R seq R K R R seq = ψ ∑ ∑ < ≤ < ≤ + = len seq i len seq j j i T seq tk tk K seq seq K . 0 .' 0 )' , ( ( ') , ( )) ' ,' , , ( 1 1 > < > < + + j j i i T tk tk tk tk K ) . , . ( ) , ( 2 1 2 1 3 link R link R K R R link = ψ ,) . . , . . ( 2 1 min_ 0 i i len i T kt link R kt link R K ∑ < ≤ = ), . , . ( ) , ( 2 1 2 1 4 path R path R K R R path = ψ )' , ( path path K path ∑ ∑ < ≤ < ≤ + = len path i len path j j i label arc label arc I . 0 .' 0 ) .' , . ( (( ,) . arg . , . arg . ( ) , ( 2 ,1 2 1 2 1 5 ∑ = = i i i D dseq R dseq R K R R ψ )' , ( dseq dseq K D ∑ ∑ < ≤ < ≤ + = len dseq i len dseq j j i label arc label arc I . 0 .' 0 ) .' , . ( ( 4 / ) ( ) ( ) , ( 2 3 1 3 1 2 1 1 ψ ψ ψ ψ + + + = Φ R R 423 new features. Intuitively this introduces new features like: the subtype of the first argument is a country name and the word of the second argument is president, which could be a good clue for an EMP-ORG relation. 
The polynomial kernel is down weighted by a normalization factor because we do not want the high order features to overwhelm the original ones. In our experiment, using polynomial kernels with degree higher than 2 does not produce better results. 2) Full kernel This is the final kernel we used for this task, which is a combination of all the previous kernels. In our experiments, we set all the scalar factors to 1. Different values were tried, but keeping the original weight for each kernel yielded the best results for this task. All the individual kernels we designed are explicit. Each kernel can be seen as a matching of features and these features are enumerable on the given data. So it is clear that they are all valid kernels. Since the kernel function set is closed under linear combination and polynomial extension, the composite kernels are also valid. The reason we propose to use a feature-based kernel is that we can have a clear idea of what syntactic clues it represents and what kind of information it misses. This is important when developing or refining kernels, so that we can make them generate complementary information from different syntactic processing results. 5 Experiments Experiments were carried out on the ACE RDR (Relation Detection and Recognition) task using hand-annotated entities, provided as part of the ACE evaluation. The ACE corpora contain documents from two sources: newswire (nwire) documents and broadcast news transcripts (bnews). In this section we will compare performance of different kernel setups trained with SVM, as well as different classifiers, KNN and SVM, with the same kernel setup. The SVM package we used is SVMlight. The training parameters were chosen using cross-validation. One-against-all classification was applied to each pair of entities in a sentence. When SVM predictions conflict on a relation example, the one with larger margin will be selected as the final answer. 5.1 Corpus The ACE RDR training data contains 348 documents, 125K words and 4400 relations. It consists of both nwire and bnews documents. Evaluation of kernels was done on the training data using 5-fold cross-validation. We also evaluated the full kernel setup with SVM on the official test data, which is about half the size of the training data. All the data is preprocessed by the Charniak parser and GLARF dependency analyzer. Then relation examples are generated based these results. 5.2 Results Table 2 shows the performance of the SVM on different kernel setups. The kernel setups in this experiment are incremental. From this table we can see that adding kernels continuously improves the performance, which indicates they provide additional clues to the previous setup. The argument kernel treats the two arguments as independent entities. The link sequence kernel introduces the syntactic connection between arguments, so adding it to the argument kernel boosted the performance. Setup F shows the performance of adding only dependency kernels to the argument kernel. The performance is not as good as setup B, indicating that dependency information alone is not as crucial as the link sequence. Kernel Performance prec recall F-score A Argument (Ψ1) 52.96% 58.47% 55.58% B A + link (Ψ1+Ψ3) 58.77% 71.25% 64.41%* C B-poly (Φ1) 66.98% 70.33% 68.61%* D C + dep (Φ1+Ψ4+Ψ5) 69.10% 71.41% 70.23%* E D + bigram (Φ2) 69.23% 70.50% 70.35% F A + dep (Ψ1+Ψ4+Ψ5) 57.86% 68.50% 62.73% Table 2. SVM performance on incremental kernel setups. Each setup adds one level of kernels to the previous one except setup F. 
Evaluated on the ACE training data with 5-fold cross-validation. Fscores marked by * are significantly better than the previous setup (at 95% confidence level). 2 5 4 1 2 1 2 ) , ( χψ βψ αψ + + + Φ = Φ R R 424 Another observation is that adding the bigram kernel in the presence of all other level of kernels improved both precision and recall, indicating that it helped in both correcting errors in other processing results and providing supplementary information missed by other levels of analysis. In another experiment evaluated on the nwire data only (about half of the training data), adding the bigram kernel improved F-score 0.5% and this improvement is statistically significant. Type KNN (Ψ1+Ψ3) KNN (Φ2) SVM (Φ2) EMP-ORG 75.43% 72.66% 77.76% PHYS 62.19 % 61.97% 66.37% GPE-AFF 58.67% 56.22% 62.13% PER-SOC 65.11% 65.61% 73.46% DISC 68.20% 62.91% 66.24% ART 69.59% 68.65% 67.68% Other-AFF 51.05% 55.20% 46.55% Total 67.44% 65.69% 70.35% Table 3. Performance of SVM and KNN (k=3) on different kernel setups. Types are ordered in decreasing order of frequency of occurrence in the ACE corpus. In SVM training, the same parameters were used for all 7 types. Table 3 shows the performance of SVM and KNN (k Nearest Neighbors) on different kernel setups. For KNN, k was set to 3. In the first setup of KNN, the two kernels which seem to contain most of the important information are used. It performs quite well when compared with the SVM result. The other two tests are based on the full kernel setup. For the two KNN experiments, adding more kernels (features) does not help. The reason might be that all kernels (features) were weighted equally in the composite kernel Φ2 and this may not be optimal for KNN. Another reason is that the polynomial extension of kernels does not have any benefit in KNN because it is a monotonic transformation of similarity values. So the results of KNN on kernel (Ψ1+Ψ3) and Φ1 would be exactly the same. We also tried different k for KNN and k=3 seems to be the best choice in either case. For the four major types of relations SVM does better than KNN, probably due to SVM’s generalization ability in the presence of large numbers of features. For the last three types with many fewer examples, performance of SVM is not as good as KNN. The reason we think is that training of SVM on these types is not sufficient. We tried different training parameters for the types with fewer examples, but no dramatic improvement obtained. We also evaluated our approach on the official ACE RDR test data and obtained very competitive scores.3 The primary scoring metric4 for the ACE evaluation is a 'value' score, which is computed by deducting from 100 a penalty for each missing and spurious relation; the penalty depends on the types of the arguments to the relation. The value scores produced by the ACE scorer for nwire and bnews test data are 71.7 and 68.0 repectively. The value score on all data is 70.1.5 The scorer also reports an F-score based on full or partial match of relations to the keys. The unweighted F-score for this test produced by the ACE scorer on all data is 76.0%. For this evaluation we used nearest neighbor to determine argument ordering and relation subtypes. The classification scheme in our experiments is one-against-all. It turned out there is not so much confusion between relation types. The confusion matrix of predictions is fairly clean. We also tried pairwise classification, and it did not help much. 
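Before turning to the discussion, the kernel design evaluated above can be summarized in executable form, reusing the data structures sketched in section 4.2. The functions follow the prose definitions of the kernels (the typeset formulas are partly garbled in this copy), the bigram and local-dependency kernels are simplified stand-ins, and the scalar factors of the full kernel are fixed to 1 as in the experiments.

```python
def k_token(t1, t2):
    """K_T: matches on word, part-of-speech and base form."""
    return sum(int(getattr(t1, a) == getattr(t2, a))
               for a in ("word", "pos", "base"))

def k_entity(e1, e2):
    """K_E: token match on the heads plus matches on entity type,
    subtype and mention type."""
    return (k_token(e1.tk, e2.tk) + int(e1.type == e2.type)
            + int(e1.subtype == e2.subtype) + int(e1.mtype == e2.mtype))

def psi1(r1, r2):   # argument kernel: compare the two arguments in order
    return k_entity(r1.arg1, r2.arg1) + k_entity(r1.arg2, r2.arg2)

def psi2(r1, r2):   # bigram kernel (simplified: shared uni-/bigrams of seq)
    def grams(seq):
        ws = [t.word for t in seq]
        return set(ws) | set(zip(ws, ws[1:]))
    return float(len(grams(r1.seq) & grams(r2.seq)))

def psi3(r1, r2):   # link sequence kernel: token-by-token match
    n = min(len(r1.link), len(r2.link))
    return sum(k_token(r1.link[i], r2.link[i]) for i in range(n))

def psi4(r1, r2):   # dependency path kernel: pairwise arc comparison
    return sum((int(a.label == b.label) + int(a.dw == b.dw)) * int(a.e == b.e)
               for a in r1.path for b in r2.path)

def psi5(r1, r2):   # local dependency kernel: arcs attached to the arguments
    return sum((int(a.label == b.label) + int(a.dw == b.dw)) * int(a.e == b.e)
               for x, y in ((r1.arg1, r2.arg1), (r1.arg2, r2.arg2))
               for a in x.tk.dseq for b in y.tk.dseq)

def phi1(r1, r2):   # polynomial extension of the argument + link kernels
    s = psi1(r1, r2) + psi3(r1, r2)
    return (s + s * s) / 4.0

def phi2(r1, r2, alpha=1.0, beta=1.0, chi=1.0):
    """Full composite kernel: phi1 plus the bigram, dependency-path and
    local-dependency kernels, each with a scalar factor (all 1 here)."""
    return (phi1(r1, r2) + alpha * psi2(r1, r2)
            + beta * psi4(r1, r2) + chi * psi5(r1, r2))
```

With a classifier that accepts a precomputed Gram matrix, phi2 can be plugged in directly in place of a dot product.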
6 Discussion In this paper, we have shown that using kernels to combine information from different syntactic sources performed well on the entity relation detection task. Our experiments show that each level of syntactic processing contains useful information for the task. Combining them may provide complementary information to overcome errors arising from linguistic analysis. Especially, low level information obtained with high reliability helped with the other deep processing results. This design feature of our approach should be best employed when the preprocessing errors at each level are independent, namely when there is no dependency between the preprocessing modules. The model was tested on text with annotated entities, but its design is generic. It can work with 3 As ACE participants, we are bound by the participation agreement not to disclose other sites’ scores, so no direct comparison can be provided. 4 http://www.nist.gov/speech/tests/ace/ace04/software.htm 5 No comparable inter-annotator agreement scores are available for this task, with pre-defined entities. However, the agreement scores across multiple sites for similar relation tagging tasks done in early 2005, using the value metric, ranged from about 0.70 to 0.80. 425 noisy entity detection input from an automatic tagger. With all the existing information from other processing levels, this model can be also expected to recover from errors in entity tagging. 7 Further Work Kernel functions have many nice properties. There are also many well known kernels, such as radial basis kernels, which have proven successful in other areas. In the work described here, only linear combinations and polynomial extensions of kernels have been evaluated. We can explore other kernel properties to integrate the existing syntactic kernels. In another direction, training data is often sparse for IE tasks. String matching is not sufficient to capture semantic similarity of words. One solution is to use general purpose corpora to create clusters of similar words; another option is to use available resources like WordNet. These word similarities can be readily incorporated into the kernel framework. To deal with sparse data, we can also use deeper text analysis to capture more regularities from the data. Such analysis may be based on newly-annotated corpora like PropBank (Kingsbury and Palmer, 2002) at the University of Pennsylvania and NomBank (Meyers et al., 2004) at New York University. Analyzers based on these resources can generate regularized semantic representations for lexically or syntactically related sentence structures. Although deeper analysis may even be less accurate, our framework is designed to handle this and still obtain some improvement in performance. 8 Acknowledgement This research was supported in part by the Defense Advanced Research Projects Agency under Grant N66001-04-1-8920 from SPAWAR San Diego, and by the National Science Foundation under Grant ITS-0325657. This paper does not necessarily reflect the position of the U.S. Government. We wish to thank Adam Meyers of the NYU NLP group for his help in producing deep dependency analyses. References M. Collins and S. Miller. 1997. Semantic tagging using a probabilistic context free grammar. In Proceedings of the 6th Workshop on Very Large Corpora. N. Cristianini and J. Shawe-Taylor. 2000. An introduction to support vector machines. Cambridge University Press. A. Culotta and J. Sorensen. 2004. Dependency Tree Kernels for Relation Extraction. 
In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics. D. Gildea and M. Palmer. 2002. The Necessity of Parsing for Predicate Argument Recognition. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. N. Kambhatla. 2004. Combining Lexical, Syntactic, and Semantic Features with Maximum Entropy Models for Extracting Relations. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics. P. Kingsbury and M. Palmer. 2002. From treebank to propbank. In Proceedings of the 3rd International Conference on Language Resources and Evaluation (LREC-2002). C. D. Manning and H. Schutze 2002. Foundations of Statistical Natural Language Processing. The MIT Press, page 454-455. A. Meyers, R. Grishman, M. Kosaka and S. Zhao. 2001. Covering Treebanks with GLARF. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics. A. Meyers, R. Reeves, Catherine Macleod, Rachel Szekeley, Veronkia Zielinska, Brian Young, and R. Grishman. 2004. The Cross-Breeding of Dictionaries. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC2004). S. Miller, H. Fox, L. Ramshaw, and R. Weischedel. 2000. A novel use of statistical parsing to extract information from text. In 6th Applied Natural Language Processing Conference. K.-R. Müller, S. Mika, G. Ratsch, K. Tsuda and B. Scholkopf. 2001. An introduction to kernel-based learning algorithms, IEEE Trans. Neural Networks, 12, 2, pages 181-201. V. N. Vapnik. 1998. Statistical Learning Theory. WileyInterscience Publication. D. Zelenko, C. Aone and A. Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research. Shubin Zhao, Adam Meyers, Ralph Grishman. 2004. Discriminative Slot Detection Using Kernel Methods. In the Proceedings of the 20th International Conference on Computational Linguistics. 426 | 2005 | 52 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 427–434, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Exploring Various Knowledge in Relation Extraction ZHOU GuoDong SU Jian ZHANG Jie ZHANG Min Institute for Infocomm research 21 Heng Mui Keng Terrace, Singapore 119613 Email: {zhougd, sujian, zhangjie, mzhang}@i2r.a-star.edu.sg Abstract Extracting semantic relationships between entities is challenging. This paper investigates the incorporation of diverse lexical, syntactic and semantic knowledge in feature-based relation extraction using SVM. Our study illustrates that the base phrase chunking information is very effective for relation extraction and contributes to most of the performance improvement from syntactic aspect while additional information from full parsing gives limited further enhancement. This suggests that most of useful information in full parse trees for relation extraction is shallow and can be captured by chunking. We also demonstrate how semantic information such as WordNet and Name List, can be used in feature-based relation extraction to further improve the performance. Evaluation on the ACE corpus shows that effective incorporation of diverse features enables our system outperform previously best-reported systems on the 24 ACE relation subtypes and significantly outperforms tree kernel-based systems by over 20 in F-measure on the 5 ACE relation types. 1 Introduction With the dramatic increase in the amount of textual information available in digital archives and the WWW, there has been growing interest in techniques for automatically extracting information from text. Information Extraction (IE) systems are expected to identify relevant information (usually of pre-defined types) from text documents in a certain domain and put them in a structured format. According to the scope of the NIST Automatic Content Extraction (ACE) program, current research in IE has three main objectives: Entity Detection and Tracking (EDT), Relation Detection and Characterization (RDC), and Event Detection and Characterization (EDC). The EDT task entails the detection of entity mentions and chaining them together by identifying their coreference. In ACE vocabulary, entities are objects, mentions are references to them, and relations are semantic relationships between entities. Entities can be of five types: persons, organizations, locations, facilities and geo-political entities (GPE: geographically defined regions that indicate a political boundary, e.g. countries, states, cities, etc.). Mentions have three levels: names, nomial expressions or pronouns. The RDC task detects and classifies implicit and explicit relations1 between entities identified by the EDT task. For example, we want to determine whether a person is at a location, based on the evidence in the context. Extraction of semantic relationships between entities can be very useful for applications such as question answering, e.g. to answer the query “Who is the president of the United States?”. This paper focuses on the ACE RDC task and employs diverse lexical, syntactic and semantic knowledge in feature-based relation extraction using Support Vector Machines (SVMs). Our study illustrates that the base phrase chunking information contributes to most of the performance inprovement from syntactic aspect while additional full parsing information does not contribute much, largely due to the fact that most of relations defined in ACE corpus are within a very short distance. 
We also demonstrate how semantic information such as WordNet (Miller 1990) and Name List can be used in the feature-based framework. Evaluation shows that the incorporation of diverse features enables our system achieve best reported performance. It also shows that our fea 1 In ACE (http://www.ldc.upenn.edu/Projects/ACE), explicit relations occur in text with explicit evidence suggesting the relationships. Implicit relations need not have explicit supporting evidence in text, though they should be evident from a reading of the document. 427 ture-based approach outperforms tree kernel-based approaches by 11 F-measure in relation detection and more than 20 F-measure in relation detection and classification on the 5 ACE relation types. The rest of this paper is organized as follows. Section 2 presents related work. Section 3 and Section 4 describe our approach and various features employed respectively. Finally, we present experimental setting and results in Section 5 and conclude with some general observations in relation extraction in Section 6. 2 Related Work The relation extraction task was formulated at the 7th Message Understanding Conference (MUC-7 1998) and is starting to be addressed more and more within the natural language processing and machine learning communities. Miller et al (2000) augmented syntactic full parse trees with semantic information corresponding to entities and relations, and built generative models for the augmented trees. Zelenko et al (2003) proposed extracting relations by computing kernel functions between parse trees. Culotta et al (2004) extended this work to estimate kernel functions between augmented dependency trees and achieved 63.2 F-measure in relation detection and 45.8 F-measure in relation detection and classification on the 5 ACE relation types. Kambhatla (2004) employed Maximum Entropy models for relation extraction with features derived from word, entity type, mention level, overlap, dependency tree and parse tree. It achieves 52.8 Fmeasure on the 24 ACE relation subtypes. Zhang (2004) approached relation classification by combining various lexical and syntactic features with bootstrapping on top of Support Vector Machines. Tree kernel-based approaches proposed by Zelenko et al (2003) and Culotta et al (2004) are able to explore the implicit feature space without much feature engineering. Yet further research work is still expected to make it effective with complicated relation extraction tasks such as the one defined in ACE. Complicated relation extraction tasks may also impose a big challenge to the modeling approach used by Miller et al (2000) which integrates various tasks such as part-of-speech tagging, named entity recognition, template element extraction and relation extraction, in a single model. This paper will further explore the feature-based approach with a systematic study on the extensive incorporation of diverse lexical, syntactic and semantic information. Compared with Kambhatla (2004), we separately incorporate the base phrase chunking information, which contributes to most of the performance improvement from syntactic aspect. We also show how semantic information like WordNet and Name List can be equipped to further improve the performance. Evaluation on the ACE corpus shows that our system outperforms Kambhatla (2004) by about 3 F-measure on extracting 24 ACE relation subtypes. It also shows that our system outperforms tree kernel-based systems (Culotta et al 2004) by over 20 F-measure on extracting 5 ACE relation types. 
3 Support Vector Machines Support Vector Machines (SVMs) are a supervised machine learning technique motivated by the statistical learning theory (Vapnik 1998). Based on the structural risk minimization of the statistical learning theory, SVMs seek an optimal separating hyper-plane to divide the training examples into two classes and make decisions based on support vectors which are selected as the only effective instances in the training set. Basically, SVMs are binary classifiers. Therefore, we must extend SVMs to multi-class (e.g. K) such as the ACE RDC task. For efficiency, we apply the one vs. others strategy, which builds K classifiers so as to separate one class from all others, instead of the pairwise strategy, which builds K*(K-1)/2 classifiers considering all pairs of classes. The final decision of an instance in the multiple binary classification is determined by the class which has the maximal SVM output. Moreover, we only apply the simple linear kernel, although other kernels can peform better. The reason why we choose SVMs for this purpose is that SVMs represent the state-of–the-art in the machine learning research community, and there are good implementations of the algorithm available. In this paper, we use the binary-class SVMLight2 deleveloped by Joachims (1998). 2 Joachims has just released a new version of SVMLight for multi-class classification. However, this paper only uses the binary-class version. For details about SVMLight, please see http://svmlight.joachims.org/ 428 4 Features The semantic relation is determined between two mentions. In addition, we distinguish the argument order of the two mentions (M1 for the first mention and M2 for the second mention), e.g. M1-ParentOf-M2 vs. M2-Parent-Of-M1. For each pair of mentions3, we compute various lexical, syntactic and semantic features. 4.1 Words According to their positions, four categories of words are considered: 1) the words of both the mentions, 2) the words between the two mentions, 3) the words before M1, and 4) the words after M2. For the words of both the mentions, we also differentiate the head word4 of a mention from other words since the head word is generally much more important. The words between the two mentions are classified into three bins: the first word in between, the last word in between and other words in between. Both the words before M1 and after M2 are classified into two bins: the first word next to the mention and the second word next to the mention. Since a pronominal mention (especially neutral pronoun such as ‘it’ and ‘its’) contains little information about the sense of the mention, the coreference chain is used to decide its sense. This is done by replacing the pronominal mention with the most recent non-pronominal antecedent when determining the word features, which include: • WM1: bag-of-words in M1 • HM1: head word of M1 3 In ACE, each mention has a head annotation and an extent annotation. In all our experimentation, we only consider the word string between the beginning point of the extent annotation and the end point of the head annotation. This has an effect of choosing the base phrase contained in the extent annotation. In addition, this also can reduce noises without losing much of information in the mention. For example, in the case where the noun phrase “the former CEO of McDonald” has the head annotation of “CEO” and the extent annotation of “the former CEO of McDonald”, we only consider “the former CEO” in this paper. 
4 In this paper, the head word of a mention is normally set as the last word of the mention. However, when a preposition exists in the mention, its head word is set as the last word before the preposition. For example, the head word of the name mention “University of Michigan” is “University”. • WM2: bag-of-words in M2 • HM2: head word of M2 • HM12: combination of HM1 and HM2 • WBNULL: when no word in between • WBFL: the only word in between when only one word in between • WBF: first word in between when at least two words in between • WBL: last word in between when at least two words in between • WBO: other words in between except first and last words when at least three words in between • BM1F: first word before M1 • BM1L: second word before M1 • AM2F: first word after M2 • AM2L: second word after M2 4.2 Entity Type This feature concerns about the entity type of both the mentions, which can be PERSON, ORGANIZATION, FACILITY, LOCATION and Geo-Political Entity or GPE: • ET12: combination of mention entity types 4.3 Mention Level This feature considers the entity level of both the mentions, which can be NAME, NOMIAL and PRONOUN: • ML12: combination of mention levels 4.4 Overlap This category of features includes: • #MB: number of other mentions in between • #WB: number of words in between • M1>M2 or M1<M2: flag indicating whether M2/M1is included in M1/M2. Normally, the above overlap features are too general to be effective alone. Therefore, they are also combined with other features: 1) ET12+M1>M2; 2) ET12+M1<M2; 3) HM12+M1>M2; 4) HM12+M1<M2. 4.5 Base Phrase Chunking It is well known that chunking plays a critical role in the Template Relation task of the 7th Message Understanding Conference (MUC-7 1998). The related work mentioned in Section 2 extended to explore the information embedded in the full parse trees. In this paper, we separate the features of base 429 phrase chunking from those of full parsing. In this way, we can separately evaluate the contributions of base phrase chunking and full parsing. Here, the base phrase chunks are derived from full parse trees using the Perl script5 written by Sabine Buchholz from Tilburg University and the Collins’ parser (Collins 1999) is employed for full parsing. Most of the chunking features concern about the head words of the phrases between the two mentions. Similar to word features, three categories of phrase heads are considered: 1) the phrase heads in between are also classified into three bins: the first phrase head in between, the last phrase head in between and other phrase heads in between; 2) the phrase heads before M1 are classified into two bins: the first phrase head before and the second phrase head before; 3) the phrase heads after M2 are classified into two bins: the first phrase head after and the second phrase head after. Moreover, we also consider the phrase path in between. 
• CPHBNULL when no phrase in between • CPHBFL: the only phrase head when only one phrase in between • CPHBF: first phrase head in between when at least two phrases in between • CPHBL: last phrase head in between when at least two phrase heads in between • CPHBO: other phrase heads in between except first and last phrase heads when at least three phrases in between • CPHBM1F: first phrase head before M1 • CPHBM1L: second phrase head before M1 • CPHAM2F: first phrase head after M2 • CPHAM2F: second phrase head after M2 • CPP: path of phrase labels connecting the two mentions in the chunking • CPPH: path of phrase labels connecting the two mentions in the chunking augmented with head words, if at most two phrases in between 4.6 Dependency Tree This category of features includes information about the words, part-of-speeches and phrase labels of the words on which the mentions are dependent in the dependency tree derived from the syntactic full parse tree. The dependency tree is built by using the phrase head information returned by the Collins’ parser and linking all the other 5 http://ilk.kub.nl/~sabine/chunklink/ fragments in a phrase to its head. It also includes flags indicating whether the two mentions are in the same NP/PP/VP. • ET1DW1: combination of the entity type and the dependent word for M1 • H1DW1: combination of the head word and the dependent word for M1 • ET2DW2: combination of the entity type and the dependent word for M2 • H2DW2: combination of the head word and the dependent word for M2 • ET12SameNP: combination of ET12 and whether M1 and M2 included in the same NP • ET12SamePP: combination of ET12 and whether M1 and M2 exist in the same PP • ET12SameVP: combination of ET12 and whether M1 and M2 included in the same VP 4.7 Parse Tree This category of features concerns about the information inherent only in the full parse tree. • PTP: path of phrase labels (removing duplicates) connecting M1 and M2 in the parse tree • PTPH: path of phrase labels (removing duplicates) connecting M1 and M2 in the parse tree augmented with the head word of the top phrase in the path. 4.8 Semantic Resources Semantic information from various resources, such as WordNet, is used to classify important words into different semantic lists according to their indicating relationships. Country Name List This is to differentiate the relation subtype “ROLE.Citizen-Of”, which defines the relationship between a person and the country of the person’s citizenship, from other subtypes, especially “ROLE.Residence”, where defines the relationship between a person and the location in which the person lives. Two features are defined to include this information: • ET1Country: the entity type of M1 when M2 is a country name • CountryET2: the entity type of M2 when M1 is a country name 430 Personal Relative Trigger Word List This is used to differentiate the six personal social relation subtypes in ACE: Parent, Grandparent, Spouse, Sibling, Other-Relative and OtherPersonal. This trigger word list is first gathered from WordNet by checking whether a word has the semantic class “person|…|relative”. Then, all the trigger words are semi-automatically6 classified into different categories according to their related personal social relation subtypes. We also extend the list by collecting the trigger words from the head words of the mentions in the training data according to their indicating relationships. 
Two features are defined to include this information: • ET1SC2: combination of the entity type of M1 and the semantic class of M2 when M2 triggers a personal social subtype. • SC1ET2: combination of the entity type of M2 and the semantic class of M1 when the first mention triggers a personal social subtype. 5 Experimentation This paper uses the ACE corpus provided by LDC to train and evaluate our feature-based relation extraction system. The ACE corpus is gathered from various newspapers, newswire and broadcasts. In this paper, we only model explicit relations because of poor inter-annotator agreement in the annotation of implicit relations and their limited number. 5.1 Experimental Setting We use the official ACE corpus from LDC. The training set consists of 674 annotated text documents (~300k words) and 9683 instances of relations. During development, 155 of 674 documents in the training set are set aside for fine-tuning the system. The testing set is held out only for final evaluation. It consists of 97 documents (~50k words) and 1386 instances of relations. Table 1 lists the types and subtypes of relations for the ACE Relation Detection and Characterization (RDC) task, along with their frequency of occurrence in the ACE training set. It shows that the 6 Those words that have the semantic classes “Parent”, “GrandParent”, “Spouse” and “Sibling” are automatically set with the same classes without change. However, The remaining words that do not have above four classes are manually classified. ACE corpus suffers from a small amount of annotated data for a few subtypes such as the subtype “Founder” under the type “ROLE”. It also shows that the ACE RDC task defines some difficult subtypes such as the subtypes “Based-In”, “Located” and “Residence” under the type “AT”, which are difficult even for human experts to differentiate. Type Subtype Freq AT(2781) Based-In 347 Located 2126 Residence 308 NEAR(201) Relative-Location 201 PART(1298) Part-Of 947 Subsidiary 355 Other 6 ROLE(4756) Affiliate-Partner 204 Citizen-Of 328 Client 144 Founder 26 General-Staff 1331 Management 1242 Member 1091 Owner 232 Other 158 SOCIAL(827) Associate 91 Grandparent 12 Other-Personal 85 Other-Professional 339 Other-Relative 78 Parent 127 Sibling 18 Spouse 77 Table 1: Relation types and subtypes in the ACE training data In this paper, we explicitly model the argument order of the two mentions involved. For example, when comparing mentions m1 and m2, we distinguish between m1-ROLE.Citizen-Of-m2 and m2ROLE.Citizen-Of-m1. Note that only 6 of these 24 relation subtypes are symmetric: “RelativeLocation”, “Associate”, “Other-Relative”, “OtherProfessional”, “Sibling”, and “Spouse”. In this way, we model relation extraction as a multi-class classification problem with 43 classes, two for each relation subtype (except the above 6 symmetric subtypes) and a “NONE” class for the case where the two mentions are not related. 5.2 Experimental Results In this paper, we only measure the performance of relation extraction on “true” mentions with “true” chaining of coreference (i.e. as annotated by the corpus annotators) in the ACE corpus. Table 2 measures the performance of our relation extrac431 tion system over the 43 ACE relation subtypes on the testing set. It shows that our system achieves best performance of 63.1%/49.5%/ 55.5 in precision/recall/F-measure when combining diverse lexical, syntactic and semantic features. Table 2 also measures the contributions of different features by gradually increasing the feature set. 
It shows that: Features P R F Words 69.2 23.7 35.3 +Entity Type 67.1 32.1 43.4 +Mention Level 67.1 33.0 44.2 +Overlap 57.4 40.9 47.8 +Chunking 61.5 46.5 53.0 +Dependency Tree 62.1 47.2 53.6 +Parse Tree 62.3 47.6 54.0 +Semantic Resources 63.1 49.5 55.5 Table 2: Contribution of different features over 43 relation subtypes in the test data • Using word features only achieves the performance of 69.2%/23.7%/35.3 in precision/recall/Fmeasure. • Entity type features are very useful and improve the F-measure by 8.1 largely due to the recall increase. • The usefulness of mention level features is quite limited. It only improves the F-measure by 0.8 due to the recall increase. • Incorporating the overlap features gives some balance between precision and recall. It increases the F-measure by 3.6 with a big precision decrease and a big recall increase. • Chunking features are very useful. It increases the precision/recall/F-measure by 4.1%/5.6%/ 5.2 respectively. • To our surprise, incorporating the dependency tree and parse tree features only improve the Fmeasure by 0.6 and 0.4 respectively. This may be due to the fact that most of relations in the ACE corpus are quite local. Table 3 shows that about 70% of relations exist where two mentions are embedded in each other or separated by at most one word. While short-distance relations dominate and can be resolved by above simple features, the dependency tree and parse tree features can only take effect in the remaining much less long-distance relations. However, full parsing is always prone to long distance errors although the Collins’ parser used in our system represents the state-of-the-art in full parsing. • Incorporating semantic resources such as the country name list and the personal relative trigger word list further increases the F-measure by 1.5 largely due to the differentiation of the relation subtype “ROLE.Citizen-Of” from “ROLE. Residence” by distinguishing country GPEs from other GPEs. The effect of personal relative trigger words is very limited due to the limited number of testing instances over personal social relation subtypes. Table 4 separately measures the performance of different relation types and major subtypes. It also indicates the number of testing instances, the number of correctly classified instances and the number of wrongly classified instances for each type or subtype. It is not surprising that the performance on the relation type “NEAR” is low because it occurs rarely in both the training and testing data. Others like “PART.Subsidary” and “SOCIAL. Other-Professional” also suffer from their low occurrences. It also shows that our system performs best on the subtype “SOCIAL.Parent” and “ROLE. Citizen-Of”. This is largely due to incorporation of two semantic resources, i.e. the country name list and the personal relative trigger word list. Table 4 also indicates the low performance on the relation type “AT” although it frequently occurs in both the training and testing data. This suggests the difficulty of detecting and classifying the relation type “AT” and its subtypes. Table 5 separates the performance of relation detection from overall performance on the testing set. It shows that our system achieves the performance of 84.8%/66.7%/74.7 in precision/recall/Fmeasure on relation detection. It also shows that our system achieves overall performance of 77.2%/60.7%/68.0 and 63.1%/49.5%/55.5 in precision/recall/F-measure on the 5 ACE relation types and the best-reported systems on the ACE corpus. 
It shows that our system achieves better performance by ~3 F-measure largely due to its gain in recall. It also shows that feature-based methods dramatically outperform kernel methods. This suggests that feature-based methods can effectively combine different features from a variety of sources (e.g. WordNet and gazetteers) that can be brought to bear on relation extraction. The tree kernels developed in Culotta et al (2004) are yet to be effective on the ACE RDC task. Finally, Table 6 shows the distributions of errors. It shows that 73% (627/864) of errors results 432 from relation detection and 27% (237/864) of errors results from relation characterization, among which 17.8% (154/864) of errors are from misclassification across relation types and 9.6% (83/864) of errors are from misclassification of relation subtypes inside the same relation types. This suggests that relation detection is critical for relation extraction. # of other mentions in between # of relations 0 1 2 3 >=4 Overall 0 3991 161 11 0 0 4163 1 2350 315 26 2 0 2693 2 465 95 7 2 0 569 3 311 234 14 0 0 559 4 204 225 29 2 3 463 5 111 113 38 2 1 265 >=6 262 297 277 148 134 1118 # of the words in between Overall 7694 1440 402 156 138 9830 Table 3: Distribution of relations over #words and #other mentions in between in the training data Type Subtype #Testing Instances #Correct #Error P R F AT 392 224 105 68.1 57.1 62.1 Based-In 85 39 10 79.6 45.9 58.2 Located 241 132 120 52.4 54.8 53.5 Residence 66 19 9 67.9 28.8 40.4 NEAR 35 8 1 88.9 22.9 36.4 Relative-Location 35 8 1 88.9 22.9 36.4 PART 164 106 39 73.1 64.6 68.6 Part-Of 136 76 32 70.4 55.9 62.3 Subsidiary 27 14 23 37.8 51.9 43.8 ROLE 699 443 82 84.4 63.4 72.4 Citizen-Of 36 25 8 75.8 69.4 72.6 General-Staff 201 108 46 71.1 53.7 62.3 Management 165 106 72 59.6 64.2 61.8 Member 224 104 36 74.3 46.4 57.1 SOCIAL 95 60 21 74.1 63.2 68.5 Other-Professional 29 16 32 33.3 55.2 41.6 Parent 25 17 0 100 68.0 81.0 Table 4: Performance of different relation types and major subtypes in the test data Relation Detection RDC on Types RDC on Subtypes System P R F P R F P R F Ours: feature-based 84.8 66.7 74.7 77.2 60.7 68.0 63.1 49.5 55.5 Kambhatla (2004):feature-based - - - - - - 63.5 45.2 52.8 Culotta et al (2004):tree kernel 81.2 51.8 63.2 67.1 35.0 45.8 - - - Table 5: Comparison of our system with other best-reported systems on the ACE corpus Error Type #Errors False Negative 462 Detection Error False Positive 165 Cross Type Error 154 Characterization Error Inside Type Error 83 Table 6: Distribution of errors 6 Discussion and Conclusion In this paper, we have presented a feature-based approach for relation extraction where diverse lexical, syntactic and semantic knowledge are employed. Instead of exploring the full parse tree information directly as previous related work, we incorporate the base phrase chunking information first. Evaluation on the ACE corpus shows that base phrase chunking contributes to most of the performance improvement from syntactic aspect while further incorporation of the parse tree and dependence tree information only slightly improves the performance. This may be due to three reasons: First, most of relations defined in ACE have two mentions being close to each other. While short-distance relations dominate and can be resolved by simple features such as word and chunking features, the further dependency tree and parse tree features can only take effect in the remaining much less and more difficult long-distance relations. 
Second, it is well known that full parsing 433 is always prone to long-distance parsing errors although the Collins’ parser used in our system achieves the state-of-the-art performance. Therefore, the state-of-art full parsing still needs to be further enhanced to provide accurate enough information, especially PP (Preposition Phrase) attachment. Last, effective ways need to be explored to incorporate information embedded in the full parse trees. Besides, we also demonstrate how semantic information such as WordNet and Name List, can be used in feature-based relation extraction to further improve the performance. The effective incorporation of diverse features enables our system outperform previously bestreported systems on the ACE corpus. Although tree kernel-based approaches facilitate the exploration of the implicit feature space with the parse tree structure, yet the current technologies are expected to be further advanced to be effective for relatively complicated relation extraction tasks such as the one defined in ACE where 5 types and 24 subtypes need to be extracted. Evaluation on the ACE RDC task shows that our approach of combining various kinds of evidence can scale better to problems, where we have a lot of relation types with a relatively small amount of annotated data. The experiment result also shows that our feature-based approach outperforms the tree kernel-based approaches by more than 20 F-measure on the extraction of 5 ACE relation types. In the future work, we will focus on exploring more semantic knowledge in relation extraction, which has not been covered by current research. Moreover, our current work is done when the Entity Detection and Tracking (EDT) has been perfectly done. Therefore, it would be interesting to see how imperfect EDT affects the performance in relation extraction. References Agichtein E. and Gravano L. (2000). Snowball: Extracting relations from large plain text collections. In Proceedings of 5th ACM International Conference on Digital Libraries. 4-7 June 2000. San Antonio, TX. Brin S. (1998). Extracting patterns and relations from the World Wide Web. In Proceedings of WebDB workshop at 6th International Conference on Extending DataBase Technology (EDBT’1998).23-27 March 1998, Valencia, Spain Collins M. (1999). Head-driven statistical models for natural language parsing. Ph.D. Dissertation, University of Pennsylvania. Collins M. and Duffy N. (2002). Covolution kernels for natural language. In Dietterich T.G., Becker S. and Ghahramani Z. editors. Advances in Neural Information Processing Systems 14. Cambridge, MA. Culotta A. and Sorensen J. (2004). Dependency tree kernels for relation extraction. In Proceedings of 42th Annual Meeting of the Association for Computational Linguistics. 21-26 July 2004. Barcelona, Spain Cumby C.M. and Roth D. (2003). On kernel methods for relation learning. In Fawcett T. and Mishra N. editors. In Proceedings of 20th International Conference on Machine Learning (ICML’2003). 21-24 Aug 2003. Washington D.C. USA. AAAI Press. Haussler D. (1999). Covention kernels on discrete structures. Technical Report UCS-CRL-99-10. University of California, Santa Cruz. Joachims T. (1998). Text categorization with Support Vector Machines: Learning with many relevant features. In Proceedings of European Conference on Machine Learning(ECML’1998). 21-23 April 1998. Chemnitz, Germany Miller G.A. (1990). WordNet: An online lexical database. International Journal of Lexicography. 3(4):235-312. Miller S., Fox H., Ramshaw L. and Weischedel R. 
(2000). A novel use of statistical parsing to extract information from text. In Proceedings of the 6th Applied Natural Language Processing Conference. 29 April - 4 May 2000, Seattle, USA. MUC-7. (1998). Proceedings of the 7th Message Understanding Conference (MUC-7). Morgan Kaufmann, San Mateo, CA. Kambhatla N. (2004). Combining lexical, syntactic and semantic features with Maximum Entropy models for extracting relations. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics. 21-26 July 2004, Barcelona, Spain. Roth D. and Yih W.T. (2002). Probabilistic reasoning for entities and relation recognition. In Proceedings of the 19th International Conference on Computational Linguistics (COLING'2002). Taiwan. Vapnik V. (1998). Statistical Learning Theory. Wiley, Chichester, GB. Zelenko D., Aone C. and Richardella A. (2003). Kernel methods for relation extraction. Journal of Machine Learning Research, 3:1083-1106. Zhang Z. (2004). Weakly-supervised relation classification for Information Extraction. In Proceedings of the 13th ACM Conference on Information and Knowledge Management (CIKM'2004). 8-13 Nov 2004, Washington D.C., USA.
Proceedings of the 43rd Annual Meeting of the ACL, pages 435–442, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics A Quantitative Analysis of Lexical Differences Between Genders in Telephone Conversations Constantinos Boulis Department of Electrical Engineering University of Washington Seattle, 98195 [email protected] Mari Ostendorf Department of Electrical Engineering University of Washington Seattle, 98195 [email protected] Abstract In this work, we provide an empirical analysis of differences in word use between genders in telephone conversations, which complements the considerable body of work in sociolinguistics concerned with gender linguistic differences. Experiments are performed on a large speech corpus of roughly 12000 conversations. We employ machine learning techniques to automatically categorize the gender of each speaker given only the transcript of his/her speech, achieving 92% accuracy. An analysis of the most characteristic words for each gender is also presented. Experiments reveal that the gender of one conversation side influences lexical use of the other side. A surprising result is that we were able to classify male-only vs. female-only conversations with almost perfect accuracy. 1 Introduction Linguistic and prosodic differences between genders in American English have been studied for decades. The interest in analyzing the gender linguistic differences is two-fold. From the scientific perspective, it will increase our understanding of language production. From the engineering perspective, it can help improve the performance of a number of natural language processing tasks, such as text classification, machine translation or automatic speech recognition by training better language models. Traditionally, these differences have been investigated in the fields of sociolinguistics and psycholinguistics, see for example (Coates, 1997), (Eckert and McConnell-Ginet, 2003) or http://www.ling.lancs.ac.uk/groups/gal/genre.htm for a comprehensive bibliography on language and gender. Sociolinguists have approached the issue from a mostly non-computational perspective using relatively small and very focused data collections. Recently, the work of (Koppel et al., 2002) has used computational methods to characterize the differences between genders in written text, such as literary books. A number of monologues have been analyzed in (Singh, 2001) in terms of lexical richness using multivariate analysis techniques. The question of gender linguistic differences shares a number of issues with stylometry and author/speaker attribution research (Stamatatos et al., 2000), (Doddington, 2001), but novel issues emerge with analysis of conversational speech, such as studying the interaction of genders. In this work, we focus on lexical differences between genders on telephone conversations and use machine learning techniques applied on text categorization and feature selection to characterize these differences. Therefore our conclusions are entirely data-driven. We use a very large corpus created for automatic speech recognition - the Fisher corpus described in (Cieri et al., 2004). The Fisher corpus is annotated with the gender of each speaker making it an ideal resource to study not only the characteristics of individual genders but also of gender pairs in spontaneous, conversational speech. The size and 435 scope of the Fisher corpus is such that robust results can be derived for American English. 
The computational methods we apply can assist us in answering questions, such as “To which degree are genderdiscriminative words content-bearing words?” or “Which words are most characteristic for males in general or males talking to females?”. In section 2, we describe the corpus we have based our analysis on. In section 3, the machine learning tools are explained, while the experimental results are described in section 4 with a specific research question for each subsection. We conclude in section 5 with a summary and future directions. 2 The Corpus and Data Preparation The Fisher corpus (Cieri et al., 2004) was used in all our experiments. It consists of telephone conversations between two people, randomly assigned to speak to each other. At the beginning of each conversation a topic is suggested at random from a list of 40. The latest release of the Fisher collection has more than 16 000 telephone conversations averaging 10 minutes each. Each person participates in 1-3 conversations, and each conversation is annotated with a topicality label. The topicality label gives the degree to which the suggested topic was followed and is an integer number from 0 to 4, 0 being the worse. In our site, we had an earlier version of the Fisher corpus with around 12 000 conversations. After removing conversations where at least one of the speakers was non-native1 and conversations with topicality 0 or 1 we were left with 10 127 conversations. The original transcripts were minimally processed; acronyms were normalized to a sequence of characters with no intervening spaces, e.g. t. v. to tv; word fragments were converted to the same token wordfragment; all words were lowercased; and punctuation marks and special characters were removed. Some non-lexical tokens are maintained such as laughter and filled pauses such as uh, um. Backchannels and acknowledgments such as uh-huh, mm-hmm are also kept. The gender distribution of the Fisher corpus is 53% female and 47% male. Age distribution is 38% 16-29, 45% 30-49% and 17% 50+. Speakers were connected at random 1About 10% of speakers are non-native making this corpus suitable for investigating their lexical differences compared to American English speakers. from a pool recruited in a national ad campaign. It is unlikely that the speakers knew their conversation partner. All major American English dialects are well represented, see (Cieri et al., 2004) for more details. The Fisher corpus was primarily created to facilitate automatic speech recognition research. The subset we have used has about 17.8M words or about 1 600 hours of speech and it is the largest resource ever used to analyze gender linguistic differences. In comparison, (Singh, 2001) has used about 30 000 words for their analysis. Before attempting to analyze the gender differences, there are two main biases that need to be removed. The first bias, which we term the topic bias is introduced by not accounting for the fact that the distribution of topics in males and females is uneven, despite the fact that the topic is pre-assigned randomly. For example, if topic A happened to be more common for males than females and we failed to account for that, then we would be implicitly building a topic classifier rather than a gender classifier. Our intention here is to analyze gender linguistic differences controlling for the topic effect as if both genders talk equally about the same topics. 
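As a rough illustration of the transcript preprocessing described earlier in this section (lowercasing, acronym normalization such as t. v. to tv, mapping word fragments to a single token, and removing punctuation and special characters while keeping tokens such as [laughter] and filled pauses), the sketch below shows one possible implementation; the regular expressions and the exact token inventory are our own assumptions, not the authors' pipeline.

```python
# Rough sketch of the transcript normalization: lowercase, collapse spelled-out
# acronyms ("t. v." -> "tv"), map word fragments to one token, strip punctuation,
# and keep non-lexical tokens such as [laughter] and filled pauses.
import re

KEEP = {"[laughter]", "uh", "um", "uh-huh", "mm-hmm"}

def normalize_transcript(text: str) -> list:
    text = text.lower()
    # join spelled-out letter sequences like "t. v." into "tv"
    text = re.sub(r"\b((?:[a-z]\.\s*){2,})",
                  lambda m: m.group(1).replace(".", "").replace(" ", "") + " ",
                  text)
    tokens = []
    for tok in text.split():
        if tok in KEEP:
            tokens.append(tok)
        elif tok.endswith("-"):                      # word fragment, e.g. "convers-"
            tokens.append("wordfragment")
        else:
            tok = re.sub(r"[^\w'\-\[\]]", "", tok)   # drop punctuation/special chars
            if tok:
                tokens.append(tok)
    return tokens

print(normalize_transcript("Um, I watch T. V. a lot [laughter]"))
# -> ['um', 'i', 'watch', 'tv', 'a', 'lot', '[laughter]']
```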
The second bias, which we term speaker bias is introduced by not accounting for the fact that specific speakers have idiosyncratic expressions. If our training data consisted of a small number of speakers appearing in both training and testing data, then we will be implicitly modeling speaker differences rather than gender differences. To normalize for these two important biases, we made sure that both genders have the same percent of conversation sides for each topic and there are 8899 speakers in training and 2000 in testing with no overlap between the two sets. After these two steps, there were 14969 conversation sides used for training and 3738 sides for testing. The median length of a conversation side was 954. 3 Machine Learning Methods Used The methods we have used for characterizing the differences between genders and gender pairs are similar to what has been used for the task of text classification. In text classification, the objective is to classify a document ⃗d to one (or more) of T predefined topics y. A number of N tuples (⃗dn, yn) 436 are provided for training the classifier. A major challenge of text classification is the very high dimensionality for representing each document which brings forward the need for feature selection, i.e. selecting the most discriminative words and discarding all others. In this study, we chose two ways for characterizing the differences between gender categories. The first, is to classify the transcript of each speaker, i.e. each conversation side, to the appropriate gender category. This approach can show the cumulative effect of all terms on the distinctiveness of gender categories. The second approach is to apply feature selection methods, similar to those used in text categorization, to reveal the most characteristic features for each gender. Classifying a transcript of speech according to gender can be done with a number of different learning methods. We have compared Support Vector Machines (SVMs), Naive Bayes, Maximum Entropy and the tfidf/Rocchio classifier and found SVMs to be the most successful. A possible difference between text classification and gender classification is that different methods for feature weighting may be appropriate. In text classification, inverse document frequency is applied to the frequency of each term resulting in the deweighting of common terms. This weighting scheme is effective for text classification because common terms do not contribute to the topic of a document. However, the reverse may be true for gender classification, where the common terms may be the ones that mostly contribute to the gender category. This is an issue that we will investigate in section 4 and has implications for the feature weighting scheme that needs to be applied to the vector representation. In addition to classification, we have applied feature selection techniques to assess the discriminative ability of each individual feature. Information gain has been shown to be one of the most successful feature selection methods for text classification (Forman, 2003). It is given by: IG(w) = H(C) −p(w)H(C|w) −p( ¯w)H(C| ¯w) (1) where H(C) = −PC c=1 p(c) log p(c) denotes the entropy of the discrete gender category random variable C. Each document is represented with the Bernoulli model, i.e. a vector of 1 or 0 depending if the word appears or not in the document. 
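A minimal sketch of computing equation (1) under the Bernoulli document model follows; the toy documents and implementation details are illustrative only, not the authors' code.

```python
# Information gain for a word w under the Bernoulli model of equation (1):
# IG(w) = H(C) - p(w) H(C|w) - p(~w) H(C|~w), where each document is a 0/1
# vector of word presence and C is the gender category.
import math
from collections import Counter

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def information_gain(word, docs):
    """docs: list of (label, set_of_words) pairs; labels e.g. 'M'/'F'."""
    labels = sorted({label for label, _ in docs})
    n = len(docs)
    with_w = Counter(label for label, words in docs if word in words)
    without_w = Counter(label for label, words in docs if word not in words)
    n_w = sum(with_w.values())
    h_c = entropy(list(Counter(label for label, _ in docs).values()))
    h_c_w = entropy([with_w[l] for l in labels]) if n_w else 0.0
    h_c_notw = entropy([without_w[l] for l in labels]) if n - n_w else 0.0
    return h_c - (n_w / n) * h_c_w - ((n - n_w) / n) * h_c_notw

# Toy example: "dude" perfectly separates the two classes, "cute" does not.
docs = [("F", {"husband", "cute"}), ("F", {"cute"}),
        ("M", {"dude"}), ("M", {"dude", "cute"})]
print(information_gain("dude", docs))   # 1.0
print(information_gain("cute", docs))   # ~0.31
```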
We have also implemented another feature selection mechanism, the KL-divergence, which is given by: KL(w) = D[p(c|w)||p(c)] = C X c=1 p(c|w) log p(c|w) p(c) (2) In the KL-divergence we have used the multinomial model, i.e. each document is represented as a vector of word counts. We smoothed the p(w|c) distributions by assuming that every word in the vocabulary is observed at least 5 times for each class. 4 Experiments Having explained the methods and data that we have used, we set forward to investigate a number of research questions concerning the nature of differences between genders. Each subsection is concerned with a single question. 4.1 Given only the transcript of a conversation, is it possible to classify conversation sides according to the gender of the speaker? The first hypothesis we investigate is whether simple features, such as counts of individual terms (unigrams) or pairs of terms (bigrams) have different distributions between genders. The set of possible terms consists of all words in the Fisher corpus plus some non-lexical tokens such as laughter and filled pauses. One way to assess the difference in their distribution is by attempting to classify conversation sides according to the gender of the speaker. The results are shown in Table 1, where a number of different text classification algorithms were applied to classify conversation sides. 14969 conversation sides are used for training and 3738 sides are used for testing. No feature selection was performed; in all classifiers a vocabulary of all unigrams or bigrams with 5 or more occurrences is used (20513 for unigrams, 306779 for bigrams). For all algorithms, except Naive Bayes, we have used the tf·idf representation. The Rainbow toolkit (McCallum, 1996) was used for training the classifiers. Results show that differences between genders are clear and the best results are obtained by using SVMs. The fact that classification performance is significantly above chance for a variety of learning methods shows that 437 lexical differences between genders are inherent in the data and not in a specific choice of classifier. From Table 1 we also observe that using bigrams is consistently better than unigrams, despite the fact that the number of unique terms rises from ∼20K to ∼300K. This suggests that gender differences become even more profound for phrases, a finding similar to (Doddington, 2001) for speaker differences. Table 1: Classification accuracy of different learning methods for the task of classifying the transcript of a conversation side according to the gender male/female - of the speaker. Unigrams Bigrams Rocchio 76.3 86.5 Naive Bayes 83.0 89.2 MaxEnt 85.6 90.3 SVM 88.6 92.5 4.2 Does the gender of a conversation side influence lexical usage of the other conversation side? Each conversation always consists of two people talking to each other. Up to this point, we have only attempted to analyze a conversation side in isolation, i.e. without using transcriptions from the other side. In this subsection, we attempt to assess the degree to which, if any, the gender of one speaker influences the language of the other speaker. In the first experiment, instead of defining two categories we define four; the Cartesian product of the gender of the current speaker and the gender of the other speaker. These categories are symbolized with two letters: the first characterizing the gender of the current speaker and the second the gender of the other speaker, i.e. FF, FM, MF, MM. 
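A toy sketch of this four-way labeling follows; the record layout is hypothetical and only illustrates how the (speaker, partner) gender pair is read as a single label.

```python
# Toy illustration of the four-way labeling: the label is the Cartesian product
# of the speaker's gender and the conversational partner's gender, read as
# (self, other), e.g. "FM" = a female speaker talking to a male.
def side_label(self_gender: str, other_gender: str) -> str:
    assert self_gender in ("F", "M") and other_gender in ("F", "M")
    return self_gender + other_gender

conversation = {"A": "F", "B": "M"}   # hypothetical genders of the two sides
print(side_label(conversation["A"], conversation["B"]))   # "FM" for side A
print(side_label(conversation["B"], conversation["A"]))   # "MF" for side B
```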
The task remains the same: given the transcript of a conversation side, classify it according to the appropriate category. This is a task much harder than the binary classification we had in subsection 4.1, because given only the transcript of a conversation side we must make inferences about the gender of the current as well as the other conversation side. We have used SVMs as the learning method. In their basic formulation, SVMs are binary classifiers (although there has been recent work on multi-class SVMs). We followed the original binary formulation and converted the 4-class problem to 6 2-class problems. The final decision is taken by voting of the individual systems. The confusion matrix of the 4-way classification is shown in Table 2. Table 2: Confusion matrix for 4-way classification of gender of both sides using transcripts from one side. Unigrams are used as features, SVMs as classification method. Each row represents the true category and each column the hypothesized category. FF FM MF MM F-measure FF 1447 30 40 65 .778 FM 456 27 43 77 .074 MF 167 25 104 281 .214 MM 67 44 210 655 .638 The results show that although two of the four categories, FF and MM, are quite robustly detected the other two, FM and MF, are mostly confused with FF and MM respectively. These results can be mapped to single gender detection, giving accuracy of 85.9% for classifying the gender of the given transcript (as in Table 1) and 68.5% for classifying the gender of the conversational partner. The accuracy of 68.5% is higher than chance (57.8%) showing that genders alter their linguistic patterns depending on the gender of their conversational partner. In the next experiment we design two binary classifiers. In the first classifier, the task is to correctly classify FF vs. MM transcripts, and in the second classifier the task is to classify FM vs. MF transcripts. Therefore, we attempt to classify the gender of a speaker given knowledge of whether the conversation is same-gender or cross-gender. For both classifiers 4526 sides were used for training equally divided among each class. 2558 sides were used for testing of the FF-MM classifier and 1180 sides for the FM-MF classifier. The results are shown in Table 3. It is clear from Table 3 that there is a significant difference in performance between the FF-MM and FM-MF classifiers, suggesting that people alter their linguistic patterns depending on the gender of the person they are talking to. In same-gender conversations, almost perfect accuracy is reached, indicating that the linguistic patterns of the two genders be438 Table 3: Classification accuracies in same-gender and cross-gender conversations. SVMs are used as the classification method; no feature selection is applied. Unigrams Bigrams FF-MM 98.91 99.49 FM-MF 69.15 78.90 come very distinct. In cross-gender conversations the differences become less prominent since classification accuracy drops compared to same-gender conversations. This result, however, does not reveal how this convergence of linguistic patterns is achieved. Is it the case that the convergence is attributed to one of the genders, for example males attempting to match the patterns of females, or is it collectively constructed? To answer this question, we can examine the classification performance of two other binary classifiers FF vs. FM and MM vs. MF. The results are shown in Table 4. In both classifiers 4608 conversation sides are used for training, equally divided in each class. 
The number of sides used for testing is 989 and 689 for the FF-FM and MM-MF classifier respectively. Table 4: Classifying the gender of speaker B given only the transcript of speaker A. SVMs are used as the classification method; no feature selection is applied. Unigrams Bigrams FF-FM 57.94 59.66 MM-MF 60.38 59.80 The results in Table 4 suggest that both genders equally alter their linguistic patterns to match the opposite gender. It is interesting to see that the gender of speaker B can be detected better than chance given only the transcript and gender of speaker A. The results are better than chance at the 0.0005 significance level. 4.3 Are some features more indicative of gender than other? Having shown that gender lexical differences are prominent enough to classify each speaker according to gender quite robustly, another question is whether the high classification accuracies can be attributed to a small number of features or are rather the cumulative effect of a high number of them. In Table 5 we apply the two feature selection criteria that were described in 3. Table 5: Effect of feature selection criteria on gender classification using SVM as the learning method. Horizontal axis refers to the fraction of the original vocabulary size (∼20K for unigrams, ∼300K for bigrams) that was used. 1.0 0.7 0.4 0.1 0.03 KL 1-gram 88.6 88.8 87.8 86.3 85.6 2-gram 92.5 92.6 92.2 91.9 90.3 IG 1-gram 88.6 88.5 88.9 87.6 87.0 2-gram 92.5 92.4 92.6 91.8 90.8 The results of Table 5 show that lexical differences between genders are not isolated in a small set of words. The best results are achieved with 40% (IG) and 70% (KL) of the features, using fewer features steadily degrades the performance. Using the 5000 least discriminative unigrams and Naive Bayes as the classification method resulted in 58.4% classification accuracy which is not statistically better than chance (this is the test set of Tables 1 and 2 not of Table 4) . Using the 15000 least useful unigrams resulted in a classification accuracy of 66.4%, which shows that the number of irrelevant features is rather small, about 5K features. It is also instructive to see which features are most discriminative for each gender. The features that when present are most indicative of each gender (positive features) are shown in Table 6. They are sorted using the KL distance and dropping the summation over both genders in equation (2). Looking at the top 2000 features for each number we observed that a number of swear words appear as most discriminative for males and family-relation terms are often associated with females. For example the following words are in the top 2000 (out of 20513) most useful features for males shit, bullshit, shitty, fuck, fucking, fucked, bitching, bastards, ass, asshole, sucks, sucked, suck, sucker, damn, goddamn, damned. The following words are in the top 2000 features for females children, grandchild, 439 Table 6: The 10 most discriminative features for each gender according to KL distance. Words higher in the list are more discriminative. Male Female dude husband shit husband’s fucking refunding wife goodness wife’s boyfriend matt coupons steve crafts bass linda ben gosh fuck cute child, grandchildren, childhood, childbirth, kids, grandkids, son, grandson, daughter, granddaughter, boyfriend, marriage, mother, grandmother. It is also interesting to note that a number of nonlexical tokens are strongly associated with a certain gender. 
For example, [laughter] and acknowledgments/backchannels such as uh-huh,uhuh were in the top 2000 features for females. On the other hand, filled pauses such as uh were strong male indicators. Our analysis also reveals that a high number of useful features are names. A possible explanation is that people usually introduce themselves at the beginning of the conversation. In the top 30 words per gender, names represent over half of the words for males and nearly a quarter for females. Nearly a third were family-relations words for females, and 17 When examining cross-gender conversations, the discriminative words were quite substantially different. We can quantify the degree of change by measuring KLSG(w) −KLCG(w) where KLSG(w) is the KL measure of word w for same-gender conversations. The analysis reveals that swear terms are highly associated with male-only conversations, while family-relation words are highly associated with female-only conversations. From the traditional sociolinguistic perspective, these methods offer a way of discovering rather than testing words or phrases that have distinct usage between genders. For example, in a recent paper (Kiesling, in press) the word dude is analyzed as a male-to-male indicator. In our work, the word dude emerged as a male feature. As another example, our observation that some acknowledgments and backchannels (uh-huh) are more common for females than males while the reverse is true for filled pauses asserts a popular theory in sociolinguistics that males assume a more dominant role than females in conversations (Coates, 1997). Males tend to hold the floor more than women (more filled pauses) and females tend to be more responsive (more acknowledgments/backchannels). 4.4 Are gender-discriminative features content-bearing words? Do the most gender-discriminative words contribute to the topic of the conversation, or are they simple fill-in words with no content? Since each conversation is labeled with one of 40 possible topics, we can rank features with IG or KL using topics instead of genders as categories. In fact, this is the standard way of performing feature selection for text classification. We can then compare the performance of classifying conversations to topics using the top-N features according to the gender or topic ranking. The results are shown in Table 7. Table 7: Classification accuracies using topic- and gender-discriminative words, sorted using the information gain criterion. When randomly selecting 5000 features, 10 independent runs were performed and numbers reported are mean and standard deviation. Using the bottom 5000 topic words resulted in chance performance (∼5.0) Top 5K Bottom 5K Random 5K Gender ranking 78.51 66.72 74.99±2.2 Topic ranking 87.72 74.99±2.2 From Table 7 we can observe that genderdiscriminative words are clearly not the most relevant nor the most irrelevant features for topic classification. They are slightly more topic-relevant features than topic-irrelevant but not by a significant margin. The bottom 5000 features for gender discrimination are more strongly topic-irrelevant words. These results show that gender linguistic differences are not merely isolated in a set of words that 440 would function as markers of gender identity but are rather closely intertwined with semantics. We attempted to improve topic classification by training gender-dependent topic models but we did not observe any gains. 4.5 Can gender lexical differences be exploited to improve automatic speech recognition? 
Are the observed gender linguistic differences valuable from an engineering perspective as well? In other words, can a natural language processing task benefit from modeling these differences? In this subsection, we train gender-dependent language models and compare their perplexities with standard baselines. An advantage of using gender information for automatic speech recognition is that it can be robustly detected using acoustic features. In Tables 8 and 9 the perplexities of different genderdependent language models are shown. The SRILM toolkit (Stolcke, 2002) was used for training the language models using Kneser-Ney smoothing (Kneser and Ney, 1987). The perplexities reported include the end-of-turn as a separate token. 2300 conversation sides are used for training each one of {FF,FM,MF,MM} models of Table 8, while 7670 conversation sides are used for training each one of {F,M} models of Table 9. In both tables, the same 1678 sides are used for testing. Table 8: Perplexity of gender-dependent bigram language models. Four gender categories are used. Each column has the perplexities for a given test set, each row for a train set. FF FM MF MM FF 85.3 91.1 96.5 99.9 FM 85.7 90.0 94.5 97.5 MF 87.8 91.4 93.3 95.4 MM 89.9 93.1 94.1 95.2 ALL 82.1 86.3 89.8 91.7 In Tables 8 and 9 we observe that we get lower perplexities in matched than mismatched conditions in training and testing. This is another way to show that different data do exhibit different properties. However, the best results are obtained by pooling all the data and training a single language model. Therefore, despite the fact there are different modes, Table 9: Perplexity of gender-dependent bigram language models. Two gender categories are used. Each column has the perplexities for a given test set, each row for a train set. F M F 82.8 94.2 M 86.0 90.6 ALL 81.8 89.5 the benefit of more training data outweighs the benefit of gender-dependent models. Interpolating ALL with F and ALL with M resulted in insignificant improvements (81.6 for F and 89.3 for M). 5 Conclusions We have presented evidence of linguistic differences between genders using a large corpus of telephone conversations. We have approached the issue from a purely computational perspective and have shown that differences are profound enough that we can classify the transcript of a conversation side according to the gender of the speaker with accuracy close to 93%. Our computational tools have allowed us to quantitatively show that the gender of one speaker influences the linguistic patterns of the other speaker. Specifically, classifying same-gender conversations can be done with almost perfect accuracy, while evidence of some convergence of male and female linguistic patterns in cross-gender conversations was observed. An analysis of the features revealed that the most characteristic features for males are swear words while for females are family-relation words. Leveraging these differences in simple gender-dependent language models is not a win, but this does not imply that more sophisticated language model training methods cannot help. For example, instead of conditioning every word in the vocabulary on gender we can choose to do so only for the top-N, determined by KL or IG. The probability estimates for the rest of the words will be tied for both genders. Future work will examine empirical differences in other features such as dialog acts or turntaking. 441 References C. Cieri, D. Miller, and K. Walker. 2004. 
The Fisher corpus: a resource for the next generations of speech-to-text. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC), pages 69–71. J. Coates, editor. 1997. Language and Gender: A Reader. Blackwell Publishers. G. Doddington. 2001. Speaker recognition based on idiolectal differences between speakers. In Proceedings of the 7th European Conference on Speech Communication and Technology (Eurospeech 2001), pages 2251–2254. P. Eckert and S. McConnell-Ginet, editors. 2003. Language and Gender. Cambridge University Press. G. Forman. 2003. An extensive empirical study of feature selection metrics for text classification. Journal of Machine Learning Research, 3:1289–1305. S. Kiesling. in press. Dude. American Speech. R. Kneser and H. Ney. 1987. Improved backing-off for m-gram language modeling. In Proc. Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pages 181–184. M. Koppel, S. Argamon, and A.R. Shimoni. 2002. Automatically categorizing written texts by author gender. Literary and Linguistic Computing, 17(4):401–412. A. McCallum. 1996. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/~mccallum/bow. S. Singh. 2001. A pilot study on gender differences in conversational speech on lexical richness measures. Literary and Linguistic Computing, 16(3):251–264. E. Stamatatos, N. Fakotakis, and G. Kokkinakis. 2000. Automatic text categorization in terms of genre and author. Computational Linguistics, 26:471–495. A. Stolcke. 2002. An extensible language modeling toolkit. In Proc. Intl. Conf. on Spoken Language Processing (ICSLP), pages 901–904.
Proceedings of the 43rd Annual Meeting of the ACL, pages 443–450, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Position Specific Posterior Lattices for Indexing Speech Ciprian Chelba and Alex Acero Microsoft Research Microsoft Corporation One Microsoft Way Redmond, WA 98052 {chelba, alexac}@microsoft.com Abstract The paper presents the Position Specific Posterior Lattice, a novel representation of automatic speech recognition lattices that naturally lends itself to efficient indexing of position information and subsequent relevance ranking of spoken documents using proximity. In experiments performed on a collection of lecture recordings — MIT iCampus data — the spoken document ranking accuracy was improved by 20% relative over the commonly used baseline of indexing the 1-best output from an automatic speech recognizer. The Mean Average Precision (MAP) increased from 0.53 when using 1-best output to 0.62 when using the new lattice representation. The reference used for evaluation is the output of a standard retrieval engine working on the manual transcription of the speech collection. Albeit lossy, the PSPL lattice is also much more compact than the ASR 3-gram lattice from which it is computed — which translates in reduced inverted index size as well — at virtually no degradation in word-error-rate performance. Since new paths are introduced in the lattice, the ORACLE accuracy increases over the original ASR lattice. 1 Introduction Ever increasing computing power and connectivity bandwidth together with falling storage costs result in an overwhelming amount of data of various types being produced, exchanged, and stored. Consequently, search has emerged as a key application as more and more data is being saved (Church, 2003). Text search in particular is the most active area, with applications that range from web and intranet search to searching for private information residing on one’s hard-drive. Speech search has not received much attention due to the fact that large collections of untranscribed spoken material have not been available, mostly due to storage constraints. As storage is becoming cheaper, the availability and usefulness of large collections of spoken documents is limited strictly by the lack of adequate technology to exploit them. Manually transcribing speech is expensive and sometimes outright impossible due to privacy concerns. This leads us to exploring an automatic approach to searching and navigating spoken document collections. Our current work aims at extending the standard keyword search paradigm from text documents to spoken documents. In order to deal with limitations of current automatic speech recognition (ASR) technology we propose an approach that uses recognition lattices — which are considerably more accurate than the ASR 1-best output. A novel contribution is the use of a representation of ASR lattices which retains only position information for each word. The Position Specific Posterior 443 Lattice (PSPL) is a lossy but compact representation of a speech recognition lattice that lends itself to the standard inverted indexing done in text search — which retains the position as well as other contextual information for each hit. Since our aim is to bridge the gap between text and speech -grade search technology, we take as our reference the output of a text retrieval engine that runs on the manual transcription. 
The rest of the paper is structured as follows: in the next section we review previous work in the area, followed by Section 3 which presents a brief overview of state-of-the-art text search technology. We then introduce the PSPL representation in Section 4 and explain its use for indexing and searching speech in the next section. Experiments evaluating ASR accuracy on iCampus, highlighting empirical aspects of PSPL lattices as well as search accuracy results are reported in Section 6. We conclude by outlining future work. 2 Previous Work The main research effort aiming at spoken document retrieval (SDR) was centered around the SDRTREC evaluations (Garofolo et al., 2000), although there is a large body of work in this area prior to the SDR-TREC evaluations, as well as more recent work outside this community. Most notable are the contributions of (Brown et al., 1996) and (James, 1995). One problem encountered in work published prior or outside the SDR-TREC community is that it doesn’t always evaluate performance from a document retrieval point of view — using a metric like Mean Average Precision (MAP) or similar, see trec_eval (NIST, www) — but rather uses wordspotting measures, which are more technologyrather than user- centric. We believe that ultimately it is the document retrieval performance that matters and the word-spotting accuracy is just an indicator for how a SDR system might be improved. The TREC-SDR 8/9 evaluations — (Garofolo et al., 2000) Section 6 — focused on using Broadcast News speech from various sources: CNN, ABC, PRI, Voice of America. About 550 hrs of speech were segmented manually into 21,574 stories each comprising about 250 words on the average. The approximate manual transcriptions — closed captioning for video — used for SDR system comparison with text-only retrieval performance had fairly high WER: 14.5% video and 7.5% radio broadcasts. ASR systems tuned to the Broadcast News domain were evaluated on detailed manual transcriptions and were able to achieve 15-20% WER, not far from the accuracy of the approximate manual transcriptions. In order to evaluate the accuracy of retrieval systems, search queries —“topics” — along with binary relevance judgments were compiled by human assessors. SDR systems indexed the ASR 1-best output and their retrieval performance — measured in terms of MAP — was found to be flat with respect to ASR WER variations in the range of 15%-30%. Simply having a common task and an evaluation-driven collaborative research effort represents a huge gain for the community. There are shortcomings however to the SDR-TREC framework. It is well known that ASR systems are very brittle to mismatched training/test conditions and it is unrealistic to expect error rates in the range 10-15% when decoding speech mismatched with respect to the training data. It is thus very important to consider ASR operating points which have higher WER. Also, the out-of-vocabulary (OOV) rate was very low, below 1%. Since the “topics”/queries were long and stated in plain English rather than using the keyword search paradigm, the query-side OOV (Q-OOV) was very low as well, an unrealistic situation in practice. (Woodland et al., 2000) evaluates the effect of Q-OOV rate on retrieval performance by reducing the ASR vocabulary size such that the Q-OOV rate comes closer to 15%, a much more realistic figure since search keywords are typically rare words. They show severe degradation in MAP performance — 50% relative, from 44 to 22. 
The most common approach to dealing with OOV query words is to represent both the query and the spoken document using sub-word units — typically phones or phone n-grams — and then match sequences of such units. In his thesis, (Ng, 2000) shows the feasibility of sub-word SDR and advocates for tighter integration between ASR and IR technology. Similar conclusions are drawn by the excellent work in (Siegler, 1999). As pointed out in (Logan et al., 2002), word level 444 indexing and querying is still more accurate, were it not for the OOV problem. The authors argue in favor of a combination of word and sub-word level indexing. Another problem pointed out by the paper is the abundance of word-spotting false-positives in the sub-word retrieval case, somewhat masked by the MAP measure. Similar approaches are taken by (Seide and Yu, 2004). One interesting feature of this work is a twopass system whereby an approximate match is carried out at the document level after which the costly detailed phonetic match is carried out on only 15% of the documents in the collection. More recently, (Saraclar and Sproat, 2004) shows improvement in word-spotting accuracy by using lattices instead of 1-best. An inverted index from symbols — word or phone — to links allows to evaluate adjacency of query words but more general proximity information is harder to obtain — see Section 4. Although no formal comparison has been carried out, we believe our approach should yield a more compact index. Before discussing our architectural design decisions it is probably useful to give a brief presentation of a state-of-the-art text document retrieval engine that is using the keyword search paradigm. 3 Text Document Retrieval Probably the most widespread text retrieval model is the TF-IDF vector model (Baeza-Yates and RibeiroNeto, 1999). For a given query Q = q1 . . . qi . . . qQ and document Dj one calculates a similarity measure by accumulating the TF-IDF score wi,j for each query term qi, possibly weighted by a document specific weight: S(Dj, Q) = Q X i=1 wi,j wi,j = fi,j · idfi where fi,j is the normalized frequency of word qi in document Dj and the inverse document frequency for query term qi is idfi = log N ni where N is the total number of documents in the collection and ni is the number of documents containing qi. The main criticism to the TF-IDF relevance score is the fact that the query terms are assumed to be independent. Proximity information is not taken into account at all, e.g. whether the words LANGUAGE and MODELING occur next to each other or not in a document is not used for relevance scoring. Another issue is that query terms may be encountered in different contexts in a given document: title, abstract, author name, font size, etc. For hypertext document collections even more context information is available: anchor text, as well as other mark-up tags designating various parts of a given document being just a few examples. The TF-IDF ranking scheme completely discards such information although it is clearly important in practice. 3.1 Early Google Approach Aside from the use of PageRank for relevance ranking, (Brin and Page, 1998) also uses both proximity and context information heavily when assigning a relevance score to a given document — see Section 4.5.1 of (Brin and Page, 1998) for details. For each given query term qi one retrieves the list of hits corresponding to qi in document D. Hits can be of various types depending on the context in which the hit occurred: title, anchor text, etc. 
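To make the TF-IDF similarity above concrete, the following is a minimal sketch (not from the paper) of the score S(D_j, Q) with w_{i,j} = f_{i,j} * idf_i and idf_i = log(N / n_i). The toy collection, the whitespace tokenization, and the absence of document-specific weights are simplifying assumptions for illustration only.

```python
import math
from collections import Counter

def tf_idf_scores(docs, query_terms):
    """Score each document against the query with the TF-IDF vector model.

    docs: list of token lists; query_terms: list of query tokens.
    Returns a list of (doc_index, score) sorted by decreasing score.
    """
    n_docs = len(docs)
    # n_i: number of documents containing each query term
    doc_freq = {q: sum(1 for d in docs if q in d) for q in query_terms}
    scores = []
    for j, doc in enumerate(docs):
        counts = Counter(doc)
        length = len(doc)
        s = 0.0
        for q in query_terms:
            if doc_freq[q] == 0:
                continue  # term absent from the whole collection
            f_ij = counts[q] / length               # normalized term frequency
            idf = math.log(n_docs / doc_freq[q])    # inverse document frequency
            s += f_ij * idf
        scores.append((j, s))
    return sorted(scores, key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    docs = [
        "language modeling for speech recognition".split(),
        "indexing and retrieval of spoken documents".split(),
        "statistical language modeling and retrieval".split(),
    ]
    print(tf_idf_scores(docs, ["language", "modeling"]))
```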
Each type of hit has its own type-weight and the typeweights are indexed by type. For a single word query, their ranking algorithm takes the inner-product between the type-weight vector and a vector consisting of count-weights (tapered counts such that the effect of large counts is discounted) and combines the resulting score with PageRank in a final relevance score. For multiple word queries, terms co-occurring in a given document are considered as forming different proximity-types based on their proximity, from adjacent to “not even close”. Each proximity type comes with a proximity-weight and the relevance score includes the contribution of proximity information by taking the inner product over all types, including the proximity ones. 3.2 Inverted Index Of essence to fast retrieval on static document collections of medium to large size is the use of an inverted index. The inverted index stores a list of hits for each word in a given vocabulary. The hits are grouped by document. For each document, the list of hits for a given query term must include position — needed to evaluate counts of proximity types — 445 as well as all the context information needed to calculate the relevance score of a given document using the scheme outlined previously. For details, the reader is referred to (Brin and Page, 1998), Section 4. 4 Position Specific Posterior Lattices As highlighted in the previous section, position information is crucial for being able to evaluate proximity information when assigning a relevance score to a given document. In the spoken document case however, we are faced with a dilemma. On one hand, using 1-best ASR output as the transcription to be indexed is suboptimal due to the high WER, which is likely to lead to low recall — query terms that were in fact spoken are wrongly recognized and thus not retrieved. On the other hand, ASR lattices do have much better WER — in our case the 1-best WER was 55% whereas the lattice WER was 30% — but the position information is not readily available: it is easy to evaluate whether two words are adjacent but questions about the distance in number of links between the occurrences of two query words in the lattice are very hard to answer. The position information needed for recording a given word hit is not readily available in ASR lattices — for details on the format of typical ASR lattices and the information stored in such lattices the reader is referred to (Young et al., 2002). To simplify the discussion let’s consider that a traditional text-document hit for given word consists of just (document id, position). The occurrence of a given word in a lattice obtained from a given spoken document is uncertain and so is the position at which the word occurs in the document. The ASR lattices do contain the information needed to evaluate proximity information, since on a given path through the lattice we can easily assign a position index to each link/word in the normal way. Each path occurs with a given posterior probability, easily computable from the lattice, so in principle one could index soft-hits which specify (document id, position, posterior probability) for each word in the lattice. Since it is likely that s
(Figure 1 notation: predecessor states s_1, ..., s_i, ..., s_q connect to node n through links l_1, ..., l_i, ..., l_q with probabilities P(l_1), ..., P(l_i), ..., P(l_q).)
Figure 1: State Transitions more than one path contains the same word in the same position, one would need to sum over all possible paths in a lattice that contain a given word at a given position. A simple dynamic programming algorithm which is a variation on the standard forward-backward algorithm can be employed for performing this computation. The computation for the backward pass stays unchanged, whereas during the forward pass one needs to split the forward probability arriving at a given node n, αn, according to the length l — measured in number of links along the partial path that contain a word; null (ϵ) links are not counted when calculating path length — of the partial paths that start at the start node of the lattice and end at node n: αn[l] .= X π:end(π)=n,length(π)=l P(π) The backward probability βn has the standard definition (Rabiner, 1989). To formalize the calculation of the positionspecific forward-backward pass, the initialization, and one elementary forward step in the forward pass are carried out using Eq. (1), respectively — see Figure 1 for notation: αn[l + 1] = q X i=1 αsi[l + δ(li, ϵ)] · P(li) αstart[l] = ½1.0, l = 0 0.0, l ̸= 0 (1) The “probability” P(li) of a given link li is stored as a log-probability and commonly evaluated in ASR using: log P(li) = FLATw · [1/LMw · log PAM(li)+ log PLM(word(li)) −1/LMw · logPIP ] (2) 446 where log PAM(li) is the acoustic model score, log PLM(word(li)) is the language model score, LMw > 0 is the language model weight, logPIP > 0 is the “insertion penalty” and FLATw is a flattening weight. In N-gram lattices where N ≥2, all links ending at a given node n must contain the same word word(n), so the posterior probability of a given word w occurring at a given position l can be easily calculated using: P(w, l|LAT) = P n s.t. αn[l]·βn>0 αn[l]·βn βstart · δ(w, word(n)) The Position Specific Posterior Lattice (PSPL) is a representation of the P(w, l|LAT) distribution: for each position bin l store the words w along with their posterior probability P(w, l|LAT). 5 Spoken Document Indexing and Search Using PSPL Spoken documents rarely contain only speech. Often they have a title, author and creation date. There might also be a text abstract associated with the speech, video or even slides in some standard format. The idea of saving context information when indexing HTML documents and web pages can thus be readily used for indexing spoken documents, although the context information is of a different nature. As for the actual speech content of a spoken document, the previous section showed how ASR technology and PSPL lattices can be used to automatically convert it to a format that allows the indexing of soft hits — a soft index stores posterior probability along with the position information for term occurrences in a given document. 5.1 Speech Content Indexing Using PSPL Speech content can be very long. In our case the speech content of a typical spoken document was approximately 1 hr long; it is customary to segment a given speech file in shorter segments. A spoken document thus consists of an ordered list of segments. For each segment we generate a corresponding PSPL lattice. Each document and each segment in a given collection are mapped to an integer value using a collection descriptor file which lists all documents and segments. Each soft hit in our index will store the PSPL position and posterior probability. 5.2 Speech Content Relevance Ranking Using PSPL Representation Consider a given query Q = q1 . . . qi . . . 
qQ and a spoken document D represented as a PSPL. Our ranking scheme follows the description in Section 3.1. The words in the document D clearly belong to the ASR vocabulary V whereas the words in the query may be out-of-vocabulary (OOV). As argued in Section 2, the query-OOV rate is an important factor in evaluating the impact of having a finite ASR vocabulary on the retrieval accuracy. We assume that the words in the query are all contained in V; OOV words are mapped to UNK and cannot be matched in any document D. For all query terms, a 1-gram score is calculated by summing the PSPL posterior probability across all segments s and positions k. This is equivalent to calculating the expected count of a given query term qi according to the PSPL probability distribution P(wk(s)|D) for each segment s of document D. The results are aggregated in a common value S1−gram(D, Q): S(D, qi) = log " 1 + X s X k P(wk(s) = qi|D) # S1−gram(D, Q) = Q X i=1 S(D, qi) (3) Similar to (Brin and Page, 1998), the logarithmic tapering off is used for discounting the effect of large counts in a given document. Our current ranking scheme takes into account proximity in the form of matching N-grams present in the query. Similar to the 1-gram case, we calculate an expected tapered-count for each N-gram qi . . . qi+N−1 in the query and then aggregate the results in a common value SN−gram(D, Q) for each order N: S(D, qi . . . qi+N−1) = (4) log h 1 + P s P k QN−1 l=0 P(wk+l(s) = qi+l|D) i SN−gram(D, Q) = Q−N+1 X i=1 S(D, qi . . . qi+N−1) 447 The different proximity types, one for each Ngram order allowed by the query length, are combined by taking the inner product with a vector of weights. S(D, Q) = Q X N=1 wN · SN−gram(D, Q) (5) Only documents containing all the terms in the query are returned. In the current implementation the weights increase linearly with the N-gram order. Clearly, better weight assignments must exist, and as the hit types are enriched beyond using just Ngrams, the weights will have to be determined using machine learning techniques. It is worth noting that the transcription for any given segment can also be represented as a PSPL with exactly one word per position bin. It is easy to see that in this case the relevance scores calculated according to Eq. (3-4) are the ones specified by 3.1. 6 Experiments We have carried all our experiments on the iCampus corpus prepared by MIT CSAIL. The main advantages of the corpus are: realistic speech recording conditions — all lectures are recorded using a lapel microphone — and the availability of accurate manual transcriptions — which enables the evaluation of a SDR system against its text counterpart. 6.1 iCampus Corpus The iCampus corpus (Glass et al., 2004) consists of about 169 hours of lecture materials: 20 Introduction to Computer Programming Lectures (21.7 hours), 35 Linear Algebra Lectures (27.7 hours), 35 Electro-magnetic Physics Lectures (29.1 hours), 79 Assorted MIT World seminars covering a wide variety of topics (89.9 hours). Each lecture comes with a word-level manual transcription that segments the text into semantic units that could be thought of as sentences; word-level time-alignments between the transcription and the speech are also provided. The speech style is in between planned and spontaneous. The speech is recorded at a sampling rate of 16kHz (wide-band) using a lapel microphone. 
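Returning to the ranking formulas in Eqs. (3)-(5) above, the sketch below scores a query against a PSPL represented as a list of segments, each segment being a list of position bins that map words to posteriors P(w, l | LAT). The data structure, the toy posteriors, and the choice of weights w_N = N (the paper only states that the weights increase linearly with the N-gram order) are assumptions, and the requirement that a returned document contain all query terms is omitted for brevity.

```python
import math

def ngram_score(pspl_doc, query, n):
    """Expected tapered count of the query n-grams in a PSPL document.

    pspl_doc: list of segments; each segment is a list of position bins,
              each bin a dict mapping word -> posterior P(w, l | LAT).
    query:    list of query terms (assumed to be in-vocabulary).
    """
    total = 0.0
    for i in range(len(query) - n + 1):
        expected = 0.0
        for segment in pspl_doc:
            for k in range(len(segment) - n + 1):
                # product over the n consecutive position bins, as in Eq. (4)
                p = 1.0
                for l in range(n):
                    p *= segment[k + l].get(query[i + l], 0.0)
                expected += p
        total += math.log(1.0 + expected)   # logarithmic tapering
    return total

def pspl_relevance(pspl_doc, query):
    """Combine the N-gram proximity scores with linearly increasing weights."""
    return sum(n * ngram_score(pspl_doc, query, n)
               for n in range(1, len(query) + 1))

if __name__ == "__main__":
    # One segment with three position bins and toy posteriors.
    seg = [{"speech": 0.7, "beach": 0.3},
           {"recognition": 0.6, "wreck": 0.4},
           {"lattice": 0.9}]
    print(pspl_relevance([seg], ["speech", "recognition"]))
```

Restricting the result list to documents that contain every query term, as the paper does, would amount to wrapping this score in a containment check on the 1-gram hits.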
The speech was segmented at the sentence level based on the time alignments; each lecture is considered to be a spoken document consisting of a set of one-sentence long segments determined this way — see Section 5.1. The final collection consists of 169 documents, 66,102 segments and an average document length of 391 segments. We have then used a standard large vocabulary ASR system for generating 3-gram ASR lattices and PSPL lattices. The 3-gram language model used for decoding is trained on a large amount of text data, primarily newswire text. The vocabulary of the ASR system consisted of 110kwds, selected based on frequency in the training data. The acoustic model is trained on a variety of wide-band speech and it is a standard clustered tri-phone, 3-states-per-phone model. Neither model has been tuned in any way to the iCampus scenario. On the first lecture L01 of the Introduction to Computer Programming Lectures the WER of the ASR system was 44.7%; the OOV rate was 3.3%. For the entire set of lectures in the Introduction to Computer Programming Lectures, the WER was 54.8%, with a maximum value of 74% and a minimum value of 44%. 6.2 PSPL lattices We have then proceeded to generate 3-gram lattices and PSPL lattices using the above ASR system. Table 1 compares the accuracy/size of the 3-gram lattices and the resulting PSPL lattices for the first lecture L01. As it can be seen the PSPL represenLattice Type 3-gram PSPL Size on disk 11.3MB 3.2MB Link density 16.3 14.6 Node density 7.4 1.1 1-best WER 44.7% 45% ORACLE WER 26.4% 21.7% Table 1: Comparison between 3-gram and PSPL lattices for lecture L01 (iCampus corpus): node and link density, 1-best and ORACLE WER, size on disk tation is much more compact than the original 3gram lattices at a very small loss in accuracy: the 1-best path through the PSPL lattice is only 0.3% absolute worse than the one through the original 3gram lattice. As expected, the main reduction comes from the drastically smaller node density — 7 times smaller, measured in nodes per word in the reference transcription. Since the PSPL representation 448 introduces new paths compared to the original 3gram lattice, the ORACLE WER path — least errorful path in the lattice — is also about 20% relative better than in the original 3-gram lattice — 5% absolute. Also to be noted is the much better WER in both PSPL/3-gram lattices versus 1-best. 6.3 Spoken Document Retrieval Our aim is to narrow the gap between speech and text document retrieval. We have thus taken as our reference the output of a standard retrieval engine working according to one of the TF-IDF flavors, see Section 3. The engine indexes the manual transcription using an unlimited vocabulary. All retrieval results presented in this section have used the standard trec_eval package used by the TREC evaluations. The PSPL lattices for each segment in the spoken document collection were indexed as explained in 5.1. In addition, we generated the PSPL representation of the manual transcript and of the 1-best ASR output and indexed those as well. This allows us to compare our retrieval results against the results obtained using the reference engine when working on the same text document collection. 6.3.1 Query Collection and Retrieval Setup The missing ingredient for performing retrieval experiments are the queries. We have asked a few colleagues to issue queries against a demo shell using the index built from the manual transcription. 
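One plausible layout for the soft index described in Section 5.1 is sketched below, with one posting (document id, segment id, position, posterior) per PSPL entry. The Python data structures and the toy collection are assumptions for illustration, not the system's actual implementation.

```python
from collections import defaultdict

def build_soft_index(collection):
    """Build an inverted index of soft hits from PSPL lattices.

    collection: dict mapping doc_id -> list of segments, where each segment
                is a list of position bins (dict word -> posterior).
    Returns:    dict word -> list of (doc_id, segment_id, position, posterior).
    """
    index = defaultdict(list)
    for doc_id, segments in collection.items():
        for seg_id, segment in enumerate(segments):
            for pos, bin_ in enumerate(segment):
                for word, posterior in bin_.items():
                    index[word].append((doc_id, seg_id, pos, posterior))
    return index

if __name__ == "__main__":
    collection = {
        0: [[{"linear": 0.8, "in": 0.2}, {"algebra": 0.9}]],
        1: [[{"computer": 0.7}, {"programming": 0.6, "program": 0.3}]],
    }
    index = build_soft_index(collection)
    print(index["algebra"])   # [(0, 0, 1, 0.9)]
```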
The only information1 provided to them was the same as the summary description in Section 6.1. We have collected 116 queries in this manner. The query out-of-vocabulary rate (Q-OOV) was 5.2% and the average query length was 1.97 words. Since our approach so far does not index sub-word units, we cannot deal with OOV query words. We have thus removed the queries which contained OOV words — resulting in a set of 96 queries — which clearly biases the evaluation. On the other hand, the results on both the 1-best and the lattice indexes are equally favored by this. 1Arguably, more motivated users that are also more familiar with the document collection would provide a better query collection framework 6.3.2 Retrieval Experiments We have carried out retrieval experiments in the above setup. Indexes have been built from: • trans: manual transcription filtered through ASR vocabulary • 1-best: ASR 1-best output • lat: PSPL lattices. No tuning of retrieval weights, see Eq. (5), or link scoring weights, see Eq. (2) has been performed. Table 2 presents the results. As a sanity check, the retrieval results on transcription — trans — match almost perfectly the reference. The small difference comes from stemming rules that the baseline engine is using for query enhancement which are not replicated in our retrieval engine. The results on lattices (lat) improve significantly on (1-best) — 20% relative improvement in mean average precision (MAP). trans 1-best lat # docs retrieved 1411 3206 4971 # relevant docs 1416 1416 1416 # rel retrieved 1411 1088 1301 MAP 0.99 0.53 0.62 R-precision 0.99 0.53 0.58 Table 2: Retrieval performance on indexes built from transcript, ASR 1-best and PSPL lattices, respectively 6.3.3 Why Would This Work? A legitimate question at this point is: why would anyone expect this to work when the 1-best ASR accuracy is so poor? In favor of our approach, the ASR lattice WER is much lower than the 1-best WER, and PSPL have even lower WER than the ASR lattices. As reported in Table 1, the PSPL WER for L01 was 22% whereas the 1-best WER was 45%. Consider matching a 2-gram in the PSPL —the average query length is indeed 2 wds so this is a representative situation. A simple calculation reveals that it is twice — (1 −0.22)2/(1 −0.45)2 = 2 — more likely to find a query match in the PSPL than in the 1-best — if the query 2-gram was indeed spoken at that position. According to this heuristic argument one could expect a dramatic increase in Recall. Another aspect 449 is that people enter typical N-grams as queries. The contents of adjacent PSPL bins are fairly random in nature so if a typical 2-gram is found in the PSPL, chances are it was actually spoken. This translates in little degradation in Precision. 7 Conclusions and Future work We have developed a new representation for ASR lattices — the Position Specific Posterior Lattice (PSPL) — that lends itself naturally to indexing speech content and integrating state-of-the-art IR techniques that make use of proximity and context information. In addition, the PSPL representation is also much more compact at no loss in WER — both 1-best and ORACLE. The retrieval results obtained by indexing the PSPL and performing adequate relevance ranking are 20% better than when using the ASR 1-best output, although still far from the performance achieved on text data. The experiments presented in this paper are truly a first step. We plan to gather a much larger number of queries. 
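A quick numerical check of the heuristic argument in Section 6.3.3, assuming independent word errors at the WERs reported for lecture L01:

```python
# Probability that both words of a spoken 2-gram survive recognition,
# assuming independent errors at the observed word error rates.
pspl_wer, onebest_wer = 0.22, 0.45            # PSPL vs. 1-best WER on L01
p_match_pspl = (1 - pspl_wer) ** 2            # about 0.61
p_match_1best = (1 - onebest_wer) ** 2        # about 0.30
print(round(p_match_pspl / p_match_1best, 2)) # about 2.01, i.e. twice as likely
```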
The binary relevance judgments — a given document is deemed either relevant or irrelevant to a given query in the reference “ranking” — assumed by the standard trec_eval tool are also a serious shortcoming; a distance measure between rankings of documents needs to be used. Finally, using a baseline engine that in fact makes use of proximity and context information is a priority if such information is to be used in our algorithms. 8 Acknowledgments We would like to thank Jim Glass and T J Hazen at MIT for providing the iCampus data. We would also like to thank Frank Seide for offering valuable suggestions and our colleagues for providing queries. References Ricardo Baeza-Yates and Berthier Ribeiro-Neto, 1999. Modern Information Retrieval, chapter 2, pages 27– 30. Addison Wesley, New York. Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1–7):107–117. M. G. Brown, J. T. Foote, G. J. F. Jones, K. Sp¨arck Jones, and S. J. Young. 1996. Open-vocabulary speech indexing for voice and video mail retrieval. In Proc. ACM Multimedia 96, pages 307–316, Boston, November. Kenneth Ward Church. 2003. Speech and language processing: Where have we been and where are we going? In Proceedings of Eurospeech, Geneva, Switzerland. J. Garofolo, G. Auzanne, and E. Voorhees. 2000. The TREC spoken document retrieval track: A success story. In Proceedings of the Recherche d’Informations Assiste par Ordinateur: ContentBased Multimedia Information Access Conference, April. James Glass, T. J. Hazen, Lee Hetherington, and Chao Wang. 2004. Analysis and processing of lecture audio data: Preliminary investigations. In HLT-NAACL 2004 Workshop: Interdisciplinary Approaches to Speech Indexing and Retrieval, pages 9–12, Boston, Massachusetts, May. David Anthony James. 1995. The Application of Classical Information Retrieval Techniques to Spoken Documents. Ph.D. thesis, University of Cambridge, Downing College. B. Logan, P. Moreno, and O. Deshmukh. 2002. Word and sub-word indexing approaches for reducing the effects of OOV queries on spoken audio. In Proc. HLT. Kenney Ng. 2000. Subword-Based Approaches for Spoken Document Retrieval. Ph.D. thesis, Massachusetts Institute of Technology. NIST. www. The TREC evaluation package. In wwwnlpir.nist.gov/projects/trecvid/trecvid.tools/trec eval. L. R. Rabiner. 1989. A tutorial on hidden markov models and selected applications in speech recognition. In Proceedings IEEE, volume 77(2), pages 257–285. Murat Saraclar and Richard Sproat. 2004. Lattice-based search for spoken utterance retrieval. In HLT-NAACL 2004, pages 129–136, Boston, Massachusetts, May. F. Seide and P. Yu. 2004. Vocabulary-independent search in spontaneous speech. In Proceedings of ICASSP, Montreal, Canada. Matthew A. Siegler. 1999. Integration of Continuous Speech Recognition and Information Retrieval for Mutually Optimal Performance. Ph.D. thesis, Carnegie Mellon University. P. C. Woodland, S. E. Johnson, P. Jourlin, and K. Sp¨arck Jones. 2000. Effects of out of vocabulary words in spoken document retrieval. In Proceedings of SIGIR, pages 372–374, Athens, Greece. Steve Young, Gunnar Evermann, Thomas Hain, Dan Kershaw, Gareth Moore, Julian Odell, Dan Povey Dave Ollason, Valtcho Valtchev, and Phil Woodland. 2002. The HTK Book. Cambridge University Engineering Department, Cambridge, England, December. 450 | 2005 | 55 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 451–458, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Using Conditional Random Fields For Sentence Boundary Detection In Speech Yang Liu ICSI, Berkeley [email protected] Andreas Stolcke Elizabeth Shriberg SRI and ICSI stolcke,[email protected] Mary Harper Purdue University [email protected] Abstract Sentence boundary detection in speech is important for enriching speech recognition output, making it easier for humans to read and downstream modules to process. In previous work, we have developed hidden Markov model (HMM) and maximum entropy (Maxent) classifiers that integrate textual and prosodic knowledge sources for detecting sentence boundaries. In this paper, we evaluate the use of a conditional random field (CRF) for this task and relate results with this model to our prior work. We evaluate across two corpora (conversational telephone speech and broadcast news speech) on both human transcriptions and speech recognition output. In general, our CRF model yields a lower error rate than the HMM and Maxent models on the NIST sentence boundary detection task in speech, although it is interesting to note that the best results are achieved by three-way voting among the classifiers. This probably occurs because each model has different strengths and weaknesses for modeling the knowledge sources. 1 Introduction Standard speech recognizers output an unstructured stream of words, in which the important structural features such as sentence boundaries are missing. Sentence segmentation information is crucial and assumed in most of the further processing steps that one would want to apply to such output: tagging and parsing, information extraction, summarization, among others. 1.1 Sentence Segmentation Using HMM Most prior work on sentence segmentation (Shriberg et al., 2000; Gotoh and Renals, 2000; Christensen et al., 2001; Kim and Woodland, 2001; NISTRT03F, 2003) have used an HMM approach, in which the word/tag sequences are modeled by Ngram language models (LMs) (Stolcke and Shriberg, 1996). Additional features (mostly related to speech prosody) are modeled as observation likelihoods attached to the N-gram states of the HMM (Shriberg et al., 2000). Figure 1 shows the graphical model representation of the variables involved in the HMM for this task. Note that the words appear in both the states1 and the observations, such that the word stream constrains the possible hidden states to matching words; the ambiguity in the task stems entirely from the choice of events. This architecture differs from the one typically used for sequence tagging (e.g., part-of-speech tagging), in which the “hidden” states represent only the events or tags. Empirical investigations have shown that omitting words in the states significantly degrades system performance for sentence boundary detection (Liu, 2004). The observation probabilities in the HMM, implemented using a decision tree classifier, capture the probabilities of generating the prosodic features 1In this sense, the states are only partially “hidden”. 
451 P (F i jE i ; W i ).2 An N-gram LM is used to calculate the transition probabilities: P (W i E i jW E : : : W i E i ) = P (W i jW E : : : W i E i ) P (E i jW E : : : W i E i E i ) In the HMM, the forward-backward algorithm is used to determine the event with the highest posterior probability for each interword boundary: ^ E i = arg max E i P (E i jW ; F ) (1) The HMM is a generative modeling approach since it describes a stochastic process with hidden variables (sentence boundary) that produces the observable data. This HMM approach has two main drawbacks. First, standard training methods maximize the joint probability of observed and hidden events, as opposed to the posterior probability of the correct hidden variable assignment given the observations, which would be a criterion more closely related to classification performance. Second, the N-gram LM underlying the HMM transition model makes it difficult to use features that are highly correlated (such as words and POS labels) without greatly increasing the number of model parameters, which in turn would make robust estimation difficult. More details about using textual information in the HMM system are provided in Section 3. 1.2 Sentence Segmentation Using Maxent A maximum entropy (Maxent) posterior classification method has been evaluated in an attempt to overcome some of the shortcomings of the HMM approach (Liu et al., 2004; Huang and Zweig, 2002). For a boundary position i, the Maxent model takes the exponential form: P (E i jT i ; F i ) = Z (T i ; F i ) e P k k g k (E i ;T i ;F i ) (2) where Z (T i ; F i ) is a normalization term and T i represents textual information. The indicator functions g k (E i ; T i ; F i ) correspond to features defined over events, words, and prosody. The parameters in 2In the prosody model implementation, we ignore the word identity in the conditions, only using the timing or word alignment information. W
Figure 1: A graphical model of HMM for the sentence boundary detection problem. Only one word+event pair is depicted in each state, but in a model based on N-grams, the previous N tokens would condition the transition to the next state. O are observations consisting of words W and prosodic features F, and E are sentence boundary events. Maxent are chosen to maximize the conditional likelihood Q i P (E i jT i ; F i ) over the training data, better matching the classification accuracy metric. The Maxent framework provides a more principled way to combine the largely correlated textual features, as confirmed by the results of (Liu et al., 2004); however, it does not model the state sequence. A simple combination of the results from the Maxent and HMM was found to improve upon the performance of either model alone (Liu et al., 2004) because of the complementary strengths and weaknesses of the two models. An HMM is a generative model, yet it is able to model the sequence via the forward-backward algorithm. Maxent is a discriminative model; however, it attempts to make decisions locally, without using sequential information. A conditional random field (CRF) model (Lafferty et al., 2001) combines the benefits of the HMM and Maxent approaches. Hence, in this paper we will evaluate the performance of the CRF model and relate the results to those using the HMM and Maxent approaches on the sentence boundary detection task. The rest of the paper is organized as follows. Section 2 describes the CRF model and discusses how it differs from the HMM and Maxent models. Section 3 describes the data and features used in the models to be compared. Section 4 summarizes the experimental results for the sentence boundary detection task. Conclusions and future work appear in Section 5. 452 2 CRF Model Description A CRF is a random field that is globally conditioned on an observation sequence O. CRFs have been successfully used for a variety of text processing tasks (Lafferty et al., 2001; Sha and Pereira, 2003; McCallum and Li, 2003), but they have not been widely applied to a speech-related task with both acoustic and textual knowledge sources. The top graph in Figure 2 is a general CRF model. The states of the model correspond to event labels E. The observations O are composed of the textual features, as well as the prosodic features. The most likely event sequence ^ E for the given input sequence (observations) O is ^ E = arg max E e P k k G k (E ;O ) Z (O ) (3) where the functions G are potential functions over the events and the observations, and Z is the normalization term: Z (O ) = X E e P k k G k (E ;O ) (4) Even though a CRF itself has no restriction on the potential functions G k (E ; O ), to simplify the model (considering computational cost and the limited training set size), we use a first-order CRF in this investigation, as at the bottom of Figure 2. In this model, an observation O i (consisting of textual features T i and prosodic features F i) is associated with a state E i. The model is trained to maximize the conditional log-likelihood of a given training set. Similar to the Maxent model, the conditional likelihood is closely related to the individual event posteriors used for classification, enabling this type of model to explicitly optimize discrimination of correct from incorrect labels. 
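For a first-order CRF, the argmax in Eq. (3) can be computed with Viterbi dynamic programming, as noted just below. The following sketch is not the Mallet implementation used in the paper; it assumes that the weighted feature sums have already been folded into per-position state scores and a label-transition score matrix (both hypothetical inputs here), and it ignores the normalizer Z(O), which does not affect the argmax.

```python
import numpy as np

def crf_viterbi(state_scores, trans_scores):
    """Most likely label sequence for a first-order (linear-chain) CRF.

    state_scores: (T, K) array; entry [t, k] is the weighted feature sum for
                  label k at position t given the observation O_t.
    trans_scores: (K, K) array of weights for label transitions k_prev -> k.
    Returns the argmax label sequence of length T.
    """
    T, K = state_scores.shape
    delta = np.zeros((T, K))
    backptr = np.zeros((T, K), dtype=int)
    delta[0] = state_scores[0]
    for t in range(1, T):
        # candidate score of ending in label k having come from label k_prev
        cand = delta[t - 1][:, None] + trans_scores + state_scores[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        delta[t] = cand.max(axis=0)
    # trace back the best path
    path = [int(delta[T - 1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t][path[-1]]))
    return path[::-1]

if __name__ == "__main__":
    # Two labels (0 = no boundary, 1 = SU boundary) over five word boundaries.
    scores = np.array([[1.0, 0.2], [0.3, 1.1], [0.9, 0.1],
                       [0.2, 0.4], [0.1, 1.5]])
    trans = np.array([[0.2, -0.1], [0.0, -0.5]])  # discourage adjacent SUs
    print(crf_viterbi(scores, trans))
```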
The most likely sequence is found using the Viterbi algorithm.3 A CRF differs from an HMM with respect to its training objective function (joint versus conditional likelihood) and its handling of dependent word features. Traditional HMM training does not maximize the posterior probabilities of the correct labels; whereas, the CRF directly estimates posterior 3The forward-backward algorithm would most likely be better here, but it is not implemented in the software we used (McCallum, 2002).
Figure 2: Graphical representations of a general CRF and the first-order CRF used for the sentence boundary detection problem. E represent the state tags (i.e., sentence boundary or not). O are observations consisting of words W or derived textual features T and prosodic features F. boundary label probabilities P (E jO ). The underlying N-gram sequence model of an HMM does not cope well with multiple representations (features) of the word sequence (e.g., words, POS), especially when the training set is small; however, the CRF model supports simultaneous correlated features, and therefore gives greater freedom for incorporating a variety of knowledge sources. A CRF differs from the Maxent method with respect to its ability to model sequence information. The primary advantage of the CRF over the Maxent approach is that the model is optimized globally over the entire sequence; whereas, the Maxent model makes a local decision, as shown in Equation (2), without utilizing any state dependency information. We use the Mallet package (McCallum, 2002) to implement the CRF model. To avoid overfitting, we employ a Gaussian prior with a zero mean on the parameters (Chen and Rosenfeld, 1999), similar to what is used for training Maxent models (Liu et al., 2004). 3 Experimental Setup 3.1 Data and Task Description The sentence-like units in speech are different from those in written text. In conversational speech, these units can be well-formed sentences, phrases, or even a single word. These units are called SUs in the DARPA EARS program. SU boundaries, as 453 well as other structural metadata events, were annotated by LDC according to an annotation guideline (Strassel, 2003). Both the transcription and the recorded speech were used by the annotators when labeling the boundaries. The SU detection task is conducted on two corpora: Broadcast News (BN) and Conversational Telephone Speech (CTS). BN and CTS differ in genre and speaking style. The average length of SUs is longer in BN than in CTS, that is, 12.35 words (standard deviation 8.42) in BN compared to 7.37 words (standard deviation 8.72) in CTS. This difference is reflected in the frequency of SU boundaries: about 14% of interword boundaries are SUs in CTS compared to roughly 8% in BN. Training and test data for the SU detection task are those used in the NIST Rich Transcription 2003 Fall evaluation. We use both the development set and the evaluation set as the test set in this paper in order to obtain more meaningful results. For CTS, there are about 40 hours of conversational data (around 480K words) from the Switchboard corpus for training and 6 hours (72 conversations) for testing. The BN data has about 20 hours of Broadcast News shows (about 178K words) in the training set and 3 hours (6 shows) in the test set. Note that the SU-annotated training data is only a subset of the data used for the speech recognition task because more effort is required to annotate the boundaries. For testing, the system determines the locations of sentence boundaries given the word sequence W and the speech. The SU detection task is evaluated on both the reference human transcriptions (REF) and speech recognition outputs (STT). Evaluation across transcription types allows us to obtain the performance for the best-case scenario when the transcriptions are correct; thus factoring out the confounding effect of speech recognition errors on the SU detection task. We use the speech recognition output obtained from the SRI recognizer (Stolcke et al., 2003). 
System performance is evaluated using the official NIST evaluation tools.4 System output is scored by first finding a minimum edit distance alignment between the hypothesized word string and the refer4See http://www.nist.gov/speech/tests/rt/rt2003/fall/ for more details about scoring. ence transcriptions, and then comparing the aligned event labels. The SU error rate is defined as the total number of deleted or inserted SU boundary events, divided by the number of true SU boundaries. In addition to this NIST SU error metric, we use the total number of interword boundaries as the denominator, and thus obtain results for the per-boundarybased metric. 3.2 Feature Extraction and Modeling To obtain a good-quality estimation of the conditional probability of the event tag given the observations P (E i jO i ), the observations should be based on features that are discriminative of the two events (SU versus not). As in (Liu et al., 2004), we utilize both textual and prosodic information. We extract prosodic features that capture duration, pitch, and energy patterns associated with the word boundaries (Shriberg et al., 2000). For all the modeling methods, we adopt a modular approach to model the prosodic features, that is, a decision tree classifier is used to model them. During testing, the decision tree prosody model estimates posterior probabilities of the events given the associated prosodic features for a word boundary. The posterior probability estimates are then used in various modeling approaches in different ways as described later. Since words and sentence boundaries are mutually constraining, the word identities themselves (from automatic recognition or human transcriptions) constitute a primary knowledge source for sentence segmentation. We also make use of various automatic taggers that map the word sequence to other representations. Tagged versions of the word stream are provided to support various generalizations of the words and to smooth out possibly undertrained word-based probability estimates. These tags include part-of-speech tags, syntactic chunk tags, and automatically induced word classes. In addition, we use extra text corpora, which were not annotated according to the guideline used for the training and test data (Strassel, 2003). For BN, we use the training corpus for the LM for speech recognition. For CTS, we use the Penn Treebank Switchboard data. There is punctuation information in both, which we use to approximate SUs as defined in the annotation guideline (Strassel, 2003). As explained in Section 1, the prosody model and 454 Table 1: Knowledge sources and their representations in different modeling approaches: HMM, Maxent, and CRF. HMM Maxent CRF generative model conditional approach Sequence information yes no yes LDC data set (words or tags) LM N-grams as indicator functions Probability from prosody model real-valued cumulatively binned Additional text corpus N-gram LM binned posteriors Speaker turn change in prosodic features a separate feature, in addition to being in the prosodic feature set Compound feature no POS tags and decisions from prosody model the N-gram LM can be integrated in an HMM. When various textual information is used, jointly modeling words and tags may be an effective way to model the richer feature set; however, a joint model requires more parameters. 
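As a concrete illustration of the two scoring conventions introduced at the start of this section (the NIST SU error rate, insertions plus deletions divided by the number of reference SUs, and the per-boundary rate, which divides by all interword boundaries), here is a simplified sketch. It assumes the word alignment between hypothesis and reference is the identity, as it is when scoring reference transcriptions; the official NIST scoring tools, including the minimum-edit-distance alignment step, are not replicated here.

```python
def su_error_rates(reference, hypothesis):
    """Compute the NIST SU error rate and the per-boundary error rate.

    reference, hypothesis: equal-length lists of 0/1 flags, one per interword
    boundary (1 = SU boundary).
    """
    assert len(reference) == len(hypothesis)
    insertions = sum(1 for r, h in zip(reference, hypothesis) if h == 1 and r == 0)
    deletions = sum(1 for r, h in zip(reference, hypothesis) if h == 0 and r == 1)
    n_ref_su = sum(reference)
    n_boundaries = len(reference)
    nist_su_error = (insertions + deletions) / n_ref_su
    per_boundary_error = (insertions + deletions) / n_boundaries
    return nist_su_error, per_boundary_error

if __name__ == "__main__":
    ref = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
    hyp = [0, 1, 0, 1, 0, 0, 0, 0, 1, 0]
    print(su_error_rates(ref, hyp))   # (0.666..., 0.2)
```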
Since the training set for the SU detection task in the EARS program is quite limited, we use a loosely coupled approach: Linearly combine three LMs: the word-based LM from the LDC training data, the automaticclass-based LMs, and the word-based LM trained from the additional corpus. These interpolated LMs are then combined with the prosody model via the HMM. The posterior probabilities of events at each boundary are obtained from this step, denoted as P H M M (E i jW ; C ; F ). Apply the POS-based LM alone to the POS sequence (obtained by running the POS tagger on the word sequence W) and generate the posterior probabilities for each word boundary P posLM (E i jP O S ), which are then combined from the posteriors from the previous step, i.e., P f inal (E i jT ; F ) = P H M M (E i jW ; C ; F ) + P posLM (E i jP ). The features used for the CRF are the same as those used for the Maxent model devised for the SU detection task (Liu et al., 2004), briefly listed below. N-grams of words or various tags (POS tags, automatically induced classes). Different Ns and different position information are used (N varies from one through four). The cumulative binned posterior probabilities from the decision tree prosody model. The N-gram LM trained from the extra corpus is used to estimate posterior event probabilities for the LDC-annotated training and test sets, and these posteriors are then thresholded to yield binary features. Other features: speaker or turn change, and compound features of POS tags and decisions from the prosody model. Table 1 summarizes the features and their representations used in the three modeling approaches. The same knowledge sources are used in these approaches, but with different representations. The goal of this paper is to evaluate the ability of these three modeling approaches to combine prosodic and textual knowledge sources, not in a rigidly parallel fashion, but by exploiting the inherent capabilities of each approach. We attempt to compare the models in as parallel a fashion as possible; however, it should be noted that the two discriminative methods better model the textual sources and the HMM better models prosody given its representation in this study. 4 Experimental Results and Discussion SU detection results using the CRF, HMM, and Maxent approaches individually, on the reference transcriptions or speech recognition output, are shown in Tables 2 and 3 for CTS and BN data, respectively. We present results when different knowledge sources are used: word N-gram only, word Ngram and prosodic information, and using all the 455 Table 2: Conversational telephone speech SU detection results reported using the NIST SU error rate (%) and the boundary-based error rate (% in parentheses) using the HMM, Maxent, and CRF individually and in combination. Note that the ‘all features’ condition uses all the knowledge sources described in Section 3.2. ‘Vote’ is the result of the majority vote over the three modeling approaches, each of which uses all the features. The baseline error rate when assuming there is no SU boundary at each word boundary is 100% for the NIST SU error rate and 15.7% for the boundary-based metric. 
Conversational Telephone Speech HMM Maxent CRF word N-gram 42.02 (6.56) 43.70 (6.82) 37.71 (5.88) REF word N-gram + prosody 33.72 (5.26) 35.09 (5.47) 30.88 (4.82) all features 31.51 (4.92) 30.66 (4.78) 29.47 (4.60) Vote: 29.30 (4.57) word N-gram 53.25 (8.31) 53.92 (8.41) 50.20 (7.83) STT word N-gram + prosody 44.93 (7.01) 45.50 (7.10) 43.12 (6.73) all features 43.05 (6.72) 43.02 (6.71) 42.00 (6.55) Vote: 41.88 (6.53) features described in Section 3.2. The word Ngrams are from the LDC training data and the extra text corpora. ‘All the features’ means adding textual information based on tags, and the ‘other features’ in the Maxent and CRF models as well. The detection error rate is reported using the NIST SU error rate, as well as the per-boundary-based classification error rate (in parentheses in the table) in order to factor out the effect of the different SU priors. Also shown in the tables are the majority vote results over the three modeling approaches when all the features are used. 4.1 CTS Results For CTS, we find from Table 2 that the CRF is superior to both the HMM and the Maxent model across all conditions (the differences are significant at p < 0:0). When using only the word N-gram information, the gain of the CRF is the greatest, with the differences among the models diminishing as more features are added. This may be due to the impact of the sparse data problem on the CRF or simply due to the fact that differences among modeling approaches are less when features become stronger, that is, the good features compensate for the weaknesses in models. Notice that with fewer knowledge sources (e.g., using only word N-gram and prosodic information), the CRF is able to achieve performance similar to or even better than other methods using all the knowledges sources. This may be useful when feature extraction is computationally expensive. We observe from Table 2 that there is a large increase in error rate when evaluating on speech recognition output. This happens in part because word information is inaccurate in the recognition output, thus impacting the effectiveness of the LMs and lexical features. The prosody model is also affected, since the alignment of incorrect words to the speech is imperfect, thereby degrading prosodic feature extraction. However, the prosody model is more robust to recognition errors than textual knowledge, because of its lesser dependence on word identity. The results show that the CRF suffers most from the recognition errors. By focusing on the results when only word N-gram information is used, we can see the effect of word errors on the models. The SU detection error rate increases more in the STT condition for the CRF model than for the other models, suggesting that the discriminative CRF model suffers more from the mismatch between the training (using the reference transcription) and the test condition (features obtained from the errorful words). We also notice from the CTS results that when only word N-gram information is used (with or without combining with prosodic information), the HMM is superior to the Maxent; only when various additional textual features are included in the feature set does Maxent show its strength compared to 456 Table 3: Broadcast news SU detection results reported using the NIST SU error rate (%) and the boundarybased error rate (% in parentheses) using the HMM, Maxent, and CRF individually and in combination. The baseline error rate is 100% for the NIST SU error rate and 7.2% for the boundary-based metric. 
Broadcast News HMM Maxent CRF word N-gram 80.44 (5.83) 81.30 (5.89) 74.99 (5.43) REF word N-gram + prosody 59.81 (4.33) 59.69 (4.33) 54.92 (3.98) all features 48.72 (3.53) 48.61 (3.52) 47.92 (3.47) Vote: 46.28 (3.35) word N-gram 84.71 (6.14) 86.13 (6.24) 80.50 (5.83) STT word N-gram + prosody 64.58 (4.68) 63.16 (4.58) 59.52 (4.31) all features 55.37 (4.01) 56.51 (4.10) 55.37 (4.01) Vote: 54.29 (3.93) the HMM, highlighting the benefit of Maxent’s handling of the textual features. The combined result (using majority vote) of the three approaches in Table 2 is superior to any model alone (the improvement is not significant though). Previously, it was found that the Maxent and HMM posteriors combine well because the two approaches have different error patterns (Liu et al., 2004). For example, Maxent yields fewer insertion errors than HMM because of its reliance on different knowledge sources. The toolkit we use for the implementation of the CRF does not generate a posterior probability for a sequence; therefore, we do not combine the system output via posterior probability interpolation, which is expected to yield better performance. 4.2 BN Results Table 3 shows the SU detection results for BN. Similar to the patterns found for the CTS data, the CRF consistently outperforms the HMM and Maxent, except on the STT condition when all the features are used. The CRF yields relatively less gain over the other approaches on BN than on CTS. One possible reason for this difference is that there is more training data for the CTS task, and both the CRF and Maxent approaches require a relatively larger training set than the HMM. Overall the degradation on the STT condition for BN is smaller than on CTS. This can be easily explained by the difference in word error rates, 22.9% on CTS and 12.1% on BN. Finally, the vote among the three approaches outperforms any model on both the REF and STT conditions, and the gain from voting is larger for BN than CTS. Comparing Table 2 and Table 3, we find that the NIST SU error rate on BN is generally higher than on CTS. This is partly because the NIST error rate is measured as the percentage of errors per reference SU, and the number of SUs in CTS is much larger than for BN, giving a large denominator and a relatively lower error rate for the same number of boundary detection errors. Another reason is that the training set is smaller for BN than for CTS. Finally, the two genres differ significantly: CTS has the advantage of the frequent backchannels and first person pronouns that provide good cues for SU detection. When the boundary-based classification metric is used (results in parentheses), the SU error rate is lower on BN than on CTS; however, it should also be noted that the baseline error rate (i.e., the priors of the SUs) is lower on BN than CTS. 5 Conclusion and Future Work Finding sentence boundaries in speech transcriptions is important for improving readability and aiding downstream language processing modules. In this paper, prosodic and textual knowledge sources are integrated for detecting sentence boundaries in speech. We have shown that a discriminatively trained CRF model is a competitive approach for the sentence boundary detection task. The CRF combines the advantages of being discriminatively trained and able to model the entire sequence, and so it outperforms the HMM and Maxent approaches 457 consistently across various testing conditions. 
The CRF takes longer to train than the HMM and Maxent models, especially when the number of features becomes large; the HMM requires the least training time of all approaches. We also find that as more features are used, the differences among the modeling approaches decrease. We have explored different approaches to modeling various knowledge sources in an attempt to achieve good performance for sentence boundary detection. Note that we have not fully optimized each modeling approach. For example, for the HMM, using discriminative training methods is likely to improve system performance, but possibly at a cost of reducing the accuracy of the combined system. In future work, we will examine the effect of Viterbi decoding versus forward-backward decoding for the CRF approach, since the latter better matches the classification accuracy metric. To improve SU detection results on the STT condition, we plan to investigate approaches that model recognition uncertainty in order to mitigate the effect of word errors. Another future direction is to investigate how to effectively incorporate prosodic features more directly in the Maxent or CRF framework, rather than using a separate prosody model and then binning the resulting posterior probabilities. Important ongoing work includes investigating the impact of SU detection on downstream language processing modules, such as parsing. For these applications, generating probabilistic SU decisions is crucial since that information can be more effectively used by subsequent modules. 6 Acknowledgments The authors thank the anonymous reviewers for their valuable comments, and Andrew McCallum and Aron Culotta at the University of Massachusetts and Fernando Pereira at the University of Pennsylvania for their assistance with their CRF toolkit. This work has been supported by DARPA under contract MDA972-02-C-0038, NSF-STIMULATE under IRI9619921, NSF KDI BCS-9980054, and ARDA under contract MDA904-03-C-1788. Distribution is unlimited. Any opinions expressed in this paper are those of the authors and do not reflect the funding agencies. Part of the work was carried out while the last author was on leave from Purdue University and at NSF. References S. Chen and R. Rosenfeld. 1999. A Gaussian prior for smoothing maximum entropy models. Technical report, Carnegie Mellon University. H. Christensen, Y. Gotoh, and S. Renal. 2001. Punctuation annotation using statistical prosody models. In ISCA Workshop on Prosody in Speech Recognition and Understanding. Y. Gotoh and S. Renals. 2000. Sentence boundary detection in broadcast speech transcripts. In Proceedings of ISCA Workshop: Automatic Speech Recognition: Challenges for the New Millennium ASR-2000, pages 228–235. J. Huang and G. Zweig. 2002. Maximum entropy model for punctuation annotation from speech. In Proceedings of the International Conference on Spoken Language Processing, pages 917–920. J. Kim and P. C. Woodland. 2001. The use of prosody in a combined system for punctuation generation and speech recognition. In Proceedings of the European Conference on Speech Communication and Technology, pages 2757–2760. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random field: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on Machine Learning, pages 282–289. Y. Liu, A. Stolcke, E. Shriberg, and M. Harper. 2004. Comparing and combining generative and posterior probability models: Some advances in sentence boundary detection in speech. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Y. Liu. 2004. Structural Event Detection for Rich Transcription of Speech. Ph.D. thesis, Purdue University. A. McCallum and W. Li. 2003. Early results for named entity recognition with conditional random fields. In Proceedings of the Conference on Computational Natural Language Learning. A. McCallum. 2002. Mallet: A machine learning for language toolkit. http://mallet.cs.umass.edu. NIST-RT03F. 2003. RT-03F workshop agenda and presentations. http://www.nist.gov/speech/tests/rt/rt2003/ fall/presentations/, November. F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. In Proceedings of Human Language Technology Conference / North American Chapter of the Association for Computational Linguistics annual meeting. E. Shriberg, A. Stolcke, D. Hakkani-Tur, and G. Tur. 2000. Prosody-based automatic segmentation of speech into sentences and topics. Speech Communication, pages 127–154. A. Stolcke and E. Shriberg. 1996. Automatic linguistic segmentation of conversational speech. In Proceedings of the International Conference on Spoken Language Processing, pages 1005–1008. A. Stolcke, H. Franco, R. Gadde, M. Graciarena, K. Precoda, A. Venkataraman, D. Vergyri, W. Wang, and J. Zheng. 2003. Speech-to-text research at SRIICSI-UW. http://www.nist.gov/speech/tests/rt/rt2003/ spring/presentations/index.htm. S. Strassel, 2003. Simple Metadata Annotation Specification V5.0. Linguistic Data Consortium. 458 | 2005 | 56 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 459–466, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Log-linear Models for Word Alignment Yang Liu , Qun Liu and Shouxun Lin Institute of Computing Technology Chinese Academy of Sciences No. 6 Kexueyuan South Road, Haidian District P. O. Box 2704, Beijing, 100080, China {yliu, liuqun, sxlin}@ict.ac.cn Abstract We present a framework for word alignment based on log-linear models. All knowledge sources are treated as feature functions, which depend on the source langauge sentence, the target language sentence and possible additional variables. Log-linear models allow statistical alignment models to be easily extended by incorporating syntactic information. In this paper, we use IBM Model 3 alignment probabilities, POS correspondence, and bilingual dictionary coverage as features. Our experiments show that log-linear models significantly outperform IBM translation models. 1 Introduction Word alignment, which can be defined as an object for indicating the corresponding words in a parallel text, was first introduced as an intermediate result of statistical translation models (Brown et al., 1993). In statistical machine translation, word alignment plays a crucial role as word-aligned corpora have been found to be an excellent source of translation-related knowledge. Various methods have been proposed for finding word alignments between parallel texts. There are generally two categories of alignment approaches: statistical approaches and heuristic approaches. Statistical approaches, which depend on a set of unknown parameters that are learned from training data, try to describe the relationship between a bilingual sentence pair (Brown et al., 1993; Vogel and Ney, 1996). Heuristic approaches obtain word alignments by using various similarity functions between the types of the two languages (Smadja et al., 1996; Ker and Chang, 1997; Melamed, 2000). The central distinction between statistical and heuristic approaches is that statistical approaches are based on well-founded probabilistic models while heuristic ones are not. Studies reveal that statistical alignment models outperform the simple Dice coefficient (Och and Ney, 2003). Finding word alignments between parallel texts, however, is still far from a trivial work due to the diversity of natural languages. For example, the alignment of words within idiomatic expressions, free translations, and missing content or function words is problematic. When two languages widely differ in word order, finding word alignments is especially hard. Therefore, it is necessary to incorporate all useful linguistic information to alleviate these problems. Tiedemann (2003) introduced a word alignment approach based on combination of association clues. Clues combination is done by disjunction of single clues, which are defined as probabilities of associations. The crucial assumption of clue combination that clues are independent of each other, however, is not always true. Och and Ney (2003) proposed Model 6, a log-linear combination of IBM translation models and HMM model. Although Model 6 yields better results than naive IBM models, it fails to include dependencies other than IBM models and HMM model. Cherry and Lin (2003) developed a 459 statistical model to find word alignments, which allow easy integration of context-specific features. 
Log-linear models, which are very suitable to incorporate additional dependencies, have been successfully applied to statistical machine translation (Och and Ney, 2002). In this paper, we present a framework for word alignment based on log-linear models, allowing statistical models to be easily extended by incorporating additional syntactic dependencies. We use IBM Model 3 alignment probabilities, POS correspondence, and bilingual dictionary coverage as features. Our experiments show that log-linear models significantly outperform IBM translation models. We begin by describing log-linear models for word alignment. The design of feature functions is discussed then. Next, we present the training method and the search algorithm for log-linear models. We will follow with our experimental results and conclusion and close with a discussion of possible future directions. 2 Log-linear Models Formally, we use following definition for alignment. Given a source (’English’) sentence e = eI 1 = e1, .. . , ei, . . ., eI and a target language (’French’) sentence f = fJ 1 = f1, .. ., fj, . . . , fJ. We define a link l = (i, j) to exist if ei and fj are translation (or part of a translation) of one another. We define the null link l = (i, 0) to exist if ei does not correspond to a translation for any French word in f. The null link l = (0, j) is defined similarly. An alignment a is defined as a subset of the Cartesian product of the word positions: a ⊆{(i, j) : i = 0, . . . , I; j = 0, . . . , J} (1) We define the alignment problem as finding the alignment a that maximizes Pr(a | e, f) given e and f. We directly model the probability Pr(a | e, f). An especially well-founded framework is maximum entropy (Berger et al., 1996). In this framework, we have a set of M feature functions hm(a, e, f), m = 1, . . . , M. For each feature function, there exists a model parameter λm, m = 1, . . . , M. The direct alignment probability is given by: Pr(a|e, f) = exp[PM m=1 λmhm(a, e, f)] P a′ exp[PM m=1 λmhm(a′, e, f)] (2) This approach has been suggested by (Papineni et al., 1997) for a natural language understanding task and successfully applied to statistical machine translation by (Och and Ney, 2002). We obtain the following decision rule: ˆa = argmax a ½ M X m=1 λmhm(a, e, f) ¾ (3) Typically, the source language sentence e and the target sentence f are the fundamental knowledge sources for the task of finding word alignments. Linguistic data, which can be used to identify associations between lexical items are often ignored by traditional word alignment approaches. Linguistic tools such as part-of-speech taggers, parsers, namedentity recognizers have become more and more robust and available for many languages by now. It is important to make use of linguistic information to improve alignment strategies. Treated as feature functions, syntactic dependencies can be easily incorporated into log-linear models. In order to incorporate a new dependency which contains extra information other than the bilingual sentence pair, we modify Eq.2 by adding a new variable v: Pr(a|e, f, v) = exp[PM m=1 λmhm(a, e, f, v)] P a′ exp[PM m=1 λmhm(a′, e, f, v)] (4) Accordingly, we get a new decision rule: ˆa = argmax a ½ M X m=1 λmhm(a, e, f, v) ¾ (5) Note that our log-linear models are different from Model 6 proposed by Och and Ney (2003), which defines the alignment problem as finding the alignment a that maximizes Pr(f, a | e) given e. 
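Because the normalizer in Eq. (4) is the same for every alignment, the decision rule of Eq. (5) needs only unnormalized scores. The following Python sketch illustrates this; it assumes the candidate alignments, the feature functions h_m and the weights λ_m are supplied from elsewhere (candidate generation and the concrete features are described in the following sections), and the function names are ours rather than part of any toolkit.

import math

def loglinear_score(a, e, f, v, features, weights):
    # Exponent of Eq. (4): a weighted sum of feature values for alignment a.
    return sum(lam * h(a, e, f, v) for lam, h in zip(weights, features))

def best_alignment(candidates, e, f, v, features, weights):
    # Decision rule of Eq. (5).  The normalizer of Eq. (4) is identical for
    # every alignment, so the argmax needs only the unnormalized scores.
    return max(candidates,
               key=lambda a: loglinear_score(a, e, f, v, features, weights))

def alignment_posterior(a, candidates, e, f, v, features, weights):
    # Eq. (4), with the sum over all alignments approximated by an n-best
    # candidate list, as done later during training (Section 4).
    scores = [loglinear_score(c, e, f, v, features, weights) for c in candidates]
    m = max(scores)  # shift by the maximum for numerical stability
    z = sum(math.exp(s - m) for s in scores)
    return math.exp(loglinear_score(a, e, f, v, features, weights) - m) / z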
3 Feature Functions In this paper, we use IBM translation Model 3 as the base feature of our log-linear models. In addition, we also make use of syntactic information such as part-of-speech tags and bilingual dictionaries. 460 3.1 IBM Translation Models Brown et al. (1993) proposed a series of statistical models of the translation process. IBM translation models try to model the translation probability Pr(fJ 1 |eI 1), which describes the relationship between a source language sentence eI 1 and a target language sentence fJ 1 . In statistical alignment models Pr(fJ 1 , aJ 1 |eI 1), a ’hidden’ alignment a = aJ 1 is introduced, which describes a mapping from a target position j to a source position i = aj. The relationship between the translation model and the alignment model is given by: Pr(fJ 1 |eI 1) = X aJ 1 Pr(fJ 1 , aJ 1 |eI 1) (6) Although IBM models are considered more coherent than heuristic models, they have two drawbacks. First, IBM models are restricted in a way such that each target word fj is assigned to exactly one source word eaj. A more general way is to model alignment as an arbitrary relation between source and target language positions. Second, IBM models are typically language-independent and may fail to tackle problems occurred due to specific languages. In this paper, we use Model 3 as our base feature function, which is given by 1: h(a, e, f) = Pr(fJ 1 , aJ 1 |eI 1) = Ã m −φ0 φ0 ! p0m−2φ0p1φ0 lY i=1 φi!n(φi|ei) × m Y j=1 t(fj|eaj)d(j|aj, l, m) (7) We distinguish between two translation directions to use Model 3 as feature functions: treating English as source language and French as target language or vice versa. 3.2 POS Tags Transition Model The first linguistic information we adopt other than the source language sentence e and the target language sentence f is part-of-speech tags. The use of POS information for improving statistical alignment quality of the HMM-based model is described 1If there is a target word which is assigned to more than one source words, h(a, e, f) = 0. in (Toutanova et al., 2002). They introduce additional lexicon probability for POS tags in both languages. In IBM models as well as HMM models, when one needs the model to take new information into account, one must create an extended model which can base its parameters on the previous model. In log-linear models, however, new information can be easily incorporated. We use a POS Tags Transition Model as a feature function. This feature learns POS Tags transition probabilities from held-out data (via simple counting) and then applies the learned distributions to the ranking of various word alignments. We define eT = eT I 1 = eT1, . . . , eTi, . . . , eTI and fT = fT J 1 = fT1, . . . , fTj, . . . , fTJ as POS tag sequences of the sentence pair e and f. POS Tags Transition Model is formally described as: Pr(fT|a, eT) = Y a t(fTa(j)|eTa(i)) (8) where a is an element of a, a(i) is the corresponding source position of a and a(j) is the target position. Hence, the feature function is: h(a, e, f, eT, fT) = Y a t(fTa(j)|eTa(i)) (9) We still distinguish between two translation directions to use POS tags Transition Model as feature functions: treating English as source language and French as target language or vice versa. 3.3 Bilingual Dictionary A conventional bilingual dictionary can be considered an additional knowledge source. We could use a feature that counts how many entries of a conventional lexicon co-occur in a given alignment between the source sentence and the target sentence. 
Therefore, the weight for the provided conventional dictionary can be learned. The intuition is that the conventional dictionary is expected to be more reliable than the automatically trained lexicon and therefore should get a larger weight. We define a bilingual dictionary as a set of entries: D = {(e, f, conf)}. e is a source language word, f is a target langauge word, and conf is a positive real-valued number (usually, conf = 1.0) assigned 461 by lexicographers to evaluate the validity of the entry. Therefore, the feature function using a bilingual dictionary is: h(a, e, f, D) = X a occur(ea(i), fa(j), D) (10) where occur(e, f, D) = ( conf if (e, f) occurs in D 0 else (11) 4 Training We use the GIS (Generalized Iterative Scaling) algorithm (Darroch and Ratcliff, 1972) to train the model parameters λM 1 of the log-linear models according to Eq. 4. By applying suitable transformations, the GIS algorithm is able to handle any type of real-valued features. In practice, We use YASMET 2 written by Franz J. Och for performing training. The renormalization needed in Eq. 4 requires a sum over a large number of possible alignments. If e has length l and f has length m, there are possible 2lm alignments between e and f (Brown et al., 1993). It is unrealistic to enumerate all possible alignments when lm is very large. Hence, we approximate this sum by sampling the space of all possible alignments by a large set of highly probable alignments. The set of considered alignments are also called n-best list of alignments. We train model parameters on a development corpus, which consists of hundreds of manually-aligned bilingual sentence pairs. Using an n-best approximation may result in the problem that the parameters trained with the GIS algorithm yield worse alignments even on the development corpus. This can happen because with the modified model scaling factors the n-best list can change significantly and can include alignments that have not been taken into account in training. To avoid this problem, we iteratively combine n-best lists to train model parameters until the resulting n-best list does not change, as suggested by Och (2002). However, as this training procedure is based on maximum likelihood criterion, there is only a loose relation to the final alignment quality on unseen bilingual texts. In practice, 2Available at http://www.fjoch.com/YASMET.html having a series of model parameters when the iteration ends, we select the model parameters that yield best alignments on the development corpus. After the bilingual sentences in the development corpus are tokenized (or segmented) and POS tagged, they can be used to train POS tags transition probabilities by counting relative frequencies: p(fT|eT) = NA(fT, eT) N(eT) Here, NA(fT, eT) is the frequency that the POS tag fT is aligned to POS tag eT and N(eT) is the frequency of eT in the development corpus. 5 Search We use a greedy search algorithm to search the alignment with highest probability in the space of all possible alignments. A state in this space is a partial alignment. A transition is defined as the addition of a single link to the current state. Our start state is the empty alignment, where all words in e and f are assigned to null. A terminal state is a state in which no more links can be added to increase the probability of the current alignment. Our task is to find the terminal state with the highest probability. We can compute gain, which is a heuristic function, instead of probability for efficiency. 
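Before turning to the gain function that drives the search, here is a rough sketch of two ingredients just described: the dictionary coverage feature of Eqs. (10)-(11) and the relative-frequency estimate of the POS transition table. It assumes alignments are sets of 0-based links (i, j) into the English words e and the French words f, and that the dictionary is a mapping from word pairs to confidence scores; these representations and function names are our own illustration, not the authors' code.

from collections import defaultdict

def dictionary_feature(a, e, f, D):
    # Eqs. (10)-(11): add up the confidence of every aligned word pair that
    # is an entry of the bilingual dictionary D.  a is a set of 0-based
    # links (i, j) pairing English word e[i] with French word f[j]; D maps
    # (english_word, french_word) to a confidence score (usually 1.0).
    return sum(D.get((e[i], f[j]), 0.0) for i, j in a)

def estimate_pos_transition(aligned_tag_pairs, english_tag_sequences):
    # Relative-frequency estimate p(fT | eT) = N_A(fT, eT) / N(eT).
    # aligned_tag_pairs holds the (eT, fT) tag pairs read off the
    # hand-aligned development corpus; english_tag_sequences holds its
    # English tag sequences and supplies the denominator N(eT).
    pair_counts = defaultdict(float)
    tag_counts = defaultdict(float)
    for eT, fT in aligned_tag_pairs:
        pair_counts[(eT, fT)] += 1.0
    for tags in english_tag_sequences:
        for eT in tags:
            tag_counts[eT] += 1.0
    return {pair: count / tag_counts[pair[0]]
            for pair, count in pair_counts.items()}

With these features and the GIS-trained weights in hand, the search scores each prospective link by the gain function defined next.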
A gain is defined as follows: gain(a, l) = exp[PM m=1 λmhm(a ∪l, e, f)] exp[PM m=1 λmhm(a, e, f)] (12) where l = (i, j) is a link added to a. The greedy search algorithm for general loglinear models is formally described as follows: Input: e, f, eT, fT, and D Output: a 1. Start with a = φ. 2. Do for each l = (i, j) and l /∈a: Compute gain(a, l) 3. Terminate if ∀l, gain(a, l) ≤1. 4. Add the link ˆl with the maximal gain(a, l) to a. 5. Goto 2. 462 The above search algorithm, however, is not efficient for our log-linear models. It is time-consuming for each feature to figure out a probability when adding a new link, especially when the sentences are very long. For our models, gain(a, l) can be obtained in a more efficient way 3: gain(a, l) = M X m=1 λmlog µhm(a ∪l, e, f) hm(a, e, f) ¶ (13) Note that we restrict that h(a, e, f) ≥0 for all feature functions. The original terminational condition for greedy search algorithm is: gain(a, l) = exp[PM m=1 λmhm(a ∪l, e, f)] exp[PM m=1 λmhm(a, e, f)] ≤1.0 That is: M X m=1 λm[hm(a ∪l, e, f) −hm(a, e, f)] ≤0.0 By introducing gain threshold t, we obtain a new terminational condition: M X m=1 λmlog µhm(a ∪l, e, f) hm(a, e, f) ¶ ≤t where t = M X m=1 λm ½ log µhm(a ∪l, e, f) hm(a, e, f) ¶ −[hm(a ∪l, e, f) −hm(a, e, f)] ¾ Note that we restrict h(a, e, f) ≥0 for all feature functions. Gain threshold t is a real-valued number, which can be optimized on the development corpus. Therefore, we have a new search algorithm: Input: e, f, eT, fT, D and t Output: a 1. Start with a = φ. 2. Do for each l = (i, j) and l /∈a: Compute gain(a, l) 3We still call the new heuristic function gain to reduce notational overhead, although the gain in Eq. 13 is not equivalent to the one in Eq. 12. 3. Terminate if ∀l, gain(a, l) ≤t. 4. Add the link ˆl with the maximal gain(a, l) to a. 5. Goto 2. The gain threshold t depends on the added link l. We remove this dependency for simplicity when using it in search algorithm by treating it as a fixed real-valued number. 6 Experimental Results We present in this section results of experiments on a parallel corpus of Chinese-English texts. Statistics for the corpus are shown in Table 1. We use a training corpus, which is used to train IBM translation models, a bilingual dictionary, a development corpus, and a test corpus. Chinese English Train Sentences 108 925 Words 3 784 106 3 862 637 Vocabulary 49 962 55 698 Dict Entries 415 753 Vocabulary 206 616 203 497 Dev Sentences 435 Words 11 462 14 252 Ave. SentLen 26.35 32.76 Test Sentences 500 Words 13 891 15 291 Ave. SentLen 27.78 30.58 Table 1. Statistics of training corpus (Train), bilingual dictionary (Dict), development corpus (Dev), and test corpus (Test). The Chinese sentences in both the development and test corpus are segmented and POS tagged by ICTCLAS (Zhang et al., 2003). The English sentences are tokenized by a simple tokenizer of ours and POS tagged by a rule-based tagger written by Eric Brill (Brill, 1995). We manually aligned 935 sentences, in which we selected 500 sentences as test corpus. The remaining 435 sentences are used as development corpus to train POS tags transition probabilities and to optimize the model parameters and gain threshold. 
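Recapping Section 5 before the evaluation, the gain-threshold greedy search can be sketched roughly as follows. The candidate link set, the feature functions, their weights and the threshold t are all assumed given; feature values are recomputed from scratch at every step here for clarity, whereas the paper computes the gain incrementally for efficiency, and every feature value is assumed strictly positive so the log-ratio of Eq. (13) is defined.

import math

def greedy_search(candidate_links, e, f, v, features, weights, t):
    # Gain-threshold greedy search of Section 5.  candidate_links is the
    # set of links (i, j) that may be added; gain(a, l) follows Eq. (13).
    a = set()
    while True:
        best_link, best_gain = None, t
        for l in candidate_links - a:
            new_a = a | {l}
            # Assumes every h(...) > 0; an efficient implementation would
            # update feature values incrementally rather than recompute
            # them for every prospective link.
            gain = sum(lam * math.log(h(new_a, e, f, v) / h(a, e, f, v))
                       for lam, h in zip(weights, features))
            if gain > best_gain:
                best_link, best_gain = l, gain
        if best_link is None:  # every remaining link has gain <= t: stop
            return a
        a.add(best_link)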
Provided with human-annotated word-level alignment, we use precision, recall and AER (Och and 463 Size of Training Corpus 1K 5K 9K 39K 109K Model 3 E →C 0.4497 0.4081 0.4009 0.3791 0.3745 Model 3 C →E 0.4688 0.4261 0.4221 0.3856 0.3469 Intersection 0.4588 0.4106 0.4044 0.3823 0.3687 Union 0.4596 0.4210 0.4157 0.3824 0.3703 Refined Method 0.4154 0.3586 0.3499 0.3153 0.3068 Model 3 E →C 0.4490 0.3987 0.3834 0.3639 0.3533 + Model 3 C →E 0.3970 0.3317 0.3217 0.2949 0.2850 + POS E →C 0.3828 0.3182 0.3082 0.2838 0.2739 + POS C →E 0.3795 0.3160 0.3032 0.2821 0.2726 + Dict 0.3650 0.3092 0.2982 0.2738 0.2685 Table 2. Comparison of AER for results of using IBM Model 3 (GIZA++) and log-linear models. Ney, 2003) for scoring the viterbi alignments of each model against gold-standard annotated alignments: precision = |A ∩P| |A| recall = |A ∩S| |S| AER = 1 −|A ∩S| + |A ∩P| |A| + |S| where A is the set of word pairs aligned by word alignment systems, S is the set marked in the gold standard as ”sure” and P is the set marked as ”possible” (including the ”sure” pairs). In our ChineseEnglish corpus, only one type of alignment was marked, meaning that S = P. In the following, we present the results of loglinear models for word alignment. We used GIZA++ package (Och and Ney, 2003) to train IBM translation models. The training scheme is 15H535, which means that Model 1 are trained for five iterations, HMM model for five iterations and finally Model 3 for five iterations. Except for changing the iterations for each model, we use default configuration of GIZA++. After that, we used three types of methods for performing a symmetrization of IBM models: intersection, union, and refined methods (Och and Ney , 2003). The base feature of our log-linear models, IBM Model 3, takes the parameters generated by GIZA++ as parameters for itself. In other words, our loglinear models share GIZA++ with the same parameters apart from POS transition probability table and bilingual dictionary. Table 2 compares the results of our log-linear models with IBM Model 3. From row 3 to row 7 are results obtained by IBM Model 3. From row 8 to row 12 are results obtained by log-linear models. As shown in Table 2, our log-linear models achieve better results than IBM Model 3 in all training corpus sizes. Considering Model 3 E →C of GIZA++ and ours alone, greedy search algorithm described in Section 5 yields surprisingly better alignments than hillclimbing algorithm in GIZA++. Table 3 compares the results of log-linear models with IBM Model 5. The training scheme is 15H5354555. Our log-linear models still make use of the parameters generated by GIZA++. Comparing Table 3 with Table 2, we notice that our log-linear models yield slightly better alignments by employing parameters generated by the training scheme 15H5354555 rather than 15H535, which can be attributed to improvement of parameters after further Model 4 and Model 5 training. For log-linear models, POS information and an additional dictionary are used, which is not the case for GIZA++/IBM models. However, treated as a method for performing symmetrization, log-linear combination alone yields better results than intersection, union, and refined methods. Figure 1 shows how gain threshold has an effect on precision, recall and AER with fixed model scaling factors. 
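The three scores reported above reduce to a few set operations over link sets; a minimal sketch, assuming each alignment is represented as a set of (i, j) links and the proposed set A is non-empty:

def alignment_scores(A, S, P):
    # A: links proposed by the aligner; S: links marked "sure" in the gold
    # standard; P: links marked "possible" (P contains S).  In the
    # Chinese-English data used here only one link type is marked, so S = P.
    precision = len(A & P) / len(A)
    recall = len(A & S) / len(S)
    aer = 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))
    return precision, recall, aer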
Figure 2 shows the effect of number of features 464 Size of Training Corpus 1K 5K 9K 39K 109K Model 5 E →C 0.4384 0.3934 0.3853 0.3573 0.3429 Model 5 C →E 0.4564 0.4067 0.3900 0.3423 0.3239 Intersection 0.4432 0.3916 0.3798 0.3466 0.3267 Union 0.4499 0.4051 0.3923 0.3516 0.3375 Refined Method 0.4106 0.3446 0.3262 0.2878 0.2748 Model 3 E →C 0.4372 0.3873 0.3724 0.3456 0.3334 + Model 3 C →E 0.3920 0.3269 0.3167 0.2842 0.2727 + POS E →C 0.3807 0.3122 0.3039 0.2732 0.2667 + POS C →E 0.3731 0.3091 0.3017 0.2722 0.2657 + Dict 0.3612 0.3046 0.2943 0.2658 0.2625 Table 3. Comparison of AER for results of using IBM Model 5 (GIZA++) and log-linear models. -12 -10 -8 -6 -4 -2 0 2 4 6 8 10 0.0 0.2 0.4 0.6 0.8 1.0 gain threshold Precision Recall AER Figure 1. Precision, recall and AER over different gain thresholds with the same model scaling factors. and size of training corpus on search efficiency for log-linear models. Table 4 shows the resulting normalized model scaling factors. We see that adding new features also has an effect on the other model scaling factors. 7 Conclusion We have presented a framework for word alignment based on log-linear models between parallel texts. It allows statistical models easily extended by incorporating syntactic information. We take IBM Model 3 as base feature and use syntactic information such as POS tags and bilingual dictionary. Experimental 1k 5k 9k 39k 109k 200 400 600 800 1000 1200 time consumed for searching (second) size of training corpus M3EC M3EC + M3CE M3EC + M3CE + POSEC M3EC + M3CE + POSEC + POSCE M3EC + M3CE + POSEC + POSCE + Dict Figure 2. Effect of number of features and size of training corpus on search efficiency. MEC +MCE +PEC +PCE +Dict λ1 1.000 0.466 0.291 0.202 0.151 λ2 0.534 0.312 0.212 0.167 λ3 0.397 0.270 0.257 λ4 0.316 0.306 λ5 0.119 Table 4. Resulting model scaling factors: λ1: Model 3 E →C (MEC); λ2: Model 3 C →E (MCE); λ3: POS E →C (PEC); λ4: POS C →E (PCE); λ5: Dict (normalized such that P5 m=1 λm = 1). results show that log-linear models for word alignment significantly outperform IBM translation models. However, the search algorithm we proposed is 465 supervised, relying on a hand-aligned bilingual corpus, while the baseline approach of IBM alignments is unsupervised. Currently, we only employ three types of knowledge sources as feature functions. Syntax-based translation models, such as tree-to-string model (Yamada and Knight, 2001) and tree-to-tree model (Gildea, 2003), may be very suitable to be added into log-linear models. It is promising to optimize the model parameters directly with respect to AER as suggested in statistical machine translation (Och, 2003). Acknowledgement This work is supported by National High Technology Research and Development Program contract ”Generally Technical Research and Basic Database Establishment of Chinese Platform” (Subject No. 2004AA114010). References Adam L. Berger, Stephen A. Della Pietra, and Vincent J. DellaPietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-72, March. Eric Brill. 1995. Transformation-based-error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4), December. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311. Colin Cherry and Dekang Lin. 2003. A probability model to improve word alignment. 
In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL), Sapporo, Japan. J. N. Darroch and D. Ratcliff. 1972. Generalized iterative scaling for log-linear models. Annals of Mathematical Statistics, 43:1470-1480. Daniel Gildea. 2003. Loosely tree-based alignment for machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL), Sapporo, Japan. Sue J. Ker and Jason S. Chang. 1997. A class-based approach to word alignment. Computational Linguistics, 23(2):313-343, June. I. Dan Melamed 2000. Models of translational equivalence among words. Computational Linguistics, 26(2):221-249, June. Franz J. Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 295-302, Philadelphia, PA, July. Franz J. Och. 2002. Statistical Machine Translation: From Single-Word Models to Alignment Templates. Ph.D. thesis, Computer Science Department, RWTH Aachen, Germany, October. Franz J. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL), pages: 160-167, Sapporo, Japan. Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51, March. Kishore A. Papineni, Salim Roukos, and Todd Ward. 1997. Feature-based language understanding. In European Conf. on Speech Communication and Technology, pages 1435-1438, Rhodes, Greece, September. Frank Smadja, Vasileios Hatzivassiloglou, and Kathleen R. McKeown 1996. Translating collocations for bilingual lexicons: A statistical approach. Computational Linguistics, 22(1):1-38, March. J¨org Tiedemann. 2003. Combining clues for word alignment. In Proceedings of the 10th Conference of European Chapter of the ACL (EACL), Budapest, Hungary, April. Kristina Toutanova, H. Tolga Ilhan, and Christopher D. Manning. 2003. Extensions to HMM-based statistical word alignment models. In Proceedings of Empirical Methods in Natural Langauge Processing, Philadelphia, PA. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In Proceedings of the 16th Int. Conf. on Computational Linguistics, pages 836-841, Copenhagen, Denmark, August. Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical machine translation model. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL), pages: 523-530, Toulouse, France, July. Huaping Zhang, Hongkui Yu, Deyi Xiong, and Qun Liu. 2003. HHMM-based Chinese lexical analyzer ICTCLAS. In Proceedings of the second SigHan Workshop affiliated with 41th ACL, pages: 184-187, Sapporo, Japan. 466 | 2005 | 57 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 467–474, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Alignment Model Adaptation for Domain-Specific Word Alignment WU Hua, WANG Haifeng, LIU Zhanyi Toshiba (China) Research and Development Center 5/F., Tower W2, Oriental Plaza No.1, East Chang An Ave., Dong Cheng District Beijing, 100738, China {wuhua, wanghaifeng, liuzhanyi}@rdc.toshiba.com.cn Abstract This paper proposes an alignment adaptation approach to improve domain-specific (in-domain) word alignment. The basic idea of alignment adaptation is to use out-of-domain corpus to improve in-domain word alignment results. In this paper, we first train two statistical word alignment models with the large-scale out-of-domain corpus and the small-scale in-domain corpus respectively, and then interpolate these two models to improve the domain-specific word alignment. Experimental results show that our approach improves domain-specific word alignment in terms of both precision and recall, achieving a relative error rate reduction of 6.56% as compared with the state-of-the-art technologies. 1 Introduction Word alignment was first proposed as an intermediate result of statistical machine translation (Brown et al., 1993). In recent years, many researchers have employed statistical models (Wu, 1997; Och and Ney, 2003; Cherry and Lin, 2003) or association measures (Smadja et al., 1996; Ahrenberg et al., 1998; Tufis and Barbu, 2002) to build alignment links. In order to achieve satisfactory results, all of these methods require a large-scale bilingual corpus for training. When the large-scale bilingual corpus is not available, some researchers use existing dictionaries to improve word alignment (Ker and Chang, 1997). However, only a few studies (Wu and Wang, 2004) directly address the problem of domain-specific word alignment when neither the large-scale domain-specific bilingual corpus nor the domain-specific translation dictionary is available. In this paper, we address the problem of word alignment in a specific domain, in which only a small-scale corpus is available. In the domain-specific (in-domain) corpus, there are two kinds of words: general words, which also frequently occur in the out-of-domain corpus, and domain-specific words, which only occur in the specific domain. Thus, we can use the out-of-domain bilingual corpus to improve the alignment for general words and use the in-domain bilingual corpus for domain-specific words. We implement this by using alignment model adaptation. Although the adaptation technology is widely used for other tasks such as language modeling (Iyer et al., 1997), only a few studies, to the best of our knowledge, directly address word alignment adaptation. Wu and Wang (2004) adapted the alignment results obtained with the out-of-domain corpus to the results obtained with the in-domain corpus. This method first trained two models and two translation dictionaries with the in-domain corpus and the out-of-domain corpus, respectively. Then these two models were applied to the in-domain corpus to get different results. The trained translation dictionaries were used to select alignment links from these different results. Thus, this method performed adaptation through result combination. The experimental results showed a significant error rate reduction as compared with the method directly combining the two corpora as training data. 
In this paper, we improve domain-specific word alignment through statistical alignment model adaptation instead of result adaptation. Our method includes the following steps: (1) two word alignment models are trained using a small-scale in-domain bilingual corpus and a large-scale 467 out-of-domain bilingual corpus, respectively. (2) A new alignment model is built by interpolating the two trained models. (3) A translation dictionary is also built by interpolating the two dictionaries that are trained from the two training corpora. (4) The new alignment model and the translation dictionary are employed to improve domain-specific word alignment results. Experimental results show that our approach improves domain-specific word alignment in terms of both precision and recall, achieving a relative error rate reduction of 6.56% as compared with the state-of-the-art technologies. The remainder of the paper is organized as follows. Section 2 introduces the statistical word alignment model. Section 3 describes our alignment model adaptation method. Section 4 describes the method used to build the translation dictionary. Section 5 describes the model adaptation algorithm. Section 6 presents the evaluation results. The last section concludes our approach. 2 Statistical Word Alignment According to the IBM models (Brown et al., 1993), the statistical word alignment model can be generally represented as in Equation (1). ∑ = ' ) | ,' ( ) | , ( ) , | ( a a p a p a p e f e f e f (1) In this paper, we use a simplified IBM model 4 (Al-Onaizan et al., 1999), which is shown in Equation (2). This simplified version does not take word classes into account as described in (Brown et al., 1993). ) ))) ( ( )] ( ([ )) ( )] ( ([ ( ) | ( ) | ( ) | , Pr( ) | , ( 0 ,1 1 0 ,1 1 1 1 1 2 0 0 0 ) , ( 0 0 ∏ ∏ ∏ ∏ ∑ ≠ = > ≠ = = = − − ⋅ ≠ + − ⋅ = ⋅ ⋅ ⋅ − = = m a j j m a j j m j a j l i i i m j j j a j j p j d a h j c j d a h j e f t e n p p m a p ρ φ φ π τ φ φ φ π τ e e f (2) m l, are the lengths of the target sentence and the source sentence respectively. j is the position index of the source word. j a is the position of the target word aligned to the jth source word. i φ is the fertility of . ie 1 p is the fertility probability for e , and . 0 1 1 0 = + p p ) j a j|e t(f is the word translation probability. ) | ( i i e n φ is the fertility probability. ) ( 1 j a c j d ρ − is the distortion probability for the head of each cept1. )) ( ( 1 j p j d − > is the distortion probability for the remaining words of the cept. } : { min ) ( k k a i k i h = = is the head of cept i. } : { max ) ( k j j k a a k j p = = < i ρ is the first word before with non-zero fertility. If , ; else . ie 0 ∧ }i 0 |} 0 : {| ' ' ' > < < > i i i iφ 0 0 'i < < ∧ 0 = i ρ : max{ ' ' i i i > = φ ρ i j j i j i a c φ ∑ ⋅ = = ] [ is the center of cept i. During the training process, IBM model 3 is first trained, and then the parameters in model 3 are employed to train model 4. During the testing process, the trained model 3 is also used to get an initial alignment result, and then the trained model 4 is employed to improve this alignment result. For convenience, we describe model 3 in Equation (3). The main difference between model 3 and model 4 lies in the calculation of distortion probability. ∏ ∏ ∏ ∏ ∑ ≠ = = = − ⋅ ⋅ ⋅ ⋅ − = = m a j j m j a j l i i l i i i m j j m l a j d e f t e n p p m a p 0 : 1 1 1 1 2 0 0 0 ) , ( ) , , | ( ) | ( ! 
) | ( ) | , Pr( ) | , ( 0 0 φ φ φ φ π τ φ φ π τ e e f (3) 1 A cept is defined as the set of target words connected to a source word (Brown et al., 1993). 468 However, both model 3 and model 4 do not take the multiword cept into account. Only one-to-one and many-to-one word alignments are considered. Thus, some multi-word units in the domain-specific corpus cannot be correctly aligned. In order to deal with this problem, we perform word alignment in two directions (source to target, and target to source) as described in (Och and Ney, 2000). The GIZA++ toolkit2 is used to perform statistical word alignment. We use and to represent the bi-directional alignment sets, which are shown in Equation (4) and (5). For alignment in both sets, we use j for source words and i for target words. If a target word in position i is connected to source words in positions and , then . We call an element in the alignment set an alignment link. 1 SG 2 SG 2j 1j } , { 2 1 j j Ai = }} 0 , | { |) , {( 1 ≥ = = = j j i i a i a j A i A SG (4) }} 0 , | { |) , {( 2 ≥ = = = j j j j a a i i A A j SG (5) 3 Word Alignment Model Adaptation In this paper, we first train two models using the out-of-domain training data and the in-domain training data, and then build a new alignment model through linear interpolation of the two trained models. In other words, we make use of the out-of-domain training data and the in-domain training data by interpolating the trained alignment models. One method to perform model adaptation is to directly interpolate the alignment models as shown in Equation (6). ) , | ( ) 1( ) , | ( ) , | ( e f a p e f a p e f a p O I ⋅ − + ⋅ = λ λ (6) ) , | ( e f a pI and are the alignment model trained using the in-domain corpus and the out-of-domain corpus, respectively. ) , | ( e f a pO λ is an interpolation weight. It can be a constant or a function of and . f e However, in both model 3 and model 4, there are mainly three kinds of parameters: translation probability, fertility probability and distortion probability. These three kinds of parameters have their own interpretation in these two models. In order to obtain fine-grained interpolation models, we separate the alignment model interpolation into three parts: translation probability interpolation, fertility probability interpolation and distortion probability interpolation. For these probabilities, we use different interpolation methods to calculate the interpolation weights. After interpolation, we replace the corresponding parameters in equation (2) and (3) with the interpolated probabilities to get new alignment models. 2 It is located at http://www.fjoch.com/GIZA++.html. In the following subsections, we will perform linear interpolation for word alignment in the source to target direction. For the word alignment in the target to source direction, we use the same interpolation method. 3.1 Translation Probability Interpolation The word translation probability is very important in translation models. The same word may have different distributions in the in-domain corpus and the out-of-domain corpus. Thus, the interpolation weight for the translation probability is taken as a variant. The interpolation model for is described in Equation (7). ) | ( j a j e f t ) | ( j a j e f t ) | ( )) ( 1( ) | ( ) ( ) | ( j j j j j a j O a t a j I a t a j e f t e e f t e e f t ⋅ − + ⋅ = λ λ (7) The interpolation weight in (7) is a function of . It is calculated as shown in Equation (8). 
) ( j a t e λ j a e α λ + = ) ( ) ( ) ( ) ( j j j j a O a I a I a t e p e p e p e (8) ) ( j a I e p and are the relative frequencies of in the in-domain corpus and in the out-of-domain corpus, respectively. ) ( j a O e p j a e α is an adaptation coefficient, such that 0 ≥ α . Equation (8) indicates that if a word occurs more frequently in a specific domain than in the general domain, it can usually be considered as a domain-specific word (Peñas et al., 2001). For example, if is much larger than , the word is a domain-specific word and the interpolation weight approaches to 1. In this case, we trust more on the translation probability obtained from the in-domain corpus than that obtained from the out-of-domain corpus. ) ( j a I e p j a ) ( j a O e p e 469 3.2 3.3 4 Fertility Probability Interpolation The fertility probability describes the distribution of the number of words that is aligned to. The interpolation model is shown in (9). ) | ( i i e n φ ie ) | ( ) 1( ) | ( ) | ( i i O n i i I n i i e n e n e n φ λ φ λ φ ⋅ − + ⋅ = (9) Where, is a constant. This constant is obtained using a manually annotated held-out data set. In fact, we can also set the interpolation weight to be a function of the word . From the word alignment results on the held-out set, we conclude that these two weighting schemes do not perform quite differently. n λ ie Distortion Probability Interpolation The distortion probability describes the distribution of alignment positions. We separate it into two parts: one is the distortion probability in model 3, and the other is the distortion probability in model 4. The interpolation model for the distortion probability in model 3 is shown in (10). Since the distortion probability is irrelevant with any specific source or target words, we take as a constant. This constant is obtained using the held-out set. d λ ) , , | ( ) 1( ) , , | ( ) , , | ( m l a j d m l a j d m l a j d j O d j I d j ⋅ − + ⋅ = λ λ (10) For the distortion probability in model 4, we use the same interpolation method and take the interpolation weight as a constant. Translation Dictionary Acquisition We use the translation dictionary trained from the training data to further improve the alignment results. When we train the bi-directional statistical word alignment models with the training data, we get two word alignment results for the training data. By taking the intersection of the two word alignment results, we build a new alignment set. The alignment links in this intersection set are extended by iteratively adding word alignment links into it as described in (Och and Ney, 2000). Based on the extended alignment links, we build a translation dictionary. In order to filter the noise caused by the error alignment links, we only retain those translation pairs whose log-likelihood ratio scores (Dunning, 1993) are above a threshold. Based on the alignment results on the out-of-domain corpus, we build a translation dictionary filtered with a threshold . Based on the alignment results on a small-scale in-domain corpus, we build another translation dictionary filtered with a threshold . 1 D 2 D 1 δ 2 δ After obtaining the two dictionaries, we combine two dictionaries through linearly interpolating the translation probabilities in the two dictionaries, which is shown in (11). The symbols f and e represent a single word or a phrase in the source and target languages. This differs from the translation probability in Equation (7), where these two symbols only represent single words. 
) | ( )) ( 1( ) | ( ) ( ) | ( e f p e e f p e e f p O I ⋅ − + ⋅ = λ λ (11) The interpolation weight is also a function of e. It is calculated as shown in (12)3. ) ( ) ( ) ( ) ( e p e p e p e O I I + = λ (12) ) (e pI and represent the relative frequencies of e in the in-domain corpus and out-of-domain corpus, respectively. ) (e pO 5 6 Evaluation Adaptation Algorithm The adaptation algorithms include two parts: a training algorithm and a testing algorithm. The training algorithm is shown in Figure 1. After getting the two adaptation models and the translation dictionary, we apply them to the in-domain corpus to perform word alignment. Here we call it testing algorithm. The detailed algorithm is shown in Figure 2. For each sentence pair, there are two different word alignment results, from which the final alignment links are selected according to their translation probabilities in the dictionary D. The selection order is similar to that in the competitive linking algorithm (Melamed, 1997). The difference is that we allow many-to-one and one-to-many alignments. We compare our method with four other methods. The first method is descried in (Wu and Wang, 2004). We call it “Result Adaptation (ResAdapt)”. 3 We also tried an adaptation coefficient to calculate the interpolation weight as in (8). However, the alignment results are not improved by using this coefficient for the dictionary. 470 Input: In-domain training data Out-of-domain training data (1) Train two alignment models (source to target) and (target to source) using the in-domain corpus. st I M ts I M (2) Train the other two alignment models and using the out-of-domain corpus. st O M ts O M (3) Build an adaptation model st M based on and , and build the other adaptation model st I M st O M ts M based on and using the interpolation methods described in section 3. ts I M ts O M (4) Train a dictionary using the alignment results on the in-domain training data. 1 D (5) Train another dictionary using the alignment results on the out-of-domain training data. 2 D (6) Build an adaptation dictionary D based on and using the interpolation method described in section 4. 1 D 2 D Output: Alignment models st M and ts M Translation dictionary D Figure 1. Training Algorithm Input: Alignment models st M and ts M , translation dictionary D , and testing data (1) Apply the adaptation model st M and ts M to the testing data to get two different alignment results. (2) Select the alignment links with higher translation probability in the translation dictionary D . Output: Alignment results on the testing data Figure 2. Testing Algorithm The second method “Gen+Spec” directly combines the out-of-domain corpus and the in-domain corpus as training data. The third method “Gen” only uses the out-of-domain corpus as training data. The fourth method “Spec” only uses the in-domain corpus as training data. For each of the last three methods, we first train bi-directional alignment models using the training data. Then we build a translation dictionary based on the alignment results on the training data and filter it using log-likelihood ratio as described in section 4. 6.1 6.2 Training and Testing Data In this paper, we take English-Chinese word alignment as a case study. We use a sentence- aligned out-of-domain English-Chinese bilingual corpus, which includes 320,000 bilingual sentence pairs. The average length of the English sentences is 13.6 words while the average length of the Chinese sentences is 14.2 words. 
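Concretely, the two interpolation weights at the heart of the adaptation, Eqs. (7)-(8) for the lexical translation probability and Eqs. (11)-(12) for the combined dictionary, can be sketched as below. The placement of the adaptation coefficient α in Eq. (8) is assumed here to be a multiplier on the out-of-domain relative frequency, which matches the described behaviour (the weight approaches 1 for domain-specific words) but should be treated as an assumption rather than a transcription of the original equation. The relative-frequency functions p_in and p_out and the component models t_in, t_out, d_in, d_out are assumed given, and e is assumed to occur in at least one corpus; fertility and distortion probabilities (Eqs. (9)-(10)) are interpolated the same way with constant weights tuned on the held-out set.

def interpolated_translation_prob(f, e, t_in, t_out, p_in, p_out, alpha=0.8):
    # Eqs. (7)-(8): mix the in-domain and out-of-domain lexical translation
    # probabilities t(f | e) with a word-dependent weight that approaches 1
    # when e is relatively much more frequent in the in-domain corpus.
    # p_in(e) and p_out(e) are relative frequencies in the two corpora;
    # alpha is the adaptation coefficient (set to 0.8 on the held-out set).
    lam = p_in(e) / (p_in(e) + alpha * p_out(e))
    return lam * t_in(f, e) + (1.0 - lam) * t_out(f, e)

def interpolated_dictionary_prob(f, e, d_in, d_out, p_in, p_out):
    # Eqs. (11)-(12): the two automatically built dictionaries are combined
    # in the same way, but with the plain weight p_I(e) / (p_I(e) + p_O(e)),
    # since the adaptation coefficient did not help here (footnote 3).
    lam = p_in(e) / (p_in(e) + p_out(e))
    return lam * d_in(f, e) + (1.0 - lam) * d_out(f, e)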
We also use a sentence-aligned in-domain English-Chinese bilingual corpus (operation manuals for diagnostic ultrasound systems), which includes 5,862 bilingual sentence pairs. The average length of the English sentences is 12.8 words while the average length of the Chinese sentences is 11.8 words. From this domain-specific corpus, we randomly select 416 pairs as testing data. We also select 400 pairs to be manually annotated as held-out set (development set) to adjust parameters. The remained 5,046 pairs are used as domain-specific training data. The Chinese sentences in both the training set and the testing set are automatically segmented into words. In order to exclude the effect of the segmentation errors on our alignment results, the segmentation errors in our testing set are post-corrected. The alignments in the testing set are manually annotated, which includes 3,166 alignment links. Among them, 504 alignment links include multiword units. Evaluation Metrics We use the same evaluation metrics as described in (Wu and Wang, 2004). If we use to represent the set of alignment links identified by the proposed methods and to denote the reference alignment set, the methods to calculate the precision, recall, f-measure, and alignment error rate (AER) are shown in Equation (13), (14), (15), and (16). It can be seen that the higher the f-measure is, the lower the alignment error rate is. Thus, we will only show precision, recall and AER scores in the evaluation results. G S C S | S | | S S | G C G ∩ = precision (13) 471 | S | | S S | C C G ∩ = recall (14) | | | | | | 2 C G C G S S S S fmeasure + ∩ × = (15) fmeasure S S S S AER C G C G − = + ∩ × − = 1 | | | | | | 2 1 (16) 6.3 Evaluation Results We use the held-out set described in section 6.1 to set the interpolation weights. The coefficient α in Equation (8) is set to 0.8, the interpolation weight in Equation (9) is set to 0.1, the interpolation weight in model 3 in Equation (10) is set to 0.1, and the interpolation weight in model 4 is set to 1. In addition, log-likelihood ratio score thresholds are set to and . With these parameters, we get the lowest alignment error rate on the held-out set. n λ d λ d λ 30 1 = δ 25 2 = δ Using these parameters, we build two adaptation models and a translation dictionary on the training data, which are applied to the testing set. The evaluation results on our testing set are shown in Table 1. From the results, it can be seen that our approach performs the best among all of the methods, achieving the lowest alignment error rate. Compared with the method “ResAdapt”, our method achieves a higher precision without loss of recall, resulting in an error rate reduction of 6.56%. Compared with the method “Gen+Spec”, our method gets a higher recall, resulting in an error rate reduction of 17.43%. This indicates that our model adaptation method is very effective to alleviate the data-sparseness problem of domain-specific word alignment. Method Precision Recall AER Ours 0.8490 0.7599 0.1980 ResAdapt 0.8198 0.7587 0.2119 Gen+Spec 0.8456 0.6905 0.2398 Gen 0.8589 0.6576 0.2551 Spec 0.8386 0.6731 0.2532 Table 1. Word Alignment Adaptation Results The method that only uses the large-scale out-of-domain corpus as training data does not produce good result. The alignment error rate is almost the same as that of the method only using the in-domain corpus. In order to further analyze the result, we classify the alignment links into two classes: single word alignment links (SWA) and multiword alignment links (MWA). 
Single word alignment links only include one-to-one alignments. The multiword alignment links include those links in which there are multiword units in the source language or/and the target language. The results are shown in Table 2. From the results, it can be seen that the method “Spec” produces better results for multiword alignment while the method “Gen” produces better results for single word alignment. This indicates that the multiword alignment links mainly include the domain-specific words. Among the 504 multiword alignment links, about 60% of the links include domain-specific words. In Table 2, we also present the results of our method. Our method achieves the lowest error rate results on both single word alignment and multiword alignment. Method Precision Recall AER Ours (SWA) 0.8703 0.8621 0.1338 Ours (MWA) 0.5635 0.2202 0.6833 Gen (SWA) 0.8816 0.7694 0.1783 Gen (MWA) 0.3366 0.0675 0.8876 Spec (SWA) 0.8710 0.7633 0.1864 Spec (MWA) 0.4760 0.1964 0.7219 Table 2. Single Word and Multiword Alignment Results In order to further compare our method with the method described in (Wu and Wang, 2004). We do another experiment using almost the same-scale in-domain training corpus as described in (Wu and Wang, 2004). From the in-domain training corpus, we randomly select about 500 sentence pairs to build the smaller training set. The testing data is the same as shown in section 6.1. The evaluation results are shown in Table 3. Method Precision Recall AER Ours 0.8424 0.7378 0.2134 ResAdapt 0.8027 0.7262 0.2375 Gen+Spec 0.8041 0.6857 0.2598 Table 3. Alignment Adaptation Results Using a Smaller In-Domain Corpus Compared with the method “Gen+Spec”, our method achieves an error rate reduction of 17.86% 472 while the method “ResAdapt” described in (Wu and Wang, 2004) only achieves an error rate reduction of 8.59%. Compared with the method “ResAdapt”, our method achieves an error rate reduction of 10.15%. This result is different from that in (Wu and Wang, 2004), where their method achieved an error rate reduction of 21.96% as compared with the method “Gen+Spec”. The main reason is that the in-domain training corpus and testing corpus in this paper are different from those in (Wu and Wang, 2004). The training data and the testing data described in (Wu and Wang, 2004) are from a single manual. The data in our corpus are from several manuals describing how to use the diagnostic ultrasound systems. In addition to the above evaluations, we also evaluate our model adaptation method using the "refined" combination in Och and Ney (2000) instead of the translation dictionary. Using the "refined" method to select the alignments produced by our model adaptation method (AER: 0.2371) still yields better result than directly combining out-of-domain and in-domain corpora as training data of the "refined" method (AER: 0.2290). 6.4 The Effect of In-Domain Corpus In general, it is difficult to obtain large-scale in-domain bilingual corpus. For some domains, only a very small-scale bilingual sentence pairs are available. Thus, in order to analyze the effect of the size of in-domain corpus, we randomly select sentence pairs from the in-domain training corpus to generate five training sets. The numbers of sentence pairs in these five sets are 1,010, 2,020, 3,030, 4,040 and 5,046. For each training set, we use model 4 in section 2 to train an in-domain model. The out-of-domain corpus for the adaptation experiments and the testing set are the same as described in section 6.1. 
# Sentence Pairs Precision Recall AER 1010 0.8385 0.7394 0.2142 2020 0.8388 0.7514 0.2073 3030 0.8474 0.7558 0.2010 4040 0.8482 0.7555 0.2008 5046 0.8490 0.7599 0.1980 Table 4. Alignment Adaptation Results Using In-Domain Corpora of Different Sizes # Sentence Pairs Precision Recall AER 1010 0.8737 0.6642 0.2453 2020 0.8502 0.6804 0.2442 3030 0.8473 0.6874 0.2410 4040 0.8430 0.6917 0.2401 5046 0.8456 0.6905 0.2398 Table 5. Alignment Results Directly Combining Out-of-Domain and In-Domain Corpora The results are shown in Table 4 and Table 5. Table 4 describes the alignment adaptation results using in-domain corpora of different sizes. Table 5 describes the alignment results by directly combining the out-of-domain corpus and the in-domain corpus of different sizes. From the results, it can be seen that the larger the size of in-domain corpus is, the smaller the alignment error rate is. However, when the number of the sentence pairs increase from 3030 to 5046, the error rate reduction in Table 4 is very small. This is because the contents in the specific domain are highly replicated. This also shows that increasing the domain-specific corpus does not obtain great improvement on the word alignment results. Comparing the results in Table 4 and Table 5, we find out that our adaptation method reduces the alignment error rate on all of the in-domain corpora of different sizes. 6.5 The Effect of Out-of-Domain Corpus In order to further analyze the effect of the out-of-domain corpus on the adaptation results, we randomly select sentence pairs from the out-of-domain corpus to generate five sets. The numbers of sentence pairs in these five sets are 65,000, 130,000, 195,000, 260,000, and 320,000 (the entire out-of-domain corpus). In the adaptation experiments, we use the entire in-domain corpus (5046 sentence pairs). The adaptation results are shown in Table 6. From the results in Table 6, it can be seen that the larger the size of out-of-domain corpus is, the smaller the alignment error rate is. However, when the number of the sentence pairs is more than 130,000, the error rate reduction is very small. This indicates that we do not need a very large bilingual out-of-domain corpus to improve domain-specific word alignment results. 473 # Sentence Pairs (k) Precision Recall AER 65 0.8441 0.7284 0.2180 130 0.8479 0.7413 0.2090 195 0.8454 0.7461 0.2073 260 0.8426 0.7508 0.2059 320 0.8490 0.7599 0.1980 Table 6. Adaptation Alignment Results Using Out-of-Domain Corpora of Different Sizes 7 Conclusion This paper proposes an approach to improve domain-specific word alignment through alignment model adaptation. Our approach first trains two alignment models with a large-scale out-of-domain corpus and a small-scale domain-specific corpus. Second, we build a new adaptation model by linearly interpolating these two models. Third, we apply the new model to the domain-specific corpus and improve the word alignment results. In addition, with the training data, an interpolated translation dictionary is built to select the word alignment links from different alignment results. Experimental results indicate that our approach achieves a precision of 84.90% and a recall of 75.99% for word alignment in a specific domain. Our method achieves a relative error rate reduction of 17.43% as compared with the method directly combining the out-of-domain corpus and the in-domain corpus as training data. It also achieves a relative error rate reduction of 6.56% as compared with the previous work in (Wu and Wang, 2004). 
In addition, when we train the model with a smaller-scale in-domain corpus as described in (Wu and Wang, 2004), our method achieves an error rate reduction of 10.15% as compared with the method in (Wu and Wang, 2004). We also use in-domain corpora and out-of-domain corpora of different sizes to perform adaptation experiments. The experimental results show that our model adaptation method improves alignment results on in-domain corpora of different sizes. The experimental results also show that even a not very large out-of-domain corpus can help to improve the domain-specific word alignment through alignment model adaptation. References L. Ahrenberg, M. Merkel, M. Andersson. 1998. A Simple Hybrid Aligner for Generating Lexical Correspondences in Parallel Tests. In Proc. of ACL/COLING-1998, pp. 29-35. Y. Al-Onaizan, J. Curin, M. Jahr, K. Knight, J. Lafferty, D. Melamed, F. J. Och, D. Purdy, N. A. Smith, D. Yarowsky. 1999. Statistical Machine Translation Final Report. Johns Hopkins University Workshop. P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, R. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2): 263-311. C. Cherry and D. Lin. 2003. A Probability Model to Improve Word Alignment. In Proc. of ACL-2003, pp. 88-95. T. Dunning. 1993. Accurate Methods for the Statistics of Surprise and Coincidence. Computational Linguistics, 19(1): 61-74. R. Iyer, M. Ostendorf, H. Gish. 1997. Using Out-of-Domain Data to Improve In-Domain Language Models. IEEE Signal Processing Letters, 221-223. S. J. Ker and J. S. Chang. 1997. A Class-based Approach to Word Alignment. Computational Linguistics, 23(2): 313-343. I. D. Melamed. 1997. A Word-to-Word Model of Translational Equivalence. In Proc. of ACL 1997, pp. 490-497. F. J. Och and H. Ney. 2000. Improved Statistical Alignment Models. In Proc. of ACL-2000, pp. 440-447. A. Peñas, F. Verdejo, J. Gonzalo. 2001. Corpus-based Terminology Extraction Applied to Information Access. In Proc. of the Corpus Linguistics 2001, vol. 13. F. Smadja, K. R. McKeown, V. Hatzivassiloglou. 1996. Translating Collocations for Bilingual Lexicons: a Statistical Approach. Computational Linguistics, 22(1): 1-38. D. Tufis and A. M. Barbu. 2002. Lexical Token Alignment: Experiments, Results and Application. In Proc. of LREC-2002, pp. 458-465. D. Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora. Computational Linguistics, 23(3): 377-403. H. Wu and H. Wang. 2004. Improving Domain-Specific Word Alignment with a General Bilingual Corpus. In R. E. Frederking and K. B. Taylor (Eds.), Machine Translation: From Real Users to Research: 6th conference of AMTA-2004, pp. 262-271. 474 | 2005 | 58 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 475–482, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Stochastic Lexicalized Inversion Transduction Grammar for Alignment Hao Zhang and Daniel Gildea Computer Science Department University of Rochester Rochester, NY 14627 Abstract We present a version of Inversion Transduction Grammar where rule probabilities are lexicalized throughout the synchronous parse tree, along with pruning techniques for efficient training. Alignment results improve over unlexicalized ITG on short sentences for which full EM is feasible, but pruning seems to have a negative impact on longer sentences. 1 Introduction The Inversion Transduction Grammar (ITG) of Wu (1997) is a syntactically motivated algorithm for producing word-level alignments of pairs of translationally equivalent sentences in two languages. The algorithm builds a synchronous parse tree for both sentences, and assumes that the trees have the same underlying structure but that the ordering of constituents may differ in the two languages. This probabilistic, syntax-based approach has inspired much subsequent reasearch. Alshawi et al. (2000) use hierarchical finite-state transducers. In the tree-to-string model of Yamada and Knight (2001), a parse tree for one sentence of a translation pair is projected onto the other string. Melamed (2003) presents algorithms for synchronous parsing with more complex grammars, discussing how to parse grammars with greater than binary branching and lexicalization of synchronous grammars. Despite being one of the earliest probabilistic syntax-based translation models, ITG remains stateof-the art. Zens and Ney (2003) found that the constraints of ITG were a better match to the decoding task than the heuristics used in the IBM decoder of Berger et al. (1996). Zhang and Gildea (2004) found ITG to outperform the tree-to-string model for word-level alignment, as measured against human gold-standard alignments. One explanation for this result is that, while a tree representation is helpful for modeling translation, the trees assigned by the traditional monolingual parsers (and the treebanks on which they are trained) may not be optimal for translation of a specific language pair. ITG has the advantage of being entirely data-driven – the trees are derived from an expectation maximization procedure given only the original strings as input. In this paper, we extend ITG to condition the grammar production probabilities on lexical information throughout the tree. This model is reminiscent of lexicalization as used in modern statistical parsers, in that a unique head word is chosen for each constituent in the tree. It differs in that the head words are chosen through EM rather than deterministic rules. This approach is designed to retain the purely data-driven character of ITG, while giving the model more information to work with. By conditioning on lexical information, we expect the model to be able capture the same systematic differences in languages’ grammars that motive the tree-to-string model, for example, SVO vs. SOV word order or prepositions vs. postpositions, but to be able to do so in a more fine-grained manner. The interaction between lexical information and word order also explains the higher performance of IBM model 4 over IBM model 3 for alignment. We begin by presenting the probability model in the following section, detailing how we address issues of pruning and smoothing that lexicalization introduces. 
We present alignment results on a parallel Chinese-English corpus in Section 3. 475 2 Lexicalization of Inversion Transduction Grammars An Inversion Transduction Grammar can generate pairs of sentences in two languages by recursively applying context-free bilingual production rules. Most work on ITG has focused on the 2-normal form, which consists of unary production rules that are responsible for generating word pairs: X →e/f and binary production rules in two forms that are responsible for generating syntactic subtree pairs: X →[Y Z] and X →⟨Y Z⟩ The rules with square brackets enclosing the right hand side expand the left hand side symbol into the two symbols on the right hand side in the same order in the two languages, whereas the rules with pointed brackets expand the left hand side symbol into the two right hand side symbols in reverse order in the two languages. One special case of ITG is the bracketing ITG that has only one nonterminal that instantiates exactly one straight rule and one inverted rule. The ITG we apply in our experiments has more structural labels than the primitive bracketing grammar: it has a start symbol S, a single preterminal C, and two intermediate nonterminals A and B used to ensure that only one parse can generate any given word-level alignment, as discussed by Wu (1997) and Zens and Ney (2003). As an example, Figure 1 shows the alignment and the corresponding parse tree for the sentence pair Je les vois / I see them using the unambiguous bracketing ITG. A stochastic ITG can be thought of as a stochastic CFG extended to the space of bitext. The independence assumptions typifying S-CFGs are also valid for S-ITGs. Therefore, the probability of an S-ITG parse is calculated as the product of the probabilities of all the instances of rules in the parse tree. For instance, the probability of the parse in Figure 1 is: P(S →A) · P(A →[CB]) · P(B →⟨CC⟩) · P(C →I/Je) · P(C →see/vois) · P(C →them/les) It is important to note that besides the bottomlevel word-pairing rules, the other rules are all nonlexical, which means the structural alignment component of the model is not sensitive to the lexical contents of subtrees. Although the ITG model can effectively restrict the space of alignment to make polynomial time parsing algorithms possible, the preference for inverted or straight rules only passively reflect the need of bottom level word alignment. We are interested in investigating how much help it would be if we strengthen the structural alignment component by making the orientation choices dependent on the real lexical pairs that are passed up from the bottom. The first step of lexicalization is to associate a lexical pair with each nonterminal. The head word pair generation rules are designed for this purpose: X →X(e/f) The word pair e/f is representative of the lexical content of X in the two languages. For binary rules, the mechanism of head selection is introduced. Now there are 4 forms of binary rules: X(e/f) →[Y (e/f)Z] X(e/f) →[Y Z(e/f)] X(e/f) →⟨Y (e/f)Z⟩ X(e/f) →⟨Y Z(e/f)⟩ determined by the four possible combinations of head selections (Y or Z) and orientation selections (straight or inverted). The rules for generating lexical pairs at the leaves of the tree are now predetermined: X(e/f) →e/f Putting them all together, we are able to derive a lexicalized bilingual parse tree for a given sentence pair. In Figure 2, the example in Figure 1 is revisited. 
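Concretely, an unlexicalized stochastic ITG scores a parse as a plain product over its rule instances. The short Python sketch below reproduces that computation for the Figure 1 parse of I see them / Je les vois; the rule probabilities are invented for illustration and the data layout is ours, not the authors'.

```python
from functools import reduce

# Illustrative rule probabilities for a toy stochastic ITG (values are made up).
# "[ ]" denotes a straight rule, "< >" an inverted rule, "e/f" a word-pair rule.
rule_prob = {
    ("S", "A"): 1.0,           # S -> A
    ("A", "[C B]"): 0.6,       # A -> [C B]   (straight)
    ("B", "<C C>"): 0.4,       # B -> <C C>   (inverted)
    ("C", "I/Je"): 0.05,       # C -> I/Je
    ("C", "see/vois"): 0.03,   # C -> see/vois
    ("C", "them/les"): 0.02,   # C -> them/les
}

# The Figure 1 parse, listed as the sequence of rule instances it uses.
parse_rules = [
    ("S", "A"),
    ("A", "[C B]"),
    ("C", "I/Je"),
    ("B", "<C C>"),
    ("C", "see/vois"),
    ("C", "them/les"),
]

def parse_probability(rules):
    """P(parse) = product of the probabilities of all rule instances."""
    return reduce(lambda p, r: p * rule_prob[r], rules, 1.0)

print(parse_probability(parse_rules))  # 0.6 * 0.4 * 0.05 * 0.03 * 0.02 ≈ 7.2e-06
```

The same product form carries over once the grammar is lexicalized; only the inventory of rules changes, as the derivation for the Figure 2 parse shows next.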
The probability of the lexicalized parse is: P(S →S(see/vois)) · P(S(see/vois) →A(see/vois)) · P(A(see/vois) →[CB(see/vois)]) · P(C →C(I/Je)) 476 I see them Je les vois C B C A see/vois them/les I/Je S C Figure 1: ITG Example I see them Je les vois S(see/vois) C(see/vois) C(I/Je) C S C(them/les) C B(see/vois) A(see/vois) Figure 2: Lexicalized ITG Example. see/vois is the headword of both the 2x2 cell and the entire alignment. · P(B(see/vois) →⟨C(see/vois)C⟩) · P(C →C(them/les)) The factors of the product are ordered to show the generative process of the most probable parse. Starting from the start symbol S, we first choose the head word pair for S, which is see/vois in the example. Then, we recursively expand the lexicalized head constituents using the lexicalized structural rules. Since we are only lexicalizing rather than bilexicalizing the rules, the non-head constituents need to be lexicalized using head generation rules so that the top-down generation process can proceed in all branches. By doing so, word pairs can appear at all levels of the final parse tree in contrast with the unlexicalized parse tree in which the word pairs are generated only at the bottom. The binary rules are lexicalized rather than bilexicalized.1 This is a trade-off between complexity and expressiveness. After our lexicalization, the number of lexical rules, thus the number of parameters in the statistical model, is still at the order of O(|V ||T|), where |V | and |T| are the vocabulary sizes of the 1In a sense our rules are bilexicalized in that they condition on words from both languages; however they do not capture head-modifier relations within a language. two languages. 2.1 Parsing Given a bilingual sentence pair, a synchronous parse can be built using a two-dimensional extension of chart parsing, where chart items are indexed by their nonterminal X, head word pair e/f if specified, beginning and ending positions l, m in the source language string, and beginning and ending positions i, j in the target language string. For Expectation Maximization training, we compute lexicalized inside probabilities β(X(e/f), l, m, i, j), as well as unlexicalized inside probabilities β(X, l, m, i, j), from the bottom up as outlined in Algorithm 1. The algorithm has a complexity of O(N4 s N4 t ), where Ns and Nt are the lengths of source and target sentences respectively. The complexity of parsing for an unlexicalized ITG is O(N3 s N3 t ). Lexicalization introduces an additional factor of O(NsNt), caused by the choice of headwords e and f in the pseudocode. Assuming that the lengths of the source and target sentences are proportional, the algorithm has a complexity of O(n8), where n is the average length of the source and target sentences. 477 Algorithm 1 LexicalizedITG(s, t) for all l, m such that 0 ≤l ≤m ≤Ns do for all i, j such that 0 ≤i ≤j ≤Nt do for all e ∈{el+1 . . . em} do for all f ∈{fi+1 . . . 
fj} do for all n such that l ≤n ≤m do for all k such that i ≤k ≤j do for all rules X →Y Z ∈G do β(X(e/f), l, m, i, j) += straight rule, where Y is head P([Y (e/f)Z] | X(e/f)) ·β(Y (e/f), l, n, i, k) · β(Z, n, m, k, j) inverted rule, where Y is head + P(⟨Y (e/f)Z⟩| X(e/f)) ·β(Y (e/f), n, m, i, k) · β(Z, l, n, k, j) straight rule, where Z is head + P([Y Z(e/f)] | X(e/f)) ·β(Y, l, n, i, k) · β(Z(e/f), n, m, k, j) inverted rule, where Z is head + P(⟨Y Z(e/f)⟩| X(e/f)) ·β(Y, n, m, i, k) · β(Z(e/f), l, n, k, j) end for end for end for word pair generation rule β(X, l, m, i, j) += P(X(e/f) | X) ·β(X(e/f), l, m, i, j) end for end for end for end for 2.2 Pruning We need to further restrict the space of alignments spanned by the source and target strings to make the algorithm feasible. Our technique involves computing an estimate of how likely each of the n4 cells in the chart is before considering all ways of building the cell by combining smaller subcells. Our figure of merit for a cell involves an estimate of both the inside probability of the cell (how likely the words within the box in both dimensions are to align) and the outside probability (how likely the words outside the box in both dimensions are to align). In including an estimate of the outside probability, our technique is related to A* methods for monolingual parsing (Klein and Manning, 2003), although our estimate is not guaranteed to be lower than complete outside probabity assigned by ITG. Figure 3(a) displays the tic-tac-toe pattern for the inside and outside components of a particular cell. We use IBM Model 1 as our estimate of both the inside and outside probabilities. In the Model 1 estimate of the outside probability, source and target words can align using any combination of points from the four outside corners of the tic-tac-toe pattern. Thus in Figure 3(a), there is one solid cell (corresponding to the Model 1 Viterbi alignment) in each column, falling either in the upper or lower outside shaded corner. This can be also be thought of as squeezing together the four outside corners, creating a new cell whose probability is estimated using IBM Model 1. Mathematically, our figure of merit for the cell (l, m, i, j) is a product of the inside Model 1 probability and the outside Model 1 probability: P(f (i,j) | e(l,m)) · P(f(i,j) | e(l,m)) (1) = λ|(l,m)|,|(i,j)| Y t∈(i,j) X s∈{0,(l,m)} t(ft | es) · λ|(l,m)|,|(i,j)| Y t∈(i,j) X s∈{0,(l,m)} t(ft | es) 478 l m i j i j l m i j (a) (b) (c) Figure 3: The tic-tac-toe figure of merit used for pruning bitext cells. The shaded regions in (a) show alignments included in the figure of merit for bitext cell (l, m, i, j) (Equation 1); solid black cells show the Model 1 Viterbi alignment within the shaded area. (b) shows how to compute the inside probability of a unit-width cell by combining basic cells (Equation 2), and (c) shows how to compute the inside probability of any cell by combining unit-width cells (Equation 3). where (l, m) and (i, j) represent the complementary spans in the two languages. λL1,L2 is the probability of any word alignment template for a pair of L1word source string and L2-word target string, which we model as a uniform distribution of word-forword alignment patterns after a Poisson distribution of target string’s possible lengths, following Brown et al. (1993). 
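Ignoring the λ length-prior factors, Equation 1 is the IBM Model 1 score of the inside box multiplied by the Model 1 score of the complementary outside spans. A minimal sketch follows; the function names are ours, the translation table t is assumed to be estimated separately, and the summed (rather than Viterbi max) version is shown.

```python
def model1_score(tgt_words, src_words, t, null="NULL"):
    """Product over target words of the sum over {NULL} ∪ source words of t(f | e)."""
    score = 1.0
    for f in tgt_words:
        score *= sum(t.get((f, e), 0.0) for e in [null] + list(src_words))
    return score

def figure_of_merit(src, tgt, l, m, i, j, t):
    """Tic-tac-toe estimate for bitext cell (l, m, i, j): the Model 1 score of the
    inside box times the Model 1 score of the two complementary (outside) spans.
    Spans are fenceposts, so (l, m) covers src[l:m] and (i, j) covers tgt[i:j];
    the λ factors of Equation 1 are omitted from this sketch."""
    inside = model1_score(tgt[i:j], src[l:m], t)
    outside = model1_score(tgt[:i] + tgt[j:], src[:l] + src[m:], t)
    return inside * outside
```

Computed independently for every cell, this costs O(n^2) per cell; the recurrences of Equations 2 and 3 below share work across cells and reduce the total cost to O(n^4).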
As an alternative, the P operator can be replaced by the max operator as the inside operator over the translation probabilities above, meaning that we use the Model 1 Viterbi probability as our estimate, rather than the total Model 1 probability.2 A na¨ıve implementation would take O(n6) steps of computation, because there are O(n4) cells, each of which takes O(n2) steps to compute its Model 1 probability. Fortunately, we can exploit the recursive nature of the cells. Let INS(l, m, i, j) denote the major factor of our Model 1 estimate of a cell’s inside probability, Q t∈(i,j) P s∈{0,(l,m)} t(ft | es). It turns out that one can compute cells of width one (i = j) in constant time from a cell of equal width and lower height: INS(l, m, j, j) = Y t∈(j,j) X s∈{0,(l,m)} t(ft | es) = X s∈{0,(l,m)} t(fj | es) = INS(l, m −1, j, j) + t(fj | em) (2) Similarly, one can compute cells of width greater than one by combining a cell of one smaller width 2The experimental difference of the two alternatives was small. For our results, we used the max version. with a cell of width one: INS(l, m, i, j) = Y t∈(i,j) X s∈{0,(l,m)} t(ft | es) = Y t∈(i,j) INS(l, m, t, t) = INS(l, m, i, j −1) · INS(l, m, j, j) (3) Figure 3(b) and (c) illustrate the inductive computation indicated by the two equations. Each of the O(n4) inductive steps takes one additive or multiplicative computation. A similar dynammic programing technique can be used to efficiently compute the outside component of the figure of merit. Hence, the algorithm takes just O(n4) steps to compute the figure of merit for all cells in the chart. Once the cells have been scored, there can be many ways of pruning. In our experiments, we applied beam ratio pruning to each individual bucket of cells sharing a common source substring. We prune cells whose probability is lower than a fixed ratio below the best cell for the same source substring. As a result, at least one cell will be kept for each source substring. We safely pruned more than 70% of cells using 10−5 as the beam ratio for sentences up to 25 words. Note that this pruning technique is applicable to both the lexicalized ITG and the conventional ITG. In addition to pruning based on the figure of merit described above, we use top-k pruning to limit the number of hypotheses retained for each cell. This is necessary for lexicalized ITG because the number of distinct hypotheses in the two-dimensional ITG 479 chart has increased to O(N3 s N3 t ) from O(N2 s N2 t ) due to the choice one of O(Ns) source language words and one of O(Nt) target language words as the head. We keep only the top-k lexicalized items for a given chart cell of a certain nonterminal Y contained in the cell l, m, i, j. Thus the additional complexity of O(NsNt) will be replaced by a constant factor. The two pruning techniques can work for both the computation of expected counts during the training process and for the Viterbi-style algorithm for extracting the most probable parse after training. However, if we initialize EM from a uniform distribution, all probabilties are equal on the first iteration, giving us no basis to make pruning decisions. So, in our experiments, we initialize the head generation probabilities of the form P(X(e/f) | X) to be the same as P(e/f | C) from the result of the unlexicalized ITG training. 2.3 Smoothing Even though we have controlled the number of parameters of the model to be at the magnitude of O(|V ||T|), the problem of data sparseness still renders a smoothing method necessary. 
We use backing off smoothing as the solution. The probabilities of the unary head generation rules are in the form of P(X(e/f) | X). We simply back them off to the uniform distribution. The probabilities of the binary rules, which are conditioned on lexicalized nonterminals, however, need to be backed off to the probabilities of generalized rules in the following forms: P([Y (∗)Z] | X(∗)) P([Y Z(∗)] | X(∗)) P(⟨Y (∗)Z⟩| X(∗)) P(⟨Y Z(∗)⟩| X(∗)) where ∗stands for any lexical pair. For instance, P([Y (e/f)Z] | X(e/f)) = (1 −λ)PEM([Y (e/f)Z] | X(e/f)) + λP([Y (∗)Z] | X(∗)) where λ = 1/(1 + Expected Counts(X(e/f))) The more often X(e/f) occurred, the more reliable are the estimated conditional probabilities with the condition part being X(e/f). 3 Experiments We trained both the unlexicalized and the lexicalized ITGs on a parallel corpus of Chinese-English newswire text. The Chinese data were automatically segmented into tokens, and English capitalization was retained. We replaced words occurring only once with an unknown word token, resulting in a Chinese vocabulary of 23,783 words and an English vocabulary of 27,075 words. In the first experiment, we restricted ourselves to sentences of no more than 15 words in either language, resulting in a training corpus of 6,984 sentence pairs with a total of 66,681 Chinese words and 74,651 English words. In this experiment, we didn’t apply the pruning techniques for the lexicalized ITG. In the second experiment, we enabled the pruning techniques for the LITG with the beam ratio for the tic-tac-toe pruning as 10−5 and the number k for the top-k pruning as 25. We ran the experiments on sentences up to 25 words long in both languages. The resulting training corpus had 18,773 sentence pairs with a total of 276,113 Chinese words and 315,415 English words. We evaluate our translation models in terms of agreement with human-annotated word-level alignments between the sentence pairs. For scoring the Viterbi alignments of each system against goldstandard annotated alignments, we use the alignment error rate (AER) of Och and Ney (2000), which measures agreement at the level of pairs of words: AER = 1 −|A ∩GP | + |A ∩GS| |A| + |GS| where A is the set of word pairs aligned by the automatic system, GS is the set marked in the gold standard as “sure”, and GP is the set marked as “possible” (including the “sure” pairs). In our Chinese-English data, only one type of alignment was marked, meaning that GP = GS. In our hand-aligned data, 20 sentence pairs are less than or equal to 15 words in both languages, and were used as the test set for the first experiment, and 47 sentence pairs are no longer than 25 words in either language and were used to evaluate the pruned 480 Alignment Precision Recall Error Rate IBM Model 1 .59 .37 .54 IBM Model 4 .63 .43 .49 ITG .62 .47 .46 Lexicalized ITG .66 .50 .43 Table 1: Alignment results on Chinese-English corpus (≤15 words on both sides). Full ITG vs. Full LITG Alignment Precision Recall Error Rate IBM Model 1 .56 .42 .52 IBM Model 4 .67 .43 .47 ITG .68 .52 .40 Lexicalized ITG .69 .51 .41 Table 2: Alignment results on Chinese-English corpus (≤25 words on both sides). Full ITG vs. Pruned LITG LITG against the unlexicalized ITG. A separate development set of hand-aligned sentence pairs was used to control overfitting. The subset of up to 15 words in both languages was used for cross-validating in the first experiment. The subset of up to 25 words in both languages was used for the same purpose in the second experiment. 
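Written out, AER is 1 minus the quantity (|A ∩ GP| + |A ∩ GS|) divided by (|A| + |GS|). A small set-based sketch, with made-up alignments, makes the computation explicit.

```python
def alignment_error_rate(A, sure, possible):
    """AER = 1 - (|A ∩ GP| + |A ∩ GS|) / (|A| + |GS|), with GS ⊆ GP (Och & Ney, 2000).
    A, sure, possible are sets of (source_index, target_index) pairs."""
    A, sure = set(A), set(sure)
    possible = set(possible) | sure
    return 1.0 - (len(A & possible) + len(A & sure)) / (len(A) + len(sure))

# Toy example (invented alignments).  Here GP = GS, as in the Chinese-English data.
hyp = {(0, 0), (1, 2), (2, 1)}
gold = {(0, 0), (1, 2), (2, 2)}
print(alignment_error_rate(hyp, gold, gold))  # 1 - (2 + 2) / (3 + 3) ≈ 0.333
```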
Table 1 compares results using the full (unpruned) model of unlexicalized ITG with the full model of lexicalized ITG. The two models were initialized from uniform distributions for all rules and were trained until AER began to rise on our held-out cross-validation data, which turned out to be 4 iterations for ITG and 3 iterations for LITG. The results from the second experiment are shown in Table 2. The performance of the full model of unlexicalized ITG is compared with the pruned model of lexicalized ITG using more training data and evaluation data. Under the same check condition, we trained ITG for 3 iterations and the pruned LITG for 1 iteration. For comparison, we also included the results from IBM Model 1 and Model 4. The numbers of iterations for the training of the IBM models were chosen to be the turning points of AER changing on the cross-validation data. 4 Discussion As shown by the numbers in Table 1, the full lexicalized model produced promising alignment results on sentence pairs that have no more than 15 words on both sides. However, due to its prohibitive O(n8) computational complexity, our C++ implementation of the unpruned lexicalized model took more than 500 CPU hours, which were distributed over multiple machines, to finish one iteration of training. The number of CPU hours would increase to a point that is unacceptable if we doubled the average sentence length. Some type of pruning is a must-have. Our pruned version of LITG controlled the running time for one iteration to be less than 1200 CPU hours, despite the fact that both the number of sentences and the average length of sentences were more than doubled. To verify the safety of the tic-tac-toe pruning technique, we applied it to the unlexicalized ITG using the same beam ratio (10−5) and found that the AER on the test data was not changed. However, whether or not the top-k lexical head pruning technique is equally safe remains a question. One noticeable implication of this technique for training is the reliance on initial probabilities of lexical pairs that are discriminative enough. The comparison of results for ITG and LITG in Table 2 and the fact that AER began to rise after only one iteration of training seem to indicate that keeping few distinct lexical heads caused convergence on a suboptimal set 481 of parameters, leading to a form of overfitting. In contrast, overfitting did not seem to be a problem for LITG in the unpruned experiment of Table 1, despite the much larger number of parameters for LITG than for ITG and the smaller training set. We also want to point out that for a pair of long sentences, it would be hard to reflect the inherent bilingual syntactic structure using the lexicalized binary bracketing parse tree. In Figure 2, A(see/vois) echoes IP(see/vois) and B(see/vois) echoes V P(see/vois) so that it means IP(see/vois) is not inverted from English to French but its right child V P(see/vois) is inverted. However, for longer sentences with more than 5 levels of bracketing and the same lexicalized nonterminal repeatedly appearing at different levels, the correspondences would become less linguistically plausible. We think the limitations of the bracketing grammar are another reason for not being able to improve the AER of longer sentence pairs after lexicalization. The space of alignments that is to be considered by LITG is exactly the space considered by ITG since the structural rules shared by them define the alignment space. 
The lexicalized ITG is designed to be more sensitive to the lexical influence on the choices of inversions so that it can find better alignments. Wu (1997) demonstrated that for pairs of sentences that are less than 16 words, the ITG alignment space has a good coverage over all possibilities. Hence, it’s reasonable to see a better chance of improving the alignment result for sentences less than 16 words. 5 Conclusion We presented the formal description of a Stochastic Lexicalized Inversion Transduction Grammar with its EM training procedure, and proposed specially designed pruning and smoothing techniques. The experiments on a parallel corpus of Chinese and English showed that lexicalization helped for aligning sentences of up to 15 words on both sides. The pruning and the limitations of the bracketing grammar may be the reasons that the result on sentences of up to 25 words on both sides is not better than that of the unlexicalized ITG. Acknowledgments We are very grateful to Rebecca Hwa for assistance with the Chinese-English data, to Kevin Knight and Daniel Marcu for their feedback, and to the authors of GIZA. This work was partially supported by NSF ITR IIS-09325646 and NSF ITR IIS-0428020. References Hiyan Alshawi, Srinivas Bangalore, and Shona Douglas. 2000. Learning dependency translation models as collections of finite state head transducers. Computational Linguistics, 26(1):45–60. Adam Berger, Peter Brown, Stephen Della Pietra, Vincent Della Pietra, J. R. Fillett, Andrew Kehler, and Robert Mercer. 1996. Language translation apparatus and method of using context-based tanslation models. United States patent 5,510,981. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Dan Klein and Christopher D. Manning. 2003. A* parsing: Fast exact viterbi parse selection. In Proceedings of the 2003 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-03). I. Dan Melamed. 2003. Multitext grammars and synchronous parsers. In Proceedings of the 2003 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-03), Edmonton. Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Conference of the Association for Computational Linguistics (ACL-00), pages 440–447, Hong Kong, October. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proceedings of the 39th Annual Conference of the Association for Computational Linguistics (ACL-01), Toulouse, France. Richard Zens and Hermann Ney. 2003. A comparative study on reordering constraints in statistical machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Sapporo, Japan. Hao Zhang and Daniel Gildea. 2004. Syntax-based alignment: Supervised or unsupervised? In Proceedings of the 20th International Conference on Computational Linguistics (COLING-04), Geneva, Switzerland, August. 482 | 2005 | 59 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 42–49, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics The Role of Semantic Roles in Disambiguating Verb Senses Hoa Trang Dang National Institute of Standards and Technology Gaithersburg, MD 20899 [email protected] Martha Palmer Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104 [email protected] Abstract We describe an automatic Word Sense Disambiguation (WSD) system that disambiguates verb senses using syntactic and semantic features that encode information about predicate arguments and semantic classes. Our system performs at the best published accuracy on the English verbs of Senseval-2. We also experiment with using the gold-standard predicateargument labels from PropBank for disambiguating fine-grained WordNet senses and course-grained PropBank framesets, and show that disambiguation of verb senses can be further improved with better extraction of semantic roles. 1 Introduction A word can have different meanings depending on the context in which it is used. Word Sense Disambiguation (WSD) is the task of determining the correct meaning (“sense”) of a word in context, and several efforts have been made to develop automatic WSD systems. Early work on WSD (Yarowsky, 1995) was successful for easily distinguishable homonyms like bank, which have multiple unrelated meanings. While homonyms are fairly tractable, highly polysemous verbs, which have related but subtly distinct senses, pose the greatest challenge for WSD systems (Palmer et al., 2001). Verbs are syntactically complex, and their syntax is thought to be determined by their underlying semantics (Grimshaw, 1990; Levin, 1993). Levin verb classes, for example, are based on the ability of a verb to occur in pairs of syntactic frames (diathesis alternations); different senses of a verb belong to different verb classes, which have different sets of syntactic frames that are supposed to reflect underlying semantic components that constrain allowable arguments. If this is true, then the correct sense of a verb should be revealed (at least partially) in its arguments. In this paper we show that the performance of automatic WSD systems can be improved by using richer linguistic features that capture information about predicate arguments and their semantic classes. We describe our approach to automatic WSD of verbs using maximum entropy models to combine information from lexical collocations, syntax, and semantic class constraints on verb arguments. The system performs at the best published accuracy on the English verbs of the Senseval-2 (Palmer et al., 2001) exercise on evaluating automatic WSD systems. The Senseval-2 verb instances have been manually tagged with their WordNet sense and come primarily from the Penn Treebank WSJ. The WSJ corpus has also been manually annotated for predicate arguments as part of PropBank (Kingsbury and Palmer, 2002), and the intersection of PropBank and Senseval-2 forms a corpus containing gold-standard annotations of WordNet senses and PropBank semantic role labels. This provides a unique opportunity to investigate the role of predicate arguments in verb sense disambiguation. We show that our system’s accuracy improves significantly by adding features from PropBank, which explicitly encodes the predicate-argument informa42 tion that our original set of syntactic and semantic class features attempted to capture. 
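The classifier at the core of such a system is a maximum entropy model over sparse binary features. As a rough, freely available stand-in for the Mallet models used in the implementation, the sketch below trains a multinomial logistic regression (whose L2 penalty corresponds to a Gaussian prior); the feature names, sense labels, and training instances are invented purely for illustration.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training instances for one verb: each instance is a bag of binary features.
train_feats = [
    {"w[-1]=the", "subj", "dobj=blanket", "dobjsyn=artifact"},
    {"w[-1]=had", "subj", "sent-comp"},
    {"w[-1]=the", "subj", "dobj=trigger", "dobjsyn=device"},
]
senses = ["pull.v.1", "pull.v.4", "pull.v.1"]   # hypothetical WordNet sense labels

vec = DictVectorizer()
X = vec.fit_transform([{f: 1 for f in feats} for feats in train_feats])
clf = LogisticRegression(C=1.0, max_iter=1000)  # L2 penalty ≈ Gaussian prior
clf.fit(X, senses)

test = vec.transform([{f: 1 for f in {"w[-1]=the", "subj", "dobj=rope", "dobjsyn=artifact"}}])
print(clf.predict(test))  # most probable sense for the new instance
```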
2 Basic automatic system Our WSD system was built to combine information from many different sources, using as much linguistic knowledge as could be gathered automatically by NLP tools. In particular, our goal was to see the extent to which sense-tagging of verbs could be improved by adding features that capture information about predicate-arguments and selectional restrictions. We used the Mallet toolkit (McCallum, 2002) for learning maximum entropy models with Gaussian priors for all our experiments. In order to extract the linguistic features necessary for the models, all sentences containing the target word were automatically part-of-speech-tagged using a maximum entropy tagger (Ratnaparkhi, 1998) and parsed using the Collins parser (Collins, 1997). In addition, an automatic named entity tagger (Bikel et al., 1997) was run on the sentences to map proper nouns to a small set of semantic classes.1 2.1 Topical features We categorized the possible model features into topical features and several types of local contextual features. Topical features for a verb in a sentence look for the presence of keywords occurring anywhere in the sentence and any surrounding sentences provided as context (usually one or two sentences). These features are supposed to show the domain in which the verb is being used, since some verb senses are used in only certain domains. The set of keywords is specific to each verb lemma to be disambiguated and is determined automatically from training data so as to minimize the entropy of the probability of the senses conditioned on the keyword. All alphabetic characters are converted to lower case. Words occuring less than twice in the training data or that are in a stoplist2 of pronouns, prepositions, and conjunctions are ignored. 1The inclusion or omission of a particular company or product implies neither endorsement nor criticism by NIST. Any opinions, findings, and conclusions expressed are the authors’ own and do not necessarily reflect those of NIST. 2http://www.d.umn.edu/˜tpederse/Group01/ WordNet/words.txt 2.2 Local features The local features for a verb in a particular sentence tend to look only within the smallest clause containing . They include collocational features requiring no linguistic preprocessing beyond partof-speech tagging, syntactic features that capture relations between the verb and its complements, and semantic features that incorporate information about noun classes for subjects and objects: Collocational features: Collocational features refer to ordered sequences of part-of-speech tags or word tokens immediately surrounding . They include: unigrams: words , , , , and parts of speech , , , , , where
and are at position relative to bigrams: , , ; , , trigrams: , , , ; , , , Syntactic features: The system uses heuristics to extract syntactic elements from the parse for the sentence containing . Let commander VP be the lowest VP that dominates and that is not immediately dominated by another VP, and let head VP be the lowest VP dominating (See Figure 1). Then we define the subject of to be the leftmost NP sibling of commander VP, and a complement of to be a node that is a child of the head VP, excluding NPs whose head is a number or a noun from a list of common temporal nouns (“week”, “tomorrow”, “Monday”, etc.). The system extracts the following binary syntactic features: Is the sentence passive? Is there a subject, direct object (leftmost NP complement of ), indirect object (second leftmost NP complement of ), or clausal complement (S complement of )? What is the word (if any) that is the particle or head of the subject, direct object, or indirect object? 43 S NP John (commander) VP VB had (head) VP VB pulled NP the blanket PP across the carpet S to create static Figure 1: Example parse tree for =“pulled”, from which is extracted the syntactic features: morph=normal subj dobj sent-comp subj=john dobj=blanket prep=across across-obj=carpet. If there is a PP complement, what is the preposition, and what is the object of the preposition? Semantic features: What is the Named Entity tag (PERSON, ORGANIZATION, LOCATION, UNKNOWN) for each proper noun in the syntactic positions above? What are the possible WordNet synsets and hypernyms for each noun in the syntactic positions above? (Nouns are not explicitly disambiguated; all possible synsets and hypernyms for the noun are included.) This set of local features relies on access to syntactic structure as well as semantic class information, and attempts to model richer linguistic information about predicate arguments. However, the heuristics for extracting the syntactic features are able to identify subjects and objects of only simple clauses. The heuristics also do not differentiate between arguments and adjuncts; for example, the feature sent-comp is intended to identify clausal complements such as in (S (NP Mary) (VP (VB called) (S him a bastard))), but Figure 1 shows how a purpose clause can be mistakenly labeled as a clausal complement. 2.3 Evaluation We tested the system on the 1806 test instances of the 29 verbs from the English lexical sample task for Senseval-2 (Palmer et al., 2001). Accuracy was defined to be the fraction of the instances for which the system got the correct sense. All significance testing between different accuracies was done using a onetailed z-test, assuming a binomial distribution of the successes; differences in accuracy were considered to be significant if . In Senseval-2, senses involving multi-word constructions could be identified directly from the sense tags themselves, and the head word and satellites of multi-word constructions were explicitly marked in the training and test data. We trained one model for each of the verbs and used a filter to consider only phrasal senses whenever there were satellites of multi-word constructions marked in the test data. Feature Accuracy co 0.571 co+syn 0.598 co+syn+sem 0.625 Table 1: Accuracy of system on Senseval-2 verbs using topical features and different subsets of local features. Table 1 shows the accuracy of the system using topical features and different subsets of local fea44 tures. Adding features from richer linguistic sources always improves accuracy. 
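The semantic class ("sem") features in Table 1 are produced without disambiguating the noun arguments: every synset of the noun, plus every hypernym of every synset, is emitted as a feature. A sketch using NLTK's WordNet interface is given below; the published system identifies synsets numerically, whereas readable synset names are used here, and the function name is ours.

```python
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def wordnet_class_features(noun, role):
    """All synsets and hypernyms of `noun`, emitted as features for the given
    syntactic position (e.g. 'subj', 'dobj'), without disambiguating the noun."""
    feats = set()
    for syn in wn.synsets(noun, pos=wn.NOUN):
        feats.add(f"{role}syn={syn.name()}")
        for hyper in syn.closure(lambda s: s.hypernyms()):
            feats.add(f"{role}syn={hyper.name()}")
    return sorted(feats)

# e.g. the direct object "blanket" from the Figure 1 sentence
print(wordnet_class_features("blanket", "dobj")[:5])
```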
Adding lexical syntactic (“syn”) features improves accuracy significantly over using just collocational (“co”) features ( ). When semantic class (“sem”) features are added, the improvement is also significant. Adding topical information to all the local features improves accuracy, but not significantly; when the topical features are removed the accuracy of our system falls only slightly, to 62.0%. Senses based on domain or topic occur rarely in the Senseval-2 corpus. Most of the information provided by topical features already seem to be captured by the local features for the frequent senses. Features Accuracy co+syn 0.598 co+syn+ne 0.597 co+syn+wn 0.623 co+syn+ne+wn 0.625 Table 2: Accuracy of system on Senseval-2 verbs, using topical features and different subsets of semantic class features. Semantic class information plays a significant role in sense distinctions. Table 2 shows the relative contribution of adding only named entity tags to the collocational and syntactic features (“co+syn+ne”), versus adding only the WordNet classes (“co+syn+wn”), versus adding both named entity and WordNet classes (“co+syn+ne+wn”). Adding all possible WordNet noun class features for arguments contributes a large number of parameters to the model, but this use of WordNet with no separate disambiguation of noun arguments proves to be very useful. In fact, the use of WordNet for common nouns proves to be even more beneficial than the use of a named entity tagger for proper nouns. Given enough data, the maximum entropy model is able to assign high weights to the correct hypernyms of the correct noun sense if they represent defining selectional restrictions. Incorporating topical keywords as well as collocational, syntactic, and semantic local features, our system achieves 62.5% accuracy. This is in comparison to the 61.1% accuracy achieved by (Lee and Ng, 2002), which has been the best published result on this corpus. 3 PropBank semantic annotations Our WSD system uses heuristics to attempt to detect predicate arguments from parsed sentences. However, recognition of predicate argument structures is not straightforward, because a natural language will have several different syntactic realizations of the same predicate argument relations. PropBank is a corpus in which verbs are annotated with semantic tags, including coarse-grained sense distinctions and predicate-argument structures. PropBank adds a layer of semantic annotation to the Penn Wall Street Journal Treebank II. An important goal is to provide consistent predicateargument structures across different syntactic realizations of the same verb. Polysemous verbs are also annotated with different framesets. Frameset tags are based on differences in subcategorization frames and correspond to a coarse notion of word senses. A verb’s semantic arguments in PropBank are numbered beginning with 0. Arg0 is roughly equivalent to the thematic role of Agent, and Arg1 usually corresponds to Theme or Patient; however, argument labels are not necessarily consistent across different senses of the same verb, or across different verbs, as thematic roles are usually taken to be. In addition to the core, numbered arguments, verbs can take any of a set of general, adjunct-like arguments (ARGM), whose labels are derived from the Treebank functional tags (DIRection, LOCation, etc.). PropBank provides manual annotation of predicate-argument information for a large number of verb instances in the Senseval-2 data set. 
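Section 3.1 below derives four feature types from each gold-standard PropBank annotation: the role labels themselves, the syntactic label of each filler, its head word, and the head word's semantic classes. The sketch below approximates that derivation over a simplified representation of one annotated instance; the input format and feature spellings are our own reconstruction of the published example, not the authors' code.

```python
def propbank_features(instance):
    """instance maps a PropBank role label to (syntactic label, head word, semantic
    classes of the head).  For a numbered role realized as a PP (e.g. ARG1-for),
    the head word is taken to be the object of the preposition."""
    feats = []
    for role, (syn_label, head, sem_classes) in instance.items():
        r = role.lower()
        feats.append(r)                                    # 1. role label is present
        feats.append(f"{r}={syn_label}")                   # 2. syntactic label of filler
        feats.append(f"{r}={head.lower()}")                # 3. head word of filler
        feats.extend(f"{r}syn={c}" for c in sem_classes)   # 4. semantic classes of head
    return feats

# "Mr. Bush has called for an agreement by next September at the latest."
instance = {
    "ARG0":     ("NP",  "Bush",      ["PERSON", "1740"]),
    "rel":      ("VBN", "called",    []),
    "ARG1-for": ("PP",  "agreement", ["12865"]),
}
print(propbank_features(instance))
```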
The intersection of PropBank and Senseval-2 forms a corpus containing gold-standard annotations of fine-grained WordNet senses, coarse-grained PropBank framesets, and PropBank role labels. The combination of such gold-standard semantic annotations provides a unique opportunity to investigate the role of predicate-argument features in word sense disambiguation, for both coarse-grained framesets and fine-grained WordNet senses. 3.1 PropBank features We conducted experiments on the effect of using features from PropBank for sense-tagging verbs. Both PropBank role labels and PropBank framesets were used. In the case of role labels, only the 45 gold-standard labels found in PropBank were used, because the best automatic semantic role labelers only perform at about 84% precision and 75% recall (Pradhan et al., 2004). From the PropBank annotation for each sentence, we extracted the following features: 1. Labels of the semantic roles: rel, ARG0, ARG1, ARG2-WITH, ARG2, ..., ARGMLOC, ARGM-TMP, ARGM-NEG, ... 2. Syntactic labels of the constituent instantiating each semantic role: ARG0=NP, ARGMTMP=PP, ARG2-WITH=PP, ... 3. Head word of each constituent in (2): rel=called, sats=up, ARG0=company, ARGMTMP=day, ... 4. Semantic classes (named entity tag, WordNet hypernyms) of the nouns in (3): ARGOsyn=ORGANIZATION, ARGOsyn=16185, ARGM-TMPsyn=13018, ... When a numbered role appears in a prepositional phrase (e.g., ARG2-WITH), we take the “head word” to be the object of the preposition. If a constituent instantiating some semantic role is a trace, we take the head of its referent instead. [ ! #" Mr. Bush] has [ $&%(' called] [ ! #" )*,+ $ for an agreement by next September at the latest] . For example, the PropBank features that we extract for the sentence above are: arg0 arg0=bush arg0syn=person arg0syn=1740 ... rel rel=called arg1-for arg1 arg1=agreement arg1syn=12865 ... 3.2 Role labels for frameset tagging We collected all instances of the Senseval-2 verbs from the PropBank corpus. Only 20 of these verbs had more than one frameset in the PropBank corpus, resulting in 4887 instances of polysemous verbs. The instances for each word were partitioned randomly into 10 equal parts, and the system was tested on each part after being trained on the remaining nine. For these 20 verbs with more than one PropBank frameset tag, choosing the most frequent frameset gives a baseline accuracy of 76.0%. The sentences were automatically pos-tagged with the Ratnaparki tagger and parsed with the Collins parser. We extracted local contextual features as for WordNet sense-tagging and used the local features to train our WSD system on the coarsegrained sense-tagging task of automatically assigning PropBank frameset tags. We tested the effect of using only collocational features (“co”) for frameset tagging, as well as using only PropBank role features (“pb”) or only our original syntactic/semantic features (“synsem”) for this task, and found that the combination of collocational features with PropBank features worked best. The system has the worst performance on the word strike, which has a high number of framesets and a low number of training instances. Table 3 shows the performance of the system on different subsets of local features. 
Feature Accuracy baseline 0.760 co 0.853 synsem 0.859 co+synsem 0.883 pb 0.901 co+pb 0.908 co+synsem+pb 0.907 Table 3: Accuracy of system on frameset-tagging task for verbs with more than one frameset, using different types of local features (no topical features); all features except pb were extracted from automatically pos-tagged and parsed sentences. We obtained an overall accuracy of 88.3% using our original local contextual features. However, the system’s performance improved significantly when we used only PropBank role features, achieving an accuracy of 90.1%. Furthermore, adding collocational features and heuristically extracted syntactic/semantic features to the PropBank features do not provide additional information and affects the accuracy of frameset-tagging only negligibly. It is not surprising that for the coarse-grained sense-tagging task of assigning the correct PropBank frameset tag to a verb, using the PropBank role labels is better than syntactic/semantic features heuristically extracted from parses because these heuristics are meant to capture the predicate-argument informa46 tion that is encoded more directly in the PropBank role labels. Even when the original local features were extracted from the gold-standard pos-tagged and parsed sentences of the Penn Treebank, the system performed significantly worse than when PropBank role features were used. This suggests that more effort should be applied to improving the heuristics for extracting syntactic features. We also experimented with adding topical features and ARGM features from PropBank. In all cases, these additional features reduced overall accuracy, but the difference was never significant ( .-/0 ). Topical features do not help because frameset tags are based on differences in subcategorization frames and not on the domain or topic. ARGM features do not help because they are supposedly used uniformly across verbs and framesets. 3.3 Role labels for WordNet sense-tagging We experimented with using PropBank role labels for fine-grained WordNet sense-tagging. While ARGM features are not useful for coarse-grained frameset-tagging, some sense distinctions in WordNet are based on adverbial modifiers, such as “live well” or “serves someone well.” Therefore, we included PropBank ARGM features in our models for WordNet sense-tagging to capture a wider range of linguistic behavior. We looked at the 2571 instances of 29 Senseval-2 verbs that were in both Senseval-2 and the PropBank corpus. Features Accuracy co 0.628 synsem 0.638 co+synsem 0.666 pb 0.656 co+pb 0.681 co+synsem+pb 0.694 Table 4: Accuracy of system on WordNet sensetagging for instances in both Senseval-2 and PropBank, using different types of local features (no topical features). Table 4 shows the accuracy of the system on WordNet sense-tagging using different subsets of features; all features except pb were extracted from automatically pos-tagged and parsed sentences. By adding PropBank role features to our original local feature set, accuracy rose from 0.666 to to 0.694 on this subset of the Senseval-2 verbs ( 123 ); the extraction of syntactic features from the parsed sentences is again not successfully capturing all the predicate-argument information that is explicit in PropBank. The verb “match” illustrates why accuracy improves using additional PropBank features. 
As shown in Figure 2, the matched objects may occur in different grammatical relations with respect to the verb (subject, direct object, object of a preposition), but they each have an ARG1 semantic role label in PropBank.3 Furthermore, only one of the matched objects needs to be specified, as in Example 3 where the second matched object (presumably the company’s prices) is unstated. Our heuristics do not handle these alternations, and cannot detect that the syntactic subject in Example 1 has a different semantic role than the subject of Example 3. Roleset match.01 “match”: Arg0: person performing match Arg1: matching objects Ex1: [ 4!576 the wallpaper] [ 8:9<; matched] [ 475!6 the paint] Ex2: [ 475!6 The architect] [ 8:9<; matched] [ 4!576 the paint] [ 4 8<= )?>A@CBED with the wallpaper] Ex3: [ 475!6 The company] [ 8:9<; matched] [ 4!576 Kodak’s higher prices] Figure 2: PropBank roleset for “match” Our basic WSD system (using local features extracted from automatic parses) confused WordNet Sense 1 with Sense 4: 1. match, fit, correspond, check, jibe, gibe, tally, agree – (be compatible, similar or consistent; coincide in their characteristics; “The two stories don’t agree in many details”; “The handwriting checks with the signature on the check”; “The suspect’s fingerprints don’t match those on the gun”) 4. equal, touch, rival, match – (be equal to in 3PropBank annotation for “match” allows multiple ARG1 labels, one for each of the matching objects. Other verbs that have more than a single ARG1 in PropBank include: “attach, bolt, coincide, connect, differ, fit, link, lock, pin, tack, tie.” 47 quality or ability; “Nothing can rival cotton for durability”; “Your performance doesn’t even touch that of your colleagues”; “Her persistence and ambition only matches that of her parents”) The senses are differentiated in that the matching objects (ARG1) in Sense 4 have some quantifiable characteristic that can be measured on some scale, whereas those in Sense 1 are more general. Goldstandard PropBank annotation of ARG1 allows the system to generalize over the semantic classes of the arguments and distinguish these two senses more accurately. 3.4 Frameset tags for WordNet sense-tagging PropBank frameset tags (either gold-standard or automatically tagged) were incorporated as features in our WSD system to see if knowing the coarsegrained sense tags would be useful in assigning finegrained WordNet sense tags. A frameset tag for the instance was appended to each feature; this effectively partitions the feature set according to the coarse-grained sense provided by the frameset. To automatically tag an instance of a verb with its frameset, the set of all instances of the verb in PropBank was partitioned into 10 subsets, and an instance in one subset was tagged by training a maximum entropy model on the instances in the other nine subsets. Various local features were considered, and the same feature types were used to train the frameset tagger and the WordNet sense tagger that used the automatically-assigned frameset. For the 20 Senseval-2 verbs that had more than one frameset in PropBank, we extracted all instances that were in both Senseval-2 and PropBank, yielding 1468 instances. We examined the effect of incorporating the gold-standard PropBank frameset tags into our maximum entropy models for these 20 verbs by partitioning the instances according to their frameset tag. Table 5 shows a breakdown of the accuracy by feature type. 
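The partitioning itself is a one-line transformation: the frameset tag is appended to every feature string, so the maximum entropy model learns a separate weight for each (feature, frameset) combination. A sketch follows; the combined-feature spelling and the frameset identifier are our own.

```python
def partition_by_frameset(features, frameset):
    """Append the (gold or automatically predicted) frameset tag to every feature,
    so the fine-grained sense model is effectively trained per coarse frameset."""
    return [f"{feat}|fset={frameset}" for feat in features]

feats = ["rel=called", "arg0=bush", "arg0syn=person"]
print(partition_by_frameset(feats, "call.01"))
# ['rel=called|fset=call.01', 'arg0=bush|fset=call.01', 'arg0syn=person|fset=call.01']
```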
Adding the gold-standard frameset tag (“*fset”) to our original local features (“orig”) did not increase the accuracy significantly. However, the increase in accuracy (from 59.7% to 62.8%) was significant when these frameset tags were incorporated into the model that used both our original features and all the PropBank features. Feature Accuracy orig 0.564 orig*fset 0.587 orig+pb 0.597 (orig+pb)*fset 0.628 Table 5: Accuracy of system on WordNet sensetagging of 20 Senseval-2 verbs with more than one frameset, with and without gold-standard frameset tag. However, partitioning the instances using the automatically generated frameset tags has no significant effect on the system’s performance; the information provided by the automatically assigned coarse-grained sense tag is already encoded in the features used for fine-grained sense-tagging. 4 Related Work Our approach of using rich linguistic features combined in a single maximum entropy framework contrasts with that of (Florian et al., 2002). Their feature space was much like ours, but did not include semantic class features for noun complements. With this more impoverished feature set, they experimented with combining diverse classifiers to achieve an improvement of 2.1% over all parts of speech (noun, verb, adjective) in the Senseval-2 lexical sample task; however, this improvement was over an initial accuracy of 56.6% on verbs, indicating that their performance is still below ours for verbs. (Lee and Ng, 2002) explored the relative contribution of different knowledge sources and learning algorithms to WSD; they used Support Vector Machines (SVM) and included local collocations and syntactic relations, and also found that adding syntactic features improved accuracy. Our features are similar to theirs, but we added semantic class features for the verb arguments. We found that the difference in machine learning algorithms did not play a large role in performance; when we used our features in SVM we obtained almost no difference in performance over using maximum entropy models with Gaussian priors. (Gomez, 2001) described an algorithm using WordNet to simultaneously determine verb senses and attachments of prepositional phrases, and iden48 tify thematic roles and adjuncts; our work is different in that it is trained on manually annotated corpora to show the relevance of semantic roles for verb sense disambiguation. 5 Conclusion We have shown that disambiguation of verb senses can be improved by leveraging information about predicate arguments and their semantic classes. Our system performs at the best published accuracy on the English verbs of Senseval-2 even though our heuristics for extracting syntactic features fail to identify all and only the arguments of a verb. We show that associating WordNet semantic classes with nouns is beneficial even without explicit disambiguation of the noun senses because, given enough data, maximum entropy models are able to assign high weights to the correct hypernyms of the correct noun sense if they represent defining selectional restrictions. Knowledge of gold-standard predicate-argument information from PropBank improves WSD on both coarse-grained senses (PropBank framesets) and fine-grained WordNet senses. Furthermore, partitioning instances according to their gold-standard frameset tags, which are based on differences in subcategorization frames, also improves the system’s accuracy on fine-grained WordNet sense-tagging. 
Our experiments suggest that sense disambiguation for verbs can be improved through more accurate extraction of features representing information such as that contained in the framesets and predicate argument structures annotated in PropBank. 6 Acknowledgments The authors would like to thank the anonymous reviewers for their valuable comments. This paper describes research that was conducted while the first author was at the University of Pennsylvania. References Daniel M. Bikel, Scott Miller, Richard Schwartz, and Ralph Weischedel. 1997. Nymble: A highperformance learning name-finder. In Proceedings of the Fifth Conference on Applied Natural Language Processing, Washington, DC. Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, Madrid, Spain, July. Radu Florian, Silviu Cucerzan, Charles Schafer, and David Yarowsky. 2002. Combining classifiers for word sense disambiguation. Natural Language Engineering, 8(4):327–341. Fernando Gomez. 2001. An algorithm for aspects of semantic interpretation using an enhanced wordnet. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics. Jane Grimshaw. 1990. Argument Structure. MIT Press, Cambridge, MA. Paul Kingsbury and Martha Palmer. 2002. From Treebank to PropBank. In Proceedings of Third International Conference on Language Resources and Evaluation, Las Palmas, Canary Islands, Spain, May. Yoong Keok Lee and Hwee Tou Ng. 2002. An empirical evaluation of knowledge sources and learning algorithms for word sense disambiguation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Philadelphia, PA. Beth Levin. 1993. English Verb Classes and Alternations: A Preliminary Investigation. The University of Chicago Press. Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. http://mallet.cs.umass.edu. Martha Palmer, Christiane Fellbaum, Scott Cotton, Lauren Delfs, and Hoa Trang Dang. 2001. English tasks: All-words and verb lexical sample. In Proceedings of SENSEVAL-2: Second International Workshop on Evaluating Word Sense Disambiguation Systems, Toulouse, France, July. Sameer Pradhan, Wayne Ward, Kadri Hacioglu, James H. Martin, and Daniel Jurafsky. 2004. Shallow semantic parsing using support vector machines. In Proceedings of the Human Language Technology Conference and Meeting of the North American Chapter of the Association for Computational Linguistics, May. Adwait Ratnaparkhi. 1998. Maximum Entropy Models for Natural Language Ambiguity Resolution. Ph.D. thesis, University of Pennsylvania. D. Yarowsky. 1995. Three Machine Learning Algorithms for Lexical Ambiguity Resolution. Ph.D. thesis, University of Pennsylvania Department of Computer and Information Sciences. 49 | 2005 | 6 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 483–490, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Multi-Field Information Extraction and Cross-Document Fusion Gideon S. Mann and David Yarowsky Department of Computer Science The Johns Hopkins University Baltimore, MD 21218 USA {gsm,yarowsky}@cs.jhu.edu Abstract In this paper, we examine the task of extracting a set of biographic facts about target individuals from a collection of Web pages. We automatically annotate training text with positive and negative examples of fact extractions and train Rote, Na¨ıve Bayes, and Conditional Random Field extraction models for fact extraction from individual Web pages. We then propose and evaluate methods for fusing the extracted information across documents to return a consensus answer. A novel cross-field bootstrapping method leverages data interdependencies to yield improved performance. 1 Introduction Much recent statistical information extraction research has applied graphical models to extract information from one particular document after training on a large corpus of annotated data (Leek, 1997; Freitag and McCallum, 1999).1 Such systems are widely applicable, yet there remain many information extraction tasks that are not readily amenable to these methods. Annotated data required for training statistical extraction systems is sometimes unavailable, while there are examples of the desired information. Further, the goal may be to find a few interrelated pieces of information that are stated multiple times in a set of documents. Here, we investigate one task that meets the above criteria. Given the name of a celebrity such as 1Alternatively, Riloff (1996) trains on in-domain and out-of-domain texts and then has a human filtering step. Huffman (1995) proposes a method to train a different type of extraction system by example. “Frank Zappa”, our goal is to extract a set of biographic facts (e.g., birthdate, birth place and occupation) about that person from documents on the Web. First, we describe a general method of automatic annotation for training from positive and negative examples and use the method to train Rote, Na¨ıve Bayes, and Conditional Random Field models (Section 2). We then examine how multiple extractions can be combined to form one consensus answer (Section 3). We compare fusion methods and show that frequency voting outperforms the single highest confidence answer by an average of 11% across the various extractors. Increasing the number of retrieved documents boosts the overall system accuracy as additional documents which mention the individual in question lead to higher recall. This improved recall more than compensates for a loss in per-extraction precision from these additional documents. Next, we present a method for cross-field bootstrapping (Section 4) which improves per-field accuracy by 7%. We demonstrate that a small training set with only the most relevant documents can be as effective as a larger training set with additional, less relevant documents (Section 5). 2 Training by Automatic Annotation Typically, statistical extraction systems (such as HMMs and CRFs) are trained using hand-annotated data. Annotating the necessary data by hand is timeconsuming and brittle, since it may require largescale re-annotation when the annotation scheme changes. For the special case of Rote extractors, a more attractive alternative has been proposed by Brin (1998), Agichtein and Gravano (2000), and Ravichandran and Hovy (2002). 
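In this style of training by example, a seed database of known facts is used to mark up automatically retrieved text: mentions of the known value become positive examples, and other strings of the right type become negative ones. The toy sketch below illustrates that markup step; the function and label names are ours, and the regular expression stands in for the relation's set of admissible targets.

```python
import re

def annotate_hook_corpus(sentences, hook, target, target_regex):
    """Mark up sentences from the hook's retrieved documents: occurrences of the
    known target q are positive examples, while other strings matching the
    relation's target set are spurious (negative) examples."""
    annotated = []
    for sent in sentences:
        if hook not in sent:
            continue
        spans = []
        for m in re.finditer(target_regex, sent):
            label = "TARGET" if m.group(0) == target else "SPURIOUS"
            spans.append((m.start(), m.end(), label))
        annotated.append((sent, spans))
    return annotated

# Toy example: birth year of a training-set celebrity, with \d{4} as the target set.
sents = ["Frank Zappa was born in 1940 and died in 1993.",
         "Zappa formed the Mothers of Invention."]
print(annotate_hook_corpus(sents, "Frank Zappa", "1940", r"\d{4}"))
```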
483 Essentially, for any text snippet of the form A1pA2qA3, these systems estimate the probability that a relationship r(p, q) holds between entities p and q, given the interstitial context, as2 P(r(p, q) | pA2q) = P(r(p, q) | pA2q) = P x,y∈T c(xA2y) P x c(xA2) That is, the probability of a relationship r(p, q) is the number of times that pattern xA2y predicts any relationship r(x, y) in the training set T. c(.) is the count. We will refer to x as the hook3 and y as the target. In this paper, the hook is always an individual. Training a Rote extractor is straightforward given a set T of example relationships r(x, y). For each hook, download a separate set of relevant documents (a hook corpus, Dx) from the Web.4 Then for any particular pattern A2 and an element x, count how often the pattern xA2 predicts y and how often it retrieves a spurious ¯y.5 This annotation method extends to training other statistical models with positive examples, for example a Na¨ıve Bayes (NB) unigram model. In this model, instead of looking for an exact A2 pattern as above, each individual word in the pattern A2 is used to predict the presence of a relationship. P(r(p, q) | pA2q) ∝P(pA2q | r(p, q))P(r(p, q)) =P(A2 | r(p, q)) = Y a∈A2 P(a | r(p, q)) We perform add-lambda smoothing for out-ofvocabulary words and thus assign a positive probability to any sequence. As before, a set of relevant 2The above Rote models also condition on the preceding and trailing words, for simplicity we only model interstitial words A2. 3Following (Ravichandran and Hovy, 2002). 4In the following experiments we assume that there is one main object of interest p, for whom we want to find certain pieces of information r(p, q), where r denotes the type of relationship (e.g., birthday) and q is a value (e.g., May 20th). We require one hook corpus for each hook, not a separate one for each relationship. 5Having a functional constraint ∀¯q ̸= q, ¯r(p, ¯q) makes this estimate much more reliable, but it is possible to use this method of estimation even when this constraint does not hold. documents is downloaded for each particular hook. Then every hook and target is annotated. From that markup, we can pick out the interstitial A2 patterns and calculate the necessary probabilities. Since the NB model assigns a positive probability to every sequence, we need to pick out likely targets from those proposed by the NB extractor. We construct a background model which is a basic unigram language model, P(A2) = Q a∈A2 P(a). We then pick targets chosen by the confidence estimate CNB(q) = logP(A2 | r(p, q)) P(A2) However, this confidence estimate does not workwell in our dataset. We propose to use negative examples to estimate P(A2 | ¯r(p, q))6 as well as P(A2 | r(p, q)). For each relationship, we define the target set Er to be all potential targets and model it using regular expressions.7 In training, for each relationship r(p, q), we markup the hook p, the target q, and all spurious targets (¯q ∈{Er −q}) which provide negative examples. Targets can then be chosen with the following confidence estimate CNB+E(q) = logP(A2 | r(p, q)) P(A2 | ¯r(p, q)) We call this NB+E in the following experiments. 
The above process describes a general method for automatically annotating a corpus with positive and negative examples, and this corpus can be used to train statistical models that rely on annotated data. [8] In this paper, we test automatic annotation using Conditional Random Fields (CRFs) (Lafferty et al., 2001), which have achieved high performance for information extraction. CRFs are undirected graphical models that estimate the conditional probability of a state sequence given an output sequence:

P(s \mid o) = \frac{1}{Z} \exp \left( \sum_{t=1}^{T} \sum_{k} \lambda_k f_k(s_{t-1}, s_t, o, t) \right)

[8] This corpus markup gives automatic annotation that yields noisier training data than manual annotation would.

Figure 1: CRF state-transition graphs for extracting a relationship r(p, q) from a sentence p A2 q. Left: CRF extraction with a background model (B). Right: CRF+E, as before but with spurious target prediction (p A2 q̄).

We use the Mallet system (McCallum, 2002) for training and evaluation of the CRFs. In order to examine the improvement from using negative examples, we train CRFs with two topologies (Figure 1). The first, CRF, models the target relationship and background sequences and is trained on a corpus where targets (positive examples) are annotated. The second, CRF+E, models the target relationship, spurious targets and background sequences, and it is trained on a corpus where targets (positive examples) as well as spurious targets (negative examples) are annotated.

Experimental Results. To test the performance of the different extractors, we collected a set of 152 semi-structured mini-biographies from an online site (www.infoplease.com) and used simple rules to extract a biographic fact database of birth day and month (henceforth birthday), birth year, occupation, birth place, and year of death (when applicable). An example of the data can be found in Table 1.

                 Aaron Neville     Frank Zappa
Birthday         January 24        December 21
Birth year       1941              1940
Occupation       Singer            Musician
Birthplace       New Orleans       Baltimore, Maryland
Year of Death                      1993

Table 1: Two of 152 entries in the Biographic Database. Each entry contains incomplete information about various celebrities. Here, Aaron Neville's birth state is missing, and Frank Zappa could be equally well described as a guitarist or rock-star.

In our system, we normalized birthdays and performed capitalization normalization for the remaining fields. We did no further normalization, such as normalizing state names to their two-letter acronyms (e.g., California → CA). Fifteen names were set aside as training data, and the rest were used for testing. For each name, 150 documents were downloaded from Google to serve as the hook corpus for either training or testing (name polyreference, along with ranking errors, results in the retrieval of undesired documents). In training, we automatically annotated documents using people in the training set as hooks, and in testing, tried to get targets that exactly matched what was present in the database. This is a very strict method of evaluation for three reasons. First, since the facts were automatically collected, they contain
errors and thus the system is tested against wrong answers.10 Second, the extractors might have retrieved information that was simply not present in the database but nevertheless correct (e.g., someone’s occupation might be listed as writer and the retrieved occupation might be novelist). Third, since the retrieved targets were not normalized, there system may have retrieved targets that were correct but were not recognized (e.g., the database birthplace is New York, and the system retrieves NY). In testing, we rejected candidate targets that were not present in our target set models Er. In some cases, this resulted in the system being unable to find the correct target for a particular relationship, since it was not in the target set. Before fusion (Section 3), we gathered all the facts extracted by the system and graded them in isolation. We present the per-extraction precision Pre-Fusion Precision = # Correct Extracted Targets # Total Extracted Targets We also present the pseudo-recall, which is the average number of times per person a correct target was extracted. It is difficult to calculate true recall without manual annotation of the entire corpus, since it cannot be known for certain how many times the document set contains the desired information.11 Pre-Fusion Pseudo-Recall = # Correct Extracted Targets #People The precision of each of the various extraction methods is listed in Table 2. The data show that on average the Rote method has the best precision, 10These deficiencies in testing also have implications for training, since the models will be trained on annotated data that has errors. The phenomenon of missing and inaccurate data was most prevalent for occupation and birthplace relationships, though it was observed for other relationships as well. 11It is insufficient to count all text matches as instances that the system should extract. To obtain the true recall, it is necessary to decide whether each sentence contains the desired relationship, even in cases where the information is not what the biographies have listed. 485 Birthday Birth year Occupation Birthplace Year of Death Avg. Rote .789 .355 .305 .510 .527 .497 NB+E .423 .361 .255 .217 .088 .269 CRF .509 .342 .219 .139 .267 .295 CRF+E .680 .654 .246 .357 .314 .450 Table 2: Pre-Fusion Precision of extracted facts for various extraction systems, trained on 15 people each with 150 documents, and tested on 137 people each with 150 documents. Birthday Birth year Occupation Birthplace Year of Death Avg. Rote 4.8 1.9 1.5 1.0 0.1 1.9 NB+E 9.6 11.5 20.3 11.3 0.7 10.9 CRF 3.0 16.3 31.1 10.7 3.2 12.9 CRF+E 6.8 9.9 3.2 3.6 1.4 5.0 Table 3: Pre-Fusion Pseudo-Recall of extract facts with the identical training/testing set-up as above. while the NB+E extractor has the worst. Training the CRF with negative examples (CRF+E) gave better precision in extracted information then training it without negative examples. Table 3 lists the pseudo-recall or average number of correctly extracted targets per person. The results illustrate that the Rote has the worst pseudo-recall, and the plain CRF, trained without negative examples, has the best pseudo-recall. To test how the extraction precision changes as more documents are retrieved from the ranked results from Google, we created retrieval sets of 1, 5, 15, 30, 75, and 150 documents per person and repeated the above experiments with the CRF+E extractor. 
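Since the two pre-fusion measures just defined are simple counts over the pooled per-document extractions, they can be written in a few lines. The Python sketch below assumes each extraction is a (person, field, value) triple and that the gold database stores one normalized value per (person, field); this data layout is an assumption for illustration, not the authors' code.

from collections import defaultdict

def pre_fusion_metrics(extractions, gold):
    """extractions: iterable of (person, field, value) triples pooled over all documents.
    gold: dict mapping (person, field) -> normalized gold value.
    Returns {field: (precision, pseudo_recall)}."""
    correct, total = defaultdict(int), defaultdict(int)
    people = {person for (person, _field) in gold}
    for person, field, value in extractions:
        total[field] += 1
        if gold.get((person, field)) == value:
            correct[field] += 1
    return {field: (correct[field] / total[field],      # pre-fusion precision
                    correct[field] / len(people))       # pre-fusion pseudo-recall
            for field in total}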
The data in Figure 2 suggest that there is a gradual drop in extraction precision throughout the corpus, which may be caused by the fact that documents further down the retrieved list are less relevant, and therefore less likely to contain the relevant biographic data. Pre−Fusion Precision # Retrieved Documents per Person 80 160 140 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 1 60 40 20 120 0 100 Birthday Birthplace Birthyear Occupation Deathyear Figure 2: As more documents are retrieved per person, prefusion precision drops. However, even though the extractor’s precision drops, the data in Figure 3 indicate that there continue to be instances of the relevant biographic data. # Retrieved Documents Per Person Pre−Fusion Pseudo−Recall 1 2 3 4 5 6 7 8 9 10 0 0 20 40 60 80 100 120 140 160 Birthyear Birthday Birthplace Occupation Deathyear Figure 3: Pre-fusion pseudo-recall increases as more documents are added. 3 Cross-Document Information Fusion The per-extraction performance was presented in Section 2, but the final task is to find the single correct target for each person.12 In this section, we examine two basic methodologies for combining candidate targets. Masterson and Kushmerick (2003) propose Best which gives each candidate a score equal to its highest confidence extraction: Best(x) = argmax x C(x).13 We further consider Voting, which counts the number of times each candidate x was extracted: Vote(x) = |C(x) > 0|. Each of these methods ranks the candidate targets by score and chooses the top-ranked one. The experimental setup used in the fusion experiments was the same as before: training on 15 people, and testing on 137 people. However, the postfusion evaluation differs from the pre-fusion evaluation. After fusion, the system returns one consensus target for each person and thus the evaluation is on the accuracy of those targets. That is, missing tar12This is a simplifying assumption, since there are many cases where there might exist multiple possible values, e.g., a person may be both a writer and a musician. 13C(x) is either the confidence estimate (NB+E) or the probability score (Rote,CRF,CRF+E). 486 Best Vote Rote .364 .450 NB+E .385 .588 CRF .513 .624 CRF+E .650 .678 Table 4: Average Accuracy of the Highest Confidence (Best) and Most Frequent (Vote) across five extraction fields. gets are graded as wrong.14 Post-Fusion Accuracy = # People with Correct Target # People Additionally, since the targets are ranked, we also calculated the mean reciprocal rank (MRR).15 The data in Table 4 show the average system performance with the different fusion methods. Frequency voting gave anywhere from a 2% to a 20% improvement over picking the highest confidence candidate. CRF+E (the CRF trained with negative examples) was the highest performing system overall. Birth Day Fusion Accuracy Fusion MRR Rote Vote .854 .877 NB+E Vote .854 .889 CRF Vote .650 .703 CRF+E Vote .883 .911 Birth year Rote Vote .387 .497 NB+E Vote .778 .838 CRF Vote .796 .860 CRF+E Vote .869 .876 Occupation Rote Vote .299 .405 NB+E Vote .642 .751 CRF Vote .606 .740 CRF+E Vote .423 .553 Birthplace Rote Vote .321 .338 NB+E Vote .474 .586 CRF Vote .321 .476 CRF+E Vote .467 .560 Year of Death Rote Vote .389 .389 NB+E Vote .194 .383 CRF .750 .840 CRF+E Vote .750 .827 Table 5: Voting for information fusion, evaluated per person. CRF+E has best average performance (67.8%). 
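The two fusion rules and the post-fusion measures reduce to a small amount of bookkeeping. The Python sketch below assumes each candidate for a given person and field arrives as a (value, confidence) pair, where the confidence plays the role of C(x) above, and that ties are broken arbitrarily; these representational details are assumptions, not the authors' implementation.

from collections import Counter

def fuse_best(candidates):
    """Best: keep the value with the single highest-confidence extraction.
    candidates: list of (value, confidence) pairs for one person and one field."""
    return max(candidates, key=lambda vc: vc[1])[0] if candidates else None

def fuse_vote(candidates):
    """Vote: rank values by how many times they were extracted; element [0] is the consensus answer."""
    return [value for value, _count in Counter(v for v, _c in candidates).most_common()]

def post_fusion_scores(ranked_answers, gold):
    """Accuracy of the top-ranked answer and mean reciprocal rank over all people;
    a person with no extracted answer simply counts against accuracy."""
    correct, reciprocal = 0, 0.0
    for person, truth in gold.items():
        ranking = ranked_answers.get(person, [])
        if ranking and ranking[0] == truth:
            correct += 1
        if truth in ranking:
            reciprocal += 1.0 / (ranking.index(truth) + 1)
    return correct / len(gold), reciprocal / len(gold)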
Table 5 shows the results of using each of these extractors to extract correct relationships from the top 150 ranked documents downloaded from the 14For year of death, we only graded cases where the person had died. 15The reciprocal rank = 1 / the rank of the correct target. Web. CRF+E was a top performer in 3/5 of the cases. In the other 2 cases, the NB+E was the most successful, perhaps because NB+E’s increased recall was more useful than CRF+E’s improved precision. Retrieval Set Size and Performance As with pre-fusion, we performed a set of experiments with different retrieval set sizes and used the CRF+E extraction system trained on 150 documents per person. The data in Figure 4 show that performance improves as the retrieval set size increases. Most of the gains come in the first 30 documents, where average performance increased from 14% (1 document) to 63% (30 documents). Increasing the retrieval set size to 150 documents per person yielded an additional 5% absolute improvement. Post−Fusion Accuracy # Retrieved Documents Per Person 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0 20 40 60 80 100 120 140 160 Occupation Birthyear Birthday Deathyear Birthplace Figure 4: Fusion accuracy increases with more documents per person Post-fusion errors come from two major sources. The first source is the misranking of correct relationships. The second is the case where relevant information is not retrieved at all, which we measure as Post-Fusion Missing = # Missing Targets # People The data in Figure 5 suggest that the decrease in missing targets is a significant contributing factor to the improvement in performance with increased document size. Missing targets were a major problem for Birthplace, constituting more than half the errors (32% at 150 documents). 4 Cross-Field Bootstrapping Sections 2 and 3 presented methods for training separate extractors for particular relationships and for doing fusion across multiple documents. In this section, we leverage data interdependencies to improve performance. The method we propose is to bootstrap across fields and use knowledge of one relationship to improve performance on the extraction of another. For 487 # Retrieved Documents Per Person Post−Fusion Missing Targets 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 20 0 40 60 80 100 120 140 160 Birthplace Occupation Deathyear Birthday Birthyear Figure 5: Additional documents decrease the number of postfusion missing targets, targets which are never extracted in any document. Birth year Extraction Precision Fusion Accuracy CRF .342 .797 + birthday .472 .861 CRF+E .654 .869 + birthday .809 .891 Occupation Extraction Precision Fusion Accuracy CRF .219 .606 + birthday .217 .569 + birth year(f) 21.9 .599 + all .214 .591 CRF+E .246 .423 + birthday .325 .577 + birth year(f) .387 .672 + all .382 .642 Birthplace Extraction Precision Fusion Accuracy CRF .139 .321 + birthday .158 .372 + birth year(f) .156 .350 CRF+E .357 .467 + birthday .350 .474 + birth year(f) .294 .350 + occupation(f) .314 .354 + all .362 .532 Table 6: Performance of Cross-Field Bootstrapping Models. (f) indicates that the best fused result was taken. birth year(f) means birth years were annotated using the system that discovered the most accurate birth years. 
example, to extract birth year given knowledge of the birthday, in training we mark up each hook corpus Dx with the known birthday b : birthday(x, b) and the target birth year y : birthyear(x, y) and add an additional feature to the CRF that indicates whether the birthday has been seen in the sentence.16 In testing, for each hook, we first find the birthday using the methods presented in the previous sections, annotate the corpus with the extracted birthday, and then apply the birth year CRF (see Figure 6 next page). 16The CRF state model doesn’t change. When bootstrapping from multiple fields, we add the conjunctions of the fields as features. Table 6 shows the effect of using this bootstrapped data to estimate other fields. Based on the relative performance of each of the individual extraction systems, we chose the following schedule for performing the bootstrapping: 1) Birthday, 2) Birth year, 3) Occupation, 4) Birthplace. We tried adding in all knowledge available to the system at each point in the schedule.17 There are gains in accuracy for birth year, occupation and birthplace by using cross-field bootstrapping. The performance of the plain CRF+E averaged across all five fields is 67.4%, while for the best bootstrapped system it is 74.6%, a gain of 7%. Doing bootstrapping in this way improves for people whose information is already partially correct. As a result, the percentage of people who have completely correct information improves to 37% from 13.8%, a gain of 24% over the nonbootstrapped CRF+E system. Additionally, erroneous extractions do not hurt accuracy on extraction of other fields. Performance in the bootstrapped system for birthyear, occupation and birth place when the birthday is wrong is almost the same as performance in the non-bootstrapped system. 5 Training Set Size Reduction One of the results from Section 2 is that lower ranked documents are less likely to contain the relevant biographic information. While this does not have an dramatic effect on the post-fusion accuracy (which improves with more documents), it suggests that training on a smaller corpus, with more relevant documents and more sentences with the desired information, might lead to equivalent or improved performance. In a final set of experiments we looked at system performance when the extractor is trained on fewer than 150 documents per person. The data in Figure 7 show that training on 30 documents per person yields around the same performance as training on 150 documents per person. Average performance when the system was trained on 30 documents per person is 70%, while average performance when trained on 150 documents per person is 68%. Most of this loss in performance comes from losses in occupation, but the other relationships 17This system has the extra knowledge of which fused method is the best for each relationship. This was assessed by inspection. 488 Frank Zappa was born on December 21. 1. Birthday Zappa : December 21, 1940. 2. Birthyear 1. Birthday 2. Birthyear 3. Birthplace Zappa was born in 1940 in Baltimore. Figure 6: Cross-Field Bootstrapping: In step (1) The birthday, December 21, is extracted and the text marked. In step 2, cooccurrences with the discovered birthday make 1940 a better candidate for birthyear. In step (3), the discovered birthyear appears in contexts where the discovered birthday does not and improves extraction of birth place. 
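A sketch of the bootstrapping step in Python is given below: values already extracted for earlier fields in the schedule are injected as extra token-level features before the next field's extractor is applied. The feature-dictionary representation is an assumption for illustration; the paper adds the analogous indicator (and conjunction) features to a Mallet CRF, whose API is not reproduced here.

def add_bootstrap_features(sentence_tokens, known_fields):
    """sentence_tokens: tokens of one sentence in the hook corpus.
    known_fields: already-extracted values, e.g. {"birthday": "December 21"}.
    Returns one feature dict per token, augmented with cross-field indicators."""
    sentence_text = " ".join(sentence_tokens)
    present = {name: bool(value) and value in sentence_text
               for name, value in known_fields.items()}
    features = []
    for i, token in enumerate(sentence_tokens):
        f = {"word": token.lower(), "position": i}
        for name, seen in present.items():
            f["sentence_has_" + name] = seen          # e.g. sentence_has_birthday
        if len(present) > 1:                          # conjunction when bootstrapping from several fields
            f["all_known_fields_in_sentence"] = all(present.values())
        features.append(f)
    return features

# Schedule used in the paper: 1) birthday, 2) birth year, 3) occupation, 4) birthplace.
# At each step the corpus would be re-annotated with the best fused value found so far,
# and the next field's CRF trained and applied on the augmented features.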
Post−Fusion Accuracy # Training Documents Per Person 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0 20 40 60 80 100 120 140 160 Birthday Birthyear Deathyear Occupation Birthplace Figure 7: Fusion accuracy doesn’t improve with more than 30 training documents per person. have either little or no gain from training on additional documents. There are two possible reasons why more training data may not help, and even may hurt performance. One possibility is that higher ranked retrieved documents are more likely to contain biographical facts, while in later documents it is more likely that automatically annotated training instances are in fact false positives. That is, higher ranked documents are cleaner training data. Pre-Fusion precision results (Figure 8) support this hypothesis since it appears that later instances are often contaminating earlier models. Pre−Fusion Precision # Training Documents Per Person 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0 20 40 60 80 100 120 140 160 Birthday Birthyear Birthplace Occupation Deathyear Figure 8: Pre-Fusion precision shows slight drops with increased training documents. The data in Figure 9 suggest an alternate possibility that later documents also shift the prior toward a model where it is less likely that a relationship is observed as fewer targets are extracted. Pre−Fusion Pseudo−Recall # Training Documents Per Person 0 1 2 3 4 5 6 7 8 9 10 11 0 20 40 60 80 100 120 140 160 Birthday Birthplace Deathyear Birthyear Occupation Figure 9: Pre-Fusion Pseudo-Recall also drops with increased training documents. 6 Related Work The closest related work to the task of biographic fact extraction was done by Cowie et al. (2000) and Schiffman et al. (2001), who explore the problem of biographic summarization. There has been rather limited published work in multi-document information extraction. The closest work to what we present here is Masterson and Kushmerick (2003), who perform multi-document information extraction trained on manually annotated training data and use Best Confidence to resolve each particular template slot. In summarizarion, many systems have examined the multi-document case. Notable systems are SUMMONS (Radev and McKeown, 1998) and RIPTIDE (White et al., 2001), which assume perfect extracted information and then perform closed domain summarization. Barzilay et al. (1999) does not explicitly extract facts, but instead picks out relevant repeated elements and combines them to obtain a summary which retains the semantics of the original. In recent question answering research, information fusion has been used to combine multiple candidate answers to form a consensus answer. Clarke et al. (2001) use frequency of n-gram occurrence to pick answers for particular questions. Another example of answer fusion comes in (Brill et al., 2001) which combines the output of multiple question answering systems in order to rank answers. Dalmas and Webber (2004) use a WordNet cover heuristic to choose an appropriate location from a large candidate set of answers. There has been a considerable amount of work in training information extraction systems from annotated data since the mid-90s. The initial work in the field used lexico-syntactic template patterns learned using a variety of different empirical approaches (Riloff and Schmelzenbach, 1998; Huffman, 1995; 489 Soderland et al., 1995). Seymore et al. (1999) use HMMs for information extraction and explore ways to improve the learning process. 
Nahm and Mooney (2002) suggest a method to learn word-to-word relationships across fields by doing data mining on information extraction results. Prager et al. (2004) uses knowledge of birth year to weed out candidate years of death that are impossible. Using the CRF extractors in our data set, this heuristic did not yield any improvement. More distantly related work for multi-field extraction suggests methods for combining information in graphical models across multiple extraction instances (Sutton et al., 2004; Bunescu and Mooney, 2004) . 7 Conclusion This paper has presented new experimental methodologies and results for cross-document information fusion, focusing on the task of biographic fact extraction and has proposed a new method for crossfield bootstrapping. In particular, we have shown that automatic annotation can be used effectively to train statistical information extractors such Na¨ıve Bayes and CRFs, and that CRF extraction accuracy can be improved by 5% with a negative example model. We looked at cross-document fusion and demonstrated that voting outperforms choosing the highest confidence extracted information by 2% to 20%. Finally, we introduced a cross-field bootstrapping method that improved average accuracy by 7%. References E. Agichtein and L. Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of ICDL, pages 85–94. R. Barzilay, K. R. McKeown, and M. Elhadad. 1999. Information fusion in the context of multi-document summarization. In Proceedings of ACL, pages 550–557. E. Brill, J. Lin, M. Banko, S. Dumais, and A. Ng. 2001. Dataintensive question answering. In Proceedings of TREC, pages 183–189. S. Brin. 1998. Extracting patterns and relations from the world wide web. In WebDB Workshop at 6th International Conference on Extending Database Technology, EDBT’98, pages 172–183. R. Bunescu and R. Mooney. 2004. Collective information extraction with relational markov networks. In Proceedings of ACL, pages 438–445. C. L. A. Clarke, G. V. Cormack, and T. R. Lynam. 2001. Exploiting redundancy in question answering. In Proceedings of SIGIR, pages 358–365. J. Cowie, S. Nirenburg, and H. Molina-Salgado. 2000. Generating personal profiles. In The International Conference On MT And Multilingual NLP. T. Dalmas and B. Webber. 2004. Information fusion for answering factoid questions. In Proceedings of 2nd CoLogNET-ElsNET Symposium. Questions and Answers: Theoretical Perspectives. D. Freitag and A. McCallum. 1999. Information extraction with hmms and shrinkage. In Proceedings of the AAAI-99 Workshop on Machine Learning for Information Extraction, pages 31–36. S. B. Huffman. 1995. Learning information extraction patterns from examples. In Working Notes of the IJCAI-95 Workshop on New Approaches to Learning for Natural Language Processing, pages 127–134. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML, pages 282– 289. T. R. Leek. 1997. Information extraction using hidden markov models. Master’s Thesis, UC San Diego. D. Masterson and N. Kushmerick. 2003. Information extraction from multi-document threads. In Proceedings of ECML-2003: Workshop on Adaptive Text Extraction and Mining, pages 34–41. A. McCallum. 2002. Mallet: A machine learning for language toolkit. U. Nahm and R. Mooney. 2002. Text mining with information extraction. 
In Proceedings of the AAAI 2220 Spring Symposium on Mining Answers from Texts and Knowledge Bases, pages 60–67. J. Prager, J. Chu-Carroll, and K. Czuba. 2004. Question answering by constraint satisfaction: Qa-by-dossier with constraints. In Proceedings of ACL, pages 574–581. D. R. Radev and K. R. McKeown. 1998. Generating natural language summaries from multiple on-line sources. Computational Linguistics, 24(3):469–500. D. Ravichandran and E. Hovy. 2002. Learning surface text patterns for a question answering system. In Proceedings of ACL, pages 41–47. E. Riloff and M. Schmelzenbach. 1998. An empirical approach to conceptual case frame acquisition. In Proceedings of WVLC, pages 49–56. E. Riloff. 1996. Automatically Generating Extraction Patterns from Untagged Text. In Proceedings of AAAI, pages 1044– 1049. B. Schiffman, I. Mani, and K. J. Concepcion. 2001. Producing biographical summaries: Combining linguistic knowledge with corpus statistics. In Proceedings of ACL, pages 450–457. K. Seymore, A. McCallum, and R. Rosenfeld. 1999. Learning hidden markov model structure for information extraction. In AAAI’99 Workshop on Machine Learning for Information Extraction, pages 37–42. S. Soderland, D. Fisher, J. Aseltine, and W. Lehnert. 1995. CRYSTAL: Inducing a conceptual dictionary. In Proceedings of IJCAI, pages 1314–1319. C. Sutton, K. Rohanimanesh, and A. McCallum. 2004. Dynamic conditional random fields: factorize probabilistic models for labeling and segmenting sequence data. In Proceedings of ICML. M. White, T. Korelsky, C. Cardie, V. Ng, D. Pierce, and K. Wagstaff. 2001. Multi-document summarization via information extraction. In Proceedings of HLT. 490 | 2005 | 60 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 491–498, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Simple Algorithms for Complex Relation Extraction with Applications to Biomedical IE Ryan McDonald1 Fernando Pereira1 Seth Kulick2 1CIS and 2IRCS, University of Pennsylvania, Philadelphia, PA {ryantm,pereira}@cis.upenn.edu, [email protected] Scott Winters Yang Jin Pete White Division of Oncology, Children’s Hospital of Pennsylvania, Philadelphia, PA {winters,jin,white}@genome.chop.edu Abstract A complex relation is any n-ary relation in which some of the arguments may be be unspecified. We present here a simple two-stage method for extracting complex relations between named entities in text. The first stage creates a graph from pairs of entities that are likely to be related, and the second stage scores maximal cliques in that graph as potential complex relation instances. We evaluate the new method against a standard baseline for extracting genomic variation relations from biomedical text. 1 Introduction Most research on text information extraction (IE) has focused on accurate tagging of named entities. Successful early named-entity taggers were based on finite-state generative models (Bikel et al., 1999). More recently, discriminatively-trained models have been shown to be more accurate than generative models (McCallum et al., 2000; Lafferty et al., 2001; Kudo and Matsumoto, 2001). Both kinds of models have been developed for tagging entities such as people, places and organizations in news material. However, the rapid development of bioinformatics has recently generated interest on the extraction of biological entities such as genes (Collier et al., 2000) and genomic variations (McDonald et al., 2004b) from biomedical literature. The next logical step for IE is to begin to develop methods for extracting meaningful relations involving named entities. Such relations would be extremely useful in applications like question answering, automatic database generation, and intelligent document searching and indexing. Though not as well studied as entity extraction, relation extraction has still seen a significant amount of work. We discuss some previous approaches at greater length in Section 2. Most relation extraction systems focus on the specific problem of extracting binary relations, such as the employee of relation or protein-protein interaction relation. Very little work has been done in recognizing and extracting more complex relations. We define a complex relation as any n-ary relation among n typed entities. The relation is defined by the schema (t1, . . . , tn) where ti ∈T are entity types. An instance (or tuple) in the relation is a list of entities (e1, . . . , en) such that either type(ei) = ti, or ei =⊥indicating that the ith element of the tuple is missing. For example, assume that the entity types are T = {person, job, company} and we are interested in the ternary relation with schema (person, job, company) that relates a person to their job at a particular company. For the sentence “John Smith is the CEO at Inc. Corp.”, the system would ideally extract the tuple (John Smith, CEO, Inc. Corp.). However, for the sentence “Everyday John Smith goes to his office at Inc. Corp.”, the system would extract (John Smith, ⊥, Inc. Corp.), since there is no mention of a job title. Hence, the goal of complex relation extraction is to identify all instances of the relation of interest in some piece of text, including 491 incomplete instances. 
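To make the schema and tuple definitions concrete, here is a minimal Python sketch of one possible representation; the Entity class and the use of None for the missing argument (written ⊥ in the paper) are illustrative choices, not part of the paper.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Entity:
    text: str
    etype: str            # e.g. "person", "job", "company"

# A complex relation schema is a tuple of entity types (t1, ..., tn).
SCHEMA: Tuple[str, ...] = ("person", "job", "company")

# An instance is a tuple of entities, with None standing in for a missing argument.
Instance = Tuple[Optional[Entity], ...]

def consistent_with_schema(instance: Instance, schema: Tuple[str, ...] = SCHEMA) -> bool:
    """True if every non-missing argument has the type required by its slot."""
    return (len(instance) == len(schema) and
            all(e is None or e.etype == t for e, t in zip(instance, schema)))

# Example from the text: "Everyday John Smith goes to his office at Inc. Corp."
john, inc = Entity("John Smith", "person"), Entity("Inc. Corp.", "company")
assert consistent_with_schema((john, None, inc))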
We present here several simple methods for extracting complex relations. All the methods start by recognized pairs of entity mentions, that is, binary relation instances, that appear to be arguments of the relation of interest. Those pairs can be seen as the edges of a graph with entity mentions as nodes. The algorithms then try to reconstruct complex relations by making tuples from selected maximal cliques in the graph. The methods are general and can be applied to any complex relation fitting the above definition. We also assume throughout the paper that the entities and their type are known a priori in the text. This is a fair assumption given the current high standard of state-of-the-art named-entity extractors. A primary advantage of factoring complex relations into binary relations is that it allows the use of standard classification algorithms to decide whether particular pairs of entity mentions are related. In addition, the factoring makes training data less sparse and reduces the computational cost of extraction. We will discuss these benefits further in Section 4. We evaluated the methods on a large set of annotated biomedical documents to extract relations related to genomic variations, demonstrating a considerable improvement over a reasonable baseline. 2 Previous work A representative approach to relation extraction is the system of Zelenko et al. (2003), which attempts to identify binary relations in news text. In that system, each pair of entity mentions of the correct types in a sentence is classified as to whether it is a positive instance of the relation. Consider the binary relation employee of and the sentence “John Smith, not Jane Smith, works at IBM”. The pair (John Smith, IBM) is a positive instance, while the pair (Jane Smith, IBM) is a negative instance. Instances are represented by a pair of entities and their position in a shallow parse tree for the containing sentence. Classification is done by a support-vector classifier with a specialized kernel for that shallow parse representation. This approach — enumerating all possible entity pairs and classifying each as positive or negative — is the standard method in relation extraction. The main differences among systems are the choice of trainable classifier and the representation for instances. For binary relations, this approach is quite tractable: if the relation schema is (t1, t2), the number of potential instances is O(|t1| |t2|), where |t| is the number of entity mentions of type t in the text under consideration. One interesting system that does not belong to the above class is that of Miller et al. (2000), who take the view that relation extraction is just a form of probabilistic parsing where parse trees are augmented to identify all relations. Once this augmentation is made, any standard parser can be trained and then run on new sentences to extract new relations. Miller et al. show such an approach can yield good results. However, it can be argued that this method will encounter problems when considering anything but binary relations. Complex relations would require a large amount of tree augmentation and most likely result in extremely sparse probability estimates. Furthermore, by integrating relation extraction with parsing, the system cannot consider long-range dependencies due to the local parsing constraints of current probabilistic parsers. 
The higher the arity of a relation, the more likely it is that entities will be spread out within a piece of text, making long range dependencies especially important. Roth and Yih (2004) present a model in which entity types and relations are classified jointly using a set of global constraints over locally trained classifiers. This joint classification is shown to improve accuracy of both the entities and relations returned by the system. However, the system is based on constraints for binary relations only. Recently, there has also been many results from the biomedical IE community. Rosario and Hearst (2004) compare both generative and discriminative models for extracting seven relationships between treatments and diseases. Though their models are very flexible, they assume at most one relation per sentence, ruling out cases where entities participate in multiple relations, which is a common occurrence in our data. McDonald et al. (2004a) use a rulebased parser combined with a rule-based relation identifier to extract generic binary relations between biological entities. As in predicate-argument extraction (Gildea and Jurafsky, 2002), each relation is 492 always associated with a verb in the sentence that specifies the relation type. Though this system is very general, it is limited by the fact that the design ignores relations not expressed by a verb, as the employee of relation in“John Smith, CEO of Inc. Corp., announced he will resign”. Most relation extraction systems work primarily on a sentential level and never consider relations that cross sentences or paragraphs. Since current data sets typically only annotate intra-sentence relations, this has not yet proven to be a problem. 3 Definitions 3.1 Complex Relations Recall that a complex n-ary relation is specified by a schema (t1, . . . , tn) where ti ∈T are entity types. Instances of the relation are tuples (e1, . . . , en) where either type(ei) = ti, or ei =⊥(missing argument). The only restriction this definition places on a relation is that the arity must be known. As we discuss it further in Section 6, this is not required by our methods but is assumed here for simplicity. We also assume that the system works on a single relation type at a time, although the methods described here are easily generalizable to systems that can extract many relations at once. 3.2 Graphs and Cliques An undirected graph G = (V, E) is specified by a set of vertices V and a set of edges E, with each edge an unordered pair (u, v) of vertices. G′ = (V ′, E′) is a subgraph of G if V ′ ⊆V and E′ = {(u, v) : u, v ∈V ′, (u, v) ∈E}. A clique C of G is a subgraph of G in which there is an edge between every pair of vertices. A maximal clique of G is a clique C = (VC, EC) such that there is no other clique C′ = (VC′, EC′) such that VC ⊂VC′. 4 Methods We describe now a simple method for extracting complex relations. This method works by first factoring all complex relations into a set of binary relations. A classifier is then trained in the standard manner to recognize all pairs of related entities. Finally a graph is constructed from the output of this classifier and the complex relations are determined from the cliques of this graph. a. All possible relation instances (John, CEO, Inc. Corp.) (John, ⊥, Inc. Corp.) (John, CEO, Biz. Corp.) (John, ⊥, Biz. Corp.) (John, CEO, ⊥) (Jane, CEO, Inc. Corp.) (Jane, ⊥, Inc. Corp.) (Jane, CEO, Biz. Corp.) (Jane, ⊥, Biz. Corp.) (Jane, CEO, ⊥) (⊥, CEO, Inc. Corp.) (⊥, CEO, Biz. Corp.) b. 
All possible binary relations (John, CEO) (John, Inc. Corp.) (John, Biz. Corp.) (CEO, Inc. Corp.) (CEO, Biz. Corp.) (Jane, CEO) (Jane, Inc. Corp.) (Jane, Biz. Corp.) Figure 1: Relation factorization of the sentence: John and Jane are CEOs at Inc. Corp. and Biz. Corp. respectively. 4.1 Classifying Binary Relations Consider again the motivating example of the (person, job, company) relation and the sentence “John and Jane are CEOs at Inc. Corp. and Biz. Corp. respectively”. This sentence contains two people, one job title and two companies. One possible method for extracting the relation of interest would be to first consider all 12 possible tuples shown in Figure 1a. Using all these tuples, it should then be possible to train a classifier to distinguish valid instances such as (John, CEO, Inc. Corp.) from invalid ones such as (Jane, CEO, Inc. Corp.). This is analogous to the approach taken by Zelenko et al. (2003) for binary relations. There are problems with this approach. Computationally, for an n-ary relation, the number of possible instances is O(|t1| |t2| · · · |tn|). Conservatively, letting m be the smallest |ti|, the run time is O(mn), exponential in the arity of the relation. The second problem is how to manage incomplete but correct instances such as (John, ⊥, Inc. Corp.) when training the classifier. If this instance is marked as negative, then the model might incorrectly disfavor features that correlate John to Inc. Corp.. However, if this instance is labeled positive, then the model may tend to prefer the shorter and more compact incomplete relations since they will be abundant in the positive training examples. We could always ignore instances of this form, but then the data would be heavily skewed towards negative instances. 493 Instead of trying to classify all possible relation instances, in this work we first classify pairs of entities as being related or not. Then, as discussed in Section 4.2, we reconstruct the larger complex relations from a set of binary relation instances. Factoring relations into a set of binary decisions has several advantages. The set of possible pairs is much smaller then the set of all possible complex relation instances. This can be seen in Figure 1b, which only considers pairs that are consistent with the relation definition. More generally, the number of pairs to classify is O((P i |ti|)2) , which is far better than the exponentially many full relation instances. There is also no ambiguity when labeling pairs as positive or negative when constructing the training data. Finally, we can rely on previous work on classification for binary relation extraction to identify pairs of related entities. To train a classifier to identify pairs of related entities, we must first create the set of all positive and negative pairs in the data. The positive instances are all pairs that occur together in a valid tuple. For the example sentence in Figure 1, these include the pairs (John, CEO), (John, Inc. Corp.), (CEO, Inc. Corp.), (CEO, Biz. Corp.), (Jane, CEO) and (Jane, Biz. Corp.). To gather negative instances, we extract all pairs that never occur together in a valid relation. From the same example these would be the pairs (John, Biz. Corp.) and (Jane, Inc. Corp.). This leads to a large set of positive and negative binary relation instances. At this point we could employ any binary relation classifier and learn to identify new instances of related pairs of entities. 
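A sketch of this pair-construction step is shown below in Python, reusing the Entity representation from the earlier sketch: pairs that co-occur in some annotated tuple become positive examples, remaining type-compatible pairs in the same sentence become negative examples, and a few of the surface features listed in Table 1 below are illustrated (part-of-speech features are omitted). The span bookkeeping is an assumption about preprocessing, not the authors' code.

from itertools import combinations

def build_pair_examples(annotated_tuples, entities):
    """annotated_tuples: gold relation tuples for one sentence (None for missing slots).
    entities: all entity mentions in the sentence.
    Returns (positive, negative) sets of unordered entity pairs."""
    positive = set()
    for tup in annotated_tuples:
        present = [e for e in tup if e is not None]
        for a, b in combinations(present, 2):
            positive.add(frozenset((a, b)))
    negative = set()
    for a, b in combinations(entities, 2):
        pair = frozenset((a, b))
        # keep only pairs consistent with the relation definition; for a schema with
        # distinct types this reduces to requiring different entity types
        if a.etype != b.etype and pair not in positive:
            negative.add(pair)
    return positive, negative

def pair_features(tokens, span1, span2, e1, e2):
    """A few surface features in the spirit of Table 1 (POS features omitted).
    span1 / span2 are (start, end) token offsets of the two mentions."""
    (start1, end1), (start2, end2) = sorted([span1, span2])
    between = tokens[end1:start2]
    feats = {"type_pair": (e1.etype, e2.etype), "distance": start2 - end1}
    feats.update({"between_" + w.lower(): True for w in between})
    feats.update({"between_bigram_" + a.lower() + "_" + b.lower(): True
                  for a, b in zip(between, between[1:])})
    return feats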
We use a standard maximum entropy classifier (Berger et al., 1996) implemented as part of MALLET (McCallum, 2002). The model is trained using the features listed in Table 1. This is a very simple binary classification model. No deep syntactic structure such as parse trees is used. All features are basically over the words separating two entities and their part-of-speech tags. Of course, it would be possible to use more syntactic information if available in a manner similar to that of Zelenko et al. (2003). However, the primary purpose of our experiments was not to create a better binary relation extractor, but to see if complex relations could be extracted through binary factorizaFeature Set entity type of e1 and e2 words in e1 and e2 word bigrams in e1 and e2 POS of e1 and e2 words between e1 and e2 word bigrams between e1 and e2 POS between e1 and e2 distance between e1 and e2 concatenations of above features Table 1: Feature set for maximum entropy binary relation classifier. e1 and e2 are entities. a. Relation graph G John Jane CEO Inc. Corp. Biz. Corp. b. Tuples from G (John, CEO, ⊥) (John, ⊥, Inc. Corp.) (John, ⊥, Biz. Corp.) (Jane, CEO, ⊥) (⊥, CEO, Inc. Corp.) (⊥, CEO, Biz. Corp.) (John, CEO, Inc. Corp.) (John, CEO, Biz. Corp.) Figure 2: Example of a relation graph and tuples from all the cliques in the graph. tion followed by reconstruction. In Section 5.2 we present an empirical evaluation of the binary relation classifier. 4.2 Reconstructing Complex Relations 4.2.1 Maximal Cliques Having identified all pairs of related entities in the text, the next stage is to reconstruct the complex relations from these pairs. Let G = (V, E) be an undirected graph where the vertices V are entity mentions in the text and the edges E represent binary relations between entities. We reconstruct the complex relation instances by finding maximal cliques in the graphs. The simplest approach is to create the graph so that two entities in the graph have an edge if the binary classifier believes they are related. For example, consider the binary factorization in Figure 1 and imagine the classifier identified the following pairs as being related: (John, CEO), (John, Inc. Corp.), (John, Biz. Corp.), (CEO, Inc. Corp.), (CEO, Biz. Corp.) and (Jane, CEO). The resulting graph can be seen in Figure 2a. Looking at this graph, one solution to construct494 ing complex relations would be to consider all the cliques in the graph that are consistent with the definition of the relation. This is equivalent to having the system return only relations in which the binary classifier believes that all of the entities involved are pairwise related. All the cliques in the example are shown in Figure 2b. We add ⊥fields to the tuples to be consistent with the relation definition. This could lead to a set of overlapping cliques, for instance (John, CEO, Inc. Corp.) and (John, CEO, ⊥). Instead of having the system return all cliques, our system just returns the maximal cliques, that is, those cliques that are not subsets of other cliques. Hence, for the example under consideration in Figure 2, the system would return the one correct relation, (John, CEO, Inc. Corp.), and two incorrect relations, (John, CEO, Biz. Corp.) and (Jane, CEO, ⊥). The second is incorrect since it does not specify the company slot of the relation even though that information is present in the text. It is possible to find degenerate sentences in which perfect binary classification followed by maximal clique reconstruction will lead to errors. 
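A sketch of the clique-based reconstruction in Python follows: the classifier's positive pairs define the edges, maximal cliques are enumerated with a simple recursion in the style of Bron and Kerbosch (1973), which is also the algorithm the paper uses, and each clique is mapped to a tuple with None for unfilled slots. Cliques containing two entities of the same type, which are inconsistent with the schema, are assumed to be filtered out elsewhere.

def maximal_cliques(vertices, edges):
    """Yield every maximal clique (as a frozenset) of the graph whose edges are the
    unordered pairs judged 'related' by the binary classifier."""
    adj = {v: {u for u in vertices if u != v and frozenset((u, v)) in edges}
           for v in vertices}

    def expand(r, p, x):
        if not p and not x:
            yield frozenset(r)
            return
        for v in list(p):
            yield from expand(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}

    yield from expand(set(), set(vertices), set())

def clique_to_tuple(clique, schema):
    """Map a clique of typed entities to a relation instance, padding missing slots with None."""
    slot = {t: None for t in schema}
    for e in clique:
        slot[e.etype] = e        # assumes at most one entity per type in the clique
    return tuple(slot[t] for t in schema)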
One such sentence is, “John is C.E.O. and C.F.O. of Inc. Corp. and Biz. Corp. respectively and Jane vice-versa”. However, we expect such sentences to be rare; in fact, they never occur in our data. The real problem with this approach is that an arbitrary graph can have exponentially many cliques, negating any efficiency advantage over enumerating all n-tuples of entities. Fortunately, there are algorithms for finding all maximal cliques that are efficient in practice. We use the algorithm of Bron and Kerbosch (1973). This is a well known branch and bound algorithm that has been shown to empirically run linearly in the number of maximal cliques in the graph. In our experiments, this algorithm found all maximal cliques in a matter of seconds. 4.2.2 Probabilistic Cliques The above approach has a major shortcoming in that it assumes the output of the binary classifier to be absolutely correct. For instance, the classifier may have thought with probability 0.49, 0.99 and 0.99 that the following pairs were related: (Jane, Biz. Corp.), (CEO, Biz. Corp.) and (Jane, CEO) respectively. The maximal clique method would not produce the tuple (Jane, CEO, Biz. Corp.) since it never considers the edge between Jane and Biz. Corp. However, given the probability of the edges, we would almost certainly want this tuple returned. What we would really like to model is a belief that on average a clique represents a valid relation instance. To do this we use the complete graph G = (V, E) with edges between all pairs of entity mentions. We then assign weight w(e) to edge e equal to the probability that the two entities in e are related, according to the classifier. We define the weight of a clique w(C) as the mean weight of the edges in the clique. Since edge weights represent probabilities (or ratios), we use the geometric mean w(C) = Y e∈EC w(e) 1/|EC| We decide that a clique C represents a valid tuple if w(C) ≥0.5. Hence, the system finds all maximal cliques as before, but considers only those where w(C) ≥0.5, and it may select a non-maximal clique if the weight of all larger cliques falls below the threshold. The cutoff of 0.5 is not arbitrary, since it ensures that the average probability of a clique representing a relation instance is at least as large as the average probability of it not representing a relation instance. We ran experiments with varying levels of this threshold and found that, roughly, lower thresholds result in higher precision at the expense of recall since the system returns fewer but larger tuples. Optimum results were obtained for a cutoff of approximately 0.4, but we report results only for w(C) ≥0.5. The major problem with this approach is that there will always be exponentially many cliques since the graph is fully connected. However, in our experiments we pruned all edges that would force any containing clique C to have w(C) < 0.5. This typically made the graphs very sparse. Another problem with this approach is the assumption that the binary relation classifier outputs probabilities. For maximum entropy and other probabilistic frameworks this is not an issue. However, many classifiers, such as SVMs, output scores or distances. It is possible to transform the scores from those models through a sigmoid to yield probabili495 ties, but there is no guarantee that those probability values will be well calibrated. 5 Experiments 5.1 Problem Description and Data We test these methods on the task of extracting genomic variation events from biomedical text (McDonald et al., 2004b). 
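The probabilistic-clique criterion described in Section 4.2.2 is a geometric mean of edge probabilities compared against the 0.5 cutoff; the Python sketch below reuses maximal_cliques and clique_to_tuple from the previous sketch. The edge pruning the paper applies to keep the complete graph tractable, and the fall-back to smaller cliques when every larger clique drops below the threshold, are left out for brevity, so this is an approximation of the method rather than a faithful reimplementation.

import math
from itertools import combinations

def clique_weight(clique, edge_prob):
    """Geometric mean of the classifier probabilities over all edges of the clique."""
    pairs = list(combinations(clique, 2))
    if not pairs:
        return 0.0
    logs = [math.log(edge_prob.get(frozenset(p), 1e-6)) for p in pairs]
    return math.exp(sum(logs) / len(logs))

def probabilistic_clique_tuples(entities, edge_prob, schema, threshold=0.5):
    """Keep maximal cliques whose mean edge probability reaches the threshold."""
    # simplistic pruning for the sketch; the paper prunes an edge only when it would
    # force every clique containing it below the 0.5 cutoff
    edges = {e for e, p in edge_prob.items() if p > 0.1}
    accepted = []
    for clique in maximal_cliques(entities, edges):
        if len(clique) < 2:
            continue
        w = clique_weight(clique, edge_prob)
        if w >= threshold:
            accepted.append((clique_to_tuple(clique, schema), w))
    return accepted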
Briefly, we define a variation event as an acquired genomic aberration: a specific, one-time alteration at the genomic level and described at the nucleic acid level, amino acid level or both. Each variation event is identified by the relationship between a type of variation, its location, and the corresponding state change from an initialstate to an altered-state. This can be formalized as the following complex schema (var-type, location, initial-state, altered-state) A simple example is the sentence “At codons 12 and 61, the occurrence of point mutations from G/A to T/G were observed” which gives rise to the tuples (point mutation, codon 12, G, T) (point mutation, codon 61, A, G) Our data set consists of 447 abstracts selected from MEDLINE as being relevant to populating a database with facts of the form: gene X with variation event Y is associated with malignancy Z. Abstracts were randomly chosen from a larger corpus identified as containing variation mentions pertaining to cancer. The current data consists of 4691 sentences that have been annotated with 4773 entities and 1218 relations. Of the 1218 relations, 760 have two ⊥arguments, 283 have one ⊥argument, and 175 have no ⊥arguments. Thus, 38% of the relations tagged in this data cannot be handled using binary relation classification alone. In addition, 4% of the relations annotated in this data are non-sentential. Our system currently only produces sentential relations and is therefore bounded by a maximum recall of 96%. Finally, we use gold standard entities in our experiments. This way we can evaluate the performance of the relation extraction system isolated from any kind of pipelined entity extraction errors. Entities in this domain can be found with fairly high accuracy (McDonald et al., 2004b). It is important to note that just the presence of two entity types does not entail a relation between them. In fact, 56% of entity pairs are not related, due either to explicit disqualification in the text (e.g. “... the lack of G to T transversion ...”) or ambiguities that arise from multiple entities of the same type. 5.2 Results Because the data contains only 1218 examples of relations we performed 10-fold cross-validation tests for all results. We compared three systems: • MC: Uses the maximum entropy binary classifier coupled with the maximal clique complex relation reconstructor. • PC: Same as above, except it uses the probabilistic clique complex relation reconstructor. • NE: A maximum entropy classifier that naively enumerates all possible relation instances as described in Section 4.1. In training system NE, all incomplete but correct instances were marked as positive since we found this had the best performance. We used the same pairwise entity features in the binary classifier of the above two systems. However, we also added higher order versions of the pairwise features. For this system we only take maximal relations,that is, if (John, CEO, Inc. Corp.) and (John, ⊥, Inc. Corp.) are both labeled positive, the system would only return the former. Table 2 contains the results of the maximum entropy binary relation classifier (used in systems MC and PC). The 1218 annotated complex relations produced 2577 unique binary pairs of related entities. We can see that the maximum entropy classifier performs reasonably well, although performance may be affected by the lack of rich syntactic features, which have been shown to help performance (Miller et al., 2000; Zelenko et al., 2003). 
Table 3 compares the three systems on the real problem of extracting complex relations. An extracted complex relation is considered correct if and only if all the entities in the relation are correct. There is no partial credit. All training and clique finding algorithms took under 5 minutes for the entire data set. Naive enumeration took approximately 26 minutes to train. 496 ACT PRD COR 2577 2722 2101 Prec Rec F-Meas 0.7719 0.8153 0.7930 Table 2: Binary relation classification results for the maximum entropy classifier. ACT: actual number of related pairs, PRD: predicted number of related pairs and COR: correctly identified related pairs. System Prec Rec F-Meas NE 0.4588 0.6995 0.5541 MC 0.5812 0.7315 0.6480 PC 0.6303 0.7726 0.6942 Table 3: Full relation classification results. For a relation to be classified correctly, all the entities in the relation must be correctly identified. First we observe that the maximal clique method combined with maximum entropy (system MC) reduces the relative error rate over naively enumerating and classifying all instances (system NE) by 21%. This result is very positive. The system based on binary factorization not only is more efficient then naively enumerating all instances, but significantly outperforms it as well. The main reason naive enumeration does so poorly is that all correct but incomplete instances are marked as positive. Thus, even slight correlations between partially correct entities would be enough to classify an instance as correct, which results in relatively good recall but poor precision. We tried training only with correct and complete positive instances, but the result was a system that only returned few relations since negative instances overwhelmed the training set. With further tuning, it may be possible to improve the performance of this system. However, we use it only as a baseline and to demonstrate that binary factorization is a feasible and accurate method for extracting complex relations. Furthermore, we see that using probabilistic cliques (system PC) provides another large improvement, a relative error reduction of 13% over using maximal cliques and 31% reduction over enumeration. Table 4 shows the breakdown of relations returned by type. There are three types of relations, 2-ary, 3-ary and 4-ary, each with 2, 1 and 0 ⊥arguments respectively, e.g. System 2-ary 3-ary 4-ary NE 760:1097:600 283:619:192 175:141:60 MC 760:1025:601 283:412:206 175:95:84 PC 760:870:590 283:429:223 175:194:128 Table 4: Breakdown of true positive relations by type that were returned by each system. Each cell contains three numbers, Actual:Predicted:Correct, which represents for each arity the actual, predicted and correct number of relations for each system. (point mutation, codon 12, ⊥, ⊥) is a 2-ary relation. Clearly the probabilistic clique method is much more likely to find larger non-binary relations, verifying the motivation that there are some low probability edges that can still contribute to larger cliques. 6 Conclusions and Future Work We presented a method for complex relation extraction, the core of which was to factorize complex relations into sets of binary relations, learn to identify binary relations and then reconstruct the complex relations by finding maximal cliques in graphs that represent relations between pairs of entities. The primary advantage of this method is that it allows for the use of almost any binary relation classifier, which have been well studied and are often accurate. 
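For reference, the strict tuple-level scoring used in Table 3 above, where a predicted relation counts only if every slot matches, can be written as the short Python sketch below; representing predictions and gold annotations as sets of slot-complete tuples is an assumption about data format, not the authors' evaluation code.

def strict_relation_prf(predicted, gold):
    """predicted, gold: sets of relation tuples (with None in unfilled slots).
    A prediction is correct only if it matches a gold tuple exactly; no partial credit."""
    correct = len(predicted & gold)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f_measure = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f_measure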
We showed that such a method can be successful with an empirical evaluation on a large set of biomedical data annotated with genomic variation relations. In fact, this approach is both significantly quicker and more accurate then enumerating and classifying all possible instances. We believe this work provides a good starting point for continued research in this area. A distinction may be made between the factored system presented here and one that attempts to classify complex relations without factorization. This is related to the distinction between methods that learn local classifiers that are combined with global constraints after training and methods that incorporate the global constraints into the learning process. McCallum and Wellner (2003) showed that learning binary co-reference relations globally improves performance over learning relations in isolation. However, their model relied on the transitive property inherent in the co-reference relation. Our system can be seen as an instance of a local learner. Punyakanok 497 et al. (2004) argued that local learning actually outperforms global learning in cases when local decisions can easily be learnt by the classifier. Hence, it is reasonable to assume that our binary factorization method will perform well when binary relations can be learnt with high accuracy. As for future work, there are many things that we plan to look at. The binary relation classifier we employ is quite simplistic and most likely can be improved by using features over a deeper representation of the data such as parse trees. Other more powerful binary classifiers should be tried such as those based on tree kernels (Zelenko et al., 2003). We also plan on running these algorithms on more data sets to test if the algorithms empirically generalize to different domains. Perhaps the most interesting open problem is how to learn the complex reconstruction phase. One possibility is recent work on supervised clustering. Letting the edge probabilities in the graphs represent a distance in some space, it may be possible to learn how to cluster vertices into relational groups. However, since a vertex/entity can participate in one or more relation, any clustering algorithm would be required to produce non-disjoint clusters. We mentioned earlier that the only restriction of our complex relation definition is that the arity of the relation must be known in advance. It turns out that the algorithms we described can actually handle dynamic arity relations. All that is required is to remove the constraint that maximal cliques must be consistent with the structure of the relation. This represents another advantage of binary factorization over enumeration, since it would be infeasible to enumerate all possible instances for dynamic arity relations. Acknowledgments The authors would like to thank Mark Liberman, Mark Mandel and Eric Pancoast for useful discussion, suggestions and technical support. This work was supported in part by NSF grant ITR 0205448. References A. L. Berger, S. A. Della Pietra, and V. J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1). D.M. Bikel, R. Schwartz, and R.M. Weischedel. 1999. An algorithm that learns what’s in a name. Machine Learning Journal Special Issue on Natural Language Learning, 34(1/3):221–231. C. Bron and J. Kerbosch. 1973. Algorithm 457: finding all cliques of an undirected graph. Communications of the ACM, 16(9):575–577. N. Collier, C. Nobata, and J. Tsujii. 2000. 
Extracting the names of genes and gene products with a hidden Markov model. In Proc. COLING. D. Gildea and D. Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics. Taku Kudo and Yuji Matsumoto. 2001. Chunking with support vector machines. In Proc. NAACL. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML. A. McCallum and B. Wellner. 2003. Toward conditional models of identity uncertainty with application to proper noun coreference. In IJCAI Workshop on Information Integration on the Web. A. McCallum, D. Freitag, and F. Pereira. 2000. Maximum entropy Markov models for information extraction and segmentation. In Proc. ICML. A. K. McCallum. 2002. MALLET: A machine learning for language toolkit. D.M. McDonald, H. Chen, H. Su, and B.B. Marshall. 2004a. Extracting gene pathway relations using a hybrid grammar: the Arizona Relation Parser. Bioinformatics, 20(18):3370–78. R.T. McDonald, R.S. Winters, M. Mandel, Y. Jin, P.S. White, and F. Pereira. 2004b. An entity tagger for recognizing acquired genomic variations in cancer literature. Bioinformatics, 20(17):3249–3251. S. Miller, H. Fox, L.A. Ramshaw, and R.M. Weischedel. 2000. A novel use of statistical parsing to extract information from text. In Proc. NAACL. V. Punyakanok, D. Roth, W. Yih, and D. Zimak. 2004. Learning via inference over structurally constrained output. In Workshop on Learning Structured with Output, NIPS. Barbara Rosario and Marti A. Hearst. 2004. Classifying semantic relations in bioscience texts. In ACL. D. Roth and W. Yih. 2004. A linear programmingformulation for global inference in natural language tasks. In Proc. CoNLL. D. Zelenko, C. Aone, and A. Richardella. 2003. Kernel methods for relation extraction. JMLR. 498 | 2005 | 61 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 499–506, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Resume Information Extraction with Cascaded Hybrid Model Kun Yu Gang Guan Ming Zhou Department of Computer Science and Technology Department of Electronic Engineering Microsoft Research Asia University of Science and Technology of China Tsinghua University 5F Sigma Center, No.49 Zhichun Road, Haidian Hefei, Anhui, China, 230027 Bejing, China, 100084 Bejing, China, 100080 [email protected] [email protected] [email protected] Abstract This paper presents an effective approach for resume information extraction to support automatic resume management and routing. A cascaded information extraction (IE) framework is designed. In the first pass, a resume is segmented into a consecutive blocks attached with labels indicating the information types. Then in the second pass, the detailed information, such as Name and Address, are identified in certain blocks (e.g. blocks labelled with Personal Information), instead of searching globally in the entire resume. The most appropriate model is selected through experiments for each IE task in different passes. The experimental results show that this cascaded hybrid model achieves better F-score than flat models that do not apply the hierarchical structure of resumes. It also shows that applying different IE models in different passes according to the contextual structure is effective. 1 Introduction Big enterprises and head-hunters receive hundreds of resumes from job applicants every day. Automatically extracting structured information from resumes of different styles and formats is needed to support the automatic construction of database, searching and resume routing. The definition of resume information fields varies in different applications. Normally, resume information is described as a hierarchical structure The research was carried out in Microsoft Research Asia. with two layers. The first layer is composed of consecutive general information blocks such as Personal Information, Education etc. Then within each general information block, detailed information pieces can be found, e.g., in Personal Information block, detailed information such as Name, Address, Email etc. can be further extracted. Info Hierarchy Info Type (Label) General Info Personal Information(G1); Education(G2); Research Experience(G3); Award(G4); Activity(G5); Interests(G6); Skill(G7) Personal Detailed Info (Personal Information) Name(P1); Gender(P2); Birthday(P3); Address(P4); Zip code(P5); Phone(P6); Mobile(P7); Email(P8); Registered Residence(P9); Marriage(P10); Residence(P11); Graduation School(P12); Degree(P13); Major(P14) Detailed Info Educational Detailed Info (Education) Graduation School(D1); Degree(D2); Major(D3); Department(D4) Table 1. Predefined information types. Based on the requirements of an ongoing recruitment management system which incorporates database construction with IE technologies and resume recommendation (routing), as shown in Table 1, 7 general information fields are defined. Then, for Personal Information, 14 detailed information fields are designed; for Education, 4 detailed information fields are designed. The IE task, as exemplified in Figure 1, includes segmenting a resume into consecutive blocks labelled with general information types, and further extracting the detailed information such as Name and Address from certain blocks. Extracting information from resumes with high precision and recall is not an easy task. 
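Before turning to the extraction models, it may help to fix the target schema of Table 1 as a concrete data structure. The sketch below is purely illustrative (the class and attribute names are ours, not the paper's); it only shows the two-layer organization that the cascaded framework is meant to fill in.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class PersonalDetail:    # second-layer fields found inside a Personal Information block
    name: Optional[str] = None        # P1
    gender: Optional[str] = None      # P2
    birthday: Optional[str] = None    # P3
    address: Optional[str] = None     # P4
    email: Optional[str] = None       # P8
    # ... the remaining fields P5-P14 from Table 1 would be declared the same way

@dataclass
class EducationDetail:   # second-layer fields found inside an Education block
    graduation_school: Optional[str] = None  # D1
    degree: Optional[str] = None             # D2
    major: Optional[str] = None              # D3
    department: Optional[str] = None         # D4

@dataclass
class Resume:
    # first layer: consecutive blocks, each a (label, text) pair such as ("G2", "...")
    general_blocks: List[Tuple[str, str]] = field(default_factory=list)
    personal: PersonalDetail = field(default_factory=PersonalDetail)
    education: List[EducationDetail] = field(default_factory=list)
```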
In spite of 499 Figure 1. Example of a resume and the extracted information. constituting a restricted domain, resumes can be written in multitude of formats (e.g. structured tables or plain texts), in different languages (e.g. Chinese and English) and in different file types (e.g. Text, PDF, Word etc.). Moreover, writing styles could be very diversified. Among the methods in IE, Hidden Markov modelling has been widely used (Freitag and McCallum, 1999; Borkar et al., 2001). As a statebased model, HMMs are good at extracting information fields that hold a strong order of sequence. Classification is another popular method in IE. By assuming the independence of information types, it is feasible to classify segmented units as either information types to be extracted (Kushmerick et al., 2001; Peshkin and Pfeffer, 2003; Sitter and Daelemans, 2003), or information boundaries (Finn and Kushmerick, 2004). This method specializes in settling the extraction problem of independent information types. Resume shares a document-level hierarchical contextual structure where the related information units usually occur in the same textual block, and text blocks of different information categories usually occur in a relatively fixed order. Such characteristics have been successfully used in the categorization of multi-page documents by Frasconi et al. (2001). In this paper, given the hierarchy of resume information, a cascaded two-pass IE framework is designed. In the first pass, the general information is extracted by segmenting the entire resume into consecutive blocks and each block is annotated with a label indicating its category. In the second pass, detailed information pieces are further extracted within the boundary of certain blocks. Moreover, for different types of information, the most appropriate extraction method is selected through experiments. For the first pass, since there exists a strong sequence among blocks, a HMM model is applied to segment a resume and each block is labelled with a category of general information. We also apply HMM for the educational detailed information extraction for the same reason. In addition, classification based method is selected for the personal detailed information extraction where information items appear relatively independently. Tested with 1,200 Chinese resumes, experimental results show that exploring the hierarchical structure of resumes with this proposed cascaded framework improves the average F-score of detailed information extraction 500 greatly, and combining different IE models in different layer properly is effective to achieve good precision and recall. The remaining part of this paper is structured as follows. Section 2 introduces the related work. Section 3 presents the structure of the cascaded hybrid IE model and introduces the HMM model and SVM model in detail. Experimental results and analysis are shown in Section 4. Section 5 provides a discussion of our cascaded hybrid model. Section 6 is the conclusion and future work. 2 Related Work As far as we know, there are few published works on resume IE except some products, for which there is no way to determine the technical details. One of the published results on resume IE was shown in Ciravegna and Lavelli (2004). In this work, they applied (LP)2 , a toolkit of IE, to learn information extraction rules for resumes written in English. The information defined in their task includes a flat structure of Name, Street, City, Province, Email, Telephone, Fax and Zip code. 
This flat setting is not only different from our hierarchical structure but also different from our detailed information pieces. Besides, there are some applications that are analogous to resume IE, such as seminar announcement IE (Freitag and McCallum, 1999), job posting IE (Sitter and Daelemans, 2003; Finn and Kushmerick, 2004) and address segmentation (Borkar et al., 2001; Kushmerick et al., 2001). Most of the approaches employed in these applications view a text as flat and extract information from all the texts directly (Freitag and McCallum, 1999; Kushmerick et al., 2001; Peshkin and Pfeffer, 2003; Finn and Kushmerick, 2004). Only a few approaches extract information hierarchically like our model. Sitter and Daelemans (2003) present a double classification approach to perform IE by extracting words from pre-extracted sentences. Borkar et al. (2001) develop a nested model, where the outer HMM captures the sequencing relationship among elements and the inner HMMs learn the finer structure within each element. But these approaches employ the same IE methods for all the information types. Compared with them, our model applies different methods in different subtasks to fit the special contextual structure of information in each sub-task well. 3 Cascaded Hybrid Model Figure 2 is the structure of our cascaded hybrid model. The first pass (on the left hand side) segments a resume into consecutive blocks with a HMM model. Then based on the result, the second pass (on the right hand side) uses HMM to extract the educational detailed information and SVM to extract the personal detailed information, respectively. The block selection module is used to decide the range of detailed information extraction in the second pass. Figure 2. Structure of cascaded hybrid model. 3.1 HMM Model 3.1.1 Model Design For general information, the IE task is viewed as labelling the segmented units with predefined class labels. Given an input resume T which is a sequence of words w1,w2,…,wk, the result of general information extraction is a sequence of blocks in which some words are grouped into a certain block T = t1, t2,…, tn, where ti is a block. Assuming the expected label sequence of T is L=l1, l2,…, ln, with each block being assigned a label li, we get the sequence of block and label pairs Q=(t1, l1), (t2, l2),…,(tn, ln). In our research, we simply assume that the segmentation is based on the natural paragraph of T. Table 1 gives the list of information types to be extracted, where general information is represented as G1~G7. For each kind of general information, say Gi, two labels are set: Gi-B means the beginning of Gi, Gi-M means the remainder part of Gi. In addition, label O is defined to represent a block that does not belong to any general information types. With these positional information labels, general information can be obtained. For instance, if the label sequence Q for 501 a resume with 10 paragraphs is Q=(t1, G1-B), (t2, G1-M) , (t3, G2-B) , (t4, G2-M) , (t5, G2-M) , (t6, O) , (t7, O) , (t8, G3-B) , (t9, G3-M) , (t10, G3-M), three types of general information can be extracted as follows: G1:[t1, t2], G2:[t3, t4, t5], G3:[t8, t9, t10]. Formally, given a resume T=t1,t2,…,tn, seek a label sequence L*=l1,l2,…,ln, such that the probability of the sequence of labels is maximal. 
L* = argmax_L P(L|T)    (1)

According to Bayes' rule, we have

L* = argmax_L P(T|L) × P(L)    (2)

If we assume that blocks labelled with the same information type occur independently, we have

P(T|L) = ∏_{i=1}^{n} P(t_i | l_i)    (3)

We assume the independence of the words occurring in t_i and use a unigram model, which multiplies the probabilities of these words to obtain the probability of t_i:

P(t_i | l_i) = ∏_{r=1}^{m} P(w_r | l_i), where t_i = {w_1, w_2, ..., w_m}    (4)

If a tri-gram model is used to estimate P(L), we have

P(L) = P(l_1) P(l_2 | l_1) ∏_{i=3}^{n} P(l_i | l_{i-1}, l_{i-2})    (5)

To extract educational detailed information from the Education general information, we use another HMM. It also uses two labels, Di-B and Di-M, to represent the beginning and remaining part of Di, respectively. In addition, we use the label O to represent a word that does not belong to any kind of educational detailed information. This model, however, expresses a text T as a word sequence T=w1,w2,…,wn. Thus in this model, the probability P(L) is calculated with Formula 5 and the probability P(T|L) is calculated by

P(T|L) = ∏_{i=1}^{n} P(w_i | l_i)    (6)

Here we assume the independent occurrence of words labelled with the same information type. 3.1.2 Parameter Estimation Both words and named entities are used as features in our HMMs. A Chinese resume C=c1',c2',…,ck' is first tokenized into C=w1,w2,…,wk with the Chinese word segmentation system LSP (Gao et al., 2003). This system outputs predefined features, including words and named entities of 8 types (Name, Date, Location, Organization, Phone, Number, Period, and Email). Named entities of the same type are normalized into a single ID in the feature set. In both HMMs, a fully connected structure with one state representing one information label is applied for its convenience. To estimate the probabilities introduced in 3.1.1, maximum likelihood estimation is used:

P(l_i | l_{i-1}, l_{i-2}) = count(l_{i-2}, l_{i-1}, l_i) / count(l_{i-2}, l_{i-1})    (7)

P(l_i | l_{i-1}) = count(l_{i-1}, l_i) / count(l_{i-1})    (8)

P(w_r | l_i) = count(w_r, l_i) / ∑_{r=1}^{m} count(w_r, l_i), where state i contains m distinct words    (9)

3.1.3 Smoothing A shortage of training data for probability estimation is a major problem for HMMs. Such problems may occur when estimating either P(T|L) with an unknown word wi or P(L) with unknown events. Bikel et al. (1999) mapped all unknown words to one token _UNK_ and then used held-out data to train the bi-gram models where unknown words occur. They also applied a back-off strategy to the data sparseness problem when estimating the context model with unknown events, which interpolates the estimate from the training corpus and the estimate from the back-off model with a calculated parameter λ (Bikel et al., 1999). Freitag and McCallum (1999) used shrinkage to estimate the emission probability of unknown words, which combines the estimates from data-sparse states of the complex model and the estimates in related data-rich states of simpler models with a weighted average. In our HMMs, we first apply Good-Turing smoothing (Gale, 1995) to estimate the probability P(wr|li) when training data is sparse. For a word wr seen in the training data, the emission probability is P(wr|li)×(1-x), where P(wr|li) is the emission probability calculated with Formula 9 and x=Ei/Si (Ei is the number of words appearing only once in state i and Si is the total number of words occurring in state i).
For an unknown word wr, the emission probability is x/(M-mi), where M is the number of all the words appearing in the training data and mi is the number of distinct words occurring in state i. Then we use a back-off schema (Katz, 1987) to deal with the data sparseness problem when estimating the probability P(L) (Gao et al., 2003). 3.2 SVM Model 3.2.1 Model Design We convert personal detailed information extraction into a classification problem. Here we select SVM as the classification model because of its robustness to over-fitting and its high performance (Sebastiani, 2002). In the SVM model, the IE task is also defined as labelling segmented units with predefined class labels. We again use two labels to represent personal detailed information Pi: Pi-B represents the beginning of Pi and Pi-M represents the remainder of Pi. In addition, the label O means that the corresponding unit does not belong to any personal detailed information type. For example, for the resume fragment "Name:Alice (Female)", we obtain three units after segmentation by punctuation, i.e. "Name", "Alice", "Female". After applying SVM classification, we get the label sequence P1-B, P1-M, P2-B. With this sequence of unit and label pairs, two types of personal detailed information can be extracted: P1: [Name:Alice] and P2: [Female]. Various ways can be applied to segment T. In our work, segmentation is based on the natural sentences of T. This is based on the empirical observation that detailed information is usually separated by punctuation (e.g. a comma, Tab tag or Enter tag). The extraction of personal detailed information can be formally expressed as follows: given a text T=t1,t2,…,tn, where ti is a unit defined by the segmenting method mentioned above, seek a label sequence L*=l1,l2,…,ln, such that the probability of the sequence of labels is maximal:

L* = argmax_L P(L|T)    (10)

The key assumption in applying classification to IE is the independence of label assignments between units. With this assumption, Formula 10 can be rewritten as

L* = argmax_{l_1, l_2, ..., l_n} ∏_{i=1}^{n} P(l_i | t_i)    (11)

Thus this probability can be maximized by maximizing each term in turn. Here, we use the SVM score of labelling ti with li in place of P(li|ti). 3.2.2 Multi-class Classification SVM is a binary classification model, but in our IE task it needs to classify units into N classes, where N is twice the number of personal detailed information types. There are two popular strategies for extending a binary classification task to N classes (A. Berger, 1999). The first is the One vs. All strategy, where N classifiers are built to separate one class from the others. The other is the Pairwise strategy, where N×(N-1)/2 classifiers considering all pairs of classes are built and the final decision is given by their weighted voting. In our model, we apply the One vs. All strategy for its good efficiency in classification. We construct one classifier for each type and classify each unit with all these classifiers. Then we select the type that has the highest classification score. If the selected score is higher than a predefined threshold, the unit is labelled with this type; otherwise it is labelled as O. 3.2.3 Feature Definition The features in our SVM model are defined as follows: Word: Words that occur in the unit. Each word appearing in the dictionary is a feature.
We use TF×IDF as the feature weight, where TF is the word frequency in the text and IDF is defined as:

IDF(w) = log_2(N / N_w)    (12)

where N is the total number of training examples and N_w is the total number of positive examples that contain the word w. Named Entity: Similar to the HMM models, the 8 types of named entities identified by LSP, i.e., Name, Date, Location, Organization, Phone, Number, Period and Email, are selected as binary features. If any one of these types appears in the text, the weight of that feature is 1; otherwise it is 0. 3.3 Block Selection Block selection is used to select the blocks generated by the first pass as the input of the second pass for detailed information extraction. Error analysis of preliminary experiments shows that the majority of the mistakes in general information extraction resulted from labelling non-boundary blocks as boundaries in the first pass. Therefore we apply a fuzzy block selection strategy, which selects not only the blocks labelled with the target general information but also their two neighboring blocks, so as to enlarge the extraction range.

Table 2. IE results with cascaded model and flat model (Avg.P / Avg.R / Avg.F, %). Personal Detailed Info (SVM): Flat 77.49 / 82.02 / 77.74; Cascaded 86.83 (+9.34) / 76.89 (-5.13) / 80.44 (+2.70). Educational Detailed Info (HMM): Flat 58.83 / 77.35 / 66.02; Cascaded 70.78 (+11.95) / 76.80 (-0.55) / 73.40 (+7.38).

4 Experiments and Analysis 4.1 Data and Experimental Setting We evaluated this cascaded hybrid model on 1,200 Chinese resumes. The data set was divided into 3 parts, training data, parameter tuning data and testing data, in the proportion 4:1:1. 6-fold cross validation was conducted in all the experiments. We selected SVMlight (Joachims, 1999) as the SVM classifier toolkit and LSP (Gao et al., 2003) for Chinese word segmentation and named entity identification. Precision (P), recall (R) and F-score (F=2PR/(P+R)) were used as the basic evaluation metrics, and a macro-averaging strategy was used to calculate the average results. For the particular application background of our resume IE model, the "Overlap" criterion (Lavelli et al., 2004) was used to match reference instances and extracted instances: if the proportion of the overlapping part of an extracted instance and a reference instance is over 90%, they are considered to match. A set of experiments was designed to verify the effectiveness of exploiting the document-level hierarchical structure of resumes and to choose the best IE model (HMM vs. classification) for each sub-task. • Cascaded model vs. flat model: Two flat models with different IE methods (SVM and HMM) are designed to extract personal detailed information and educational detailed information, respectively. In these models, no hierarchical structure is used and the detailed information is extracted from the entire resume text rather than from specific blocks. These two flat models are compared with our proposed cascaded model. • Model selection for different IE tasks: Both SVM and HMM are tested for all the IE tasks in the first pass and in the second pass. 4.2 Cascaded Model vs. Flat Model We tested the flat model and the cascaded model on detailed information extraction to verify the effectiveness of exploiting the document-level hierarchical structure. Results (see Table 2) show that with the cascaded model, the precision is greatly improved compared with the flat model using the identical IE method, especially for educational detailed information.
Although there is some loss in recall, the average F-score is still largely improved in the cascaded model. 4.3 Model Selection for Different IE Tasks Then we tested different models for the general information and detailed information to choose the most appropriate IE model for each sub-task. Model Avg.P (%) Avg.R (%) SVM 80.95 72.87 HMM 75.95 75.89 Table 3. General information extraction with different models. Personal Detailed Info Educational Detailed Info Model Avg.P (%) Avg.R (%) Avg.P (%) Avg.R (%) SVM 86.83 76.89 67.36 66.21 HMM 79.64 60.16 70.78 76.80 Table 4. Detailed information extraction with different models. Results (see Table 3) show that compared with SVM, HMM achieves better recall. In our cascaded framework, the extraction range of detailed information is influenced by the result of general information extraction. Thus better recall of general information leads to better recall of detailed information subsequently. For this reason, 504 we choose HMM in the first pass of our cascaded hybrid model. Then in the second pass, different IE models are tested in order to select the most appropriate one for different sub-tasks. Results (see Table 4) show that HMM performs much better in both precision and recall than SVM for educational detailed information extraction. We think that this is reasonable because HMM takes into account the sequence constraints among educational detailed information types. Therefore HMM model is selected to extract educational detailed information in our cascaded hybrid model. While for the personal detailed information extraction, we find that the SVM model gets better precision and recall than HMM model. We think that this is because of the independent occurrence of personal detailed information. Therefore, we select SVM to extract personal detailed information in our cascaded model. 5 Discussion Our cascaded framework is a “pipeline” approach and it may suffer from error propagation. For instance, the error in the first pass may be transferred to the second pass when determining the extraction range of detailed information. Therefore the precision and recall of detailed information extraction in the second pass may be decreased subsequently. But we are not sure whether N-Best approach (Zhai et al., 2004) would be helpful. Because our cascaded hybrid model applies different IE methods for different sub-tasks, it is difficult to incorporate the N-best strategy by either simply combining the scores of the first pass and the second pass, or using the scores of the second pass to do re-ranking to select the best results. Instead of using N-best, we apply a fuzzy block selection strategy to enlarge the search scope. Experimental results of personal detailed information extraction show that compared with the exact block selection strategy, this fuzzy strategy improves the average recall of personal detailed information from 68.48% to 71.34% and reduce the average precision from 83.27% to 81.71%. Therefore the average F-score is improved by the fuzzy strategy from 75.15% to 76.17%. Features are crucial to our SVM model. For some fields (such as Name, Address and Graduation School), only using words as features may result in low accuracy in IE. The named entity (NE) features used in our model enhance the accuracy of detailed information extraction. As exemplified by the results (see Table 5) on personal detailed information extraction, after adding named entity features, the F-score are improved greatly. 
Field Word +NE (%) Word (%) Name 90.22 3.11 Birthday 87.31 84.82 Address 67.76 49.16 Phone 81.57 75.31 Mobile 70.64 58.01 Email 88.76 85.96 Registered Residence 75.97 72.73 Residence 51.61 42.86 Graduation School 40.96 15.38 Degree 73.20 63.16 Major 63.09 43.24 Table 5. Personal detailed information extraction with different features (Avg.F). In our cascaded hybrid model, we apply HMM and SVM in different pass separately to explore the contextual structure of information types. It guarantees the simplicity of our hybrid model. However, there are other ways to combine statebased and discriminative ideas. For example, Peng and McCallum (2004) applied Conditional Random Fields to extract information, which draws together the advantages of both HMM and SVM. This approach could be considered in our future experiments. Some personal detailed information types do not achieve good average F-score in our model, such as Zip code (74.50%) and Mobile (73.90%). Error analysis shows that it is because these fields do not contain distinguishing words and named entities. For example, it is difficult to extract Mobile from the text “Phone: 010-62617711 (13859750123)”. But these fields can be easily distinguished with their internal characteristics. For example, Mobile often consists of certain length of digital figures. To identify these fields, the Finite-State Automaton (FSA) that employs hand-crafted grammars is very effective (Hsu and Chang, 1999). Alternatively, rules learned from annotated data are also very promising in handling this case (Ciravegna and Lavelli, 2004). We assume the independence of words occurring in unit ti to calculate the probability 505 P(ti|li) in HMM model. While in Bikel et al. (1999), a bi-gram model is applied where each word is conditioned on its immediate predecessor when generating words inside the current name-class. We will compare this method with our current method in the future. 6 Conclusions and Future Work We have shown that a cascaded hybrid model yields good results for the task of information extraction from resumes. We tested different models for the first pass and the second pass, and for different IE tasks. Our experimental results show that the HMM model is effective in handling the general information extraction and educational detailed information extraction, where there exists strong sequence of information pieces. And the SVM model is effective for the personal detailed information extraction. We hope to continue this work in the future by investigating the use of other well researched IE methods. As our future works, we will apply FSA or learned rules to improve the precision and recall of some personal detailed information (such as Zip code and Mobile). Other smoothing methods such as (Bikel et al. 1999) will be tested in order to better overcome the data sparseness problem. 7 Acknowledgements The authors wish to thank Dr. JianFeng Gao, Dr. Mu Li, Dr. Yajuan Lv for their help with the LSP tool, and Dr. Hang Li, Yunbo Cao for their valuable discussions on classification approaches. We are indebted to Dr. John Chen for his assistance to polish the English. We want also thank Long Jiang for his assistance to annotate the training and testing data. We also thank the three anonymous reviewers for their valuable comments. References A.Berger. Error-correcting output coding for text classification. 1999. In Proceedings of the IJCAI-99 Workshop on Machine Learning for Information Filtering. D.M.Bikel, R.Schwartz, R.M.Weischedel. 1999. 
An algorithm that learns what’s in a name. Machine Learning, 34(1):211-231. V.Borkar, K.Deshmukh and S.Sarawagi. 2001. Automatic segmentation of text into structured records. In Proceedings of ACM SIGMOD Conference. pp.175-186. F.Ciravegna, A.Lavelli. 2004. LearningPinocchio: adaptive information extraction for real world applications. Journal of Natural Language Engineering, 10(2):145-165. A.Finn and N.Kushmerick. 2004. Multi-level boundary classification for information extraction. In Proceedings of ECML04. P.Frasconi, G.Soda and A.Vullo. 2001. Text categorization for multi-page documents: a hybrid Naïve Bayes HMM approach. In Proceedings of the 1st ACM/IEEE-CS Joint Conference on Digital Libraries. pp.11-20. D.Freitag and A.McCallum. 1999. Information extraction with HMMs and shrinkage. In AAAI99 Workshop on Machine Learning for Information Extraction. pp.31-36. W.Gale. 1995. Good-Turing smoothing without tears. Journal of Quantitative Linguistics, 2:217-237. J.F.Gao, M.Li and C.N.Huang. 2003. Improved sourcechannel models for Chinese word segmentation. In Proceedings of ACL03. pp.272-279. C.N.Hsu and C.C.Chang. 1999. Finite-state transducers for semi-structured text mining. In Proceedings of IJCAI99 Workshop on Text Mining: Foundations, Techniques and Applications. pp.38-49. T.Joachims. 1999. Making large-scale SVM learning practical. Advances in Kernel Methods - Support Vector Learning. MIT-Press. S.M.Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE ASSP, 35(3):400-401. N.Kushmerick, E.Johnston and S.McGuinness. 2001. Information extraction by text classification. In IJCAI01 Workshop on Adaptive Text Extraction and Mining. A.Lavelli, M.E.Califf, F.Ciravegna, D.Freitag, C.Giuliano, N.Kushmerick and L.Romano. 2004. A critical survey of the methodology for IE evaluation. In Proceedings of the 4th International Conference on Language Resources and Evaluation. F.Peng and A.McCallum. 2004. Accurate information extraction from research papers using conditional random fields. In Proceedings of HLT/NAACL-2004. pp.329-336. L.Peshkin and A.Pfeffer. 2003. Bayesian information extraction network. In Proceedings of IJCAI03. pp.421426. F.Sebastiani. 2002. Machine learning in automated text categorization. ACM Computing Surveys, 34(1):1-47. A.D.Sitter and W.Daelemans. 2003. Information extraction via double classification. In Proceedings of ATEM03. L.Zhai, P.Fung, R.Schwartz, M.Carpuat and D.Wu. 2004. Using N-best lists for named entity recognition from Chinese speech. In Proceedings of HLT/NAACL-2004. 506 | 2005 | 62 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 507–514, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Discriminative Syntactic Language Modeling for Speech Recognition Michael Collins MIT CSAIL [email protected] Brian Roark OGI/OHSU [email protected] Murat Saraclar Bogazici University [email protected] Abstract We describe a method for discriminative training of a language model that makes use of syntactic features. We follow a reranking approach, where a baseline recogniser is used to produce 1000-best output for each acoustic input, and a second “reranking” model is then used to choose an utterance from these 1000-best lists. The reranking model makes use of syntactic features together with a parameter estimation method that is based on the perceptron algorithm. We describe experiments on the Switchboard speech recognition task. The syntactic features provide an additional 0.3% reduction in test–set error rate beyond the model of (Roark et al., 2004a; Roark et al., 2004b) (significant at p < 0.001), which makes use of a discriminatively trained n-gram model, giving a total reduction of 1.2% over the baseline Switchboard system. 1 Introduction The predominant approach within language modeling for speech recognition has been to use an ngram language model, within the “source-channel” or “noisy-channel” paradigm. The language model assigns a probability Pl(w) to each string w in the language; the acoustic model assigns a conditional probability Pa(a|w) to each pair (a, w) where a is a sequence of acoustic vectors, and w is a string. For a given acoustic input a, the highest scoring string under the model is w∗= arg max w (β log Pl(w) + log Pa(a|w)) (1) where β > 0 is some value that reflects the relative importance of the language model; β is typically chosen by optimization on held-out data. In an n-gram language model, a Markov assumption is made, namely that each word depends only on the previous (n −1) words. The parameters of the language model are usually estimated from a large quantity of text data. See (Chen and Goodman, 1998) for an overview of estimation techniques for n-gram models. This paper describes a method for incorporating syntactic features into the language model, using discriminative parameter estimation techniques. We build on the work in Roark et al. (2004a; 2004b), which was summarized and extended in Roark et al. (2005). These papers used discriminative methods for n-gram language models. Our approach reranks the 1000-best output from the Switchboard recognizer of Ljolje et al. (2003).1 Each candidate string w is parsed using the statistical parser of Collins (1999) to give a parse tree T (w). Information from the parse tree is incorporated in the model using a feature-vector approach: we define Φ(a, w) to be a d-dimensional feature vector which in principle could track arbitrary features of the string w together with the acoustic input a. In this paper we restrict Φ(a, w) to only consider the string w and/or the parse tree T (w) for w. For example, Φ(a, w) might track counts of context-free rule productions in T (w), or bigram lexical dependencies within T (w). 
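As a minimal illustration of such a feature vector, the sketch below counts context-free rule productions in a candidate's parse tree and stores them in a sparse map; the nested-list tree encoding and the function names are assumptions made for this example, not part of the system described here.

```python
from collections import Counter

def rule_productions(tree):
    """Yield context-free rule strings such as 'S -> NP VP' from a candidate's
    parse tree, represented here as nested lists, e.g.
    ['S', ['NP', ['PRP', 'we']], ['VP', ['VBD', 'helped'], ...]]."""
    label, children = tree[0], tree[1:]
    if len(children) == 1 and isinstance(children[0], str):
        return  # pre-terminal (a POS tag over a word): no phrasal rule to emit
    yield label + " -> " + " ".join(child[0] for child in children)
    for child in children:
        yield from rule_productions(child)

def syntactic_feature_vector(tree):
    """Sparse representation of the syntactic part of Phi(a, w): feature name -> count.
    Other feature types (n-grams, lexical dependencies, and so on) would be added
    to the same sparse map in a full system."""
    return Counter(rule_productions(tree))
```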
The optimal string under our new model is defined as w∗ = arg max w (β log Pl(w) + ⟨¯α, Φ(a, w)⟩+ log Pa(a|w)) (2) where the arg max is taken over all strings in the 1000-best list, and where ¯α ∈Rd is a parameter vector specifying the “weight” for each feature in Φ (note that we define ⟨x, y⟩to be the inner, or dot 1Note that (Roark et al., 2004a; Roark et al., 2004b) give results for an n-gram approach on this data which makes use of both lattices and 1000-best lists. The results on 1000-best lists were very close to results on lattices for this domain, suggesting that the 1000-best approximation is a reasonable one. 507 product, between vectors x and y). For this paper, we train the parameter vector ¯α using the perceptron algorithm (Collins, 2004; Collins, 2002). The perceptron algorithm is a very fast training method, in practice requiring only a few passes over the training set, allowing for a detailed comparison of a wide variety of feature sets. A number of researchers have described work that incorporates syntactic language models into a speech recognizer. These methods have almost exclusively worked within the noisy channel paradigm, where the syntactic language model has the task of modeling a distribution over strings in the language, in a very similar way to traditional n-gram language models. The Structured Language Model (Chelba and Jelinek, 1998; Chelba and Jelinek, 2000; Chelba, 2000; Xu et al., 2002; Xu et al., 2003) makes use of an incremental shift-reduce parser to enable the probability of words to be conditioned on k previous c-commanding lexical heads, rather than simply on the previous k words. Incremental topdown and left-corner parsing (Roark, 2001a; Roark, 2001b) and head-driven parsing (Charniak, 2001) approaches have directly used generative PCFG models as language models. In the work of Wen Wang and Mary Harper (Wang and Harper, 2002; Wang, 2003; Wang et al., 2004), a constraint dependency grammar and a finite-state tagging model derived from that grammar were used to exploit syntactic dependencies. Our approach differs from previous work in a couple of important respects. First, through the featurevector representations Φ(a, w) we can essentially incorporate arbitrary sources of information from the string or parse tree into the model. We would argue that our method allows considerably more flexibility in terms of the choice of features in the model; in previous work features were incorporated in the model through modification of the underlying generative parsing or tagging model, and modifying a generative model is a rather indirect way of changing the features used by a model. In this respect, our approach is similar to that advocated in Rosenfeld et al. (2001), which used Maximum Entropy modeling to allow for the use of shallow syntactic features for language modeling. A second contrast between our work and previous work, including that of Rosenfeld et al. (2001), is in the use of discriminative parameter estimation techniques. The criterion we use to optimize the parameter vector ¯α is closely related to the end goal in speech recognition, i.e., word error rate. Previous work (Roark et al., 2004a; Roark et al., 2004b) has shown that discriminative methods within an ngram approach can lead to significant reductions in WER, in spite of the features being of the same type as the original language model. In this paper we extend this approach, by including syntactic features that were not in the baseline speech recognizer. 
This paper describe experiments using a variety of syntactic features within this approach. We tested the model on the Switchboard (SWB) domain, using the recognizer of Ljolje et al. (2003). The discriminative approach for n-gram modeling gave a 0.9% reduction in WER on this domain; the syntactic features we describe give a further 0.3% reduction. In the remainder of this paper, section 2 describes previous work, including the parameter estimation methods we use, and section 3 describes the featurevector representations of parse trees that we used in our experiments. Section 4 describes experiments using the approach. 2 Background 2.1 Previous Work Techniques for exploiting stochastic context-free grammars for language modeling have been explored for more than a decade. Early approaches included algorithms for efficiently calculating string prefix probabilities (Jelinek and Lafferty, 1991; Stolcke, 1995) and approaches to exploit such algorithms to produce n-gram models (Stolcke and Segal, 1994; Jurafsky et al., 1995). The work of Chelba and Jelinek (Chelba and Jelinek, 1998; Chelba and Jelinek, 2000; Chelba, 2000) involved the use of a shift-reduce parser trained on Penn treebank style annotations, that maintains a weighted set of parses as it traverses the string from left-to-right. Each word is predicted by each candidate parse in this set at the point when the word is shifted, and the conditional probability of the word given the previous words is taken as the weighted sum of the conditional probabilities provided by each parse. In this approach, the probability of a word is conditioned by the top two lexical heads on the stack of the par508 ticular parse. Enhancements in the feature set and improved parameter estimation techniques have extended this approach in recent years (Xu et al., 2002; Xu et al., 2003). Roark (2001a; 2001b) pursued a different derivation strategy from Chelba and Jelinek, and used the parse probabilities directly to calculate the string probabilities. This work made use of a left-to-right, top-down, beam-search parser, which exploits rich lexico-syntactic features from the left context of each derivation to condition derivation move probabilities, leading to a very peaked distribution. Rather than normalizing a prediction of the next word over the beam of candidates, as in Chelba and Jelinek, in this approach the string probability is derived by simply summing the probabilities of all derivations for that string in the beam. Other work on syntactic language modeling includes that of Charniak (2001), which made use of a non-incremental, head-driven statistical parser to produce string probabilities. In the work of Wen Wang and Mary Harper (Wang and Harper, 2002; Wang, 2003; Wang et al., 2004), a constraint dependency grammar and a finite-state tagging model derived from that grammar, were used to exploit syntactic dependencies. The processing advantages of the finite-state encoding of the model has allowed for the use of probabilities calculated off-line from this model to be used in the first pass of decoding, which has provided additional benefits. Finally, Och et al. (2004) use a reranking approach with syntactic information within a machine translation system. Rosenfeld et al. (2001) investigated the use of syntactic features in a Maximum Entropy approach. In their paper, they used a shallow parser to annotate base constituents, and derived features from sequences of base constituents. 
The features were indicator features that were either (1) exact matches between a set or sequence of base constituents with those annotated on the hypothesis transcription; or (2) tri-tag features from the constituent sequence. The generative model that resulted from their feature set resulted in only a very small improvement in either perplexity or word-error-rate. 2.2 Global Linear Models We follow the framework of Collins (2002; 2004), recently applied to language modeling in Roark et al. (2004a; 2004b). The model we propose consists of the following components: • GEN(a) is a set of candidate strings for an acoustic input a. In our case, GEN(a) is a set of 1000-best strings from a first-pass recognizer. • T (w) is the parse tree for string w. • Φ(a, w) ∈Rd is a feature-vector representation of an acoustic input a together with a string w. • ¯α ∈Rd is a parameter vector. • The output of the recognizer for an input a is defined as F(a) = argmax w∈GEN(a) ⟨Φ(a, w), ¯α⟩ (3) In principle, the feature vector Φ(a, w) could take into account any features of the acoustic input a together with the utterance w. In this paper we make a couple of restrictions. First, we define the first feature to be Φ1(a, w) = β log Pl(w) + log Pa(a|w) where Pl(w) and Pa(a|w) are language and acoustic model scores from the baseline speech recognizer. In our experiments we kept β fixed at the value used in the baseline recogniser. It can then be seen that our model is equivalent to the model in Eq. 2. Second, we restrict the remaining features Φ2(a, w) . . . Φd(a, w) to be sensitive to the string w alone.2 In this sense, the scope of this paper is limited to the language modeling problem. As one example, the language modeling features might take into account n-grams, for example through definitions such as Φ2(a, w) = Count of the the in w Previous work (Roark et al., 2004a; Roark et al., 2004b) considered features of this type. In this paper, we introduce syntactic features, which may be sensitive to the parse tree for w, for example Φ3(a, w) = Count of S →NP VP in T (w) where S →NP VP is a context-free rule production. Section 3 describes the full set of features used in the empirical results presented in this paper. 2Future work may consider features of the acoustic sequence a together with the string w, allowing the approach to be applied to acoustic modeling. 509 2.2.1 Parameter Estimation We now describe how the parameter vector ¯α is estimated from a set of training utterances. The training set consists of examples (ai, wi) for i = 1 . . . m, where ai is the i’th acoustic input, and wi is the transcription of this input. We briefly review the two training algorithms described in Roark et al. (2004b), the perceptron algorithm and global conditional log-linear models (GCLMs). Figure 1 shows the perceptron algorithm. It is an online algorithm, which makes several passes over the training set, updating the parameter vector after each training example. For a full description of the algorithm, see Collins (2004; 2002). A second parameter estimation method, which was used in (Roark et al., 2004b), is to optimize the log-likelihood under a log-linear model. Similar approaches have been described in Johnson et al. (1999) and Lafferty et al. (2001). The objective function used in optimizing the parameters is L(¯α) = X i log P(si|ai, ¯α) −C X j α2 j (4) where P(si|ai, ¯α) = e⟨Φ(ai,si),¯α⟩ P w∈GEN(ai) e⟨Φ(ai,w),¯α⟩. Here, each si is the member of GEN(ai) which has lowest WER with respect to the target transcription wi. 
The first term in L(¯α) is the log-likelihood of the training data under a conditional log-linear model. The second term is a regularization term which penalizes large parameter values. C is a constant that dictates the relative weighting given to the two terms. The optimal parameters are defined as ¯α∗= arg max ¯α L(¯α) We refer to these models as global conditional loglinear models (GCLMs). Each of these algorithms has advantages. A number of results—e.g., in Sha and Pereira (2003) and Roark et al. (2004b)—suggest that the GCLM approach leads to slightly higher accuracy than the perceptron training method. However the perceptron converges very quickly, often in just a few passes over the training set—in comparison GCLM’s can take tens or hundreds of gradient calculations before convergence. In addition, the perceptron can be used as an effective feature selection technique, in that Input: A parameter specifying the number of iterations over the training set, T. A value for the first parameter, α. A feature-vector representation Φ(a, w) ∈Rd. Training examples (ai, wi) for i = 1 . . . m. An n-best list GEN(ai) for each training utterance. We take si to be the member of GEN(ai) which has the lowest WER when compared to wi. Initialization: Set α1 = α, and αj = 0 for j = 2 . . . d. Algorithm: For t = 1 . . . T, i = 1 . . . m •Calculate yi = arg maxw∈GEN(ai) ⟨Φ(ai, w), ¯α⟩ • For j = 2 . . . m, set ¯αj = ¯αj + Φj(ai, si) − Φj(ai, yi) Output: Either the final parameters ¯α, or the averaged parameters ¯αavg defined as ¯αavg = P t,i ¯αt,i/mT where ¯αt,i is the parameter vector after training on the i’th training example on the t’th pass through the training data. Figure 1: The perceptron training algorithm. Following Roark et al. (2004a), the parameter α1 is set to be some constant α that is typically chosen through optimization over the development set. Recall that α1 dictates the weight given to the baseline recognizer score. at each training example it only increments features seen on si or yi, effectively ignoring all other features seen on members of GEN(ai). For example, in the experiments in Roark et al. (2004a), the perceptron converged in around 3 passes over the training set, while picking non-zero values for around 1.4 million n-gram features out of a possible 41 million n-gram features seen in the training set. For the present paper, to get a sense of the relative effectiveness of various kinds of syntactic features that can be derived from the output of a parser, we are reporting results using just the perceptron algorithm. This has allowed us to explore more of the potential feature space than we would have been able to do using the more costly GCLM estimation techniques. In future we plan to apply GLCM parameter estimation methods to the task. 3 Parse Tree Features We tagged each candidate transcription with (1) part-of-speech tags, using the tagger documented in Collins (2002); and (2) a full parse tree, using the parser documented in Collins (1999). The models for both of these were trained on the Switchboard 510 S NP PRP we VP VBD helped NP PRP her VP VB paint NP DT the NN house Figure 2: An example parse tree treebank, and applied to candidate transcriptions in both the training and test sets. Each transcription received one POS-tag annotation and one parse tree annotation, from which features were extracted. Figure 2 shows a Penn Treebank style parse tree that is of the sort produced by the parser. 
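To make the training procedure of Figure 1 above concrete before turning to the feature definitions, the following sketch implements the reranking perceptron update over n-best lists with sparse feature dictionaries. The data layout, the feature name reserved for the baseline recognizer score, and the omission of parameter averaging are simplifications assumed for this sketch.

```python
def score(weights, phi):
    """Inner product between a sparse weight map and a sparse feature map."""
    return sum(weights.get(f, 0.0) * v for f, v in phi.items())

def train_reranker(examples, alpha=1.0, T=3):
    """examples: list of (candidates, oracle) pairs, where candidates holds the
    sparse feature dicts Phi(a, w) for the n-best list GEN(a) and oracle is the
    index of the lowest-WER candidate s_i. The feature named "base" is assumed
    to hold the combined baseline recognizer score; its weight is fixed at alpha
    and never updated, mirroring Figure 1."""
    weights = {"base": alpha}
    for _ in range(T):
        for candidates, oracle in examples:
            best = max(range(len(candidates)),
                       key=lambda k: score(weights, candidates[k]))
            # If best == oracle the two updates below cancel, as in Figure 1.
            for f, v in candidates[oracle].items():   # promote the oracle's features
                if f != "base":
                    weights[f] = weights.get(f, 0.0) + v
            for f, v in candidates[best].items():     # demote the current best guess
                if f != "base":
                    weights[f] = weights.get(f, 0.0) - v
    return weights
```

At test time the averaged parameters of Figure 1 are usually preferred; averaging can be added by accumulating the weight vector after every example and dividing by the number of updates.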
Given such a structure, there is a tremendous amount of flexibility in selecting features. The first approach that we follow is to map each parse tree to sequences encoding part-of-speech (POS) decisions, and “shallow” parsing decisions. Similar representations have been used by (Rosenfeld et al., 2001; Wang and Harper, 2002). Figure 3 shows the sequential representations that we used. The first simply makes use of the POS tags for each word. The latter representations make use of sequences of non-terminals associated with lexical items. In 3(b), each word in the string is associated with the beginning or continuation of a shallow phrase or “chunk” in the tree. We include any non-terminals above the level of POS tags as potential chunks: a new “chunk” (VP, NP, PP etc.) begins whenever we see the initial word of the phrase dominated by the non-terminal. In 3(c), we show how POS tags can be added to these sequences. The final type of sequence mapping, shown in 3(d), makes a similar use of chunks, but preserves only the headword seen with each chunk.3 From these sequences of categories, various features can be extracted, to go along with the n-gram features used in the baseline. These include n-tag features, e.g. ti−2ti−1ti (where ti represents the 3It should be noted that for a very small percentage of hypotheses, the parser failed to return a full parse tree. At the end of every shallow tag or category sequence, a special end of sequence tag/word pair “</parse> </parse>”was emitted. In contrast, when a parse failed, the sequence consisted of solely “<noparse> <noparse>”. (a) we/PRP helped/VBD her/PRP paint/VB the/DT house/NN (b) we/NPb helped/VPb her/NPb paint/VPb the/NPb house/NPc (c) we/PRP-NPb helped/VBD-VPb her/PRP-NPb paint/VB-VPb the/DT-NPb house/NN-NPc (d) we/NP helped/VP her/NP paint/VP house/NP Figure 3: Sequences derived from a parse tree: (a) POS-tag sequence; (b) Shallow parse tag sequence—the superscripts b and c refer to the beginning and continuation of a phrase respectively; (c) Shallow parse tag plus POS tag sequence; and (d) Shallow category with lexical head sequence tag in position i); and composite tag/word features, e.g. tiwi (where wi represents the word in position i) or, more complicated configurations, such as ti−2ti−1wi−1tiwi. These features can be extracted from whatever sort of tag/word sequence we provide for feature extraction, e.g. POS-tag sequences or shallow parse tag sequences. One variant that we performed in feature extraction had to do with how speech repairs (identified as EDITED constituents in the Switchboard style parse trees) and filled pauses or interjections (labeled with the INTJ label) were dealt with. In the simplest version, these are simply treated like other constituents in the parse tree. However, these can disrupt what may be termed the intended sequence of syntactic categories in the utterance, so we also tried skipping these constituents when mapping from the parse tree to shallow parse sequences. The second set of features we employed made use of the full parse tree when extracting features. For this paper, we examined several features templates of this type. First, we considered context-free rule instances, extracted from each local node in the tree. Second, we considered features based on lexical heads within the tree. Let us first distinguish between POS-tags and non-POS non-terminal categories by calling these latter constituents NTs. 
For each constituent NT in the tree, there is an associated lexical head (HNT) and the POS-tag of that lexical head (HPNT). Two simple features are NT/HNT and NT/HPNT for every NT constituent in the tree. 511 Feature Examples from figure 2 (P,HCP,Ci,{+,-}{1,2},HP,HCi) (VP,VB,NP,1,paint,house) (S,VP,NP,-1,helped,we) (P,HCP,Ci,{+,-}{1,2},HP,HPCi) (VP,VB,NP,1,paint,NN) (S,VP,NP,-1,helped,PRP) (P,HCP,Ci,{+,-}{1,2},HPP,HCi) (VP,VB,NP,1,VB,house) (S,VP,NP,-1,VBD,we) (P,HCP,Ci,{+,-}{1,2},HPP,HPCi) (VP,VB,NP,1,VB,NN) (S,VP,NP,-1,VBD,PRP) Table 1: Examples of head-to-head features. The examples are derived from the tree in figure 2. Using the heads as identified in the parser, example features from the tree in figure 2 would be S/VBD, S/helped, NP/NN, and NP/house. Beyond these constituent/head features, we can look at the head-to-head dependencies of the sort used by the parser. Consider each local tree, consisting of a parent node (P), a head child (HCP), and k non-head children (C1 ...Ck). For each non-head child Ci, it is either to the left or right of HCP, and is either adjacent or non-adjacent to HCP. We denote these positional features as an integer, positive if to the right, negative if to the left, 1 if adjacent, and 2 if non-adjacent. Table 1 shows four head-to-head features that can be extracted for each non-head child Ci. These features include dependencies between pairs of lexical items, between a single lexical item and the part-of-speech of another item, and between pairs of part-of-speech tags in the parse. 4 Experiments The experimental set-up we use is very similar to that of Roark et al. (2004a; 2004b), and the extensions to that work in Roark et al. (2005). We make use of the Rich Transcription 2002 evaluation test set (rt02) as our development set, and use the Rich Transcription 2003 Spring evaluation CTS test set (rt03) as test set. The rt02 set consists of 6081 sentences (63804 words) and has three subsets: Switchboard 1, Switchboard 2, Switchboard Cellular. The rt03 set consists of 9050 sentences (76083 words) and has two subsets: Switchboard and Fisher. The training set consists of 297580 transcribed utterances (3297579 words)4. For each utterance, 4Note that Roark et al. (2004a; 2004b; 2005) used 20854 of these utterances (249774 words) as held out data. In this work we simply use the rt02 test set as held out and development data. a weighted word-lattice was produced, representing alternative transcriptions, from the ASR system. The baseline ASR system that we are comparing against then performed a rescoring pass on these first pass lattices, allowing for better silence modeling, and replaces the trigram language model score with a 6-gram model. 1000-best lists were then extracted from these lattices. For each candidate in the 1000best lists, we identified the number of edits (insertions, deletions or substitutions) for that candidate, relative to the “target” transcribed utterance. The oracle score for the 1000-best lists was 16.7%. To produce the word-lattices, each training utterance was processed by the baseline ASR system. In a naive approach, we would simply train the baseline system (i.e., an acoustic model and language model) on the entire training set, and then decode the training utterances with this system to produce lattices. We would then use these lattices with the perceptron algorithm. 
Unfortunately, this approach is likely to produce a set of training lattices that are very different from test lattices, in that they will have very low word-error rates, given that the lattice for each utterance was produced by a model that was trained on that utterance. To somewhat control for this, the training set was partitioned into 28 sets, and baseline Katz backoff trigram models were built for each set by including only transcripts from the other 27 sets. Lattices for each utterance were produced with an acoustic model that had been trained on the entire training set, but with a language model that was trained on the 27 data portions that did not include the current utterance. Since language models are generally far more prone to overtraining than standard acoustic models, this goes a long way toward making the training conditions similar to testing conditions. Similar procedures were used to train the parsing and tagging models for the training set, since the Switchboard treebank overlaps extensively with the ASR training utterances. Table 2 presents the word-error rates on rt02 and rt03 of the baseline ASR system, 1000-best perceptron and GCLM results from Roark et al. (2005) under this condition, and our 1000-best perceptron results. Note that our n-best result, using just ngram features, improves upon the perceptron result of (Roark et al., 2005) by 0.2 percent, putting us within 0.1 percent of their GCLM result for that 512 WER Trial rt02 rt03 ASR system output 37.1 36.4 Roark et al. (2005) perceptron 36.6 35.7 Roark et al. (2005) GCLM 36.3 35.4 n-gram perceptron 36.4 35.5 Table 2: Baseline word-error rates versus Roark et al. (2005) rt02 Trial WER ASR system output 37.1 n-gram perceptron 36.4 n-gram + POS (1) perceptron 36.1 n-gram + POS (1,2) perceptron 36.1 n-gram + POS (1,3) perceptron 36.1 Table 3: Use of POS-tag sequence derived features condition. (Note that the perceptron–trained n-gram features were trigrams (i.e., n = 3).) This is due to a larger training set being used in our experiments; we have added data that was used as held-out data in (Roark et al., 2005) to the training set that we use. The first additional features that we experimented with were POS-tag sequence derived features. Let ti and wi be the POS tag and word at position i, respectively. We experimented with the following three feature definitions: 1. (ti−2ti−1ti), (ti−1ti), (ti), (tiwi) 2. (ti−2ti−1wi) 3. (ti−2wi−2ti−1wi−1tiwi), (ti−2ti−1wi−1tiwi), (ti−1wi−1tiwi), (ti−1tiwi) Table 3 summarizes the results of these trials on the held out set. Using the simple features (number 1 above) yielded an improvement beyond just n-grams, but additional, more complicated features failed to yield additional improvements. Next, we considered features derived from shallow parsing sequences. Given the results from the POS-tag sequence derived features, for any given sequence, we simply use n-tag and tag/word features (number 1 above). The first sequence type from which we extracted features was the shallow parse tag sequence (S1), as shown in figure 3(b). Next, we tried the composite shallow/POS tag sequence (S2), as in figure 3(c). Finally, we tried extracting features from the shallow constituent sequence (S3), as shown in figure 3(d). 
When EDITED and rt02 Trial WER ASR system output 37.1 n-gram perceptron 36.4 n-gram + POS perceptron 36.1 n-gram + POS + S1 perceptron 36.1 n-gram + POS + S2 perceptron 36.0 n-gram + POS + S3 perceptron 36.0 n-gram + POS + S3-E perceptron 36.0 n-gram + POS + CF perceptron 36.1 n-gram + POS + H2H perceptron 36.0 Table 4: Use of shallow parse sequence and full parse derived features INTJ nodes are ignored, we refer to this condition as S3-E. For full-parse feature extraction, we tried context-free rule features (CF) and head-to-head features (H2H), of the kind shown in table 1. Table 4 shows the results of these trials on rt02. Although the single digit precision in the table does not show it, the H2H trial, using features extracted from the full parses along with n-grams and POS-tag sequence features, was the best performing model on the held out data, so we selected it for application to the rt03 test data. This yielded 35.2% WER, a reduction of 0.3% absolute over what was achieved with just n-grams, which is significant at p < 0.001,5 reaching a total reduction of 1.2% over the baseline recognizer. 5 Conclusion The results presented in this paper are a first step in examining the potential utility of syntactic features for discriminative language modeling for speech recognition. We tried two possible sets of features derived from the full annotation, as well as a variety of possible feature sets derived from shallow parse and POS tag sequences, the best of which gave a small but significant improvement beyond what was provided by the n-gram features. Future work will include a further investigation of parser– derived features. In addition, we plan to explore the alternative parameter estimation methods described in (Roark et al., 2004a; Roark et al., 2004b), which were shown in this previous work to give further improvements over the perceptron. 5We use the Matched Pair Sentence Segment test for WER, a standard measure of significance, to calculate this p-value. 513 References Eugene Charniak. 2001. Immediate-head parsing for language models. In Proc. ACL. Ciprian Chelba and Frederick Jelinek. 1998. Exploiting syntactic structure for language modeling. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, pages 225–231. Ciprian Chelba and Frederick Jelinek. 2000. Structured language modeling. Computer Speech and Language, 14(4):283–332. Ciprian Chelba. 2000. Exploiting Syntactic Structure for Natural Language Modeling. Ph.D. thesis, The Johns Hopkins University. Stanley Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report, TR-10-98, Harvard University. Michael J. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proc. EMNLP, pages 1–8. Michael Collins. 2004. Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods. In Harry Bunt, John Carroll, and Giorgio Satta, editors, New Developments in Parsing Technology. Kluwer Academic Publishers, Dordrecht. Frederick Jelinek and John Lafferty. 1991. Computation of the probability of initial substring generation by stochastic context-free grammars. Computational Linguistics, 17(3):315–323. 
Mark Johnson, Stuart Geman, Steven Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic “unificationbased”grammars. In Proc. ACL, pages 535–541. Daniel Jurafsky, Chuck Wooters, Jonathan Segal, Andreas Stolcke, Eric Fosler, Gary Tajchman, and Nelson Morgan. 1995. Using a stochastic context-free grammar as a language model for speech recognition. In Proceedings of the IEEE Conference on Acoustics, Speech, and Signal Processing, pages 189–192. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML, pages 282–289, Williams College, Williamstown, MA, USA. Andrej Ljolje, Enrico Bocchieri, Michael Riley, Brian Roark, Murat Saraclar, and Izhak Shafran. 2003. The AT&T 1xRT CTS system. In Rich Transcription Workshop. Franz Josef Och, Daniel Gildea, Sanjeev Khudanpur, Anoop Sarkar, Kenji Yamada, Alex Fraser, Shankar Kumar, Libin Shen, David Smith, Katherine Eng, Viren Jain, Zhen Jin, and Dragomir Radev. 2004. A smorgasbord of features for statistical machine translation. In Proceedings of HLT-NAACL 2004. Brian Roark, Murat Saraclar, and Michael Collins. 2004a. Corrective language modeling for large vocabulary ASR with the perceptron algorithm. In Proc. ICASSP, pages 749–752. Brian Roark, Murat Saraclar, Michael Collins, and Mark Johnson. 2004b. Discriminative language modeling with conditional random fields and the perceptron algorithm. In Proc. ACL. Brian Roark, Murat Saraclar, and Michael Collins. 2005. Discriminative n-gram language modeling. Computer Speech and Language. submitted. Brian Roark. 2001a. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249– 276. Brian Roark. 2001b. Robust Probabilistic Predictive Syntactic Processing. Ph.D. thesis, Brown University. http://arXiv.org/abs/cs/0105019. Ronald Rosenfeld, Stanley Chen, and Xiaojin Zhu. 2001. Whole-sentence exponential language models: a vehicle for linguistic-statistical integration. In Computer Speech and Language. Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In Proceedings of the Human Language Technology Conference and Meeting of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), Edmonton, Canada. Andreas Stolcke and Jonathan Segal. 1994. Precise n-gram probabilities from stochastic context-free grammars. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 74–79. Andreas Stolcke. 1995. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2):165–202. Wen Wang and Mary P. Harper. 2002. The superARV language model: Investigating the effectiveness of tightly integrating multiple knowledge sources. In Proc. EMNLP, pages 238– 247. Wen Wang, Andreas Stolcke, and Mary P. Harper. 2004. The use of a linguistically motivated language model in conversational speech recognition. In Proc. ICASSP. Wen Wang. 2003. Statistical parsing and language modeling based on constraint dependency grammar. Ph.D. thesis, Purdue University. Peng Xu, Ciprian Chelba, and Frederick Jelinek. 2002. A study on richer syntactic dependencies for structured language modeling. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 191–198. Peng Xu, Ahmad Emami, and Frederick Jelinek. 2003. Training connectionist models for the structured language model. In Proc. 
EMNLP, pages 160–167. | 2005 | 63 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 515–522, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics A Phonotactic Language Model for Spoken Language Identification Haizhou Li and Bin Ma Institute for Infocomm Research Singapore 119613 {hli,mabin}@i2r.a-star.edu.sg Abstract We have established a phonotactic language model as the solution to spoken language identification (LID). In this framework, we define a single set of acoustic tokens to represent the acoustic activities in the world’s spoken languages. A voice tokenizer converts a spoken document into a text-like document of acoustic tokens. Thus a spoken document can be represented by a count vector of acoustic tokens and token n-grams in the vector space. We apply latent semantic analysis to the vectors, in the same way that it is applied in information retrieval, in order to capture salient phonotactics present in spoken documents. The vector space modeling of spoken utterances constitutes a paradigm shift in LID technology and has proven to be very successful. It presents a 12.4% error rate reduction over one of the best reported results on the 1996 NIST Language Recognition Evaluation database. 1 Introduction Spoken language and written language are similar in many ways. Therefore, much of the research in spoken language identification, LID, has been inspired by text-categorization methodology. Both text and voice are generated from language dependent vocabulary. For example, both can be seen as stochastic time-sequences corrupted by a channel noise. The n-gram language model has achieved equal amounts of success in both tasks, e.g. n-character slice for text categorization by language (Cavnar and Trenkle, 1994) and Phone Recognition followed by n-gram Language Modeling, or PRLM (Zissman, 1996) . Orthographic forms of language, ranging from Latin alphabet to Cyrillic script to Chinese characters, are far more unique to the language than their phonetic counterparts. From the speech production point of view, thousands of spoken languages from all over the world are phonetically articulated using only a few hundred distinctive sounds or phonemes (Hieronymus, 1994). In other words, common sounds are shared considerably across different spoken languages. In addition, spoken documents1, in the form of digitized wave files, are far less structured than written documents and need to be treated with techniques that go beyond the bounds of written language. All of this makes the identification of spoken language based on phonetic units much more challenging than the identification of written language. In fact, the challenge of LID is inter-disciplinary, involving digital signal processing, speech recognition and natural language processing. In general, a LID system usually has three fundamental components as follows: 1) A voice tokenizer which segments incoming voice feature frames and associates the segments with acoustic or phonetic labels, called tokens; 2) A statistical language model which captures language dependent phonetic and phonotactic information from the sequences of tokens; 3) A language classifier which identifies the language based on discriminatory characteristics of acoustic score from the voice tokenizer and phonotactic score from the language model. In this paper, we present a novel solution to the three problems, focusing on the second and third problems from a computational linguistic perspective. 
The paper is organized as follows: In Section 2, we summarize relevant existing approaches to the LID task. We highlight the shortcomings of existing approaches and our attempts to address the 1 A spoken utterance is regarded as a spoken document in this paper. 515 issues. In Section 3 we propose the bag-of-sounds paradigm to turn the LID task into a typical text categorization problem. In Section 4, we study the effects of different settings in experiments on the 1996 NIST Language Recognition Evaluation (LRE) database2. In Section 5, we conclude our study and discuss future work. 2 Related Work Formal evaluations conducted by the National Institute of Science and Technology (NIST) in recent years demonstrated that the most successful approach to LID used the phonotactic content of the voice signal to discriminate between a set of languages (Singer et al., 2003). We briefly discuss previous work cast in the formalism mentioned above: tokenization, statistical language modeling, and language identification. A typical LID system is illustrated in Figure 1 (Zissman, 1996), where language dependent voice tokenizers (VT) and language models (LM) are deployed in the Parallel PRLM architecture, or P-PRLM. Figure 1. L monolingual phoneme recognition front-ends are used in parallel to tokenize the input utterance, which is analyzed by LMs to predict the spoken language 2.1 Voice Tokenization A voice tokenizer is a speech recognizer that converts a spoken document into a sequence of tokens. As illustrated in Figure 2, a token can be of different sizes, ranging from a speech feature frame, to a phoneme, to a lexical word. A token is defined to describe a distinct acoustic/phonetic activity. In early research, low level spectral 2 http://www.nist.gov/speech/tests/ frames, which are assumed to be independent of each other, were used as a set of prototypical spectra for each language (Sugiyama, 1991). By adopting hidden Markov models, people moved beyond low-level spectral analysis towards modeling a frame sequence into a larger unit such as a phoneme and even a lexical word. Since the lexical word is language specific, the phoneme becomes the natural choice when building a language-independent voice tokenization front-end. Previous studies show that parallel language-dependent phoneme tokenizers effectively serve as the tokenization front-ends with P-PRLM being the typical example. However, a languageindependent phoneme set has not been explored yet experimentally. In this paper, we would like to explore the potential of voice tokenization using a unified phoneme set. Figure 2 Tokenization at different resolutions 2.2 n-gram Language Model With the sequence of tokens, we are able to estimate an n-gram language model (LM) from the statistics. It is generally agreed that phonotactics, i.e. the rules governing the phone/phonemes sequences admissible in a language, carry more language discriminative information than the phonemes themselves. An n-gram LM over the tokens describes well n-local phonotactics among neighboring tokens. While some systems model the phonotactics at the frame level (TorresCarrasquillo et al., 2002), others have proposed PPRLM. The latter has become one of the most promising solutions so far (Zissman, 1996). A variety of cues can be used by humans and machines to distinguish one language from another. These cues include phonology, prosody, morphology, and syntax in the context of an utterance. 
VT-1: Chinese VT-2: English VT-L: French LM-L: French LM-1 … LM-L LM-L: French LM-1 … LM-L LM-L: French LM-1 … LM-L language classifier spoken utterance hypothesized language word phoneme frame 516 However, global phonotactic cues at the level of utterance or spoken document remains unexplored in previous work. In this paper, we pay special attention to it. A spoken language always contains a set of high frequency function words, prefixes, and suffixes, which are realized as phonetic token substrings in the spoken document. Individually, those substrings may be shared across languages. However, the pattern of their co-occurrences discriminates one language from another. Perceptual experiments have shown (Muthusamy, 1994) that with adequate training, human listeners’ language identification ability increases when given longer excerpts of speech. Experiments have also shown that increased exposure to each language and longer training sessions improve listeners’ language identification performance. Although it is not entirely clear how human listeners make use of the high-order phonotactic/prosodic cues present in longer spans of a spoken document, strong evidence shows that phonotactics over larger context provides valuable LID cues beyond n-gram, which will be further attested by our experiments in Section 4. 2.3 Language Classifier The task of a language classifier is to make good use of the LID cues that are encoded in the model lλ to hypothesize from among L languages, Λ , as the one that is actually spoken in a spoken document O. The LID model ˆl lλ in PPRLM refers to extracted information from acoustic model and n-gram LM for language l. We have and { , AM } L LM l l l λ λ λ = ( 1,..., ) l l λ ∈Λ = . A maximum-likelihood classifier can be formulated as follows: ( ) ( ˆ argmax ( / ) argmax / , / l l AM LM l l l T l P O P O T P T λ λ λ ∈Λ ∈Λ ∈Γ = ≈ ∑ ) ) (1) The exact computation in Eq.(1) involves summing over all possible decoding of token sequences T given O. In many implementations, it is approximated by the maximum over all sequences in the sum by finding the most likely token sequence, , for each language l, using the Viterbi algorithm: ∈Γ ˆ lT ( ) ( ˆ ˆ ˆ argmax[ / , / ] AM LM l l l l l l P O T P T λ λ ∈Λ ≈ (2) Intuitively, individual sounds are heavily shared among different spoken languages due to the common speech production mechanism of humans. Thus, the acoustic score has little language discriminative ability. Many experiments (Yan and Barnard, 1995; Zissman, 1996) have further attested that the n-gram LM score provides more language discriminative information than their acoustic counterparts. In Figure 1, the decoding of voice tokenization is governed by the acoustic model AM lλ to arrive at an acoustic score ( ) ˆ / , AM l l P O T λ and a token sequence . The ngram LM derives the n-local phonotactic score ˆ lT ( ) ˆ / LM l l P T λ from the language model LM lλ . Clearly, the n-gram LM suffers the major shortcoming of having not exploited the global phonotactics in the larger context of a spoken utterance. Speech recognition researchers have so far chosen to only use n-gram local statistics for primarily pragmatic reasons, as this n-gram is easier to attain. In this work, a language independent voice tokenization front-end is proposed, that uses a unified acoustic model AM λ instead of multiple language dependent acoustic models AM lλ . The n-gram LM LM lλ is generalized to model both local and global phonotactics. 
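To make Eqs. (1)–(2) concrete when only the phonotactic LM score is used, classification reduces to picking the language whose token LM assigns the tokenized utterance the highest probability. The following is a minimal sketch with add-one smoothed token bigram LMs; the smoothing choice and the made-up token strings are simplifying assumptions, not what a deployed system would use.

# Minimal sketch of PRLM-style classification using only the phonotactic LM
# score from Eq. (2): one token-bigram LM per language, and the test utterance
# is assigned to the language whose LM gives it the highest log-probability.
# Token sequences are assumed to come from a (unified) voice tokenizer.
import math
from collections import defaultdict

def train_bigram_lm(token_sequences):
    unigram = defaultdict(int)
    bigram = defaultdict(int)
    for seq in token_sequences:
        seq = ["<s>"] + list(seq) + ["</s>"]
        for a, b in zip(seq[:-1], seq[1:]):
            unigram[a] += 1
            bigram[(a, b)] += 1
    vocab_size = len(set(unigram) | {"</s>"})
    return unigram, bigram, vocab_size

def logprob(tokens, lm):
    unigram, bigram, v = lm
    seq = ["<s>"] + list(tokens) + ["</s>"]
    lp = 0.0
    for a, b in zip(seq[:-1], seq[1:]):
        lp += math.log((bigram[(a, b)] + 1) / (unigram[a] + v))  # add-one smoothing
    return lp

def classify(tokens, lms):
    return max(lms, key=lambda lang: logprob(tokens, lms[lang]))

# Toy example with made-up acoustic token strings for two "languages".
train = {
    "L1": [list("ababcab"), list("abcabab")],
    "L2": [list("ccacbcc"), list("cbccacc")],
}
lms = {lang: train_bigram_lm(seqs) for lang, seqs in train.items()}
print(classify(list("ababab"), lms))   # expected: L1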
3 Bag-of-Sounds Paradigm The bag-of-sounds concept is analogous to the bag-of-words paradigm originally formulated in the context of information retrieval (IR) and text categorization (TC) (Salton 1971; Berry et al., 1995; Chu-Caroll and Carpenter, 1999). One focus of IR is to extract informative features for document representation. The bag-of-words paradigm represents a document as a vector of counts. It is believed that it is not just the words, but also the co-occurrence of words that distinguish semantic domains of text documents. Similarly, it is generally believed in LID that, although the sounds of different spoken languages overlap considerably, the phonotactics differentiates one language from another. Therefore, one can easily draw the analogy between an acoustic token in bag-of-sounds and a word in bag-of-words. Unlike words in a text document, the phonotactic information that distinguishes spoken languages is 517 concealed in the sound waves of spoken languages. After transcribing a spoken document into a text like document of tokens, many IR or TC techniques can then be readily applied. It is beyond the scope of this paper to discuss what would be a good voice tokenizer. We adopt phoneme size language-independent acoustic tokens to form a unified acoustic vocabulary in our voice tokenizer. Readers are referred to (Ma et al., 2005) for details of acoustic modeling. 3.1 Vector Space Modeling In human languages, some words invariably occur more frequently than others. One of the most common ways of expressing this idea is known as Zipf’s Law (Zipf, 1949). This law states that there is always a set of words which dominates most of the other words of the language in terms of their frequency of use. This is true both of written words and of spoken words. The short-term, or local phonotactics, is devised to describe Zipf’s Law. The local phonotactic constraints can be typically described by the token n-grams, or phoneme n-grams as in (Ng et al., 2000), which represents short-term statistics such as lexical constraints. Suppose that we have a token sequence, t1 t2 t3 t4. We derive the unigram statistics from the token sequence itself. We derive the bigram statistics from t1(t2) t2(t3) t3(t4) t4(#) where the token vocabulary is expanded over the token’s right context. Similarly, we derive the trigram statistics from the t1(#,t2) t2(t1,t3) t3(t2,t4) t4(t3,#) to account for left and right contexts. The # sign is a place holder for free context. In the interest of manageability, we propose to use up to token trigram. In this way, for an acoustic system of Y tokens, we have potentially bigram and Y trigram in the vocabulary. 2 Y 3 Meanwhile, motivated by the ideas of having both short-term and long-term phonotactic statistics, we propose to derive global phonotactics information to account for long-term phonotactics: The global phonotactic constraint is the highorder statistics of n-grams. It represents document level long-term phonotactics such as cooccurrences of n-grams. By representing a spoken document as a count vector of n-grams, also called bag-of-sounds vector, it is possible to explore the relations and higher-order statistics among the diverse n-grams through latent semantic analysis (LSA). It is often advantageous to weight the raw counts to refine the contribution of each n-gram to LID. We begin by normalizing the vectors representing the spoken document by making each vector of unit length. 
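A minimal sketch of this vector construction follows, with the context-expanded bigram/trigram tokens written as strings and the first weighting step (unit-length normalization) applied; the string encoding of the expanded tokens is an illustrative assumption.

# Sketch of turning a token sequence into a bag-of-sounds count vector of
# unigrams, right-context "bigram" tokens t_i(t_{i+1}), and left/right-context
# "trigram" tokens t_i(t_{i-1}, t_{i+1}), followed by unit-length normalization.
import math
from collections import Counter

def bag_of_sounds(tokens):
    counts = Counter()
    n = len(tokens)
    for i, t in enumerate(tokens):
        left = tokens[i - 1] if i > 0 else "#"
        right = tokens[i + 1] if i < n - 1 else "#"
        counts[t] += 1                        # unigram
        counts[f"{t}({right})"] += 1          # bigram: token plus right context
        counts[f"{t}({left},{right})"] += 1   # trigram: token plus both contexts
    return counts

def unit_normalize(counts):
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {k: c / norm for k, c in counts.items()}

vec = unit_normalize(bag_of_sounds(["t1", "t2", "t3", "t4"]))
print(sorted(vec)[:6])  # e.g. entries for "t2", "t2(t3)", and "t2(t1,t3)"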
Our second weighting is based on the notion that an n-gram that only occurs in a few languages is more discriminative than an ngram that occurs in nearly every document. We use the inverse-document frequency (idf) weighting scheme (Spark Jones, 1972), in which a word is weighted inversely to the number of documents in which it occurs, by means of ( ) log / ( ) idf w D d w = , where w is a word in the vocabulary of W token n-grams. D is the total number of documents in the training corpus from L languages. Since each language has at least one document in the training corpus, we have D L ≥ . is the number of documents containing the word w. Letting be the count of word w in document d, we have the weighted count as ( ) d w , w d c 2 1/ 2 , , , 1 ( )/( ) w d w d w d w W c c idf w c ′ ′ ≤ ≤ ′ = × ∑ (3) and a vector to represent document d. A corpus is then represented by a term-document matrix 1, 2, , { , ,..., }T d d d W d c c c c ′ ′ ′ = 1 2 { , ,..., } D H c c c = of W D × . 3.2 Latent Semantic Analysis The fundamental idea in LSA is to reduce the dimension of a document vector, W to Q, where Q W << and Q D << , by projecting the problem into the space spanned by the rows of the closest rank-Q matrix to H in the Frobenius norm (Deerwester et al, 1990). Through singular value decomposition (SVD) of H, we construct a modified matrix HQ from the Q-largest singular values: T Q Q Q Q H U S V = (4) Q U is a W Q × left singular matrix with rows ,1 w u w W ≤ ≤ Q S ; is a Q Q × diagonal matrix of Qlargest singular values of H; is Q V D Q × right singular matrix with rows , 1 . dv d D ≤ ≤ With the SVD, we project the D document vectors in H into a reduced space , referred to as Q-space in the rest of this paper. A test document of unknown language ID is mapped to a pseudo-document in the Q-space by matrix Q V pc pv Q U 518 1 T p p p Q c v c U S − → = Q (5) After SVD, it is straightforward to arrive at a natural metric for the closeness between two spoken documents and in Q-space instead of their original W-dimensional space and . iv jv ic jc ( , ) cos( , ) || || || || T i j i j i j i j v v g c c v v v v ⋅ ≈ = ⋅ (6) ( , ) i j g c c indicates the similarity between two vectors, which can be transformed to a distance measure . 1 ( , ) cos ( , ) i j i j k c c g c c − = In the forced-choice classification, a test document, supposedly monolingual, is classified into one of the L languages. Note that the test document is unknown to the H matrix. We assume consistency between the test document’s intrinsic phonotactic pattern and one of the D patterns, that is extracted from the training data and is presented in the H matrix, so that the SVD matrices still apply to the test document, and Eq.(5) still holds for dimension reduction. 3.3 Bag-of-Sounds Language Classifier The bag-of-sounds phonotactic LM benefits from several properties of vector space modeling and LSA. 1) It allows for representing a spoken document as a vector of n-gram features, such as unigram, bigram, trigram, and the mixture of them; 2) It provides a well-defined distance metric for measurement of phonotactic distance between spoken documents; 3) It processes spoken documents in a lower dimensional Q-space, that makes the bag-ofsounds phonotactic language modeling, LM lλ , and classification computationally manageable. Suppose we have only one prototypical vector and its projection in the Q-space to represent language l. 
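Before turning to the classifiers, the weighting and LSA steps of Eqs. (3)–(6) can be sketched end to end with numpy; the toy term-document matrix and the choice Q = 2 are illustrative, and the code is a sketch rather than the exact implementation used in the experiments.

# Sketch of Eqs. (3)-(6): idf-weight the term-document matrix, take a rank-Q
# SVD, project a new (pseudo-)document into the Q-space, and compare documents
# by cosine similarity.
import numpy as np

def idf_weight(H):
    # H is W x D: raw counts of token n-grams (rows) in documents (columns).
    D = H.shape[1]
    df = np.count_nonzero(H > 0, axis=1)               # d(w): docs containing w
    idf = np.log(D / np.maximum(df, 1))
    Hw = H * idf[:, None]
    norms = np.linalg.norm(Hw, axis=0, keepdims=True)  # unit-length columns
    return Hw / np.maximum(norms, 1e-12), idf

def lsa(Hw, Q):
    U, s, Vt = np.linalg.svd(Hw, full_matrices=False)
    return U[:, :Q], s[:Q], Vt[:Q].T                   # U_Q, diag of S_Q, V_Q

def project(c_p, U_Q, s_Q):
    # v_p = c_p^T U_Q S_Q^{-1}  (Eq. 5), for an idf-weighted count vector c_p.
    return (c_p @ U_Q) / s_Q

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 5-term x 4-document matrix.
H = np.array([[3, 0, 1, 0],
              [2, 0, 2, 0],
              [0, 4, 0, 3],
              [0, 1, 0, 2],
              [1, 1, 1, 1]], dtype=float)
Hw, idf = idf_weight(H)
U_Q, s_Q, V_Q = lsa(Hw, Q=2)
test = np.array([2, 1, 0, 0, 1], dtype=float) * idf
test /= np.linalg.norm(test)
v_p = project(test, U_Q, s_Q)
print([round(cosine(v_p, V_Q[d]), 3) for d in range(4)])  # highest for documents 0 and 2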
Applying LSA to the term-document matrix lc lv : H W L × , a minimum distance classifier is formulated: ˆ argmin ( , ) p l l l k v ∈Λ = v (7) In Eq.(7), is the Q-space projection of , a test document. pv pc Apparently, it is very restrictive for each language to have just one prototypical vector, also referred to as a centroid. The pattern of language distribution is inherently multi-modal, so it is unlikely well fitted by a single vector. One solution to this problem is to span the language space with multiple vectors. Applying LSA to a termdocument matrix : H W L′ × , where L L assuming each language l is represented by a set of M vectors, M ′ = × l Φ , a new classifier, using k-nearest neighboring rule (Duda and Hart, 1973) , is formulated, named k-nearest classifier (KNC): ˆ argmin ( , ) l p l l l l k φ ′ ∈Λ ′∈ = v v ∑ (8) where lφ is the set of k-nearest-neighbor to and pv l l φ ⊂Φ . Among many ways to derive the M centroid vectors, here is one option. Suppose that we have a set of training documents Dl for language l , as subset of corpus Ω, and . To derive the M vectors, we choose to carry out vector quantization (VQ) to partition D l D ⊂Ω 1 L l l D = ∪ = Ω l l into M cells Dl,m in the Q-space such that 1 , M m l m D D = ∪ = using similarity metric Eq.(6). All the documents in each cell ,l m D can then be merged to form a super-document, which is further projected into a Q-space vector . This results in M prototypical centroids . Using KNC, a test vector is compared with M vectors to arrive at the k-nearest neighbors for each language, which can be computationally expensive when M is large. ,l m v , ( 1,... l m l ) M ∈Φ v m = Alternatively, one can account for multi-modal distribution through finite mixture model. A mixture model is to represent the M discrete components with soft combination. To extend the KNC into a statistical framework, it is necessary to map our distance metric Eq.(6) into a probability measure. One way is for the distance measure to induce a family of exponential distributions with pertinent marginality constraints. In practice, what we need is a reasonable probability distribution, which sums to one, to act as a lookup table for the distance measure. We here choose to use the empirical multivariate distribution constructed by allocating the total probability mass in proportion to the distances observed with the training data. In short, this reduces the task to a histogram normalization. In this way, we map the distance to a conditional probability distribution ( , ) i j k c c ( | ) i j p v v 519 subject to . Now that we are in the probability domain, techniques such as mixture smoothing can be readily applied to model a language class with finer fitting. | | 1 ( | ) 1 i j i p v v Ω = = ∑ Let’s re-visit the task of L language forcedchoice classification. Similar to KNC, suppose we have M centroids in the Qspace for each language l. Each centroid represents a class. The class conditional probability can be described as a linear combination of , ( 1,... ) l m l v m ∈Φ = M , ( | ) i l m p v v : , 1 ( | ) ( ) ( | ) M LM i l l m i l m m , p v p v p v λ = =∑ v ) (9) the probability , ( l m p v , functionally serves as a mixture weight of , ( | ) i l m p v v . Together with a set of centroids , , ( 1,... l m l v m ) ∈Φ = , ( | ) i l m M p v v ) and , ( l m p v define a mixture model LM lλ . 
, ( | ) i l m p v v is estimated by histogram normalization and , ( l m) p v is estimated under the maximum likelihood criteria, , , ( ) / l m m l l p v C = C , where C is total number of documents in D l l, of which C documents fall into the cell m. , m l An Expectation-Maximization iterative process can be devised for training of LM lλ to maximize the likelihood Eq.(9) over the entire training corpus: | | 1 1 ( | ) ( | ) l D L LM d l l d p p v λ = = ΩΛ =∏∏ (10) Using the phonotactic LM score ( ) ˆ / LM l l P T for classification, with T being represented by the bag-of-sounds vector v , Eq.(2) can be reformulated as Eq.(11), named mixture-model classifier (MMC): λ ˆ l p , , 1 ˆ argmax ( | ) argmax ( ) ( | ) LM p l l M l m p l m l m l p v p v p v v λ ∈Λ ∈Λ = = = ∑ (11) To establish fair comparison with P-PRLM, as shown in Figure 3, we devise our bag-of-sounds classifier to solely use the LM score ( ) ˆ / LM l l P T λ for classification decision whereas the acoustic score ( ) ˆ / , AM l l P O may potentially help as reported in (Singer et al., 2003). T λ Figure 3. A bag-of-sounds classifier. A unified front-end followed by L parallel bag-of-sounds phonotactic LMs. 4 Experiments This section will experimentally analyze the performance of the proposed bag-of-sounds framework using the 1996 NIST Language Recognition Evaluation (LRE) data. The database was intended to establish a baseline of performance capability for language recognition of conversational telephone speech. The database contains recorded speech of 12 languages: Arabic, English, Farsi, French, German, Hindi, Japanese, Korean, Mandarin, Spanish, Tamil and Vietnamese. We use the training set and development set from LDC CallFriend corpus3 as the training data. Each conversation is segmented into overlapping sessions of about 30 seconds each, resulting in about 12,000 sessions for each language. The evaluation set consists of 1,492 30-sec sessions, each distributed among the various languages of interest. We treat a 30-sec session as a spoken document in both training and testing. We report error rates (ER) of the 1,492 test trials. 4.1 Effect of Acoustic Vocabulary The choice of n-gram affects the performance of LID systems. Here we would like to see how a better choice of acoustic vocabulary can help convert a spoken document into a phonotactically discriminative space. There are two parameters that determine the acoustic vocabulary: the choice of acoustic token, and the choice of n-grams. In this paper, the former concerns the size of an acoustic system Y in the unified front-end. It is studied in more details in (Ma et al., 2005). We set Y to 32 in 3 See http://www.ldc.upenn.edu/. The overlap between 1996 NIST evaluation data and CallFriend database has been removed from training data as suggested in the 2003 NIST LRE website http://www.nist.gov/speech/tests/index.htm LM lλ LM-L: French Unified VT 1 LM λ LM-1: Chinese 2 LM λ LM-2: English Language Classifier spoken utterance Hypothesized language AM λ 520 this experiment; the latter decides what features to be included in the vector space. The vector space modeling allows for multiple heterogeneous features in one vector. 
We introduce three types of acoustic vocabulary (AV) with mixture of token unigram, bigram, and trigram: a) AV1: 32 broad class phonemes as unigram, selected from 12 languages, also referred to as P-ASM as detailed in (Ma et al., 2005) b) AV2: AV1 augmented by 32 bigrams of AV1, amounting to 1,056 tokens 32 × c) AV3: AV2 augmented by 32 trigrams of AV1, amounting to 33,824 tokens 32 32 × × AV1 AV2 AV3 ER % 46.1 32.8 28.3 Table 1. Effect of acoustic vocabulary (KNC) We carry out experiments with KNC classifier of 4,800 centroids. Applying k-nearest-neighboring rule, k is empirically set to 3. The error rates are reported in Table 1 for the experiments over the three AV types. It is found that high-order token ngrams improve LID performance. This reaffirms many previous findings that n-gram phonotactics serves as a valuable cue in LID. 4.2 Effect of Model Size As discussed in KNC, one would expect to improve the phonotactic model by using more centroids. Let’s examine how the number of centroid vectors M affects the performance of KNC. We set the acoustic system size Y to 128, k-nearest to 3, and only use token bigrams in the bag-of-sounds vector. In Table 2, it is not surprising to find that the performance improves as M increases. However, it is not practical to have large M because comparisons need to take place in each test trial. L L M ′ = × #M 1,200 2,400 4,800 12,000 ER % 17.0 15.7 15.4 14.8 Table 2. Effect of number of centroids (KNC) To reduce computation, MMC attempts to use less number of mixtures M to represent the phonotactic space. With the smoothing effect of the mixture model, we expect to use less computation to achieve similar performance as KNC. In the experiment reported in Table 3, we find that MMC (M=1,024) achieves 14.9% error rate, which almost equalizes the best result in the KNC experiment (M=12,000) with much less computation. #M 4 16 64 256 1,024 ER % 29.6 26.4 19.7 16.0 14.9 Table 3. Effect of number of mixtures (MMC) 4.3 Discussion The bag-of-sounds approach has achieved equal success in both 1996 and 2003 NIST LRE databases. As more results are published on the 1996 NIST LRE database, we choose it as the platform of comparison. In Table 4, we report the performance across different approaches in terms of error rate for a quick comparison. MMC presents a 12.4% ER reduction over the best reported result4 (Torres-Carrasquillo et al., 2002). It is interesting to note that the bag-of-sounds classifier outperforms its P-PRLM counterpart by a wide margin (14.9% vs 22.0%). This is attributed to the global phonotactic features in LM lλ . The performance gain in (Torres-Carrasquillo et al., 2002; Singer et al., 2003) was obtained mainly by fusing scores from several classifiers, namely GMM, P-PRLM and SVM, to benefit from both acoustic and language model scores. Noting that the bag-of-sounds classifier in this work solely relies on the LM score, it is believed that fusing with scores from other classifiers will further boost the LID performance. ER % P-PRLM5 22.0 P-PRLM + GMM acoustic5 19.5 P-PRLM + GMM acoustic + GMM tokenizer5 17.0 Bag-of-sounds classifier (MMC) 14.9 Table 4. Benchmark of different approaches Besides the error rate reduction, the bag-ofsounds approach also simplifies the on-line computing procedure over its P-PRLM counterpart. It would be interesting to estimate the on-line computational need of MMC. The cost incurred has two main components: 1) the construction of the 4 Previous results are also reported in DCF, DET, and equal error rate (EER). 
Comprehensive benchmarking for bag-ofsounds phonotactic LM will be reported soon. 5 Results extracted from (Torres-Carrasquillo et al., 2002) 521 pseudo document vector, as done via Eq.(5); 2) vector comparisons. The computing cost is estimated to be per test trial (Bellegarda, 2000). For typical values of Q, this amounts to less than 0.05 Mflops. While this is more expensive than the usual table look-up in conventional n-gram LM, the performance improvement is able to justify the relatively modest computing overhead. L L M ′ = × 2 ( ) Q O 5 Conclusion We have proposed a phonotactic LM approach to LID problem. The concept of bag-of-sounds is introduced, for the first time, to model phonotactics present in a spoken language over a larger context. With bag-of-sounds phonotactic LM, a spoken document can be treated as a text-like document of acoustic tokens. This way, the well-established LSA technique can be readily applied. This novel approach not only suggests a paradigm shift in LID, but also brings 12.4% error rate reduction over one of the best reported results on the 1996 NIST LRE data. It has proven to be very successful. We would like to extend this approach to other spoken document categorization tasks. In monolingual spoken document categorization, we suggest that the semantic domain can be characterized by latent phonotactic features. Thus it is straightforward to extend the proposed bag-of-sounds framework to spoken document categorization. Acknowledgement The authors are grateful to Dr. Alvin F. Martin of the NIST Speech Group for his advice when preparing the 1996 NIST LRE experiments, to Dr G. M. White and Ms Y. Chen of Institute for Infocomm Research for insightful discussions. References Jerome R. Bellegarda. 2000. Exploiting latent semantic information in statistical language modeling, In Proc. of the IEEE, 88(8):1279-1296. M. W. Berry, S.T. Dumais and G.W. O’Brien. 1995. Using Linear Algebra for intelligent information retrieval, SIAM Review, 37(4):573-595. William B. Cavnar, and John M. Trenkle. 1994. NGram-Based Text Categorization, In Proc. of 3rd Annual Symposium on Document Analysis and Information Retrieval, pp. 161-169. Jennifer Chu-Carroll, and Bob Carpenter. 1999. Vectorbased Natural Language Call Routing, Computational Linguistics, 25(3):361-388. S. Deerwester, S. Dumais, G. Furnas, T. Landauer, and R. Harshman, 1990, Indexing by latent semantic analysis, Journal of the American Society for Informatin Science, 41(6):391-407 Richard O. Duda and Peter E. Hart. 1973. Pattern Classification and scene analysis. John Wiley & Sons James L. Hieronymus. 1994. ASCII Phonetic Symbols for the World’s Languages: Worldbet. Technical Report AT&T Bell Labs. Spark Jones, K. 1972. A statistical interpretation of term specificity and its application in retrieval, Journal of Documentation, 28:11-20 Bin Ma, Haizhou Li and Chin-Hui Lee, 2005. An Acoustic Segment Modeling Approach to Automatic Language Identification, submitted to Interspeech 2005 Yeshwant K. Muthusamy, Neena Jain, and Ronald A. Cole. 1994. Perceptual benchmarks for automatic language identification, In Proc. of ICASSP Corinna Ng , Ross Wilkinson , Justin Zobel, 2000. , Speech Communication, 32(1-2):6177 Experiments in spoken document retrieval using phoneme n-grams G. Salton, 1971. The SMART Retrieval System, Prentice-Hall, Englewood Cliffs, NJ, 1971 E. Singer, P.A. Torres-Carrasquillo, T.P. Gleason, W.M. Campbell and D.A. Reynolds. 2003. 
Acoustic, Phonetic and Discriminative Approaches to Automatic language recognition, In Proc. of Eurospeech Masahide Sugiyama. 1991. Automatic language recognition using acoustic features, In Proc. of ICASSP. Pedro A. Torres-Carrasquillo, Douglas A. Reynolds, and J.R. Deller. Jr. 2002. Language identification using Gaussian Mixture model tokenization, in Proc. of ICASSP. Yonghong Yan, and Etienne Barnard. 1995. An approach to automatic language identification based on language dependent phone recognition, In Proc. of ICASSP. George K. Zipf. 1949. Human Behavior and the Principal of Least effort, an introduction to human ecology. Addison-Wesley, Reading, Mass. Marc A. Zissman. 1996. Comparison of four approaches to automatic language identification of telephone speech, IEEE Trans. on Speech and Audio Processing, 4(1):31-44. 522 | 2005 | 64 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 523–530, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Reading Level Assessment Using Support Vector Machines and Statistical Language Models Sarah E. Schwarm Dept. of Computer Science and Engineering University of Washington Seattle, WA 98195-2350 [email protected] Mari Ostendorf Dept. of Electrical Engineering University of Washington Seattle, WA 98195-2500 [email protected] Abstract Reading proficiency is a fundamental component of language competency. However, finding topical texts at an appropriate reading level for foreign and second language learners is a challenge for teachers. This task can be addressed with natural language processing technology to assess reading level. Existing measures of reading level are not well suited to this task, but previous work and our own pilot experiments have shown the benefit of using statistical language models. In this paper, we also use support vector machines to combine features from traditional reading level measures, statistical language models, and other language processing tools to produce a better method of assessing reading level. 1 Introduction The U.S. educational system is faced with the challenging task of educating growing numbers of students for whom English is a second language (U.S. Dept. of Education, 2003). In the 2001-2002 school year, Washington state had 72,215 students (7.2% of all students) in state programs for Limited English Proficient (LEP) students (Bylsma et al., 2003). In the same year, one quarter of all public school students in California and one in seven students in Texas were classified as LEP (U.S. Dept. of Education, 2004). Reading is a critical part of language and educational development, but finding appropriate reading material for LEP students is often difficult. To meet the needs of their students, bilingual education instructors seek out “high interest level” texts at low reading levels, e.g. texts at a first or second grade reading level that support the fifth grade science curriculum. Teachers need to find material at a variety of levels, since students need different texts to read independently and with help from the teacher. Finding reading materials that fulfill these requirements is difficult and time-consuming, and teachers are often forced to rewrite texts themselves to suit the varied needs of their students. Natural language processing (NLP) technology is an ideal resource for automating the task of selecting appropriate reading material for bilingual students. Information retrieval systems successfully find topical materials and even answer complex queries in text databases and on the World Wide Web. However, an effective automated way to assess the reading level of the retrieved text is still needed. In this work, we develop a method of reading level assessment that uses support vector machines (SVMs) to combine features from statistical language models (LMs), parse trees, and other traditional features used in reading level assessment. The results presented here on reading level assessment are part of a larger project to develop teacher-support tools for bilingual education instructors. The larger project will include a text simplification system, adapting paraphrasing and summarization techniques. Coupled with an information retrieval system, these tools will be used to select and simplify reading material in multiple languages for use by language learners. 
In addition to students in bilingual education, these tools will also be useful for those with reading-related learning disabili523 ties and adult literacy students. In both of these situations, as in the bilingual education case, the student’s reading level does not match his/her intellectual level and interests. The remainder of the paper is organized as follows. Section 2 describes related work on reading level assessment. Section 3 describes the corpora used in our work. In Section 4 we present our approach to the task, and Section 5 contains experimental results. Section 6 provides a summary and description of future work. 2 Reading Level Assessment This section highlights examples and features of some commonly used measures of reading level and discusses current research on the topic of reading level assessment using NLP techniques. Many traditional methods of reading level assessment focus on simple approximations of syntactic complexity such as sentence length. The widelyused Flesch-Kincaid Grade Level index is based on the average number of syllables per word and the average sentence length in a passage of text (Kincaid et al., 1975) (as cited in (Collins-Thompson and Callan, 2004)). Similarly, the Gunning Fog index is based on the average number of words per sentence and the percentage of words with three or more syllables (Gunning, 1952). These methods are quick and easy to calculate but have drawbacks: sentence length is not an accurate measure of syntactic complexity, and syllable count does not necessarily indicate the difficulty of a word. Additionally, a student may be familiar with a few complex words (e.g. dinosaur names) but unable to understand complex syntactic constructions. Other measures of readability focus on semantics, which is usually approximated by word frequency with respect to a reference list or corpus. The Dale-Chall formula uses a combination of average sentence length and percentage of words not on a list of 3000 “easy” words (Chall and Dale, 1995). The Lexile framework combines measures of semantics, represented by word frequency counts, and syntax, represented by sentence length (Stenner, 1996). These measures are inadequate for our task; in many cases, teachers want materials with more difficult, topic-specific words but simple structure. Measures of reading level based on word lists do not capture this information. In addition to the traditional reading level metrics, researchers at Carnegie Mellon University have applied probabilistic language modeling techniques to this task. Si and Callan (2001) conducted preliminary work to classify science web pages using unigram models. More recently, Collins-Thompson and Callan manually collected a corpus of web pages ranked by grade level and observed that vocabulary words are not distributed evenly across grade levels. They developed a “smoothed unigram” classifier to better capture the variance in word usage across grade levels (Collins-Thompson and Callan, 2004). On web text, their classifier outperformed several other measures of semantic difficulty: the fraction of unknown words in the text, the number of distinct types per 100 token passage, the mean log frequency of the text relative to a large corpus, and the Flesch-Kincaid measure. The traditional measures performed better on some commercial corpora, but these corpora were calibrated using similar measures, so this is not a fair comparison. More importantly, the smoothed unigram measure worked better on the web corpus, especially on short passages. 
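For reference, the two length-based indices above can be computed in a few lines; the coefficients shown are the commonly cited ones and the vowel-group syllable counter is a crude approximation, so the sketch is illustrative rather than definitive.

# Rough sketch of the Flesch-Kincaid Grade Level and Gunning Fog indices
# described above. Coefficients and the syllable heuristic are approximations.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    asl = len(words) / len(sentences)   # average sentence length
    asw = syllables / len(words)        # average syllables per word
    flesch_kincaid = 0.39 * asl + 11.8 * asw - 15.59
    gunning_fog = 0.4 * (asl + 100.0 * complex_words / len(words))
    return flesch_kincaid, gunning_fog

print(readability("The cat sat on the mat. It was a sunny day and the cat was happy."))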
The smoothed unigram classifier is also more generalizable, since it can be trained on any collection of data. Traditional measures such as Dale-Chall and Lexile are based on static word lists. Although the smoothed unigram classifier outperforms other vocabulary-based semantic measures, it does not capture syntactic information. We believe that higher order n-gram models or class n-gram models can achieve better performance by capturing both semantic and syntactic information. This is particularly important for the tasks we are interested in, when the vocabulary (i.e. topic) and grade level are not necessarily well-matched. 3 Corpora Our work is currently focused on a corpus obtained from Weekly Reader, an educational newspaper with versions targeted at different grade levels (Weekly Reader, 2004). These data include a variety of labeled non-fiction topics, including science, history, and current events. Our corpus consists of articles from the second, third, fourth, and fifth grade edi524 Grade Num Articles Num Words 2 351 71.5k 3 589 444k 4 766 927k 5 691 1M Table 1: Distribution of articles and words in the Weekly Reader corpus. Corpus Num Articles Num Words Britannica 115 277k B. Elementary 115 74k CNN 111 51k CNN Abridged 111 37k Table 2: Distribution of articles and words in the Britannica and CNN corpora. tions of the newspaper. We design classifiers to distinguish each of these four categories. This corpus contains just under 2400 articles, distributed as shown in Table 1. Additionally, we have two corpora consisting of articles for adults and corresponding simplified versions for children or other language learners. Barzilay and Elhadad (2003) have allowed us to use their corpus from Encyclopedia Britannica, which contains articles from the full version of the encyclopedia and corresponding articles from Britannica Elementary, a new version targeted at children. The Western/Pacific Literacy Network’s (2004) web site has an archive of CNN news stories and abridged versions which we have also received permission to use. Although these corpora do not provide an explicit grade-level ranking for each article, broad categories are distinguished. We use these data as a supplement to the Weekly Reader corpus for learning models to distinguish broad reading level classes than can serve to provide features for more detailed classification. Table 2 shows the size of the supplemental corpora. 4 Approach Existing reading level measures are inadequate due to their reliance on vocabulary lists and/or a superficial representation of syntax. Our approach uses ngram language models as a low-cost automatic approximation of both syntactic and semantic analysis. Statistical language models (LMs) are used successfully in this way in other areas of NLP such as speech recognition and machine translation. We also use a standard statistical parser (Charniak, 2000) to provide syntactic analysis. In practice, a teacher is likely to be looking for texts at a particular level rather than classifying a group of texts into a variety of categories. Thus we construct one classifier per category which decides whether a document belongs in that category or not, rather than constructing a classifier which ranks documents into different categories relative to each other. 4.1 Statistical Language Models Statistical LMs predict the probability that a particular word sequence will occur. 
The most commonly used statistical language model is the n-gram model, which assumes that the word sequence is an (n−1)th order Markov process. For example, for the common trigram model where n = 3, the probability of sequence w is: P(w) = P(w1)P(w2|w1) m Y i=3 P(wi|wi−1, wi−2). (1) The parameters of the model are estimated using a maximum likelihood estimate based on the observed frequency in a training corpus and smoothed using modified Kneser-Ney smoothing (Chen and Goodman, 1999). We used the SRI Language Modeling Toolkit (Stolcke, 2002) for language model training. Our first set of classifiers consists of one n-gram language model per class c in the set of possible classes C. For each text document t, we can calculate the likelihood ratio between the probability given by the model for class c and the probabilities given by the other models for the other classes: LR = P(t|c)P(c) P c′̸=c P(t|c′)P(c′) (2) where we assume uniform prior probabilities P(c). The resulting value can be compared to an empirically chosen threshold to determine if the document is in class c or not. For each class c, a language model is estimated from a corpus of training texts. 525 In addition to using the likelihood ratio for classification, we can use scores from language models as features in another classifier (e.g. an SVM). For example, perplexity (PP) is an information-theoretic measure often used to assess language models: PP = 2H(t|c), (3) where H(t|c) is the entropy relative to class c of a length m word sequence t = w1, ..., wm, defined as H(t|c) = −1 m log2 P(t|c). (4) Low perplexity indicates a better match between the test data and the model, corresponding to a higher probability P(t|c). Perplexity scores are used as features in the SVM model described in Section 4.3. The likelihood ratio described above could also be used as a feature, but we achieved better results using perplexity. 4.2 Feature Selection Feature selection is a common part of classifier design for many classification problems; however, there are mixed results in the literature on feature selection for text classification tasks. In CollinsThompson and Callan’s work (2004) on readability assessment, LM smoothing techniques are more effective than other forms of explicit feature selection. However, feature selection proves to be important in other text classification work, e.g. Lee and Myaeng’s (2002) genre and subject detection work and Boulis and Ostendorf’s (2005) work on feature selection for topic classification. For our LM classifiers, we followed Boulis and Ostendorf’s (2005) approach for feature selection and ranked words by their ability to discriminate between classes. Given P(c|w), the probability of class c given word w, estimated empirically from the training set, we sorted words based on their information gain (IG). Information gain measures the difference in entropy when w is and is not included as a feature. IG(w) = − X c∈C P(c) log P(c) + P(w) X c∈C P(c|w) log P(c|w) + P( ¯w) X c∈C P(c| ¯w) log P(c| ¯w).(5) The most discriminative words are selected as features by plotting the sorted IG values and keeping only those words below the “knee” in the curve, as determined by manual inspection of the graph. In an early experiment, we replaced all remaining words with a single “unknown” tag. This did not result in an effective classifier, so in later experiments the remaining words were replaced with a small set of general tags. 
Motivated by our goal of representing syntax, we used part-of-speech (POS) tags as labeled by a maximum entropy tagger (Ratnaparkhi, 1996). These tags allow the model to represent patterns in the text at a higher level than that of individual words, using sequences of POS tags to capture rough syntactic information. The resulting vocabulary consisted of 276 words and 56 POS tags. 4.3 Support Vector Machines Support vector machines (SVMs) are a machine learning technique used in a variety of text classification problems. SVMs are based on the principle of structural risk minimization. Viewing the data as points in a high-dimensional feature space, the goal is to fit a hyperplane between the positive and negative examples so as to maximize the distance between the data points and the plane. SVMs were introduced by Vapnik (1995) and were popularized in the area of text classification by Joachims (1998a). The unit of classification in this work is a single article. Our SVM classifiers for reading level use the following features: • Average sentence length • Average number of syllables per word • Flesch-Kincaid score • 6 out-of-vocabulary (OOV) rate scores. • Parse features (per sentence): – Average parse tree height – Average number of noun phrases – Average number of verb phrases – Average number of “SBAR”s.1 • 12 language model perplexity scores The OOV scores are relative to the most common 100, 200 and 500 words in the lowest grade level 1SBAR is defined in the Penn Treebank tag set as a “clause introduced by a (possibly empty) subordinating conjunction.” It is an indicator of sentence complexity. 526 (grade 2) 2. For each article, we calculated the percentage of a) all word instances (tokens) and b) all unique words (types) not on these lists, resulting in three token OOV rate features and three type OOV rate features per article. The parse features are generated using the Charniak parser (Charniak, 2000) trained on the standard Wall Street Journal Treebank corpus. We chose to use this standard data set as we do not have any domain-specific treebank data for training a parser. Although clearly there is a difference between news text for adults and news articles intended for children, inspection of some of the resulting parses showed good accuracy. Ideally, the language model scores would be for LMs from domain-specific training data (i.e. more Weekly Reader data.) However, our corpus is limited and preliminary experiments in which the training data was split for LM and SVM training were unsuccessful due to the small size of the resulting data sets. Thus we made use of the Britannica and CNN articles to train models of three n-gram orders on “child” text and “adult” text. This resulted in 12 LM perplexity features per article based on trigram, bigram and unigram LMs trained on Britannica (adult), Britannica Elementary, CNN (adult) and CNN abridged text. For training SVMs, we used the SVMlight toolkit developed by Joachims (1998b). Using development data, we selected the radial basis function kernel and tuned parameters using cross validation and grid search as described in (Hsu et al., 2003). 5 Experiments 5.1 Test Data and Evaluation Criteria We divide the Weekly Reader corpus described in Section 3 into separate training, development, and test sets. The number of articles in each set is shown in Table 3. The development data is used as a test set for comparing classifiers, tuning parameters, etc, and the results presented in this section are based on the test set. 
We present results in three different formats. For analyzing our binary classifiers, we use Detection Error Tradeoff (DET) curves and precision/recall 2These lists are chosen from the full vocabulary independently of the feature selection for LMs described above. Grade Training Dev/Test 2 315 18 3 529 30 4 690 38 5 623 34 Table 3: Number of articles in the Weekly Reader corpus as divided into training, development and test sets. The dev and test sets are the same size and each consist of approximately 5% of the data for each grade level. measures. For comparison to other methods, e.g. Flesch-Kincaid and Lexile, which are not binary classifiers, we consider the percentage of articles which are misclassified by more than one grade level. Detection Error Tradeoff curves show the tradeoff between misses and false alarms for different threshold values for the classifiers. “Misses” are positive examples of a class that are misclassified as negative examples; “false alarms” are negative examples misclassified as positive. DET curves have been used in other detection tasks in language processing, e.g. Martin et al. (1997). We use these curves to visualize the tradeoff between the two types of errors, and select the minimum cost operating point in order to get a threshold for precision and recall calculations. The minimum cost operating point depends on the relative costs of misses and false alarms; it is conceivable that one type of error might be more serious than the other. After consultation with teachers (future users of our system), we concluded that there are pros and cons to each side, so for the purpose of this analysis we weighted the two types of errors equally. In this work, the minimum cost operating point is selected by averaging the percentages of misses and false alarms at each point and choosing the point with the lowest average. Unless otherwise noted, errors reported are associated with these actual operating points, which may not lie on the convex hull of the DET curve. Precision and recall are often used to assess information retrieval systems, and our task is similar. Precision indicates the percentage of the retrieved documents that are relevant, in this case the percentage of detected documents that match the target 527 grade level. Recall indicates the percentage of the total number of relevant documents in the data set that are retrieved, in this case the percentage of the total number of documents from the target level that are detected. 5.2 Language Model Classifier 1 2 5 10 20 40 60 80 90 1 2 5 10 20 40 60 80 90 False Alarm probability (in %) Miss probability (in %) grade 2 grade 3 grade 4 grade 5 Figure 1: DET curves (test set) for classifiers based on trigram language models. Figure 1 shows DET curves for the trigram LMbased classifiers. The minimum cost error rates for these classifiers, indicated by large dots in the plot, are in the range of 33-43%, with only one over 40%. The curves for bigram and unigram models have similar shapes, but the trigram models outperform the lower-order models. Error rates for the bigram models range from 37-45% and the unigram models have error rates in the 39-49% range, with all but one over 40%. Although our training corpus is small the feature selection described in Section 4.2 allows us to use these higher-order trigram models. 5.3 Support Vector Machine Classifier By combining language model scores with other features in an SVM framework, we achieve our best results. 
Figures 2 and 3 show DET curves for this set of classifiers on the development set and test set, respectively. The grade 2 and 5 classifiers have the best performance, probably because grade 3 and 4 must be distinguished from other classes at both higher and lower levels. Using threshold values selected based on minimum cost on the development 1 2 5 10 20 40 60 80 90 1 2 5 10 20 40 60 80 90 False Alarm probability (in %) Miss probability (in %) grade 2 grade 3 grade 4 grade 5 Figure 2: DET curves (development set) for SVM classifiers with LM features. 1 2 5 10 20 40 60 80 90 1 2 5 10 20 40 60 80 90 False Alarm probability (in %) Miss probability (in %) grade 2 grade 3 grade 4 grade 5 Figure 3: DET curves (test set) for SVM classifiers with LM features. set, indicated by large dots on the plot, we calculated precision and recall on the test set. Results are presented in Table 4. The grade 3 classifier has high recall but relatively low precision; the grade 4 classifier does better on precision and reasonably well on recall. Since the minimum cost operating points do not correspond to the equal error rate (i.e. equal percentage of misses and false alarms) there is variation in the precision-recall tradeoff for the different grade level classifiers. For example, for class 3, the operating point corresponds to a high probability of false alarms and a lower probability of misses, which results in low precision and high recall. For operating points chosen on the convex hull of the DET curves, the equal error rate ranges from 12-25% for the dif528 Grade Precision Recall 2 38% 61% 3 38% 87% 4 70% 60% 5 75% 79% Table 4: Precision and recall on test set for SVMbased classifiers. Grade Errors Flesch-Kincaid Lexile SVM 2 78% 33% 5.5% 3 67% 27% 3.3% 4 74% 26% 13% 5 59% 24% 21% Table 5: Percentage of articles which are misclassified by more than one grade level. ferent grade levels. We investigated the contribution of individual features to the overall performance of the SVM classifier and found that no features stood out as most important, and performance was degraded when any particular features were removed. 5.4 Comparison We also compared error rates for the best performing SVM classifier with two traditional reading level measures, Flesch-Kincaid and Lexile. The Flesch-Kincaid Grade Level index is a commonly used measure of reading level based on the average number of syllables per word and average sentence length. The Flesch-Kincaid score for a document is intended to directly correspond with its grade level. We chose the Lexile measure as an example of a reading level classifier based on word lists.3 Lexile scores do not correlate directly to numeric grade levels, however a mapping of ranges of Lexile scores to their corresponding grade levels is available on the Lexile web site (Lexile, 2005). For each of these three classifiers, Table 5 shows the percentage of articles which are misclassified by more than one grade level. Flesch-Kincaid performs poorly, as expected since its only features are sen3Other classifiers such as Dale-Chall do not have automatic software available. tence length and average syllable count. Although this index is commonly used, perhaps due to its simplicity, it is not accurate enough for the intended application. Our SVM classifier also outperforms the Lexile metric. Lexile is a more general measure while our classifier is trained on this particular domain, so the better performance of our model is not entirely surprising. 
Importantly, however, our classifier is easily tuned to any corpus of interest. To test our classifier on data outside the Weekly Reader corpus, we downloaded 10 randomly selected newspaper articles from the “Kidspost” edition of The Washington Post (2005). “Kidspost” is intended for grades 3-8. We found that our SVM classifier, trained on the Weekly Reader corpus, classified four of these articles as grade 4 and seven articles as grade 5 (with one overlap with grade 4). These results indicate that our classifier can generalize to other data sets. Since there was no training data corresponding to higher reading levels, the best performance we can expect for adult-level newspaper articles is for our classifiers to mark them as the highest grade level, which is indeed what happened for 10 randomly chosen articles from standard edition of The Washington Post. 6 Conclusions and Future Work Statistical LMs were used to classify texts based on reading level, with trigram models being noticeably more accurate than bigrams and unigrams. Combining information from statistical LMs with other features using support vector machines provided the best results. Future work includes testing additional classifier features, e.g. parser likelihood scores and features obtained using a syntax-based language model such as Chelba and Jelinek (2000) or Roark (2001). Further experiments are planned on the generalizability of our classifier to text from other sources (e.g. newspaper articles, web pages); to accomplish this we will add higher level text as negative training data. We also plan to test these techniques on languages other than English, and incorporate them with an information retrieval system to create a tool that may be used by teachers to help select reading material for their students. 529 Acknowledgments This material is based upon work supported by the National Science Foundation under Grant No. IIS-0326276. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. Thank you to Paul Heavenridge (Literacyworks), the Weekly Reader Corporation, Regina Barzilay (MIT) and Noemie Elhadad (Columbia University) for sharing their data and corpora. References R. Barzilay and N. Elhadad. Sentence alignment for monolingual comparable corpora. In Proc. of EMNLP, pages 25–32, 2003. C. Boulis and M. Ostendorf. Text classification by augmenting the bag-of-words representation with redundancycompensated bigrams. Workshop on Feature Selection in Data Mining, in conjunction with SIAM conference on Data Mining, 2005. P. Bylsma, L. Ireland, and H. Malagon. Educating English Language Learners in Washington State. Office of the Superintendent of Public Instruction, Olympia, WA, 2003. J.S. Chall and E. Dale. Readability revisited: the new DaleChall readability formula. Brookline Books, Cambridge, Mass., 1995. E. Charniak. A maximum-entropy-inspired parser. In Proc. of NAACL, pages 132–139, 2000. C. Chelba and F. Jelinek. Structured Language Modeling. Computer Speech and Language, 14(4):283-332, 2000. S. Chen and J. Goodman. An empirical study of smoothing techniques for language modeling. Computer Speech and Language, 13(4):359–393, 1999. K. Collins-Thompson and J. Callan. A language modeling approach to predicting reading difficulty. In Proc. of HLT/NAACL, pages 193–200, 2004. R. Gunning. The technique of clear writing. McGraw-Hill, New York, 1952. C.-W. Hsu et al. 
A practical guide to support vector classification. http://www.csie.ntu.edu.tw/˜cjlin/ papers/guide/guide.pdf, 2003. Accessed 11/2004. T. Joachims. Text categorization with support vector machines: learning with many relevant features. In Proc. of the European Conference on Machine Learning, pages 137–142, 1998a. T. Joachims. Making large-scale support vector machine learning practical. In Advances in Kernel Methods: Support Vector Machines. B. Sch¨olkopf, C. Burges, A. Smola, eds. MIT Press, Cambridge, MA, 1998b. J.P. Kincaid, Jr., R.P. Fishburne, R.L. Rodgers, and B.S. Chisson. Derivation of new readability formulas for Navy enlisted personnel. Research Branch Report 8-75, U.S. Naval Air Station, Memphis, 1975. Y.-B. Lee and S.H. Myaeng. Text genre classification with genre-revealing and subject-revealing features. In Proc. of SIGIR, pages 145–150, 2002. The Lexile framework for reading. http://www.lexile. com, 2005. Accessed April 15, 2005. A. Martin, G. Doddington, T. Kamm, M. Ordowski, and M. Przybocki. The DET curve in assessment of detection task performance. Proc. of Eurospeech, v. 4, pp. 1895-1898, 1997. A. Ratnaparkhi. A maximum entropy part-of-speech tagger. In Proc. of EMNLP, pages 133–141, 1996. B. Roark. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249-276, 2001. L. Si and J.P. Callan. A statistical model for scientific readability. In Proc. of CIKM, pages 574–576, 2001. A.J. Stenner. Measuring reading comprehension with the Lexile framework. Presented at the Fourth North American Conference on Adolescent/Adult Literacy, 1996. A. Stolcke. SRILM - an extensible language modeling toolkit. Proc. ICSLP, v. 2, pp. 901-904, 2002. U.S. Department of Education, National Center for Educational Statistics. The condition of education. http://nces.ed.gov/programs/coe/2003/ section1/indicator04.asp, 2003. Accessed June 18, 2004. U.S. Department of Education, National Center for Educational Statistics. NCES fast facts: Bilingual education/Limited English Proficient students. http://nces.ed.gov/ fastfacts/display.asp?id=96, 2003. Accessed June 18, 2004. V. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995. The Washington Post. http://www.washingtonpost. com, 2005. Accessed April 20, 2005. Weekly Reader. http://www.weeklyreader.com, 2004. Accessed July, 2004. Western/Pacific Literacy Network / Literacyworks. CNN SF learning resources. http://literacynet.org/ cnnsf/, 2004. Accessed June 15, 2004. 530 | 2005 | 65 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 531–540, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Clause Restructuring for Statistical Machine Translation Michael Collins MIT CSAIL [email protected] Philipp Koehn School of Informatics University of Edinburgh [email protected] Ivona Kuˇcerov´a MIT Linguistics Department [email protected] Abstract We describe a method for incorporating syntactic information in statistical machine translation systems. The first step of the method is to parse the source language string that is being translated. The second step is to apply a series of transformations to the parse tree, effectively reordering the surface string on the source language side of the translation system. The goal of this step is to recover an underlying word order that is closer to the target language word-order than the original string. The reordering approach is applied as a pre-processing step in both the training and decoding phases of a phrase-based statistical MT system. We describe experiments on translation from German to English, showing an improvement from 25.2% Bleu score for a baseline system to 26.8% Bleu score for the system with reordering, a statistically significant improvement. 1 Introduction Recent research on statistical machine translation (SMT) has lead to the development of phrasebased systems (Och et al., 1999; Marcu and Wong, 2002; Koehn et al., 2003). These methods go beyond the original IBM machine translation models (Brown et al., 1993), by allowing multi-word units (“phrases”) in one language to be translated directly into phrases in another language. A number of empirical evaluations have suggested that phrase-based systems currently represent the state–of–the–art in statistical machine translation. In spite of their success, a key limitation of phrase-based systems is that they make little or no direct use of syntactic information. It appears likely that syntactic information will be crucial in accurately modeling many phenomena during translation, for example systematic differences between the word order of different languages. For this reason there is currently a great deal of interest in methods which incorporate syntactic information within statistical machine translation systems (e.g., see (Alshawi, 1996; Wu, 1997; Yamada and Knight, 2001; Gildea, 2003; Melamed, 2004; Graehl and Knight, 2004; Och et al., 2004; Xia and McCord, 2004)). In this paper we describe an approach for the use of syntactic information within phrase-based SMT systems. The approach constitutes a simple, direct method for the incorporation of syntactic information in a phrase–based system, which we will show leads to significant improvements in translation accuracy. The first step of the method is to parse the source language string that is being translated. The second step is to apply a series of transformations to the resulting parse tree, effectively reordering the surface string on the source language side of the translation system. The goal of this step is to recover an underlying word order that is closer to the target language word-order than the original string. Finally, we apply a phrase-based system to the reordered string to give a translation into the target language. We describe experiments involving machine translation from German to English. 
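The method just outlined can be viewed as a three-step pipeline applied to every source sentence. The sketch below is schematic rather than the authors' implementation: `parse`, `reorder`, `flatten`, and `phrase_based_decode` are placeholders for a German parser, the transformation rules described later, a tree-yield routine, and the underlying phrase-based decoder.

```python
def preprocess(sentence, parse, reorder, flatten):
    """Parse a German sentence, transform the parse tree, and read off the
    reordered surface string."""
    tree = parse(sentence)   # step 1: parse the source-language sentence
    tree = reorder(tree)     # step 2: apply the sequence of tree transformations
    return flatten(tree)     # step 3: left-to-right yield of the reordered tree

def translate(sentence, parse, reorder, flatten, phrase_based_decode):
    # The same preprocessing is applied to the training data, so the
    # phrase-based system is both trained on and decodes reordered German.
    return phrase_based_decode(preprocess(sentence, parse, reorder, flatten))
```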
As an illustrative example of our method, consider the following German sentence, together with a “translation” into English that follows the original word order: Original sentence: Ich werde Ihnen die entsprechenden Anmerkungen aushaendigen, damit Sie das eventuell bei der Abstimmung uebernehmen koennen. English translation: I will to you the corresponding comments pass on, so that you them perhaps in the vote adopt can. The German word order in this case is substantially different from the word order that would be seen in English. As we will show later in this paper, translations of sentences of this type pose difficulties for phrase-based systems. In our approach we reorder the constituents in a parse of the German sentence to give the following word order, which is much closer to the target English word order (words which have been “moved” are underlined): Reordered sentence: Ich werde aushaendigen Ihnen die entsprechenden Anmerkungen, damit Sie koennen uebernehmen das eventuell bei der Abstimmung. English translation: I will pass on to you the corresponding comments, so that you can adopt them perhaps in the vote. 531 We applied our approach to translation from German to English in the Europarl corpus. Source language sentences are reordered in test data, and also in training data that is used by the underlying phrasebased system. Results using the method show an improvement from 25.2% Bleu score to 26.8% Bleu score (a statistically significant improvement), using a phrase-based system (Koehn et al., 2003) which has been shown in the past to be a highly competitive SMT system. 2 Background 2.1 Previous Work 2.1.1 Research on Phrase-Based SMT The original work on statistical machine translation was carried out by researchers at IBM (Brown et al., 1993). More recently, phrase-based models (Och et al., 1999; Marcu and Wong, 2002; Koehn et al., 2003) have been proposed as a highly successful alternative to the IBM models. Phrase-based models generalize the original IBM models by allowing multiple words in one language to correspond to multiple words in another language. For example, we might have a translation entry specifying that I will in English is a likely translation for Ich werde in German. In this paper we use the phrase-based system of (Koehn et al., 2003) as our underlying model. This approach first uses the original IBM models to derive word-to-word alignments in the corpus of example translations. Heuristics are then used to grow these alignments to encompass phrase-tophrase pairs. The end result of the training process is a lexicon of phrase-to-phrase pairs, with associated costs or probabilities. In translation with the system, a beam search method with left-to-right search is used to find a high scoring translation for an input sentence. At each stage of the search, one or more English words are added to the hypothesized string, and one or more consecutive German words are “absorbed” (i.e., marked as having already been translated—note that each word is absorbed at most once). Each step of this kind has a number of costs: for example, the log probability of the phrase-tophrase correspondance involved, the log probability from a language model, and some “distortion” score indicating how likely it is for the proposed words in the English string to be aligned to the corresponding position in the German string. 
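The cost of a single decoding step can be caricatured as follows. This is a simplified, hypothetical sketch rather than the scoring function of any particular system: real phrase-based decoders combine more feature functions in a weighted log-linear model, and the weights are tuned.

```python
import math

def step_cost(phrase_prob, lm_prob, skip_length, distortion_weight):
    """Cost of one decoding step: add an English phrase, absorb a German span.
    phrase_prob: phrase-translation probability of the pair being applied;
    lm_prob: language-model probability of the added English words given the
    hypothesis so far; skip_length: number of uncovered German words jumped
    over (4 in the 'Ihnen die entsprechenden Anmerkungen' example above);
    distortion_weight: a tuned penalty weight."""
    return (-math.log(phrase_prob)
            - math.log(lm_prob)
            + distortion_weight * skip_length)
```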
2.1.2 Research on Syntax-Based SMT A number of researchers (Alshawi, 1996; Wu, 1997; Yamada and Knight, 2001; Gildea, 2003; Melamed, 2004; Graehl and Knight, 2004; Galley et al., 2004) have proposed models where the translation process involves syntactic representations of the source and/or target languages. One class of approaches make use of “bitext” grammars which simultaneously parse both the source and target languages. Another class of approaches make use of syntactic information in the target language alone, effectively transforming the translation problem into a parsing problem. Note that these models have radically different structures and parameterizations from phrase–based models for SMT. As yet, these systems have not shown significant gains in accuracy in comparison to phrase-based systems. Reranking methods have also been proposed as a method for using syntactic information (Koehn and Knight, 2003; Och et al., 2004; Shen et al., 2004). In these approaches a baseline system is used to generate -best output. Syntactic features are then used in a second model that reranks the -best lists, in an attempt to improve over the baseline approach. (Koehn and Knight, 2003) apply a reranking approach to the sub-task of noun-phrase translation. (Och et al., 2004; Shen et al., 2004) describe the use of syntactic features in reranking the output of a full translation system, but the syntactic features give very small gains: for example the majority of the gain in performance in the experiments in (Och et al., 2004) was due to the addition of IBM Model 1 translation probabilities, a non-syntactic feature. An alternative use of syntactic information is to employ an existing statistical parsing model as a language model within an SMT system. See (Charniak et al., 2003) for an approach of this form, which shows improvements in accuracy over a baseline system. 2.1.3 Research on Preprocessing Approaches Our approach involves a preprocessing step, where sentences in the language being translated are modified before being passed to an existing phrasebased translation system. A number of other re532 searchers (Berger et al., 1996; Niessen and Ney, 2004; Xia and McCord, 2004) have described previous work on preprocessing methods. (Berger et al., 1996) describe an approach that targets translation of French phrases of the form NOUN de NOUN (e.g., conflit d’int´erˆet). This was a relatively limited study, concentrating on this one syntactic phenomenon which involves relatively local transformations (a parser was not required in this study). (Niessen and Ney, 2004) describe a method that combines morphologically–split verbs in German, and also reorders questions in English and German. Our method goes beyond this approach in several respects, for example considering phenomena such as declarative (non-question) clauses, subordinate clauses, negation, and so on. (Xia and McCord, 2004) describe an approach for translation from French to English, where reordering rules are acquired automatically. The reordering rules in their approach operate at the level of context-free rules in the parse tree. Our method differs from that of (Xia and McCord, 2004) in a couple of important respects. First, we are considering German, which arguably has more challenging word order phenonema than French. German has relatively free word order, in contrast to both English and French: for example, there is considerable flexibility in terms of which phrases can appear in the first position in a clause. Second, Xia et. 
al’s (2004) use of reordering rules stated at the context-free level differs from ours. As one example, in our approach we use a single transformation that moves an infinitival verb to the first position in a verb phrase. Xia et. al’s approach would require learning of a different rule transformation for every production of the form VP => .... In practice the German parser that we are using creates relatively “flat” structures at the VP and clause levels, leading to a huge number of context-free rules (the flatness is one consequence of the relatively free word order seen within VP’s and clauses in German). There are clearly some advantages to learning reordering rules automatically, as in Xia et. al’s approach. However, we note that our approach involves a handful of linguistically–motivated transformations and achieves comparable improvements (albeit on a different language pair) to Xia et. al’s method, which in contrast involves over 56,000 transformations. S PPER-SB Ich VAFIN-HD werde VP PPER-DA Ihnen NP-OA ART die ADJA entsprechenden NN Anmerkungen VVINF-HD aushaendigen , , S KOUS damit PPER-SB Sie VP PDS-OA das ADJD eventuell PP APPR bei ART der NN Abstimmung VVINF-HD uebernehmen VMFIN-HD koennen Figure 1: An example parse tree. Key to non-terminals: PPER = personal pronoun; VAFIN = finite verb; VVINF = infinitival verb; KOUS = complementizer; APPR = preposition; ART = article; ADJA = adjective; ADJD = adverb; -SB = subject; -HD = head of a phrase; -DA = dative object; -OA = accusative object. 2.2 German Clause Structure In this section we give a brief description of the syntactic structure of German clauses. The characteristics we describe motivate the reordering rules described later in the paper. Figure 1 gives an example parse tree for a German sentence. This sentence contains two clauses: Clause 1: Ich/I werde/will Ihnen/to you die/the entsprechenden/corresponding Anmerkungen/comments aushaendigen/pass on Clause 2: damit/so that Sie/you das/them eventuell/perhaps bei/in der/the Abstimmung/vote uebernehmen/adopt koennen/can These two clauses illustrate a number of syntactic phenomena in German which lead to quite different word order from English: Position of finite verbs. In Clause 1, which is a matrix clause, the finite verb werde is in the second position in the clause. Finite verbs appear rigidly in 2nd position in matrix clauses. In contrast, in subordinate clauses, such as Clause 2, the finite verb comes last in the clause. For example, note that koennen is a finite verb which is the final element of Clause 2. Position of infinitival verbs. In German, infinitival verbs are final within their associated verb 533 phrase. For example, returning to Figure 1, notice that aushaendigen is the last element in its verb phrase, and that uebernehmen is the final element of its verb phrase in the figure. Relatively flexible word ordering. German has substantially freer word order than English. In particular, note that while the verb comes second in matrix clauses, essentially any element can be in the first position. For example, in Clause 1, while the subject Ich is seen in the first position, potentially any of the other constituents (e.g., Ihnen) could also appear in this position. Note that this often leads to the subject following the finite verb, something which happens very rarely in English. There are many other phenomena which lead to differing word order between German and English. 
Two others that we focus on in this paper are negation (the differing placement of items such as not in English and nicht in German), and also verb-particle constructions. We describe our treatment of these phenomena later in this paper. 2.3 Reordering with Phrase-Based SMT We have seen in the last section that German syntax has several characteristics that lead to significantly different word order from that of English. We now describe how these characteristics can lead to difficulties for phrase–based translation systems when applied to German to English translation. Typically, reordering models in phrase-based systems are based solely on movement distance. In particular, at each point in decoding a “cost” is associated with skipping over 1 or more German words. For example, assume that in translating Ich werde Ihnen die entsprechenden Anmerkungen aushaendigen. we have reached a state where “Ich” and “werde” have been translated into “I will” in English. A potential decoding decision at this point is to add the phrase “pass on” to the English hypothesis, at the same time absorbing “aushaendigen” from the German string. The cost of this decoding step will involve a number of factors, including a cost of skipping over a phrase of length 4 (i.e., Ihnen die entsprechenden Anmerkungen) in the German string. The ability to penalise “skips” of this type, and the potential to model multi-word phrases, are essentially the main strategies that the phrase-based system is able to employ when modeling differing word-order across different languages. In practice, when training the parameters of an SMT system, for example using the discriminative methods of (Och, 2003), the cost for skips of this kind is typically set to a very high value. In experiments with the system of (Koehn et al., 2003) we have found that in practice a large number of complete translations are completely monotonic (i.e., have skips), suggesting that the system has difficulty learning exactly what points in the translation should allow reordering. In summary, phrase-based systems have relatively limited potential to model word-order differences between different languages. The reordering stage described in this paper attempts to modify the source language (e.g., German) in such a way that its word order is very similar to that seen in the target language (e.g., English). In an ideal approach, the resulting translation problem that is passed on to the phrase-based system will be solvable using a completely monotonic translation, without any skips, and without requiring extremely long phrases to be translated (for example a phrasal translation corresponding to Ihnen die entsprechenden Anmerkungen aushaendigen). Note than an additional benefit of the reordering phase is that it may bring together groups of words in German which have a natural correspondance to phrases in English, but were unseen or rare in the original German text. For example, in the previous example, we might derive a correspondance between werde aushaendigen and will pass on that was not possible before reordering. Another example concerns verb-particle constructions, for example in Wir machen die Tuer auf machen and auf form a verb-particle construction. The reordering stage moves auf to precede machen, allowing a phrasal entry that “auf machen” is translated to to open in English. 
Without the reordering, the particle can be arbitrarily far from the verb that it modifies, and there is a danger in this example of translating machen as to make, the natural translation when no particle is present. 534 Original sentence: Ich werde Ihnen die entsprechenden Anmerkungen aushaendigen, damit Sie das eventuell bei der Abstimmung uebernehmen koennen. (I will to you the corresponding comments pass on, so that you them perhaps in the vote adopt can.) Reordered sentence: Ich werde aushaendigen Ihnen die entsprechenden Anmerkungen, damit Sie koennen uebernehmen das eventuell bei der Abstimmung. (I will pass on to you the corresponding comments, so that you can adopt them perhaps in the vote.) Figure 2: An example of the reordering process, showing the original German sentence and the sentence after reordering. 3 Clause Restructuring We now describe the method we use for reordering German sentences. As a first step in the reordering process, we parse the sentence using the parser described in (Dubey and Keller, 2003). The second step is to apply a sequence of rules that reorder the German sentence depending on the parse tree structure. See Figure 2 for an example German sentence before and after the reordering step. In the reordering phase, each of the following six restructuring steps were applied to a German parse tree, in sequence (see table 1 also, for examples of the reordering steps): [1] Verb initial In any verb phrase (i.e., phrase with label VP-...) find the head of the phrase (i.e., the child with label -HD) and move it into the initial position within the verb phrase. For example, in the parse tree in Figure 1, aushaendigen would be moved to precede Ihnen in the first verb phrase (VPOC), and uebernehmen would be moved to precede das in the second VP-OC. The subordinate clause would have the following structure after this transformation: S-MO KOUS-CP damit PPER-SB Sie VP-OC VVINF-HD uebernehmen PDS-OA das ADJD-MO eventuell PP-MO APPR-DA bei ART-DA der NN-NK Abstimmung VMFIN-HD koennen [2] Verb 2nd In any subordinate clause labelled S-..., with a complementizer KOUS, PREL, PWS or PWAV, find the head of the clause, and move it to directly follow the complementizer. For example, in the subordinate clause in Figure 1, the head of the clause koennen would be moved to follow the complementizer damit, giving the following structure: S-MO KOUS-CP damit VMFIN-HD koennen PPER-SB Sie VP-OC VVINF-HD uebernehmen PDS-OA das ADJD-MO eventuell PP-MO APPR-DA bei ART-DA der NN-NK Abstimmung [3] Move Subject For any clause (i.e., phrase with label S...), move the subject to directly precede the head. We define the subject to be the left-most child of the clause with label ...-SB or PPEREP, and the head to be the leftmost child with label ...-HD. For example, in the subordinate clause in Figure 1, the subject Sie would be moved to precede koennen, giving the following structure: S-MO KOUS-CP damit PPER-SB Sie VMFIN-HD koennen VP-OC VVINF-HD uebernehmen PDS-OA das ADJD-MO eventuell PP-MO APPR-DA bei ART-DA der NN-NK Abstimmung [4] Particles In verb particle constructions, move the particle to immediately precede the verb. More specifically, if a finite verb (i.e., verb tagged as VVFIN) and a particle (i.e., word tagged as PTKVZ) are found in the same clause, move the particle to precede the verb. 
As one example, the following clause contains both a verb (forden) as well as a particle (auf): S PPER-SB Wir VVFIN-HD fordern NP-OA ART das NN Praesidium PTKVZ-SVP auf After the transformation, the clause is altered to: S PPER-SB Wir PTKVZ-SVP auf VVFIN-HD fordern NP-OA ART das NN Praesidium 535 Transformation Example Verb Initial Before: Ich werde Ihnen die entsprechenden Anmerkungen aushaendigen, After: Ich werde aushaendigen Ihnen die entsprechenden Anmerkungen, English: I shall be passing on to you some comments, Verb 2nd Before: damit Sie uebernehmen das eventuell bei der Abstimmung koennen. After: damit koennen Sie uebernehmen das eventuell bei der Abstimmung . English: so that could you adopt this perhaps in the voting. Move Subject Before: damit koennen Sie uebernehmen das eventuell bei der Abstimmung. After: damit Sie koennen uebernehmen das eventuell bei der Abstimmung . English: so that you could adopt this perhaps in the voting. Particles Before: Wir fordern das Praesidium auf, After: Wir auf fordern das Praesidium, English: We ask the Bureau, Infinitives Before: Ich werde der Sache nachgehen dann, After: Ich werde nachgehen der Sache dann, English: I will look into the matter then, Negation Before: Wir konnten einreichen es nicht mehr rechtzeitig, After: Wir konnten nicht einreichen es mehr rechtzeitig, English: We could not hand it in in time, Table 1: Examples for each of the reordering steps. In each case the item that is moved is underlined. [5] Infinitives In some cases, infinitival verbs are still not in the correct position after transformations [1]–[4]. For this reason we add a second step that involves infinitives. First, we remove all internal VP nodes within the parse tree. Second, for any clause (i.e., phrase labeled S...), if the clause dominates both a finite and infinitival verb, and there is an argument (i.e., a subject, or an object) between the two verbs, then the infinitive is moved to directly follow the finite verb. As an example, the following clause contains an infinitival (einreichen) that is separated from a finite verb konnten by the direct object es: S PPER-SB Wir VMFIN-HD konnten PPER-OA es PTKNEG-NG nicht VP-OC VVINF-HD einreichen AP-MO ADV-MO mehr ADJD-HD rechtzeitig The transformation removes the VP-OC, and moves the infinitive, giving: S PPER-SB Wir VMFIN-HD konnten VVINF-HD einreichen PPER-OA es PTKNEG-NG nicht AP-MO ADV-MO mehr ADJD-HD rechtzeitig [6] Negation As a final step, we move negative particles. If a clause dominates both a finite and infinitival verb, as well as a negative particle (i.e., a word tagged as PTKNEG), then the negative particle is moved to directly follow the finite verb. As an example, the previous example now has the negative particle nicht moved, to give the following clause structure: S PPER-SB Wir VMFIN-HD konnten PTKNEG-NG nicht VVINF-HD einreichen PPER-OA es AP-MO ADV-MO mehr ADJD-HD rechtzeitig 4 Experiments This section describes experiments with the reordering approach. Our baseline is the phrase-based MT system of (Koehn et al., 2003). We trained this system on the Europarl corpus, which consists of 751,088 sentence pairs with 15,256,792 German words and 16,052,269 English words. Translation performance is measured on a 2000 sentence test set from a different part of the Europarl corpus, with average sentence length of 28 words. We use BLEU scores (Papineni et al., 2002) to measure translation accuracy. 
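To make the rule cascade concrete, the following is a minimal sketch of two of the six transformations ([1] Verb initial and [4] Particles) on a toy tree representation. It is not the authors' code: the real system operates on the output of the Dubey and Keller (2003) parser, handles more label variants, and interleaves these steps with the remaining four rules.

```python
class Node:
    """Toy parse-tree node for illustration."""
    def __init__(self, label, word=None, children=None):
        self.label = label            # e.g. "S", "VP-OC", "VVINF-HD", "PTKVZ-SVP"
        self.word = word              # lexical item, for leaves
        self.children = children or []

def verb_initial(node):
    """Rule [1]: within each verb phrase, move the -HD child to the initial position."""
    if node.label.startswith("VP"):
        for i, child in enumerate(node.children):
            if child.label.endswith("-HD"):
                node.children.insert(0, node.children.pop(i))
                break
    for child in node.children:
        verb_initial(child)

def particles(node):
    """Rule [4]: if a clause contains a finite verb (VVFIN) and a particle
    (PTKVZ), move the particle so that it directly precedes the verb."""
    if node.label.startswith("S"):
        tags = [c.label.split("-")[0] for c in node.children]
        if "VVFIN" in tags and "PTKVZ" in tags:
            verb_idx, part_idx = tags.index("VVFIN"), tags.index("PTKVZ")
            particle = node.children.pop(part_idx)
            if part_idx < verb_idx:
                verb_idx -= 1
            node.children.insert(verb_idx, particle)
    for child in node.children:
        particles(child)
```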
We applied our re536 Annotator 2 Annotator 1 R B E R 33 2 5 B 2 13 5 E 9 4 27 Table 2: Table showing the level of agreement between two annotators on 100 translation judgements. R gives counts corresponding to translations where an annotator preferred the reordered system; B signifies that the annotator preferred the baseline system; E means an annotator judged the two systems to give equal quality translations. ordering method to both the training and test data, and retrained the system on the reordered training data. The BLEU score for the new system was 26.8%, an improvement from 25.2% BLEU for the baseline system. 4.1 Human Translation Judgements We also used human judgements of translation quality to evaluate the effectiveness of the reordering rules. We randomly selected 100 sentences from the test corpus where the English reference translation was between 10 and 20 words in length.1 For each of these 100 translations, we presented the two annotators with three translations: the reference (human) translation, the output from the baseline system, and the output from the system with reordering. No indication was given as to which system was the baseline system, and the ordering in which the baseline and reordered translations were presented was chosen at random on each example, to prevent ordering effects in the annotators’ judgements. For each example, we asked each of the annotators to make one of two choices: 1) an indication that one translation was an improvement over the other; or 2) an indication that the translations were of equal quality. Annotator 1 judged 40 translations to be improved by the reordered model; 40 translations to be of equal quality; and 20 translations to be worse under the reordered model. Annotator 2 judged 44 translations to be improved by the reordered model; 37 translations to be of equal quality; and 19 translations to be worse under the reordered model. Table 2 gives figures indicating agreement rates between the annotators. Note that if we only consider preferences where both annotators were in agree1We chose these shorter sentences for human evaluation because in general they include a single clause, which makes human judgements relatively straightforward. ment (and consider all disagreements to fall into the “equal” category), then 33 translations improved under the reordering system, and 13 translations became worse. Figure 3 shows a random selection of the translations where annotator 1 judged the reordered model to give an improvement; Figure 4 shows examples where the baseline system was preferred by annotator 1. We include these examples to give a qualitative impression of the differences between the baseline and reordered system. Our (no doubt subjective) impression is that the cases in figure 3 are more clear cut instances of translation improvements, but we leave the reader to make his/her own judgement on this point. 4.2 Statistical Significance We now describe statistical significance tests for our results. We believe that applying significance tests to Bleu scores is a subtle issue, for this reason we go into some detail in this section. We used the sign test (e.g., see page 166 of (Lehmann, 1986)) to test the statistical significance of our results. For a source sentence , the sign test requires a function that is defined as follows:
$$
f(s) = \begin{cases}
+1 & \text{if the reordered system produces a better translation for } s \text{ than the baseline} \\
-1 & \text{if the baseline produces a better translation for } s \text{ than the reordered system} \\
\phantom{+}0 & \text{if the two systems produce equal quality translations on } s
\end{cases}
$$
We assume that sentences $s$ are drawn from some underlying distribution $P(s)$, and that the test set consists of $n$ independently, identically distributed (IID) sentences from this distribution. We can define the following probabilities:
$$p_{+} = P(f(s) = +1) \quad (1) \qquad\qquad p_{-} = P(f(s) = -1) \quad (2)$$
where the probability is taken with respect to the distribution $P(s)$. The sign test has the null hypothesis $H_0 : p_{+} \le p_{-}$ and the alternative hypothesis $H_1 : p_{+} > p_{-}$. Given a sample of $n$ test points $\{s_1, s_2, \ldots, s_n\}$, the sign test depends on calculation of the following counts: $c_{+} = |\{i : f(s_i) = +1\}|$, $c_{-} = |\{i : f(s_i) = -1\}|$, and $c_{0} = |\{i : f(s_i) = 0\}|$, where $|\cdot|$ denotes the cardinality of a set. We now come to the definition of $f$ — how should we judge whether a translation from one system is better or worse than the translation from another system? A critical problem with Bleu scores is that they are a function of an entire test corpus and do not give translation scores for single sentences. Ideally we would have some measure $R(s)$ of the quality of the translation of sentence $s$ under the reordered system, and a corresponding function $B(s)$ that measures the quality of the baseline translation. We could then define $f(s) = +1$ if $R(s) > B(s)$, $f(s) = -1$ if $R(s) < B(s)$, and $f(s) = 0$ if $R(s) = B(s)$. Unfortunately Bleu scores do not give per-sentence measures of this kind, and thus do not allow a definition of $f$ in this way. In general the lack of per-sentence scores makes it challenging to apply significance tests to Bleu scores.2 To get around this problem, we make the following approximation. For any test sentence $s_i$, we calculate $f(s_i)$ as follows. First, we define $B$ to be the Bleu score for the test corpus when translated by the baseline model. Next, we define $B_i$ to be the Bleu score when all sentences other than $s_i$ are translated by the baseline model, and where $s_i$ itself is translated by the reordered model. We then define $f(s_i) = +1$ if $B_i > B$, $f(s_i) = -1$ if $B_i < B$, and $f(s_i) = 0$ if $B_i = B$.
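The computation implied by this approximation is compact. The sketch below is ours, not the paper's: it assumes the leave-one-in Bleu scores have already been computed, and it uses an exact one-sided binomial formulation of the sign test with ties discarded.

```python
from math import comb

def judgements(corpus_bleu_baseline, leave_one_in_bleu):
    """leave_one_in_bleu[i] is the corpus Bleu score when only sentence i is
    translated by the reordered system (the approximation to f described above)."""
    f = [(+1 if b_i > corpus_bleu_baseline else -1 if b_i < corpus_bleu_baseline else 0)
         for b_i in leave_one_in_bleu]
    return f.count(+1), f.count(-1), f.count(0)

def sign_test_p_value(c_plus, c_minus):
    """Exact one-sided sign test, ignoring ties: P(X >= c_plus) for X ~ Binomial(n, 1/2)."""
    n = c_plus + c_minus
    return sum(comb(n, k) for k in range(c_plus, n + 1)) / 2 ** n
```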
7@ If @ Note that strictly speaking, this definition of @F is not valid, as it depends on the entire set of sample points ,045454 76 rather than @ alone. However, we believe it is a reasonable approximation to an ideal 2The lack of per-sentence scores means that it is not possible to apply standard statistical tests such as the sign test or the ttest (which would test the hypothesis 0 , where is the expected value under ). Note that previous work (Koehn, 2004; Zhang and Vogel, 2004) has suggested the use of bootstrap tests (Efron and Tibshirani, 1993) for the calculation of confidence intervals for Bleu scores. (Koehn, 2004) gives empirical evidence that these give accurate estimates for Bleu statistics. However, correctness of the bootstrap method relies on some technical properties of the statistic (e.g., Bleu scores) being used (e.g., see (Wasserman, 2004) theorem 8.3); (Koehn, 2004; Zhang and Vogel, 2004) do not discuss whether Bleu scores meet any such criteria, which makes us uncertain of their correctness when applied to Bleu scores. function that indicates whether the translations have improved or not under the reordered system. Given this definition of , we found that 859 , 8B , and 8 % . (Thus 52.85% of all test sentences had improved translations under the baseline system, 36.4% of all sentences had worse translations, and 10.75% of all sentences had the same quality as before.) If our definition of was correct, these values for 8 and 8B would be significant at the level ( 4 . We can also calculate confidence intervals for the results. Define to be the probability that the reordered system improves on the baseline system, given that the two systems do not have equal performance. The relative frequency estimate of is !4"# . Using a normal approximation (e.g., see Example 6.17 from (Wasserman, 2004)) a 95% confidence interval for a sample size of 1785 is %$&4"'# , giving a 95% confidence interval of ()*4"!# 2+* 4"#-, for . 5 Conclusions We have demonstrated that adding knowledge about syntactic structure can significantly improve the performance of an existing state-of-the-art statistical machine translation system. Our approach makes use of syntactic knowledge to overcome a weakness of tradition SMT systems, namely long-distance reordering. We pose clause restructuring as a problem for machine translation. Our current approach is based on hand-crafted rules, which are based on our linguistic knowledge of how German and English syntax differs. In the future we may investigate data-driven approaches, in an effort to learn reordering models automatically. While our experiments are on German, other languages have word orders that are very different from English, so we believe our methods will be generally applicable. Acknowledgements We would like to thank Amit Dubey for providing the German parser used in our experiments. Thanks to Brooke Cowan and Luke Zettlemoyer for providing the human judgements of translation performance. Thanks also to Regina Barzilay for many helpful comments on an earlier draft of this paper. Any remaining errors are of course our own. Philipp Koehn was supported by a grant from NTT, Agmt. dtd. 6/21/1998. Michael Collins was supported by NSF grants IIS-0347631 and IIS-0415030. 538 R: the current difficulties should encourage us to redouble our efforts to promote cooperation in the euro-mediterranean framework. C: the current problems should spur us to intensify our efforts to promote cooperation within the framework of the europamittelmeerprozesses. 
B: the current problems should spur us, our efforts to promote cooperation within the framework of the europamittelmeerprozesses to be intensified. R: propaganda of any sort will not get us anywhere. C: with any propaganda to lead to nothing. B: with any of the propaganda is nothing to do here. R: yet we would point out again that it is absolutely vital to guarantee independent financial control. C: however, we would like once again refer to the absolute need for the independence of the financial control. B: however, we would like to once again to the absolute need for the independence of the financial control out. R: i cannot go along with the aims mr brok hopes to achieve via his report. C: i cannot agree with the intentions of mr brok in his report persecuted. B: i can intentions, mr brok in his report is not agree with. R: on method, i think the nice perspectives, from that point of view, are very interesting. C: what the method is concerned, i believe that the prospects of nice are on this point very interesting. B: what the method, i believe that the prospects of nice in this very interesting point. R: secondly, without these guarantees, the fall in consumption will impact negatively upon the entire industry. C: and, secondly, the collapse of consumption without these guarantees will have a negative impact on the whole sector. B: and secondly, the collapse of the consumption of these guarantees without a negative impact on the whole sector. R: awarding a diploma in this way does not contravene uk legislation and can thus be deemed legal. C: since the award of a diploms is not in this form contrary to the legislation of the united kingdom, it can be recognised as legitimate. B: since the award of a diploms in this form not contrary to the legislation of the united kingdom is, it can be recognised as legitimate. R: i should like to comment briefly on the directive concerning undesirable substances in products and animal nutrition. C: i would now like to comment briefly on the directive on undesirable substances and products of animal feed. B: i would now like to briefly to the directive on undesirable substances and products in the nutrition of them. R: it was then clearly shown that we can in fact tackle enlargement successfully within the eu ’s budget. C: at that time was clear that we can cope with enlargement, in fact, within the framework drawn by the eu budget. B: at that time was clear that we actually enlargement within the framework able to cope with the eu budget, the drawn. Figure 3: Examples where annotator 1 judged the reordered system to give an improved translation when compared to the baseline system. Recall that annotator 1 judged 40 out of 100 translations to fall into this category. These examples were chosen at random from these 40 examples, and are presented in random order. R is the human (reference) translation; C is the translation from the system with reordering; B is the output from the baseline system. References Alshawi, H. (1996). Head automata and bilingual tiling: Translation with minimal representations (invited talk). In Proceedings of ACL 1996. Berger, A. L., Pietra, S. A. D., and Pietra, V. J. D. (1996). A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–69. Brown, P. F., Pietra, S. A. D., Pietra, V. J. D., and Mercer, R. L. (1993). The mathematics of statistical machine translation. Computational Linguistics, 19(2):263–313. Charniak, E., Knight, K., and Yamada, K. (2003). 
Syntax-based language models for statistical machine translation. In Proceedings of the MT Summit IX. Dubey, A. and Keller, F. (2003). Parsing german with sisterhead dependencies. In Proceedings of ACL 2003. Efron, B. and Tibshirani, R. J. (1993). An Introduction to the Bootstrap. Springer-Verlag. Galley, M., Hopkins, M., Knight, K., and Marcu, D. (2004). What’s in a translation rule? In Proceedings of HLT-NAACL 2004. Gildea, D. (2003). Loosely tree-based alignment for machine translation. In Proceedings of ACL 2003. Graehl, J. and Knight, K. (2004). Training tree transducers. In Proceedings of HLT-NAACL 2004. Koehn, P. (2004). Statistical significance tests for machine translation evaluation. In Lin, D. and Wu, D., editors, Proceedings of EMNLP 2004. Koehn, P. and Knight, K. (2003). Feature-rich statistical translation of noun phrases. In Hinrichs, E. and Roth, D., editors, Proceedings of ACL 2003, pages 311–318. Koehn, P., Och, F. J., and Marcu, D. (2003). Statistical phrase based translation. In Proceedings of HLT-NAACL 2003. Lehmann, E. L. (1986). Testing Statistical Hypotheses (Second Edition). Springer-Verlag. 539 R: on the other hand non-british hauliers pay nothing when travelling in britain. C: on the other hand, foreign kraftverkehrsunternehmen figures anything if their lorries travelling through the united kingdom. B: on the other hand, figures foreign kraftverkehrsunternehmen nothing if their lorries travel by the united kingdom. R: i think some of the observations made by the consumer organisations are included in the commission ’s proposal. C: i think some of these considerations, the social organisations will be addressed in the commission proposal. B: i think some of these considerations, the social organisations will be taken up in the commission ’s proposal. R: during the nineties the commission produced several recommendations on the issue but no practical solutions were found. C: in the nineties, there were a number of recommendations to the commission on this subject to achieve without, however, concrete results. B: in the 1990s, there were a number of recommendations to the commission on this subject without, however, to achieve concrete results. R: now, in a panic, you resign yourselves to action. C: in the current paniksituation they must react necessity. B: in the current paniksituation they must of necessity react. R: the human aspect of the whole issue is extremely important. C: the whole problem is also a not inconsiderable human side. B: the whole problem also has a not inconsiderable human side. R: in this area we can indeed talk of a european public prosecutor. C: and we are talking here, in fact, a european public prosecutor. B: and here we can, in fact speak of a european public prosecutor. R: we have to make decisions in nice to avoid endangering enlargement, which is our main priority. C: we must take decisions in nice, enlargement to jeopardise our main priority. B: we must take decisions in nice, about enlargement be our priority, not to jeopardise. R: we will therefore vote for the amendments facilitating its use. C: in this sense, we will vote in favour of the amendments which, in order to increase the use of. B: in this sense we vote in favour of the amendments which seek to increase the use of. R: the fvo mission report mentioned refers specifically to transporters whose journeys originated in ireland. C: the quoted report of the food and veterinary office is here in particular to hauliers, whose rushed into shipments of ireland. 
B: the quoted report of the food and veterinary office relates in particular, to hauliers, the transport of rushed from ireland. Figure 4: Examples where annotator 1 judged the reordered system to give a worse translation than the baseline system. Recall that annotator 1 judged 20 out of 100 translations to fall into this category. These examples were chosen at random from these 20 examples, and are presented in random order. R is the human (reference) translation; C is the translation from the system with reordering; B is the output from the baseline system. Marcu, D. and Wong, W. (2002). A phrase-based, joint probability model for statistical machine translation. In Proceedings of EMNLP 2002. Melamed, I. D. (2004). Statistical machine translation by parsing. In Proceedings of ACL 2004. Niessen, S. and Ney, H. (2004). Statistical machine translation with scarce resources using morpho-syntactic information. Computational Linguistics, 30(2):181–204. Och, F. J. (2003). Minimum error rate training in statistical machine translation. In Proceedings of ACL 2003. Och, F. J., Gildea, D., Khudanpur, S., Sarkar, A., Yamada, K., Fraser, A., Kumar, S., Shen, L., Smith, D., Eng, K., Jain, V., Jin, Z., and Radev, D. (2004). A smorgasbord of features for statistical machine translation. In Proceedings of HLTNAACL 2004. Och, F. J., Tillmann, C., and Ney, H. (1999). Improved alignment models for statistical machine translation. In Proceedings of EMNLP 1999, pages 20–28. Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002. Shen, L., Sarkar, A., and Och, F. J. (2004). Discriminative reranking for machine translation. In Proceedings of HLTNAACL 2004. Wasserman, L. (2004). All of Statistics. Springer-Verlag. Wu, D. (1997). Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3). Xia, F. and McCord, M. (2004). Improving a statistical MT system with automatically learned rewrite patterns. In Proceedings of Coling 2004. Yamada, K. and Knight, K. (2001). A syntax-based statistical translation model. In Proceedings of ACL 2001. Zhang, Y. and Vogel, S. (2004). Measuring confidence intervals for the machine translation evaluation metrics. In Proceedings of the Tenth Conference on Theoretical and Methodological Issues in Machine Translation (TMI). 540 | 2005 | 66 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 541–548, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Machine Translation Using Probabilistic Synchronous Dependency Insertion Grammars Yuan Ding Martha Palmer Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104, USA {yding, mpalmer}@linc.cis.upenn.edu Abstract Syntax-based statistical machine translation (MT) aims at applying statistical models to structured data. In this paper, we present a syntax-based statistical machine translation system based on a probabilistic synchronous dependency insertion grammar. Synchronous dependency insertion grammars are a version of synchronous grammars defined on dependency trees. We first introduce our approach to inducing such a grammar from parallel corpora. Second, we describe the graphical model for the machine translation task, which can also be viewed as a stochastic tree-to-tree transducer. We introduce a polynomial time decoding algorithm for the model. We evaluate the outputs of our MT system using the NIST and Bleu automatic MT evaluation software. The result shows that our system outperforms the baseline system based on the IBM models in both translation speed and quality. 1 Introduction Statistical approaches to machine translation, pioneered by (Brown et al., 1993), achieved impressive performance by leveraging large amounts of parallel corpora. Such approaches, which are essentially stochastic string-to-string transducers, do not explicitly model natural language syntax or semantics. In reality, pure statistical systems sometimes suffer from ungrammatical outputs, which are understandable at the phrasal level but sometimes hard to comprehend as a coherent sentence. In recent years, syntax-based statistical machine translation, which aims at applying statistical models to structural data, has begun to emerge. With the research advances in natural language parsing, especially the broad-coverage parsers trained from treebanks, for example (Collins, 1999), the utilization of structural analysis of different languages has been made possible. Ideally, by combining the natural language syntax and machine learning methods, a broad-coverage and linguistically wellmotivated statistical MT system can be constructed. However, structural divergences between languages (Dorr, 1994),which are due to either systematic differences between languages or loose translations in real corpora,pose a major challenge to syntax-based statistical MT. As a result, the syntax based MT systems have to transduce between non-isomorphic tree structures. (Wu, 1997) introduced a polynomial-time solution for the alignment problem based on synchronous binary trees. (Alshawi et al., 2000) represents each production in parallel dependency trees as a finite-state transducer. Both approaches learn the tree representations directly from parallel sentences, and do not make allowances for nonisomorphic structures. (Yamada and Knight, 2001, 2002) modeled translation as a sequence of tree operations transforming a syntactic tree into a string of the target language. When researchers try to use syntax trees in both languages, the problem of non-isomorphism must be addressed. In theory, stochastic tree transducers and some versions of synchronous grammars provide solutions for the non-isomorphic tree based transduction problem and hence possible solutions for MT. 
Synchronous Tree Adjoining Grammars, proposed by (Shieber and Schabes, 1990), were introduced primarily for semantics but were later also proposed for translation. Eisner (2003) proposed viewing the MT problem as a probabilistic synchronous tree substitution grammar parsing 541 problem. Melamed (2003, 2004) formalized the MT problem as synchronous parsing based on multitext grammars. Graehl and Knight (2004) defined training and decoding algorithms for both generalized tree-to-tree and tree-to-string transducers. All these approaches, though different in formalism, model the two languages using tree-based transduction rules or a synchronous grammar, possibly probabilistic, and using multi-lemma elementary structures as atomic units. The machine translation is done either as a stochastic tree-to-tree transduction or a synchronous parsing process. However, few of the above mentioned formalisms have large scale implementations. And to the best of our knowledge, the advantages of syntax based statistical MT systems over pure statistical MT systems have yet to be empirically verified. We believe difficulties in inducing a synchronous grammar or a set of tree transduction rules from large scale parallel corpora are caused by: 1. The abilities of synchronous grammars and tree transducers to handle non-isomorphism are limited. At some level, a synchronous derivation process must exist between the source and target language sentences. 2. The training and/or induction of a synchronous grammar or a set of transduction rules are usually computationally expensive if all the possible operations and elementary structures are allowed. The exhaustive search for all the possible sub-sentential structures in a syntax tree of a sentence is NP-complete. 3. The problem is aggravated by the non-perfect training corpora. Loose translations are less of a problem for string based approaches than for approaches that require syntactic analysis. Hajic et al. (2002) limited non-isomorphism by n-to-m matching of nodes in the two trees. However, even after extending this model by allowing cloning operations on subtrees, Gildea (2003) found that parallel trees over-constrained the alignment problem, and achieved better results with a tree-to-string model than with a tree-to-tree model using two trees. In a different approach, Hwa et al. (2002) aligned the parallel sentences using phrase based statistical MT models and then projected the alignments back to the parse trees. This motivated us to look for a more efficient and effective way to induce a synchronous grammar from parallel corpora and to build an MT system that performs competitively with the pure statistical MT systems. We chose to build the synchronous grammar on the parallel dependency structures of the sentences. The synchronous grammar is induced by hierarchical tree partitioning operations. The rest of this paper describes the system details as follows: Sections 2 and 3 describe the motivation behind the usage of dependency structures and how a version of synchronous dependency grammar is learned. This grammar is used as the primary translation knowledge source for our system. Section 4 defines the tree-to-tree transducer and the graphical model for the stochastic tree-to-tree transduction process and introduces a polynomial time decoding algorithm for the transducer. We evaluate our system in section 5 with the NIST/Bleu automatic MT evaluation software and the results are discussed in Section 6. 2 The Synchronous Grammar 2.1 Why Dependency Structures? 
According to Fox (2002), dependency representations have the best inter-lingual phrasal cohesion properties. The percentage for head crossings is 12.62% and that of modifier crossings is 9.22%. Furthermore, a grammar based on dependency structures has the advantage of being simple in formalism yet having CFG equivalent formal generative capacity (Ding and Palmer, 2004b). Dependency structures are inherently lexicalized as each node is one word. In comparison, phrasal structures (treebank style trees) have two node types: terminals store the lexical items and non-terminals store word order and phrasal scopes. 2.2 Synchronous Dependency Insertion Grammars Ding and Palmer (2004b) described one version of synchronous grammar: Synchronous Dependency Insertion Grammars. A Dependency Insertion Grammars (DIG) is a generative grammar formalism that captures word order phenomena within the dependency representation. In the scenario of two languages, the two sentences in the source and target languages can be modeled as being generated from a synchronous derivation process. A synchronous derivation process for the two syntactic structures of both languages suggests the level of cross-lingual isomorphism between the two trees (e.g. Synchronous Tree Adjoining Grammars (Shieber and Schabes, 1990)). 542 Apart from other details, a DIG can be viewed as a tree substitution grammar defined on dependency trees (as opposed to phrasal structure trees). The basic units of the grammar are elementary trees (ET), which are sub-sentential dependency structures containing one or more lexical items. The synchronous version, SDIG, assumes that the isomorphism of the two syntactic structures is at the ET level, rather than at the word level, hence allowing non-isomorphic tree to tree mapping. We illustrate how the SDIG works using the following pseudo-translation example: y [Source] The girl kissed her kitty cat. y [Target] The girl gave a kiss to her cat. Figure 1. An example Figure 2. Tree-to-tree transduction Almost any tree-transduction operations defined on a single node will fail to generate the target sentence from the source sentence without using insertion/deletion operations. However, if we view each dependency tree as an assembly of indivisible sub-sentential elementary trees (ETs), we can find a proper way to transduce the input tree to the output tree. An ET is a single “symbol” in a transducer’s language. As shown in Figure 2, each circle stands for an ET and thick arrows denote the transduction of each ET as a single symbol. 3 Inducing a Synchronous Dependency Insertion Grammar As the start to our syntax-based SMT system, the SDIG must be learned from the parallel corpora. 3.1 Cross-lingual Dependency Inconsistencies One straightforward way to induce a generative grammar is using EM style estimation on the generative process. Different versions of such training algorithms can be found in (Hajic et al., 2002; Eisner 2003; Gildea 2003; Graehl and Knight 2004). However, a synchronous derivation process cannot handle two types of cross-language mappings: crossing-dependencies (parent-descendent switch) and broken dependencies (descendent appears elsewhere), which are illustrated below: Figure 3. Cross-lingual dependency consistencies In the above graph, the two sides are English and the foreign dependency trees. Each node in a tree stands for a lemma in a dependency tree. The arrows denote aligned nodes and those resulting inconsistent dependencies are marked with a “*”. 
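To make the elementary-tree view concrete, the following sketch shows one way a dependency tree could be represented and cut into ETs once a set of ET-root words has been fixed. The Node class, the field names, and the choice of ET roots are illustrative assumptions, not the authors' implementation.

```python
# A sketch (not the authors' code) of a dependency tree and its
# decomposition into elementary trees (ETs).

class Node:
    """One word in a dependency tree."""
    def __init__(self, word, children=None):
        self.word = word
        self.children = children or []

def decompose(root, et_roots):
    """Cut a dependency tree into ETs.

    Every node whose word is in et_roots starts a new ET; every other
    node stays in the ET of its nearest ancestor.  Each ET is returned
    as a list of its words.
    """
    ets = [[]]

    def walk(node, current):
        current.append(node.word)
        for child in node.children:
            if child.word in et_roots:
                new_et = []
                ets.append(new_et)
                walk(child, new_et)
            else:
                walk(child, current)

    walk(root, ets[0])
    return ets

# Hypothetical dependency structure for the target side of the example
# in Section 2.2: "The girl gave a kiss to her cat."
tree = Node("gave", [
    Node("girl", [Node("The")]),
    Node("kiss", [Node("a")]),
    Node("to", [Node("cat", [Node("her")])]),
])
print(decompose(tree, et_roots={"girl", "kiss", "cat"}))
# -> [['gave', 'to'], ['girl', 'The'], ['kiss', 'a'], ['cat', 'her']]
```

Under this view, transduction amounts to mapping each source-side ET to a target-side ET and reattaching the translated pieces, which is the picture of Figure 2.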
Fox (2002) collected the statistics mainly on French and English data: in dependency representations, the percentage of head crossings per chance (case [b] in the graph) is 12.62%. Using the statistics on cross-lingual dependency consistencies from a small word to word aligned Chinese-English parallel corpus1, we found that the percentage of crossing-dependencies (case [b]) between Chinese and English is 4.7% while that of broken dependencies (case [c]) is 59.3%. The large number of broken dependencies presents a major challenge for grammar induction based on a top-down style EM learning process. Such broken and crossing dependencies can be modeled by SDIG if they appear inside a pair of elementary trees. However, if they appear between the elementary trees, they are not compatible with the isomorphism assumption on which SDIG is based. Nevertheless, the hope is that the fact that the training corpus contains a significant percentage of dependency inconsistencies does not mean that during decoding the target language sentence cannot be written in a dependency consistent way. 3.2 Grammar Induction by Synchronous Hierarchical Tree Partitioning (Ding and Palmer, 2004a) gave a polynomial time solution for learning parallel sub-sentential de 1 Total 826 sentence pairs, 9957 Chinese words, 12660 English words. Data made available by the courtesy of Microsoft Research, Asia and IBM T.J. Watson Research. 543 pendency structures from non-isomorphic dependency trees. Our approach, while similar to (Ding and Palmer, 2004a) in that we also iteratively partition the parallel dependency trees based on a heuristic function, departs (Ding and Palmer, 2004a) in three ways: (1) we base the hierarchical tree partitioning operations on the categories of the dependency trees; (2) the statistics of the resultant tree pairs from the partitioning operation are collected at each iteration rather than at the end of the algorithm; (3) we do not re-train the word to word probabilities at each iteration. Our grammar induction algorithm is sketched below: Step 0. View each tree as a “bag of words” and train a statistical translation model on all the tree pairs to acquire word-to-word translation probabilities. In our implementation, the IBM Model 1 (Brown et al., 1993) is used. Step 1. Let i denote the current iteration and let [ ] C CategorySequence i = be the current syntactic category set. For each tree pair in the corpus, do { a) For the tentative synchronous partitioning operation, use a heuristic function to select the BEST word pair * * ( , ) i j e f , where both * * , i j e f are NOT “chosen”, * ( ) i Category e C ∈ and * ( ) j Category f C ∈ . b) If * * ( , ) i j e f is found in (a), mark * * , i j e f as “chosen” and go back to (a), else go to (c). c) Execute the synchronous tree partitioning operation on all the “chosen” word pairs on the tree pair. Hence, several new tree pairs are created. Replace the old tree pair with the new tree pairs together with the rest of the old tree pair. d) Collect the statistics for all the new tree pairs as elementary tree pairs. } Step 2. 1 i i = + . Go to Step 1 for the next iteration. At each iteration, one specific set of categories of nodes is handled. The category sequence we used in the grammar induction is: 1. Top-NP: the noun phrases that do not have another noun phrase as parent or ancestor. 2. NP: all the noun phrases 3. VP, IP, S, SBAR: verb phrases equivalents. 4. PP, ADJP, ADVP, JJ, RB: all the modifiers 5. CD: all the numbers. 
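The induction loop above can be rendered schematically as follows. The data layout, the candidate enumeration, and the scoring and partitioning functions are simplified placeholders (the real heuristic is described in Section 3.3); only the control flow — iterate over the category sequence, repeatedly pick the best unchosen word pair, partition, and collect the resulting ET pairs — follows the algorithm text.

```python
# A schematic, runnable rendering of the induction loop; data structures
# and scoring are placeholders, only the control flow follows the text.

from collections import Counter

CATEGORY_SEQUENCE = [
    {"TOP-NP"},
    {"NP"},
    {"VP", "IP", "S", "SBAR"},
    {"PP", "ADJP", "ADVP", "JJ", "RB"},
    {"CD"},
]

def heuristic_score(e, f, tree_pair):
    # Placeholder for the interpolated heuristics of Section 3.3.
    return tree_pair["score"].get((e, f), 0.0)

def best_word_pair(tree_pair, categories, chosen_words):
    # Step 1(a): best word pair whose words are unchosen and whose
    # categories are handled in the current iteration.
    candidates = [
        (heuristic_score(e, f, tree_pair), e, f)
        for e, cat_e, f, cat_f in tree_pair["candidates"]
        if cat_e in categories and cat_f in categories
        and e not in chosen_words and f not in chosen_words
    ]
    if not candidates:
        return None
    _, e, f = max(candidates)
    return e, f

def partition(tree_pair, chosen_pairs):
    # Step 1(c): placeholder that pretends each chosen pair yields one
    # new elementary-tree pair.
    return [("ET_e:" + e, "ET_f:" + f) for e, f in chosen_pairs]

def induce(corpus):
    et_counts = Counter()                      # Step 1(d): ET-pair statistics
    for categories in CATEGORY_SEQUENCE:       # Step 2: next iteration
        for tree_pair in corpus:
            chosen_pairs, chosen_words = [], set()
            while True:                        # Step 1(a)-(b)
                pair = best_word_pair(tree_pair, categories, chosen_words)
                if pair is None:
                    break
                chosen_pairs.append(pair)
                chosen_words.update(pair)
            et_counts.update(partition(tree_pair, chosen_pairs))
    return et_counts

# Toy tree pair built from the example of Figure 4 (scores are made up).
toy = {"candidates": [("I", "NP", "wo", "NP"), ("been", "VP", "zhu", "VP")],
       "score": {("I", "wo"): 0.9, ("been", "zhu"): 0.7}}
print(induce([toy]))
```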
We first process top NP chunks because they are the most stable between languages. Interestingly, NPs are also used as anchor points to learn monolingual paraphrases (Ibrahim et al., 2003). The phrasal structure categories can be extracted from automatic parsers using methods in (Xia, 2001). An illustration is given below (Chinese in pinyin form). The placement of the dependency arcs reflects the relative word order between a parent node and all its immediate children. The collected ETs are put into square boxes and the partitioning operations taken are marked with dotted arrows. y [English] I have been in Canada since 1947. y [Chinese] Wo 1947 nian yilai yizhi zhu zai jianada. y [Glossary] I 1947 year since always live in Canada [ ITERATION 1 & 2 ] Partition at word pair (“I” and “wo”) (“Canada” and “janada”) [ ITERATION 3 ] (“been” and “zhu”) are chosen but no partition operation is taken because they are roots. [ ITERATION 4 ] Partition at word pair (“since” and “yilai”) (“in” and “zai”) [ ITERATION 5 ] Partition at “1947” and “1947” [ FINALLY ] Total of 6 resultant ET pairs (figure omitted) Figure 4. An Example 3.3 Heuristics Similar to (Ding and Palmer, 2004a), we also use a heuristic function in Step 1(a) of the algorithm to rank all the word pairs for the tentative tree parti544 tioning operation. The heuristic function is based on a set of heuristics, most of which are similar to those in (Ding and Palmer, 2004a). For a word pair ( , ) i j e f for the tentative partitioning operation, we briefly describe the heuristics: y Inside-outside probabilities: We borrow the idea from PCFG parsing. This is the probability of an English subtree (inside) generating a foreign subtree and the probability of the English residual tree (outside) generating a foreign residual tree. Here both probabilities are based on a “bag of words” model. y Inside-outside penalties: here the probabilities of the inside English subtree generating the outside foreign residual tree and outside English residual tree generating the inside English subtree are used as penalty terms. y Entropy: the entropy of the word to word translation probability of the English word ie . y Part-of-Speech mapping template: whether the POS tags of the two words are in the “highly likely to match” POS tag pairs. y Word translation probability: P( | ) j i f e . y Rank: the rank of the word to word probability of jf in as a translation of ie among all the foreign words in the current tree. The above heuristics are a set of real valued numbers. We use a Maximum Entropy model to interpolate the heuristics in a log-linear fashion, which is different from the error minimization training in (Ding and Palmer, 2004a). ( ) 0 1 P | ( , ), ( , )... ( , ) 1 exp ( , ) i j i j n i j k k i j s k y h e f h e f h e f h e f Z λ λ = + ∑ (1) where (0,1) y = as labeled in the training data whether the two words are mapped with each other. The MaxEnt model is trained using the same word level aligned parallel corpus as the one in Section 3.1. Although the training corpus isn’t large, the fact that we only have a handful of parameters to fit eased the problem. 3.4 A Scaled-down SDIG It is worth noting that the set of derived parallel dependency Elementary Trees is not a full-fledged SDIG yet. Many features in the SDIG formalism such as arguments, head percolation, etc. are not yet filled. We nevertheless use this derived grammar as a Mini-SDIG, assuming the unfilled features as empty by default. A full-fledged SDIG remains a goal for future research. 
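One way to read the log-linear interpolation of Equation (1) is as a two-class logistic model over the heuristic scores, as sketched below; the heuristic values and weights are invented for illustration, and in the paper the weights are fit on the word-aligned data rather than set by hand.

```python
import math

def p_map(h, lam, lam0=0.0):
    """P(y = 1 | h_1..h_n): logistic form of a two-class MaxEnt model."""
    s = lam0 + sum(l * v for l, v in zip(lam, h))
    return 1.0 / (1.0 + math.exp(-s))

# Hypothetical heuristic vector for one candidate word pair (e_i, f_j):
# [inside-outside probability, inside-outside penalty, entropy of e_i,
#  POS mapping template, P(f_j | e_i), rank] -- all values invented.
h   = [0.42, -0.10, 1.30, 1.0, 0.25, 1.0]
lam = [2.0, 1.5, -0.3, 0.8, 3.0, -0.5]
print(round(p_map(h, lam), 3))
```

In Step 1(a) of the induction algorithm, the candidate word pairs would then be ranked by this probability.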
4 The Machine Translation System 4.1 System Architecture As discussed before (see Figure 1 and 2), the architecture of our syntax based statistical MT system is illustrated in Figure 5. Note that this is a nondeterministic process. The input sentence is first parsed using an automatic parser and a dependency tree is derived. The rest of the pipeline can be viewed as a stochastic tree transducer. The MT decoding starts first by decomposing the input dependency tree in to elementary trees. Several different results of the decomposition are possible. Each decomposition is indeed a derivation process on the foreign side of SDIG. Then the elementary trees go through a transfer phase and target ETs are combined together into the output. Figure 5. System architecture 4.2 The Graphical Model The stochastic tree-to-tree transducer we propose models MT as a probabilistic optimization process. Let f be the input sentence (foreign language), and e be the output sentence (English). We have P( | )P( ) P( | ) P( ) f e e e f f = , and the best translation is: * argmax P( | )P( ) e e f e e = (2) P( | ) f e and P( )e are also known as the “translation model” (TM) and the “language model” (LM). Assuming the decomposition of the foreign tree is given, our approach, which is based on ETs, uses the graphical model shown in Figure 6. In the model, the left side is the input dependency tree (foreign language) and the right side is the output dependency tree (English). Each circle stands for an ET. The solid lines denote the syntactical dependencies while the dashed arrows denote the statistical dependencies. 545 Figure 6 The graphical model Let T( ) x be the dependency tree constructed from sentence x . A tree-decomposition function D( )t is defined on a dependency tree t , and outputs a certain ET derivation tree of t , which is generated by decomposing t into ETs. Given t , there could be multiple decompositions. Conditioned on decomposition D , we can rewrite (2) as: * argmax P( , | )P( ) argmax P( | , )P( | )P( ) e D e D e f e D D f e D e D D = = ∑ ∑ (3) By definition, the ET derivation trees of the input and output trees should be isomorphic: D(T( )) D(T( )) f e ≅ . Let Tran( ) u be a set of possible translations for the ET u . We have: D(T( )), D(T( )), Tran( ) P( | , ) P(T( ) | P(T( ), ) P( | ) u f v e v u f e D f e D u v ∈ ∈ ∈ = = ∏ (4) For any ET v in a given ET derivation tree d , let Root( ) d be the root ET of d , and let Parent( ) v denote the parent ET of v . We have: ( ) ( ) D(T( )), Root(D(T( )) P( | ) P(T( ) | ) P Root D(T( ) P( | Parent( )) v e v e e D e D e v v ∈ ≠ = = ⋅ ⋅ ∏ (5) where, letting root( ) v denote the root word of v , ( ) ( ) ( ) P | Parent( ) P root( ) | root Parent( ) v v v v = (6) The prior probability of tree decomposition is defined as: ( ) D(T( )) P D(T( )) P( ) u f f u ∈ = ∏ (7) Figure 7 Comparing to the HMM An analogy between our model and a Hidden Markov Model (Figure 7) may be helpful. In Eq. (4), P( | ) u v is analogous to the emission probably P( | ) i i o s in an HMM. In Eq. (5), P( | Parent( )) v v is analogous to the transition probability 1 P( | ) i i s s − in an HMM. While HMM is defined on a sequence our model is defined on the derivation tree of ETs. 4.3 Other Factors y Augmenting parallel ET pairs In reality, the learned parallel ETs are unlikely to cover all the structures that we may encounter in decoding. 
As a unified approach, we augment the SDIG by adding all the possible word pairs ( , ) j i f e as a parallel ET pair and using the IBM Model 1 (Brown et al., 1993) word to word translation probability as the ET translation probability. y Smoothing the ET translation probabilities. The LM probabilities P( | Parent( )) v v are simply estimated using the relative frequencies. In order to handle possible noise from the ET pair learning process, the ET translation probabilities P ( | ) emp u v estimated by relative frequencies are smoothed using a word level model. For each ET pair ( , ) u v , we interpolate the empirical probability with the “bag of words” probability and then re-normalize: size( ) 1 1 P( | ) P ( , ) P( | ) size( ) i j emp j i v e v f u u v u v f e Z u ∈ ∈ = ⋅ ∑ ∏ (8) 4.4 Polynomial Time Decoding For efficiency reasons, we use maximum approximation for (3). Instead of summing over all the possible decompositions, we only search for the best decomposition as follows: , *, * argmax P( | , )P( | )P( ) e D e D f e D e D D = (9) So bringing equations (4) to (9) together, the best translation would maximize: ( ) P( | ) P Root( ) P( | Parent( )) P( ) u v e v v u ⋅ ⋅ ⋅ ∏ ∏ ∏ (10) Observing the similarity between our model and a HMM, our dynamic programming decoding algorithm is in spirit similar to the Viterbi algorithm except that instead of being sequential the decoding is done on trees in a top down fashion. As to the relative orders of the ETs, we currently choose not to reorder the children ETs given the parent ET because: (1) the permutation of the ETs is computationally expensive (2) it is possible that we can resort to simple linguistic treatments on the output dependency tree to order the ETs. Currently, all the ETs are attached to each other 546 at their root nodes. In our implementation, the different decompositions of the input dependency tree are stored in a shared forest structure, utilizing the dynamic programming property of the tree structures explicitly. Suppose the input sentence has n words and the shared forest representation has m nodes. Suppose for each word, there are maximally k different ETs containing it, we have kn m ≤ . Let b be the max breadth factor in the packed forest, it can be shown that the decoder visits at most mb nodes during execution. Hence, we have: ) ( ) ( kbn O decoding T ≤ (11) which is linear to the input size. Combined with a polynomial time parsing algorithm, the whole decoding process is polynomial time. 5 Evaluation We implemented the above approach for a Chinese-English machine translation system. We used an automatic syntactic parser (Bikel, 2002) to produce the parallel parse trees. The parser was trained using the Penn English/Chinese Treebanks. We then used the algorithm in (Xia 2001) to convert the phrasal structure trees to dependency trees to acquire the parallel dependency trees. The statistics of the datasets we used are shown as follows: Dataset Xinhua FBIS NIST Sentence# 56263 45212 206 Chinese word# 1456495 1185297 27.4 average English word# 1490498 1611932 37.7 average Usage training training testing Figure 8. Evaluation data details The training set consists of Xinhua newswire data from LDC and the FBIS data (mostly news), both filtered to ensure parallel sentence pair quality. We used the development test data from the 2001 NIST MT evaluation workshop as our test data for the MT system performance. In the testing data, each input Chinese sentence has 4 English translations as references. 
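Before turning to the evaluation, the top-down Viterbi-style search of Section 4.4 can be sketched as follows. The derivation tree, the probability tables, and the reduction of the language model to a root-bigram lookup (Eq. 6) are illustrative assumptions, and the decomposition prior of Eq. (7) is omitted for brevity.

```python
# A sketch (assumed data, not the authors' code) of the top-down search
# over an ET derivation tree.

from functools import lru_cache

# Hypothetical source derivation tree: each source ET lists its child ETs.
TREE = {"zhu": ["wo", "zai jianada"], "wo": [], "zai jianada": []}
ROOT = "zhu"

# P(u | v): probability of the source ET u given a candidate target ET v.
P_TRANS = {
    ("zhu", "been"): 0.6, ("zhu", "live"): 0.4,
    ("wo", "I"): 0.9,
    ("zai jianada", "in Canada"): 0.8,
}
# Root-bigram "language model" of Eq. (6); None marks the root ET.
P_LM = {("I", "been"): 0.3, ("I", "live"): 0.1,
        ("in Canada", "been"): 0.4, ("in Canada", "live"): 0.3,
        ("been", None): 0.2, ("live", None): 0.2}

def candidates(u):
    return [v for (uu, v) in P_TRANS if uu == u]

@lru_cache(maxsize=None)
def best(u, parent_root):
    """Best (score, target ET, child choices) for source ET u, given the
    root of the already chosen parent target ET."""
    results = []
    for v in candidates(u):
        score = P_TRANS[(u, v)] * P_LM.get((v, parent_root), 1e-9)
        kids = []
        for child in TREE[u]:
            child_score, child_v, _ = best(child, v)   # children condition on v
            score *= child_score
            kids.append(child_v)
        results.append((score, v, tuple(kids)))
    return max(results)

print(best(ROOT, None))   # -> best derivation for the whole input tree
```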
Our MT system was evaluated using the n-gram based Bleu (Papineni et al., 2002) and NIST machine translation evaluation software. We used the NIST software package “mteval” version 11a, configured as case-insensitive. In comparison, we deployed the GIZA++ MT modeling tool kit, which is an implementation of the IBM Models 1 to 4 (Brown et al., 1993; AlOnaizan et al., 1999; Och and Ney, 2003). The IBM models were trained on the same training data as our system. We used the ISI Rewrite decoder (Germann et al. 2001) to decode the IBM models. The results are shown in Figure 9. The score types “I” and “C” stand for individual and cumulative n-gram scores. The final NIST and Bleu scores are marked with bold fonts. Systems Score Type 1-gram 2-gram 3-gram 4-gram NIST 2.562 0.412 0.051 0.008 I Bleu 0.714 0.267 0.099 0.040 NIST 2.562 2.974 3.025 3.034 IBM Model 4 C Bleu 0.470 0.287 0.175 0.109 NIST 5.130 0.763 0.082 0.013 I Bleu 0.688 0.224 0.075 0.029 NIST 5.130 5.892 5.978 5.987 SDIG C Bleu 0.674 0.384 0.221 0.132 Figure 9. Evaluation Results. The evaluation results show that the NIST score achieved a 97.3% increase, while the Bleu score increased by 21.1%. In terms of decoding speed, the Rewrite decoder took 8102 seconds to decode the test sentences on a Xeon 1.2GHz 2GB memory machine. On the same machine, the SDIG decoder took 3 seconds to decode, excluding the parsing time. The recent advances in parsing have achieved parsers with 3 ( ) O n time complexity without the grammar constant (McDonald et al., 2005). It can be expected that the total decoding time for SDIG can be as short as 0.1 second per sentence. Neither of the two systems has any specific translation components, which are usually present in real world systems (E.g. components that translate numbers, dates, names, etc.) It is reasonable to expect that the performance of SDIG can be further improved with such specific optimizations. 6 Discussions We noticed that the SDIG system outputs tend to be longer than those of the IBM Model 4 system, and are closer to human translations in length. Translation Type Human SDIG IBM-4 Avg. Sent. Len. 37.7 33.6 24.2 Figure 10. Average Sentence Word Count This partly explains why the IBM Model 4 system has slightly higher individual n-gram precision scores (while the SDIG system outputs are still better in terms of absolute matches). 547 The relative orders between the parent and child ETs in the output tree is currently kept the same as the orders in the input tree. Admittedly, we benefited from the fact that both Chinese and English are SVO languages, and that many of orderings between the arguments and adjuncts can be kept the same. However, we did notice that this simple “ostrich” treatment caused outputs such as “foreign financial institutions the president of”. While statistical modeling of children reordering is one possible remedy for this problem, we believe simple linguistic treatment is another, as the output of the SDIG system is an English dependency tree rather than a string of words. 7 Conclusions and Future Work In this paper we presented a syntax-based statistical MT system based on a Synchronous Dependency Insertion Grammar and a non-isomorphic stochastic tree-to-tree transducer. A graphical model for the transducer is defined and a polynomial time decoding algorithm is introduced. The results of our current implementation were evaluated using the NIST and Bleu automatic MT evaluation software. 
The evaluation shows that the SDIG system outperforms an IBM Model 4 based system in both speed and quality. Future work includes a full-fledged version of SDIG and a more sophisticated MT pipeline with possibly a tri-gram language model for decoding. References Y. Al-Onaizan, J. Curin, M. Jahr, K. Knight, J. Lafferty, I. D. Melamed, F. Och, D. Purdy, N. A. Smith, and D. Yarowsky. 1999. Statistical machine translation. Technical report, CLSP, Johns Hopkins University. H. Alshawi, S. Bangalore, S. Douglas. 2000. Learning dependency translation models as collections of finite state head transducers. Comp. Linguistics, 26(1):45-60. Daniel M. Bikel. 2002. Design of a multi-lingual, parallel-processing statistical parsing engine. In HLT 2002. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2): 263-311. Michael John Collins. 1999. Head-driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia. Ding and Palmer. 2004a. Automatic Learning of Parallel Dependency Treelet Pairs. In First International Joint Conference on NLP (IJCNLP-04). Ding and Palmer. 2004b. Synchronous Dependency Insertion Grammars: A Grammar Formalism for Syntax Based Statistical MT. Workshop on Recent Advances in Dependency Grammars, COLING-04. Bonnie J. Dorr. 1994. Machine translation divergences: A formal description and proposed solution. Computational Linguistics, 20(4): 597-633. Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In ACL-03. (companion volume), Sapporo, July. Heidi J. Fox. 2002. Phrasal cohesion and statistical machine translation. In Proceedings of EMNLP-02. Ulrich Germann, Michael Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada. 2001. Fast Decoding and Optimal Decoding for Machine Translation. ACL-01. Daniel Gildea. 2003. Loosely tree based alignment for machine translation. ACL-03, Japan. Jonathan Graehl and Kevin Knight. 2004. Training Tree Transducers. In NAACL/HLT-2004 Jan Hajic, et al. 2002. Natural language generation in the context of machine translation. Summer workshop final report, Center for Language and Speech Processing, Johns Hopkins University, Baltimore. Rebecca Hwa, Philip S. Resnik, Amy Weinberg, and Okan Kolak. 2002. Evaluating translational correspondence using annotation projection. ACL-02 Ali Ibrahim, Boris Katz, and Jimmy Lin. 2003. Extracting Structural Paraphrases from Aligned Monolingual Corpora. In Proceedings of the Second International Workshop on Paraphrasing (IWP 2003) Dan Melamed. 2004. Statistical Machine Translation by Parsing. In ACL-04, Barcelona, Spain. Dan Melamed. 2003. Multitext Grammars and Synchronous Parsers, In NAACL/HLT-2003. K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. ACL-02, Philadelphia, USA. Ryan McDonald, Koby Crammer and Fernando Pereira. 2005. Online Large-Margin Training of Dependency Parsers. ACL-05. Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19–51. S. M. Shieber and Y. Schabes. 1990. Synchronous TreeAdjoining Grammars, Proceedings of the 13th COLING, pp. 253-258, August 1990. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):3-403. Fei Xia. 2001. 
Automatic grammar generation from two different perspectives. PhD thesis, U. of Pennsylvania. Kenji Yamada and Kevin Knight. 2001. A syntax based statistical translation model. ACL-01, France. Kenji Yamada and Kevin Knight. 2002. A decoder for syntax-based statistical MT. ACL-02, Philadelphia.
Proceedings of the 43rd Annual Meeting of the ACL, pages 549–556, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Context-dependent SMT Model using Bilingual Verb-Noun Collocation Young-Sook Hwang ATR SLT Research Labs 2-2-2 Hikaridai Seika-cho Soraku-gun Kyoto, 619-0288, JAPAN [email protected] Yutaka Sasaki ATR SLT Research Labs 2-2-2 Hikaridai Seika-cho Soraku-gun Kyoto, 619-0288, JAPAN [email protected] Abstract In this paper, we propose a new contextdependent SMT model that is tightly coupled with a language model. It is designed to decrease the translation ambiguities and efficiently search for an optimal hypothesis by reducing the hypothesis search space. It works through reciprocal incorporation between source and target context: a source word is determined by the context of previous and corresponding target words and the next target word is predicted by the pair consisting of the previous target word and its corresponding source word. In order to alleviate the data sparseness in chunk-based translation, we take a stepwise back-off translation strategy. Moreover, in order to obtain more semantically plausible translation results, we use bilingual verb-noun collocations; these are automatically extracted by using chunk alignment and a monolingual dependency parser. As a case study, we experimented on the language pair of Japanese and Korean. As a result, we could not only reduce the search space but also improve the performance. 1 Introduction For decades, many research efforts have contributed to the advance of statistical machine translation. Recently, various works have improved the quality of statistical machine translation systems by using phrase translation (Koehn et al., 2003; Marcu et al., 2002; Och et al., 1999; Och and Ney, 2000; Zens et al., 2004). Most of the phrase-based translation models have adopted the noisy-channel based IBM style models (Brown et al., 1993): (1) In these model, we have two types of knowledge: translation model, and language model, . The translation model links the source language sentence to the target language sentence. The language model describes the well-formedness of the target language sentence and might play a role in restricting hypothesis expansion during decoding. To recover the word order difference between two languages, it also allows modeling the reordering by introducing a relative distortion probability distribution. However, in spite of using such a language model and a distortion model, the translation outputs may not be fluent or in fact may produce nonsense. To make things worse, the huge hypothesis search space is much too large for an exhaustive search. If arbitrary reorderings are allowed, the search problem is NP-complete (Knight, 1999). According to a previous analysis (Koehn et al., 2004) of how many hypotheses are generated during an exhaustive search using the IBM models, the upper bound for the number of states is estimated by , where is the number of source words and is the size of the target vocabulary. Even though the number of possible translations of the last two words is much smaller than , we still need to make further improvement. The main concern is the ex549 ponential explosion from the possible configurations of source words covered by a hypothesis. In order to reduce the number of possible configurations of source words, decoding algorithms based on as well as the beam search algorithm have been proposed (Koehn et al., 2004; Och et al., 2001). 
(Koehn et al., 2004; Och et al., 2001) used heuristics for pruning implausible hypotheses. Our approach to this problem examines the possibility of utilizing context information in a given language pair. Under a given target context, the corresponding source word of a given target word is almost deterministic. Conversely, if a translation pair is given, then the related target or source context is predictable. This implies that if we considered bilingual context information in a given language pair during decoding, we can reduce the computational complexity of the hypothesis search; specifically, we could reduce the possible configurations of source words as well as the number of possible target translations. In this study, we present a statistical machine translation model as an alternative to the classical IBM-style model. This model is tightly coupled with target language model and utilizes bilingual context information. It is designed to not only reduce the hypothesis search space by decreasing the translation ambiguities but also improve translation performance. It works through reciprocal incorporation between source and target context: source words are determined by the context of previous and corresponding target words, and the next target words are predicted by the current translation pair. Accordingly, we do not need to consider any distortion model or language model as is the case with IBM-style models. Under this framework, we propose a chunk-based translation model for more grammatical, fluent and accurate output. In order to alleviate the data sparseness problem in chunk-based translation, we use a stepwise back-off method in the order of a chunk, sub-parts of the chunk, and word level. Moreover, we utilize verb-noun collocations in dealing with long-distance dependency which are automatically extracted by using chunk alignment and a monolingual dependency parser. As a case study, we developed a Japanese-toKorean translation model and performed some experiments on the BTEC corpus. 2 Overview of Translation Model The goal of machine translation is to transfer the meaning of a source language sentence,
f, into a target language sentence, e
. In most types of statistical machine translation, conditional probability is used to describe the correspondence between two sentences. This model is used directly for translation by solving the following maximization problem: (2) (3) (4) Since a source language sentence is given and the probability is applied to all possible corresponding target sentences, we can ignore the denominator in equation (3). As a result, the joint probability model can be used to describe the correspondence between two sentences. We apply Markov chain rules to the joint probability model and obtain the following decomposed model: (5) where is the index of the source word that is aligned to the word under the assumption of the fixed one-to-one alignment. In this model, we have two probabilities: source word prediction probability under a given target language context, target word prediction probability under the preceding translation pair, The probability of target word prediction is used for selecting the target word that follows the previous target words. In order to make this more deterministic, we use bilingual context, i.e. the translation pair of the preceding target word. For a given target word, the corresponding source word is predicted by source word prediction probability based on the current and preceding target words. 550 Since a target and a source word are predicted through reciprocal incorporation between source and target context from the beginning of a target sentence, the word order in the target sentence is automatically determined and the number of possible configurations of source words is decreased. Thus, we do not need to perform any computation for word re-ordering. Moreover, since correspondences are provided based on bilingual contextual evidence, translation ambiguities can be decreased. As a result, the proposed model is expected to reduce computational complexity during the decoding as well as improve performance. Furthermore, since a word-based translation approach is often incapable of handling complicated expressions such as an idiomatic expressions or complicated verb phrases, it often outputs nonsense translations. To avoid nonsense translations and to increase explanatory power, we incorporate structural aspects of the language into the chunk-based translation model. In our model, one source chunk is translated by exactly one target chunk, i.e., oneto-one chunk alignment. Thus we obtain: (6) (7) where is the number of chunks in a source and a target sentence. 3 Chunk-based J/K Translation Model with Back-Off With the translation framework described above, we built a chunk-based J/K translation model as a case study. Since a chunk-based translation model causes severe data sparseness, it is often impossible to obtain any translation of a given source chunk. In order to alleviate this problem, we apply back-off translation models while giving the consideration to linguistic characteristics. Japanese and Korean is a very close language pair. Both are agglutinative and inflected languages in the word formation of a bunsetsu and an eojeol. A bunsetsu/eojeol consists of two sub parts: the head part composed of content words and the tail part composed of functional words agglutinated at the end of the head part. The head part is related to the meaning of a given segment, while the tail part indicates a grammatical role of the head in a given sentence. 
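As an illustration of the head/tail decomposition just described (the tag inventory, the segmentation rule, and the example eojeol are simplified assumptions, not the morphological analysis actually used):

```python
# Illustrative only: split one POS-tagged eojeol/bunsetsu into a head
# (content morphemes) and a tail (trailing functional morphemes).

FUNCTIONAL = {"JKS", "JKO", "JKB", "JX", "EF", "EC"}   # assumed tag set

def split_head_tail(morphemes):
    """morphemes: list of (surface, tag) pairs for one eojeol/bunsetsu."""
    i = len(morphemes)
    while i > 0 and morphemes[i - 1][1] in FUNCTIONAL:
        i -= 1
    head, tail = morphemes[:i], morphemes[i:]
    return head, tail or [("NUL", "NUL")]              # empty tail -> NUL

# A hypothetical Korean eojeol "hakgyo+e" ("to school"): noun + postposition.
print(split_head_tail([("hakgyo", "NNG"), ("e", "JKB")]))
# -> ([('hakgyo', 'NNG')], [('e', 'JKB')])
```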
By putting this linguistic knowledge to practical use, we build a head-tail based translation model as a back-off version of the chunk-based translation model. We place several constraints on this head-tail based translation model as follows: The head of a given source chunk corresponds to the head of a target chunk. The tail of the source chunk corresponds to the tail of a target chunk. If a chunk does not have a tail part, we assign NUL to the tail of the chunk. The head of a given chunk follows the tail of the preceding chunk and the tail follows the head of the given chunk. The constraints are designed to maintain the structural consistency of a chunk. Under these constraints, the head-tail based translation can be formulated as the following equation: (8) where denotes the head of the chunk and means the tail of the chunk. In the worst case, even the head-tail based model may fail to obtain translations. In this case, we back it off into a word-based translation model. In the word-based translation model, the constraints on the head-tail based translation model are not applied. The concept of the chunk-based J/K translation framework with back-off scheme can be summarized as follows: 1. Input a dependency-parsed sentence at the chunk level, 2. Apply the chunk-based translation model to the given sentence, 3. If one of chunks does not have any corresponding translation: divide the failed chunk into a head and a tail part, 551 Figure 1: An example of (a) chunk alignment for chunk-based, head-tail based translation and (b) bilingual verb-noun collocation by using the chunk alignment and a monolingual dependency parser back-off the translation into the head-tail based translation model, if the head or tail does not have any corresponding translation, apply a word-based translation model to the chunk. Here, the back-off model is applied only to the part that failed to get translation candidates. 3.1 Learning Chunk-based Translation We learn chunk alignments from a corpus that has been word-aligned by a training toolkit for wordbased translation models: the Giza++ (Och and Ney, 2000) toolkit for the IBM models (Brown et al., 1993). For aligning chunk pairs, we consider word(bunsetsu/eojeol) sequences to be chunks if they are in an immediate dependency relationship in a dependency tree. To identify chunks, we use a word-aligned corpus, in which source language sentences are annotated with dependency parse trees by a dependency parser (Kudo et al., 2002) and target language sentences are annotated with POS tags by a part-of-speech tagger (Rim, 2003). If a sequence of target words is aligned with the words in a single source chunk, the target word sequence is regarded as one chunk corresponding to the given source chunk. By applying this method to the corpus, we obtain a word- and chunk-aligned corpus (see Figure 1). From the aligned corpus, we directly estimate the phrase translation probabilities, , and the model parameters, , . These estimation are made based on relative frequencies. 3.2 Decoding For efficient decoding, we implement a multi-stack decoder and a beam search with algorithm. At each search level, the beam search moves through at most -best translation candidates, and a multi-stack is used for partial translations according to the translation cardinality. The output sentence is generated from left to right in the form of partial translations. Initially, we get translation candidates for each source chunk with the beam size . 
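A sketch of how the candidate list for one source chunk might be gathered under the chunk / head-tail / word back-off scheme is given below; the table layout and the toy Japanese-Korean entries are assumptions for illustration only.

```python
# A sketch (assumed table layout) of gathering candidates for one source
# chunk under the chunk -> head/tail -> word back-off.

def candidates_for_chunk(chunk, head, tail, tables, beam):
    cands = tables["chunk"].get(chunk)
    if cands:                                   # chunk-level translations exist
        return sorted(cands, key=cands.get, reverse=True)[:beam]

    # Back off to the head-tail level; a part that is still missing would
    # in turn be backed off to the word level.
    heads = tables["head"].get(head) or tables["word"].get(head, {})
    tails = tables["tail"].get(tail) or tables["word"].get(tail, {})
    combined = {h + t: p_h * p_t
                for h, p_h in heads.items() for t, p_t in tails.items()}
    return sorted(combined, key=combined.get, reverse=True)[:beam]

# Toy Japanese chunk "gakkou-ni" ("to school") with no chunk-level entry.
tables = {
    "chunk": {},
    "head":  {"gakkou": {"hakgyo": 0.8}},
    "tail":  {"ni": {"e": 0.6, "ey": 0.3}},
    "word":  {},
}
print(candidates_for_chunk("gakkou-ni", "gakkou", "ni", tables, beam=2))
# -> ['hakgyoe', 'hakgyoey']
```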
Every possible translation is sorted according to its translation probability. We start the decoding with the initialized beams and initial stack , the top of which has the information of the initial hypothesis, . The decoding algorithm is described in Table 1. In the decoding algorithm, estimating the backward score is so complicated that the computational complexity becomes too high because of the context consideration. Thus, in order to simplify this problem, we assume the context-independence of only the backward score estimation. The backward score is estimated by the translation probability and language model score of the uncovered segments. For each uncovered segment, we select the best translation with the highest score by multiplying the translation probability of the segment by its language model score. The translation probability and language model score are computed without giving consideration to context. After estimating the forward and backward score of each partial translation on stack , we try to 552 1. Push the initial hypothesis on the initial stack 2. for i=1 to K Pop the previous state information of from stack Get next target and corresponding source for all pairs of – Check the head-tail consistency – Mark the source segment as a covered one – Estimate forward and backward score – Push the state of pair onto stack Sort all translations on stack by the scores Prune the hypotheses 3. while (stack is not empty) Pop the state of the pair Compose translation output, 4. Output the best translations Table 1: multi-stack decoding algorithm prune the hypotheses. In pruning, we first sort the partial translations on stack according to their scores. If the gradient of scores steeply decreases over the given threshold at the translation, we prune the translations of lower scores than the one. Moreover, if the number of filtered translations is larger than , we only take the top translations. As a final translation, we output the single best translation. 4 Resolving Long-distance Dependency Since most of the current translation models take only the local context into account, they cannot account for long-distance dependency. This often causes syntactically or semantically incorrect translation to be output. In this section, we describe how this problem can be solved. For handling the long-distance dependency problem, we utilize bilingual verb-noun collocations that are automatically acquired from the chunk-aligned bilingual corpora. 4.1 Automatic Extraction of Bilingual Verb-Noun Collocation(BiVN) To automatically extract the bilingual verb-noun collocations, we utilize a monolingual dependency parser and the chunk alignment result. The basic concept is the same as that used in (Hwang et al., 2004): bilingual dependency parses are obtained by sharing the dependency relations of a monolingual dependency parser among the aligned chunks. Then bilingual verb sub-categorization patterns are acquired by navigating the bilingual dependency trees. A verb sub-categorization is the collocation of a verb and all of its argument/adjunct nouns, i.e. verb-noun collocation(see Figure 1). To acquire more reliable and general knowledge, we apply the following filtering method with statistical test and unification operation: step 1. Filter out the reliable translation correspondences from all of the alignment pairs by test at a probability level of step 2. 
Filter out reliable bilingual verb-noun collocations BiVN by a unification and test at a probability level of : Here, we assume that two bilingual pairs, and are unifiable into a frame iff both of them are reliable pairs filtered in step 1. and they share the same verb pair . 4.2 Application of BiVN The acquired BiVN is used to evaluate the bilingual correspondence of a verb-noun pair dependent on each other and to select the correct translation. It can be applied to any verb-noun pair regardless of the distance between them in a sentence. Moreover, since the verb-noun relation in BiVN is bilingual knowledge, the sense of each corresponding verb and noun can be almost completely disambiguated by each other. In our translation system, we apply this BiVN during decoding as follows: 1. Pivot verbs and their dependents in a given dependency-parsed source sentence 2. When extending a hypothesis, if one of the pivoted verb and noun pairs is covered and its corresponding translation pair is in BiVN, we give positive weight to the hypothesis. if otherwise 553 where and is a function that indicates whether the bilingual translation pair is in BiVN. By adding the weight of the function, we refine our model as follows: (10)
where is a function indicating whether the pair of a verb and its argument is covered with or and is a bilingual translation pair in the hypothesis. 5 Experiments 5.1 Corpus The corpus for the experiment was extracted from the Basic Travel Expression Corpus (BTEC), a collection of conversational travel phrases for Japanese and Korean (see Table 2). The entire corpus was split into two parts: 162,320 sentences in parallel for training and 10,150 sentences for test. The Japanese sentences were automatically dependency-parsed by CaboCha (Kudo et al., 2002) and the Korean sentences were automatically POS tagged by KUTagger (Rim, 2003) 5.2 Translation Systems Four translation systems were implemented for evaluation: 1) Word based IBM-style SMT System(WBIBM), 2) Chunk based IBM-style SMT System(CBIBM), 3) Word based LM tightly Coupled SMT System(WBLMC), and 4) Chunk based LM tightly Coupled SMT System(CBLMC). To examine the effect of BiVN, BiVN was optionally used for each system. The word-based IBM-style (WBIBM) system1 consisted of a word translation model and a bigram language model. The bi-gram language model was generated by using CMU LM toolkit (Clarkson et al., 1997). Instead of using a fertility model, we allowed a multi-word target of a given source word if it aligned with more than one word. We didn’t use any distortion model for word re-ordering. And we used a log-linear model 1In this experiment, a word denotes a morpheme for weighting the language model and the translation model. For decoding, we used a multi-stack decoder based on the algorithm, which is almost the same as that described in Section 3. The difference is the use of the language model for controlling the generation of target translations. The chunk-based IBM-style (CBIBM) system consisted of a chunk translation model and a bigram language model. To alleviate the data sparseness problem of the chunk translation model, we applied the back-off method at the head-tail or morpheme level. The remaining conditions are the same as those for WBIBM. The word-based LM tightly coupled (WBLMC) system was implemented for comparison with the chunk-based systems. Except for setting the translation unit as a morpheme, the other conditions are the same as those for the proposed chunk-based translation system. The chunk-based LM tightly coupled (CBLMC) system is the proposed translation system. A bigram language model was used for estimating the backward score. 5.3 Evaluation Translation evaluations were carried out on 510 sentences selected randomly from the test set. The metrics for the evaluations are as follows: PER(Position independent WER), which penalizes without considering positional disfluencies(Niesen et al., 2000). mWER(multi-reference Word Error Rate), which is based on the minimum edit distance between the target sentence and the sentences in the reference set (Niesen et al., 2000). BLEU, which is the ratio of the n-gram for the translation results found in the reference translations with a penalty for too short sentences (Papineni et al., 2001). NIST which is a weighted n-gram precision in combination with a penalty for too short sentences. For this evaluation, we made 10 multiple references available. We computed all of the above criteria with respect to these multiple references. 
554 Training Test Japanese Korean Japanese Korean # of sentences 162,320 10,150 # of total morphemes 1,153,954 1,179,753 74,366 76,540 # of bunsetsu/eojeol 448,438 587,503 28,882 38,386 vocabulary size 15,682 15,726 5,144 4,594 Table 2: Statistics of Basic Travel Expression Corpus PER mWER BLEU NIST WBIBM 0.3415 / 0.3318 0.3668 / 0.3591 0.5747 / 0.5837 6.9075 / 7.1110 WBLMC 0.2667 / 0.2666 0.2998 / 0.2994 0.5681 / 0.5690 9.0149 / 9.0360 CBIBM 0.2677 / 0.2383 0.2992 / 0.2700 0.6347 / 0.6741 8.0900 / 8.6981 CBLMC 0.1954 / 0.1896 0.2176 / 0.2129 0.7060 / 0.7166 9.9167 / 10.027 Table 3: Evaluation Results of Translation Systems: without BiVN/with BiVN WBIBM WBLMC CBIBM CBLMC 0.8110 / 0.8330 2.5585 / 2.5547 0.3345 / 0.3399 0.9039 / 0.9052 Table 4: Translation Speed of Each Translation Systems(sec./sentence): without BiVN/with BiVN 5.4 Analysis and Discussion Table 3 shows the performance evaluation of each system. CBLMC outperformed CBIBM in overall evaluation criteria. WBLMC showed much better performance than WBIBM in most of the evaluation criteria except for BLEU score. The interesting point is that the performance of WBLMC is close to that of CBIBM in PER and mWER. The BLEU score of WBLMC is lower than that of CBIBM, but the NIST score of WBLMC is much better than that of CBIBM. The reason the proposed model provided better performance than the IBM-style models is because the use of contextual information in CBLMC and WBLMC enabled the system to reduce the translation ambiguities, which not only reduced the computational complexity during decoding, but also made the translation accurate and deterministic. In addition, chunk-based translation systems outperformed word-based systems. This is also strong evidence of the advantage of contextual information. To evaluate the effectiveness of bilingual verbnoun collocations, we used the BiVN filtered with
, where coverage is
on the test set and average ambiguity is
. We suffered a slight loss in the speed by using the BiVN(see Table 4), but we could improve performance in all of the translation systems(see Table 3). In particular, the performance improvement in CBIBM with BiVN was remarkable. This is a positive sign that the BiVN is useful for handling the problem of long-distance dependency. From this result, we believe that if we increased the coverage of BiVN and its accuracy, we could improve the performance much more. Table 4 shows the translation speed of each system. For the evaluation of processing time, we used the same machine, with a Xeon 2.8 GHz CPU and 4GB memory , and checked the time of the best performance of each system. The chunk-based translation systems are much faster than the word-based systems. It may be because the translation ambiguities of the chunk-based models are lower than those of the word-based models. However, the processing speed of the IBM-style models is faster than the proposed model. This tendency can be analyzed from two viewpoints: decoding algorithm and DB system for parameter retrieval. Theoretically, the computational complexity of the proposed model is lower than that of the IBM models. The use of a 555 sorting and pruning algorithm for partial translations provides shorter search times in all system. Since the number of parameters for the proposed model is much more than for the IBM-style models, it took a longer time to retrieve parameters. To decrease the processing time, we need to construct a more efficient DB system. 6 Conclusion In this paper, we proposed a new chunk-based statistical machine translation model that is tightly coupled with a language model. In order to alleviate the data sparseness in chunk-based translation, we applied the back-off translation method at the headtail and morpheme levels. Moreover, in order to get more semantically plausible translation results by considering long-distance dependency, we utilized verb-noun collocations which were automatically extracted by using chunk alignment and a monolingual dependency parser. As a case study, we experimented on the language pair of Japanese and Korean. Experimental results showed that the proposed translation model is very effective in improving performance. The use of bilingual verbnoun collocations is also useful for improving the performance. However, we still have some problems of the data sparseness and the low coverage of bilingual verbnoun collocation. In the near future, we will try to solve the data sparseness problem and to increase the coverage and accuracy of verb-noun collocations. References Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation, Computational Linguistics, 19(2):263-311. P.R. Clarkson and R. Rosenfeld. 1997. Statistical Language Modeling Using the CMU-Cambridge Toolkit, Proc. of ESCA Eurospeech. Young-Sook Hwang, Kyonghee Paik, and Yutaka Sasaki. 2004. Bilingual Knowledge Extraction Using Chunk Alignment, Proc. of the 18th Pacific Asia Conference on Language, Information and Computation (PACLIC-18), pp. 127-137, Tokyo. Kevin Knight. 1999. Decoding Complexity in WordReplacement Translation Models, Computational Linguistics, Squibs Discussion, 25(4). Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003 Statistical Phrase-Based Translation, Proc. of the Human Language Technology Conference(HLT/NAACL) Philipp Koehn. 
2004 Pharaoh: a Beam Search Decoder for Phrase-Based Statistical Machine Translation Models, Proc. of AMTA’04 Taku Kudo, Yuji Matsumoto. 2002. Japanese Dependency Analyisis using Cascaded Chunking, Proc. of CoNLL-2002 Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine translation , Proc. of EMNLP. Sonja Niesen, Franz Josef Och, Gregor Leusch, Hermann Ney. 2000. An Evaluation Tool for Machine Translation: Fast Evaluation for MT Research, Proc. of the 2nd International Conference on Language Resources and Evaluation, pp. 39-45, Athens, Greece. Franz Josef Och, Christoph Tillmann, Hermann Ney. 1999. Improved alignment models for statistical machine translation, Proc. of EMNLP/WVLC. Franz Josef Och and Hermann Ney. 2000. Improved Statistical Alignment Models , Proc. of the 38th Annual Meeting of the Association for Computational Linguistics, pp. 440-447, Hongkong, China. Franz Josef Och, Nicola Ueffing, Hermann Ney. 2001. An Efficient A* Search Algorithm for Statistical Machine Translation , Data-Driven Machine Translation Workshop, pp. 55-62, Toulouse, France. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation , IBM Research Report, RC22176. Toshiyuki Takezawa, Eiichiro Sumita, Fumiaki Sugaya, Hirofumi Yamamoto, and Seiichi Yamamoto. 2002. Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world, Proc. of LREC 2002, pp. 147-152, Spain. Richard Zens and Hermann Ney. 2004. Improvements in Phrase-Based Statistical Machine Translation, Proc. of the Human Language Technology Conference (HLT-NAACL) , Boston, MA, pp. 257-264. Hae-Chang Rim. 2003. Korean Morphological Analyzer and Part-of-Speech Tagger, Technical Report, NLP Lab. Dept. of Computer Science and Engineering, Korea University 556 | 2005 | 68 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 557–564, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics A Localized Prediction Model for Statistical Machine Translation Christoph Tillmann and Tong Zhang IBM T.J. Watson Research Center Yorktown Heights, NY 10598 USA ctill,tzhang @us.ibm.com Abstract In this paper, we present a novel training method for a localized phrase-based prediction model for statistical machine translation (SMT). The model predicts blocks with orientation to handle local phrase re-ordering. We use a maximum likelihood criterion to train a log-linear block bigram model which uses realvalued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features. Our training algorithm can easily handle millions of features. The best system obtains a % improvement over the baseline on a standard Arabic-English translation task. 1 Introduction In this paper, we present a block-based model for statistical machine translation. A block is a pair of phrases which are translations of each other. For example, Fig. 1 shows an Arabic-English translation example that uses blocks. During decoding, we view translation as a block segmentation process, where the input sentence is segmented from left to right and the target sentence is generated from bottom to top, one block at a time. A monotone block sequence is generated except for the possibility to swap a pair of neighbor blocks. We use an orientation model similar to the lexicalized block re-ordering model in (Tillmann, 2004; Och et al., 2004): to generate a block with orientation relative to its predecessor block . During decoding, we compute the probability
of a block sequence with orientation as a product of block bigram probabilities:
$p(b_1^n, o_1^n) \approx \prod_{i=1}^{n} p(b_i, o_i \mid b_{i-1}, o_{i-1})$ (1)
Figure 1: An Arabic-English block translation example, where the Arabic words are romanized; the figure also shows the orientation generated for each block.
Here, we do not insist on a consistent block coverage as one would do during decoding. Among the matching blocks, two blocks b and b' are adjacent if the target phrases T and T' as well as the source phrases S and S' are adjacent. b' is predecessor of block b if b' and b are adjacent and b' occurs below b. A right adjacent successor block b is said to have right orientation o = R. A left adjacent successor block is said to have left orientation o = L.

Figure 2: Local Block Orientation (x axis: source positions; y axis: target positions). Block b' is the predecessor of block b. The successor block b occurs with either left (o = L) or right (o = R) orientation. 'left' and 'right' are defined relative to the x axis; 'below' is defined relative to the y axis. For some discussion on global re-ordering see Section 6.
N . There are matching blocks that have no predecessor, such a block has neutral orientation ( HYJ ). After matching blocks for a training sentence pair, we look for adjacent block pairs to collect block bigram orientation events of the type H k k . Our model to be presented in Section 3 is used to predict a future block orientation pair given its predecessor block history . In Fig. 1, the following block orientation bigrams occur: i J k , N + L , i J + O , O Q + P . Collecting orientation bigrams on all parallel sentence pairs, we obtain an orientation bigram list : H | H X + . u (2) Here, is the number of orientation bigrams in the ^ -th sentence pair. The total number J of orientation bigrams JH is about JH million for our training data consisting of [ HM"" sentence pairs. The orientation bigram list is used for the parameter training presented in Section 3. Ignoring the bigrams with neutral orientation J reduces the list defined in Eq. 2 to about million orientation bigrams. The Neutral orientation is handled separately as described in Section 5. Using the reduced orientation bigram list, we collect unigram orientation counts J d : how often a block occurs with a given orientation RT N Q W . J` k i J`¡ d typically holds for blocks involved in block swapping and the orientation model d is defined as: k H J d J` d(¢ J¡ kG In order to train a block bigram orientation model as described in Section 3.2, we define a successor set £ V for a block in the ^ -th training sentence pair: 558 £ V H T number of triples of type ¤ N + k or type ¤ Q + k R W The successor set £" V is defined for each event in the list . The average size of £" 6V is r successor blocks. If we were to compute a Viterbi block alignment for a training sentence pair, each block in this block alignment would have at most successor: Blocks may have several successors, because we do not inforce any kind of consistent coverage during training. During decoding, we generate a list of block orientation bigrams as described above. A DP-based beam search procedure identical to the one used in (Tillmann, 2004) is used to maximize over all oriented block segmentations X . During decoding orientation bigrams N + k with left orientation are only generated if J k¦¥ for the successor block . 3 Localized Block Model and Discriminative Training In this section, we describe the components used to compute the block bigram probability in Eq. 1. A block orientation pair §+ V¨ k k is represented as a feature-vector ©ª ¨M R« ¬ . For a model that uses all the components defined below, is . As featurevector components, we take the negative logarithm of some block model probabilities. We use the term ’float’ feature for these feature-vector components (the model score is stored as a float number). Additionally, we use binary block features. The letters (a)-(f) refer to Table 1: Unigram Models: we compute (a) the unigram probability k and (b) the orientation probability k . These probabilities are simple relative frequency estimates based on unigram and unigram orientation counts derived from the data in Eq. 2. For details see (Tillmann, 2004). During decoding, the unigram probability is normalized by the source phrase length. Two types of Trigram language model: (c) probability of predicting the first target word in the target clump of given the final two words of the target clump of ¤ , (d) probability of predicting the rest of the words in the target clump of . The language model is trained on a separate corpus. 
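As an illustration of the unigram orientation counts N_L(b) and N_R(b) introduced earlier in this section, here is a small hypothetical Python sketch (not the authors' code): it accumulates the counts from the reduced orientation bigram list, turns them into the relative-frequency orientation model, and applies the decoding-time restriction that left (swapped) orientation is only hypothesized for blocks seen swapped often enough. The threshold value and all names are assumptions.

# Illustrative sketch: unigram orientation counts and the resulting
# relative-frequency orientation model. Input events are (o, b_prev, b)
# tuples as produced during orientation bigram collection; blocks must be
# hashable (e.g. (source_phrase, target_phrase) string tuples).
from collections import defaultdict

def orientation_counts(events):
    """Count how often each block occurs with Left / Right orientation.

    Events with neutral orientation are ignored here; they are handled by
    a separate model in the paper.
    """
    counts = defaultdict(lambda: {'L': 0, 'R': 0})
    for o, _prev, b in events:
        if o in ('L', 'R'):
            counts[b][o] += 1
    return counts

def orientation_prob(counts, b, o):
    """Relative-frequency estimate p(o | b) = N_o(b) / (N_L(b) + N_R(b))."""
    c = counts.get(b, {'L': 0, 'R': 0})
    total = c['L'] + c['R']
    return c[o] / total if total > 0 else 0.0

def allow_left(counts, b, min_left=3):
    """Decoding-time filter: only hypothesize left (swapped) orientation
    for blocks seen swapped often enough. The threshold is an assumption."""
    return counts.get(b, {'L': 0, 'R': 0})['L'] >= min_left

# Tiny usage example with made-up events.
events = [('R', 'b0', 'b1'), ('L', 'b1', 'b2'), ('R', 'b0', 'b2'), ('N', None, 'b3')]
c = orientation_counts(events)
print(orientation_prob(c, 'b2', 'L'), allow_left(c, 'b2'))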
Lexical Weighting: (e) the lexical weight p_w(S | T) of the block b = (S, T) is computed similarly to (Koehn et al., 2003), details are given in Section 3.4. Binary features: (f) binary features are defined using an indicator function f(b, b') which is 1 if the block pair (b, b') occurs more often than a given threshold N, e.g. N = 2. Here, the orientation o between the blocks is ignored.

f(b, b') = 1  if N(b, b') > N;  0  else   (3)
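To show how the feature-vector components (a)-(f) fit together, here is a hypothetical Python sketch (not the authors' implementation): it builds the vector for one block orientation pair from pre-computed model scores, taking negative logarithms for the 'float' features and appending 0/1 values for the binary block-pair features. The dictionary and function names are assumptions; in the actual system the component probabilities come from the trained models described above.

# Illustrative sketch: assembling the feature vector f(b, o; b_prev, o_prev)
# for one block orientation pair. The probability lookups (unigram,
# orientation, language model, lexical weight) are assumed to be supplied by
# other components; here they are passed in as plain callables.
import math

def float_features(b, o, b_prev, models):
    """Negative log probabilities of the component models (features (a)-(e))."""
    return [
        -math.log(models['unigram'](b)),            # (a) block unigram probability
        -math.log(models['orientation'](b, o)),     # (b) orientation probability
        -math.log(models['lm_first'](b, b_prev)),   # (c) LM: first target word given
                                                    #     the final words of b_prev
        -math.log(models['lm_rest'](b)),            # (d) LM: remaining target words
        -math.log(models['lexical'](b)),            # (e) lexical weight p_w(S | T)
    ]

def binary_features(b, b_prev, pair_counts, threshold=2, pairs=None):
    """0/1 features as in Eq. 3: a feature for pair p fires if the current
    block pair equals p and was seen more than `threshold` times."""
    pairs = pairs or []
    return [1.0 if (b_prev, b) == p and pair_counts.get(p, 0) > threshold else 0.0
            for p in pairs]

def feature_vector(b, o, b_prev, models, pair_counts, pairs):
    return float_features(b, o, b_prev, models) + \
           binary_features(b, b_prev, pair_counts, pairs=pairs)

# Tiny usage example with constant dummy models.
dummy = {k: (lambda *args: 0.5) for k in
         ('unigram', 'orientation', 'lm_first', 'lm_rest', 'lexical')}
print(feature_vector('b1', 'R', 'b0', dummy, {('b0', 'b1'): 5}, [('b0', 'b1')]))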
3.1 Global Model

In our linear block model, for a given source sentence s, each translation is represented as a sequence of block/orientation pairs {b_1^n, o_1^n} consistent with the source. Using features such as those described above, we can parameterize the probability of such a sequence as \Pr_w(b_1^n, o_1^n \mid s), where w is a vector of unknown model parameters to be estimated from the training data. We use a log-linear probability model and maximum likelihood training — the parameter w is estimated by maximizing the joint likelihood over all sentences. Denote by \Delta(s) the set of possible block/orientation sequences {b_1^n, o_1^n} that are consistent with the source sentence s; then a log-linear probability model can be represented as
± ^ Hµ´+¶· ±¸ ©ª ¹ V^ (4) where ©ª denotes the feature vector of the corresponding block translation, and the partition function is: ¹ V^ H º » ¼ ½¾ ¿¼ ½SÀwÁdÂIà §Ä ´+¶· ± ¸ ©ª Å Æ uÅ `6 A disadvantage of this approach is that the summation over ²³V^ can be rather difficult to compute. Consequently some sophisticated approximate inference methods are needed to carry out the computation. A detailed investigation of the global model will be left to another study. 3.2 Local Model Restrictions In the following, we consider a simplification of the direct global model in Eq. 4. As in (Tillmann, 2004), we model the block bigram probability as R T N Q W ¤ in Eq. 1. We distinguish the two cases (1) SRÇT N Q W , and (2) HKJ . Orientation is modeled only in the context of immediate neighbors for blocks that have left or right orientation. The log-linear model is defined as: RÇT N Q W ¨ ± ^ (5) H ´+¶· ±¸ ©ª ¨M V6 ¹ ¨ ^ where ^ is the source sentence, ©ª ¨M ¤ V is a locally defined feature vector that depends only on the current and the previous oriented blocks + and § § . The features were described at the beginning of the section. The partition function is given by ¹ ¨ ^ H à » ¾ Ä ÁdÂIà » ¾ ÉÈ §Ä ´+¶· ± ¸ ©ª ¨M 6 (6) 559 The set ²³ § §¨ ^ is a restricted set of possible successor oriented blocks that are consistent with the current block position and the source sentence ^ , to be described in the following paragraph. Note that a straightforward normalization over all block orientation pairs in Eq. 5 is not feasible: there are tens of millions of possible successor blocks (if we do not impose any restriction). For each block H V[ \] , aligned with a source sentence ^ , we define a source-induced alternative set: Z k H T all blocks R Z that share an identical source phrase with ÆW The set Z k contains the block itself and the block target phrases of blocks in that set might differ. To restrict the number of alternatives further, the elements of Z k are sorted according to the unigram count J V and we keep at most the top Ê blocks for each source interval ^ . We also use a modified alternative set Z k , where the block as well as the elements in the set Z k are single word blocks. The partition function is computed slightly differently during training and decoding: Training: for each event ¤ + d in a sentence pair ^ in Eq. 2 we compute the successor set £ § . This defines a set of ’true’ block successors. For each true successor , we compute the alternative set Z k . ²³ 6 V¨ ^ is the union of the alternative set for each successor . Here, the orientation from the true successor is assigned to each alternative in Z k . We obtain on the average alternatives per training event 6 + k in the list . Decoding: Here, each block that matches a source interval following 6 in the sentence ^ is a potential successor. We simply set ²³ ¨ ^ H Z k . Moreover, setting ¹ ¤ ¨ ^ HË during decoding does not change performance: the list Z k just restricts the possible target translations for a source phrase. Under this model, the log-probability of a possible translation of a source sentence ^ , as in Eq. 1, can be written as Ì¿Í
± ^ H (7) H ÌÍ ´k¶· ±¸ ©ª ¨M 6 ¹ ¨ ^ In the maximum-likelihood training, we find ± by maximizing the sum of the log-likelihood over observed sentences, each of them has the form in Eq. 7. Although the training methodology is similar to the global formulation given in Eq. 4, this localized version is computationally much easier to manage since the summation in the partition function ¹ ¤ ¨ ^ is now over a relatively small set of candidates. This computational advantage is the main reason that we adopt the local model in this paper. 3.3 Global versus Local Models Both the global and the localized log-linear models described in this section can be considered as maximumentropy models, similar to those used in natural language processing, e.g. maximum-entropy models for POS tagging and shallow parsing. In the parsing context, global models such as in Eq. 4 are sometimes referred to as conditional random field or CRF (Lafferty et al., 2001). Although there are some arguments that indicate that this approach has some advantages over localized models such as Eq. 5, the potential improvements are relatively small, at least in NLP applications. For SMT, the difference can be potentially more significant. This is because in our current localized model, successor blocks of different sizes are directly compared to each other, which is intuitively not the best approach (i.e., probabilities of blocks with identical lengths are more comparable). This issue is closely related to the phenomenon of multiple counting of events, which means that a source/target sentence pair can be decomposed into different oriented blocks in our model. In our current training procedure, we select one as the truth, while consider the other (possibly also correct) decisions as non-truth alternatives. In the global modeling, with appropriate normalization, this issue becomes less severe. With this limitation in mind, the localized model proposed here is still an effective approach, as demonstrated by our experiments. Moreover, it is simple both computationally and conceptually. Various issues such as the ones described above can be addressed with more sophisticated modeling techniques, which we shall be left to future studies. 3.4 Lexical Weighting The lexical weight [ \] of the block H [ V\] is computed similarly to (Koehn et al., 2003), but the lexical translation probability ^ ad is derived from the block set itself rather than from a word alignment, resulting in a simplified training. The lexical weight is computed as follows: [ \] H _ g JIÎ ^ g \] c Ï ^Mg a ^ g a H J k » Á Î ½ à » Ä J Here, the single-word-based translation probability ^g a is derived from the block set itself. H ^g Va and H ^Mg aXÐ. are single-word blocks, where source and target phrases are of length . J Î V^g a c k is the number of blocks Ð H ^ g a Ð for Ñ R di+ikiM f for which ^ g a Ð ° . 560 4 Online Training of Maximum-entropy Model The local model described in Section 3 leads to the following abstract maximum entropy training formulation: Ò ± HËÓÕÔÖS×]Ø Í Ù Å C Ì¿Í g ÁdÂÛÚ ´k¶· ± ¸ ¾ g ´k¶· ± ¸ ¾ Ü Ú (8) In this formulation, ± is the weight vector which we want to compute. The set ² consists of candidate labels for the j -th training instance, with the true label R ² . The labels here are block identities , ² corresponds to the alternative set ²Ý ¤ V¨ ^ and the ’true’ blocks are defined by the successor set £" V . The vector ¾ g is the feature vector of the j -th instance, corresponding to label h R ² . 
The symbol is short-hand for the featurevector ©ª ¨M ¤ V . This formulation is slightly different from the standard maximum entropy formulation typically encountered in NLP applications, in that we restrict the summation over a subset ² of all labels. Intuitively, this method favors a weight vector such that for each j , ±¸ ¾ ÜkÚ(Þ ±¸ ¾ g is large when hUß H . This effect is desirable since it tries to separate the correct classification from the incorrect alternatives. If the problem is completely separable, then it can be shown that the computed linear separator, with appropriate regularization, achieves the largest possible separating margin. The effect is similar to some multi-category generalizations of support vector machines (SVM). However, Eq. 8 is more suitable for non-separable problems (which is often the case for SMT) since it directly models the conditional probability for the candidate labels. A related method is multi-category perceptron, which explicitly finds a weight vector that separates correct labels from the incorrect ones in a mistake driven fashion (Collins, 2002). The method works by examining one sample at a time, and makes an update ±áàâ± ¢ u ¾ Ü+ÚÞ ¾ g when ±¸ ¿ ¾ ÜkÚªÞ ¾ g is not positive. To compute the update for a training instance j , one usually pick the h such that ±p¸ u ¾ Ü+ÚÞ ¾ g is the smallest. It can be shown that if there exist weight vectors that separate the correct label from incorrect labels h R ² for all hUß H , then the perceptron method can find such a separator. However, it is not entirely clear what this method does when the training data are not completely separable. Moreover, the standard mistake bound justification does not apply when we go through the training data more than once, as typically done in practice. In spite of some issues in its justification, the perceptron algorithm is still very attractive due to its simplicity and computational efficiency. It also works quite well for a number of NLP applications. In the following, we show that a simple and efficient online training procedure can also be developed for the maximum entropy formulation Eq. 8. The proposed update rule is similar to the perceptron method but with a soft mistake-driven update rule, where the influence of each feature is weighted by the significance of its mistake. The method is essentially a version of the socalled stochastic gradient descent method, which has been widely used in complicated stochastic optimization problems such as neural networks. It was argued recently in (Zhang, 2004) that this method also works well for standard convex formulations of binary-classification problems including SVM and logistic regression. Convergence bounds similar to perceptron mistake bounds can be developed, although unlike perceptron, the theory justifies the standard practice of going through the training data more than once. In the non-separable case, the method solves a regularized version of Eq. 8, which has the statistical interpretation of estimating the conditional probability. Consequently, it does not have the potential issues of the perceptron method which we pointed out earlier. Due to the nature of online update, just like perceptron, this method is also very simple to implement and is scalable to large problem size. This is important in the SMT application because we can have a huge number of training instances which we are not able to keep in memory at the same time. In stochastic gradient descent, we examine one training instance at a time. 
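The following hypothetical Python sketch (not the authors' code) illustrates the kind of per-instance update just described: for one training instance it computes the model probabilities over the restricted candidate set, then moves the weight vector toward the feature vector of the true block and away from the probability-weighted average of the candidates, which is the soft, mistake-weighted analogue of the perceptron update (the precise form is derived next as Eq. 9). The fixed learning rate and all names are assumptions.

# Illustrative sketch of the online (stochastic gradient) update for the
# restricted maximum-entropy objective: one training instance consists of
# the feature vector of the true successor block and the feature vectors
# of its candidate alternatives.
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # subtract the max for numerical stability
    z = sum(exps)
    return [e / z for e in exps]

def sgd_update(w, true_feats, cand_feats, eta=0.001):
    """One per-instance update.

    w           : list of weights
    true_feats  : feature vector of the correct (observed) successor
    cand_feats  : feature vectors of all candidates (including the true one)
    eta         : fixed learning rate (an assumption; a small constant)
    """
    probs = softmax([sum(wi * xi for wi, xi in zip(w, f)) for f in cand_feats])
    # Probability-weighted average of the candidate feature vectors.
    avg = [sum(p * f[d] for p, f in zip(probs, cand_feats)) for d in range(len(w))]
    # Gradient of the per-instance log-likelihood: true features minus expected features.
    return [wi + eta * (t - a) for wi, t, a in zip(w, true_feats, avg)]

# Tiny usage example with 3 features and 2 candidates.
w = [0.0, 0.0, 0.0]
true_f = [1.0, 0.0, 0.5]
cands = [true_f, [0.0, 1.0, 0.5]]
for _ in range(3):
    w = sgd_update(w, true_f, cands)
print(w)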
At the j -th instance, we derive the update rule by maximizing with respect to the term associated with the instance N ± H ÌÍ g ÁdÂ Ú ´+¶· ±¸ ¾ g ´k¶· ± ¸ ¾ ÜkÚ in Eq. 8. We do a gradient descent localized to this instance as ±ãàä± Þæå `ç ç Ù N ± , where å is a parameter often referred to as the learning rate. For Eq. 8, the update rule becomes: ±áàâ± ¢ å g ÁdÂÛÚ ´+¶· ±¸ ¾ g ¿ ¾ ÜkÚ(Þ ¾ g g ÁdÂÛÚ ´k¶· ± ¸ ¾ g (9) Similar to online algorithms such as the perceptron, we apply this update rule one by one to each training instance (randomly ordered), and may go-through data points repeatedly. Compare Eq. 9 to the perceptron update, there are two main differences, which we discuss below. The first difference is the weighting scheme. Instead of putting the update weight to a single (most mistaken) feature component, as in the perceptron algorithm, we use a soft-weighting scheme, with each feature component h weighted by a factor ´+¶· ±p¸ ¾ g è Ð ÁdÂ Ú ´k¶· ±¸ ¾ Ð . A component h with larger ±p¸ ¾ g gets more weight. This effect is in principle similar to the perceptron update. The smoothing effect in Eq. 9 is useful for non-separable problems 561 since it does not force an update rule that attempts to separate the data. Each feature component gets a weight that is proportional to its conditional probability. The second difference is the introduction of a learning rate parameter å . For the algorithm to converge, one should pick a decreasing learning rate. In practice, however, it is often more convenient to select a fixed å H å for all j . This leads to an algorithm that approximately solve a regularized version of Eq. 8. If we go through the data repeatedly, one may also decrease the fixed learning rate by monitoring the progress made each time we go through the data. For practical purposes, a fixed small å such as å H .é is usually sufficient. We typically run forty updates over the training data. Using techniques similar to those of (Zhang, 2004), we can obtain a convergence theorem for our algorithm. Due to the space limitation, we will not present the analysis here. An advantage of this method over standard maximum entropy training such as GIS (generalized iterative scaling) is that it does not require us to store all the data in memory at once. Moreover, the convergence analysis can be used to show that if ê is large, we can get a very good approximate solution by going through the data only once. This desirable property implies that the method is particularly suitable for large scale problems. 5 Experimental Results The translation system is tested on an Arabic-to-English translation task. The training data comes from the UN news sources. Some punctuation tokenization and some number classing are carried out on the English and the Arabic training data. In this paper, we present results for two test sets: (1) the devtest set uses data provided by LDC, which consists of sentences with Õ"Ê Arabic words with reference translations. (2) the blind test set is the MT03 Arabic-English DARPA evaluation test set consisting of " sentences with M b Arabic words with also reference translations. Experimental results are reported in Table 2: here cased BLEU results are reported on MT03 Arabic-English test set (Papineni et al., 2002). The word casing is added as post-processing step using a statistical model (details are omitted here). 
In order to speed up the parameter training we filter the original training data according to the two test sets: for each of the test sets we take all the Arabic substrings up to length and filter the parallel training data to include only those training sentence pairs that contain at least one out of these phrases: the ’LDC’ training data contains about bM thousand sentence pairs and the ’MT03’ training data contains about Õ" thousand sentence pairs. Two block sets are derived for each of the training sets using a phrase-pair selection algorithm similar to (Koehn et al., 2003; Tillmann and Xia, 2003). These block sets also include blocks that occur only once in the training data. Additionally, some heuristic filtering is used to increase phrase translation accuracy (Al-Onaizan et al., 2004). 5.1 Likelihood Training Results We compare model performance with respect to the number and type of features used as well as with respect to different re-ordering models. Results for Ê experiments are shown in Table 2, where the feature types are described in Table 1. The first experimental results are obtained by carrying out the likelihood training described in Section 3. Line in Table 2 shows the performance of the baseline block unigram ’MON’ model which uses two ’float’ features: the unigram probability and the boundary-word language model probability. No block re-ordering is allowed for the baseline model (a monotone block sequence is generated). The ’SWAP’ model in line uses the same two features, but neighbor blocks can be swapped. No performance increase is obtained for this model. The ’SWAP & OR’ model uses an orientation model as described in Section 3. Here, we obtain a small but significant improvement over the baseline model. Line shows that by including two additional ’float’ features: the lexical weighting and the language model probability of predicting the second and subsequent words of the target clump yields a further significant improvement. Line shows that including binary features and training their weights on the training data actually decreases performance. This issue is addressed in Section 5.2. The training is carried out as follows: the results in line are obtained by training ’float’ weights only. Here, the training is carried out by running only once over % of the training data. The model including the binary features is trained on the entire training data. We obtain about b million features of the type defined in Eq. 3 by setting the threshold JëHì . Forty iterations over the training data take about hours on a single Intel machine. Although the online algorithm does not require us to do so, our training procedure keeps the entire training data and the weight vector ± in about gigabytes of memory. For blocks with neutral orientation HãJ , we train a separate model that does not use the orientation model feature or the binary features. E.g. for the results in line in Table 2, the neutral model would use the features Ví Vî V , but not k and V© . Here, the neutral model is trained on the neutral orientation bigram subsequence that is part of Eq. 2. 5.2 Modified Weight Training We implemented the following variation of the likelihood training procedure described in Section 3, where we make use of the ’LDC’ devtest set. First, we train a model on the ’LDC’ training data using float features and the binary features. We use this model to decode 562 Table 1: List of feature-vector components. For a description, see Section 3. 
Description (a) Unigram probability (b) Orientation probability (c) LM first word probability (d) LM second and following words probability (e) Lexical weighting (f) Binary Block Bigram Features Table 2: Cased BLEU translation results with confidence intervals on the MT03 test data. The third column summarizes the model variations. The results in lines and Ê are for a cheating experiment: the float weights are trained on the test data itself. Re-ordering Components BLEU 1 ’MON’ (a),(c) " pï G 2 ’SWAP’ (a),(c) " pï G 3 ’SWAP & OR’ (a),(b),(c) " Ê ï G 4 ’SWAP & OR’ (a)-(e) ðï G 5 ’SWAP & OR’ (a)-(f) pï G 6 ’SWAP & OR’ (a)-(e) (ldc devtest) ï G 7 ’SWAP & OR’ (a)-(f) (ldc devtest) pï G 8 ’SWAP & OR’ (a)-(e) (mt03 test) Ê pï G 9 ’SWAP & OR’ (a)-(f) (mt03 test) Ê pï G the devtest ’LDC’ set. During decoding, we generate a ’translation graph’ for every input sentence using a procedure similar to (Ueffing et al., 2002): a translation graph is a compact way of representing candidate translations which are close in terms of likelihood. From the translation graph, we obtain the Õ" best translations according to the translation score. Out of this list, we find the block sequence that generated the top BLEU-scoring target translation. Computing the top BLEU-scoring block sequence for all the input sentences we obtain: H k C 6 (10) where J ÊÕ " . Here, J is the number of blocks needed to decode the entire devtest set. Alternatives for each of the events in M are generated as described in Section 3.2. The set of alternatives is further restricted by using only those blocks that occur in some translation in the "" -best list. The float weights are trained on the modified training data in Eq. 10, where the training takes only a few seconds. We then decode the ’MT03’ test set using the modified ’float’ weights. As shown in line and line there is almost no change in performance between training on the original training data in Eq. 2 or on the modified training data in Eq. 10. Line shows that even when training the float weights on an event set obtained from the test data itself in a cheating experiment, we obtain only a moderate performance improvement from b to Ê . For the experimental results in line and Ê , we use the same five float weights as trained for the experiments in line and and keep them fixed while training the binary feature weights only. Using the binary features leads to only a minor improvement in BLEU from b to in line . For this best model, we obtain a Mñ % BLEU improvement over the baseline. From our experimental results, we draw the following conclusions: (1) the translation performance is largely dominated by the ’float’ features, (2) using the same set of ’float’ features, the performance doesn’t change much when training on training, devtest, or even test data. Although, we do not obtain a significant improvement from the use of binary features, currently, we expect the use of binary features to be a promising approach for the following reasons: ò The current training does not take into account the block interaction on the sentence level. A more accurate approximation of the global model as discussed in Section 3.1 might improve performance. ò As described in Section 3.2 and Section 5.2, for efficiency reasons alternatives are computed from source phrase matches only. During training, more accurate local approximations for the partition function in Eq. 6 can be obtained by looking at block translations in the context of translation sequences. 
This involves the computationally expensive generation of a translation graph for each training sentence pair. This is future work. ò As mentioned in Section 1, viewing the translation process as a sequence of local discussions makes it similar to other NLP problems such as POS tagging, phrase chunking, and also statistical parsing. This similarity may facilitate the incorporation of these approaches into our translation model. 6 Discussion and Future Work In this paper we proposed a method for discriminatively training the parameters of a block SMT decoder. We discussed two possible approaches: global versus local. This work focused on the latter, due to its computational advantages. Some limitations of our approach have also been pointed out, although our experiments showed that this simple method can significantly improve the baseline model. As far as the log-linear combination of float features is concerned, similar training procedures have been proposed in (Och, 2003). This paper reports the use of 563 features whose parameter are trained to optimize performance in terms of different evaluation criteria, e.g. BLEU. On the contrary, our paper shows that a significant improvement can also be obtained using a likelihood training criterion. Our modified training procedure is related to the discriminative re-ranking procedure presented in (Shen et al., 2004). In fact, one may view discriminative reranking as a simplification of the global model we discussed, in that it restricts the number of candidate global translations to make the computation more manageable. However, the number of possible translations is often exponential in the sentence length, while the number of candidates in a typically reranking approach is fixed. Unless one employs an elaborated procedure, the candidate translations may also be very similar to one another, and thus do not give a good coverage of representative translations. Therefore the reranking approach may have some severe limitations which need to be addressed. For this reason, we think that a more principled treatment of global modeling can potentially lead to further performance improvements. For future work, our training technique may be used to train models that handle global sentence-level reorderings. This might be achieved by introducing orientation sequences over phrase types that have been used in ((Schafer and Yarowsky, 2003)). To incorporate syntactic knowledge into the block-based model, we will examine the use of additional real-valued or binary features, e.g. features that look at whether the block phrases cross syntactic boundaries. This can be done with only minor modifications to our training method. Acknowledgment This work was partially supported by DARPA and monitored by SPAWAR under contract No. N66001-99-28916. The paper has greatly profited from suggestions by the anonymous reviewers. References Yaser Al-Onaizan, Niyu Ge, Young-Suk Lee, Kishore Papineni, Fei Xia, and Christoph Tillmann. 2004. IBM Site Report. In NIST 2004 Machine Translation Workshop, Alexandria, VA, June. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proc. EMNLP’02. Philipp Koehn, Franz-Josef Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proc. of the HLT-NAACL 2003 conference, pages 127–133, Edmonton, Canada, May. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 
In Proceedings of ICML-01, pages 282–289. Franz-Josef Och, Christoph Tillmann, and Hermann Ney. 1999. Improved Alignment Models for Statistical Machine Translation. In Proc. of the Joint Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC 99), pages 20–28, College Park, MD, June. Och et al. 2004. A Smorgasbord of Features for Statistical Machine Translation. In Proceedings of the Joint HLT and NAACL Conference (HLT 04), pages 161– 168, Boston, MA, May. Franz-Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proc. of the 41st Annual Conf. of the Association for Computational Linguistics (ACL 03), pages 160–167, Sapporo, Japan, July. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of machine translation. In Proc. of the 40th Annual Conf. of the Association for Computational Linguistics (ACL 02), pages 311–318, Philadelphia, PA, July. Charles Schafer and David Yarowsky. 2003. Statistical Machine Translation Using Coercive Two-Level Syntactic Translation. In Proc. of the Conf. on Empirical Methods in Natural Language Processing (EMNLP 03), pages 9–16, Sapporo, Japan, July. Libin Shen, Anoop Sarkar, and Franz-Josef Och. 2004. Discriminative Reranking of Machine Translation. In Proceedings of the Joint HLT and NAACL Conference (HLT 04), pages 177–184, Boston, MA, May. Christoph Tillmann and Fei Xia. 2003. A Phrase-based Unigram Model for Statistical Machine Translation. In Companian Vol. of the Joint HLT and NAACL Conference (HLT 03), pages 106–108, Edmonton, Canada, June. Christoph Tillmann. 2004. A Unigram Orientation Model for Statistical Machine Translation. In Companian Vol. of the Joint HLT and NAACL Conference (HLT 04), pages 101–104, Boston, MA, May. Nicola Ueffing, Franz-Josef Och, and Hermann Ney. 2002. Generation of Word Graphs in Statistical Machine Translation. In Proc. of the Conf. on Empirical Methods in Natural Language Processing (EMNLP 02), pages 156–163, Philadelphia, PA, July. Tong Zhang. 2004. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In ICML 04, pages 919–926. 564 | 2005 | 69 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 50–57, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Aggregation improves learning: experiments in natural language generation for intelligent tutoring systems Barbara Di Eugenio and Davide Fossati and Dan Yu University of Illinois Chicago, IL, 60607, USA {bdieugen,dfossa1,dyu6}@uic.edu Susan Haller University of Wisconsin - Parkside Kenosha, WI 53141, USA [email protected] Michael Glass Valparaiso University Valparaiso, IN, 46383, USA [email protected] Abstract To improve the interaction between students and an intelligent tutoring system, we developed two Natural Language generators, that we systematically evaluated in a three way comparison that included the original system as well. We found that the generator which intuitively produces the best language does engender the most learning. Specifically, it appears that functional aggregation is responsible for the improvement. 1 Introduction The work we present in this paper addresses three issues: evaluation of Natural Language Generation (NLG) systems, the place of aggregation in NLG, and NL interfaces for Intelligent Tutoring Systems. NLG systems have been evaluated in various ways, such as via task efficacy measures, i.e., measuring how well the users of the system perform on the task at hand (Young, 1999; Carenini and Moore, 2000; Reiter et al., 2003). We also employed task efficacy, as we evaluated the learning that occurs in students interacting with an Intelligent Tutoring System (ITS) enhanced with NLG capabilities. We focused on sentence planning, and specifically, on aggregation. We developed two different feedback generation engines, that we systematically evaluated in a three way comparison that included the original system as well. Our work is novel for NLG evaluation in that we focus on one specific component of the NLG process, aggregation. Aggregation pertains to combining two or more of the messages to be communicated into one sentence (Reiter and Dale, 2000). Whereas it is considered an essential task of an NLG system, its specific contributions to the effectiveness of the text that is eventually produced have rarely been assessed (Harvey and Carberry, 1998). We found that syntactic aggregation does not improve learning, but that what we call functional aggregation does. Further, we ran a controlled data collection in order to provide a more solid empirical base for aggregation rules than what is normally found in the literature, e.g. (Dalianis, 1996; Shaw, 2002). As regards NL interfaces for ITSs, research on the next generation of ITSs (Evens et al., 1993; Litman et al., 2004; Graesser et al., 2005) explores NL as one of the keys to bridging the gap between current ITSs and human tutors. However, it is still not known whether the NL interaction between students and an ITS does in fact improve learning. We are among the first to show that this is the case. We will first discuss DIAG, the ITS shell we are using, and the two feedback generators that we developed, DIAG-NLP1and DIAG-NLP2. Since the latter is based on a corpus study, we will briefly describe that as well. We will then discuss the formal evaluation we conducted and our results. 2 Natural Language Generation for DIAG DIAG (Towne, 1997) is a shell to build ITSs based on interactive graphical models that teach students to troubleshoot complex systems such as home heating and circuitry. 
A DIAG application presents a student with a series of troubleshooting problems of increasing difficulty. The student tests indicators and tries to infer which faulty part (RU) may cause the abnormal states detected via the indicator readings. RU stands for replaceable unit, because the only course of action for the student to fix the problem is to replace faulty components in the graphical simulation. 50 Figure 1: The furnace system Fig. 1 shows the furnace, one subsystem of the home heating system in our DIAG application. Fig. 1 includes indicators such as the gauge labeled Water Temperature, RUs, and complex modules (e.g., the Oil Burner) that contain indicators and RUs. Complex components are zoomable. At any point, the student can consult the tutor via the Consult menu (cf. the Consult button in Fig. 1). There are two main types of queries: ConsultInd(icator) and ConsultRU. ConsultInd queries are used mainly when an indicator shows an abnormal reading, to obtain a hint regarding which RUs may cause the problem. DIAG discusses the RUs that should be most suspected given the symptoms the student has already observed. ConsultRU queries are mainly used to obtain feedback on the diagnosis that a certain RU is faulty. DIAG responds with an assessment of that diagnosis and provides evidence for it in terms of the symptoms that have been observed relative to that RU. The original DIAG system (DIAG-orig) uses very simple templates to assemble the text to present to the student. The top parts of Figs. 2 and 3 show the replies provided by DIAG-orig to a ConsultInd on the Visual Combustion Check, and to a ConsultRu on the Water Pump. The highly repetitive feedback by DIAG-orig screams for improvements based on aggregation techniques. Our goal in developing DIAG-NLP1 and DIAG-NLP2 was to assess whether simple, rapidly deployable NLG techniques would lead to measurable improvements in the student’s learning. Thus, in both cases it is still DIAG that performs content determination, and provides to DIAG-NLP1 and DIAG-NLP2 a file in which the facts to be communicated are written – a fact is the basic unit of information that underlies each of the clauses in a reply by DIAG-orig. The only way we altered the interaction between student and system is the actual language that is presented in the output window. In DIAG-NLP1 we mostly explored using syntactic aggregation to improve the feedback, whereas DIAG-NLP2 is corpus-based and focuses on functional aggregation. In both DIAG-NLP1 and DIAGNLP2, we use EXEMPLARS (White and Caldwell, 1998), an object-oriented, rule-based generator. The rules (called exemplars) are meant to capture an exemplary way of achieving a communicative goal in a given context. EXEMPLARS selects rules by traversing the exemplar specialization hierarchy and evaluating the applicability conditions associated with each exemplar. The visual combustion check is igniting which is abnormal (normal is combusting). Oil Nozzle always produces this abnormality when it fails. Oil Supply Valve always produces this abnormality when it fails. Oil pump always produces this abnormality when it fails. Oil Filter always produces this abnormality when it fails. System Control Module sometimes produces this abnormality when it fails. Ignitor Assembly never produces this abnormality when it fails. Burner Motor always produces this abnormality when it fails. The visual combustion check indicator is igniting. This is abnormal. Normal is combusting. 
Within the furnace system, this is sometimes caused if the System Control Module has failed. Within the Oil Burner this is never caused if the Ignitor Assembly has failed. In contrast, this is always caused if the Burner Motor, Oil Filter, Oil Pump, Oil Supply Valve, or Oil Nozzle has failed.

The combustion is abnormal. In the oil burner, check the units along the path of the oil and the burner motor.

Figure 2: Answers to ConsultInd by DIAG-orig, DIAG-NLP1 and DIAG-NLP2

Water pump is a very poor suspect. Some symptoms you have seen conflict with that theory. Water pump sound was normal. This normal indication never results when this unit fails. Visual combustion check was igniting. This abnormal indication never results when this unit fails. Burner Motor RMP Gauge was 525. This normal indication never results when this unit fails.

The Water pump is a very poor suspect. Some symptoms you have seen conflict with that theory. The following indicators never display normally when this unit fails. Within the furnace system, the Burner Motor RMP Gauge is 525. Within the water pump and safety cutoff valve, the water pump sound indicator is normal. The following indicators never display abnormally when this unit fails. Within the fire door sight hole, the visual combustion check indicator is igniting.

The water pump is a poor suspect since the water pump sound is ok. You have seen that the combustion is abnormal. Check the units along the path of the oil and the electrical devices.

Figure 3: Answers to ConsultRU by DIAG-orig, DIAG-NLP1 and DIAG-NLP2

2.1 DIAG-NLP1: Syntactic aggregation

DIAG-NLP1 (i) introduces syntactic aggregation (Dalianis, 1996; Huang and Fiedler, 1996; Reape and Mellish, 1998; Shaw, 2002) and what we call structural aggregation, namely, grouping parts according to the structure of the system; (ii) generates some referring expressions; (iii) models a few rhetorical relations; and (iv) improves the format of the output. The middle parts of Figs. 2 and 3 show the revised output produced by DIAG-NLP1. E.g., in Fig. 2 the RUs of interest are grouped by the system modules that contain them (Oil Burner and Furnace System), and by the likelihood that a certain RU causes the observed symptoms. In contrast to the original answer, the revised answer highlights that the Ignitor Assembly cannot cause the symptom. In DIAG-NLP1, EXEMPLARS accesses the SNePS Knowledge Representation and Reasoning System (Shapiro, 2000) for static domain information. SNePS makes it easy to recognize structural similarities and use shared structures. Using SNePS, we can examine the dimensional structure of an aggregation and its values to give preference to aggregations with top-level dimensions that have fewer values, to give summary statements when a dimension has many values that are reported on, and to introduce simple text structuring in terms of rhetorical relations, inserting relations like contrast and concession to highlight distinctions between dimensional values (see Fig. 2, middle). DIAG-NLP1 uses the GNOME algorithm (Kibble and Power, 2000) to generate referential expressions. Importantly, using SNePS, propositions can be treated as discourse entities, added to the discourse model and referred to (see This is ... caused if ... in Fig. 2, middle). Information about lexical realization, and choice of referring expression is encoded in the appropriate exemplars.

1 DIAG-NLP1 actually augments and refines the first feedback generator we created for DIAG, DIAG-NLP0 (Di Eugenio et al., 2002). DIAG-NLP0 only covered (i) and (iv).
2 In DIAG, domain knowledge is hidden and hardly accessible. Thus, in both DIAG-NLP1 and DIAG-NLP2 we had to build a small knowledge base that contains domain knowledge.
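As a rough illustration of the kind of structural and syntactic aggregation performed by DIAG-NLP1 (the actual system uses EXEMPLARS and SNePS; the sketch below is a hypothetical Python simplification with invented data structures and realization strings): facts about replaceable units are grouped by the module that contains them and by how likely they are to cause the symptom, and each group is realized as a single aggregated sentence instead of one sentence per unit, roughly as in the middle part of Fig. 2.

# Illustrative sketch: group RU facts by containing module and likelihood,
# then realize each group as one aggregated sentence.
from itertools import groupby

FACTS = [  # (replaceable unit, containing module, likelihood of causing the symptom)
    ("Oil Nozzle", "Oil Burner", "always"),
    ("Oil Supply Valve", "Oil Burner", "always"),
    ("Oil Pump", "Oil Burner", "always"),
    ("Oil Filter", "Oil Burner", "always"),
    ("Burner Motor", "Oil Burner", "always"),
    ("Ignitor Assembly", "Oil Burner", "never"),
    ("System Control Module", "Furnace System", "sometimes"),
]

def conjoin(items):
    """Syntactic aggregation of a list of noun phrases into one conjoined NP."""
    return items[0] if len(items) == 1 else ", ".join(items[:-1]) + ", or " + items[-1]

def aggregated_reply(facts):
    sentences = []
    keyfunc = lambda f: (f[1], f[2])           # group by (module, likelihood)
    for (module, likelihood), group in groupby(sorted(facts, key=keyfunc), key=keyfunc):
        rus = [ru for ru, _, _ in group]
        sentences.append(
            f"Within the {module}, this is {likelihood} caused "
            f"if the {conjoin(rus)} has failed."
        )
    return " ".join(sentences)

print(aggregated_reply(FACTS))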
2.2 DIAG-NLP2: functional aggregation

In the interest of rapid prototyping, DIAG-NLP1 was implemented without the benefit of a corpus study. DIAG-NLP2 is the empirically grounded version of the feedback generator. We collected 23 tutoring interactions between a student using the DIAG tutor on home heating and two human tutors, for a total of 272 tutor turns, of which 235 in reply to ConsultRU and 37 in reply to ConsultInd (the type of student query is automatically logged). The tutor and the student are in different rooms, sharing images of the same DIAG tutoring screen. When the student consults DIAG, the tutor sees, in tabular form, the information that DIAG would use in generating its advice — the same "fact file" that DIAG gives to DIAG-NLP1 and DIAG-NLP2 — and types a response that substitutes for DIAG's. The tutor is presented with this information because we wanted to uncover empirical evidence for aggregation rules in our domain. Although we cannot constrain the tutor to mention only the facts that DIAG would have communicated, we can analyze how the tutor uses the information provided by DIAG. We developed a coding scheme (Glass et al., 2002) and annotated the data. As the annotation was performed by a single coder, we lack measures of intercoder reliability. Thus, what follows should be taken as observations rather than as rigorous findings – useful observations they clearly are, since DIAG-NLP2 is based on these observations and its language fosters the most learning.

Our coding scheme focuses on four areas. Fig. 4 shows examples of some of the tags (the SCM is the System Control Module). Each tag has from one to five additional attributes (not shown) that need to be annotated too.

Domain ontology. We tag objects in the domain with their class (indicator, RU) and their states, denoted by indication and operationality, respectively.

Tutoring actions. They include (i) Judgment: the tutor evaluates what the student did. (ii) Problem solving: the tutor suggests the next course of action. (iii) The tutor imparts Domain Knowledge.

Aggregation. Objects may be functional aggregates, such as the oil burner, which is a system component that includes other components; linguistic aggregates, which include plurals and conjunctions; or a summary over several unspecified indicators or RUs. Functional/linguistic aggregate and summary tags often co-occur, as shown in Fig. 4.

Relation to DIAG's output. Contrary to all other tags, in this case we annotate the input that DIAG gave the tutor. We tag its portions as included / excluded / contradicted, according to how it has been dealt with by the tutor.

Tutors provide explicit problem solving directions in 73% of the replies, and evaluate the student's action in 45% of the replies (clearly, they do both in 28% of the replies, as in Fig. 4). As expected, they are much more concise than DIAG, e.g., they never mention RUs that cannot or are not as likely to cause a certain problem, such as, respectively, the ignitor assembly and the SCM in Fig. 2. As regards aggregation, 101 out of 551 RUs, i.e. 18%, are labelled as summary; 38 out of 193 indicators, i.e. 20%, are labelled as summary.
These percentages, though seemingly low, represent a considerable amount of aggregation, since in our domain some items have very little in common with others, and hence cannot be aggregated. Further, tutors aggregate parts functionally rather than syntactically. For example, the same assemblage of parts, i.e., oil nozzle, supply valve, pump, filter, etc., can be described as the other items on the fuel line or as the path of the oil flow. Finally, directness – an attribute on the indicator tag – encodes whether the tutor explicitly talks about the indicator (e.g., The water temperature gauge reading is low), or implicitly via the object to which the indicator refers (e.g., the water is too cold). 110 out of 193 indicators, i.e. 57%, are marked as implicit, 45, i.e. 41%, as explicit, and 2% are not marked for directness (the coder was free to leave attributes unmarked). This, and the 137 occurrences of indication, prompted us to refer to objects and their states, rather than to indicators (as implemented by Steps 2 in Fig. 5, and 2(b)i, 3(b)i, 3(c)i in Fig. 6, which generate The combustion is abnormal and The water pump sound is OK in Figs. 2 and 3). 2.3 Feedback Generation in DIAG-NLP2 In DIAG-NLP1 the fact file provided by DIAG is directly processed by EXEMPLARS. In contrast, in DIAG-NLP2 a planning module manipulates the information before passing it to EXEMPLARS. This module decides which information to include according to the type of query the system is responding to, and produces one or more Sentence Structure objects. These are then passed to EXEMPLARS that transforms them into Deep Syntactic Structures. Then, a sentence realizer, RealPro (Lavoie and Rambow, 1997), transforms them into English sentences. Figs. 5 and 6 show the control flow in DIAGNLP2 for feedback generation for ConsultInd and ConsultRU. Step 3a in Fig. 5 chooses, among all the RUs that DIAG would talk about, only those that would definitely result in the observed symptom. Step 2 in the AGGREGATE procedure in Fig. 5 uses a simple heuristic to decide whether and how to use functional aggregation. For each RU, its possible aggregators and the number n of units it covers are listed in a table (e.g., electrical devices covers 4 RUs, ignitor, photoelectric cell, transformer and burner motor). If a group of REL-RUs contains k units that a certain aggregator Agg covers, if k < n 2 , Agg will not be used; if n 2 ≤k < n, Agg preceded by some of will be used; if k = n, Agg will be used. DIAG-NLP2 does not use SNePS, but a relational database storing relations, such as the ISA hierarchy (e.g., burner motor IS-A RU), information about referents of indicators (e.g., room temperature gauge REFERS-TO room), and correlations between RUs and the indicators they affect. 3 Evaluation Our empirical evaluation is a three group, betweensubject study: one group interacts with DIAG-orig, 53 [judgment [replaceable−unit the ignitor] is a poor suspect] since [indication combustion is working] during startup. The problem is that the SCM is shutting the system off during heating. [domain−knowledge The SCM reads [summary [linguistic−aggregate input signals from sensors]] and uses the signals to determine how to control the system.] [problem−solving Check the sensors.] Figure 4: Examples of a coded tutor reply 1. IND ←queried indicator 2. Mention the referent of IND and its state 3. IF IND reads abnormal, (a) REL-RUs ←choose relevant RUs (b) AGGR-RUs ←AGGREGATE(REL-RUs) (c) Suggest to check AGGR-RUs AGGREGATE(RUs) 1. 
Partition REL-RUs into subsets by system structure 2. Apply functional aggregation to subsets Figure 5: DIAG-NLP2: Feedback generation for ConsultInd one with DIAG-NLP1, one with DIAG-NLP2. The 75 subjects (25 per group) were all science or engineering majors affiliated with our university. Each subject read some short material about home heating, went through one trial problem, then continued through the curriculum on his/her own. The curriculum consisted of three problems of increasing difficulty. As there was no time limit, every student solved every problem. Reading materials and curriculum were identical in the three conditions. While a subject was interacting with the system, a log was collected including, for each problem: whether the problem was solved; total time, and time spent reading feedback; how many and which indicators and RUs the subject consults DIAG about; how many, and which RUs the subject replaces. We will refer to all the measures that were automatically collected as performance measures. At the end of the experiment, each subject was administered a questionnaire divided into three parts. The first part (the posttest) consists of three questions and tests what the student learned about the domain. The second part concerns whether subjects remember their actions, specifically, the RUs they replaced. We quantify the subjects’ recollections in terms of precision and recall with respect to the log that the system collects. We expect precision and recall of the replaced RUs to correlate with transfer, namely, to predict how well a subject is able to apply what s/he learnt about diagnosing malfunctions 1. RU ←queried RU REL-IND ←indicator associated to RU 2. IF RU warrants suspicion, (a) state RU is a suspect (b) IF student knows that REL-IND is abnormal i. remind him of referent of REL-IND and its abnormal state ii. suggest to replace RU (c) ELSE suggest to check REL-IND 3. ELSE (a) state RU is not a suspect (b) IF student knows that REL-IND is normal i. use referent of REL-IND and its normal state to justify judgment (c) IF student knows of abnormal indicators OTHER-INDs i. remind him of referents of OTHER-INDs and their abnormal states ii. FOR each OTHER-IND A. REL-RUs ←RUs associated with OTHER-IND B. AGGR-RUs ←AGGREGATE(REL-RUs) ∪AGGR-RUs iii. Suggest to check AGGR-RUs Figure 6: DIAG-NLP2: Feedback generation for ConsultRU to new problems. The third part concerns usability, to be discussed below. We found that subjects who used DIAG-NLP2 had significantly higher scores on the posttest, and were significantly more correct (higher precision) in remembering what they did. As regards performance measures, there are no so clear cut results. As regards usability, subjects prefer DIAG-NLP1/2 to DIAG-orig, however results are mixed as regards which of the two they actually prefer. In the tables that follow, boldface indicates significant differences, as determined by an analysis of variance performed via ANOVA, followed by posthoc Tukey tests. Table 1 reports learning measures, average across the three problems. DIAG-NLP2 is significantly better as regards PostTest score (F = 10.359, p = 0.000), and RU Precision (F = 4.719, p = 0.012). Performance on individual questions in the 54 DIAG-orig DIAG-NLP1 DIAG-NLP2 PostTest 0.72 0.69 0.90 RU Precision 0.78 0.70 0.91 RU Recall .53 .47 .40 Table 1: Learning Scores Figure 7: Scores on PostTest questions PostTest3 is illustrated in Fig. 7. 
Scores in DIAGNLP2 are always higher, significantly so on questions 2 and 3 (F = 8.481, p = 0.000, and F = 7.909, p = 0.001), and marginally so on question 1 (F = 2.774, p = 0.069).4 D-Orig D-NLP1 D-NLP2 Total Time 30’17” 28’34” 34’53” RU Replacements 8.88 11.12 11.36 ConsultInd 22.16 6.92 28.16 Avg. Reading Time 8” 14” 2” ConsultRU 63.52 45.68 52.12 Avg. Reading Time 5” 4” 5” Table 2: Performance Measures Table 2 reports performance measures, cumulative across the three problems, other than average reading times. Subjects don’t differ significantly in the time they spend solving the problems, or in the number of RU replacements they perform. DIAG’s assumption (known to the subjects) is that there is only one broken RU per problem, but the simulation allows subjects to replace as many as they want without any penalty before they come to the correct solution. The trend on RU replacements is opposite what we would have hoped for: when repairing a real system, replacing parts that are working should clearly be kept to a minimum, and subjects replace 3The three questions are: 1. Describe the main subsystems of the furnace. 2. What is the purpose of (a) the oil pump (b) the system control module? 3. Assume the photoelectric cell is covered with enough soot that it could not detect combustion. What impact would this have on the system? 4The PostTest was scored by one of the authors, following written guidelines. fewer parts in DIAG-orig. The next four entries in Table 2 report the number of queries that subjects ask, and the average time it takes subjects to read the feedback. The subjects ask significantly fewer ConsultInd in DIAG-NLP1 (F = 8.905, p = 0.000), and take significantly less time reading ConsultInd feedback in DIAG-NLP2 (F = 15.266, p = 0.000). The latter result is not surprising, since the feedback in DIAG-NLP2 is much shorter than in DIAG-orig and DIAG-NLP1. Neither the reason not the significance of subjects asking many fewer ConsultInd of DIAG-NLP1 are apparent to us – it happens for ConsultRU as well, to a lesser, not significant degree. We also collected usability measures. Although these are not usually reported in ITS evaluations, in a real setting students should be more willing to sit down with a system that they perceive as more friendly and usable. Subjects rate the system along four dimensions on a five point scale: clarity, usefulness, repetitiveness, and whether it ever misled them (the scale is appropriately arranged: the highest clarity but the lowest repetitiveness receive 5 points). There are no significant differences on individual dimensions. Cumulatively, DIAG-NLP2 (at 15.08) slightly outperforms the other two (DIAG-orig at 14.68 and DIAG-NLP1 at 14.32), however, the difference is not significant (highest possible rating is 20 points). prefer neutral disprefer DIAG-NLP1 to DIAG-orig 28 5 17 DIAG-NLP2 to DIAG-orig 34 1 15 DIAG-NLP2 to DIAG-NLP1 24 1 25 Table 3: User preferences among the three systems prefer neutral disprefer Consult Ind. 8 1 16 Consult RU 16 0 9 Table 4: DIAG-NLP2 versus DIAG-NLP1 natural concise clear contentful DIAG-NLP1 4 8 10 23 DIAG-NLP2 16 8 11 12 Table 5: Reasons for system preference Finally,5 on paper, subjects compare two pairs of versions of feedback: in each pair, the first feedback 5Subjects can also add free-form comments. Only few did 55 is generated by the system they just worked with, the second is generated by one of the other two systems. 
Subjects say which version they prefer, and why (they can judge the system along one or more of four dimensions: natural, concise, clear, contentful). The first two lines in Table 3 show that subjects prefer the NLP systems to DIAG-orig (marginally significant, χ2 = 9.49, p < 0.1). DIAG-NLP1 and DIAG-NLP2 receive the same number of preferences; however, a more detailed analysis (Table 4) shows that subjects prefer DIAG-NLP1 for feedback to ConsultInd, but DIAG-NLP2 for feedback to ConsultRu (marginally significant, χ2 = 5.6, p < 0.1). Finally, subjects find DIAG-NLP2 more natural, but DIAG-NLP1 more contentful (Table 5, χ2 = 10.66, p < 0.025). 4 Discussion and conclusions Our work touches on three issues: aggregation, evaluation of NLG systems, and the role of NL interfaces for ITSs. In much work on aggregation (Huang and Fiedler, 1996; Horacek, 2002), aggregation rules and heuristics are shown to be plausible, but are not based on any hard evidence. Even where corpus work is used (Dalianis, 1996; Harvey and Carberry, 1998; Shaw, 2002), the results are not completely convincing because we do not know for certain the content to be communicated from which these texts supposedly have been aggregated. Therefore, positing empirically based rules is guesswork at best. Our data collection attempts at providing a more solid empirical base for aggregation rules; we found that tutors exclude significant amounts of factual information, and use high degrees of aggregation based on functionality. As a consequence, while part of our rules implement standard types of aggregation, such as conjunction via shared participants, we also introduced functional aggregation (see conceptual aggregation (Reape and Mellish, 1998)). As regards evaluation, NLG systems have been evaluated e.g. by using human judges to assess the quality of the texts produced (Coch, 1996; Lester and Porter, 1997; Harvey and Carberry, 1998); by comparing the system’s performance to that of humans (Yeh and Mellish, 1997); or through task efficacy measures, i.e., measuring how well the users so, and the distribution of topics and of evaluations is too broad to be telling. of the system perform on the task at hand (Young, 1999; Carenini and Moore, 2000; Reiter et al., 2003). The latter kind of studies generally contrast different interventions, i.e. a baseline that does not use NLG and one or more variations obtained by parameterizing the NLG system. However, the evaluation does not focus on a specific component of the NLG process, as we did here for aggregation. Regarding the role of NL interfaces for ITSs, only very recently have the first few results become available, to show that first of all, students do learn when interacting in NL with an ITS (Litman et al., 2004; Graesser et al., 2005). However, there are very few studies like ours, that evaluate specific features of the NL interaction, e.g. see (Litman et al., 2004). In our case, we did find that different features of the NL feedback impact learning. Although we contend that this effect is due to functional aggregation, the feedback in DIAG-NLP2 changed along other dimensions, mainly using referents of indicators instead of indicators, and being more strongly directive in suggesting what to do next. Of course, we cannot argue that our best NL generator is equivalent to a human tutor – e.g., dividing the number of ConsultRU and ConsultInd reported in Sec. 
2.2 by the number of dialogues shows that students ask about 10 ConsultRus and 1.5 ConsultInd per dialogue when interacting with a human, many fewer than those they pose to the ITSs (cf. Table 2) (regrettably we did not administer a PostTest to students in the human data collection). We further discuss the implications of our results for NL interfaces for ITSs in a companion paper (Di Eugenio et al., 2005). The DIAG project has come to a close. We are satisfied that we demonstrated that even not overly sophisticated NL feedback can make a difference; however, the fact that DIAG-NLP2 has the best language and engenders the most learning prompts us to explore more complex language interactions. We are pursuing new exciting directions in a new domain, that of basic data structures and algorithms. We are investigating what distinguishes expert from novice tutors, and we will implement our findings in an ITS that tutors in this domain. Acknowledgments. This work is supported by the Office of Naval Research (awards N00014-99-1-0930 and N00014-001-0640), and in part by the National Science Foundation (award IIS 0133123). We are grateful to CoGenTex Inc. for making EXEMPLARS and RealPro available to us. 56 References Giuseppe Carenini and Johanna D. Moore. 2000. An empirical study of the influence of argument conciseness on argument effectiveness. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, Hong Kong. Jos´e Coch. 1996. Evaluating and comparing three textproduction techniques. In COLING96, Proceedings of the Sixteenth International Conference on Computational Linguistics, pages 249–254 Hercules Dalianis. 1996. Concise Natural Language Generation from Formal Specifications. Ph.D. thesis, Department of Computer and Systems Science, Stocholm University. Technical Report 96-008. Barbara Di Eugenio, Michael Glass, and Michael J. Trolio. 2002. The DIAG experiments: Natural Language Generation for Intelligent Tutoring Systems. In INLG02, The Third International Natural Language Generation Conference, pages 120–127. Barbara Di Eugenio, Davide Fossati, Dan Yu, Susan Haller, and Michael Glass. 2005. Natural language generation for intelligent tutoring systems: a case study. In AIED 2005, the 12th International Conference on Artificial Intelligence in Education. M. W. Evens, J. Spitkovsky, P. Boyle, J. A. Michael, and A. A. Rovick. 1993. Synthesizing tutorial dialogues. In Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society, pages 137–140. Michael Glass, Heena Raval, Barbara Di Eugenio, and Maarika Traat. 2002. The DIAG-NLP dialogues: coding manual. Technical Report UIC-CS 02-03, University of Illinois - Chicago. A.C. Graesser, N. Person, Z. Lu, M.G. Jeon, and B. McDaniel. 2005. Learning while holding a conversation with a computer. In L. PytlikZillig, M. Bodvarsson, and R. Brunin, editors, Technology-based education: Bringing researchers and practitioners together. Information Age Publishing. Terrence Harvey and Sandra Carberry. 1998. Integrating text plans for conciseness and coherence. In ACL/COLING 98, Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, pages 512–518. Helmut Horacek. 2002. Aggregation with strong regularities and alternatives. In International Conference on Natural Language Generation. Xiaoron Huang and Armin Fiedler. 1996. Paraphrasing and aggregating argumentative text using text structure. 
In Proceedings of the 8th International Workshop on Natural Language Generation, pages 21–30. Rodger Kibble and Richard Power. 2000. Nominal generation in GNOME and ICONOCLAST. Technical report, Information Technology Research Institute, University of Brighton, Brighton, UK. Benoˆıt Lavoie and Owen Rambow. 1997. A fast and portable realizer for text generation systems. In Proceedings of the Fifth Conference on Applied Natural Language Processing. James C. Lester and Bruce W. Porter. 1997. Developing and empirically evaluating robust explanation generators: the KNIGHT experiments. Computational Linguistics, 23(1):65–102. D. J. Litman, C. P. Ros´e, K. Forbes-Riley, K. VanLehn, D. Bhembe, and S. Silliman. 2004. Spoken versus typed human and computer dialogue tutoring. In Proceedings of the Seventh International Conference on Intelligent Tutoring Systems, Maceio, Brazil. Mike Reape and Chris Mellish. 1998. Just what is aggregation anyway? In Proceedings of the European Workshop on Natural Language Generation. Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Studies in Natural Language Processing. Cambridge University Press. Ehud Reiter, Roma Robertson, and Liesl Osman. 2003. Lessons from a failure: Generating tailored smoking cessation letters. Artificial Intelligence, 144:41–58. S. C. Shapiro. 2000. SNePS: A logic for natural language understanding and commonsense reasoning. In L. M. Iwanska and S. C. Shapiro, editors, Natural Language Processing and Knowledge Representation. AAAI Press/MIT Press. James Shaw. 2002. A corpus-based analysis for the ordering of clause aggregation operators. In COLING02, Proceedings of the 19th International Conference on Computational Linguistics. Douglas M. Towne. 1997. Approximate reasoning techniques for intelligent diagnostic instruction. International Journal of Artificial Intelligence in Education. Michael White and Ted Caldwell. 1998. Exemplars: A practical, extensible framework for dynamic text generation. In Proceedings of the Ninth International Workshop on Natural Language Generation, pages 266–275, Niagara-on-the-Lake, Canada. Ching-Long Yeh and Chris Mellish. 1997. An empirical study on the generation of anaphora in Chinese. Computational Linguistics, 23(1):169–190. R. Michael Young. 1999. Using Grice’s maxim of quantity to select the content of plan descriptions. Artificial Intelligence, 115:215–256. 57 | 2005 | 7 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 565–572, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Instance-based Sentence Boundary Determination by Optimization for Natural Language Generation Shimei Pan and James C. Shaw IBM T. J. Watson Research Center 19 Skyline Drive Hawthorne, NY 10532, USA {shimei,shawjc}@us.ibm.com Abstract This paper describes a novel instancebased sentence boundary determination method for natural language generation that optimizes a set of criteria based on examples in a corpus. Compared to existing sentence boundary determination approaches, our work offers three significant contributions. First, our approach provides a general domain independent framework that effectively addresses sentence boundary determination by balancing a comprehensive set of sentence complexity and quality related constraints. Second, our approach can simulate the characteristics and the style of naturally occurring sentences in an application domain since our solutions are optimized based on their similarities to examples in a corpus. Third, our approach can adapt easily to suit a natural language generation system’s capability by balancing the strengths and weaknesses of its subcomponents (e.g. its aggregation and referring expression generation capability). Our final evaluation shows that the proposed method results in significantly better sentence generation outcomes than a widely adopted approach. 1 Introduction The problem of sentence boundary determination in natural language generation exists when more than one sentence is needed to convey multiple concepts and propositions. In the classic natural language generation (NLG) architecture (Reiter, 1994), sentence boundary decisions are made during the sentence planning stage in which the syntactic structure and wording of sentences are decided. Sentence boundary determination is a complex process that directly impacts a sentence’s readability (Gunning, 1952), its semantic cohesion, its syntactic and lexical realizability, and its smoothness between sentence transitions. Sentences that are too complex are hard to understand, so are sentences lacking semantic cohesion and cross-sentence coherence. Further more, bad sentence boundary decisions may even make sentences unrealizable. To design a sentence boundary determination method that addresses these issues, we employ an instance-based approach (Varges and Mellish, 2001; Pan and Shaw, 2004). Because we optimize our solutions based on examples in a corpus, the output sentences can demonstrate properties, such as similar sentence length distribution and semantic grouping similar to those in the corpus. Our approach also avoids problematic sentence boundaries by optimizing the solutions using all the instances in the corpus. By taking a sentence’s lexical and syntactic realizability into consideration, it can also avoid sentence realization failures caused by bad sentence boundary decisions. Moreover, since our solution can be adapted easily to suit the capability of a natural language generator, we can easily tune the algorithm to maximize the generation quality. To the best of our knowledge, there is no existing comprehensive solution that is domain-independent and possesses all the above qualities. In summary, our work offers three significant contributions: 1. 
It provides a general and flexible sentence 565 boundary determination framework which takes a comprehensive set of sentence complexity and quality related criteria into consideration and ensures that the proposed algorithm is sensitive to not only the complexity of the generated sentences, but also their semantic cohesion, multi-sentence coherence and syntactic and lexical realizability. 2. Since we employ an instance-based method, the proposed solution is sensitive to the style of the sentences in the application domain in which the corpus is collected. 3. Our approach can be adjusted easily to suit a sentence generation system’s capability and avoid some of its known weaknesses. Currently, our work is embodied in a multimodal conversation application in the real-estate domain in which potential home buyers interact with the system using multiple modalities, such as speech and gesture, to request residential real-estate information (Zhou and Pan, 2001; Zhou and Chen, 2003; Zhou and Aggarwal, 2004). After interpreting the request, the system formulates a multimedia presentation, including automatically generated speech and graphics, as the response (Zhou and Aggarwal, 2004). The proposed sentence boundary determination module takes a set of propositions selected by a content planner and passes the sentence boundary decisions to SEGUE (Pan and Shaw, 2004), an instance-based sentence generator, to formulate the final sentences. For example, our system is called upon to generate responses to a user’s request: “Tell me more about this house.” Even though not all of the main attributes of a house (more than 20) will be conveyed, it is clear that a good sentence boundary determination module can greatly ease the generation process and improve the quality of the output. In the rest of the paper, we start with a discussion of related work, and then describe our instance-base approach to sentence boundary determination. Finally, we present our evaluation results. 2 Related Work Existing approaches to sentence boundary determination typically employ one of the following strategies. The first strategy uses domain-specific heuristics to decide which propositions can be combined. For example, Proteus (Davey, 1979; Ritchie, 1984) produces game descriptions by employing domainspecific sentence scope heuristics. This approach can work well for a particular application, however, it is not readily reusable for new applications. The second strategy is to employ syntactic, lexical, and sentence complexity constraints to control the aggregation of multiple propositions (Robin, 1994; Shaw, 1998). These strategies can generate fluent complex sentences, but they do not take other criteria into consideration, such as semantic cohesion. Further more, since these approaches do not employ global optimization as we do, the content of each sentence might not be distributed evenly. This may cause dangling sentence problem (Wilkinson, 1995). Another strategy described in Mann and Moore(1981) guided the aggregation process by using an evaluation score that is sensitive to the structure and term usage of a sentence. Similar to our approach, they rely on search to find an optimal solution. The main difference between this approach and ours is that their evaluation score is computed based on preference heuristics. For example, all the semantic groups existing in a domain have to be coded specifically in order to handle semantic grouping. 
In contrast, in our framework, the score is computed based on a sentence’s similarity to corpus instances, which takes advantage of the naturally occurring semantic grouping in the corpus. Recently, Walker (2002) and Stent (2004) used statistical features derived from corpus to rank generated sentence plans. Because the plan ranker was trained with existing examples, it can choose a plan that is consistent with the examples. However, depending on the features used and the size of the training examples, it is unclear how well it can capture patterns like semantic grouping and avoid problems likes dangling sentences. 3 Examples Before we describe our approach in detail, we start with a few examples from the real-estate domain to demonstrate the properties of the proposed approach. First, sentence complexity impacts sentence boundary determination. As shown in Table 1, after receiving a user’s request (U1) for the details of a house, the content planner asked the sentence planner to describe the house with a set of attributes including its asking price, style, number of bedrooms, number of bathrooms, square footage, garage, lot size, property tax, and its associated town and school 566 Example Turn Sentence E1 U1 Tell me more about this house S1 This is a 1 million dollar 3 bedroom, 2 bathroom, 2000 square foot colonial with 2 acre of land, 2 car garage, annual taxes 8000 dollars in Armonk and in the Byram Hills school district. S2 This is a 1 million dollar house. This is a 3 bedroom house. This is a 2 bathroom house. This house has 2000 square feet. This house has 2 acres of land. This house has 2 car garage. This is a colonial house. The annual taxes are 8000 dollars. This house is in Armonk. This house is in the Byram Hills school district. S3 This is a 3 bedroom, 2 bathroom, 2000 square foot colonial located in Armonk with 2 acres of land. The asking price is 1 million dollar and the annual taxes are 8000 dollars. The house is located in the Byram Hills School District. E2 S4 This is a 1 million dollar 3 bedroom house. This is a 2 bathroom house with annual taxes of 8000 dollars. S5 This is a 3 bedroom and 2 bathroom house. Its price is 1 million dollar and its annual taxes are 8000 dollars. E3 S6 The tax rate of the house is 3 percent. S7 The house has an asphalt roof. E4 S8 This is a 3 bedroom, 2 bathroom colonial with 2000 square feet and 2 acres of land. S9 The house has 2 bedrooms and 3 bathrooms. This house is a colonial. It has 2000 square feet. The house is on 2 acres of land. Table 1: Examples district name. Without proper sentence boundary determination, a sentence planner may formulate a single sentence to convey all the information, as in S1. Even though S1 is grammatically correct, it is too complex and too exhausting to read. Similarly, output like S2, despite its grammatical correctness, is choppy and too tedious to read. In contrast, our instance-based sentence boundary determination module will use examples in a corpus to partition those attributes into several sentences in a more balanced manner (S3). Semantic cohesion also influences the quality of output sentences. For example, in the real-estate domain, the number of bedrooms and number of bathrooms are two closely related concepts. Based on our corpus, when both concepts appear, they almost always conveyed together in the same sentence. 
Given this, if the content planner wants to convey a house with the following attributes: price, number of bedrooms, number of bathrooms, and property tax, S4 is a less desirable solution than S5 because it splits these concepts into two separate sentences. Since we use instance-based sentence boundary determination, our method generates S5 to minimize the difference from the corpus instances. Sentence boundary placement is also sensitive to the syntactic and lexical realizability of grouped items. For example, if the sentence planner asks the surface realizer to convey two propositions S6 and S7 together in a sentence, a realization failure will be triggered because both S6 and S7 only exist in the corpus as independent sentences. Since neither of them can be transformed into a modifier based on the corpus, S6 and S7 cannot be aggregated in our system. Our method takes a sentence’s lexical and syntactic realizability into consideration in order to avoid making such aggregation request to the surface realizer in the first place. A generation system’s own capability may also influence sentence boundary determination. Good sentence boundary decisions will balance a system’s strengths and weaknesses. In contrast, bad decisions will expose a system’s venerability. For example, if a sentence generator is good at performing aggregations and weak on referring expressions, we may avoid incoherence between sentences by preferring aggregating more attributes in one sentence (like in S8) rather than by splitting them into multiple sentences (like in S9). In the following, we will demonstrate how our approach can achieve all the above goals in a unified instance-based framework. 4 Instance-based boundary determination Instance-based generation automatically creates sentences that are similar to those generated by humans, including their way of grouping semantic content, their wording and their style. Previously, Pan and Shaw (2004) have demonstrated that instancebased learning can be applied successfully in generating new sentences by piecing together existing words and segments in a corpus. Here, we want to demonstrate that by applying the same principle, we can make better sentence boundary decisions. 567 The key idea behind the new approach is to find a sentence boundary solution that minimizes the expected difference between the sentences resulting from these boundary decisions and the examples in the corpus. Here we measure the expected difference based a set of cost functions. 4.1 Optimization Criteria We use three sentence complexity and quality related cost functions as the optimization criteria: sentence boundary cost, insertion cost and deletion cost. Sentence boundary cost (SBC): Assuming P is a set of propositions to be conveyed and S is a collection of example sentences selected from the corpus to convey P. Then we say P can be realized by S with a sentence boundary cost that is equal to (|S| −1) ∗SBC in which |S| is the number of sentences and SBC is the sentence boundary cost. To use a specific example from the real-estate domain, the input P has three propositions: p1. House1 has-attr (style=colonial). p2. House1 has-attr(bedroom=3). p3. House1 has-attr(bathroom=2). One solution, S, contains 2 sentences: s1. This is a 3 bedroom, 2 bathroom house. s2. This is a colonial house. Since only one sentence boundary is involved, S is a solution containing one boundary cost. 
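A minimal sketch of this boundary-cost bookkeeping is given below; the list-of-sets encoding of a solution and the proposition labels are illustrative only (they are not the system's actual data structures), and the SBC value of 3 simply anticipates the tuned weight reported later in the evaluation.

# Sketch: the boundary-cost term (|S| - 1) * SBC for the example above.
SBC = 3  # assumed here; the evaluation section reports SBC = 3 as the tuned value

def boundary_cost(solution, sbc=SBC):
    """solution: a list of sentences, each sentence a set of proposition labels."""
    return (len(solution) - 1) * sbc

# Solution S above: s1 conveys p2 and p3, s2 conveys p1.
S = [{"p2_bedroom_3", "p3_bathroom_2"}, {"p1_style_colonial"}]
assert boundary_cost(S) == 1 * SBC  # one sentence break, hence one SBC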
In the above example, even though both s1 and s2 are grammatical sentences, the transition from s1 to s2 is not quite smooth. They sound choppy and disjointed. To penalize this, whenever there is a sentence break, there is a SBC. In general, the SBC is a parameter that is sensitive to a generation system’s capability such as its competence in reference expression generation. If a generation system does not have a robust approach for tracking the focus across sentences, it is likely to be weak in referring expression generation and adding sentence boundaries are likely to cause fluency problems. In contrast, if a generation system is very capable in maintaining the coherence between sentences, the proper sentence boundary cost would be lower. Insertion cost: Assume P is the set of propositions to be conveyed, and Ci is an instance in the corpus that can be used to realize P by inserting a missing proposition pj to Ci, then we say P can be realized using Ci with an insertion cost of icost(CH, pj), in which CH is the host sentence in the corpus containing proposition pj. Using an example from our real-estate domain, assume the input P=(p2, p3, p4), where p4. House1 has-attr (square footage=2000). Assume Ci is a sentence selected from the corpus to realize P: “This is 3 bedroom 2 bathroom house”. Since Ci does not contain p4, p4 needs to be added. We say that P can be realized using Ci by inserting a proposition p4 with an insertion cost of icost(CH, p4), in which CH is a sentence in the corpus such as “This is a house with 2000 square feet.” The insertion cost is influenced by two main factors: the syntactic and lexical insertability of the proposition pj and a system’s capability in aggregating propositions. For example, if in the corpus, the proposition pj is always realized as an independent sentence and never as a modifier, icost(∗, pj) should be extremely high, which effectively prohibit pj from becoming a part of another sentence. icost(∗, pj) is defined as the minimum insertion cost among all the icost(CH, pj). Currently icost(CH, pj) is computed dynamically based on properties of corpus instances. In addition, since whether a proposition is insertable depends on how capable an aggregation module can combine propositions correctly into a sentence, the insertion cost should be assigned high or low accordingly. Deletion cost: Assume P is a set of input propositions to be conveyed and Ci is an instance in the corpus that can be used to convey P by deleting an unneeded proposition pj in Ci. Then, we say P can be realized using Ci with a deletion cost dcost(Ci, pj). As a specific example, assuming the input is P=(p2, p3, p4), Ci is an instance in the corpus “This is a 3 bedroom, 2 bathroom, 2000 square foot colonial house.” In addition to the propositions p2, p3 and p4, Ci also conveys a proposition p1. Since p1 is not needed when conveying P, we say that P can be realized using Ci by deleting proposition p1 with a deletion cost of dcost(Ci, p1). The deletion cost is affected by two main factors as well: first the syntactic relation between pj and its host sentence. Given a new instance Ci, “This 2000 square foot 3 bedroom, 2 bathroom house is a colonial”, deleting p1, the main object 568 of the verb, will make the rest of the sentence incomplete. As a result, dcost(Ci, p1) is very expensive. In contrast, dcost(Ci, p4) is low because the resulting sentence is still grammatically sound. Currently dcost(Ci, pj) is computed dynamically based on properties of corpus instances. 
Second, the expected performance of a generation system in deletion also impacts the deletion cost. Depending on the sophistication of the generator to handle various deletion situations, the expected deletion cost can be high if the method employed is naive and error prone, or is low if the system can handle most cases accurately. Overall cost: Assume P is the set of propositions to be conveyed and S is the set of instances in the corpus that are chosen to realize P by applying a set of insertion, deletion and sentence breaking operations, the overall cost of the solution Cost(P) = Ci (Wi ∗ j icost(CHj, pj) +Wd ∗ k dcost(Ci, pk)) +(Nb −1) ∗SBC in which Wi, Wd and SBC are the insertion weight, deletion weight and sentence boundary cost; Nb is the number of sentences in the solution, Ci is a corpus instance been selected to construct the solution and CHj is the host sentence that proposition pj belongs. 4.2 Algorithm: Optimization based on overall cost We model the sentence boundary determination process as a branch and bound tree search problem. Before we explain the algorithm itself, first a few notations. The input P is a set of input propositions chosen by the content planner to be realized. Σ is the set of all possible propositions in an application domain. Each instance Ci in the corpus C is represented as a subset of Σ. Assume S is a solution to P, then it can be represented as the overall cost plus a list of pairs like (Cis, Ois), in which Cis is one of the instances selected to be used in that solution, Ois is a set of deletion, insertion operations that can be applied to Cis to transform it to a subsolution Si. To explain this representation further, we use a specific example in which P=(a, d, e, f), Σ=(a, b, c, d, e, f g, h, i). One of the boundary solution S can be represented as S = (Cost(S), (S1, S2)) S1 = (C1 = (a, b, d, i), delete(b, i)), S2 = (C2 = (e), insert(f as in C3 = (f, g))) Cost(S) = Wd ∗(dcost(C1, b) + dcost(C1, i)) + Wi ∗icost(C3, f) + 1 ∗SBC in which C1 and C2 are two corpus instances selected as the bases to formulate the solution and C3 is the host sentence containing proposition f. The general idea behind the instance-based branch and bound tree search algorithm is that given an input, P, for each corpus instance Ci, we construct a search branch, representing all possible ways to realize the input using the instance plus deletions, insertions and sentence breaks. Since each sentence break triggers a recursive call to our sentence boundary determination algorithm, the complexity of the algorithm is NP-hard. To speed up the process, for each iteration, we prune unproductive branches using an upper bound derived by several greedy algorithms. The details of our sentence boundary determination algorithm, sbd(P), are described below. P is the set of input propositions. 1. Set the current upper bound, UB, to the minimum cost of solutions derived by greedy algorithms, which we will describe later. This value is used to prune unneeded branches to make the search more efficient. 2. For each instance Ci in corpus C in which (Ci∩ P) ̸= ∅, loop from step 3 to 9. The goal here is to identify all the useful corpus instances for realizing P. 3. Delete all the propositions pj ∈D in which D = Ci −P (D contains propositions in Ci but not exist in P) with cost Costd(P) = Wd ∗ Pj∈D dcost(Ci, pj). This step computes the deletion operators and their associated costs. 4. Let I = P −Ci (I contains propositions in P but not in Ci). 
For each subset Ej ⊆I (Ej includes ∅and I itself), iterate through step 5 to 9. These steps figure out all the possible ways to add the missing propositions, including inserting into the instance Ci and separating the rest as independent sentence(s). 569 5. Generate a solution in which ∀pk ∈Ej, insert pk to Ci. All the propositions in Q = I −Ej will be realized in different sentences, thus incurring a SBC. 6. We update the cost Cost(P) to Costd(P) + Wi ∗ pk∈Ej icost(∗, pk)+ SBC + Cost(Q) in which Cost(Q) is the cost of sbd(Q) which recursively computes the best solution for input Q and Q ⊂P. To facilitate dynamic programming, we remember the best solution for Q derived by sbd(Q) in case Q is used to formulate other solutions. 7. If the lower bound for Cost(P) is greater than the established upper bound UB, prune this branch. 8. Using the notation described in the beginning of Sec. 4.2, we update the current solution to sbd(P) = (Cost(P), (Ci, delete∀pj∈D(pj), insert∀pk∈Ej(pk))) sbd(Q) in which is an operator that composes two partial solutions. 9. If sbd(P) is a complete solution (either Q is empty or have a known best solution) and Cost(P) < UB, update the upper bound UB = Cost(P). 10. Output the solution with the lowest overall cost. To establish the initial UB for pruning, we use the minimum of the following three bounds. In general, the tighter the UB is, the more effective the pruning is. Greedy set partition: we employ a greedy set partition algorithm in which we first match the set S ⊂P with the largest |S|. Repeat the same process for P ′ where P ′ = P −S. The solution cost is Cost(P) = (N −1) ∗SBC, and N is the number of sentences in the solution. The complexity of this computation is O(|P|), where |P| is the number of propositions in P. Revised minimum set covering: we employ a greedy minimum set covering algorithm in which we first find the set S in the corpus that maximizes the overlapping of propositions in the input P. The unwanted propositions in S −P are deleted. Assume P ′ = P −S, repeat the same process to P′ until P ′ is empty. The only difference between this and the previous approach is that S here might not be a subset of P. The complexity of this computation is O(|P|). One maximum overlapping sentence: we first identify the instance Ci in corpus that covers the maximum number of propositions in P. To arrive at a solution for P, the rest of the propositions not covered by Ci are inserted into Ci and all the unwanted propositions in Ci are deleted. The cost of this solution is Wd ∗ pj∈D dcost(Ci, pj) + Wi ∗ pk∈I icost(∗, pk) in which D includes proposition in Ci but not in P, and I includes propositions in P but not in Ci. Currently, we update UB only after a complete solution is found. It is possible to derive better UB by establishing the upper bound for each partial solution, but the computational overhead might not justify doing so. 4.3 Approximation Algorithm Even with pruning and dynamic programming, the exact solution still is very expensive computationally. Computing exact solution for an input size of 12 propositions has over 1.6 millions states and takes more than 30 minutes (see Figure 1). To make the search more efficient for tasks with a large number of propositions in the input, we naturally seek a greedy strategy in which at every iteration the algorithm myopically chooses the next best step without regard for its implications on future moves. 
One greedy search policy we implemented explores the branch that uses the instance with maximum overlapping propositions with the input and ignores all branches exploring other corpus instances. The intuition behind this policy is that the more overlap an instance has with the input, the less insertions or sentence breaks are needed. Figure 1 and Figure 2 demonstrate the tradeoff between computation efficiency and accuracy. In this graph, we use instances from the realestate corpus with size 250, we vary the input sentence length from one to twenty and the results shown in the graphs are average value over several typical weight configurations ((Wd,Wi,SBC)= 570 (1,3,5),(1,3,7),(1,5,3),(1,7,3),(1,1,1)). Figure 2 compares the quality of the solutions when using exact solutions versus approximation. In our interactive multimedia system, we currently use exact solution for input size of 7 propositions or less and switch to greedy for any larger input size to ensure sub-second performance for the NLG component. 0 20 40 60 80 100 120 140 160 180 200 2 4 6 8 9 10 12 14 16 18 20 # of Propositions in Input Execution Time (Seconds) Greedy Exact Figure 1: Speed difference between exact solutions and approximations 0 2 4 6 8 10 12 14 16 18 20 2 4 6 8 9 10 12 14 16 18 20 # of Proposition in Input Cost Greedy Exact Figure 2: Cost difference between exact solutions and approximations Measures Ours B-3 B-6 Dangling sentence (7) 0 100% 100% Split Semantic Group 1% 61% 21% Realization Failure 0 56% 72% Fluency 59% 4% 8% Table 2: Comparisons 5 Evaluations To evaluate the quality of our sentence boundary decisions, we implemented a baseline system in which boundary determination of the aggregation module is based on a threshold of the maximum number of propositions allowed in a sentence (a simplified version of the second strategy in Section 2. We have tested two threshold values, the average (3) and maximum (6) number of propositions among corpus instances. Other sentence complexity measures, such as the number of words and depth of embedding are not easily applicable for our comparison because they require the propositions to be realized first before the boundary decisions can be made. We tune the relative weight of our approach to best fit our system’s capability. Currently, the weights are empirically established to Wd = 1, Wi = 3 and SBC = 3. Based on the output generated from both systems, we derive four evaluation metrics: 1. Dangling sentences: We define dangling sentences as the short sentences with only one proposition that follow long sentences. This measure is used to verify our claim that because we use global instead of local optimization, we can avoid generating dangling sentences by making more balanced sentence boundary decisions. In contrast, the baseline approaches have dangling sentence problem when the input proposition is 1 over the multiple of the threshold values. The first row of Table 2 shows that when the input proposition length is set to 7, a pathological case, among the 200 input proposition sets randomly generated, the baseline approach always produce dangling sentences (100%). In contrast, our approach always generates more balanced sentences (0%). 2. Semantic group splitting. Since we use an instance-based approach, we can maintain the semantic cohesion better. To test this, we randomly generated 200 inputs with up to 10 propositions containing semantic grouping of both the number of bedrooms and number of bathrooms. 
The second row, Split Semantic Group, in Table 2 shows that our algorithm can maintain semantic group much better than the baseline approach. Only in 1% of the output sentences, our algorithm generated number of bedrooms and number of bathrooms in separate sentences. In contrast, the baseline approaches did much worse (61% and 21%). 3. Sentence realization failure. This measure is used to verify that since we also take a sentence’s lexical and syntactical realizability into consideration, our sentence boundary decisions will result in less sentence realization failures. 571 An realization failure occurs when the aggregation module failed to realize one sentence for all the propositions grouped by the sentence boundary determination module. The third row in Table 2, Realization Failure, indicates that given 200 randomly generated input proposition sets with length from 1 to 10, how many realization happened in the output. Our approach did not have any realization failure while for the baseline approaches, there are 56% and 72% outputs have one or more realization failures. 4. Fluency. This measure is used to verify our claim that since we also optimize our solutions based on boundary cost, we can reduce incoherence across multiple sentences. Given 200 randomly generated input propositions with length from 1 to 10, we did a blind test and presented pairs of generated sentences to two human subjects randomly and asked them to rate which output is more coherent. The last row, Fluency, in Table 2 shows how often the human subjects believe that a particular algorithm generated better sentences. The output of our algorithm is preferred for more than 59% of the cases, while the baseline approaches are preferred 4% and 8%, respectively. The other percentages not accounted for are cases where the human subject felt there is no significant difference in fluency between the two given choices. The result from this evaluation clearly demonstrates the superiority of our approach in generating coherent sentences. 6 Conclusion In the paper, we proposed a novel domain independent instance-based sentence boundary determination algorithm that is capable of balancing a comprehensive set of generation capability, sentence complexity, and quality related constraints. This is the first domain-independent algorithm that possesses many desirable properties, including balancing a system’s generation capabilities, maintaining semantic cohesion and cross sentence coherence, and preventing severe syntactic and lexical realization failures. Our evaluation results also demonstrate the superiority of the approach over a representative domain independent sentence boundary solution. References Anthony C. Davey. 1979. Discourse Production. Edinburgh University Press, Edinburgh. Robert Gunning. 1952. The Technique of Clear Writing. McGraw-Hill. William C. Mann and James A. Moore. 1981. Computer generation of multiparagraph English text. American Journal of Computational Linguistics, 7(1):17–29. Shimei Pan and James Shaw. 2004. SEGUE: A hybrid case-based surface natural language generator. In Proc. of ICNLG, Brockenhurst, U.K. Ehud Reiter. 1994. Has a consensus NL generation architecture appeared, and is it psycholinguistically plausible? In Proc. of INLG, Kennebunkport, Maine. Graeme D. Ritchie. 1984. A rational reconstruction of the Proteus sentence planner. In Proc. of the COLING and the ACL, Stanford, CA. Jacques Robin. 1994. Automatic generation and revision of natural language summaries providing historical background. 
In Proc. of the Brazilian Symposium on Artificial Intelligence, Fortaleza, CE, Brazil. James Shaw. 1998. Segregatory coordination and ellipsis in text generation. In Proc. of the COLING and the ACL., Montreal, Canada. Amanda Stent, Rashmi Prasad, and Marilyn Walker. 2004. Trainable sentence planning for complex information presentation in spoken dialog systems. In Proc. of the ACL, Barcelona, Spain. Sebastian Varges and Chris Mellish. 2001. Instancebased natural language generation. In Proc. of the NAACL, Pittsburgh, PA. Marilyn Walker, Owen Rambow, and Monica Rogati. 2002. Training a sentence planner for spoken dialogue using boosting. Computer Speech and Language. John Wilkinson. 1995. Aggregation in natural language generation: Another look. Co-op work term report, Dept. of Computer Science, University of Waterloo. Michelle Zhou and Vikram Aggarwal. 2004. An optimization-based approach to dynamic data content selection in intelligent multimedia interfaces. In Proc. of the UIST, Santa Fe, NM. Michelle X. Zhou and Min Chen. 2003. Automated generation of graphic sketches by example. In IJCAI, Acapulco, Mexico. Michelle X. Zhou and Shimei Pan. 2001. Automated authoring of coherent multimedia discourse in conversation systems. In ACM Multimedia, Ottawa, Canada. 572 | 2005 | 70 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 573–580, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Arabic Tokenization, Part-of-Speech Tagging and Morphological Disambiguation in One Fell Swoop Nizar Habash and Owen Rambow Center for Computational Learning Systems Columbia University New York, NY 10115, USA {habash,rambow}@cs.columbia.edu Abstract We present an approach to using a morphological analyzer for tokenizing and morphologically tagging (including partof-speech tagging) Arabic words in one process. We learn classifiers for individual morphological features, as well as ways of using these classifiers to choose among entries from the output of the analyzer. We obtain accuracy rates on all tasks in the high nineties. 1 Introduction Arabic is a morphologically complex language.1 The morphological analysis of a word consists of determining the values of a large number of (orthogonal) features, such as basic part-of-speech (i.e., noun, verb, and so on), voice, gender, number, information about the clitics, and so on.2 For Arabic, this gives us about 333,000 theoretically possible completely specified morphological analyses, i.e., morphological tags, of which about 2,200 are actually used in the first 280,000 words of the Penn Arabic Treebank (ATB). In contrast, English morphological tagsets usually have about 50 tags, which cover all morphological variation. As a consequence, morphological disambiguation of a word in context, i.e., choosing a complete 1We would like to thank Mona Diab for helpful discussions. The work reported in this paper was supported by NSF Award 0329163. The authors are listed in alphabetical order. 2In this paper, we only discuss inflectional morphology. Thus, the fact that the stem is composed of a root, a pattern, and an infix vocalism is not relevant except as it affects broken plurals and verb aspect. morphological tag, cannot be done successfully using methods developed for English because of data sparseness. Hajiˇc (2000) demonstrates convincingly that morphological disambiguation can be aided by a morphological analyzer, which, given a word without any context, gives us the set of all possible morphological tags. The only work on Arabic tagging that uses a corpus for training and evaluation (that we are aware of), (Diab et al., 2004), does not use a morphological analyzer. In this paper, we show that the use of a morphological analyzer outperforms other tagging methods for Arabic; to our knowledge, we present the best-performing wide-coverage tokenizer on naturally occurring input and the bestperforming morphological tagger for Arabic. 2 General Approach Arabic words are often ambiguous in their morphological analysis. This is due to Arabic’s rich system of affixation and clitics and the omission of disambiguating short vowels and other orthographic diacritics in standard orthography (“undiacritized orthography”). On average, a word form in the ATB has about 2 morphological analyses. An example of a word with some of its possible analyses is shown in Figure 1. Analyses 1 and 4 are both nouns. They differ in that the first noun has no affixes, while the second noun has a conjunction prefix (+ +w ‘and’) and a pronominal possessive suffix ( + +y ‘my’). In our approach, tokenizing and morphologically tagging (including part-of-speech tagging) are the same operation, which consists of three phases. First, we obtain from our morphological analyzer a list of all possible analyses for the words of a given sentence. 
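As a hypothetical illustration of what this first phase delivers, each word can be pictured as carrying a list of candidate feature bundles of the kind shown in Figure 1; the Python encoding below is ours and merely mirrors the figure's columns, not the analyzer's actual output format.

# Sketch: candidate analyses for the word wAly, following Figure 1.
# Field names copy the figure's column headers; values are taken from its rows.
analyses_for_wAly = [
    {"lexeme": "wAliy", "gloss": "ruler", "POS": "N", "Conj": "NO", "Part": "NO",
     "Pron": "NO", "Det": "NO", "Gen": "masc", "Num": "sg", "Per": "3",
     "Voice": "NA", "Asp": "NA"},
    {"lexeme": "waliy", "gloss": "and I follow", "POS": "V", "Conj": "YES", "Part": "NO",
     "Pron": "NO", "Det": "NA", "Gen": "neut", "Num": "sg", "Per": "1",
     "Voice": "act", "Asp": "imp"},
    # ... the remaining analyses of Figure 1 are omitted here
]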
We discuss the data and our lexicon in 573 # lexeme gloss POS Conj Part Pron Det Gen Num Per Voice Asp 1 wAliy ruler N NO NO NO NO masc sg 3 NA NA 2 <ilaY and to me P YES NO YES NA NA NA NA NA NA 3 waliy and I follow V YES NO NO NA neut sg 1 act imp 4 |l and my clan N YES NO YES NO masc sg 3 NA NA 5 |liy˜ and automatic AJ YES NO NO NO masc sg 3 NA NA Figure 1: Possible analyses for the word wAly more detail in Section 4. Second, we apply classifiers for ten morphological features to the words of the text. The full list of features is shown in Figure 2, which also identifies possible values and which word classes (POS) can express these features. We discuss the training and decoding of these classifiers in Section 5. Third, we choose among the analyses returned by the morphological analyzer by using the output of the classifiers. This is a non-trivial task, as the classifiers may not fully disambiguate the options, or they may be contradictory, with none of them fully matching any one choice. We investigate different ways of making this choice in Section 6. As a result of this process, we have the original text, with each word augmented with values for all the features in Figure 2. These values represent a complete morphological disambiguation. Furthermore, these features contain enough information about the presence of clitics and affixes to perform tokenization, for any reasonable tokenization scheme. Finally, we can determine the POS tag, for any morphologically motivated POS tagset. Thus, we have performed tokenization, traditional POS tagging, and full morphological disambiguation in one fell swoop. 3 Related Work Our work is inspired by Hajiˇc (2000), who convincingly shows that for five Eastern European languages with complex inflection plus English, using a morphological analyzer3 improves performance of a tagger. He concludes that for highly inflectional languages “the use of an independent morpholog3Hajiˇc uses a lookup table, which he calls a “dictionary”. The distinction between table-lookup and actual processing at run-time is irrelevant for us. ical dictionary is the preferred choice [over] more annotated data”. Hajiˇc (2000) uses a general exponential model to predict each morphological feature separately (such as the ones we have listed in Figure 2), but he trains different models for each ambiguity left unresolved by the morphological analyzer, rather than training general models. For all languages, the use of a morphological analyzer results in tagging error reductions of at least 50%. We depart from Hajiˇc’s work in several respects. First, we work on Arabic. Second, we use this approach to also perform tokenization. Third, we use the SVM-based Yamcha (which uses Viterbi decoding) rather than an exponential model; however, we do not consider this difference crucial and do not contrast our learner with others in this paper. Fourth, and perhaps most importantly, we do not use the notion of ambiguity class in the feature classifiers; instead we investigate different ways of using the results of the individual feature classifiers in directly choosing among the options produced for the word by the morphological analyzer. While there have been many publications on computational morphological analysis for Arabic (see (Al-Sughaiyer and Al-Kharashi, 2004) for an excellent overview), to our knowledge only Diab et al. (2004) perform a large-scale corpus-based evaluation of their approach. 
They use the same SVMbased learner we do, Yamcha, for three different tagging tasks: word tokenization (tagging on letters of a word), which we contrast with our work in Section 7; POS tagging, which we discuss in relation to our work in Section 8; and base phrase chunking, which we do not discuss in this paper. We take the comparison between our results on POS tagging and those of Diab et al. (2004) to indicate that the use of a morphological analyzer is beneficial for Arabic as 574 Feature Description Possible Values POS that Default Name Carry Feature POS Basic part-of-speech See Footnote 9 all X Conj Is there a cliticized conjunction? YES, NO all NO Part Is there a cliticized particle? YES, NO all NO Pron Is there a pronominal clitic? YES, NO V, N, PN, AJ, P, Q NO Det Is there a cliticized definite determiner + Al+? YES, NO N, PN, AJ NO Gen Gender (intrinsic or by agreement) masc(uline), fem(inine), neut(er) V, N, PN, AJ, PRO, REL, D masc Num Number sg (singular), du(al), pl(ural) V, N, PN, AJ, PRO, REL, D sg Per Person 1, 2, 3 V, N, PN, PRO 3 Voice Voice act(ive), pass(ive) V act Asp Aspect imp(erfective), perf(ective), imperative V perf Figure 2: Complete list of morphological features expressed by Arabic morphemes that we tag; the last column shows on which parts-of-speech this feature can be expressed; the value ‘NA’ is used for each feature other than POS, Conj, and Part if the word is not of the appropriate POS well. Several other publications deal specifically with segmentation. Lee et al. (2003) use a corpus of manually segmented words, which appears to be a subset of the first release of the ATB (110,000 words), and thus comparable to our training corpus. They obtain a list of prefixes and suffixes from this corpus, which is apparently augmented by a manually derived list of other affixes. Unfortunately, the full segmentation criteria are not given. Then a trigram model is learned from the segmented training corpus, and this is used to choose among competing segmentations for words in running text. In addition, a huge unannotated corpus (155 million words) is used to iteratively learn additional stems. Lee et al. (2003) show that the unsupervised use of the large corpus for stem identification increases accuracy. Overall, their error rates are higher than ours (2.9% vs. 0.7%), presumably because they do not use a morphological analyzer. There has been a fair amount of work on entirely unsupervised segmentation. Among this literature, Rogati et al. (2003) investigate unsupervised learning of stemming (a variant of tokenization in which only the stem is retained) using Arabic as the example language. Unsurprisingly, the results are much worse than in our resource-rich approach. Darwish (2003) discusses unsupervised identification of roots; as mentioned above, we leave root identification to future work. 4 Preparing the Data The data we use comes from the Penn Arabic Treebank (Maamouri et al., 2004). Like the English Penn Treebank, the corpus is a collection of news texts. Unlike the English Penn Treebank, the ATB is an ongoing effort, which is being released incrementally. As can be expected in this situation, the annotation has changed in subtle ways between the incremental releases. Even within one release (especially the first) there can be inconsistencies in the annotation. As our approach builds on linguistic knowledge, we need to carefully study how linguistic facts are represented in the ATB. 
In this section, we briefly summarize how we obtained the data in the representation we use for our machine learning experiments.4 We use the first two releases of the ATB, ATB1 and ATB2, which are drawn from different news sources. We divided both ATB1 and ATB2 into de4The code used to obtain the representations is available from the authors upon request. 575 velopment, training, and test corpora with roughly 12,000 word tokens in each of the development and test corpora, and 120,000 words in each of the training corpora. We will refer to the training corpora as TR1 and TR2, and to the test corpora as, TE1 and TE2. We report results on both TE1 and TE2 because of the differences in the two parts of the ATB, both in terms of origin and in terms of data preparation. We use the ALMORGEANA morphological analyzer (Habash, 2005), a lexeme-based morphological generator and analyzer for Arabic.5 A sample output of the morphological analyzer is shown in Figure 1. ALMORGEANA uses the databases (i.e., lexicon) from the Buckwalter Arabic Morphological Analyzer, but (in analysis mode) produces an output in the lexeme-and-feature format (which we need for our approach) rather than the stem-and-affix format of the Buckwalter analyzer. We use the data from first version of the Buckwalter analyzer (Buckwalter, 2002). The first version is fully consistent with neither ATB1 nor ATB2. Our training data consists of a set of all possible morphological analyses for each word, with the unique correct analysis marked. Since we want to learn to choose the correct output using the features generated by ALMORGEANA, the training data must also be in the ALMORGEANA output format. To obtain this data, we needed to match data in the ATB to the lexeme-and-feature representation output by ALMORGEANA. The matching included the use of some heuristics, since the representations and choices are not always consistent in the ATB. For example, nHw ‘towards’ is tagged as AV, N, or V (in the same syntactic contexts). We verified whether we introduced new errors while creating our data representation by manually inspecting 400 words chosen at random from TR1 and TR2. In eight cases, our POS tag differed from that in the ATB file; all but one case were plausible changes among Noun, Adjective, Adverb and Proper Noun resulting from missing entries in the Buckwalter’s lexicon. The remaining case was a failure in the conversion process relating to the handling of broken plurals at the lexeme level. We conclude that 5The ALMORGEANA engine is available at http://clipdemos.umiacs.umd.edu/ALMORGEANA/. our data representation provides an adequate basis for performing machine learning experiments. An important issue in using morphological analyzers for morphological disambiguation is what happens to unanalyzed words, i.e., words that receive no analysis from the morphological analyzer. These are frequently proper nouns; a typical example is
brlwskwny ‘Berlusconi’, for which no entry exists in the Buckwalter lexicon. A backoff analysis mode in ALMORGEANA uses the morphological databases of prefixes, suffixes, and allowable combinations from the Buckwalter analyzer to hypothesize all possible stems along with feature sets. Our Berlusconi example yields 41 possible analyses, including the correct one (as a singular masculine PN). Thus, with the backoff analysis, unanalyzed words are distinguished for us only by the larger number of possible analyses (making it harder to choose the correct analysis). There are not many unanalyzed words in our corpus. In TR1, there are only 22 such words, presumably because the Buckwalter lexicon our morphological analyzer uses was developed onTR1. In TR2, we have 737 words without analysis (0.61% of the entire corpus, giving us a coverage of about 99.4% on domainsimilar text for the Buckwalter lexicon). In ATB1, and to a lesser degree in ATB2, some words have been given no morphological analysis. (These cases are not necessarily the same words that our morphological analyzer cannot analyze.) The POS tag assigned to these words is then NO FUNC. In TR1 (138,756 words), we have 3,088 NO FUNC POS labels (2.2%). In TR2 (168,296 words), the number of NO FUNC labels has been reduced to 853 (0.5%). Since for these cases, there is no meaningful solution in the data, we have removed them from the evaluation (but not from training). In contrast, Diab et al. (2004) treat NO FUNC like any other POS tag, but it is unclear whether this is meaningful. Thus, when comparing results from different approaches which make different choices about the data (for example, the NO FUNC cases), one should bear in mind that small differences in performance are probably not meaningful. 576 5 Classifiers for Linguistic Features We now describe how we train classifiers for the morphological features in Figure 2. We train one classifier per feature. We use Yamcha (Kudo and Matsumoto, 2003), an implementation of support vector machines which includes Viterbi decoding.6 As training features, we use two sets. These sets are based on the ten morphological features in Figure 2, plus four other “hidden” morphological features, for which we do not train classifiers, but which are represented in the analyses returned by the morphological analyzer. The reason we do not train classifiers for the hidden features is that they are only returned by the morphological analyzer when they are marked overtly in orthography, but they are not disambiguated in case they are not overtly marked. The features are indefiniteness (presence of nunation), idafa (possessed), case, and mood. First, for each of the 14 morphological features and for each possible value (including ‘NA’ if applicable), we define a binary machine learning feature which states whether in any morphological analysis for that word, the feature has that value. This gives us 58 machine learning features per word. In addition, we define a second set of features which abstracts over the first set: for all features, we state whether any morphological analysis for that word has a value other than ‘NA’. This yields a further 11 machine learning features (as 3 morphological features never have the value ‘NA’). In addition, we use the untokenized word form and a binary feature stating whether there is an analysis or not. This gives us a total of 71 machine learning features per word. 
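A sketch of this feature construction is given below; the feature/value inventory shown is a truncated stand-in for the full 14-feature set (58 feature-value pairs), and the function is illustrative rather than the actual implementation.

# Sketch: building the 71 per-word machine-learning features described above.
# FEATURE_VALUES is a truncated stand-in for the 14 morphological features
# and their possible values (58 feature/value pairs in the real inventory).
FEATURE_VALUES = {
    "POS": ["N", "V", "AJ", "P"],          # truncated; the full POS set is larger
    "Conj": ["YES", "NO"],
    "Gen": ["masc", "fem", "neut", "NA"],
    # ... remaining features omitted
}

def word_features(word_form, analyses):
    feats = {}
    # Set 1: does ANY candidate analysis assign this value to this feature?
    for feat, values in FEATURE_VALUES.items():
        for val in values:
            feats[f"{feat}={val}"] = any(a.get(feat) == val for a in analyses)
    # Set 2: does ANY candidate analysis give the feature a non-'NA' value?
    # (The paper skips the 3 features that can never be 'NA'.)
    for feat in FEATURE_VALUES:
        feats[f"{feat}!=NA"] = any(a.get(feat, "NA") != "NA" for a in analyses)
    # Plus the untokenized word form and an analyzed/unanalyzed flag: 71 in all.
    feats["word"] = word_form
    feats["has_analysis"] = bool(analyses)
    return feats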
We specify a window of two words preceding and following the current word, using all 71 features for each word in this 5-word window. In addition, two dynamic features are used, namely the classification made for the preceding two words. For each of the ten classifiers, Yamcha then returns a confidence value for each possible value of the classifier, and in addition it marks the value that is chosen during subsequent Viterbi decoding (which need not be the value with the highest confidence value because of the inclusion of dynamic features). We train on TR1 and report the results for the ten 6We use Yamcha’s default settings: standard SVM with 2nd degree polynomial kernel and 1 slack variable. Method BL Class BL Class Test TE1 TE1 TE2 TE2 POS 96.6 97.7 91.1 95.5 Conj 99.9 99.9 99.7 99.9 Part 99.9 99.9 99.5 99.7 Pron 99.5 99.6 98.8 99.0 Det 98.8 99.2 96.8 98.3 Gen 98.6 99.2 95.8 98.2 Num 98.8 99.4 96.8 98.8 Per 97.6 98.7 94.8 98.1 Voice 98.8 99.3 97.5 99.0 Asp 98.8 99.4 97.4 99.1 Figure 3: Accuracy of classifiers (Class) for morphological features trained on TR1, and evaluated on TE1 and TE2; BL is the unigram baseline trained on TR1 Yamcha classifiers on TE1 and TE2, using all simple tokens,7 including punctuation, in Figure 3. The baseline BL is the most common value associated in the training corpus TR1 with every feature for a given word form (unigram). We see that the baseline for TE1 is quite high, which we assume is due to the fact that when there is ambiguity, often one interpretation is much more prevelant than the others. The error rates on the baseline approximately double on TE2, reflecting the difference between TE2 and TR1, and the small size of TR1. The performance of our classifiers is good on TE1 (third column), and only slightly worse on TE2 (fifth column). We attribute the increase in error reduction over the baseline for TE2 to successfully learned generalizations. We investigated the performance of the classifiers on unanalyzed words. The performance is generally below the baseline BL. We attribute this to the almost complete absence of unanalyzed words in training data TR1. In future work we could attempt to improve performance in these cases; however, given their small number, this does not seem a priority. 7We use the term orthographic token to designate tokens determined only by white space, while simple tokens are orthographic tokens from which punctuation has been segmented (becoming its own token), and from which all tatweels (the elongation character) have been removed. 577 6 Choosing an Analysis Once we have the results from the classifiers for the ten morphological features, we combine them to choose an analysis from among those returned by the morphological analyzer. We investigate several options for how to do this combination. In the following, we use two numbers for each analysis. First, the agreement is the number of classifiers agreeing with the analysis. Second, the weighted agreement is the sum, over all classifiers, of the classification confidence measure of that value that agrees with the analysis. The agreement, but not the weighted agreement, uses Yamcha’s Viterbi decoding. • The majority combiner (Maj) chooses the analysis with the largest agreement. • The confidence-based combiner (Con) chooses the analysis with the largest weighted agreement. • The additive combiner (Add) chooses the analysis with the largest sum of agreement and weighted agreement. 
• The multiplicative combiner (Mul) chooses the analysis with the largest product of agreement and weighted agreement. • We use Ripper (Cohen, 1996) to learn a rulebased classifier (Rip) to determine whether an analysis from the morphological analyzer is a “good” or a “bad” analysis. We use the following features for training: for each morphological feature in Figure 2, we state whether or not the value chosen by its classifier agrees with the analysis, and with what confidence level. In addition, we use the word form. (The reason we use Ripper here is because it allows us to learn lower bounds for the confidence score features, which are real-valued.) In training, only the correct analysis is good. If exactly one analysis is classified as good, we choose that, otherwise we use Maj to choose. • The baseline (BL) chooses the analysis most commonly assigned in TR1 to the word in question. For unseen words, the choice is made randomly. In all cases, any remaining ties are resolved randomly. We present the performance in Figure 4. We see that the best performing combination algorithm on TE1 is Maj, and on TE2 it is Rip. Recall that the Yamcha classifiers are trained on TR1; in addition, Rip is trained on the output of these Yamcha clasCorpus TE1 TE2 Method All Words All Words BL 92.1 90.2 87.3 85.3 Maj 96.6 95.8 94.1 93.2 Con 89.9 87.6 88.9 87.2 Add 91.6 89.7 90.7 89.2 Mul 96.5 95.6 94.3 93.4 Rip 96.2 95.3 94.8 94.0 Figure 4: Results (percent accuracy) on choosing the correct analysis, measured per token (including and excluding punctuation and numbers); BL is the baseline sifiers on TR2. The difference in performance between TE1 and TE2 shows the difference between the ATB1 and ATB2 (different source of news, and also small differences in annotation). However, the results for Rip show that retraining the Rip classifier on a new corpus can improve the results, without the need for retraining all ten Yamcha classifiers (which takes considerable time). Figure 4 presents the accuracy of tagging using the whole complex morphological tagset. We can project this complex tagset to a simpler tagset, for example, POS. Then the minimum tagging accuracy for the simpler tagset must be greater than or equal to the accuracy of the complex morphological tagset. Even if a combining algorithm chooses the wrong analysis (and this is counted as a failure for the evaluation in this section), the chosen analysis may agree with some of the correct morphological features. We discuss our performance on the POS feature in Section 8. 7 Evaluating Tokenization The term “tokenization” refers to the segmenting of a naturally occurring input sequence of orthographic symbols into elementary symbols (“tokens”) used in subsequent processing steps (such as parsing) as basic units. In our approach, we determine all morphological properties of a word at once, so we can use this information to determine tokenization. There is not a single possible or obvious tokenization scheme: a tokenization scheme is an analytical tool devised by the researcher. We evaluate in this section how well our morphological disambiguation 578 Word Token Token Token Token Meth. Acc. Acc. Prec. Rec. F-m. BL 99.1 99.6 98.6 99.1 98.8 Maj 99.3 99.6 98.9 99.3 99.1 Figure 5: Results of tokenization on TE1: word accuracy measures for each input word whether it gets tokenized correctly, independently of the number of resulting tokens; the token-based measures refer to the four token fields into which the ATB splits each word determines the ATB tokenization. 
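For concreteness, the agreement-based combiners of Section 6 (Maj, Con, Add, and Mul) can be sketched as follows. The representation of the classifier output, a chosen value plus per-value confidence scores for every morphological feature, is an assumption made for illustration and is not Yamcha's actual output format.

```python
import random

def agreement_scores(analysis, chosen, confidences):
    """chosen: feature -> value picked by that feature's classifier (after Viterbi);
    confidences: feature -> {value: confidence} from the same classifiers."""
    agreement = sum(1 for f, v in chosen.items() if analysis.get(f) == v)
    weighted = sum(conf.get(analysis.get(f), 0.0) for f, conf in confidences.items())
    return agreement, weighted

def combine(analyses, chosen, confidences, method="Maj"):
    scored = []
    for a in analyses:
        agr, wagr = agreement_scores(a, chosen, confidences)
        key = {"Maj": agr, "Con": wagr, "Add": agr + wagr, "Mul": agr * wagr}[method]
        scored.append((key, a))
    best = max(k for k, _ in scored)
    return random.choice([a for k, a in scored if k == best])  # remaining ties resolved randomly

chosen = {"POS": "N", "Num": "sg"}
confidences = {"POS": {"N": 0.7, "V": 0.3}, "Num": {"sg": 0.6, "pl": 0.4}}
analyses = [{"POS": "N", "Num": "sg"}, {"POS": "V", "Num": "pl"}]
print(combine(analyses, chosen, confidences, method="Mul"))
```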
The ATB starts with a simple tokenization, and then splits the word into four fields: conjunctions; particles (prepositions in the case of nouns); the word stem; and pronouns (object clitics in the case of verbs, possessive clitics in the case of nouns). The ATB does not tokenize the definite article + Al+. We compare our output to the morphologically analyzed form of the ATB, and determine if our morphological choices lead to the correct identification of those clitics that need to be stripped off.8 For our evaluation, we only choose the Maj chooser, as it performed best on TE1. We evaluate in two ways. In the first evaluation, we determine for each simple input word whether the tokenization is correct (no matter how many ATB tokens result). We report the percentage of words which are correctly tokenized in the second column in Figure 5. In the second evaluation, we report on the number of output tokens. Each word is divided into exactly four token fields, which can be either filled or empty (in the case of the three clitic token fields) or correct or incorrect (in the case of the stem token field). We report in Figure 5 accuracy over all token fields for all words in the test corpus, as well as recall, precision, and f-measure for the non-null token fields. The baseline BL is the tokenization associated with the morphological analysis most frequently chosen for the input word in training. 8The ATB generates normalized forms of certain clitics and of the word stem, so that the resulting tokens are not simply the result of splitting the original words. We do not actually generate the surface token form from our deep representation, but this can be done in a deterministic, rule-based manner, given our rich morphological analysis, e.g., by using ALMORGEANA in generation mode after splitting off all separable tokens. While the token-based evaluation is identical to that performed by Diab et al. (2004), the results are not directly comparable as they did not use actual input words, but rather recreated input words from the regenerated tokens in the ATB. Sometimes this can simplify the analysis: for example, a p (ta marbuta) must be word-final in Arabic orthography, and thus a word-medial p in a recreated input word reliably signals a token boundary. The rather high baseline shows that tokenization is not a hard problem. 8 Evaluating POS Tagging The POS tagset Diab et al. (2004) use is a subset of the tagset for English that was introduced with the English Penn Treebank. The large set of Arabic tags has been mapped (by the Linguistic Data Consortium) to this smaller English set, and the meaning of the English tags has changed. We consider this tagset unmotivated, as it makes morphological distinctions because they are marked in English, not Arabic. The morphological distinctions that the English tagset captures represent the complete morphological variation that can be found in English. However, in Arabic, much morphological variation goes untagged. For example, verbal inflections for subject person, number, and gender are not marked; dual and plural are not distinguished on nouns; and gender is not marked on nouns at all. In Arabic nouns, arguably the gender feature is the more interesting distinction (rather than the number feature) as verbs in Arabic always agree with their nominal subjects in gender. Agreement in number occurs only when the nominal subject precedes the verb. We use the tagset here only to compare to previous work. 
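The two evaluations of Section 7 can be sketched as below: word accuracy, and precision/recall/f-measure over the non-null token fields. Each word's tokenization is assumed here to be encoded as a 4-tuple (conjunction, particle, stem, pronoun) with None for an empty clitic field; this encoding is chosen for illustration and is not the ATB's actual file format.

```python
def evaluate_tokenization(gold, pred):
    """gold, pred: parallel lists of (conj, particle, stem, pronoun) tuples, one per word."""
    word_acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        for gf, pf in zip(g, p):
            if pf is not None:           # predicted non-null token field
                tp += int(gf == pf)
                fp += int(gf != pf)
            if gf is not None and gf != pf:
                fn += 1                  # gold non-null field not recovered correctly
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return {"word_acc": word_acc, "precision": prec, "recall": rec, "f_measure": f}

gold = [("w", None, "ktAb", "h"), (None, "b", "byt", None)]
pred = [("w", None, "ktAb", "h"), (None, None, "bbyt", None)]
print(evaluate_tokenization(gold, pred))
```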
Instead, we advocate using a reduced part-of-speech tag set,9 along with the other orthogonal linguistic features in Figure 2. We map our best solutions as chosen by the Maj model in Section 6 to the English tagset, and we furthermore assume (as do Diab et al. (2004)) the gold standard tokenization. We then evaluate against the gold standard POS tagging which we have mapped 9 We use V (Verb), N (Noun), PN (Proper Noun), AJ (Adjective), AV (Adverb), PRO (Nominal Pronoun), P (Preposition/Particle), D (Determiner), C (Conjunction), NEG (Negative particle), NUM (Number), AB (Abbreviation), IJ (Interjection), PX (Punctuation), and X (Unknown). 579 Corpus TE1 TE2 Method Tags All Words All Words BL PTB 93.9 93.3 90.9 89.8 Smp 94.9 94.3 92.6 91.4 Maj PTB 97.6 97.5 95.7 95.2 Smp 98.1 97.8 96.5 96.0 Figure 6: Part-of-speech tagging accuracy measured for all tokens (based on gold-standard tokenization) and only for word tokens, using the Penn Treebank (PTB) tagset as well as the smaller tagset (Smp) (see Footnote 9); BL is the baseline obtained by using the POS value from the baseline tag used in Section 6 similarly. We obtain a score for TE1 of 97.6% on all tokens. Diab et al. (2004) report a score of 95.5% for all tokens on a test corpus drawn from ATB1, thus their figure is comparable to our score of 97.6%. On our own reduced POS tagset, evaluating on TE1, we obtain an accuracy score of 98.1% on all tokens. The full dataset is shown in Figure 6. 9 Conclusion and Outlook We have shown how to use a morphological analyzer for tokenization, part-of-speech tagging, and morphological disambiguation in Arabic. We have shown that the use of a morphological analyzer is beneficial in POS tagging, and we believe our results are the best published to date for tokenization of naturally occurring input (in undiacritized orthography) and POS tagging. We intend to apply our approach to Arabic dialects, for which currently no annotated corpora exist, and for which very few written corpora of any kind exist (making the dialects bad candidates even for unsupervised learning). However, there is a fair amount of descriptive work on dialectal morphology, so that dialectal morphological analyzers may be easier to come by than dialect corpora. We intend to explore to what extent we can transfer models trained on Standard Arabic to dialectal morphological disambiguation. References Imad A. Al-Sughaiyer and Ibrahim A. Al-Kharashi. 2004. Arabic morphological analysis techniques: A comprehensive survey. Journal of the American Society for Information Science and Technology, 55(3):189–213. Tim Buckwalter. 2002. Buckwalter Arabic Morphological Analyzer Version 1.0. Linguistic Data Consortium, University of Pennsylvania, 2002. LDC Catalog No.: LDC2002L49. William Cohen. 1996. Learning trees and rules with set-valued features. In Fourteenth Conference of the American Association of Artificial Intelligence. AAAI. Kareem Darwish. 2003. Building a shallow Arabic morphological analyser in one day. In ACL02 Workshop on Computational Approaches to Semitic Languages, Philadelpia, PA. Association for Computational Linguistics. Mona Diab, Kadri Hacioglu, and Daniel Jurafsky. 2004. Automatic tagging of arabic text: From raw text to base phrase chunks. In 5th Meeting of the North American Chapter of the Association for Computational Linguistics/Human Language Technologies Conference (HLT-NAACL04), Boston, MA. Nizar Habash. 2005. Arabic morphological representations for machine translation. 
In Abdelhadi Soudi, Antal van den Bosch, and Guenter Neumann, editors, Arabic Computational Morphology: Knowledgebased and Empirical Methods, Text, Speech, and Language Technology. Kluwer/Springer. in press. Jan Hajiˇc. 2000. Morphological tagging: Data vs. dictionaries. In 1st Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL’00), Seattle, WA. Taku Kudo and Yuji Matsumoto. 2003. Fast methods for kernel-based text analysis. In 41st Meeting of the Association for Computational Linguistics (ACL’03), Sapporo, Japan. Young-Suk Lee, Kishore Papineni, Salim Roukos, Ossama Emam, and Hany Hassan. 2003. Language model based Arabic word segmentation. In 41st Meeting of the Association for Computational Linguistics (ACL’03), pages 399–406, Sapporo, Japan. Mohamed Maamouri, Ann Bies, and Tim Buckwalter. 2004. The penn arabic treebank : Building a largescale annotated arabic corpus. In NEMLAR Conference on Arabic Language Resources and Tools, Cairo, Egypt. Monica Rogati, J. Scott McCarley, and Yiming Yang. 2003. Unsupervised learning of arabic stemming using a parallel corpus. In 41st Meeting of the Association for Computational Linguistics (ACL’03), pages 391–398, Sapporo, Japan. 580 | 2005 | 71 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 581–588, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Semantic Role Labeling Using Different Syntactic Views∗ Sameer Pradhan, Wayne Ward, Kadri Hacioglu, James H. Martin Center for Spoken Language Research, University of Colorado, Boulder, CO 80303 {spradhan,whw,hacioglu,martin}@cslr.colorado.edu Daniel Jurafsky Department of Linguistics, Stanford University, Stanford, CA 94305 [email protected] Abstract Semantic role labeling is the process of annotating the predicate-argument structure in text with semantic labels. In this paper we present a state-of-the-art baseline semantic role labeling system based on Support Vector Machine classifiers. We show improvements on this system by: i) adding new features including features extracted from dependency parses, ii) performing feature selection and calibration and iii) combining parses obtained from semantic parsers trained using different syntactic views. Error analysis of the baseline system showed that approximately half of the argument identification errors resulted from parse errors in which there was no syntactic constituent that aligned with the correct argument. In order to address this problem, we combined semantic parses from a Minipar syntactic parse and from a chunked syntactic representation with our original baseline system which was based on Charniak parses. All of the reported techniques resulted in performance improvements. 1 Introduction Semantic Role Labeling is the process of annotating the predicate-argument structure in text with se∗This research was partially supported by the ARDA AQUAINT program via contract OCG4423B and by the NSF via grants IS-9978025 and ITR/HCI 0086132 mantic labels (Gildea and Jurafsky, 2000; Gildea and Jurafsky, 2002; Gildea and Palmer, 2002; Surdeanu et al., 2003; Hacioglu and Ward, 2003; Chen and Rambow, 2003; Gildea and Hockenmaier, 2003; Pradhan et al., 2004; Hacioglu, 2004). The architecture underlying all of these systems introduces two distinct sub-problems: the identification of syntactic constituents that are semantic roles for a given predicate, and the labeling of the those constituents with the correct semantic role. A detailed error analysis of our baseline system indicates that the identification problem poses a significant bottleneck to improving overall system performance. The baseline system’s accuracy on the task of labeling nodes known to represent semantic arguments is 90%. On the other hand, the system’s performance on the identification task is quite a bit lower, achieving only 80% recall with 86% precision. There are two sources of these identification errors: i) failures by the system to identify all and only those constituents that correspond to semantic roles, when those constituents are present in the syntactic analysis, and ii) failures by the syntactic analyzer to provide the constituents that align with correct arguments. The work we present here is tailored to address these two sources of error in the identification problem. The remainder of this paper is organized as follows. We first describe a baseline system based on the best published techniques. We then report on two sets of experiments using techniques that improve performance on the problem of finding arguments when they are present in the syntactic analysis. 
In the first set of experiments we explore new 581 features, including features extracted from a parser that provides a different syntactic view – a Combinatory Categorial Grammar (CCG) parser (Hockenmaier and Steedman, 2002). In the second set of experiments, we explore approaches to identify optimal subsets of features for each argument class, and to calibrate the classifier probabilities. We then report on experiments that address the problem of arguments missing from a given syntactic analysis. We investigate ways to combine hypotheses generated from semantic role taggers trained using different syntactic views – one trained using the Charniak parser (Charniak, 2000), another on a rule-based dependency parser – Minipar (Lin, 1998), and a third based on a flat, shallow syntactic chunk representation (Hacioglu, 2004a). We show that these three views complement each other to improve performance. 2 Baseline System For our experiments, we use Feb 2004 release of PropBank1 (Kingsbury and Palmer, 2002; Palmer et al., 2005), a corpus in which predicate argument relations are marked for verbs in the Wall Street Journal (WSJ) part of the Penn TreeBank (Marcus et al., 1994). PropBank was constructed by assigning semantic arguments to constituents of handcorrected TreeBank parses. Arguments of a verb are labeled ARG0 to ARG5, where ARG0 is the PROTO-AGENT, ARG1 is the PROTO-PATIENT, etc. In addition to these CORE ARGUMENTS, additional ADJUNCTIVE ARGUMENTS, referred to as ARGMs are also marked. Some examples are ARGM-LOC, for locatives; ARGM-TMP, for temporals; ARGMMNR, for manner, etc. Figure 1 shows a syntax tree along with the argument labels for an example extracted from PropBank. We use Sections 02-21 for training, Section 00 for development and Section 23 for testing. We formulate the semantic labeling problem as a multi-class classification problem using Support Vector Machine (SVM) classifier (Hacioglu et al., 2003; Pradhan et al., 2003; Pradhan et al., 2004) TinySVM2 along with YamCha3 (Kudo and Mat1http://www.cis.upenn.edu/˜ace/ 2http://chasen.org/˜taku/software/TinySVM/ 3http://chasen.org/˜taku/software/yamcha/ Shhhh h ( ( ( ( ( NPhhhh ( ( ( ( T he acquisition ARG1 VP``` ` VBD was NULL VPXXX VBN completed predicate PP```` in September ARGM−TMP [ARG1 The acquisition] was [predicate completed] [ARGM−TMP in September]. Figure 1: Syntax tree for a sentence illustrating the PropBank tags. sumoto, 2000; Kudo and Matsumoto, 2001) are used to implement the system. Using what is known as the ONE VS ALL classification strategy, n binary classifiers are trained, where n is number of semantic classes including a NULL class. The baseline feature set is a combination of features introduced by Gildea and Jurafsky (2002) and ones proposed in Pradhan et al., (2004), Surdeanu et al., (2003) and the syntactic-frame feature proposed in (Xue and Palmer, 2004). Table 1 lists the features used. PREDICATE LEMMA PATH: Path from the constituent to the predicate in the parse tree. POSITION: Whether the constituent is before or after the predicate. VOICE PREDICATE SUB-CATEGORIZATION PREDICATE CLUSTER HEAD WORD: Head word of the constituent. HEAD WORD POS: POS of the head word NAMED ENTITIES IN CONSTITUENTS: 7 named entities as 7 binary features. PARTIAL PATH: Path from the constituent to the lowest common ancestor of the predicate and the constituent. 
VERB SENSE INFORMATION: Oracle verb sense information from PropBank HEAD WORD OF PP: Head of PP replaced by head word of NP inside it, and PP replaced by PP-preposition FIRST AND LAST WORD/POS IN CONSTITUENT ORDINAL CONSTITUENT POSITION CONSTITUENT TREE DISTANCE CONSTITUENT RELATIVE FEATURES: Nine features representing the phrase type, head word and head word part of speech of the parent, and left and right siblings of the constituent. TEMPORAL CUE WORDS DYNAMIC CLASS CONTEXT SYNTACTIC FRAME CONTENT WORD FEATURES: Content word, its POS and named entities in the content word Table 1: Features used in the Baseline system As described in (Pradhan et al., 2004), we postprocess the n-best hypotheses using a trigram language model of the argument sequence. We analyze the performance on three tasks: • Argument Identification – This is the process of identifying the parsed constituents in the sentence that represent semantic arguments of a given predicate. 582 • Argument Classification – Given constituents known to represent arguments of a predicate, assign the appropriate argument labels to them. • Argument Identification and Classification – A combination of the above two tasks. ALL ARGs Task P R F1 A (%) (%) (%) HAND Id. 96.2 95.8 96.0 Classification 93.0 Id. + Classification 89.9 89.0 89.4 AUTOMATIC Id. 86.8 80.0 83.3 Classification 90.1 Id. + Classification 80.9 76.8 78.8 Table 2: Baseline system performance on all tasks using hand-corrected parses and automatic parses on PropBank data. Table 2 shows the performance of the system using the hand corrected, TreeBank parses (HAND) and using parses produced by a Charniak parser (AUTOMATIC). Precision (P), Recall (R) and F1 scores are given for the identification and combined tasks, and Classification Accuracy (A) for the classification task. Classification performance using Charniak parses is about 3% absolute worse than when using TreeBank parses. On the other hand, argument identification performance using Charniak parses is about 12.7% absolute worse. Half of these errors – about 7% are due to missing constituents, and the other half – about 6% are due to mis-classifications. Motivated by this severe degradation in argument identification performance for automatic parses, we examined a number of techniques for improving argument identification. We made a number of changes to the system which resulted in improved performance. The changes fell into three categories: i) new features, ii) feature selection and calibration, and iii) combining parses from different syntactic representations. 3 Additional Features 3.1 CCG Parse Features While the Path feature has been identified to be very important for the argument identification task, it is one of the most sparse features and may be difficult to train or generalize (Pradhan et al., 2004; Xue and Palmer, 2004). A dependency grammar should generate shorter paths from the predicate to dependent words in the sentence, and could be a more robust complement to the phrase structure grammar paths extracted from the Charniak parse tree. Gildea and Hockenmaier (2003) report that using features extracted from a Combinatory Categorial Grammar (CCG) representation improves semantic labeling performance on core arguments. We evaluated features from a CCG parser combined with our baseline feature set. We used three features that were introduced by Gildea and Hockenmaier (2003): • Phrase type – This is the category of the maximal projection between the two words – the predicate and the dependent word. 
• Categorial Path – This is a feature formed by concatenating the following three values: i) category to which the dependent word belongs, ii) the direction of dependence and iii) the slot in the category filled by the dependent word. • Tree Path – This is the categorial analogue of the path feature in the Charniak parse based system, which traces the path from the dependent word to the predicate through the binary CCG tree. Parallel to the hand-corrected TreeBank parses, we also had access to correct CCG parses derived from the TreeBank (Hockenmaier and Steedman, 2002a). We performed two sets of experiments. One using the correct CCG parses, and the other using parses obtained using StatCCG4 parser (Hockenmaier and Steedman, 2002). We incorporated these features in the systems based on hand-corrected TreeBank parses and Charniak parses respectively. For each constituent in the Charniak parse tree, if there was a dependency between the head word of the constituent and the predicate, then the corresponding CCG features for those words were added to the features for that constituent. Table 3 shows the performance of the system when these features were added. The corresponding baseline performances are mentioned in parentheses. 3.2 Other Features We added several other features to the system. Position of the clause node (S, SBAR) seems to be 4Many thanks to Julia Hockenmaier for providing us with the CCG bank as well as the StatCCG parser. 583 ALL ARGs Task P R F1 (%) (%) HAND Id. 97.5 (96.2) 96.1 (95.8) 96.8 (96.0) Id. + Class. 91.8 (89.9) 90.5 (89.0) 91.2 (89.4) AUTOMATIC Id. 87.1 (86.8) 80.7 (80.0) 83.8 (83.3) Id. + Class. 81.5 (80.9) 77.2 (76.8) 79.3 (78.8) Table 3: Performance improvement upon adding CCG features to the Baseline system. an important feature in argument identification (Hacioglu et al., 2004) therefore we experimented with four clause-based path feature variations. We added the predicate context to capture predicate sense variations. For some adjunctive arguments, punctuation plays an important role, so we added some punctuation features. All the new features are shown in Table 4 CLAUSE-BASED PATH VARIATIONS: I. Replacing all the nodes in a path other than clause nodes with an “*”. For example, the path NP↑S↑VP↑SBAR↑NP↑VP↓VBD becomes NP↑S↑*S↑*↑*↓VBD II. Retaining only the clause nodes in the path, which for the above example would produce NP↑S↑S↓VBD, III. Adding a binary feature that indicates whether the constituent is in the same clause as the predicate, IV. collapsing the nodes between S nodes which gives NP↑S↑NP↑VP↓VBD. PATH N-GRAMS: This feature decomposes a path into a series of trigrams. For example, the path NP↑S↑VP↑SBAR↑NP↑VP↓VBD becomes: NP↑S↑VP, S↑VP↑SBAR, VP↑SBAR↑NP, SBAR↑NP↑VP, etc. We used the first ten trigrams as ten features. Shorter paths were padded with nulls. SINGLE CHARACTER PHRASE TAGS: Each phrase category is clustered to a category defined by the first character of the phrase label. PREDICATE CONTEXT: Two words and two word POS around the predicate and including the predicate were added as ten new features. PUNCTUATION: Punctuation before and after the constituent were added as two new features. FEATURE CONTEXT: Features for argument bearing constituents were added as features to the constituent being classified. Table 4: Other Features 4 Feature Selection and Calibration In the baseline system, we used the same set of features for all the n binary ONE VS ALL classifiers. 
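As a concrete illustration of one of the templates in Table 4 above, the PATH N-GRAMS feature can be computed roughly as follows; the null symbol and the handling of paths shorter than a trigram are our assumptions.

```python
import re

def path_ngrams(path, n=3, max_feats=10, null="<NULL>"):
    # Split the path into alternating node labels and direction arrows.
    tokens = re.split(r"([↑↓])", path)        # e.g. ['NP', '↑', 'S', '↑', 'VP', ...]
    nodes, arrows = tokens[0::2], tokens[1::2]
    ngrams = []
    for i in range(len(nodes) - n + 1):
        gram = nodes[i]
        for j in range(i, i + n - 1):
            gram += arrows[j] + nodes[j + 1]
        ngrams.append(gram)
    ngrams = (ngrams or [path])[:max_feats]   # paths shorter than n kept whole (our assumption)
    return ngrams + [null] * (max_feats - len(ngrams))

print(path_ngrams("NP↑S↑VP↑SBAR↑NP↑VP↓VBD"))
# ['NP↑S↑VP', 'S↑VP↑SBAR', 'VP↑SBAR↑NP', 'SBAR↑NP↑VP', 'NP↑VP↓VBD',
#  '<NULL>', '<NULL>', '<NULL>', '<NULL>', '<NULL>']
```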
Error analysis showed that some features specifically suited for one argument class, for example, core arguments, tend to hurt performance on some adjunctive arguments. Therefore, we thought that selecting subsets of features for each argument class might improve performance. To achieve this, we performed a simple feature selection procedure. For each argument, we started with the set of features introduced by (Gildea and Jurafsky, 2002). We pruned this set by training classifiers after leaving out one feature at a time and checking its performance on a development set. We used the χ2 significance while making pruning decisions. Following that, we added each of the other features one at a time to the pruned baseline set of features and selected ones that showed significantly improved performance. Since the feature selection experiments were computationally intensive, we performed them using 10k training examples. SVMs output distances not probabilities. These distances may not be comparable across classifiers, especially if different features are used to train each binary classifier. In the baseline system, we used the algorithm described by Platt (Platt, 2000) to convert the SVM scores into probabilities by fitting to a sigmoid. When all classifiers used the same set of features, fitting all scores to a single sigmoid was found to give the best performance. Since different feature sets are now used by the classifiers, we trained a separate sigmoid for each classifier. Raw Scores Probabilities After lattice-rescoring Uncalibrated Calibrated (%) (%) (%) Same Feat. same sigmoid 74.7 74.7 75.4 Selected Feat. diff. sigmoids 75.4 75.1 76.2 Table 5: Performance improvement on selecting features per argument and calibrating the probabilities on 10k training data. Foster and Stine (2004) show that the pooladjacent-violators (PAV) algorithm (Barlow et al., 1972) provides a better method for converting raw classifier scores to probabilities when Platt’s algorithm fails. The probabilities resulting from either conversions may not be properly calibrated. So, we binned the probabilities and trained a warping function to calibrate them. For each argument classifier, we used both the methods for converting raw SVM scores into probabilities and calibrated them using a development set. Then, we visually inspected the calibrated plots for each classifier and chose the method that showed better calibration as the calibration procedure for that classifier. Plots of the predicted probabilities versus true probabilities for the ARGM-TMP VS ALL classifier, before and after calibration are shown in Figure 2. The performance improvement over a classifier that is trained using all the features for all the classes is shown in Table 5. Table 6 shows the performance of the system after adding the CCG features, additional features ex584 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Predicted Probability True Probability Before Calibration 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Predicted Probability True Probability After Calibration Figure 2: Plots showing true probabilities versus predicted probabilities before and after calibration on the test set for ARGM-TMP. tracted from the Charniak parse tree, and performing feature selection and calibration. Numbers in parentheses are the corresponding baseline performances. TASK P R F1 A (%) (%) (%) Id. 86.9 (86.8) 84.2 (80.0) 85.5 (83.3) Class. 92.0 (90.1) Id. + Class. 
82.1 (80.9) 77.9 (76.8) 79.9 (78.8) Table 6: Best system performance on all tasks using automatically generated syntactic parses. 5 Alternative Syntactic Views Adding new features can improve performance when the syntactic representation being used for classification contains the correct constituents. Additional features can’t recover from the situation where the parse tree being used for classification doesn’t contain the correct constituent representing an argument. Such parse errors account for about 7% absolute of the errors (or, about half of 12.7%) for the Charniak parse based system. To address these errors, we added two additional parse representations: i) Minipar dependency parser, and ii) chunking parser (Hacioglu et al., 2004). The hope is that these parsers will produce different errors than the Charniak parser since they represent different syntactic views. The Charniak parser is trained on the Penn TreeBank corpus. Minipar is a rule based dependency parser. The chunking parser is trained on PropBank and produces a flat syntactic representation that is very different from the full parse tree produced by Charniak. A combination of the three different parses could produce better results than any single one. 5.1 Minipar-based Semantic Labeler Minipar (Lin, 1998; Lin and Pantel, 2001) is a rulebased dependency parser. It outputs dependencies between a word called head and another called modifier. Each word can modify at most one word. The dependency relationships form a dependency tree. The set of words under each node in Minipar’s dependency tree form a contiguous segment in the original sentence and correspond to the constituent in a constituent tree. We formulate the semantic labeling problem in the same way as in a constituent structure parse, except we classify the nodes that represent head words of constituents. A similar formulation using dependency trees derived from TreeBank was reported in Hacioglu (Hacioglu, 2004). In that experiment, the dependency trees were derived from hand-corrected TreeBank trees using head word rules. Here, an SVM is trained to assign PropBank argument labels to nodes in Minipar dependency trees using the following features: Table 8 shows the performance of the Miniparbased semantic parser. Minipar performance on the PropBank corpus is substantially worse than the Charniak based system. This is understandable from the fact that Minipar is not designed to produce constituents that would exactly match the constituent segmentation used in TreeBank. In the test set, about 37% of the argu585 PREDICATE LEMMA HEAD WORD: The word representing the node in the dependency tree. HEAD WORD POS: Part of speech of the head word. POS PATH: This is the path from the predicate to the head word through the dependency tree connecting the part of speech of each node in the tree. DEPENDENCY PATH: Each word that is connected to the head word has a particular dependency relationship to the word. These are represented as labels on the arc between the words. This feature is the dependencies along the path that connects two words. VOICE POSITION Table 7: Features used in the Baseline system using Minipar parses. Task P R F1 (%) (%) Id. 73.5 43.8 54.6 Id. + Classification 66.2 36.7 47.2 Table 8: Baseline system performance on all tasks using Minipar parses. ments do not have corresponding constituents that match its boundaries. 
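The two path features of Table 7 can be sketched over a toy dependency tree as follows. The tree encoding (node id mapped to word, POS, head id, and dependency relation) and the relation names are illustrative assumptions rather than Minipar's actual output format.

```python
def path_to_root(node, tree):
    path = [node]
    while tree[node][2] is not None:      # follow head links up to the root
        node = tree[node][2]
        path.append(node)
    return path

def dep_path_features(pred, arg, tree):
    """tree: node id -> (word, pos, head id or None, dependency relation to head)."""
    up_pred, up_arg = path_to_root(pred, tree), path_to_root(arg, tree)
    common = next(n for n in up_arg if n in up_pred)        # lowest common ancestor
    up = up_arg[:up_arg.index(common) + 1]                  # argument head word up to the LCA
    down = list(reversed(up_pred[:up_pred.index(common)]))  # LCA down to the predicate
    pos_path = "-".join(tree[n][1] for n in up + down)
    dep_path = "-".join(tree[n][3] for n in up[:-1] + down)  # arc labels along the path
    return pos_path, dep_path

# Toy tree for "Trading accelerated to ... shares":
tree = {
    1: ("trading", "N", 2, "subj"),
    2: ("accelerated", "V", None, "root"),
    3: ("to", "Prep", 2, "mod"),
    4: ("shares", "N", 3, "pcomp-n"),
}
print(dep_path_features(pred=2, arg=4, tree=tree))   # ('N-Prep-V', 'pcomp-n-mod')
```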
In experiments reported by Hacioglu (Hacioglu, 2004), a mismatch of about 8% was introduced in the transformation from handcorrected constituent trees to dependency trees. Using an errorful automatically generated tree, a still higher mismatch would be expected. In case of the CCG parses, as reported by Gildea and Hockenmaier (2003), the mismatch was about 23%. A more realistic way to score the performance is to score tags assigned to head words of constituents, rather than considering the exact boundaries of the constituents as reported by Gildea and Hockenmaier (2003). The results for this system are shown in Table 9. Task P R F1 (%) (%) CHARNIAK Id. 92.2 87.5 89.8 Id. + Classification 85.9 81.6 83.7 MINIPAR Id. 83.3 61.1 70.5 Id. + Classification 72.9 53.5 61.7 Table 9: Head-word based performance using Charniak and Minipar parses. 5.2 Chunk-based Semantic Labeler Hacioglu has previously described a chunk based semantic labeling method (Hacioglu et al., 2004). This system uses SVM classifiers to first chunk input text into flat chunks or base phrases, each labeled with a syntactic tag. A second SVM is trained to assign semantic labels to the chunks. The system is trained on the PropBank training data. WORDS PREDICATE LEMMAS PART OF SPEECH TAGS BP POSITIONS: The position of a token in a BP using the IOB2 representation (e.g. B-NP, I-NP, O, etc.) CLAUSE TAGS: The tags that mark token positions in a sentence with respect to clauses. NAMED ENTITIES: The IOB tags of named entities. TOKEN POSITION: The position of the phrase with respect to the predicate. It has three values as ”before”, ”after” and ”-” (for the predicate) PATH: It defines a flat path between the token and the predicate CLAUSE BRACKET PATTERNS CLAUSE POSITION: A binary feature that identifies whether the token is inside or outside the clause containing the predicate HEADWORD SUFFIXES: suffixes of headwords of length 2, 3 and 4. DISTANCE: Distance of the token from the predicate as a number of base phrases, and the distance as the number of VP chunks. LENGTH: the number of words in a token. PREDICATE POS TAG: the part of speech category of the predicate PREDICATE FREQUENCY: Frequent or rare using a threshold of 3. PREDICATE BP CONTEXT: The chain of BPs centered at the predicate within a window of size -2/+2. PREDICATE POS CONTEXT: POS tags of words immediately preceding and following the predicate. PREDICATE ARGUMENT FRAMES: Left and right core argument patterns around the predicate. NUMBER OF PREDICATES: This is the number of predicates in the sentence. Table 10: Features used by chunk based classifier. Table 10 lists the features used by this classifier. For each token (base phrase) to be tagged, a set of features is created from a fixed size context that surrounds each token. In addition to the above features, it also uses previous semantic tags that have already been assigned to the tokens contained in the linguistic context. A 5-token sliding window is used for the context. P R F1 (%) (%) Id. and Classification 72.6 66.9 69.6 Table 11: Semantic chunker performance on the combined task of Id. and classification. SVMs were trained for begin (B) and inside (I) classes of all arguments and outside (O) class for a total of 78 one-vs-all classifiers. Again, TinySVM5 along with YamCha6 (Kudo and Matsumoto, 2000; Kudo and Matsumoto, 2001) are used as the SVM training and test software. Table 11 presents the system performances on the PropBank test set for the chunk-based system. 
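The chunk-to-argument step can be sketched as follows: a sequence of IOB2 semantic tags over base phrases is converted back into labeled argument spans, keeping the probability of the B- tag as the score of the whole argument (the score later used when combining systems). The (tag, probability) pairing of the classifier output is an assumed representation for illustration.

```python
def chunks_to_arguments(tagged):
    """tagged: one (tag, prob) pair per base phrase, e.g. ('B-ARG0', 0.92)."""
    args, current = [], None
    for i, (tag, prob) in enumerate(tagged):
        if tag.startswith("B-"):
            if current:
                args.append(current)
            current = {"label": tag[2:], "start": i, "end": i, "prob": prob}
        elif tag.startswith("I-") and current and tag[2:] == current["label"]:
            current["end"] = i                 # argument spans several chunks
        else:                                  # 'O' or an inconsistent I- tag closes the argument
            if current:
                args.append(current)
            current = None
    if current:
        args.append(current)
    return args

tags = [("B-ARG0", 0.92), ("O", 0.99), ("B-ARG1", 0.71), ("I-ARG1", 0.65), ("O", 0.97)]
print(chunks_to_arguments(tags))
```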
5http://chasen.org/˜taku/software/TinySVM/ 6http://chasen.org/˜taku/software/yamcha/ 586 6 Combining Semantic Labelers We combined the semantic parses as follows: i) scores for arguments were converted to calibrated probabilities, and arguments with scores below a threshold value were deleted. Separate thresholds were used for each parser. ii) For the remaining arguments, the more probable ones among overlapping ones were selected. In the chunked system, an argument could consist of a sequence of chunks. The probability assigned to the begin tag of an argument was used as the probability of the sequence of chunks forming an argument. Table 12 shows the performance improvement after the combination. Again, numbers in parentheses are respective baseline performances. TASK P R F1 (%) (%) Id. 85.9 (86.8) 88.3 (80.0) 87.1 (83.3) Id. + Class. 81.3 (80.9) 80.7 (76.8) 81.0 (78.8) Table 12: Constituent-based best system performance on argument identification and argument identification and classification tasks after combining all three semantic parses. The main contribution of combining both the Minipar based and the Charniak-based parsers was significantly improved performance on ARG1 in addition to slight improvements to some other arguments. Table 13 shows the effect on selected arguments on sentences that were altered during the the combination of Charniak-based and Chunk-based parses. Number of Propositions 107 Percentage of perfect props before combination 0.00 Percentage of perfect props after combination 45.95 Before After P R F1 P R F1 (%) (%) (%) (%) Overall 94.8 53.4 68.3 80.9 73.8 77.2 ARG0 96.0 85.7 90.5 92.5 89.2 90.9 ARG1 71.4 13.5 22.7 59.4 59.4 59.4 ARG2 100.0 20.0 33.3 50.0 20.0 28.5 ARGM-DIS 100.0 40.0 57.1 100.0 100.0 100.0 Table 13: Performance improvement on parses changed during pair-wise Charniak and Chunk combination. A marked increase in number of propositions for which all the arguments were identified correctly from 0% to about 46% can be seen. Relatively few predicates, 107 out of 4500, were affected by this combination. To give an idea of what the potential improvements of the combinations could be, we performed an oracle experiment for a combined system that tags head words instead of exact constituents as we did in case of Minipar-based and Charniak-based semantic parser earlier. In case of chunks, first word in prepositional base phrases was selected as the head word, and for all other chunks, the last word was selected to be the head word. If the correct argument was found present in either the Charniak, Minipar or Chunk hypotheses then that was selected. The results for this are shown in Table 14. It can be seen that the head word based performance almost approaches the constituent based performance reported on the hand-corrected parses in Table 3 and there seems to be considerable scope for improvement. Task P R F1 (%) (%) C Id. 92.2 87.5 89.8 Id. + Classification 85.9 81.6 83.7 C+M Id. 98.4 90.6 94.3 Id. + Classification 93.1 86.0 89.4 C+CH Id. 98.9 88.8 93.6 Id. + Classification 92.5 83.3 87.7 C+M+CH Id. 99.2 92.5 95.7 Id. + Classification 94.6 88.4 91.5 Table 14: Performance improvement on head word based scoring after oracle combination. Charniak (C), Minipar (M) and Chunker (CH). Table 15 shows the performance improvement in the actual system for pairwise combination of the parsers and one using all three. Task P R F1 (%) (%) C Id. 92.2 87.5 89.8 Id. + Classification 85.9 81.6 83.7 C+M Id. 91.7 89.9 90.8 Id. + Classification 85.0 83.9 84.5 C+CH Id. 
91.5 91.1 91.3 Id. + Classification 84.9 84.3 84.7 C+M+CH Id. 91.5 91.9 91.7 Id. + Classification 85.1 85.5 85.2 Table 15: Performance improvement on head word based scoring after combination. Charniak (C), Minipar (M) and Chunker (CH). 587 7 Conclusions We described a state-of-the-art baseline semantic role labeling system based on Support Vector Machine classifiers. Experiments were conducted to evaluate three types of improvements to the system: i) adding new features including features extracted from a Combinatory Categorial Grammar parse, ii) performing feature selection and calibration and iii) combining parses obtained from semantic parsers trained using different syntactic views. We combined semantic parses from a Minipar syntactic parse and from a chunked syntactic representation with our original baseline system which was based on Charniak parses. The belief was that semantic parses based on different syntactic views would make different errors and that the combination would be complimentary. A simple combination of these representations did lead to improved performance. 8 Acknowledgements This research was partially supported by the ARDA AQUAINT program via contract OCG4423B and by the NSF via grants IS-9978025 and ITR/HCI 0086132. Computer time was provided by NSF ARI Grant #CDA-9601817, NSF MRI Grant #CNS0420873, NASA AIST grant #NAG2-1646, DOE SciDAC grant #DE-FG02-04ER63870, NSF sponsorship of the National Center for Atmospheric Research, and a grant from the IBM Shared University Research (SUR) program. We would like to thank Ralph Weischedel and Scott Miller of BBN Inc. for letting us use their named entity tagger – IdentiFinder; Martha Palmer for providing us with the PropBank data; Dan Gildea and Julia Hockenmaier for providing the gold standard CCG parser information, and all the anonymous reviewers for their helpful comments. References R. E. Barlow, D. J. Bartholomew, J. M. Bremmer, and H. D. Brunk. 1972. Statistical Inference under Order Restrictions. Wiley, New York. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of NAACL, pages 132–139, Seattle, Washington. John Chen and Owen Rambow. 2003. Use of deep linguistics features for the recognition and labeling of semantic arguments. In Proceedings of the EMNLP, Sapporo, Japan. Dean P. Foster and Robert A. Stine. 2004. Variable selection in data mining: building a predictive model for bankruptcy. Journal of American Statistical Association, 99, pages 303–313. Dan Gildea and Julia Hockenmaier. 2003. Identifying semantic roles using combinatory categorial grammar. In Proceedings of the EMNLP, Sapporo, Japan. Daniel Gildea and Daniel Jurafsky. 2000. Automatic labeling of semantic roles. In Proceedings of ACL, pages 512–520, Hong Kong, October. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. Daniel Gildea and Martha Palmer. 2002. The necessity of syntactic parsing for predicate argument recognition. In Proceedings of ACL, Philadelphia, PA. Kadri Hacioglu. 2004. Semantic role labeling using dependency trees. In Proceedings of COLING, Geneva, Switzerland. Kadri Hacioglu and Wayne Ward. 2003. Target word detection and semantic role chunking using support vector machines. In Proceedings of HLT/NAACL, Edmonton, Canada. Kadri Hacioglu, Sameer Pradhan, Wayne Ward, James Martin, and Dan Jurafsky. 2003. Shallow semantic parsing using support vector machines. 
Technical Report TR-CSLR-2003-1, Center for Spoken Language Research, Boulder, Colorado. Kadri Hacioglu, Sameer Pradhan, Wayne Ward, James Martin, and Daniel Jurafsky. 2004. Semantic role labeling by tagging syntactic chunks. In Proceedings of CoNLL-2004, Shared Task – Semantic Role Labeling. Kadri Hacioglu. 2004a. A lightweight semantic chunking model based on tagging. In Proceedings of HLT/NAACL, Boston, MA. Julia Hockenmaier and Mark Steedman. 2002. Generative models for statistical parsing with combinatory grammars. In Proceedings of the ACL, pages 335– 342. Julia Hockenmaier and Mark Steedman. 2002a. Acquiring compact lexicalized grammars from a cleaner treebank. In Proceedings of the 3rd International Conference on Language Resources and Evaluation (LREC-2002), Las Palmas, Canary Islands, Spain. Paul Kingsbury and Martha Palmer. 2002. From Treebank to PropBank. In Proceedings of LREC, Las Palmas, Canary Islands, Spain. Taku Kudo and Yuji Matsumoto. 2000. Use of support vector learning for chunk identification. In Proceedings of CoNLL and LLL, pages 142–144. Taku Kudo and Yuji Matsumoto. 2001. Chunking with support vector machines. In Proceedings of the NAACL. Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules for question answering. Natural Language Engineering, 7(4):343–360. Dekang Lin. 1998. Dependency-based evaluation of MINIPAR. In In Workshop on the Evaluation of Parsing Systems, Granada, Spain. Mitchell Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. 1994. The Penn Treebank: Annotating predicate argument structure. Martha Palmer, Dan Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. To appear Computational Linguistics. John Platt. 2000. Probabilities for support vector machines. In A. Smola, P. Bartlett, B. Scholkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers. MIT press, Cambridge, MA. Sameer Pradhan, Kadri Hacioglu, Wayne Ward, James Martin, and Dan Jurafsky. 2003. Semantic role parsing: Adding semantic structure to unstructured text. In Proceedings of ICDM, Melbourne, Florida. Sameer Pradhan, Wayne Ward, Kadri Hacioglu, James Martin, and Dan Jurafsky. 2004. Shallow semantic parsing using support vector machines. In Proceedings of HLT/NAACL, Boston, MA. Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument structures for information extraction. In Proceedings of ACL, Sapporo, Japan. Nianwen Xue and Martha Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of EMNLP, Barcelona, Spain. 588 | 2005 | 72 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 589–596, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Joint Learning Improves Semantic Role Labeling Kristina Toutanova Dept of Computer Science Stanford University Stanford, CA, 94305 [email protected] Aria Haghighi Dept of Computer Science Stanford University Stanford, CA, 94305 [email protected] Christopher D. Manning Dept of Computer Science Stanford University Stanford, CA, 94305 [email protected] Abstract Despite much recent progress on accurate semantic role labeling, previous work has largely used independent classifiers, possibly combined with separate label sequence models via Viterbi decoding. This stands in stark contrast to the linguistic observation that a core argument frame is a joint structure, with strong dependencies between arguments. We show how to build a joint model of argument frames, incorporating novel features that model these interactions into discriminative loglinear models. This system achieves an error reduction of 22% on all arguments and 32% on core arguments over a stateof-the art independent classifier for goldstandard parse trees on PropBank. 1 Introduction The release of semantically annotated corpora such as FrameNet (Baker et al., 1998) and PropBank (Palmer et al., 2003) has made it possible to develop high-accuracy statistical models for automated semantic role labeling (Gildea and Jurafsky, 2002; Pradhan et al., 2004; Xue and Palmer, 2004). Such systems have identified several linguistically motivated features for discriminating arguments and their labels (see Table 1). These features usually characterize aspects of individual arguments and the predicate. It is evident that the labels and the features of arguments are highly correlated. For example, there are hard constraints – that arguments cannot overlap with each other or the predicate, and also soft constraints – for example, is it unlikely that a predicate will have two or more AGENT arguments, or that a predicate used in the active voice will have a THEME argument prior to an AGENT argument. Several systems have incorporated such dependencies, for example, (Gildea and Jurafsky, 2002; Pradhan et al., 2004; Thompson et al., 2003) and several systems submitted in the CoNLL-2004 shared task (Carreras and M`arquez, 2004). However, we show that there are greater gains to be had by modeling joint information about a verb’s argument structure. We propose a discriminative log-linear joint model for semantic role labeling, which incorporates more global features and achieves superior performance in comparison to state-of-the-art models. To deal with the computational complexity of the task, we employ dynamic programming and reranking approaches. We present performance results on the February 2004 version of PropBank on gold-standard parse trees as well as results on automatic parses generated by Charniak’s parser (Charniak, 2000). 2 Semantic Role Labeling: Task Definition and Architectures Consider the pair of sentences, • [The GM-Jaguar pact]AGENT gives [the car market]RECIPIENT [a much-needed boost]THEME • [A much-needed boost]THEME was given to [the car market]RECIPIENT by [the GM-Jaguar pact]AGENT Despite the different syntactic positions of the labeled phrases, we recognize that each plays the same 589 role – indicated by the label – in the meaning of this sense of the verb give. 
We call such phrases fillers of semantic roles and our task is, given a sentence and a target verb, to return all such phrases along with their correct labels. Therefore one subtask is to group the words of a sentence into phrases or constituents. As in most previous work on semantic role labeling, we assume the existence of a separate parsing model that can assign a parse tree t to each sentence, and the task then is to label each node in the parse tree with the semantic role of the phrase it dominates, or NONE, if the phrase does not fill any role. We do stress however that the joint framework and features proposed here can also be used when only a shallow parse (chunked) representation is available as in the CoNLL-2004 shared task (Carreras and M`arquez, 2004). In the February 2004 version of the PropBank corpus, annotations are done on top of the Penn TreeBank II parse trees (Marcus et al., 1993). Possible labels of arguments in this corpus are the core argument labels ARG[0-5], and the modifier argument labels. The core arguments ARG[3-5] do not have consistent global roles and tend to be verb specific. There are about 14 modifier labels such as ARGM-LOC and ARGM-TMP, for location and temporal modifiers respectively.1 Figure 1 shows an example parse tree annotated with semantic roles. We distinguish between models that learn to label nodes in the parse tree independently, called local models, and models that incorporate dependencies among the labels of multiple nodes, called joint models. We build both local and joint models for semantic role labeling, and evaluate the gains achievable by incorporating joint information. We start by introducing our local models, and later build on them to define joint models. 3 Local Classifiers In the context of role labeling, we call a classifier local if it assigns a probability (or score) to the label of an individual parse tree node ni independently of the labels of other nodes. We use the standard separation of the task of semantic role labeling into identification and classifi1For a full listing of PropBank argument labels see (Palmer et al., 2003) cation phases. In identification, our task is to classify nodes of t as either ARG, an argument (including modifiers), or NONE, a non-argument. In classification, we are given a set of arguments in t and must label each one with its appropriate semantic role. Formally, let L denote a mapping of the nodes in t to a label set of semantic roles (including NONE) and let Id(L) be the mapping which collapses L’s non-NONE values into ARG. Then we can decompose the probability of a labeling L into probabilities according to an identification model PID and a classification model PCLS. PSRL(L|t, v) = PID(Id(L)|t, v) × PCLS(L|t, v, Id(L)) (1) This decomposition does not encode any independence assumptions, but is a useful way of thinking about the problem. Our local models for semantic role labeling use this decomposition. Previous work has also made this distinction because, for example, different features have been found to be more effective for the two tasks, and it has been a good way to make training and search during testing more efficient. Here we use the same features for local identification and classification models, but use the decomposition for efficiency of training. The identification models are trained to classify each node in a parse tree as ARG or NONE, and the classification models are trained to label each argument node in the training set with its specific label. 
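As a minimal illustration of this decomposition (its per-node form is made explicit in Equation 2 below), the stub below chains a hypothetical identification distribution and a hypothetical classification distribution to score the label of a single node; the probability tables are invented for the example.

```python
import math

def node_log_score(label, id_probs, cls_probs):
    """log P_ID(Id(label)) + log P_CLS(label); Id collapses all non-NONE labels to ARG."""
    if label == "NONE":
        return math.log(id_probs["NONE"])
    return math.log(id_probs["ARG"]) + math.log(cls_probs[label])

id_probs = {"ARG": 0.8, "NONE": 0.2}                      # hypothetical identification model output
cls_probs = {"ARG0": 0.6, "ARG1": 0.3, "ARGM-TMP": 0.1}   # hypothetical classification model output
print(node_log_score("ARG0", id_probs, cls_probs), node_log_score("NONE", id_probs, cls_probs))
```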
In this way the training set for the classification models is smaller. Note that we don’t do any hard pruning at the identification stage in testing and can find the exact labeling of the complete parse tree, which is the maximizer of Equation 1. Thus we do not have accuracy loss as in the two-pass hard prune strategy described in (Pradhan et al., 2005). In previous work, various machine learning methods have been used to learn local classifiers for role labeling. Examples are linearly interpolated relative frequency models (Gildea and Jurafsky, 2002), SVMs (Pradhan et al., 2004), decision trees (Surdeanu et al., 2003), and log-linear models (Xue and Palmer, 2004). In this work we use log-linear models for multi-class classification. One advantage of log-linear models over SVMs for us is that they produce probability distributions and thus identification 590 Standard Features (Gildea and Jurafsky, 2002) PHRASE TYPE: Syntactic Category of node PREDICATE LEMMA: Stemmed Verb PATH: Path from node to predicate POSITION: Before or after predicate? VOICE: Active or passive relative to predicate HEAD WORD OF PHRASE SUB-CAT: CFG expansion of predicate’s parent Additional Features (Pradhan et al., 2004) FIRST/LAST WORD LEFT/RIGHT SISTER PHRASE-TYPE LEFT/RIGHT SISTER HEAD WORD/POS PARENT PHRASE-TYPE PARENT POS/HEAD-WORD ORDINAL TREE DISTANCE: Phrase Type with appended length of PATH feature NODE-LCA PARTIAL PATH Path from constituent to Lowest Common Ancestor with predicate node PP PARENT HEAD WORD If parent is a PP return parent’s head word PP NP HEAD WORD/POS For a PP, retrieve the head Word / POS of its rightmost NP Selected Pairs (Xue and Palmer, 2004) PREDICATE LEMMA & PATH PREDICATE LEMMA & HEAD WORD PREDICATE LEMMA & PHRASE TYPE VOICE & POSITION PREDICATE LEMMA & PP PARENT HEAD WORD Table 1: Baseline Features and classification models can be chained in a principled way, as in Equation 1. The features we used for local identification and classification models are outlined in Table 1. These features are a subset of features used in previous work. The standard features at the top of the table were defined by (Gildea and Jurafsky, 2002), and the rest are other useful lexical and structural features identified in more recent work (Pradhan et al., 2004; Surdeanu et al., 2003; Xue and Palmer, 2004). The most direct way to use trained local identification and classification models in testing is to select a labeling L of the parse tree that maximizes the product of the probabilities according to the two models as in Equation 1. Since these models are local, this is equivalent to independently maximizing the product of the probabilities of the two models for the label li of each parse tree node ni as shown below in Equation 2. P ℓ SRL(L|t, v) = Y ni∈t PID(Id(li)|t, v) (2) × Y ni∈t PCLS(li|t, v, Id(li)) A problem with this approach is that a maximizing labeling of the nodes could possibly violate the constraint that argument nodes should not overlap with each other. Therefore, to produce a consistent set of arguments with local classifiers, we must have a way of enforcing the non-overlapping constraint. 3.1 Enforcing the Non-overlapping Constraint Here we describe a fast exact dynamic programming algorithm to find the most likely non-overlapping (consistent) labeling of all nodes in the parse tree, according to a product of probabilities from local models, as in Equation 2. For simplicity, we describe the dynamic program for the case where only two classes are possible – ARG and NONE. 
The generalization to more classes is straightforward. Intuitively, the algorithm is similar to the Viterbi algorithm for context-free grammars, because we can describe the non-overlapping constraint by a “grammar” that disallows ARG nodes to have ARG descendants. Below we will talk about maximizing the sum of the logs of local probabilities rather than the product of local probabilities, which is equivalent. The dynamic program works from the leaves of the tree up and finds a best assignment for each tree, using already computed assignments for its children. Suppose we want the most likely consistent assignment for subtree t with children trees t1, . . . , tk each storing the most likely consistent assignment of nodes it dominates as well as the log-probability of the assignment of all nodes it dominates to NONE. The most likely assignment for t is the one that corresponds to the maximum of: • The sum of the log-probabilities of the most likely assignments of the children subtrees t1, . . . , tk plus the log-probability for assigning the node t to NONE • The sum of the log-probabilities for assigning all of ti’s nodes to NONE plus the logprobability for assigning the node t to ARG. Propagating this procedure from the leaves to the root of t, we have our most likely non-overlapping assignment. By slightly modifying this procedure, we obtain the most likely assignment according to 591 a product of local identification and classification models. We use the local models in conjunction with this search procedure to select a most likely labeling in testing. Test set results for our local model P ℓ SRL are given in Table 2. 4 Joint Classifiers As discussed in previous work, there are strong dependencies among the labels of the semantic argument nodes of a verb. A drawback of local models is that, when they decide the label of a parse tree node, they cannot use information about the labels and features of other nodes in the tree. Furthermore, these dependencies are highly nonlocal. For instance, to avoid repeating argument labels in a frame, we need to add a dependency from each node label to the labels of all other nodes. A factorized sequence model that assumes a finite Markov horizon, such as a chain Conditional Random Field (Lafferty et al., 2001), would not be able to encode such dependencies. The need for Re-ranking For argument identification, the number of possible assignments for a parse tree with n nodes is 2n. This number can run into the hundreds of billions for a normal-sized tree. For argument labeling, the number of possible assignments is ≈20m, if m is the number of arguments of a verb (typically between 2 and 5), and 20 is the approximate number of possible labels if considering both core and modifying arguments. Training a model which has such huge number of classes is infeasible if the model does not factorize due to strong independence assumptions. Therefore, in order to be able to incorporate long-range dependencies in our models, we chose to adopt a re-ranking approach (Collins, 2000), which selects from likely assignments generated by a model which makes stronger independence assumptions. We utilize the top N assignments of our local semantic role labeling model P ℓ SRL to generate likely assignments. As can be seen from Table 3, for relatively small values of N, our re-ranking approach does not present a serious bottleneck to performance. We used a value of N = 20 for training. 
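A sketch of the bottom-up dynamic program of Section 3.1, for the two-class ARG/NONE case, is given below. The Node class and the precomputed per-node log-probabilities are illustrative data structures, not those of the actual system.

```python
import math

class Node:
    def __init__(self, name, log_p_arg, log_p_none, children=()):
        self.name, self.log_p_arg, self.log_p_none = name, log_p_arg, log_p_none
        self.children = list(children)

def best_consistent(node):
    """Return (all_none_logprob, best_logprob, arg_nodes) for the subtree rooted at node,
    where arg_nodes lists the nodes labeled ARG in the best non-overlapping assignment
    (every other node is NONE)."""
    child_results = [best_consistent(c) for c in node.children]
    all_none = node.log_p_none + sum(r[0] for r in child_results)
    # Option 1: label this node NONE and keep the children's best consistent assignments.
    opt1 = node.log_p_none + sum(r[1] for r in child_results)
    opt1_args = [n for r in child_results for n in r[2]]
    # Option 2: label this node ARG, which forces every node it dominates to be NONE.
    opt2 = node.log_p_arg + sum(r[0] for r in child_results)
    if opt2 > opt1:
        return all_none, opt2, [node.name]
    return all_none, opt1, opt1_args

np1 = Node("NP1", math.log(0.6), math.log(0.4))
pp1 = Node("PP1", math.log(0.2), math.log(0.8))
root = Node("S1", math.log(0.1), math.log(0.9), [np1, pp1])
_, best_logp, arg_nodes = best_consistent(root)
print(arg_nodes, best_logp)      # the nodes labeled ARG and the corresponding log-probability
```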
In Table 3 we can see that if we could pick, using an oracle, the best assignment out for the top 20 assignments according to the local model, we would achieve an F-Measure of 98.8 on all arguments. Increasing the number of N to 30 results in a very small gain in the upper bound on performance and a large increase in memory requirements. We therefore selected N = 20 as a good compromise. Generation of top N most likely joint assignments We generate the top N most likely nonoverlapping joint assignments of labels to nodes in a parse tree according to a local model P ℓ SRL, by an exact dynamic programming algorithm, which is a generalization of the algorithm for finding the top non-overlapping assignment described in section 3.1. Parametric Models We learn log-linear re-ranking models for joint semantic role labeling, which use feature maps from a parse tree and label sequence to a vector space. The form of the models is as follows. Let Φ(t, v, L) ∈ Rs denote a feature map from a tree t, target verb v, and joint assignment L of the nodes of the tree, to the vector space Rs. Let L1, L2, · · · , LN denote top N possible joint assignments. We learn a loglinear model with a parameter vector W, with one weight for each of the s dimensions of the feature vector. The probability (or score) of an assignment L according to this re-ranking model is defined as: P r SRL(L|t, v) = e⟨Φ(t,v,L),W ⟩ PN j=1 e⟨Φ(t,v,Lj).W ⟩ (3) The score of an assignment L not in the top N is zero. We train the model to maximize the sum of log-likelihoods of the best assignments minus a quadratic regularization term. In this framework, we can define arbitrary features of labeled trees that capture general properties of predicate-argument structure. Joint Model Features We will introduce the features of the joint reranking model in the context of the example parse tree shown in Figure 1. We model dependencies not only between the label of a node and the labels of 592 S1 NP1-ARG1 Final-hour trading VP1 VBD1 PRED accelerated PP1 ARG4 TO1 to NP2 108.1 million shares NP3 ARGM-TMP yesterday Figure 1: An example tree from the PropBank with Semantic Role Annotations. other nodes, but also dependencies between the label of a node and input features of other argument nodes. The features are specified by instantiation of templates and the value of a feature is the number of times a particular pattern occurs in the labeled tree. Templates For a tree t, predicate v, and joint assignment L of labels to the nodes of the tree, we define the candidate argument sequence as the sequence of nonNONE labeled nodes [n1, l1, . . . , vP RED, nm, lm] (li is the label of node ni). A reasonable candidate argument sequence usually contains very few of the nodes in the tree – about 2 to 7 nodes, as this is the typical number of arguments for a verb. To make it more convenient to express our feature templates, we include the predicate node v in the sequence. This sequence of labeled nodes is defined with respect to the left-to-right order of constituents in the parse tree. Since non-NONE labeled nodes do not overlap, there is a strict left-to-right order among these nodes. The candidate argument sequence that corresponds to the correct assignment in Figure 1 will be: [NP1-ARG1,VBD1-PRED,PP1-ARG4,NP3-ARGM-TMP] Features from Local Models: All features included in the local models are also included in our joint models. In particular, each template for local features is included as a joint template that concatenates the local template and the node label. 
For example, for the local feature PATH, we define a joint feature template, that extracts PATH from every node in the candidate argument sequence and concatenates it with the label of the node. Both a feature with the specific argument label is created and a feature with the generic back-off ARG label. This is similar to adding features from identification and classification models. In the case of the example candidate argument sequence above, for the node NP1 we have the features: (NP↑S↓)-ARG1, (NP↑S↓)-ARG When comparing a local and a joint model, we use the same set of local feature templates in the two models. Whole Label Sequence: As observed in previous work (Gildea and Jurafsky, 2002; Pradhan et al., 2004), including information about the set or sequence of labels assigned to argument nodes should be very helpful for disambiguation. For example, including such information will make the model less likely to pick multiple fillers for the same role or to come up with a labeling that does not contain an obligatory argument. We added a whole label sequence feature template that extracts the labels of all argument nodes, and preserves information about the position of the predicate. The template also includes information about the voice of the predicate. For example, this template will be instantiated as follows for the example candidate argument sequence: [ voice:active ARG1,PRED,ARG4,ARGM-TMP] We also add a variant of this feature which uses a generic ARG label instead of specific labels. This feature template has the effect of counting the number of arguments to the left and right of the predicate, which provides useful global information about argument structure. As previously observed (Pradhan et al., 2004), including modifying arguments in sequence features is not helpful. This was confirmed in our experiments and we redefined the whole label sequence features to exclude modifying arguments. One important variation of this feature uses the actual predicate lemma in addition to “voice:active”. Additionally, we define variations of these feature templates that concatenate the label sequence with features of individual nodes. We experimented with 593 variations, and found that including the phrase type and the head of a directly dominating PP – if one exists – was most helpful. We also add a feature that detects repetitions of the same label in a candidate argument sequence, together with the phrase types of the nodes labeled with that label. For example, (NP-ARG0,WHNP-ARG0) is a common pattern of this form. Frame Features: Another very effective class of features we defined are features that look at the label of a single argument node and internal features of other argument nodes. The idea of these features is to capture knowledge about the label of a constituent given the syntactic realization of all arguments of the verb. This is helpful to capture syntactic alternations, such as the dative alternation. For example, consider the sentence (i) “[Shaw Publishing]ARG0 offered [Mr. Smith]ARG2 [a reimbursement]ARG1 ” and the alternative realization (ii) “[Shaw Publishing]ARG0 offered [a reimbursement]ARG1 [to Mr. Smith]ARG2”. When classifying the NP in object position, it is useful to know whether the following argument is a PP. If yes, the NP will more likely be an ARG1, and if not, it will more likely be an ARG2. A feature template that captures such information extracts, for each argument node, its phrase type and label in the context of the phrase types for all other arguments. 
For example, the instantiation of such a template for [a reimbursement] in (ii) would be [ voice:active NP,PRED,NP-ARG1,PP] We also add a template that concatenates the identity of the predicate lemma itself. We should note that Xue and Palmer (2004) define a similar feature template, called syntactic frame, which often captures similar information. The important difference is that their template extracts contextual information from noun phrases surrounding the predicate, rather than from the sequence of argument nodes. Because our model is joint, we are able to use information about other argument nodes when labeling a node. Final Pipeline Here we describe the application in testing of a joint model for semantic role labeling, using a local model P ℓ SRL, and a joint re-ranking model P r SRL. P ℓ SRL is used to generate top N non-overlapping joint assignments L1, . . . , LN. One option is to select the best Li according to P r SRL, as in Equation 3, ignoring the score from the local model. In our experiments, we noticed that for larger values of N, the performance of our reranking model P r SRL decreased. This was probably due to the fact that at test time the local classifier produces very poor argument frames near the bottom of the top N for large N. Since the re-ranking model is trained on relatively few good argument frames, it cannot easily rule out very bad frames. It makes sense then to incorporate the local model into our final score. Our final score is given by: PSRL(L|t, v) = (P ℓ SRL(L|t, v))α P r SRL(L|t, v) where α is a tunable parameter 2 for how much influence the local score has in the final score. Such interpolation with a score from a first-pass model was also used for parse re-ranking in (Collins, 2000). Given this score, at test time we choose among the top N local assignments L1, . . . , LN according to: arg max L∈{L1,...,LN} α log P ℓ SRL(L|t, v) + log P r SRL(L|t, v) 5 Experiments and Results For our experiments we used the February 2004 release of PropBank. 3 As is standard, we used the annotations from sections 02–21 for training, 24 for development, and 23 for testing. As is done in some previous work on semantic role labeling, we discard the relatively infrequent discontinuous arguments from both the training and test sets. In addition to reporting the standard results on individual argument F-Measure, we also report Frame Accuracy (Acc.), the fraction of sentences for which we successfully label all nodes. There are reasons to prefer Frame Accuracy as a measure of performance over individual-argument statistics. Foremost, potential applications of role labeling may require correct labeling of all (or at least the core) arguments in a sentence in order to be effective, and partially correct labelings may not be very useful. 2We found α = 0.5 to work best 3Although the first official release of PropBank was recently released, we have not had time to test on it. 594 Task CORE ARGM F1 Acc. F1 Acc. Identification 95.1 84.0 95.2 80.5 Classification 96.0 93.3 93.6 85.6 Id+Classification 92.2 80.7 89.9 71.8 Table 2: Performance of local classifiers on identification, classification, and identification+classification on section 23, using gold-standard parse trees. N CORE ARGM F1 Acc. F1 Acc. 1 92.2 80.7 89.9 71.8 5 97.8 93.9 96.8 89.5 20 99.2 97.4 98.8 95.3 30 99.3 97.9 99.0 96.2 Table 3: Oracle upper bounds for performance on the complete identification+classification task, using varying numbers of top N joint labelings according to local classifiers. 
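Returning to the final pipeline above, the selection rule can be sketched as follows; the local log-probabilities, the joint feature vectors Phi(t, v, L), and the weight vector W are assumed to have been computed elsewhere, and the sparse-dictionary representation of features is only one possible choice.

import math

def select_labeling(top_n, w, alpha=0.5):
    """Choose among the top-N assignments produced by the local model.

    top_n : list of (local_log_prob, phi) pairs, where phi is a sparse feature
            vector (dict of feature name -> count) for Phi(t, v, L_j)
    w     : dict of feature name -> weight (the re-ranking parameter vector W)
    alpha : interpolation weight for the local model score
    """
    # Unnormalized re-ranking scores <Phi(t, v, L_j), W> for each candidate.
    dots = [sum(w.get(f, 0.0) * v for f, v in phi.items()) for _, phi in top_n]
    # Shared softmax denominator of Equation 3; it does not change the arg max,
    # but is kept to mirror the definition (a log-sum-exp would be numerically safer).
    log_z = math.log(sum(math.exp(d) for d in dots))

    best_i, best_score = None, float('-inf')
    for i, (local_log_prob, _) in enumerate(top_n):
        rerank_log_prob = dots[i] - log_z
        score = alpha * local_log_prob + rerank_log_prob
        if score > best_score:
            best_i, best_score = i, score
    return best_i

The returned index picks the labeling that maximizes alpha * log P_local + log P_rerank over the N candidate assignments, as in the arg max above.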
          CORE           ARGM
Model     F1     Acc.    F1     Acc.
Local     92.2   80.7    89.9   71.8
Joint     94.7   88.2    92.1   79.4
Table 4: Performance of local and joint models on identification+classification on section 23, using gold-standard parse trees.
We report results for two variations of the semantic role labeling task. For CORE, we identify and label only core arguments. For ARGM, we identify and label core as well as modifier arguments. We report results for local and joint models on argument identification, argument classification, and the complete identification and classification pipeline. Our local models use the features listed in Table 1 and the technique for enforcing the non-overlapping constraint discussed in Section 3.1. The labeling of the tree in Figure 1 is a specific example of the kind of errors fixed by the joint models. The local classifier labeled the first argument in the tree as ARG0 instead of ARG1, probably because an ARG0 label is more likely for the subject position. All joint models for these experiments used the whole label sequence and frame features. As can be seen from Table 4, our joint models achieve error reductions of 32% and 22% over our local models in F-Measure on CORE and ARGM respectively. With respect to the Frame Accuracy metric, the joint error reduction is 38% and 26% for CORE and ARGM respectively. We also report results on automatic parses (see Table 5). We trained and tested on automatic parse trees from Charniak's parser (Charniak, 2000). For approximately 5.6% of the argument constituents in the test set, we could not find exact matches in the automatic parses. Instead of discarding these arguments, we took the largest constituent in the automatic parse having the same head word as the gold-standard argument constituent. Also, 19 of the propositions in the test set were discarded because Charniak's parser altered the tokenization of the input sentence and tokens could not be aligned. As our results show, the error reduction of our joint model with respect to the local model is more modest in this setting. One reason for this is the lower upper bound, due largely to the much poorer performance of the identification model on automatic parses. For ARGM, the local identification model achieves 85.9 F-Measure and 59.4 Frame Accuracy; the local classification model achieves 92.3 F-Measure and 83.1 Frame Accuracy. It seems that the largest boost would come from features that can identify arguments in the presence of parser errors, rather than from the features of our joint model, which ensure global coherence of the argument frame. We still achieve 10.7% and 18.5% error reduction for CORE arguments in F-Measure and Frame Accuracy respectively.
          CORE           ARGM
Model     F1     Acc.    F1     Acc.
Local     84.1   66.5    81.4   55.6
Joint     85.8   72.7    82.9   60.8
Table 5: Performance of local and joint models on identification+classification on section 23, using automatically generated parse trees from Charniak's parser.
6 Related Work
Several semantic role labeling systems have successfully utilized joint information. Gildea and Jurafsky (2002) used the empirical probability of the set of proposed arguments as a prior distribution. Pradhan et al. (2004) train a language model over label sequences. Punyakanok et al. (2004) use a linear programming framework to ensure that the only argument frames which get probability mass are ones that respect global constraints on argument labels.
The key differences of our approach compared to previous work are that our model has all of the following properties: (i) we do not assume a finite Markov horizon for dependencies among node labels, (ii) we include features looking at the labels of multiple argument nodes and internal features of these nodes, and (iii) we train a discriminative model capable of incorporating these long-distance dependencies. 7 Conclusions Reflecting linguistic intuition and in line with current work, we have shown that there are substantial gains to be had by jointly modeling the argument frames of verbs. This is especially true when we model the dependencies with discriminative models capable of incorporating long-distance features. 8 Acknowledgements The authors would like to thank the reviewers for their helpful comments and Dan Jurafsky for his insightful suggestions and useful discussions. This work was supported in part by the Advanced Research and Development Activity (ARDA)’s Advanced Question Answering for Intelligence (AQUAINT) Program. References Collin Baker, Charles Fillmore, and John Lowe. 1998. The Berkeley Framenet project. In Proceedings of COLINGACL-1998. Xavier Carreras and Lu´ıs M`arquez. 2004. Introduction to the CoNLL-2004 shared task: Semantic role labeling. In Proceedings of CoNLL-2004. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of NAACL, pages 132–139. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proceedings of ICML-2000. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML-2001. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Martha Palmer, Dan Gildea, and Paul Kingsbury. 2003. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics. Sameer Pradhan, Wayne Ward, Kadri Hacioglu, James Martin, and Dan Jurafsky. 2004. Shallow semantic parsing using support vector machines. In Proceedings of HLT/NAACL2004. Sameer Pradhan, Kadri Hacioglu, Valerie Krugler, Wayne Ward, James Martin, and Dan Jurafsky. 2005. Support vector learning for semantic argument classification. Machine Learning Journal. Vasin Punyakanok, Dan Roth, Wen tau Yih, Dav Zimak, and Yuancheng Tu. 2004. Semantic role labeling via generalized inference over classifiers. In Proceedings of CoNLL-2004. Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument structures for information extraction. In Proceedings of ACL-2003. Cynthia A. Thompson, Roger Levy, and Christopher D. Manning. 2003. A generative model for semantic role labeling. In Proceedings of ECML-2003. Nianwen Xue and Martha Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of EMNLP-2004. 596 | 2005 | 73 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 597–604, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Paraphrasing with Bilingual Parallel Corpora Colin Bannard Chris Callison-Burch School of Informatics University of Edinburgh 2 Buccleuch Place Edinburgh, EH8 9LW {c.j.bannard, callison-burch}@ed.ac.uk Abstract Previous work has used monolingual parallel corpora to extract and generate paraphrases. We show that this task can be done using bilingual parallel corpora, a much more commonly available resource. Using alignment techniques from phrasebased statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments. 1 Introduction Paraphrases are alternative ways of conveying the same information. Paraphrases are useful in a number of NLP applications. In natural language generation the production of paraphrases allows for the creation of more varied and fluent text (Iordanskaja et al., 1991). In multidocument summarization the identification of paraphrases allows information repeated across documents to be condensed (McKeown et al., 2002). In the automatic evaluation of machine translation, paraphrases may help to alleviate problems presented by the fact that there are often alternative and equally valid ways of translating a text (Pang et al., 2003). In question answering, discovering paraphrased answers may provide additional evidence that an answer is correct (Ibrahim et al., 2003). In this paper we introduce a novel method for extracting paraphrases that uses bilingual parallel corpora. Past work (Barzilay and McKeown, 2001; Barzilay and Lee, 2003; Pang et al., 2003; Ibrahim et al., 2003) has examined the use of monolingual parallel corpora for paraphrase extraction. Examples of monolingual parallel corpora that have been used are multiple translations of classical French novels into English, and data created for machine translation evaluation methods such as Bleu (Papineni et al., 2002) which use multiple reference translations. While the results reported for these methods are impressive, their usefulness is limited by the scarcity of monolingual parallel corpora. Small data sets mean a limited number of paraphrases can be extracted. Furthermore, the narrow range of text genres available for monolingual parallel corpora limits the range of contexts in which the paraphrases can be used. Instead of relying on scarce monolingual parallel data, our method utilizes the abundance of bilingual parallel data that is available. This allows us to create a much larger inventory of phrases that is applicable to a wider range of texts. Our method for identifying paraphrases is an extension of recent work in phrase-based statistical machine translation (Koehn et al., 2003). The essence of our method is to align phrases in a bilingual parallel corpus, and equate different English phrases that are aligned with the same phrase in the other language. This assumption of similar mean597 Emma burst into tears and he tried to comfort her, saying things to make her smile. 
Emma cried, and he tried to console her, adorning his words with puns. Figure 1: Using a monolingal parallel corpus to extract paraphrases ing when multiple phrases map onto a single foreign language phrase is the converse of the assumption made in the word sense disambiguation work of Diab and Resnik (2002) which posits different word senses when a single English word maps onto different words in the foreign language (we return to this point in Section 4.4). The remainder of this paper is as follows: Section 2 contrasts our method for extracting paraphrases with the monolingual case, and describes how we rank the extracted paraphrases with a probability assignment. Section 3 describes our experimental setup and includes information about how phrases were selected, how we manually aligned parts of the bilingual corpus, and how we evaluated the paraphrases. Section 4 gives the results of our evaluation and gives a number of example paraphrases extracted with our technique. Section 5 reviews related work, and Section 6 discusses future directions. 2 Extracting paraphrases Much previous work on extracting paraphrases (Barzilay and McKeown, 2001; Barzilay and Lee, 2003; Pang et al., 2003) has focused on finding identifying contexts within aligned monolingual sentences from which divergent text can be extracted, and treated as paraphrases. Barzilay and McKeown (2001) gives the example shown in Figure 1 of how identical surrounding substrings can be used to extract the paraphrases of burst into tears as cried and comfort as console. While monolingual parallel corpora often have identical contexts that can be used for identifying paraphrases, bilingual parallel corpora do not. Instead, we use phrases in the other language as pivots: we look at what foreign language phrases the English translates to, find all occurrences of those foreign phrases, and then look back at what other English phrases they translate to. We treat the other English phrases as potential paraphrases. Figure 2 illustrates how a German phrase can be used as a point of identification for English paraphrases in this way. Section 2.1 explains which statistical machine translation techniques are used to align phrases within sentence pairs in a bilingual corpus. A significant difference between the present work and that employing monolingual parallel corpora, is that our method frequently extracts more than one possible paraphrase for each phrase. We assign a probability to each of the possible paraphrases. This is a mechanism for ranking paraphrases, which can be utilized when we come to select the correct paraphrase for a given context . Section 2.2 explains how we calculate the probability of a paraphrase. 2.1 Aligning phrase pairs We use phrase alignments in a parallel corpus as pivots between English paraphrases. We find these alignments using recent phrase-based approaches to statistical machine translation. The original formulation of statistical machine translation (Brown et al., 1993) was defined as a word-based operation. The probability that a foreign sentence is the translation of an English sentence is calculated by summing over the probabilities of all possible word-level alignments, a, between the sentences: p(f|e) = X a p(f, a|e) Thus Brown et al. decompose the problem of determining whether a sentence is a good translation of another into the problem of determining whether there is a sensible mapping between the words in the sentences. 
More recent approaches to statistical translation calculate the translation probability using larger blocks of aligned text. Koehn (2004), Tillmann (2003), and Vogel et al. (2003) describe various heuristics for extracting phrase alignments from the Viterbi word-level alignments that are estimated using Brown et al. (1993) models. We use the heuristic for phrase alignment described in Och and Ney (2003) which aligns phrases by incrementally building longer phrases from words and phrases which have adjacent alignment points.1 1Note that while we induce the translations of phrases from 598 what is more, the relevant cost dynamic is completely under control im übrigen ist die diesbezügliche kostenentwicklung völlig unter kontrolle we owe it to the taxpayers to keep in check the costs wir sind es den steuerzahlern die kosten zu haben schuldig unter kontrolle Figure 2: Using a bilingual parallel corpus to extract paraphrases 2.2 Assigning probabilities We define a paraphrase probability p(e2|e1) in terms of the translation model probabilities p(f|e1), that the original English phrase e1 translates as a particular phrase f in the other language, and p(e2|f), that the candidate paraphrase e2 translates as the foreign language phrase. Since e1 can translate as multiple foreign language phrases, we sum over f: ˆe2 = arg max e2̸=e1 p(e2|e1) (1) = arg max e2̸=e1 X f p(f|e1)p(e2|f) (2) The translation model probabilities can be computed using any standard formulation from phrasebased machine translation. For example, p(e|f) can be calculated straightforwardly using maximum likelihood estimation by counting how often the phrases e and f were aligned in the parallel corpus: p(e|f) = count(e, f) P e count(e, f) (3) Note that the paraphrase probability defined in Equation 2 returns the single best paraphrase, ˆe2, irrespective of the context in which e1 appears. Since the best paraphrase may vary depending on information about the sentence that e1 appears in, we extend the paraphrase probability to include that sentence S: ˆe2 = arg max e2̸=e1 p(e2|e1, S) (4) word-level alignments in this paper, direct estimation of phrasal translations (Marcu and Wong, 2002) would also suffice for extracting paraphrases from bilingual corpora. a million, as far as possible, at work, big business, carbon dioxide, central america, close to, concentrate on, crystal clear, do justice to, driving force, first half, for the first time, global warming, great care, green light, hard core, horn of africa, last resort, long ago, long run, military action, military force, moment of truth, new world, noise pollution, not to mention, nuclear power, on average, only too, other than, pick up, president clinton, public transport, quest for, red cross, red tape, socialist party, sooner or later, step up, task force, turn to, under control, vocational training, western sahara, world bank Table 1: Phrases that were selected to paraphrase S allows us to re-rank the candidate paraphrases based on additional contextual information. The experiments in this paper employ one variety of contextual information. We include a simple language model probability, which would additionally rank e2 based on the probability of the sentence formed by substiuting e2 for e1 in S. A possible extension which we do not evaluate might be permitting only paraphrases that are the same syntactic type as the original phrase, which we could do by extending the translation model probabilities to count only phrase occurrences of that type. 
3 Experimental Design We extracted 46 English phrases to paraphrase (shown in Table 1), randomly selected from those multi-word phrases in WordNet which also occured multiple times in the first 50,000 sentences of our bilingual corpus. The bilingual corpus that we used 599 Alignment Tool . kontrolle unter völlig kostenentwickl... diesbezügliche die ist übrigen im . control under completely is dynamic cost relevant the , more is what (a) Aligning the English phrase to be paraphrased haben zu kontrolle unter kosten die schuldig steuerzahlern den es sind wir . check in costs the keep to taxpayers the to it owe we Alignment Tool (b) Aligning occurrences of its German translation Figure 3: Phrases highlighted for manual alignment was the German-English section of the Europarl corpus, version 2 (Koehn, 2002). We produced automatic alignments for it with the Giza++ toolkit (Och and Ney, 2003). Because we wanted to test our method independently of the quality of word alignment algorithms, we also developed a gold standard of word alignments for the set of phrases that we wanted to paraphrase. 3.1 Manual alignment The gold standard alignments were created by highlighting all occurrences of the English phrase to paraphrase and manually aligning it with its German equivalent by correcting the automatic alignment, as shown in Figure 3a. All occurrences of its German equivalents were then highlighted, and aligned with their English translations (Figure 3b). The other words in the sentences were left with their automatic alignments. 3.2 Paraphrase evaluation We evaluated the accuracy of each of the paraphrases that was extracted from the manually aligned data, as well as the top ranked paraphrases from the experimental conditions detailed below in Section 3.3. Because the acccuracy of paraphrases can vary depending on context, we substituted each Under control This situation is in check in terms of security. This situation is checked in terms of security. This situation is curbed in terms of security. This situation is curb in terms of security. This situation is limit in terms of security. This situation is slow down in terms of security. Figure 4: Paraphrases substituted in for the original phrase set of candidate paraphrases into between 2–10 sentences which contained the original phrase. Figure 4 shows the paraphrases for under control substituted into one of the sentences in which it occurred. We created a total of 289 such evaluation sets, with a total of 1366 unique sentences created through substitution. We had two native English speakers produce judgments as to whether the new sentences preserved the meaning of the original phrase and as to whether they remained grammatical. Paraphrases that were judged to preserve both meaning and grammaticality were considered to be correct, and examples which failed on either judgment were considered to be incorrect. 
In Figure 4, in check, checked, and curbed were judged to be correct and curb, limit and slow down were judged to be incorrect. The inter-annotator agreement for these judgements was measured at κ = 0.605, which is conventionally interpreted as "good" agreement.
under control      checked, curb, curbed, in check, limit, slow down
sooner or later    at some point, eventually
military force     armed forces, defence, force, forces, military forces, peace-keeping personnel
long ago           a little time ago, a long time, a long time ago, a lot of time, a while ago, a while back, far, for a long time, for some time, for such a long time, long, long period of time, long term, long time, long while, overdue, some time, some time ago
green light        approval, call, go-ahead, indication, message, sign, signal, signals, formal go-ahead
great care         a careful approach, greater emphasis, particular attention, special attention, specific attention, very careful
first half         first six months
crystal clear      absolutely clear, all clarity, clear, clearly, in great detail, no mistake, no uncertain, obvious, obviously, particularly clear, perfectly clear, quite clear, quite clearly, quite explicitly, quite openly, very clear, very clear and comprehensive, very clearly, very sure, very unclear, very well
carbon dioxide     co2
at work            at the workplace, employment, held, holding, in the work sphere, operate, organised, taken place, took place, working
Table 2: Paraphrases extracted from a manually word-aligned parallel corpus
3.3 Experiments
We evaluated the accuracy of top ranked paraphrases when the paraphrase probability was calculated using:
1. The manual alignments,
2. The automatic alignments,
3. Automatic alignments produced over multiple corpora in different languages,
4. All of the above with language model re-ranking,
5. All of the above with the candidate paraphrases limited to the same sense as the original phrase.
4 Results
We report the percentage of correct translations (accuracy) for each of these experimental conditions. A summary of these can be seen in Table 3. This section will describe each of the set-ups and the score reported in more detail.
4.1 Manual alignments
Table 2 gives a set of example paraphrases extracted from the gold standard alignments. The italicized paraphrases are those that were assigned the highest probability by Equation 2, which chooses a single best paraphrase without regard for context. The 289 sentences created by substituting the italicized paraphrases in for the original phrase were judged to be correct an average of 74.9% of the time. Ignoring the constraint that the new sentences remain grammatically correct, these paraphrases were judged to have the correct meaning 84.7% of the time. This suggests that the context plays a more important role with respect to the grammaticality of substituted paraphrases than with respect to their meaning. In order to allow the surrounding words in the sentence to have an influence on which paraphrase was selected, we re-ranked the paraphrase probabilities based on a trigram language model trained on the entire English portion of the Europarl corpus. Paraphrases were selected from among all those in Table 2, and not constrained to the italicized phrases.
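The language model re-ranking just described can be sketched as follows. The trigram scorer is assumed to come from whatever toolkit is at hand, the substitution is deliberately naive, and combining the two scores as a sum of logs is an illustrative choice rather than a detail taken from the text.

import math

def lm_rerank(sentence, phrase, candidates, lm_log_prob):
    """Choose the candidate paraphrase for `phrase` that best fits `sentence`.

    candidates  : dict mapping candidate paraphrase e2 -> p(e2 | e1)
    lm_log_prob : function returning a trigram language model log-probability
                  for a string (assumed to be supplied by an external toolkit)
    """
    best, best_score = None, float('-inf')
    for e2, p in candidates.items():
        new_sentence = sentence.replace(phrase, e2, 1)   # naive substitution
        # Sum of log paraphrase probability and LM log-probability; the exact
        # combination is an assumption made for illustration.
        score = math.log(p) + lm_log_prob(new_sentence)
        if score > best_score:
            best, best_score = e2, score
    return best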
In the case of the paraphrases extracted from the manual word alignments, the language model re-ranking had virtually no influence, and resulted in a slight dip in accuracy to 71.7% 601 Paraphrase Prob Paraphrase Prob & LM Correct Meaning Manual Alignments 74.9 71.7 84.7 Automatic Alignments 48.9 55.3 64.5 Using Multiple Corpora 55.0 57.4 65.4 Word Sense Controlled 57.0 61.9 70.4 Table 3: Paraphrase accuracy and correct meaning for the different data conditions 4.2 Automatic alignments In this experimental condition paraphrases were extracted from a set of automatic alignments produced by running Giza++ over a set of 1,036,000 GermanEnglish sentence pairs (roughly 28,000,000 words in each language). When the single best paraphrase (irrespective of context) was used in place of the original phrase in the evaluation sentence the accuracy reached 48.9% which is quite low compared to the 74.9% of the manually aligned set. As with the manual alignments it seems that we are selecting phrases which have the correct meaning but are not grammatical in context. Indeed our judges thought the meaning of the paraphrases to be correct in 64.5% of cases. Using a language model to select the best paraphrase given the context reduces the number of ungrammatical examples and gives an improvement in quality from 48.9% to 55.3% correct. These results suggest two things: that improving the quality of automatic alignments would lead to more accurate paraphrases, and that there is room for improvement in limiting the paraphrases by their context. We address these points below. 4.3 Using multiple corpora Work in statistical machine translation suggests that, like many other machine learning problems, performance increases as the amount of training data increases. Och and Ney (2003) show that the accuracy of alignments produced by Giza++ improve as the size of the training corpus increases. Since we used the whole of the German-English section of the Europarl corpus, we could not try improving the alignments by simply adding more German-English training data. However, there is nothing that limits our paraphrase extraction method to drawing on candidate paraphrases from a single target language. We therefore re-formulated the paraphrase probability to include multiple corpora, as follows: ˆe2 = arg max e2̸=e1 X C X f in C p(f|e1)p(e2|f) (5) where C is a parallel corpus from a set of parallel corpora. For this condition we used Giza++ to align the French-English, Spanish-English, and ItalianEnglish portions of the Europarl corpus in addition to the German-English portion, for a total of around 4,000,000 sentence pairs in the training data. The accuracy of paraphrases extracted over multiple corpora increased to 55%, and further to 57.4% when the language model re-ranking was included. 4.4 Controlling for word sense As mentioned in Section 1, the way that we extract paraphrases is the converse of the methodology employed in word sense disambiguation work that uses parallel corpora (Diab and Resnik, 2002). The assumption made in the word sense disambiguation work is that if a source language word aligns with different target language words then those words may represent different word senses. This can be observed in the paraphrases for at work in Table 2. The paraphrases at the workplace, employment, and in the work sphere are a different sense of the phrase than operate, held, and holding, and they are aligned with different German phrases. 
When we calculate the paraphrase probability we sum over different target language phrases. Therefore the English phrases that are aligned with the different German phrases (which themselves maybe indicative of different word senses) are mingled. Performance may be degraded since paraphrases that reflect different senses of the original phrase, and which therefore have a different meaning, are included in the same candidate set. 602 We therefore performed an experiment to see whether improvement could be had by limiting the candidate paraphrases to be the same sense as the original phrase in each test sentence. To do this, we used the fact that our test sentences were drawn from a parallel corpus. We limited phrases to the same word sense by constraining the candidate paraphrases to those that aligned with the same target language phrase. Our basic paraphrase calculation was therefore: p(e2|e1, f) = p(f|e1)p(e2|f) (6) Using the foreign language phrase to identify the word sense is obviously not applicable in monolingual settings, but acts as a convenient stand-in for a proper word sense disambiguation algorithm here. When word sense is controlled in this way, the accuracy of the paraphrases extracted from the automatic alignments raises dramatically from 48.9% to 57% without language model re-ranking, and further to 61.9% when language model re-ranking was included. 5 Related Work Barzilay and McKeown (2001) extract both singleand multiple-word paraphrases from a monolingual parallel corpus. They co-train a classifier to identify whether two phrases were paraphrases of each other based on their surrounding context. Two disadvantages of this method are that it requires identical bounding substrings, and has bias towards single words. For an evaluation set of 500 paraphrases, they report an average precision of 86% at identifying paraphrases out of context, and of 91% when the paraphrases are substituted into the original context of the aligned sentence. The results of our systems are not directly comparable, since Barzilay and McKeown (2001) evaluated their paraphrases with a different set of criteria (they asked judges whether to judge paraphrases based on “approximate conceptual equivalence”). Furthermore, their evaluation was carried out only by substituting the paraphrase in for the phrase with the identical context, and not in for arbitrary occurrences of the original phrase, as we have done. Lin and Pantel (2001) use a standard (nonparallel) monolingual corpus to generate paraphrases, based on dependancy graphs and distributional similarity. One strong disadvantage of this method is that their paraphrases can also have opposite meanings. Ibrahim et al. (2003) combine the two approaches: aligned monolingual corpora and parsing. They evaluated their system with human judges who were asked whether the paraphrases were “roughly interchangeable given the genre”, scored an average of 41% on a set of 130 paraphrases, with the judges all agreeing 75% of the time, and a correlation of 0.66. The shortcomings of this method are that it is dependent upon parse quality, and is limited by the rareness of the data. Pang et al. (2003) use parse trees over sentences in monolingual parallel corpus to identify paraphrases by grouping similar syntactic constituents. They use heuristics such as keyword checking to limit the over-application of this method. Our alignment method might be an improvement of their heuristics for choosing which constituents ought to be grouped. 
6 Discussion and Future Work In this paper we have introduced a novel method for extracting paraphrases, which we believe greatly increases the usefulness of paraphrasing in NLP applications. The advantages of our method are that it: • Produces a ranked list of high quality paraphrases with associated probabilities, from which the best paraphrase can be chosen according to the target context. We have shown how a language model can be used to select the best paraphrase for a particular context from this list. • Straightforwardly handles multi-word units. Whereas for previous approaches the evaluation has been performed over mostly single word paraphrases, our results are reported exclusively over units of between 2 and 4 words. • Because we use a much more abundant source of data, our method can be used for a much wider range of text genres than previous approaches, namely any for which parallel data is available. 603 One crucial thing to note is that we have demonstrated our paraphrases to be of higher quality when the alignments used to produce them are improved. This means that our method will reap the benefits of research that improvements to automatic alignment techniques (Callison-Burch et al., 2004), and will further improve as more parallel data becomes available. In the future we plan to: • Investigate whether our re-ranking can be further improved by using a syntax-based language model. • Formulate a paraphrase probability for sentential paraphrases, and use this to try to identify paraphrases across documents in order to condense information for multi-document summarization. • See whether paraphrases can be used to increase coverage for statistical machine translation when translating into “low-density” languages which have small parallel corpora. Acknowledgments The authors would like to thank Beatrice Alex, Marco Kuhlmann, and Josh Schroeder for their valuable input as well as their time spent annotating and contributing to the software. References Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiplesequence alignment. In Proceedings of HLT/NAACL. Regina Barzilay and Kathleen McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proceedings of ACL. Peter Brown, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer. 1993. The mathematics of machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311, June. Chris Callison-Burch, David Talbot, and Miles Osborne. 2004. Statistical machine translation with word- and sentence-aligned parallel corpora. In Proceedings of ACL. Mona Diab and Philip Resnik. 2002. An unsupervised method for word sense tagging using parallel corpora. In Proceedings of ACL. Ali Ibrahim, Boris Katz, and Jimmy Lin. 2003. Extracting structural paraphrases from aligned monolingual corpora. In Proceedings of the Second International Workshop on Paraphrasing (ACL 2003). Lidija Iordanskaja, Richard Kittredge, and Alain Polg´ere. 1991. Lexical selection and paraphrase in a meaningtext generation model. In C´ecile L. Paris, William R. Swartout, and William C. Mann, editors, Natural Language Generation in Artificial Intelligence and Computational Linguistics. Kluwer Academic. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT/NAACL. Philipp Koehn. 2002. Europarl: A multilingual corpus for evaluation of machine translation. Unpublished Draft. Philipp Koehn. 2004. 
Pharaoh: A beam search decoder for phrase-based statistical machine translation models. In Proceedings of AMTA. Dekang Lin and Patrick Pantel. 2001. DIRT - discovery of inference rules from text. In Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining. Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proceedings of EMNLP. Kathleen R. McKeown, Regina Barzilay, David Evans, Vasileios Hatzivassiloglou, Judith L. Klavans, Ani Nenkova, Carl Sable, Barry Schiffman, and Sergey Sigelman. 2002. Tracking and summarizing news on a daily basis with Columbia’s Newsblaster. In Proceedings of the Human Language Technology Conference. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51, March. Bo Pang, Kevin Knight, and Daniel Marcu. 2003. Syntax-based alignment of multiple translations: Extracting paraphrases and generating new sentences. In Proceedings of HLT/NAACL. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of ACL. Christoph Tillmann. 2003. A projection extension algorithm for statistical machine translation. In Proceedings of EMNLP. Stephan Vogel, Ying Zhang, Fei Huang, Alicia Tribble, Ashish Venugopal, Bing Zhao, and Alex Waibel. 2003. The CMU statistical machine translation system. In Proceedings of MT Summit 9. 604 | 2005 | 74 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 605–613, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics A Nonparametric Method for Extraction of Candidate Phrasal Terms Paul Deane Center for Assessment, Design and Scoring Educational Testing Service [email protected] Abstract This paper introduces a new method for identifying candidate phrasal terms (also known as multiword units) which applies a nonparametric, rank-based heuristic measure. Evaluation of this measure, the mutual rank ratio metric, shows that it produces better results than standard statistical measures when applied to this task. 1 Introduction The ordinary vocabulary of a language like English contains thousands of phrasal terms -- multiword lexical units including compound nouns, technical terms, idioms, and fixed collocations. The exact number of phrasal terms is difficult to determine, as new ones are coined regularly, and it is sometimes difficult to determine whether a phrase is a fixed term or a regular, compositional expression. Accurate identification of phrasal terms is important in a variety of contexts, including natural language parsing, question answering systems, information retrieval systems, among others. Insofar as phrasal terms function as lexical units, their component words tend to cooccur more often, to resist substitution or paraphrase, to follow fixed syntactic patterns, and to display some degree of semantic noncompositionality (Manning, 1999:183-186). However, none of these characteristics are amenable to a simple algorithmic interpretation. It is true that various term extraction systems have been developed, such as Xtract (Smadja 1993), Termight (Dagan & Church 1994), and TERMS (Justeson & Katz 1995) among others (cf. Daille 1996, Jacquemin & Tzoukermann 1994, Jacquemin, Klavans, & Toukermann 1997, Boguraev & Kennedy 1999, Lin 2001). Such systems typically rely on a combination of linguistic knowledge and statistical association measures. Grammatical patterns, such as adjective-noun or noun-noun sequences are selected then ranked statistically, and the resulting ranked list is either used directly or submitted for manual filtering. The linguistic filters used in typical term extraction systems have no obvious connection with the criteria that linguists would argue define a phrasal term (noncompositionality, fixed order, nonsubstitutability, etc.). They function, instead, to reduce the number of a priori improbable terms and thus improve precision. The association measure does the actual work of distinguishing between terms and plausible nonterms. A variety of methods have been applied, ranging from simple frequency (Justeson & Katz 1995), modified frequency measures such as c-values (Frantzi, Anadiou & Mima 2000, Maynard & Anadiou 2000) and standard statistical significance tests such as the t-test, the chi-squared test, and loglikelihood (Church and Hanks 1990, Dunning 1993), and information-based methods, e.g. pointwise mutual information (Church & Hanks 1990). Several studies of the performance of lexical association metrics suggest significant room for improvement, but also variability among tasks. 
One series of studies (Krenn 1998, 2000; Evert & Krenn 2001, Krenn & Evert 2001; also see Evert 2004) focused on the use of association metrics to identify the best candidates in particular grammatical constructions, such as adjective-noun pairs or verb plus prepositional phrase constructions, and compared the performance of simple frequency to several common measures (the log-likelihood, the t-test, the chi-squared test, the dice coefficient, relative entropy and mutual information). In Krenn & Evert 2001, frequency outperformed mutual information though not the ttest, while in Evert and Krenn 2001, log-likelihood and the t-test gave the best results, and mutual information again performed worse than frequency. However, in all these studies performance was generally low, with precision falling rapidly after the very highest ranked phrases in the list. By contrast, Schone and Jurafsky (2001) evaluate the identification of phrasal terms without grammatical filtering on a 6.7 million word extract from the TREC databases, applying both WordNet and online dictionaries as gold standards. Once again, the general level of performance was low, with precision falling off rapidly as larger portions 605 of the n-best list were included, but they report better performance with statistical and information theoretic measures (including mutual information) than with frequency. The overall pattern appears to be one where lexical association measures in general have very low precision and recall on unfiltered data, but perform far better when combined with other features which select linguistic patterns likely to function as phrasal terms. The relatively low precision of lexical association measures on unfiltered data no doubt has multiple explanations, but a logical candidate is the failure or inappropriacy of underlying statistical assumptions. For instance, many of the tests assume a normal distribution, despite the highly skewed nature of natural language frequency distributions, though this is not the most important consideration except at very low n (cf. Moore 2004, Evert 2004, ch. 4). More importantly, statistical and information-based metrics such as the log-likelihood and mutual information measure significance or informativeness relative to the assumption that the selection of component terms is statistically independent. But of course the possibilities for combinations of words are anything but random and independent. Use of linguistic filters such as "attributive adjective followed by noun" or "verb plus modifying prepositional phrase" arguably has the effect of selecting a subset of the language for which the standard null hypothesis -- that any word may freely be combined with any other word -- may be much more accurate. Additionally, many of the association measures are defined only for bigrams, and do not generalize well to phrasal terms of varying length. The purpose of this paper is to explore whether the identification of candidate phrasal terms can be improved by adopting a heuristic which seeks to take certain of these statistical issues into account. The method to be presented here, the mutual rank ratio, is a nonparametric rank-based approach which appears to perform significantly better than the standard association metrics. The body of the paper is organized as follows: Section 2 will introduce the statistical considerations which provide a rationale for the mutual rank ratio heuristic and outline how it is calculated. 
Section 3 will present the data sources and evaluation methodologies applied in the rest of the paper. Section 4 will evaluate the mutual rank ratio statistic and several other lexical association measures on a larger corpus than has been used in previous evaluations. As will be shown below, the mutual rank ratio statistic recognizes phrasal terms more effectively than standard statistical measures. 2 Statistical considerations 2.1 Highly skewed distributions As first observed e.g. by Zipf (1935, 1949) the frequency of words and other linguistic units tend to follow highly skewed distributions in which there are a large number of rare events. Zipf's formulation of this relationship for single word frequency distributions (Zipf's first law) postulates that the frequency of a word is inversely proportional to its rank in the frequency distribution, or more generally if we rank words by frequency and assign rank z, where the function fz(z,N) gives the frequency of rank z for a sample of size N, Zipf's first law states that: fz(z,N) = C zα where C is a normalizing constant and α is a free parameter that determines the exact degree of skew; typically with single word frequency data, α approximates 1 (Baayen 2001: 14). Ideally, an association metric would be designed to maximize its statistical validity with respect to the distribution which underlies natural language text -- which is if not a pure Zipfian distribution at least an LNRE (large number of rare events, cf. Baayen 2001) distribution with a very long tail, containing events which differ in probability by many orders of magnitude. Unfortunately, research on LNRE distributions focuses primarily on unigram distributions, and generalizations to bigram and ngram distributions on large corpora are not as yet clearly feasible (Baayen 2001:221). Yet many of the best-performing lexical association measures, such as the t-test, assume normal distributions, (cf. Dunning 1993) or else (as with mutual information) eschew significance testing in favor of a generic information-theoretic approach. Various strategies could be adopted in this situation: finding a better model of the distribution,or adopting a nonparametric method. 2.2 The independence assumption Even more importantly, many of the standard lexical association measures measure significance (or information content) against the default assumption that word-choices are statistically independent events. This assumption is built into the highest-performing measures as observed in Evert & Krenn 2001, Krenn & Evert 2001 and Schone & Jurafsky 2001. This is of course untrue, and justifiable only as a simplifying idealization in the absence of a better model. The actual probability of any sequence of words is strongly influenced by the base grammatical and semantic structure of language, particularly since phrasal terms usually conform to 606 the normal rules of linguistic structure. What makes a compound noun, or a verb-particle construction, into a phrasal term is not deviation from the base grammatical pattern for noun-noun or verb-particle structures, but rather a further pattern (of meaning and usage and thus heightened frequency) superimposed on the normal linguistic base. There are, of course, entirely aberrant phrasal terms, but they constitute the exception rather than the rule. 
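To make the role of the independence assumption concrete, the following small sketch (with invented counts) shows how the expected frequency of a bigram under the null hypothesis of independent word choice enters two typical association scores, pointwise mutual information and the t-score.

import math

def independence_scores(count_w1, count_w2, count_bigram, n_tokens):
    """Expected bigram count under the independence null hypothesis, and two
    association scores that measure departure from that expectation."""
    expected = n_tokens * (count_w1 / n_tokens) * (count_w2 / n_tokens)
    pmi = math.log2(count_bigram / expected)
    t_score = (count_bigram - expected) / math.sqrt(count_bigram)
    return expected, pmi, t_score

# Invented counts for illustration: a bigram such as "hard core" in a
# ten-million-token corpus.
print(independence_scores(count_w1=2000, count_w2=1500, count_bigram=120,
                          n_tokens=10_000_000))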
This state of affairs poses something of a chicken-and-the-egg problem, in that statistical parsing models have to estimate probabilities from the same base data as the lexical association measures, so the usual heuristic solution as noted above is to impose a linguistic filter on the data, with the association measures being applied only to the subset thus selected. The result is in effect a constrained statistical model in which the independence assumption is much more accurate. For instance, if the universe of statistical possibilities is restricted to the set of sequences in which an adjective is followed by a noun, the null hypothesis that word choice is independent -- i.e., that any adjective may precede any noun -- is a reasonable idealization. Without filtering, the independence assumption yields the much less plausible null hypothesis that any word may appear in any order. It is thus worth considering whether there are any ways to bring additional information to bear on the problem of recognizing phrasal terms without presupposing statistical independence.
2.3 Variable length; alternative/overlapping phrases
Phrasal terms vary in length. Typically they range from about two to six words in length, but critically we cannot judge whether a phrase is lexical without considering both shorter and longer sequences. That is, the statistical comparison that needs to be made must apply in principle to the entire set of word sequences that must be distinguished from phrasal terms, including longer sequences, subsequences, and overlapping sequences, despite the fact that these are not statistically independent events. Of the association metrics mentioned thus far, only the C-Value method attempts to take direct notice of such word sequence information, and then only as a modification to the basic information provided by frequency. Any solution to the problem of variable length must enable normalization allowing direct comparison of phrases of different length. Ideally, the solution would also address the other issues -- the independence assumption and the skewed distributions typical of natural language data.
2.4 Mutual expectation
An interesting proposal which seeks to overcome the variable-length issue is the mutual expectation metric presented in Dias, Guilloré, and Lopes (1999) and implemented in the SENTA system (Gil and Dias 2003a). In their approach, the frequency of a phrase is normalized by taking into account the relative probability of each word compared to the phrase. Dias, Guilloré, and Lopes take as the foundation of their approach the idea that the cohesiveness of a text unit can be measured by measuring how strongly it resists the loss of any component term. This is implemented by considering, for any n-gram, the set of [continuous or discontinuous] (n-1)-grams which can be formed by deleting one word from the n-gram. A normalized expectation for the n-gram is then calculated as follows:

   p([w1, w2 ... wn]) / FPE([w1, w2 ... wn])

where [w1, w2 ... wn] is the phrase being evaluated and FPE([w1, w2 ... wn]) is:

   (1/n) * Σ_{i=1..n} p([w1 ... ŵi ... wn])

where ŵi indicates that wi is the term omitted from the n-gram. They then calculate mutual expectation as the product of the probability of the n-gram and its normalized expectation. This statistic is of interest for two reasons: first, it provides a single statistic that can be applied to n-grams of any length; second, it is not based upon the independence assumption.
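A minimal sketch of this computation, with probabilities taken to be relative frequencies from an assumed n-gram count table; for simplicity the deleted-word (n-1)-grams are represented here as plain contiguous tuples, whereas Dias, Guilloré, and Lopes also allow discontinuous ones.

def normalized_expectation(ngram, prob):
    """Normalized expectation: the n-gram's probability divided by the average
    probability of the (n-1)-grams formed by deleting one word at a time."""
    n = len(ngram)
    fpe = sum(prob(ngram[:i] + ngram[i + 1:]) for i in range(n)) / n
    return prob(ngram) / fpe if fpe > 0 else 0.0

def mutual_expectation(ngram, prob):
    # Product of the n-gram's probability and its normalized expectation.
    return prob(ngram) * normalized_expectation(ngram, prob)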
The core statistic, normalized expectation, is essentially frequency with a penalty if a phrase contains component parts significantly more frequent than the phrase itself. It is of course an empirical question how well mutual expectation performs (and we shall examine this below), but mutual expectation is not in any sense a significance test. That is, if we are examining a phrase like the east end, the conditional probability of east given [__ end] or of end given [east __] may be relatively low (since other words can appear in those contexts) and yet the phrase might still be very lexicalized if the association of both words with this context were significantly stronger than their association for other phrases. That is, to the extent that phrasal terms follow the regular patterns of the language, a phrase might have a relatively low conditional probability (given the wide range of alternative phrases following the same basic linguistic patterns) and thus have a low mutual expectation yet still occur far more often than one would expect from chance. In short, the fundamental insight -- assessing how tightly each word is bound to a phrase -- is worth adopting. There is, however, good reason to suspect that one could improve on this method by assessing relative statistical significance for each component word without making the independence assumption. In the heuristic to be outlined below, a nonparametric method is proposed. This method is novel: not a modification of mutual expectation, but a new technique based on ranks in a Zipfian frequency distribution.

2.5 Rank ratios and mutual rank ratios

This technique can be justified as follows. For each component word in the n-gram, we want to know whether the n-gram is more probable for that word than we would expect given its behavior with other words. Since we do not know what the expected shape of this distribution is going to be, a nonparametric method using ranks is in order, and there is some reason to think that frequency rank regardless of n-gram size will be useful. In particular, Ha, Sicilia-Garcia, Ming and Smith (2002) show that Zipf's law can be extended to the combined frequency distribution of n-grams of varying length up to rank 6, which entails that the relative rank of words in such a combined distribution provides a useful estimate of relative probability. The availability of new techniques for handling large sets of n-gram data (e.g. Gil & Dias 2003b) makes this a relatively feasible task.

Thus, given a phrase like east end, we can rank how often __ end appears with east in comparison to how often other phrases appear with east. That is, if {__ end, __ side, the __, toward the __, etc.} is the set of (variable length) n-gram contexts associated with east (up to a length cutoff), then the actual rank of __ end is the rank we calculate by ordering all contexts by the frequency with which the actual word appears in the context. We also rank the set of contexts associated with east by their overall corpus frequency. The resulting ranking is the expected rank of __ end based upon how often the competing contexts appear regardless of which word fills the context. The rank ratio (RR) for the word given the context can then be defined as:

RR(word, context) = ER(word, context) / AR(word, context)

where ER is the expected rank and AR is the actual rank. A normalized, or mutual rank ratio, for the n-gram can then be defined as:

MRR([w1, w2 ... wn]) = ( RR(w1, [__ w2 ... wn]) * RR(w2, [w1 __ ... wn]) * ... * RR(wn, [w1 w2 ... __]) )^(1/n)
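A toy-scale sketch of these definitions for a single bigram follows; the context inventories and frequencies are invented for illustration, and ties are broken by the median-rank convention described below:

```python
from math import prod
from typing import Dict, List

def ranks_with_ties(items: List[str], score: Dict[str, float]) -> Dict[str, float]:
    """Rank items by descending score; tied items share the median of the
    ranks they occupy (e.g. a tie over ranks 2 and 3 yields 2.5)."""
    ordered = sorted(items, key=lambda x: -score[x])
    out, i = {}, 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and score[ordered[j]] == score[ordered[i]]:
            j += 1
        median = (i + 1 + j) / 2          # ranks i+1 .. j, 1-based
        for k in range(i, j):
            out[ordered[k]] = median
        i = j
    return out

def rank_ratio(word_ctx_freq: Dict[str, float], ctx_freq: Dict[str, float],
               context: str) -> float:
    """RR = expected rank / actual rank for one word and one context."""
    contexts = list(word_ctx_freq)
    actual = ranks_with_ties(contexts, word_ctx_freq)   # ranked by frequency with the word
    expected = ranks_with_ties(contexts, ctx_freq)      # ranked by overall corpus frequency
    return expected[context] / actual[context]

# Invented frequencies for the bigram "east end".
ctx_freq_east = {"_ end": 3000, "_ side": 9000, "the _": 500000, "toward the _": 20000}
east_ctx_freq = {"_ end": 120, "_ side": 80, "the _": 400, "toward the _": 15}
ctx_freq_end = {"east _": 2500, "west _": 2400, "the _": 500000, "at the _": 40000}
end_ctx_freq = {"east _": 120, "west _": 90, "the _": 800, "at the _": 60}

rr_east = rank_ratio(east_ctx_freq, ctx_freq_east, "_ end")
rr_end = rank_ratio(end_ctx_freq, ctx_freq_end, "east _")
mutual_rank_ratio = prod([rr_east, rr_end]) ** (1 / 2)  # geometric mean for a bigram
print(rr_east, rr_end, mutual_rank_ratio)
```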
The motivation for this method is that it attempts to address each of the major issues outlined above by providing a nonparametric metric which does not make the independence assumption and allows scores to be compared across n-grams of different lengths.

A few notes about the details of the method are in order. Actual ranks are assigned by listing all the contexts associated with each word in the corpus, and then ranking contexts by word, assigning the most frequent context for a word rank 1, the next most frequent rank 2, etc. Tied ranks are given the median value for the ranks occupied by the tie, e.g., if two contexts with the same frequency would occupy ranks 2 and 3, they are both assigned rank 2.5. Expected ranks are calculated for the same set of contexts using the same algorithm, but substituting the unconditional frequency of the (n-1)-gram for the gram's frequency with the target word. [Footnote 1: In this study the rank-ratio method was tested for bigrams and trigrams only, due to the small number of WordNet gold standard items greater than two words in length. Work in progress will assess the metrics' performance on n-grams of orders four through six.]

3 Data sources and methodology

The Lexile Corpus is a collection of documents covering a wide range of reading materials such as a child might encounter at school, more or less evenly divided by Lexile (reading level) rating to cover all levels of textual complexity from kindergarten to college. It contains in excess of 400 million words of running text, and has been made available to the Educational Testing Service under a research license by Metametrics Corporation. This corpus was tokenized using an in-house tokenization program, toksent, which treats most punctuation marks as separate tokens but makes single tokens out of common abbreviations, numbers like 1,500, and words like o'clock. It should be noted that some of the association measures are known to perform poorly if punctuation marks and common stopwords are included; therefore, n-gram sequences containing punctuation marks and the 160 most frequent word forms were excluded from the analysis so as not to bias the results against them. Separate lists of bigrams and trigrams were extracted and ranked according to several standard word association metrics. Rank ratios were calculated from a comparison set consisting of all contexts derived by this method from bigrams and trigrams, e.g., contexts of the form word1 __, __ word2, __ word1 word2, word1 __ word3, and word1 word2 __. [2]

Table 1 lists the standard lexical association measures tested in section 4. [3] The logical evaluation method for phrasal term identification is to rank n-grams using each metric and then compare the results against a gold standard containing known phrasal terms. Since Schone and Jurafsky (2001) demonstrated similar results whether WordNet or online dictionaries were used as a gold standard, WordNet was selected. Two separate lists were derived containing two- and three-word phrases. The choice of WordNet as a gold standard tests the ability to predict general dictionary headwords rather than technical terms, appropriate since the source corpus consists of nontechnical text. Following Schone & Jurafsky (2001), the bigram and trigram lists were ranked by each statistic then scored against the gold standard, with results evaluated using a figure of merit (FOM) roughly characterizable as the area under the precision-recall curve.
The formula is:

FOM = (1/K) * Σ_{i=1..K} Pi

where Pi (precision at i) equals i/Hi, and Hi is the number of n-grams into the ranked n-gram list required to find the ith correct phrasal term.

[Footnote 2: Excluding the 160 most frequent words prevented evaluation of a subset of phrasal terms such as verbal idioms like act up or go on. Experiments with smaller corpora during preliminary work indicated that this exclusion did not appear to bias the results.]

[Footnote 3: Schone & Jurafsky's results indicate similar results for log-likelihood & T-score, and strong parallelism among information-theoretic measures such as Chi-Squared, Selectional Association (Resnik 1996), Symmetric Conditional Probability (Ferreira and Pereira Lopes, 1999) and the Z-Score (Smadja 1993). Thus it was not judged necessary to replicate results for all methods covered in Schone & Jurafsky (2001).]

Table 1. Some Lexical Association Measures

Frequency (Guiliano, 1964): f_xy
Pointwise Mutual Information [PMI] (Church & Hanks, 1990): log2( P_xy / (P_x * P_y) )
True Mutual Information [TMI] (Manning, 1999): P_xy * log2( P_xy / (P_x * P_y) )
Chi-Squared (χ2) (Church and Gale, 1991): Σ over i in {x, ¬x} and j in {y, ¬y} of (f_ij − ζ_ij)^2 / ζ_ij
T-Score (Church & Hanks, 1990): (x̄1 − x̄2) / sqrt( s1^2/n1 + s2^2/n2 )
C-Values [4] (Frantzi, Ananiadou & Mima 2000): log2|α| * f(α) if α is not nested; otherwise log2|α| * ( f(α) − (1/P(Tα)) * Σ over b in Tα of f(b) ), where α is the candidate string, f(α) is its frequency in the corpus, Tα is the set of candidate terms that contain α, and P(Tα) is the number of these candidate terms.

[Footnote 4: Due to the computational cost of calculating C-Values over a very large corpus, C-Values were calculated over bigrams and trigrams only. More sophisticated versions of the C-Value method such as NC-values were not included as these incorporate linguistic knowledge and thus fall outside the scope of the study.]

It should be noted, however, that one of the most pressing issues with respect to phrasal terms is that they display the same skewed, long-tail distribution as ordinary words, with a large proportion of the total displaying very low frequencies. This can be measured by considering the overlap between WordNet and the Lexile corpus. A list of 53,764 two-word phrases was extracted from WordNet, and 7,613 three-word phrases. Even though the Lexile corpus is quite large -- in excess of 400 million words of running text -- only 19,939 of the two-word phrases and 1,700 of the three-word phrases are attested in the Lexile corpus. 14,045 of the 19,939 attested two-word phrases occur at least 5 times, 11,384 occur at least 10 times, and only 5,366 occur at least 50 times; in short, the strategy of cutting off the data at a threshold sacrifices a large percent of total recall. Thus one of the issues that needs to be addressed is the accuracy with which lexical association measures can be extended to deal with relatively sparse data, e.g., phrases that appear less than ten times in the source corpus.

A second question of interest is the effect of filtering for particular linguistic patterns. This is another method of prescreening the source data which can improve precision but damage recall. In the evaluation, bigrams were classified as N-N and A-N sequences using a dictionary template, with the expected effect. For instance, if the WordNet two-word phrase list is limited only to those which could be interpreted as noun-noun or adjective-noun sequences, N>=5, the total set of WordNet terms that can be retrieved is reduced to 9,757.
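A minimal sketch of this figure of merit, with a toy ranked list and gold standard standing in for the WordNet/Lexile data, is given below; here K is taken to be the number of gold items actually found in the ranked list, which is a simplifying assumption.

```python
def figure_of_merit(ranked_ngrams, gold_standard):
    """FOM = (1/K) * sum_i P_i, where P_i = i / H_i and H_i is the position
    in the ranked list at which the i-th correct phrasal term is found."""
    gold = set(gold_standard)
    precisions = []
    hits = 0
    for position, ngram in enumerate(ranked_ngrams, start=1):
        if ngram in gold:
            hits += 1
            precisions.append(hits / position)   # P_i = i / H_i
    if not precisions:
        return 0.0
    return sum(precisions) / len(precisions)

# Toy example: 2 of the 6 ranked bigrams are in the gold standard.
ranked = ["peanut butter", "the end", "cash register", "a little", "of the", "very good"]
gold = ["peanut butter", "cash register"]
print(figure_of_merit(ranked, gold))   # (1/1 + 2/3) / 2, roughly 0.83
```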
4 Evaluation Schone and Jurafsky's (2001) study examined the performance of various association metrics on a corpus of 6.7 million words with a cutoff of N=10. The resulting n-gram set had a maximum recall of 2,610 phrasal terms from the WordNet gold standard, and found the best figure of merit for any of the association metrics even with linguistic filterering to be 0.265. On the significantly larger Lexile corpus N must be set higher (around N=50) to make the results comparable. The statistics were also calculated for N=50, N=10 and N=5 in order to see what the effect of including more (relatively rare) n-grams would be on the overall performance for each statistic. Since many of the statistics are defined without interpolation only for bigrams, and the number of WordNet trigrams at N=50 is very small, the full set of scores were only calculated on the bigram data. For trigrams, in addition to rank ratio and frequency scores, extended pointwise mutual information and true mutual information scores were calculated using the formulas log (Pxyz/PxPy Pz)) and Pxyz log (Pxyz/PxPy Pz)). Also, since the standard lexical association metrics cannot be calculated across different n-gram types, results for bigrams and trigrams are presented separately for purposes of comparison. The results are are shown in Tables 2-5. Two points should should be noted in particular. First, the rank ratio statistic outperformed the other association measures tested across the board. Its best performance, a score of 0.323 in the part of speech filtered condition with N=50, outdistanced METRIC POS Filtered Unfiltered RankRatio 0.323 0.196 Mutual Expectancy 0.144 0.069 TMI 0.209 0.096 PMI 0.287 0.166 Chi-sqr 0.285 0.152 T-Score 0.154 0.046 C-Values 0.065 0.048 Frequency 0.130 0.044 Table 2. Bigram Scores for Lexical Association Measures with N=50 METRIC POS Filtered Unfiltered RankRatio 0.218 0.125 MutualExpectation 0.140 0.071 TMI 0.150 0.070 PMI 0.147 0.065 Chi-sqr 0.145 0.065 T-Score 0.112 0.048 C-Values 0.096 0.036 Frequency 0.093 0.034 Table 3. Bigram Scores for Lexical Association Measures with N=10 METRIC POS Filtered Unfiltered RankRatio 0.188 0.110 Mutual Expectancy 0.141 0.073 TMI 0.131 0.063 PMI 0.108 0.047 Chi-sqr 0.107 0.047 T-Score 0.098 0.043 C-Values 0.084 0.031 Frequency 0.081 0.021 Table 4. Bigram Scores for Lexical Association Measures with N=5 METRIC N=50 N=10 N=5 RankRatio 0.273 0.137 0.103 PMI 0.219 0.121 0.059 TMI 0.137 0.074 0.056 Frequency 0.089 0.047 0.035 Table 5. Trigram scores for Lexical Association Measures at N=50, 10 and 5 without linguistic filtering. 610 the best score in Schone & Jurafsky's study (0.265), and when large numbers of rare bigrams were included, at N=10 and N=5, it continued to outperform the other measures. Second, the results were generally consistent with those reported in the literature, and confirmed Schone & Jurafsky's observation that the information-theoretic measures (such as mutual information and chisquared) outperform frequency-based measures (such as the T-score and raw frequency.)5 4.1 Discussion One of the potential strengths of this method is that is allows for a comparison between n-grams of varying lengths. The distribution of scores for the gold standard bigrams and trigrams appears to bear out the hypothesis that the numbers are comparable across n-gram length. 
Trigrams constitute approximately four percent of the gold standard test set, and appear in roughly the same percentage across the rankings; for instance, they consistute 3.8% of the top 10,000 ngrams ranked by mutual rank ratio. Comparison of trigrams with their component bigrams also seems consistent with this hypothesis; e.g., the bigram Booker T. has a higher mutual rank ratio than the trigram Booker T. Washington, which has a higher rank that the bigram T. Washington. These results suggest that it would be worthwhile to examine how well the method succeeds at ranking n-grams of varying lengths, though the limitations of the current evaluation set to bigrams and trigrams prevented a full evaluation of its effectiveness across n-grams of varying length. The results of this study appear to support the conclusion that the Mutual Rank Ratio performs notably better than other association measures on this task. The performance is superior to the nextbest measure when N is set as low as 5 (0.110 compared to 0.073 for Mutual Expectation and 0.063 for true mutual information and less than .05 for all other metrics). While this score is still fairly low, it indicates that the measure performs relatively well even when large numbers of lowprobability n-grams are included. An examination of the n-best list for the Mutual Rank ratio at N=5 supports this contention. The top 10 bigrams are: 5 Schone and Jurafsky's results differ from Krenn & Evert (2001)'s results, which indicated that frequency performed better than the statistical measures in almost every case. However, Krenn and Evert's data consisted of n-grams preselected to fit particular collocational patterns. Frequency-based metrics seem to be particularly benefited by linguistic prefiltering. Julius Caesar, Winston Churchill, potato chips, peanut butter, Frederick Douglass, Ronald Reagan, Tia Dolores, Don Quixote, cash register, Santa Claus At ranks 3,000 to 3,010, the bigrams are: Ted Williams, surgical technicians, Buffalo Bill, drug dealer, Lise Meitner, Butch Cassidy, Sandra Cisneros, Trey Granger, senior prom, Ruta Skadi At ranks 10,000 to 10,010, the bigrams are: egg beater, sperm cells, lowercase letters, methane gas, white settlers, training program, instantly recognizable, dried beef, television screens, vienna sausages In short, the n-best list returned by the mutual rank ratio statistic appears to consist primarily of phrasal terms far down the list, even when N is as low as 5. False positives are typically: (i) morphological variants of established phrases; (ii) bigrams that are part of longer phrases, such as cream sundae (from ice cream sundae); (iii) examples of highly productive constructions such as an artist, three categories or January 2. The results for trigrams are relatively sparse and thus less conclusive, but are consistent with the bigram results: the mutual rank ratio measure performs best, with top ranking elements consistently being phrasal terms. Comparison with the n-best list for other metrics bears out the qualitative impression that the rank ratio is performing better at selecting phrasal terms even without filtering. 
The top ten bigrams for the true mutual information metric at N=5 are: a little, did not, this is, united states, new york, know what, a good, a long, a moment, a small Ranks 3000 to 3010 are: waste time, heavily on, earlier than, daddy said, ethnic groups, tropical rain, felt sure, raw materials, gold medals, gold rush Ranks 10,000 to 10,010 are: quite close, upstairs window, object is, lord god, private schools, nat turner, fire going, bering sea,little higher, got lots The behavior is consistent with known weaknesses of true mutual information -- its tendency to overvalue frequent forms. Next, consider the n-best lists for loglikelihood at N=5. The top ten n-grams are: sheriff poulson, simon huggett, robin redbreast, eric torrosian, colonel hillandale, colonel sapp, nurse leatheran, st. catherines, karen torrio, jenny yonge N-grams 3000 to 3010 are: comes then, stuff who, dinner get, captain see, tom see, couple get, fish see, picture go, building go, makes will, pointed way 611 N-grams 10000 to 10010 are: sayings is, writ this, llama on, undoing this, dwahro did, reno on, squirted on, hardens like, mora did, millicent is, vets did Comparison thus seems to suggest that if anything the quality of the mutual rank ratio results are being understated by the evaluation metric, as the metric is returning a large number of phrasal terms in the higher portion of the n-best list that are absent from the gold standard. Conclusion This study has proposed a new method for measuring strength of lexical association for candidate phrasal terms based upon the use of Zipfian ranks over a frequency distribution combining n-grams of varying length. The method is related in general philosophy of Mutual Expectation, in that it assesses the strenght of connection for each word to the combined phrase; it differs by adopting a nonparametric measure of strength of association. Evaluation indicates that this method may outperform standard lexical association measures, including mutual information, chi-squared, log-likelihood, and the T-score. References Baayen, R. H. (2001) Word Frequency Distributions. Kluwer: Dordrecht. Boguraev, B. and C. Kennedy (1999). Applications of Term Identification Technology: Domain Description and Content Characterization. Natural Language Engineering 5(1):17-44. Choueka, Y. (1988). Looking for needles in a haystack or locating interesting collocation expressions in large textual databases. Proceedings of the RIAO, pages 38-43. Church, K.W., and P. Hanks (1990). Word association norms, mutual information, and lexicography. Computational Linguistics 16(1):2229. Dagan, I. and K.W. Church (1994). Termight: Identifying and translating technical terminology. ACM International Conference Proceeding Series: Proceedings of the fourth conference on Applied natural language processing, pages 39-40. Daille, B. 1996. "Study and Implementation of Combined Techniques from Automatic Extraction of Terminology". Chap. 3 of "The Balancing Act": Combining Symbolic and Statistical Approaches to Kanguage (Klavans, J., Resnik, P. (eds.)), pages 49-66. Dias, G., S. Guilloré, and J.G. Pereira Lopes (1999), Language independent automatic acquisition of rigid multiword units from unrestricted text corpora. TALN, p. 333-338. Dunning, T. (1993). Accurate methods for the statistics of surprise and coincidence. Computational Linguistics 19(1): 65-74. Evert, S. (2004). The Statistics of Word Cooccurrences: Word Pairs and Collocations. 
Phd Thesis, Institut für maschinelle Sprachverarbeitung, University of Stuttgart. Evert, S. and B. Krenn. (2001). Methods for the Qualitative Evaluation of Lexical Association Measures. Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pages 188-195. Ferreira da Silva, J. and G. Pereira Lopes (1999). A local maxima method and a fair dispersion normalization for extracting multiword units from corpora. Sixth Meeting on Mathematics of Language, pages 369-381. Frantzi, K., S. Ananiadou, and H. Mima. (2000). Automatic recognition of multiword terms: the CValue and NC-Value Method. International Journal on Digital Libraries 3(2):115-130. Gil, A. and G. Dias. (2003a). Efficient Mining of Textual Associations. International Conference on Natural Language Processing and Knowledge Engineering. Chengqing Zong (eds.) pages 26-29. Gil, A. and G. Dias (2003b). Using masks, suffix array-based data structures, and multidimensional arrays to compute positional n-gram statistics from corpora. In Proceedings of the Workshop on Multiword Expressions of the 41st Annual Meeting of the Association of Computational Linguistics, pages 25-33. Ha, L.Q., E.I. Sicilia-Garcia, J. Ming and F.J. Smith. (2002), "Extension of Zipf's law to words and phrases", Proceedings of the 19th International Conference on Computational Linguistics (COLING'2002), pages 315-320. Jacquemin, C. and E. Tzoukermann. (1999). NLP for Term Variant Extraction: Synergy between Morphology, Lexicon, and Syntax. Natural Language Processing Information Retrieval, pages 25-74. Kuwer, Boston, MA, U.S.A. Jacquemin, C., J.L. Klavans and E. Tzoukermann (1997). Expansion of multiword terms for indexing and retrieval using morphology and syntax. Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 24-31. 612 Johansson, C. 1994b, Catching the Cheshire Cat, In Proceedings of COLING 94, Vol. II, pages 1021 1025. Johansson, C. 1996. Good Bigrams. In Proceedings from the 16th International Conference on Computational Linguistics (COLING-96), pages 592-597. Justeson, J.S. and S.M. Katz (1995). Technical terminology: some linguistic properties and an algorithm for identification in text. Natural Language Engineering 1:9-27. Krenn, B. 1998. Acquisition of Phraseological Units from Linguistically Interpreted Corpora. A Case Study on German PP-Verb Collocations. Proceedings of ISP-98, pages 359-371. Krenn, B. 2000. Empirical Implications on Lexical Association Measures. Proceedings of The Ninth EURALEX International Congress. Krenn, B. and S. Evert. 2001. Can we do better than frequency? A case study on extracting PP-verb collocations. Proceedings of the ACL Workshop on Collocations, pages 39-46. Lin, D. 1998. Extracting Collocations from Text Corpora. First Workshop on Computational Terminology, pages 57-63 Lin, D. 1999. Automatic Identification of Noncompositional Phrases, In Proceedings of The 37th Annual Meeting of the Association For Computational Lingusitics, pages 317-324. Manning, C.D. and H. Schütze. (1999). Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA, U.S.A. Maynard, D. and S. Ananiadou. (2000). Identifying Terms by their Family and Friends. COLING 2000, pages 530-536. Pantel, P. and D. Lin. (2001). A Statistical CorpusBased Term Extractor. In: Stroulia, E. and Matwin, S. (Eds.) AI 2001, Lecture Notes in Artificial Intelligence, pages 36-46. Springer-Verlag. Resnik, P. (1996). 
Selectional constraints: an information-theoretic model and its computational realization. Cognition 61: 127-159.
Schone, P. and D. Jurafsky. 2001. Is Knowledge-Free Induction of Multiword Unit Dictionary Headwords a Solved Problem? Proceedings of Empirical Methods in Natural Language Processing, pages 100-108.
Sekine, S., J. J. Carroll, S. Ananiadou, and J. Tsujii. 1992. Automatic Learning for Semantic Collocation. Proceedings of the 3rd Conference on Applied Natural Language Processing, pages 104-110.
Shimohata, S., T. Sugio, and J. Nagata. 1997. Retrieving collocations by co-occurrences and word order constraints. Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 476-481.
Smadja, F. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19:143-177.
Thanopoulos, A., N. Fakotakis and G. Kokkinakis. 2002. Comparative Evaluation of Collocation Extraction Metrics. Proceedings of the LREC 2002 Conference, pages 609-613.
Zipf, G. K. 1935. Psychobiology of Language. Houghton-Mifflin, New York.
Zipf, G. K. 1949. Human Behavior and the Principle of Least Effort. Addison-Wesley, Cambridge, Mass.
Proceedings of the 43rd Annual Meeting of the ACL, pages 614–621, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Automatic Acquisition of Adjectival Subcategorization from Corpora Jeremy Yallop∗, Anna Korhonen, and Ted Briscoe Computer Laboratory University of Cambridge 15 JJ Thomson Avenue Cambridge CB3 OFD, UK [email protected], {Anna.Korhonen, Ted.Briscoe}@cl.cam.ac.uk Abstract This paper describes a novel system for acquiring adjectival subcategorization frames (SCFs) and associated frequency information from English corpus data. The system incorporates a decision-tree classifier for 30 SCF types which tests for the presence of grammatical relations (GRs) in the output of a robust statistical parser. It uses a powerful patternmatching language to classify GRs into frames hierarchically in a way that mirrors inheritance-based lexica. The experiments show that the system is able to detect SCF types with 70% precision and 66% recall rate. A new tool for linguistic annotation of SCFs in corpus data is also introduced which can considerably alleviate the process of obtaining training and test data for subcategorization acquisition. 1 Introduction Research into automatic acquisition of lexical information from large repositories of unannotated text (such as the web, corpora of published text, etc.) is starting to produce large scale lexical resources which include frequency and usage information tuned to genres and sublanguages. Such resources are critical for natural language processing (NLP), both for enhancing the performance of ∗Part of this research was conducted while this author was at the University of Edinburgh Laboratory for Foundations of Computer Science. state-of-art statistical systems and for improving the portability of these systems between domains. One type of lexical information with particular importance for NLP is subcategorization. Access to an accurate and comprehensive subcategorization lexicon is vital for the development of successful parsing technology (e.g. (Carroll et al., 1998b), important for many NLP tasks (e.g. automatic verb classification (Schulte im Walde and Brew, 2002)) and useful for any application which can benefit from information about predicate-argument structure (e.g. Information Extraction (IE) (Surdeanu et al., 2003)). The first systems capable of automatically learning a small number of verbal subcategorization frames (SCFs) from English corpora emerged over a decade ago (Brent, 1991; Manning, 1993). Subsequent research has yielded systems for English (Carroll and Rooth, 1998; Briscoe and Carroll, 1997; Korhonen, 2002) capable of detecting comprehensive sets of SCFs with promising accuracy and demonstrated success in application tasks (e.g. (Carroll et al., 1998b; Korhonen et al., 2003)), besides systems for a number of other languages (e.g. (Kawahara and Kurohashi, 2002; Ferrer, 2004)). While there has been considerable research into acquisition of verb subcategorization, we are not aware of any systems built for adjectives. Although adjectives are syntactically less multivalent than verbs, and although verb subcategorization distribution data appears to offer the greatest potential boost in parser performance, accurate and comprehensive knowledge of the many adjective SCFs can improve the accuracy of parsing at several levels 614 (from tagging to syntactic and semantic analysis). 
Automatic SCF acquisition techniques are particularly important for adjectives because extant syntax dictionaries provide very limited coverage of adjective subcategorization. In this paper we propose a method for automatic acquisition of adjectival SCFs from English corpus data. Our method has been implemented using a decision-tree classifier which tests for the presence of grammatical relations (GRs) in the output of the RASP (Robust Accurate Statistical Parsing) system (Briscoe and Carroll, 2002). It uses a powerful taskspecific pattern-matching language which enables the frames to be classified hierarchically in a way that mirrors inheritance-based lexica. As reported later, the system is capable of detecting 30 SCFs with an accuracy comparable to that of best state-ofart verbal SCF acquisition systems (e.g. (Korhonen, 2002)). Additionally, we present a novel tool for linguistic annotation of SCFs in corpus data aimed at alleviating the process of obtaining training and test data for subcategorization acquisition. The tool incorporates an intuitive interface with the ability to significantly reduce the number of frames presented to the user for each sentence. We discuss adjectival subcategorization in section 2 and introduce the system for SCF acquisition in section 3. Details of the annotation tool and the experimental evaluation are supplied in section 4. Section 5 provides discussion on our results and future work, and section 6 summarises the paper. 2 Adjectival Subcategorization Although the number of SCF types for adjectives is smaller than the number reported for verbs (e.g. (Briscoe and Carroll, 1997)), adjectives nevertheless exhibit rich syntactic behaviour. Besides the common attributive and predicative positions there are at least six further positions in which adjectives commonly occur (see figure 1). Adjectives in predicative position can be further classified according to the nature of the arguments with which they combine — finite and non-finite clauses and noun phrases, phrases with and without complementisers, etc. — and whether they occur as subject or object. Additional distinctions can be made concernAttributive “The young man” Predicative “He is young” Postpositive “Anyone [who is] young can do it” Predeterminer “such a young man”; “so young a man” Fused modifier-head “the younger of them”; “the young” Predicative adjunct “he died young” Supplementive clause “Young, he was plain in appearance” Contingent clause “When young, he was lonely” Figure 1: Fundamental adjectival frames ing such features as the mood of the complement (mandative, interrogative, etc.), preferences for particular prepositions and whether the subject is extraposed. Even ignoring preposition preference, there are more than 30 distinguishable adjectival SCFs. Some fairly extensive frame sets can be found in large syntax dictionaries, such as COMLEX (31 SCFs) (Wolff et al., 1998) and ANLT (24 SCFs) (Boguraev et al., 1987). While such resources are generally accurate, they are disappointingly incomplete: none of the proposed frame sets in the well-known resources subsumes the others, the coverage of SCF types for individual adjectives is low, and (accurate) information on the relative frequency of SCFs for each adjective is absent. The inadequacy of manually-created dictionaries and the difficulty of adequately enhancing and maintaining the information by hand was a central motivation for early research into automatic subcategorization acquisition. 
The focus heretofore has remained firmly on verb subcategorization, but this is not sufficient, as countless examples show. Knowledge of adjectival subcategorization can yield further improvements in tagging (e.g. distinguishing between “to” as an infinitive marker and as a true preposition), parsing (e.g. distinguishing between PP-arguments and adjuncts), and semantic analysis. For example, if John is both easy and eager to please then we know that he is the recipient of pleasure in the first instance and desirous of providing it in the second, but a computational system cannot determine this without knowledge of the subcategorization of the two adjectives. Likewise, a natural language generation system can legitimately apply the extraposition transformation to the first case, but not to the second: It is “easy to please John”, but not 615 “eager”to do so, at least if “it”be expletive. Similar examples abound. Many of the difficulties described in the literature on acquiring verb subcategorization also arise in the adjectival case. The most apparent is data sparsity: among the 100M-word British National Corpus (BNC) (Burnard, 1995), the RASP tools find 124,120 distinct adjectives, of which 70,246 occur only once, 106,464 fewer than ten times and 119,337 fewer than a hundred times. There are fewer than 1,000 adjectives in the corpus which have more than 1,000 occurrences. Both adjective and SCF frequencies have Zipfian distributions; consequently, even the largest corpora may contain only single instances of a particular adjective-SCF combination, which is generally insufficient for classification. 3 Description of the System Besides focusing on adjectives, our approach to SCF acquisition differs from earlier work in a number of ways. A common strategy in existing systems (e.g. (Briscoe and Carroll, 1997)) is to extract SCFs from parse trees, introducing an unnecessary dependence on the details of a particular parser. In our approach the patterns are extracted from GRs — representations of head-complement relations which are designed to be largely parser-independent — making the techniques more widely applicable and allowing classification to operate at a higher level. Further, most existing systems work by classifying corpus occurrences into individual, mutually independent SCFs. We adopt instead a hierarchical approach, viewing frames that share features as descendants of a common parent frame. The benefits are severalfold: specifying each feature only once makes the system both more efficient and easier to understand and maintain, and the multiple inheritance hierarchy reflects the hierarchy of lexical types found in modern grammars where relationships between similar frames are represented explicitly1. Our acquisition process consists of two main steps: 1) extracting GRs from corpus data, and 2) feeding the GRs as input to the classifier which incrementally matches parts of the GR sets to decide which branches of a decision-tree to follow. The 1Compare the cogent argument for a inheritance-based lexicon in (Flickinger and Nerbonne, 1992), much of which can be applied unchanged to the taxonomy of SCFs. dependent mod arg mod arg aux conj subj or dobj ncmod xmod cmod detmod subj comp ncsubj xsubj csubj obj clausal dobj obj2 iobj xcomp ccomp Figure 2: The GR hierarchy used by RASP leaves of the tree correspond to SCFs. The details of these two steps are provided in the subsequent sections, respectively2. 
3.1 Obtaining Grammatical Relations Attempts to acquire verb subcategorization have benefited from increasingly sophisticated parsers. We have made use of the RASP toolkit (Briscoe and Carroll, 2002) — a modular statistical parsing system which includes a tokenizer, tagger, lemmatiser, and a wide-coverage unification-based tag-sequence parser. The parser has several modes of operation; we invoked it in a mode in which GRs with associated probabilities are emitted even when a complete analysis of the sentence could not be found. In this mode there is wide coverage (over 98% of the BNC receives at least a partial analysis (Carroll and Briscoe, 2002)) which is useful in view of the infrequent occurrence of some of the SCFs, although combining the results of competing parses may in some cases result in an inconsistent or misleading combination of GRs. The parser uses a scheme of GRs between lemmatised lexical heads (Carroll et al., 1998a; Briscoe et al., 2002). The relations are organized as a multipleinheritance subsumption hierarchy where each subrelation extends the meaning, and perhaps the argument structure, of its parents (figure 2). For descriptions and examples of each relation, see (Carroll et al., 1998a). The dependency relationships which the GRs embody correspond closely to the head-complement 2In contrast to almost all earlier work, there was no filtering stage involved in SCF acquisition. The classifier was designed to operate with high precision, so filtering was less necessary. 616 2 6666664 SUBJECT NP 1 , ADJ-COMPS * PP " PVAL “for” NP 3 # , VP 2 664 MOOD to-infinitive SUBJECT 3 OMISSION 1 3 775 + 3 7777775 Figure 3: Feature structure for SCF adj-obj-for-to-inf (|These:1_DD2| |example+s:2_NN2| |of:3_IO| |animal:4_JJ| |senses:5_NN2| |be+:6_VBR| |relatively:7_RR| |easy:8_JJ| |for:9_IF| |we+:10_PPIO2| |to:11_TO| |comprehend:12_VV0|) ... xcomp(_ be+[6] easy:[8]) xmod(to[11] be+[6] comprehend:[12]) ncsubj(be+[6] example+s[2] _) ncmod(for[9] easy[8] we+[10]) ncsubj(comprehend[12] we+[10], _) ... Figure 4: GRs from RASP for adj-obj-for-to-inf structure which subcategorization acquisition attempts to recover, which makes GRs ideal input to the SCF classifier. Consider the arguments of “easy” in the sentence: These examples of animal senses are relatively easy for us to comprehend as they are not too far removed from our own experience. According to the COMLEX classification, this is an example of the frame adj-obj-for-to-inf, shown in figure 3, (using AVM notation in place of COMLEX s-expressions). Part of the output of RASP for this sentence (the full output includes 87 weighted GRs) is shown in figure 43. Each instantiated GR in figure 4 corresponds to one or more parts of the feature structure in figure 3. xcomp( be[6] easy[8]) establishes be[6] as the head of the VP in which easy[8] occurs as a complement. The first (PP)-complement is “for us”, as indicated by ncmod(for[9] easy[8] we+[10]), with “for” as PFORM and we+ (“us”) as NP. The second complement is represented by xmod(to[11] be+[6] comprehend[12]): a to-infinitive VP. The NP headed by “examples” is marked as the subject of the frame by ncsubj(be[6] examples[2]), and ncsubj(comprehend[12] we+[10]) corresponds to the coindexation marked by 3 : the subject of the 3The format is slightly more complicated than that shown in (Carroll et al., 1998a): each argument that corresponds to a word consists of three parts: the lexeme, the part of speech tag, and the position (index) of the word in the sentence. 
xcomp(_, [*;1;be-verb], ˜) xmod([to;*;to], 1, [*;2;vv0]) ncsubj(1, [*;3;noun/pronoun], _) ncmod([for;*;if], ˜, [*;4;noun/pronoun]) ncsubj(2, 4) Figure 5: A pattern to match the frame adj-obj-for-to-inf VP is the NP of the PP. The only part of the feature structure which is not represented by the GRs is coindexation between the omitted direct object 1 of the VP-complement and the subject of the whole clause. 3.2 SCF Classifier 3.2.1 SCF Frames We used for our classifier a modified version of the fairly extensive COMLEX frameset, including 30 SCFs. The COMLEX frameset includes mutually inconsistent frames, such as sentential complement with obligatory complementiser that and sentential complement with optional that. We modified the frameset so that an adjective can legitimately instantiate any combination of frames, which simplifies classification. We also added simple-predicative and attributive SCFs to the set, since these account for a substantial proportion of frame instances. Finally, frames which could only be distinguished by information not retained in the GRs scheme of the current version of the shallow parser were merged (e.g. the COMLEX frames adj-subj-to-inf-rs (“She was kind to invite me”) and adj-to-inf (“She was able to climb the mountain”)). 3.2.2 Classifier The classifier operates by attempting to match the set of GRs associated with each sentence against various patterns. The patterns were developed by a combination of knowledge of the GRs and examining a set of training sentences to determine which relations were actually emitted by the parser for each SCF. The data used during development consisted of the sentences in the BNC in which one of the 23 adjectives4 given as examples for SCFs in (Macleod 4The adjectives used for training were: able, anxious, apparent, certain, convenient, curious, desirable, disappointed, easy, happy, helpful, imperative, impractical, insistent, kind, obvious, practical, preferable, probable, ridiculous, unaware, uncertain and unclear. 617 et al., 1998) occur. In our pattern matching language a pattern is a disjunction of sets of partially instantiated GRs with logic variables (slots) in place of indices, augmented by ordering constraints that restrict the possible instantiations of slots. A match is considered successful if the set of GRs can be unified with any of the disjuncts. Unification of a sentence-relation and a pattern-relation occurs when there is a one-to-one correspondence between sentence elements and pattern elements that includes a mapping from slots to indices (a substitution), and where atomic elements in corresponding positions share a common subtype. Figure 5 shows a pattern for matching the SCF adj-obj-for-to-inf. For a match to succeed there must be GRs associated with the sentence that match each part of the pattern. Each argument matches either anything at all (*), the “current” adjective (˜), an empty GR argument ( ), a [word;id;part-of-speech] 3-tuple or a numeric id. In a successful match, equal ids in different parts of the pattern must match the same word position, and distinct ids must match different positions. The various patterns are arranged in a tree, where a parent node contains the elements common to all of its children. This kind of once-only representation of particular features, together with the successive refinements provided by child nodes reflects the organization of inheritance-based lexica. 
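The unification step just described can be sketched in a few lines. The tuple encoding of GRs and patterns below is an assumption made for illustration; it is not the RASP output format or the actual pattern language, and the example pattern only approximates Figure 5.

```python
# A GR is a tuple (relation, arg1, arg2, ...); each argument is either None
# (an empty GR slot) or a (lexeme, word_index, pos_tag) triple.
# Pattern arguments: "*" matches anything, "~" matches the target adjective,
# None matches an empty argument, an integer is a slot that must bind
# consistently to one word index, and a (lexeme, pos) pair constrains the
# word form and tag ("*" in either position means no constraint).

def arg_match(p_arg, g_arg, bindings, adj_index):
    if p_arg == "*":
        return True
    if p_arg is None or g_arg is None:
        return p_arg is None and g_arg is None
    if p_arg == "~":
        return g_arg[1] == adj_index
    if isinstance(p_arg, int):                       # a slot
        if p_arg in bindings:
            return bindings[p_arg] == g_arg[1]       # equal slots, same word
        if g_arg[1] in bindings.values():
            return False                             # distinct slots, distinct words
        bindings[p_arg] = g_arg[1]
        return True
    lexeme, pos = p_arg
    return lexeme in ("*", g_arg[0]) and pos in ("*", g_arg[2])

def relation_match(p, g, bindings, adj_index):
    return (p[0] == g[0] and len(p) == len(g)
            and all(arg_match(pa, ga, bindings, adj_index)
                    for pa, ga in zip(p[1:], g[1:])))

def match(pattern, grs, adj_index, bindings=None):
    """Succeeds if every pattern relation unifies with some sentence GR
    under a single consistent slot binding."""
    bindings = {} if bindings is None else bindings
    if not pattern:
        return True
    head, rest = pattern[0], pattern[1:]
    for gr in grs:
        trial = dict(bindings)
        if relation_match(head, gr, trial, adj_index) and match(rest, grs, adj_index, trial):
            return True
    return False

# Simplified GRs for "These examples ... are relatively easy for us to comprehend".
grs = [("xcomp", None, ("be", 6, "VBR"), ("easy", 8, "JJ")),
       ("xmod", ("to", 11, "TO"), ("be", 6, "VBR"), ("comprehend", 12, "VV0")),
       ("ncsubj", ("be", 6, "VBR"), ("example", 2, "NN2"), None),
       ("ncmod", ("for", 9, "IF"), ("easy", 8, "JJ"), ("we", 10, "PPIO2")),
       ("ncsubj", ("comprehend", 12, "VV0"), ("we", 10, "PPIO2"), None)]

# A rough analogue of the Figure 5 pattern for adj-obj-for-to-inf.
pattern = [("xcomp", "*", 1, "~"),
           ("xmod", ("to", "TO"), 1, 2),
           ("ncsubj", 1, 3, "*"),
           ("ncmod", ("for", "IF"), "~", 4),
           ("ncsubj", 2, 4, "*")]

print(match(pattern, grs, adj_index=8))   # True
```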
The inheritance structure naturally involves multiple inheritance, since each frame typically includes multiple features (such as the presence of a to-infinitive complement or an expletive subject argument) inherited from abstract parent classes, and each feature is instantiated in several frames. The tree structure also improves the efficiency of the pattern matching process, which then occurs in stages: at each matching node the classifier attempts to match a set of relations with each child pattern to yield a substitution that subsumes the substitution resulting from the parent match. Both the patterns and the pattern language itself underwent successive refinements after investigation of the performance on training data made it increasingly clear what sort of distinctions were useful to express. The initial pattern language had no slots; it was easy to understand and implement, but insufficiently expressive. The final refinement was the adunspecified 285 improbable 350 unsure 570 doubtful 1147 generous 2052 sure 13591 difficult 18470 clear 19617 important 33303 Table 1: Test adjectives and frequencies in the BNC dition of ordering constraints between instantiated slots, which are indispensable for detecting, e.g., extraposition. 4 Experimental Evaluation 4.1 Data In order to evaluate the system we selected a set of 9 adjectives which between them could instantiate all of the frames. The test set was intentionally kept fairly small for these first experiments with adjectival SCF acquisition so that we could carry out a thorough evaluation of all the test instances. We excluded the adjectives used during development and adjectives with fewer than 200 instances in the corpus. The final test set, together with their frequencies in the tagged version of the BNC, is shown in table 1. For each adjective we extracted 200 sentences (evenly spaced throughout the BNC) which we processed using the SCF acquisition system described in the previous section. 4.2 Method 4.2.1 Annotation Tool and Gold Standard Our gold standard was human-annotated data. Two annotators associated a SCF with each sentence/adjective pair in the test data. To alleviate the process we developed a program which first uses reliable heuristics to reduce the number of SCF choices and then allows the annotator to select the preferred choice with a single mouse click in a browser window. The heuristics reduced the average number of SCFs presented alongside each sentence from 30 to 9. Through the same browser interface we provided annotators with information and instructions (with links to COMLEX documentation), the ability to inspect and review previous decisions and decision summaries5 and an option to record that partic5The varying number of SCFs presented to the user and the ability to revisit previous decisions precluded accurate measure618 Figure 6: Sample classification screen for web annotation tool ular sentences could not be classified (which is useful for further system development, as discussed in section 5). A screenshot is shown in figure 6. The resulting annotation revealed 19 of the 30 SCFs in the test data. 4.2.2 Evaluation Measures We use the standard evaluation metrics: type and token precision, recall and F-measure. Token recall is the proportion of annotated (sentence, frame) pairs that the system recovered correctly. Token precision is the proportion of classified (sentence, frame) pairs that were correct. Type precision and type recall are analogously defined for (adjective, frame) pairs. 
The F-measure (β = 1) is a weighted combination of precision and recall. 4.3 Results Running the system on the test data yielded the results summarised in table 2. The greater expressiveness of the final pattern language resulted in a classifier that performed better than the “regression” versions which ignored either ordering constraints, or both ordering constraints and slots. As expected, removing features from the classifier translated directly into degraded accuracy. The performance of the best classifier (67.8% F-measure) is quite similar to that of the best current verbal SCF acquisition systems (e.g. (Korhonen, 2002)). Results for individual adjectives are given in table 3. The first column shows the number of SCFs acquired for each adjective, ranging from 2 for unspecments of inter-annotator agreement, but this was judged less important than the enhanced ease of use arising from the reduced set of choices. Type performance System Precision Recall F Final 69.6 66.1 67.8 No order constraints 67.3 62.7 64.9 No slots 62.7 51.4 56.5 Token performance System Precision Recall F Final 63.0 70.5 66.5 No order constraints 58.8 68.3 63.2 No slots 58.3 67.6 62.6 Table 2: Overall performance of the classifier and of regression systems with restricted pattern-matching ified to 11 for doubtful. Looking at the F-measure, the best performing adjectives are unspecified, difficult and sure (80%) and the worst performing unsure (50%) and and improbable (60%). There appears to be no obvious connection between performance figures and the number of acquired SCF types; differences are rather due to the difficulty of detecting individual SCF types — an issue directly related to data sparsity. Despite the size of the BNC, 5 SCFs were not seen at all, either for the test adjectives or for any others. Frames involving to-infinitive complements were particularly rare: 4 such SCFs had no examples in the corpus and a further 3 occurred 5 times or fewer in the test data. It is more difficult to develop patterns for SCFs that occur infrequently, and the few instances of such SCFs are unlikely to include a set of GRs that is adequate for classification. The effect on the results was clear: of the 9 SCFs which the classifier did not correctly recognise at all, 4 occurred 5 times or fewer in the test data and a further 2 occurred 5–10 times. The most common error made by the classifier was to mistake a complex frame (e.g. adj-obj-for-to-inf, or to-inf-wh-adj) for simple-predicative, which subsumes all such frames. This occurred whenever the GRs emitted by the parser failed to include any information about the complements of the adjective. 5 Discussion Data sparsity is perhaps the greatest hindrance both to recovering adjectival subcategorization and to lexical acquisition in general. In the future, we plan to carry out experiments with a larger set of adjec619 Adjective SCFs Precision Recall F-measure unspecified 2 66.7 100.0 80.0 generous 3 60.0 100.0 75.0 improbable 5 60.0 60.0 60.0 unsure 6 50.0 50.0 50.0 important 7 55.6 71.4 62.5 clear 8 83.3 62.5 71.4 difficult 8 85.7 75.0 80.0 sure 9 100.0 66.7 80.0 doubtful 11 66.7 54.5 60.0 Table 3: SCF count and classifier performance for each adjective. tives using more data (possibly from several corpora and the web) to determine how severe this problem is for adjectives. 
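Before turning to ways of addressing this sparsity, it is worth noting that the type-level measures defined at the start of this section reduce to a few lines of code; the (adjective, SCF) pairs below are invented placeholders rather than actual system output.

```python
def type_scores(system_pairs, gold_pairs):
    """Type precision, recall and F-measure (beta = 1) over (adjective, SCF)
    pairs: precision = correct/proposed, recall = correct/gold."""
    system, gold = set(system_pairs), set(gold_pairs)
    correct = len(system & gold)
    precision = correct / len(system) if system else 0.0
    recall = correct / len(gold) if gold else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Invented example: three frames proposed for "clear", four in the gold standard.
system = [("clear", "simple-predicative"), ("clear", "attributive"),
          ("clear", "adj-obj-for-to-inf")]
gold = [("clear", "simple-predicative"), ("clear", "attributive"),
        ("clear", "adj-obj-for-to-inf"), ("clear", "adj-to-inf")]
print(type_scores(system, gold))   # (1.0, 0.75, roughly 0.857)
```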
One possible way to address the problem is to smooth the acquired SCF distributions using SCF “back-off” (probability) estimates based on lexical classes of adjectives in the manner proposed by (Korhonen, 2002). This helps to correct the acquired distributions and to detect low frequency and unseen SCFs. However, our experiment also revealed other problems which require attention in the future. One such is that GRs output by RASP (the version we used in our experiments) do not retain certain distinctions which are essential for distinguishing particular SCFs. For example, a sentential complement of an adjective with a that-complementiser should be annotated with ccomp(that, adjective, verbal-head), but this relation (with that as the type argument) does not occur in the parsed BNC. As a consequence the classifier is unable to distinguish the frame. Another problem arises from the fact that our current classifier operates on a predefined set of SCFs. The COMLEX SCFs, from which ours were derived, are extremely incomplete. Almost a quarter (477 of 1931) of sentences were annotated as “undefined”. For example, while there are SCFs for sentential and infinitival complement in subject position with what6, there is no SCF for the case with a whatprefixed complement in object position, where the subject is an NP. The lack is especially perplexing, because COMLEX does include the corresponding SCFs for verbs. There is a frame for “He wondered 6(adj-subj-what-s: “What he will do is uncertain”; adj-subj-what-to-inf: “What to do was unclear”), together with the extraposed versions (extrap-adj-what-s and extrap-adj-what-to-inf). what to do” (what-to-inf), but none for “He was unsure what to do”. While we can easily extend the current frameset by looking for further SCF types from dictionaries and from among the corpus occurrences labelled by our annotators as unclassified, we also plan to extend the classifier to automatically induce previously unseen frames from data. A possible approach is to use restricted generalization on sets of GRs to group similar sentences together. Generalization (anti-unification) is an intersection operation on two structures which retains the features common to both; generalization over the sets of GRs associated with the sentences which instantiate a particular frame can produce a pattern such as we used for classification in the experiments described above. This approach also offers the possibility of associating confidence levels with each pattern, corresponding to the degree to which the generalized pattern captures the features common to the members of the associated class. It is possible that frames could be induced by grouping sentences according to the “best” (e.g. most information-preserving) generalizations for various combinations, but it is not clear how this can be implemented with acceptable efficiency. The hierarchical approach described in this paper may also helpful in the discovery of new frames: missing combinations of parent classes can be explored readily, and it may be possible to combine the various features in an SCF feature structure to generate example sentences which a human could then inspect to judge grammaticality. 6 Conclusion We have described a novel system for automatically acquiring adjectival subcategorization and associated frequency information from corpora, along with an annotation tool for producing training and test data for the task. 
The acquisition system, which is capable of distinguishing 30 SCF types, performs sophisticated pattern matching on sets of GRs produced by a robust statistical parser. The information provided by GRs closely matches the structure that subcategorization acquisition seeks to recover. The figures reported demonstrate the feasibility of the approach: our classifier achieved 70% type pre620 cision and 66% type recall on the test data. The discussion suggests several ways in which the system may be improved, refined and extended in the future. Acknowledgements We would like to thank Ann Copestake for all her help during this work. References B. Boguraev, J. Carroll, E. Briscoe, D. Carter, and C. Grover. 1987. The derivation of a grammaticallyindexed lexicon from the Longman Dictionary of Contemporary English. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, pages 193–200, Stanford, CA. Michael R. Brent. 1991. Automatic acquisition of subcategorization frames from untagged text. In Meeting of the Association for Computational Linguistics, pages 209–214. E. J. Briscoe and J. Carroll. 1997. Automatic Extraction of Subcategorization from Corpora. In Proceedings of the 5th Conference on Applied Natural Language Processing, Washington DC, USA. E. Briscoe and J. Carroll. 2002. Robust accurate statistical annotation of general text. In Proceedings of the Third International Conference on Language Resources and Evaluation, pages 1499–1504, Las Palmas, Canary Islands, May. E. Briscoe, J. Carroll, Jonathan Graham, and Ann Copestake. 2002. Relational evaluation schemes. In Proceedings of the Beyond PARSEVAL Workshop at the 3rd International Conference on Language Resources and Evaluation, pages 4–8, Las Palmas, Gran Canaria. Lou Burnard, 1995. The BNC Users Reference Guide. British National Corpus Consortium, Oxford, May. J. Carroll and E. Briscoe. 2002. High precision extraction of grammatical relations. In Proceedings of the 19th International Conference on Computational Linguistics, pages 134–140, Taipei, Taiwan. Glenn Carroll and Mats Rooth. 1998. Valence induction with a head-lexicalized pcfg. In Proc. of the 3rd Conference on Empirical Methods in Natural Language Processing, Granada, Spain. J. Carroll, E. Briscoe, and A. Sanfilippo. 1998a. Parser evaluation: a survey and a new proposal. In Proceedings of the 1st International Conference on Language Resources and Evaluation, pages 447–454, Granada, Spain. John Carroll, Guido Minnen, and Edward Briscoe. 1998b. Can Subcategorisation Probabilities Help a Statistical Parser? In Proceedings of the 6th ACL/SIGDAT Workshop on Very Large Corpora, pages 118–126, Montreal, Canada. Association for Computational Linguistics. Eva Esteve Ferrer. 2004. Towards a Semantic Classification of Spanish Verbs Based on Subcategorisation Information. In ACL Student Research Workshop, Barcelona, Spain. Dan Flickinger and John Nerbonne. 1992. Inheritance and complementation: A case study of easy adjectives and related nouns. Computational Linguistics, 18(3):269–309. Daisuke Kawahara and Sadao Kurohashi. 2002. Fertilization of Case Frame Dictionary for Robust Japanese Case Analysis. In 19th International Conference on Computational Linguistics. Anna Korhonen, Yuval Krymolowski, and Zvika Marx. 2003. Clustering Polysemic Subcategorization Frame Distributions Semantically. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 64–71, Sapporo, Japan. Anna Korhonen. 2002. 
Subcategorization acquisition. Ph.D. thesis, University of Cambridge Computer Laboratory, February.
Catherine Macleod, Ralph Grishman, and Adam Meyers. 1998. COMLEX Syntax Reference Manual. Computer Science Department, New York University.
Christopher D. Manning. 1993. Automatic Acquisition of a Large Subcategorization Dictionary from Corpora. In Meeting of the Association for Computational Linguistics, pages 235–242.
S. Schulte im Walde and C. Brew. 2002. Inducing German semantic verb classes from purely syntactic subcategorisation information. In 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, USA.
Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument structures for information extraction. In Proc. of the 41st Annual Meeting of the Association for Computational Linguistics, Sapporo.
Susanne Rohen Wolff, Catherine Macleod, and Adam Meyers. 1998. COMLEX Word Classes Manual. Computer Science Department, New York University, June.
Proceedings of the 43rd Annual Meeting of the ACL, pages 622–629, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Randomized Algorithms and NLP: Using Locality Sensitive Hash Function for High Speed Noun Clustering Deepak Ravichandran, Patrick Pantel, and Eduard Hovy Information Sciences Institute University of Southern California 4676 Admiralty Way Marina del Rey, CA 90292. {ravichan, pantel, hovy}@ISI.EDU Abstract In this paper, we explore the power of randomized algorithm to address the challenge of working with very large amounts of data. We apply these algorithms to generate noun similarity lists from 70 million pages. We reduce the running time from quadratic to practically linear in the number of elements to be computed. 1 Introduction In the last decade, the field of Natural Language Processing (NLP), has seen a surge in the use of corpus motivated techniques. Several NLP systems are modeled based on empirical data and have had varying degrees of success. Of late, however, corpusbased techniques seem to have reached a plateau in performance. Three possible areas for future research investigation to overcoming this plateau include: 1. Working with large amounts of data (Banko and Brill, 2001) 2. Improving semi-supervised and unsupervised algorithms. 3. Using more sophisticated feature functions. The above listing may not be exhaustive, but it is probably not a bad bet to work in one of the above directions. In this paper, we investigate the first two avenues. Handling terabytes of data requires more efficient algorithms than are currently used in NLP. We propose a web scalable solution to clustering nouns, which employs randomized algorithms. In doing so, we are going to explore the literature and techniques of randomized algorithms. All clustering algorithms make use of some distance similarity (e.g., cosine similarity) to measure pair wise distance between sets of vectors. Assume that we are given n points to cluster with a maximum of k features. Calculating the full similarity matrix would take time complexity n2k. With large amounts of data, say n in the order of millions or even billions, having an n2k algorithm would be very infeasible. To be scalable, we ideally want our algorithm to be proportional to nk. Fortunately, we can borrow some ideas from the Math and Theoretical Computer Science community to tackle this problem. The crux of our solution lies in defining Locality Sensitive Hash (LSH) functions. LSH functions involve the creation of short signatures (fingerprints) for each vector in space such that those vectors that are closer to each other are more likely to have similar fingerprints. LSH functions are generally based on randomized algorithms and are probabilistic. We present LSH algorithms that can help reduce the time complexity of calculating our distance similarity atrix to nk. Rabin (1981) proposed the use of hash functions from random irreducible polynomials to create short fingerprint representations for very large strings. These hash function had the nice property that the fingerprint of two identical strings had the same fingerprints, while dissimilar strings had different fingerprints with a very small probability of collision. Broder (1997) first introduced LSH. He proposed the use of Min-wise independent functions to create fingerprints that preserved the Jaccard sim622 ilarity between every pair of vectors. These techniques are used today, for example, to eliminate duplicate web pages. 
Charikar (2002) proposed the use of random hyperplanes to generate an LSH function that preserves the cosine similarity between every pair of vectors. Interestingly, cosine similarity is widely used in NLP for various applications such as clustering. In this paper, we perform high speed similarity list creation for nouns collected from a huge web corpus. We linearize this step by using the LSH proposed by Charikar (2002). This reduction in complexity of similarity computation makes it possible to address vastly larger datasets, at the cost, as shown in Section 5, of only little reduction in accuracy. In our experiments, we generate a similarity list for each noun extracted from 70 million page web corpus. Although the NLP community has begun experimenting with the web, we know of no work in published literature that has applied complex language analysis beyond IR and simple surface-level pattern matching. 2 Theory The core theory behind the implementation of fast cosine similarity calculation can be divided into two parts: 1. Developing LSH functions to create signatures; 2. Using fast search algorithm to find nearest neighbors. We describe these two components in greater detail in the next subsections. 2.1 LSH Function Preserving Cosine Similarity We first begin with the formal definition of cosine similarity. Definition: Let u and v be two vectors in a k dimensional hyperplane. Cosine similarity is defined as the cosine of the angle between them: cos(θ(u, v)). We can calculate cos(θ(u, v)) by the following formula: cos(θ(u, v)) = |u.v| |u||v| (1) Here θ(u, v) is the angle between the vectors u and v measured in radians. |u.v| is the scalar (dot) product of u and v, and |u| and |v| represent the length of vectors u and v respectively. The LSH function for cosine similarity as proposed by Charikar (2002) is given by the following theorem: Theorem: Suppose we are given a collection of vectors in a k dimensional vector space (as written as Rk). Choose a family of hash functions as follows: Generate a spherically symmetric random vector r of unit length from this k dimensional space. We define a hash function, hr, as: hr(u) = 1 : r.u ≥0 0 : r.u < 0 (2) Then for vectors u and v, Pr[hr(u) = hr(v)] = 1 −θ(u, v) π (3) Proof of the above theorem is given by Goemans and Williamson (1995). We rewrite the proof here for clarity. The above theorem states that the probability that a random hyperplane separates two vectors is directly proportional to the angle between the two vectors (i,e., θ(u, v)). By symmetry, we have Pr[hr(u) ̸= hr(v)] = 2Pr[u.r ≥0, v.r < 0]. This corresponds to the intersection of two half spaces, the dihedral angle between which is θ. Thus, we have Pr[u.r ≥0, v.r < 0] = θ(u, v)/2π. Proceeding we have Pr[hr(u) ̸= hr(v)] = θ(u, v)/π and Pr[hr(u) = hr(v)] = 1 −θ(u, v)/π. This completes the proof. Hence from equation 3 we have, cos(θ(u, v)) = cos((1 −Pr[hr(u) = hr(v)])π) (4) This equation gives us an alternate method for finding cosine similarity. Note that the above equation is probabilistic in nature. Hence, we generate a large (d) number of random vectors to achieve the process. Having calculated hr(u) with d random vectors for each of the vectors u, we apply equation 4 to find the cosine distance between two vectors. As we generate more number of random vectors, we can estimate the cosine similarity between two vectors more accurately. 
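As a concrete illustration of equations 2–4 (not code from the paper; a minimal sketch assuming dense NumPy vectors and a modest signature length d), the random hyperplanes can be drawn from a Gaussian, the signature of each vector is the sign pattern of its projections, and the cosine is recovered from the fraction of agreeing bits:

```python
import numpy as np

def lsh_signatures(vectors, d, seed=0):
    """Return an (n, d) boolean matrix of random-hyperplane signatures.

    Each column corresponds to one spherically symmetric random vector r;
    bit j of a signature is 1 iff r_j . u >= 0 (equation 2).
    """
    rng = np.random.default_rng(seed)
    k = vectors.shape[1]
    # Gaussian components give a spherically symmetric random direction.
    hyperplanes = rng.standard_normal((k, d))
    return vectors @ hyperplanes >= 0

def approx_cosine(sig_u, sig_v):
    """Estimate cos(theta(u, v)) from two d-bit signatures (equation 4)."""
    d = sig_u.shape[0]
    hamming = np.count_nonzero(sig_u != sig_v)
    # Pr[agreement] = 1 - hamming/d, so cos(theta) = cos(pi * hamming / d).
    return np.cos(np.pi * hamming / d)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    u, v = rng.random(500), rng.random(500)
    true_cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    sigs = lsh_signatures(np.vstack([u, v]), d=3000)
    print(true_cos, approx_cosine(sigs[0], sigs[1]))
```

With d in the low thousands the estimate is typically within a few hundredths of the exact cosine, which is consistent with the error figures reported in Section 5.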
However, in practice, the number (d) of random vectors required is highly domain dependent, i.e., it depends on the value of the total number of vectors (n), features (k) and the way the vectors are distributed. Using d random vectors, we 623 can represent each vector by a bit stream of length d. Carefully looking at equation 4, we can observe that Pr[hr(u) = hr(v)] = 1 − (hamming distance)/d1 . Thus, the above theorem, converts the problem of finding cosine distance between two vectors to the problem of finding hamming distance between their bit streams (as given by equation 4). Finding hamming distance between two bit streams is faster and highly memory efficient. Also worth noting is that this step could be considered as dimensionality reduction wherein we reduce a vector in k dimensions to that of d bits while still preserving the cosine distance between them. 2.2 Fast Search Algorithm To calculate the fast hamming distance, we use the search algorithm PLEB (Point Location in Equal Balls) first proposed by Indyk and Motwani (1998). This algorithm was further improved by Charikar (2002). This algorithm involves random permutations of the bit streams and their sorting to find the vector with the closest hamming distance. The algorithm given in Charikar (2002) is described to find the nearest neighbor for a given vector. We modify it so that we are able to find the top B closest neighbor for each vector. We omit the math of this algorithm but we sketch its procedural details in the next section. Interested readers are further encouraged to read Theorem 2 from Charikar (2002) and Section 3 from Indyk and Motwani (1998). 3 Algorithmic Implementation In the previous section, we introduced the theory for calculation of fast cosine similarity. We implement it as follows: 1. Initially we are given n vectors in a huge k dimensional space. Our goal is to find all pairs of vectors whose cosine similarity is greater than a particular threshold. 2. Choose d number of (d << k) unit random vectors {r0, r1, ......, rd} each of k dimensions. A k dimensional unit random vector, in general, is generated by independently sampling a 1Hamming distance is the number of bits which differ between two binary strings. Gaussian function with mean 0 and variance 1, k number of times. Each of the k samples is used to assign one dimension to the random vector. We generate a random number from a Gaussian distribution by using Box-Muller transformation (Box and Muller, 1958). 3. For every vector u, we determine its signature by using the function hr(u) (as given by equation 4). We can represent the signature of vector u as: ¯u = {hr1(u), hr2(u), ......., hrd(u)}. Each vector is thus represented by a set of a bit streams of length d. Steps 2 and 3 takes O(nk) time (We can assume d to be a constant since d << k). 4. The previous step gives n vectors, each of them represented by d bits. For calculation of fast hamming distance, we take the original bit index of all vectors and randomly permute them (see Appendix A for more details on random permutation functions). A random permutation can be considered as random jumbling of the bits of each vector2. A random permutation function can be approximated by the following function: π(x) = (ax + b)mod p (5) where, p is prime and 0 < a < p , 0 ≤b < p, and a and b are chosen at random. We apply q different random permutation for every vector (by choosing random values for a and b, q number of times). 
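Steps 2–4 can be sketched as follows (again an illustration rather than the authors' implementation, assuming boolean NumPy signature matrices). The permutation of equation 5 requires p to be prime, so p is taken here as the smallest prime at or above d and the signatures are padded with constant zero bits up to length p, which leaves all pairwise Hamming distances unchanged:

```python
import numpy as np

def next_prime(n):
    """Smallest prime >= n (trial division is fine for few-thousand-bit signatures)."""
    def is_prime(m):
        if m < 2:
            return False
        i = 2
        while i * i <= m:
            if m % i == 0:
                return False
            i += 1
        return True
    while not is_prime(n):
        n += 1
    return n

def affine_permutations(d, q, seed=0):
    """q random permutations of bit positions, pi(x) = (a*x + b) mod p (equation 5)."""
    rng = np.random.default_rng(seed)
    p = next_prime(d)
    perms = []
    for _ in range(q):
        a = int(rng.integers(1, p))   # 0 < a < p, so a is coprime to the prime p
        b = int(rng.integers(0, p))   # 0 <= b < p
        perms.append((np.arange(p) * a + b) % p)
    return p, perms

def permute_signatures(signatures, p, perm):
    """Jumble the bit columns of an (n, d) signature matrix.

    Column j of the output is original (zero-padded) column perm[j]; since a
    permutation's inverse is itself a permutation, this is a valid reordering.
    """
    n, d = signatures.shape
    padded = np.zeros((n, p), dtype=bool)
    padded[:, :d] = signatures
    return padded[:, perm]
```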
Thus for every vector we have q different bit permutations for the original bit stream. 5. For each permutation function π, we lexicographically sort the list of n vectors (whose bit streams are permuted by the function π) to obtain a sorted list. This step takes O(nlogn) time. (We can assume q to be a constant). 6. For each sorted list (performed after applying the random permutation function π), we calculate the hamming distance of every vector with 2The jumbling is performed by a mapping of the bit index as directed by the random permutation function. For a given permutation, we reorder the bit indexes of all vectors in similar fashion. This process could be considered as column reording of bit vectors. 624 B of its closest neighbors in the sorted list. If the hamming distance is below a certain predetermined threshold, we output the pair of vectors with their cosine similarity (as calculated by equation 4). Thus, B is the beam parameter of the search. This step takes O(n), since we can assume B, q, d to be a constant. Why does the fast hamming distance algorithm work? The intuition is that the number of bit streams, d, for each vector is generally smaller than the number of vectors n (ie. d << n). Thus, sorting the vectors lexicographically after jumbling the bits will likely bring vectors with lower hamming distance closer to each other in the sorted lists. Overall, the algorithm takes O(nk +nlogn) time. However, for noun clustering, we generally have the number of nouns, n, smaller than the number of features, k. (i.e., n < k). This implies logn << k and nlogn << nk. Hence the time complexity of our algorithm is O(nk + nlogn) ≈O(nk). This is a huge saving from the original O(n2k) algorithm. In the next section, we proceed to apply this technique for generating noun similarity lists. 4 Building Noun Similarity Lists A lot of work has been done in the NLP community on clustering words according to their meaning in text (Hindle, 1990; Lin, 1998). The basic intuition is that words that are similar to each other tend to occur in similar contexts, thus linking the semantics of words with their lexical usage in text. One may ask why is clustering of words necessary in the first place? There may be several reasons for clustering, but generally it boils down to one basic reason: if the words that occur rarely in a corpus are found to be distributionally similar to more frequently occurring words, then one may be able to make better inferences on rare words. However, to unleash the real power of clustering one has to work with large amounts of text. The NLP community has started working on noun clustering on a few gigabytes of newspaper text. But with the rapidly growing amount of raw text available on the web, one could improve clustering performance by carefully harnessing its power. A core component of most clustering algorithms used in the NLP community is the creation of a similarity matrix. These algorithms are of complexity O(n2k), where n is the number of unique nouns and k is the feature set length. These algorithms are thus not readily scalable, and limit the size of corpus manageable in practice to a few gigabytes. Clustering algorithms for words generally use the cosine distance for their similarity calculation (Salton and McGill, 1983). Hence instead of using the usual naive cosine distance calculation between every pair of words we can use the algorithm described in Section 3 to make noun clustering web scalable. 
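Steps 5 and 6 then reduce to sorting each permuted bit matrix and comparing every signature only with a small window of neighbours in the sorted order. The following sketch is illustrative (the beam and Hamming threshold are arbitrary choices, and signatures, permutations and p are assumed to come from helpers like those above); it returns candidate pairs with their estimated cosines:

```python
import numpy as np

def similar_pairs(signatures, permutations, p, beam=25, max_hamming=300):
    """Approximate all-pairs similarity search (steps 5 and 6).

    For every random permutation, the permuted signatures are sorted
    lexicographically and each one is compared with its `beam` successors.
    Pairs whose Hamming distance is at most `max_hamming` are reported
    together with the cosine estimated from equation 4.
    """
    n, d = signatures.shape
    padded = np.zeros((n, p), dtype=bool)
    padded[:, :d] = signatures
    results = {}
    for perm in permutations:
        shuffled = padded[:, perm]
        # packbits keeps bit order, so comparing the byte strings is a
        # lexicographic comparison of the permuted bit streams.
        keys = [np.packbits(row).tobytes() for row in shuffled]
        order = sorted(range(n), key=keys.__getitem__)
        for pos, i in enumerate(order):
            for j in order[pos + 1 : pos + 1 + beam]:
                if (i, j) in results or (j, i) in results:
                    continue
                hamming = int(np.count_nonzero(signatures[i] != signatures[j]))
                if hamming <= max_hamming:
                    results[(i, j)] = float(np.cos(np.pi * hamming / d))
    return results
```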
To test our algorithm we conduct similarity based experiments on 2 different types of corpus: 1. Web Corpus (70 million web pages, 138GB), 2. Newspaper Corpus (6 GB newspaper corpus) 4.1 Web Corpus We set up a spider to download roughly 70 million web pages from the Internet. Initially, we use the links from Open Directory project3 as seed links for our spider. Each webpage is stripped of HTML tags, tokenized, and sentence segmented. Each document is language identified by the software TextCat4 which implements the paper by Cavnar and Trenkle (1994). We retain only English documents. The web contains a lot of duplicate or near-duplicate documents. Eliminating them is critical for obtaining better representation statistics from our collection. The problem of identifying near duplicate documents in linear time is not trivial. We eliminate duplicate and near duplicate documents by using the algorithm described by Kolcz et al. (2004). This process of duplicate elimination is carried out in linear time and involves the creation of signatures for each document. Signatures are designed so that duplicate and near duplicate documents have the same signature. This algorithm is remarkably fast and has high accuracy. This entire process of removing non English documents and duplicate (and near-duplicate) documents reduces our document set from 70 million web pages to roughly 31 million web pages. This represents roughly 138GB of uncompressed text. We identify all the nouns in the corpus by using a noun phrase identifier. For each noun phrase, we identify the context words surrounding it. Our context window length is restricted to two words to 3http://www.dmoz.org/ 4http://odur.let.rug.nl/∼vannoord/TextCat/ 625 Table 1: Corpus description Corpus Newspaper Web Corpus Size 6GB 138GB Unique Nouns 65,547 655,495 Feature size 940,154 1,306,482 the left and right of each noun. We use the context words as features of the noun vector. 4.2 Newspaper Corpus We parse a 6 GB newspaper (TREC9 and TREC2002 collection) corpus using the dependency parser Minipar (Lin, 1994). We identify all nouns. For each noun we take the grammatical context of the noun as identified by Minipar5. We do not use grammatical features in the web corpus since parsing is generally not easily web scalable. This kind of feature set does not seem to affect our results. Curran and Moens (2002) also report comparable results for Minipar features and simple word based proximity features. Table 1 gives the characteristics of both corpora. Since we use grammatical context, the feature set is considerably larger than the simple word based proximity feature set for the newspaper corpus. 4.3 Calculating Feature Vectors Having collected all nouns and their features, we now proceed to construct feature vectors (and values) for nouns from both corpora using mutual information (Church and Hanks, 1989). We first construct a frequency count vector C(e) = (ce1, ce2, ..., cek), where k is the total number of features and cef is the frequency count of feature f occurring in word e. Here, cef is the number of times word e occurred in context f. We then construct a mutual information vector MI(e) = (mie1, mie2, ..., miek) for each word e, where mief is the pointwise mutual information between word e and feature f, which is defined as: mief = log cef N Pn i=1 cif N × Pk j=1 cej N (6) where n is the number of words and N = 5We perform this operation so that we can compare the performance of our system to that of Pantel and Lin (2002). 
Pn i=1 Pm j=1 cij is the total frequency count of all features of all words. Having thus obtained the feature representation of each noun we can apply the algorithm described in Section 3 to discover similarity lists. We report results in the next section for both corpora. 5 Evaluation Evaluating clustering systems is generally considered to be quite difficult. However, we are mainly concerned with evaluating the quality and speed of our high speed randomized algorithm. The web corpus is used to show that our framework is webscalable, while the newspaper corpus is used to compare the output of our system with the similarity lists output by an existing system, which are calculated using the traditional formula as given in equation 1. For this base comparison system we use the one built by Pantel and Lin (2002). We perform 3 kinds of evaluation: 1. Performance of Locality Sensitive Hash Function; 2. Performance of fast Hamming distance search algorithm; 3. Quality of final similarity lists. 5.1 Evaluation of Locality sensitive Hash function To perform this evaluation, we randomly choose 100 nouns (vectors) from the web collection. For each noun, we calculate the cosine distance using the traditional slow method (as given by equation 1), with all other nouns in the collection. This process creates similarity lists for each of the 100 vectors. These similarity lists are cut off at a threshold of 0.15. These lists are considered to be the gold standard test set for our evaluation. For the above 100 chosen vectors, we also calculate the cosine similarity using the randomized approach as given by equation 4 and calculate the mean squared error with the gold standard test set using the following formula: errorav = sX i (CSreal,i −CScalc,i)2/total (7) where CSreal,i and CScalc,i are the cosine similarity scores calculated using the traditional (equation 1) and randomized (equation 4) technique re626 Table 2: Error in cosine similarity Number of random vectors d Average error in cosine similarity Time (in hours) 1 1.0000 0.4 10 0.4432 0.5 100 0.1516 3 1000 0.0493 24 3000 0.0273 72 10000 0.0156 241 spectively. i is the index over all pairs of elements that have CSreal,i >= 0.15 We calculate the error (errorav) for various values of d, the total number of unit random vectors r used in the process. The results are reported in Table 26. As we generate more random vectors, the error rate decreases. For example, generating 10 random vectors gives us a cosine error of 0.4432 (which is a large number since cosine similarity ranges from 0 to 1.) However, generation of more random vectors leads to reduction in error rate as seen by the values for 1000 (0.0493) and 10000 (0.0156). But as we generate more random vectors the time taken by the algorithm also increases. We choose d = 3000 random vectors as our optimal (time-accuracy) cut off. It is also very interesting to note that by using only 3000 bits for each of the 655,495 nouns, we are able to measure cosine similarity between every pair of them to within an average error margin of 0.027. This algorithm is also highly memory efficient since we can represent every vector by only a few thousand bits. Also the randomization process makes the the algorithm easily parallelizable since each processor can independently contribute a few bits for every vector. 
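Returning briefly to the feature construction of Section 4.3, the mutual-information weighting of equation 6 can be written compactly as follows. This is a sketch only: it uses a dense count matrix for brevity, whereas the real noun-by-feature matrix would be sparse, and it maps unseen word/feature pairs to zero rather than minus infinity.

```python
import numpy as np

def pmi_vectors(counts):
    """Turn an (n words x k features) count matrix into PMI vectors (equation 6).

    mi[e, f] = log( (c_ef / N) / ( (sum_i c_if / N) * (sum_j c_ej / N) ) ),
    where N is the total frequency count over all words and features.
    """
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()
    feature_totals = counts.sum(axis=0, keepdims=True)   # sum_i c_if
    word_totals = counts.sum(axis=1, keepdims=True)      # sum_j c_ej
    with np.errstate(divide="ignore", invalid="ignore"):
        mi = np.log((counts / N) / ((feature_totals / N) * (word_totals / N)))
    # Zero counts would give -inf; treat them as uninformative instead.
    mi[~np.isfinite(mi)] = 0.0
    return mi
```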
5.2 Evaluation of Fast Hamming Distance Search Algorithm We initially obtain a list of bit streams for all the vectors (nouns) from our web corpus using the randomized algorithm described in Section 3 (Steps 1 to 3). The next step involves the calculation of hamming distance. To evaluate the quality of this search algorithm we again randomly choose 100 vectors (nouns) from our collection. For each of these 100 vectors we manually calculate the hamming distance 6The time is calculated for running the algorithm on a single Pentium IV processor with 4GB of memory with all other vectors in the collection. We only retain those pairs of vectors whose cosine distance (as manually calculated) is above 0.15. This similarity list is used as the gold standard test set for evaluating our fast hamming search. We then apply the fast hamming distance search algorithm as described in Section 3. In particular, it involves steps 3 to 6 of the algorithm. We evaluate the hamming distance with respect to two criteria: 1. Number of bit index random permutations functions q; 2. Beam search parameter B. For each vector in the test collection, we take the top N elements from the gold standard similarity list and calculate how many of these elements are actually discovered by the fast hamming distance algorithm. We report the results in Table 3 and Table 4 with beam parameters of (B = 25) and (B = 100) respectively. For each beam, we experiment with various values for q, the number of random permutation function used. In general, by increasing the value for beam B and number of random permutation q , the accuracy of the search algorithm increases. For example in Table 4 by using a beam B = 100 and using 1000 random bit permutations, we are able to discover 72.8% of the elements of the Top 100 list. However, increasing the values of q and B also increases search time. With a beam (B) of 100 and the number of random permutations equal to 100 (i.e., q = 1000) it takes 570 hours of processing time on a single Pentium IV machine, whereas with B = 25 and q = 1000, reduces processing time by more than 50% to 240 hours. We could not calculate the total time taken to build noun similarity list using the traditional technique on the entire corpus. However, we estimate that its time taken would be at least 50,000 hours (and perhaps even more) with a few of Terabytes of disk space needed. This is a very rough estimate. The experiment was infeasible. This estimate assumes the widely used reverse indexing technique, where in one compares only those vector pairs that have at least one feature in common. 5.3 Quality of Final Similarity Lists For evaluating the quality of our final similarity lists, we use the system developed by Pantel and Lin (2002) as gold standard on a much smaller data set. We use the same 6GB corpus that was used for train627 Table 3: Hamming search accuracy (Beam B = 25) Random permutations q Top 1 Top 5 Top 10 Top 25 Top 50 Top 100 25 6.1% 4.9% 4.2% 3.1% 2.4% 1.9% 50 6.1% 5.1% 4.3% 3.2% 2.5% 1.9% 100 11.3% 9.7% 8.2% 6.2% 5.7% 5.1% 500 44.3% 33.5% 30.4% 25.8% 23.0% 20.4% 1000 58.7% 50.6% 48.8% 45.0% 41.0% 37.2% Table 4: Hamming search accuracy (Beam B = 100) Random permutations q Top 1 Top 5 Top 10 Top 25 Top 50 Top 100 25 9.2% 9.5% 7.9% 6.4% 5.8% 4.7% 50 15.4% 17.7% 14.6% 12.0% 10.9% 9.0% 100 27.8% 27.2% 23.5% 19.4% 17.9% 16.3% 500 73.1% 67.0% 60.7% 55.2% 53.0% 50.5% 1000 87.6% 84.4% 82.1% 78.9% 75.8% 72.8% ing by Pantel and Lin (2002) so that the results are comparable. 
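The top-N accuracy reported in Tables 3 and 4 is essentially a recall measure over the gold-standard lists. A small sketch of that measure follows; the exact averaging over the 100 test nouns is an assumption on our part, since the paper does not spell it out.

```python
def top_n_overlap(gold_lists, found_lists, n):
    """Average percentage of each gold top-n list recovered by the fast search.

    gold_lists and found_lists map a test noun to a ranked list of neighbours;
    the score for each noun is the share of its gold top-n neighbours that the
    fast Hamming search also returned, averaged over all test nouns.
    """
    scores = []
    for noun, gold in gold_lists.items():
        gold_top = set(gold[:n])
        found = set(found_lists.get(noun, []))
        if gold_top:
            scores.append(100.0 * len(gold_top & found) / len(gold_top))
    return sum(scores) / len(scores) if scores else 0.0
```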
We randomly choose 100 nouns and calculate the top N elements closest to each noun in the similarity lists using the randomized algorithm described in Section 3. We then compare this output to the one provided by the system of Pantel and Lin (2002). For every noun in the top N list generated by our system we calculate the percentage overlap with the gold standard list. Results are reported in Table 5. The results shows that we are able to retrieve roughly 70% of the gold standard similarity list. In Table 6, we list the top 10 most similar words for some nouns, as examples, from the web corpus. 6 Conclusion NLP researchers have just begun leveraging the vast amount of knowledge available on the web. By searching IR engines for simple surface patterns, many applications ranging from word sense disambiguation, question answering, and mining semantic resources have already benefited. However, most language analysis tools are too infeasible to run on the scale of the web. A case in point is generating noun similarity lists using co-occurrence statistics, which has quadratic running time on the input size. In this paper, we solve this problem by presenting a randomized algorithm that linearizes this task and limits memory requirements. Experiments show that our method generates cosine similarities between pairs of nouns within a score of 0.03. In many applications, researchers have shown that more data equals better performance (Banko and Brill, 2001; Curran and Moens, 2002). Moreover, at the web-scale, we are no longer limited to a snapshot in time, which allows broader knowledge to be learned and processed. Randomized algorithms provide the necessary speed and memory requirements to tap into terascale text sources. We hope that randomized algorithms will make other NLP tools feasible at the terascale and we believe that many algorithms will benefit from the vast coverage of our newly created noun similarity list. Acknowledgement We wish to thank USC Center for High Performance Computing and Communications (HPCC) for helping us use their cluster computers. References Banko, M. and Brill, E. 2001. Mitigating the paucity of dataproblem. In Proceedings of HLT. 2001. San Diego, CA. Box, G. E. P. and M. E. Muller 1958. Ann. Math. Stat. 29, 610–611. Broder, Andrei 1997. On the Resemblance and Containment of Documents. Proceedings of the Compression and Complexity of Sequences. Cavnar, W. B. and J. M. Trenkle 1994. N-Gram-Based Text Categorization. In Proceedings of Third Annual Symposium on Document Analysis and Information Retrieval, Las Vegas, NV, UNLV Publications/Reprographics, 161–175. 628 Table 5: Final Quality of Similarity Lists Top 1 Top 5 Top 10 Top 25 Top 50 Top 100 Accuracy 70.7% 71.9% 72.2% 71.7% 71.2% 71.1% Table 6: Sample Top 10 Similarity Lists JUST DO IT computer science TSUNAMI Louis Vuitton PILATES HAVE A NICE DAY mechanical engineering tidal wave PRADA Tai Chi FAIR AND BALANCED electrical engineering LANDSLIDE Fendi Cardio POWER TO THE PEOPLE chemical engineering EARTHQUAKE Kate Spade SHIATSU NEVER AGAIN Civil Engineering volcanic eruption VUITTON Calisthenics NO BLOOD FOR OIL ECONOMICS HAILSTORM BURBERRY Ayurveda KINGDOM OF HEAVEN ENGINEERING Typhoon GUCCI Acupressure If Texas Wasn’t Biology Mudslide Chanel Qigong BODY OF CHRIST environmental science windstorm Dior FELDENKRAIS WE CAN PHYSICS HURRICANE Ferragamo THERAPEUTIC TOUCH Weld with your mouse information science DISASTER Ralph Lauren Reflexology Charikar, Moses 2002. 
Similarity Estimation Techniques from Rounding Algorithms In Proceedings of the 34th Annual ACM Symposium on Theory of Computing. Church, K. and Hanks, P. 1989. Word association norms, mutual information, and lexicography. In Proceedings of ACL89. pp. 76–83. Vancouver, Canada. Curran, J. and Moens, M. 2002. Scaling context space. In Proceedings of ACL-02 pp 231–238, Philadelphia, PA. Goemans, M. X. and D. P. Williamson 1995. Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming. JACM 42(6): 1115–1145. Hindle, D. 1990. Noun classification from predicate-argument structures. In Proceedings of ACL-90. pp. 268–275. Pittsburgh, PA. Lin, D. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING/ACL-98. pp. 768–774. Montreal, Canada. Indyk, P., Motwani, R. 1998. Approximate nearest neighbors: towards removing the curse of dimensionality Proceedings of 30th STOC, 604–613. A. Kolcz, A. Chowdhury, J. Alspector 2004. Improved robustness of signature-based near-replica detection via lexicon randomization. Proceedings of ACM-SIGKDD (2004). Lin, D. 1994 Principar - an efficient, broad-coverage, principle-based parser. Proceedings of COLING-94, pp. 42– 48. Kyoto, Japan. Pantel, Patrick and Dekang Lin 2002. Discovering Word Senses from Text. In Proceedings of SIGKDD-02, pp. 613– 619. Edmonton, Canada Rabin, M. O. 1981. Fingerprinting by random polynomials. Center for research in Computing technology , Harvard University, Report TR-15-81. Salton, G. and McGill, M. J. 1983. Introduction to Modern Information Retrieval. McGraw Hill. Appendix A. Random Permutation Functions We define [n] = {0, 1, 2, ..., n −1}. [n] can thus be considered as a set of integers from 0 to n −1. Let π : [n] →[n] be a permutation function chosen at random from the set of all such permutation functions. Consider π : [4] →[4]. A permutation function π is a one to one mapping from the set of [4] to the set of [4]. Thus, one possible mapping is: π : {0, 1, 2, 3} →{3, 2, 1, 0} Here it means: π(0) = 3, π(1) = 2, π(2) = 1, π(3) = 0 Another possible mapping would be: π : {0, 1, 2, 3} →{3, 0, 1, 2} Here it means: π(0) = 3, π(1) = 0, π(2) = 1, π(3) = 2 Thus for the set [4] there would be 4! = 4∗3∗2 = 24 possibilities. In general, for a set [n] there would be n! unique permutation functions. Choosing a random permutation function amounts to choosing one of n! such functions at random. 629 | 2005 | 77 |
Proceedings of the 43rd Annual Meeting of the ACL, pages 58–65, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Empirically-based Control of Natural Language Generation Daniel S. Paiva Roger Evans Department of Informatics Information Technology Research Institute University of Sussex University of Brighton Brighton, UK Brighton, UK [email protected] [email protected] Abstract In this paper we present a new approach to controlling the behaviour of a natural language generation system by correlating internal decisions taken during free generation of a wide range of texts with the surface stylistic characteristics of the resulting outputs, and using the correlation to control the generator. This contrasts with the generate-andtest architecture adopted by most previous empirically-based generation approaches, offering a more efficient, generic and holistic method of generator control. We illustrate the approach by describing a system in which stylistic variation (in the sense of Biber (1988)) can be effectively controlled during the generation of short medical information texts. 1 Introduction This paper1 is concerned with the problem of controlling the output of natural language generation (NLG) systems. In many application scenarios the generator’s task is underspecified, resulting in multiple possible solutions (texts expressing the desired content), all equally good to the generator, but not equally appropriate for the application. Customising the generator directly to overcome this generally leads to ad-hoc, non-reusable solutions. A more modular approach is a generate-andtest architecture, in which all solutions are generated, and then ranked or otherwise selected according to their appropriateness in a separate post 1 Paiva and Evans (2004) provides an overview of our framework and detailed comparison with previous approaches to stylistic control (like Hovy (1988), Green and DiMarco (1993) and Langkilde-Geary (2002)). This paper provides a more detailed account of the system and reports additional experimental results. process. Such architectures have been particularly prominent in the recent development of empirically-based approaches to NLG, where generator outputs can be selected according to application requirements acquired directly from human subjects (e.g. Walker et al. (2002)) or statistically from a corpus (e.g. Langkilde-Geary (2002)). However, this approach suffers from a number of drawbacks: 1. It requires generation of all, or at least many solutions (often hundreds of thousands), expensive both in time and space, and liable to lead to unnecessary interactions with other components (e.g. knowledge bases) in complex systems. Recent advances in the use of packed representations ameliorate some of these issues, but the basic need to compare a large number of solutions in order to rank them remains. 2. The ‘test’ component generally does not give fine-grained control — for example, in a statistically-based system it typically measures how close a text is to some single notion of ideal (actually, statistically average) output. 3. Use of an external filter does not combine well with any control mechanisms within the generator: e.g. controlling combinatorial explosion of modifier attachment or adjective order. In this paper we present an empirically-based method for controlling a generator which overcomes these deficiencies. 
It controls the generator internally, so that it can produce just one (locally) optimal solution; it employs a model of language variation, so that the generator can be controlled within a multidimensional space of possible variants; its view of the generator is completely holistic, so that it can accommodate any other control mechanisms intrinsic to the generation task. 58 To illustrate our approach we describe a system for controlling ‘style’ in the sense of Biber (1988) during the generation of short texts giving instructions about doses of medicine. The paper continues as follows. In §2 we describe our overall approach. We then present the implemented system (§3) and report on our experimental evaluation (§4). We end with a discussion of conclusions and future directions (§5). 2 Overview of the Approach Our overall approach has two phases: (1) offline calculation of the control parameters, and (2) online application to generation. In the first phase we determine a set of correlation equations, which capture the relationship between surface linguistic features of generated texts and the internal generator decisions that gave rise to those texts (see figure 1). In the second phase, these correlations are used to guide the generator to produce texts with particular surface feature characteristics (see figure 2). corpus linguistic features factor analysis variation dimensions NLG system text CP2 CP1 CPn variation scores variation model correlation analysis correlation equations … generator decisions at different choice points input Figure 1: Offline processing The starting point is a corpus of texts which represents all the variability that we wish to capture. Counts for (surface) linguistic features from the texts in the corpus are obtained, and a factor analysis is used to establish dimensions of variation in terms of these counts: each dimension is defined by a weighted sum of scores for particular features, and factor analysis determines the combination that best accounts for the variability across the whole corpus. This provides a language variation model which can be used to score a new text along each of the identified dimensions, that is, to locate the text in the variation space determined by the corpus. The next step is to take a generator which can generate across the range of variation in the corpus, and identify within it the key choice points (CP1, CP2, … CPn) in its generation of a text. We then allow the generator to freely generate all possible texts from one or more inputs. For each text so generated we record (a) the text’s score according to the variation model and (b) the set of decisions made at each of the selected choice points in the generator. Finally, for a random sample of the generated texts, a statistical correlation analysis is undertaken between the scores and the corresponding generator decisions, resulting in correlation equations which predict likely variation scores from generator decisions. NLG system text in specified style CP2 CP1 CPn correlation equations … target variation score input Figure 2: Online processing In the second phase, the generator is adapted to use the correlation equations to conduct a best-first search of the generation space. As well as the usual input, the generator is supplied with target scores for each dimension of variation. At each choice point, the correlation equations are used to predict which choice is most likely to move closer to the target score for the final text. 
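Concretely, once the factor analysis has been run, the variation model used in both figures reduces to a weighted sum of shallow feature counts per dimension. The sketch below is not the authors' code: the feature names and loadings are invented purely to show the shape of the scoring step, and a real implementation would also standardise the counts before weighting.

```python
# Illustrative only: the real loadings come from the factor analysis over the
# patient-information-leaflet corpus and are not reproduced here.
FACTOR_LOADINGS = {
    "dim1_reader_involvement": {"second_person_pronoun": 0.8,
                                "imperative": 0.6,
                                "agentless_passive": -0.7},
    "dim2_pronominal_reference": {"third_person_pronoun": 0.9,
                                  "auxiliary_verb": 0.4,
                                  "full_noun_phrase": -0.6},
}

def stylistic_scores(feature_counts, loadings=FACTOR_LOADINGS):
    """Score a text on each variation dimension as a weighted sum of feature counts."""
    return {dim: sum(weight * feature_counts.get(feat, 0)
                     for feat, weight in weights.items())
            for dim, weights in loadings.items()}

print(stylistic_scores({"second_person_pronoun": 3, "imperative": 2,
                        "full_noun_phrase": 5}))
```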
This basic architecture makes no commitment to what is meant by ‘variation’, ‘linguistic features’, ‘generator choice points’, or even ‘NLG system’. The key ideas are that a statistical analysis of surface features of a corpus of texts can be used to define a model of variation; this model can then be used to control a generator; and the model can also be used to evaluate the generator’s performance. In the next section we describe a concrete instantiation of this architecture, in which ‘variation’ is stylistic variation as characterised by a collection of shallow lexical and syntactic features. 3 An Implemented System In order to evaluate the effectiveness of this general approach, we implemented a system which attempts to control style of text generated as de59 fined by Biber (1988) in short text (typically 2-3 sentences) describing medicine dosage instructions. 3.1 Factor Analysis Biber characterised style in terms of very shallow linguistic features, such as presence of pronouns, auxiliaries, passives etc. By using factor analysis techniques he was able to determine complex correlations between the occurrence and nonoccurrence of such features in text, which he used to characterise different styles of text.2 We adopted the same basic methodology, applied to a smaller more consistent corpus of just over 300 texts taken from proprietary patient information leaflets. Starting with around 70 surface linguistic features as variables, our factor analysis yielded two main factors (each containing linguistic features grouped in positive and negative correlated subgroups) which we used as our dimensions of variation. We interpreted these dimensions as follows (this is a subjective process — factor analysis does not itself provide any interpretation of factors): dimension 1 ranges from texts that try to involve the reader (high positive score) to text that try to be distant from the reader (high negative score); dimension 2 ranges from texts with more pronominal reference and a higher proportion of certain verbal forms (high positive score) to text that use full nominal reference (high negative score).3 3.2 Generator Architecture The generator was constructed from a mixture of existing components and new implementation, using a fairly standard overall architecture as shown in figure 3. Here, dotted lines show the control flow and the straight lines show data flow — the choice point annotations are described below. The input constructor takes an input specification and, using a background database of medicine information, creates a network of concepts and re 2 Some authors (e.g. Lee (1999)) have criticised Biber for making assumptions about the validity and generalisability of his approach to English language as a whole. Here, however, we use his methodology to characterise whatever variation exists without needing to make any broader claims. 3 Full details of the factor analysis can be found in (Paiva 2000). lations (see figure 4) using a schema-based approach (McKeown, 1985). input constructor split network network ordering referring expression NP pruning realiser initial input networks sentence-size networks subnetwork chosen referring expression net pruned network sentence input specification choice point 1: number of sentences choice point 2: type of referring expression choice point 3: choice of mapping rule Figure 3: Generator architecture with choice points Each network is then split into subnetworks by the split network module. 
This partitions the network by locating ‘proposition’ objects (marked with a double-lined box in figure 4) which have no parent and tracing the subnetwork reachable from each one. We call these subnetworks propnets. In figure 4, there are two propnets, rooted in [1:take] and [9:state] — proposition [15:state] is not a root as it can be reached from [1:take]. A list of all possible groupings of these propnets is obtained4, and one of the possible combinations is passed to the network ordering module. This is the first source of non-determinism in our system, marked as choice point one in figure 3. A combination of subnetworks will be material for the realisation of one paragraph and each subnetwork will be realised as one sentence. 4 For instance, with three propnets (A, B and C) the list of combinations would be [(A,B,C), (A,BC), (AB, C), (AC,B), (ABC)]. 60 2:patient 1:take 3:medicine 12:freq 15:state 13:value(2xday) 4:pres 7:dose 9:state 8:value(2gram) 10:pres 14:pres arg0 arg1 6:of 11:of arg0 arg0 arg0 arg0 arg0 arg0 arg1 arg1 tense tense tense freq 5:patient proxy Figure 4: Example of semantic network produced by the input constructor5 The network ordering module receives a combination of subnetworks and orders them based on the number of common elements between each subnetwork. The strategy is to try to maximise the possibility of having a smooth transition from one sentence to the next in accordance with Centering Theory (Grosz et al., 1995), and so increase the possibility of having a pronoun generated. The referring expression module receives one subnetwork at a time and decides, for each object that is of type [thing], which type of referring expression will be generated. The module is re-used from the Riches system (Cahill et al., 2001) and it generates either a definite description or a pronoun. This is the second source of non-determinism in our system, marked as choice point two in figure 3. Referring expression decisions are recorded by introducing additional nodes into the network, as shown for example in figure 5 (a fragment of the network in figure 4, with the additional nodes). NP pruning is responsible for erasing from a referring expression subnetwork all the nodes that can be transitively reached from a node marked to be pronominalised. This prevents the realiser from trying to express the information twice. In figure 5, [7:dose] is marked to be pronominalised, so the concepts [11:of] and [3:medicine] do not need to be realised, so they are pruned. 5 Although some of the labels in this figure look like words, they bear no direct relation to words in the surface text — for example, ‘of’ may be realised as a genitive construction or a possessive. 3:medicine 7:dose 11:of arg0 arg0 21:pronoun refexp 22:definite refexp Figure 5: Referring expressions and pruning The realiser is a re-implementation of Nicolov’s (1999) generator, extended to use the widecoverage lexicalised grammar developed in the LEXSYS project (Carroll et al., 2000), with further semantic extensions for the present system. It selects grammar rules by matching their semantic patterns to subnetworks of the input, and tries to generate a sentence consuming the whole input. In general there are several rules linking each piece of semantics to its possible realisation, so this is our third, and most prolific, source of non-determinism in the architecture, marked as choice point three in figure 3. 
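Choice point one therefore amounts to enumerating the set partitions of the propnets; footnote 4 lists the five groupings for three propnets. A small illustrative sketch of that enumeration (not taken from the system):

```python
def propnet_groupings(propnets):
    """All ways of grouping propnets into sentence-sized subsets (choice point 1).

    For ['A', 'B', 'C'] this yields, in some order, the five groupings of
    footnote 4: (A,B,C), (A,BC), (AB,C), (AC,B) and (ABC).
    """
    if not propnets:
        yield []
        return
    first, rest = propnets[0], propnets[1:]
    for grouping in propnet_groupings(rest):
        # Put the first propnet into its own group...
        yield [[first]] + grouping
        # ...or merge it into each existing group in turn.
        for i in range(len(grouping)):
            yield grouping[:i] + [[first] + grouping[i]] + grouping[i + 1:]

for g in propnet_groupings(["A", "B", "C"]):
    print(g)
```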
A few examples of outputs for the input represented in figure 4 are: the dose of the patient 's medicine is taken twice a day. it is two grams. the two-gram dose of the patient 's medicine is taken twice a day. the patient takes the two-gram dose of the patient 's medicine twice a day. From a typical input corresponding to 2-3 sentences, this generator will generate over a 1000 different texts. 3.3 Tracing Generator Behaviour In order to control the generator’s behaviour we first allow it to run freely, recording a ‘trace’ of the decisions it makes at each choice point during the production of each text. Although there are only three choice points in figure 3, the control structure included two loops: an outer loop which ranges over the sequence of propnets, generating a sentence for each one, and an inner loop which ranges over subnetworks of a propnet as realisation rules are chosen. So the decision structure for even a small text may be quite complex. In the experiments reported here, the trace of the generation process is simply a record of the number of times each decision (choice point, and what choice was made) occurred. Paiva (2004) discusses more complex tracing models, where the context of each decision (for example, what the preceding decision was) is recorded and used in the correlation. However the best results were obtained using 61 just the simple decision-counting model (perhaps in part due to data sparseness for more complex models). 3.4 Correlating Decisions with Text Features By allowing the generator to freely generate all possible output from a single input, we recorded a set of <trace, text> pairs ranging across the full variation space. From these pairs we derived corresponding <decision-count, factor-score> pairs, to which we applied a very simple correlational technique, multivariate linear regression analysis, which is used to find an estimator function for a linear relationship (i.e., one that can be approximated by a straight line) from the data available for several variables (Weisberg, 1985). In our case we want to predict the value for a score in a stylistic dimension (SSi) based on a configuration of generator decisions (GDj) as seen in equation 1. (eq. 1) SSi = x0 + x1GD1 + … + xnGDn + ε 6 We used three randomly sampled data sets of 1400, 1400 and 5000 observations obtained from a potential base of about 1,400,000 different texts that could be produced by our generator from a single input. With each sample, we obtained a regression equation for each stylistic dimension separately. In the next subsections we will present the final results for each of the dimensions separately. Regression on Stylistic Dimension 1 For the regression model on the first stylistic dimension (SS1), the generator decisions that were used in the regression analysis7 are: imperative with one object sentences (IMP_VNP), V_NP_PP agentless passive sentences (PAS_VNPP), V_NP bypassives (BYPAS_VN), and N_PP clauses (NPP) and these are all decisions that happen in the realiser, i.e., at the third choice point in the architecture. This resulted in the regression equation shown in equation 2. 6 SSi represents a stylistic score and is the dependent variable or criterion in the regression analysis; the GDj’s represent generator decisions and are called the independent variables or predictors; the xj’s are weights, and ε is the error. 7 The process of determining the regression takes care of eliminating the variables (i.e. 
generator decisions) that are not useful to estimate the stylistic dimensions. (eq. 2) SS1 = 6.459 − (1.460∗NPP) − (1.273*BYPAS_VN) − (1.826∗PAS_VNPP) + (1.200∗IMP_VNP)8 The coefficients for the regression on SS1 are unstandardised coefficients, i.e. the ones that are used when dealing with raw counts for the generator decisions. The coefficient of determination (R2), which measures the proportion of the variance of the dependent variable about its mean that is explained by the independent variables, had a reasonably high value (.895)9 and the analysis of variance obtained an F test of 1701.495. One of the assumptions that this technique assumes is the linearity of the relation between the dependent and the independent variables (i.e., in our case, between the stylistic scores in a dimension and the generator decisions). The analysis of the residuals resulted in a graph that had some problems but that resembled a normal graph (see (Paiva, 2004) for more details). Regression on Stylistic Dimension 2 For the regression model on the second stylistic dimension (SS2) the variables that we used were: the number of times a network was split (SPLITNET), generation of a pronoun (RE_PRON), auxiliary verb (VAUX), noun with determiner (NOUN), transitive verb (VNP), and agentless passive (PAS_VNP) — the first type of decision happens in the split network module (our first choice point); the second, in the referring expression module (second choice point); and the rest in the realiser (third choice point). The main results for this model are as follows: the coefficient of determination (R2) was .959 and the analysis of variance obtained an F test of 2298.519. The unstandardised regression coefficients for this model can be seen in eq. 3. (eq. 3) SS2 = − 27.208 − (1.530∗VNP) + (2.002∗RE_PRON) − (.547∗NOUN) + (.356∗VAUX) + (.860∗SPLITNET) + (.213∗PAS_VNP)10 8 This specific equation came from the sample with 5,000 observations — the equations obtained from the other samples are very similar to this one. 9 All the statistical results presented in this paper are significant at the 0.01 level (two-tailed). 10 This specific equation comes from one of the samples of 1,400 observations. 62 With this second model we did not find any problems with the linearity assumptions as the analysis of the residuals gave a normal graph. 4 Controlling the Generator These regression equations characterise the way in which generator decisions influence the final style of the text (as measured by the stylistic factors). In order to control the generator, the user specifies a target stylistic score for each dimension of the text to be generated. At each choice point during generation, all possible decisions are collected in a list and the regression equations are used to order them. The equations allow us to estimate the subsequent values of SS1 and SS2 for each of the possible decisions, and the decisions are ordered according to the distance of the resulting scores from the target scores — the closer the score, the better the decision. Hence the search algorithm that we are using here is the best-first search, i.e., the best local solution according to an evaluation function (which in this case is the Euclidian distance from the target and the resulted value obtained by using the regression equation) is tried first but all the other local solutions are kept in order so backtracking is possible. In this paper we report on tests of two internal aspects of the system11. 
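Fitting equations of the form of (eq. 1) is ordinary least squares over the <decision-count, factor-score> pairs. The sketch below is not the regression analysis actually used in the paper and omits the stepwise elimination of unhelpful predictors; it simply shows the estimation and prediction steps on a NumPy matrix of decision counts.

```python
import numpy as np

def fit_correlation_equation(decision_counts, factor_scores):
    """Least-squares fit of eq. 1: SS_i = x0 + x1*GD1 + ... + xn*GDn.

    decision_counts: (texts x decisions) matrix of raw counts per generated text.
    factor_scores:   (texts,) stylistic scores from the variation model.
    Returns the intercept x0 and the coefficient vector (x1, ..., xn).
    """
    X = np.column_stack([np.ones(len(decision_counts)), decision_counts])
    coeffs, *_ = np.linalg.lstsq(X, factor_scores, rcond=None)
    return coeffs[0], coeffs[1:]

def predict_score(intercept, weights, decision_counts):
    """Estimate a stylistic score from a single vector of decision counts."""
    return float(intercept + np.dot(weights, decision_counts))
```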
First we wish to know how good the generator is at hitting a user-specified target — i.e., how close are the scores given by the regression equations for the first text generated to the user’s input target scores. Second, we wish to know how good the regression equation scores are at modelling the original stylistic factors — i.e., we want to compare the regression scores of an output text with the factor analysis scores. We address these questions across the whole of the twodimensional stylistic space, by specifying a rectangular grid of scores spanning the whole space, and asking the generator to produce texts for each grid point from the same semantic input specification. 11 We are not dealing with external (user) evaluation of the system and of the stylistic dimensions we obtained — this was left for future work. Nonetheless, Sigley (1997) showed that the dimensions obtained with factor analysis and people’s perception have a high correlation. -25 -30 -35 -40 -45 10 8 6 4 2 0 -2 -4 -6 -8 -10 80 79 78 77 76 75 74 73 72 71 70 69 68 67 66 65 64 63 62 61 60 59 58 57 56 55 54 53 52 51 50 49 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 Figure 6: Target scores for the texts In this case we divided the scoring space with an 8 by 10 grid pattern as shown in figure 6.12 Each point specifies the target scores for each text that should be generated (the number next to each point is an identifier of each text). For instance, text number 1 was targeted at coordinate (−7, −44), whereas text number 79 was targeted at coordinate (+7, −28). 4.1 Comparing Target Points and Regression Scores In the first part of this experiment we wanted to know how close to the user-specified target coordinates the resulting regression scores of the first generated text were. This can be done in two different ways. The first is to plot the resulting regression scores (see figure 7) and visually check if it mirrors the grid-shape pattern of the target points (figure 6) — this can be done by inspecting the text identifiers13. This can be a bit misleading because there will always be variation around the target point that was supposed to be achieved (i.e., there is a margin for error) and this can blur the comparison unfavourably. 12 The range for each scale comes from the maximum and minimum values for the factors obtained in the samples of generated texts. 13 Note that some texts obtained the same regression score and, in the statistical package, only one was numbered. Those instances are: 1 and 7; 18 and 24; 22 and 28. 63 -25 -30 -35 -40 -45 10 8 6 4 2 0 -2 -4 -6 -8 -10 80 79 78 77 76 75 74 73 72 70 69 68 67 66 65 64 6362 61 60 59 58 57 56 55 54 53 52 51 50 49 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 76 5 43 2 1 Figure 7: Texts scored by using the regression equation A more formal comparison can be made by plotting the target points versus the regression results for each dimension separately and obtaining a correlation measure between these values. These correlations are shown in figure 8 for SS1 (left) and SS2 (right). 
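The decision-ordering step described at the start of this section can be sketched as follows, using the published equations 2 and 3 as the regression models; the helper itself is illustrative rather than the system's code.

```python
import math

def order_choices(choices, counts_so_far, equations, target):
    """Order candidate decisions by how close their predicted style is to the target.

    choices:        decision labels available at the current choice point.
    counts_so_far:  decision counts accumulated for the text so far.
    equations:      {dimension: (intercept, {decision: weight})} regression models.
    target:         {dimension: desired stylistic score}.
    Returns the choices sorted best-first (smallest Euclidean distance to target).
    """
    def predicted_distance(choice):
        counts = dict(counts_so_far)
        counts[choice] = counts.get(choice, 0) + 1
        dist_sq = 0.0
        for dim, (intercept, weights) in equations.items():
            score = intercept + sum(w * counts.get(d, 0) for d, w in weights.items())
            dist_sq += (score - target[dim]) ** 2
        return math.sqrt(dist_sq)

    return sorted(choices, key=predicted_distance)

# The regression equations 2 and 3 reported above.
equations = {
    "SS1": (6.459, {"NPP": -1.460, "BYPAS_VN": -1.273,
                    "PAS_VNPP": -1.826, "IMP_VNP": 1.200}),
    "SS2": (-27.208, {"VNP": -1.530, "RE_PRON": 2.002, "NOUN": -0.547,
                      "VAUX": 0.356, "SPLITNET": 0.860, "PAS_VNP": 0.213}),
}
print(order_choices(["IMP_VNP", "PAS_VNPP"], {"NOUN": 4},
                    equations, {"SS1": 5.0, "SS2": -30.0}))
```

Because the full sorted list of alternatives is retained at each choice point, backtracking to the next-best decision remains possible, as in the best-first search described above.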
The degree of correlation (R2) between the values of target and regression points is 0.9574 for SS1 and 0.942 for SS2, which means that the search mechanism is working very satisfactorily on both dimensions.14 8 6 4 2 0 -2 -4 -6 -8 -10 8 6 4 2 0 -2 -4 -6 -8 -10 -25 -30 -35 -40 -45 -25 -30 -35 -40 -45 Figure 8: Plotting target points versus regression results on SS1 (left) and SS2 (right) 4.2 Comparing Target Points and Stylistic Scores In the second part of this experiment we wanted to know whether the regression equations were doing the job they were supposed to do by comparing the regression scores with stylistic scores obtained (from the factor analysis) for each of the generated texts. In figure 9 we plotted the texts in a graph in accordance with their stylistic scores (once again, some texts occupy the same point so they do not appear). 14 All the correlational figures (R2) presented for this experiment are significant at the 0.01 level (twotailed). -25 -30 -35 -40 -45 10 8 6 4 2 0 -2 -4 -6 -8 -10 80 79 78 77 76 75 74 73 72 71 70 69 68 67 66 65 64 63 62 61 60 59 58 5756 55 54 53 52 51 50 49 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 76 5 4 3 2 1 Figure 9: Texts scored using the two stylistic dimension obtained in our factor analysis In the ideal situation, the generator would have produced texts with the perfect regression scores and they would be identical to the stylistic scores, so the graph in the figure 9 would be like a gridshape one as in figure 6. However we have already seen in figure 7, that this is not the case for the relation between the target coordinates and the regression scores. So we did not expect the plot of stylistic scores 1 (SS1) against stylistic scores 2 (SS2) to be a perfect grid. Figure 10 (left-hand side) shows the relation between the target points and the scores obtained from the original factor equation of SS1. The value of R2, which represents their correlation, is high (0.9458), considering that this represents the possible accumulation of errors of two stages: from the target to the regression scores, and then from the regression to the actual factor scores. On the right of figure 10 we can see the plotting of the target points and their respective factor scores on SS2. The correlation obtained is also reasonably high (R2 = 0.9109). 10 8 6 4 2 0 -2 -4 -6 -8 -10 10 8 6 4 2 0 -2 -4 -6 -8 -10 -25 -30 -35 -40 -45 -25 -30 -35 -40 -45 Figure 10: Plotting target points versus factor scores on SS1 (left) and SS2 (right) 5 Discussion and Future Work These results demonstrate that it is possible to provide effective control of a generator correlating internal generator behaviour with characteristics of the resulting texts. It is important to note that these 64 two sets of variables (generator decision and surface features) are in principle quite independent of each other. Although in some cases there are strong correlations (for example, the generator’s use of a ‘passive’ rule, correlates with the occurrence of passive participles in the text), in others the relationship is much less direct (for example, the choice of how many subnetworks to split a network into, i.e., SPLITNET, does not correspond to any feature in the factor analysis), and the way individual features combine into significant factors may be quite different. 
Another feature of our approach is that we do not assume some pre-defined notion of parameters of variation – variation is characterised completely by a corpus (in contrast to approaches which use a corpus to characterise a single style). The disadvantage of this is that variation is not grounded in some 'intuitive' notion of style: the interpretation of the stylistic dimensions is subjective and tentative. However, as no comprehensive computationally realisable theory of style yet exists, we believe that this approach has considerable promise for practical, empirically-based stylistic control. The results reported here also make us think that a possible avenue for future work is to explore the issue of what types of problems the generalisation induced by our framework (discussed below) can be applied to. This paper dealt with an application to stylistic variation but, in theory, the approach can be applied to any kind of process for which there is a sorting function that can impose an order, using a measurable scale (e.g., ranking), onto the outputs of another process. Schematically, the approach can be abstracted to any sort of problem of the form shown in figure 11. Here there is a producer process outputting a large number of solutions. There is also a sorter process which will classify those solutions in a certain order. The numerical value associated with the output by the sorter can be correlated with the decisions the producer took to generate the output. The same correlation and control mechanism used in this paper can be introduced in the producer process, making it controllable with respect to the sorting dimension.

[Figure 11: The producer-sorter scheme.]

References
Biber, Douglas (1988) Variation across speech and writing. Cambridge University Press.
Cahill, Lynne; J. Carroll; R. Evans; D. Paiva; R. Power; D. Scott; and K. van Deemter (2001) From RAGS to RICHES: exploiting the potential of a flexible generation architecture. Proceedings of ACL/EACL 2001, pp. 98-105.
Carroll, John; N. Nicolov; O. Shaumyan; M. Smets; and D. Weir (2000) Engineering a wide-coverage lexicalized grammar. Proceedings of the Fifth International Workshop on Tree Adjoining Grammars and Related Frameworks.
Green, Stephen J.; and C. DiMarco (1993) Stylistic decision-making in NLG. In Proceedings of the 4th European Workshop on Natural Language Generation. Pisa, Italy.
Grosz, Barbara J.; A.K. Joshi; and S. Weinstein (1995) Centering: A Framework for Modelling the Local Coherence of Discourse. Institute for Research in Cognitive Science, IRCS-95-01, University of Pennsylvania.
Hovy, Eduard H. (1988) Generating natural language under pragmatic constraints. Lawrence Erlbaum Associates.
Langkilde-Geary, Irene (2002) An empirical verification of coverage and correctness for a general-purpose sentence generator. Proceedings of INLG'02, pp. 17-24.
Lee, David (1999) Modelling Variation in Spoken And Written English: the Multi-Dimensional Approach Revisited. PhD thesis, University of Lancaster, UK.
McKeown, Kathleen R. (1985) Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text. Cambridge University Press.
Nicolov, Nicolas (1999) Approximate Text Generation from Nonhierarchical Representations in a Declarative Framework. PhD Thesis, University of Edinburgh.
Paiva, Daniel S. (2000) Investigating style in a corpus of pharmaceutical leaflets: results of a factor analysis. Proceedings of the Student Workshop of the 38th Annual Meeting of the Association for Computational Linguistics (ACL'2000), Hong Kong, China.
Paiva, Daniel S. (2004) Using Stylistic Parameters to Control a Natural Language Generation System. PhD Thesis, University of Brighton, Brighton, UK.
Paiva, Daniel S.; R. Evans (2004) A Framework for Stylistically Controlled Generation. In Proceedings of the 3rd International Conference on Natural Language Generation (INLG'04). New Forest, UK.
Sigley, Robert (1997) Text categories and where you can stick them: a crude formality index. International Journal of Corpus Linguistics, volume 2, number 2, pp. 199-237.
Walker, Marilyn; O. Rambow, and M. Rogati (2002) Training a Sentence Planner for Spoken Dialogue Using Boosting. Computer Speech and Language, Special Issue on Spoken Language Generation. July.
Weisberg, Sanford (1985) Applied Linear Regression, 2nd edition. John Wiley & Sons.
Proceedings of the 43rd Annual Meeting of the ACL, pages 66–74, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Towards Developing Generation Algorithms for Text-to-Text Applications Radu Soricut and Daniel Marcu Information Sciences Institute University of Southern California 4676 Admiralty Way, Suite 1001 Marina del Rey, CA 90292 radu, marcu @isi.edu Abstract We describe a new sentence realization framework for text-to-text applications. This framework uses IDL-expressions as a representation formalism, and a generation mechanism based on algorithms for intersecting IDL-expressions with probabilistic language models. We present both theoretical and empirical results concerning the correctness and efficiency of these algorithms. 1 Introduction Many of today’s most popular natural language applications – Machine Translation, Summarization, Question Answering – are text-to-text applications. That is, they produce textual outputs from inputs that are also textual. Because these applications need to produce well-formed text, it would appear natural that they are the favorite testbed for generic generation components developed within the Natural Language Generation (NLG) community. Over the years, several proposals of generic NLG systems have been made: Penman (Matthiessen and Bateman, 1991), FUF (Elhadad, 1991), Nitrogen (Knight and Hatzivassiloglou, 1995), Fergus (Bangalore and Rambow, 2000), HALogen (Langkilde-Geary, 2002), Amalgam (Corston-Oliver et al., 2002), etc. Instead of relying on such generic NLG systems, however, most of the current text-to-text applications use other means to address the generation need. In Machine Translation, for example, sentences are produced using application-specific “decoders”, inspired by work on speech recognition (Brown et al., 1993), whereas in Summarization, summaries are produced as either extracts or using task-specific strategies (Barzilay, 2003). The main reason for which text-to-text applications do not usually involve generic NLG systems is that such applications do not have access to the kind of information that the input representation formalisms of current NLG systems require. A machine translation or summarization system does not usually have access to deep subject-verb or verb-object relations (such as ACTOR, AGENT, PATIENT, POSSESSOR, etc.) as needed by Penman or FUF, or even shallower syntactic relations (such as subject, object, premod, etc.) as needed by HALogen. In this paper, following the recent proposal made by Nederhof and Satta (2004), we argue for the use of IDL-expressions as an applicationindependent, information-slim representation language for text-to-text natural language generation. IDL-expressions are created from strings using four operators: concatenation ( ), interleave ( ), disjunction ( ), and lock ( ). We claim that the IDL formalism is appropriate for text-to-text generation, as it encodes meaning only via words and phrases, combined using a set of formally defined operators. Appropriate words and phrases can be, and usually are, produced by the applications mentioned above. The IDL operators have been specifically designed to handle natural constraints such as word choice and precedence, constructions such as phrasal combination, and underspecifications such as free word order. 
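Because the operator symbols have not survived in this copy, a small executable sketch may help make the semantics of the four IDL operators concrete before they are discussed in Section 2. The encoding below (tagged tuples, the helper names, the brute-force enumeration) is purely our own illustrative choice, not the authors' or Nederhof and Satta's formalism; it simply lists the finite language of a toy expression built from the prisoners/captives example used later in the paper.

from itertools import product

def interleavings(seqs):
    # All ways to merge several sequences while preserving each one's order.
    seqs = [s for s in seqs if s]
    if not seqs:
        return [()]
    out = []
    for i, s in enumerate(seqs):
        rest = seqs[:i] + [s[1:]] + seqs[i + 1:]
        for tail in interleavings(rest):
            out.append((s[0],) + tail)
    return out

def language(expr):
    # Returns a set of sequences; each element of a sequence is a "unit",
    # i.e. a tuple of words (locked phrases become single units).
    op, args = expr[0], expr[1:]
    if op == "w":        # a single word
        return {((args[0],),)}
    if op == "or":       # disjunction: union of the argument languages
        return set().union(*(language(a) for a in args))
    if op == "cat":      # concatenation: order-preserving combination
        seqs = {()}
        for a in args:
            seqs = {s + t for s in seqs for t in language(a)}
        return seqs
    if op == "lock":     # lock: collapse each realization into one atomic unit
        return {(tuple(w for unit in s for w in unit),) for s in language(args[0])}
    if op == "il":       # interleave: every order-preserving merge of the arguments
        out = set()
        for picks in product(*(language(a) for a in args)):
            out.update(interleavings(list(picks)))
        return out
    raise ValueError("unknown operator: %r" % op)

def strings(expr):
    return {" ".join(w for unit in s for w in unit) for s in language(expr)}

e = ("il", ("w", "finally"),
     ("cat",
      ("or", ("lock", ("cat", ("w", "the"), ("w", "prisoners"))),
             ("lock", ("cat", ("w", "the"), ("w", "captives")))),
      ("w", "were"), ("w", "released")))

for s in sorted(strings(e)):
    print(s)
# 'finally the prisoners were released' and 'the captives finally were released'
# are in the language; 'the finally captives were released' is not, because the
# lock keeps 'the captives' together.

Real IDL processing avoids this explicit enumeration (that is exactly the point of the cuts and on-demand unfolding introduced later in the paper); the brute-force version only makes the intended meaning of each operator easy to check.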
66 CFGs via intersection with Deterministic Non−deterministic via intersection with probabilistic LMs Word/Phrase based Fergus, Amalgam Nitrogen, HALogen FUF, PENMAN NLG System (Nederhof&Satta 2004) IDL Representation (formalism) Semantic, few meanings Syntactically/ Semantically grounded Syntactic dependencies Representation (computational) Linear Exponential Linear Deterministic Generation (mechanism) Non−deterministic via intersection with probabilistic LMs Non−deterministic via intersection with probabilistic LMs (this paper) IDL Linear Generation (computational) Optimal Solution Efficient Run−time Efficient Run−time Optimal Solution Efficient Run−time All Solutions Efficient Run−time Optimal Solution Linear Linear based Word/Phrase Table 1: Comparison of the present proposal with current NLG systems. In Table 1, we present a summary of the representation and generation characteristics of current NLG systems. We mark by characteristics that are needed/desirable in a generation component for textto-text applications, and by characteristics that make the proposal inapplicable or problematic. For instance, as already argued, the representation formalism of all previous proposals except for IDL is problematic ( ) for text-to-text applications. The IDL formalism, while applicable to text-to-text applications, has the additional desirable property that it is a compact representation, while formalisms such as word-lattices and non-recursive CFGs can have exponential size in the number of words available for generation (Nederhof and Satta, 2004). While the IDL representational properties are all desirable, the generation mechanism proposed for IDL by Nederhof and Satta (2004) is problematic ( ), because it does not allow for scoring and ranking of candidate realizations. Their generation mechanism, while computationally efficient, involves intersection with context free grammars, and therefore works by excluding all realizations that are not accepted by a CFG and including (without ranking) all realizations that are accepted. The approach to generation taken in this paper is presented in the last row in Table 1, and can be summarized as a tiling of generation characteristics of previous proposals (see the shaded area in Table 1). Our goal is to provide an optimal generation framework for text-to-text applications, in which the representation formalism, the generation mechanism, and the computational properties are all needed and desirable ( ). Toward this goal, we present a new generation mechanism that intersects IDL-expressions with probabilistic language models. The generation mechanism implements new algorithms, which cover a wide spectrum of run-time behaviors (from linear to exponential), depending on the complexity of the input. We also present theoretical results concerning the correctness and the efficiency input IDL-expression) of our algorithms. We evaluate these algorithms by performing experiments on a challenging word-ordering task. These experiments are carried out under a highcomplexity generation scenario: find the most probable sentence realization under an n-gram language model for IDL-expressions encoding bags-of-words of size up to 25 (up to 10 possible realizations!). Our evaluation shows that the proposed algorithms are able to cope well with such orders of complexity, while maintaining high levels of accuracy. 
2 The IDL Language for NLG 2.1 IDL-expressions IDL-expressions have been proposed by Nederhof & Satta (2004) (henceforth N&S) as a representation for finite languages, and are created from strings using four operators: concatenation ( ), interleave ( ), disjunction ( ), and lock ( ). The semantics of IDL-expressions is given in terms of sets of strings. The concatenation ( ) operator takes two arguments, and uses the strings encoded by its argument expressions to obtain concatenated strings that respect the order of the arguments; e.g., encodes the singleton set . The
nterleave ( ) operator interleaves the strings encoded by its argument expressions; e.g., encodes the set . The isjunction ( ) operator allows a choice among the strings encoded by its argument expressions; e.g., encodes the set . The ock ( ) operator takes only one argument, and “locks-in” the strings encoded by its argument expression, such that no additional material can be interleaved; e.g., ! " encodes the set . Consider the following IDL-expression: $#&%(')*)$+, !.-0/1 *2(354768 %13"6" 9.-0/1 :"'"2-40; 16 < 131 =31)>1?'61?@ .A The concatenation ( ) operator captures precedence constraints, such as the fact that a determiner like 67 the appears before the noun it determines. The lock ( ) operator enforces phrase-encoding constraints, such as the fact that the captives is a phrase which should be used as a whole. The disjunction ( ) operator allows for multiple word/phrase choice (e.g., the prisoners versus the captives), and the interleave ( ) operator allows for word-order freedom, i.e., word order underspecification at meaning representation level. Among the strings encoded by IDLexpression 1 are the following: finally the prisoners were released the captives finally were released the prisoners were finally released The following strings, however, are not part of the language defined by IDL-expression 1: the finally captives were released the prisoners were released finally the captives released were The first string is disallowed because the operator locks the phrase the captives. The second string is not allowed because the operator requires all its arguments to be represented. The last string violates the order imposed by the precedence operator between were and released. 2.2 IDL-graphs IDL-expressions are a convenient way to compactly represent finite languages. However, IDLexpressions do not directly allow formulations of algorithms to process them. For this purpose, an equivalent representation is introduced by N&S, called IDL-graphs. We refer the interested reader to the formal definition provided by N&S, and provide here only an intuitive description of IDL-graphs. We illustrate in Figure 1 the IDL-graph corresponding to IDL-expression 1. In this graph, vertices and are called initial and final, respectively. Vertices , with in-going -labeled edges, and , with out-going -labeled edges, for example, result from the expansion of the operator, while vertices ,
with in-going -labeled edges, and , with out-going -labeled edges result from the expansion of the operator. Vertices to and to result from the expansion of the two operators, respectively. These latter vertices are also shown to have rank 1, as opposed to rank 0 (not shown) assigned to all other vertices. The ranking of vertices in an IDL-graph is needed to enforce a higher priority on the processing of the higher-ranked vertices, such that the desired semantics for the lock operator is preserved. With each IDL-graph we can associate a finite language: the set of strings that can be generated by an IDL-specific traversal of , starting from and ending in . An IDL-expression and its corresponding IDL-graph are said to be equivalent because they generate the same finite language, denoted . 2.3 IDL-graphs and Finite-State Acceptors To make the connection with the formulation of our algorithms, in this section we link the IDL formalism with the more classical formalism of finite-state acceptors (FSA) (Hopcroft and Ullman, 1979). The FSA representation can naturally encode precedence and multiple choice, but it lacks primitives corresponding to the interleave ( ) and lock ( ) operators. As such, an FSA representation must explicitly enumerate all possible interleavings, which are implicitly captured in an IDL representation. This correspondence between implicit and explicit interleavings is naturally handled by the notion of a cut of an IDL-graph . Intuitively, a cut through is a set of vertices that can be reached simultaneously when traversing from the initial node to the final node, following the branches as prescribed by the encoded
, , and operators, in an attempt to produce a string in 9 . More precisely, the initial vertex is considered a cut (Figure 2 (a)). For each vertex in a given cut, we create a new cut by replacing the start vertex of some edge with the end vertex of that edge, observing the following rules: the vertex that is the start of several edges labeled using the special symbol is replaced by a sequence of all the end vertices of these edges (for example, is a cut derived from (Figure 2 (b))); a mirror rule handles the special symbol ; the vertex that is the start of an edge labeled using vocabulary items or is replaced by the end vertex of that edge (for example, , , , are cuts derived from , , 68 v1 v0 ve vs finally ε ε ε ε ε ε ε ε ε ε ε ε ε released were captives prisoners the the v2 1 1 1 1 1 1 1 1 v20 v19 v18 v17 v16 v15 v14 v13 v12 v11 v10 v9 v8 v7 v6 v5 v4 v3 Figure 1: The IDL-graph corresponding to the IDLexpression $#&%(' )0)$+ !.-0/1 2(354768 %13"6 " !.-0/1 :?'52-4*;156 < 131 31)>1?'61?@ . (a) vs (c) v1 finally v2 v0 vs (b) v2 v0 vs rank 1 rank 0 finally ε v5 the (e) v3 ε v2 v0 vs the ε v2 v0 vs ε v6 v1 (d) v6 v5 v3 Figure 2: Cuts of the IDL-graph in Figure 1 (a-d). A non-cut is presented in (e). , and , respectively, see Figure 2 (cd)), only if the end vertex is not lower ranked than any of the vertices already present in the cut (for example, is not a cut that can be derived from , see Figure 2 (e)). Note the last part of the second rule, which restricts the set of cuts by using the ranking mechanism. If one would allow to be a cut, one would imply that finally may appear inserted between the words of the locked phrase the prisoners. We now link the IDL formalism with the FSA formalism by providing a mapping from an IDL-graph to an acyclic finite-state acceptor . Because both formalisms are used for representing finite languages, they have equivalent representational power. The IDL representation is much more compact, however, as one can observe by comparing the IDL-graph in Figure 1 with the equivalent finitestate acceptor in Figure 3. The set of states of is the set of cuts of . The initial state of the finite-state acceptor is the state corresponding to cut , and the final states of the finite-state acceptor are the state corresponding to cuts that contain . In what follows, we denote a state of by the name of the cut to which it corresponds. A transiv0v2 vs ε v1v2 v0v4 v0 v10 the v0v5the v0 v0 v0 v0 v11 v12 v6 v7 v0 v0 v8 v13 prisoners captives ε ε ε ε v10 v1 ε ε the the v6 v11 prisoners captives v5 v1 v1 v1 v1 v1 v7 v12 ε ε v1v8 v13 v1 v0v3 v4 v1 finally finally finally v3 v1 ε ε ε ε v14 v0 v1v9 v0v9 v1v14 finally finally finally finally ve ε v1v15 v0v15 were were ε ε ε ε released released v16 v16 v17 v17 v18 v18 v19 v19 ε v20 v1 v1 v1 v1 v0 v0 v0 v0 v0 finally finally finally finally v20 v1 ε ε ε ε ε ε ε ε ε Figure 3: The finite-state acceptor corresponding to the IDL-graph in Figure 1. tion labeled in between state
and state
occurs if there is an edge in . For the example in Figure 3, the transition labeled were between states
and
occurs because of the edge labeled were between nodes and (Figure 1), whereas the transition labeled finally between states
and
occurs because of the edge labeled finally between nodes and (Figure 1). The two representations and are equivalent in the sense that the language generated by IDL-graph is the same as the language accepted by FSA . It is not hard to see that the conversion from the IDL representation to the FSA representation destroys the compactness property of the IDL formalism, because of the explicit enumeration of all possible interleavings, which causes certain labels to appear repeatedly in transitions. For example, a transition labeled finally appears 11 times in the finitestate acceptor in Figure 3, whereas an edge labeled finally appears only once in the IDL-graph in Figure 1. 3 Computational Properties of IDL-expressions 3.1 IDL-graphs and Weighted Finite-State Acceptors As mentioned in Section 1, the generation mechanism we propose performs an intersection of IDLexpressions with n-gram language models. Following (Mohri et al., 2002; Knight and Graehl, 1998), we implement language models using weighted finite-state acceptors (wFSA). In Section 2.3, we presented a mapping from an IDL-graph to a finite-state acceptor . From such a finite-state acceptor , we arrive at a weighted finite-state acceptor , by splitting the states of ac69 cording to the information needed by the language model to assign weights to transitions. For example, under a bigram language model , state
in Figure 3 must be split into three different states, a prisoners state, a captives state, and a finally state, according to which (non-epsilon) transition was last used to reach this state. The transitions leaving these states have the same labels as those leaving the original state, and are now weighted using the language model probability distributions conditioned on prisoners, on captives, and on finally, respectively. Note that, at this point, we already have a naïve algorithm for intersecting IDL-expressions with n-gram language models. From an IDL-expression, following the mapping described above, we arrive at a weighted finite-state acceptor, on which we can use a single-source shortest-path algorithm for directed acyclic graphs (Cormen et al., 2001) to extract the realization corresponding to the most probable path. The problem with this algorithm, however, is that the premature unfolding of the IDL-graph into a finite-state acceptor destroys the compactness of the IDL representation. For this reason, we devise algorithms that, although similar in spirit to the single-source shortest-path algorithm for directed acyclic graphs, perform on-the-fly unfolding of the IDL-graph, with a mechanism to control the unfolding based on the scores of the paths already unfolded. Such an approach has the advantage that prefixes that are extremely unlikely under the language model may be regarded as not so promising, and parts of the IDL-expression that contain them may not be unfolded, leading to significant savings. 3.2 Generation via Intersection of IDL-expressions with Language Models Algorithm IDL-NGLM-BFS The first algorithm that we propose is algorithm IDL-NGLM-BFS in Figure 4. The algorithm builds a weighted finite-state acceptor corresponding to an IDL-graph incrementally, by keeping track of a set of active states, called active. The incrementality comes from creating new transitions and states in the acceptor originating in these active states, by unfolding the IDL-graph; the set of newly unfolded states is called
unfold. The new transitions in the acceptor are weighted according to the language model. If a final state of the acceptor is not yet reached, the while loop is closed by making the unfold set of states the next set of active states.

IDL-NGLM-BFS(G, LM)
1  active ← { initial state }
2  flag ← 1
3  while flag
4    do unfold ← UNFOLDIDLG(active, G)
5       EVALUATENGLM(unfold, LM)
6       if FINALIDLG(unfold, G)
7         then flag ← 0
8       active ← unfold
9  return active

Figure 4: Pseudo-code for intersecting an IDL-graph with an n-gram language model using incremental unfolding and breadth-first search.

Note that this is actually a breadth-first search (BFS) with incremental unfolding. This algorithm still unfolds the IDL-graph completely, and therefore suffers from the same drawback as the naïve algorithm. The interesting contribution of algorithm IDL-NGLM-BFS, however, is the incremental unfolding. If, instead of line 8 in Figure 4, we introduce mechanisms to control which unfold states become part of the active state set for the next unfolding iteration, we obtain a series of more effective algorithms.

Algorithm IDL-NGLM-A* We arrive at algorithm IDL-NGLM-A* by modifying line 8 in Figure 4, thus obtaining the algorithm in Figure 5. We use as control mechanism a priority queue, q, in which the states from unfold are PUSH-ed, sorted according to an admissible heuristic function (Russell and Norvig, 1995). In the next iteration, active is a singleton set containing the state POP-ed out from the top of the priority queue.

IDL-NGLM-A*(G, LM)
1  active ← { initial state }
2  flag ← 1
3  while flag
4    do unfold ← UNFOLDIDLG(active, G)
5       EVALUATENGLM(unfold, LM)
6       if FINALIDLG(unfold, G)
7         then flag ← 0
8       for each state in unfold
          do PUSH(q, state)
         active ← { POP(q) }
9  return active

Figure 5: Pseudo-code for intersecting an IDL-graph with an n-gram language model using incremental unfolding and A* search.

Algorithm IDL-NGLM-BEAM We arrive at algorithm IDL-NGLM-BEAM by again modifying line 8 in Figure 4, thus obtaining the algorithm in Figure 6. We control the unfolding using a probabilistic beam beam, which, via the BEAMSTATES function, selects as active states only the states in

IDL-NGLM-BEAM(G, LM, beam)
1  active ← { initial state }
2  flag ← 1
3  while flag
4    do unfold ← UNFOLDIDLG(active, G)
5       EVALUATENGLM(unfold, LM)
6       if FINALIDLG(unfold, G)
7         then flag ← 0
8       active ← BEAMSTATES(unfold, beam)
9  return active

Figure 6: Pseudo-code for intersecting an IDL-graph with an n-gram language model using incremental unfolding and probabilistic beam search.
%8 ) @ reachable with a probability higher or equal to the current maximum probability times the probability beam !?1?'#" . 3.3 Computing Admissible Heuristics for IDL-expressions The IDL representation is ideally suited for computing accurate admissible heuristics under language models. These heuristics are needed by the IDL-NGLM-A algorithm, and are also employed for pruning by the IDL-NGLM-BEAM algorithm. For each state in a weighted finite-state acceptor corresponding to an IDL-graph , one can efficiently extract from – without further unfolding – the set1 of all edge labels that can be used to reach the final states of . This set of labels, denoted , is an overestimation of the set of future events reachable from , because the labels under the operators are all considered. From and the -1 labels (when using an -gram language model) recorded in state we obtain the set of label sequences of length -1. This set, denoted
, is an (over)estimated set of possible future conditioning events for state , guaranteed to contain the most cost-efficient future conditioning events for state . Using
, one needs to extract from the set of most cost-efficient future events from under each operator. We use this set, denoted , to arrive at an admissible heuristic for state under a language model , using Equation 2: ! #"%$'& )( +* , .-+ 0/ 12/ (2) If is the true future cost for state , we guarantee that 43 from the way and
are constructed. Note that, as it usually happens with admissible heuristics, we can make come arbitrarily close to , by computing increasingly better approximations
of
. Such approximations, however, require increasingly advanced unfoldings of the IDL-graph (a complete unfolding of for state gives
, and consequently 5 ). It follows that arbitrarily accurate admissible heuristics exist for IDL-expressions, but computing them onthe-fly requires finding a balance between the time and space requirements for computing better heuristics and the speed-up obtained by using them in the search algorithms. 3.4 Formal Properties of IDL-NGLM algorithms The following theorem states the correctness of our algorithms, in the sense that they find the maximum probability path encoded by an IDL-graph under an n-gram language model. Theorem 1 Let be an IDL-expression, G( ) its IDL-graph, and W( ) its wFSA under an n-gram language model LM. Algorithms IDL-NGLM-BFS and IDL-NGLM-A find the 1Actually, these are multisets, as we treat multiply-occurring labels as separate items. 71 path of maximum probability under LM. Algorithm IDL-NGLM-BEAM finds the path of maximum probability under LM, if all states in W( ) along this path are selected by its BEAMSTATES function. The proof of the theorem follows directly from the correctness of the BFS and A search, and from the condition imposed on the beam search. The next theorem characterizes the run-time complexity of these algorithms, in terms of an input IDLexpression and its corresponding IDL-graph complexity. There are three factors that linearly influence the run-time complexity of our algorithms: is the maximum number of nodes in needed to represent a state in – depends solely on ; is the maximum number of nodes in needed to represent a state in – depends on and , the length of the context used by the -gram language model; and is the number of states of – also depends on and . Of these three factors, is by far the predominant one, and we simply call the complexity of an IDL-expression. Theorem 2 Let be an IDL-expression, its IDL-graph, its FSA, and its wFSA under an n-gram language model. Let
be the set of states of , and
the set of states of . Let also ( +* ,
1 , ( +* ,
1 , and
. Algorithms IDL-NGLM-BFS and IDL-NGLM-BEAM have run-time complexity . Algorithm IDL-NGLM-A has run-time complexity " $'& . We omit the proof here due to space constraints. The fact that the run-time behavior of our algorithms is linear in the complexity of the input IDL-expression (with an additional log factor in the case of A search due to priority queue management) allows us to say that our algorithms are efficient with respect to the task they accomplish. We note here, however, that depending on the input IDL-expression, the task addressed can vary in complexity from linear to exponential. That is, for the intersection of an IDL-expression (bag of words) with a trigram language model, we have , , 1 1 A , and therefore a 1 complexity. This exponential complexity comes as no surprise given that the problem of intersecting an ngram language model with a bag of words is known to be NP-complete (Knight, 1999). On the other hand, for intersecting an IDL-expression (sequence of words) with a trigram language model, we have A , , and , and therefore an generation algorithm. In general, for IDL-expressions for which is bounded, which we expect to be the case for most practical problems, our algorithms perform in polynomial time in the number of words available for generation. 4 Evaluation of IDL-NGLM Algorithms In this section, we present results concerning the performance of our algorithms on a wordordering task. This task can be easily defined as follows: from a bag of words originating from some sentence, reconstruct the original sentence as faithfully as possible. In our case, from an original sentence such as “the gifts are donated by american companies”, we create the IDL-expression ! " .-0/1 4 -06 @ 8 %('1?@ :?8#"92' % 4 156!+( ' 3?1 ' " 1354:"'% !$##" , from which some algorithm realizes a sentence such as “donated by the american companies are gifts”. Note the natural way we represent in an IDL-expression beginning and end of sentence constraints, using the operator. Since this is generation from bag-of-words, the task is known to be at the high-complexity extreme of the run-time behavior of our algorithms. As such, we consider it a good test for the ability of our algorithms to scale up to increasingly complex inputs. We use a state-of-the-art, publicly available toolkit2 to train a trigram language model using Kneser-Ney smoothing, on 10 million sentences (170 million words) from the Wall Street Journal (WSJ), lower case and no final punctuation. The test data is also lower case (such that upper-case words cannot be hypothesized as first words), with final punctuation removed (such that periods cannot be hypothesized as final words), and consists of 2000 unseen WSJ sentences of length 3-7, and 2000 unseen WSJ sentences of length 10-25. The algorithms we tested in this experiments were the ones presented in Section 3.2, plus two baseline algorithms. The first baseline algorithm, L, uses an 2http://www.speech.sri.com/projects/srilm/ 72 inverse-lexicographic order for the bag items as its output, in order to get the word the on sentence initial position. The second baseline algorithm, G, is a greedy algorithm that realizes sentences by maximizing the probability of joining any two word sequences until only one sequence is left. 
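To make the search problem behind this task concrete, here is a deliberately simplified word-ordering routine: a beam search over sentence prefixes scored by a bigram model. It is not the IDL-NGLM machinery evaluated below (there are no IDL-graphs and no admissible heuristic), and the toy bigram scores are invented; it only illustrates what reconstructing a sentence from a bag of words under an n-gram model involves.

def order_bag(bag, bigram_logprob, beam_size=8):
    # Reconstruct a sentence from a bag of words with a prefix beam search.
    # bigram_logprob(prev, word) -> log p(word | prev); "<s>" starts a sentence.
    beam = [(0.0, ("<s>",), tuple(sorted(bag)))]
    for _ in range(len(bag)):
        expanded = []
        for score, prefix, remaining in beam:
            for i, w in enumerate(remaining):
                if i > 0 and remaining[i] == remaining[i - 1]:
                    continue  # identical words: expand only once
                expanded.append((score + bigram_logprob(prefix[-1], w),
                                 prefix + (w,),
                                 remaining[:i] + remaining[i + 1:]))
        expanded.sort(key=lambda h: h[0], reverse=True)
        beam = expanded[:beam_size]
    best_score, best_prefix, _ = max(beam, key=lambda h: h[0])
    return " ".join(best_prefix[1:])

# Invented bigram scores that favour the original word order of the example.
GOOD = {("<s>", "the"): -0.5, ("the", "gifts"): -1.0, ("gifts", "are"): -1.0,
        ("are", "donated"): -1.0, ("donated", "by"): -0.7,
        ("by", "american"): -1.2, ("american", "companies"): -0.8}

def toy_lp(prev, word):
    return GOOD.get((prev, word), -6.0)

print(order_bag(["gifts", "are", "donated", "by", "american", "companies", "the"], toy_lp))
# -> "the gifts are donated by american companies" (with these toy scores)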
For the A algorithm, an admissible cost is computed for each state in a weighted finite-state automaton, as the sum (over all unused words) of the minimum language model cost (i.e., maximum probability) of each unused word when conditioning over all sequences of two words available at that particular state for future conditioning (see Equation 2, with ). These estimates are also used by the beam algorithm for deciding which IDL-graph nodes are not unfolded. We also test a greedy version of the A algorithm, denoted A , which considers for unfolding only the nodes extracted from the priority queue which already unfolded a path of length greater than or equal to the maximum length already unfolded minus (in this notation, the A algorithm would be denoted A ). For the beam algorithms, we use the notation B to specify a probabilistic beam of size , i.e., an algorithm that beams out the states reachable with probability less than the current maximum probability times . Our first batch of experiments concerns bags-ofwords of size 3-7, for which exhaustive search is possible. In Table 2, we present the results on the word-ordering task achieved by various algorithms. We evaluate accuracy performance using two automatic metrics: an identity metric, ID, which measures the percent of sentences recreated exactly, and BLEU (Papineni et al., 2002), which gives the geometric average of the number of uni-, bi-, tri-, and four-grams recreated exactly. We evaluate the search performance by the percent of Search Errors made by our algorithms, as well as a percent figure of Estimated Search Errors, computed as the percent of searches that result in a string with a lower probability than the probability of the original sentence. To measure the impact of using IDL-expressions for this task, we also measure the percent of unfolding of an IDL graph with respect to a full unfolding. We report speed results as the average number of seconds per bag-of-words, when using a 3.0GHz CPU machine under a Linux OS. The first notable result in Table 2 is the savings ALG ID BLEU Search Unfold Speed (%) Errors (%) (%) (sec./bag) L 2.5 9.5 97.2 (95.8) N/A .000 G 30.9 51.0 67.5 (57.6) N/A .000 BFS 67.1 79.2 0.0 (0.0) 100.0 .072 A 67.1 79.2 0.0 (0.0) 12.0 .010 A 60.5 74.8 21.1 (11.9) 3.2 .004 A 64.3 77.2 8.5 (4.0) 5.3 .005 B 65.0 78.0 9.2 (5.0) 7.2 .006 B 66.6 78.8 3.2 (1.7) 13.2 .011 Table 2: Bags-of-words of size 3-7: accuracy (ID, BLEU), Search Errors (and Estimated Search Errors), space savings (Unfold), and speed results. achieved by the A algorithm under the IDL representation. At no cost in accuracy, it unfolds only 12% of the edges, and achieves a 7 times speedup, compared to the BFS algorithm. The savings achieved by not unfolding are especially important, since the exponential complexity of the problem is hidden by the IDL representation via the folding mechanism of the operator. The algorithms that find sub-optimal solutions also perform well. While maintaining high accuracy, the A and B algorithms unfold only about 5-7% of the edges, at 12-14 times speed-up. Our second batch of experiments concerns bagof-words of size 10-25, for which exhaustive search is no longer possible (Table 3). Not only exhaustive search, but also full A search is too expensive in terms of memory (we were limited to 2GiB of RAM for our experiments) and speed. Only the greedy versions A and A , and the beam search using tight probability beams (0.2-0.1) scale up to these bag sizes. 
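The Estimated Search Errors figure reported above is easy to recompute once a language-model scorer is available; the helper below is a hedged sketch (the lm_logprob interface is assumed for illustration, not taken from the toolkit used in the paper).

def estimated_search_errors(hypotheses, references, lm_logprob):
    # Percent of searches whose output string has a lower LM probability than
    # the original sentence it was reconstructed from.
    assert len(hypotheses) == len(references)
    errors = sum(1 for hyp, ref in zip(hypotheses, references)
                 if lm_logprob(hyp) < lm_logprob(ref))
    return 100.0 * errors / len(references)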
Because we no longer have access to the string of maximum probability, we report only the percent of Estimated Search Errors. Note that, in terms of accuracy, we get around 20% Estimated Search Errors for the best performing algorithms (A and B ), which means that 80% of the time the algorithms are able to find sentences of equal or better probability than the original sentences. 5 Conclusions In this paper, we advocate that IDL expressions can provide an adequate framework for develop73 ALG ID BLEU Est. Search Speed (%) Errors (%) (sec./bag) L 0.0 1.4 99.9 0.0 G 1.2 31.6 83.6 0.0 A 5.8 47.7 34.0 0.7 A 7.4 51.2 21.4 9.5 B 9.0 52.1 23.3 7.1 B 12.2 52.6 19.9 36.7 Table 3: Bags-of-words of size 10-25: accuracy (ID, BLEU), Estimated Search Errors, and speed results. ing text-to-text generation capabilities. Our contribution concerns a new generation mechanism that implements intersection between an IDL expression and a probabilistic language model. The IDL formalism is ideally suited for our approach, due to its efficient representation and, as we show in this paper, efficient algorithms for intersecting, scoring, and ranking sentence realizations using probabilistic language models. We present theoretical results concerning the correctness and efficiency of the proposed algorithms, and also present empirical results that show that our algorithms scale up to handling IDL-expressions of high complexity. Real-world text-to-text generation tasks, such as headline generation and machine translation, are likely to be handled graciously in this framework, as the complexity of IDL-expressions for these tasks tends to be lower than the complexity of the IDL-expressions we worked with in our experiments. Acknowledgment This work was supported by DARPA-ITO grant NN66001-00-1-9814. References Srinivas Bangalore and Owen Rambow. 2000. Using TAG, a tree model, and a language model for generation. In Proceedings of the 1st International Natural Language Generation Conference. Regina Barzilay. 2003. Information Fusion for Multidocument Summarization: Paraphrasing and Generation. Ph.D. thesis, Columbia University. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. 2001. Introduction to Algorithms. The MIT Press and McGraw-Hill. Second Edition. Simon Corston-Oliver, Michael Gamon, Eric K. Ringger, and Robert Moore. 2002. An overview of Amalgam: A machine-learned generation module. In Proceedings of the International Natural Language Generation Conference. Michael Elhadad. 1991. FUF User manual — version 5.0. Technical Report CUCS-038-91, Department of Computer Science, Columbia University. John E. Hopcroft and Jeffrey D. Ullman. 1979. Introduction to automata theory, languages, and computation. Addison-Wesley. Kevin Knight and Jonathan Graehl. 1998. Machine transliteration. Computational Linguistics, 24(4):599– 612. Kevin Knight and Vasileios Hatzivassiloglou. 1995. Two level, many-path generation. In Proceedings of the Association of Computational Linguistics. Kevin Knight. 1999. Decoding complexity in wordreplacement translation models. Computational Linguistics, 25(4):607–615. Irene Langkilde-Geary. 2002. A foundation for generalpurpose natural language generation: sentence realization using probabilistic models of language. Ph.D. thesis, University of Southern California. 
Christian Matthiessen and John Bateman. 1991. Text Generation and Systemic-Functional Linguistic. Pinter Publishers, London. Mehryar Mohri, Fernando Pereira, and Michael Riley. 2002. Weighted finite-state transducers in speech recognition. Computer Speech and Language, 16(1):69–88. Mark-Jan Nederhof and Giorgio Satta. 2004. IDLexpressions: a formalism for representing and parsing finite languages in natural language processing. Journal of Artificial Intelligence Research, 21:287–317. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the Association for Computational Linguistics (ACL-2002), pages 311–318, Philadelphia, PA, July 7-12. Stuart Russell and Peter Norvig. 1995. Artificial Intelligence. A Modern Approach. Prentice Hall, Englewood Cliffs, New Jersey.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1–8, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Combination of Arabic Preprocessing Schemes for Statistical Machine Translation Fatiha Sadat Institute for Information Technology National Research Council of Canada [email protected] Nizar Habash Center for Computational Learning Systems Columbia University [email protected] Abstract Statistical machine translation is quite robust when it comes to the choice of input representation. It only requires consistency between training and testing. As a result, there is a wide range of possible preprocessing choices for data used in statistical machine translation. This is even more so for morphologically rich languages such as Arabic. In this paper, we study the effect of different word-level preprocessing schemes for Arabic on the quality of phrase-based statistical machine translation. We also present and evaluate different methods for combining preprocessing schemes resulting in improved translation quality. 1 Introduction Statistical machine translation (SMT) is quite robust when it comes to the choice of input representation. It only requires consistency between training and testing. As a result, there is a wide range of possible preprocessing choices for data used in SMT. This is even more so for morphologically rich languages such as Arabic. We use the term “preprocessing” to describe various input modifications applied to raw training and testing texts for SMT. Preprocessing includes different kinds of tokenization, stemming, part-of-speech (POS) tagging and lemmatization. The ultimate goal of preprocessing is to improve the quality of the SMT output by addressing issues such as sparsity in training data. We refer to a specific kind of preprocessing as a “scheme” and differentiate it from the “technique” used to obtain it. In a previous publication, we presented results describing six preprocessing schemes for Arabic (Habash and Sadat, 2006). These schemes were evaluated against three different techniques that vary in linguistic complexity; and across a learning curve of training sizes. Additionally, we reported on the effect of scheme/technique combination on genre variation between training and testing. In this paper, we shift our attention to exploring and contrasting additional preprocessing schemes for Arabic and describing and evaluating different methods for combining them. We use a single technique throughout the experiments reported here. We show an improved MT performance when combining different schemes. Similarly to Habash and Sadat (2006), the set of schemes we explore are all word-level. As such, we do not utilize any syntactic information. We define the word to be limited to written Modern Standard Arabic (MSA) strings separated by white space, punctuation and numbers. Section 2 presents previous relevant research. Section 3 presents some relevant background on Arabic linguistics to motivate the schemes discussed in Section 4. Section 5 presents the tools and data sets used, along with the results of basic scheme experiments. Section 6 presents combination techniques and their results. 2 Previous Work The anecdotal intuition in the field is that reduction of word sparsity often improves translation quality. This reduction can be achieved by increasing training data or via morphologically driven preprocessing (Goldwater and McClosky, 2005). 
Recent publications on the effect of morphology on SMT quality focused on morphologically rich languages such as German (Nießen and Ney, 2004); Spanish, Catalan, and Serbian (Popovi´c 1 and Ney, 2004); and Czech (Goldwater and McClosky, 2005). They all studied the effects of various kinds of tokenization, lemmatization and POS tagging and show a positive effect on SMT quality. Specifically considering Arabic, Lee (2004) investigated the use of automatic alignment of POS tagged English and affix-stem segmented Arabic to determine appropriate tokenizations. Her results show that morphological preprocessing helps, but only for the smaller corpora. As size increases, the benefits diminish. Our results are comparable to hers in terms of BLEU score and consistent in terms of conclusions. Other research on preprocessing Arabic suggests that minimal preprocessing, such as splitting off the conjunction + w+ ’and’, produces best results with very large training data (Och, 2005). System combination for MT has also been investigated by different researchers. Approaches to combination generally either select one of the hypotheses produced by the different systems combined (Nomoto, 2004; Paul et al., 2005; Lee, 2005) or combine lattices/n-best lists from the different systems with different degrees of synthesis or mixing (Frederking and Nirenburg, 1994; Bangalore et al., 2001; Jayaraman and Lavie, 2005; Matusov et al., 2006). These different approaches use various translation and language models in addition to other models such as word matching, sentence and document alignment, system translation confidence, phrase translation lexicons, etc. We extend on previous work by experimenting with a wider range of preprocessing schemes for Arabic and exploring their combination to produce better results. 3 Arabic Linguistic Issues Arabic is a morphologically complex language with a large set of morphological features1. These features are realized using both concatenative morphology (affixes and stems) and templatic morphology (root and patterns). There is a variety of morphological and phonological adjustments that appear in word orthography and interact with orthographic variations. Next we discuss a subset of these issues that are necessary background for the later sections. We do not address 1Arabic words have fourteen morphological features: POS, person, number, gender, voice, aspect, determiner proclitic, conjunctive proclitic, particle proclitic, pronominal enclitic, nominal case, nunation, idafa (possessed), and mood. derivational morphology (such as using roots as tokens) in this paper. Orthographic Ambiguity: The form of certain letters in Arabic script allows suboptimal orthographic variants of the same word to coexist in the same text. For example, variants of Hamzated Alif, or are often written without their Hamza ( ): A. These variant spellings increase the ambiguity of words. The Arabic script employs diacritics for representing short vowels and doubled consonants. These diacritics are almost always absent in running text, which increases word ambiguity. We assume all of the text we are using is undiacritized. Clitics: Arabic has a set of attachable clitics to be distinguished from inflectional features such as gender, number, person, voice, aspect, etc. These clitics are written attached to the word and thus increase the ambiguity of alternative readings. 
We can classify three degrees of cliticization that are applicable to a word base in a strict order: [CONJ+ [PART+ [Al+ BASE +PRON]]] At the deepest level, the BASE can have a definite article (+ Al+ ‘the’) or a member of the class of pronominal enclitics, +PRON, (e.g.
+ +hm ‘their/them’). Pronominal enclitics can attach to nouns (as possessives) or verbs and prepositions (as objects). The definite article doesn’t apply to verbs or prepositions. +PRON and Al+ cannot co-exist on nouns. Next comes the class of particle proclitics (PART+): + l+ ‘to/for’, + b+ ‘by/with’, + k+ ‘as/such’ and + s+ ‘will/future’. b+ and k+ are only nominal; s+ is only verbal and l+ applies to both nouns and verbs. At the shallowest level of attachment we find the conjunctions (CONJ+) + w+ ‘and’ and + f+ ‘so’. They can attach to everything. Adjustment Rules: Morphological features that are realized concatenatively (as opposed to templatically) are not always simply concatenated to a word base. Additional morphological, phonological and orthographic rules are applied to the word. An example of a morphological rule is the feminine morpheme, +p (ta marbuta), which can only be word final. In medial position, it is turned into t. For example,
+ mktbp+hm appears as mktbthm ‘their library’. An example of an orthographic rule is the deletion of the Alif ( ) of the definite article + Al+ in nouns when preceded by the preposition + l+ ‘to/for’ but not with any other prepositional proclitic. 2 Templatic Inflections: Some of the inflectional features in Arabic words are realized templatically by applying a different pattern to the Arabic root. As a result, extracting the lexeme (or lemma) of an Arabic word is not always an easy task and often requires the use of a morphological analyzer. One common example in Arabic nouns is Broken Plurals. For example, one of the plural forms of the Arabic word kAtb ‘writer’ is ktbp ‘writers’. An alternative non-broken plural (concatenatively derived) is kAtbwn ‘writers’. These phenomena highlight two issues related to the task at hand (preprocessing): First, ambiguity in Arabic words is an important issue to address. To determine whether a clitic or feature should be split off or abstracted off requires that we determine that said feature is indeed present in the word we are considering in context – not just that it is possible given an analyzer. Secondly, once a specific analysis is determined, the process of splitting off or abstracting off a feature must be clear on what the form of the resulting word should be. In principle, we would like to have whatever adjustments now made irrelevant (because of the missing feature) to be removed. This ensures reduced sparsity and reduced unnecessary ambiguity. For example, the word ktbthm has two possible readings (among others) as ‘their writers’ or ‘I wrote them’. Splitting off the pronominal enclitic
+ +hm without normalizing the t to p in the nominal reading leads the coexistence of two forms of the noun ktbp and ktbt. This increased sparsity is only worsened by the fact that the second form is also the verbal form (thus increased ambiguity). 4 Arabic Preprocessing Schemes Given Arabic morphological complexity, the number of possible preprocessing schemes is very large since any subset of morphological and orthographic features can be separated, deleted or normalized in various ways. To implement any preprocessing scheme, a preprocessing technique must be able to disambiguate amongst the possible analyses of a word, identify the features addressed by the scheme in the chosen analysis and process them as specified by the scheme. In this section we describe eleven different schemes. 4.1 Preprocessing Technique We use the Buckwalter Arabic Morphological Analyzer (BAMA) (Buckwalter, 2002) to obtain possible word analyses. To select among these analyses, we use the Morphological Analysis and Disambiguation for Arabic (MADA) tool,2 an off-theshelf resource for Arabic disambiguation (Habash and Rambow, 2005). Being a disambiguation system of morphology, not word sense, MADA sometimes produces ties for analyses with the same inflectional features but different lexemes (resolving such ties require word-sense disambiguation). We resolve these ties in a consistent arbitrary manner: first in a sorted list of analyses. Producing a preprocessing scheme involves removing features from the word analysis and regenerating the word without the split-off features. The regeneration ensures that the generated form is appropriately normalized by addressing various morphotactics described in Section 3. The generation is completed using the off-the-shelf Arabic morphological generation system Aragen (Habash, 2004). This preprocessing technique we use here is the best performer amongst other explored techniques presented in Habash and Sadat (2006). 4.2 Preprocessing Schemes Table 1 exemplifies the effect of different schemes on the same sentence. ST: Simple Tokenization is the baseline preprocessing scheme. It is limited to splitting off punctuations and numbers from words. For example the last non-white-space string in the example sentence in Table 1, “trkyA.” is split into two tokens: “trkyA” and “.”. An example of splitting numbers from words is the case of the conjunction + w+ ‘and’ which can prefix numerals such as when a list of numbers is described: 15 w15 ‘and 15’. This scheme requires no disambiguation. Any diacritics that appear in the input are removed in this scheme. This scheme is used as input to produce the other schemes. ON: Orthographic Normalization addresses the issue of sub-optimal spelling in Arabic. We use the Buckwalter answer undiacritized as the orthographically normalized form. An example of ON is the spelling of the last letter in the first and 2The version of MADA used in this paper was trained on the Penn Arabic Treebank (PATB) part 1 (Maamouri et al., 2004). 3 Table 1: Various Preprocessing Schemes Input wsynhY Alr ys jwlth bzyArp AlY trkyA. Gloss and will finish the president tour his with visit to Turkey . English The president will finish his tour with a visit to Turkey. Scheme Baseline ST wsynhY Alr ys jwlth bzyArp AlY trkyA . ON wsynhy Alr ys jwlth bzyArp lY trkyA . D1 w+ synhy Alr ys jwlth bzyArp lY trkyA . D2 w+ s+ ynhy Alr ys jwlth b+ zyArp lY trkyA . D3 w+ s+ ynhy Al+ r ys jwlp +P b+ zyArp lY trkyA . WA w+ synhy Alr ys jwlth bzyArp lY trkyA . 
TB w+ synhy Alr ys jwlp +P b+ zyArp lY trkyA . MR w+ s+ y+ nhy Al+ r ys jwl +p +h b+ zyAr +p lY trkyA . L1 nhY r ys jwlp zyArp lY trkyA . L2 nhY
r ys jwlp zyArp lY trkyA . EN w+ s+ nhY +S Al+ r ys jwlp +P b+ zyArp lY trkyA . fifth words in the example in Table 1 (wsynhY and AlY, respectively). Since orthographic normalization is tied to the use of MADA and BAMA, all of the schemes we use here are normalized. D1, D2, and D3: Decliticization (degree 1, 2 and 3) are schemes that split off clitics in the order described in Section 3. D1 splits off the class of conjunction clitics (w+ and f+). D2 is the same as D1 plus splitting off the class of particles (l+, k+, b+ and s+). Finally D3 splits off what D2 does in addition to the definite article Al+ and all pronominal enclitics. A pronominal clitic is represented as its feature representation to preserve its uniqueness. (See the third word in the example in Table 1.) This allows distinguishing between the possessive pronoun and object pronoun which often look similar. WA: Decliticizing the conjunction w+. This is the simplest tokenization used beyond ON. It is similar to D1, but without including f+. This is included to compare to evidence in its support as best preprocessing scheme for very large data (Och, 2005). TB: Arabic Treebank Tokenization. This is the same tokenization scheme used in the Arabic Treebank (Maamouri et al., 2004). This is similar to D3 but without the splitting off of the definite article Al+ or the future particle s+. MR: Morphemes. This scheme breaks up words into stem and affixival morphemes. It is identical to the initial tokenization used by Lee (2004). L1 and L2: Lexeme and POS. These reduce a word to its lexeme and a POS. L1 and L2 differ in the set of POS tags they use. L1 uses the simple POS tags advocated by Habash and Rambow (2005) (15 tags); while L2 uses the reduced tag set used by Diab et al. (2004) (24 tags). The latter is modeled after the English Penn POS tag set. For example, Arabic nouns are differentiated for being singular (NN) or Plural/Dual (NNS), but adjectives are not even though, in Arabic, they inflect exactly the same way nouns do. EN: English-like. This scheme is intended to minimize differences between Arabic and English. It decliticizes similarly to D3, but uses Lexeme and POS tags instead of the regenerated word. The POS tag set used is the reduced Arabic Treebank tag set (24 tags) (Maamouri et al., 2004; Diab et al., 2004). Additionally, the subject inflection is indicated explicitly as a separate token. We do not use any additional information to remove specific features using alignments or syntax (unlike, e.g. removing all but one Al+ in noun phrases (Lee, 2004)). 4.3 Comparing Various Schemes Table 2 compares the different schemes in terms of the number of tokens, number of out-ofvocabulary (OOV) tokens, and perplexity. These statistics are computed over the MT04 set, which we use in this paper to report SMT results (Section 5). Perplexity is measured against a language model constructed from the Arabic side of the parallel corpus used in the MT experiments (Section 5). Obviously the more verbose a scheme is, the bigger the number of tokens in the text. The ST, ON, L1, and L2 share the same number of tokens because they all modify the word without splitting off any of its morphemes or features. 
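As a rough, string-level illustration of what the lighter decliticization schemes do, the toy function below peels D1- and D2-style proclitics off Buckwalter-transliterated tokens. It is only a sketch under strong simplifying assumptions: the real pipeline chooses clitics through MADA disambiguation and regenerates adjusted forms with Aragen, none of which is modelled here, and the length guard is an arbitrary safeguard of ours rather than anything from the paper.

CONJ = ("w", "f")            # D1: conjunction proclitics w+ and f+
PART = ("l", "b", "k", "s")  # D2 additionally splits the particle proclitics

def decliticize(token, scheme="D2"):
    # Naive prefix peeling on undiacritized Buckwalter transliteration.
    prefixes = []
    inventories = [CONJ] if scheme == "D1" else [CONJ, PART]
    for inventory in inventories:
        for letter in inventory:
            if token.startswith(letter) and len(token) > 3:
                prefixes.append(letter + "+")
                token = token[1:]
                break
    return prefixes + [token]

print(decliticize("wsynhY", "D1"))  # ['w+', 'synhY']
print(decliticize("wsynhY", "D2"))  # ['w+', 's+', 'ynhY']
print(decliticize("bzyArp", "D2"))  # ['b+', 'zyArp']

Compare with the D1 and D2 rows of Table 1: the genuine schemes additionally return orthographically normalized, regenerated forms (e.g., ynhy rather than ynhY), which requires the morphological analysis and generation steps this sketch omits.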
The increase in the number of tokens is in inverse correlation 4 Table 2: Scheme Statistics Scheme Tokens OOVs Perplexity ST 36000 1345 1164 ON 36000 1212 944 D1 38817 1016 582 D2 40934 835 422 D3 52085 575 137 WA 38635 1044 596 TB 42880 662 338 MR 62410 409 69 L1 36000 392 401 L2 36000 432 460 EN 55525 432 103 with the number of OOVs and perplexity. The only exceptions are L1 and L2, whose low OOV rate is the result of the reductionist nature of the scheme, which does not preserve morphological information. 5 Basic Scheme Experiments We now describe the system and the data sets we used to conduct our experiments. 5.1 Portage We use an off-the-shelf phrase-based SMT system, Portage (Sadat et al., 2005). For training, Portage uses IBM word alignment models (models 1 and 2) trained in both directions to extract phrase tables in a manner resembling (Koehn, 2004a). Trigram language models are implemented using the SRILM toolkit (Stolcke, 2002). Decoding weights are optimized using Och’s algorithm (Och, 2003) to set weights for the four components of the loglinear model: language model, phrase translation model, distortion model, and word-length feature. The weights are optimized over the BLEU metric (Papineni et al., 2001). The Portage decoder, Canoe, is a dynamic-programming beam search algorithm resembling the algorithm described in (Koehn, 2004a). 5.2 Experimental data All of the training data we use is available from the Linguistic Data Consortium (LDC). We use an Arabic-English parallel corpus of about 5 million words for translation model training data.3 We created the English language model from the English side of the parallel corpus together 3The parallel text includes Arabic News (LDC2004T17), eTIRR (LDC2004E72), English translation of Arabic Treebank (LDC2005E46), and Ummah (LDC2004T18). with 116 million words the English Gigaword Corpus (LDC2005T12) and 128 million words from the English side of the UN Parallel corpus (LDC2004E13).4 English preprocessing simply included lowercasing, separating punctuation from words and splitting off “’s”. The same preprocessing was used on the English data for all experiments. Only Arabic preprocessing was varied. Decoding weight optimization was done using a set of 200 sentences from the 2003 NIST MT evaluation test set (MT03). We report results on the 2004 NIST MT evaluation test set (MT04) The experiment design and choices of schemes and techniques were done independently of the test set. The data sets, MT03 and MT04, include one Arabic source and four English reference translations. We use the evaluation metric BLEU-4 (Papineni et al., 2001) although we are aware of its caveats (CallisonBurch et al., 2006). 5.3 Experimental Results We conducted experiments with all schemes discussed in Section 4 with different training corpus sizes: 1%, 10%, 50% and 100%. The results of the experiments are summarized in Table 3. These results are not English case sensitive. All reported scores must have over 1.1% BLEU-4 difference to be significant at the 95% confidence level for 1% training. For all other training sizes, the difference must be over 1.7% BLEU-4. Error intervals were computed using bootstrap resampling (Koehn, 2004b). Across different schemes, EN performs the best under scarce-resource condition; and D2 performs as best under large resource conditions. 
The results from the learning curve are consistent with previous published work on using morphological preprocessing for SMT: deeper morph analysis helps for small data sets, but the effect is diminished with more data. One interesting observation is that for our best performing system (D2), the BLEU score at 50% training (35.91) was higher than the baseline ST at 100% training data (34.59). This relationship is not consistent across the rest of the experiments. ON improves over the baseline 4The SRILM toolkit has a limit on the size of the training corpus. We selected portions of additional corpora using a heuristic that picks documents containing the word “Arab” only. The Language model created using this heuristic had a bigger improvement in BLEU score (more than 1% BLEU-4) than a randomly selected portion of equal size. 5 Table 3: Scheme Experiment Results (BLEU-4) Training Data Scheme 1% 10% 50% 100% ST 9.42 22.92 31.09 34.59 ON 10.71 24.3 32.52 35.91 D1 13.11 26.88 33.38 36.06 D2 14.19 27.72 35.91 37.10 D3 16.51 28.69 34.04 34.33 WA 13.12 26.29 34.24 35.97 TB 14.13 28.71 35.83 36.76 MR 11.61 27.49 32.99 34.43 L1 14.63 24.72 31.04 32.23 L2 14.87 26.72 31.28 33.00 EN 17.45 28.41 33.28 34.51 but only statistically significantly at the 1% level. The results for WA are generally similar to D1. This makes sense since w+ is by far the most common of the two conjunctions D1 splits off. The TB scheme behaves similarly to D2, the best scheme we have. It outperformed D2 in few instances, but the difference were not statistically significant. L1 and L2 behaved similar to EN across the different training size. However, both were always worse than EN. Neither variant was consistently better than the other. 6 System Combination The complementary variation in the behavior of different schemes under different resource size conditions motivated us to investigate system combination. The intuition is that even under large resource conditions, some words will occur very infrequently that the only way to model them is to use a technique that behaves well under poor resource conditions. We conducted an oracle study into system combination. An oracle combination output was created by selecting for each input sentence the output with the highest sentence-level BLEU score. We recognize that since the brevity penalty in BLEU is applied globally, this score may not be the highest possible combination score. The oracle combination has a 24% improvement in BLEU score (from 37.1 in best system to 46.0) when combining all eleven schemes described in this paper. This shows that combining of output from all schemes has a large potential of improvement over all of the different systems and that the different schemes are complementary in some way. In the rest of this section we describe two successful methods for system combination of different schemes: rescoring-only combination (ROC) and decoding-plus-rescoring combination (DRC). All of the experiments use the same training data, test data (MT04) and preprocessing schemes described in the previous section. 6.1 Rescoring-only Combination This “shallow” approach rescores all the one-best outputs generated from separate scheme-specific systems and returns the top choice. Each schemespecific system uses its own scheme-specific preprocessing, phrase-tables, and decoding weights. For rescoring, we use the following features: The four basic features used by the decoder: trigram language model, phrase translation model, distortion model, and word-length feature. 
IBM model 1 and IBM model 2 probabilities in both directions. We call the union of these two sets of features standard. The perplexity of the preprocessed source sentence (PPL) against a source language model as described in Section 4.3. The number of out-of-vocabulary words in the preprocessed source sentence (OOV). Length of the preprocessed source sentence (SL). An encoding of the specific scheme used (SC). We use a one-hot coding approach with 11 separate binary features, each corresponding to a specific scheme. Optimization of the weights on the rescoring features is carried out using the same max-BLEU algorithm and the same development corpus described in Section 5. Results of different sets of features with the ROC approach are presented in Table 4. Using standard features with all eleven schemes, we obtain a BLEU score of 34.87 – a significant drop from the best scheme system (D2, 37.10). Using different subsets of features or limiting the number of systems to the best four systems (D2, TB, D1 and WA), we get some improvements. The best results are obtained using all schemes with standard features plus perplexity and scheme coding. The improvements are small; however they are statistically significant (see Section 6.3). 6 Table 4: ROC Approach Results Combination All Schemes 4 Best Schemes standard 34.87 37.12 +PPL+SC 37.58 37.45 +PPL+SC+OOV 37.40 +PPL+SC+OOV+SL 37.39 +PPL+SC+SL 37.15 6.2 Decoding-plus-Rescoring Combination This “deep” approach allows the decoder to consult several different phrase tables, each generated using a different preprocessing scheme; just as with ROC, there is a subsequent rescoring stage. A problem with DRC is that the decoder we use can only cope with one format for the source sentence at a time. Thus, we are forced to designate a particular scheme as privileged when the system is carrying out decoding. The privileged preprocessing scheme will be the one applied to the source sentence. Obviously, words and phrases in the preprocessed source sentence will more frequently match the phrases in the privileged phrase table than in the non-privileged ones. Nevertheless, the decoder may still benefit from having access to all the tables. For each choice of a privileged scheme, optimization of log-linear weights is carried out (with the version of the development set preprocessed in the same privileged scheme). The middle column of Table 5 shows the results for 1-best output from the decoder under different choices of the privileged scheme. The bestperforming system in this column has as its privileged preprocessing scheme TB. The decoder for this system uses TB to preprocess the source sentence, but has access to a log-linear combination of information from all 11 preprocessing schemes. The final column of Table 5 shows the results of rescoring the concatenation of the 1-best outputs from each of the combined systems. The rescoring features used are the same as those used for the ROC experiments. For rescoring, a privileged preprocessing scheme is chosen and applied to the development corpus. We chose TB for this (since it yielded the best result when chosen to be privileged at the decoding stage). Applied to 11 schemes, this yields the best result so far: 38.67 BLEU. Combining the 4 best pre-processing schemes (D2, TB, D1, WA) yielded a lower BLEU score (37.73). These results show that combining phrase tables from different schemes have a positive effect on MT performance. 
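A minimal sketch of assembling the ROC rescoring features listed above and picking a 1-best hypothesis with a linear model. The function names and the plain weighted sum are assumptions for illustration; the actual weights would be tuned with the max-BLEU procedure on the development set, which is not shown here.

```python
SCHEMES = ["ST", "ON", "D1", "D2", "D3", "WA", "TB", "MR", "L1", "L2", "EN"]

def roc_features(decoder_feats, ibm_feats, ppl, oov, sent_len, scheme):
    """Rescoring feature vector for one scheme-specific 1-best output.

    decoder_feats : the four decoder features (LM, phrase TM, distortion, word length)
    ibm_feats     : IBM model 1 and model 2 probabilities in both directions
    ppl, oov, sent_len : perplexity, OOV count and length of the preprocessed source
    scheme        : which preprocessing scheme produced the hypothesis (one-hot coded)
    """
    one_hot = [1.0 if s == scheme else 0.0 for s in SCHEMES]
    return list(decoder_feats) + list(ibm_feats) + [ppl, oov, sent_len] + one_hot

def rescore(hypotheses, weights):
    """Return the highest-scoring (translation, features) pair under a linear model."""
    return max(hypotheses, key=lambda h: sum(w * f for w, f in zip(weights, h[1])))
```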
Table 5: DRC Approach Results Combination Decoding Rescoring Scheme 1-best Standard+PPL D2 37.16 All schemes TB 38.24 38.67 D1 37.89 WA 36.91 ON 36.42 ST 34.27 EN 30.78 MR 34.65 D3 34.73 L2 32.25 L1 30.47 D2 37.39 4 best schemes TB 37.53 37.73 D1 36.05 WA 37.53 Table 6: Statistical Significance using Bootstrap Resampling DRC ROC D2 TB D1 WA ON 100 0 0 0 0 0 0 97.7 2.2 0.1 0 0 0 92.1 7.9 0 0 0 98.8 0.7 0.3 0.2 53.8 24.1 22.1 59.3 40.7 6.3 Significance Test We use bootstrap resampling to compute MT statistical significance as described in (Koehn, 2004a). The results are presented in Table 6. Comparing the 11 individual systems and the two combinations DRC and ROC shows that DRC is significantly better than the other systems – DRC got a max BLEU score in 100% of samples. When excluding DRC from the comparison set, ROC got max BLEU score in 97.7% of samples, while D2 and TB got max BLEU score in 2.2% and 0.1% of samples, respectively. The difference between ROC and D2 and ATB is statistically significant. 7 Conclusions and Future Work We motivated, described and evaluated several preprocessing schemes for Arabic. The choice of a preprocessing scheme is related to the size of available training data. We also presented two techniques for scheme combination. Although the results we got are not as high as the oracle scores, they are statistically significant. In the future, we plan to study additional scheme variants that our current results support as potentially helpful. We plan to include more 7 syntactic knowledge. We also plan to continue investigating combination techniques at the sentence and sub-sentence levels. We are especially interested in the relationship between alignment and decoding and the effect of preprocessing scheme on both. Acknowledgments This paper is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-06-C0023. Any opinions, findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of DARPA. We thank Roland Kuhn and George Forster for helpful discussions and support. References S. Bangalore, G. Bordel, and G. Riccardi. 2001. Computing Consensus Translation from Multiple Machine Translation Systems. In Proc. of IEEE Automatic Speech Recognition and Understanding Workshop, Italy. T. Buckwalter. 2002. Buckwalter Arabic Morphological Analyzer Version 1.0. Linguistic Data Consortium, University of Pennsylvania. Catalog: LDC2002L49. C. Callison-Burch, M. Osborne, and P. Koehn. 2006. Re-evaluating the Role of Bleu in Machine Translation Research. In Proc. of the European Chapter of the Association for Computational Linguistics (EACL), Trento, Italy. M. Diab, K. Hacioglu, and D. Jurafsky. 2004. Automatic Tagging of Arabic Text: From Raw Text to Base Phrase Chunks. In Proc. of the North American Chapter of the Association for Computational Linguistics (NAACL), Boston, MA. R. Frederking and S. Nirenburg. 2005. Three Heads are Better Than One. In Proc. of Applied Natural Language Processing, Stuttgart, Germany. S. Goldwater and D. McClosky. 2005. Improving Statistical MT through Morphological Analysis. In Proc. of Empirical Methods in Natural Language Processing (EMNLP), Vancouver, Canada. N. Habash and O. Rambow. 2005. Tokenization, Morphological Analysis, and Part-of-Speech Tagging for Arabic in One Fell Swoop. In Proc. of Association for Computational Linguistics (ACL), Ann Arbor, Michigan. N. Habash and F. Sadat. 2006. 
Arabic Preprocessing Schemes for Statistical Machine Translation. In Proc. of NAACL, Brooklyn, New York. N. Habash. 2004. Large Scale Lexeme-based Arabic Morphological Generation. In Proc. of Traitement Automatique du Langage Naturel (TALN). Fez, Morocco. S. Jayaraman and A. Lavie. 2005. Multi-Engine Machine Translation Guided by Explicit Word Matching. In Proc. of the Association of Computational Linguistics (ACL), Ann Arbor, MI. P. Koehn. 2004a. Pharaoh: a Beam Search Decoder for Phrase-based Statistical Machine Translation Models. In Proc. of the Association for Machine Translation in the Americas (AMTA). P. Koehn. 2004b. Statistical Significance Tests for Machine Translation Evaluation. In Proc. of the EMNLP, Barcelona, Spain. Y. Lee. 2004. Morphological Analysis for Statistical Machine Translation. In Proc. of NAACL, Boston, MA. Y. Lee. 2005. IBM Statistical Machine Translation for Spoken Languages. In Proc. of International Workshop on Spoken Language Translation (IWSLT). M. Maamouri, A. Bies, and T. Buckwalter. 2004. The Penn Arabic Treebank: Building a Large-scale Annotated Arabic Corpus. In Proc. of NEMLAR Conference on Arabic Language Resources and Tools, Cairo, Egypt. E. Matusov, N. Ueffing, H. Ney 2006. Computing Consensus Translation from Multiple Machine Translation Systems Using Enhanced Hypotheses Alignment. In Proc. of EACL, Trento, Italy. S. Nießen and H. Ney. 2004. Statistical Machine Translation with Scarce Resources Using Morphosyntactic Information. Computational Linguistics, 30(2). T. Nomoto. 2004. Multi-Engine Machine Translation with Voted Language Model. In Proc. of ACL, Barcelona, Spain. F. Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proc. of the ACL, Sapporo, Japan. F. Och. 2005. Google System Description for the 2005 Nist MT Evaluation. In MT Eval Workshop (unpublished talk). K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2001. Bleu: a Method for Automatic Evaluation of Machine Translation. Technical Report RC22176(W0109-022), IBM Research Division, Yorktown Heights, NY. M. Paul, T. Doi, Y. Hwang, K. Imamura, H. Okuma, and E. Sumita. 2005. Nobody is Perfect: ATR’s Hybrid Approach to Spoken Language Translation. In Proc. of IWSLT. M. Popovi´c and H. Ney. 2004. Towards the Use of Word Stems and Suffixes for Statistical Machine Translation. In Proc. of Language Resources and Evaluation (LREC), Lisbon, Portugal. F. Sadat, H. Johnson, A. Agbago, G. Foster, R. Kuhn, J. Martin, and A. Tikuisis. 2005. Portage: A Phrasebased Machine Translation System. In Proceedings of the ACL Workshop on Building and Using Parallel Texts, Ann Arbor, Michigan. A. Stolcke. 2002. Srilm - An Extensible Language Modeling Toolkit. In Proc. of International Conference on Spoken Language Processing. 8 | 2006 | 1 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 73–80, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Named Entity Transliteration with Comparable Corpora Richard Sproat, Tao Tao, ChengXiang Zhai University of Illinois at Urbana-Champaign, Urbana, IL, 61801 [email protected], {taotao,czhai}@cs.uiuc.edu Abstract In this paper we investigate ChineseEnglish name transliteration using comparable corpora, corpora where texts in the two languages deal in some of the same topics — and therefore share references to named entities — but are not translations of each other. We present two distinct methods for transliteration, one approach using phonetic transliteration, and the second using the temporal distribution of candidate pairs. Each of these approaches works quite well, but by combining the approaches one can achieve even better results. We then propose a novel score propagation method that utilizes the co-occurrence of transliteration pairs within document pairs. This propagation method achieves further improvement over the best results from the previous step. 1 Introduction As part of a more general project on multilingual named entity identification, we are interested in the problem of name transliteration across languages that use different scripts. One particular issue is the discovery of named entities in “comparable” texts in multiple languages, where by comparable we mean texts that are about the same topic, but are not in general translations of each other. For example, if one were to go through an English, Chinese and Arabic newspaper on the same day, it is likely that the more important international events in various topics such as politics, business, science and sports, would each be covered in each of the newspapers. Names of the same persons, locations and so forth — which are often transliterated rather than translated — would be found in comparable stories across the three papers.1 We wish to use this expectation to leverage transliteration, and thus the identification of named entities across languages. Our idea is that the occurrence of a cluster of names in, say, an English text, should be useful if we find a cluster of what looks like the same names in a Chinese or Arabic text. An example of what we are referring to can be found in Figure 1. These are fragments of two stories from the June 8, 2001 Xinhua English and Chinese newswires, each covering an international women’s badminton championship. Though these two stories are from the same newswire source, and cover the same event, they are not translations of each other. Still, not surprisingly, a lot of the names that occur in one, also occur in the other. Thus (Camilla) Martin shows up in the Chinese version as í û ¢ ma-er-ting; Judith Meulendijks is Ú ¤ ¬ × Ï Ë ¹ yu mo-lun-di-ke-si; and Mette Sorensen is õ ¤ ÷ × mai su-lun-sen. Several other correspondences also occur. While some of the transliterations are “standard” — thus Martin is conventionally transliterated as í û ¢ ma-erting — many of them were clearly more novel, though all of them follow the standard Chinese conventions for transliterating foreign names. 
These sample documents illustrate an important point: if a document in language L1 has a set of names, and one finds a document in L2 containing a set of names that look as if they could be transliterations of the names in the L1 document, then this should boost one’s confidence that the two sets of names are indeed transliterations of each other. We will demonstrate that this intuition is correct. 1Many names, particularly of organizations, may be translated rather than transliterated; the transliteration method we discuss here obviously will not account for such cases, though the time correlation and propagation methods we discuss will still be useful. 73 Dai Yun Nips World No. 1 Martin to Shake off Olympic Shadow . . . In the day’s other matches, second seed Zhou Mi overwhelmed Ling Wan Ting of Hong Kong, China 11-4, 114, Zhang Ning defeat Judith Meulendijks of Netherlands 112, 11-9 and third seed Gong Ruina took 21 minutes to eliminate Tine Rasmussen of Denmark 11-1, 11-1, enabling China to claim five quarterfinal places in the women’s singles. ð « ò À õ ü Ð ú ® ¥ ¡ Ö « ¿ Ò í Ë ¿ . . . íû¢ ¹Ïª, ý»Éܬø ½¤4öÐ ú Ë, ´ ¹ . . . ý Å Ö Ó ¨ £ Ç ñ í Ô ½ ö 11:1á ¡ ¤ ó ¡ Ö Ù¤ ¹·, Å þ Ú Ï ç Ô11:2Í11:9Ô Ë É ¼ Ä Ú¤¬×Ï˹, Ü Û Ú Â ç Ô11:4Í11:1½ ¤ Ë Ð ú ãÛ¡Öèñ Figure 1: Sample from two stories about an international women’s badminton championship. 2 Previous Work In previous work on Chinese named-entity transliteration — e.g. (Meng et al., 2001; Gao et al., 2004), the problem has been cast as the problem of producing, for a given Chinese name, an English equivalent such as one might need in a machine translation system. For example, for the name ¬ ¤ þ ® · ¹wei wei-lian-mu-si, one would like to arrive at the English name V(enus) Williams. Common approaches include sourcechannel methods, following (Knight and Graehl, 1998) or maximum-entropy models. Comparable corpora have been studied extensively in the literature (e.g.,(Fung, 1995; Rapp, 1995; Tanaka and Iwasaki, 1996; Franz et al., 1998; Ballesteros and Croft, 1998; Masuichi et al., 2000; Sadat et al., 2003)), but transliteration in the context of comparable corpora has not been well addressed. The general idea of exploiting frequency correlations to acquire word translations from comparable corpora has been explored in several previous studies (e.g., (Fung, 1995; Rapp, 1995; Tanaka and Iwasaki, 1996)).Recently, a method based on Pearson correlation was proposed to mine word pairs from comparable corpora (Tao and Zhai, 2005), an idea similar to the method used in (Kay and Roscheisen, 1993) for sentence alignment. In our work, we adopt the method proposed in (Tao and Zhai, 2005) and apply it to the problem of transliteration. We also study several variations of the similarity measures. Mining transliterations from multilingual web pages was studied in (Zhang and Vines, 2004); Our work differs from this work in that we use comparable corpora (in particular, news data) and leverage the time correlation information naturally available in comparable corpora. 3 Chinese Transliteration with Comparable Corpora We assume that we have comparable corpora, consisting of newspaper articles in English and Chinese from the same day, or almost the same day. 
In our experiments we use data from the English and Chinese stories from the Xinhua News agency for about 6 months of 2001.2 We assume that we have identified names for persons and locations—two types that have a strong tendency to be transliterated wholly or mostly phonetically—in the English text; in this work we use the named-entity recognizer described in (Li et al., 2004), which is based on the SNoW machine learning toolkit (Carlson et al., 1999). To perform the transliteration task, we propose the following general three-step approach: 1. Given an English name, identify candidate Chinese character n-grams as possible transliterations. 2. Score each candidate based on how likely the candidate is to be a transliteration of the English name. We propose two different scoring methods. The first involves phonetic scoring, and the second uses the frequency profile of the candidate pair over time. We will show that each of these approaches works quite well, but by combining the approaches one can achieve even better results. 3. Propagate scores of all the candidate transliteration pairs globally based on their cooccurrences in document pairs in the comparable corpora. The intuition behind the third step is the following. Suppose several high-confidence name transliteration pairs occur in a pair of English and Chinese documents. Intuitively, this would increase our confidence in the other plausible transliteration pairs in the same document pair. We thus propose a score propagation method to allow these high-confidence pairs to propagate some of their 2Available from the LDC via the English Gigaword (LDC2003T05) and Chinese Gigaword (LDC2003T09) corpora. 74 scores to other co-occurring transliteration pairs. As we will show later, such a propagation strategy can generally further improve the transliteration accuracy; in particular, it can further improve the already high performance from combining the two scoring methods. 3.1 Candidate Selection The English named entity candidate selection process was already described above. Candidate Chinese transliterations are generated by consulting a list of characters that are frequently used for transliterating foreign names. As discussed elsewhere (Sproat et al., 1996), a subset of a few hundred characters (out of several thousand) tends to be used overwhelmingly for transliterating foreign names into Chinese. We use a list of 495 such characters, derived from various online dictionaries. A sequence of three or more characters from the list is taken as a possible name. If the character “¤” occurs, which is frequently used to represent the space between parts of an English name, then at least one character to the left and right of this character will be collected, even if the character in question is not in the list of “foreign” characters. Armed with the English and Chinese candidate lists, we then consider the pairing of every English candidate with every Chinese candidate. Obviously it would be impractical to do this for all of the candidates generated for, say, an entire year: we consider as plausible pairings those candidates that occur within a day of each other in the two corpora. 3.2 Candidate scoring based on pronunciation We adopt a source-channel model for scoring English-Chinese transliteration pairs. In general, we seek to estimate P(e|c), where e is a word in Roman script, and c is a word in Chinese script. 
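A minimal sketch of the Chinese candidate-generation step just described: collect runs of three or more characters drawn from the list of characters commonly used for transliterating foreign names. The character set below is a tiny stand-in for the paper's 495-character list, the separator between name parts is assumed to be the middle dot (it appears garbled in the text above), and the paper's extra rule of keeping one neighbouring character even when it is off the list is omitted for brevity.

```python
# Tiny illustrative subset of characters frequently used in foreign-name
# transliteration; the real list has 495 entries.
TRANSLIT_CHARS = set("马尔廷丁克林顿巴斯特拉")
SEPARATOR = "·"   # assumed name-part separator

def candidate_names(text, min_len=3):
    """Return maximal runs of transliteration characters of length >= min_len."""
    candidates, run = [], []

    def flush():
        name = "".join(run).strip(SEPARATOR)
        if len(name) >= min_len:
            candidates.append(name)
        run.clear()

    for ch in text:
        if ch in TRANSLIT_CHARS or (ch == SEPARATOR and run):
            run.append(ch)
        else:
            flush()
    flush()
    return candidates

if __name__ == "__main__":
    print(candidate_names("她在决赛中击败了马尔廷"))   # -> ['马尔廷']
```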
Since Chinese transliteration is mostly based on pronunciation, we estimate P(e′|c′), where e′ is the pronunciation of e and c′ is the pronunciation of c. Again following standard practice, we decompose the estimate of P(e′|c′) as P(e′|c′) = Q i P(e′ i|c′ i). Here, e′ i is the ith subsequence of the English phone string, and c′ i is the ith subsequence of the Chinese phone string. Since Chinese transliteration attempts to match the syllablesized characters to equivalent sounding spans of the English language, we fix the c′ i to be syllables, and let the e′ i range over all possible subsequences of the English phone string. For training data we have a small list of 721 names in Roman script and their Chinese equivalent.3 Pronunciations for English words are obtained using the Festival text-tospeech system (Taylor et al., 1998); for Chinese, we use the standard pinyin transliteration of the characters. English-Chinese pairs in our training dictionary were aligned using the alignment algorithm from (Kruskal, 1999), and a hand-derived set of 21 rules-of-thumb: for example, we have rules that encode the fact that Chinese /l/ can correspond to English /r/, /n/ or /er/; and that Chinese /w/ may be used to represent /v/. Given that there are over 400 syllables in Mandarin (not counting tone) and each of these syllables can match a large number of potential English phone spans, this is clearly not enough training data to cover all the parameters, and so we use Good-Turing estimation to estimate probabilities for unseen correspondences. Since we would like to filter implausible transliteration pairs we are less lenient than standard estimation techniques in that we are willing to assign zero probability to some correspondences. Thus we set a hard rule that for an English phone span to correspond to a Chinese syllable, the initial phone of the English span must have been seen in the training data as corresponding to the initial of the Chinese syllable some minimum number of times. For consonant-initial syllables we set the minimum to 4. We omit further details of our estimation technique for lack of space. This phonetic correspondence model can then be used to score putative transliteration pairs. 3.3 Candidate Scoring based on Frequency Correlation Names of the same entity that occur in different languages often have correlated frequency patterns due to common triggers such as a major event. Thus if we have comparable news articles over a sufficiently long time period, it is possible to exploit such correlations to learn the associations of names in different languages. The idea of exploiting frequency correlation has been well studied. (See the previous work section.) We adopt the method proposed in (Tao and Zhai, 2005), which 3The LDC provides a much larger list of transliterated Chinese-English names, but we did not use this here for two reasons. First, we have found it it be quite noisy. Secondly, we were interested in seeing how well one could do with a limited resource of just a few hundred names, which is a more realistic scenario for languages that have fewer resources than English and Chinese. 75 works as follows: We pool all documents in a single day to form a large pseudo-document. Then, for each transliteration candidate (both Chinese and English), we compute its frequency in each of those pseudo-documents and obtain a raw frequency vector. We further normalize the raw frequency vector so that it becomes a frequency distribution over all the time points (days). 
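A minimal sketch of the frequency-correlation scoring set up here: build a per-day relative-frequency vector for a candidate from the pooled daily pseudo-documents, then compare the English and Chinese vectors with the Pearson correlation coefficient adopted from Tao and Zhai (2005). Tokenisation and the pooling of each day's articles are assumed to have been done already; the data-structure choices are illustrative.

```python
import math

def frequency_distribution(term, docs_by_day, days):
    """Relative-frequency vector of `term` over `days`.

    docs_by_day maps a day to that day's pooled pseudo-document,
    represented here simply as a list of candidate strings."""
    raw = [docs_by_day.get(day, []).count(term) for day in days]
    total = sum(raw)
    return [c / total for c in raw] if total else [0.0] * len(days)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0
```

A candidate pair (e, c) would then be scored by the correlation of the two distributions computed over the same date range, e.g. pearson(frequency_distribution(e, english_days, days), frequency_distribution(c, chinese_days, days)).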
In order to compute the similarity between two distribution vectors, The Pearson correlation coefficient was used in (Tao and Zhai, 2005); here we also considered two other commonly used measures – cosine (Salton and McGill, 1983), and Jensen-Shannon divergence (Lin, 1991), though our results show that Pearson correlation coefficient performs better than these two other methods. 3.4 Score Propagation In both scoring methods described above, scoring of each candidate transliteration pair is independent of the other. As we have noted, document pairs that contain lots of plausible transliteration pairs should be viewed as more plausible document pairs; at the same time, in such a situation we should also trust the putative transliteration pairs more. Thus these document pairs and transliteration pairs mutually “reinforce” each other, and this can be exploited to further optimize our transliteration scores by allowing transliteration pairs to propagate their scores to each other according to their co-occurrence strengths. Formally, suppose the current generation of transliteration scores are (ei, ci, wi) i = 1, ..., n, where (ei, ci) is a distinct pair of English and Chinese names. Note that although for any i ̸= j, we have (ei, ci) ̸= (ej, cj), it is possible that ei = ej or ci = cj for some i ̸= j. wi is the transliteration score of (ei, ci). These pairs along with their co-occurrence relation computed based on our comparable corpora can be formally represented by a graph as shown in Figure 2. In such a graph, a node represents (ei, ci, wi). An edge between (ei, ci, wi) and (ej, cj, wj) is constructed iff (ei, ci) and (ej, cj) co-occur in a certain document pair (Et, Ct), i.e. there exists a document pair (Et, Ct), such that ei, ej ∈Et and ci, cj ∈Ct. Given a node (ei, ci, wi), we refer to all its directly-connected nodes as its “neighbors”. The documents do not appear explicitly in the graph, but they implicitly affect the graph’s topology and the weight of each edge. Our idea of score propagation can now be formulated as the following recursive equation for w1 w4 w2 w3 w5 w6 w7 (e4, c4) (e3, c3) (e5, c5) (e5, c5) (e2, c2) (e7, c7) (e6, c6) Figure 2: Graph representing transliteration pairs and cooccurence relations. updating the scores of all the transliteration pairs. w(k) i = α × w(k−1) i + (1 −α) × n X j̸=i,j=1 (w(k−1) j × P(j|i)), where w(k) i is the new score of the pair (ei, ci) after an iteration, while w(k−1) i is its old score before updating; α ∈[0, 1] is a parameter to control the overall amount of propagation (when α = 1, no propagation occurs); P(j|i) is the conditional probability of propagating a score from node (ej, cj, wj) to node (ei, ci, wi). We estimate P(j|i) in two different ways: 1) The number of cooccurrences in the whole collection (Denote as CO). P(j|i) = C(i,j) P j′ C(i,j′), where C(i, j) is the cooccurrence count of (ei, ci) and (ej, cj); 2) A mutual information-based method (Denote as MI). P(j|i) = MI(i,j) P j′ MI(i,j′), where MI(i, j) is the mutual information of (ei, ci) and (ej, cj). As we will show, the CO method works better. Note that the transition probabilities between indirect neighbors are always 0. Thus propagation only happens between direct neighbors. This formulation is very similar to PageRank, a link-based ranking algorithm for Web retrieval (Brin and Page, 1998). However, our motivation is propagating scores to exploit cooccurrences, so we do not necessarily want the equation to converge. 
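The propagation update rendered poorly above can be read as w_i^(k) = α · w_i^(k-1) + (1 - α) · Σ_{j≠i} w_j^(k-1) · P(j|i), with P(j|i) = C(i,j) / Σ_{j'} C(i,j') in the CO variant. A minimal sketch under that reading follows; it assumes one positive count per unordered pair of co-occurring candidates, and it is meant to be run for only a few iterations with α close to 1 rather than to convergence.

```python
from collections import defaultdict

def propagate(scores, cooccur, alpha=0.9, iterations=5):
    """Propagate transliteration-pair scores over the co-occurrence graph.

    scores  : candidate pair -> initial score w_i
    cooccur : (pair_i, pair_j) -> co-occurrence count C(i, j), one entry per
              unordered pair of candidates that share a document pair
    """
    counts = defaultdict(lambda: defaultdict(int))
    for (i, j), c in cooccur.items():
        counts[i][j] += c
        counts[j][i] += c
    # row-normalised transition probabilities P(j | i)
    trans = {i: {j: c / sum(nbrs.values()) for j, c in nbrs.items()}
             for i, nbrs in counts.items()}
    w = dict(scores)
    for _ in range(iterations):
        w = {i: alpha * w[i]
                + (1 - alpha) * sum(w.get(j, 0.0) * p
                                    for j, p in trans.get(i, {}).items())
             for i in w}
    return w
```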
Indeed, our results show that although the initial iterations always help improve accuracy, too many iterations actually would decrease the performance. 4 Evaluation We use a comparable English-Chinese corpus to evaluate our methods for Chinese transliteration. We take one day’s worth of comparable news articles (234 Chinese stories and 322 English stories), generate about 600 English names with the entity recognizer (Li et al., 2004) as described above, and 76 find potential Chinese transliterations also as previously described. We generated 627 Chinese candidates. In principle, all these 600 × 627 pairs are potential transliterations. We then apply the phonetic and time correlation methods to score and rank all the candidate Chinese-English correspondences. To evaluate the proposed transliteration methods quantitatively, we measure the accuracy of the ranked list by Mean Reciprocal Rank (MRR), a measure commonly used in information retrieval when there is precisely one correct answer (Kantor and Voorhees, 2000). The reciprocal rank is the reciprocal of the rank of the correct answer. For example, if the correct answer is ranked as the first, the reciprocal rank would be 1.0, whereas if it is ranked the second, it would be 0.5, and so forth. To evaluate the results for a set of English names, we take the mean of the reciprocal rank of each English name. We attempted to create a complete set of answers for all the English names in our test set, but a small number of English names do not seem to have any standard transliteration according to the resources that we consulted. We ended up with a list of about 490 out of the 600 English names judged. We further notice that some answers (about 20%) are not in our Chinese candidate set. This could be due to two reasons: (1) The answer does not occur in the Chinese news articles we look at. (2) The answer is there, but our candidate generation method has missed it. In order to see more clearly how accurate each method is for ranking the candidates, we also compute the MRR for the subset of English names whose transliteration answers are in our candidate list. We distinguish the MRRs computed on these two sets of English names as “AllMRR” and “CoreMRR”. Below we first discuss the results of each of the two methods. We then compare the two methods and discuss results from combining the two methods. 4.1 Phonetic Correspondence We show sample results for the phonetic scoring method in Table 1. This table shows the 10 highest scoring transliterations for each Chinese character sequence based on all texts in the Chinese and English Xinhua newswire for the 13th of August, 2001. 8 out of these 10 are correct. For all the English names the MRR is 0.3, and for the ∗paris å×¹ pei-lei-si 3.51 iraq ÁË yi-la-ke 3.74 staub ¹þ® si-ta-bo 4.45 canada Ó ó jia-na-da 4.85 belfast ´û¨¹Ø bei-er-fa-si-te 4.90 fischer Æáû fei-she-er 4.91 philippine ÆÉö fei-l¨u-bin 4.97 lesotho ³÷Ð lai-suo-two 5.12 ∗tirana ú·Ú tye-lu-na 5.15 freeman ¥ïü fu-li-man 5.26 Table 1: Ten highest-scoring matches for the Xinhua corpus for 8/13/01. The final column is the −log P estimate for the transliteration. Starred entries are incorrect. core names it is 0.89. Thus on average, the correct answer, if it is included in our candidate list, is ranked mostly as the first one. 4.2 Frequency correlation Similarity AllMRR CoreMRR Pearson 0.1360 0.3643 Cosine 0.1141 0.3015 JS-div 0.0785 0.2016 Table 2: MRRs of the frequency correlation methods. 
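A minimal sketch of the MRR evaluation: the reciprocal rank of the first correct transliteration for each English name, averaged over names. Counting names whose answer is absent from the candidate list as zero is one natural reading of AllMRR; CoreMRR simply restricts the average to names whose answer is present.

```python
def reciprocal_rank(ranked_candidates, gold):
    """1/rank of the first correct answer, or 0.0 if none is in the list."""
    for rank, cand in enumerate(ranked_candidates, start=1):
        if cand in gold:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(ranked_lists, gold_sets, core_only=False):
    """MRR over English names.

    ranked_lists : name -> Chinese candidates sorted best-first
    gold_sets    : name -> set of acceptable transliterations
    core_only    : True restricts to names whose answer appears in the list (CoreMRR)."""
    rrs = []
    for name, ranked in ranked_lists.items():
        rr = reciprocal_rank(ranked, gold_sets.get(name, set()))
        if core_only and rr == 0.0:
            continue
        rrs.append(rr)
    return sum(rrs) / len(rrs) if rrs else 0.0
```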
We proposed three similarity measures for the frequency correlation method, i.e., the Cosine, Pearson coefficient, and Jensen-Shannon divergence. In Table 2, we show their MRRs. Given that the only resource the method needs is comparable text documents over a sufficiently long period, these results are quite encouraging. For example, with Pearson correlation, when the Chinese transliteration of an English name is included in our candidate list, the correct answer is, on average, ranked at the 3rd place or better. The results thus show that the idea of exploiting frequency correlation does work. We also see that among the three similarity measures, Pearson correlation performs the best; it performs better than Cosine, which is better than JS-divergence. Compared with the phonetic correspondence method, the performance of the frequency correlation method is in general much worse, which is not surprising, given the fact that terms may be correlated merely because they are topically related. 77 4.3 Combination of phonetic correspondence and frequency correlation Method AllMRR CoreMRR Phonetic 0.2999 0.8895 Freq 0.1360 0.3643 Freq+PhoneticFilter 0.3062 0.9083 Freq+PhoneticScore 0.3194 0.9474 Table 3: Effectiveness of combining the two scoring methods. Since the two methods exploit complementary resources, it is natural to see if we can improve performance by combining the two methods. Indeed, intuitively the best candidate is the one that has a good pronunciation alignment as well as a correlated frequency distribution with the English name. We evaluated two strategies for combining the two methods. The first strategy is to use the phonetic model to filter out (clearly impossible) candidates and then use the frequency correlation method to rank the candidates. The second is to combine the scores of these two methods. Since the correlation coefficient has a maximum value of 1, we normalize the phonetic correspondence score by dividing all scores by the maximum score so that the maximum normalized value is also 1. We then take the average of the two scores and rank the candidates based on their average scores. Note that the second strategy implies the application of the first strategy. The results of these two combination strategies are shown in Table 3 along with the results of the two individual methods. We see that both combination strategies are effective and the MRRs of the combined results are all better than those of the two individual methods. It is interesting to see that the benefit of applying the phonetic correspondence model as a filter is quite significant. Indeed, although the performance of the frequency correlation method alone is much worse than that of the phonetic correspondence method, when working on the subset of candidates passing the phonetic filter (i.e., those candidates that have a reasonable phonetic alignment with the English name), it can outperform the phonetic correspondence method. This once again indicates that exploiting the frequency correlation can be effective. When combining the scores of these two methods, we not only (implicitly) apply the phonetic filter, but also exploit the discriminative power provided by the phonetic correspondence scores and this is shown to bring in additional benefit, giving the best performance among all the methods. 4.4 Error Analysis From the results above, we see that the MRRs for the core English names are substantially higher than those for all the English names. 
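A minimal sketch of the second combination strategy: divide the phonetic-correspondence scores by their maximum so that, like the correlation coefficient, they top out at 1, then average the two scores; dropping pairs rejected by the phonetic model reproduces the filtering strategy. It assumes the phonetic score has already been converted into a higher-is-better form (the table above reports -log P values, where lower is better) and that the score table is non-empty.

```python
def combine_scores(phonetic, correlation):
    """Rank candidate pairs by the average of two normalised scores.

    phonetic    : pair -> phonetic-correspondence score, higher is better;
                  pairs ruled out by the phonetic model carry a score <= 0
    correlation : pair -> Pearson correlation of the two frequency vectors
    """
    max_p = max(phonetic.values())
    combined = {}
    for pair, p in phonetic.items():
        if p <= 0.0:                    # the phonetic filter
            continue
        combined[pair] = 0.5 * (p / max_p + correlation.get(pair, 0.0))
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
```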
This means that our methods perform very well whenever we have the answer in our candidate list, but we have also missed the answers for many English names. The missing of an answer in the candidate list is thus a major source of errors. To further understand the upper bound of our method, we manually add the missing correct answers to our candidate set and apply all the methods to rank this augmented set of candidates. The performance is reported in Table 4 with the corresponding performance on the original candidate set. We see that, Method ALLMRR Original Augmented Phonetic 0.2999 0.7157 Freq 0.1360 0.3455 Freq+PhoneticFilter 0.3062 0.6232 Freq+PhoneticScore 0.3194 0.7338 Table 4: MRRs on the augmented candidate list. as expected, the performance on the augmented candidate list, which can be interpreted as an upper bound of our method, is indeed much better, suggesting that if we can somehow improve the candidate generation method to include the answers in the list, we can expect to significantly improve the performance for all the methods. This is clearly an interesting topic for further research. The relative performance of different methods on this augmented candidate list is roughly the same as on the original candidate list, except that the “Freq+PhoneticFilter” is slightly worse than that of the phonetic method alone, though it is still much better than the performance of the frequency correlation alone. One possible explanation may be that since these names do not necessarily occur in our comparable corpora, we may not have sufficient frequency observations for some of the names. 78 Method AllMRR CoreMRR init. CO MI init. CO MI Freq+PhoneticFilter 0.3171 0.3255 0.3255 0.9058 0.9372 0.9372 Freq+PhoneticScore 0.3290 0.3373 0.3392 0.9422 0.9659 0.9573 Table 5: Effectiveness of score propagation. 4.5 Experiments on score propagation To demonstrate that score propagation can further help transliteration, we use the combination scores in Table 3 as the initial scores, and apply our propagation algorithm to iteratively update them. We remove the entries when they do not co-occur with others. There are 25 such English name candidates. Thus, the initial scores are actually slightly different from the values in Table 3. We show the new scores and the best propagation scores in Table 5. In the table, “init.” refers to the initial scores. and “CO” and “MI” stand for best scores obtained using either the co-occurrence or mutual information method. While both methods result in gains, CO very slightly outperforms the MI approach. In the score propagation process, we introduce two additional parameters: the interpolation parameter α and the number of iterations k. Figure 3 and Figure 4 show the effects of these parameters. Intuitively, we want to preserve the initial score of a pair, but add a slight boost from its neighbors. Thus, we set α very close to 1 (0.9 and 0.95), and allow the system to perform 20 iterations. In both figures, the first few iterations certainly leverage the transliteration, demonstrating that the propagation method works. However, we observe that the performance drops when more iterations are used, presumably due to noise introduced from more distantly connected nodes. Thus, a relatively conservative approach is to choose a high α value, and run only a few iterations. Note, finally, that the CO method seems to be more stable than the MI method. 
5 Conclusions and Future Work In this paper we have discussed the problem of Chinese-English name transliteration as one component of a system to find matching names in comparable corpora. We have proposed two methods for transliteration, one that is more traditional and based on phonetic correspondences, and one that is based on word distributions and adopts methods from information retrieval. We have shown 0.76 0.78 0.8 0.82 0.84 0.86 0.88 0.9 0.92 0.94 0.96 0.98 0 2 4 6 8 10 12 14 16 18 20 MRR values number of iterations alpha=0.9, MI alpha=0.9, CO alpha=0.95, MI alpha=0.95, CO Figure 3: Propagation: Core items that both methods yield good results, and that even better results can be achieved by combining the methods. We have further showed that one can improve upon the combined model by using reinforcement via score propagation when transliteration pairs cluster together in document pairs. The work we report is ongoing. We are investigating transliterations among several language pairs, and are extending these methods to Korean, Arabic, Russian and Hindi — see (Tao et al., 2006). 6 Acknowledgments This work was funded by Dept. of the Interior contract NBCHC040176 (REFLEX). We also thank three anonymous reviewers for ACL06. References Lisa Ballesteros and W. Bruce Croft. 1998. Resolving ambiguity for cross-language retrieval. In Research and Development in Information Retrieval, pages 64–71. Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30:107– 117. 79 0.28 0.29 0.3 0.31 0.32 0.33 0.34 0 2 4 6 8 10 12 14 16 18 20 MRR values number of iterations alpha=0.9, MI alpha=0.9, CO alpha=0.95, MI alpha=0.95, CO Figure 4: Propagation: All items A. Carlson, C. Cumby, J. Rosen, and D. Roth. 1999. The SNoW learning architecture. Technical Report UIUCDCS-R-99-2101, UIUC CS Dept. Martin Franz, J. Scott McCarley, and Salim Roukos. 1998. Ad hoc and multilingual information retrieval at IBM. In Text REtrieval Conference, pages 104– 115. Pascale Fung. 1995. A pattern matching method for finding noun and proper noun translations from noisy parallel corpora. In Proceedings of ACL 1995, pages 236–243. W. Gao, K.-F. Wong, and W. Lam. 2004. Phonemebased transliteration of foreign names for OOV problem. In IJCNLP, pages 374–381, Sanya, Hainan. P. Kantor and E. Voorhees. 2000. The TREC-5 confusion track: Comparing retrieval methods for scanned text. Information Retrieval, 2:165–176. M. Kay and M. Roscheisen. 1993. Text translation alignment. Computational Linguistics, 19(1):75– 102. K. Knight and J. Graehl. 1998. Machine transliteration. CL, 24(4). J. Kruskal. 1999. An overview of sequence comparison. In D. Sankoff and J. Kruskal, editors, Time Warps, String Edits, and Macromolecules, chapter 1, pages 1–44. CSLI, 2nd edition. X. Li, P. Morie, and D. Roth. 2004. Robust reading: Identification and tracing of ambiguous names. In NAACL-2004. J. Lin. 1991. Divergence measures based on the shannon entropy. IEEE Transactions on Information Theory, 37(1):145–151. H. Masuichi, R. Flournoy, S. Kaufmann, and S. Peters. 2000. A bootstrapping method for extracting bilingual text pairs. H.M. Meng, W.K Lo, B. Chen, and K. Tang. 2001. Generating phonetic cognates to handle named entities in English-Chinese cross-languge spoken document retrieval. In Proceedings of the Automatic Speech Recognition and Understanding Workshop. R. Rapp. 1995. Identifying word translations in nonparallel texts. In Proceedings of ACL 1995, pages 320–322. 
Fatiha Sadat, Masatoshi Yoshikawa, and Shunsuke Uemura. 2003. Bilingual terminology acquisition from comparable corpora and phrasal translation to crosslanguage information retrieval. In ACL ’03, pages 141–144. G. Salton and M. McGill. 1983. Introduction to Modern Information Retrieval. McGraw-Hill. R. Sproat, C. Shih, W. Gale, and N. Chang. 1996. A stochastic finite-state word-segmentation algorithm for Chinese. CL, 22(3). K. Tanaka and H. Iwasaki. 1996. Extraction of lexical translation from non-aligned corpora. In Proceedings of COLING 1996. Tao Tao and ChengXiang Zhai. 2005. Mining comparable bilingual text corpora for cross-language information integration. In KDD’05, pages 691–696. Tao Tao, Su-Youn Yoon, Andrew Fister, Richard Sproat, and ChengXiang Zhai. 2006. Unsupervised named entity transliteration using temporal and phonetic correlation. In EMNLP 2006, Sydney, July. P. Taylor, A. Black, and R. Caley. 1998. The architecture of the Festival speech synthesis system. In Proceedings of the Third ESCA Workshop on Speech Synthesis, pages 147–151, Jenolan Caves, Australia. Ying Zhang and Phil Vines. 2004. Using the web for automated translation extraction in cross-language information retrieval. In SIGIR ’04, pages 162–169. 80 | 2006 | 10 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 793–800, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Ontologizing Semantic Relations Marco Pennacchiotti ART Group - DISP University of Rome “Tor Vergata” Viale del Politecnico 1 Rome, Italy [email protected] Patrick Pantel Information Sciences Institute University of Southern California 4676 Admiralty Way Marina del Rey, CA90292 [email protected] Abstract Many algorithms have been developed to harvest lexical semantic resources, however few have linked the mined knowledge into formal knowledge repositories. In this paper, we propose two algorithms for automatically ontologizing (attaching) semantic relations into WordNet. We present an empirical evaluation on the task of attaching partof and causation relations, showing an improvement on F-score over a baseline model. 1 Introduction NLP researchers have developed many algorithms for mining knowledge from text and the Web, including facts (Etzioni et al. 2005), semantic lexicons (Riloff and Shepherd 1997), concept lists (Lin and Pantel 2002), and word similarity lists (Hindle 1990). Many recent efforts have also focused on extracting binary semantic relations between entities, such as entailments (Szpektor et al. 2004), is-a (Ravichandran and Hovy 2002), part-of (Girju et al. 2003), and other relations. The output of most of these systems is flat lists of lexical semantic knowledge such as “Italy is-a country” and “orange similar-to blue”. However, using this knowledge beyond simple keyword matching, for example in inferences, requires it to be linked into formal semantic repositories such as ontologies or term banks like WordNet (Fellbaum 1998). Pantel (2005) defined the task of ontologizing a lexical semantic resource as linking its terms to the concepts in a WordNet-like hierarchy. For example, “orange similar-to blue” ontologizes in WordNet to “orange#2 similar-to blue#1” and “orange#2 similar-to blue#2”. In his framework, Pantel proposed a method of inducing ontological co-occurrence vectors 1 which are subsequently used to ontologize unknown terms into WordNet with 74% accuracy. In this paper, we take the next step and explore two algorithms for ontologizing binary semantic relations into WordNet and we present empirical results on the task of attaching part-of and causation relations. Formally, given an instance (x, r, y) of a binary relation r between terms x and y, the ontologizing task is to identify the WordNet senses of x and y where r holds. For example, the instance (proton, PART-OF, element) ontologizes into WordNet as (proton#1, PART-OF, element#2). The first algorithm that we explore, called the anchoring approach, was suggested as a promising avenue of future work in (Pantel 2005). This bottom up algorithm is based on the intuition that x can be disambiguated by retrieving the set of terms that occur in the same relation r with y and then finding the senses of x that are most similar to this set. The assumption is that terms occurring in the same relation will tend to have similar meaning. In this paper, we propose a measure of similarity to capture this intuition. In contrast to anchoring, our second algorithm, called the clustering approach, takes a top-down view. Given a relation r, suppose that we are given every conceptual instance of r, i.e., instances of r in the upper ontology like (particles#1, PART-OF, substances#1). 
An instance (x, r, y) can then be ontologized easily by finding the senses of x and y that are subsumed by ancestors linked by a conceptual instance of r. For example, the instance (proton, PART-OF, element) ontologizes to (proton#1, PART-OF, element#2) since proton#1 is subsumed by particles and element#2 is subsumed by substances. The problem then is to automatically infer the set of con 1 The ontological co-occurrence vector of a concept consists of all lexical co-occurrences with the concept in a corpus. 793 ceptual instances. In this paper, we develop a clustering algorithm for generalizing a set of relation instances to conceptual instances by looking up the WordNet hypernymy hierarchy for common ancestors, as specific as possible, that subsume as many instances as possible. An instance is then attached to its senses that are subsumed by the highest scoring conceptual instances. 2 Relevant Work Several researchers have worked on ontologizing semantic resources. Most recently, Pantel (2005) developed a method to propagate lexical cooccurrence vectors to WordNet synsets, forming ontological co-occurrence vectors. Adopting an extension of the distributional hypothesis (Harris 1985), the co-occurrence vectors are used to compute the similarity between synset/synset and between lexical term/synset. An unknown term is then attached to the WordNet synset whose cooccurrence vector is most similar to the term’s co-occurrence vector. Though the author suggests a method for attaching more complex lexical structures like binary semantic relations, the paper focused only on attaching terms. Basili (2000) proposed an unsupervised method to infer semantic classes (WordNet synsets) for terms in domain-specific verb relations. These relations, such as (x, EXPAND, y) are first automatically learnt from a corpus. The semantic classes of x and y are then inferred using conceptual density (Agirre and Rigau 1996), a WordNet-based measure applied to all instantiation of x and y in the corpus. Semantic classes represent possible common generalizations of the verb arguments. At the end of the process, a set of syntactic-semantic patterns are available for each verb, such as: (social_group#1, expand, act#2) (instrumentality#2, expand, act#2) The method is successful on specific relations with few instances (such as domain verb relations) while its value on generic and frequent relations, such as part-of, was untested. Girju et al. (2003) presented a highly supervised machine learning algorithm to infer semantic constraints on part-of relations, such as (object#1, PART-OF, social_event#1). These constraints are then used as selectional restrictions in harvesting part-of instances from ambiguous lexical patterns, like “X of Y”. The approach shows high performance in terms of precision and recall, but, as the authors acknowledge, it requires large human effort during the training phase. Others have also made significant additions to WordNet. For example, in eXtended WordNet (Harabagiu et al. 1999), the glosses in WordNet are enriched by disambiguating the nouns, verbs, adverbs, and adjectives with synsets. Another work has enriched WordNet synsets with topically related words extracted from the Web (Agirre et al. 2001). Finally, the general task of word sense disambiguation (Gale et al. 1991) is relevant since there the task is to ontologize each term in a passage into a WordNet-like sense inventory. 
If we had a large collection of sensetagged text, then our mining algorithms could directly discover WordNet attachment points at harvest time. However, since there is little high precision sense-tagged corpora, methods are required to ontologize semantic resources without fully disambiguating text. 3 Ontologizing Semantic Relations Given an instance (x, r, y) of a binary relation r between terms x and y, the ontologizing task is to identify the senses of x and y where r holds. In this paper, we focus on WordNet 2.0 senses, though any similar term bank would apply. Let Sx and Sy be the sets of all WordNet senses of x and y. A sense pair, sxy, is defined as any pair of senses of x and y: sxy={sx, sy} where sx∈Sx and sy∈Sy. The set of all sense pairs Sxy consists of all permutations between senses in Sx and Sy. In order to attach a relation instance (x, r, y) into WordNet, one must: • Disambiguate x and y, that is, find the subsets S'x⊆Sx and S'y⊆Sy for which the relation r holds; and • Instantiate the relation in WordNet, using the synsets corresponding to all correct permutations between the senses in S'x and S'y. We denote this set of attachment points as S'xy. If Sx or Sy is empty, no attachments are produced. For example, the instance (study, PART-OF, report) is ontologized into WordNet through the senses S'x={survey#1, study#2} and S’y={report#1}. The final attachment points S'xy are: (survey#1, PART-OF, report#1) (study#1, PART-OF, report#1) Unlike common algorithms for word sense disambiguation, here it is important to take into consideration the semantic dependency between the two terms x and y. For example, an entity that is part-of a study has to be some kind of informa794 tion. This knowledge about mutual selectional preference (the preferred semantic class that fills a certain relation role, as x or y) can be exploited to ontologize the instance. In the following sections, we propose two algorithms for ontologizing binary semantic relations. 3.1 Method 1: Anchor Approach Given an instance (x, r, y), this approach fixes the term y, called the anchor, and then disambiguates x by looking at all other terms that occur in the relation r with y. Based on the principle of distributional similarity (Harris 1985), the algorithm assumes that the words that occur in the same relation r with y will be more similar to the correct sense(s) of x than the incorrect ones. After disambiguating x, the process is then inverted with x as the anchor to disambiguate y. In the first step, y is fixed and the algorithm retrieves the set of all other terms X' that occur in an instance (x', r, y), x' ∈ X'2. For example, given the instance (reflections, PART-OF, book), and a resource containing the following relations: (false allegations, PART-OF, book) (stories, PART-OF, book) (expert analysis, PART-OF, book) (conclusions, PART-OF, book) the resulting set X' would be: {allegations, stories, analysis, conclusions}. All possible permutations, Sxx', between the senses of x and the senses of each term in X', called Sx', are computed. For each sense pair {sx, sx'} ∈ Sxx', a similarity score r(sx, sx') is calculated using WordNet: ) ( 1 ) , ( 1 ) , ( ' ' ' x x x x x s f s s d s s r × + = where the distance d(sx, sx') is the length of the shortest path connecting the two synsets in the hypernymy hierarchy of WordNet, and f(sx') is the number of times sense sx' occurs in any of the instances of X'. Note that if no connection between two synsets exists, then r(sx, sx') = 0. 
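The pairwise similarity above can be read, reconstructing the garbled formula, as r(s_x, s_x') = f(s_x') / (1 + d(s_x, s_x')). Below is a minimal sketch under that reading, using a small hand-built hypernymy graph as a stand-in for WordNet; the overall sense score defined just below is simply the sum of these pairwise similarities over the senses of the co-occurring terms, which sense_score anticipates.

```python
from collections import deque

# Tiny undirected hypernymy graph standing in for WordNet (synset -> neighbours).
HYPERNYM_GRAPH = {
    "survey#1": ["examination#1"],
    "examination#1": ["survey#1", "investigation#1"],
    "study#2": ["investigation#1"],
    "investigation#1": ["examination#1", "study#2", "work#1"],
    "work#1": ["investigation#1"],
}

def shortest_path(a, b, graph):
    """Length of the shortest path between two synsets, or None if unconnected."""
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == b:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

def pair_similarity(s_x, s_xp, freq, graph=HYPERNYM_GRAPH):
    """r(s_x, s_x') = f(s_x') / (1 + d(s_x, s_x')); zero when unconnected."""
    d = shortest_path(s_x, s_xp, graph)
    return 0.0 if d is None else freq.get(s_xp, 0) / (1.0 + d)

def sense_score(s_x, support_senses, freq, graph=HYPERNYM_GRAPH):
    """Sum of pairwise similarities over the senses of the co-occurring terms."""
    return sum(pair_similarity(s_x, s, freq, graph) for s in support_senses)

if __name__ == "__main__":
    freq = {"examination#1": 2, "work#1": 1}            # sense frequencies in the support set X'
    print(sense_score("survey#1", list(freq), freq))    # 2/(1+1) + 1/(1+3) = 1.25
```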
The overall sense score for each sense sx of x is calculated as: ∑ ∈ = ' ' ) , ( ) ( ' x x S s x x x s s r s r Finally, the algorithm inverts the process by setting x as the anchor and computes r(sy) for 2 For semantic relations between complex terms, like (expert analysis, PART-OF, book), only the head noun of terms are recorded, like “analysis”. As a future work, we plan to use the whole term if it is present in WordNet. each sense of y. All possible permutations of senses are computed and scored by averaging r(sx) and r(sy). Permutations scoring higher than a threshold τ1 are selected as the attachment points in WordNet. We experimentally set τ1 = 0.02. 3.2 Method 2: Clustering Approach The main idea of the clustering approach is to leverage the lexical behaviors of the two terms in an instance as a whole. The assumption is that the general meaning of the relation is derived from the combination of the two terms. The algorithm is divided in two main phases. In the first phase, semantic clusters are built using the WordNet senses of all instances. A semantic cluster is defined by the set of instances that have a common semantic generalization. We denote the conceptual instance of the semantic cluster as the pair of WordNet synsets that represents this generalization. For example the following two part-of instances: (second section, PART-OF, Los Angeles-area news) (Sandag study, PART-OF, report) are in a common cluster represented by the following conceptual instance: [writing#2, PART-OF, message#2] since writing#2 is a hypernym of both section and study, and message#2 is a hypernym of news and report3. In the second phase, the algorithm attaches an instance into WordNet by using WordNet distance metrics and frequency scores to select the best cluster for each instance. A good cluster is one that: • achieves a good trade-off between generality and specificity; and • disambiguates among the senses of x and y using the other instances’ senses as support. For example, given the instance (second section, PART-OF, Los Angeles-area news) and the following conceptual instances: [writing#2, PART-OF, message#2] [object#1, PART-OF, message#2] [writing#2, PART-OF, communication#2] [social_group#1, PART-OF, broadcast#2] [organization#, PART-OF, message#2] the first conceptual instance should be scored highest since it is both not too generic nor too specific and is supported by the instance (Sandag study, PART-OF, report), i.e., the conceptual instance subsumes both instances. The second and 3 Again, here, we use the syntactic head of each term for generalization since we assume that it drives the meaning of the term itself. 795 the third conceptual instances should be scored lower since they are too generic, while the last two should be scored lower since the sense for section and news are not supported by other instances. The system then outputs, for each instance, the set of sense pairs that are subsumed by the highest scoring conceptual instance. In the previous example: (section#1, PART-OF, news#1) (section#1, PART-OF, news#2) (section#1, PART-OF, news#3) are selected, as they are subsumed by [writing#2, PART-OF, message#2]. These sense pairs are then retained as attachment points into WordNet. Below, we describe each phase in more detail. Phase 1: Cluster Building Given an instance (x, r, y), all sense pair permutations sxy={sx, sy} are retrieved from WordNet. 
Below, we describe each phase in more detail.

Phase 1: Cluster Building

Given an instance (x, r, y), all sense pair permutations sxy = {sx, sy} are retrieved from WordNet. A set of candidate conceptual instances, Cxy, is formed for each instance from the permutation of each WordNet ancestor of sx and sy, following the hypernymy link, up to degree τ2. Each candidate conceptual instance, c = {cx, cy}, is scored by its degree of generalization as follows:

r(c) = 1 / ((nx + 1) × (ny + 1))

where ni is the number of hypernymy links needed to go from si to ci, for i ∈ {x, y}. r(c) ranges over [0, 1] and is highest when little generalization is needed. For example, the instance (Sandag study, PART-OF, report) produces 70 sense pairs, since study has 10 senses and report has 7 senses. Assuming τ2 = 1, the instance sense (survey#1, PART-OF, report#1) has the following set of candidate conceptual instances:

  Cxy                                     nx   ny   r(c)
  (survey#1, PART-OF, report#1)           0    0    1
  (survey#1, PART-OF, document#1)         0    1    0.5
  (examination#1, PART-OF, report#1)      1    0    0.5
  (examination#1, PART-OF, document#1)    1    1    0.25

Finally, each candidate conceptual instance c forms a cluster of all instances (x, r, y) that have some sense pair sx and sy as hyponyms of c. Note also that candidate conceptual instances may be subsumed by other candidate conceptual instances. Let Gc refer to the set of all candidate conceptual instances subsumed by candidate conceptual instance c. Intuitively, better candidate conceptual instances are those that subsume both many instances and other candidate conceptual instances, but at the same time have the least distance from subsumed instances. We capture this intuition with the following score of c:

score(c) = ( Σ_{g ∈ Gc} r(g) / |Gc| ) × log|Ic| × log|Gc|

where Ic is the set of instances subsumed by c. We experimented with different variations of this score and found that it is important to put more weight on the distance between subsumed conceptual instances than on the actual number of subsumed instances. Without the log terms, the highest scoring conceptual instances are too generic (i.e., they are too high up in the ontology).

Phase 2: Attachment Points Selection

In this phase, we utilize the conceptual instances of the previous phase to attach each instance (x, r, y) into WordNet. At the end of Phase 1, an instance can be clustered into different conceptual instances. In order to select an attachment, the algorithm selects the sense pair of x and y that is subsumed by the highest scoring candidate conceptual instance. It and all other sense pairs that are subsumed by this conceptual instance are then retained as the final attachment points. As a side effect, a final set of conceptual instances is obtained by deleting from each candidate those instances that are subsumed by a higher scoring conceptual instance. Remaining conceptual instances are then re-scored using score(c). The final set of conceptual instances thus contains unambiguous sense pairs.
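A minimal sketch of the Phase 1 computations (candidate generation up to τ2 and the two scores) is given below, using NLTK. The helper names are ours, the reading of score(c) as an average of r over Gc damped by the two logarithms reflects the formula as reconstructed above, and the log base is not specified in the text, so the natural logarithm is used here.

```python
import math
from itertools import product
from nltk.corpus import wordnet as wn

def candidate_conceptual_instances(s_x, s_y, tau2=5):
    """All pairings of hypernym ancestors of s_x and s_y up to tau2 links,
    scored by r(c) = 1 / ((n_x + 1) * (n_y + 1)). If an ancestor is reachable
    at several depths, the smallest depth (largest r) is kept."""
    def ancestors(s, depth):
        out, frontier = [(s, 0)], [s]
        for n in range(1, depth + 1):
            frontier = [h for f in frontier for h in f.hypernyms()]
            out.extend((h, n) for h in frontier)
        return out
    cands = {}
    for (c_x, n_x), (c_y, n_y) in product(ancestors(s_x, tau2), ancestors(s_y, tau2)):
        r = 1.0 / ((n_x + 1) * (n_y + 1))
        cands[(c_x, c_y)] = max(r, cands.get((c_x, c_y), 0.0))
    return cands

def score_cluster(r_of_subsumed_candidates, num_instances):
    """score(c) for one candidate: r_of_subsumed_candidates lists r(g) for every
    candidate conceptual instance g in G_c; num_instances is |I_c|."""
    G = len(r_of_subsumed_candidates)
    if G == 0 or num_instances == 0:
        return 0.0
    return (sum(r_of_subsumed_candidates) / G) * math.log(num_instances) * math.log(G)
```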
4 Experimental Results

In this section we provide an empirical evaluation of our two algorithms.

4.1 Experimental Setup

Researchers have developed many algorithms for harvesting semantic relations from corpora and the Web. For the purposes of this paper, we may choose any one of them and manually validate its mined relations. We choose Espresso (reference suppressed: the paper introducing Espresso has also been submitted to COLING/ACL 2006), a general-purpose, broad, and accurate corpus harvesting algorithm requiring minimal supervision. Adopting a bootstrapping approach, Espresso takes as input a few seed instances of a particular relation and iteratively learns surface patterns to extract more instances.

Test Sets

We experiment with two relations: part-of and causation. The causation relation occurs when an entity produces an effect or is responsible for events or results, for example (virus, CAUSE, influenza) and (burning fuel, CAUSE, pollution). We manually built five seed relation instances for both relations and applied Espresso to a dataset consisting of a sample of articles from the Aquaint (TREC-9) newswire text collection. The sample consists of 55.7 million words extracted from the Los Angeles Times data files. Espresso extracted 1,468 part-of instances and 1,129 causation instances. We manually validated the output and randomly selected 200 correct relation instances of each relation for ontologizing into WordNet 2.0.

Gold Standard

We manually built a gold standard of all correct attachments of the test sets in WordNet. For each relation instance (x, r, y), two human annotators selected from all sense permutations of x and y the correct attachment points in WordNet. For example, for (synthetic material, PART-OF, filter), the judges selected the following attachment points: (synthetic material#1, PART-OF, filter#1) and (synthetic material#1, PART-OF, filter#2). The kappa statistic (Siegel and Castellan Jr. 1988) on the two relations together was κ = 0.73.

Systems

The following three systems are evaluated:
• BL: the baseline system that attaches each relation instance to the first (most common) WordNet sense of both terms;
• AN: the anchor approach described in Section 3.1;
• CL: the clustering approach described in Section 3.2.

4.2 Precision, Recall and F-score

For both the part-of and causation relations, we apply the three systems described above and compare their attachment performance using precision, recall, and F-score. Using the manually built gold standard, the precision of a system on a given relation instance is measured as the percentage of the system's proposed attachments that are correct, and recall is measured as the percentage of the correct (gold-standard) attachments that are retrieved by the system. Overall system precision and recall are then computed by averaging the precision and recall of each relation instance. Table 1 and Table 2 report the results on the part-of and causation relations. We experimentally set the CL generalization parameter τ2 to 5 and the τ1 parameter for AN to 0.02.

Table 1. System precision, recall and F-score on the part-of relation.
  SYSTEM   PRECISION   RECALL   F-SCORE
  BL       54.0%       31.3%    39.6%
  AN       40.7%       47.3%    43.8%
  CL       57.4%       49.6%    53.2%

Table 2. System precision, recall and F-score on the causation relation.
  SYSTEM   PRECISION   RECALL   F-SCORE
  BL       45.0%       25.0%    32.1%
  AN       41.7%       32.4%    36.5%
  CL       40.0%       32.6%    35.9%
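The evaluation in Tables 1 and 2 reduces to a per-instance comparison of attachment points against the gold standard. A small sketch of that bookkeeping follows; combining the averaged precision and recall into an F-score via the harmonic mean is our assumption of how the overall figure is computed.

```python
def attachment_prf(system_points, gold_points):
    """system_points / gold_points: dicts mapping each relation instance to a
    set of (sense_x, sense_y) attachment points. Precision and recall are
    computed per instance and then averaged, as described in Section 4.2."""
    precisions, recalls = [], []
    for inst, gold in gold_points.items():
        proposed = system_points.get(inst, set())
        correct = len(proposed & gold)
        precisions.append(correct / len(proposed) if proposed else 0.0)
        recalls.append(correct / len(gold) if gold else 0.0)
    if not precisions:
        return 0.0, 0.0, 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f
```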
4.3 Discussion

For both relations, CL and AN outperform the baseline in overall F-score. For part-of, Table 1 shows that CL outperforms BL by 13.6% in F-score and AN by 9.4%. For causation, Table 2 shows that AN outperforms BL by 4.4% in F-score and CL by 0.6%. The good results of the CL method on the part-of relation suggest that instances of this relation are particularly amenable to being clustered. The generality of the part-of relation in fact allows the creation of fairly natural clusters, corresponding to different sub-types of part-of, such as those proposed in (Winston et al. 1987). The causation relation, however, being more difficult to define at a semantic level (Girju 2003), is less easy to cluster and thus to disambiguate. Both CL and AN have better recall than BL, but precision results vary, with CL beating BL only on the part-of relation. Overall, the system performances suggest that ontologizing semantic relations into WordNet is in general not easy. The better results of CL and AN with respect to BL suggest that the use of comparative semantic analysis among corpus instances is a good way to carry out disambiguation. Yet, the BL method shows surprisingly good results. This indicates that a simple method based on word sense usage in language can also be valuable. An interesting avenue of future work is to better combine these two different views in a single system. The low recall results for CL are mostly attributed to the fact that in Phase 2 only the best scoring cluster is retained for each instance. This means that instances with multiple senses that do not have a common generalization are not captured. For example, the part-of instance (wings, PART-OF, chicken) should cluster both in [body_part#1, PART-OF, animal#1] and [body_part#1, PART-OF, food#2], but only the best scoring one is retained.

5 Conceptual Instances: Other Uses

Our clustering approach from Section 3.2 is enabled by learning conceptual instances – relations between mid-level ontological concepts. Beyond the ontologizing task, conceptual instances may be useful for several other tasks. In this section, we discuss some of these opportunities and present small qualitative evaluations. Conceptual instances represent common semantic generalizations of a particular relation. For example, below are two possible conceptual instances for the part-of relation:
[person#1, PART-OF, organization#1]
[act#1, PART-OF, plan#1]
The first conceptual instance in the example subsumes all the part-of instances in which one or more persons are part of an organization, such as:
(president Brown, PART-OF, executive council)
(representatives, PART-OF, organization)
(students, PART-OF, orchestra)
(players, PART-OF, Metro League)
Below, we present three possible ways of exploiting these conceptual instances.

Support to Relation Extraction Tools

Conceptual instances may be used to support relation extraction algorithms such as Espresso. Most minimally supervised harvesting algorithms do not exploit generic patterns, i.e., those patterns with high recall but low precision, since they cannot separate correct and incorrect relation instances. For example, the pattern "X of Y" extracts many correct relation instances like "wheel of the car" but also many incorrect ones like "house of representatives". Girju et al. (2003) described a highly supervised algorithm for learning semantic constraints on generic patterns, leading to a very significant increase in system recall without deteriorating precision. Conceptual instances can be used to automatically learn such semantic constraints by acting as a filter for generic patterns, retaining only those instances that are subsumed by high scoring conceptual instances. Effectively, conceptual instances are used as selectional restrictions for the relation. For example, our system discards the following incorrect instances:
(week, CAUSE, coalition)
(demeanor, CAUSE, vacuum)
as they are both part of the very low scoring conceptual instance [abstraction#6, CAUSE, state#1].

Ontology Learning from Text

Each conceptual instance can be viewed as a formal specification of the relation at hand. For example, Winston et al. (1987) manually identified six sub-types of the part-of relation: member-collection, component-integral object, portion-mass, stuff-object, feature-activity and place-area. Such classifications are useful in applications and tasks where a semantically rich organization of knowledge is required.
Conceptual instances can be viewed as an automatic derivation of such a classification based on corpus usage. Moreover, conceptual instances can be used to improve the ontology learning process itself. For example, our clustering approach can be seen as an inductive step producing conceptual instances that are then used in a deductive step to learn new instances. An algorithm could iterate between the induction/deduction cycle until no new relation instances and conceptual instances can be inferred. Word Sense Disambiguation Word Sense Disambiguation (WSD) systems can exploit the selectional restrictions identified by conceptual instances to disambiguate ambiguous terms occurring in particular contexts. For example, given the sentence: “the board is composed by members of different countries” and a harvesting algorithm that extracts the partof relation (members, PART-OF, board), the system could infer the correct senses for board and members by looking at their closest conceptual instance. In our system, we would infer the attachment (member#1, PART-OF, board#1) since it is part of the highest scoring conceptual instance [person#1, PART-OF, organization#1]. 798 5.1 Qualitative Evaluation Table 3 and Table 4 list samples of the highest ranking conceptual instances obtained by our system for the part-of and causation relations. Below we provide a small evaluation to verify: • the correctness of the conceptual instances. Incorrect conceptual instances such as [attribute#2, CAUSE, state#4], discovered by our system, can impede WSD and extraction tools where precise selectional restrictions are needed; and • the accuracy of the conceptual instances. Sometimes, an instance is incorrectly attached to a correct conceptual instance. For example, the instance (air mass, PART-OF, cold front) is incorrectly clustered in [group#1, PART-OF, multitude#3] since mass and front both have a sense that is descendant of group#1 and multitude#3. However, these are not the correct senses of mass and front for which the part-of relation holds. For evaluating correctness, we manually verify how many correct conceptual instances are produced by Phase 2 of the clustering approach described in Section 3.2. The claim is that a correct conceptual instance is one for which the relation holds for all possible subsumed senses. For example, the conceptual instance [group#1, PART-OF, multitude#3] is correct, as the relation holds for every semantic subsumption of the two senses. An example of an incorrect conceptual instance is [state#4, CAUSE, abstraction#6] since it subsumes the incorrect instance (audience, CAUSE, new context). A manual evaluation of the highest scoring 200 conceptual instances, generated on our test sets described in Section 4.1, showed 82% correctness for the part-of relation and 86% for causation. For estimating the overall clustering accuracy, we evaluated the number of correctly clustered instances in each conceptual instance. For example, the instance (business people, PART-OF, committee) is correctly clustered in [multitude#3, PART-OF, group#1] and the instance (law, PARTOF, constitutional pitfalls) is incorrectly clustered in [group#1, PART-OF, artifact#1]. We estimated the overall accuracy by manually judging the instances attached to 10 randomly sampled conceptual instances. The accuracy for part-of is 84% and for causation it is 76.6%. 
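The two uses of conceptual instances sketched in this section, filtering generic-pattern output and sense disambiguation, both reduce to finding the highest-scoring conceptual instance that licenses some sense pair of an extracted word pair. A rough sketch, reusing the subsumes helper given earlier; the function name, return layout, and the dictionary of scored conceptual instances are our own assumptions.

```python
from itertools import product
from nltk.corpus import wordnet as wn

def best_license(word_x, word_y, conceptual_instances):
    """conceptual_instances: dict mapping (c_x, c_y) synset pairs to score(c).
    Returns (score, conceptual_instance, (s_x, s_y)) for the highest-scoring
    conceptual instance that subsumes some sense pair of the two words, or
    None. Thresholding the score filters noisy extractions; the returned
    senses give the disambiguated reading discussed above."""
    best = None
    for s_x, s_y in product(wn.synsets(word_x, pos=wn.NOUN),
                            wn.synsets(word_y, pos=wn.NOUN)):
        for c, score in conceptual_instances.items():
            if subsumes(c, (s_x, s_y)) and (best is None or score > best[0]):
                best = (score, c, (s_x, s_y))
    return best
```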
6 Conclusions In this paper, we proposed two algorithms for automatically ontologizing binary semantic relations into WordNet: an anchoring approach and a clustering approach. Experiments on the partof and causation relations showed promising results. Both algorithms outperformed the baseline on F-score. Our best results were on the part-of relation where the clustering approach achieved 13.6% higher F-score than the baseline. The induction of conceptual instances has opened the way for many avenues of future work. We intend to pursue the ideas presented in Section 5 for using conceptual instances to: i) support knowledge acquisition tools by learning semantic constraints on extracting patterns; ii) support ontology learning from text; and iii) improve word sense disambiguation through selectional restrictions. Also, we will try different similarity score functions for both the clustering and the anchor approaches, as those surveyed in Corley and Mihalcea (2005). CONCEPTUAL INSTANCE SCORE # INSTANCES INSTANCES [multitude#3, PART-OF, group#1] 2.04 10 (ordinary people, PART-OF, Democratic Revolutionary Party) (unlicensed people, PART-OF, underground economy) (young people, PART-OF, commission) (air mass, PART-OF, cold front) [person#1, PART-OF, organization#1] 1.71 43 (foreign ministers, PART-OF, council) (students, PART-OF, orchestra) (socialists, PART-OF, Iraqi National Joint Action Committee) (players, PART-OF, Metro League) [act#2, PART-OF, plan#1] 1.60 16 (major concessions, PART-OF, new plan) (attacks, PART-OF, coordinated terrorist plan) (visit, PART-OF, exchange program) (survey, PART-OF, project) [communication#2, PART-OF, book#1] 1.14 10 (hints, PART-OF, booklet) (soup recipes, PART-OF, book) (information, PART-OF, instruction manual) (extensive expert analysis, PART-OF, book) [compound#2, PART-OF, waste#1] 0.57 3 (salts, PART-OF, powdery white waste) (lime, PART-OF, powdery white waste) (resin, PART-OF, waste) Table 3. Sample of the highest scoring conceptual instances learned for the part-of relation. For each conceptual instance, we report the score(c), the number of instances, and some example instances. 799 The algorithms described in this paper may be applied to ontologize many lexical resources of semantic relations, no matter the harvesting algorithm used to mine them. In doing so, we have the potential to quickly enrich our ontologies, like WordNet, thus reducing the knowledge acquisition bottleneck. It is our hope that we will be able to leverage these enriched resources, albeit with some noisy additions, to improve performance on knowledge-rich problems such as question answering and textual entailment. References Agirre, E. and Rigau, G. 1996. Word sense disambiguation using conceptual density. In Proceedings of COLING-96. pp. 16-22. Copenhagen, Danmark. Agirre, E.; Ansa, O.; Martinez, D.; and Hovy, E. 2001. Enriching WordNet concepts with topic signatures. In Proceedings of NAACL Workshop on WordNet and Other Lexical Resources: Applications, Extensions and Customizations. Pittsburgh, PA. Basili, R.; Pazienza, M.T.; and Vindigni, M. 2000. Corpus-driven learning of event recognition rules. In Proceedings of Workshop on Machine Learning and Information Extraction (ECAI-00). Corley, C. and Mihalcea, R. 2005. Measuring the Semantic Similarity of Texts. In Proceedings of the ACL Workshop on Empirical Modelling of Semantic Equivalence and Entailment. Ann Arbor, MI. Etzioni, O.; Cafarella, M.J.; Downey, D.; Popescu, A.M.; Shaked, T.; Soderland, S.; Weld, D.S.; and Yates, A. 
2005. Unsupervised named-entity extraction from the Web: An experimental study. Artificial Intelligence, 165(1): 91-134. Fellbaum, C. 1998. WordNet: An Electronic Lexical Database. MIT Press. Gale, W.; Church, K.; and Yarowsky, D. 1992. A method for disambiguating word senses in a large corpus. Computers and Humanities, 26:415-439. Girju, R.; Badulescu, A.; and Moldovan, D. 2003. Learning semantic constraints for the automatic discovery of part-whole relations. In Proceedings of HLT/NAACL-03. pp. 80-87. Edmonton, Canada. Girju, R. 2003. Automatic Detection of Causal Relations for Question Answering. In Proceedings of ACL Workshop on Multilingual Summarization and Question Answering. Sapporo, Japan. Harabagiu, S.; Miller, G.; and Moldovan, D. 1999. WordNet 2 - A Morphologically and Semantically Enhanced Resource. In Proceedings of SIGLEX-99. pp.1-8. University of Maryland. Harris, Z. 1985. Distributional structure. In: Katz, J. J. (ed.) The Philosophy of Linguistics. New York: Oxford University Press. pp. 26–47. Hindle, D. 1990. Noun classification from predicateargument structures. In Proceedings of ACL-90. pp. 268–275. Pittsburgh, PA. Lin, D. and Pantel, P. 2002. Concept discovery from text. In Proceedings of COLING-02. pp. 577-583. Taipei, Taiwan. Pantel, P. 2005. Inducing Ontological Co-occurrence Vectors. In Proceedings of ACL-05. pp. 125-132. Ann Arbor, MI. Ravichandran, D. and Hovy, E.H. 2002. Learning surface text patterns for a question answering system. In Proceedings of ACL-2002. pp. 41-47. Philadelphia, PA. Riloff, E. and Shepherd, J. 1997. A corpus-based approach for building semantic lexicons. In Proceedings of EMNLP-97. Siegel, S. and Castellan Jr., N. J. 1988. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill. Szpektor, I.; Tanev, H.; Dagan, I.; and Coppola, B. 2004. Scaling web-based acquisition of entailment relations. In Proceedings of EMNLP-04. Barcelona, Spain. Winston, M.; Chaffin, R.; and Hermann, D. 1987. A taxonomy of part-whole relations. Cognitive Science, 11:417–444. CONCEPTUAL INSTANCE SCORE # INSTANCES INSTANCES [change#3, CAUSE, state#4] 1.49 17 (separation, CAUSE, anxiety) (demotion, CAUSE, roster vacancy) (budget cuts, CAUSE, enrollment declines) (reduced flow, CAUSE, vacuum) [act#2, CAUSE, state#3] 0.81 20 (oil drilling, CAUSE, air pollution) (workplace exposure, CAUSE, genetic injury) (industrial emissions, CAUSE, air pollution) (long recovery, CAUSE, great stress) [person#1, CAUSE, act#2] 0.64 12 (homeowners, CAUSE, water waste) (needlelike puncture, CAUSE, physician) (group member, CAUSE, controversy) (children, CAUSE, property damage) [organism#1, CAUSE, disease#1] 0.03 4 (parasites, CAUSE, pneumonia) (virus, CAUSE, influenza) (chemical agents, CAUSE, pneumonia) (genetic mutation, CAUSE, Dwarfism) Table 4. Sample of the highest scoring conceptual instances learned for the causation relation. For each conceptual instance, we report score(c) , the number of instances, and some example instances. 800 | 2006 | 100 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 801–808, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Semantic Taxonomy Induction from Heterogenous Evidence Rion Snow Computer Science Department Stanford University Stanford, CA 94305 [email protected] Daniel Jurafsky Linguistics Department Stanford University Stanford, CA 94305 [email protected] Andrew Y. Ng Computer Science Department Stanford University Stanford, CA 94305 [email protected] Abstract We propose a novel algorithm for inducing semantic taxonomies. Previous algorithms for taxonomy induction have typically focused on independent classifiers for discovering new single relationships based on hand-constructed or automatically discovered textual patterns. By contrast, our algorithm flexibly incorporates evidence from multiple classifiers over heterogenous relationships to optimize the entire structure of the taxonomy, using knowledge of a word’s coordinate terms to help in determining its hypernyms, and vice versa. We apply our algorithm on the problem of sense-disambiguated noun hyponym acquisition, where we combine the predictions of hypernym and coordinate term classifiers with the knowledge in a preexisting semantic taxonomy (WordNet 2.1). We add 10, 000 novel synsets to WordNet 2.1 at 84% precision, a relative error reduction of 70% over a non-joint algorithm using the same component classifiers. Finally, we show that a taxonomy built using our algorithm shows a 23% relative F-score improvement over WordNet 2.1 on an independent testset of hypernym pairs. 1 Introduction The goal of capturing structured relational knowledge about lexical terms has been the motivating force underlying many projects in lexical acquisition, information extraction, and the construction of semantic taxonomies. Broad-coverage semantic taxonomies such as WordNet (Fellbaum, 1998) and CYC (Lenat, 1995) have been constructed by hand at great cost; while a crucial source of knowledge about the relations between words, these taxonomies still suffer from sparse coverage. Many algorithms with the potential for automatically extending lexical resources have been proposed, including work in lexical acquisition (Riloff and Shepherd, 1997; Roark and Charniak, 1998) and in discovering instances, named entities, and alternate glosses (Etzioni et al., 2005; Pasc¸a, 2005). Additionally, a wide variety of relationship-specific classifiers have been proposed, including pattern-based classifiers for hyponyms (Hearst, 1992), meronyms (Girju, 2003), synonyms (Lin et al., 2003), a variety of verb relations (Chklovski and Pantel, 2004), and general purpose analogy relations (Turney et al., 2003). Such classifiers use hand-written or automaticallyinduced patterns like Such NPy as NPx or NPy like NPx to determine, for example that NPy is a hyponym of NPx (i.e., NPy IS-A NPx). While such classifiers have achieved some degree of success, they frequently lack the global knowledge necessary to integrate their predictions into a complex taxonomy with multiple relations. Past work on semantic taxonomy induction includes the noun hypernym hierarchy created in (Caraballo, 2001), the part-whole taxonomies in (Girju, 2003), and a great deal of recent work described in (Buitelaar et al., 2005). 
Such work has typically either focused on only inferring small taxonomies over a single relation, or as in (Caraballo, 2001), has used evidence for multiple relations independently from one another, by for example first focusing strictly on inferring clusters of coordinate terms, and then by inferring hypernyms over those clusters. Another major shortfall in previous techniques for taxonomy induction has been the inability to handle lexical ambiguity. Previous approaches have typically sidestepped the issue of polysemy altogether by making the assumption of only a single sense per word, and inferring taxonomies explicitly over words and not senses. Enforcing a false monosemy has the downside of making potentially erroneous inferences; for example, collapsing the polysemous term Bush into a single sense might lead one to infer by transitivity that a rose bush is a kind of U.S. president. Our approach simultaneously provides a solution to the problems of jointly considering evidence about multiple relationships as well as lexical ambiguity within a single probabilistic framework. The key contribution of this work is to offer a solution to two crucial problems in taxonomy in801 duction and hyponym acquisition: the problem of combining heterogenous sources of evidence in a flexible way, and the problem of correctly identifying the appropriate word sense of each new word added to the taxonomy.1 2 A Probabilistic Framework for Taxonomy Induction In section 2.1 we introduce our definitions for taxonomies, relations, and the taxonomic constraints that enforce dependencies between relations; in section 2.2 we give a probabilistic model for defining the conditional probability of a set of relational evidence given a taxonomy; in section 2.3 we formulate a local search algorithm to find the taxonomy maximizing this conditional probability; and in section 2.4 we extend our framework to deal with lexical ambiguity. 2.1 Taxonomies, Relations, and Taxonomic Constraints We define a taxonomy T as a set of pairwise relations R over some domain of objects DT. For example, the relations in WordNet include hypernymy, holonymy, verb entailment, and many others; the objects of WordNet between which these relations hold are its word senses or synsets. We define that each relation R ∈R is a set of ordered or unordered pairs of objects (i, j) ∈DT; we define Rij ∈T if relationship R holds over objects (i, j) in T. Relations for Hyponym Acquisition For the case of hyponym acquisition, the objects in our taxonomy are WordNet synsets. In this paper we focus on two of the many possible relationships between senses: the hypernym relation and the coordinate term relation. We treat the hypernym or ISA relation as atomic; we use the notation Hn ij if a sense j is the n-th ancestor of a sense i in the hypernym hierarchy. We will simply use Hij to indicate that j is an ancestor of i at some unspecified level. Two senses are typically considered to be “coordinate terms” or “taxonomic sisters” if they share an immediate parent in the hypernym hierarchy. We generalize this notion of siblinghood to state that two senses i and j are (m, n)-cousins if their closest least common 1The taxonomies discussed in this paper are available for download at http://ai.stanford.edu/∼rion/swn. subsumer (LCS)2 is within exactly m and n links, respectively.3 We use the notation Cmn ij to denote that i and j are (m, n)-cousins. 
Thus coordinate terms are (1, 1)-cousins; technically the hypernym relation may also be seen as a specific case of this representation; an immediate parent in the hypernym hierarchy is a (1, 0)-cousin, and the k-th ancestor is a (k, 0)-cousin. Taxonomic Constraints A semantic taxonomy such as WordNet enforces certain taxonomic constraints which disallow particular taxonomies T. For example, the ISA transitivity constraint in WordNet requires that each synset inherits the hypernyms of its hypernym, and the part-inheritance constraint requires that each synset inherits the meronyms of its hypernyms. For the case of hyponym acquisition we enforce the following two taxonomic constraints on the hypernym and (m, n)-cousin relations: 1. ISA Transitivity: Hm ij ∧Hn jk ⇒Hm+n ik . 2. Definition of (m, n)-cousinhood: Cmn ij ⇔∃k.k = LCS(i, j) ∧Hm ik ∧Hn jk. Constraint (1) requires that the each synset inherits the hypernyms of its direct hypernym; constraint (2) simply defines the (m, n)-cousin relation in terms of the atomic hypernym relation. The addition of any new hypernym relation to a preexisting taxonomy will usually necessitate the addition of a set of other novel relations as implied by the taxonomic constraints. We refer to the full set of novel relations implied by a new link Rij as I(Rij); we discuss the efficient computation of the set of implied links for the purpose of hyponym acquisition in Section 3.4. 2.2 A Probabilistic Formulation We propose that the event Rij ∈T has some prior probability P(Rij ∈T), and P(Rij ∈ 2A least common subsumer LCS(i, j) is defined as a synset that is an ancestor in the hypernym hierarchy of both i and j which has no child that is also an ancestor of both i and j. When there is more than one LCS (due to multiple inheritance), we refer to the closest LCS, i.e.,the LCS that minimizes the maximum distance to i and j. 3An (m, n)-cousin for m ≥2 corresponds to the English kinship relation “(m−1)-th cousin |m−n|-times removed.” 802 T) + P(Rij ̸∈T) = 1. We define the probability of the taxonomy as a whole as the joint probability of its component relations; given a partition of all possible relations R = {A, B} where A ∈T and B ̸∈T, we define: P(T) = P(A ∈T, B ̸∈T). We assume that we have some set of observed evidence E consisting of observed features over pairs of objects in some domain DE; we’ll begin with the assumption that our features are over pairs of words, and that the objects in the taxonomy also correspond directly to words.4 Given a set of features ER ij ∈E, we assume we have some model for inferring P(Rij ∈T|ER ij), i.e., the posterior probability of the event Rij ∈T given the corresponding evidence ER ij for that relation. For example, evidence for the hypernym relation EH ij might be the set of all observed lexico-syntactic patterns containing i and j in all sentences in some corpus. For simplicity we make the following independence assumptions: first, we assume that each item of observed evidence ER ij is independent of all other observed evidence given the taxonomy T, i.e., P(E|T) = Q ER ij∈E P(ER ij|T). 
Further, we assume that each item of observed evidence ER ij depends on the taxonomy T only by way of the corresponding relation Rij, i.e.:

P(ER ij | T) = P(ER ij | Rij ∈ T)  if Rij ∈ T
P(ER ij | T) = P(ER ij | Rij ∉ T)  if Rij ∉ T

For example, if our evidence EH ij is a set of observed lexico-syntactic patterns indicative of hypernymy between two words i and j, we assume that whatever dependence the relations in T have on our observations may be explained entirely by dependence on the existence or non-existence of the single hypernym relation H(i, j). Applying these two independence assumptions, we may express the conditional probability of our evidence given the taxonomy:

P(E | T) = ∏_{Rij ∈ T} P(ER ij | Rij ∈ T) · ∏_{Rij ∉ T} P(ER ij | Rij ∉ T).

(Footnote 4: In section 2.4 we drop this assumption, extending our model to manage lexical ambiguity.)

Rewriting the conditional probability in terms of our estimates of the posterior probabilities P(Rij | ER ij) using Bayes' Rule, we obtain:

P(E | T) = ∏_{Rij ∈ T} [ P(Rij ∈ T | ER ij) P(ER ij) / P(Rij ∈ T) ] · ∏_{Rij ∉ T} [ P(Rij ∉ T | ER ij) P(ER ij) / P(Rij ∉ T) ].

Within our model we define the goal of taxonomy induction to be to find the taxonomy ˆT that maximizes the conditional probability of our observations E given the relationships of T, i.e., to find ˆT = argmax_T P(E | T).

2.3 Local Search Over Taxonomies

We propose a search algorithm for finding ˆT for the case of hyponym acquisition. We assume we begin with some initial (possibly empty) taxonomy T. We restrict our consideration of possible new taxonomies to those created by the single operation ADD-RELATION(Rij, T), which adds the single relation Rij to T. We define the multiplicative change ∆T(Rij) in the conditional probability P(E | T) given the addition of a single relation Rij:

∆T(Rij) = P(E | T′) / P(E | T)
        = [ P(Rij ∈ T | ER ij) P(ER ij) ] / [ P(Rij ∉ T | ER ij) P(ER ij) ] · P(Rij ∉ T) / P(Rij ∈ T)
        = k · P(Rij ∈ T | ER ij) / (1 − P(Rij ∈ T | ER ij)).

Here k is the inverse odds of the prior on the event Rij ∈ T; we consider this to be a constant independent of i, j, and the taxonomy T. To enforce the taxonomic constraints in T, for each application of the ADD-RELATION operator we must add all new relations in the implied set I(Rij) not already in T. (For example, in order to add the new synset microsoft under the noun synset company#n#1 in WordNet 2.1, we must necessarily add the new relations H2(microsoft, institution#n#1), C11(microsoft, dotcom#n#1), and so on.) Thus we define the multiplicative change of the full set of implied relations as the product over all new relations:

∆T(I(Rij)) = ∏_{R ∈ I(Rij)} ∆T(R).

This definition leads to the following best-first search algorithm for hyponym acquisition, which at each iteration defines the new taxonomy as the union of the previous taxonomy T and the set of novel relations implied by the relation Rij that maximizes ∆T(I(Rij)), and thus maximizes the conditional probability of the evidence over all possible single relations:

WHILE max_{Rij ∉ T} ∆T(I(Rij)) > 1:
    T ← T ∪ I(argmax_{Rij ∉ T} ∆T(I(Rij)))

2.4 Extending the Model to Manage Lexical Ambiguity

Since word senses are not directly observable, if the objects in the taxonomy are word senses (as in WordNet), we must extend our model to allow for a many-to-many mapping (e.g., a word-to-sense mapping) between DE and DT. For this setting we assume we know the function senses(i), mapping from the word i to all of i's possible corresponding senses. We assume that each set of word-pair evidence ER ij we possess is in fact sense-pair evidence ER kl for a specific pair of senses k0 ∈ senses(i), l0 ∈ senses(j).
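A compact sketch of the greedy loop from Section 2.3 is given below. The representation of relations, the classifier interface, and the treatment of k as a plain constant are our simplifications; the sense-pair selection of Section 2.4 is assumed to happen inside the implied(r, T) callback rather than shown explicitly, and working in log space would be advisable for long products.

```python
def greedy_taxonomy_growth(candidates, posterior, implied, k=1.0):
    """Best-first growth of a taxonomy T.

    candidates   -- list of candidate relations R_ij not yet in T
    posterior(r) -- classifier estimate of P(r in T | evidence), assumed < 1
    implied(r, T)-- the set I(r) of relations entailed by adding r to T
    k            -- constant inverse odds of the prior P(r in T)
    """
    T = set()
    while True:
        best, best_delta = None, 1.0
        for r in candidates:
            if r in T:
                continue
            delta = 1.0
            for s in implied(r, T):
                p = min(posterior(s), 1.0 - 1e-9)
                delta *= k * p / (1.0 - p)   # multiplicative change for one relation
            if delta > best_delta:
                best, best_delta = r, delta
        if best is None:                      # no addition improves P(E|T): stop
            break
        T |= implied(best, T)
    return T
```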
Further, we assume that a new relation between two words is probable only between the correct sense pair, i.e.: P(Rkl|ER ij) = 1{k = k0, l = l0} · P(Rij|ER ij). When computing the conditional probability of a specific new relation Rkl ∈I(Rab), we assume that the relevant sense pair k0, l0 is the one which maximizes the probability of the new relation, i.e. for k ∈senses(i), l ∈senses(j), (k0, l0) = arg max k,l P(Rkl ∈T|ER ij). Our independence assumptions for this extension need only to be changed slightly; we now assume that the evidence ER ij depends on the taxonomy T via only a single relation between sensepairs Rkl. Using this revised independence assumption the derivation for best-first search over taxonomies for hyponym acquisition remains unchanged. One side effect of this revised independence assumption is that the addition of the single “sense-collapsed” relation Rkl in the taxonomy T will explain the evidence ER ij for the relation over words i and j now that such evidence has been revealed to concern only the specific senses k and l. 3 Extending WordNet We demonstrate the ability of our model to use evidence from multiple relations to extend WordNet with novel noun hyponyms. While in principle we could use any number of relations, for simplicity we consider two primary sources of evidence: the probability of two words in WordNet being in a hypernym relation, and the probability of two words in WordNet being in a coordinate relation. In sections 3.1 and 3.2 we describe the construction of our hypernym and coordinate classifiers, respectively; in section 3.3 we outline the efficient algorithm we use to perform local search over hyponym-extended WordNets; and in section 3.4 we give an example of the implicit structure-based word sense disambiguation performed within our framework. 3.1 Hyponym Classification Our classifier for the hypernym relation is derived from the “hypernym-only” classifier described in (Snow et al., 2005). The features used for predicting the hypernym relationship are obtained by parsing a large corpus of newswire and encyclopedia text with MINIPAR (Lin, 1998). From the resulting dependency trees the evidence EH ij for each word pair (i, j) is constructed; the evidence takes the form of a vector of counts of occurrences that each labeled syntactic dependency path was found as the shortest path connecting i and j in some dependency tree. The labeled training set is constructed by labeling the collected feature vectors as positive “known hypernym” or negative “known non-hypernym” examples using WordNet 2.0; 49,922 feature vectors were labeled as positive training examples, and 800,828 noun pairs were labeled as negative training examples. The model for predicting P(Hij|EH ij ) is then trained using logistic regression, predicting the noun-pair hypernymy label from WordNet from the feature vector of lexico-syntactic patterns. The hypernym classifier described above predicts the probability of the generalized hypernymancestor relation over words P(Hij|EH ij ). For the purposes of taxonomy induction, we would prefer an ancestor-distance specific set of classifiers over senses, i.e., for k ∈senses(i), l ∈ senses(j), the set of classifiers estimating {P(H1 kl|EH ij ), P(H2 kl|EH ij ), . . . }. 804 One problem that arises from directly assigning the probability P(Hn ij|EH ij ) ∝P(Hij|EH ij ) for all n is the possibility of adding a novel hyponym to an overly-specific hypernym, which might still satisfy P(Hn ij|EH ij ) for a very large n. 
In order to discourage unnecessary overspecification, we penalize each probability P(Hk ij|EH ij ) by a factor λk−1 for some λ < 1, and renormalize: P(Hk ij|EH ij ) ∝λk−1P(Hij|EH ij ). In our experiments we set λ = 0.95. 3.2 (m, n)-cousin Classification The classifier for learning coordinate terms relies on the notion of distributional similarity, i.e., the idea that two words with similar meanings will be used in similar contexts (Hindle, 1990). We extend this notion to suggest that words with similar meanings should be near each other in a semantic taxonomy, and in particular will likely share a hypernym as a near parent. Our classifier for (m, n)-cousins is derived from the algorithm and corpus given in (Ravichandran et al., 2005). In that work an efficient randomized algorithm is derived for computing clusters of similar nouns. We use a set of more than 1000 distinct clusters of English nouns collected by their algorithm over 70 million webpages6, with each noun i having a score representing its cosine similarity to the centroid c of the cluster to which it belongs, cos(θ(i, c)). We use the cluster scores of noun pairs as input to our own algorithm for predicting the (m, n)cousin relationship between the senses of two words i and j. If two words i and j appear in a cluster together, with cluster centroid c, we set our single coordinate input feature to be the minimum cluster score min(cos(θ(i, c)), cos(θ(j, c))), and zero otherwise. For each such noun pair feature, we construct a labeled training set of (m, n)cousin relation labels from WordNet 2.1. We define a noun pair (i, j) to be a “known (m, n)cousin” if for some senses k ∈senses(i), l ∈ senses(j), Cmn ij ∈WordNet; if more than one such relation exists, we assume the relation with smallest sum m + n, breaking ties by smallest absolute difference |m −n|. We consider all such labeled relationships from WordNet with 0 ≤ m, n ≤7; pairs of words that have no corresponding pair of synsets connected in the hypernym hi6As a preprocessing step we hand-edit the clusters to remove those containing non-English words, terms related to adult content, and other webpage-specific clusters. erarchy, or with min(m, n) > 7, are assigned to a single class C∞. Further, due to the symmetry of the similarity score, we merge each class Cmn = Cmn ∪Cnm; this implies that the resulting classifier will predict, as expected given a symmetric input, P(Cmn kl |EC ij) = P(Cnm kl |EC ij). We find 333,473 noun synset pairs in our training set with similarity score greater than 0.15. We next apply softmax regression to learn a classifier that predicts P(Cmn ij |EC ij), predicting the WordNet class labels from the single similarity score derived from the noun pair’s cluster similarity. 3.3 Details of our Implementation Hyponym acquisition is among the simplest and most straightforward of the possible applications of our model; here we show how we efficiently implement our algorithm for this problem. First, we identify the set of all the word pairs (i, j) over which we have hypernym and/or coordinate evidence, and which might represent additions of a novel hyponym to the WordNet 2.1 taxonomy (i.e., that has a known noun hypernym and an unknown hyponym, or has a known noun coordinate term and an unknown coordinate term). This yields a list of 95,000 single links over threshold P(Rij) > 0.12. 
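The WordNet labels used to train the coordinate classifier of Section 3.2 can be derived roughly as follows. This is an NLTK-based sketch with names of our own choosing; the merging of symmetric classes Cmn and Cnm and the exact handling of the C∞ catch-all class are simplified.

```python
from nltk.corpus import wordnet as wn

def mn_cousin_label(word_i, word_j, max_level=7):
    """Pick the (m, n)-cousin label for a noun pair: smallest m + n over all
    sense pairs and least common subsumers, ties broken by smallest |m - n|;
    pairs with no sufficiently close subsumer fall into the catch-all class."""
    best = None
    for k in wn.synsets(word_i, pos=wn.NOUN):
        for l in wn.synsets(word_j, pos=wn.NOUN):
            for lcs in k.lowest_common_hypernyms(l):
                m = k.shortest_path_distance(lcs)
                n = l.shortest_path_distance(lcs)
                if m is None or n is None or max(m, n) > max_level:
                    continue
                cand = (m + n, abs(m - n), (m, n))
                if best is None or cand < best:
                    best = cand
    return best[2] if best is not None else "C_inf"
```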
For each unknown hyponym i we may have several pieces of evidence; for example, for the unknown term continental we have 21 relevant pieces of hypernym evidence, with links to possible hypernyms {carrier, airline, unit, . . .}; and we have 5 pieces of coordinate evidence, with links to possible coordinate terms {airline, american eagle, airbus, . . .}. For each proposed hypernym or coordinate link involved with the novel hyponym i, we compute the set of candidate hypernyms for i; in practice we consider all senses of the immediate hypernym j for each potential novel hypernym, and all senses of the coordinate term k and its first two hypernym ancestors for each potential coordinate. In the continental example, from the 26 individual pieces of evidence over words we construct the set of 99 unique synsets that we will consider as possible hypernyms; these include the two senses of the word airline, the ten senses of the word carrier, and so forth. Next, we iterate through each of the possible hypernym synsets l under which we might add the new word i; for each synset l we com805 pute the change in taxonomy score resulting from adding the implied relations I(H1 il) required by the taxonomic constraints of T. Since typically our set of all evidence involving i will be much smaller than the set of possible relations in I(H1 il), we may efficiently check whether, for each sense s ∈senses(w), for all words where we have some evidence ER iw, whether s participates in some relation with i in the set of implied relations I(H1 il).7 If there is more than one sense s ∈senses(w), we add to I(H1 il) the single relationship Ris that maximizes the taxonomy likelihood, i.e. arg maxs∈senses(w) ∆T(Ris). 3.4 Hypernym Sense Disambiguation A major strength of our model is its ability to correctly choose the sense of a hypernym to which to add a novel hyponym, despite collecting evidence over untagged word pairs. In our algorithm word sense disambiguation is an implicit side-effect of our algorithm; since our algorithm chooses to add the single link which, with its implied links, yields the most likely taxonomy, and since each distinct synset in WordNet has a different immediate neighborhood of relations, our algorithm simply disambiguates each node based on its surrounding structural information. As an example of sense disambiguation in practice, consider our example of continental. Suppose we are iterating through each of the 99 possible synsets under which we might add continental as a hyponym, and we come to the synset airline#n#2 in WordNet 2.1, i.e. “a commercial organization serving as a common carrier.” In this case we will iterate through each piece of hypernym and coordinate evidence; we find that the relation H(continental, carrier) is satisfied with high probability for the specific synset carrier#n#5, the grandparent of airline#n#2; thus the factor ∆T(H3(continental, carrier#n#5)) is included in the factor of the set of implied relations ∆T ¡ I(H1(continental, airline#n#2)) ¢ . Suppose we instead evaluate the first synset of airline, i.e., airline#n#1, with the gloss “a hose that carries air under pressure.” For this synset none of the other 20 relationships directly implied by hypernym evidence or the 5 relationships implied by the coordinate ev7Checking whether or not Ris ∈I(H1 il) may be efficiently computed by checking whether s is in the hypernym ancestors of l or if it shares a least common subsumer with l within 7 steps. 
idence are implied by adding the single link H1(continental,airline#n#1); thus the resulting change in the set of implied links given by the correct “carrier” sense of airline is much higher than that of the “hose” sense. In fact it is the largest of all the 99 considered hypernym links for continental; H1(continental, airline#n#2) is link #18,736 added to the taxonomy by our algorithm. 4 Evaluation In order to evaluate our framework for taxonomy induction, we have applied hyponym acquisition to construct several distinct taxonomies, starting with the base of WordNet 2.1 and only adding novel noun hyponyms. Further, we have constructed taxonomies using a baseline algorithm, which uses the identical hypernym and coordinate classifiers used in our joint algorithm, but which does not combine the evidence of the classifiers. In section 4.1 we describe our evaluation methodology; in sections 4.2 and 4.3 we analyze the fine-grained precision and disambiguation precision of our algorithm compared to the baseline; in section 4.4 we compare the coarse-grained precision of our links (motivated by categories defined by the WordNet supersenses) against the baseline algorithm and against an “oracle” for named entity recognition. Finally, in section 4.5 we evaluate the taxonomies inferred by our algorithm directly against the WordNet 2.1 taxonomy; we perform this evaluation by testing each taxonomy on a set of human judgments of hypernym and non-hypernym noun pairs sampled from newswire text. 4.1 Methodology We evaluate the quality of our acquired hyponyms by direct judgment. In four separate annotation sessions, two judges labeled {50,100,100,100} samples uniformly generated from the first {100,1000,10000,20000} single links added by our algorithm. For the direct measure of fine-grained precision, we simply ask for each link H(X, Y ) added by the system, is X a Y ? In addition to the fine-grained precision, we give a coarse-grained evaluation, inspired by the idea of supersense-tagging in (Ciaramita and Johnson, 2003). The 26 supersenses used in WordNet 2.1 are listed in Table 1; we label a hyponym link as correct in the coarse-grained evaluation if the novel hyponym is placed under the appropriate supersense. This evaluation task 806 1 Tops 8 communication 15 object 22 relation 2 act 9 event 16 person 23 shape 3 animal 10 feeling 17 phenomenon 24 state 4 artifact 11 food 18 plant 25 substance 5 attribute 12 group 19 possession 26 time 6 body 13 location 20 process 7 cognition 14 motive 21 quantity Table 1: The 26 WordNet supersenses is similar to a fine-grained Named Entity Recognition (Fleischman and Hovy, 2002) task with 26 categories; for example, if our algorithm mistakenly inserts a novel non-capital city under the hyponym state capital, it will inherit the correct supersense location. Finally, we evaluate the ability of our algorithm to correctly choose the appropriate sense of the hypernym under which a novel hyponym is being added. Our labelers categorize each candidate sense-disambiguated hypernym synset suggested by our algorithm into the following categories: c1: Correct sense-disambiguated hypernym. c2: Correct hypernym word, but incorrect sense of that word. c3: Incorrect hypernym, but correct supersense. c4: Any other relation is considered incorrect. A single hyponym/hypernym pair is allowed to be simultaneously labeled 2 and 3. 
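Given the c1–c4 labels above, the precision figures reported in the following subsections reduce to simple counts over the judged samples. A minimal sketch of that bookkeeping, in our own formulation:

```python
def precision_figures(judgments):
    """judgments: one set of category codes per sampled link (a link may carry
    both 2 and 3). Returns fine-grained, coarse-grained, and disambiguation
    precision, i.e. c1/total, (c1+c3)/total, and c1/(c1+c2)."""
    total = len(judgments)
    c1 = sum(1 for j in judgments if 1 in j)
    c2 = sum(1 for j in judgments if 2 in j)
    c3 = sum(1 for j in judgments if 3 in j)
    fine = c1 / total if total else 0.0
    coarse = (c1 + c3) / total if total else 0.0
    disamb = c1 / (c1 + c2) if (c1 + c2) else 0.0
    return fine, coarse, disamb
```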
4.2 Fine-grained evaluation Table 2 displays the results of our evaluation of fine-grained precision for the baseline non-joint algorithm (Base) and our joint algorithm (Joint), as well as the relative error reduction (ER) of our algorithm over the baseline. We use the minimum of the two judges’ scores. Here we define fine-grained precision as c1/total. We see that our joint algorithm strongly outperforms the baseline, and has high precision for predicting novel hyponyms up to 10,000 links. 4.3 Hypernym sense disambiguation Also in Table 2 we compare the sense disambiguation precision of our algorithm and the baseline. Here we measure the precision of sense-disambiguation among all examples where each algorithm found a correct hyponym word; our calculation for disambiguation precision is c1/ (c1 + c2). Again our joint algorithm outperforms the baseline algorithm at all levels of recall. Interestingly the baseline disambiguation precision improves with higher recall; this may Fine-grained Pre. Disambiguation Pre. #Links Base Joint ER Base Joint ER 100 0.60 1.00 100% 0.86 1.00 100% 1000 0.52 0.93 85% 0.84 1.00 100% 10000 0.46 0.84 70% 0.90 1.00 100% 20000 0.46 0.68 41% 0.94 0.98 68% Table 2: Fine-grained and disambiguation precision and error reduction for hyponym acquisition # Links NER Base Joint ER vs. ER vs. Oracle NER Base 100 1.00 0.72 1.00 0% 100% 1000 0.69 0.68 0.99 97% 85% 10000 0.45 0.69 0.96 93% 70% 20000 0.54 0.69 0.92 83% 41% Table 3: Coarse-grained precision and error reduction vs. Non-joint baseline and NER Oracle be attributed to the observation that the highestconfidence hypernyms predicted by individual classifiers are likely to be polysemous, whereas hypernyms of lower confidence are more frequently monosemous (and thus trivially easy to disambiguate). 4.4 Coarse-grained evaluation We compute coarse-grained precision as (c1 + c3)/total. Inferring the correct coarse-grained supersense of a novel hyponym can be viewed as a fine-grained (26-category) Named Entity Recognition task; our algorithm for taxonomy induction can thus be viewed as performing high-accuracy fine-grained NER. Here we compare against both the baseline non-joint algorithm as well as an “oracle” algorithm for Named Entity Recognition, which perfectly classifies the supersense of all nouns that fall under the four supersenses {person, group, location, quantity}, but works only for those supersenses. Table 3 shows the results of this coarse-grained evaluation. We see that the baseline non-joint algorithm has higher precision than the NER oracle as 10,000 and 20,000 links; however, both are significantly outperformed by our joint algorithm, which maintains high coarse-grained precision (92%) even at 20,000 links. 4.5 Comparison of inferred taxonomies and WordNet For our final evaluation we compare our learned taxonomies directly against the currently existing hypernym links in WordNet 2.1. In order to compare taxonomies we use a hand-labeled test 807 WN +10K +20K +30K +40K PRE 0.524 0.524 0.574 0.583 0.571 REC 0.165 0.165 0.203 0.211 0.211 F 0.251 0.251 0.300 0.309 0.307 Table 4: Taxonomy hypernym classification vs. WordNet 2.1 on hand-labeled testset set of over 5,000 noun pairs, randomly-sampled from newswire corpora (described in (Snow et al., 2005)). We measured the performance of both our inferred taxonomies and WordNet against this test set.8 The performance and comparison of the best WordNet classifier vs. our taxonomies is given in Table 4. 
Our best-performing inferred taxonomy on this test set is achieved after adding 30,000 novel hyponyms, achieving an 23% relative improvement in F-score over the WN2.1 classifier. 5 Conclusions We have presented an algorithm for inducing semantic taxonomies which attempts to globally optimize the entire structure of the taxonomy. Our probabilistic architecture also includes a new model for learning coordinate terms based on (m, n)-cousin classification. The model’s ability to integrate heterogeneous evidence from different classifiers offers a solution to the key problem of choosing the correct word sense to which to attach a new hypernym. Acknowledgements Thanks to Christiane Fellbaum, Rajat Raina, Bill MacCartney, and Allison Buckley for useful discussions and assistance annotating data. Rion Snow is supported by an NDSEG Fellowship sponsored by the DOD and AFOSR. This work was supported in part by the Disruptive Technology Office (DTO)’s Advanced Question Answering for Intelligence (AQUAINT) Program. References P. Buitelaar, P. Cimiano and B. Magnini. 2005. Ontology Learning from Text: Methods, Evaluation and Applications. Volume 123 Frontiers in Artificial Intelligence and Applications. S. Caraballo. 2001. Automatic Acquisition of a Hypernym-Labeled Noun Hierarchy from Text. Brown University Ph.D. Thesis. 8We found that the WordNet 2.1 model achieving the highest F-score used only the first sense of each hyponym, and allowed a maximum distance of 4 edges between each hyponym and its hypernym. S. Cederberg and D. Widdows. 2003. Using LSA and Noun Coordination Information to Improve the Precision and Recall of Automatic Hyponymy Extraction. Proc. CoNLL-2003, pp. 111–118. T. Chklovski and P. Pantel. 2004. VerbOcean: Mining the Web for Fine-Grained Semantic Verb Relations. Proc. EMNLP-2004. M. Ciaramita and M. Johnson. 2003. Supersense Tagging of Unknown Nouns in WordNet. Proc. EMNLP-2003. O. Etzioni, M. Cafarella, D. Downey, A. Popescu, T. Shaked, S. Soderland, D. Weld, and A. Yates. 2005. Unsupervised Named-Entity Extraction from the Web: An Experimental Study. Artificial Intelligence, 165(1):91–134. C. Fellbaum. 1998. WordNet: An Electronic Lexical Database. Cambridge, MA: MIT Press. R. Girju, A. Badulescu, and D. Moldovan. 2003. Learning Semantic Constraints for the Automatic Discovery of Part-Whole Relations. Proc. HLT-03. M. Fleischman and E. Hovy. 2002. Fine grained classification of named entities. Proc. COLING-02. M. Hearst. 1992. Automatic Acquisition of Hyponyms from Large Text Corpora. Proc. COLING-92. D. Hindle. 1990. Noun classification from predicateargument structures. Proc. ACL-90. D. Lenat. 1995. CYC: A Large-Scale Investment in Knowledge Infrastructure, Communications of the ACM, 38:11, 33–35. D. Lin. 1998. Dependency-based Evaluation of MINIPAR. Workshop on the Evaluation of Parsing Systems, Granada, Spain. D. Lin, S. Zhao, L. Qin and M. Zhou. 2003. Identifying Synonyms among Distributionally Similar Words. Proc. IJCAI-03. M. Pasc¸a. 2005. Finding Instance Names and Alternative Glosses on the Web: WordNet Reloaded. CICLing 2005, pp. 280-292. D. Ravichandran, P. Pantel, and E. Hovy. 2002. Randomized Algorithms and NLP: Using Locality Sensitive Hash Function for High Speed Noun Clustering. Proc. ACL-2002. E. Riloff and J. Shepherd. 1997. A Corpus-Based Approach for Building Semantic Lexicons. Proc EMNLP-1997. B. Roark and E. Charniak. 1998. Noun-phrase cooccurerence statistics for semi-automatic-semantic lexicon construction. Proc. ACL-1998. R. Snow, D. Jurafsky, and A. Y. 
Ng. 2005. Learning syntactic patterns for automatic hypernym discovery. NIPS 2005. P. Turney, M. Littman, J. Bigham, and V. Shnayder. 2003. Combining independent modules to solve multiple-choice synonym and analogy problems. Proc. RANLP-2003, pp. 482–489. 808 | 2006 | 101 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 809–816, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Names and Similarities on the Web: Fact Extraction in the Fast Lane Marius Pas¸ca Google Inc. Mountain View, CA 94043 [email protected] Dekang Lin Google Inc. Mountain View, CA 94043 [email protected] Jeffrey Bigham∗ University of Washington Seattle, WA 98195 [email protected] Andrei Lifchits∗ University of British Columbia Vancouver, BC V6T 1Z4 [email protected] Alpa Jain∗ Columbia University New York, NY 10027 [email protected] Abstract In a new approach to large-scale extraction of facts from unstructured text, distributional similarities become an integral part of both the iterative acquisition of high-coverage contextual extraction patterns, and the validation and ranking of candidate facts. The evaluation measures the quality and coverage of facts extracted from one hundred million Web documents, starting from ten seed facts and using no additional knowledge, lexicons or complex tools. 1 Introduction 1.1 Background The potential impact of structured fact repositories containing billions of relations among named entities on Web search is enormous. They enable the pursuit of new search paradigms, the processing of database-like queries, and alternative methods of presenting search results. The preparation of exhaustive lists of hand-written extraction rules is impractical given the need for domainindependent extraction of many types of facts from unstructured text. In contrast, the idea of bootstrapping for relation and information extraction was first proposed in (Riloff and Jones, 1999), and successfully applied to the construction of semantic lexicons (Thelen and Riloff, 2002), named entity recognition (Collins and Singer, 1999), extraction of binary relations (Agichtein and Gravano, 2000), and acquisition of structured data for tasks such as Question Answering (Lita and Carbonell, 2004; Fleischman et al., 2003). In the context of fact extraction, the resulting iterative acquisition ∗Work done during internships at Google Inc. framework starts from a small set of seed facts, finds contextual patterns that extract the seed facts from the underlying text collection, identifies a larger set of candidate facts that are extracted by the patterns, and adds the best candidate facts to the previous seed set. 1.2 Contributions Figure 1 describes an architecture geared towards large-scale fact extraction. The architecture is similar to other instances of bootstrapping for information extraction. The main processing stages are the acquisition of contextual extraction patterns given the seed facts, acquisition of candidate facts given the extraction patterns, scoring and ranking of the patterns, and scoring and ranking of the candidate facts, a subset of which is added to the seed set of the next round. Within the existing iterative acquisition framework, our first contribution is a method for automatically generating generalized contextual extraction patterns, based on dynamically-computed classes of similar words. Traditionally, the acquisition of contextual extraction patterns requires hundreds or thousands of consecutive iterations over the entire text collection (Lita and Carbonell, 2004), often using relatively expensive or restrictive tools such as shallow syntactic parsers (Riloff and Jones, 1999; Thelen and Riloff, 2002) or named entity recognizers (Agichtein and Gravano, 2000). 
Comparatively, generalized extraction patterns achieve exponentially higher coverage in early iterations. The extraction of large sets of candidate facts opens the possibility of fast-growth iterative extraction, as opposed to the de-facto strategy of conservatively growing the seed set by as few as five items (Thelen and Riloff, 2002) after each iteration. 809 Acquisition of contextual extraction patterns Distributional similarities Text collection Candidate facts Acquisition of candidate facts Occurrences of extraction patterns Validation of candidate facts Scored extraction patterns Scored candidate facts Scoring and ranking Validated candidate facts Seed facts Occurrences of seed facts Extraction patterns Validated extraction patterns Validation of patterns Generalized extraction patterns Figure 1: Large-scale fact extraction architecture The second contribution of the paper is a method for domain-independent validation and ranking of candidate facts, based on a similarity measure of each candidate fact relative to the set of seed facts. Whereas previous studies assume clean text collections such as news corpora (Thelen and Riloff, 2002; Agichtein and Gravano, 2000; Hasegawa et al., 2004), the validation is essential for low-quality sets of candidate facts collected from noisy Web documents. Without it, the addition of spurious candidate facts to the seed set would result in a quick divergence of the iterative acquisition towards irrelevant information (Agichtein and Gravano, 2000). Furthermore, the finer-grained ranking induced by similarities is necessary in fast-growth iterative acquisition, whereas previously proposed ranking criteria (Thelen and Riloff, 2002; Lita and Carbonell, 2004) are implicitly designed for slow growth of the seed set. 2 Similarities for Pattern Acquisition 2.1 Generalization via Word Similarities The extraction patterns are acquired by matching the pairs of phrases from the seed set into document sentences. The patterns consist of contiguous sequences of sentence terms, but otherwise differ from the types of patterns proposed in earlier work in two respects. First, the terms of a pattern are either regular words or, for higher generality, any word from a class of similar words. Second, the amount of textual context encoded in a pattern is limited to the sequence of terms between (i.e., infix) the pair of phrases from a seed fact that could be matched in a document sentence, thus excluding any context to the left (i.e., prefix) and to the right (i.e., postfix) of the seed. The pattern shown at the top of Figure 2, which (Irving Berlin, 1888) NNP NNP CD Infix Aurelio de la Vega was born November 28 , 1925 , in Havana , Cuba . FW FW FW NNP VBD VBN NNP CD , CD , IN NNP , NNP . found not found Infix not found Prefix Postfix Infix Matching on sentences Seed fact Infix−only pattern The poet was born Jan. 13 , several years after the revolution . not found British − native Glenn Cornick of Jethro Tull was born April 23 , 1947 . NNP : JJ NNP NNP IN NNP NNP VBD VBN NNP CD , CD . Infix found found Chester Burton Atkins was born June 20 , 1924 , on a farm near Luttrell . NNP NNP NNP VBD VBN NNP CD , CD , IN DT NN IN NNP . Infix Infix found The youngest child of three siblings , Mariah Carey was born March 27 , 1970 in Huntington , Long Island in New York . DT JJS NN IN CD NNS , NNP NNP VBD VBN NNP CD , CD IN NNP , JJ NN IN NNP NNP . 
found found found (S1) (S2) (S3) (S4) (S5) (Jethro Tull, 1947) (Mariah Carey, 1970) (Chester Burton Atkins, 1924) Candidate facts DT NN VBD VBN NNP CD , JJ NNS IN DT NN . N/A CL1 born CL2 00 , N/A Figure 2: Extraction via infix-only patterns contains the sequence [CL1 born CL2 00 .], illustrates the use of classes of distributionally similar words within extraction patterns. The first word class in the sequence, CL1, consists of words such as {was, is, could}, whereas the second class includes {February, April, June, Aug., November} and other similar words. The classes of words are computed on the fly over all sequences of terms in the extracted patterns, on top of a large set of pairwise similarities among words (Lin, 1998) extracted in advance from around 50 million news articles indexed by the Google search engine over three years. All digits in both patterns and sentences are replaced with a common marker, such 810 that any two numerical values with the same number of digits will overlap during matching. Many methods have been proposed to compute distributional similarity between words, e.g., (Hindle, 1990), (Pereira et al., 1993), (Grefenstette, 1994) and (Lin, 1998). Almost all of the methods represent a word by a feature vector, where each feature corresponds to a type of context in which the word appeared. They differ in how the feature vectors are constructed and how the similarity between two feature vectors is computed. In our approach, we define the features of a word w to be the set of words that occurred within a small window of w in a large corpus. The context window of an instance of w consists of the closest non-stopword on each side of w and the stopwords in between. The value of a feature w′ is defined as the pointwise mutual information between w′ and w: PMI(w′, w) = −log( P (w,w′) P (w)P (w′)). The similarity between two different words w1 and w2, S(w1, w2), is then computed as the cosine of the angle between their feature vectors. While the previous approaches to distributional similarity have only applied to words, we applied the same technique to proper names as well as words. 
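A minimal sketch of this similarity computation is given below, assuming co-occurrence and marginal counts have already been collected with the window definition above; a single marginal table for both words and context features is a simplification, and smoothing and the scale of a multi-year news corpus are ignored.

```python
import math

def pmi_vector(word, cooc, word_freq, total_pairs):
    """Feature vector of `word`: each context word w2 seen in its windows is
    weighted by pointwise mutual information, log(P(word, w2) / (P(word) P(w2))).
    `cooc[word][w2]` holds co-occurrence counts; `word_freq` holds marginals."""
    vec = {}
    for w2, joint in cooc.get(word, {}).items():
        p_joint = joint / total_pairs
        p_w, p_w2 = word_freq[word] / total_pairs, word_freq[w2] / total_pairs
        if p_joint > 0.0:
            vec[w2] = math.log(p_joint / (p_w * p_w2))
    return vec

def similarity(v1, v2):
    """S(w1, w2): cosine of the angle between the two feature vectors."""
    dot = sum(v1[f] * v2[f] for f in v1.keys() & v2.keys())
    n1 = math.sqrt(sum(x * x for x in v1.values()))
    n2 = math.sqrt(sum(x * x for x in v2.values()))
    return dot / (n1 * n2) if n1 > 0.0 and n2 > 0.0 else 0.0
```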
The following are some example similar words and phrases with their similarities, as obtained from the Google News corpus: • Carey: Higgins 0.39, Lambert 0.39, Payne 0.38, Kelley 0.38, Hayes 0.38, Goodwin 0.38, Griffin 0.38, Cummings 0.38, Hansen 0.38, Williamson 0.38, Peters 0.38, Walsh 0.38, Burke 0.38, Boyd 0.38, Andrews 0.38, Cunningham 0.38, Freeman 0.37, Stephens 0.37, Flynn 0.37, Ellis 0.37, Bowers 0.37, Bennett 0.37, Matthews 0.37, Johnston 0.37, Richards 0.37, Hoffman 0.37, Schultz 0.37, Steele 0.37, Dunn 0.37, Rowe 0.37, Swanson 0.37, Hawkins 0.37, Wheeler 0.37, Porter 0.37, Watkins 0.37, Meyer 0.37 [..]; • Mariah Carey: Shania Twain 0.38, Christina Aguilera 0.35, Sheryl Crow 0.35, Britney Spears 0.33, Celine Dion 0.33, Whitney Houston 0.32, Justin Timberlake 0.32, Beyonce Knowles 0.32, Bruce Springsteen 0.30, Faith Hill 0.30, LeAnn Rimes 0.30, Missy Elliott 0.30, Aretha Franklin 0.29, Jennifer Lopez 0.29, Gloria Estefan 0.29, Elton John 0.29, Norah Jones 0.29, Missy Elliot 0.29, Alicia Keys 0.29, Avril Lavigne 0.29, Kid Rock 0.28, Janet Jackson 0.28, Kylie Minogue 0.28, Beyonce 0.27, Enrique Iglesias 0.27, Michelle Branch 0.27 [..]; • Jethro Tull: Motley Crue 0.28, Black Crowes 0.26, Pearl Jam 0.26, Silverchair 0.26, Black Sabbath 0.26, Doobie Brothers 0.26, Judas Priest 0.26, Van Halen 0.25, Midnight Oil 0.25, Pere Ubu 0.24, Black Flag 0.24, Godsmack 0.24, Grateful Dead 0.24, Grand Funk Railroad 0.24, Smashing Pumpkins 0.24, Led Zeppelin 0.24, Aerosmith 0.24, Limp Bizkit 0.24, Counting Crows 0.24, Echo And The Bunnymen 0.24, Cold Chisel 0.24, Thin Lizzy 0.24 [..]. To our knowledge, the only previous study that embeds similarities into the acquisition of extraction patterns is (Stevenson and Greenwood, 2005). The authors present a method for computing pairwise similarity scores among large sets of potential syntactic (subject-verb-object) patterns, to detect centroids of mutually similar patterns. By assuming the syntactic parsing of the underlying text collection to generate the potential patterns in the first place, the method is impractical on Web-scale collections. Two patterns, e.g. chairman-resign and CEO-quit, are similar to each other if their components are present in an external hand-built ontology (i.e., WordNet), and the similarity among the components is high over the ontology. Since general-purpose ontologies, and WordNet in particular, contain many classes (e.g., chairman and CEO) but very few instances such as Osasuna, Crewe etc., the patterns containing an instance rather than a class will not be found to be similar to one another. In comparison, the classes and instances are equally useful in our method for generalizing patterns for fact extraction. We merge basic patterns into generalized patterns, regardless of whether the similar words belong, as classes or instances, in any external ontology. 2.2 Generalization via Infix-Only Patterns By giving up the contextual constraints imposed by the prefix and postfix, infix-only patterns represent the most aggressive type of extraction patterns that still use contiguous sequences of terms. In the absence of the prefix and postfix, the outer boundaries of the fact are computed separately for the beginning of the first (left) and end of the second (right) phrases of the candidate fact. For generality, the computation relies only on the partof-speech tags of the current seed set. Starting forward from the right extremity of the infix, we collect a growing sequence of terms whose partof-speech tags are [P1+ P2+ .. 
Pn+], where the 811 notation Pi+ represents one or more consecutive occurrences of the part-of-speech tag Pi. The sequence [P1 P2 .. Pn] must be exactly the sequence of part of speech tags from the right side of one of the seed facts. The point where the sequence cannot be grown anymore defines the boundary of the fact. A similar procedure is applied backwards, starting from the left extremity of the infix. An infix-only pattern produces a candidate fact from a sentence only if an acceptable sequence is found to the left and also to the right of the infix. Figure 2 illustrates the process on the infixonly pattern mentioned earlier, and one seed fact. The part-of-speech tags for the seed fact are [NNP NNP] and [CD] for the left and right sides respectively. The infix occurs in all sentences. However, the matching of the part-of-speech tags of the sentence sequences to the left and right of the infix, against the part-of-speech tags of the seed fact, only succeeds for the last three sentences. It fails for the first sentence S1 to the left of the infix, because [.. NNP] (for Vega) does not match [NNP NNP]. It also fails for the second sentence S2 to both the left and the right side of the infix, since [.. NN] (for poet) does not match [NNP NNP], and [JJ ..] (for several) does not match [CD]. 3 Similarities for Validation and Ranking 3.1 Revisiting Standard Ranking Criteria Because some of the acquired extraction patterns are too generic or wrong, all approaches to iterative acquisition place a strong emphasis on the choice of criteria for ranking. Previous literature quasi-unanimously assesses the quality of each candidate fact based on the number and quality of the patterns that extract the candidate fact (more is better); and the number of seed facts extracted by the same patterns (again, more is better) (Agichtein and Gravano, 2000; Thelen and Riloff, 2002; Lita and Carbonell, 2004). However, our experiments using many variations of previously proposed scoring functions suggest that they have limited applicability in large-scale fact extraction, for two main reasons. The first is that it is impractical to perform hundreds of acquisition iterations on terabytes of text. Instead, one needs to grow the seed set aggressively in each iteration. Previous scoring functions were implicitly designed for cautious acquisition strategies (Collins and Singer, 1999), which expand the seed set very slowly across consecutive iterations. In that case, it makes sense to single out a small number of best candidates, among the other available candidates. Comparatively, when 10,000 candidate facts or more need to be added to a seed set of 10 seeds as early as after the first iteration, it is difficult to distinguish the quality of extraction patterns based, for instance, only on the percentage of the seed set that they extract. The second reason is the noisy nature of the Web. A substantial number of factors can and will concur towards the worst-case extraction scenarios on the Web. Patterns of apparently high quality turn out to produce a large quantity of erroneous “facts” such as (A-League, 1997), but also the more interesting (Jethro Tull, 1947) as shown earlier in Figure 2, or (Web Site David, 1960) or (New York, 1831). As for extraction patterns of average or lower quality, they will naturally lead to even more spurious extractions. 
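For reference, the part-of-speech boundary matching of Section 2.2, which produced both the correct candidates and the spurious (Jethro Tull, 1947) discussed above, can be sketched as follows. Encoding tags as bracketed tokens and matching them with a greedy regular expression is a simplification chosen for this sketch, not a description of the actual implementation.

```python
import re

def boundary_length(tags, seed_side_tags):
    """Number of tokens kept when growing outward from the infix: the tags
    must form the sequence [P1+ P2+ ... Pn+], where [P1 ... Pn] is exactly the
    tag sequence of the corresponding side of a seed fact.  Returns 0 if the
    seed tag sequence cannot be matched."""
    encoded = "".join("<%s>" % t for t in tags)
    pattern = "".join("(?:<%s>)+" % re.escape(t) for t in seed_side_tags)
    match = re.match(pattern, encoded)
    return encoded[:match.end()].count("<") if match else 0

# Right side of the seed (Irving Berlin, 1888) is tagged [CD]; in sentence S5
# the tokens after the infix are tagged [CD, IN, NNP, ...], so exactly one
# token (the year) is kept.  The left side is handled the same way on
# reversed sequences, so the left context of S2 ("The poet", read outward as
# [NN, DT]) fails against the seed tags [NNP, NNP].
print(boundary_length(["CD", "IN", "NNP"], ["CD"]))       # -> 1
print(boundary_length(["NN", "DT"], ["NNP", "NNP"]))      # -> 0
```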
3.2 Ranking of Extraction Patterns The intuition behind our criteria for ranking generalized pattern is that patterns of higher precision tend to contain words that are indicative of the relation being mined. Thus, a pattern is more likely to produce good candidate facts if its infix contains the words language or spoken if extracting Language-SpokenIn-Country facts, or the word capital if extracting City-CapitalOf-Country relations. In each acquisition iteration, the scoring of patterns is a two-pass procedure. The first pass computes the normalized frequencies of all words excluding stopwords, over the entire set of extraction patterns. The computation applies separately to the prefix, infix and postfix of the patterns. In the second pass, the score of an extraction pattern is determined by the words with the highest frequency score in its prefix, infix and postfix, as computed in the first pass and adjusted for the relative distance to the start and end of the infix. 3.3 Ranking of Candidate Facts Figure 3 introduces a new scheme for assessing the quality of the candidate facts, based on the computation of similarity scores for each candidate relative to the set of seed facts. A candidate fact, e.g., (Richard Steele, 1672), is similar to the seed set if both its phrases, i.e., Richard Steele and 1672, are similar to the corresponding phrases (John Lennon or Stephen Foster in the case of Richard Steele) from the seed facts. For a phrase of a candidate fact to be assigned a non-default (non-minimum) 812 ... Lennon Lambert McFadden Bateson McNamara Costello Cronin Wooley Baker ... Foster Hansen Hawkins Fisher Holloway Steele Sweeney Chris John James Andrew Mike Matt Brian Christopher ... John Lennon 1940 Seed facts Stephen Foster 1826 Brian McFadden 1980 (4) (3) Robert S. McNamara 1916 (6) (5) Barbara Steele 1937 (7) (2) Stan Hansen 1949 (9) (8) Similar words Similar words for: John Similar words for: Stephen for: Lennon Similar words for: Foster ... Stephen Robert Michael Peter William Stan Richard (1) Barbara (3) (5) (7) (2) (8) (9) (4) (6) (2) (1) Candidate facts Jethro Tull 1947 Richard Steele 1672 Figure 3: The role of similarities in estimating the quality of candidate facts similarity score, the words at its extremities must be similar to one or more words situated at the same positions in the seed facts. This is the case for the first five candidate facts in Figure 3. For example, the first word Richard from one of the candidate facts is similar to the first word John from one of the seed facts. Concurrently, the last word Steele from the same phrase is similar to Foster from another seed fact. Therefore Robert Foster is similar to the seed facts. The score of a phrase containing N words is: ( C1 + PN i=1 log(1 + Simi) , if Sim1,N > 0 C2 , otherwise. where Simi is the similarity of the component word at position i in the phrase, and C1 and C2 are scaling constants such that C2≪C1. Thus, the similarity score of a candidate fact aggregates individual word-to-word similarity scores, for the left side and then for the right side of a candidate fact. In turn, the similarity score of a component word Simi is higher if: a) the computed word-toword similarity scores are higher relative to words at the same position i in the seeds; and b) the component word is similar to words from more than one seed fact. The similarity scores are one of a linear combination of features that induce a ranking over the candidate facts. 
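The per-phrase score defined above, C1 plus the sum over positions of log(1 + Sim_i) when both extremity words have non-zero similarity and C2 otherwise (with C2 much smaller than C1), can be sketched as follows. The constant values and the summation of the two sides in fact_score are assumptions of the sketch; the text only states that the two sides are aggregated.

```python
import math

# Illustrative scaling constants with C2 << C1; the actual values are not given.
C1, C2 = 10.0, 0.001

def phrase_score(word_sims):
    """Score of one phrase of a candidate fact.  word_sims[i] is Sim_i, the
    similarity of the word at position i to words at the same position in the
    seed facts (0 if none).  Returns C1 + sum_i log(1 + Sim_i) when both
    extremity words have non-zero similarity, and C2 otherwise."""
    if not word_sims or word_sims[0] <= 0.0 or word_sims[-1] <= 0.0:
        return C2
    return C1 + sum(math.log(1.0 + s) for s in word_sims)

def fact_score(left_sims, right_sims):
    """Aggregate the left and right phrases of a candidate fact; summing the
    two phrase scores is an assumption of this sketch."""
    return phrase_score(left_sims) + phrase_score(right_sims)
```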
Three other domain-independent features contribute to the final ranking: a) a phrase completeness score computed statistically over the entire set of candidate facts, which demotes candidate facts if any of their two sides is likely to be incomplete (e.g., Mary Lou vs. Mary Lou Retton, or John F. vs. John F. Kennedy); b) the average PageRank value over all documents from which the candidate fact is extracted; and c) the patternbased scores of the candidate fact. The latter feature converts the scores of the patterns extracting the candidate fact into a score for the candidate fact. For this purpose, it considers a fixed-length window of words around each match of a candidate fact in some sentence from the text collection. This is equivalent to analyzing all sentence contexts from which a candidate fact can be extracted. For each window, the word with the highest frequency score, as computed in the first pass of the procedure for scoring the patterns, determines the score of the candidate fact in that context. The overall pattern-based score of a candidate fact is the sum of the scores over all its contexts of occurrence, normalized by the frequency of occurrence of the candidate over all sentences. Besides inducing a ranking over the candidate facts, the similarity scores also serve as a validation filter over the candidate facts. Indeed, any candidates that are not similar to the seed set can be filtered out. For instance, the elimination of (Jethro Tull, 1947) is a side effect of verifying that Tull is not similar to any of the last-position words from phrases in the seed set. 4 Evaluation 4.1 Data The source text collection consists of three chunks W1, W2, W3 of approximately 100 million documents each. The documents are part of a larger snapshot of the Web taken in 2003 by the Google search engine. All documents are in English. The textual portion of the documents is cleaned of Html, tokenized, split into sentences and partof-speech tagged using the TnT tagger (Brants, 2000). The evaluation involves facts of type PersonBornIn-Year. The reasons behind the choice of this particular type are threefold. First, many Person-BornIn-Year facts are probably available on the Web (as opposed to, e.g., City-CapitalOfCountry facts), to allow for a good stress test for large-scale extraction. Second, either side of the facts (Person and Year) may be involved in many other types of facts, such that the extraction would easily divergence unless it performs correctly. Third, the phrases from one side (Person) have an utility in their own right, for lexicon 813 Table 1: Set of seed Person-BornIn-Year facts Name Year Name Year Paul McCartney 1942 John Lennon 1940 Vincenzo Bellini 1801 Stephen Foster 1826 Hoagy Carmichael 1899 Irving Berlin 1888 Johann Sebastian Bach 1685 Bela Bartok 1881 Ludwig van Beethoven 1770 Bob Dylan 1941 construction or detection of person names. The Person-BornIn-Year type is specified through an initial set of 10 seed facts shown in Table 1. Similarly to source documents, the facts are also part-of-speech tagged. 4.2 System Settings In each iteration, the case-insensitive matching of the current set of seed facts onto the sentences produces basic patterns. The patterns are converted into generalized patterns. The length of the infix may vary between 1 and 6 words. Potential patterns are discarded if the infix contains only stopwords. When a pattern is retained, it is used as an infix-only pattern, and allowed to generate at most 600,000 candidate facts. 
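The pattern retention rule just described can be written down directly; the stopword list in the usage lines is hypothetical.

```python
def retain_pattern(infix_tokens, stopwords, min_len=1, max_len=6):
    """Keep a pattern only if its infix is 1 to 6 words long and contains at
    least one non-stopword; retained patterns are then used as infix-only
    patterns (each capped at 600,000 candidate facts in the settings above)."""
    if not (min_len <= len(infix_tokens) <= max_len):
        return False
    return any(t.lower() not in stopwords for t in infix_tokens)

# Usage with a hypothetical stopword list:
stop = {"was", "is", "the", "of", "in"}
print(retain_pattern(["was", "born"], stop))  # True: 'born' is not a stopword
print(retain_pattern(["was", "the"], stop))   # False: all stopwords
```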
At the end of an iteration, approximately one third of the validated candidate facts are added to the current seed set. Consequently, the acquisition expands the initial seed set of 10 facts to 100,000 facts (after iteration 1) and then to one million facts (after iteration 2) using chunk W1. 4.3 Precision A separate baseline run extracts candidate facts from the text collection following the traditional iterative acquisition approach. Pattern generalization is disabled, and the ranking of patterns and facts follows strictly the criteria and scoring functions from (Thelen and Riloff, 2002), which are also used in slightly different form in (Lita and Carbonell, 2004) and (Agichtein and Gravano, 2000). The theoretical option of running thousands of iterations over the text collection is not viable, since it would imply a non-justifiable expense of our computational resources. As a more realistic compromise over overly-cautious acquisition, the baseline run retains as many of the top candidate facts as the size of the current seed, whereas (Thelen and Riloff, 2002) only add the top five candidate facts to the seed set after each iteration. The evaluation considers all 80, a sample of the 320, and another sample of the 10,240 facts retained after iterations 3, 5 and 10 respectively. The correctness assessment of each fact consists in manually finding some Web page that contains clear evidence that the fact is correct. If no such page exists, the fact is marked as incorrect. The corresponding precision values after the three iterations are 91.2%, 83.8% and 72.9%. For the purpose of evaluating the precision of our system, we select a sample of facts from the entire list of one million facts extracted from chunk W1, ranked in decreasing order of their computed scores. The sample is generated automatically from the top of the list to the bottom, by retaining a fact and skipping the following consecutive N facts, where N is incremented at each step. The resulting list, which preserves the relative order of the facts, contains 1414 facts. The 115 facts for which a Web search engine does not return any documents, when the name (as a phrase) and the year are submitted together in a conjunctive query, are discarded from the sample of 1414 facts. In those cases, the facts were acquired from the 2003 snapshot of the Web, but queries are submitted to a search engine with access to current Web documents, hence the difference when some of the 2003 documents are no longer available or indexable. Based on the sample set, the average precision of the list of one million facts extracted from chunk W1 is 98.5% over the top 1/100 of the list, 93.1% over the top half of the list, and 88.3% over the entire list of one million facts. Table 2 shows examples of erroneous facts extracted from chunk W1. Causes of errors include incorrect approximations of the name boundaries (e.g., Alma in Alma Theresa Rausch is incorrectly tagged as an adjective), and selection of the wrong year as birth year (e.g., for Henry Lumbar). In the case of famous people, the extracted facts tend to capture the correct birth year for several variations of the names, as shown in Table 3. Conversely, it is not necessary that a fact occur with high frequency in order for it to be extracted, which is an advantage over previous approaches that rely strongly on redundancy (cf. (Cafarella et al., 2005)). Table 4 illustrates a few of the correctly extracted facts that occur rarely on the Web. 
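For reference, the skip-based sampling used above to build the evaluation sample can be sketched as follows; whether the skip count starts at zero is an assumption of the sketch.

```python
def skip_sample(ranked_facts):
    """Walk the ranked list from top to bottom, retaining a fact and then
    skipping the next n facts, with n incremented at each step; the relative
    order of the retained facts is preserved."""
    sample, i, n = [], 0, 0
    while i < len(ranked_facts):
        sample.append(ranked_facts[i])
        i += 1 + n
        n += 1
    return sample

# Over a ranked list of one million facts this retains 1,414 of them, which is
# consistent with the sample size reported above.
print(len(skip_sample(range(1_000_000))))  # -> 1414
```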
4.4 Recall In contrast to the assessment of precision, recall can be evaluated automatically, based on external 814 Table 2: Incorrect facts extracted from the Web Spurious Fact Context in Source Sentence (Theresa Rausch, Alma Theresa Rausch was born 1912) on 9 March 1912 (Henry Lumbar, Henry Lumbar was born 1861 1937) and died 1937 (Concepcion Paxety, Maria de la Concepcion Paxety 1817) b. 08 Dec. 1817 St. Aug., FL. (Mae Yaeger, Ella May/Mae Yaeger was born 1872) 20 May 1872 in Mt. (Charles Whatley, Long, Charles Whatley b. 16 1821) FEB 1821 d. 29 AUG (HOLT George W. HOLT (new line) George W. Holt Holt, 1845) was born in Alabama in 1845 (David Morrish David Morrish (new line) Canadian, 1953) Canadian, b. 1953 (Mary Ann, 1838) had a daughter, Mary Ann, who was born in Tennessee in 1838 (Mrs. Blackmore, Mrs. Blackmore was born April 1918) 28, 1918, in Labaddiey Table 3: Birth years extracted for both pseudonyms and corresponding real names Pseudonym Real Name Year Gloria Estefan Gloria Fajardo 1957 Nicolas Cage Nicolas Kim Coppola 1964 Ozzy Osbourne John Osbourne 1948 Ringo Starr Richard Starkey 1940 Tina Turner Anna Bullock 1939 Tom Cruise Thomas Cruise Mapother IV 1962 Woody Allen Allen Stewart Konigsberg 1935 lists of birth dates of various people. We start by collecting two gold standard sets of facts. The first set is a random set of 609 actors and their birth years from a Web compilation (GoldA). The second set is derived from the set of questions used in the Question Answering track (Voorhees and Tice, 2000) of the Text REtrieval Conference from 1999 through 2002. Each question asking for the birth date of a person (e.g., “What year was Robert Frost born?”) results in a pair containing the person’s name and the birth year specified in the answer keys. Thus, the second gold standard set contains 17 pairs of people and their birth years (GoldT ). Table 5 shows examples of facts in each of the gold standard sets. Table 6 shows two types of recall scores computed against the gold standard sets. The recall scores over ∩Gold take into consideration only the set of person names from the gold standard with some extracted year(s). More precisely, given that some years were extracted for a person name, it verifies whether they include the year specified in the gold standard for that person name. Comparatively, the recall score denoted AllGold is comTable 4: Extracted facts that occur infrequently Fact Source Domain (Irvine J Forcier, 1912) geocities.com (Marie Louise Azelie Chabert, 1861) vienici.com (Jacob Shalles, 1750) selfhost.com (Robert Chester Claggett, 1898) rootsweb.com (Charoltte Mollett, 1843) rootsweb.com (Nora Elizabeth Curran, 1979) jimtravis.com Table 5: Composition of gold standard sets Gold Set Composition and Examples of Facts GoldA Actors (Web compilation) Nr. facts: 609 (Andie MacDowell, 1958), (Doris Day, 1924), (Diahann Carroll, 1935) GoldT People (TREC QA track) Nr. facts: 17 (Davy Crockett, 1786), (Julius Caesar, 100 B.C.), (King Louis XIV, 1638) puted over the entire set of names from the gold standard. For the GoldA set, the size of the ∩Gold set of person names changes little when the facts are extracted from chunk W1 vs. W2 vs. W3. The recall scores over ∩Gold exhibit little variation from one Web chunk to another, whereas the AllGold score is slightly higher on the W3 chunk, probably due to a higher number of documents that are relevant to the extraction task. 
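The two recall scores defined above can be computed as in the following sketch; the toy data in the usage lines is hypothetical.

```python
def recall_scores(extracted_years, gold_years):
    """`extracted_years` maps a person name to the set of years extracted for
    it; `gold_years` maps a name to its gold birth year.  The first score is
    computed only over gold names for which some year was extracted; the
    second is computed over the entire gold set."""
    with_years = [n for n in gold_years if extracted_years.get(n)]
    correct = sum(1 for n in with_years if gold_years[n] in extracted_years[n])
    cap_gold = correct / len(with_years) if with_years else 0.0
    all_gold = correct / len(gold_years) if gold_years else 0.0
    return cap_gold, all_gold

# Toy example with hypothetical extractions:
gold = {"Davy Crockett": "1786", "Julius Caesar": "100 B.C."}
extracted = {"Davy Crockett": {"1786", "1796"}}
print(recall_scores(extracted, gold))  # -> (1.0, 0.5)
```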
When the facts are extracted from a combination of two or three of the available Web chunks, the recall scores computed over AllGold are significantly higher as the size of the ∩Gold set increases. In comparison, the recall scores over the growing ∩Gold set increases slightly with larger evaluation sets. The highest value of the recall score for GoldA is 89.9% over the ∩Gold set, and 70.7% over AllGold. The smaller size of the second gold standard set, GoldT , explains the higher variation of the values shown in the lower portion of Table 6. 4.5 Comparison to Previous Results Another recent approach specifically addresses the problem of extracting facts from a similarly-sized collection of Web documents. In (Cafarella et al., 2005), manually-prepared extraction rules are applied to a collection of 60 million Web documents to extract entities of types Company and Country, as well as facts of type Person-CeoOf-Company and City-CapitalOf-Country. Based on manual evaluation of precision and recall, a total of 23,128 company names are extracted at precision of 80%; the number decreases to 1,116 at precision of 90%. In addition, 2,402 Person-CeoOf-Company facts 815 Table 6: Automatic evaluation of recall, over two gold standard sets GoldA (609 person names) and GoldT (17 person names) Gold Set Input Data Recall (%) (Web Chunk) ∩Gold AllGold GoldA W1 86.4 49.4 W2 85.0 50.5 W3 86.3 54.1 W1+W2 88.5 64.5 W1+W2+W3 89.9 70.7 GoldT W1 81.8 52.9 W2 90.0 52.9 W3 100.0 64.7 W1+W2 81.8 52.9 W1+W2+W3 91.6 64.7 are extracted at precision 80%. The recall value is 80% at precision 90%. Recall is evaluated against the set of company names extracted by the system, rather than an external gold standard with pairs of a CEO and a company name. As such, the resulting metric for evaluating recall used in (Cafarella et al., 2005) is somewhat similar to, though more relaxed than, the recall score over the ∩Gold set introduced in the previous section. 5 Conclusion The combination of generalized extraction patterns and similarity-driven ranking criteria results in a fast-growth iterative approach for large-scale fact extraction. From 10 Person-BornIn-Year facts and no additional knowledge, a set of one million facts of the same type is extracted from a collection of 100 million Web documents of arbitrary quality, with a precision around 90%. This corresponds to a growth ratio of 100,000:1 between the size of the extracted set of facts and the size of the initial set of seed facts. To our knowledge, the growth ratio and the number of extracted facts are several orders of magnitude higher than in any of the previous studies on fact extraction based on either hand-written extraction rules (Cafarella et al., 2005), or bootstrapping for relation and information extraction (Agichtein and Gravano, 2000; Lita and Carbonell, 2004). The next research steps converge towards the automatic construction of a searchable repository containing billions of facts regarding people. References E. Agichtein and L. Gravano. 2000. Snowball: Extracting relations from large plaintext collections. In Proceedings of the 5th ACM International Conference on Digital Libraries (DL-00), pages 85–94, San Antonio, Texas. T. Brants. 2000. TnT - a statistical part of speech tagger. In Proceedings of the 6th Conference on Applied Natural Language Processing (ANLP-00), pages 224–231, Seattle, Washington. M. Cafarella, D. Downey, S. Soderland, and O. Etzioni. 2005. KnowItNow: Fast, scalable information extraction from the web. 
In Proceedings of the Human Language Technology Conference (HLT-EMNLP-05), pages 563–570, Vancouver, Canada. M. Collins and Y. Singer. 1999. Unsupervised models for named entity classification. In Proceedings of the 1999 Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-99), pages 189–196, College Park, Maryland. M. Fleischman, E. Hovy, and A. Echihabi. 2003. Offline strategies for online question answering: Answering questions before they are asked. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03), pages 1–7, Sapporo, Japan. G. Grefenstette. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers, Boston, Massachusetts. T. Hasegawa, S. Sekine, and R. Grishman. 2004. Discovering relations among named entities from large corpora. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 415– 422, Barcelona, Spain. D. Hindle. 1990. Noun classification from predicateargument structures. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics (ACL-90), pages 268–275, Pittsburgh, Pennsylvania. D. Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 17th International Conference on Computational Linguistics and the 36th Annual Meeting of the Association for Computational Linguistics (COLING-ACL-98), pages 768–774, Montreal, Quebec. L. Lita and J. Carbonell. 2004. Instance-based question answering: A data driven approach. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-04), pages 396–403, Barcelona, Spain. F. Pereira, N. Tishby, and L. Lee. 1993. Distributional clustering of english words. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics (ACL-93), pages 183–190, Columbus, Ohio. E. Riloff and R. Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of the 16th National Conference on Artificial Intelligence (AAAI-99), pages 474–479, Orlando, Florida. M. Stevenson and M. Greenwood. 2005. A semantic approach to IE pattern induction. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL-05), pages 379–386, Ann Arbor, Michigan. M. Thelen and E. Riloff. 2002. A bootstrapping method for learning semantic lexicons using extraction pattern contexts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-02), pages 214–221, Philadelphia, Pennsylvania. E.M. Voorhees and D.M. Tice. 2000. Building a questionanswering test collection. In Proceedings of the 23rd International Conference on Research and Development in Information Retrieval (SIGIR-00), pages 200–207, Athens, Greece. 816 | 2006 | 102 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 817–824, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Weakly Supervised Named Entity Transliteration and Discovery from Multilingual Comparable Corpora Alexandre Klementiev Dan Roth Dept. of Computer Science University of Illinois Urbana, IL 61801 klementi,danr @uiuc.edu Abstract Named Entity recognition (NER) is an important part of many natural language processing tasks. Current approaches often employ machine learning techniques and require supervised data. However, many languages lack such resources. This paper presents an (almost) unsupervised learning algorithm for automatic discovery of Named Entities (NEs) in a resource free language, given a bilingual corpora in which it is weakly temporally aligned with a resource rich language. NEs have similar time distributions across such corpora, and often some of the tokens in a multi-word NE are transliterated. We develop an algorithm that exploits both observations iteratively. The algorithm makes use of a new, frequency based, metric for time distributions and a resource free discriminative approach to transliteration. Seeded with a small number of transliteration pairs, our algorithm discovers multi-word NEs, and takes advantage of a dictionary (if one exists) to account for translated or partially translated NEs. We evaluate the algorithm on an English-Russian corpus, and show high level of NEs discovery in Russian. 1 Introduction Named Entity recognition has been getting much attention in NLP research in recent years, since it is seen as significant component of higher level NLP tasks such as information distillation and question answering. Most successful approaches to NER employ machine learning techniques, which require supervised training data. However, for many languages, these resources do not exist. Moreover, it is often difficult to find experts in these languages both for the expensive annotation effort and even for language specific clues. On the other hand, comparable multilingual data (such as multilingual news streams) are becoming increasingly available (see section 4). In this work, we make two independent observations about Named Entities encountered in such corpora, and use them to develop an algorithm that extracts pairs of NEs across languages. Specifically, given a bilingual corpora that is weakly temporally aligned, and a capability to annotate the text in one of the languages with NEs, our algorithm identifies the corresponding NEs in the second language text, and annotates them with the appropriate type, as in the source text. The first observation is that NEs in one language in such corpora tend to co-occur with their counterparts in the other. E.g., Figure 1 shows a histogram of the number of occurrences of the word Hussein and its Russian transliteration in our bilingual news corpus spanning years 2001 through late 2005. One can see several common peaks in the two histograms, largest one being around the time of the beginning of the war in Iraq. The word Russia, on the other hand, has a distinctly different temporal signature. We can exploit such weak synchronicity of NEs across languages to associate them. In order to score a pair of entities across languages, we compute the similarity of their time distributions. 
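To make this concrete, the time-distribution scoring used throughout the paper (temporal binning and the DFT-based F-index described in Section 3.2) can be sketched as follows; the bin granularity and the number of retained Fourier coefficients are assumptions of the sketch.

```python
import numpy as np

def time_distribution(mention_days, num_bins, span_days):
    """Normalized temporal histogram of a term: count its mentions per
    temporal bin over the corpus time span, then normalize (Section 3.2)."""
    counts, _ = np.histogram(mention_days, bins=num_bins, range=(0, span_days))
    total = counts.sum()
    return counts / total if total else counts.astype(float)

def dft_distance(seq_a, seq_b, k=10):
    """F-index style score: Euclidean distance between the vectors of the
    first k Fourier expansion coefficients of two time sequences; smaller
    means more similar.  Keeping k = 10 coefficients is an assumption."""
    fa = np.fft.rfft(seq_a)[:k]
    fb = np.fft.rfft(seq_b)[:k]
    return float(np.linalg.norm(fa - fb))
```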
The second observation is that NEs often contain or are entirely made up of words that are phonetically transliterated or have a common etymological origin across languages (e.g. parliament in English and
, its Russian translation), and thus are phonetically similar. Figure 2 shows 817 0 5 10 15 20 ’hussein’ (English) 0 5 10 15 20 ’hussein’ (Russian) 0 5 10 15 20 01/01/01 10/05/05 Number of Occurences Time ’russia’ (English) Figure 1: Temporal histograms for Hussein (top), its Russian transliteration (middle), and of the word Russia (bottom). an example list of NEs and their possible Russian transliterations. Approaches that attempt to use these two characteristics separately to identify NEs across languages would have significant shortcomings. Transliteration based approaches require a good model, typically handcrafted or trained on a clean set of transliteration pairs. On the other hand, time sequence similarity based approaches would incorrectly match words which happen to have similar time signatures (e.g., Taliban and Afghanistan in recent news). We introduce an algorithm we call co-ranking which exploits these observations simultaneously to match NEs on one side of the bilingual corpus to their counterparts on the other. We use a Discrete Fourier Transform (Arfken, 1985) based metric for computing similarity of time distributions, and show that it has significant advantages over other metrics traditionally used. We score NEs similarity with a linear transliteration model. We first train a transliteration model on singleword NEs. During training, for a given NE in one language, the current model chooses a list of top ranked transliteration candidates in another language. Time sequence scoring is then used to rerank the list and choose the candidate best temporally aligned with the NE. Pairs of NEs and the best candidates are then used to iteratively train the
! "#%$ '& #)( * +,-!+). / ('02143657(81 9 .:8; *=< .7; > 0 / # ?@ 9 +)A & 5-BDCE0-FF G)<H*JI @-K L 0M$ & CN02F1O P @-, I @2K4; Q Figure 2: Example English NEs and their transliterated Russian counterparts. transliteration model. Once the model is trained, NE discovery proceeds as follows. For a given NE, transliteration model selects a candidate list for each constituent word. If a dictionary is available, each candidate list is augmented with translations (if they exist). Translations will be the correct choice for some NE words (e.g. for queen in Queen Victoria), and transliterations for others (e.g. Bush in Steven Bush). We expect temporal sequence alignment to resolve many of such ambiguities. It is used to select the best translation/transliteration candidate from each word’s candidate set, which are then merged into a possible NE in the other language. Finally, we verify that the NE is actually contained in the target corpus. A major challenge inherent in discovering transliterated NEs is the fact that a single entity may be represented by multiple transliteration strings. One reason is language morphology. For example, in Russian, depending on a case being used, the same noun may appear with various endings. Another reason is the lack of transliteration standards. Again, in Russian, several possible transliterations of an English entity may be acceptable, as long as they are phonetically similar to the source. Thus, in order to rely on the time sequences we obtain, we need to be able to group variants of the same NE into an equivalence class, and collect their aggregate mention counts. We would then score time sequences of these equivalence classes. For instance, we would like to count the aggregate number of occurrences of R Herzegovina, Hercegovina S on the English side in order to map it accurately to the equivalence class of that NE’s variants we may see on the Russian side of our corpus (e.g. RHT VU XW)YZ[ 4\]T ^U XW)YZ[ _%\]T VU M` W)YZ [ baV\bT VU MW)YZ[ cYed [ S ). One of the objectives for this work was to use as 818 little of the knowledge of both languages as possible. In order to effectively rely on the quality of time sequence scoring, we used a simple, knowledge poor approach to group NE variants for the languages of our corpus (see 3.2.1). In the rest of the paper, whenever we refer to a Named Entity or an NE constituent word, we imply its equivalence class. Note that although we expect that better use of language specific knowledge would improve the results, it would defeat one of the goals of this work. 2 Previous work There has been other work to automatically discover NE with minimal supervision. Both (Cucerzan and Yarowsky, 1999) and (Collins and Singer, 1999) present algorithms to obtain NEs from untagged corpora. However, they focus on the classification stage of already segmented entities, and make use of contextual and morphological clues that require knowledge of the language beyond the level we want to assume with respect to the target language. The use of similarity of time distributions for information extraction, in general, and NE extraction, in particular, is not new. (Hetland, 2004) surveys recent methods for scoring time sequences for similarity. (Shinyama and Sekine, 2004) used the idea to discover NEs, but in a single language, English, across two news sources. A large amount of previous work exists on transliteration models. 
Most are generative and consider the task of producing an appropriate transliteration for a given word, and thus require considerable knowledge of the languages. For example, (AbdulJaleel and Larkey, 2003; Jung et al., 2000) train English-Arabic and EnglishKorean generative transliteration models, respectively. (Knight and Graehl, 1997) build a generative model for backward transliteration from Japanese to English. While generative models are often robust, they tend to make independence assumptions that do not hold in data. The discriminative learning framework argued for in (Roth, 1998; Roth, 1999) as an alternative to generative models is now used widely in NLP, even in the context of word alignment (Taskar et al., 2005; Moore, 2005). We make use of it here too, to learn a discriminative transliteration model that requires little knowledge of the target language. We extend our preliminary work in (Klementiev and Roth, 2006) to discover multi-word Named Entities and to take advantage of a dictionary (if one exists) to handle NEs which are partially or entirely translated. We take advantage of dynamically growing feature space to reduce the number of supervised training examples. 3 Co-Ranking: An Algorithm for NE Discovery 3.1 The algorithm In essence, the algorithm we present uses temporal alignment as a supervision signal to iteratively train a transliteration model. On each iteration, it selects a list of top ranked transliteration candidates for each NE according to the current model (line 6). It then uses temporal alignment (with thresholding) to re-rank the list and select the best transliteration candidate for the next round of training (lines 8, and 9). Once the training is complete, lines 4 through 10 are executed without thresholding for each constituent NE word. If a dictionary is available, transliteration candidate lists on line 6 are augmented with translations. We then combine the best candidates (as chosen on line 8, without thresholding) into complete target language NE. Finally, we discard transliterations which do not actually appear in the target corpus. Input: Bilingual, comparable corpus ( , ), set of named entities from , threshold Output: Transliteration model
Initialize the transliteration model;
for each named entity e, collect its time distribution;
repeat
    D <- {};
    for each named entity e do
        use the current transliteration model to collect a list of candidates r with high transliteration scores;
        for each candidate r, collect its time distribution;
        select the candidate r with the best time-sequence similarity score s;
        if s exceeds the threshold, add the tuple (e, r) to D;
    end
    use D to train the transliteration model
; 11 until D stops changing between iterations ; 12 Algorithm 1: Iterative transliteration model training. 819 3.2 Time sequence generation and matching In order to generate time sequence for a word, we divide the corpus into a sequence of temporal bins, and count the number of occurrences of the word in each bin. We then normalize the sequence. We use a method called the F-index (Hetland, 2004) to implement the similarity function on line 8 of the algorithm. We first run a Discrete Fourier Transform on a time sequence to extract its Fourier expansion coefficients. The score of a pair of time sequences is then computed as a Euclidean distance between their expansion coefficient vectors. 3.2.1 Equivalence Classes As we mentioned in the introduction, an NE may map to more than one transliteration in another language. Identification of the entity’s equivalence class of transliterations is important for obtaining its accurate time sequence. In order to keep to our objective of requiring as little language knowledge as possible, we took a rather simplistic approach for both languages of our corpus. For Russian, two words were considered variants of the same NE if they share a prefix of size five or longer. Each unique word had its own equivalence class for the English side of the corpus, although, in principal, ideas such as in (Li et al., 2004) could be incorporated. A cumulative distribution was then collected for such equivalence classes. 3.3 Transliteration model Unlike most of the previous work considering generative transliteration models, we take the discriminative approach. We train a linear model to decide whether a word is a transliteration of an NE
. The words in the pair are each partitioned into a set of substrings up to a particular length (including the empty string). Couplings of the substrings from both sets produce features we use for training. Note that couplings with the empty string represent insertions/omissions. Consider the following example pair: (powell, pauel). We build a feature vector from this example in the following manner: First, we split both words into all possible substrings of up to size two:
R ! " $#%$#% "! #%&## S R $'(!) $#*+',$'-)!) # S We build a feature vector by coupling substrings from the two sets: ! . / .$'0/213131 "$'4)5/213131 #6 #7/813131 #9#% #:! We use the observation that transliteration tends to preserve phonetic sequence to limit the number of couplings. For example, we can disallow the coupling of substrings whose starting positions are too far apart: thus, we might not consider a pairing !) in the above example. In our experiments, we paired substrings if their positions in their respective words differed by -1, 0, or 1. We use the perceptron (Rosenblatt, 1958) algorithm to train the model. The model activation provides the score we use to select best transliterations on line 6. Our version of perceptron takes variable number of features in its examples; each example is a subset of all features seen so far that are active in the input. As the iterative algorithm observes more data, it discovers and makes use of more features. This model is called the infinite attribute model (Blum, 1992) and it follows the perceptron version of SNoW (Roth, 1998). Positive examples used for iterative training are pairs of NEs and their best temporally aligned (thresholded) transliteration candidates. Negative examples are English non-NEs paired with random Russian words. 4 Experimental Study We ran experiments using a bilingual comparable English-Russian news corpus we built by crawling a Russian news web site (www.lenta.ru). The site provides loose translations of (and pointers to) the original English texts. We collected pairs of articles spanning from 1/1/2001 through 10/05/2005. The corpus consists of 2,327 documents, with 0-8 documents per day. The corpus is available on our web page at http://L2R.cs.uiuc.edu/ ; cogcomp/. The English side was tagged with a publicly available NER system based on the SNoW learning architecture (Roth, 1998), that is available on the same site. This set of English NEs was hand-pruned to remove incorrectly classified words to obtain 978 single word NEs. In order to reduce running time, some limited pre-processing was done on the Russian side. All classes, whose temporal distributions were close to uniform (i.e. words with a similar likelihood of occurrence throughout the corpus) were 820 0 10 20 30 40 50 60 70 80 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 Accuracy (%) Iteration Complete Algorithm Transliteration Model Only Temporal Sequence Only Figure 3: Proportion of correctly discovered NE pairs vs. training iteration. Complete algorithm outperforms both transliteration model and temporal sequence matching when used on their own. deemed common and not considered as NE candidates. Unique words were thus grouped into 14,781 equivalence classes. Unless mentioned otherwise, the transliteration model was initialized with a set of 20 pairs of English NEs and their Russian transliterations. Negative examples here and during the rest of the training were pairs of randomly selected non-NE English and Russian words. New features were discovered throughout training; all but top 3000 features from positive and 3000 from negative examples were pruned based on the number of their occurrences so far. Features remaining at the end of training were used for NE discovery. Insertions/omissions features were not used in the experiments as they provided no tangible benefit for the languages of our corpus. In each iteration, we used the current transliteration model to find a list of 30 best transliteration equivalence classes for each NE. 
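For concreteness, the substring-coupling feature generation of Section 3.3, with the -1/0/+1 position constraint used in these experiments, can be sketched as follows; couplings with the empty string are omitted here, since the insertion/omission features were not used for this language pair.

```python
def substrings_with_positions(word, max_len=2):
    """All substrings of `word` up to length max_len, with their start
    positions (the empty string is omitted in this simplified sketch)."""
    return [(word[i:i + n], i)
            for n in range(1, max_len + 1)
            for i in range(len(word) - n + 1)]

def coupling_features(source, target, max_offset=1, max_len=2):
    """Features for a (source NE, candidate) pair: couplings of substrings
    from the two words whose start positions differ by at most max_offset
    (-1, 0, or +1 above)."""
    feats = set()
    for s, i in substrings_with_positions(source, max_len):
        for t, j in substrings_with_positions(target, max_len):
            if abs(i - j) <= max_offset:
                feats.add((s, t))
    return feats

# The example pair from Section 3.3; the active features form a
# variable-length example for the perceptron (infinite attribute model).
print(len(coupling_features("powell", "pauel")))
```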
We then computed time sequence similarity score between NE and each class from its list to find the one with the best matching time sequence. If its similarity score surpassed a set threshold, it was added to the list of positive examples for the next round of training. Positive examples were constructed by pairing an NE with the common stem of its transliteration equivalence class. We used the same number of positive and negative examples. 0 10 20 30 40 50 60 70 80 0 1 2 3 4 5 Accuracy (%) Iteration 5 examples 20 examples 80 examples Figure 4: Proportion of correctly discovered NE pairs vs. the initial example set size. As long as the size is large enough, decreasing the number of examples does not have a significant impact on the performance of the later iterations. We used the Mueller English-Russian dictionary to obtain translations in our multi-word NE experiments. We only considered the first dictionary definition as a candidate. For evaluation, random 727 of the total of 978 NEs were matched to correct transliterations by a language expert (partly due to the fact that some of the English NEs were not mentioned in the Russian side of the corpus). Accuracy was computed as the percentage of NEs correctly identified by the algorithm. In the multi-word NE experiment, 282 random multi-word (2 or more) NEs and their transliterations/translations discovered by the algorithm were verified by a language expert. 4.1 NE discovery Figure 3 shows the proportion of correctly discovered NE transliteration equivalence classes throughout the training stage. The figure also shows the accuracy if transliterations are selected according to the current transliteration model (top scoring candidate) and temporal sequence matching alone. The transliteration model alone achieves an accuracy of about 38%, while the time sequence alone gets about 41%. The combined algorithm achieves about 63%, giving a significant improvement. 821 Cosine 41.3 5.8 1.7 Pearson 41.1 5.8 1.7 DFT 41.0 12.4 4.8 Table 1: Proportion of correctly discovered NEs vs. corpus misalignment ( ) for each of the three measures. DFT based measure provides significant advantages over commonly used metrics for weakly aligned corpora. Cosine 5.8 13.5 18.4 Pearson 5.8 13.5 18.2 DFT 12.4 20.6 27.9 Table 2: Proportion of correctly discovered NEs vs. sliding window size ( ) for each of the three measures. In order to understand what happens to the transliteration model as the training proceeds, let us consider the following example. Figure 5 shows parts of transliteration lists for NE forsyth for two iterations of the algorithm. The weak transliteration model selects the correct transliteration (italicized) as the 24th best transliteration in the first iteration. Time sequence scoring function chooses it to be one of the training examples for the next round of training of the model. By the eighth iteration, the model has improved to select it as a best transliteration. Not all correct transliterations make it to the top of the candidates list (transliteration model by itself is never as accurate as the complete algorithm on Figure 3). That is not required, however, as the model only needs to be good enough to place the correct transliteration anywhere in the candidate list. Not surprisingly, some of the top transliteration candidates start sounding like the NE itself, as training progresses. On Figure 5, candidates for forsyth on iteration 7 include fross and fossett. 
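The prefix-based grouping of Section 3.2.1, which produced the Russian equivalence classes ranked above, can be sketched as follows; grouping words by their truncated prefix alone is a simplification of the pairwise shared-prefix criterion.

```python
def group_by_prefix(words, prefix_len=5):
    """Knowledge-poor grouping used for the Russian side: words sharing a
    prefix of at least `prefix_len` characters are treated as variants of the
    same NE, so their mention counts can be aggregated into one time sequence."""
    classes = {}
    for w in words:
        key = w[:prefix_len]
        classes.setdefault(key, set()).add(w)
    return classes
```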
Once the transliteration model was trained, we ran the algorithm to discover multi-word NEs, augmenting candidate sets of dictionary words with their translations as described in Section 3.1. We achieved the accuracy of about 66%. The correctly discovered Russian NEs included entirely transliterated, partially translated, and entirely translated NEs. Some of them are shown on Figure 6. 4.2 Initial example set size We ran a series of experiments to see how the size of the initial training set affects the accuracy of the model as training progresses (Figure 4). Although the performance of the early iterations is significantly affected by the size of the initial training example set, the algorithm quickly improves its performance. As we decrease the size from 80 to 20, the accuracy of the first iteration drops by over 20%, but a few iterations later the two have similar performance. However, when initialized with the set of size 5, the algorithm never manages to improve. The intuition is the following. The few examples in the initial training set produce features corresponding to substring pairs characteristic for English-Russian transliterations. Model trained on these (few) examples chooses other transliterations containing these same substring pairs. In turn, the chosen positive examples contain other characteristic substring pairs, which will be used by the model to select more positive examples on the next round, and so on. On the other hand, if the initial set is too small, too few of the characteristic transliteration features are extracted to select a clean enough training set on the next round of training. In general, one would expect the size of the training set necessary for the algorithm to improve to depend on the level of temporal alignment of the two sides of the corpus. Indeed, the weaker the temporal supervision the more we need to endow the model so that it can select cleaner candidates in the early iterations. 4.3 Comparison of time sequence scoring functions We compared the performance of the DFT-based time sequence similarity scoring function we use in this paper to the commonly used cosine (Salton and McGill, 1986) and Pearson’s correlation measures. We perturbed the Russian side of the corpus in the following way. Articles from each day were randomly moved (with uniform probability) within a -day window. We ran single word NE temporal sequence matching alone on the perturbed corpora using each of the three measures (Table 1). Some accuracy drop due to misalignment could be accommodated for by using a larger temporal 822
! #"$%"$&'%"$&(!)*%"&(!+&-, .0/21436578:9#;<5=>;$=?;A@B C DE*FG #"H*I*%J"HI+*+'%"+HK%"+*LM, C JDEJ*FN #"H*I*%"H*I++'%J"+HK%"$+LM*, O *P&*IQ "$R*%"2, O S*TU "VJJFW%J"VR%J"H+'%"$LTYX-%"$VTE%-ZZZ, [ D\H! #"$I*J%"$I]'%*"2%"$I]+*+-, [ DEJ ^ _ DEJ#L! #"$L%J"LR*%"LJ`J%J"$R*%"`, a b C [ .0/21-36578c9;<5d=?;$=e;A@B f g h Figure 5: Transliteration lists for forsyth for two iterations of the algorithm. As transliteration model improves, the correct transliteration moves up the list. bin for collecting occurrence counts. We tried various (sliding) window size for a perturbed corpus with (Table 2). DFT metric outperforms the other measures significantly in most cases. NEs tend to have distributions with few pronounced peaks. If two such distributions are not well aligned, we expect both Pearson and Cosine measures to produce low scores, whereas the DFT metric should catch their similarities in the frequency domain. 5 Conclusions We have proposed a novel algorithm for cross lingual multi-word NE discovery in a bilingual weakly temporally aligned corpus. We have demonstrated that using two independent sources of information (transliteration and temporal similarity) together to guide NE extraction gives better performance than using either of them alone (see Figure 3). We developed a linear discriminative transliteration model, and presented a method to automatically generate features. For time sequence matching, we used a scoring metric novel in this domain. We provided experimental evidence that this metric outperforms other scoring metrics traditionally used. In keeping with our objective to provide as little language knowledge as possible, we introduced a simplistic approach to identifying transliteration equivalence classes, which sometimes produced erroneous groupings (e.g. an equivalence class for NE congolese in Russian included both congo and congolese on Figure 6). We expect that more language specific knowledge used to discover accurate equivalence classes would result in performance improvements. Other type of supervision was in the form of a ikjmlon\pqi rtsmuJnvpqixwy-soz|{?nE}~|*uu #
Figure 6: Example of correct transliterations discovered by the algorithm.

6 Future Work

The algorithm can be naturally extended to comparable corpora of more than two languages. Pair-wise time sequence scoring and transliteration models should give better confidence in NE matches. The ultimate goal of this work is to automatically tag NEs so that they can be used for training of an NER system for a new language. To this end, we would like to compare the performance of an NER system trained on a corpus tagged using this approach to one trained on a hand-tagged corpus.

7 Acknowledgments

We thank Richard Sproat, ChengXiang Zhai, and Kevin Small for their useful feedback during this work, and the anonymous referees for their helpful comments. This research is supported by the Advanced Research and Development Activity (ARDA)'s Advanced Question Answering for Intelligence (AQUAINT) Program and a DOI grant under the Reflex program.

References

Nasreen AbdulJaleel and Leah S. Larkey. 2003. Statistical transliteration for English-Arabic cross language information retrieval. In Proceedings of CIKM, pages 139–146, New York, NY, USA.
George Arfken. 1985. Mathematical Methods for Physicists. Academic Press.
Avrim Blum. 1992. Learning Boolean functions in an infinite attribute space. Machine Learning, 9(4):373–386.
Michael Collins and Yoram Singer. 1999. Unsupervised models for named entity classification. In Proc. of the Conference on Empirical Methods for Natural Language Processing (EMNLP).
Silviu Cucerzan and David Yarowsky. 1999. Language independent named entity recognition combining morphological and contextual evidence. In Proc. of the Conference on Empirical Methods for Natural Language Processing (EMNLP).
Magnus Lie Hetland. 2004. Data Mining in Time Series Databases, chapter A Survey of Recent Methods for Efficient Retrieval of Similar Time Sequences. World Scientific.
Sung Young Jung, SungLim Hong, and Eunok Paek. 2000. An English to Korean transliteration model of extended Markov window. In Proc. of the International Conference on Computational Linguistics (COLING), pages 383–389.
Alexandre Klementiev and Dan Roth. 2006. Named entity transliteration and discovery from multilingual comparable corpora. In Proc. of the Annual Meeting of the North American Association of Computational Linguistics (NAACL).
Kevin Knight and Jonathan Graehl. 1997. Machine transliteration. In Proc. of the Meeting of the European Association of Computational Linguistics, pages 128–135.
Xin Li, Paul Morie, and Dan Roth. 2004. Identification and tracing of ambiguous names: Discriminative and generative approaches. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 419–424.
Robert C. Moore. 2005. A discriminative framework for bilingual word alignment. In Proc. of the Conference on Empirical Methods for Natural Language Processing (EMNLP), pages 81–88.
Frank Rosenblatt. 1958. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65.
Dan Roth. 1998. Learning to resolve natural language ambiguities: A unified approach. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 806–813.
Dan Roth. 1999. Learning in natural language. In Proc. of the International Joint Conference on Artificial Intelligence (IJCAI), pages 898–904.
Gerard Salton and Michael J. McGill. 1986. Introduction to Modern Information Retrieval.
McGraw-Hill, Inc., New York, NY, USA.
Yusuke Shinyama and Satoshi Sekine. 2004. Named entity discovery using comparable news articles. In Proc. of the International Conference on Computational Linguistics (COLING), pages 848–853.
Ben Taskar, Simon Lacoste-Julien, and Michael Jordan. 2005. Structured prediction via the extragradient method. In The Conference on Advances in Neural Information Processing Systems (NIPS). MIT Press.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 825–832, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Composite Kernel to Extract Relations between Entities with both Flat and Structured Features Min Zhang Jie Zhang Jian Su Guodong Zhou Institute for Infocomm Research 21 Heng Mui Keng Terrace, Singapore 119613 {mzhang, zhangjie, sujian, zhougd}@i2r.a-star.edu.sg Abstract This paper proposes a novel composite kernel for relation extraction. The composite kernel consists of two individual kernels: an entity kernel that allows for entity-related features and a convolution parse tree kernel that models syntactic information of relation examples. The motivation of our method is to fully utilize the nice properties of kernel methods to explore diverse knowledge for relation extraction. Our study illustrates that the composite kernel can effectively capture both flat and structured features without the need for extensive feature engineering, and can also easily scale to include more features. Evaluation on the ACE corpus shows that our method outperforms the previous best-reported methods and significantly outperforms previous two dependency tree kernels for relation extraction. 1 Introduction The goal of relation extraction is to find various predefined semantic relations between pairs of entities in text. The research on relation extraction has been promoted by the Message Understanding Conferences (MUCs) (MUC, 19871998) and Automatic Content Extraction (ACE) program (ACE, 2002-2005). According to the ACE Program, an entity is an object or set of objects in the world and a relation is an explicitly or implicitly stated relationship among entities. For example, the sentence “Bill Gates is chairman and chief software architect of Microsoft Corporation.” conveys the ACE-style relation “EMPLOYMENT.exec” between the entities “Bill Gates” (PERSON.Name) and “Microsoft Corporation” (ORGANIZATION. Commercial). In this paper, we address the problem of relation extraction using kernel methods (Schölkopf and Smola, 2001). Many feature-based learning algorithms involve only the dot-product between feature vectors. Kernel methods can be regarded as a generalization of the feature-based methods by replacing the dot-product with a kernel function between two vectors, or even between two objects. A kernel function is a similarity function satisfying the properties of being symmetric and positive-definite. Recently, kernel methods are attracting more interests in the NLP study due to their ability of implicitly exploring huge amounts of structured features using the original representation of objects. For example, the kernels for structured natural language data, such as parse tree kernel (Collins and Duffy, 2001), string kernel (Lodhi et al., 2002) and graph kernel (Suzuki et al., 2003) are example instances of the wellknown convolution kernels1 in NLP. In relation extraction, typical work on kernel methods includes: Zelenko et al. (2003), Culotta and Sorensen (2004) and Bunescu and Mooney (2005). This paper presents a novel composite kernel to explore diverse knowledge for relation extraction. The composite kernel consists of an entity kernel and a convolution parse tree kernel. Our study demonstrates that the composite kernel is very effective for relation extraction. 
It also shows without the need for extensive feature engineering the composite kernel can not only capture most of the flat features used in the previous work but also exploit the useful syntactic structure features effectively. An advantage of our method is that the composite kernel can easily cover more knowledge by introducing more kernels. Evaluation on the ACE corpus shows that our method outperforms the previous bestreported methods and significantly outperforms the previous kernel methods due to its effective exploration of various syntactic features. The rest of the paper is organized as follows. In Section 2, we review the previous work. Section 3 discusses our composite kernel. Section 4 reports the experimental results and our observations. Section 5 compares our method with the 1 Convolution kernels were proposed for a discrete structure by Haussler (1999) in the machine learning field. This framework defines a kernel between input objects by applying convolution “sub-kernels” that are the kernels for the decompositions (parts) of the objects. 825 previous work from the viewpoint of feature exploration. We conclude our work and indicate the future work in Section 6. 2 Related Work Many techniques on relation extraction, such as rule-based (MUC, 1987-1998; Miller et al., 2000), feature-based (Kambhatla 2004; Zhou et al., 2005) and kernel-based (Zelenko et al., 2003; Culotta and Sorensen, 2004; Bunescu and Mooney, 2005), have been proposed in the literature. Rule-based methods for this task employ a number of linguistic rules to capture various relation patterns. Miller et al. (2000) addressed the task from the syntactic parsing viewpoint and integrated various tasks such as POS tagging, NE tagging, syntactic parsing, template extraction and relation extraction using a generative model. Feature-based methods (Kambhatla, 2004; Zhou et al., 2005; Zhao and Grishman, 20052) for this task employ a large amount of diverse linguistic features, such as lexical, syntactic and semantic features. These methods are very effective for relation extraction and show the bestreported performance on the ACE corpus. However, the problems are that these diverse features have to be manually calibrated and the hierarchical structured information in a parse tree is not well preserved in their parse tree-related features, which only represent simple flat path information connecting two entities in the parse tree through a path of non-terminals and a list of base phrase chunks. Prior kernel-based methods for this task focus on using individual tree kernels to exploit tree structure-related features. Zelenko et al. (2003) developed a kernel over parse trees for relation extraction. The kernel matches nodes from roots to leaf nodes recursively layer by layer in a topdown manner. Culotta and Sorensen (2004) generalized it to estimate similarity between dependency trees. Their tree kernels require the matchable nodes to be at the same layer counting from the root and to have an identical path of ascending nodes from the roots to the current nodes. The two constraints make their kernel high precision but very low recall on the ACE 2003 corpus. Bunescu and Mooney (2005) proposed another dependency tree kernel for relation extraction. 2 We classify the feature-based kernel defined in (Zhao and Grishman, 2005) into the feature-based methods since their kernels can be easily represented by the dot-products between explicit feature vectors. 
Their kernel simply counts the number of common word classes at each position in the shortest paths between two entities in dependency trees. The kernel requires the two paths to have the same length; otherwise the kernel value is zero. Therefore, although this kernel shows a performance improvement over the previous one (Culotta and Sorensen, 2004), the constraint makes the two dependency kernels share a similar behavior: good precision but much lower recall on the ACE corpus.

The above discussion shows that, although kernel methods can explore huge amounts of implicit (structured) features, until now the feature-based methods have enjoyed more success. One may ask: how can we make full use of the nice properties of kernel methods and define an effective kernel for relation extraction? In this paper, we study how relation extraction can benefit from the elegant properties of kernel methods: 1) implicitly exploring (structured) features in a high-dimensional space; and 2) the nice mathematical properties, for example, that the sum, product, normalization and polynomial expansion of existing kernels is a valid kernel (Schölkopf and Smola, 2001). We also demonstrate how our composite kernel effectively captures the diverse knowledge for relation extraction.

3 Composite Kernel for Relation Extraction

In this section, we define the composite kernel and study the effective representation of a relation instance.

3.1 Composite Kernel

Our composite kernel consists of an entity kernel and a convolution parse tree kernel. To our knowledge, convolution kernels have not been explored for relation extraction.

(1) Entity Kernel: The ACE 2003 data defines four entity features: entity headword, entity type and subtype (only for GPE), and mention type, while the ACE 2004 data makes some modifications and introduces a new feature "LDC mention type". Our statistics on the ACE data reveal that the entity features impose a strong constraint on relation types. Therefore, we design a linear kernel to explicitly capture such features:

K_L(R_1, R_2) = \sum_{i=1,2} K_E(R_1.E_i, R_2.E_i)    (1)

where R_1 and R_2 stand for two relation instances, E_i means the i-th entity of a relation instance, and K_E(\cdot,\cdot) is a simple kernel function over the features of entities:

K_E(E_1, E_2) = \sum_i C(E_1.f_i, E_2.f_i)    (2)

where f_i represents the i-th entity feature, and the function C(\cdot,\cdot) returns 1 if the two feature values are identical and 0 otherwise. K_E(\cdot,\cdot) returns the number of feature values that two entities have in common.

(2) Convolution Parse Tree Kernel: A convolution kernel aims to capture structured information in terms of substructures. Here we use the same convolution parse tree kernel as described in Collins and Duffy (2001) for syntactic parsing and Moschitti (2004) for semantic role labeling. Generally, we can represent a parse tree T by a vector of integer counts of each sub-tree type (regardless of its ancestors):

\phi(T) = (\#subtree_1(T), \ldots, \#subtree_i(T), \ldots, \#subtree_n(T))

where \#subtree_i(T) is the number of occurrences of the i-th sub-tree type (subtree_i) in T. Since the number of different sub-trees is exponential in the parse tree size, it is computationally infeasible to use the feature vector \phi(T) directly. To solve this computational issue, Collins and Duffy (2001) proposed the following parse tree kernel to calculate the dot product between the above high-dimensional vectors implicitly:
K(T_1, T_2) = \langle \phi(T_1), \phi(T_2) \rangle
            = \sum_i \#subtree_i(T_1) \cdot \#subtree_i(T_2)
            = \sum_i \Big( \sum_{n_1 \in N_1} I_{subtree_i}(n_1) \Big) \cdot \Big( \sum_{n_2 \in N_2} I_{subtree_i}(n_2) \Big)
            = \sum_{n_1 \in N_1} \sum_{n_2 \in N_2} \Delta(n_1, n_2)    (3)

where N_1 and N_2 are the sets of nodes in trees T_1 and T_2, respectively, I_{subtree_i}(n) is a function that is 1 iff subtree_i occurs with its root at node n and zero otherwise, and \Delta(n_1, n_2) is the number of common sub-trees rooted at n_1 and n_2, i.e.,

\Delta(n_1, n_2) = \sum_i I_{subtree_i}(n_1) \cdot I_{subtree_i}(n_2).

\Delta(n_1, n_2) can be computed by the following recursive rules:
(1) if the productions (CFG rules) at n_1 and n_2 are different, \Delta(n_1, n_2) = 0;
(2) else if both n_1 and n_2 are pre-terminals (POS tags), \Delta(n_1, n_2) = 1 \times \lambda;
(3) else, \Delta(n_1, n_2) = \lambda \prod_{j=1}^{nc(n_1)} (1 + \Delta(ch(n_1, j), ch(n_2, j))),
where nc(n_1) is the number of children of n_1, ch(n, j) is the j-th child of node n, and \lambda (0 < \lambda < 1) is the decay factor that makes the kernel value less variable with respect to the sub-tree sizes. In addition, the recursive rule (3) holds because, given two nodes with the same children, one can construct common sub-trees using these children and common sub-trees of further offspring. The parse tree kernel counts the number of common sub-trees as the syntactic similarity measure between two relation instances. The time complexity for computing this kernel is O(|N_1| \cdot |N_2|).

In this paper, two composite kernels are defined by combining the above two individual kernels in the following ways:

1) Linear combination:

K_1(R_1, R_2) = \alpha \cdot \hat{K}_L(R_1, R_2) + (1 - \alpha) \cdot \hat{K}(T_1, T_2)    (4)

Here, \hat{K}(\cdot,\cdot) is the normalized^3 K(\cdot,\cdot) and \alpha is the coefficient. Evaluation on the development set shows that this composite kernel yields the best performance when \alpha is set to 0.4.

2) Polynomial expansion:

K_2(R_1, R_2) = \alpha \cdot \hat{K}_P(R_1, R_2) + (1 - \alpha) \cdot \hat{K}(T_1, T_2)    (5)

Here, \hat{K}(\cdot,\cdot) is the normalized K(\cdot,\cdot), K_P(\cdot,\cdot) is the polynomial expansion of K_L(\cdot,\cdot) with degree d = 2, i.e., K_P(\cdot,\cdot) = (K_L(\cdot,\cdot) + 1)^2, and \alpha is the coefficient. Evaluation on the development set shows that this composite kernel yields the best performance when \alpha is set to 0.23. The polynomial expansion aims to explore the entity bi-gram features, esp. the combined features from the first and second entities, respectively. In addition, due to the different scales of the values of the two individual kernels, they are normalized before combination. This avoids one kernel value being overwhelmed by that of the other.

The entity kernel formulated by eqn. (1) is a proper kernel since it simply calculates the dot product of the entity feature vectors. The tree kernel formulated by eqn. (3) is proven to be a proper kernel (Collins and Duffy, 2001). Since the kernel function set is closed under normalization, polynomial expansion and linear combination (Schölkopf and Smola, 2001), the two composite kernels are also proper kernels.

^3 A kernel K(x, y) can be normalized by dividing it by \sqrt{K(x, x) \cdot K(y, y)}.

3.2 Relation Instance Spaces

A relation instance is encapsulated by a parse tree. Thus, it is critical to understand which portion of a parse tree is important in the kernel calculation. We study five cases, as shown in Fig. 1.

(1) Minimum Complete Tree (MCT): the complete sub-tree rooted by the nearest common ancestor of the two entities under consideration.
(2) Path-enclosed Tree (PT): the smallest common sub-tree including the two entities. In other words, the sub-tree is enclosed by the shortest path linking the two entities in the parse tree (this path is also commonly-used as the path tree feature in the feature-based methods). (3) Context-Sensitive Path Tree (CPT): the PT extended with the 1st left word of entity 1 and the 1st right word of entity 2. (4) Flattened Path-enclosed Tree (FPT): the PT with the single in and out arcs of nonterminal nodes (except POS nodes) removed. (5) Flattened CPT (FCPT): the CPT with the single in and out arcs of non-terminal nodes (except POS nodes) removed. Fig. 1 illustrates different representations of an example relation instance. T1 is MCT for the relation instance, where the sub-tree circled by a dashed line is PT, which is also shown in T2 for clarity. The only difference between MCT and PT lies in that MCT does not allow partial production rules (for example, NPÆPP is a partial production rule while NPÆNP+PP is an entire production rule in the top of T2). For instance, only the most-right child in the most-left sub-tree [NP [CD 200] [JJ domestic] [E1-PER …]] of T1 is kept in T2. By comparing the performance of T1 and T2, we can evaluate the effect of sub-trees with partial production rules as shown in T2 and the necessity of keeping the whole left and right context sub-trees as shown in T1 in relation extraction. T3 is CPT, where the two sub-trees circled by dashed lines are included as the context to T2 and make T3 context-sensitive. This is to evaluate whether the limited context information in CPT can boost performance. FPT in T4 is formed by removing the two circled nodes in T2. This is to study whether and how the elimination of single non-terminal nodes affects the performance of relation extraction. T1): MCT T2): PT T3):CPT T4): FPT Figure 1. Different representations of a relation instance in the example sentence “…provide benefits to 200 domestic partners of their own workers in New York”, where the phrase type “E1-PER” denotes that the current node is the 1st entity with type “PERSON”, and likewise for the others. The relation instance is excerpted from the ACE 2003 corpus, where a relation “SOCIAL.Other-Personal” exists between entities “partners” (PER) and “workers” (PER). We use Charniak’s parser (Charniak, 2001) to parse the example sentence. To save space, the FCPT is not shown here. 828 4 Experiments 4.1 Experimental Setting Data: We use the English portion of both the ACE 2003 and 2004 corpora from LDC in our experiments. In the ACE 2003 data, the training set consists of 674 documents and 9683 relation instances while the test set consists of 97 documents and 1386 relation instances. The ACE 2003 data defines 5 entity types, 5 major relation types and 24 relation subtypes. The ACE 2004 data contains 451 documents and 5702 relation instances. It redefines 7 entity types, 7 major relation types and 23 subtypes. Since Zhao and Grishman (2005) use a 5-fold cross-validation on a subset of the 2004 data (newswire and broadcast news domains, containing 348 documents and 4400 relation instances), for comparison, we use the same setting (5-fold cross-validation on the same subset of the 2004 data, but the 5 partitions may not be the same) for the ACE 2004 data. Both corpora are parsed using Charniak’s parser (Charniak, 2001). We iterate over all pairs of entity mentions occurring in the same sentence to generate potential relation instances. 
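Before turning to the classifier setup, the kernels of Section 3.1 can be pulled together in a compact sketch. This is an illustrative reimplementation only, under stated assumptions: the Node class, the dictionary layout of a relation instance, and all function names are ours rather than those of any toolkit; α = 0.4 follows the value reported for Equation (4), and λ = 0.4 matches the tree-kernel decay factor used in the experiments described below.

```python
from dataclasses import dataclass, field
from typing import Dict, List
import math

@dataclass
class Node:
    label: str                      # non-terminal, POS tag, or word token
    children: List["Node"] = field(default_factory=list)

def production(n: Node) -> str:
    # The production rooted at n, e.g. "NP -> DT NN" (words included for POS nodes).
    return n.label + " -> " + " ".join(c.label for c in n.children)

def is_preterminal(n: Node) -> bool:
    return bool(n.children) and all(not c.children for c in n.children)

def delta(n1: Node, n2: Node, lam: float = 0.4) -> float:
    # Delta(n1, n2) of Section 3.1, rules (1)-(3), with decay factor lam.
    if production(n1) != production(n2):
        return 0.0                                   # rule (1)
    if is_preterminal(n1) and is_preterminal(n2):
        return lam                                   # rule (2)
    score = lam                                      # rule (3)
    for c1, c2 in zip(n1.children, n2.children):
        score *= 1.0 + delta(c1, c2, lam)
    return score

def nodes(t: Node) -> List[Node]:
    # All internal nodes of t (lexical leaves contribute nothing to Delta).
    out, stack = [], [t]
    while stack:
        n = stack.pop()
        if n.children:
            out.append(n)
            stack.extend(n.children)
    return out

def tree_kernel(t1: Node, t2: Node, lam: float = 0.4) -> float:
    # Equation (3): K(T1, T2) = sum over node pairs of Delta(n1, n2).
    return sum(delta(a, b, lam) for a in nodes(t1) for b in nodes(t2))

def entity_kernel(r1: Dict, r2: Dict) -> float:
    # Equations (1)-(2): count of entity feature values shared by R1 and R2.
    return float(sum(sum(a == b for a, b in zip(f1, f2))
                     for f1, f2 in zip(r1["entities"], r2["entities"])))

def tree_k(r1: Dict, r2: Dict) -> float:
    return tree_kernel(r1["tree"], r2["tree"])

def normalized(k, r1: Dict, r2: Dict) -> float:
    denom = math.sqrt(k(r1, r1) * k(r2, r2))
    return k(r1, r2) / denom if denom else 0.0

def composite_linear(r1: Dict, r2: Dict, alpha: float = 0.4) -> float:
    # Equation (4): alpha * normalized K_L + (1 - alpha) * normalized tree kernel.
    return alpha * normalized(entity_kernel, r1, r2) + \
           (1.0 - alpha) * normalized(tree_k, r1, r2)
```

A relation instance would be passed in as {"entities": [features of E1, features of E2], "tree": Node(...)}. The polynomial-expansion variant of Equation (5) would replace entity_kernel with (entity_kernel + 1)^2 before normalization; in the experiments themselves the kernel computation is delegated to SVMLight and the Tree Kernel Tools, as noted in the implementation details that follow.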
In this paper, we only measure the performance of relation extraction models on “true” mentions with “true” chaining of coreference (i.e. as annotated by LDC annotators). Implementation: We formalize relation extraction as a multi-class classification problem. SVM is selected as our classifier. We adopt the one vs. others strategy and select the one with the largest margin as the final answer. The training parameters are chosen using cross-validation (C=2.4 (SVM); λ =0.4(tree kernel)). In our implementation, we use the binary SVMLight (Joachims, 1998) and Tree Kernel Tools (Moschitti, 2004). Precision (P), Recall (R) and F-measure (F) are adopted to measure the performance. 4.2 Experimental Results In this subsection, we report the experiments of different kernel setups for different purposes. (1) Tree Kernel only over Different Relation Instance Spaces: In order to better study the impact of the syntactic structure information in a parse tree on relation extraction, we remove the entity-related information from parse trees by replacing the entity-related phrase types (“E1PER” and so on as shown in Fig. 1) with “NP”. Table 1 compares the performance of 5 tree kernel setups on the ACE 2003 data using the tree structure information only. It shows that: • Overall the five different relation instance spaces are all somewhat effective for relation extraction. This suggests that structured syntactic information has good predication power for relation extraction and the structured syntactic information can be well captured by the tree kernel. • MCT performs much worse than the others. The reasons may be that MCT includes too much left and right context information, which may introduce many noisy features and cause over-fitting (high precision and very low recall as shown in Table 1). This suggests that only keeping the complete (not partial) production rules in MCT does harm performance. • PT achieves the best performance. This means that only keeping the portion of a parse tree enclosed by the shortest path between entities can model relations better than all others. This may be due to that most significant information is with PT and including context information may introduce too much noise. Although context may include some useful information, it is still a problem to correctly utilize such useful information in the tree kernel for relation extraction. • CPT performs a bit worse than PT. In some cases (e.g. in sentence “the merge of company A and company B….”, “merge” is a critical context word), the context information is helpful. However, the effective scope of context is hard to determine given the complexity and variability of natural languages. • The two flattened trees perform worse than the original trees. This suggests that the single nonterminal nodes are useful for relation extraction. Evaluation on the ACE 2004 data also shows that PT achieves the best performance (72.5/56.7 /63.6 in P/R/F). More evaluations with the entity type and order information incorporated into tree nodes (“E1-PER”, “E2-PER” and “E-GPE” as shown in Fig. 1) also show that PT performs best with 76.1/62.6/68.7 in P/R/F on the 2003 data and 74.1/62.4/67.7 in P/R/F on the 2004 data. Instance Spaces P(%) R(%) F Minimum Complete Tree (MCT) 77.5 38.4 51.3 Path-enclosed Tree (PT) 72.8 53.8 61.9 Context-Sensitive PT(CPT) 75.9 48.6 59.2 Flattened PT 72.7 51.7 60.4 Flattened CPT 76.1 47.2 58.2 Table 1. 
five different tree kernel setups on the ACE 2003 five major types using the parse tree structure information only (regardless of any entity-related information) 829 PTs (with Tree Structure Information only) P(%) R(%) F Entity kernel only 75.1 (79.5) 42.7 (34.6) 54.4 (48.2) Tree kernel only 72.5 (72.8) 56.7 (53.8) 63.6 (61.9) Composite kernel 1 (linear combination) 73.5 (76.3) 67.0 (63.0) 70.1 (69.1) Composite kernel 2 (polynomial expansion) 76.1 (77.3) 68.4 (65.6) 72.1 (70.9) Table 2. Performance comparison of different kernel setups over the ACE major types of both the 2003 data (the numbers in parentheses) and the 2004 data (the numbers outside parentheses) (2) Composite Kernels: Table 2 compares the performance of different kernel setups on the ACE major types. It clearly shows that: • The composite kernels achieve significant performance improvement over the two individual kernels. This indicates that the flat and the structured features are complementary and the composite kernels can well integrate them: 1) the flat entity information captured by the entity kernel; 2) the structured syntactic connection information between the two entities captured by the tree kernel. • The composite kernel via the polynomial expansion outperforms the one via the linear combination by ~2 in F-measure. It suggests that the bi-gram entity features are very useful. • The entity features are quite useful, which can achieve F-measures of 54.4/48.2 alone and can boost the performance largely by ~7 (70.163.2/69.1-61.9) in F-measure when combining with the tree kernel. • It is interesting that the ACE 2004 data shows consistent better performance on all setups than the 2003 data although the ACE 2003 data is two times larger than the ACE 2004 data. This may be due to two reasons: 1) The ACE 2004 data defines two new entity types and re-defines the relation types and subtypes in order to reduce the inconsistency between LDC annotators. 2) More importantly, the ACE 2004 data defines 43 entity subtypes while there are only 3 subtypes in the 2003 data. The detailed classification in the 2004 data leads to significant performance improvement of 6.2 (54.4-48.2) in Fmeasure over that on the 2003 data. Our composite kernel can achieve 77.3/65.6/70.9 and 76.1/68.4/72.1 in P/R/F over the ACE 2003/2004 major types, respectively. Methods (2002/2003 data) P(%) R(%) F Ours: composite kernel 2 (polynomial expansion) 77.3 (64.9) 65.6 (51.2) 70.9 (57.2) Zhou et al. (2005): feature-based SVM 77.2 (63.1) 60.7 (49.5) 68.0 (55.5) Kambhatla (2004): feature-based ME (-) (63.5) (-) (45.2) (-) (52.8) Ours: tree kernel with entity information at node 76.1 (62.4) 62.6 (48.5) 68.7 (54.6) Bunescu and Mooney (2005): shortest path dependency kernel 65.5 (-) 43.8 (-) 52.5 (-) Culotta and Sorensen (2004): dependency kernel 67.1 (-) 35.0 (-) 45.8 (-) Table 3. Performance comparison on the ACE 2003/2003 data over both 5 major types (the numbers outside parentheses) and 24 subtypes (the numbers in parentheses) Methods (2004 data) P(%) R(%) F Ours: composite kernel 2 (polynomial expansion) 76.1 (68.6) 68.4 (59.3) 72.1 (63.6) Zhao and Grishman (2005): feature-based kernel 69.2 (-) 70.5 (-) 70.4 (-) Table 4. Performance comparison on the ACE 2004 data over both 7 major types (the numbers outside parentheses) and 23 subtypes (the numbers in parentheses) (3) Performance Comparison: Tables 3 and 4 compare our method with previous work on the ACE 2002/2003/2004 data, respectively. 
They show that our method outperforms the previous methods and significantly outperforms the previous two dependency kernels4. This may be due to two reasons: 1) the dependency tree (Culotta and Sorensen, 2004) and the shortest path (Bunescu and Mooney, 2005) lack the internal hierarchical phrase structure information, so their corresponding kernels can only carry out node-matching directly over the nodes with word tokens; 2) the parse tree kernel has less constraints. That is, it is 4 Bunescu and Mooney (2005) used the ACE 2002 corpus, including 422 documents, which is known to have many inconsistencies than the 2003 version. Culotta and Sorensen (2004) used a generic ACE corpus including about 800 documents (no corpus version is specified). Since the testing corpora are in different sizes and versions, strictly speaking, it is not ready to compare these methods exactly and fairly. Therefore Table 3 is only for reference purpose. We just hope that we can get a few clues from this table. 830 not restricted by the two constraints of the two dependency kernels (identical layer and ancestors for the matchable nodes and identical length of two shortest paths, as discussed in Section 2). The above experiments verify the effectiveness of our composite kernels for relation extraction. They suggest that the parse tree kernel can effectively explore the syntactic features which are critical for relation extraction. # of error instances Error Type 2004 data 2003 data False Negative 198 416 False Positive 115 171 Cross Type 62 96 Table 5. Error distribution of major types on both the 2003 and 2004 data for the composite kernel by polynomial expansion (4) Error Analysis: Table 5 reports the error distribution of the polynomial composite kernel over the major types on the ACE data. It shows that 83.5%(198+115/198+115+62) / 85.8%(416 +171/416+171+96) of the errors result from relation detection and only 16.5%/14.2% of the errors result from relation characterization. This may be due to data imbalance and sparseness issues since we find that the negative samples are 8 times more than the positive samples in the training set. Nevertheless, it clearly directs our future work. 5 Discussion In this section, we compare our method with the previous work from the feature engineering viewpoint and report some other observations and issues in our experiments. 5.1 Comparison with Previous Work This is to explain more about why our method performs better and significantly outperforms the previous two dependency tree kernels from the theoretical viewpoint. (1) Compared with Feature-based Methods: The basic difference lies in the relation instance representation (parse tree vs. feature vector) and the similarity calculation mechanism (kernel function vs. dot-product). The main difference is the different feature spaces. Regarding the parse tree features, our method implicitly represents a parse tree by a vector of integer counts of each sub-tree type, i.e., we consider the entire sub-tree types and their occurring frequencies. In this way, the parse tree-related features (the path features and the chunking features) used in the featurebased methods are embedded (as a subset) in our feature space. Moreover, the in-between word features and the entity-related features used in the feature-based methods are also captured by the tree kernel and the entity kernel, respectively. 
Therefore our method has the potential of effectively capturing not only most of the previous flat features but also the useful syntactic structure features. (2) Compared with Previous Kernels: Since our method only counts the occurrence of each sub-tree without considering the layer and the ancestors of the root node of the sub-tree, our method is not limited by the constraints (identical layer and ancestors for the matchable nodes, as discussed in Section 2) in Culotta and Sorensen (2004). Moreover, the difference between our method and Bunescu and Mooney (2005) is that their kernel is defined on the shortest path between two entities instead of the entire subtrees. However, the path does not maintain the tree structure information. In addition, their kernel requires the two paths to have the same length. Such constraint is too strict. 5.2 Other Issues (1) Speed Issue: The recursively-defined convolution kernel is much slower compared to feature-based classifiers. In this paper, the speed issue is solved in three ways. First, the inclusion of the entity kernel makes the composite kernel converge fast. Furthermore, we find that the small portion (PT) of a full parse tree can effectively represent a relation instance. This significantly improves the speed. Finally, the parse tree kernel requires exact match between two subtrees, which normally does not occur very frequently. Collins and Duffy (2001) report that in practice, running time for the parse tree kernel is more close to linear (O(|N1|+|N2|), rather than O(|N1|*|N2| ). As a result, using the PC with Intel P4 3.0G CPU and 2G RAM, our system only takes about 110 minutes and 30 minutes to do training on the ACE 2003 (~77k training instances) and 2004 (~33k training instances) data, respectively. (2) Further Improvement: One of the potential problems in the parse tree kernel is that it carries out exact matches between sub-trees, so that this kernel fails to handle sparse phrases (i.e. “a car” vs. “a red car”) and near-synonymic grammar tags (for example, the variations of a verb (i.e. go, went, gone)). To some degree, it could possibly lead to over-fitting and compromise the per831 formance. However, the above issues can be handled by allowing grammar-driven partial rule matching and other approximate matching mechanisms in the parse tree kernel calculation. Finally, it is worth noting that by introducing more individual kernels our method can easily scale to cover more features from a multitude of sources (e.g. Wordnet, gazetteers, etc) that can be brought to bear on the task of relation extraction. In addition, we can also easily implement the feature weighting scheme by adjusting the eqn.(2) and the rule (2) in calculating 1 2 ( , ) n n ∆ (see subsection 3.1). 6 Conclusion and Future Work Kernel functions have nice properties. In this paper, we have designed a composite kernel for relation extraction. Benefiting from the nice properties of the kernel methods, the composite kernel could well explore and combine the flat entity features and the structured syntactic features, and therefore outperforms previous bestreported feature-based methods on the ACE corpus. To our knowledge, this is the first research to demonstrate that, without the need for extensive feature engineering, an individual tree kernel achieves comparable performance with the feature-based methods. This shows that the syntactic features embedded in a parse tree are particularly useful for relation extraction and which can be well captured by the parse tree kernel. 
In addition, we find that the relation instance representation (selecting effective portions of parse trees for kernel calculations) is very important for relation extraction. The most immediate extension of our work is to improve the accuracy of relation detection. This can be done by capturing more features by including more individual kernels, such as the WordNet-based semantic kernel (Basili et al., 2005) and other feature-based kernels. We can also benefit from machine learning algorithms to study how to solve the data imbalance and sparseness issues from the learning algorithm viewpoint. In the future work, we will design a more flexible tree kernel for more accurate similarity measure. Acknowledgements: We would like to thank Dr. Alessandro Moschitti for his great help in using his Tree Kernel Toolkits and fine-tuning the system. We also would like to thank the three anonymous reviewers for their invaluable suggestions. References ACE. 2002-2005. The Automatic Content Extraction Projects. http://www.ldc.upenn.edu/Projects /ACE/ Basili R., Cammisa M. and Moschitti A. 2005. A Semantic Kernel to classify text with very few training examples. ICML-2005 Bunescu R. C. and Mooney R. J. 2005. A Shortest Path Dependency Kernel for Relation Extraction. EMNLP-2005 Charniak E. 2001. Immediate-head Parsing for Language Models. ACL-2001 Collins M. and Duffy N. 2001. Convolution Kernels for Natural Language. NIPS-2001 Culotta A. and Sorensen J. 2004. Dependency Tree Kernel for Relation Extraction. ACL-2004 Haussler D. 1999. Convolution Kernels on Discrete Structures. Technical Report UCS-CRL-99-10, University of California, Santa Cruz. Joachims T. 1998. Text Categorization with Support Vecor Machine: learning with many relevant features. ECML-1998 Kambhatla N. 2004. Combining lexical, syntactic and semantic features with Maximum Entropy models for extracting relations. ACL-2004 (poster) Lodhi H., Saunders C., Shawe-Taylor J., Cristianini N. and Watkins C. 2002. Text classification using string kernel. Journal of Machine Learning Research, 2002(2):419-444 Miller S., Fox H., Ramshaw L. and Weischedel R. 2000. A novel use of statistical parsing to extract information from text. NAACL-2000 Moschitti A. 2004. A Study on Convolution Kernels for Shallow Semantic Parsing. ACL-2004 MUC. 1987-1998. http://www.itl.nist.gov/iaui/894.02/ related_projects/muc/ Schölkopf B. and Smola A. J. 2001. Learning with Kernels: SVM, Regularization, Optimization and Beyond. MIT Press, Cambridge, MA 407-423 Suzuki J., Hirao T., Sasaki Y. and Maeda E. 2003. Hierarchical Directed Acyclic Graph Kernel: Methods for Structured Natural Language Data. ACL-2003 Zelenko D., Aone C. and Richardella A. 2003. Kernel Methods for Relation Extraction. Journal of Machine Learning Research. 2003(2):1083-1106 Zhao S.B. and Grishman R. 2005. Extracting Relations with Integrated Information Using Kernel Methods. ACL-2005 Zhou G.D., Su J, Zhang J. and Zhang M. 2005. Exploring Various Knowledge in Relation Extraction. ACL-2005 832 | 2006 | 104 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 833–840, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Japanese Dependency Parsing Using Co-occurrence Information and a Combination of Case Elements Takeshi Abekawa Graduate School of Education University of Tokyo [email protected] Manabu Okumura Precision and Intelligence Laboratory Tokyo Institute of Technology [email protected] Abstract In this paper, we present a method that improves Japanese dependency parsing by using large-scale statistical information. It takes into account two kinds of information not considered in previous statistical (machine learning based) parsing methods: information about dependency relations among the case elements of a verb, and information about co-occurrence relations between a verb and its case element. This information can be collected from the results of automatic dependency parsing of large-scale corpora. The results of an experiment in which our method was used to rerank the results obtained using an existing machine learning based parsing method showed that our method can improve the accuracy of the results obtained using the existing method. 1 Introduction Dependency parsing is a basic technology for processing Japanese and has been the subject of much research. The Japanese dependency structure is usually represented by the relationship between phrasal units called bunsetsu, each of which consists of one or more content words that may be followed by any number of function words. The dependency between two bunsetsus is direct from a dependent to its head. Manually written rules have usually been used to determine which bunsetsu another bunsetsu tends to modify, but this method poses problems in terms of the coverage and consistency of the rules. The recent availability of larger-scale corpora annotated with dependency information has thus resulted in more work on statistical dependency analysis technologies that use machine learning algorithms (Kudo and Matsumoto, 2002; Sassano, 2004; Uchimoto et al., 1999; Uchimoto et al., 2000). Work on statistical Japanese dependency analysis has usually assumed that all the dependency relations in a sentence are independent of each other, and has considered the bunsetsus in a sentence independently when judging whether or not a pair of bunsetsus is in a dependency relation. In judging which bunsetsu a bunsetsu modifies, this type of work has used as features the information of two bunsetsus, such as the head words of the two bunsetsus, and the morphemes at the ends of the bunsetsus (Uchimoto et al., 1999). It is necessary, however, to also consider features for the contextual information of the two bunsetsus. One such feature is the constraint that two case elements with the same case do not modify a verb. Statistical Japanese dependency analysis takes into account syntactic information but tends not to take into account lexical information, such as cooccurrence between a case element and a verb. The recent availability of more corpora has enabled much information about dependency relations to be obtained by using a Japanese dependency analyzer such as KNP (Kurohashi and Nagao, 1994) or CaboCha (Kudo and Matsumoto, 2002). Although this information is less accurate than manually annotated information, these automatic analyzers provide a large amount of co-occurrence information as well as information about combinations of multiple cases that tend to modify a verb. 
In this paper, we present a method for improving the accuracy of Japanese dependency analysis by representing the lexical information of cooccurrence and dependency relations of multiple cases as statistical models. We also show the results of experiments demonstrating the effectiveness of our method. 833 Keisatsu-de umibe-de hitori-de arui-teiru syonen-wo hogo-shita (The police/subj) (on the beach) (alone) (was walking) (boy/obj) (had custody) (The police had custody of the boy who was walking alone on the beach.) Figure 1: Example of a Japanese sentence, bunsetsu and dependencies 2 Parsing Japanese The Japanese language is basically an SOV language, but word order is relatively free. In English the syntactic function of each word is represented by word order, while in Japanese it is represented by postpositions. For example, one or more postpositions following a noun play a role similar to the declension of nouns in German, which indicates grammatical case. The syntax of a Japanese sentence is analyzed by using segments, called bunsetsu, that usually contain one or more content words like a noun, verb, or adjective, and zero or more function words like a particle (case marker) or verb/noun suffix. By defining a bunsetsu in this manner, we can analyze a sentence in a way similar to that used when analyzing the grammatical roles of words in inflected languages like German. Japanese dependencies have the following characteristics: • Each bunsetsu except the rightmost one has only one head. • Each head bunsetsu is always placed to the right of (i.e. after) its modifier. • Dependencies do not cross one another. Statistical Japanese dependency analyzers (Kudo and Matsumoto, 2005; Kudo and Matsumoto, 2002; Sassano, 2004; Uchimoto et al., 1999; Uchimoto et al., 2000) automatically learn the likelihood of dependencies from a tagged corpus and calculate the best dependencies for an input sentence. These likelihoods are learned by considering the features of bunsetsus such as their character strings, parts of speech, and inflection types, as well as information between bunsetsus such as punctuation and the distance between bunsetsus. The weight of given features is learned from a training corpus by calculating the weights from the frequencies of the features in the training data. 3 Japanese dependency analysis taking account of co-occurrence information and a combination of multiple cases One constraint in Japanese is that multiple nouns of the same case do not modify a verb. Previous work on Japanese dependency analysis has assumed that all the dependency relations are independent of one another. It is therefore necessary to also consider such a constraint as a feature for contextual information. Uchimoto et al., for example, used as such a feature whether a particular type of bunsetsu is between two bunsetsus in a dependency relation (Uchimoto et al., 1999), and Sassano used information about what is just before and after the modifying bunsetsu and modifyee bunsetsu (Sassano, 2004). In the artificial example shown in Figure 1, it is natural to consider that “keisatsu-de” will modify “hogo-shita”. Statistical Japanese dependency analyzers (Uchimoto et al., 2000; Kudo and Matsumoto, 2002), however, will output the result where “keisatsu-de” modifies “arui-teiru”. This is because in sentences without internal punctuation a noun tends to modify the nearest verb, and these analyzers do not take into account a combination of multiple cases. 
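To see what the nearest-verb heuristic misses in this example, consider a toy check that simply counts, for each verb, how many of its case elements share the same particle. This is only an illustration of the constraint discussed above, not part of the proposed model (which handles the preference probabilistically through P(rs|v) in Section 4 below); the romanized keys follow Figure 1 and Table 1, and the dictionary layout is ours.

```python
from collections import Counter

# Two attachment hypotheses for the example sentence of Figure 1, mapping
# each case element (noun, particle) to the verb it is attached to.
nearest_verb = {("keisatsu", "de"): "aru-ku", ("umibe", "de"): "aru-ku",
                ("hitori", "de"): "aru-ku", ("syonen", "wo"): "hogo-suru"}
intended = {("keisatsu", "de"): "hogo-suru", ("umibe", "de"): "aru-ku",
            ("hitori", "de"): "aru-ku", ("syonen", "wo"): "hogo-suru"}

def same_case_counts(attachment):
    # For each (verb, particle) pair, count how many case elements with that
    # particle ended up on that verb; counts greater than 1 are suspicious.
    counts = Counter((verb, particle) for (_, particle), verb in attachment.items())
    return {key: n for key, n in counts.items() if n > 1}

print(same_case_counts(nearest_verb))  # {('aru-ku', 'de'): 3}
print(same_case_counts(intended))      # {('aru-ku', 'de'): 2}
```

Note that even the intended analysis keeps two de-marked elements on aru-ku, which is one reason the combination preference is better treated as a probability over particle sets than as a hard filter.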
Another kind of information useful in dependency analysis is the co-occurrence of a noun and a verb, which indicates to what degree the noun tends to modify the verb. In the above example, the possible modifyees of “keisatsu-de” are “aruiteiru” and “hogo-shita”. Taking into account information about the co-occurrence of “keisatsude” and “arui-teiru” and of “keisatsu-de” and “hogo-shita” makes it obvious that “keisatsu-de” is more likely to modify “hogo-shita”. 834 In summary, we think that statistical Japanese dependency analysis needs to take into account at least two more kinds of information: the dependency relation between multiple cases where multiple nouns of the same case do not modify a verb, and the co-occurrence of nouns and verbs. One way to use such information in statistical dependency analysis is to directly use it as features. However, Kehler et al. pointed out that this does not make the analysis more accurate (Kehler et al., 2004). This paper therefore presents a model that uses the co-occurrence information separately and reranks the analysis candidates generated by the existing machine learning model. 4 Our proposed model We first introduce the notation for the explanation of the dependency structure T: m(T) : the number of verbs in T vi(T) : the i-th verb in T ci(T) : the number of case elements that modify the i-th verb in T esi(T) : the set of case elements that modify the i-th verb in T rsi(T) : the set of particles in the set of case elements that modify the i-th verb in T nsi(T) : the set of nouns in the set of case elements that modify the i-th verb in T ri,j(T) : the j-th particle that modifies the i-th verb in T ni,j(T) : the j-th noun that modifies the i-th verb in T We defined case element as a pair of a noun and following particles. For the dependency structure we assume the conditional probability P(esi(T)|vi(T)) that the set of case elements esi(T) depends on the vi(T), and assume the set of case elements esi(T) is composed of the set of noun nsi(T) and particles rsi(T). P(esi(T)|vi(T)) def = P(rsi(T), nsi(T)|vi(T)) (1) = P(rsi(T)|vi(T)) × P(nsi(T)|rsi(T), vi(T)) (2) ≃P(rsi(T)|vi(T)) × ci(T) ∏ j=1 P(ni,j(T)|rsi(T),vi(T)) (3) ≃P(rsi(T)|vi(T)) × ci(T) ∏ j=1 P(ni,j(T)|ri,j(T),vi(T)) (4) In the transformation from Equation (2) to Equation (3), we assume that the set of noun nsi(T) is independent of the verb vi(T). And in the transformation from Equation (3) to Equation (4), we assume that the noun ni,j(T) is dependent on only its following particle ri,j(T). Now we assume the dependency structure T of the whole sentence is composed of only the dependency relation between case elements and verbs, and propose the sentence probability defined by Equation (5). P(T) = m(T) ∏ i=1 P(rsi(T)|vi(T)) × ci(T) ∏ j=1 P(ni,j(T)|ri,j(T), vi(T)) (5) We call P(rsi(T)|vi(T)) the co-occurrence probability of the particle set and the verb, and we call P(ni,j(T)|ri,j(T), vi(T)) the co-occurrence probability of the case element set and the verb. In the actual dependency analysis, we try to select the dependency structure ˆT that maximizes the Equation (5) from the possible parses T for the inputted sentence: ˆT = argmax T m(T) ∏ i=1 P(rsi(T)|vi(T)) × ci(T) ∏ j=1 P(ni,j(T)|ri,j(T), vi(T)). (6) The proposed model is inspired by the semantic role labeling method (Gildea and Jurafsky, 2002), which uses the frame element group in place of the particle set. 
It differs from the previous parsing models in that we take into account the dependency relations among particles in the set of case elements that modify a verb. This information can constrain the combination of particles (cases) among bunsetsus that modify a verb. Assuming the independence among particles, we can rewrite Equation (5) as P(T) = m(T) ∏ i=1 ci(T) ∏ j=1 P(ni,j(T), ri,j(T)|vi(T)). (7) 4.1 Syntactic property of a verb In Japanese, the “ha” case that indicates a topic tends to modify the main verb in a sentence and tends not to modify a verb in a relative clause. The 835 verb: ‘aru-ku’ verb: ‘hogo-suru’ case elements particle set case elements particle set a keisatsu-de umibe-de hitori-de { de,de,de } syonen-wo {wo} b umibe-de hitori-de {de,de} keisatsu-de syonen-wo {de,wo} c hitori-de {de} keisatsu-de umibe-de syonen-wo {de,de,wo} d {none} keisatsu-de umibe-de hitori-de syonen-wo { de,de,de,wo } Table 1: Analytical process of the example sentence co-occurrence probability of the particle set therefore tends to be different for verbs with different syntactic properties. Like (Shirai, 1998), to take into account the reliance of the co-occurrence probability of the particle set on the syntactic property of a verb, instead of using P(rsi(T)|vi(T)) in Equation (5), we use P(rsi(T)|syni(T), vi(T)), where syni(T) is the syntactic property of the i-th verb in T and takes one of the following three values: ‘verb’ when v modifies another verb ‘noun’ when v modifies a noun ‘main’ when v modifies nothing (when it is at the end of the sentence, and is the main verb) 4.2 Illustration of model application Here, we illustrate the process of applying our proposed model to the example sentence in Figure 1, for which there are four possible combinations of dependency relations. The bunsetsu combinations and corresponding sets of particles are listed in Table 1. In the analytical process, we calculate for all the combinations the co-occurrence probability of the case element set (bunsetsu set) and the cooccurrence probability of the particle set, and we select the ˆT that maximizes the probability. Some of the co-occurrence probabilities of the particle sets for the verbs “aru-ku” and “hogosuru” in the sentence are listed in Table 2. How to estimate these probabilities is described in section 5.3. Basically, the larger the number of particles, the lower the probability is. As you can see in the comparison between {de, wo} and {de, de}, the probability becomes lower when multiple same cases are included. Therefore, the probability can reflect the constraint that multiple case elements of the same particle tend not to modify a verb. 5 Experiments We evaluated the effectiveness of our model experimentally. Since our model treats only the dersi P(rsi|noun, v1) P(rsi|main, v2) v1 = “aru-ku” v2 = “hogo-suru” {none} 0.29 0.35 {wo} 0.30 0.24 {ga} 0.056 0.072 {ni} 0.040 0.041 {de} 0.032 0.033 {ha} 0.035 0.041 {de, wo} 0.022 0.018 {de, de} 0.00038 0.00038 {de, de, wo} 0.00022 0.00018 {de, de, de} 0.0000019 0.0000018 {de, de, de, wo} 0.00000085 0.00000070 Table 2: Example of the co-occurrence probabilities of particle sets pendency relations between a noun and a verb, we cannot determine all the dependency relations in a sentence. We therefore use one of the currently available dependency analyzers to generate an ordered list of n-best possible parses for the sentence and then use our proposed model to rerank them and select the best parse. 
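As a concrete illustration of how Equations (5) and (6) could be applied to an n-best list, the sketch below scores one candidate structure and picks the best one. The data layout, function names, and the floor value for unseen events are our own assumptions; in the paper the probability tables come from PLSI-smoothed counts over a large automatically parsed corpus (Section 5.3), and the score is further combined with the posterior context model as described in Section 5.2.

```python
import math
from typing import Dict, List, Tuple

def log_p_parse(verbs: List[dict],
                p_rs: Dict[Tuple[tuple, str, str], float],
                p_n: Dict[Tuple[str, str, str], float],
                floor: float = 1e-8) -> float:
    """Log of Equation (5) for one candidate dependency structure T.

    `verbs` lists, for each verb in T, its base form ("verb"), its syntactic
    property "syn" ('verb' / 'noun' / 'main'), and the (noun, particle) case
    elements that modify it.  Field names and the floor are our conventions.
    """
    logp = 0.0
    for v in verbs:
        # The particle set is treated as an unordered multiset, as in the paper.
        rs = tuple(sorted(r for _, r in v["elements"]))
        logp += math.log(p_rs.get((rs, v["syn"], v["verb"]), floor))       # P(rs | syn, v)
        for noun, particle in v["elements"]:
            logp += math.log(p_n.get((noun, particle, v["verb"]), floor))  # P(n | r, v)
    return logp

def select_best(candidates: List[List[dict]], p_rs, p_n) -> List[dict]:
    # Equation (6): pick the candidate structure that maximizes Equation (5).
    return max(candidates, key=lambda t: log_p_parse(t, p_rs, p_n))
```

For the example sentence, the particle-set factor alone already prefers analysis b of Table 1 over the nearest-verb analysis a: using the values in Table 2, P({de,de}|noun, aru-ku) × P({de,wo}|main, hogo-suru) ≈ 0.00038 × 0.018 ≈ 6.8 × 10^-6, against P({de,de,de}|noun, aru-ku) × P({wo}|main, hogo-suru) ≈ 0.0000019 × 0.24 ≈ 4.6 × 10^-7, which matches the intended reading.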
5.1 Dependency analyzer for outputting n-best parses We generated the n-best parses by using the “posterior context model” (Uchimoto et al., 2000). The features we used were those in (Uchimoto et al., 1999) and their combinations. We also added our original features and their combinations, with reference to (Sassano, 2004; Kudo and Matsumoto, 2002), but we removed the features that had a frequency of less than 30 in our training data. The total number of features is thus 105,608. 5.2 Reranking method Because our model considers only the dependency relations between a noun and a verb, and thus cannot determine all the dependency relations in a sentence, we restricted the possible parses for 836 reranking as illustrated in Figure 2. The possible parses for reranking were the first-ranked parse and those of the next-best parses in which the verb to modify was different from that in the firstranked one. For example, parses 1 and 3 in Figure 2 are the only candidates for reranking. In our experiments, n is set to 50. The score we used for reranking the parses was the product of the probability of the posterior context model and the probability of our proposed model: score = Pcontext(T)α × P(T), (8) where Pcontext(T) is the probability of the posterior context model. The α here is a parameter with which we can adjust the balance of the two probabilities, and is fixed to the best value by considering development data (different from the training data)1. Reranking Candidate 1 Candidate 2 Candidate 3 Candidate 4 : Case element : Verb Candidate Candidate Figure 2: Selection of possible parses for reranking Many methods for reranking the parsing of English sentences have been proposed (Charniak and Johnson, 2005; Collins and Koo, 2005; Henderson and Titov, 2005), all of which are discriminative methods which learn the difference between the best parse and next-best parses. While our reranking model using generation probability is quite simple, we can easily verify our hypothesis that the two proposed probabilities have an effect on improving the parsing accuracy. We can also verify that the parsing accuracy improves by using imprecise information obtained from an automatically parsed corpus. Klein and Manning proposed a generative model in which syntactic (PCFG) and semantic (lexical dependency) structures are scored with separate models (Klein and Manning, 2002), but 1In our experiments, α is set to 2.0 using development data. they do not take into account the combination of dependencies. Shirai et al. also proposed a statistical model of Japanese language which integrates lexical association statistics with syntactic preference (Shirai et al., 1998). Our proposed model differs from their method in that it explicitly uses the combination of multiple cases. 5.3 Estimation of co-occurrence probability We estimated the co-occurrence probability of the particle set and the co-occurrence probability of the case element set used in our model by analyzing a large-scale corpus. We collected a 30-year newspaper corpus2, applied the morphological analyzer JUMAN (Kurohashi and Nagao, 1998b), and then applied the dependency analyzer with a posterior context model3. To ensure that we collected reliable co-occurrence information, we removed the information for the bunsetsus with punctuation4. Like (Torisawa, 2001), we estimated the cooccurrence probability P(⟨n, r, v⟩) of the case element set (noun n, particle r, and verb v) by using probabilistic latent semantic indexing (PLSI) (Hofmann, 1999)5. 
If ⟨n, r, v⟩is the co-occurrence of n and ⟨r, v⟩, we can calculate P(⟨n, r, v⟩) by using the following equation: P(⟨n, r, v⟩) = ∑ z∈Z P(n|z)P(⟨r, v⟩|z)P(z), (9) where z indicates a latent semantic class of cooccurrence (hidden class). Probabilistic parameters P(n|z), P(⟨r, v⟩|z), and P(z) in Equation (9) can be estimated by using the EM algorithm. In our experiments, the dimension of the hidden class z was set to 300. As a result, the collected ⟨n, r, v⟩ total 102,581,924 pairs. The number of n and v is 57,315 and 15,098, respectively. The particles for which the co-occurrence probability was estimated were the set of case particles, the “ha” case particle, and a class of “fukujoshi” 213 years’ worth of articles from the Mainichi Shimbun, 14 years’ worth from the Yomiuri Shimbun, and 3 years’ worth from the Asahi Shimbun. 3We used the following package for calculation of Maximum Entropy: http://homepages.inf.ed.ac.uk/s0450736/maxent toolkit.html. 4The result of dependency analysis with a posterior context model for the Kyodai Corpus showed that the accuracy for the bunsetsu without punctuation is 90.6%, while the accuracy is only 76.4% for those with punctuation. 5We used the following package for calculation of PLSI: http://chasen.org/˜taku/software/plsi/. 837 Bunsetsu accuracy Sentence accuracy Whole data Context model 90.95% (73,390/80,695) 54.40% (5,052/9,287) Our model 91.21% (73,603/80,695) 55.17% (5,124/9,287) Only for reranked sentences Context model 90.72% (68,971/76,026) 48,33% (3,813/7,889) Our model 91.00% (69,184/76,026) 49.25% (3,885/7,889) Only for case elements Context model 91.80% (28,849/31,427) – Our model 92.47% (29,062/31,427) – Table 3: Accuracy before/after reranking particles. Therefore, the total number of particles was 10. We also estimated the co-occurrence probability of the particle set P(rs|syn, v) by using PLSI. We regarded the triple ⟨rs, syn, v⟩(the co-occurrence of particle set rs, verb v, and the syntactic property syn) as the co-occurrence of rs and ⟨syn, v⟩. The dimension of the hidden class was 100. The total number of ⟨rs, syn, v⟩pairs was 1,016,508, v was 18,423, and rs was 1,490. The particle set should be treated not as a non-ordered set but as an occurrence ordered set. However, we think correct probability estimation using an occurrence ordered set is difficult, because it gives rise to an explosion in the number of combination, 5.4 Experimental environment The evaluation data we used was Kyodai Corpus 3.0, a corpus manually annotated with dependency relations (Kurohashi and Nagao, 1998a). The statistics of the data are as follows: • Training data: 24,263 sentences, 234,474 bunsetsus • Development data: 4,833 sentences, 47,580 bunsetsus • Test data: 9,287 sentences, 89,982 bunsetsus The test data contained 31,427 case elements, and 28,801 verbs. The evaluation measures we used were bunsetsu accuracy (the percentage of bunsetsu for which the correct modifyee was identified) and sentence accuracy (the percentage of sentences for which the correct dependency structure was identified). 5.5 Experimental results 5.5.1 Evaluation of our model Our first experiment evaluated the effectiveness of reranking with our proposed model. 
Bunsetsu Our reranking model correct incorrect Context model correct 73,119 271 incorrect 484 6,821 Table 4: 2 × 2 contingency table of the number of correct bunsetsu (posterior context model × our model) and sentence accuracies before and after reranking, for the entire set of test data as well as for only those sentences whose parse was actually reranked, are listed in Table 3. The results showed that the accuracy could be improved by using our proposed model to rerank the results obtained with the posterior context model. McNemar testing showed that the null hypothesis that there is no difference between the accuracy of the results obtained with the posterior context model and those obtained with our model could be rejected with a p value < 0.01. The difference in accuracy is therefore significant. 5.5.2 Comparing variant models We next experimentally compare the following variations of the proposed model: (a) one in which the case element set is assumed to be independent [Equation (7)] (b) one using the co-occurrence probability of the particle set, P(rs|syn, v), in our model (c) one using only the co-occurrence probability of the case element, P(n|r, v), in our model (d) one not taking into account the syntactic property of a verb (i,e. a model in which the co-occurrence probability is defined as P(r|v), without the syntactic property syn) (e) one in which the co-occurrence probability of the case element, P(n|r, v), is simply added 838 Bunsetsu Sentence accuracy accuracy Context model 90.95% 54.40% Our model 91.21% 55.17% model (a) 91.12% 54.90% model (b) 91.10% 54.69% model (c) 91.11% 54.91% model (d) 91.15% 54.82% model (e) 90.96% 54.33% model (f) 89.50% 48.33% Kudo et al 2005 91.37% 56.00% Table 5: Comparison of various models to a feature set used in the posterior context model (f) one using only our proposed probabilities without the probability of the posterior context model The accuracies obtained with each of these models are listed in Table 5, from which we can conclude that it is effective to take into account the dependency between case elements because model (a) is less accurate than our model. Since the accuracy of model (d) is comparable to that of our model, we can conclude that the consideration of the syntactic property of a verb does not necessarily improve dependency analysis. The accuracy of model (e), which uses the cooccurrence probability of the case element set as features in the posterior context model, is comparable to that of the posterior context model. This result is similar to the one obtained by (Kehler et al., 2004), where the task was anaphora resolution. Although we think the co-occurrence probability is useful information for dependency analysis, this result shows that simply adding it as a feature does not improve the accuracy. 5.5.3 Changing the amount of training data Changing the size of the training data set, we investigated whether the degree of accuracy improvement due to reranking depends on the accuracy of the existing dependency analyzer. Figure 3 shows that the accuracy improvement is constant even if the accuracy of the dependency analyzer is varied. 5.6 Discussion The score used in reranking is the product of the probability of the posterior context model and the 0.894 0.896 0.898 0.9 0.902 0.904 0.906 0.908 0.91 0.912 0.914 4000 6000 8000 10000 12000 14000 16000 18000 20000 22000 24000 26000 No. 
of training sentences Bunsetsu accuracy posterior context model proposed model Figure 3: Bunsetsu accuracy when the size of the training data is changed probability of our proposed model. The results in Table 5 show that the parsing accuracy of model (f), which uses only the probabilities obtained with our proposed model, is quite low. We think the reason for this is that our two co-occurrence probabilities cannot take account of syntactic properties, such as punctuation and the distance between two bunsetsus, which improve dependency analysis. Furthermore, when the sentence has multiple verbs and case elements, the constraint of our proposed model tends to distribute case elements to each verb equally. To investigate such bias, we calculated the variance of the number of case elements per verb. Table 6 shows that the variance for our proposed model (Equation [5]) is the lowest, and this model distributes case elements to each verb equally. The variance of the posterior context model is higher than that of the test data, probably because the syntactic constraint in this model affects parsing too much. Therefore the variance of the reranking model (Equation [8]), which is the combination of our proposed model and the posterior context model, is close to that of the test data. The best parse which uses this data set is (Kudo and Matsumoto, 2005), and their parsing accuracy is 91.37%. The features and the parsing method used by their model are almost equal to the posterior context model, but they use a different method of probability estimation. If their model could generate n-best parsing and attach some kind of score to each parse tree, we would combine their model in place of the posterior context model. At the stage of incorporating the proposed approach to a parser, the consistency with other pos839 context model test data Equation [8] Equation [5] variance (σ2) 0.724 0.702 0.696 0.666 *The average number of elements per verb is 1.078. Table 6: The variance of the number of elements per verb sible methods that deal with other relations should be taken into account. This will be one of our future tasks. 6 Conclusion We presented a method of improving Japanese dependency parsing by using large-scale statistical information. Our method takes into account two types of information, not considered in previous statistical (machine learning based) parsing methods. One is information about the dependency relations among the case elements of a verb, and the other is information about co-occurrence relations between a verb and its case element. Experimental results showed that our method can improve the accuracy of the existing method. References Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting of the ACL, pages 173–180. Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25–69. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. James Henderson and Ivan Titov. 2005. Data-defined kernels for parse reranking derived from probabilistic models. In Proceedings of the 43rd Annual Meeting of the ACL, pages 181–188. Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International SIGIR Conference on Research and Development in Information Retrieval, pages 50–57. 
Andrew Kehler, Douglas Appelt, Lara Taylor, and Aleksandr Simma. 2004. The (non)utility of predicate-argument frequencies for pronoun interpretation. In Proceedings of the HLT/NAACL 2004, pages 289–296. Dan Klein and Christopher D. Manning. 2002. Fast exact inference with a factored model for natural language parsing. In Advances in Neural Information Processing Systems 15 (NIPS 2002), pages 3– 10. Taku Kudo and Yuji Matsumoto. 2002. Japanese dependency analysis using cascaded chunking. In CoNLL 2002: Proceedings of the 6th Conference on Natural Language Learning 2002 (COLING 2002 Post-Conference Workshops), pages 63–69. Taku Kudo and Yuji Matsumoto. 2005. Japanese dependency parsing using relative preference of dependency. Transactions of Information Processing Society of Japan, 46(4):1082–1092. (in Japanese). Sadao Kurohashi and Makoto Nagao. 1994. Kn parser: Japanese dependency/case structure analyzer. In Proceedings of the Workshop on Sharable Natural Language Resources, pages 48–55. Sadao Kurohashi and Makoto Nagao. 1998a. Building a Japanese parsed corpus while improving the parsing system. In Proceedings of the 1st International Conference on Language Resources and Evaluation, pages 719–724. Sadao Kurohashi and Makoto Nagao. 1998b. Japanese Morphological Analysis System JUMAN version 3.5. Department of Informatics, Kyoto University. (in Japanese). Manabu Sassano. 2004. Linear-time dependency analysis for Japanese. In Proceedings of the COLING 2004, pages 8–14. Kiyoaki Shirai, Kentaro Inui, Takenobu Tokunaga, and Hozumi Tanaka. 1998. An empirical evaluation on statistical parsing of Japanese sentences using lexical association statistics. In Proceedings of the 3rd Conference on EMNLP, pages 80–87. Kiyoaki Shirai. 1998. The integrated natural language processing using statistical information. Technical Report TR98–0004, Department of Computer Science, Tokyo Institute of Technology. (in Japanese). Kentaro Torisawa. 2001. An unsupervised method for canonicalization of Japanese postpositions. In Proceedings of the 6th Natural Language Processing Pacific Rim Symposium (NLPRS), pages 211–218. Kiyotaka Uchimoto, Satoshi Sekine, and Hitoshi Isahara. 1999. Japanese dependency structure analysis based on maximum entropy models. Transactions of Information Processing Society of Japan, 40(9):3397–3407. (in Japanese). Kiyotaka Uchimoto, Masaki Murata, Satoshi Sekine, and Hitoshi Isahara. 2000. Dependency model using posterior context. In Proceedings of the Sixth International Workshop on Parsing Technology (IWPT2000), pages 321–322. 840 | 2006 | 105 |
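As a cross-check of the significance claim in Section 5.5.1, the McNemar statistic can be recomputed from the two discordant cells of Table 4 (271 and 484). The sketch below uses the standard continuity-corrected chi-square form of the test; whether the authors used this form or an exact binomial version is not stated, so treat it only as an illustration.

from scipy.stats import chi2

def mcnemar_p(b, c):
    """Continuity-corrected McNemar test on the two discordant cells."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

# Discordant cells of Table 4: context model correct / reranking incorrect (271)
# and context model incorrect / reranking correct (484).
stat, p = mcnemar_p(271, 484)
print(f"chi2 = {stat:.1f}, p = {p:.1e}")  # p is far below 0.01, as reported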
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 841–848, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Answer Extraction, Semantic Clustering, and Extractive Summarization for Clinical Question Answering Dina Demner-Fushman1,3 and Jimmy Lin1,2,3 1Department of Computer Science 2College of Information Studies 3Institute for Advanced Computer Studies University of Maryland College Park, MD 20742, USA [email protected], [email protected] Abstract This paper presents a hybrid approach to question answering in the clinical domain that combines techniques from summarization and information retrieval. We tackle a frequently-occurring class of questions that takes the form “What is the best drug treatment for X?” Starting from an initial set of MEDLINE citations, our system first identifies the drugs under study. Abstracts are then clustered using semantic classes from the UMLS ontology. Finally, a short extractive summary is generated for each abstract to populate the clusters. Two evaluations—a manual one focused on short answers and an automatic one focused on the supporting abstracts—demonstrate that our system compares favorably to PubMed, the search system most widely used by physicians today. 1 Introduction Complex information needs can rarely be addressed by single documents, but rather require the integration of knowledge from multiple sources. This suggests that modern information retrieval systems, which excel at producing ranked lists of documents sorted by relevance, may not be sufficient to provide users with a good overview of the “information landscape”. Current question answering systems aspire to address this shortcoming by gathering relevant “facts” from multiple documents in response to information needs. The so-called “definition” or “other” questions at recent TREC evaluations (Voorhees, 2005) serve as good examples: “good answers” to these questions include interesting “nuggets” about a particular person, organization, entity, or event. The importance of cross-document information synthesis has not escaped the attention of other researchers. The last few years have seen a convergence between the question answering and summarization communities (Amig´o et al., 2004), as highlighted by the shift from generic to queryfocused summaries in the 2005 DUC evaluation (Dang, 2005). Despite a focus on document ranking, different techniques for organizing search results have been explored by information retrieval researchers, as exemplified by techniques based on clustering (Hearst and Pedersen, 1996; Dumais et al., 2001; Lawrie and Croft, 2003). Our work, which is situated in the domain of clinical medicine, lies at the intersection of question answering, information retrieval, and summarization. We employ answer extraction to identify short answers, semantic clustering to group similar results, and extractive summarization to produce supporting evidence. This paper describes how each of these capabilities contributes to an information system tailored to the requirements of physicians. Two separate evaluations demonstrate the effectiveness of our approach. 2 Clinical Information Needs Although the need to answer questions related to patient care has been well documented (Covell et al., 1985; Gorman et al., 1994; Ely et al., 1999), studies have shown that existing search systems, e.g., PubMed, the U.S. 
National Library of Medicine’s search engine, are often unable to supply physicians with clinically-relevant answers in a timely manner (Gorman et al., 1994; Chambliss and Conley, 1996). Clinical information 841 Disease: Chronic Prostatitis ▶anti-microbial 1. [temafloxacin] Treatment of chronic bacterial prostatitis with temafloxacin. Temafloxacin 400 mg b.i.d. administered orally for 28 days represents a safe and effective treatment for chronic bacterial prostatitis. 2. [ofloxacin] Ofloxacin in the management of complicated urinary tract infections, including prostatitis. In chronic bacterial prostatitis, results to date suggest that ofloxacin may be more effective clinically and as effective microbiologically as carbenicillin. 3. ... ▶Alpha-adrenergic blocking agent 1. [terazosine] Terazosin therapy for chronic prostatitis/chronic pelvic pain syndrome: a randomized, placebo controlled trial. CONCLUSIONS: Terazosin proved superior to placebo for patients with chronic prostatitis/chronic pelvic pain syndrome who had not received alpha-blockers previously. 2. ... Table 1: System response to the question “What is the best drug treatment for chronic prostatitis?” systems for decision support represent a potentially high-impact application. From a research perspective, the clinical domain is attractive because substantial knowledge has already been codified in the Unified Medical Language System (UMLS) (Lindberg et al., 1993). The 2004 version of the UMLS Metathesaurus contains information about over 1 million biomedical concepts and 5 million concept names. This and related resources allow us to explore knowledge-based techniques with substantially less upfront investment. Naturally, physicians have a wide spectrum of information needs, ranging from questions about the selection of treatment options to questions about legal issues. To make the retrieval problem more tractable, we focus on a subset of therapy questions taking the form “What is the best drug treatment for X?”, where X can be any number of diseases. We have chosen to tackle this class of questions because studies of physicians’ behavior in natural settings have revealed that such questions occur quite frequently (Ely et al., 1999). By leveraging the natural distribution of clinical information needs, we can make the greatest impact with the least effort. Our research follows the principles of evidencebased medicine (EBM) (Sackett et al., 2000), which provides a well-defined model to guide the process of clinical question answering. EBM is a widely-accepted paradigm for medical practice that involves the explicit use of current best evidence, i.e., high-quality patient-centered clinical research reported in the primary medical literature, to make decisions about patient care. As shown by previous work (Cogdill and Moore, 1997; De Groote and Dorsch, 2003), citations from the MEDLINE database (maintained by the U.S. National Library of Medicine) serve as a good source of clinical evidence. As a result of these findings, our work focuses on MEDLINE abstracts as the source for answers. 3 Question Answering Approach Conflicting desiderata shape the characteristics of “answers” to clinical questions. On the one hand, conciseness is paramount. Physicians are always under time pressure when making decisions, and information overload is a serious concern. Furthermore, we ultimately envision deploying advanced retrieval systems in portable packages such as PDAs to serve as tools in bedside interactions (Hauser et al., 2004). 
The small form factor of such devices limits the amount of text that can be displayed. However, conciseness exists in tension with completeness. For physicians, the implications of making potentially life-altering decisions mean that all evidence must be carefully examined in context. For example, the efficacy of a drug is always framed in the context of a specific sample population, over a set duration, at some fixed dosage, etc. A physician simply cannot recommend a particular course of action without considering all these factors. Our approach seeks to balance conciseness and completeness by providing hierarchical and inter842 active “answers” that support multiple levels of drill-down. A partial example is shown in Figure 1. Top-level answers to “What is the best drug treatment for X?” consist of categories of drugs that may be of interest to the physician. Each category is associated with a cluster of abstracts from MEDLINE about that particular treatment option. Drilling down into a cluster, the physician is presented with extractive summaries of abstracts that outline the clinical findings. To obtain more detail, the physician can pull up the complete abstract text, and finally the electronic version of the entire article (if available). In the example shown in Figure 1, the physician can see that two classes of drugs (anti-microbial and alpha-adrenergic blocking agent) are relevant for the disease “chronic prostatitis”. Drilling down into the first cluster, the physician can see summarized evidence for two specific types of anti-microbials (temafloxacin and ofloxacin) extracted from MEDLINE abstracts. Three major capabilities are required to produce the “answers” described above. First, the system must accurately identify the drugs under study in an abstract. Second, the system must group abstracts based on these substances in a meaningful way. Third, the system must generate short summaries of the clinical findings. We describe a clinical question answering system that implements exactly these capabilities (answer extraction, semantic clustering, and extractive summarization). 4 System Implementation Our work is primarily concerned with synthesizing coherent answers from a set of search results— the actual source of these results is not important. For convenience, we employ MEDLINE citations retrieved by the PubMed search engine (which also serves as a baseline for comparison). Given an initial set of citations, answer generation proceeds in three phases, described below. 4.1 Answer Extraction Given a set of abstracts, our system first identifies the drugs under study; these later become the short answers. In the parlance of evidence-based medicine, drugs fall into the category of “interventions”, which encompasses everything from surgical procedures to diagnostic tests. Our extractor for interventions relies on MetaMap (Aronson, 2001), a program that automatically identifies entities corresponding to UMLS concepts. UMLS has an extensive coverage of drugs, falling under the semantic type PHARMACOLOGICAL SUBSTANCE and a few others. All such entities are identified as candidates and each is scored based on a number of features: its position in the abstract, its frequency of occurrence, etc. A separate evaluation on a blind test set demonstrates that our extractor is able to accurately recognize the interventions in a MEDLINE abstract; see details in (Demner-Fushman and Lin, 2005; Demner-Fushman and Lin, 2006 in press). 
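The candidate scoring in Section 4.1 is characterized above only by its features (position in the abstract, frequency of occurrence, etc.), so the particular feature definitions and weights below are assumptions; the sketch is meant only to make the ranking idea concrete, not to reproduce the authors' trained extractor.

def score_candidates(candidates, num_sentences, w_title=2.0, w_freq=1.0):
    """Rank candidate interventions (UMLS concepts) found in one abstract.

    `candidates` maps a concept string to the sentence indices where an
    entity mapper such as MetaMap found it (index 0 = title).  The choice of
    features and the linear weighting are illustrative assumptions.
    """
    def score(positions):
        in_title = 1.0 if 0 in positions else 0.0
        return w_title * in_title + w_freq * len(positions) / num_sentences
    return sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)

print(score_candidates({"temafloxacin": [0, 3, 9], "placebo": [4]},
                       num_sentences=10))
# -> ['temafloxacin', 'placebo']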
4.2 Semantic Clustering Retrieved MEDLINE citations are organized into semantic clusters based on the main interventions identified in the abstract text. We employed a variant of the hierarchical agglomerative clustering algorithm (Zhao and Karypis, 2002) that utilizes semantic relationships within UMLS to compute similarities between interventions. Iteratively, we group abstracts whose interventions fall under a common ancestor, i.e., a hypernym. The more generic ancestor concept (i.e., the class of drugs) is then used as the cluster label. The process repeats until no new clusters can be formed. In order to preserve granularity at the level of practical clinical interest, the tops of the UMLS hierarchy were truncated; for example, the MeSH category “Chemical and Drugs” is too general to be useful. This process was manually performed during system development. We decided to allow an abstract to appear in multiple clusters if more than one intervention was identified, e.g., if the abstract compared the efficacy of two treatments. Once the clusters have been formed, all citations are then sorted in the order of the original PubMed results, with the most abstract UMLS concept as the cluster label. Clusters themselves are sorted in decreasing size under the assumption that more clinical research is devoted to more pertinent types of drugs. Returning to the example in Figure 1, the abstracts about temafloxacin and ofloxacin were clustered together because both drugs are hyponyms of anti-microbials within the UMLS ontology. As can be seen, this semantic resource provides a powerful tool for organizing search results. 4.3 Extractive Summarization For each MEDLINE citation, our system generates a short extractive summary consisting of three elements: the main intervention (which is usu843 ally more specific than the cluster label); the title of the abstract; and the top-scoring outcome sentence. The “outcome”, another term from evidence-based medicine, asserts the clinical findings of a study, and is typically found towards the end of a MEDLINE abstract. In our case, outcome sentences state the efficacy of a drug in treating a particular disease. Previously, we have built an outcome extractor capable of identifying such sentences in MEDLINE abstracts using supervised machine learning techniques (DemnerFushman and Lin, 2005; Demner-Fushman and Lin, 2006 in press). Evaluation on a blind heldout test set shows high classification accuracy. 5 Evaluation Methodology Given that our work draws from QA, IR, and summarization, a proper evaluation that captures the salient characteristics of our system proved to be quite challenging. Overall, evaluation can be decomposed into two separate components: locating a suitable resource to serve as ground truth and leveraging it to assess system responses. It is not difficult to find disease-specific pharmacology resources. We employed Clinical Evidence (CE), a periodic report created by the British Medical Journal (BMJ) Publishing Group that summarizes the best known drugs for a few dozen diseases. Note that the existence of such secondary sources does not obviate the need for automated systems because they are perpetually falling out of date due to rapid advances in medicine. Furthermore, such reports are currently created by highlyexperienced physicians, which is an expensive and time-consuming process. 
For each disease, CE classifies drugs into one of six categories: beneficial, likely beneficial, tradeoffs (i.e., may have adverse side effects), unknown, unlikely beneficial, and harmful. Included with each entry is a list of references—citations consulted by the editors in compiling the resource. Although the completeness of the drugs enumerated in CE is questionable, it nevertheless can be viewed as “authoritative”. 5.1 Previous Work How can we leverage a resource such as CE to assess the responses generated by our system? A survey of evaluation methodologies reveals shortcomings in existing techniques. Answers to factoid questions are automatically scored using regular expression patterns (Lin, 2005). In our application, this is inadequate for many reasons: there is rarely an exact string match between system output and drugs mentioned in CE, primarily due to synonymy (for example, alpha-adrenergic blocking agent and αblocker refer to the same class of drugs) and ontological mismatch (for example, CE might mention beta-agonists, while a retrieved abstract discusses formoterol, which is a specific representative of beta-agonists). Furthermore, while this evaluation method can tell us if the drugs proposed by the system are “good”, it cannot measure how well the answer is supported by MEDLINE citations; recall that answer justification is important for physicians. The nugget evaluation methodology (Voorhees, 2005) developed for scoring answers to complex questions is not suitable for our task, since there is no coherent notion of an “answer text” that the user reads end–to–end. Furthermore, it is unclear what exactly a “nugget” in this case would be. For similar reasons, methodologies for summarization evaluation are also of little help. Typically, system-generated summaries are either evaluated manually by humans (which is expensive and time-consuming) or automatically using a metric such as ROUGE, which compares system output against a number of reference summaries. The interactive nature of our answers violates the assumption that systems’ responses are static text segments. Furthermore, it is unclear what exactly should go into a reference summary, because physicians may want varying amounts of detail depending on familiarity with the disease and patient-specific factors. Evaluation methodologies from information retrieval are also inappropriate. User studies have previously been employed to examine the effect of categorized search results. However, they often conflate the effectiveness of the interface with that of the underlying algorithms. For example, Dumais et al. (2001) found significant differences in task performance based on different ways of using purely presentational devices such as mouseovers, expandable lists, etc. While interface design is clearly important, it is not the focus of our work. Clustering techniques have also been evaluated in the same manner as text classification algorithms, in terms of precision, recall, etc. based on some ground truth (Zhao and Karypis, 2002). 844 This, however, assumes the existence of stable, invariant categories, which is not the case since our output clusters are query-specific. Although it may be possible to manually create “reference clusters”, we lack sufficient resources to develop such a data set. Furthermore, it is unclear if sufficient interannotator agreement can be obtained to support meaningful evaluation. 
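Returning to the cluster-formation step of Section 4.2, the sketch below makes the hypernym-based grouping concrete. The common_ancestor lookup stands in for the truncated UMLS hierarchy, and the single-pass merge is a simplification of the iterative agglomerative procedure, so both are assumptions for illustration.

def cluster_by_ancestor(abstracts, common_ancestor):
    """Group abstracts whose main interventions share a UMLS ancestor.

    `abstracts` is a list of (abstract_id, intervention) pairs;
    `common_ancestor(a, b)` returns the most specific shared ancestor of two
    interventions below the truncated top of the hierarchy, or None.
    """
    clusters = {}  # cluster label (more generic ancestor concept) -> abstract ids
    for abs_id, drug in abstracts:
        label = drug
        for existing in list(clusters):
            ancestor = common_ancestor(drug, existing)
            if ancestor is not None:
                # Relabel the existing cluster with the shared ancestor.
                label = ancestor
                clusters[label] = clusters.pop(existing)
                break
        clusters.setdefault(label, []).append(abs_id)
    # Larger clusters (more heavily studied drug classes) are presented first.
    return dict(sorted(clusters.items(), key=lambda kv: -len(kv[1])))

With a lookup that maps temafloxacin and ofloxacin to anti-microbial, the three-abstract example of Table 1 would yield one two-element anti-microbial cluster and one singleton cluster.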
Ultimately, we devised two separate evaluations to assess the quality of our system output based on the techniques discussed above. The first is a manual evaluation focused on the cluster labels (i.e., drug categories), based on a factoid QA evaluation methodology. The second is an automatic evaluation of the retrieved abstracts using ROUGE, drawing elements from summarization evaluation. Details of the evaluation setup and results are preceded by a description of the test collection we created from CE. 5.2 Test Collection We were able to mine the June 2004 edition of Clinical Evidence to create a test collection for system evaluation. We randomly selected thirty diseases, generating a development set of five questions and a test set of twenty-five questions. Some examples include: acute asthma, chronic prostatitis, community acquired pneumonia, and erectile dysfunction. CE listed an average of 11.3 interventions per disease; of those, 2.3 on average were marked as beneficial and 1.9 as likely beneficial. On average, there were 48.4 references associated with each disease, representing the articles consulted during the compilation of CE itself. Of those, 34.7 citations on average appeared in MEDLINE; we gathered all these abstracts, which serve as the reference summaries for our ROUGE-based automatic evaluation. Since the focus of our work is not on retrieval algorithms per se, we employed PubMed to fetch an initial set of MEDLINE citations and performed answer synthesis using those results. The PubMed citations also serve as a baseline, since it represents a system commonly used by physicians. In order to obtain the best possible set of citations, the first author (an experienced PubMed searcher), manually formulated queries, taking advantage of MeSH (Medical Subject Headings) terms when available. MeSH terms are controlled vocabulary concepts assigned manually by trained medical indexers (based on the full text of the articles), and encode a substantial amount of knowledge about the contents of the citation. PubMed allows searches on MeSH terms, which usually yield accurate results. In addition, we limited retrieved citations to those that have the MeSH heading “drug therapy” and those that describe a clinical trial (another metadata field). Finally, we restricted the date range of the queries so that abstracts published after our version of CE were excluded. Although the query formulation process currently requires a human, we envision automating this step using a template-based approach in the future. 6 System Evaluation We adapted existing techniques to evaluate our system in two separate ways: a factoid-style manual evaluation focused on short answers and an automatic evaluation with ROUGE using CE-cited abstracts as the reference summaries. The setup and results for both are detailed below. 6.1 Manual Evaluation of Short Answers In our manual evaluation, system outputs were assessed as if they were answers to factoid questions. We gathered three different sets of answers. For the baseline, we used the main intervention from each of the first three PubMed citations. For our test condition, we considered the three largest clusters, taking the main intervention from the first abstract in each cluster. This yields three drugs that are at the same level of ontological granularity as those extracted from the unclustered PubMed citations. 
For our third condition, we assumed the existence of an oracle which selects the three best clusters (as determined by the first author, a medical doctor). From each of these three clusters, we extracted the main intervention of the first abstracts. This oracle condition represents an achievable upper bound with a human in the loop. Physicians are highly-trained professionals that already have significant domain knowledge. Faced with a small number of choices, it is likely that they will be able to select the most promising cluster, even if they did not previously know it. This preparation yielded up to nine drug names, three from each experimental condition. For short, we refer to these as PubMed, Cluster, and Oracle, respectively. After blinding the source of the drugs and removing duplicates, each short answer was presented to the first author for evaluation. Since 845 Clinical Evidence Physician B LB T U UB H N Good Okay Bad PubMed 0.200 0.213 0.160 0.053 0.000 0.013 0.360 0.600 0.227 0.173 Cluster 0.387 0.173 0.173 0.027 0.000 0.000 0.240 0.827 0.133 0.040 Oracle 0.400 0.200 0.133 0.093 0.013 0.000 0.160 0.893 0.093 0.013 Table 2: Manual evaluation of short answers: distribution of system answers with respect to CE categories (left side) and with respect to the assessor’s own expertise (right side). (Key: B=beneficial, LB=likely beneficial, T=tradeoffs, U=unknown, UB=unlikely beneficial, H=harmful, N=not in CE) the assessor had no idea from which condition an answer came, this process guarded against assessor bias. Each answer was evaluated in two different ways: first, with respect to the ground truth in CE, and second, using the assessor’s own medical expertise. In the first set of judgments, the assessor determined which of the six categories (beneficial, likely beneficial, tradeoffs, unknown, unlikely beneficial, harmful) the system answer belonged to, based on the CE recommendations. As we have discussed previously, a human (with sufficient domain knowledge) is required to perform this matching due to synonymy and differences in ontological granularity. However, note that the assessor only considered the drug name when making this categorization. In the second set of judgments, the assessor separately determined if the short answer was “good”, “okay” (marginal), or “bad” based both on CE and her own experience, taking into account the abstract title and the topscoring outcome sentence (and if necessary, the entire abstract text). Results of this manual evaluation are presented in Table 2, which shows the distribution of judgments for the three experimental conditions. For baseline PubMed, 20% of the examined drugs fell in the beneficial category; the values are 39% for the Cluster condition and 40% for the Oracle condition. In terms of short answers, our system returns approximately twice as many beneficial drugs as the baseline, a marked increase in answer accuracy. Note that a large fraction of the drugs evaluated were not found in CE at all, which provides an estimate of its coverage. In terms of the assessor’s own judgments, 60% of PubMed short answers were found to be “good”, compared to 83% and 89% for the Cluster and Oracle conditions, respectively. From a factoid QA point of view, we can conclude that our system outperforms the PubMed baseline. 6.2 Automatic Evaluation of Abstracts A major limitation of the factoid-based evaluation methodology is that it does not measure the quality of the abstracts from which the short answers were extracted. 
Since we lacked the necessary resources to manually gather abstract-level judgments for evaluation, we sought an alternative. Fortunately, CE can be leveraged to assess the “goodness” of abstracts automatically. We assume that references cited in CE are examples of high quality abstracts, since they were used in generating the drug recommendations. Following standard assumptions made in summarization evaluation, we considered abstracts that are similar in content with these “reference abstracts” to also be “good” (i.e., relevant). Similarity in content can be quantified with ROUGE. Since physicians demand high precision, we assess the cumulative relevance after the first, second, and third abstract that the clinician is likely to have examined (where the relevance for each individual abstract is given by its ROUGE-1 precision score). For the baseline PubMed condition, the examined abstracts simply correspond to the first three hits in the result set. For our test system, we developed three different orderings. The first, which we term cluster round-robin, selects the first abstract from the top three clusters (by size). The second, which we term oracle cluster order, selects three abstracts from the best cluster, assuming the existence of an oracle that informs the system. The third, which we term oracle round-robin, selects the first abstract from each of the three best clusters (also determined by an oracle). Results of this evaluation are shown in Table 3. The columns show the cumulative relevance (i.e., ROUGE score) after examining the first, second, and third abstract, under the different ordering conditions. To determine statistical significance, we applied the Wilcoxon signed-rank test, the 846 Rank 1 Rank 2 Rank 3 PubMed Ranked List 0.170 0.349 0.523 Cluster Round-Robin 0.181 (+6.3%)◦ 0.356 (+2.1%)◦ 0.526 (+0.5%)◦ Oracle Cluster Order 0.206 (+21.5%)△ 0.392 (+12.6%)△ 0.597 (+14.0%)▲ Oracle Round-Robin 0.206 (+21.5%)△ 0.396 (+13.6%)△ 0.586 (+11.9%)▲ Table 3: Cumulative relevance after examining the first, second, and third abstracts, according to different orderings. (◦denotes n.s., △denotes sig. at 0.90, ▲denotes sig. at 0.95) standard non-parametric test for applications of this type. Due to the relatively small test set (only 25 questions), the increase in cumulative relevance exhibited by the cluster round-robin condition is not statistically significant. However, differences for the oracle conditions were significant. 7 Discussion and Related Work According to two separate evaluations, it appears that our system outperforms the PubMed baseline. However, our approach provides more advantages over a linear result set that are not highlighted in these evaluations. Although difficult to quantify, categorized results provide an overview of the information landscape that is difficult to acquire by simply browsing a ranked list—user studies of categorized search have affirmed its value (Hearst and Pedersen, 1996; Dumais et al., 2001). One main advantage we see in our application is better “redundancy management”. With a ranked list, the physician may be forced to browse through multiple redundant abstracts that discuss the same or similar drugs to get a sense of the different treatment options. With our cluster-based approach, however, potentially redundant information is grouped together, since interventions discussed in a particular cluster are ontologically related through UMLS. 
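As a sketch of the cumulative-relevance computation behind Table 3: each examined abstract is scored by ROUGE-1 precision against the CE-cited reference abstracts, the scores are accumulated over the first three ranks, and per-question scores for two orderings can then be compared with the Wilcoxon signed-rank test. The unigram-overlap implementation below is a simplification of the official ROUGE toolkit.

from scipy.stats import wilcoxon

def rouge1_precision(candidate, references):
    """Fraction of candidate unigrams that also occur in some reference abstract."""
    cand = candidate.lower().split()
    ref_vocab = {tok for ref in references for tok in ref.lower().split()}
    return sum(tok in ref_vocab for tok in cand) / len(cand) if cand else 0.0

def cumulative_relevance(ordered_abstracts, references, k=3):
    """Cumulative ROUGE-1 precision after examining the first k abstracts."""
    scores = [rouge1_precision(a, references) for a in ordered_abstracts[:k]]
    return [sum(scores[:i + 1]) for i in range(len(scores))]

# Per-question cumulative relevance at a given rank for two orderings could
# then be compared with the non-parametric Wilcoxon signed-rank test, e.g.:
#   stat, p = wilcoxon(scores_cluster_round_robin, scores_pubmed)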
The physician can examine different clusters for a broad overview, or peruse multiple abstracts within a cluster for a more thorough review of the evidence. Our cluster-based system is able to support both types of behaviors. This work demonstrates the value of semantic resources in the question answering process, since our approach makes extensive use of the UMLS ontology in all phases of answer synthesis. The coverage of individual drugs, as well as the relationship between different types of drugs within UMLS enables both answer extraction and semantic clustering. As detailed in (Demner-Fushman and Lin, 2006 in press), UMLS-based features are also critical in the identification of clinical outcomes, on which our extractive summaries are based. As a point of comparison, we also implemented a purely term-based approach to clustering PubMed citations. The results are so incoherent that a formal evaluation would prove to be meaningless. Semantic relations between drugs, as captured in UMLS, provide an effective method for organizing results—these relations cannot be captured by keyword content alone. Furthermore, term-based approaches suffer from the cluster labeling problem: it is difficult to automatically generate a short heading that describes cluster content. Nevertheless, there are a number of assumptions behind our work that are worth pointing out. First, we assume a high quality initial result set. Since the class of questions we examine translates naturally into accurate PubMed queries that can make full use of human-assigned MeSH terms, the overall quality of the initial citations can be assured. Related work in retrieval algorithms (Demner-Fushman and Lin, 2006 in press) shows that accurate relevance scoring of MEDLINE citations in response to more general clinical questions is possible. Second, our system does not actually perform semantic processing to determine the efficacy of a drug: it only recognizes “topics” and outcome sentences that state clinical findings. Since the system by default orders the clusters based on size, it implicitly equates “most popular drug” with “best drug”. Although this assumption is false, we have observed in practice that more-studied drugs are more likely to be beneficial. In contrast with the genomics domain, which has received much attention from both the IR and NLP communities, retrieval systems for the clinical domain represent an underexplored area of research. Although individual components that attempt to operationalize principles of evidencebased medicine do exist (Mendonc¸a and Cimino, 2001; Niu and Hirst, 2004), complete end–to– end clinical question answering systems are dif847 ficult to find. Within the context of the PERSIVAL project (McKeown et al., 2003), researchers at Columbia have developed a system that leverages patient records to rerank search results. Since the focus is on personalized summaries, this work can be viewed as complementary to our own. 8 Conclusion The primary contribution of this work is the development of a clinical question answering system that caters to the unique requirements of physicians, who demand both conciseness and completeness. These competing factors can be balanced in a system’s response by providing multiple levels of drill-down that allow the information space to be viewed at different levels of granularity. We have chosen to implement these capabilities through answer extraction, semantic clustering, and extractive summarization. 
Two separate evaluations demonstrate that our system outperforms the PubMed baseline, illustrating the effectiveness of a hybrid approach that leverages semantic resources. 9 Acknowledgments This work was supported in part by the U.S. National Library of Medicine. The second author thanks Esther and Kiri for their loving support. References E. Amig´o, J. Gonzalo, V. Peinado, A. Pe˜nas, and F. Verdejo. 2004. An empirical study of information synthesis task. In ACL 2004. A. Aronson. 2001. Effective mapping of biomedical text to the UMLS Metathesaurus: The MetaMap program. In AMIA 2001. M. Chambliss and J. Conley. 1996. Answering clinical questions. The Journal of Family Practice, 43:140– 144. K. Cogdill and M. Moore. 1997. First-year medical students’ information needs and resource selection: Responses to a clinical scenario. Bulletin of the Medical Library Association, 85(1):51–54. D. Covell, G. Uman, and P. Manning. 1985. Information needs in office practice: Are they being met? Annals of Internal Medicine, 103(4):596–599. H. Dang. 2005. Overview of DUC 2005. In DUC 2005 Workshop at HLT/EMNLP 2005. S. De Groote and J. Dorsch. 2003. Measuring use patterns of online journals and databases. Journal of the Medical Library Association, 91(2):231–240. D. Demner-Fushman and J. Lin. 2005. Knowledge extraction for clinical question answering: Preliminary results. In AAAI 2005 Workshop on QA in Restricted Domains. D. Demner-Fushman and J. Lin. 2006, in press. Answering clinical questions with knowledge-based and statistical techniques. Comp. Ling. S. Dumais, E. Cutrell, and H. Chen. 2001. Optimizing search by showing results in context. In CHI 2001. J. Ely, J. Osheroff, M. Ebell, G. Bergus, B. Levy, M. Chambliss, and E. Evans. 1999. Analysis of questions asked by family doctors regarding patient care. BMJ, 319:358–361. P. Gorman, J. Ash, and L. Wykoff. 1994. Can primary care physicians’ questions be answered using the medical journal literature? Bulletin of the Medical Library Association, 82(2):140–146, April. S. Hauser, D. Demner-Fushman, G. Ford, and G. Thoma. 2004. PubMed on Tap: Discovering design principles for online information delivery to handheld computers. In MEDINFO 2004. M. Hearst and J. Pedersen. 1996. Reexaming the cluster hypothesis: Scatter/gather on retrieval results. In SIGIR 1996. D. Lawrie and W. Croft. 2003. Generating hierarchical summaries for Web searches. In SIGIR 2003. J. Lin. 2005. Evaluation of resources for question answering evaluation. In SIGIR 2005. D. Lindberg, B. Humphreys, and A. McCray. 1993. The Unified Medical Language System. Methods of Information in Medicine, 32(4):281–291. K. McKeown, N. Elhadad, and V. Hatzivassiloglou. 2003. Leveraging a common representation for personalized search and summarization in a medical digital library. In JCDL 2003. E. Mendonc¸a and J. Cimino. 2001. Building a knowledge base to support a digital library. In MEDINFO 2001. Y. Niu and G. Hirst. 2004. Analysis of semantic classes in medical text for question answering. In ACL 2004 Workshop on QA in Restricted Domains. David Sackett, Sharon Straus, W. Richardson, William Rosenberg, and R. Haynes. 2000. EvidenceBased Medicine: How to Practice and Teach EBM. Churchill Livingstone, second edition. E. Voorhees. 2005. Using question series to evaluate question answering system effectiveness. In HLT/EMNLP 2005. Y. Zhao and G. Karypis. 2002. Evaluation of hierarchical clustering algorithms for document datasets. In CIKM 2002. 848 | 2006 | 106 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 849–856, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Discovering asymmetric entailment relations between verbs using selectional preferences Fabio Massimo Zanzotto DISCo University of Milano-Bicocca Via Bicocca degli Arcimboldi 8, Milano, Italy [email protected] Marco Pennacchiotti, Maria Teresa Pazienza ART Group - DISP University of Rome “Tor Vergata” Viale del Politecnico 1, Roma, Italy {pennacchiotti, pazienza}@info.uniroma2.it Abstract In this paper we investigate a novel method to detect asymmetric entailment relations between verbs. Our starting point is the idea that some point-wise verb selectional preferences carry relevant semantic information. Experiments using WordNet as a gold standard show promising results. Where applicable, our method, used in combination with other approaches, significantly increases the performance of entailment detection. A combined approach including our model improves the AROC of 5% absolute points with respect to standard models. 1 Introduction Natural Language Processing applications often need to rely on large amount of lexical semantic knowledge to achieve good performances. Asymmetric verb relations are part of it. Consider for example the question “What college did Marcus Camby play for?”. A question answering (QA) system could find the answer in the snippet “Marcus Camby won for Massachusetts” as the question verb play is related to the verb win. The viceversa is not true. If the question is “What college did Marcus Camby won for?”, the snippet “Marcus Camby played for Massachusetts” cannot be used. Winnig entails playing but not vice-versa, as the relation between win and play is asymmetric. Recently, many automatically built verb lexicalsemantic resources have been proposed to support lexical inferences, such as (Resnik and Diab, 2000; Lin and Pantel, 2001; Glickman and Dagan, 2003). All these resources focus on symmetric semantic relations, such as verb similarity. Yet, not enough attention has been paid so far to the study of asymmetric verb relations, that are often the only way to produce correct inferences, as the example above shows. In this paper we propose a novel approach to identify asymmetric relations between verbs. The main idea is that asymmetric entailment relations between verbs can be analysed in the context of class-level and word-level selectional preferences (Resnik, 1993). Selectional preferences indicate an entailment relation between a verb and its arguments. For example, the selectional preference {human} win may be read as a smooth constraint: if x is the subject of win then it is likely that x is a human, i.e. win(x) →human(x). It follows that selectional preferences like {player} win may be read as suggesting the entailment relation win(x) →play(x). Selectional preferences have been often used to infer semantic relations among verbs and to build symmetric semantic resources as in (Resnik and Diab, 2000; Lin and Pantel, 2001; Glickman and Dagan, 2003). However, in those cases these are exploited in a different way. The assumption is that verbs are semantically related if they share similar selectional preferences. Then, according to the Distributional Hypothesis (Harris, 1964), verbs occurring in similar sentences are likely to be semantically related. The Distributional Hypothesis suggests a generic equivalence between words. 
Related methods can then only discover symmetric relations. These methods can incidentally find verb pairs as (win,play) where an asymmetric entailment relation holds, but they cannot state the direction of entailment (e.g., win→play). As we investigate the idea that a single relevant verb selectional preference (as {player} 849 win) could produce an entailment relation between verbs, our starting point can not be the Distributional Hypothesis. Our assumption is that some point-wise assertions carry relevant semantic information (as in (Robison, 1970)). We do not derive a semantic relation between verbs by comparing their selectional preferences, but we use pointwise corpus-induced selectional preferences. The rest of the paper is organised as follows. In Sec. 2 we discuss the intuition behind our research. In Sec. 3 we describe different types of verb entailment. In Sec. 4 we introduce our model for detecting entailment relations among verbs . In Sec. 5 we review related works that are used both for comparison and for building combined methods. Finally, in Sec. 6 we present the results of our experiments. 2 Selectional Preferences and Verb Entailment Selectional restrictions are strictly related to entailment. When a verb or a noun expects a modifier having a predefined property it means that the truth value of the related sentences strongly depends on the satisfiability of these expectations. For example, “X is blue” implies the expectation that X has a colour. This expectation may be seen as a sort of entailment between “being a modifier of that verb or noun” and “having a property”. If the sentence is “The number three is blue”, then the sentence is false as the underlying entailment blue(x) →has colour(x) does not hold (cf. (Resnik, 1993)). In particular, this rule applies to verb logical subjects: if a verb v has a selectional restriction requiring its logical subjects to satisfy a property c, it follows that the implication: v(x) →c(x) should be verified for each logical subject x of the verb v. The implication can also be read as: if x has the property of doing the action v this implies that x has the property c. For example, if the verb is to eat, the selectional restrictions of to eat would imply that its subjects have the property of being animate. Resnik (1993) introduced a smoothed version of selectional restrictions called selectional preferences. These preferences describe the desired properties a modifier should have. The claim is that if a selectional preference holds, it is more probable that x has the property c given that it modifies v rather than x has this property in the general case, i.e.: p(c(x)|v(x)) > p(c(x)) (1) The probabilistic setting of selectional preferences also suggests an entailment: the implication v(x) →c(x) holds with a given degree of certainty. This definition is strictly related to the probabilistic textual entailment setting in (Glickman et al., 2005). We can use selectional preferences, intended as probabilistic entailment rules, to induce entailment relations among verbs. In our case, if a verb vt expects that the subject “has the property of doing an action vh”, this may be used to induce that the verb vt probably entails the verb vh, i.e.: vt(x) →vh(x) (2) As for class-based selectional preference acquisition, corpora can be used to estimate these particular kinds of preferences. 
For example, the sentence “John McEnroe won the match...” contributes to probability estimation of the class-based selectional preference win(x) → human(x) (since John McEnroe is a human). In particular contexts, it contributes also to the induction of the entailment relation between win and play, as John McEnroe has the property of playing. However, as the example shows, classes relevant for acquiring selectional preferences (such as human) are explicit, as they do not depend from the context. On the contrary, properties such as “having the property of doing an action” are less explicit, as they depend more strongly on the context of sentences. Thus, properties useful to derive entailment relations among verbs are more difficult to find. For example, it is easier to derive that John McEnroe is a human (as it is a stable property) than that he has the property of playing. Indeed, this latter property may be relevant only in the context of the previous sentence. However, there is a way to overcome this limitation: agentive nouns such as runner make explicit this kind of property and often play subject roles in sentences. Agentive nouns usually denote the “doer” or “performer” of some action. This is exactly what is needed to make clearer the relevant property vh(x) of the noun playing the logical subject role. The action vh will be the one entailed by the verb vt heading the sentence. As an example in the sentence “the player wins”, the action play 850 evocated by the agentive noun player is entailed by win. 3 Verb entailment: a classification The focus of our study is on verb entailment. A brief review of the WordNet (Miller, 1995) verb hierarchy (one of the main existing resources on verb entailment relations) is useful to better explain the problem and to better understand the applicability of our hypothesis. In WordNet, verbs are organized in synonymy sets (synsets) and different kinds of semantic relations can hold between two verbs (i.e. two synsets): troponymy, causation, backwardpresupposition, and temporal inclusion. All these relations are intended as specific types of lexical entailment. According to the definition in (Miller, 1995) lexical entailment holds between two verbs vt and vh when the sentence Someone vt entails the sentence Someone vh (e.g. “Someone wins” entails “Someone plays”). Lexical entailment is then an asymmetric relation. The four types of WordNet lexical entailment can be classified looking at the temporal relation between the entailing verb vt and the entailed verb vh. Troponymy represents the hyponymy relation between verbs. It stands when vt and vh are temporally co-extensive, that is, when the actions described by vt and vh begin and end at the same times (e.g. limp→walk). The relation of temporal inclusion captures those entailment pairs in which the action of one verb is temporally included in the action of the other (e.g. snore→sleep). Backwardpresupposition stands when the entailed verb vh happens before the entailing verb vt and it is necessary for vt. For example, win entails play via backward-presupposition as it temporally follows and presupposes play. Finally, in causation the entailing verb vt necessarily causes vh. In this case, the temporal relation is thus inverted with respect to backward-presupposition, since vt precedes vh. In causation, vt is always a causative verb of change, while vh is a resultative stative verb (e.g. buy→own, and give→have). 
As a final note, it is interesting to notice that the Subject-Verb structure of vt is generally preserved in vh for all forms of lexical entailment. The two verbs have the same subject. The only exception is causation: in this case the subject of the entailed verb vh is usually the object of vt (e.g., X give Y →Y have). In most cases the subject of vt carries out an action that changes the state of the object of vt, that is then described by vh. The intuition described in Sec. 2 is then applicable only for some kinds of verb entailments. First, the causation relation can not be captured since the two verbs should have the same subject (cf. eq. (2)). Secondly, troponymy seems to be less interesting than the other relations, since our focus is more on a logic type of entailment (i.e., vt and vh express two different actions one depending from the other). We then focus our study and our experiments on backward-presupposition and temporal inclusion. These two relations are organized in WordNet in a single set (called ent) parted from troponymy and causation pairs. 4 The method Our method needs two steps. Firstly (Sec. 4.1), we translate the verb selectional expectations in specific Subject-Verb lexico-syntactic patterns P(vt, vh). Secondly (Sec. 4.2), we define a statistical measure S(vt, vh) that captures the verb preferences. This measure describes how much the relations between target verbs (vt, vh) are stable and commonly agreed. Our method to detect verb entailment relations is based on the idea that some point-wise assertions carry relevant semantic information. This idea has been firstly used in (Robison, 1970) and it has been explored for extracting semantic relations between nouns in (Hearst, 1992), where lexico-syntactic patterns are induced by corpora. More recently this method has been applied for structuring terminology in isa hierarchies (Morin, 1999) and for learning question-answering patterns (Ravichandran and Hovy, 2002). 4.1 Nominalized textual entailment lexico-syntactic patterns The idea described in Sec. 2 can be applied to generate Subject-Verb textual entailment lexicosyntactic patterns. It often happens that verbs can undergo an agentive nominalization, e.g., play vs. player. The overall procedure to verify if an entailment between two verbs (vt, vh) holds in a pointwise assertion is: whenever it is possible to apply the agentive nominalization to the hypothesis vh, scan the corpus to detect those expressions in which the agentified hypothesis verb is the subject of a clause governed by the text verb vt. 
Given a verb pair (vt, vh), the assertion is formalized in a set of textual entailment lexico-syntactic patterns that we call nominalized patterns Pnom(vt, vh). This set is described in Tab. 1, together with the related pattern sets used in Sec. 5. agent(v) is the noun deriving from the agentive nominalization of the verb v. Elements such as l|f1,...,fN are the tokens generated from the lemma l by applying the constraints expressed via the feature-value pairs f1, ..., fN.

Table 1: Nominalization and related textual entailment lexico-syntactic patterns

  nominalization
    Pnom(vt, vh) = { "agent(vh)|num:sing vt|person:third,t:pres",
                     "agent(vh)|num:plur vt|person:nothird,t:pres",
                     "agent(vh)|num:sing vt|t:past",
                     "agent(vh)|num:plur vt|t:past" }

  happens-before (Chklovski and Pantel, 2004)
    Phb(vt, vh) = { "vh|t:inf and then vt|t:pres",  "vh|t:inf * and then vt|t:pres",
                    "vh|t:past and then vt|t:pres", "vh|t:past * and then vt|t:pres",
                    "vh|t:inf and later vt|t:pres", "vh|t:past and later vt|t:pres",
                    "vh|t:inf and subsequently vt|t:pres", "vh|t:past and subsequently vt|t:pres",
                    "vh|t:inf and eventually vt|t:pres",   "vh|t:past and eventually vt|t:pres" }

  probabilistic entailment (Glickman et al., 2005)
    Ppe(vt, vh) = { "vh|person:third,t:pres" ∧ "vt|person:third,t:pres",
                    "vh|t:past" ∧ "vt|t:past",
                    "vh|t:pres cont" ∧ "vt|t:pres cont",
                    "vh|person:nothird,t:pres" ∧ "vt|person:nothird,t:pres" }

  additional sets
    Fagent(v) = { "agent(v)|num:sing", "agent(v)|num:plur" }
    F(v)      = { "v|person:third,t:pres", "v|person:nothird,t:pres", "v|t:past" }
    Fall(v)   = { "v|person:third,t:pres", "v|t:pres cont", "v|person:nothird,t:pres", "v|t:past" }

For example, in the case of the verbs play and win, the set of textual entailment expressions derived from the nominalized patterns is Pnom(win, play) = {"player wins", "players win", "player won", "players won"}. In the experiments described hereafter, the required verbal forms have been obtained using the publicly available morphological tools described in (Minnen et al., 2001). Simple heuristics have been used to produce the agentive nominalizations of verbs: "-er" is added to the verb root, taking into account special cases such as verbs ending in "-y", and a form is retained as a correct nominalization only if it is in WordNet. Two more sets of expressions, Fagent(v) and F(v), representing the single events in the pair, are needed for the second step (Sec. 4.2); these two additional sets are also described in Tab. 1. In the example, the derived expressions are Fagent(play) = {"player", "players"} and F(win) = {"wins", "won"}.

4.2 Measures to estimate the entailment strength

The above textual entailment patterns define point-wise entailment assertions. If pattern instances are found in texts, the related verb-subject pairs suggest, but do not confirm, a verb selectional preference: the related entailment cannot be considered commonly agreed. For example, the sentence "Like a writer composes a story, an artist must tell a good story through their work." suggests that compose entails write. However, it may happen that such correctly detected entailments are accidental, that is, that the detected relation is only valid for the given text. For example, if the text fragment "The writers take a simple idea and apply it to this task" is taken in isolation, it suggests that take entails write, but this is questionable. In order to get rid of these wrong verb pairs, we perform a statistical analysis of the verb selectional preferences over a corpus. This assessment will validate the point-wise entailment assertions.
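As a concrete illustration of the pattern instantiation just described, the following sketch (Python; neither the implementation language nor these exact rules are claimed to be those of the original system) generates the Pnom, Fagent and F surface strings for a candidate pair. The inflection and "-er" nominalization rules are deliberately toy stand-ins for the morphological tools of (Minnen et al., 2001), and the WordNet check uses NLTK.

    from nltk.corpus import wordnet as wn

    def agentive_noun(verb):
        """Naive "-er" agentive nominalization with a "-y" special case,
        validated against WordNet (a toy version of the heuristic above)."""
        if verb.endswith('e'):
            cand = verb + 'r'                          # bake -> baker
        elif verb.endswith('y') and verb[-2:-1] not in 'aeiou':
            cand = verb[:-1] + 'ier'                   # carry -> carrier
        else:
            cand = verb + 'er'                         # play -> player
        return cand if wn.synsets(cand, pos=wn.NOUN) else None

    def third_singular(verb):
        # toy inflection; the real experiments use a morphological generator
        return verb + 'es' if verb.endswith(('s', 'sh', 'ch', 'o', 'x', 'z')) else verb + 's'

    def nominalized_patterns(vt, vh, past_vt):
        """Surface strings for Pnom(vt, vh); the past form is passed in
        because it may be irregular (e.g. win -> won)."""
        agent = agentive_noun(vh)
        if agent is None:
            return []
        return [agent + ' ' + third_singular(vt),      # "player wins"
                agent + 's ' + vt,                     # "players win"
                agent + ' ' + past_vt,                 # "player won"
                agent + 's ' + past_vt]                # "players won"

    def event_sets(vt, vh, past_vt):
        """Fagent(vh) and F(vt), the single-event query sets of Tab. 1."""
        agent = agentive_noun(vh)
        f_agent = [agent, agent + 's'] if agent else []
        f_vt = [third_singular(vt), vt, past_vt]
        return f_agent, f_vt

    print(nominalized_patterns('win', 'play', past_vt='won'))
    # ['player wins', 'players win', 'player won', 'players won']
    print(event_sets('win', 'play', past_vt='won'))
    # (['player', 'players'], ['wins', 'win', 'won'])

Under these toy rules the output reproduces the Pnom(win, play), Fagent(play) and F(win) examples given above.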
Before introducing the statistical entailment indicator, we provide some definitions. Given a corpus C containing samples, we will refer to the absolute frequency of a textual expression t in the corpus C as fC(t); the definition extends naturally to a set of expressions T. Given a pair vt and vh, we define the following entailment strength indicator S(vt, vh). Specifically, the measure Snom(vt, vh) is derived from point-wise mutual information (Church and Hanks, 1989):

  Snom(vt, vh) = log [ p(vt, vh|nom) / ( p(vt) p(vh|pers) ) ]    (3)

where nom is the event of having a nominalized textual entailment pattern and pers is the event of having an agentive nominalization of verbs. Probabilities are estimated by maximum likelihood:

  p(vt, vh|nom) ≈ fC(Pnom(vt, vh)) / fC(∪ Pnom(v't, v'h))
  p(vt)         ≈ fC(F(vt)) / fC(∪ F(v))
  p(vh|pers)    ≈ fC(Fagent(vh)) / fC(∪ Fagent(v))

where the unions range over all verbs (or verb pairs) in the vocabulary. Counts are considered useful only when they are greater than or equal to 3. The measure Snom(vt, vh) indicates the relatedness between the two elements of a pair, in line with (Chklovski and Pantel, 2004; Glickman et al., 2005) (see Sec. 5). Moreover, if Snom(vt, vh) > 0, the verb selectional preference property described in eq. (1) is satisfied.

5 Related "non-distributional" methods and integrated approaches

Our method is a "non-distributional" approach to detecting semantic relations between verbs. We are interested in comparing and integrating it with similar approaches. We focus on two methods proposed in (Chklovski and Pantel, 2004) and (Glickman et al., 2005), briefly reviewing them in light of what was introduced in the previous sections; we also present a simple way to combine these different approaches. The lexico-syntactic patterns introduced in (Chklovski and Pantel, 2004) have been developed to detect five kinds of verb relations: similarity, strength, antonymy, enablement, and happens-before. Even if, as discussed in (Chklovski and Pantel, 2004), these patterns are not specifically defined as entailment detectors, they can be useful for this purpose. In particular, some of these patterns can be used to investigate backward-presupposition entailment. Verb pairs related by backward-presupposition are not completely temporally included one in the other (cf. Sec. 3): the entailed verb vh precedes the entailing verb vt. One set of lexical patterns in (Chklovski and Pantel, 2004) seems to capture the same idea: the happens-before (hb) patterns. These patterns are used to detect verbs that do not temporally overlap, whose relation is semantically very similar to entailment. As we will see in the experimental section (Sec. 6), these patterns show a positive relation with the entailment relation. Tab. 1 reports the happens-before lexico-syntactic patterns (Phb) as proposed in (Chklovski and Pantel, 2004). In contrast to what is done in (Chklovski and Pantel, 2004), we decided to directly count patterns derived from different verbal forms and not to use an estimation factor. As in our work, a mutual-information-related measure is used in (Chklovski and Pantel, 2004) as the statistical indicator, so the two methods are fairly in line. The other approach we experiment with is the "quasi-pattern" used in (Glickman et al., 2005) to capture lexical entailment between two sentences. This pattern has to be discussed in the more general setting of probabilistic entailment between texts: the text T and the hypothesis H.
The idea is that the implication T → H holds (with a degree of truth) if the probability that H holds knowing that T holds is higher than the probability that H holds alone, i.e.:

  p(H|T) > p(H)    (4)

This equation is similar to equation (1) in Sec. 2. In (Glickman et al., 2005), words in H and T are supposed to be mutually independent, so the relation between the H and T probabilities also holds for word pairs. A special case can be applied to verb pairs:

  p(vh|vt) > p(vh)    (5)

Equation (5) can be interpreted as the result of the following "quasi-pattern": the verbs vh and vt should co-occur in the same document. It is possible to formalize this idea in the probabilistic entailment "quasi-patterns" reported in Tab. 1 as Ppe, where verb form variability is taken into consideration. In (Glickman et al., 2005), point-wise mutual information is also a relevant statistical indicator for entailment, as it is strictly related to eq. (5). For both approaches, the strength indicators Shb(vt, vh) and Spe(vt, vh) are computed as follows:

  Sy(vt, vh) = log [ p(vt, vh|y) / ( p(vt) p(vh) ) ]    (6)

where y is hb for the happens-before patterns and pe for the probabilistic entailment patterns. Probabilities are estimated as in the previous section. Considering the probability spaces in which the three patterns lie to be independent (i.e., the space of subject-verb pairs for nom, the space of coordinated sentences for hb, and the space of documents for pe), the combined approaches are obtained by summing up Snom, Shb, and Spe. We then experiment with these combined approaches: nom+pe, nom+hb, nom+hb+pe, and hb+pe.

6 Experimental Evaluation

[Figure 1: ROC curves (Se(t) vs. 1 − Sp(t)) for the different methods: panel (a) compares nom, hb, pe and their combinations; panel (b) compares the variants discussed in Sec. 6.2.]

The aim of the experimental evaluation is to establish if the nominalized pattern is useful in detecting verb entailment. We experiment with the method by itself or in combination with other sets of patterns. We are interested only in verb pairs where the nominalized pattern is applicable. The best pattern or combined method should be the one that gives the highest values of S to verb pairs in an entailment relation, and the lowest values to other pairs. We need a corpus C over which to estimate probabilities, and two datasets: one of verb entailment pairs, the True Set (TS), and another of verbs not in entailment, the Control Set (CS). We use the web as the corpus C over which to estimate the mutual-information-based indicators S, and Google as the count estimator. The web has been largely employed as a corpus (e.g., (Turney, 2001)), and the findings described in (Keller and Lapata, 2003) suggest that the count estimations we need in our study over Subject-Verb bigrams are highly correlated to corpus counts.

6.1 Experimental settings

Since we have a predefined (but not exhaustive) set of verb pairs in entailment, i.e. ent in WordNet, we cannot replicate a natural distribution of verb pairs that are or are not in entailment, so recall and precision lose their sense. The best way to compare the patterns is therefore to use the ROC curve (Green and Swets, 1996), mixing sensitivity and specificity. ROC analysis provides a natural means to check and estimate how well a statistical measure is able to distinguish positive examples, the True Set (TS), from negative examples, the Control Set (CS).
Given a threshold t, Se(t) is the probability that a candidate pair (vh, vt) belongs to the True Set if the test is positive, while Sp(t) is the probability that it belongs to the Control Set if the test is negative, i.e.:

  Se(t) = p((vh, vt) ∈ TS | S(vh, vt) > t)
  Sp(t) = p((vh, vt) ∈ CS | S(vh, vt) < t)

The ROC curve (Se(t) vs. 1 − Sp(t)) naturally follows (see Fig. 1). Better methods will have ROC curves closer to the step function f(1 − Sp(t)) = 0 when 1 − Sp(t) = 0 and f(1 − Sp(t)) = 1 when 0 < 1 − Sp(t) ≤ 1. ROC analysis provides another useful evaluation tool: the AROC, i.e. the total area under the ROC curve. Statistically, the AROC represents the probability that the method under evaluation will rank a randomly chosen positive example higher than a randomly chosen negative instance. The AROC is usually used to better compare two methods that have similar ROC curves; better methods will have higher AROCs. As True Set (TS) we use the controlled verb entailment pairs ent contained in WordNet. As described in Sec. 3, the entailment relation is a semantic relation defined at the synset level, standing in the verb sub-hierarchy: each pair of synsets (St, Sh) is an oriented entailment relation between St and Sh. WordNet contains 409 entailed synsets. These entailment relations are consequently stated also at the lexical level: the pair (St, Sh) naturally implies that vt entails vh for each possible vt ∈ St and vh ∈ Sh. It is possible to derive from the 409 entailment synsets a test set of 2,233 verb pairs. As Control Set we use two sets: random and ent-rev. The random set is randomly generated using the verbs in ent, taking care to avoid capturing pairs in an entailment relation; a pair is considered a control pair if it is not in the True Set (the intersection between the True Set and the Control Set is empty). The ent-rev set contains the pairs of ent taken in the reverse order. These two Control Sets give two possible ways of evaluating the methods: a general and a more complex task. As a pre-processing step, we have to clean the two sets of pairs in which the hypothesis cannot be nominalized, as our pattern Pnom is applicable only in these cases. The pre-processing step retains 1,323 entailment verb pairs. For comparative purposes the random Control Set is kept with the same cardinality as the True Set (in all, 1,400 verb pairs). S is then evaluated for each pattern over the True Set and the Control Set, using equation (3) for Pnom and equation (6) for Ppe and Phb. The best pattern or combined method is the one that most neatly splits entailment pairs from random pairs, that is, the one that on average assigns higher S values to pairs in the True Set.

6.2 Results and analysis

In the first experiment we compared the performances of the methods in dividing the ent test set and the random control set. The compared methods are: (1) each set of patterns taken alone, i.e. nom, hb, and pe; (2) some combined methods, i.e. nom+pe, hb+pe, and nom+hb+pe. The results of this first experiment are reported in Tab. 2 and Fig. 1.(a). As Figure 1.(a) shows, our nominalization pattern Pnom performs better than the others. Only Phb seems to outperform nominalization at some points of the ROC curve, where Pnom presents a slight concavity, maybe due to a consistent overlap between positive and negative examples at specific values of the threshold t. In order to understand which of the two patterns has the best discrimination power, a comparison of the AROC values is needed.
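To make the evaluation machinery concrete, here is a small self-contained sketch (Python; an illustrative assumption for this edit, not code from the paper) of the strength indicator of eqs. (3) and (6) and of Se(t), Sp(t) and the AROC, given scored pairs from the two sets. The counts and scores are placeholders; in the paper the counts come from web queries.

    import math

    def strength(pair_count, pair_total, vt_count, vt_total, vh_count, vh_total):
        """PMI-style indicator of eqs. (3)/(6): log p(vt,vh|y) / (p(vt) p(vh))."""
        if min(pair_count, vt_count, vh_count) < 3:   # counts below 3 are discarded
            return None
        return math.log((pair_count / pair_total) /
                        ((vt_count / vt_total) * (vh_count / vh_total)))

    def se_sp(true_scores, control_scores, t):
        """Sensitivity Se(t) and specificity Sp(t) at threshold t."""
        se = sum(s > t for s in true_scores) / len(true_scores)
        sp = sum(s < t for s in control_scores) / len(control_scores)
        return se, sp

    def aroc(true_scores, control_scores):
        """Area under the ROC curve: the probability that a positive pair
        is ranked above a negative one (ties count one half)."""
        wins = 0.0
        for s_pos in true_scores:
            for s_neg in control_scores:
                wins += 1.0 if s_pos > s_neg else (0.5 if s_pos == s_neg else 0.0)
        return wins / (len(true_scores) * len(control_scores))

    # toy usage with made-up S values
    ts = [1.2, 0.8, 0.3, 1.9]     # pairs from the True Set
    cs = [0.1, -0.4, 0.9, -1.0]   # pairs from the Control Set
    print(se_sp(ts, cs, t=0.5), aroc(ts, cs))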
As Table 2 shows, Pnom has the best AROC value (59.94%), indicating a more interesting behaviour than Phb and Ppe: it is respectively 2 and 3 absolute percentage points higher. Moreover, the combinations nom+hb+pe and nom+pe that include the Pnom pattern have a very high performance considering the difficulty of the task, i.e. 66% and 64%. Compared with the combination hb+pe that excludes the Pnom pattern (61%), the improvement in the AROC is 5% and 3%. Moreover, the shape of the nom+hb+pe ROC curve in Fig. 1.(a) is above all the others at every point.

Table 2: Performances in the general case: ent vs. random
(hb-rev denotes the happens-before patterns applied with the two verbs swapped, as discussed below)
                      AROC    best accuracy
  hb                  56.00   57.11
  pe                  57.00   55.75
  nom                 59.94   59.86
  nom + pe            64.40   61.33
  hb + pe             61.44   58.98
  hb + nom + pe       66.44   63.09
  hb-rev              61.64   62.73
  hb-rev + pe         69.03   64.71
  hb-rev + nom + pe   70.82   66.07

Table 3: Performances in the complex case: ent vs. ent-rev
                      AROC    best accuracy
  hb                  43.82   50.11
  nom                 54.91   54.94
  hb-rev              56.18   57.16
  hb + nom            49.35   51.73
  hb-rev + nom        57.67   57.22

In the second experiment we compared the methods in the more complex task of dividing the ent set from the ent-rev set. In this case the methods are asked to determine that win → play is a correct entailment while play → win is not. The results of this set of experiments are presented in Tab. 3. The nominalized pattern nom preserves its discriminative power: its AROC is above the chance line even if, as expected, it is worse than the one obtained in the general case. Surprisingly, the happens-before (hb) set of patterns seems not to be correlated with the entailment relation: the temporal relation vh-happens-before-vt does not seem to be captured by those patterns. But, seen in a positive way, this evidence suggests that the patterns capture entailment better when used in the reversed way (hereafter hb-rev), and this is confirmed by its AROC value. Observing, for example, one of the implications in the True Set, reach → go, what is happening may become clearer. Sample sentences for the hb and the hb-rev case are, respectively, "The group therefore elected to go to Tyso and then reach Anskaven" and "striving to reach personal goals and then go beyond them". It seems that in the second case then assumes an enabling role more than a merely temporal one. After this surprising result, as we expected, in this experiment even the combined approach hb-rev+nom behaves better than hb+nom and better than hb-rev, being respectively around 8 and 1.5 absolute points higher (see Tab. 3). The above results called for a third experiment over the general case: we need to compare the entailment indicators derived by exploiting the new use of hb, i.e. hb-rev, with the methods used in the first experiment. Results are reported in Tab. 2 and Fig. 1.(b). As Fig. 1.(b) shows, hb-rev has a very interesting behaviour for small values of 1 − Sp(t), where it behaves far better than the combined method nom+hb+pe. This is an advantage, and the combined method nom+hb-rev+pe exploits it, as both the AROC and the shape of the ROC curve demonstrate. Again, the method nom+hb-rev+pe that includes the Pnom pattern is about 1.5 absolute points higher than the combined method hb-rev+pe that does not include this information.

7 Conclusions

In this paper we presented a method to discover asymmetric entailment relations between verbs and we empirically demonstrated interesting improvements when it is used in combination with similar approaches. The method is promising and there is still some space for improvement.
As implicitly experimented in (Chklovski and Pantel, 2004), some beneficial effect can be obtained combining these “non-distributional” methods with the methods based on the Distributional Hypothesis. References Timoty Chklovski and Patrick Pantel. 2004. VerbOCEAN: Mining the web for fine-grained semantic verb relations. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, Barcellona, Spain. Kenneth Ward Church and Patrick Hanks. 1989. Word association norms, mutual information and lexicography. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics (ACL), Vancouver, Canada. Oren Glickman and Ido Dagan. 2003. Identifying lexical paraphrases from a single corpus: A case study for verbs. In Proceedings of the International Conference Recent Advances of Natural Language Processing (RANLP-2003), Borovets, Bulgaria. Oren Glickman, Ido Dagan, and Moshe Koppel. 2005. Web based probabilistic textual entailment. In Proceedings of the 1st Pascal Challenge Workshop, Southampton, UK. David M. Green and John A. Swets. 1996. Signal Detection Theory and Psychophysics. John Wiley and Sons, New York, USA. Zellig Harris. 1964. Distributional structure. In Jerrold J. Katz and Jerry A. Fodor, editors, The Philosophy of Linguistics, New York. Oxford University Press. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 15th International Conference on Computational Linguistics (CoLing-92), Nantes, France. Frank Keller and Mirella Lapata. 2003. Using the web to obtain frequencies for unseen bigrams. Computational Linguistics, 29(3), September. Dekan Lin and Patrick Pantel. 2001. DIRT-discovery of inference rules from text. In Proc. of the ACM Conference on Knowledge Discovery and Data Mining (KDD-01), San Francisco, CA. George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39–41, November. Guido Minnen, John Carroll, and Darren Pearce. 2001. Applied morphological processing of english. Natural Language Engineering, 7(3):207–223. Emmanuel Morin. 1999. Extraction de liens s´emantiques entre termes `a partir de corpus de textes techniques. Ph.D. thesis, Univesit´e de Nantes, Facult´e des Sciences et de Techniques. Deepak Ravichandran and Eduard Hovy. 2002. Learning surface text patterns for a question answering system. In Proceedings of the 40th ACL Meeting, Philadelphia, Pennsilvania. Philip Resnik and Mona Diab. 2000. Measuring verb similarity. In Twenty Second Annual Meeting of the Cognitive Science Society (COGSCI2000), Philadelphia. Philip Resnik. 1993. Selection and Information: A Class-Based Approach to Lexical Relationships. Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania. Harold R. Robison. 1970. Computer-detectable semantic structures. Information Storage and Retrieval, 6(3):273–288. Peter D. Turney. 2001. Mining the web for synonyms: Pmi-ir versus lsa on toefl. In Proc. of the 12th European Conference on Machine Learning, Freiburg, Germany. 856 | 2006 | 107 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 857–864, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Event Extraction in a Plot Advice Agent Harry Halpin School of Informatics University of Edinburgh 2 Buccleuch Place Edinburgh, EH8 9LW Scotland, UK [email protected] Johanna D. Moore School of Informatics University of Edinburgh 2 Buccleuch Place Edinburgh, EH8 9LW Scotland, UK [email protected] Abstract In this paper we present how the automatic extraction of events from text can be used to both classify narrative texts according to plot quality and produce advice in an interactive learning environment intended to help students with story writing. We focus on the story rewriting task, in which an exemplar story is read to the students and the students rewrite the story in their own words. The system automatically extracts events from the raw text, formalized as a sequence of temporally ordered predicate-arguments. These events are given to a machine-learner that produces a coarse-grained rating of the story. The results of the machine-learner and the extracted events are then used to generate fine-grained advice for the students. 1 Introduction In this paper we investigate how features of a text discovered via automatic event extraction can be used in both natural language understanding and advice generation in the domain of narrative instruction. The background application is a fully automated plot analysis agent to improve the writing of students could be used by current narrative tutoring systems (Robertson and WiemerHastings, 2002). As shown by participatory design studies, teachers are interested in a plot analysis agent that can give online natural language advice and many students enjoy feedback from an automated agent (Robertson and Cross, 2003). We use automatic event extraction to create a storyindependent automated agent that can both analyze the plot of a story and generate appropriate advice. 1.1 The Story Rewriting Task A task used in schools is the story rewriting task, where a story, the exemplar story, is read to the students, and afterwards the story is rewritten by each student, providing a corpus of rewritten stories. This task tests the students ability to both listen and write, while removing from the student the cognitive load needed to generate a new plot. This task is reminiscent of the well-known “War of the Ghosts” experiment used in psychology for studying memory (Bartlett, 1932) and related to work in fields such as summarization (Lemaire et al., 2005) and narration (Halpin et al., 2004). 1.2 Agent Design The goal of the agent is to classify each of the rewritten stories for overall plot quality. This rating can be used to give “coarse-grained” general advice. The agent should then provide “finegrained” specific advice to the student on how their plot could be improved. The agent should be able to detect if the story should be re-read or a human teacher summoned to help the student. To accomplish this task, we extract events that represent the entities and their actions in the plot from both the exemplar and the rewritten stories. A plot comparison algorithm checks for the presence or absence of events from the exemplar story in each rewritten story. The results of this algorithm will be used by a machine-learner to classify each story for overall plot quality and provide general “canned” advice to the student. 
The features statistically shared by "excellent" stories represent the important events of the exemplar story. The results of a search for these important events in a rewritten story provide the input needed by the templates to generate specific advice for a student.

2 Corpus

In order to train our agent, we collected a corpus of 290 stories from primary schools based on two different exemplar stories. The first is an episode of "The Wonderful Adventures of Nils" by Selma Lagerloff (160 stories) and the second a re-telling of "The Treasure Thief" by Herodotus (130 stories). These will be referred to as the "Adventure" and "Thief" corpora.

2.1 Rating

An experienced teacher, Rater A, designed a rating scheme equivalent to those used in schools. The scheme rates the stories as follows:

1. Excellent: An excellent story shows that the student has "read beyond the lines" and demonstrates a deep understanding of the story, using inference to grasp points that may not have been explicit in the story. The student should be able to retrieve all the important links, and not all the details, but the right details.

2. Good: A good story shows that the student understood the story and has "read between the lines." The student recalls the main events and links in the plot. However, the student shows no deep understanding of the plot and does not make use of inference. This can often be detected by the student leaving out an important link or emphasizing the wrong details.

3. Fair: A fair story shows that the student has listened to the story but not understood it, and so is only trying to repeat what they have heard. This is shown by the fact that the fair story is missing multiple important links in the story, including a possibly vital part of the story.

4. Poor: A poor story shows the student has had trouble listening to the story. The poor story is missing a substantial amount of the plot, with characters left out and events confused. The student has trouble connecting the parts of the story.

To check the reliability of the rating scheme, two other teachers (Rater B and Rater C) rated subsets (82 and 68 stories respectively) of each of the corpora. While their absolute agreement with Rater A makes the task appear subjective (58% for B and 53% for C), their relative agreement was high, as almost all disagreements were by one level in the rating scheme. Therefore we use Cronbach's α and Kendall's τb instead of Cohen's or Fleiss' κ, to take into account the fact that our scale is ordinal. Between Raters A and B there was a Cronbach's α statistic of .90 and a Kendall's τb statistic of .74; between Raters B and C, a Cronbach's α statistic of .87 and a Kendall's τb statistic of .67. These statistics show the rating scheme to be reliable; the distribution of plot ratings is given in Table 1.

Table 1: Probability Distribution of Ratings
  Class          Adventure   Thief
  1 (Excellent)  .231        .146
  2 (Good)       .300        .377
  3 (Fair)       .156        .292
  4 (Poor)       .313        .185

2.2 Linguistic Issues

One challenge facing this task is the ungrammatical and highly irregular text produced by the students. Many stories consist of one long run-on sentence. This leads a traditional parsing system with a direct mapping from the parse tree to a semantic representation to fail to achieve a parse on 35% of the stories, and as such it could not be used (Bos et al., 2004). The stories exhibit frequent use of reported speech and the switching from first-person to third-person within a single sentence.
Lastly, the use of incorrect spelling e.g., “stalk” for “stork” appearing in multiple stories in the corpus, the consistent usage of homonyms such as “there” for “their,” and the invention of words (“torlix”), all prove to be frequent. 3 Plot Analysis To automatically rate student writing many tutoring systems use Latent Semantic Analysis, a variation on the “bag-of-words” technique that uses dimensionality reduction (Graesser et al., 2000). We hypothesize that better results can be achieved using a “representational” account that explicitly represents each event in the plot. These semantic relationships are important in stories, e.g., “The thief jumped on the donkey” being distinctly different from “The donkey jumped on the thief.” What characters participate in an action matter, since “The king stole the treasure” reveals a major 858 misunderstanding while “The thief stole the treasure” shows a correct interpretation by the student. 3.1 Stories as Events We represent a story as a sequence of events, p1...ph, represented as a list of predicatearguments, similar to the event calculus (Mueller, 2003). Our predicate-argument structure is a minimal subset of first-order logic (no quantifiers), and so is compatible with case-frame and dependency representations. Every event has a predicate (function) p that has one or more arguments, n1...na. In the tradition of Discourse Representation Theory (Kamp and Reyle, 1993), our current predicate argument structure could be converted automatically to first order logic by using a default existential quantification over the predicates and joining them conjunctively. Predicate names are often verbs, while their arguments are usually, although not exclusively, nouns or adjectives. When describing a set of events in the story, a superscript is used to keep the arguments in an event distinct, as n2 5 is argument 2 in event 5. The same argument name may appear in multiple events. The plot of any given story is formalized as an event structure composed of h events in a partial order, with the partial order denoting their temporal order: p1(n1 1, n2 1, ...na 1), ...., ph(n2 h, n4 h...nc h) An example from the “Thief” exemplar story is “The Queen nagged the king to build a treasure chamber. The king decided to have a treasure chamber.” This can be represented by an event structure as: nag(king, queen) build(chamber) decide(king) have(chamber) Note due the ungrammatical corpus we cannot at this time extract neo-Davidsonian events. A sentence maps onto one, multiple, or no events. A unique name and closed-world assumption is enforced, although for purposes of comparing event we compare membership of argument and predicate names in WordNet synsets in addition to exact name matches (Fellbaum, 1998). 4 Extracting Events Paralleling work in summarization, it is hypothesized that the quality of a rewritten story can be defined by the presence or absence of “semantic content units” that are crucial details of the text that may have a variety of syntactic forms (Nenkova and Passonneau, 2004). We further hypothesize these can be found in chunks of the text automatically identified by a chunker, and we can represent these units as predicate-arguments in our event structure. The event structure of each story is automatically extracted using an XMLbased pipeline composed of NLP processing modules, and unlike other story systems, extract full events instead of filling in a frame of a story script (Riloff, 1999). 
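As a concrete rendering of the representation in Sec. 3.1, the sketch below (Python is assumed; the paper does not tie the representation to any language) encodes events as predicate-argument tuples kept in temporal order, using the "Thief" example above.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Event:
        predicate: str            # usually a verb lemma, e.g. "nag"
        args: Tuple[str, ...]     # usually noun/adjective lemmas, e.g. ("king", "queen")

    @dataclass
    class Story:
        events: List[Event] = field(default_factory=list)  # kept in temporal order

    # "The Queen nagged the king to build a treasure chamber. The king
    # decided to have a treasure chamber." becomes:
    exemplar = Story(events=[
        Event("nag", ("king", "queen")),
        Event("build", ("chamber",)),
        Event("decide", ("king",)),
        Event("have", ("chamber",)),
    ])

    for e in exemplar.events:
        print(f"{e.predicate}({', '.join(e.args)})")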
Using the latest version of the Language Technology Text Tokenization Toolkit (Grover et al., 2000), words are tokenized and sentence boundaries detected. Words are given partof-speech tags by a maximum entropy tagger from the toolkit. We do not attempt to obtain a full parse of the sentence due to the highly irregular nature of the sentences. Pronouns are resolved using a rule-based reimplementation of the CogNIAC algorithm (Baldwin, 1997) and sentences are lemmatized and chunked using the Cass Chunker (Abney, 1995). It was felt the chunking method would be the only feasible way to retrieve portions of the sentences that may contain complete “semantic content units” from the ungrammatical and irregular text. The application of a series of rules, mainly mapping verbs to predicate names and nouns to arguments, to the results of the chunker produces events from chunks as described in our previous work (McNeill et al., 2006). The accuracy of our rule-set was developed by using the grammatical exemplar stories as a testbed, and a blind judge found they produced 68% interpretable or “sensible” events given the ungrammatical text. Students usually use the present or past tense exclusively throughout the story and events are usually presented in order of occurrence. An inspection of our corpus showed 3% of stories in our corpus seemed to get the order of events wrong (Hickmann, 2003). 4.1 Comparing Stories Since the student is rewriting the story using their own words, a certain variance from the plot of the exemplar story should be expected and even rewarded. Extra statements that may be true, but are not explicitly stated in the story, can be inferred by the students. Statements that are true but are not highly relevant to the course of the 859 plot can likewise be left out. Word similarity must be taken into account, so that “The king is protecting his gold” can be recognized as “The pharaoh guarded the treasure.” Characters change in context, as one character that is described as the “younger brother” is from the viewpoint of his mother “the younger son.” So, building a model from the events of two stories and simply checking equivalence can not be used for comparison, since a wide variety of partial equivalence must be taken into account. Instead of using absolute measures of equivalence based on model checking or measures based on word distribution, we compare each story on the basis of the presence or absence of events. This approach takes advantage of WordNet to define synonym matching and uses the relational structure of the events to allow partial matching of predicate functions and arguments. The events of the exemplar story are assumed to be correct, and they are searched for in the rewritten story in the order in which they occur in the exemplar. If an event is matched (including using WordNet), then in turn each of the arguments attempts to be matched. This algorithm is given more formally in Figure 1. The complete event structure from the exemplar story, E, and the complete event structure from the rewritten story R, with each individual event predicate name labelled as e and r respectively, and their arguments labelled as n in either Ne and Nr. SYN(x) is the synset of the term x, including hypernyms and hyponyms except upper ontology ones. The results of the algorithm are stored in binary vector F with index i. 1 denotes an exact match or WordNet synset match, and 0 a failure to find any match. 
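A rough, runnable rendering of the comparison step just described is sketched next (the formal version is given as Algorithm 4.1 / Figure 1 below). NLTK's WordNet interface and a one-level hypernym/hyponym expansion are assumptions of this sketch; the authors' exact synset expansion (e.g. the exclusion of upper-ontology synsets) is only approximated, and matching is reduced to "any element of the rewritten story matches".

    from nltk.corpus import wordnet as wn

    def syn_terms(word):
        """Approximate SYN(x): the word plus lemmas of its synsets and their
        direct hypernyms/hyponyms (the paper's upper-ontology filter is omitted)."""
        terms = {word}
        for s in wn.synsets(word):
            for related in [s] + s.hypernyms() + s.hyponyms():
                terms.update(l.name().replace('_', ' ') for l in related.lemmas())
        return terms

    def matches(a, b):
        return a == b or b in syn_terms(a) or a in syn_terms(b)

    def plot_compare(exemplar_events, rewritten_events):
        """Binary vector F: one slot per exemplar event and per argument,
        1 if a (WordNet-mediated) match is found in the rewritten story."""
        f = []
        rewritten_args = [a for _, args in rewritten_events for a in args]
        for pred, args in exemplar_events:
            f.append(int(any(matches(pred, p) for p, _ in rewritten_events)))
            for arg in args:
                f.append(int(any(matches(arg, ra) for ra in rewritten_args)))
        return f

    exemplar  = [("nag", ("king", "queen")), ("build", ("chamber",))]
    rewritten = [("pester", ("king", "queen")), ("construct", ("room",))]
    print(plot_compare(exemplar, rewritten))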
4.2 Results

As a baseline system, LSA produces a similarity score for each rewritten story by comparing it to the exemplar; this score is used as a distance metric for a k-Nearest Neighbor classifier (Deerwester et al., 1990). The parameters for LSA were empirically determined to be a dimensionality of 200 over the semantic space given by the recommended reading list for American 6th graders (Landauer and Dumais, 1997). These parameters resulted in the LSA similarity score having a Pearson's correlation of -.520 with Rater A. k was found to be optimal at 9.

Algorithm 4.1: PLOTCOMPARE(E, R)
  i ← 0
  f ← ∅
  for e ∈ E do
    for r ∈ R do
      if e = SYN(r) then fi ← 1
      else fi ← 0
    for ne ∈ Ne do
      for nr ∈ Nr do
        if ne = SYN(nr) then fi ← 1
        else fi ← 0
      i = i + 1
Figure 1: Plot Comparison Algorithm

Table 2: Machine-Learning Results
  Classifier    Corpus      Features   % Correct
  k-NN          Adventure   LSA        47.5
  Naive Bayes   Adventure   PLOT       55.6
  k-NN          Thief       LSA        41.2
  Naive Bayes   Thief       PLOT       45.4

The results of the plot comparison algorithm were given as features to machine-learners, with results produced using ten-fold cross-validation. A Naive Bayes learner discovers the different statistical distributions of events for each rating. The results for both the "Adventure" and "Thief" stories are displayed in Table 2. "PLOT" means the results of the Plot Comparison Algorithm were used as features for the machine-learner, while "LSA" means the similarity scores for Latent Semantic Analysis were used instead. Note that the same machine-learner could not be used to judge the effect of LSA and PLOT, since LSA scores are real numbers while PLOT is a set of features encoded as binary vectors. The results do not seem remarkable at first glance. However, recall that the human raters had an average of 56% agreement on story ratings, and in that light the Naive Bayes learner approaches the performance of human raters. Surprisingly, when the LSA score is used as a feature in addition to the results of the plot comparison algorithm for the Naive Bayes learners, there is no further improvement. This shows that the features given by the event structure characterize plot structure better than the word distribution.

Table 3: Naive Bayes Confusion Matrix: "Adventure"
  Class          1    2    3    4
  1 (Excellent)  14   22   0    1
  2 (Good)       5    36   0    7
  3 (Fair)       3    20   0    2
  4 (Poor)       0    11   0    39

Table 4: Naive Bayes Results: "Adventure"
  Class       Precision   Recall
  Excellent   .64         .38
  Good        .40         .75
  Fair        .00         .00
  Poor        .80         .78

Unlike previous work, the use of both the plot comparison results and LSA did not improve performance for Naive Bayes, so the results of using Naive Bayes with both are not reported (Halpin et al., 2004). The results for the "Adventure" corpus are in general better than the results for the "Thief" corpus. However, this is due to the "Thief" corpus being smaller and having an infrequent number of "Excellent" and "Poor" stories, as shown in Table 1. In the "Thief" corpus the learner simply collapses most stories into "Good," resulting in very poor performance. Another factor may be that the "Thief" story was more complex than the "Adventure" story, featuring 9 characters over 5 scenes, as opposed to the "Adventure" story, which featured 4 characters over 2 scenes. For the "Adventure" corpus, the Naive Bayes classifier produces the best results, as detailed in Table 4 and the confusion matrix in Table 3.
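For the classification step itself, a minimal sketch along these lines (scikit-learn is assumed here; the paper does not say which Naive Bayes implementation was used) feeds binary PLOT vectors to a Bernoulli Naive Bayes model under ten-fold cross-validation. The data below are random placeholders standing in for the real feature vectors and teacher ratings.

    import numpy as np
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.model_selection import cross_val_score

    # X: one row per rewritten story, one binary column per exemplar event/argument
    # (the output of the plot comparison step); y: teacher ratings 1-4.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(290, 40))   # placeholder feature vectors
    y = rng.integers(1, 5, size=290)         # placeholder ratings

    scores = cross_val_score(BernoulliNB(), X, y, cv=10)
    print("10-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))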
A close inspection of the results shows that in the “Adventure Corpus” the “Poor” and “Good” stories are classified in general fairly well by the Naive Bayes learner, while some of the “Excellent” stories are classified as correctly. A significant number of both “Excellent” and most “Fair” stories are classified as “Good.” The “Fair” category, due to its small size in the training corpus, has disappeared. No “Poor” stories are classified as “Excellent,” and no “Excellent” stories are classified as “Poor.” The increased difficulty in distinguishing “Excellent” stories from “Good” stories is likely due to the use of inference by “Excellent” stories, which our system does not use. An inspection of the rating scale’s wording reveals the similarity in wording between the “Fair” and “Good” ratings. This may explain the lack of “Fair” stories in the corpus and therefore the inability of machine-learners to recognize them. As given by a survey of five teachers experienced in using the story rewriting task in schools, this level of performance is not ideal but acceptable to teachers. Our technique is also shown to be easily portable over different domains where a teacher can annotate around one hundred sample stories using our scale, although performance seems to suffer the more complex a story is. Since the Naive Bayes classifier is fast (able to classify stories in only a few seconds) and the entire algorithm from training to advice generation (as detailed below) is fully automatic once a small training corpus has been produced, this technique can be used in reallife tutoring systems and easily ported to other stories. 5 Automated Advice The plot analysis agent is not meant to give the students grades for their stories, but instead use the automatic ratings as an intermediate step to produce advice, like other hybrid tutoring systems (Rose et al., 2002). The advice that the agent can generate from the automatic rating classification is limited to coarse-grained general advice. However, by inspecting the results of the plot comparison algorithm, our agent is capable of giving detailed fine-grained specific advice from the relationships of the events in the story. One tutoring system resembling ours is the WRITE system, but we differ from it by using event structure to represent the information in the system, instead of using rhetorical features (Burstein et al., 2003). In this regards it more closely resembles the physics tutoring system WHY-ATLAS, although we deal with narrative stories of a longer length than physics essays. The WHY-ATLAS physics tutor identifies missing information in the explanations of students using theorem-proving (Rose et al., 2002). 5.1 Advice Generation Algorithm Different types of stories need different amounts of advice. An “Excellent” story needs less advice than a “Good” story. One advice statement is “general,” while the rest are specific. The system 861 produces a total of seven advice statements for a “Poor” story, and two less statements for each rating level above “Poor.” With the aid of a teacher, a number of “canned” text statements offering general advice were created for each rating class. These include statements such as “It’s very good! I only have a few pointers“ for a “Good” story and “Let’s get help from the teacher” for “Poor” story. The advice generation begins by randomly selecting a statement suitable for the rating of the story. 
Those students whose stories are rated “Poor” are asked if they would like to re-read the story and ask a teacher for help. The generation of specific advice uses the results of the plot-comparison algorithm to produce specific advice. A number of advice templates were produced, and the results of the Advice Generation Algorithm fill in the needed values of the template. The φ most frequent events in “Excellent” stories are called the Important Event Structure, which represents the “important” events in the story in temporal order. Empirical experiments led us φ = 10 for the “Adventure” story, but for longer stories like the “Thief” story a larger φ would be appropriate. These events correspond to the ones given the highest weights by the Naive Bayes algorithm. For each event in the event structure of a rewritten story, a search for a match in the important event structure is taken. If a predicate name match is found in the important event structure, the search continues to attempt to match the arguments. If the event and the arguments do not match, advice is generated using the structure of the “important” event that it cannot find in the rewritten story. This advice may use both the predicate name and its arguments, such as “Did the stork fly?” from fly(stork). If an argument is missing, the advice may be about only the argument(s), like “Can you tell me more about the stork?” If the event is out of order, advice is given to the student to correct the order, as in “I think something with the stork happened earlier in the story.” This algorithm is formalized in Figure 2, with all variables being the same as in the Plot Analysis Algorithm, except that W is the Important Event Structure composed of events w with the set of arguments Nw. M is a binary vector used to store the success of a match with index i. The ADV function, given an event, generates one adAlgorithm 5.1: ADVICEGENERATE(W, R) for w ∈W do M = ∅ i = 0 for r ∈R do if w = r or SY N(r) then mi = 1 else mi = 0 i = i + 1 for nw ∈Nw do for nr ∈Nr do if nw = SYN(nr) or nr then mi ←1 else mi ←0 i = i + 1 ADV (w, M) Figure 2: Advice Generation Algorithm vice statement to be given to the student. An element of randomization was used to generate a diversity of types of answers. An advice generation function (ADV ) takes an important event (w) and its binary matching vector (M) and generates an advice statement for w. Per important event this advice generation function is parameterized so that it has a 10% chance of delivering advice based on the entire event, 20% chance of producing advice that dealt with temporal order (these being parameters being found ideal after testing the algorithm), and otherwise produces advice based on the arguments. 5.2 Advice Evaluation The plot advice algorithm is run using a randomly selected corpus of 20 stories, 5 from each plot rating level using the “Adventure Corpus.” This produced matching advice for each story, for a total of 80 advice statements. 5.3 Advice Rating An advice rating scheme was developed to rate the advice produced in consultation with a teacher. 1. Excellent: The advice was suitable for the story, and helped the student gain insight into the story. 2. Good: The advice was suitable for the story, 862 Rating % Given Excellent 0 Good 35 Fair 60 Poor 5 Table 5: Advice Rating Results and would help the student. 3. Fair: The advice was suitable, but should have been phrased differently. 4. Poor: The advice really didn’t make sense and would only confuse the student further. 
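Returning to the generation step of Sec. 5.1, the sketch below is one possible rendering of Algorithm 5.1 and the ADV function (Python assumed). Only the 10%/20% split between whole-event and temporal-order advice and the template wordings quoted above come from the paper; the event bookkeeping is simplified and matching is reduced to exact membership for illustration.

    import random

    def adv(event, matched):
        """One advice statement for an important event w = (pred, args);
        'matched' records which elements of w were found in the student's story."""
        pred, args = event
        roll = random.random()
        if roll < 0.10:       # advice about the whole event, e.g. "Did the stork fly?"
            return f"Did the {args[0]} {pred}?" if args else f"Did something {pred}?"
        if roll < 0.30:       # advice about temporal order
            return f"I think something with the {args[0] if args else pred} happened earlier in the story."
        missing = [a for a in args if not matched.get(a, False)]
        if missing:           # advice about missing or unclear arguments
            return f"Can you tell me more about the {' and the '.join(missing)}?"
        return f"Tell me more about {pred}."

    def advice_generate(important_events, rewritten_events, rewritten_args):
        advice = []
        for pred, args in important_events:
            matched = {pred: pred in rewritten_events}
            matched.update({a: a in rewritten_args for a in args})
            if not all(matched.values()):   # only advise on imperfectly matched events
                advice.append(adv((pred, args), matched))
        return advice

    print(advice_generate([("fly", ("stork",))],
                          rewritten_events=set(), rewritten_args=set()))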
Before testing the system on students, it was decided to have teachers evaluate how well the advice given by the system corresponded to the advice they would give in response to a story. A teacher read each story and the advice. They then rated the advice using the advice rating scheme. Each story was rated for its overall advice quality, and then each advice statement was given comments by the teacher, such that we could derive how each individual piece of advice contributed to the global rating. Some of the general “coarsegrained” advice was “Good! You got all the main parts of the story” for an “Excellent” story, “Let’s make it even better!” for a “Good” story, and “Reading the story again with a teacher would be help!” for a “Poor” story. Sometimes the advice generation algorithm was remarkably accurate. In one story the connection between a curse being lifted by the possession of a coin by the character Nils was left out by a student. The advice generation algorithm produced the following useful advice statement: “Tell me more about the curse and Nils.” Occasionally an automatically extracted event that is difficult to interpret by a human or simply incorrectly is extracted. This in turn can cause advice that does not make any sense can be produced, such as “Tell me more about a spot?”. Qualitative analysis showed that “missing important advice” to be the most significant problem, followed by “nonsensical advice.” 5.4 Results The results are given in Table 5. The majority of the advice was rated overall as “fair.” Only one story was given “poor” advice, and a few were given “good” advice. However, most advice rated as “good” was the advice generated by “excellent” stories, which generate less advice than other types of stories. “Poor” stories were given almost entirely “fair” advice, although once “poor” advice was generated. In general, the teacher found “coarse-grained” advice to be very useful, and was very pleased that the agent could detect when the student needed to re-read the story and when a student did not need to write any more. In some cases the specific advice was shown to help provide a “crucial detail” and help “elicit a fact.” The advice was often “repetitive” and ”badly phrased.” The specific advice came under criticism for often not “being directed enough” and for being “too literal” and not “inferential enough.” The rater noticed that “The program can not differentiate between an unfinished story...and one that is confused.” and that “Some why, where and how questions could be used” in the advice. 6 Conclusion and Future Work Since the task involved a fine-grained analysis of the rewritten story, the use of events that take plot structure into account made sense regardless of its performance. The use of events as structured features in a machine-learning classifier outperformed a classifier that relied on a unstructured “bag-of-words” as features. The system achieved close to human performance on rating the stories. Since each of the events used as a feature in the machine-learner corresponds to a particular event in the story, the features are easily interpretable by other components in the system and interpretable by humans. This allows these events to be used in a template-driven system to generate advice for students based on the structure of their plot. Extracting events from text is fraught with error, particularly in the ungrammatical and informal domain used in this experiment. 
This is often a failure of our system to detect semantic content units through either not including them in chunks or only partially including a single unit in a chunk. Chunking also has difficulty dealing with prepositions, embedded speech, semantic role labels, and complex sentences correctly. Improvement in our ability to retrieve semantics would help both story classification and advice generation. Advice generation was impaired by the ability to produce directed questions from the events using templates. This is because while our system could detect important events and their or863 der, it could not make explicit their connection through inference. Given the lack of a large-scale open-source accessible “common-sense” knowledge base and the difficulty in extracting inferential statements from raw text, further progress using inference will be difficult. Progress in either making it easier for a teacher to make explicit the important inferences in the text or improved methodology to learn inferential knowledge from the text would allow further progress. Tantalizingly, this ability for a reader to use “inference to grasp points that may not have been explicit in the story” is given as the hallmark of truly understanding a story by teachers. References Steven Abney. 1995. Chunks and dependencies: Bringing processing evidence to bear on syntax. In Jennifer Cole, Georgia Green, and Jerry Morgan, editors, Computational Linguistics and the Foundations of Linguistic Theory, pages 145–164. Breck Baldwin. 1997. CogNIAC : A High Precision Pronoun Resolution Engine. F.C. Bartlett. 1932. Remembering. Cambridge University Press, Cambridge. Johan Bos, Stephen Clark, Mark Steedman, James Curran, and Julia Hockenmaier. 2004. Wide-coverage semantic representations from a CCG parser. In In Proceedings of the 20th International Conference on Computational Linguistics (COLING ’04). Geneva, Switzerland. Jill Burstein, Daniel Marcu, and Kevin Knight. 2003. Finding the WRITE Stuff: Automatic Identification of Discourse Structure in Student Essays. IEEE Intelligent Systems, pages 32–39. S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R Harshman. 1990. Indexing by Latent Semantic Analysis. Journal of the American Society For Information Science, (41):391–407. Christine Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA. A. Graesser, P. Wiemer-Hastings, K. Wiemer-Hastings, D. Harter, and N. Person. 2000. Using latent semantic analysis to evaluate the contributions of students in autotutor. Interactive Learning Environments, 8:149–169. Claire Grover, Colin Matheson, Andrei Mikheev, and Marc Moens. 2000. LT TTT - A Flexible Tokenisation Tool. In Proceedings of the Second Language Resources and Evaluation Conference. Harry Halpin, Johanna Moore, and Judy Robertson. 2004. Automatic analysis of plot for story rewriting. In In Proceedings of Empirical Methods in Natural Language Processing, Barcelona, Spain. Maya Hickmann. 2003. Children’s Discourse: person, space and time across language. Cambridge University Press, Cambridge, UK. Hans Kamp and Uwe Reyle. 1993. From Discourse to Logic. Kluwer Academic. Thomas. Landauer and Susan Dumais. 1997. A solution to Plato’s problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review. B. Lemaire, S. Mandin, P. Dessus, and G. Denhire. 2005. Computational cognitive models of summarization assessment skills. 
In In Proceedings of the 27th Annual Meeting of the Cognitive Science Society, Stressa, Italy. Fiona McNeill, Harry Halpin, Ewan Klein, and Alan Bundy. 2006. Merging stories with shallow semantics. In Proceedings of the Knowledge Representation and Reasoning for Language Processing Workshop at the European Association for Computational Linguistics, Genoa, Italy. Erik T. Mueller. 2003. Story understanding through multi-representation model construction. In Graeme Hirst and Sergei Nirenburg, editors, Text Meaning: Proceedings of the HLT-NAACL 2003 Workshop, pages 46–53, East Stroudsburg, PA. Association for Computational Linguistics. Ani Nenkova and Rebecca Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In In Proceedings of the Joint Conference of the North American Association for Computational Linguistics and Human Language Technologies. Boston, USA. E. Riloff. 1999. Information extraction as a stepping stone toward story understanding. In Ashwin Ram and Kenneth Moorman, editors, Computational Models of Reading and Understanding. MIT Press. Judy Robertson and Beth Cross. 2003. Children’s perceptions about writing with their teacher and the StoryStation learning environment. Narrative and Interactive Learning Environments: Special Issue of International Journal of Continuing Engineering Education and Life-long Learning. Judy Robertson and Peter Wiemer-Hastings. 2002. Feedback on children’s stories via multiple interface agents. In International Conference on Intelligent Tutoring Systems, Biarritz, France. C. Rose, D. Bhembe, A. Roque, S. Siler, R. Srivastava, and K. VanLehn. 2002. A hybrid language understanding approach for robust selection of tutoring goals. In International Conference on Intelligent Tutoring Systems, Biarritz, France. 864 | 2006 | 108 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 865–872, Sydney, July 2006. c⃝2006 Association for Computational Linguistics An All-Subtrees Approach to Unsupervised Parsing Rens Bod School of Computer Science University of St Andrews North Haugh, St Andrews KY16 9SX Scotland, UK [email protected] Abstract We investigate generalizations of the allsubtrees "DOP" approach to unsupervised parsing. Unsupervised DOP models assign all possible binary trees to a set of sentences and next use (a large random subset of) all subtrees from these binary trees to compute the most probable parse trees. We will test both a relative frequency estimator for unsupervised DOP and a maximum likelihood estimator which is known to be statistically consistent. We report state-ofthe-art results on English (WSJ), German (NEGRA) and Chinese (CTB) data. To the best of our knowledge this is the first paper which tests a maximum likelihood estimator for DOP on the Wall Street Journal, leading to the surprising result that an unsupervised parsing model beats a widely used supervised model (a treebank PCFG). 1 Introduction The problem of bootstrapping syntactic structure from unlabeled data has regained considerable interest. While supervised parsers suffer from shortage of hand-annotated data, unsupervised parsers operate with unlabeled raw data of which unlimited quantities are available. During the last few years there has been steady progress in the field. Where van Zaanen (2000) achieved 39.2% unlabeled f-score on ATIS word strings, Clark (2001) reports 42.0% on the same data, and Klein and Manning (2002) obtain 51.2% f-score on ATIS part-of-speech strings using a constituent-context model called CCM. On Penn Wall Street Journal po-s-strings ≤ 10 (WSJ10), Klein and Manning (2002) report 71.1% unlabeled f-score with CCM. And the hybrid approach of Klein and Manning (2004), which combines constituency and dependency models, yields 77.6% f-score. Bod (2006) shows that a further improvement on the WSJ10 can be achieved by an unsupervised generalization of the all-subtrees approach known as Data-Oriented Parsing (DOP). This unsupervised DOP model, coined U-DOP, first assigns all possible unlabeled binary trees to a set of sentences and next uses all subtrees from (a large subset of) these trees to compute the most probable parse trees. Bod (2006) reports that U-DOP not only outperforms previous unsupervised parsers but that its performance is as good as a binarized supervised parser (i.e. a treebank PCFG) on the WSJ. A possible drawback of U-DOP, however, is the statistical inconsistency of its estimator (Johnson 2002) which is inherited from the DOP1 model (Bod 1998). That is, even with unlimited training data, U-DOP's estimator is not guaranteed to converge to the correct weight distribution. Johnson (2002: 76) argues in favor of a maximum likelihood estimator for DOP which is statistically consistent. As it happens, in Bod (2000) we already developed such a DOP model, termed ML-DOP, which reestimates the subtree probabilities by a maximum likelihood procedure based on Expectation-Maximization. Although crossvalidation is needed to avoid overlearning, ML-DOP outperforms DOP1 on the OVIS corpus (Bod 2000). This raises the question whether we can create an unsupervised DOP model which is also 865 statistically consistent. In this paper we will show that an unsupervised version of ML-DOP can be constructed along the lines of U-DOP. 
We will start out by summarizing DOP, U-DOP and ML-DOP, and next create a new unsupervised model called UML-DOP. We report that UML-DOP not only obtains higher parse accuracy than U-DOP on three different domains, but that it also achieves this with fewer subtrees than U-DOP. To the best of our knowledge, this paper presents the first unsupervised parser that outperforms a widely used supervised parser on the WSJ, i.e. a treebank PCFG. We will raise the question whether the end of supervised parsing is in sight. 2 DOP The key idea of DOP is this: given an annotated corpus, use all subtrees, regardless of size, to parse new sentences. The DOP1 model in Bod (1998) computes the probabilities of parse trees and sentences from the relative frequencies of the subtrees. Although it is now known that DOP1's relative frequency estimator is statistically inconsistent (Johnson 2002), the model yields excellent empirical results and has been used in state-of-the-art systems. Let's illustrate DOP1 with a simple example. Assume a corpus consisting of only two trees, as given in figure 1. NP VP S NP Mary V likes John NP VP S NP V Peter hates Susan Figure 1. A corpus of two trees New sentences may be derived by combining fragments, i.e. subtrees, from this corpus, by means of a node-substitution operation indicated as °. Node-substitution identifies the leftmost nonterminal frontier node of one subtree with the root node of a second subtree (i.e., the second subtree is substituted on the leftmost nonterminal frontier node of the first subtree). Thus a new sentence such as Mary likes Susan can be derived by combining subtrees from this corpus, shown in figure 2. NP VP S NP V likes NP Mary NP Susan NP VP S NP Mary V likes Susan = ° ° Figure 2. A derivation for Mary likes Susan Other derivations may yield the same tree, e.g.: NP VP S NP V NP Mary NP VP S NP Mary V likes Susan = Susan V likes ° ° Figure 3. Another derivation yielding same tree DOP1 computes the probability of a subtree t as the probability of selecting t among all corpus subtrees that can be substituted on the same node as t. This probability is computed as the number of occurrences of t in the corpus, | t |, divided by the total number of occurrences of all subtrees t' with the same root label as t.1 Let r(t) return the root label of t. Then we may write: P(t) = | t | Σ t': r(t')=r(t) | t' | The probability of a derivation t1°...°tn is computed by the product of the probabilities of its subtrees ti: P(t1°...°tn) = Πi P(ti) As we have seen, there may be several distinct derivations that generate the same parse tree. The probability of a parse tree T is the sum of the 1 This subtree probability is redressed by a simple correction factor discussed in Goodman (2003: 136) and Bod (2003). 866 probabilities of its distinct derivations. Let tid be the i-th subtree in the derivation d that produces tree T, then the probability of T is given by P(T) = ΣdΠi P(tid) Thus DOP1 considers counts of subtrees of a wide range of sizes: everything from counts of singlelevel rules to entire trees is taken into account to compute the most probable parse tree of a sentence. A disadvantage of the approach may be that an extremely large number of subtrees (and derivations) must be considered. Fortunately there exists a compact isomorphic PCFG-reduction of DOP1 whose size is linear rather than exponential in the size of the training set (Goodman 2003). Moreover, Collins and Duffy (2002) show how a tree kernel can be applied to DOP1's all-subtrees representation. 
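As a concrete illustration of these formulas, the following minimal sketch (not the implementation used in this work) computes relative-frequency subtree probabilities, derivation probabilities, and parse-tree probabilities from subtree counts. The bracketed-string encoding of subtrees and the toy counts are purely for exposition.

```python
from collections import Counter
from math import prod

def subtree_probabilities(subtree_counts, root_of):
    """P(t) = |t| / sum of counts of all subtrees t' with r(t') = r(t)."""
    root_totals = Counter()
    for t, count in subtree_counts.items():
        root_totals[root_of[t]] += count
    return {t: count / root_totals[root_of[t]]
            for t, count in subtree_counts.items()}

def derivation_probability(derivation, probs):
    """P(t1 o ... o tn) = product of the subtree probabilities P(ti)."""
    return prod(probs[t] for t in derivation)

def parse_tree_probability(derivations_of_tree, probs):
    """P(T) = sum over the distinct derivations d of T of prod_i P(t_id)."""
    return sum(derivation_probability(d, probs) for d in derivations_of_tree)

# Toy counts: two subtrees rooted in S, one rooted in NP.
counts = {"(S (NP Mary) VP)": 1, "(S NP VP)": 1, "(NP Susan)": 1}
roots = {"(S (NP Mary) VP)": "S", "(S NP VP)": "S", "(NP Susan)": "NP"}
probs = subtree_probabilities(counts, roots)
print(probs["(S NP VP)"])                                          # 0.5: one of two S-rooted subtrees
print(derivation_probability(["(S NP VP)", "(NP Susan)"], probs))  # 0.5 = 0.5 * 1.0
```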
The currently most successful version of DOP1 uses a PCFG-reduction of the model with an n-best parsing algorithm (Bod 2003). 3 U-DOP U-DOP extends DOP1 to unsupervised parsing (Bod 2006). Its key idea is to assign all unlabeled binary trees to a set of sentences and to next use (in principle) all subtrees from these binary trees to parse new sentences. U-DOP thus proposes one of the richest possible models in bootstrapping trees. Previous models like Klein and Manning's (2002, 2005) CCM model limit the dependencies to "contiguous subsequences of a sentence". This means that CCM neglects dependencies that are non-contiguous such as between more and than in "BA carried more people than cargo". Instead, UDOP's all-subtrees approach captures both contiguous and non-contiguous lexical dependencies. As with most other unsupervised parsing models, U-DOP induces trees for p-o-s strings rather than for word strings. The extension to word strings is straightforward as there exist highly accurate unsupervised part-of-speech taggers (e.g. Schütze 1995) which can be directly combined with unsupervised parsers. To give an illustration of U-DOP, consider the WSJ p-o-s string NNS VBD JJ NNS which may correspond for instance to the sentence Investors suffered heavy losses. U-DOP starts by assigning all possible binary trees to this string, where each root node is labeled S and each internal node is labeled X. Thus NNS VBD JJ NNS has a total of five binary trees shown in figure 4 -- where for readability we add words as well. NNS VBD JJ NNS Investors suffered heavy losses X X S NNS VBD JJ NNS Investors suffered heavy losses X X S NNS VBD JJ NNS Investors suffered heavy losses X X S NNS VBD JJ NNS Investors suffered heavy losses X X S NNS VBD JJ NNS Investors suffered heavy losses X X S Figure 4. All binary trees for NNS VBD JJ NNS (Investors suffered heavy losses) While we can efficiently represent the set of all binary trees of a string by means of a chart, we need to unpack the chart if we want to extract subtrees from this set of binary trees. And since the total number of binary trees for the small WSJ10 is almost 12 million, it is doubtful whether we can apply the unrestricted U-DOP model to such a corpus. U-DOP therefore randomly samples a large subset from the total number of parse trees from the chart (see Bod 2006) and next converts the subtrees from these parse trees into a PCFG-reduction (Goodman 2003). Since the computation of the most probable parse tree is NP-complete (Sima'an 1996), U-DOP estimates the most probable tree from the 100 most probable derivations using Viterbi n-best parsing. We could also have used the more efficient k-best hypergraph parsing technique by Huang and Chiang (2005), but we have not yet incorporated this into our implementation. To give an example of the dependencies that U-DOP can take into account, consider the following subtrees in figure 5 from the trees in 867 figure 4 (where we again add words for readability). These subtrees show that U-DOP takes into account both contiguous and non-contiguous substrings. NNS VBD Investors suffered X X S VBD suffered X X NNS NNS Investors losses X X S JJ NNS heavy losses X X S JJ NNS heavy losses X NNS VBD Investors suffered X VBD JJ suffered heavy X Figure 5. Some subtrees from trees in figure 4 Of course, if we only had the sentence Investors suffered heavy losses in our corpus, there would be no difference in probability between the five parse trees in figure 4. 
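The first step of U-DOP — assigning every possible unlabeled binary tree to a p-o-s string — can be sketched by the following recursion. This is an illustrative enumeration, not the chart-based representation actually used; the tuple encoding and node labels are assumptions.

```python
from functools import lru_cache

def binary_trees(tags):
    """All binary bracketings of a tag sequence; internal nodes are labeled X
    and the root node is relabeled S."""
    tags = tuple(tags)

    @lru_cache(maxsize=None)
    def trees(i, j):
        if j - i == 1:
            return [tags[i]]
        result = []
        for k in range(i + 1, j):            # split point between children
            for left in trees(i, k):
                for right in trees(k, j):
                    result.append(("X", left, right))
        return result

    return [("S",) + t[1:] if isinstance(t, tuple) else ("S", t)
            for t in trees(0, len(tags))]

all_trees = binary_trees(["NNS", "VBD", "JJ", "NNS"])
print(len(all_trees))    # 5, the Catalan number C_3
print(all_trees[0])      # e.g. ('S', 'NNS', ('X', 'VBD', ('X', 'JJ', 'NNS')))
```

For NNS VBD JJ NNS this yields exactly the five trees of figure 4, which, as just noted, would all be equally probable if this were the only sentence in the corpus.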
However, if we also have a different sentence where JJ NNS ( heavy losses) appears in a different context, e.g. in Heavy losses were reported, its covering subtree gets a relatively higher frequency and the parse tree where heavy losses occurs as a constituent gets a higher total probability. 4 ML-DOP ML-DOP (Bod 2000) extends DOP with a maximum likelihood reestimation technique based on the expectation-maximization (EM) algorithm (Dempster et al. 1977) which is known to be statistically consistent (Shao 1999). ML-DOP reestimates DOP's subtree probabilities in an iterative way until the changes become negligible. The following exposition of ML-DOP is heavily based on previous work by Bod (2000) and Magerman (1993). It is important to realize that there is an implicit assumption in DOP that all possible derivations of a parse tree contribute equally to the total probability of the parse tree. This is equivalent to saying that there is a hidden component to the model, and that DOP can be trained using an EM algorithm to determine the maximum likelihood estimate for the training data. The EM algorithm for this ML-DOP model is related to the Inside-Outside algorithm for context-free grammars, but the reestimation formula is complicated by the presence of subtrees of depth greater than 1. To derive the reestimation formula, it is useful to consider the state space of all possible derivations of a tree. The derivations of a parse tree T can be viewed as a state trellis, where each state contains a partially constructed tree in the course of a leftmost derivation of T. st denotes a state containing the tree t which is a subtree of T. The state trellis is defined as follows. The initial state, s0, is a tree with depth zero, consisting of simply a root node labeled with S. The final state, sT, is the given parse tree T. A state st is connected forward to all states stf such that tf = t ° t', for some t'. Here the appropriate t' is defined to be tf − t. A state st is connected backward to all states stb such that t = tb ° t', for some t'. Again, t' is defined to be t − tb. The construction of the state lattice and assignment of transition probabilities according to the ML-DOP model is called the forward pass. The probability of a given state, P(s), is referred to as α(s). The forward probability of a state st is computed recursively α(st) = Σ α(st ) P(t − tb). b stb The backward probability of a state, referred to as β(s), is calculated according to the following recursive formula: β(st) = Σ β(st ) P(tf − t) f f st where the backward probability of the goal state is set equal to the forward probability of the goal state, β(sT) = α(sT). The update formula for the count of a subtree t is (where r(t) is the root label of t): 868 ct(t) = Σ β(st )α(st )P(t | r(t)) f b α(sgoal) st :∃st ,tb°t=tf b f The updated probability distribution, P'(t | r(t)), is defined to be P'(t | r(t)) = ct(t) ct(r(t)) where ct(r(t)) is defined as ct(r(t)) = Σ ct(t') t': r(t')=r(t) In practice, ML-DOP starts out by assigning the same relative frequencies to the subtrees as DOP1, which are next reestimated by the formulas above. We may in principle start out with any initial parameters, including random initializations, but since ML estimation is known to be very sensitive to the initialization of the parameters, it is convenient to start with parameters that are known to perform well. To avoid overtraining, ML-DOP uses the subtrees from one half of the training set to be trained on the other half, and vice versa. 
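Abstracting away from the trellis computation, one iteration of ML-DOP amounts to recomputing expected subtree counts and renormalizing them per root label. The sketch below is illustrative only: the expected-count (forward-backward) step is left as a callback, and convergence is checked on parameter change rather than on cross-entropy.

```python
def normalize_counts(expected_counts, root_of):
    """P'(t | r(t)) = ct(t) / ct(r(t)), where ct(r(t)) sums the expected
    counts of all subtrees sharing t's root label."""
    root_totals = {}
    for t, c in expected_counts.items():
        root_totals[root_of[t]] = root_totals.get(root_of[t], 0.0) + c
    return {t: c / root_totals[root_of[t]] for t, c in expected_counts.items()}

def ml_dop_reestimate(init_probs, root_of, expected_counts_fn, tol=1e-4):
    """Alternate the E-step (expected_counts_fn, i.e. the forward-backward
    pass over the derivation trellis, not shown) and the M-step
    (normalize_counts) until the change in the parameters is negligible."""
    probs = dict(init_probs)
    while True:
        counts = expected_counts_fn(probs)              # E-step
        new_probs = normalize_counts(counts, root_of)   # M-step
        delta = max(abs(new_probs.get(t, 0.0) - probs[t]) for t in probs)
        probs = new_probs
        if delta < tol:
            return probs

# The M-step alone, on toy expected counts:
counts = {"(S NP VP)": 1.5, "(S (NP Mary) VP)": 0.5, "(NP Susan)": 2.0}
roots = {"(S NP VP)": "S", "(S (NP Mary) VP)": "S", "(NP Susan)": "NP"}
print(normalize_counts(counts, roots))   # S-rooted subtrees get 0.75 and 0.25
```

The reestimation itself is run cross-wise over the two halves of the training set, as described above.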
This crosstraining is important since otherwise UML-DOP would assign the training set trees their empirical frequencies and assign zero weight to all other subtrees (cf. Prescher et al. 2004). The updated probabilities are iteratively reestimated until the decrease in cross-entropy becomes negligible. Unfortunately, no compact PCFG-reduction of MLDOP is known. As a consequence, parsing with ML-DOP is very costly and the model has hitherto never been tested on corpora larger than OVIS (Bonnema et al. 1997). Yet, we will show that by clever pruning we can extend our experiments not only to the WSJ, but also to the German NEGRA and the Chinese CTB. (Zollmann and Sima'an 2005 propose a different consistent estimator for DOP, which we cannot go into here). 5 UML-DOP Analogous to U-DOP, UML-DOP is an unsupervised generalization of ML-DOP: it first assigns all unlabeled binary trees to a set of sentences and next extracts a large (random) set of subtrees from this tree set. It then reestimates the initial probabilities of these subtrees by ML-DOP on the sentences from a held-out part of the tree set. The training is carried out by dividing the tree set into two equal parts, and reestimating the subtrees from one part on the other. As initial probabilities we use the subtrees' relative frequencies as described in section 2 (smoothed by Good-Turing -- see Bod 1998), though it would also be interesting to see how the model works with other initial parameters, in particular with the usage frequencies proposed by Zuidema (2006). As with U-DOP, the total number of subtrees that can be extracted from the binary tree set is too large to be fully taken into account. Together with the high computational cost of reestimation we propose even more drastic pruning than we did in Bod (2006) for U-DOP. That is, while for sentences ≤ 7 words we use all binary trees, for each sentence ≥ 8 words we randomly sample a fixed number of 128 trees (which effectively favors more frequent trees). The resulting set of trees is referred to as the binary tree set. Next, we randomly extract for each subtree-depth a fixed number of subtrees, where the depth of subtree is the longest path from root to any leaf. This has roughly the same effect as the correction factor used in Bod (2003, 2006). That is, for each particular depth we sample subtrees by first randomly selecting a node in a random tree from the binary tree set after which we select random expansions from that node until a subtree of the particular depth is obtained. For our experiments in section 6, we repeated this procedure 200,000 times for each depth. The resulting subtrees are then given to MLDOP's reestimation procedure. Finally, the reestimated subtrees are used to compute the most probable parse trees for all sentences using Viterbi n-best, as described in section 3, where the most probable parse is estimated from the 100 most probable derivations. A potential criticism of (U)ML-DOP is that since we use DOP1's relative frequencies as initial parameters, ML-DOP may only find a local maximum nearest to DOP1's estimator. But this is of course a criticism against any iterative ML approach: it is not guaranteed that the global maximum is found (cf. Manning and Schütze 1999: 401). Nevertheless we will see that our reestimation 869 procedure leads to significantly better accuracy compared to U-DOP (the latter would be equal to UML-DOP under 0 iterations). 
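Returning to the per-depth subtree sampling described earlier in this section, it can be approximated by the rejection-sampling sketch below: pick a random node of a random tree, grow a random fragment by cutting or expanding children, and keep the fragment only if it has the target depth. This is a simplified stand-in for the actual procedure, and the expansion probability is an assumption.

```python
import random

def depth(t):
    """Longest path from root to any leaf; a bare label or terminal counts as 0."""
    return 0 if isinstance(t, str) else 1 + max(depth(c) for c in t[1:])

def nodes(t):
    """All internal nodes of a nested-tuple tree (label, child1, child2)."""
    if isinstance(t, str):
        return []
    return [t] + [n for c in t[1:] for n in nodes(c)]

def random_subtree(node, expand_prob=0.5):
    """Grow a DOP-style subtree from `node`: each child is either expanded
    further or cut off at its root label (a frontier nonterminal)."""
    children = []
    for c in node[1:]:
        if isinstance(c, str) or random.random() > expand_prob:
            children.append(c if isinstance(c, str) else c[0])   # cut at the label
        else:
            children.append(random_subtree(c, expand_prob))
    return (node[0],) + tuple(children)

def sample_subtrees_of_depth(trees, target_depth, n_samples, max_tries=100000):
    """Rejection-sampling stand-in for the per-depth sampling step."""
    samples = []
    for _ in range(max_tries):
        if len(samples) == n_samples:
            break
        tree = random.choice(trees)
        node = random.choice(nodes(tree))
        sub = random_subtree(node)
        if depth(sub) == target_depth:
            samples.append(sub)
    return samples

trees = [("S", ("X", "NNS", "VBD"), ("X", "JJ", "NNS"))]
print(sample_subtrees_of_depth(trees, target_depth=2, n_samples=3))
```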
Moreover, in contrast to U-DOP, UML-DOP can be theoretically motivated: it maximizes the likelihood of the data using the statistically consistent EM algorithm. 6 Experiments: Can we beat supervised by unsupervised parsing? To compare UML-DOP to U-DOP, we started out with the WSJ10 corpus, which contains 7422 sentences ≤ 10 words after removing empty elements and punctuation. We used the same evaluation metrics for unlabeled precision (UP) and unlabeled recall (UR) as defined in Klein (2005: 2122). Klein's definitions differ slightly from the standard PARSEVAL metrics: multiplicity of brackets is ignored, brackets of span one are ignored and the bracket labels are ignored. The two metrics of UP and UR are combined by the unlabeled f score F1 which is defined as the harmonic mean of UP and UR: F1 = 2*UP*UR/(UP+UR). For the WSJ10, we obtained a binary tree set of 5.68 * 105 trees, by extracting the binary trees as described in section 5. From this binary tree set we sampled 200,000 subtrees for each subtreedepth. This resulted in a total set of roughly 1.7 * 106 subtrees that were reestimated by our maximum-likelihood procedure. The decrease in cross-entropy became negligible after 14 iterations (for both halfs of WSJ10). After computing the most probable parse trees, UML-DOP achieved an f-score of 82.9% which is a 20.5% error reduction compared to U-DOP's f-score of 78.5% on the same data (Bod 2006). We next tested UML-DOP on two additional domains which were also used in Klein and Manning (2004) and Bod (2006): the German NEGRA10 (Skut et al. 1997) and the Chinese CTB10 (Xue et al. 2002) both containing 2200+ sentences ≤ 10 words after removing punctuation. Table 1 shows the results of UML-DOP compared to U-DOP, the CCM model by Klein and Manning (2002), the DMV dependency learning model by Klein and Manning (2004) as well as their combined model DMV+CCM. Table 1 shows that UML-DOP scores better than U-DOP and Klein and Manning's models in all cases. It thus pays off to not only use subtrees rather than substrings (as in CCM) but to also reestimate the subtrees' probabilities by a maximum-likelihood procedure rather than using their (smoothed) relative frequencies (as in U-DOP). Note that UML-DOP achieves these improved results with fewer subtrees than U-DOP, due to UML-DOP's more drastic pruning of subtrees. It is also noteworthy that UMLDOP, like U-DOP, does not employ a separate class for non-constituents, so-called distituents, while CCM and CCM+DMV do. (Interestingly, the top 10 most frequently learned constituents by UMLDOP were exactly the same as by U-DOP -- see the relevant table in Bod 2006). Model English German Chinese (WSJ10) (NEGRA10) (CTB10) CCM 71.9 61.6 45.0 DMV 52.1 49.5 46.7 DMV+CCM 77.6 63.9 43.3 U-DOP 78.5 65.4 46.6 UML-DOP 82.9 67.0 47.2 Table 1. F-scores of UML-DOP compared to previous models on the same data We were also interested in testing UML-DOP on longer sentences. We therefore included all WSJ sentences up to 40 words after removing empty elements and punctuation (WSJ40) and again sampled 200,000 subtrees for each depth, using the same method as before. Furthermore, we compared UML-DOP against a supervised binarized PCFG, i.e. a treebank PCFG whose simple relative frequency estimator corresponds to maximum likelihood (Chi and Geman 1998), and which we shall refer to as "ML-PCFG". To this end, we used a random 90%/10% division of WSJ40 into a training set and a test set. 
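The unlabeled evaluation metric defined above can be made concrete with a short sketch. This is an illustrative reading of Klein's conventions (labels ignored, span-one brackets ignored, multiplicity ignored), not the evaluation code behind the reported scores.

```python
def unlabeled_spans(tree):
    """Set of (start, end) spans of a nested-tuple tree (label, children...),
    ignoring labels, multiplicity, and brackets of span one."""
    spans = set()

    def walk(node, i):
        if isinstance(node, str):        # terminal covers one position
            return i + 1
        j = i
        for child in node[1:]:
            j = walk(child, j)
        if j - i > 1:                    # drop span-one brackets
            spans.add((i, j))
        return j

    walk(tree, 0)
    return spans

def unlabeled_f1(guess_tree, gold_tree):
    guess, gold = unlabeled_spans(guess_tree), unlabeled_spans(gold_tree)
    if not guess or not gold:
        return 0.0
    up = len(guess & gold) / len(guess)
    ur = len(guess & gold) / len(gold)
    return 0.0 if up + ur == 0 else 2 * up * ur / (up + ur)

gold  = ("S", ("X", "NNS", "VBD"), ("X", "JJ", "NNS"))
guess = ("S", "NNS", ("X", "VBD", ("X", "JJ", "NNS")))
print(unlabeled_f1(guess, gold))   # ~0.67: two of the three brackets in each tree match
```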
The ML-PCFG had thus access to the Penn WSJ trees in the training set, while UML-DOP had to bootstrap all structure from the flat strings from the training set to next parse the 10% test set -- clearly a much more challenging task. Table 2 gives the results in terms of f-scores. The table shows that UML-DOP scores better than U-DOP, also for WSJ40. Our results on WSJ10 are somewhat lower than in table 1 due to the use of a smaller training set of 90% of the data. But the most surprising result is that UML-DOP's f870 score is higher than the supervised binarized treebank PCFG (ML-PCFG) for both WSJ10 and WSJ40. In order to check whether this difference is statistically significant, we additionally tested on 10 different 90/10 divisions of the WSJ40 (which were the same splits as in Bod 2006). For these splits, UML-DOP achieved an average f-score of 66.9%, while ML-PCFG obtained an average f-score of 64.7%. The difference in accuracy between UMLDOP and ML-PCFG was statistically significant according to paired t-testing (p≤0.05). To the best of our knowledge this means that we have shown for the first time that an unsupervised parsing model (UML-DOP) outperforms a widely used supervised parsing model (a treebank PCFG) on the WSJ40. Model WSJ10 WSJ40 U-DOP 78.1 63.9 UML-DOP 82.5 66.4 ML-PCFG 81.5 64.6 Table 2. F-scores of U-DOP, UML-DOP and a supervised treebank PCFG (ML-PCFG) for a random 90/10 split of WSJ10 and WSJ40. We should keep in mind that (1) a treebank PCFG is not state-of-the-art: its performance is mediocre compared to e.g. Bod (2003) or McClosky et al. (2006), and (2) that our treebank PCFG is binarized as in Klein and Manning (2005) to make results comparable. To be sure, the unbinarized version of the treebank PCFG obtains 89.0% average f-score on WSJ10 and 72.3% average f-score on WSJ40. Remember that the Penn Treebank annotations are often exceedingly flat, and many branches have arity larger than two. It would be interesting to see how UML-DOP performs if we also accept ternary (and wider) branches -- though the total number of possible trees that can be assigned to strings would then further explode. UML-DOP's performance still remains behind that of supervised (binarized) DOP parsers, such as DOP1, which achieved 81.9% average fscore on the 10 WSJ40 splits, and ML-DOP, which performed slightly better with 82.1% average fscore. And if DOP1 and ML-DOP are not binarized, their average f-scores are respectively 90.1% and 90.5% on WSJ40. However, DOP1 and ML-DOP heavily depend on annotated data whereas UML-DOP only needs unannotated data. It would thus be interesting to see how close UML-DOP can get to ML-DOP's performance if we enlarge the amount of training data. 7 Conclusion: Is the end of supervised parsing in sight? Now that we have outperformed a well-known supervised parser by an unsupervised one, we may raise the question as to whether the end of supervised NLP comes in sight. All supervised parsers are reaching an asymptote and further improvement does not seem to come from more hand-annotated data but by adding unsupervised or semi-unsupervised techniques (cf. McClosky et al. 2006). Thus if we modify our question as: does the exclusively supervised approach to parsing come to an end, we believe that the answer is certainly yes. Yet we should neither rule out the possibility that entirely unsupervised methods will in fact surpass semi-supervised methods. 
The main problem is how to quantitatively compare these different parsers, as any evaluation on handannotated data (like the Penn treebank) will unreasonably favor semi-supervised parsers. There is thus is a quest for designing an annotationindependent evaluation scheme. Since parsers are becoming increasingly important in applications like syntax-based machine translation and structural language models for speech recognition, one way to go would be to compare these different parsing methods by isolating their contribution in improving a concrete NLP system, rather than by testing them against gold standard annotations which are inherently theory-dependent. The initially disappointing results of inducing trees entirely from raw text was not so much due to the difficulty of the bootstrapping problem per se, but to (1) the poverty of the initial models and (2) the difficulty of finding theoryindependent evaluation criteria. The time has come to fully reappraise unsupervised parsing models which should be trained on massive amounts of data, and be evaluated in a concrete application. There is a final question as to how far the DOP approach to unsupervised parsing can be stretched. In principle we can assign all possible syntactic categories, semantic roles, argument 871 structures etc. to a set of given sentences and let the statistics decide which assignments are most useful in parsing new sentences. Whether such a massively maximalist approach is feasible can only be answered by empirical investigation in due time. Acknowledgements Thanks to Willem Zuidema, David Tugwell and especially to three anonymous reviewers whose unanymous suggestions on DOP and EM considerably improved the original paper. A substantial part of this research was carried out in the context of the NWO Exact project "Unsupervised Stochastic Grammar Induction from Unlabeled Data", project number 612.066.405. References Bod, R. 1998. Beyond Grammar: An Experience-Based Theory of Language, CSLI Publications, distributed by Cambridge University Press. Bod, R. 2000. Combining semantic and syntactic structure for language modeling. Proceedings ICSLP 2000, Beijing. Bod, R. 2003. An efficient implementation of a new DOP model. Proceedings EACL 2003, Budapest. Bod, R. 2006. Unsupervised Parsing with U-DOP. Proceedings CONLL 2006, New York. Bonnema, R., R. Bod and R. Scha, 1997. A DOP model for semantic interpretation, Proceedings ACL/EACL 1997, Madrid. Chi, Z. and S. Geman 1998. Estimation of Probabilistic Context-Free Grammars. Computational Linguistics 24, 299-305. Clark, A. 2001. Unsupervised induction of stochastic context-free grammars using distributional clustering. Proceedings CONLL 2001. Collins, M. and N. Duffy 2002. New ranking algorithms for parsing and tagging: kernels over discrete structures, and the voted perceptron. Proceedings ACL 2002, Philadelphia. Dempster, A., N. Laird and D. Rubin, 1977. Maximum Likelihood from Incomplete Data via the EM Algorithm, Journal of the Royal Statistical Society 39, 1-38. Goodman, J. 2003. Efficient algorithms for the DOP model. In R. Bod, R. Scha and K. Sima'an (eds.). Data-Oriented Parsing, University of Chicago Press. Huang, L. and D. Chiang 2005. Better k-best parsing. Proceedings IWPT 2005, Vancouver. Johnson, M. 2002. The DOP estimation method is biased and inconsistent. Computational Linguistics 28, 71-76. Klein, D. 2005. The Unsupervised Learning of Natural Language Structure. PhD thesis, Stanford University. Klein, D. and C. Manning 2002. 
A general constituent-context model for improved grammar induction. Proceedings ACL 2002, Philadelphia. Klein, D. and C. Manning 2004. Corpus-based induction of syntactic structure: models of dependency and constituency. Proceedings ACL 2004, Barcelona. Klein, D. and C. Manning 2005. Natural language grammar induction with a generative constituentcontext model. Pattern Recognition 38, 1407-1419. Magerman, D. 1993. Expectation-Maximization for Data-Oriented Parsing, IBM Technical Report, Yorktown Heights, NY. McClosky, D., E. Charniak and M. Johnson 2006. Effective self-training for parsing. Proceedings HLTNAACL 2006, New York. Manning, C. and H. Schütze 1999. Foundations of Statistical Natural Language Processing. The MIT Press. Prescher, D., R. Scha, K. Sima'an and A. Zollmann 2004. On the statistical consistency of DOP estimators. Proceedings CLIN 2004, Leiden. Schütze, H. 1995. Distributional part-of-speech tagging. Proceedings ACL 1995, Dublin. Shao, J. 1999. Mathematical Statistics. Springer Verlag, New York. Sima'an, K. 1996. Computational complexity of probabilistic disambiguation by means of tree grammars. Proceedings COLING 1996, Copenhagen. Skut, W., B. Krenn, T. Brants and H. Uszkoreit 1997. An annotation scheme for free word order languages. Proceedings ANLP 1997. Xue, N., F. Chiou and M. Palmer 2002. Building a large-scale annotated Chinese corpus. Proceedings COLING 2002, Taipei. van Zaanen, M. 2000. ABL: Alignment-Based Learning. Proceedings COLING 2000, Saarbrücken. Zollmann, A. and K. Sima'an 2005. A consistent and efficient estimator for data-oriented parsing. Journal of Automata, Languages and Combinatorics, in press. Zuidema, W. 2006. What are the productive units of natural language grammar? A DOP approach to the automatic identification of constructions. Proceedings CONLL 2006, New York. 872 | 2006 | 109 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 81–88, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Extracting Parallel Sub-Sentential Fragments from Non-Parallel Corpora Dragos Stefan Munteanu University of Southern California Information Sciences Institute 4676 Admiralty Way, Suite 1001 Marina del Rey, CA, 90292 [email protected] Daniel Marcu University of Southern California Information Sciences Institute 4676 Admiralty Way, Suite 1001 Marina del Rey, CA, 90292 [email protected] Abstract We present a novel method for extracting parallel sub-sentential fragments from comparable, non-parallel bilingual corpora. By analyzing potentially similar sentence pairs using a signal processinginspired approach, we detect which segments of the source sentence are translated into segments in the target sentence, and which are not. This method enables us to extract useful machine translation training data even from very non-parallel corpora, which contain no parallel sentence pairs. We evaluate the quality of the extracted data by showing that it improves the performance of a state-of-the-art statistical machine translation system. 1 Introduction Recently, there has been a surge of interest in the automatic creation of parallel corpora. Several researchers (Zhao and Vogel, 2002; Vogel, 2003; Resnik and Smith, 2003; Fung and Cheung, 2004a; Wu and Fung, 2005; Munteanu and Marcu, 2005) have shown how fairly good-quality parallel sentence pairs can be automatically extracted from comparable corpora, and used to improve the performance of machine translation (MT) systems. This work addresses a major bottleneck in the development of Statistical MT (SMT) systems: the lack of sufficiently large parallel corpora for most language pairs. Since comparable corpora exist in large quantities and for many languages – tens of thousands of words of news describing the same events are produced daily – the ability to exploit them for parallel data acquisition is highly beneficial for the SMT field. Comparable corpora exhibit various degrees of parallelism. Fung and Cheung (2004a) describe corpora ranging from noisy parallel, to comparable, and finally to very non-parallel. Corpora from the last category contain “... disparate, very nonparallel bilingual documents that could either be on the same topic (on-topic) or not”. This is the kind of corpora that we are interested to exploit in the context of this paper. Existing methods for exploiting comparable corpora look for parallel data at the sentence level. However, we believe that very non-parallel corpora have none or few good sentence pairs; most of their parallel data exists at the sub-sentential level. As an example, consider Figure 1, which presents two news articles from the English and Romanian editions of the BBC. The articles report on the same event (the one-year anniversary of Ukraine’s Orange Revolution), have been published within 25 minutes of each other, and express overlapping content. Although they are “on-topic”, these two documents are non-parallel. In particular, they contain no parallel sentence pairs; methods designed to extract full parallel sentences will not find any useful data in them. Still, as the lines and boxes from the figure show, some parallel fragments of data do exist; but they are present at the sub-sentential level. In this paper, we present a method for extracting such parallel fragments from comparable corpora. 
Figure 2 illustrates our goals. It shows two sentences belonging to the articles in Figure 1, and highlights and connects their parallel fragments. Although the sentences share some common meaning, each of them has content which is not translated on the other side. The English phrase reports the BBC’s Helen Fawkes in Kiev, as well 81 Figure 1: A pair of comparable, non-parallel documents Figure 2: A pair of comparable sentences. as the Romanian one De altfel, vorbind inaintea aniversarii have no translation correspondent, either in the other sentence or anywhere in the whole document. Since the sentence pair contains so much untranslated text, it is unlikely that any parallel sentence detection method would consider it useful. And, even if the sentences would be used for MT training, considering the amount of noise they contain, they might do more harm than good for the system’s performance. The best way to make use of this sentence pair is to extract and use for training just the translated (highlighted) fragments. This is the aim of our work. Identifying parallel subsentential fragments is a difficult task. It requires the ability to recognize translational equivalence in very noisy environments, namely sentence pairs that express different (although overlapping) content. However, a good solution to this problem would have a strong impact on parallel data acquisition efforts. Enabling the exploitation of corpora that do not share parallel sentences would greatly increase the amount of comparable data that can be used for SMT. 2 Finding Parallel Sub-Sentential Fragments in Comparable Corpora 2.1 Introduction The high-level architecture of our parallel fragment extraction system is presented in Figure 3. The first step of the pipeline identifies document pairs that are similar (and therefore more likely to contain parallel data), using the Lemur information retrieval toolkit1 (Ogilvie and Callan, 2001); each document in the source language is translated word-for-word and turned into a query, which is run against the collection of target language documents. The top 20 results are retrieved and paired with the query document. We then take all sentence pairs from these document pairs and run them through the second step in the pipeline, the candidate selection filter. This step discards pairs which have very few words that are translations of each other. To all remaining sentence pairs we apply the fragment detection method (described in Section 2.3), which produces the output of the system. We use two probabilistic lexicons, learned au1http://www-2.cs.cmu.edu/$\sim$lemur 82 Figure 3: A Parallel Fragment Extraction System tomatically from the same initial parallel corpus. The first one, GIZA-Lex, is obtained by running the GIZA++2 implementation of the IBM word alignment models (Brown et al., 1993) on the initial parallel corpus. One of the characteristics of this lexicon is that each source word is associated with many possible translations. Although most of its high-probability entries are good translations, there are a lot of entries (of non-negligible probability) where the two words are at most related. As an example, in our GIZA-Lex lexicon, each source word has an average of 12 possible translations. This characteristic is useful for the first two stages of the extraction pipeline, which are not intended to be very precise. Their purpose is to accept most of the existing parallel data, and not too much of the non-parallel data; using such a lexicon helps achieve this purpose. 
For the last stage, however, precision is paramount. We found empirically that when using GIZA-Lex, the incorrect correspondences that it contains seriously impact the quality of our results; we therefore need a cleaner lexicon. In addition, since we want to distinguish between source words that have a translation on the target side and words that do not, we also need a measure of the probability that two words are not translations of each other. All these are part of our second lexicon, LLR-Lex, which we present in detail in Section 2.2. Subsequently, in Section 2.3, we present our algorithm for detecting parallel sub-sentential fragments. 2.2 Using Log-Likelihood-Ratios to Estimate Word Translation Probabilities Our method for computing the probabilistic translation lexicon LLR-Lex is based on the the Log2http://www.fjoch.com/GIZA++.html Likelihood-Ratio (LLR) statistic (Dunning, 1993), which has also been used by Moore (2004a; 2004b) and Melamed (2000) as a measure of word association. Generally speaking, this statistic gives a measure of the likelihood that two samples are not independent (i.e. generated by the same probability distribution). We use it to estimate the independence of pairs of words which cooccur in our parallel corpus. If source word and target word are independent (i.e. they are not translations of each other), we would expect that
, i.e. the distribution of given that is present is the same as the distribution of when is not present. The LLR statistic gives a measure of the likelihood of this hypothesis. The LLR score of a word pair is low when these two distributions are very similar (i.e. the words are independent), and high otherwise (i.e. the words are strongly associated). However, high LLR scores can indicate either a positive association (i.e.
) or a negative one; and we can distinguish between them by checking whether . Thus, we can split the set of cooccurring word pairs into positively and negatively associated pairs, and obtain a measure for each of the two association types. The first type of association will provide us with our (cleaner) lexicon, while the second will allow us to estimate probabilities of words not being translations of each other. Before describing our new method more formally, we address the notion of word cooccurrence. In the work of Moore (2004a) and Melamed (2000), two words cooccur if they are present in a pair of aligned sentences in the parallel training corpus. However, most of the words from aligned sentences are actually unrelated; therefore, this is a rather weak notion of cooccurrence. We follow Resnik et. al (2001) and adopt a stronger definition, based not on sentence alignment but on word alignment: two words cooccur if they are linked together in the word-aligned parallel training corpus. We thus make use of the significant amount of knowledge brought in by the word alignment procedure. We compute , the LLR score for words and , using the formula presented by Moore (2004b), which we do not repeat here due to lack of space. We then use these values to compute two conditional probability distributions: , the probability that source word trans83 Figure 4: Translated fragments, according to the lexicon. lates into target word , and , the probability that does not translate into . We obtain the distributions by normalizing the LLR scores for each source word. The whole procedure follows: Word-align the parallel corpus. Following Och and Ney (2003), we run GIZA++ in both directions, and then symmetrize the alignments using the refined heuristic. Compute all LLR scores. There will be an LLR score for each pair of words which are linked at least once in the word-aligned corpus Classify all as either (positive association) if , or (negative association) otherwise. For each , compute the normalizing factors and . Divide all terms by the corresponding normalizing factors to obtain . Divide all terms by the corresponding normalizing factors to obtain . In order to compute the distributions, we reverse the source and target languages and repeat the procedure. As we mentioned above, in GIZA-Lex the average number of possible translations for a source word is 12. In LLR-Lex that average is 5, which is a significant decrease. 2.3 Detecting Parallel Sub-Sentential Fragments Intuitively speaking, our method tries to distinguish between source fragments that have a translation on the target side, and fragments that do not. In Figure 4 we show the sentence pair from Figure 2, in which we have underlined those words of each sentence that have a translation in the other sentence, according to our lexicon LLR-Lex. The phrases “to focus on the past year’s achievements, which,” and “sa se concentreze pe succesele anului trecut, care,” are mostly underlined (the lexicon is unaware of the fact that “achievements” and “succesele” are in fact translations of each other, because “succesele” is a morphologically inflected form which does not cooccur with “achievements” in our initial parallel corpus). The rest of the sentences are mostly not underlined, although we do have occasional connections, some correct and some wrong. The best we can do in this case is to infer that these two phrases are parallel, and discard the rest. Doing this gains us some new knowledge: the lexicon entry (achievements, succesele). 
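As a concrete illustration of the lexicon construction just described, the sketch below computes Dunning's LLR statistic from a 2x2 contingency table of word-alignment links, splits word pairs into positively and negatively associated ones, and normalizes the scores per source word. This is a minimal stand-in for the full procedure; the contingency-table construction is omitted, and the link counts in the example are hypothetical.

```python
from math import log

def llr(k11, k12, k21, k22):
    """Dunning's log-likelihood-ratio (G^2) for a 2x2 table of link counts:
    k11 = links(s, t), k12 = links(s, not t), k21 = links(not s, t),
    k22 = links(not s, not t), counted over the word-aligned corpus."""
    n = k11 + k12 + k21 + k22
    rows, cols = (k11 + k12, k21 + k22), (k11 + k21, k12 + k22)
    score = 0.0
    for k, r, c in ((k11, rows[0], cols[0]), (k12, rows[0], cols[1]),
                    (k21, rows[1], cols[0]), (k22, rows[1], cols[1])):
        if k > 0:
            score += k * log(k * n / (r * c))
    return 2.0 * score

def llr_lexicon(tables):
    """tables: {(s, t): (k11, k12, k21, k22)}.  Returns P+(t|s) and P-(t|s)
    by splitting pairs into positive/negative associations and normalizing
    the LLR scores for each source word."""
    pos, neg = {}, {}
    for (s, t), (k11, k12, k21, k22) in tables.items():
        n = k11 + k12 + k21 + k22
        # positive association iff p(t|s) > p(t)
        target = pos if k11 * n > (k11 + k12) * (k11 + k21) else neg
        target.setdefault(s, {})[t] = llr(k11, k12, k21, k22)

    def normalize(d):
        return {s: {t: v / sum(ts.values()) for t, v in ts.items()}
                for s, ts in d.items()}

    return normalize(pos), normalize(neg)

tables = {("heavy", "grele"): (30, 5, 4, 961),     # hypothetical link counts
          ("heavy", "si"):    (1, 34, 99, 866)}
p_pos, p_neg = llr_lexicon(tables)
print(p_pos["heavy"])   # {'grele': 1.0}: the only positive association in this toy table
print(p_neg["heavy"])   # {'si': 1.0}
```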
We need to quantify more precisely the notions of “mostly translated” and “mostly not translated”. Our approach is to consider the target sentence as a numeric signal, where translated words correspond to positive values (coming from the distribution described in the previous Section), and the others to negative ones (coming from the distribution). We want to retain the parts of the sentence where the signal is mostly positive. This can be achieved by applying a smoothing filter to the signal, and selecting those fragments of the sentence for which the corresponding filtered values are positive. The details of the procedure are presented below, and also illustrated in Figure 5. Let the Romanian sentence be the source sentence , and the English one be the target, . We compute a word alignment by greedily linking each English word with its best translation candidate from the Romanian sentence. For each of the linked target words, the corresponding signal value is the probability of the link (there can be at most one link for each target word). Thus, if target word is linked to source word , the signal value corresponding to is (the distribution described in Section 2.2), i.e. the probability that is the translation of . For the remaining target words, the signal value should reflect the probability that they are not 84 Figure 5: Our approach for detecting parallel fragments. The lower part of the figure shows the source and target sentence together with their alignment. Above are displayed the initial signal and the filtered signal. The circles indicate which fragments of the target sentence are selected by the procedure. translated; for this, we employ the distribution. Thus, for each non-linked target word , we look for the source word least likely to be its nontranslation:
. If exists, we set the signal value for to ; otherwise, we set it to . This is the initial signal. We obtain the filtered signal by applying an averaging filter, which sets the value at each point to be the average of several values surrounding it. In our experiments, we use the surrounding 5 values, which produced good results on a development set. We then simply retain the “positive fragments” of , i.e. those fragments for which the corresponding filtered signal values are positive. However, this approach will often produce short “positive fragments” which are not, in fact, translated in the source sentence. An example of this is the fragment “, reports” from Figure 5, which although corresponds to positive values of the filtered signal, has no translation in Romanian. In an attempt to avoid such errors, we disregard fragments with less than 3 words. We repeat the procedure in the other direction ( ) to obtain the fragments for , and consider the resulting two text chunks as parallel. For the sentence pair from Figure 5, our system will output the pair: people to focus on the past year’s achievements, which, he says sa se concentreze pe succesele anului trecut, care, printre 3 Experiments In our experiments, we compare our fragment extraction method (which we call FragmentExtract) with the sentence extraction approach of Munteanu and Marcu (2005) (SentenceExtract). All extracted datasets are evaluated by using them as additional MT training data and measuring their impact on the performance of the MT system. 3.1 Corpora We perform experiments in the context of Romanian to English machine translation. We use two initial parallel corpora. One is the training data for the Romanian-English word alignment task from the Workshop on Building and Using Parallel Corpora3 which has approximately 1M English words. The other contains additional data 3http://www.statmt.org/wpt05/ 85 Romanian English Source # articles # tokens # articles # tokens BBC 6k 2.5M 200k 118M EZZ 183k 91M 14k 8.5M Table 1: Sizes of our comparable corpora from the Romanian translations of the European Union’s acquis communautaire which we mined from the Web, and has about 10M English words. We downloaded comparable data from three online news sites: the BBC, and the Romanian newspapers “Evenimentul Zilei” and “Ziua”. The BBC corpus is precisely the kind of corpus that our method is designed to exploit. It is truly nonparallel; as our example from Figure 1 shows, even closely related documents have few or no parallel sentence pairs. Therefore, we expect that our extraction method should perform best on this corpus. The other two sources are fairly similar, both in genre and in degree of parallelism, so we group them together and refer to them as the EZZ corpus. This corpus exhibits a higher degree of parallelism than the BBC one; in particular, it contains many article pairs which are literal translations of each other. Therefore, although our subsentence extraction method should produce useful data from this corpus, we expect the sentence extraction method to be more successful. Using this second corpus should help highlight the strengths and weaknesses of our approach. Table 1 summarizes the relevant information concerning these corpora. 3.2 Extraction Experiments On each of our comparable corpora, and using each of our initial parallel corpora, we apply both the fragment extraction and the sentence extraction method of Munteanu and Marcu (2005). 
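Before turning to the evaluation, the smoothing-and-selection step of Section 2.3 can be summarized by the following illustrative sketch. It is a simplified stand-in for our implementation: the "surrounding 5 values" are read here as a centered window of width 5, and the words and signal values in the example are only assumptions.

```python
def averaging_filter(signal, window=5):
    """Replace each value by the average over a centered window."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def positive_fragments(words, signal, min_len=3, window=5):
    """Return the fragments of `words` whose filtered signal values are
    positive, dropping fragments shorter than min_len words."""
    filtered = averaging_filter(signal, window)
    fragments, current = [], []
    for word, value in zip(words, filtered):
        if value > 0:
            current.append(word)
        else:
            if len(current) >= min_len:
                fragments.append(" ".join(current))
            current = []
    if len(current) >= min_len:
        fragments.append(" ".join(current))
    return fragments

words  = "people to focus on the past year 's achievements , reports".split()
signal = [0.7, 0.6, 0.8, 0.5, 0.6, 0.7, 0.6, 0.5, 0.7, -0.8, -0.9]
print(positive_fragments(words, signal))
# ["people to focus on the past year 's achievements"] -- ", reports" is cut off
```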
In order to evaluate the importance of the LLRLex lexicon, we also performed fragment extraction experiments that do not use this lexicon, but only GIZA-Lex. Thus, for each initial parallel corpus and each comparable corpus, we extract three datasets: FragmentExtract, SentenceExtract, and Fragment-noLLR. The sizes of the extracted datasets, measured in million English tokens, are presented in Table 2. Initial Source FragmentExtract SentenceExtract Fragment-noLLR corpus 1M BBC 0.4M 0.3M 0.8M 1M EZZ 6M 4M 8.1M 10M BBC 1.3M 0.9M 2M 10M EZZ 10M 7.9M 14.3M Table 2: Sizes of the extracted datasets. 3.3 SMT Performance Results We evaluate our extracted corpora by measuring their impact on the performance of an SMT system. We use the initial parallel corpora to train Baseline systems; and then train comparative systems using the initial corpora plus: the FragmentExtract corpora; the SentenceExtract corpora; and the FragmentExtract-noLLR corpora. In order to verify whether the fragment and sentence detection method complement each other, we also train a Fragment+Sentence system, on the initial corpus plus FragmentExtract and SentenceExtract. All MT systems are trained using a variant of the alignment template model of Och and Ney (2004). All systems use the same 2 language models: one trained on 800 million English tokens, and one trained on the English side of all our parallel and comparable corpora. This ensures that differences in performance are caused only by differences in the parallel training data. Our test data consists of news articles from the Time Bank corpus, which were translated into Romanian, and has 1000 sentences. Translation performance is measured using the automatic BLEU (Papineni et al., 2002) metric, on one reference translation. We report BLEU% numbers, i.e. we multiply the original scores by 100. The 95% confidence intervals of our scores, computed by bootstrap resampling (Koehn, 2004), indicate that a score increase of more than 1 BLEU% is statistically significant. The scores are presented in Figure 6. On the BBC corpus, the fragment extraction method produces statistically significant improvements over the baseline, while the sentence extraction method does not. Training on both datasets together brings further improvements. This indicates that this corpus has few parallel sentences, and that by going to the sub-sentence level we make better use of it. On the EZZ corpus, although our method brings improvements in the BLEU score, the sen86 Figure 6: SMT performance results tence extraction method does better. Joining both extracted datasets does not improve performance; since most of the parallel data in this corpus exists at sentence level, the extracted fragments cannot bring much additional knowledge. The Fragment-noLLR datasets bring no translation performance improvements; moreover, when the initial corpus is small (1M words) and the comparable corpus is noisy (BBC), the data has a negative impact on the BLEU score. This indicates that LLR-Lex is a higher-quality lexicon than GIZALex, and an important component of our method. 4 Previous Work Much of the work involving comparable corpora has focused on extracting word translations (Fung and Yee, 1998; Rapp, 1999; Diab and Finch, 2000; Koehn and Knight, 2000; Gaussier et al., 2004; Shao and Ng, 2004; Shinyama and Sekine, 2004). Another related research effort is that of Resnik and Smith (2003), whose system is designed to discover parallel document pairs on the Web. 
Our work lies between these two directions; we attempt to discover parallelism at the level of fragments, which are longer than one word but shorter than a document. Thus, the previous research most relevant to this paper is that aimed at mining comparable corpora for parallel sentences. The earliest efforts in this direction are those of Zhao and Vogel (2002) and Utiyama and Isahara (2003). Both methods extend algorithms designed to perform sentence alignment of parallel texts: they use dynamic programming to do sentence alignment of documents hypothesized to be similar. These approaches are only applicable to corpora which are at most “noisy-parallel”, i.e. contain documents which are fairly similar, both in content and in sentence ordering. Munteanu and Marcu (2005) analyze sentence pairs in isolation from their context, and classify them as parallel or non-parallel. They match each source document with several target ones, and classify all possible sentence pairs from each document pair. This enables them to find sentences from fairly dissimilar documents, and to handle any amount of reordering, which makes the method applicable to truly comparable corpora. The research reported by Fung and Cheung (2004a; 2004b), Cheung and Fung (2004) and Wu and Fung (2005) is aimed explicitly at “very non-parallel corpora”. They also pair each source document with several target ones and examine all possible sentence pairs; but the list of document pairs is not fixed. After one round of sentence extraction, the list is enriched with additional documents, and the system iterates. Thus, they include in the search document pairs which are dissimilar. One limitation of all these methods is that they are designed to find only full sentences. Our methodology is the first effort aimed at detecting sub-sentential correspondences. This is a difficult task, requiring the ability to recognize translationally equivalent fragments even in non-parallel sentence pairs. The work of Deng et. al (2006) also deals with sub-sentential fragments. However, they obtain parallel fragments from parallel sentence pairs (by chunking them and aligning the chunks appropriately), while we obtain them from comparable or non-parallel sentence pairs. Since our approach can extract parallel data from texts which contain few or no parallel sentences, it greatly expands the range of corpora which can be usefully exploited. 5 Conclusion We have presented a simple and effective method for extracting sub-sentential fragments from comparable corpora. We also presented a method for computing a probabilistic lexicon based on the LLR statistic, which produces a higher quality lexicon. We showed that using this lexicon helps improve the precision of our extraction method. Our approach can be improved in several aspects. The signal filtering function is very simple; more advanced filters might work better, and eliminate the need of applying additional 87 heuristics (such as our requirement that the extracted fragments have at least 3 words). The fact that the source and target signal are filtered separately is also a weakness; a joint analysis should produce better results. Despite the better lexicon, the greatest source of errors is still related to false word correspondences, generally involving punctuation and very common, closed-class words. Giving special attention to such cases should help get rid of these errors, and improve the precision of the method. 
Acknowledgements This work was partially supported under the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR001106-C-0022. References Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Percy Cheung and Pascale Fung. 2004. Sentence alignment in parallel, comparable, and quasicomparable corpora. In LREC2004 Workshop. Yonggang Deng, Shankar Kumar, and William Byrne. 2006. Segmentation and alignment of parallel text for statistical machine translation. Journal of Natural Language Engineering. to appear. Mona Diab and Steve Finch. 2000. A statistical wordlevel translation model for comparable corpora. In RIAO 2000. Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61–74. Pascale Fung and Percy Cheung. 2004a. Mining very non-parallel corpora: Parallel sentence and lexicon extraction vie bootstrapping and EM. In EMNLP 2004, pages 57–63. Pascale Fung and Percy Cheung. 2004b. Multilevel bootstrapping for extracting parallel sentences from a quasi-comparable corpus. In COLING 2004, pages 1051–1057. Pascale Fung and Lo Yuen Yee. 1998. An IR approach for translating new words from nonparallel, comparable texts. In ACL 1998, pages 414–420. Eric Gaussier, Jean-Michel Renders, Irina Matveeva, Cyril Goutte, and Herve Dejean. 2004. A geometric view on bilingual lexicon extraction from comparable corpora. In ACL 2004, pages 527–534. Philipp Koehn and Kevin Knight. 2000. Estimating word translation probabilities from unrelated monolingual corpora using the EM algorithm. In National Conference on Artificial Intelligence, pages 711–715. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In EMNLP 2004, pages 388–395. I. Dan Melamed. 2000. Models of translational equivalence among words. Computational Linguistics, 26(2):221–249. Robert C. Moore. 2004a. Improving IBM wordalignment model 1. In ACL 2004, pages 519–526. Robert C. Moore. 2004b. On log-likelihood-ratios and the significance of rare events. In EMNLP 2004, pages 333–340. Dragos Stefan Munteanu and Daniel Marcu. 2005. Improvingmachine translation performance by exploiting non-parallel corpora. Computational Linguistics, 31(4). Franz Joseph Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Franz Joseph Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–450. P. Ogilvie and J. Callan. 2001. Experiments using the Lemur toolkit. In TREC 2001, pages 103–108. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL 2002, pages 311–318. Reinhard Rapp. 1999. Automatic identification of word translations from unrelated English and German corpora. In ACL 1999, pages 519–526. Philip Resnik and Noah A. Smith. 2003. The web as a parallel corpus. Computational Linguistics, 29(3):349–380. Philip Resnik, Douglas Oard, and Gina Lewow. 2001. Improved cross-language retrieval using backoff translation. In HLT 2001. Li Shao and Hwee Tou Ng. 2004. Mining new word translations from comparable corpora. In COLING 2004, pages 618–624. Yusuke Shinyama and Satoshi Sekine. 2004. Named entity discovery using comparable news articles. In COLING 2004, pages 848–853. 
Masao Utiyama and Hitoshi Isahara. 2003. Reliable measures for aligning Japanese-English news articles and sentences. In ACL 2003, pages 72–79. Stephan Vogel. 2003. Using noisy bilingual data for statistical machine translation. In EACL 2003, pages 175–178. Dekai Wu and Pascale Fung. 2005. Inversion transduction grammar constraints for mining parallel sentences from quasi-comparable corpora. In IJCNLP 2005, pages 257–268. Bing Zhao and Stephan Vogel. 2002. Adaptive parallel sentences mining from web bilingual news collection. In 2002 IEEE Int. Conf. on Data Mining, pages 745–748. 88 | 2006 | 11 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 873–880, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Advances in Discriminative Parsing Joseph Turian and I. Dan Melamed {lastname}@cs.nyu.edu Computer Science Department New York University New York, New York 10003 Abstract The present work advances the accuracy and training speed of discriminative parsing. Our discriminative parsing method has no generative component, yet surpasses a generative baseline on constituent parsing, and does so with minimal linguistic cleverness. Our model can incorporate arbitrary features of the input and parse state, and performs feature selection incrementally over an exponential feature space during training. We demonstrate the flexibility of our approach by testing it with several parsing strategies and various feature sets. Our implementation is freely available at: http://nlp.cs.nyu.edu/parser/. 1 Introduction Discriminative machine learning methods have improved accuracy on many NLP tasks, including POS-tagging, shallow parsing, relation extraction, and machine translation. Some advances have also been made on full syntactic constituent parsing. Successful discriminative parsers have relied on generative models to reduce training time and raise accuracy above generative baselines (Collins & Roark, 2004; Henderson, 2004; Taskar et al., 2004). However, relying on information from a generative model might prevent these approaches from realizing the accuracy gains achieved by discriminative methods on other NLP tasks. Another problem is training speed: Discriminative parsers are notoriously slow to train. In the present work, we make progress towards overcoming these obstacles. We propose a flexible, end-to-end discriminative method for training parsers, demonstrating techniques that might also be useful for other structured prediction problems. The proposed method does model selection without ad-hoc smoothing or frequency-based feature cutoffs. It requires no heuristics or human effort to optimize the single important hyper-parameter. The training regime can use all available information from the entire parse history. The learning algorithm projects the hand-provided features into a compound feature space and performs incremental feature selection over this large feature space. The resulting parser achieves higher accuracy than a generative baseline, despite not using a generative model as a feature. Section 2 describes the parsing algorithm. Section 3 presents the learning method. Section 4 presents experiments with discriminative parsers built using these methods. Section 5 compares our approach to related work. 2 Parsing Algorithm The following terms will help to explain our work. A span is a range over contiguous words in the input. Spans cross if they overlap but neither contains the other. An item is a (span, label) pair. A state is a partial parse, i.e. a set of items, none of whose spans may cross. A parse inference is a (state, item) pair, i.e. a state and an item to be added to it. The frontier of a state consists of the items with no parents yet. The children of a candidate inference are the frontier items below the item to be inferred, and the head of a candidate inference is the child item chosen by English head rules (Collins, 1999, pp. 238–240). A parse path is a sequence of parse inferences. 
For some input sentence and training parse tree, a state is correct if the parser can infer zero or more additional items to obtain the training parse tree, and an inference 873 is correct if it leads to a correct state. Given input sentence s, the parser searches for parse ˆp out of the possible parses P(s): ˆp = arg min p∈P(s) CΘ(p) (1) where CΘ(p) is the cost of parse p under model Θ: CΘ(p) = X i∈p cΘ(i) (2) Section 3.1 describes how to compute cΘ(i). Because cΘ(i) ∈R+, the cost of a partial parse monotonically increases as we add items to it. The parsing algorithm considers a succession of states. The initial state contains terminal items, whose labels are the POS tags given by the tagger of Ratnaparkhi (1996). Each time we pop a state from the agenda, cΘ computes the costs for the candidate bottom-up inferences generated from that state. Each candidate inference results in a successor state to be placed on the agenda. The cost function cΘ can consider arbitrary properties of the input and parse state. We are not aware of any tractable solution to Equation 1, such as dynamic programming. Therefore, the parser finds ˆp using a variant of uniform-cost search. The parser implements the search using an agenda that stores entire states instead of single items. Each time a state is popped from the agenda, the parser uses depth-first search starting from the state that was popped until it (greedily) finds a complete parse. In preliminary experiments, this search strategy was faster than standard uniformcost search (Russell & Norvig, 1995). 3 Training Method 3.1 General Setting Our training set I consists of candidate inferences from the parse trees in the training data. From each training inference i ∈I we generate the tuple ⟨X(i), y(i), b(i)⟩. X(i) is a feature vector describing i, with each element in {0, 1}. We will use X f (i) to refer to the element of X(i) that pertains to feature f. y(i) = +1 if i is correct, and y(i) = −1 if not. Some training examples might be more important than others, so each is given a bias b(i) ∈R+, as detailed in Section 3.3. The goal during training is to induce a hypothesis hΘ(i), which is a real-valued inference scoring function. In the present work, hΘ is a linear model parameterized by a real vector Θ, which has one entry for each feature f: hΘ(i) = Θ · X(i) = X f Θ f · X f (i) (3) The sign of hΘ(i) predicts the y-value of i and the magnitude gives the confidence in this prediction. The training procedure optimizes Θ to minimize the expected risk RΘ over training set I. RΘ is the objective function, a combination of loss function LΘ and regularization term ΩΘ: RΘ(I) = LΘ(I) + ΩΘ (4) The loss of the inference set decomposes into the loss of individual inferences: LΘ(I) = X i∈I lΘ(i) (5) In principle, lΘ can be any loss function, but in the present work we use the log-loss (Collins et al., 2002): lΘ(i) = b(i) · ln(1 + exp(−µΘ(i))) (6) and µΘ(i) is the margin of inference i: µΘ(i) = y(i) · hΘ(i) (7) Inference cost cΘ(i) in Equation 2 is lΘ(i) computed using y(i) = +1 and b(i) = 1, i.e.: cΘ(i) = ln(1 + exp(−hΘ(i))) (8) ΩΘ in Equation 4 is a regularizer, which penalizes complex models to reduce overfitting and generalization error. We use the ℓ1 penalty: ΩΘ = X f λ · |Θ f | (9) where λ is a parameter that controls the strength of the regularizer. 
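The quantities defined in Equations 3–9 map directly onto code. The sketch below is our own illustration rather than the authors' implementation: it assumes a dense NumPy feature vector in place of whatever sparse representation the real system uses, and the function names are hypothetical.

```python
import numpy as np

def score(theta: np.ndarray, x: np.ndarray) -> float:
    """h_theta(i) = theta . X(i)  (Equation 3)."""
    return float(theta @ x)

def margin(theta: np.ndarray, x: np.ndarray, y: int) -> float:
    """mu_theta(i) = y(i) * h_theta(i)  (Equation 7)."""
    return y * score(theta, x)

def log_loss(theta: np.ndarray, x: np.ndarray, y: int, b: float = 1.0) -> float:
    """l_theta(i) = b(i) * ln(1 + exp(-mu_theta(i)))  (Equation 6), computed stably."""
    return b * float(np.logaddexp(0.0, -margin(theta, x, y)))

def inference_cost(theta: np.ndarray, x: np.ndarray) -> float:
    """c_theta(i): the log-loss evaluated with y = +1 and b = 1  (Equation 8)."""
    return float(np.logaddexp(0.0, -score(theta, x)))

def objective(theta: np.ndarray, examples, lam: float) -> float:
    """R_theta(I) = L_theta(I) + Omega_theta: summed log-loss plus the l1 penalty
    (Equations 4, 5, and 9). `examples` is an iterable of (x, y, b) tuples."""
    loss = sum(log_loss(theta, x, y, b) for x, y, b in examples)
    return loss + lam * float(np.abs(theta).sum())
```

Since the cost in Equation 8 is strictly positive, adding an item to a partial parse can only increase the total cost of Equation 2, consistent with the uniform-cost search variant described in Section 2.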
This choice of objective RΘ is motivated by Ng (2004), who suggests that, given a learning setting where the number of irrelevant features is exponential in the number of training examples, we can nonetheless learn effectively by building decision trees to minimize the ℓ1regularized log-loss. On the other hand, Ng (2004) suggests that most of the learning algorithms commonly used by discriminative parsers will overfit when exponentially many irrelevant features are present.1 Learning over an exponential feature space is the very setting we have in mind. A priori, we define only a set A of simple atomic features (given 1including the following learning algorithms: • unregularized logistic regression • logistic regression with an ℓ2 penalty (i.e. a Gaussian prior) • SVMs using most kernels • multilayer neural nets trained by backpropagation • the perceptron algorithm 874 in Section 4). The learner then induces compound features, each of which is a conjunction of possibly negated atomic features. Each atomic feature can have one of three values (yes/no/don’t care), so the size of the compound feature space is 3|A|, exponential in the number of atomic features. It was also exponential in the number of training examples in our experiments (|A| ≈|I|). 3.2 Boosting ℓ1-Regularized Decision Trees We use an ensemble of confidence-rated decision trees (Schapire & Singer, 1999) to represent hΘ.2 The path from the root to each node n in a decision tree corresponds to some compound feature f, and we write ϕ(n) = f. To score an inference i using a decision tree, we percolate the inference’s features X(i) down to a leaf n and return confidence Θϕ(n). An inference i percolates down to node n iff Xϕ(n) = 1. Each leaf node n keeps track of the parameter value Θϕ(n).3 The score hΘ(i) given to an inference i by the whole ensemble is the sum of the confidences returned by the trees in the ensemble. Listing 1 Outline of training algorithm. 1: procedure T(I) 2: ensemble ←∅ 3: λ ←∞ 4: while dev set accuracy is increasing do 5: t ←tree with one (root) node 6: while the root node cannot be split do 7: decay ℓ1 parameter λ 8: while some leaf in t can be split do 9: split the leaf to maximize gain 10: percolate every i ∈I to a leaf node 11: for each leaf n in t do 12: update Θϕ(n) to minimize RΘ 13: append t to ensemble Listing 1 presents our training algorithm. At the beginning of training, the ensemble is empty, Θ = 0, and the ℓ1 parameter λ is set to ∞(Steps 1.2 and 1.3). We train until the objective cannot be further reduced for the current choice of λ. We then determine the accuracy of the parser on a held-out development set using the previous λ value (before it was decreased), and stop training when this 2Turian and Melamed (2005) reported that decision trees applied to parsing have higher accuracy and training speed than decision stumps, so we build full decision trees rather than stumps. 3Any given compound feature can appear in more than one tree, but each leaf node has a distinct confidence value. For simplicity, we ignore this possibility in our discussion. accuracy reaches a plateau (Step 1.4). Otherwise, we relax the regularization penalty by decreasing λ (Steps 1.6 and 1.7) and continue training. In this way, instead of choosing the best λ heuristically, we can optimize it during a single training run (Turian & Melamed, 2005). Each training iteration (Steps 1.5–1.13) has several steps. First, we choose some compound features that have high magnitude gradient with respect to the objective function. 
We do this by building a new decision tree, whose leaves represent the chosen compound features (Steps 1.5– 1.9). Second, we confidence-rate each leaf to minimize the objective over the examples that percolate down to that leaf (Steps 1.10–1.12). Finally, we append the decision tree to the ensemble and update parameter vector Θ accordingly (Step 1.13). In this manner, compound feature selection is performed incrementally during training, as opposed to a priori. Our strategy minimizing the objective RΘ(I) (Equation 4) is a variant of steepest descent (Perkins et al., 2003). To compute the gradient of the unpenalized loss LΘ with respect to the parameter Θ f of feature f, we have: ∂LΘ(I) ∂Θ f = X i∈I ∂lΘ(i) ∂µΘ(i) · ∂µΘ(i) ∂Θ f (10) where: ∂µΘ(i) ∂Θ f = y(i) · X f (i) (11) Using Equation 6, we define the weight of an example i under the current model as the rate at which loss decreases as the margin of i increases: wΘ(i) = −∂lΘ(i) ∂µΘ(i) = b(i) · 1 1 + exp(µΘ(i)) (12) Recall that X f (i) is either 0 or 1. Combining Equations 10–12 gives: ∂LΘ(I) ∂Θ f = − X i∈I Xf (i)=1 y(i) · wΘ(i) (13) We define the gain of feature f as: GΘ(I; f) = max 0, ∂LΘ(I) ∂Θ f −λ ! (14) Equation 14 has this form because the gradient of the penalty term is undefined at Θ f = 0. This discontinuity is why ℓ1 regularization tends to produce sparse models. If GΘ(I; f) = 0, then the objective RΘ(I) is at its minimum with respect to parameter Θf . Otherwise, GΘ(I; f) is the magnitude 875 of the gradient of the objective as we adjust Θ f in the appropriate direction. To build each decision tree, we begin with a root node. The root node corresponds to a dummy “always true” feature. We recursively split nodes by choosing a splitting feature that will allow us to increase the gain. Node n with corresponding compound feature ϕ(n) = f can be split by atomic feature a if: GΘ(I; f ∧a) + GΘ(I; f ∧¬a) > GΘ(I; f) (15) If no atomic feature satisfies the splitting criterion in Equation 15, then n becomes a leaf node of the decision tree and Θϕ(n) becomes one of the values to be optimized during the parameter update step. Otherwise, we choose atomic feature ˆa to split node n: ˆa = arg max a∈A (GΘ(I; f ∧a) + GΘ(I; f ∧¬a)) (16) This split creates child nodes n1 and n2, with ϕ(n1) = f ∧ˆa and ϕ(n2) = f ∧¬ˆa. Parameter update is done sequentially on only the most recently added compound features, which correspond to the leaves of the new decision tree. After the entire tree is built, we percolate examples down to their appropriate leaf nodes. We then choose for each leaf node n the parameter Θϕ(n) that minimizes the objective over the examples in that leaf. A convenient property of decision trees is that the leaves’ compound features are mutually exclusive. Their parameters can be directly optimized independently of each other using a line search over the objective. 3.3 The Training Set We choose a single correct path from each training parse tree, and the training examples correspond to all candidate inferences considered in every state along this path.4 In the deterministic setting there is only one correct path, so example generation is identical to that of Sagae and Lavie (2005). If parsing proceeds non-deterministically then there might be multiple paths that lead to the same final parse, so we choose one randomly. This method of generating training examples does not require a working parser and can be run prior to any training. The disadvantage of this approach is that it minimizes the error of the parser at correct states only. 
It does not account for compounded error or 4Nearly all of the examples generated are negative (y = −1). teach the parser to recover from mistakes gracefully. Turian and Melamed (2005) observed that uniform example biases b(i) produced lower accuracy as training progressed, because the induced classifiers minimized the error per example. To minimize the error per state, we assign every training state equal value and share half the value uniformly among the negative examples for the examples generated from that state and the other half uniformly among the positive examples. We parallelize training by inducing 26 label classifiers (one for each non-terminal label in the Penn Treebank). Parallelization might not uniformly reduce training time because different label classifiers train at different rates. However, parallelization uniformly reduces memory usage because each label classifier trains only on inferences whose consequent item has that label. 4 Experiments Discriminative parsers are notoriously slow to train. For example, Taskar et al. (2004) took several months to train on the ≤15 word sentences in the English Penn Treebank (Dan Klein, p.c.). The present work makes progress towards faster discriminative parser training: our slowest classifier took fewer than 5 days to train. Even so, it would have taken much longer to train on the entire treebank. We follow Taskar et al. (2004) in training and testing on ≤15 word sentences in the English Penn Treebank (Taylor et al., 2003). We used sections 02–21 for training, section 22 for development, and section 23 for testing, preprocessed as per Table 1. We evaluated our parser using the standard PARSEVAL measures (Black et al., 1991): labelled precision, labelled recall, and labelled F-measure (Prec., Rec., and F1, respectively), which are based on the number of nonterminal items in the parser’s output that match those in the gold-standard parse.5 As mentioned in Section 2, items are inferred bottom-up and the parser cannot infer any item that crosses an item already in the state. Although there are O(n2) possible (span, label) pairs over a frontier containing n items, we reduce this to the ≈5 · n inferences that have at most five children.6 5The correctness of a stratified shuffling test has been called into question (Michael Collins, p.c.), so we are not aware of any valid significance tests for observed differences in PARSEVAL scores. 6Only 0.57% of non-terminals in the preprocessed develop876 Table 1 Steps for preprocessing the data. Starred steps are performed only when parse trees are available in the data (e.g. not on test data). 1. * Strip functional tags and trace indices, and remove traces. 2. * Convert PRT to ADVP. (This convention was established by Magerman (1995).) 3. Remove quotation marks (i.e. terminal items tagged ‘‘ or ’’). (Bikel, 2004) 4. * Raise punctuation. (Bikel, 2004) 5. Remove outermost punctuation.a 6. * Remove unary projections to self (i.e. duplicate items with the same span and label). 7. POS tag the text using the tagger of Ratnaparkhi (1996). 8. Lowercase headwords. aAs pointed out by an anonymous reviewer of Collins (2003), removing outermost punctuation might discard useful information. Collins and Roark (2004) saw a LFMS improvement of 0.8% over their baseline discriminative parser after adding punctuation features, one of which encoded the sentence-final punctuation. To ensure the parser does not enter an infinite loop, no two items in a state can have both the same span and the same label. 
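As a rough illustration of how this restricted space of candidate inferences might be enumerated, consider the sketch below. It is not the authors' code: items are represented as plain ((start, end), label) tuples to keep the snippet standalone, the label set is passed in as an assumed parameter, and the five-child limit is the constant mentioned above.

```python
from typing import Iterator, List, Set, Tuple

Span = Tuple[int, int]   # (start, end) word indices
Item = Tuple[Span, str]  # (span, label)

def candidate_items(frontier: List[Item], state_items: Set[Item],
                    labels: List[str], max_children: int = 5) -> Iterator[Item]:
    """Enumerate candidate bottom-up items over a frontier (listed left to right).

    Each candidate dominates a contiguous run of at most max_children frontier
    items, giving roughly 5*n candidate spans instead of O(n^2), and no candidate
    may repeat the span and label of an item already in the state."""
    n = len(frontier)
    for lo in range(n):
        for hi in range(lo + 1, min(lo + max_children, n) + 1):
            span = (frontier[lo][0][0], frontier[hi - 1][0][1])
            for label in labels:
                candidate = (span, label)
                if candidate not in state_items:
                    yield candidate
```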
Given these restrictions on candidate inferences, there were roughly 40 million training examples generated in the training set. These were partitioned among the 26 constituent label classifiers. Building a decision tree (Steps 1.5–1.9 in Listing 1) using the entire example set I can be very expensive. We estimate loss gradients (Equation 13) using a sample of the inference set, which gives a 100-fold increase in training speed (Turian & Melamed, 2006). Our atomic feature set A contains 300K features, each of the form “is there an item in group J whose label/headword/headtag/headtagclass is ‘X’?”.7 Possible values of ‘X’ for each predicate are collected from the training data. For 1 ≤n ≤3, possible values for J are: • the first/last n child items • the first n left/right context items • the n children items left/right of the head • the head item. The left and right context items are the frontier items to the left and right of the children of the candidate inference, respectively. 4.1 Different Parsing Strategies To demonstrate the flexibility of our learning procedure, we trained three different parsers: left-to-right (l2r), right-to-left (r2l), ment set have more than five children. 7The predicate headtagclass is a supertype of the headtag. Given our compound features, these are not strictly necessary, but they accelerate training. An example is “proper noun,” which contains the POS tags given to singular and plural proper nouns. Space constraints prevent enumeration of the headtagclasses, which are instead provided at the URL given in the abstract. Table 2 Results on the development set, training and testing using only ≤15 word sentences. active λ features % Rec. % Prec. F1 l2r 0.040 11.9K 89.86 89.63 89.74 b.u. 0.020 13.7K 89.92 89.84 89.88 r2l 0.014 14.0K 90.66 89.81 90.23 and non-deterministic bottom-up (b.u.). The non-deterministic parser was allowed to choose any bottom-up inference. The other two parsers were deterministic: bottom-up inferences had to be performed strictly left-to-right or rightto-left, respectively. We stopped training when each parser had 15K active features. Figure 1 shows the accuracy of the different runs over the development set as training progressed. Table 2 gives the PARSEVAL scores of these parsers at their optimal ℓ1 penalty setting. We found that the perplexity of the r2l model was low so that, in 85% of the sentences, its greedy parse was the optimal one. The l2r parser does poorly because its decisions were more difficult than those of the other parsers. If it inferred far-right items, it was more likely to prevent correct subsequent inferences that were to the left. But if it inferred far-left items, then it went against the right-branching tendency of English sentences. The left-to-right parser would likely improve if we were to use a left-corner transform (Collins & Roark, 2004). Parsers in the literature typically choose some local threshold on the amount of search, such as a maximum beam width. With an accurate scoring function, restricting the search space using a fixed beam width might be unnecessary. Instead, we imposed a global threshold on exploration of the search space. Specifically, if the 877 Figure 1 F1 scores on the development set of the Penn Treebank, using only ≤15 word sentences. The x-axis shows the number of non-zero parameters in each parser, summed over all classifiers. 85% 86% 87% 88% 89% 90% 15K 10K 5K 2.5K 1.5K Devel. 
F-measure total number of non-zero parameters right-to-left left-to-right bottom up parser has found some complete parse and has explored at least 100K states (i.e. scored at least 100K inferences), search stopped prematurely and the parser would return the (possibly sub-optimal) current best complete parse. The l2r and r2l parsers never exceeded this threshold, and always found the optimal complete parse. However, the non-deterministic bottom-up parser’s search was cut-short in 28% of the sentences. The nondeterministic parser can reach each parse state through many different paths, so it searches a larger space than a deterministic parser, with more redundancy. To gain a better understanding of the weaknesses of our parser, we examined a sample of 50 development sentences that the r2l parser did not get entirely correct. Roughly half the errors were due to noise and genuine ambiguity. The remaining errors fell into three types, occurring with roughly the same frequency: • ADVPs and ADJPs The r2l parser had F1 = 81.1% on ADVPs, and F1 = 71.3% on ADJPs. Annotation of ADJP and ADVP in the PTB is inconsistent, particularly for unary projections. • POS Tagging Errors Many of the parser’s errors were due to incorrect POS tags. In future work we will integrate POS-tagging as inferences of the parser, allowing it to entertain competing hypotheses about the correct tagging. • Bilexical dependencies Although compound features exist to detect affinities between words, the parser had difficulties with bilexical dependency decisions that were unobserved in the training data. The classifier would need more training data to learn these affinities. Figure 2 F1 scores of right-to-left parsers with different atomic feature sets on the development set of the Penn Treebank, using only ≤15 word sentences. 85% 86% 87% 88% 89% 90% 91% 30K 20K 10K 5K 2.5K 1.5K Devel. F-measure total number of non-zero parameters kitchen sink baseline 4.2 More Atomic Features We compared our right-to-left parser with the baseline set of atomic features to one with a far richer atomic feature set, including unbounded context features, length features, and features of the terminal items. This “kitchen sink” parser merely has access to many more item groups J, described in Table 3. All features are all of the form given earlier, except for length features (Eisner & Smith, 2005). Length features compute the size of one of the groups of items in the indented list in Table 3. The feature determines if this length is equal to/greater than to n, 0 ≤n ≤15. The kitchen sink parser had 1.1 million atomic features, 3.7 times the number available in the baseline. In future work, we plan to try linguistically more sophisticated features (Charniak & Johnson, 2005) as well as sub-tree features (Bod, 2003; Kudo et al., 2005). Figure 2 shows the accuracy of the right-toleft parsers with different atomic feature sets over the development set as training progressed. Even though the baseline training made progress more quickly than the kitchen sink, the kitchen sink’s F1 surpassed the baseline’s F1 early in training, and at 6.3K active parameters it achieved a development set F1 of 90.55%. 4.3 Test Set Results To situate our results in the literature, we compare our results to those reported by Taskar et al. (2004) and Turian and Melamed (2005) for their discriminative parsers, which were also trained and tested on ≤15 word sentences. 
We also compare our parser to a representative non-discriminative 878 Table 3 Item groups available in the kitchen sink run. • the first/last n child items, 1 ≤n ≤4 • the first n left/right context items, 1 ≤n ≤4 • the n children items left/right of the head, 1 ≤n ≤4 • the nth frontier item left/right of the leftmost/head/rightmost child item, 1 ≤n ≤3 • the nth terminal item left/right of the leftmost/head/rightmost terminal item dominated by the item being inferred, 1 ≤n ≤3 • the leftmost/head/rightmost child item of the leftmost/head/rightmost child item • the following groups of frontier items: – all items – left/right context items – non-leftmost/non-head/non-rightmost child items – child items left/right of the head item, inclusive/exclusive • the terminal items dominated by one of the item groups in the indented list above Table 4 Results of parsers on the test set, training and testing using only ≤15 word sentences. % Rec. % Prec. F1 Turian and Melamed (2005) 86.47 87.80 87.13 Bikel (2004) 87.85 88.75 88.30 Taskar et al. (2004) 89.10 89.14 89.12 kitchen sink 89.26 89.55 89.40 parser (Bikel, 2004)8, the only one that we were able to train and test under exactly the same experimental conditions (including the use of POS tags from the tagger of Ratnaparkhi (1996)). Table 4 shows the PARSEVAL results of these four parsers on the test set. 5 Comparison with Related Work Our parsing approach is based upon a single endto-end discriminative learning machine. Collins and Roark (2004) and Taskar et al. (2004) beat the generative baseline only after using the standard trick of using the output from a generative model as a feature. Henderson (2004) finds that discriminative training was too slow, and reports accuracy higher than generative models by discriminatively reranking the output of his generative model. Unlike these state-of-the-art discriminative parsers, our method does not (yet) use any information from a generative model to improve training speed or accuracy. As far as we know, we present the first discriminative parser that does not use information from a generative model to beat a 8Bikel (2004) is a “clean room” reimplementation of the Collins (1999) model with comparable accuracy. generative baseline (the Collins model). The main limitation of our work is that we can do training reasonably quickly only on short sentences because a sentence with n words generates O(n2) training inferences in total. Although generating training examples in advance without a working parser (Turian & Melamed, 2005) is much faster than using inference (Collins & Roark, 2004; Henderson, 2004; Taskar et al., 2004), our training time can probably be decreased further by choosing a parsing strategy with a lower branching factor. Like our work, Ratnaparkhi (1999) and Sagae and Lavie (2005) generate examples off-line, but their parsing strategies are essentially shift-reduce so each sentence generates only O(n) training examples. An advantage of our approach is its flexibility. As our experiments showed, it is quite simple to substitute in different parsing strategies. Although we used very little linguistic information (the head rules and the POS tag classes), our model could also start with more sophisticated task-specific features in its atomic feature set. Atomic features that access arbitrary information are represented directly without the need for an induced intermediate representation (cf. Henderson, 2004). Other papers (Clark & Curran, 2004; Kaplan et al., 2004, e.g.) 
have applied log-linear models to parsing. These works are based upon conditional models, which include a normalization term. However, our loss function forgoes normalization, which means that it is easily decomposed into the loss of individual inferences (Equation 5). 879 Decomposition of the loss allows the objective to be optimized in parallel. This might be an advantage for larger structured prediction problems where there are more opportunities for parallelization, for example machine translation. The only important hyper-parameter in our method is the ℓ1 penalty factor. We optimize it as part of the training process, choosing the value that maximizes accuracy on a held-out development set. This technique stands in contrast to more ad-hoc methods for choosing hyper-parameters, which may require prior knowledge or additional experimentation. 6 Conclusion Our work has made advances in both accuracy and training speed of discriminative parsing. As far as we know, we present the first discriminative parser that surpasses a generative baseline on constituent parsing without using a generative component, and it does so with minimal linguistic cleverness. Our approach performs feature selection incrementally over an exponential feature space during training. Our experiments suggest that the learning algorithm is overfitting-resistant, as hypothesized by Ng (2004). If this is the case, it would reduce the effort required for feature engineering. An engineer can merely design a set of atomic features whose powerset contains the requisite information. Then, the learning algorithm can perform feature selection over the compound feature space, avoiding irrelevant compound features. In future work, we shall make some standard improvements. Our parser should infer its own POS tags to improve accuracy. A shift-reduce parsing strategy will generate fewer training inferences, and might lead to shorter training times. Lastly, we plan to give the model linguistically more sophisticated features. We also hope to apply the model to other structured prediction tasks, such as syntax-driven machine translation. Acknowledgments The authors would like to thank Chris Pike, Cynthia Rudin, and Ben Wellington, as well as the anonymous reviewers, for their helpful comments and constructive criticism. This research was sponsored by NSF grants #0238406 and #0415933. References Bikel, D. M. (2004). Intricacies of Collins’ parsing model. Computational Linguistics, 30(4). Black, E., Abney, S., Flickenger, D., Gdaniec, C., Grishman, R., Harrison, P., et al. (1991). A procedure for quantitatively comparing the syntactic coverage of English grammars. In Speech and Natural Language. Bod, R. (2003). An efficient implementation of a new DOP model. In EACL. Charniak, E., & Johnson, M. (2005). Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In ACL. Clark, S., & Curran, J. R. (2004). Parsing the WSJ using CCG and log-linear models. In ACL. Collins, M. (1999). Head-driven statistical models for natural language parsing. Doctoral dissertation. Collins, M. (2003). Head-driven statistical models for natural language parsing. Computational Linguistics, 29(4). Collins, M., & Roark, B. (2004). Incremental parsing with the perceptron algorithm. In ACL. Collins, M., Schapire, R. E., & Singer, Y. (2002). Logistic regression, AdaBoost and Bregman distances. Machine Learning, 48(1-3). Eisner, J., & Smith, N. A. (2005). Parsing with soft and hard constraints on dependency length. In IWPT. Henderson, J. (2004). 
Discriminative training of a neural network statistical parser. In ACL. Kaplan, R. M., Riezler, S., King, T. H., Maxwell, III, J. T., Vasserman, A., & Crouch, R. (2004). Speed and accuracy in shallow and deep stochastic parsing. In HLT/NAACL. Kudo, T., Suzuki, J., & Isozaki, H. (2005). Boosting-based parse reranking with subtree features. In ACL. Magerman, D. M. (1995). Statistical decision-tree models for parsing. In ACL. Ng, A. Y. (2004). Feature selection, ℓ1 vs. ℓ2 regularization, and rotational invariance. In ICML. Perkins, S., Lacker, K., & Theiler, J. (2003). Grafting: Fast, incremental feature selection by gradient descent in function space. Journal of Machine Learning Research, 3. Ratnaparkhi, A. (1996). A maximum entropy part-of-speech tagger. In EMNLP. Ratnaparkhi, A. (1999). Learning to parse natural language with maximum entropy models. Machine Learning, 34(13). Russell, S., & Norvig, P. (1995). Artificial intelligence: A modern approach. Sagae, K., & Lavie, A. (2005). A classifier-based parser with linear run-time complexity. In IWPT. Schapire, R. E., & Singer, Y. (1999). Improved boosting using confidence-rated predictions. Machine Learning, 37(3). Taskar, B., Klein, D., Collins, M., Koller, D., & Manning, C. (2004). Max-margin parsing. In EMNLP. Taylor, A., Marcus, M., & Santorini, B. (2003). The Penn Treebank: an overview. In A. Abeill´e (Ed.), Treebanks: Building and using parsed corpora (chap. 1). Turian, J., & Melamed, I. D. (2005). Constituent parsing by classification. In IWPT. Turian, J., & Melamed, I. D. (2006). Computational challenges in parsing by classification. In HLT-NAACL workshop on computationally hard problems and joint inference in speech and language processing. 880 | 2006 | 110 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 881–888, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Prototype-Driven Grammar Induction Aria Haghighi Computer Science Division University of California Berkeley [email protected] Dan Klein Computer Science Division University of California Berkeley [email protected] Abstract We investigate prototype-driven learning for primarily unsupervised grammar induction. Prior knowledge is specified declaratively, by providing a few canonical examples of each target phrase type. This sparse prototype information is then propagated across a corpus using distributional similarity features, which augment an otherwise standard PCFG model. We show that distributional features are effective at distinguishing bracket labels, but not determining bracket locations. To improve the quality of the induced trees, we combine our PCFG induction with the CCM model of Klein and Manning (2002), which has complementary stengths: it identifies brackets but does not label them. Using only a handful of prototypes, we show substantial improvements over naive PCFG induction for English and Chinese grammar induction. 1 Introduction There has been a great deal of work on unsupervised grammar induction, with motivations ranging from scientific interest in language acquisition to engineering interest in parser construction (Carroll and Charniak, 1992; Clark, 2001). Recent work has successfully induced unlabeled grammatical structure, but has not successfully learned labeled tree structure (Klein and Manning, 2002; Klein and Manning, 2004; Smith and Eisner, 2004) . In this paper, our goal is to build a system capable of producing labeled parses in a target grammar with as little total effort as possible. We investigate a prototype-driven approach to grammar induction, in which one supplies canonical examples of each target concept. For example, we might specify that we are interested in trees which use the symbol NP and then list several examples of prototypical NPs (determiner noun, pronouns, etc., see figure 1 for a sample prototype list). This prototype information is similar to specifying an annotation scheme, which even human annotators must be provided before they can begin the construction of a treebank. In principle, prototypedriven learning is just a kind of semi-supervised learning. However, in practice, the information we provide is on the order of dozens of total seed instances, instead of a handful of fully parsed trees, and is of a different nature. The prototype-driven approach has three strengths. First, since we provide a set of target symbols, we can evaluate induced trees using standard labeled parsing metrics, rather than the far more forgiving unlabeled metrics described in, for example, Klein and Manning (2004). Second, knowledge is declaratively specified in an interpretable way (see figure 1). If a user of the system is unhappy with its systematic behavior, they can alter it by altering the prototype information (see section 7.1 for examples). Third, and related to the first two, one does not confuse the ability of the system to learn a consistent grammar with its ability to learn the grammar a user has in mind. In this paper, we present a series of experiments in the induction of labeled context-free trees using a combination of unlabeled data and sparse prototypes. 
We first affirm the well-known result that simple, unconstrained PCFG induction produces grammars of poor quality as measured against treebank structures. We then augment a PCFG with prototype features, and show that these features, when propagated to non-prototype sequences using distributional similarity, are effective at learning bracket labels on fixed unlabeled trees, but are still not enough to learn good tree structures without bracketing information. Finally, we intersect the feature-augmented PCFG with the CCM model of Klein and Manning (2002), a highquality bracketing model. The intersected model is able to learn trees with higher unlabeled F1 than those in Klein and Manning (2004). More impor881 tantly, its trees are labeled and can be evaluated according to labeled metrics. Against the English Penn Treebank, our final trees achieve a labeled F1 of 65.1 on short sentences, a 51.7% error reduction over naive PCFG induction. 2 Experimental Setup The majority of our experiments induced tree structures from the WSJ section of the English Penn treebank (Marcus et al., 1994), though see section 7.4 for an experiment on Chinese. To facilitate comparison with previous work, we extracted WSJ-10, the 7,422 sentences which contain 10 or fewer words after the removal of punctuation and null elements according to the scheme detailed in Klein (2005). We learned models on all or part of this data and compared their predictions to the manually annotated treebank trees for the sentences on which the model was trained. As in previous work, we begin with the part-of-speech (POS) tag sequences for each sentence rather than lexical sequences (Carroll and Charniak, 1992; Klein and Manning, 2002). Following Klein and Manning (2004), we report unlabeled bracket precision, recall, and F1. Note that according to their metric, brackets of size 1 are omitted from the evaluation. Unlike that work, all of our induction methods produce trees labeled with symbols which are identified with treebank categories. Therefore, we also report labeled precision, recall, and F1, still ignoring brackets of size 1.1 3 Experiments in PCFG induction As an initial experiment, we used the insideoutside algorithm to induce a PCFG in the straightforward way (Lari and Young, 1990; Manning and Sch¨utze, 1999). For all the experiments in this paper, we considered binary PCFGs over the nonterminals and terminals occuring in WSJ10. The PCFG rules were of the following forms: • X →Y Z, for nonterminal types X, Y, and Z, with Y ̸= X or Z ̸= X • X →t Y , X →Y t, for each terminal t • X →t t′, for terminals t and t′ For a given sentence S, our CFG generates labeled trees T over S.2 Each tree consists of binary 1In cases where multiple gold labels exist in the gold trees, precision and recall were calculated as in Collins (1999). 2Restricting our CFG to a binary branching grammar results in an upper bound of 88.1% on unlabeled F1. productions X(i, j) →α over constituent spans (i, j), where α is a pair of non-terminal and/or terminal symbols in the grammar. The generative probability of a tree T for S is: PCFG(T, S) = Y X(i,j)→α∈T P(α|X) In the inside-outside algorithm, we iteratively compute posterior expectations over production occurences at each training span, then use those expectations to re-estimate production probabilities. This process is guaranteed to converge to a local extremum of the data likelihood, but initial production probability estimates greatly influence the final grammar (Carroll and Charniak, 1992). 
In particular, uniform initial estimates are an (unstable) fixed point. The classic approach is to add a small amount of random noise to the initial probabilities in order to break the symmetry between grammar symbols. We randomly initialized 5 grammars using treebank non-terminals and trained each to convergence on the first 2000 sentences of WSJ-10. Viterbi parses were extracted for each of these 2000 sentences according to each grammar. Of course, the parses’ symbols have nothing to anchor them to our intended treebank symbols. That is, an NP in one of these grammars may correspond to the target symbol VP, or may not correspond well to any target symbol. To evaluate these learned grammars, we must map the models’ phrase types to target phrase types. For each grammar, we followed the common approach of greedily mapping model symbols to target symbols in the way which maximizes the labeled F1. Note that this can, and does, result in mapping multiple model symbols to the most frequent target symbols. This experiment, labeled PCFG × NONE in figure 4, resulted in an average labeled F1 of 26.3 and an unlabeled F1 of 45.7. The unlabeled F1 is better than randomly choosing a tree (34.7), but not better than always choosing a right branching structure (61.7). Klein and Manning (2002) suggest that the task of labeling constituents is significantly easier than identifying them. Perhaps it is too much to ask a PCFG induction algorithm to perform both of these tasks simultaneously. Along the lines of Pereira and Schabes (1992), we reran the insideoutside algorithm, but this time placed zero mass on all trees which did not respect the bracketing of the gold trees. This constraint does not fully 882 Phrase Prototypes Phrase Prototypes NP DT NN VP VBN IN NN JJ NNS VBD DT NN NNP NNP MD VB CD S PRP VBD DT NN QP CD CD DT NN VBD IN DT NN RB CD DT VBZ DT JJ NN DT CD CD PP IN NN ADJP RB JJ TO CD CD JJ IN PRP JJ CC JJ ADVP RB RB RB CD RB CC RB VP-INF VB NN NP-INF NN POS Figure 1: English phrase type prototype list manually specified (The entire supervision for our system). The second part of the table is additional prototypes discussed in section 7.1. eliminate the structural uncertainty since we are inducing binary trees and the gold trees are flatter than binary in many cases. This approach of course achieved the upper bound on unlabeled F1, because of the gold bracket constraints. However, it only resulted in an average labeled F1 of 52.6 (experiment PCFG × GOLD in figure 4). While this labeled score is an improvement over the PCFG × NONE experiment, it is still relatively disappointing. 3.1 Encoding Prior Knowledge with Prototypes Clearly, we need to do something more than adding structural bias (e.g. bracketing information) if we are to learn a PCFG in which the symbols have the meaning and behaviour we intend. How might we encode information about our prior knowledge or intentions? Providing labeled trees is clearly an option. This approach tells the learner how symbols should recursively relate to each other. Another option is to provide fully linearized yields as prototypes. We take this approach here, manually creating a list of POS sequences typical of the 7 most frequent categories in the Penn Treebank (see figure 1).3 Our grammar is limited to these 7 phrase types plus an additional type which has no prototypes and is unconstrained.4 This list grounds each sym3A possible objection to this approach is the introduction of improper reasearcher bias via specifying prototypes. 
See section 7.3 for an experiment utilizing an automatically generated prototype list with comparable results. 4In our experiments we found that adding prototypes for more categories did not improve performance and took more bol in terms of an observable portion of the data, rather than attempting to relate unknown symbols to other unknown symbols. Broadly, we would like to learn a grammar which explains the observed data (EM’s objective) but also meets our prior expectations or requirements of the target grammar. How might we use such a list to constrain the learning of a PCFG with the inside-outside algorithm? We might require that all occurences of a prototype sequence, say DT NN, be constituents of the corresponding type (NP). However, human-elicited prototypes are not likely to have the property that, when they occur, they are (nearly) always constituents. For example, DT NN is a perfectly reasonable example of a noun phrase, but is not a constituent when it is part of a longer DT NN NN constituent. Therefore, when summing over trees with the inside-outside algorithm, we could require a weaker property: whenever a prototype sequence is a constituent it must be given the label specified in the prototype file.5 This constraint is enough to break the symmetry between the model labels, and therefore requires neither random initialization for training, nor post-hoc mapping of labels for evaluation. Adding prototypes in this way and keeping the gold bracket constraint gave 59.9 labeled F1. The labeled F1 measure is again an improvement over naive PCFG induction, but is perhaps less than we might expect given that the model has been given bracketing information and has prototypes as a form of supervision to direct it. In response to a prototype, however, we may wish to conclude something stronger than a constraint on that particular POS sequence. We might hope that sequences which are similar to a prototype in some sense are generally given the same label as that prototype. For example, DT NN is a noun phrase prototype, the sequence DT JJ NN is another good candidate for being a noun phrase. This kind of propagation of constraints requires that we have a good way of defining and detecting similarity between POS sequences. 3.2 Phrasal Distributional Similarity A central linguistic argument for constituent types is substitutability: phrases of the same type appear time. We note that we still evaluate against all phrase types regardless of whether or not they are modeled by our grammar. 5Even this property is likely too strong: prototypes may have multiple possible labels, for example DT NN may also be a QP in the English treebank. 883 Yield Prototype Skew KL Phrase Type Skew KL DT JJ NN DT NN 0.10 NP 0.39 IN DT VBG NN IN NN 0.24 PP 0.45 DT NN MD VB DT NNS PRP VBD DT NN 0.54 S 0.58 CC NN IN NN 0.43 PP 0.71 MD NNS PRP VBD DT NN 1.43 NONE Figure 2: Yields along with most similar prototypes and phrase types, guessed according to (3). in similar contexts and are mutually substitutable (Harris, 1954; Radford, 1988). For instance, DT JJ NN and DT NN occur in similar contexts, and are indeed both common NPs. This idea has been repeatedly and successfully operationalized using various kinds of distributional clustering, where we define a similarity measure between two items on the basis of their immediate left and right contexts (Sch¨utze, 1995; Clark, 2000; Klein and Manning, 2002). 
As in Clark (2001), we characterize the distribution of a sequence by the distribution of POS tags occurring to the left and right of that sequence in a corpus. Each occurence of a POS sequence α falls in a context x α y, where x and y are the adjacent tags. The distribution over contexts x −y for a given α is called its signature, and is denoted by σ(α). Note that σ(α) is composed of context counts from all occurences, constitiuent and distituent, of α. Let σc(α) denote the context distribution for α where the context counts are taken only from constituent occurences of α. For each phrase type in our grammar, X, define σc(X) to be the context distribution obtained from the counts of all constituent occurences of type X: σc(X) = Ep(α|X) σc(α) (1) where p(α|X) is the distribution of yield types for phrase type X. We compare context distributions using the skewed KL divergence: DSKL(p, q) = DKL(p∥γp + (1 −γ)q) where γ controls how much of the source distributions is mixed in with the target distribution. A reasonable baseline rule for classifying the phrase type of a POS yield is to assign it to the phrase from which it has minimal divergence: type(α) = arg min X DSKL(σc(α), σc(X)) (2) However, this rule is not always accurate, and, moreover, we do not have access to σc(α) or σc(X). We chose to approximate σc(X) using the prototype yields for X as samples from p(α|X). Letting proto(X) denote the (few) prototype yields for phrase type X, we define ˜σ(X): ˜σ(X) = 1 |proto(X)| X α∈proto(X) σ(α) Note ˜σ(X) is an approximation to (1) in several ways. We have replaced an expectation over p(α|X) with a uniform weighting of proto(X), and we have replaced σc(α) with σ(α) for each term in that expectation. Because of this, we will rely only on high confidence guesses, and allow yields to be given a NONE type if their divergence from each ˜σ(X) exceeds a fixed threshold t. This gives the following alternative to (2): type(α) = (3) ( NONE, if minX DSKL(σ(α), ˜σ(X)) < t arg minX DSKL(σ(α), ˜σ(X)), otherwise We built a distributional model implementing the rule in (3) by constructing σ(α) from context counts in the WSJ portion of the Penn Treebank as well as the BLIPP corpus. Each ˜σ(X) was approximated by a uniform mixture of σ(α) for each of X’s prototypes α listed in figure 1. This method of classifying constituents is very precise if the threshold is chosen conservatively enough. For instance, using a threshold of t = 0.75 and γ = 0.1, this rule correctly classifies the majority label of a constituent-type with 83% precision, and has a recall of 23% over constituent types. Figure 2 illustrates some sample yields, the prototype sequence to which it is least divergent, and the output of rule (3). We incorporated this distributional information into our PCFG induction scheme by adding a prototype feature over each span (i, j) indicating the output of (3) for the yield α in that span. Associated with each sentence S is a feature map F specifying, for each (i, j), a prototype feature pij. These features are generated using an augmented CFG model, CFG+, given by:6 PCFG+(T, F) = Y X(i,j)→α∈T P(pij|X)P(α|X) = Y X(i,j)→α∈T φCFG+(X →α, pij) 6Technically, all features in F must be generated for each assignment to T, which means that there should be terms in this equation for the prototype features on distituent spans. However, we fixed the prototype distribution to be uniform for distituent spans so that the equation is correct up to a constant depending on F. 
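A small sketch may clarify how the skewed divergence and the decision rule in (3) fit together. This is our own illustration, not the released system: signatures are assumed to be probability vectors over a shared, fixed enumeration of contexts, the function names are hypothetical, and, following the prose description, a yield is assigned NONE when even its closest prototype signature lies farther away than the threshold t.

```python
from typing import Dict, List
import numpy as np

def skewed_kl(p: np.ndarray, q: np.ndarray, gamma: float = 0.1) -> float:
    """D_SKL(p, q) = KL(p || gamma*p + (1 - gamma)*q) over context distributions."""
    mix = gamma * p + (1.0 - gamma) * q
    support = p > 0
    return float(np.sum(p[support] * np.log(p[support] / mix[support])))

def proto_signature(prototype_signatures: List[np.ndarray]) -> np.ndarray:
    """sigma~(X): a uniform mixture of the signatures of X's prototype yields."""
    return np.mean(np.stack(prototype_signatures), axis=0)

def classify_yield(sigma_alpha: np.ndarray, proto_sigs: Dict[str, np.ndarray],
                   t: float = 0.75, gamma: float = 0.1) -> str:
    """Rule (3): assign the least-divergent phrase type, or NONE if even the
    smallest divergence exceeds the threshold t."""
    best_label, best_div = "NONE", float("inf")
    for label, sig in proto_sigs.items():
        d = skewed_kl(sigma_alpha, sig, gamma)
        if d < best_div:
            best_label, best_div = label, d
    return best_label if best_div <= t else "NONE"
```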
884 P(S|ROOT) ¯ ROOT S P(NP VP|S) P(P = NONE|S) XXXXXX P(NN NNS|NP) P(P = NP|NP) ff NP HHH NNN payrolls NN Factory VP P(VBD PP|VP) P(P = VP|VP) aaaa ! ! ! ! VBD fell PP P(IN NN|PP) P(P = PP|PP) ! ! ! aaa NN November IN in Figure 3: Illustration of PCFG augmented with prototype similarity features. where φCFG+(X →α, pij) is the local factor for placing X →α on a span with prototype feature pij. An example is given in figure 3. For our experiments, we fixed P(pij|X) to be: P(pij|X) = ( 0.60, if pij = X uniform, otherwise Modifying the model in this way, and keeping the gold bracketing information, gave 71.1 labeled F1 (see experiment PROTO × GOLD in figure 4), a 40.3% error reduction over naive PCFG induction in the presence of gold bracketing information. We note that the our labeled F1 is upper-bounded by 86.0 due to unary chains and more-than-binary configurations in the treebank that cannot be obtained from our binary grammar. We conclude that in the presence of gold bracket information, we can achieve high labeled accuracy by using a CFG augmented with distributional prototype features. 4 Constituent Context Model So far, we have shown that, given perfect perfect bracketing information, distributional prototype features allow us to learn tree structures with fairly accurate labels. However, such bracketing information is not available in the unsupervised case. Perhaps we don’t actually need bracketing constraints in the presence of prototypes and distributional similarity features. However this experiment, labeled PROTO × NONE in figure 4, gave only 53.1 labeled F1 (61.1 unlabeled), suggesting that some amount of bracketing constraint is necessary to achieve high performance. Fortunately, there are unsupervised systems which can induce unlabeled bracketings with reasonably high accuracy. One such model is the constituent-context model (CCM) of Klein and Manning (2002), a generative distributional model. For a given sentence S, the CCM generates a bracket matrix, B, which for each span (i, j), indicates whether or not it is a constituent (Bij = c) or a distituent (Bij = d). In addition, it generates a feature map F ′, which for each span (i, j) in S specifies a pair of features, F ′ ij = (yij, cij), where yij is the POS yield of the span, and cij is the context of the span, i.e identity of the conjoined left and right POS tags: PCCM(B, F ′) = P(B) Y (i,j) P(yij|Bij)P(cij|Bij) The distribution P(B) only places mass on bracketings which correspond to binary trees. We can efficiently compute PCCM(B, F ′) (up to a constant) depending on F ′ using local factors φCCM(yij, cij) which decomposes over constituent spans:7 PCCM(B, F ′) ∝ Y (i,j):Bij=c P(yij|c)P(cij|c) P(yij|d)P(cij|d) = Y (i,j):Bij=c φCCM(yij, cij) The CCM by itself yields an unlabeled F1 of 71.9 on WSJ-10, which is reasonably high, but does not produce labeled trees. 5 Intersecting CCM and PCFG The CCM and PCFG models provide complementary views of syntactic structure. The CCM explicitly learns the non-recursive contextual and yield properties of constituents and distituents. The PCFG model, on the other hand, does not explicitly model properties of distituents but instead focuses on modeling the hierarchical and recursive properties of natural language syntax. One would hope that modeling both of these aspects simultaneously would improve the overall quality of our induced grammar. We therefore combine the CCM with our featureaugmented PCFG, denoted by PROTO in experiment names. 
When we run EM on either of the models alone, at each iteration and for each training example, we calculate posteriors over that 7Klein (2005) gives a full presentation. 885 model’s latent variables. For CCM, the latent variable is a bracketing matrix B (equivalent to an unlabeled binary tree), while for the CFG+ the latent variable is a labeled tree T. While these latent variables aren’t exactly the same, there is a close relationship between them. A bracketing matrix constrains possible labeled trees, and a given labeled tree determines a bracketing matrix. One way to combine these models is to encourage both models to prefer latent variables which are compatible with each other. Similar to the approach of Klein and Manning (2004) on a different model pair, we intersect CCM and CFG+ by multiplying their scores for any labeled tree. For each possible labeled tree over a sentence S, our generative model for a labeled tree T is given as follows: P(T, F, F ′) = (4) PCFG+(T, F)PCCM(B(T), F ′) where B(T) corresponds to the bracketing matrix determined by T. The EM algorithm for the product model will maximize: P(S,F, F ′) = X T∈T (S) PCCM(B, F ′)PCFG+(T, F) = X B PCCM(B, F ′) X T∈T (B,S) PCFG+(T, F) where T (S) is the set of labeled trees consistent with the sentence S and T (B, S) is the set of labeled trees consistent with the bracketing matrix B and the sentence S. Notice that this quantity increases as the CCM and CFG+ models place probability mass on compatible latent structures, giving an intuitive justification for the success of this approach. We can compute posterior expectations over (B, T) in the combined model (4) using a variant of the inside-outside algorithm. The local factor for a binary rule r = X →Y Z, over span (i, j), with CCM features F ′ ij = (yij, cij) and prototype feature pij, is given by the product of local factors for the CCM and CFG+ models: φ(r, (i, j)) = φCCM(yij, cij)φCFG+(r, pij) From these local factors, the inside-outside algorithm produces expected counts for each binary rule, r, over each span (i, j) and split point k, denoted by P(r, (i, j), k|S, F, F ′). These posteriors are sufficient to re-estimate all of our model parameters. Labeled Unlabeled Setting Prec. Rec. F1 Prec. Rec. F1 No Brackets PCFG × NONE 23.9 29.1 26.3 40.7 52.1 45.7 PROTO × NONE 51.8 62.9 56.8 59.6 76.2 66.9 Gold Brackets PCFG × GOLD 47.0 57.2 51.6 78.8 100.0 88.1 PROTO × GOLD 64.8 78.7 71.1 78.8 100.0 88.1 CCM Brackets CCM 64.2 81.6 71.9 PCFG × CCM 32.3 38.9 35.3 64.1 81.4 71.8 PROTO × CCM 56.9 68.5 62.2 68.4 86.9 76.5 BEST 59.4 72.1 65.1 69.7 89.1 78.2 UBOUND 78.8 94.7 86.0 78.8 100.0 88.1 Figure 4: English grammar induction results. The upper bound on labeled recall is due to unary chains. 6 CCM as a Bracketer We tested the product model described in section 5 on WSJ-10 under the same conditions as in section 3. Our initial experiment utilizes no protoype information, random initialization, and greedy remapping of its labels. This experiment, PCFG × CCM in figure 4, gave 35.3 labeled F1, compared to the 51.6 labeled F1 with gold bracketing information (PCFG × GOLD in figure 4). Next we added the manually specified prototypes in figure 1, and constrained the model to give these yields their labels if chosen as constituents. This experiment gave 48.9 labeled F1 (73.3 unlabeled). The error reduction is 21.0% labeled (5.3% unlabeled) over PCFG × CCM. We then experimented with adding distributional prototype features as discussed in section 3.2 using a threshold of 0.75 and γ = 0.1. 
We then experimented with adding distributional prototype features as discussed in section 3.2, using a threshold of 0.75 and γ = 0.1. This experiment, PROTO × CCM in figure 4, gave 62.2 labeled F1 (76.5 unlabeled). The error reduction is 26.0% labeled (12.0% unlabeled) over the experiment using prototypes without the similarity features. The overall error reduction from PCFG × CCM is 41.6% (16.7%) in labeled (unlabeled) F1.

7 Error Analysis

The most common type of error by our PROTO × CCM system was due to the binary grammar restriction. For instance, common NPs such as DT JJ NN are analyzed as [NP DT [NP JJ NN]], which proposes additional N constituents compared to the flatter treebank analysis. This discrepancy greatly, and perhaps unfairly, damages NP precision (see figure 6). However, this error is unavoidable given our grammar restriction.

[Figure 5 shows three parse trees of the sentence "France can boast the lion's share of high-priced bottles"; the tree diagrams themselves are not reproduced here.]

Figure 5: Examples of corrections from adding VP-INF and NP-POS prototype categories. The tree in (a) is the Treebank parse, (b) is the parse with the PROTO × CCM model, and (c) is the parse with the BEST model (added prototype categories), which fixes the possessive NP and infinitival VP problems, but not the PP attachment.

Figure 5(b) demonstrates three other errors. Possessive NPs are analyzed as [NP NN [PP POS NN]], with the POS element treated as a preposition and the possessed NP as its complement. While labeling the POS NN as a PP is clearly incorrect, placing a constituent over these elements is not unreasonable and in fact has been proposed by some linguists (Abney, 1987). Another type of error, also reported by Klein and Manning (2002), is MD VB groupings in infinitival VPs, a constituent also sometimes argued for by linguists (Halliday, 2004). More seriously, prepositional phrases are almost always attached "high" to the verb for longer NPs.

7.1 Augmenting Prototypes

One of the advantages of the prototype-driven approach over a fully unsupervised approach is the ability to refine or add to the annotation specification if we are not happy with the output of our system. We demonstrate this flexibility by augmenting the prototypes in figure 1 with two new categories, NP-POS and VP-INF, meant to model possessive noun phrases and infinitival verb phrases, which tend to have slightly different distributional properties from normal NPs and VPs. These new sub-categories are used during training and then stripped in post-processing. This prototype list gave 65.1 labeled F1 (78.2 unlabeled). This experiment is labeled BEST in figure 4. Looking at the CFG rules learned in figure 7, we see that the basic structure of the treebank grammar is captured.

Label    Prec.    Rec.    F1
S        79.3     80.0    79.7
NP       49.0     74.4    59.1
VP       80.4     73.3    76.7
PP       45.6     78.6    57.8
QP       36.2     78.8    49.6
ADJP     29.4     33.3    31.2
ADVP     25.0     12.2    16.4

Figure 6: Precision, recall, and F1 for individual phrase types in the BEST model.

Rule                Probability    Rule                Probability
S → NP VP           0.51           VP → VBZ NP         0.20
S → PRP VP          0.13           VP → VBD NP         0.15
S → NNP VP          0.06           VP → VBP NP         0.09
S → NNS VP          0.05           VP → VB NP          0.08
NP → DT NN          0.12           ROOT → S            0.95
NP → NP PP          0.09           ROOT → NP           0.05
NP → NNP NNP        0.09           NP → JJ NN          0.07
PP → IN NP          0.37           QP → CD CD          0.35
PP → CC NP          0.06           QP → CD NN          0.30
PP → TO VP          0.05           QP → QP PP          0.10
PP → TO QP          0.04           QP → QP NNS         0.05
ADJP → RB VBN       0.37           ADVP → RB RB        0.25
ADJP → RB JJ        0.31           ADVP → ADJP PRP     0.15
ADJP → RBR JJ       0.09           ADVP → RB CD        0.10

Figure 7: Top PCFG rules learned by the BEST model.

7.2 Parsing with only the PCFG

In order to judge how well the PCFG component of our model did in isolation, we experimented with training our BEST model with the CCM component, but dropping it at test time. This experiment gave 65.1 labeled F1 (76.8 unlabeled). This demonstrates that while our PCFG performance degrades without the CCM, it can be used on its own with reasonable accuracy.

7.3 Automatically Generated Prototypes

There are two types of bias which enter into the creation of prototype lists. One of them is the bias to choose examples which reflect the annotation semantics we wish our model to have. The second is the iterative change of prototypes in order to maximize F1. Whereas the first is appropriate, indeed the point, the latter is not. In order to guard against the second type of bias, we experimented with automatically generated prototype lists, which would not be possible without labeled data. For each phrase type category, we extracted the three most common yields associated with that category that differed in either first or last POS tag. Repeating our PROTO × CCM experiment with this list yielded 60.9 labeled F1 (76.5 unlabeled), comparable to the performance of our manual prototype list.

7.4 Chinese Grammar Induction

In order to demonstrate that our system is somewhat language independent, we tested our model on CTB-10, the 2,437 sentences of the Chinese Treebank (Ircs, 2002) of length at most 10 after punctuation is stripped. Since the authors have no expertise in Chinese, we automatically extracted prototypes in the same way described in section 7.3. Since we did not have access to a large auxiliary POS-tagged Chinese corpus, our distributional model was built only from the treebank text, and the distributional similarities are presumably degraded relative to the English. Our PCFG × CCM experiment gave 18.0 labeled F1 (43.4 unlabeled). The PROTO × CCM model gave 39.0 labeled F1 (53.2 unlabeled). Presumably with access to more POS-tagged data, and the expertise of a Chinese speaker, our system would see increased performance. It is worth noting that our unlabeled F1 of 53.2 is the best reported from a primarily unsupervised system, the next highest figure being 46.7, reported by Klein and Manning (2004).

8 Conclusion

We have shown that distributional prototype features can allow one to specify a target labeling scheme in a compact and declarative way. These features give substantial error reduction in labeled F1 measure for English and Chinese grammar induction. They also achieve the best reported unlabeled F1 measure (the next highest results being 77.1 and 46.7 for English and Chinese, respectively, from Klein and Manning (2004)). Another positive property of this approach is that it tries to reconcile the success of distributional clustering approaches to grammar induction (Clark, 2001; Klein and Manning, 2002) with the CFG tree models in the supervised literature (Collins, 1999). Most importantly, this is the first work, to the authors' knowledge,
which has learned CFGs in an unsupervised or semi-supervised setting and can parse natural language text with any reasonable accuracy.

Acknowledgments

We would like to thank the anonymous reviewers for their comments. This work is supported by a Microsoft / CITRIS grant and by an equipment donation from Intel.

References

Stephen P. Abney. 1987. The English Noun Phrase in its Sentential Aspect. Ph.D. thesis, MIT.
Glenn Carroll and Eugene Charniak. 1992. Two experiments on learning probabilistic dependency grammars from corpora. Technical Report CS-92-16.
Alexander Clark. 2000. Inducing syntactic categories by context distribution clustering. In CoNLL, pages 91–94, Lisbon, Portugal.
Alexander Clark. 2001. The unsupervised induction of stochastic context-free grammars using distributional clustering. In CoNLL.
Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.
M. A. K. Halliday. 2004. An Introduction to Functional Grammar. Edward Arnold, 2nd edition.
Zellig Harris. 1954. Distributional Structure. University of Chicago Press, Chicago.
Nianwen Xue Ircs. 2002. Building a large-scale annotated Chinese corpus.
Dan Klein and Christopher Manning. 2002. A generative constituent-context model for improved grammar induction. In ACL.
Dan Klein and Christopher Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In ACL.
Dan Klein. 2005. The Unsupervised Learning of Natural Language Structure. Ph.D. thesis, Stanford University.
Karim Lari and Steve Young. 1990. The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech and Language, 2(4):35–56.
Christopher D. Manning and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. The MIT Press.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1994. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330.
Fernando C. N. Pereira and Yves Schabes. 1992. Inside-outside reestimation from partially bracketed corpora. In Meeting of the Association for Computational Linguistics, pages 128–135.
Andrew Radford. 1988. Transformational Grammar. Cambridge University Press, Cambridge.
Hinrich Schütze. 1995. Distributional part-of-speech tagging. In EACL.
Noah A. Smith and Jason Eisner. 2004. Guiding unsupervised grammar induction using contrastive estimation. In Working notes of the IJCAI workshop on Grammatical Inference Applications.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 889–896, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Exploring Correlation of Dependency Relation Paths for Answer Extraction Dan Shen Department of Computational Linguistics Saarland University Saarbruecken, Germany [email protected] Dietrich Klakow Spoken Language Systems Saarland University Saarbruecken, Germany [email protected] Abstract In this paper, we explore correlation of dependency relation paths to rank candidate answers in answer extraction. Using the correlation measure, we compare dependency relations of a candidate answer and mapped question phrases in sentence with the corresponding relations in question. Different from previous studies, we propose an approximate phrase mapping algorithm and incorporate the mapping score into the correlation measure. The correlations are further incorporated into a Maximum Entropy-based ranking model which estimates path weights from training. Experimental results show that our method significantly outperforms state-ofthe-art syntactic relation-based methods by up to 20% in MRR. 1 Introduction Answer Extraction is one of basic modules in open domain Question Answering (QA). It is to further process relevant sentences extracted with Passage / Sentence Retrieval and pinpoint exact answers using more linguistic-motivated analysis. Since QA turns to find exact answers rather than text snippets in recent years, answer extraction becomes more and more crucial. Typically, answer extraction works in the following steps: • Recognize expected answer type of a question. • Annotate relevant sentences with various types of named entities. • Regard the phrases annotated with the expected answer type as candidate answers. • Rank candidate answers. In the above work flow, answer extraction heavily relies on named entity recognition (NER). On one hand, NER reduces the number of candidate answers and eases answer ranking. On the other hand, the errors from NER directly degrade answer extraction performance. To our knowledge, most top ranked QA systems in TREC are supported by effective NER modules which may identify and classify more than 20 types of named entities (NE), such as abbreviation, music, movie, etc. However, developing such named entity recognizer is not trivial. Up to now, we haven’t found any paper relevant to QA-specific NER development. So, it is hard to follow their work. In this paper, we just use a general MUC-based NER, which makes our results reproducible. A general MUC-based NER can’t annotate a large number of NE classes. In this case, all noun phrases in sentences are regarded as candidate answers, which makes candidate answer sets much larger than those filtered by a well developed NER. The larger candidate answer sets result in the more difficult answer extraction. Previous methods working on surface word level, such as density-based ranking and pattern matching, may not perform well. Deeper linguistic analysis has to be conducted. This paper proposes a statistical method which exploring correlation of dependency relation paths to rank candidate answers. It is motivated by the observation that relations between proper answers and question phrases in candidate sentences are always similar to the corresponding relations in question. For example, the question ”What did Alfred Nobel invent?” and the 889 candidate sentence ”... 
in the will of Swedish industrialist Alfred Nobel, who invented dynamite.” For each question, firstly, dependency relation paths are defined and extracted from the question and each of its candidate sentences. Secondly, the paths from the question and the candidate sentence are paired according to question phrase mapping score. Thirdly, correlation between two paths of each pair is calculated by employing Dynamic Time Warping algorithm. The input of the calculation is correlations between dependency relations, which are estimated from a set of training path pairs. Lastly, a Maximum Entropy-based ranking model is proposed to incorporate the path correlations and rank candidate answers. Furthermore, sentence supportive measure are presented according to correlations of relation paths among question phrases. It is applied to re-rank the candidate answers extracted from the different candidate sentences. Considering phrases may provide more accurate information than individual words, we extract dependency relations on phrase level instead of word level. The experiment on TREC questions shows that our method significantly outperforms a densitybased method by 50% in MRR and three stateof-the-art syntactic-based methods by up to 20% in MRR. Furthermore, we classify questions by judging whether NER is used. We investigate how these methods perform on the two question sets. The results indicate that our method achieves better performance than the other syntactic-based methods on both question sets. Especially for more difficult questions, for which NER may not help, our method improves MRR by up to 31%. The paper is organized as follows. Section 2 discusses related work and clarifies what is new in this paper. Section 3 presents relation path correlation in detail. Section 4 and 5 discuss how to incorporate the correlations for answer ranking and re-ranking. Section 6 reports experiment and results. 2 Related Work In recent years’ TREC Evaluation, most top ranked QA systems use syntactic information in answer extraction. Next, we will briefly discuss the main usages. (Kaisser and Becker, 2004) match a question into one of predefined patterns, such as ”When did Jack Welch retire from GE?” to the pattern ”When+did+NP+Verb+NPorPP”. For each question pattern, there is a set of syntactic structures for potential answer. Candidate answers are ranked by matching the syntactic structures. This method worked well on TREC questions. However, it is costing to manually construct question patterns and syntactic structures of the patterns. (Shen et al., 2005) classify question words into four classes target word, head word, subject word and verb. For each class, syntactic relation patterns which contain one question word and one proper answer are automatically extracted and scored from training sentences. Then, candidate answers are ranked by partial matching to the syntactic relation patterns using tree kernel. However, the criterion to classify the question words is not clear in their paper. Proper answers may have absolutely different relations with different subject words in sentences. They don’t consider the corresponding relations in questions. (Tanev et al., 2004; Wu et al., 2005) compare syntactic relations in questions and those in answer sentences. (Tanev et al., 2004) reconstruct a basic syntactic template tree for a question, in which one of the nodes denotes expected answer position. 
Then, answer candidates for this question are ranked by matching sentence syntactic tree to the question template tree. Furthermore, the matching is weighted by lexical variations. (Wu et al., 2005) combine n-gram proximity search and syntactic relation matching. For syntactic relation matching, question tree and sentence subtree around a candidate answer are matched from node to node. Although the above systems apply the different methods to compare relations in question and answer sentences, they follow the same hypothesis that proper answers are more likely to have same relations in question and answer sentences. For example, in question ”Who founded the Black Panthers organization?”, where, the question word ”who” has the dependency relations ”subj” with ”found” and ”subj obj nn” with ”Black Panthers organization”, in sentence ”Hilliard introduced Bobby Seale, who co-founded the Black Panther Party here ...”, the proper answer ”Bobby Seale” has the same relations with most question phrases. These methods achieve high precision, but poor recall due to relation variations. One meaning is often represented as different relation combinations. In the above example, appositive rela890 tion frequently appears in answer sentences, such as ”Black Panther Party co-founder Bobby Seale is ordered bound and gagged ...” and indicates proper answer Bobby Seale although it is asked in different way in the question. (Cui et al., 2004) propose an approximate dependency relation matching method for both passage retrieval and answer extraction. The similarity between two relations is measured by their co-occurrence rather than exact matching. They state that their method effectively overcomes the limitation of the previous exact matching methods. Lastly, they use the sum of similarities of all path pairs to rank candidate answers, which is based on the assumption that all paths have equal weights. However, it might not be true. For example, in question ”What book did Rachel Carson write in 1962?”, the phrase ”Rachel Carson” looks like more important than ”1962” since the former is question topic and the latter is a constraint for expected answer. In addition, lexical variations are not well considered and a weak relation path alignment algorithm is used in their work. Based on the previous works, this paper explores correlation of dependency relation paths between questions and candidate sentences. Dynamic time warping algorithm is adapted to calculate path correlations and approximate phrase mapping is proposed to cope with phrase variations. Finally, maximum entropy-based ranking model is developed to incorporate the correlations and rank candidate answers. 3 Dependency Relation Path Correlation In this section, we discuss how the method performs in detail. 3.1 Dependency Relation Path Extraction We parse questions and candidate sentences with MiniPar (Lin, 1994), a fast and robust parser for grammatical dependency relations. Then, we extract relation paths from dependency trees. Dependency relation path is defined as a structure P =< N1, R, N2 > where, N1, N2 are two phrases and R is a relation sequence R =< r1, ..., ri > in which ri is one of the predefined dependency relations. Totally, there are 42 relations defined in MiniPar. A relation sequence R between two phrases N1, N2 is extracted by traversing from the N1 node to the N2 node in a dependency tree. Q: What book did Rachel Carson write in 1962? 
Paths for Answer Ranking N1 (EAP) R N2 What det book What det obj subj Rachel Carson What det obj write What det obj mod pcomp-n 1962 Paths for Answer Re-ranking book obj subj Rachel Carson book obj write book obj mod pcomp-n 1962 ... S: Rachel Carson ’s 1962 book " Silent Spring " said dieldrin causes mania. Paths for Answer Ranking N1 (CA) R N2 Silent Spring title book Silent Spring title gen Rachel Carson Silent Spring title num 1962 Paths for Answer Re-ranking book gen Rachel Carson book num 1962 ... Figure 1: Relation Paths for sample question and sentence. EAP indicates expected answer position; CA indicates candidate answer For each question, we extract relation paths among noun phrases, main verb and question word. The question word is further replaced with ”EAP”, which indicates the expected answer position. For each candidate sentence, we firstly extract relation paths between answer candidates and mapped question phrases. These paths will be used for answer ranking (Section 4). Secondly, we extract relation paths among mapped question phrases. These paths will be used for answer reranking (Section 5). Question phrase mapping will be discussed in Section 3.4. Figure 1 shows some relation paths extracted for an example question and candidate sentence. Next, the relation paths in a question and each of its candidate sentences are paired according to their phrase similarity. For any two relation path Pi and Pj which are extracted from the question and the candidate sentence respectively, if Sim(Ni1, Nj1) > 0 and Sim(Ni2, Nj2) > 0, Pi and Pj are paired as < Pi, Pj >. The question phrase ”EAP” is mapped to candidate answer phrase in the sentence. The similarity between two 891 Path Pairs for Answer Ranking N1 (EAP / CA) Rq Rs N2 Silent Spring det title book Silent Spring det obj subj title gen Rachel Carson Silent Spring det obj mod pcomp-n title num 1962 Path Pairs for Answer Re-ranking N1 Rq Rs N2 book obj subj gen Rachel Carson book obj mod pcomp-n num 1962 ... Figure 2: Paired Relation Path phrases will be discussed in Section 3.4. Figure 2 further shows the paired relation paths which are presented in Figure 1. 3.2 Dependency Relation Path Correlation Comparing a proper answer and other wrong candidate answers in each sentence, we assume that relation paths between the proper answer and question phrases in the sentence are more correlated to the corresponding paths in question. So, for each path pair < P1, P2 >, we measure the correlation between its two paths P1 and P2. We derive the correlations between paths by adapting dynamic time warping (DTW) algorithm (Rabiner et al., 1978). DTW is to find an optimal alignment between two sequences which maximizes the accumulated correlation between two sequences. A sketch of the adapted algorithm is as follows. Let R1 =< r11, ..., r1n >, (n = 1, ..., N) and R2 =< r21, ..., r2m >, (m = 1, ..., M) denote two relation sequences. R1 and R2 consist of N and M relations respectively. R1(n) = r1n and R2(m) = r2m. Cor(r1, r2) denotes the correlation between two individual relations r1, r2, which is estimated by a statistical model during training (Section 3.3). Given the correlations Cor(r1n, r2m) for each pair of relations (r1n, r2m) within R1 and R2, the goal of DTW is to find a path, m = map(n), which map n onto the corresponding m such that the accumulated correlation Cor∗along the path is maximized. Cor∗= max map(n) ( N X n=1 Cor(R1(n), R2(map(n)) ) A dynamic programming method is used to determine the optimum path map(n). 
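The pairing of question and sentence paths described above, together with the dynamic program whose recursion is spelled out in the next paragraph, can be sketched as follows. The Path tuple, the rel_cor lookup and the phrase_sim function are illustrative assumptions, not the authors' implementation, and the base case of the recursion is our own assumption, since the text only gives the general step.

```python
from collections import namedtuple
from itertools import product

# A dependency relation path <N1, R, N2>: two endpoint phrases and the
# relation sequence between them, e.g. Path("EAP", ("det", "obj"), "write").
Path = namedtuple("Path", ["n1", "rel_seq", "n2"])

def pair_paths(question_paths, sentence_paths, phrase_sim):
    """Pair a question path with a sentence path when both endpoint phrases
    map to each other with positive similarity (EAP is mapped to the
    candidate answer phrase)."""
    pairs = []
    for pq, ps in product(question_paths, sentence_paths):
        s1, s2 = phrase_sim(pq.n1, ps.n1), phrase_sim(pq.n2, ps.n2)
        if s1 > 0 and s2 > 0:
            pairs.append((pq, ps, s1, s2))
    return pairs

def path_correlation(pq, ps, s1, s2, rel_cor):
    """DTW-style correlation of a paired question/sentence path: Cor* is
    accumulated over a monotone alignment of the two relation sequences,
    normalized by the longer sequence and scaled by the two phrase mapping
    scores."""
    R1, R2 = pq.rel_seq, ps.rel_seq
    N, M = len(R1), len(R2)
    if N == 0 or M == 0:
        return 0.0
    A = [[0.0] * M for _ in range(N)]
    for m in range(M):                          # assumed base case
        A[0][m] = rel_cor(R1[0], R2[m])
    for n in range(1, N):
        for m in range(M):
            A[n][m] = rel_cor(R1[n], R2[m]) + max(A[n - 1][q] for q in range(m + 1))
    cor_star = A[N - 1][M - 1]
    return (cor_star / max(N, M)) * s1 * s2
```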
The accumulated correlation Cor_A to any grid point (n, m) can be recursively calculated as

Cor_A(n, m) = Cor(r_{1n}, r_{2m}) + max_{q <= m} Cor_A(n-1, q)

Cor* = Cor_A(N, M)

The overall correlation measure has to be normalized, as longer sequences normally give higher correlation values. So the correlation between two sequences R1 and R2 is calculated as

Cor(R1, R2) = Cor* / max(N, M)

Finally, we define the correlation between two relation paths P1 and P2 as

Cor(P1, P2) = Cor(R1, R2) × Sim(N11, N21) × Sim(N12, N22)

where Sim(N11, N21) and Sim(N12, N22) are the phrase mapping scores used when pairing the two paths, which will be described in Section 3.4. If two phrases are entirely different, Sim(N11, N21) = 0 or Sim(N12, N22) = 0, and the paths are not paired, since then Cor(P1, P2) = 0.

3.3 Relation Correlation Estimation

In the above section, we described how to measure path correlations. The measure requires relation correlations Cor(r1, r2) as input. We apply a statistical method to estimate the relation correlations from a set of training path pairs. The collection of the training data is described in Section 6.1. For each question and its answer sentences in the training data, we extract relation paths between "EAP" and the other phrases in the question, and paths between the proper answer and the mapped question phrases in the sentences. After pairing the question paths and the corresponding sentence paths, the correlation of two relations is measured by their bipartite co-occurrence in all training path pairs. A mutual information-based measure (Cui et al., 2004) is employed to calculate the relation correlations:

Cor(r_i^Q, r_j^S) = log [ ( \sum_{path pairs} α × δ(r_i^Q, r_j^S) ) / ( f_Q(r_i^Q) × f_S(r_j^S) ) ]

where r_i^Q and r_j^S are two relations in question paths and sentence paths respectively; f_Q(r_i^Q) and f_S(r_j^S) are the numbers of occurrences of r_i^Q in question paths and of r_j^S in sentence paths respectively; δ(r_i^Q, r_j^S) is 1 when r_i^Q and r_j^S co-occur in a path pair, and 0 otherwise; and α is a factor that discounts the co-occurrence value for long paths, set to the inverse of the sum of the path lengths of the path pair.

3.4 Approximate Question Phrase Mapping

Basic noun phrases (BNPs) and verbs in questions are mapped to their candidate sentences. A BNP is defined as the smallest noun phrase, one with no other noun phrases embedded in it. To address lexical and format variations between phrases, we propose an approximate phrase mapping strategy. A BNP is separated into a set of heads H = {h1, ..., hi} and a set of modifiers M = {m1, ..., mj}. Some heuristic rules are applied to distinguish heads from modifiers:

1. If the BNP is a named entity, all words are heads.
2. The last word of the BNP is a head.
3. The remaining words are modifiers.

The similarity between two BNPs, Sim(BNP_q, BNP_s), is defined as:

Sim(BNP_q, BNP_s) = λ Sim(H_q, H_s) + (1 - λ) Sim(M_q, M_s)

Sim(H_q, H_s) = ( \sum_{hi \in H_q} \sum_{hj \in H_s} Sim(hi, hj) ) / |H_q ∪ H_s|

Sim(M_q, M_s) = ( \sum_{mi \in M_q} \sum_{mj \in M_s} Sim(mi, mj) ) / |M_q ∪ M_s|

Furthermore, the similarity between two heads Sim(hi, hj) is defined as:

• Sim = 1, if hi = hj after stemming;
• Sim = 1, if hi = hj after format alternation;
• Sim = SemSim(hi, hj).

These items consider morphological, format and semantic variations respectively.

1. The morphological variations match words after stemming, such as "Rhodes scholars" and "Rhodes scholarships".
2. The format alternations cope with special characters, such as "-" for "Ice-T" and "Ice T", and "&" for "Abercrombie and Fitch" and "Abercrombie & Fitch".
3. The semantic similarity SemSim(hi, hj) is measured using WordNet and eXtended WordNet.
We use the same semantic path finding algorithm, relation weights and semantic similarity measure as (Moldovan and Novischi, 2002). For efficiency, only hypernym, hyponym and entailment relations are considered and search depth is set to 2 in our experiments. Particularly, the semantic variations are not considered for NE heads and modifiers. Modifier similarity Sim(mi, mj) only consider the morphological and format variations. Moreover, verb similarity measure Sim(v1, v2) is the same as head similarity measure Sim(hi, hj). 4 Candidate Answer Ranking According to path correlations of candidate answers, a Maximum Entropy (ME)-based model is applied to rank candidate answers. Unlike (Cui et al., 2004), who rank candidate answers with the sum of the path correlations, ME model may estimate the optimal weights of the paths based on a training data set. (Berger et al., 1996) gave a good description of ME model. The model we use is similar to (Shen et al., 2005; Ravichandran et al., 2003), which regard answer extraction as a ranking problem instead of a classification problem. We apply Generalized Iterative Scaling for model parameter estimation and Gaussian Prior for smoothing. If expected answer type is unknown during question processing or corresponding type of named entities isn’t recognized in candidate sentences, we regard all basic noun phrases as candidate answers. Since a MUC-based NER loses many types of named entities, we have to handle larger candidate answer sets. Orthographic features, similar to (Shen et al., 2005), are extracted to capture word format information of candidate answers, such as capitalizations, digits and lengths, etc. We expect they may help to judge what proper answers look like since most NER systems work on these features. Next, we will discuss how to incorporate path correlations. Two facts are considered to affect path weights: question phrase type and path length. For each question, we divide question phrases into four types: target, topic, constraint and verb. Target is a kind of word which indicates the expected answer type of the question, such as ”party” in ”What party led Australia from 1983 to 1996?”. Topic is the event/person that the question talks about, such as ”Australia”. Intuitively, it is the most important phrase of the question. Constraint are the other phrases of the question except topic, such as ”1983” and ”1996”. Verb is the main verb of the question, such as ”lead”. Furthermore, since shorter path indicates closer relation between two phrases, we discount path correlation in long question path by dividing the correlation by the length of the question path. Lastly, we sum the discounted path correlations for each type of question phrases and fire it as a feature, such as ”Target Cor=c, where c is the correlation value for question target. ME-based ranking model incorporate the orthographic and path 893 correlation features to rank candidate answers for each of candidate sentences. 5 Candidate Answer Re-ranking After ranking candidate answers, we select the highest ranked one from each candidate sentence. In this section, we are to re-rank them according to sentence supportive degree. We assume that a candidate sentence supports an answer if relations between mapped question phrases in the candidate sentence are similar to the corresponding ones in question. Relation paths between any two question phrases are extracted and paired. Then, correlation of each pair is calculated. 
Re-rank formula is defined as follows: Score(answer) = α × X i Cor(Pi1, Pi2) where, α is answer ranking score. It is the normalized prediction value of the ME-based ranking model described in Section 4. P i Cor(Pi1, Pi2) is the sum of correlations of all path pairs. Finally, the answer with the highest score is returned. 6 Experiments In this section, we set up experiments on TREC factoid questions and report evaluation results. 6.1 Experiment Setup The goal of answer extraction is to identify exact answers from given candidate sentence collections for questions. The candidate sentences are regarded as the most relevant sentences to the questions and retrieved by IR techniques. Qualities of the candidate sentences have a strong impact on answer extraction. It is meaningless to evaluate the questions of which none candidate sentences contain proper answer in answer extraction experiment. To our knowledge, most of current QA systems lose about half of questions in sentence retrieval stage. To make more questions evaluated in our experiments, for each of questions, we automatically build a candidate sentence set from TREC judgements rather than use sentence retrieval output. We use TREC99-03 questions for training and TREC04 questions for testing. As to build training data, we retrieve all of the sentences which contain proper answers from relevant documents according to TREC judgements and answer patterns. Then, We manually check the sentences and remove those in which answers cannot be supported. As to build candidate sentence sets for testing, we retrieve all of the sentences from relevant documents in judgements and keep those which contain at least one question key word. Therefore, each question has at least one proper candidate sentence which contains proper answer in its candidate sentence set. There are 230 factoid questions (27 NIL questions) in TREC04. NIL questions are excluded from our test set because TREC doesn’t supply relevant documents and answer patterns for them. Therefore, we will evaluate 203 TREC04 questions. Five answer extraction methods are evaluated for comparison: • Density: Density-based method is used as baseline, in which we choose candidate answer with the shortest surface distance to question phrases. • SynPattern: Syntactic relation patterns (Shen et al., 2005) are automatically extracted from training set and are partially matched using tree kernel. • StrictMatch: Strict relation matching follows the assumption in (Tanev et al., 2004; Wu et al., 2005). We implement it by adapting relation correlation score. In stead of learning relation correlations during training, we predefine them as: Cor(r1, r2) = 1 if r1 = r2; 0, otherwise. • ApprMatch: Approximate relation matching (Cui et al., 2004) aligns two relation paths using fuzzy matching and ranks candidates according to the sum of all path similarities. • CorME: It is the method proposed in this paper. Different from ApprMatch, ME-based ranking model is implemented to incorporate path correlations which assigns different weights for different paths respectively. Furthermore, phrase mapping score is incorporated into the path correlation measure. These methods are briefly described in Section 2. Performance is evaluated with Mean Reciprocal Rank (MRR). Furthermore, we list percentages of questions correctly answered in terms of top 5 answers and top 1 answer returned respectively. No answer validations are used to adjust answers. 
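Before turning to the results, here is a minimal sketch of the feature construction of section 4 and the sentence-supportive re-ranking of section 5. It reuses the Path tuple from the earlier sketch; the orthographic features shown are illustrative stand-ins and the function names are ours, not the authors' implementation.

```python
def candidate_features(candidate, scored_pairs, phrase_type):
    """Features for one candidate answer: simple orthographic cues plus,
    for each question phrase type, the sum of its path correlations
    discounted by question path length.

    scored_pairs -- list of (question_path, correlation) for paths linking
                    EAP to a question phrase and paired with this candidate
    phrase_type  -- callable mapping a question phrase to 'Target', 'Topic',
                    'Constraint' or 'Verb'
    """
    feats = {
        "is_capitalized": candidate[:1].isupper(),
        "has_digit": any(c.isdigit() for c in candidate),
        "num_tokens": len(candidate.split()),
    }
    totals = {"Target": 0.0, "Topic": 0.0, "Constraint": 0.0, "Verb": 0.0}
    for qpath, cor in scored_pairs:
        qphrase = qpath.n2 if qpath.n1 == "EAP" else qpath.n1
        totals[phrase_type(qphrase)] += cor / max(1, len(qpath.rel_seq))
    for ptype, value in totals.items():
        feats[ptype + "_Cor"] = value
    return feats

def rerank_score(me_score, support_correlations):
    """Section 5 re-ranking: the normalized ME ranking score of the answer,
    scaled by the summed correlations of the paired relation paths between
    mapped question phrases in its sentence."""
    return me_score * sum(support_correlations)
```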
894 Table 1: Overall performance Density SynPattern StrictMatch ApprMatch CorME MRR 0.45 0.56 0.57 0.60 0.67 Top1 0.36 0.53 0.49 0.53 0.62 Top5 0.56 0.60 0.67 0.70 0.74 6.2 Results Table 1 shows the overall performance of the five methods. The main observations from the table are as follows: 1. The methods SynPattern, StrictMatch, ApprMatch and CorME significantly improve MRR by 25.0%, 26.8%, 34.5% and 50.1% over the baseline method Density. The improvements may benefit from the various explorations of syntactic relations. 2. The performance of SynPattern (0.56MRR) and StrictMatch (0.57MRR) are close. SynPattern matches relation sequences of candidate answers with the predefined relation sequences extracted from a training data set, while StrictMatch matches relation sequences of candidate answers with the corresponding relation sequences in questions. But, both of them are based on the assumption that the more number of same relations between two sequences, the more similar the sequences are. Furthermore, since most TREC04 questions only have one or two phrases and many questions have similar expressions, SynPattern and StrictMatch don’t make essential difference. 3. ApprMatch and CorME outperform SynPattern and StrictMatch by about 6.1% and 18.4% improvement in MRR. Strict matching often fails due to various relation representations in syntactic trees. However, such variations of syntactic relations may be captured by ApprMatch and CorME using a MI-based statistical method. 4. CorME achieves the better performance by 11.6% than ApprMatch. The improvement may benefit from two aspects: 1) ApprMatch assigns equal weights to the paths of a candidate answer and question phrases, while CorME estimate the weights according to phrase type and path length. After training a ME model, the weights are assigned, such as 5.72 for topic path ; 3.44 for constraints path and 1.76 for target path. 2) CorME incorporates approximate phrase mapping scores into path correlation measure. We further divide the questions into two classes according to whether NER is used in answer extraction. If the expected answer type of a question is unknown, such as ”How did James Dean die?” or the type cannot be annotated by NER, such as ”What ethnic group/race are Crip members?”, we put the question in Qw/oNE set, otherwise, we put it in QwNE. For the questions in Qw/oNE, we extract all basic noun phrases and verb phrases as candidate answers. Then, answer extraction module has to work on the larger candidate sets. Using a MUC-based NER, the recognized types include person, location, organization, date, time and money. In TREC04 questions, 123 questions are put in QwNE and 80 questions in Qw/oNE. Table 2: Performance on two question sets QwNE and Qw/oNE QwNE Qw/oNE Density 0.66 0.11 SynPattern 0.71 0.36 StrictMatch 0.70 0.36 ApprMatch 0.72 0.42 CorME 0.79 0.47 We evaluate the performance on QwNE and Qw/oNE respectively, as shown in Table 2. The density-based method Density (0.11MRR) loses many questions in Qw/oNE, which indicates that using only surface word information is not sufficient for large candidate answer sets. On the contrary, SynPattern(0.36MRR), StrictPattern(0.36MRR), ApprMatch(0.42MRR) and CorME (0.47MRR) which capture syntactic information, perform much better than Density. Our method CorME outperforms the other syntacticbased methods on both QwNE and Qw/oNE. Es895 pecially for more difficult questions Qw/oNE, the improvements (up to 31% in MRR) are more obvious. 
It indicates that our method can be used to further enhance state-of-the-art QA systems even if they have a good NER. In addition, we evaluate component contributions of our method based on the main idea of relation path correlation. Three components are tested: 1. Appr. Mapping (Section 3.4). We replace approximate question phrase mapping with exact phrase mapping and withdraw the phrase mapping scores from path correlation measure. 2. Answer Ranking (Section 4). Instead of using ME model, we sum all of the path correlations to rank candidate answers, which is similar to (Cui et al., 2004). 3. Answer Re-ranking (Section 5). We disable this component and select top 5 answers according to answer ranking scores. Table 3: Component Contributions MRR Overall 0.67 - Appr. Mapping 0.63 - Answer Ranking 0.62 - Answer Re-ranking 0.66 The contribution of each component is evaluated with the overall performance degradation after it is removed or replaced. Some findings are concluded from Table 3. Performances degrade when replacing approximate phrase mapping or ME-based answer ranking, which indicates that both of them have positive effects on the systems. This may be also used to explain why CorME outperforms ApprMatch in Table 1. However, removing answer re-ranking doesn’t affect much. Since short questions, such as ”What does AARP stand for?”, frequently occur in TREC04, exploring the phrase relations for such questions isn’t helpful. 7 Conclusion In this paper, we propose a relation path correlation-based method to rank candidate answers in answer extraction. We extract and pair relation paths from questions and candidate sentences. Next, we measure the relation path correlation in each pair based on approximate phrase mapping score and relation sequence alignment, which is calculated by DTW algorithm. Lastly, a ME-based ranking model is proposed to incorporate the path correlations and rank candidate answers. The experiment on TREC questions shows that our method significantly outperforms a density-based method by 50% in MRR and three state-of-the-art syntactic-based methods by up to 20% in MRR. Furthermore, the method is especially effective for difficult questions, for which NER may not help. Therefore, it may be used to further enhance state-of-the-art QA systems even if they have a good NER. In the future, we are to further evaluate the method based on the overall performance of a QA system and adapt it to sentence retrieval task. References Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguisitics, 22:39–71. Hang Cui, Keya Li, Renxu Sun, Tat-Seng Chua, and Min-Yen Kan. 2004. National university of singapore at the trec-13 question answering. In Proceedings of TREC2004, NIST. M. Kaisser and T. Becker. 2004. Question answering by searching large corpora with linguistic methods. In Proceedings of TREC2004, NIST. Dekang Lin. 1994. Principar—an efficient, broadcoverage, principle-based parser. In Proceedings of COLING1994, pages 42–488. Dan Moldovan and Adrian Novischi. 2002. Lexical chains for question answering. In Proceedings of COLING2002. L. R. Rabiner, A. E. Rosenberg, and S. E. Levinson. 1978. Considerations in dynamic time warping algorithms for discrete word recognition. In Proceedings of IEEE Transactions on acoustics, speech and signal processing. Deepak Ravichandran, Eduard Hovy, and Franz Josef Och. 2003. Statistical qa - classifier vs. re-ranker: What’s the difference? 
In Proceedings of the ACL 2003 workshop on Multilingual Summarization and Question Answering. Dan Shen, Geert-Jan M. Kruijff, and Dietrich Klakow. 2005. Exploring syntactic relation patterns for question answering. In Proceedings of IJCNLP 2005. H. Tanev, M. Kouylekov, and B. Magnini. 2004. Combining linguistic processing and web mining for question answering: ITC-irst at TREC-2004. In Proceedings of TREC 2004, NIST. M. Wu, M. Y. Duan, S. Shaikh, S. Small, and T. Strzalkowski. 2005. University at Albany's ILQUA in TREC 2005. In Proceedings of TREC 2005, NIST.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 897–904, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Question Answering with Lexical Chains Propagating Verb Arguments Adrian Novischi Dan Moldovan Language Computer Corp. 1701 N. Collins Blvd, Richardson, TX, 75080 adrian,moldovan @languagecomputer.com Abstract This paper describes an algorithm for propagating verb arguments along lexical chains consisting of WordNet relations. The algorithm creates verb argument structures using VerbNet syntactic patterns. In order to increase the coverage, a larger set of verb senses were automatically associated with the existing patterns from VerbNet. The algorithm is used in an in-house Question Answering system for re-ranking the set of candidate answers. Tests on factoid questions from TREC 2004 indicate that the algorithm improved the system performance by 2.4%. 1 Introduction In Question Answering the correct answer can be formulated with different but related words than the question. Connecting the words in the question with the words in the candidate answer is not enough to recognize the correct answer. For example the following question from TREC 2004 (Voorhees, 2004): Q: (boxer Floyd Patterson) Who did he beat to win the title? has the following wrong answer: WA: He saw Ingemar Johanson knock down Floyd Patterson seven times there in winning the heavyweight title. Although the above sentence contains the words Floyd, Patterson, win, title, and the verb beat can be connected to the verb knock down using lexical chains from WordNet, this sentence does not answer the question because the verb arguments are in the wrong position. The proposed answer describes Floyd Patterson as being the object/patient of the beating event while in the question he is the subject/agent of the similar event. Therefore the selection of the correct answer from a list of candidate answers requires the check of additional constraints including the match of verb arguments. Previous approaches to answer ranking, used syntactic partial matching, syntactic and semantic relations and logic forms for selecting the correct answer from a set of candidate answers. Tanev et al. (Tanev et al., 2004) used an algorithm for partial matching of syntactic structures. For lexical variations they used a dependency based thesaurus of similar words (Lin, 1998). Hang et al. (Cui et al., 2004) used an algorithm to compute the similarity between dependency relation paths from a parse tree to rank the candidate answers. In TREC 2005, Ahn et al. (Ahn et al., 2005) used Discourse Representation Structures (DRS) resembling logic forms and semantic relations to represent questions and answers and then computed a score “indicating how well DRSs match each other”. Moldovan and Rus (Moldovan and Rus, 2001) transformed the question and the candidate answers into logic forms and used a logic prover to determine if the candidate answer logic form (ALF) entails the question logic form(QLF). Continuing this work Moldovan et al. (Moldovan et al., 2003) built a logic prover for Question Answering. The logic prover uses a relaxation module that is used iteratively if the proof fails at the price of decreasing the score of the proof. This logic prover was improved with temporal context detection (Moldovan et al., 2005). All these approaches superficially addressed verb lexical variations. 
Similar meanings can be expressed using different verbs that use the same arguments in different positions. For example the sentence: 897 John bought a cowboy hat for $50 can be reformulated as: John paid $50 for a cowboy hat. The verb buy entails the verb pay however the arguments a cowboy hat and $50 have different position around the verb. This paper describes the approach for propagating the arguments from one verb to another using lexical chains derived using WordNet (Miller, 1995). The algorithm uses verb argument structures created from VerbNet syntactic patterns (Kipper et al., 2000b). Section 2 presents VerbNet syntactic patterns and the machine learning approach used to increase the coverage of verb senses. Section 3 describes the algorithms for propagating verb arguments. Section 4 presents the results and the final section 5 draws the conclusions. 2 VerbNet Syntactic Patterns The algorithm for propagating verb arguments uses structures for representing them. Several choices were considered for retrieving verbs’ argument structure. Verb syntactic patterns from WordNet (called frames) could not be used because some tokens in the patterns (like “PP” or “CLAUSE”) cannot be mapped to arguments. FrameNet (Baker et al., 1998) and PropBank (Kingsbury and Palmer, 2002) contain verb syntactic patterns, but they do not have a mapping to WordNet. Finally VerbNet (Kipper et al., 2000b) represents a verb lexicon with syntactic and semantic information. This resource has a mapping to WordNet and therefore was considered the most suitable for propagating predicate arguments along lexical chains. 2.1 VerbNet description VerbNet is based on classes of verbs. Each verb entry points to a set of classes and each class represents a sense of a verb. The classes are organized hierarchically. Each class contains a set of syntactic patterns corresponding to licensed constructions. Each syntactic pattern is an ordered list of tokens and each token represents a group of words. The tokens contain various information and constraints about the word or the group of words they represent. The name of the token can represent the thematic role of an argument, the verb itself, prepositions, adjectives, adverbs or plain words. VerbNet uses 29 thematic roles (presented in taTable 1: VerbNet thematic roles Thematic Roles Topic Experiencer Stimulus Cause Actor Actor1 Actor2 Agent Asset Attribute Benefactor Beneficiary Destination Instrument Location Material Patient Patient1 Patient2 Predicate Product Recipient Source Theme Theme1 Theme2 Time Extent Value ble 1). VerbNet has a static aspect and a dynamic aspect. The static aspect refers to the organization of verb entries. The dynamic aspect refers to the lexicalized trees associated with syntactic patterns. A detailed description of VerbNet dynamic aspect can be found in (Kipper et al., 2000a). The algorithm for propagating predicate arguments uses the syntactic patterns associated with each sensekey. Each class contains a set of WordNet verb sensekeys and a set of syntactic patterns. Therefore, syntactic patterns can be associated with verb sensekey from the same class. Since sensekeys represent word senses in WordNet, each verb synset can be associated with a set of VerbNet syntactic patterns. VerbNet syntactic patterns allow predicate arguments to be propagated along lexical chains. However, not all verb senses in WordNet are listed in VerbNet classes. 
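For the senses that are covered, attaching patterns to sensekeys and synsets is a straightforward join over the class data. A minimal sketch is shown below; it assumes the VerbNet classes have already been loaded into plain dictionaries, and the loader and identifiers are our own, not a specific library API. The senses that are not covered are handled next.

```python
from collections import defaultdict

def patterns_by_synset(verbnet_classes, synset_of):
    """Attach VerbNet syntactic patterns to WordNet senses and synsets:
    every sensekey listed in a class inherits all patterns of that class,
    and a synset collects the patterns of its member sensekeys.

    verbnet_classes -- dict: class_id -> (set of sensekeys, set of patterns)
    synset_of       -- callable: sensekey -> synset identifier
    """
    by_sensekey = defaultdict(set)
    for sensekeys, patterns in verbnet_classes.values():
        for sk in sensekeys:
            by_sensekey[sk].update(patterns)
    by_synset = defaultdict(set)
    for sk, patterns in by_sensekey.items():
        by_synset[synset_of(sk)].update(patterns)
    return by_sensekey, by_synset
```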
For the remaining verb sensekeys that are not listed in VerbNet, syntactic patterns were assigned automatically using machine learning as described in the following section. 2.2 Associating syntactic patterns with new verb senses In order to propagate predicate arguments along lexical chains, ideally every verb in every synonym set has to have a set of syntactic patterns. Only a part of verb senses are listed in VerbNet classes. WordNet 2.0 has 24,632 verb sensekeys, but only 4,983 sensekeys are listed in VerbNet classes. For the rest, syntactic patterns were assigned automatically. In order to assign these syntactic patterns to the verb senses not listed in VerbNet, training examples were needed, both positive and negative. The learning took place for one syntactic pattern at a time. A syntactic pattern can be listed in more than one class. All verb senses associated with a syntactic pattern can be considered positive examples of verbs having that syntactic pattern. For generating negative examples, 898 the following assumption was used: if a verb sense listed in a VerbNet class is not associated with a given syntactic pattern, then that verb sense represents a negative example for that pattern. 352 syntactic patterns were found in all VerbNet classes. A training example was generated for each pair of syntactic patterns and verb sensekeys, resulting in a total number of 1,754,016 training examples. These training examples were used to infer rules that would classify if a verb sense key can be associated with a given syntactic pattern. Training examples were created by using the following features: verb synset semantic category, verb synset position in the IS-A hierarchy, the fact that the verb synset is related to other synsets with CAUSATION relation, the semantic classes of all noun synsets derivationally related with the given verb synset and the WordNet syntactic pattern ids. A machine learning algorithm based on C5.0 (Quinlan, 1998) was run on these training examples. Table 2 presents the performance of the learning algorithm using a 10-fold cross validation for several patterns. A number of 20,759 pairs of verb senses with their syntactic patterns were added to the existing 35,618 pairs in VerbNet. In order to improve the performance of the question answering system, around 100 patterns were manually associated with some verb senses. Table 2: Performance of learning verb senses for several syntactic patterns Id Pattern Performance 0 Agent VERB Theme 74.2% 1 Experiencer VERB Cause 98.6% Experiencer VERB Oblique 2 for Cause 98.7% Experiencer VERB Cause 3 in Oblique 98.7% 4 Agent VERB Recipient 94.7% 5 Agent VERB Patient 85.6% 6 Patient VERB ADV 85.1% ... ... ... Agent VERB Patient 348 at Cause 99.8% Agent VERB in 349 Theme 99.8% Agent VERB Source 350 ADJ 99.5% 351 Agent VERB at Source 99.3% 3 Propagating Verb Arguments Given the argument structure of a verb in a sentence and a lexical chain between this verb and another, the algorithm for propagating verb arguments transforms this structure step by step, for each relation in the lexical chain. During each step the head of the structure changes its value and the arguments can change their position. The arguments change their position in a way that preserves the original meaning as much as possible. The argument structures mirror the syntactic patterns that a verb with a given sense can have. An argument structure contains the type of the pattern, the head and an array of tokens. 
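The next paragraph spells out the pieces of this structure in more detail; as a rough illustration, it could be represented as follows. The field and type names are ours, not the paper's.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

ANY_CONCEPT = "ANY-CONCEPT"   # placeholder value for unfilled arguments

@dataclass
class Concept:
    """A word from the text; lemma and sense are filled only if the word
    is found in WordNet."""
    word: str
    pos: str
    lemma: Optional[str] = None
    wn_sense: Optional[int] = None

@dataclass
class ArgStructure:
    """One instantiated VerbNet-style syntactic pattern."""
    pattern: str                                   # e.g. "Agent VERB Patient"
    head: Concept                                  # the verb
    args: Dict[str, Concept] = field(default_factory=dict)   # role -> filler
```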
Each token represents an argument with a thematic role or an adjective, an adverb, a preposition or just a regular word. The head and the arguments with thematic roles are represented by concepts. A concept is created from a word found in text. If the word is found in WordNet, the concept structure contains its surface form, its lemma, its part of speech and its WordNet sense. If the word is not found in WordNet, its concept structure contains only the word and the part of speech. The value of the field for an argument is represented by the concept that is the head of the phrase representing the argument. Because a synset may contain more than one verb and each verb can have different types of syntactic patterns, propagation of verb arguments along a single relation can result in more than one structure. The output of the algorithm as well as the output of the propagation of each relation in the lexical chain is the set of argument structures with the head being a verb from the set of synonyms of the target synset. For a given relation in the lexical chain, each structure coming from the previous step is transformed into a set of new structures. The relations used and the process of argument propagation is described below. 3.1 Relations used A restricted number of WordNet relations were used for creating lexical chains. Lexical chains between verbs were used for propagating verb arguments, and lexical chains between nouns were used to link semantically related arguments expressed with different words. Between verb synsets the following relations were used: HYPERNYM, TROPONYM, ENTAILMENT and CAUSATION. These relations were selected because they reveal patterns about how they propagate predicate arguments. The HYPERNYMY relation links one specific verb synset to one that is more general. Most of the time, the arguments have the same thematic roles for the two verbs. Sometimes the hypernym 899 synset has a syntactic pattern that has more thematic roles than the syntactic pattern of the start synset. In this case the pattern of the hypernym is not considered for propagation. The HYPONYMY relation is the reverse of HYPERNYMY and links one verb synset to a more specific one. Inference to a more specific verb requires abduction. Most of the time, the arguments have the same thematic roles for the two verbs. Usually the hyponym of the verb synset is more specific and have less syntactic patterns than the original synset. This is why a syntactic pattern of a verb can be linked with the syntactic pattern of its hyponym that has more thematic roles. These additional thematic roles in the syntactic pattern of the hyponym will receive the value ANY-CONCEPT when verb arguments are propagated along this relation. ENTAILMENT relation links two verb synsets that express two different events that are related: the first entails the second. This is different than HYPERNYMY or HYPONYMY that links verbs that express the same event with more or less details. Most of the time the subject of these two sentences has the same thematic role. If the thematic role of subjects is different, then the syntactic pattern of the target verb is not considered for propagation. The same happens if the start pattern contains less arguments than the target pattern. Additional arguments can change the meaning of the target pattern. A relation that is the reverse of the ENTAILMENT is not coded in WordNet but, it is used for a better connectivity. 
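Continuing the relation-by-relation description (the entailment and causation cases, discussed below, follow the same template with extra checks on the subject and object roles), the hypernym and hyponym cases can be sketched with the structures above. Here target_patterns is an assumed list of (pattern string, thematic roles) for the verbs of the target synset.

```python
def propagate_hypernym(src, target_head, target_patterns):
    """HYPERNYM link: roles keep their fillers; a hypernym pattern that
    introduces thematic roles absent from the source pattern is skipped.
    One source structure can yield several target structures."""
    out = []
    for pattern, roles in target_patterns:
        if set(roles) - set(src.args):             # extra roles -> skip pattern
            continue
        out.append(ArgStructure(pattern=pattern, head=target_head,
                                args={r: src.args[r] for r in roles}))
    return out

def propagate_hyponym(src, target_head, target_patterns):
    """HYPONYM link: the more specific verb's pattern may have extra roles;
    those receive the special value ANY-CONCEPT."""
    filler = Concept(word=ANY_CONCEPT, pos="ANY")
    out = []
    for pattern, roles in target_patterns:
        args = {r: src.args.get(r, filler) for r in roles}
        out.append(ArgStructure(pattern=pattern, head=target_head, args=args))
    return out
```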
Given one sentence with a verb that is entailed by a verb , the sentence can be reformulated using the verb , and thus creating sentence . Sentence does not imply sentence but makes it plausible. Most of the time, the subject of these two sentences has the same thematic role. If the thematic role of subjects is different, then the pattern of the target verb synset is not considered for propagation. The same happens if the start pattern has less arguments than the target pattern. Additional arguments can change the meaning of the target pattern. The CAUSATION relation puts certain restrictions on the syntactic patterns of the two verb synsets. The first restriction applies to the syntactic pattern of the start synset: its subject must be an Agent or an Instrument and its object must be a Patient. The second restriction applies to the syntactic pattern of the destination synset: its subject must be a Patient. If the two syntactic patterns obey these restrictions then an instance of the destination synset pattern is created and its arguments will receive the value of the argument with the same thematic role in the pattern belonging to start synset. The reverse of the CAUSATION relation is not codified in WordNet database but it is used in lexical chains to increase the connectivity between synsets. Similar to causation relation, the reverse causation imposes two restrictions on the patterns belonging to the start and destination synset. First restriction applies to the syntactic pattern of the start synset: its subject must have the thematic role of Patient. The second restriction applies to the syntactic pattern of the destination synset: its subject must be an Agent or an Instrument and its object must be a Patient. If the two syntactic patterns obey these restrictions then an instance of the destination synset pattern is created and its arguments will receive the value of the argument with the same thematic role in the pattern belonging to start synset. When deriving lexical chains for linking words from questions and correct answers in TREC 2004, it was observed that many chains contain a pair of DERIVATION relations. Since a pair of DERIVATION relations can link either two noun synsets or two verb synsets, the pair was concatenated into a new relation called SIM DERIV. The number of SIM-DERIV relations is presented in table 3. For example the verb synsets emanate#2 and emit#1 are not synonyms (not listed in the same synset) but they are linked by a SIM-DERIV relation (both have a DERIVATION relation to the noun synset (n-emission#1, emanation#2) - nominalizations of the two verbs are listed in the same synset). There are no restrictions between pairs of patterns that participate in argument propagation. The arguments in the syntactic pattern instance of the destination synset take their values from the arguments with the same thematic roles from the syntactic pattern instance of the start synset. Table 3: The SIM-DERIV relations generated for nouns and verb . Relation Source Target Number SIM-DERIV noun noun 45,178 SIM-DERIV verb verb 15,926 900 The VERBGROUP and SEE-ALSO relations were not included in the experiment because it is not clear how they propagate arguments. A restricted set of instances of DERIVATION relation was used to link verbs to nouns that describe their action. When arguments are propagated from verb to noun, the noun synset will receive a set of syntactic patterns instances similar to the semantic instances of the verb. 
When arguments are propagated from noun to verb, a new created structure for the verb sense takes the values for its arguments from the arguments with similar thematic roles in the noun structure. Between the heads of two argument structures there can exist lexical chains of size 0, meaning that the heads of the two structures are in the same synset. However, the type of the start structure can be different than the type of the target structure. In this case, the arguments still have to be propagated from one structure to another. The arguments in the target structure will take the values of the arguments with the same thematic role in the start structure or the value ANY-CONCEPT if these arguments cannot be found. Relations between nouns were not used by the algorithm but they are used after the algorithm is applied, to link the arguments from a resulted structure to the arguments with the same semantic roles in the target structure. If such a link exists, then the arguments are considered to match. From the existing WordNet relations between noun synsets only HYPERNYM and HYPONYM were used. 3.2 Assigning weights to the relations Two synsets can be connected by a large number of lexical chains. For efficiency, the algorithm runs only on a restricted number of lexical chains. In order to select the most likely lexical chains, they were ordered decreasingly by their weight. The weight of a lexical chain is computed using the following formula inspired by (Moldovan and Novischi, 2002):
where n represents the number of relations in the lexical chain. The formula uses the weights ( "! ) of the relations along the chain (presented in table 4) and coefficients for pairs of relations $# (some of them presented in table 5, the rest having a weight of 1.0). This formula resulted from the observation that the relations are not equal (some relations like HYPERNYMY are stronger than other relations) and that the order of relations in the lexical chain influences its fitness (the order of relations is approximated by the weight given to pairs of relations). The formula uses the “measure of generality” of a concept defined as: %'& ( ) ) ( ) ) #*+,-.0/1/ where 2436587:9<;; represents the number of occurrences of a given concept in WordNet glosses. Table 4: The weight assigned to each relation Relation Weight HYPERNYM 0.8 HYPONYM 0.7 DERIVATION 0.6 ENTAILMENT 0.7 R-ENTAILMENT 0.6 CAUSATION 0.7 R-CAUSATION 0.6 Table 5: Some of the weights assigned to pair of relations Relation 1 Relation 2 Coefficient Weight HYPERNYM HYPONYM 1.25 HYPERNYM ENTAILMENT 1.25 HYPERNYM R-ENTAILMENT 0.8 HYPERNYM CAUSATION 1.25 HYPERNYM R-CAUSATION 1.25 HYPONYM HYPERNYM 0.8 HYPONYM ENTAILMENT 1.25 HYPONYM R-ENTAILMENT 0.8 HYPONYM CAUSATION 1.25 HYPONYM R-CAUSATION 0.8 ENTAILMENT HYPERNYM 1.25 ENTAILMENT HYPONYM 0.8 ENTAILMENT CAUSATION 1.25 ENTAILMENT R-CAUSATION 0.8 R-ENTAILMENT HYPERNYM 0.8 R-ENTAILMENT HYPONYM 0.8 R-ENTAILMENT CAUSATION 0.8 R-ENTAILMENT R-CAUSATION 1.25 CAUSATION HYPERNYM 1.25 CAUSATION HYPONYM 0.8 CAUSATION ENTAILMENT 1.25 CAUSATION R-ENTAILMENT 0.8 3.3 Example In the test set from the QA track in TREC 2004 we found the following question with correct answer: Q 28.2: (Abercrombie & Fitch) When was it established? A: ... Abercrombie & Fitch began life in 1982 ... The verb establish in the question has sense 2 in WordNet 2.0 and the verb begin in the answer 901 has also sense 2. The following lexical chain can be found between these two verbs: (v-begin#2,start#4) R-CAUSATION (v-begin#3,lead off#2,start#2,commence#2) SIM-DERIV (v-establish#2,found#1) From the question, an argument structure is created for the verb establish#2 using the following pattern: Agent establish#2 Patient where the argument with the thematic role of Agent has the value ANY-CONCEPT, and the Patient argument has the value Abercrombie & Fitch. From the answer, an argument structure is created for verb begin#2 using the pattern: Patient begin#2 Theme where the Patient argument has the value Abercrombie & Fitch and the Theme argument has the value n-life#2. This structure is propagated along the lexical chain, each relation at a time. First for the R-CAUSATION relation links the verb begin#2 having the pattern: Patient Verb Theme with the verb begin#3 that has the pattern: Agent begin#3 Patient The Patient keeps its value Abercrombie &Fitch event though it is changing its syntactic role from subject of the verb begin#2 to the object of the verb begin#3. The Theme argument is lost along this relation, instead the new argument with the thematic role of Agent receives the special value ANY-CONCEPT. The second relation in the chain, SIM-DERIV links two verbs that have the same syntactic pattern: Agent Verb Patient Therefore a new structure is created for the verb establish#2 using this pattern and its arguments take their values from the similar arguments in the argument structure for verb begin#3. 
This new structure exactly matches the argument structure from the question therefore the answer is ranked the highest in the set of candidate answer. Figure 1 illustrates the argument propagation process for this example. 4 Experiments and Results The algorithm for propagating verb arguments was used to improve performance of an in-house Question Answering system (Moldovan et al., 2004). This improvement comes from a better matching between a question and the sentences containing the correct answer. Integration of this algorithm into the Question Answering system requires 3 steps: (1) creation of structures containing verb arguments for the questions and its possible answers, (2) derivation of lexical chains between the two structures and propagation of the arguments along lexical chains, (3) measuring the similarity between the propagated structures and the structures from the question and re-ranking of the candidate answers based on similarity scores. Structures containing predicate arguments are created for all the verbs in the question and all verbs in each possible answer. The QA system takes care of coreference resolution. Argument structures are created for verbs in both active and passive voice. If the verb is in passive voice, then its arguments are normalized to active voice. The subject phrase of the verb in passive voice represents its object and the noun phrase inside prepositional phrase with preposition “by” becomes its subject. Special attention is given to di-transitive verbs. If in passive voice, the subject phrase can represent either the direct object or indirect object. The distinction is made in the following way: if the verb in passive voice has a direct object then the subject represents the indirect object (beneficiary), otherwise the subject represents direct object. All the other arguments are treated in the same way as in the active voice case. After the structures are created from a candidate answer and a question, lexical chains are created between their heads. Because lexical chains link two word senses, the heads need to be disambiguated. Before searching for lexical chains, the heads could be already partially disambiguated, because only a restricted number of senses of the head verb can have the VerbNet syntactic pattern matching the input text. An additional semantic disambiguation can take place before deriving lexical chains. The verbs from the answer and question can also be disambiguated by selecting the best lexical chain between them. This was the approach used in our experiment. The algorithm propagating verb arguments was tested on a set of 106 pairs of phrases with similar meaning for which argument structures could be built. These phrases were selected from pairs of questions and their correct answers from the 902 v-begin#2 Abercrombie & Fitch n-life#1 v-begin#3 ANY-CONCEPT AberCrombie & Fitch v-establish#2 ANY-CONCEPT Abercrombie & Fitch R-CAUSE Patient Theme Agent Agent SIM-DERIV Patient Patient v-establish#2 ANY-CONCEPT Abercrombie & Fitch Agent Patient A: ... Abercrombie & Fitch began life in 1982 Q 28.2 (Abercrombie & Fitch) When was it established? Figure 1: Example of lexical chain that propagates syntactic constraints from answer to question. set of factoid questions in TREC 2004 and also from the pairs of scenarios and hypotheses from first edition of PASCAL RTE Challenge (Dagan et al., 2005). Table 6 shows algorithm performance. 
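The passive-voice normalization just described can be sketched as follows; the frame representation and the example sentence are our own simplification, not the system's internal format.

```python
def normalize_passive(frame):
    """Map a passive-voice argument frame to active-voice positions.
    frame: {'subject': ..., 'object': ..., 'by_pp': ..., 'ditransitive': bool}"""
    active = {}
    if frame.get("by_pp") is not None:
        active["subject"] = frame["by_pp"]       # NP of the "by" phrase becomes the subject
    if frame.get("ditransitive") and frame.get("object") is not None:
        # passive di-transitive with a direct object present: the surface
        # subject is the indirect object (beneficiary)
        active["indirect_object"] = frame["subject"]
        active["object"] = frame["object"]
    else:
        active["object"] = frame["subject"]      # surface subject is the direct object
    return active

# Hypothetical example: "The law was passed by Congress."
print(normalize_passive({"subject": "the law", "object": None,
                         "by_pp": "Congress", "ditransitive": False}))
# {'subject': 'Congress', 'object': 'the law'}
```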
The columns of Table 6 correspond to the following cases: a) in how many cases the algorithm propagated all the arguments; b) in how many cases the algorithm propagated at least one argument; c) in how many cases the algorithm did not propagate any argument; using the top 5, 10, and 20 lexical chains.

Table 6: The performance of the algorithm for propagating predicate arguments with semantic constraints

  Arguments propagated        Top 5 chains   Top 10 chains   Top 20 chains
  a) all arguments            23 (21.6%)     28 (26.4%)      32 (30.2%)
  b) at least one argument    73 (68.8%)     81 (76.4%)      89 (83.9%)
  c) no arguments             32 (30.2%)     25 (23.6%)      17 (16.0%)

The purpose of the algorithm for propagating predicate arguments is to measure the similarity between the sentences for which the argument structures have been built. This similarity can be computed by comparing the target argument structure with the propagated argument structure. The similarity score is computed in the following way: if N represents the number of arguments in a pattern, each matched argument contributes 1/(N+1) to the score, except for the subject, which contributes 2/(N+1) when matched. The propagated pattern is compared with the target pattern and the score is computed by summing up the contributions of all matched arguments. The set of factoid questions in TREC 2004 has 230 questions. Lexical chains containing the restricted set of relations that propagate verb arguments were found for 33 questions, linking verbs in those questions to verbs in their correct answers. This is the maximum number of questions on which the algorithm for propagating syntactic constraints can have an impact without using other knowledge. The algorithm for propagating verb arguments could be applied to 15 of these questions. Table 7 shows the improvement of the Question Answering system when the first 20 or 50 answers returned by the factoid strategy are re-ranked according to the similarity scores between argument structures. The performance of the question answering system was measured using Mean Reciprocal Rank (MRR).

Table 7: The impact of the algorithm for propagating predicate arguments over the question answering system

  Number of answers   Performance
  Top 20              1.9%
  Top 50              2.4%

5 Conclusion

This paper describes the approach of propagating verb arguments along lexical chains with WordNet relations using VerbNet frames. Since VerbNet frames are not associated with all verb senses from WordNet, some verb senses were added automatically to the existing VerbNet frames. The algorithm was used to improve the performance of the answer-ranking stage in a Question Answering system. Only a restricted set of WordNet semantic relations was used to propagate predicate arguments. Lexical chains were also derived between the arguments for a better match. On the set of factoid questions from TREC 2004, it was found that for 33 (14.3%) questions, the words in the question and the related words in the answer could be linked using lexical chains containing only the relations from the restricted set that propagate verb arguments. Overall, the algorithm for propagating verb arguments improved the system performance by 2.4%.

References

Kisuh Ahn, Johan Bos, James R. Curran, Dave Kor, Malvina Nissim, and Bonnie Webber. 2005. Question Answering with QED at TREC-2005. In Proceedings of TREC 2005.

Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of the COLING-ACL, Montreal, Canada.

Hang Cui, Keya Li, Renxu Sun, Tat-Seng Chua, and Min-Yen Kan. 2004.
National University of Singapore at the TREC-13 Question Answering Main Task. In Proceedings of the 13th Text Retrieval Conference (TREC-2004), Gaithersburg, Maryland, USA, November 16-19. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. Recognising Textual Entailment Challenge, http://www.pascal-network.org/Challenges/RTE, March. Paul Kingsbury and Martha Palmer. 2002. From Treebank to PropBank. In Proceedings of the 3rd International Conference on Language Resources and Evaluation (LREC-2002), Las Palmas, Spain. K. Kipper, H. Dang, W. Schuler, and M. Palmer. 2000a. Building a class-based verb lexicon using tags. In Proceedings of Fifth TAG+ Workshop. Karin Kipper, Hoa Trang Dang, and Martha Palmer. 2000b. Class-based construction of a verb lexicon. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 691–696.AAAI Press / The MIT Press. D. Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL-98, Montreal, Canada, August. G. Miller. 1995. WordNet: a lexical database. Communications of the ACM, 38(11):39–41, November. Dan Moldovan and Adrian Novischi. 2002. Lexical chains for question answering. In Proceedings of COLING 2002, pages 674–680. Dan I. Moldovan and Vasile Rus. 2001. Logic Form Transformation of WordNet and its Applicability to Question Answering. In Proceedings of the ACL 2001, Toulouse, France, July. Dan I. Moldovan, Christine Clark, Sanda M. Harabagiu, and Steven J. Maiorano. 2003. Cogex: A logic prover for question answering. In Proceedings of HLT-NAACL 2003, Edmonton, Canada, May-June. Dan Moldovan, Sanda Harabagiu, Christine Clark, and Mitchell Bowden. 2004. PowerAnswer 2: Experiments and Analysis over TREC 2004. In Proceedings of Text Retrieval Conference 2004. Dan Moldovan, Christine Clark, and Sanda Harabagiu. 2005. Temporal Context Representation and Reasoning. In Proceedings of IJCAI-2005, pages 1099– 1104, Edinburgh, Scotland, July-August. R. Quinlan. 1998. C5.0: An Informal Tutorial, RuleQuest. H. Tanev, M. Kouylekov, and B. Magnini. 2004. Combining linguistic processing and web mining for question qnswering: Itc-irst at trec 2004. In Proceedings of the 13th Text Retrieval Conference (TREC-2004), pages 429–438, Gaithersburg, Maryland, USA, November 16-19. Ellen M. Voorhees. 2004. Overview of the TREC 2004 Question Answering Track. In Proceedings of the 13th Text Retrieval Conference (TREC-2004), pages 83–105, Gaithersburg, Maryland, USA, November 16-19. 904 | 2006 | 113 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 905–912, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Methods for Using Textual Entailment in Open-Domain Question Answering Sanda Harabagiu and Andrew Hickl Language Computer Corporation 1701 North Collins Boulevard Richardson, Texas 75080 USA [email protected] Abstract Work on the semantics of questions has argued that the relation between a question and its answer(s) can be cast in terms of logical entailment. In this paper, we demonstrate how computational systems designed to recognize textual entailment can be used to enhance the accuracy of current open-domain automatic question answering (Q/A) systems. In our experiments, we show that when textual entailment information is used to either filter or rank answers returned by a Q/A system, accuracy can be increased by as much as 20% overall. 1 Introduction Open-Domain Question Answering (Q/A) systems return a textual expression, identified from a vast document collection, as a response to a question asked in natural language. In the quest for producing accurate answers, the open-domain Q/A problem has been cast as: (1) a pipeline of linguistic processes pertaining to the processing of questions, relevant passages and candidate answers, interconnected by several types of lexicosemantic feedback (cf. (Harabagiu et al., 2001; Moldovan et al., 2002)); (2) a combination of language processes that transform questions and candidate answers in logic representations such that reasoning systems can select the correct answer based on their proofs (cf. (Moldovan et al., 2003)); (3) a noisy-channel model which selects the most likely answer to a question (cf. (Echihabi and Marcu, 2003)); or (4) a constraint satisfaction problem, where sets of auxiliary questions are used to provide more information and better constrain the answers to individual questions (cf. (Prager et al., 2004)). While different in their approach, each of these frameworks seeks to approximate the forms of semantic inference that will allow them to identify valid textual answers to natural language questions. Recently, the task of automatically recognizing one form of semantic inference – textual entailment – has received much attention from groups participating in the 2005 and 2006 PASCAL Recognizing Textual Entailment (RTE) Challenges (Dagan et al., 2005). 1 As currently defined, the RTE task requires systems to determine whether, given two text fragments, the meaning of one text could be reasonably inferred, or textually entailed, from the meaning of the other text. We believe that systems developed specifically for this task can provide current question-answering systems with valuable semantic information that can be leveraged to identify exact answers from ranked lists of candidate answers. By replacing the pairs of texts evaluated in the RTE Challenge with combinations of questions and candidate answers, we expect that textual entailment could provide yet another mechanism for approximating the types of inference needed in order answer questions accurately. In this paper, we present three different methods for incorporating systems for textual entailment into the traditional Q/A architecture employed by many current systems. 
Our experimental results indicate that (even at their current level of performance) textual entailment systems can substantially improve the accuracy of Q/A, even when no other form of semantic inference is employed. The remainder of the paper is organized as fol1http://www.pascal-network.org/Challenges/RTE 905 Processing Question Module (QP) Passage Retrieval Module (PR) Answer Type Expected Keywords Module Answer Processing (AP) Ranked List of Answers TEXTUAL ENTAILMENT Method 1 TEXTUAL ENTAILMENT Method 2 List of Questions Generation AUTO−QUAB Ranked List of Paragraphs TEXTUAL ENTAILMENT Method 3 Entailed Questions Entailed Paragraphs List of Entailed Paragraphs Question Documents Answers Answers−M1 Answers−M2 Answers−M3 QUESTION ANSWERING SYSTEM Figure 1: Integrating Textual Entailment in Q/A. lows. Section 2 describes the three methods of using textual entailment in open-domain question answering that we have identified, while Section 3 presents the textual entailment system we have used. Section 4 details our experimental methods and our evaluation results. Finally, Section 5 provides a discussion of our findings, and Section 6 summarizes our conclusions. 2 Integrating Textual Entailment in Question Answering In this section, we describe three different methods for integrating a textual entailment (TE) system into the architecture of an open-domain Q/A system. Work on the semantics of questions (Groenendijk, 1999; Lewis, 1988) has argued that the formal answerhood relation found between a question and a set of (correct) answers can be cast in terms of logical entailment. Under these approaches (referred to as licensing by (Groenendijk, 1999) and aboutness by (Lewis, 1988)), p is considered to be an answer to a question ?q iff ?q logically entails the set of worlds in which p is true(i.e. ?p). While the notion of textual entailment has been defined far less rigorously than logical entailment, we believe that the recognition of textual entailment between a question and a set of candidate answers – or between a question and questions generated from answers – can enable Q/A systems to identify correct answers with greater precision than current keyword- or pattern-based techniques. As illustrated in Figure 1, most open-domain Q/A systems generally consist of a sequence of three modules: (1) a question processing (QP) module; (2) a passage retrieval (PR) module; and (3) an answer processing (AP) module. Questions are first submitted to a QP module, which extracts a set of relevant keywords from the text of the question and identifies the question’s expected answer type (EAT). Keywords – along with the question’s EAT – are then used by a PR module to retrieve a ranked list of paragraphs which may contain answers to the question. These paragraphs are then sent to an AP module, which extracts an exact candidate answer from each passage and then ranks each candidate answer according to the likelihood that it is a correct answer to the original question. Method 1. In Method 1, each of a ranked list of answers that do not meet the minimum conditions for TE are removed from consideration and then re-ranked based on the entailment confidence (a real-valued number ranging from 0 to 1) assigned by the TE system to each remaining example. The system then outputs a new set of ranked answers which do not contain any answers that are not entailed by the user’s question. Table 1 provides an example where Method 1 could be used to make the right prediction for a set of answers. 
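To make Method 1 concrete, here is a minimal sketch of the filter-and-re-rank step; it assumes the TE system is exposed as a function entails(question, answer) returning a (judgment, confidence) pair, which is an illustrative interface rather than the authors' API.

```python
def rerank_by_entailment(question, ranked_answers, entails):
    """Method 1 sketch: drop answers not entailed by the question, then re-rank
    the survivors by the confidence returned by the TE classifier."""
    kept = []
    for answer in ranked_answers:
        judgment, confidence = entails(question, answer)
        if judgment:                       # keep only positively entailed answers
            kept.append((confidence, answer))
    kept.sort(key=lambda pair: pair[0], reverse=True)
    return [answer for _, answer in kept]
```

The first element of the returned list becomes the system's new top answer, while answers with negative judgments, such as A2 in Table 1 below, are dropped entirely.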
Even though A1 was ranked in sixth position, the identification of a high-confidence positive entailment enabled it to be returned as the 906 top answer. In contrast, the recognition of a negative entailment for A2 caused this answer to be dropped from consideration altogether. Q1: “What did Peter Minuit buy for the equivalent of $24.00?” Rank1 TE Rank2 Answer Text A1 6th YES (0.89) 1st Everyone knows that, back in 1626, Peter Minuit bought Manhattan from the Indians for $24 worth of trinkets. A2 1st NO (0.81) – In 1626, an enterprising Peter Minuit flagged down some passing locals, plied them with beads, cloth and trinkets worth an estimated $24, and walked away with the whole island. Table 1: Re-ranking of answers by Method 1. Method 2. Since AP is often a resourceintensive process for most Q/A systems, we expect that TE information can be used to limit the number of passages considered during AP. As illustrated in Method 2 in Figure 1, lists of passages retrieved by a PR module can either be ranked (or filtered) using TE information. Once ranking is complete, answer extraction takes place only on the set of entailed passages that the system considers likely to contain a correct answer to the user’s question. Method 3. In previous work (Harabagiu et al., 2005b), we have described techniques that can be used to automatically generate well-formed natural language questions from the text of paragraphs retrieved by a PR module. In our current system, sets of automatically-generated questions (AGQ) are created using a stand-alone AutoQUAB generation module, which assembles question-answer pairs (known as QUABs) from the top-ranked passages returned in response to a question. Table 2 lists some of the questions that this module has produced for the question Q2: “How hot does the inside of an active volcano get?”. Q2: “How hot does the inside of an active volcano get?” A2 Tamagawa University volcano expert Takeyo Kosaka said lava fragments belched out of the mountain on January 31 were as hot as 300 degrees Fahrenheit. The intense heat from a second eruption on Tuesday forced rescue operations to stop after 90 minutes. Because of the high temperatures, the bodies of only five of the volcano’s initial victims were retrieved. Positive Entailment AGQ1 What temperature were the lava fragments belched out of the mountain on January 31? AGQ2 How many degrees Fahrenheit were the lava fragments belched out of the mountain on January 31? Negative Entailment AGQ3 When did rescue operations have to stop? AGQ4 How many bodies of the volcano’s initial victims were retrieved? Table 2: TE between AGQs and user question. Following (Groenendijk, 1999), we expect that if a question ?q logically entails another question ?q′, then some subset of the answers entailed by ?q′ should also be interpreted as valid answers to ?q. By establishing TE between a question and AGQs derived from passages identified by the Q/A system for that question, we expect we can identify a set of answer passages that contain correct answers to the original question. For example, in Table 2, we find that entailment between questions indicates the correctness of a candidate answer: here, establishing that Q2 entails AGQ1 and AGQ2 (but not AGQ3 or AGQ4) enables the system to select A2 as the correct answer. 
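A similarly minimal sketch of the Method 3 selection policy is given below, again assuming an illustrative entails(question, agq) interface and a list of (generated question, source passage) pairs from AutoQUAB; it is not the authors' code.

```python
def rank_passages_by_agq_entailment(question, agqs, entails):
    """Method 3 sketch.  agqs: (generated_question, source_passage) pairs from
    AutoQUAB; entails(q, agq) -> (judgment, confidence) from the TE system."""
    scored = []
    for agq, passage in agqs:
        judgment, confidence = entails(question, agq)
        scored.append((judgment, confidence, passage))
    entailed = [item for item in scored if item[0]]
    pool = entailed if entailed else scored    # filter only when something is entailed
    pool.sort(key=lambda item: item[1], reverse=True)
    return [passage for _, _, passage in pool]
```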
When at least one of the AGQs generated by the AutoQUAB module is entailed by the original question, all AGQs that do not reach TE are filtered from consideration; remaining passages are assigned an entailment confidence score and are sent to the AP module in order to provide an exact answer to the question. Following this process, candidate answers extracted from the AP module were then re-associated with their AGQs and resubmitted to the TE system (as in Method 1). Question-answer pairs deemed to be positive instances of entailment were then stored in a database and used as additional training data for the AutoQUAB module. When no AGQs were found to be entailed by the original question, however, passages were ranked according to their entailment confidence and sent to AP for further processing and validation. 3 The Textual Entailment System Processing textual entailment, or recognizing whether the information expressed in a text can be inferred from the information expressed in another text, can be performed in four ways. We can try to (1) derive linguistic information from the pair of texts, and cast the inference recognition as a classification problem; or (2) evaluate the probability that an entailment can exist between the two texts; (3) represent the knowledge from the pair of texts in some representation language that can be associated with an inferential mechanism; or (4) use the classical AI definition of entailment and build models of the world in which the two texts are respectively true, and then check whether the models associated with one text are included in the models associated with the other text. Although we believe that each of these methods should be investigated fully, we decided to focus only on the first method, which allowed us to build the TE system illustrated in Figure 2. Our TE system consists of (1) a Preprocessing Module, which derives linguistic knowledge from the text pair; (2) an Alignment Module, which takes advantage of the notions of lexical alignment 907 Classifier YES NO Textual Input 2 Textual Input 1 Preprocessing Training Corpora Features Alignment Dependency Features Paraphrase Features Semantic/ Pragmatic Features Coreference Coreference NE Aliasing Concept Paraphrase Acquisition WWW Lexical Alignment Alignment Module Feature Extraction Classification Module Lexico−Semantic PoS/ NER Synonyms/ Antonyms Normalization Syntactic Semantic Temporal Parsing Modality Detection Speech Act Recognition Pragmatics Factivity Detection Belief Recognition Figure 2: Textual Entailment Architecture. and textual paraphrases; and (3) a Classification Module, which uses a machine learning classifier (based on decision trees) to make an entailment judgment for each pair of texts. As described in (Hickl et al., 2006), the Preprocessing module is used to syntactically parse texts, identify the semantic dependencies of predicates, label named entities, normalize temporal and spatial expressions, resolve instances of coreference, and annotate predicates with polarity, tense, and modality information. Following preprocessing, texts are sent to an Alignment Module which uses a Maximum Entropy-based classifier in order to estimate the probability that pairs of constituents selected from texts encode corresponding information that could be used to inform an entailment judgment. 
This module assumes that since sets of entailing texts necessarily predicate about the same set of individuals or events, systems should be able to identify elements from each text that convey similar types of presuppositions. Examples of predicates and arguments aligned by this module are presented in Figure 3. Pred: Pred: ArgM−LOC the inside of an active volcano an active volcano How hot the mountain the lava fragments Original Question Auto−QUAB What temperature get hot be temperature Arg1 Answer Type Arg1 Figure 3: Alignment Graph Aligned constituents are then used to extract sets of phrase-level alternations (or “paraphrases”) from the WWW that could be used to capture correspondences between texts longer than individual constituents. The top 8 candidate paraphrases for two of the aligned elements from Figure 3 are presented in Table 3. Finally, the Classification Module employs a Judgment Paraphrase YES lava fragments in pyroclastic flows can reach 400 degrees YES an active volcano can get up to 2000 degrees NO an active volcano above you are slopes of 30 degrees YES the active volcano with steam reaching 80 degrees YES lava fragments such as cinders may still be as hot as 300 degrees NO lava is a liquid at high temperature: typically from 700 degrees Table 3: Phrase-Level Alternations decision tree classifier in order to determine whether an entailment relationship exists for each pair of texts. This classifier is learned using features extracted from the previous modules, including features derived from (1) the (lexical) alignment of the texts, (2) syntactic and semantic dependencies discovered in each text passage, (3) paraphrases derived from web documents, and (4) semantic and pragmatic annotations. (A complete list of features can be found in Figure 4.) Based on these features, the classifier outputs both an entailment judgment (either yes or no) and a confidence value, which is used to rank answers or paragraphs in the architecture illustrated in Figure 1. 3.1 Lexical Alignment Several approaches to the RTE task have argued that the recognition of textual entailment can be enhanced when systems are able to identify – or align – corresponding entities, predicates, or phrases found in a pair of texts. In this section, we show that by using a machine learning-based classifier which combines lexico-semantic information from a wide range of sources, we are able to accurately identify aligned constituents in pairs of texts with over 90% accuracy. We believe the alignment of corresponding entities can be cast as a classification problem which uses lexico-semantic features in order to compute an alignment probability p(a), which corresponds to the likelihood that a term selected from one text entails a term from another text. We used constituency information from a chunk parser to decompose the pair of texts into a set of disjoint seg908 ALIGNMENT FEATURES: These three features are derived from the results of the lexical alignment classification. ⋄1⋄LONGEST COMMON STRING: This feature represents the longest contiguous string common to both texts. ⋄2⋄UNALIGNED CHUNK: This feature represents the number of chunks in one text that are not aligned with a chunk from the other ⋄3⋄LEXICAL ENTAILMENT PROBABILITY: This feature is defined in (Glickman and Dagan, 2005). DEPENDENCY FEATURES: These four features are computed from the PropBank-style annotations assigned by the semantic parser. 
⋄1⋄ENTITY-ARG MATCH: This is a boolean feature which fires when aligned entities were assigned the same argument role label. ⋄2⋄ENTITY-NEAR-ARG MATCH: This feature is collapsing the arguments Arg1 and Arg2 (as well as the ArgM subtypes) into single categories for the purpose of counting matches. ⋄3⋄PREDICATE-ARG MATCH: This boolean feature is flagged when at least two aligned arguments have the same role. ⋄4⋄PREDICATE-NEAR-ARG MATCH: This feature is collapsing the arguments Arg1 and Arg2 (as well as the ArgM subtypes) into single categories for the purpose of counting matches. PARAPHRASE FEATURES: These three features are derived from the paraphrases acquired for each pair. ⋄1⋄SINGLE PATTERN MATCH: This is a boolean feature which fired when a paraphrase matched either of the texts. ⋄2⋄BOTH PATTERN MATCH: This is a boolean feature which fired when paraphrases matched both texts. ⋄3⋄CATEGORY MATCH: This is a boolean feature which fired when paraphrases could be found from the same paraphrase cluster that matched both texts. SEMANTIC/PRAGMATIC FEATURES: These six features are extracted by the preprocessing module. ⋄1⋄NAMED ENTITY CLASS: This feature has a different value for each of the 150 named entity classes. ⋄2⋄TEMPORAL NORMALIZATION: This boolean feature is flagged when the temporal expressions are normalized to the same ISO 9000 equivalents. ⋄3⋄MODALITY MARKER: This boolean feature is flagged when the two texts use the same modal verbs. ⋄4⋄SPEECH-ACT: This boolean feature is flagged when the lexicons indicate the same speech act in both texts. ⋄5⋄FACTIVITY MARKER: This boolean feature is flagged when the factivity markers indicate either TRUE or FALSE in both texts simultaneously. ⋄6⋄BELIEF MARKER: This boolean feature is set when the belief markers indicate either TRUE or FALSE in both texts simultaneously. CONTRAST FEATURES: These six features are derived from the opposing information provided by antonymy relations or chains. ⋄1⋄NUMBER OF LEXICAL ANTONYMY RELATIONS: This feature counts the number of antonyms from WordNet that are discovered between the two texts. ⋄2⋄NUMBER OF ANTONYMY CHAINS: This feature counts the number of antonymy chains that are discovered between the two texts. ⋄3⋄CHAIN LENGTH: This feature represents a vector with the lengths of the antonymy chains discovered between the two texts. ⋄4⋄NUMBER OF GLOSSES: This feature is a vector representing the number of Gloss relations used in each antonymy chain. ⋄5⋄NUMBER OF MORPHOLOGICAL CHANGES: This feature is a vector representing the number of Morphological-Derivation relations found in each antonymy chain. ⋄6⋄NUMBER OF NODES WITH DEPENDENCIES: This feature is a vector indexing the number of nodes in each antonymy chain that contain dependency relations. ⋄7⋄TRUTH-VALUE MISMATCH: This is a boolean feature which fired when two aligned predicates differed in any truth value. ⋄8⋄POLARITY MISMATCH: This is a boolean feature which fired when predicates were assigned opposite polarity values. Figure 4: Features Used in Classifying Entailment ments known as “alignable chunks”. Alignable chunks from one text (Ct) and the other text (Ch) are then assembled into an alignment matrix (Ct× Ch). Each pair of chunks (p ∈Ct × Ch) is then submitted to a Maximum Entropy-based classifier which determines whether or not the pair of chunks represents a case of lexical entailment. Three classes of features were used in the Alignment Classifier: (1) a set of statistical features (e.g. 
cosine similarity), (2) a set of lexicosemantic features (including WordNet Similarity (Pedersen et al., 2004), named entity class equality, and part-of-speech equality), and (3) a set of string-based features (such as Levenshtein edit distance and morphological stem equality). As in (Hickl et al., 2006), we used a twostep approach to obtain sufficient training data for the Alignment Classifier. First, humans were tasked with annotating a total of 10,000 alignment pairs (extracted from the 2006 PASCAL Development Set) as either positive or negative instances of alignment. These annotations were then used to train a hillclimber that was used to annotate a larger set of 450,000 alignment pairs selected at random from the training corpora described in Section 3.3. These machine-annotated examples were then used to train the Maximum Entropy-based classifier that was used in our TE system. Table 4 presents results from TE’s linearand Maximum Entropy-based Alignment Classifiers on a sample of 1000 alignment pairs selected at random from the 2006 PASCAL Test Set. Classifier Training Set Precision Recall F-Measure Linear 10K pairs 0.837 0.774 0.804 Maximum Entropy 10K pairs 0.881 0.851 0.866 Maximum Entropy 450K pairs 0.902 0.944 0.922 Table 4: Performance of Alignment Classifier 3.2 Paraphrase Acquisition Much recent work on automatic paraphrasing (Barzilay and Lee, 2003) has used relatively simple statistical techniques to identify text passages that contain the same information from parallel corpora. Since sentence-level paraphrases are generally assumed to contain information about the same event, these approaches have generally assumed that all of the available paraphrases for a given sentence will include at least one pair of entities which can be used to extract sets of paraphrases from text. The TE system uses a similar approach to gather phrase-level alternations for each entailment pair. In our system, the two highest-confidence entity alignments returned by the Lexical Alignment module were used to construct a query which was used to retrieve the top 500 documents from Google, as well as all matching instances from our training corpora described in Section 3.3. This method did not always extract true paraphrases of either texts. In order increase the likelihood that 909 only true paraphrases were considered as phraselevel alternations for an example, extracted sentences were clustered using complete-link clustering using a technique proposed in (Barzilay and Lee, 2003). 3.3 Creating New Sources of Training Data In order to obtain more training data for our TE system, we extracted more than 200,000 examples of textual entailment from large newswire corpora. Positive Examples. Following an idea proposed in (Burger and Ferro, 2005), we created a corpus of approximately 101,000 textual entailment examples by pairing the headline and first sentence from newswire documents. In order to increase the likelihood of including only positive examples, pairs were filtered that did not share an entity (or an NP) in common between the headline and the first sentence Judgment Example YES Text-1: Sydney newspapers made a secret deal not to report on the fawning and spending during the city’s successful bid for the 2000 Olympics, former Olympics Minister Bruce Baird said today. 
Text-2: Papers Said To Protect Sydney Bid YES Text-1: An IOC member expelled in the Olympic bribery scandal was consistently drunk as he checked out Stockholm’s bid for the 2004 Games and got so offensive that he was thrown out of a dinner party, Swedish officials said. Text-2: Officials Say IOC Member Was Drunk Table 5: Positive Examples Negative Examples. Two approaches were used to gather negative examples for our training set. First, we extracted 98,000 pairs of sequential sentences that included mentions of the same named entity from a large newswire corpus. We also extracted 21,000 pairs of sentences linked by connectives such as even though, in contrast and but. Judgment Example NO Text-1: One player losing a close friend is Japanese pitcher Hideki Irabu, who was befriended by Wells during spring training last year. Text-2: Irabu said he would take Wells out to dinner when the Yankees visit Toronto. NO Text-1: According to the professor, present methods of cleaning up oil slicks are extremely costly and are never completely efficient. Text-2: In contrast, he stressed, Clean Mag has a 100 percent pollution retrieval rate, is low cost and can be recycled. Table 6: Negative Examples 4 Experimental Results In this section, we describe results from four sets of experiments designed to explore how textual entailment information can be used to enhance the quality of automatic Q/A systems. We show that by incorporating features from TE into a Q/A system which employs no other form of textual inference, we can improve accuracy by more than 20% over a baseline. We conducted our evaluations on a set of 500 factoid questions selected randomly from questions previously evaluated during the annual TREC Q/A evaluations. 2 Of these 500 questions, 335 (67.0%) were automatically assigned an answer type from our system’s answer type hierarchy ; the remaining 165 (33.0%) questions were classified as having an unknown answer type. In order to provide a baseline for our experiments, we ran a version of our Q/A system, known as FERRET (Harabagiu et al., 2005a), that does not make use of textual entailment information when identifying answers to questions. Results from this baseline are presented in Table 7. Question Set Questions Correct Accuracy MRR Known Answer Types 335 107 32.0% 0.3001 Unknown Answer Types 265 81 30.6% 0.2987 Table 7: Q/A Accuracy without TE The performance of the TE system described in Section 3 was first evaluated in the 2006 PASCAL RTE Challenge. In this task, systems were tasked with determining whether the meaning of a sentence (referred to as a hypothesis) could be reasonably inferred from the meaning of another sentence (known as a text). Four types of sentence pairs were evaluated in the 2006 RTE Challenge, including: pairs derived from the output of (1) automatic question-answering (QA) systems, (2) information extraction systems (IE), (3) information retrieval (IR) systems, and (4) multidocument summarization (SUM) systems. The accuracy of our TE system across these four tasks is presented in Table 8. Training Data Development Set Additional Corpora Number of Examples 800 201,000 Task QA-test 0.5750 0.6950 IE-test 0.6450 0.7300 IR-test 0.6200 0.7450 SUM-test 0.7700 0.8450 Overall Accuracy 0.6525 0.7538 Table 8: Accuracy on the 2006 RTE Test Set In previous work (Hickl et al., 2006), we have found that the type and amount of training data available to our TE system significantly (p < 0.05) impacted its performance on the 2006 RTE Test Set. 
When our system was trained on the training corpora described in Section 3.3, the overall accuracy of the system increased by more than 10%, 2Text Retrieval Conference (http://trec.nist.gov) 910 from 65.25% to 75.38%. In order to provide training data that replicated the task of recognizing entailment between a question and an answer, we assembled a corpus of 5000 question-answer pairs selected from answers that our baseline Q/A system returned in response to a new set of 1000 questions selected from the TREC test sets. 2500 positive training examples were created from answers identified by human annotators to be correct answers to a question, while 2500 negative examples were created by pairing questions with incorrect answers returned by the Q/A system. After training our TE system on this corpus, we performed the following four experiments: Method 1. In the first experiment, the ranked lists of answers produced by the Q/A system were submitted to the TE system for validation. Under this method, answers that were not entailed by the question were removed from consideration; the top-ranked entailed answer was then returned as the system’s answer to the question. Results from this method are presented in Table 9. Method 2. In this experiment, entailment information was used to rank passages returned by the PR module. After an initial relevance ranking was determined from the PR engine, the top 50 passages were paired with the original question and were submitted to the TE system. Passages were re-ranked using the entailment judgment and the entailment confidence computed for each pair and then submitted to the AP module. Features derived from the entailment confidence were then combined with the keyword- and relation-based features described in (Harabagiu et al., 2005a) in order to produce a final ranking of candidate answers. Results from this method are presented in Table 9. Method 3. In the third experiment, TE was used to select AGQs that were entailed by the question submitted to the Q/A system. Here, AutoQUAB was used to generate questions for the top 50 candidate answers identified by the system. When at least one of the top 50 AGQs were entailed by the original question, the answer passage associated with the top-ranked entailed question was returned as the answer. When none of the top 50 AGQs were entailed by the question, questionanswer pairs were re-ranked based on the entailment confidence, and the top-ranked answer was returned. Results for both of these conditions are presented in Table 9. Hybrid Method. Finally, we found that the best results could be obtained by combining aspects of each of these three strategies. Under this approach, candidate answers were initially ranked using features derived from entailment classifications performed between (1) the original question and each candidate answer and (2) the original question and the AGQ generated from each candidate answer. Once a ranking was established, answers that were not judged to be entailed by the question were also removed from final ranking. Results from this hybrid method are provided in Table 9. 
Known EAT Unknown EAT Acc MRR Acc MRR Baseline 32.0% 0.3001 30.6% 0.2978 Method 1 44.1% 0.4114 39.5% 0.3833 Method 2 52.4% 0.5558 42.7% 0.4135 Method 3 41.5% 0.4257 37.5% 0.3575 Hybrid 53.9% 0.5640 41.9% 0.4010 Table 9: Q/A Performance with TE 5 Discussion The experiments reported in this paper suggest that current TE systems may be able to provide open-domain Q/A systems with the forms of semantic inference needed to perform accurate answer validation. While probabilistic or web-based methods for answer validation have been previously explored in the literature (Magnini et al., 2002), these approaches have modeled the relationship between a question and a (correct) answer in terms of relevance and have not tried to approximate the deeper semantic phenomena that are involved in determining answerhood. Our work suggests that considerable gains in performance can be obtained by incorporating TE during both answer processing and passage retrieval. While best results were obtained using the Hybrid Method (which boosted performance by nearly 28% for questions with known EATs), each of the individual methods managed to boost the overall accuracy of the Q/A system by at least 7%. When TE was used to filter non-entailed answers from consideration (Method 1), the overall accuracy of the Q/A system increased by 12% over the baseline (when an EAT could be identified) and by nearly 9% (when no EAT could be identified). In contrast, when entailment information was used to rank passages and candidate answers, performance increased by 22% and 10% respectively. Somewhat smaller performance gains were achieved when TE was used to select 911 amongst AGQs generated by our Q/A system’s AutoQUAB module (Method 3). We expect that by adding features to TE system specifically designed to account for the semantic contributions of a question’s EAT, we may be able to boost the performance of this method. 6 Conclusions In this paper, we discussed three different ways that a state-of-the-art textual entailment system could be used to enhance the performance of an open-domain Q/A system. We have shown that when textual entailment information is used to either filter or rank candidate answers returned by a Q/A system, Q/A accuracy can be improved from 32% to 52% (when an answer type can be detected) and from 30% to 40% (when no answer type can be detected). We believe that these results suggest that current supervised machine learning approaches to the recognition of textual entailment may provide open-domain Q/A systems with the inferential information needed to develop viable answer validation systems. 7 Acknowledgments This material is based upon work funded in whole or in part by the U.S. Government and any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. Government. References Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. In HLT-NAACL. John Burger and Lisa Ferro. 2005. Generating an Entailment Corpus from News Headlines. In Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, pages 49–54. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL Recognizing Textual Entailment Challenge. In Proceedings of the PASCAL Challenges Workshop. Abdessamad Echihabi and Daniel Marcu. 2003. A noisy-channel approach to question answering. 
In Proceedings of the 41st Meeting of the Association for Computational Linguistics. Oren Glickman and Ido Dagan. 2005. A Probabilistic Setting and Lexical Co-occurrence Model for Textual Entailment. In Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, Ann Arbor, USA. Jeroen Groenendijk. 1999. The logic of interrogation: Classical version. In Proceedings of the Ninth Semantics and Linguistics Theory Conference (SALT IX), Ithaca, NY. Sanda Harabagiu, Dan Moldovan, Marius Pasca, Rada Mihalcea, Mihai Surdeanu, Razvan Bunsecu, Roxana Girju, Vasile Rus, and Paul Morarescu. 2001. The Role of Lexico-Semantic Feedback in OpenDomain Textual Question-Answering. In Proceedings of the 39th Meeting of the Association for Computational Linguistics. S. Harabagiu, D. Moldovan, C. Clark, M. Bowden, A. Hickl, and P. Wang. 2005a. Employing Two Question Answering Systems in TREC 2005. In Proceedings of the Fourteenth Text REtrieval Conference. Sanda Harabagiu, Andrew Hickl, John Lehmann, and Dan Moldovan. 2005b. Experiments with Interactive Question-Answering. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05). Andrew Hickl, John Williams, Jeremy Bensley, Kirk Roberts, Bryan Rink, and Ying Shi. 2006. Recognizing Textual Entailment with LCC’s Groundhog System. In Proceedings of the Second PASCAL Challenges Workshop. David Lewis. 1988. Relevant Implication. Theoria, 54(3):161–174. Bernardo Magnini, Matteo Negri, Roberto Prevete, and Hristo Tanev. 2002. Is it the right answer? exploiting web redundancy for answer validation. In Proceedings of the Fortieth Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, PA. Dan Moldovan, Marius Pasca, Sanda Harabagiu, and Mihai Surdeanu. 2002. Performance Issues and Error Analysis in an Open-Domain Question Answering System. In Proceedings of the 4Oth Meeting of the Association for Computational Linguistics. Dan Moldovan, Christine Clark, Sanda Harabagiu, and Steve Maiorano. 2003. COGEX: A Logic Prover for Question Answering. In Proceedings of HLT/NAACL-2003. T. Pedersen, S. Patwardhan, and J. Michelizzi. 2004. WordNet::Similarity - Measuring the Relatedness of Concepts. In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI04), San Jose, CA. John Prager, Jennifer Chu-Carroll, and Krzysztof Czuba. 2004. Question answering using constraint satisfaction: Qa-by-dossier-with-contraints. In Proceedings of the ACL-2004, pages 574–581, Barcelona, Spain, July. 912 | 2006 | 114 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 913–920, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Using String-Kernels for Learning Semantic Parsers Rohit J. Kate Department of Computer Sciences The University of Texas at Austin 1 University Station C0500 Austin, TX 78712-0233, USA [email protected] Raymond J. Mooney Department of Computer Sciences The University of Texas at Austin 1 University Station C0500 Austin, TX 78712-0233, USA [email protected] Abstract We present a new approach for mapping natural language sentences to their formal meaning representations using stringkernel-based classifiers. Our system learns these classifiers for every production in the formal language grammar. Meaning representations for novel natural language sentences are obtained by finding the most probable semantic parse using these string classifiers. Our experiments on two realworld data sets show that this approach compares favorably to other existing systems and is particularly robust to noise. 1 Introduction Computational systems that learn to transform natural language sentences into formal meaning representations have important practical applications in enabling user-friendly natural language communication with computers. However, most of the research in natural language processing (NLP) has been focused on lower-level tasks like syntactic parsing, word-sense disambiguation, information extraction etc. In this paper, we have considered the important task of doing deep semantic parsing to map sentences into their computer-executable meaning representations. Previous work on learning semantic parsers either employ rule-based algorithms (Tang and Mooney, 2001; Kate et al., 2005), or use statistical feature-based methods (Ge and Mooney, 2005; Zettlemoyer and Collins, 2005; Wong and Mooney, 2006). In this paper, we present a novel kernel-based statistical method for learning semantic parsers. Kernel methods (Cristianini and Shawe-Taylor, 2000) are particularly suitable for semantic parsing because it involves mapping phrases of natural language (NL) sentences to semantic concepts in a meaning representation language (MRL). Given that natural languages are so flexible, there are various ways in which one can express the same semantic concept. It is difficult for rule-based methods or even statistical featurebased methods to capture the full range of NL contexts which map to a semantic concept because they tend to enumerate these contexts. In contrast, kernel methods allow a convenient mechanism to implicitly work with a potentially infinite number of features which can robustly capture these range of contexts even when the data is noisy. Our system, KRISP (Kernel-based Robust Interpretation for Semantic Parsing), takes NL sentences paired with their formal meaning representations as training data. The productions of the formal MRL grammar are treated like semantic concepts. For each of these productions, a SupportVector Machine (SVM) (Cristianini and ShaweTaylor, 2000) classifier is trained using string similarity as the kernel (Lodhi et al., 2002). Each classifier then estimates the probability of the production covering different substrings of the sentence. This information is used to compositionally build a complete meaning representation (MR) of the sentence. 
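Since the string kernel is central to KRISP, the following sketch shows a word-level common-subsequence kernel in the spirit of Lodhi et al. (2002): it counts shared word subsequences up to a maximum length and normalizes the result. KRISP's exact formulation (for example, any gap-decay weighting) may differ, so this is illustrative only.

```python
def common_subsequence_counts(s, t, max_len):
    """N[p][i][j] = number of matching subsequence pairs of length p drawn from
    the word lists s[:i] and t[:j]."""
    n, m = len(s), len(t)
    N = [[[0] * (m + 1) for _ in range(n + 1)] for _ in range(max_len + 1)]
    for i in range(n + 1):
        for j in range(m + 1):
            N[0][i][j] = 1                         # the empty subsequence
    for p in range(1, max_len + 1):
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                N[p][i][j] = (N[p][i - 1][j] + N[p][i][j - 1]
                              - N[p][i - 1][j - 1])
                if s[i - 1] == t[j - 1]:
                    N[p][i][j] += N[p - 1][i - 1][j - 1]
    return [N[p][n][m] for p in range(1, max_len + 1)]

def subsequence_kernel(s, t, max_len=3):
    """Unnormalized kernel: total number of common word subsequences up to max_len."""
    return float(sum(common_subsequence_counts(s, t, max_len)))

def normalized_kernel(s, t, max_len=3):
    k = subsequence_kernel(s, t, max_len)
    return k / (subsequence_kernel(s, s, max_len) *
                subsequence_kernel(t, t, max_len)) ** 0.5

print(normalized_kernel("which rivers run through texas".split(),
                        "rivers that run through texas".split()))
```

Normalization keeps long sentences from dominating simply by having more subsequences, which is why the kernel is divided by the geometric mean of the two self-similarities.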
Some of the previous work on semantic parsing has focused on fairly simple domains, primarily, ATIS (Air Travel Information Service) (Price, 1990) whose semantic analysis is equivalent to filling a single semantic frame (Miller et al., 1996; Popescu et al., 2004). In this paper, we have tested KRISP on two real-world domains in which meaning representations are more complex with richer predicates and nested structures. Our experiments demonstrate that KRISP compares favor913 NL: “If the ball is in our goal area then our player 1 should intercept it.” CLANG: ((bpos (goal-area our)) (do our {1} intercept)) Figure 1: An example of an NL advice and its CLANG MR. ably to other existing systems and is particularly robust to noise. 2 Semantic Parsing We call the process of mapping natural language (NL) utterances into their computer-executable meaning representations (MRs) as semantic parsing. These MRs are expressed in formal languages which we call meaning representation languages (MRLs). We assume that all MRLs have deterministic context free grammars, which is true for almost all computer languages. This ensures that every MR will have a unique parse tree. A learning system for semantic parsing is given a training corpus of NL sentences paired with their respective MRs from which it has to induce a semantic parser which can map novel NL sentences to their correct MRs. Figure 1 shows an example of an NL sentence and its MR from the CLANG domain. CLANG (Chen et al., 2003) is the standard formal coach language in which coaching advice is given to soccer agents which compete on a simulated soccer field in the RoboCup 1 Coach Competition. In the MR of the example, bpos stands for “ball position”. The second domain we have considered is the GEOQUERY domain (Zelle and Mooney, 1996) which is a query language for a small database of about 800 U.S. geographical facts. Figure 2 shows an NL query and its MR form in a functional query language. The parse of the functional query language is also shown along with the involved productions. This example is also used later to illustrate how our system does semantic parsing. The MR in the functional query language can be read as if processing a list which gets modified by various functions. From the innermost expression going outwards it means: the state of Texas, the list containing all the states next to the state of Texas and the list of all the rivers which flow through these states. This list is finally returned as the answer. 1http://www.robocup.org/ NL: “Which rivers run through the states bordering Texas?” Functional query language: answer(traverse(next to(stateid(‘texas’)))) Parse tree of the MR in functional query language: ANSWER answer RIVER TRAVERSE traverse STATE NEXT TO next to STATE STATEID stateid ‘texas’ Productions: ANSWER →answer(RIVER) RIVER →TRAVERSE(STATE) STATE →NEXT TO(STATE) STATE →STATEID TRAVERSE →traverse NEXT TO →next to STATEID →stateid(‘texas’) Figure 2: An example of an NL query and its MR in a functional query language with its parse tree. KRISP does semantic parsing using the notion of a semantic derivation of an NL sentence. In the following subsections, we define the semantic derivation of an NL sentence and its probability. The task of semantic parsing then is to find the most probable semantic derivation of an NL sentence. In section 3, we describe how KRISP learns the string classifiers that are used to obtain the probabilities needed in finding the most probable semantic derivation. 
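To fix ideas, the parse tree of the GEOQUERY example above can be represented as nested (production, children) tuples from which the MR string is regenerated; the data structures and placeholder notation below are our own choices, not KRISP's (underscores stand in for the spaces in production names such as next to).

```python
# Parse tree of answer(traverse(next_to(stateid('texas')))): each node is
# ((lhs, rhs_template), children), where <NT> marks where a child's string goes.
TREE = (("ANSWER", "answer(<RIVER>)"),
        [(("RIVER", "<TRAVERSE>(<STATE>)"),
          [(("TRAVERSE", "traverse"), []),
           (("STATE", "<NEXT_TO>(<STATE>)"),
            [(("NEXT_TO", "next_to"), []),
             (("STATE", "<STATEID>"),
              [(("STATEID", "stateid('texas')"), [])])])])])

def to_mr(node):
    """Generate the MR string by expanding each child into its parent's template."""
    (lhs, rhs), children = node
    out = rhs
    for child in children:
        child_lhs = child[0][0]
        out = out.replace("<" + child_lhs + ">", to_mr(child), 1)
    return out

print(to_mr(TREE))   # answer(traverse(next_to(stateid('texas'))))
```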
2.1 Semantic Derivation We define a semantic derivation, D, of an NL sentence, s, as a parse tree of an MR (not necessarily the correct MR) such that each node of the parse tree also contains a substring of the sentence in addition to a production. We denote nodes of the derivation tree by tuples (π, [i..j]), where π is its production and [i..j] stands for the substring s[i..j] of s (i.e. the substring from the ith word to the jth word), and we say that the node or its production covers the substring s[i..j]. The substrings covered by the children of a node are not allowed to overlap, and the substring covered by the parent must be the concatenation of the substrings covered by its children. Figure 3 shows a semantic derivation of the NL sentence and the MR parse which were shown in figure 2. The words are numbered according to their position in the sentence. Instead of non-terminals, productions are shown in the nodes to emphasize the role of productions in semantic derivations. Sometimes, the children of an MR parse tree 914 (ANSWER →answer(RIVER), [1..9]) (RIVER →TRAVERSE(STATE), [1..9]) (TRAVERSE →traverse, [1..4]) which1 rivers2 run3 through4 (STATE →NEXT TO(STATE), [5..9]) (NEXT TO→next to, [5..7]) the5 states6 bordering7 (STATE →STATEID, [8..9]) (STATEID →stateid ‘texas’, [8..9]) Texas8 ?9 Figure 3: Semantic derivation of the NL sentence “Which rivers run through the states bordering Texas?” which gives MR as answer(traverse(next to(stateid(texas)))). node may not be in the same order as are the substrings of the sentence they should cover in a semantic derivation. For example, if the sentence was “Through the states that border Texas which rivers run?”, which has the same MR as the sentence in figure 3, then the order of the children of the node “RIVER →TRAVERSE(STATE)” would need to be reversed. To accommodate this, a semantic derivation tree is allowed to contain MR parse tree nodes in which the children have been permuted. Note that given a semantic derivation of an NL sentence, it is trivial to obtain the corresponding MR simply as the string generated by the parse. Since children nodes may be permuted, this step also needs to permute them back to the way they should be according to the MRL productions. If a semantic derivation gives the correct MR of the NL sentence, then we call it a correct semantic derivation otherwise it is an incorrect semantic derivation. 2.2 Most Probable Semantic Derivation Let Pπ(u) denote the probability that a production π of the MRL grammar covers the NL substring u. In other words, the NL substring u expresses the semantic concept of a production π with probability Pπ(u). In the next subsection we will describe how KRISP obtains these probabilities using string-kernel based SVM classifiers. Assuming these probabilities are independent of each other, the probability of a semantic derivation D of a sentence s is then: P(D) = Y (π,[i..j])∈D Pπ(s[i..j]) The task of the semantic parser is to find the most probable derivation of a sentence s. This task can be recursively performed using the notion of a partial derivation En,s[i..j], which stands for a subtree of a semantic derivation tree with n as the left-hand-side (LHS) non-terminal of the root production and which covers s from index i to j. For example, the subtree rooted at the node “(STATE →NEXT TO(STATE),[5..9])” in the derivation shown in figure 3 is a partial derivation which would be denoted as ESTATE,s[5..9]. 
Note that the derivation D of sentence s is then simply Estart,s[1..|s|], where start is the start symbol of the MRL’s context free grammar, G. Our procedure to find the most probable partial derivation E∗ n,s[i..j] considers all possible subtrees whose root production has n as its LHS nonterminal and which cover s from index i to j. Mathematically, the most probable partial derivation E∗ n,s[i..j] is recursively defined as: E∗ n,s[i..j] = makeT ree( arg max π = n →n1..nt ∈G, (p1, .., pt) ∈ partition(s[i..j], t) (Pπ(s[i..j]) Y k=1..t P (E∗ nk,pk ))) where partition(s[i..j], t) is a function which returns the set of all partitions of s[i..j] with t elements including their permutations. A partition of a substring s[i..j] with t elements is a t−tuple containing t non-overlapping substrings of s[i..j] which give s[i..j] when concatenated. For example, (“the states bordering”, “Texas ?”) is a partition of the substring “the states bordering Texas ?” with 2 elements. The procedure makeTree(π, (p1, .., pt)) constructs a partial derivation tree by making π as its root production and making the most probable partial derivation trees found through the recursion as children subtrees which cover the substrings according to the partition (p1, .., pt). The most probable partial derivation E∗ n,s[i..j] is found using the above equation by trying all productions π = n →n1..nt in G which have 915 n as the LHS, and all partitions with t elements of the substring s[i..j] (n1 to nt are right-handside (RHS) non-terminals of π, terminals do not play any role in this process and are not shown for simplicity). The most probable partial derivation E∗ STATE,s[5..9] for the sentence shown in figure 3 will be found by trying all the productions in the grammar with STATE as the LHS, for example, one of them being “STATE →NEXT TO STATE”. Then for this sample production, all partitions, (p1, p2), of the substring s[5..9] with two elements will be considered, and the most probable derivations E∗ NEXT TO,p1 and E∗ STATE,p2 will be found recursively. The recursion reaches base cases when the productions which have n on the LHS do not have any non-terminal on the RHS or when the substring s[i..j] becomes smaller than the length t. According to the equation, a production π ∈G and a partition (p1, .., pt) ∈partition(s[i..j], t) will be selected in constructing the most probable partial derivation. These will be the ones which maximize the product of the probability of π covering the substring s[i..j] with the product of probabilities of all the recursively found most probable partial derivations consistent with the partition (p1, .., pt). A naive implementation of the above recursion is computationally expensive, but by suitably extending the well known Earley’s context-free parsing algorithm (Earley, 1970), it can be implemented efficiently. The above task has some resemblance to probabilistic context-free grammar (PCFG) parsing for which efficient algorithms are available (Stolcke, 1995), but we note that our task of finding the most probable semantic derivation differs from PCFG parsing in two important ways. First, the probability of a production is not independent of the sentence but depends on which substring of the sentence it covers, and second, the leaves of the tree are not individual terminals of the grammar but are substrings of words of the NL sentence. 
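To make the recursion explicit, the sketch below implements the definition directly with memoisation, enumerating contiguous partitions and their permutations exactly as stated; it is meant only as an illustration of the search space, since KRISP's actual implementation relies on the extended Earley parser described next. The grammar encoding (a map from each non-terminal to its productions as (production, RHS non-terminals) pairs) and the prob function are assumptions made for the example.

from itertools import permutations
from math import prod

def contiguous_partitions(i, j, t):
    """All ways of splitting the span [i..j] into t contiguous, non-empty pieces."""
    if t == 1:
        yield ((i, j),)
        return
    for k in range(i, j):                      # first piece covers [i..k]
        for rest in contiguous_partitions(k + 1, j, t - 1):
            yield ((i, k),) + rest

def best_partial_derivation(n, i, j, grammar, prob, cache=None):
    """Most probable partial derivation for non-terminal n over s[i..j], returned
    as a (probability, tree) pair; (0.0, None) if no derivation exists."""
    if cache is None:
        cache = {}
    if (n, i, j) in cache:
        return cache[(n, i, j)]
    best_p, best_tree = 0.0, None
    for pi, rhs in grammar[n]:                 # productions with n on the LHS
        if not rhs:                            # base case: no RHS non-terminals
            p = prob(pi, i, j)
            if p > best_p:
                best_p, best_tree = p, (pi, (i, j), [])
            continue
        if j - i + 1 < len(rhs):               # substring shorter than the arity t
            continue
        for pieces in contiguous_partitions(i, j, len(rhs)):
            for assigned in permutations(pieces):      # children may be permuted
                children = [best_partial_derivation(nk, a, b, grammar, prob, cache)
                            for nk, (a, b) in zip(rhs, assigned)]
                if any(tree is None for _, tree in children):
                    continue
                p = prob(pi, i, j) * prod(cp for cp, _ in children)
                if p > best_p:
                    best_p, best_tree = p, (pi, (i, j), [t for _, t in children])
    cache[(n, i, j)] = (best_p, best_tree)
    return best_p, best_tree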
The extensions needed for Earley’s algorithm are straightforward and are described in detail in (Kate, 2005) but due to space limitation we do not describe them here. Our extended Earley’s algorithm does a beam search and attempts to find the ω (a parameter) most probable semantic derivations of an NL sentence s using the probabilities Pπ(s[i..j]). To make this search faster, it uses a threshold, θ, to prune low probability derivation trees. 3 KRISP’s Training Algorithm In this section, we describe how KRISP learns the classifiers which give the probabilities Pπ(u) needed for semantic parsing as described in the previous section. Given the training corpus of NL sentences paired with their MRs {(si, mi)|i = 1..N}, KRISP first parses the MRs using the MRL grammar, G. We represent the parse of MR, mi, by parse(mi). Figure 4 shows pseudo-code for KRISP’s training algorithm. KRISP learns a semantic parser iteratively, each iteration improving upon the parser learned in the previous iteration. In each iteration, for every production π of G, KRISP collects positive and negative example sets. In the first iteration, the set P(π) of positive examples for production π contains all sentences, si, such that parse(mi) uses the production π. The set of negative examples, N(π), for production π includes all of the remaining training sentences. Using these positive and negative examples, an SVM classifier 2, Cπ, is trained for each production π using a normalized string subsequence kernel. Following the framework of Lodhi et al. (2002), we define a kernel between two strings as the number of common subsequences they share. One difference, however, is that their strings are over characters while our strings are over words. The more the two strings share, the greater the similarity score will be. Normally, SVM classifiers only predict the class of the test example but one can obtain class probability estimates by mapping the distance of the example from the SVM’s separating hyperplane to the range [0,1] using a learned sigmoid function (Platt, 1999). The classifier Cπ then gives us the probabilities Pπ(u). We represent the set of these classifiers by C = {Cπ|π ∈G}. Next, using these classifiers, the extended Earley’s algorithm, which we call EXTENDED EARLEY in the pseudo-code, is invoked to obtain the ω best semantic derivations for each of the training sentences. The procedure getMR returns the MR for a semantic derivation. At this point, for many training sentences, the resulting most-probable semantic derivation may not give the correct MR. Hence, next, the system collects more refined positive and negative examples to improve the result in the next iteration. 
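A minimal word-level version of such a kernel is sketched below: it counts the pairs of equal word subsequences of length up to a small n shared by two sentences and normalises the result. The published kernel of Lodhi et al. (2002) additionally decays every match by a gap penalty lambda; that factor is omitted here for brevity, so this is a simplified illustration rather than the exact kernel used by KRISP.

from math import sqrt

def common_subsequence_counts(s, t, max_len):
    """Return [K_1(s,t), ..., K_max_len(s,t)] for word lists s and t, where
    K_p counts pairs of equal subsequences of length p."""
    m, l = len(s), len(t)
    # counts[p][i][j] = number of equal subsequence pairs of length p
    # drawn from s[:i] and t[:j]
    counts = [[[1] * (l + 1) for _ in range(m + 1)]]        # p = 0
    for p in range(1, max_len + 1):
        cur = [[0] * (l + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, l + 1):
                cur[i][j] = (cur[i - 1][j] + cur[i][j - 1] - cur[i - 1][j - 1]
                             + (counts[p - 1][i - 1][j - 1]
                                if s[i - 1] == t[j - 1] else 0))
        counts.append(cur)
    return [counts[p][m][l] for p in range(1, max_len + 1)]

def normalised_kernel(s, t, max_len=3):
    """K(s,t) / sqrt(K(s,s) * K(t,t)), summing subsequence counts up to max_len."""
    k_st = sum(common_subsequence_counts(s, t, max_len))
    k_ss = sum(common_subsequence_counts(s, s, max_len))
    k_tt = sum(common_subsequence_counts(t, t, max_len))
    return k_st / sqrt(k_ss * k_tt) if k_ss and k_tt else 0.0

print(normalised_kernel("which rivers run through texas".split(),
                        "which rivers traverse texas".split()))

The normalisation keeps the value between 0 and 1 regardless of sentence length.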
It 2We use the LIBSVM package available at: http:// www.csie.ntu.edu.tw/˜cjlin/libsvm/ 916 function TRAIN KRISP(training corpus {(si, mi)|i = 1..N}, MRL grammar G) for each π ∈G // collect positive and negative examples for the first iteration for i = 1 to N do if π is used in parse(mi) then include si in P(π) else include si in N(π) for iteration = 1 to MAX IT R do for each π ∈G do Cπ = trainSV M(P(π), N(π)) // SVM training for each π ∈G P(π) = Φ // empty the positive examples, accumulate negatives though for i = 1 to N do D =EXTENDED EARLEY(si, G, P ) // obtain best derivations if ̸ ∃d ∈D such that parse(mi) = getMR(d) then D = D ∪EXTENDED EARLEY CORRECT(si, G, P, mi) // if no correct derivation then force to find one d∗= arg maxd∈D&getMR(d)=parse(mi) P (d) COLLECT POSITIVES(d∗) // collect positives from maximum probability correct derivation for each d ∈D do if P (d) > P (d∗) and getMR(d) ̸= parse(mi) then // collect negatives from incorrect derivation with larger probability than the correct one COLLECT NEGATIVES(d, d∗) return classifiers C = {Cπ|π ∈G} Figure 4: KRISP’s training algorithm is also possible that for some sentences, none of the obtained ω derivations give the correct MR. But as will be described shortly, the most probable derivation which gives the correct MR is needed to collect positive and negative examples for the next iteration. Hence in these cases, a version of the extended Earley’s algorithm, EXTENDED EARLEY CORRECT, is invoked which also takes the correct MR as an argument and returns the best ω derivations it finds, all of which give the correct MR. This is easily done by making sure all subtrees derived in the process are present in the parse of the correct MR. From these derivations, positive and negative examples are collected for the next iteration. Positive examples are collected from the most probable derivation which gives the correct MR, figure 3 showed an example of a derivation which gives the correct MR. At each node in such a derivation, the substring covered is taken as a positive example for its production. Negative examples are collected from those derivations whose probability is higher than the most probable correct derivation but which do not give the correct MR. Figure 5 shows an example of an incorrect derivation. Here the function “next to” is missing from the MR it produces. The following procedure is used to collect negative examples from incorrect derivations. The incorrect derivation and the most probable correct derivation are traversed simultaneously starting from the root using breadth-first traversal. The first nodes where their productions differ is detected, and all of the words covered by the these nodes (in both derivations) are marked. In the correct and incorrect derivations shown in figures 3 and 5 respectively, the first nodes where the productions differ are “(STATE →NEXT TO(STATE), [5..9])” and “(STATE →STATEID, [8..9])”. Hence, the union of words covered by them: 5 to 9 (“the states bordering Texas?”), will be marked. For each of these marked words, the procedure considers all of the productions which cover it in the two derivations. The nodes of the productions which cover a marked word in the incorrect derivation but not in the correct derivation are used to collect negative examples. In the example, the node “(TRAVERSE→traverse,[1..7])” will be used to collect a negative example (i.e. 
the words 1 to 7 ‘‘which rivers run through the states bordering” will be a negative example for the production TRAVERSE→traverse) because the production covers the marked words “the”, “states” and “bordering” in the incorrect derivation but not in the correct derivation. With this as a negative example, hopefully in the next iteration, the probability of this derivation will decrease significantly and drop below the probability of the correct derivation. In each iteration, the positive examples from the previous iteration are first removed so that new positive examples which lead to better correct derivations can take their place. However, negative examples are accumulated across iterations for better accuracy because negative examples from each iteration only lead to incorrect derivations and it is always good to include them. To further increase the number of negative examples, every positive example for a production is also included as a negative example for all the other productions having the same LHS. After a specified number of MAX ITR iterations, 917 (ANSWER →answer(RIVER), [1..9]) (RIVER →TRAVERSE(STATE), [1..9]) (TRAVERSE→traverse, [1..7]) Which1 rivers2 run3 through4 the5 states6 bordering7 (STATE →STATEID, [8..9]) (STATEID →stateid texas, [8..9]) Texas8 ?9 Figure 5: An incorrect semantic derivation of the NL sentence ”Which rivers run through the states bordering Texas?” which gives the incorrect MR answer(traverse(stateid(texas))). the trained classifiers from the last iteration are returned. Testing involves using these classifiers to generate the most probable derivation of a test sentence as described in the subsection 2.2, and returning its MR. The MRL grammar may contain productions corresponding to constants of the domain, for e.g., state names like “STATEID →‘texas’”, or river names like “RIVERID →‘colorado’” etc. Our system allows the user to specify such productions as constant productions giving the NL substrings, called constant substrings, which directly relate to them. For example, the user may give “Texas” as the constant substring for the production “STATEID →‘texas’. Then KRISP does not learn classifiers for these constant productions and instead decides if they cover a substring of the sentence or not by matching it with the provided constant substrings. 4 Experiments 4.1 Methodology KRISP was evaluated on CLANG and GEOQUERY domains as described in section 2. The CLANG corpus was built by randomly selecting 300 pieces of coaching advice from the log files of the 2003 RoboCup Coach Competition. These formal advice instructions were manually translated into English (Kate et al., 2005). The GEOQUERY corpus contains 880 English queries collected from undergraduates and from real users of a web-based interface (Tang and Mooney, 2001). These were manually translated into their MRs. The average length of an NL sentence in the CLANG corpus is 22.52 words while in the GEOQUERY corpus it is 7.48 words, which indicates that CLANG is the harder corpus. The average length of the MRs is 13.42 tokens in the CLANG corpus while it is 6.46 tokens in the GEOQUERY corpus. KRISP was evaluated using standard 10-fold cross validation. For every test sentence, only the best MR corresponding to the most probable semantic derivation is considered for evaluation, and its probability is taken as the system’s confidence in that MR. Since KRISP uses a threshold, θ, to prune low probability derivation trees, it sometimes may fail to return any MR for a test sentence. 
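The evaluation protocol just described can be summarised in a few lines; in the sketch below, parse and is_correct are placeholders for KRISP's parser and for the domain-specific correctness check described below, and sweeping a confidence threshold yields the precision-recall trade-offs plotted later.

def precision_recall(test_set, parse, is_correct, min_confidence=0.0):
    """Precision and recall over (sentence, gold_mr) pairs. `parse` returns
    either None or an (mr, confidence) pair; `is_correct` implements the
    domain-specific check. Both are placeholders for the real components."""
    produced = correct = 0
    for sentence, gold_mr in test_set:
        result = parse(sentence)
        if result is None or result[1] < min_confidence:
            continue                      # no MR returned at this confidence level
        produced += 1
        if is_correct(result[0], gold_mr):
            correct += 1
    precision = correct / produced if produced else 0.0
    recall = correct / len(test_set) if test_set else 0.0
    return precision, recall

def precision_recall_curve(test_set, parse, is_correct, thresholds):
    """Precision/recall at a range of confidence levels, as used for the curves."""
    return [precision_recall(test_set, parse, is_correct, c) for c in thresholds]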
We computed the number of test sentences for which KRISP produced MRs, and the number of these MRs that were correct. For CLANG, an output MR is considered correct if and only if it exactly matches the correct MR. For GEOQUERY, an output MR is considered correct if and only if the resulting query retrieves the same answer as the correct MR when submitted to the database. Performance was measured in terms of precision (the percentage of generated MRs that were correct) and recall (the percentage of all sentences for which correct MRs were obtained). In our experiments, the threshold θ was fixed to 0.05 and the beam size ω was 20. These parameters were found through pilot experiments. The maximum number of iterations (MAX ITR) required was only 3, beyond this we found that the system only overfits the training corpus. We compared our system’s performance with the following existing systems: the string and tree versions of SILT (Kate et al., 2005), a system that learns transformation rules relating NL phrases to MRL expressions; WASP (Wong and Mooney, 2006), a system that learns transformation rules using statistical machine translation techniques; SCISSOR (Ge and Mooney, 2005), a system that learns an integrated syntactic-semantic parser; and CHILL (Tang and Mooney, 2001) an ILP-based semantic parser. We also compared with the CCG-based semantic parser by Zettlemoyer et al. (2005), but their results are available only for the GEO880 corpus and their experimental set-up is also different from ours. Like KRISP, WASP and SCISSOR also give confidences to the MRs they generate which are used to plot precision-recall curves by measuring precisions and recalls at vari918 50 60 70 80 90 100 0 10 20 30 40 50 60 70 80 90 100 Precision Recall KRISP WASP SCISSOR SILT-tree SILT-string Figure 6: Results on the CLANG corpus. 50 60 70 80 90 100 0 10 20 30 40 50 60 70 80 90 100 Precision Recall KRISP WASP SCISSOR SILT-tree SILT-string CHILL Zettlemoyer et al. (2005) Figure 7: Results on the GEOQUERY corpus. ous confidence levels. The results of the other systems are shown as points on the precision-recall graph. 4.2 Results Figure 6 shows the results on the CLANG corpus. KRISP performs better than either version of SILT and performs comparable to WASP. Although SCISSOR gives less precision at lower recall values, it gives much higher maximum recall. However, we note that SCISSOR requires more supervision for the training corpus in the form of semantically annotated syntactic parse trees for the training sentences. CHILL could not be run beyond 160 training examples because its Prolog implementation runs out of memory. For 160 training examples it gave 49.2% precision with 12.67% recall. Figure 7 shows the results on the GEOQUERY corpus. KRISP achieves higher precisions than WASP on this corpus. Overall, the results show that KRISP performs better than deterministic rule-based semantic parsers like CHILL and SILT and performs comparable to other statistical semantic parsers like WASP and SCISSOR. 4.3 Experiments with Other Natural Languages We have translations of a subset of the GEOQUERY corpus with 250 examples (GEO250 corpus) in 50 60 70 80 90 100 0 10 20 30 40 50 60 70 80 90 100 Precision Recall English Japanese Spanish Turkish Figure 8: Results of KRISP on the GEO250 corpus for different natural languages. three other natural languages: Spanish, Turkish and Japanese. 
Since KRISP’s learning algorithm does not use any natural language specific knowledge, it is directly applicable to other natural languages. Figure 8 shows that KRISP performs competently on other languages as well. 4.4 Experiments with Noisy NL Sentences Any real world application in which semantic parsers would be used to interpret natural language of a user is likely to face noise in the input. If the user is interacting through spontaneous speech and the input to the semantic parser is coming form the output of a speech recognition system then there are many ways in which noise could creep in the NL sentences: interjections (like um’s and ah’s), environment noise (like door slams, phone rings etc.), out-of-domain words, grammatically ill-formed utterances etc. (Zue and Glass, 2000). As opposed to the other systems, KRISP’s stringkernel-based semantic parsing does not use hardmatching rules and should be thus more flexible and robust to noise. We tested this hypothesis by running experiments on data which was artificially corrupted with simulated speech recognition errors. The interjections, environment noise etc. are likely to be recognized as real words by a speech recognizer. To simulate this, after every word in a sentence, with some probability Padd, an extra word is added which is chosen with probability proportional to its word frequency found in the British National Corpus (BNC), a good representative sample of English. A speech recognizer may sometimes completely fail to detect words, so with a probability of Pdrop a word is sometimes dropped. A speech recognizer could also introduce noise by confusing a word with a high frequency phonetically close word. We sim919 0 20 40 60 80 100 0 1 2 3 4 5 F-measure Noise level KRISP WASP SCISSOR Figure 9: Results on the CLANG corpus with increasing amounts of noise in the test sentences. ulate this type of noise by substituting a word in the corpus by another word, w, with probability ped(w)∗P(w), where p is a parameter, ed(w) is w’s edit distance (Levenshtein, 1966) from the original word and P(w) is w’s probability proportional to its word frequency. The edit distance which calculates closeness between words is character-based rather than based on phonetics, but this should not make a significant difference in the experimental results. Figure 9 shows the results on the CLANG corpus with increasing amounts of noise, from level 0 to level 4. The noise level 0 corresponds to no noise. The noise parameters, Padd and Pdrop, were varied uniformly from being 0 at level 0 and 0.1 at level 4, and the parameter p was varied uniformly from being 0 at level 0 and 0.01 at level 4. We are showing the best F-measure (harmonic mean of precision and recall) for each system at different noise levels. As can be seen, KRISP’s performance degrades gracefully in the presence of noise while other systems’ performance degrade much faster, thus verifying our hypothesis. In this experiment, only the test sentences were corrupted, we get qualitatively similar results when both training and test sentences are corrupted. The results are also similar on the GEOQUERY corpus. 5 Conclusions We presented a new kernel-based approach to learn semantic parsers. SVM classifiers based on string subsequence kernels are trained for each of the productions in the meaning representation language. These classifiers are then used to compositionally build complete meaning representations of natural language sentences. We evaluated our system on two real-world corpora. 
The results showed that our system compares favorably to other existing systems and is particularly robust to noise. Acknowledgments This research was supported by Defense Advanced Research Projects Agency under grant HR0011-04-1-0007. References Mao Chen et al. 2003. Users manual: RoboCup soccer server manual for soccer server version 7.07 and later. Available at http://sourceforge. net/projects/sserver/. Nello Cristianini and John Shawe-Taylor. 2000. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press. Jay Earley. 1970. An efficient context-free parsing algorithm. Communications of the Association for Computing Machinery, 6(8):451–455. R. Ge and R. J. Mooney. 2005. A statistical semantic parser that integrates syntax and semantics. In Proc. of 9th Conf. on Computational Natural Language Learning (CoNLL-2005), pages 9–16, Ann Arbor, MI, July. R. J. Kate, Y. W. Wong, and R. J. Mooney. 2005. Learning to transform natural to formal languages. In Proc. of 20th Natl. Conf. on Artificial Intelligence (AAAI-2005), pages 1062–1068, Pittsburgh, PA, July. Rohit J. Kate. 2005. A kernel-based approach to learning semantic parsers. Technical Report UT-AI-05-326, Artificial Intelligence Lab, University of Texas at Austin, Austin, TX, November. V. I. Levenshtein. 1966. Binary codes capable of correcting insertions and reversals. Soviet Physics Doklady, 10(8):707–710, February. Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. Journal of Machine Learning Research, 2:419–444. Scott Miller, David Stallard, Robert Bobrow, and Richard Schwartz. 1996. A fully statistical approach to natural language interfaces. In Proc. of the 34th Annual Meeting of the Association for Computational Linguistics (ACL96), pages 55–61, Santa Cruz, CA. John C. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Alexander J. Smola, Peter Bartlett, Bernhard Sch¨olkopf, and Dale Schuurmans, editors, Advances in Large Margin Classifiers, pages 185–208. MIT Press. Ana-Maria Popescu, Alex Armanasu, Oren Etzioni, David Ko, and Alexander Yates. 2004. Modern natural language interfaces to databases: Composing statistical parsing with semantic tractability. In Proc. of 20th Intl. Conf. on Computational Linguistics (COLING-04), Geneva, Switzerland, August. Patti J. Price. 1990. Evaluation of spoken language systems: The ATIS domain. In Proc. of 3rd DARPA Speech and Natural Language Workshop, pages 91–95, June. Andreas Stolcke. 1995. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2):165–201. L. R. Tang and R. J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In Proc. of the 12th European Conf. on Machine Learning, pages 466–477, Freiburg, Germany. Yuk Wah Wong and Raymond J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proc. of Human Language Technology Conf. / North American Association for Computational Linguistics Annual Meeting (HLT/NAACL-2006), New York City, NY. To appear. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proc. of 13th Natl. Conf. on Artificial Intelligence (AAAI-96), pages 1050–1055, Portland, OR, August. Luke S. Zettlemoyer and Michael Collins. 2005. 
Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proc. of 21st Conf. on Uncertainty in Artificial Intelligence (UAI-2005), Edinburgh, Scotland, July. Victor W. Zue and James R. Glass. 2000. Conversational interfaces: Advances and challenges. In Proc. of the IEEE, volume 88(8), pages 1166–1180.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 921–928, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Bootstrapping Approach to Unsupervised Detection of Cue Phrase Variants Rashid M. Abdalla and Simone Teufel Computer Laboratory, University of Cambridge 15 JJ Thomson Avenue, Cambridge CB3 OFD, UK [email protected], [email protected] Abstract We investigate the unsupervised detection of semi-fixed cue phrases such as “This paper proposes a novel approach. . . 1” from unseen text, on the basis of only a handful of seed cue phrases with the desired semantics. The problem, in contrast to bootstrapping approaches for Question Answering and Information Extraction, is that it is hard to find a constraining context for occurrences of semi-fixed cue phrases. Our method uses components of the cue phrase itself, rather than external context, to bootstrap. It successfully excludes phrases which are different from the target semantics, but which look superficially similar. The method achieves 88% accuracy, outperforming standard bootstrapping approaches. 1 Introduction Cue phrases such as “This paper proposes a novel approach to. . . ”, “no method for . . . exists” or even “you will hear from my lawyer” are semi-fixed in that they constitute a formulaic pattern with a clear semantics, but with syntactic and lexical variations which are hard to predict and thus hard to detect in unseen text (e.g. “a new algorithm for . . . is suggested in the current paper” or “I envisage legal action”). In scientific discourse, such metadiscourse (Myers, 1992; Hyland, 1998) abounds and plays an important role in marking the discourse structure of the texts. Finding these variants can be useful for many text understanding tasks because semi-fixed cue phrases act as linguistic markers indicating the importance and/or the rhetorical role of some adjacent text. For the summarisation of scientific 1In contrast to standard work in discourse linguistics, which mostly considers sentence connectives and adverbials as cue phrases, our definition includes longer phrases, sometimes even entire sentences. papers, cue phrases such as “Our paper deals with. . . ” are commonly used as indicators of extraction-worthiness of sentences (Kupiec et al., 1995). Re-generative (rather than extractive) summarisation methods may want to go further than that and directly use the knowledge that a certain sentence contains the particular research aim of a paper, or a claimed gap in the literature. Similarly, in the task of automatic routing of customer emails and automatic answering of some of these, the detection of threats of legal action could be useful. However, systems that use cue phrases usually rely on manually compiled lists, the acquisition of which is time-consuming and error-prone and results in cue phrases which are genre-specific. Methods for finding cue phrases automatically include Hovy and Lin (1998) (using the ratio of word frequency counts in summaries and their corresponding texts), Teufel (1998) (using the most frequent n-grams), and Paice (1981) (using a pattern matching grammar and a lexicon of manually collected equivalence classes). The main issue with string-based pattern matching techniques is that they cannot capture syntactic generalisations such as active/passive constructions, different tenses and modification by adverbial, adjectival or prepositional phrases, appositions and other parenthetical material. 
For instance, we may be looking for sentences expressing the goal or main contribution of a paper; Fig. 1 shows candidates of such sentences. Cases a)–e), which do indeed describe the authors’ goal, display a wide range of syntactic variation. a) In this paper, we introduce a method for similaritybased estimation of . . . b) We introduce and justify a method. . . c) A method (described in section 1) is introduced d) The method introduced here is a variation. . . e) We wanted to introduce a method. . . f) We do not introduce a method. . . g) We introduce and adopt the method given in [1]. . . h) Previously we introduced a similar method. . . i) They introduce a similar method. . . Figure 1: Goal statements and syntactic variation – correct matches (a-e) and incorrect matches (f-i) 921 Cases f)–i) in contrast are false matches: they do not express the authors’ goals, although they are superficially similar to the correct contexts. While string-based approaches (Paice, 1981; Teufel, 1998) are too restrictive to cover the wide variation within the correct contexts, bag-of-words approaches such as Agichtein and Gravano’s (2000) are too permissive and would miss many of the distinctions between correct and incorrect contexts. Lisacek et al. (2005) address the task of identifying “paradigm shift” sentences in the biomedical literature, i.e. statements of thwarted expectation. This task is somewhat similar to ours in its definition by rhetorical context. Their method goes beyond string-based matching: In order for a sentence to qualify, the right set of concepts must be present in a sentence, with any syntactic relationship holding between them. Each concept set is encoded as a fixed, manually compiled lists of strings. Their method covers only one particular context (the paradigm shift one), whereas we are looking for a method where many types of cue phrases can be acquired. Whereas it relies on manually assembled lists, we advocate data-driven acquisition of new contexts. This is generally preferrable to manual definition, as language use is changing, inventive and hard to predict and as many of the relevant concepts in a domain may be infrequent (cf. the formulation “be cursed”, which was used in our corpus as a way of describing a method’s problems). It also allows the acquisition of cue phrases in new domains, where the exact prevalent meta-discourse might not be known. Riloff’s (1993) method for learning information extraction (IE) patterns uses a syntactic parse and correspondences between the text and filled MUCstyle templates to learn context in terms of lexicosemantic patterns. However, it too requires substantial hand-crafted knowledge: 1500 filled templates as training material, and a lexicon of semantic features for roughly 3000 nouns for constraint checking. Unsupervised methods for similar tasks include Agichtein and Gravano’s (2000) work, which shows that clusters of vector-spacebased patterns can be successfully employed to detect specific IE relationships (companies and their headquarters), and Ravichandran and Hovy’s (2002) algorithm for finding patterns for a Question Answering (QA) task. Based on training material in the shape of pairs of question and answer terms – e.g., (e.g. {Mozart, 1756}), they learn the a) In this paper, we introduce a method for similaritybased estimation of . . . b) Here, we present a similarity-based approach for estimation of. . . c) In this paper, we propose an algorithm which is . . . d) We will here define a technique for similarity-based. . . 
Figure 2: Context around cue phrases (lexical variants) semantics holding between these terms (“birth year”) via frequent string patterns occurring in the context, such as “A was born in B”, by considering n-grams of all repeated substrings. What is common to these three works is that bootstrapping relies on constraints between the context external to the extracted material and the extracted material itself, and that the target extraction material is defined by real-world relations. Our task differs in that the cue phrases we extract are based on general rhetorical relations holding in all scientific discourse. Our approach for finding semantically similar variants in an unsupervised fashion relies on bootstrapping of seeds from within the cue phrase. The assumption is that every semi-fixed cue phrase contains at least two main concepts whose syntax and semantics mutually constrain each other (e.g. verb and direct object in phrases such as “(we) present an approach for”). The expanded cue phrases are recognised in various syntactic contexts using a parser2. General semantic constraints valid for groups of semantically similar cue phrases are then applied to model, e.g., the fact that it must be the authors who present the method, not somebody else. We demonstrate that such an approach is more appropriate for our task than IE/QA bootstrapping mechanisms based on cue phrase-external context. Part of the reason for why normal bootstrapping does not work for our phrases is the difficulty of finding negatives contexts, essential in bootstrapping to evaluate the quality of the patterns automatically. IE and QA approaches, due to uniqueness assumptions of the real-world relations that these methods search for, have an automatic definition of negative contexts by hard constraints (i.e., all contexts involving Mozart and any other year are by definition of the wrong semantics; so are all contexts involving Microsoft and a city other than Redmond). As our task is not grounded in real-world relations but in rhetorical ones, constraints found in the context tend to be 2Thus, our task shows some parallels to work in paraphrasing (Barzilay and Lee, 2002) and syntactic variant generation (Jacquemin et al., 1997), but the methods are very different. 922 soft rather than hard (cf. Fig 2): while it it possible that strings such as “we” and “in this paper” occur more often in the context of a given cue phrase, they also occur in many other places in the paper where the cue phrase is not present. Thus, it is hard to define clear negative contexts for our task. The novelty of our work is thus the new pattern extraction task (finding variants of semi-fixed cue phrases), a task for which it is hard to directly use the context the patterns appear in, and an iterative unsupervised bootstrapping algorithm for lexical variants, using phrase-internal seeds and ranking similar candidates based on relation strength between the seeds. While our method is applicable to general cue phrases, we demonstrate it here with transitive verb–direct object pairs, namely a) cue phrases introducing a new methodology (and thus the main research goal of the scientific article; e.g. “In this paper, we propose a novel algorithm. . . ”) – we call those goal-type cue phrases; and b) cue phrases indicating continuation of previous other research (e.g. “Therefore, we adopt the approach presented in [1]. . . ”) – continuation-type cue phrases. 
2 Lexical Bootstrapping Algorithm The task of this module is to find lexical variants of the components of the seed cue phrases. Given the seed phrases “we introduce a method” and “we propose a model”, the algorithm starts by finding all direct objects of “introduce” in a given corpus and, using an appropriate similarity measure, ranks them according to their distributional similarity to the nouns “method” and “model”. Subsequently, the noun “method” is used to find transitive verbs and rank them according to their similarity to “introduce” and “propose”. In both cases, the ranking step retains variants that preserve the semantics of the cue phrase (e.g. “develop” and “approach”) and filters irrelevant terms that change the phrase semantics (e.g. “need” and “example”). Stopping at this point would limit us to those terms that co-occur with the seed words in the training corpus. Therefore additional iterations using automatically generated verbs and nouns are applied in order to recover more and more variants. The full algorithm is given in Fig. 3. The algorithm requires corpus data for the steps Hypothesize (producing a list of potential candidates) and Rank (testing them for similarity). We Input: Tuples {A1, A2, . . . , Am} and {B1, B2, . . . , Bn}. Initialisation: Set the concept-A reference set to {A1, A2, . . . , Am} and the concept-B reference set to {B1, B2, . . . , Bn}. Set the concept-A active element to A1 and the concept-B active element to B1. Recursion: 1. Concept B retrieval: (i) Hypothesize: Find terms in the corpus which are in the desired relationship with the concept-A active element (e.g. direct objects of a verb active element). This results in the concept-B candidate set. (ii) Rank: Rank the concept-B candidate set using a suitable ranking methodology that may make use of the concept-B reference set. In this process, each member of the candidate set is assigned a score. (iii) Accumulate: Add the top s items of the concept-B candidate set to the concept-B accumulator list (based on empirical results, s is the rank of the candidate set during the initial iteration and 50 for the remaining iterations). If an item is already on the accumulator list, add its ranking score to the existing item’s score. 2. Concept A retrieval: as above, with concepts A and B swapped. 3. Updating active elements: (i) Set the concept-B active element to the highest ranked instance in the concept-B accumulator list which has not been used as an active element before. (ii) Set the concept-A active element to the highest ranked instance in the concept-A accumulator list which has not been used as an active element before. Repeat steps 1-3 for k iterations Output: top M words of concept-A (verb) accumulator list and top N words of concept-B (noun) accumulator list Reference set: a set of seed words which define the collective semantics of the concept we are looking for in this iteration Active element: the instance of the concept used in the current iteration for retrieving instances of the other concept. If we are finding lexical variants of Concept A by exploiting relationships between Concepts A and B, then the active element is from Concept B. Candidate set: the set of candidate terms for one concept (eg. Concept A) obtained using an active element from the other concept (eg. Concept B). The more semantically similar a term in the candidate set is to the members of the reference set, the higher its ranking should be. This set contains verbs if the active element is a noun and vice versa. 
Accumulator list: a sorted list that accumulates the ranked members of the candidate set. Figure 3: Lexical variant bootstrapping algorithm estimate frequencies for the Rank step from the written portion of the British National Corpus (BNC, Burnard (1995)), 90 Million words. For the Hypothesize step, we experiment with two data sets: First, the scientific subsection of the BNC (24 Million words), which we parse using RASP (Briscoe and Carroll, 2002); we then examine the grammatical relations (GRs) for transitive verb constructions, both in active and passive voice. This method guarantees that we find almost all transitive verb constructions cleanly; Carroll et al. (1999) report an accuracy of .85 for 923 DOs, Active: "AGENT STRING AUX active-verb-element DETERMINER * POSTMOD" DOs, Passive: "DETERMINER * AUX active-verb-element element" TVs, Active: "AGENT STRING AUX * DETERMINER active-noun- element POSTMOD" TVs, Passive:"DET active-noun-element AUX * POSTMOD" Figure 4: Query patterns for retrieving direct objects (DOs) and transitive verbs (TVs) in the Hypothesize step. newspaper articles for this relation. Second, in order to obtain larger coverage and more current data we also experiment with Google Scholar3, an automatic web-based indexer of scientific literature (mainly peer-reviewed papers, technical reports, books, pre-prints and abstracts). Google Scholar snippets are often incomplete fragments which cannot be parsed. For practical reasons, we decided against processing the entire documents, and obtain an approximation to direct objects and transitive verbs with regular expressions over the result snippets in both active and passive voice (cf. Fig. 4), designed to be high-precision4. The amount of data available from BNC and Google Scholar is not directly comparable: harvesting Google Scholar snippets for both active and passive constructions gives around 2000 sentences per seed (Google Scholar returns up to 1000 results per query), while the number of BNC sentences containing seed words in active and passive form varies from 11 (“formalism”) to 5467 (“develop”) with an average of 1361 sentences for the experimental seed pairs. Ranking Having obtained our candidate sets (either from the scientific subsection of the BNC or from Google Scholar), the members are ranked using BNC frequencies. We investigate two ranking methodologies: frequency-based and contextbased. Frequency-based ranking simply ranks each member of the candidate set by how many times it is retrieved together with the current active element. Context-based ranking uses a similarity measure for computing the scores, giving a higher score to those words that share sufficiently similar contexts with the members of the reference set. We consider similarity measures in a vector space defined either by a fixed window, by the sentence window, or by syntactic relationships. The score assigned to each word in the candidate set is the sum of its semantic similarity values computed with respect to each member in the reference set. 3http://scholar.google.com 4The capitalised words in these patterns are replaced by actual words (e.g. AGENT STRING: We/I, DETERMINER: a/ an/our), and the extracted words (indicated by “*”) are lemmatised. Syntactic contexts, as opposed to window-based contexts, constrain the context of a word to only those words that are grammatically related to it. We use verb-object relations in both active and passive voice constructions as did Pereira et al. (1993) and Lee (1999), among others. 
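A sketch of this context-based ranking step: each candidate receives as its score the sum of its similarities to the members of the reference set, computed over co-occurrence vectors from the chosen context definition, and the candidate set is sorted by that score. Cosine is used below purely as a placeholder similarity, context_vector is an assumed helper returning a word's co-occurrence counts (estimated from the BNC in the setting described above), and any of the measures discussed next can be substituted.

from collections import Counter
from math import sqrt

def cosine(v1: Counter, v2: Counter) -> float:
    dot = sum(v1[w] * v2[w] for w in v1.keys() & v2.keys())
    norm = sqrt(sum(x * x for x in v1.values())) * sqrt(sum(x * x for x in v2.values()))
    return dot / norm if norm else 0.0

def rank_candidates(candidates, reference_set, context_vector, similarity=cosine):
    """Return the candidate set sorted by total similarity to the reference set."""
    def score(candidate):
        cand_vec = context_vector(candidate)
        return sum(similarity(cand_vec, context_vector(ref)) for ref in reference_set)
    return sorted(candidates, key=score, reverse=True)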
We use the cosine similarity measure for windowbased contexts and the following commonly used similarity measures for the syntactic vector space: Hindle’s (1990) measure, the weighted Lin measure (Wu and Zhou, 2003), the α-Skew divergence measure (Lee, 1999), the Jensen-Shannon (JS) divergence measure (Lin, 1991), Jaccard’s coefficient (van Rijsbergen, 1979) and the Confusion probability (Essen and Steinbiss, 1992). The Jensen-Shannon measure JS (x1, x2) = P y∈Y P x∈{x1,x2} P (y|x) log P(y|x) 1 2(P(y|x1)+P(y|x2)) subsequently performed best for our task. We compare the different ranking methodologies and data sets with respect to a manually-defined gold standard list of 20 goal-type verbs and 20 nouns. This list was manually assembled from Teufel (1999); WordNet synonyms and other plausible verbs and nouns found via Web searches on scientific articles were added. We ensured by searches on the ACL anthology that there is good evidence that the gold-standard words indeed occur in the right contexts, i.e. in goal statement sentences. As we want to find similarity metrics and data sources which result in accumulator lists with many of these gold members at high ranks, we need a measure that rewards exactly those lists. We use non-interpolated Mean Average Precision (MAP), a standard measure for evaluating ranked information retrieval runs, which combines precision and recall and ranges from 0 to 15. We use 8 pairs of 2-tuples as input (e.g. [introduce, study] & [approach, method]), randomly selected from the gold standard list. MAP was cal5MAP = 1 N PN j=1 APj = 1 N PN j=1 1 M PM i=1 P (gi) where P (gi) = nij rij if gi is retrieved and 0 otherwise, N is the number of seed combinations, M is the size of the golden list, gi is the ith member of the golden list and rij is its rank in the retrieved list of combination j while nij is the number of golden members found up to and including rank rij. 924 Ranking scheme BNC Google Scholar Frequency-based 0.123 0.446 Sentence-window 0.200 0.344 Fixedsize-window 0.184 0.342 Hindle 0.293 0.416 Weighted Lin 0.358 0.509 α-Skew 0.361 0.486 Jensen-Shannon 0.404 0.550 Jaccard’s coef. 0.301 0.436 Confusion prob. 0.171 0.293 Figure 5: MAPs after the first iteration culated over the verbs and nouns retrieved using our algorithm and averaged. Fig. 5 summarises the MAP scores for the first iteration, where Google Scholar significantly outperformed the BNC. The best result for this iteration (MAP=.550) was achieved by combining Google Scholar and the Jensen-Shannon measure. The algorithm stops to iterate when no more improvement can be obtained, in this case after 4 iterations, resulting in a final MAP of .619. Although α-Skew outperforms the simpler measures in ranking nouns, its performance on verbs is worse than the performance of Weighted Lin. While Lee (1999) argues that α-Skew’s asymmetry can be advantageous for nouns, this probably does not hold for verbs: verb hierarchies have much shallower structure than noun hierarchies with most verbs concentrated on one level (Miller et al., 1990). This would explain why JS, which is symmetric compared to the α-Skew metric, performed better in our experiments. In the evaluation presented here we therefore use Google Scholar data and the JS measure. 
An additional improvement (MAP=.630) is achieved when we incorporate a filter based on the following hypothesis: goal-type verbs should be more likely to have their direct objects preceded by indefinite articles rather than definite articles or possessive determiners (because a new method is introduced) whereas continuation-type verbs should prefer definite articles with their direct objects (as an existing method is involved). 3 Syntactic variants and semantic filters The syntactic variant extractor takes as its input the raw text and the lists of verbs and nouns generated by the lexical bootstrapper. After RASPparsing the input text, all instances of the input verbs are located and, based on the grammatical relations output by RASP6, a set of relevant en6The grammatical relations used are nsubj, dobj, iobj, aux, argmod, detmod, ncmod and mod. The agent of the verb (e.g., “We adopt. . . . . . adopted by the author”), the agent’s determiner and related adjectives. The direct object of the verb, the object’s determiner and adjectives, in addition to any post-modifiers (e.g., “. . . apply a method proposed by [1] . . . ” , “. . . follow an approach of [1] . . . ” Auxiliaries of the verb (e.g., “In a similar manner, we may propose a . . . ”) Adverbial modification of the verb (e.g., “We have previously presented a . . . .”) Prepositional phrases related to the verb (e.g., “In this paper we present. . . ”, “. . . adopted from their work”) Figure 6: Grammatical relations considered tities and modifiers for each verb is constructed, grouped into five categories (cf. Fig. 6). Next, semantic filters are applied to each of the potential candidates (represented by the extracted entities and modifiers), and a fitness score is calculated. These constraints encode semantic principles that will apply to all cue phrases of that rhetorical category. Examples for constraints are: if work is referred to as being done in previous own work, it is probably not a goal statement; the work in a goal statement must be presented here or in the current paper (the concept of ‘here-ness”); and the agents of a goal statement have to be the authors, not other people. While these filters are manually defined, they are modular, encode general principles, and can be combined to express a wide range of rhetorical contexts. We verified that around 20 semantic constraints are enough to cover a large sets of different cue phrases (the 1700 cue phrases from Teufel (1999)), though not all of these are implemented yet. A nice side-effect of our approach is the simple characterisation of a cue phrase (by a syntactic relationship, some seed words for each concept, and some general, reusable semantic constraints). This characterisation is more informative and specific than string-based approaches, yet it has the potential for generalisation (useful if the cue phrases are ever manually assessed and put into a lexicon). Fig. 7 shows successful extraction examples from our corpus7, illustrating the difficulty of the task: the system correctly identified sentences with syntactically complex goal-type and continuation-type cue phrases, and correctly rejected deceptive variants8. 7Numbers after examples give CmpLg archive numbers, followed by sentence numbers according to our preprocessing. 8The seeds in this example were [analyse, present] & [architecture, method] (for goal) and [improve, adopt] & [model, method] (for continuation). 
925 Correctly found: Goal-type: What we aim in this paper is to propose a paradigm that enables partial/local generation through decompositions and reorganizations of tentative local structures. (9411021, S-5) Continuation-type: In this paper we have discussed how the lexicographical concept of lexical functions, introduced by Melcuk to describe collocations, can be used as an interlingual device in the machine translation of such structures. (9410009, S-126) Correctly rejected: Goal-type: Perhaps the method proposed by Pereira et al. (1993) is the most relevant in our context. (9605014, S-76) Continuation-type: Neither Kamp nor Kehler extend their copying/ substitution mechanism to anything besides pronouns, as we have done. (9502014, S-174) Figure 7: Sentences correctly processed by our system 4 Gold standard evaluation We evaluated the quality of the extracted phrases in two ways: by comparing our system output to gold standard annotation, and by human judgement of the quality of the returned sentences. In both cases bootstrapping was done using the seed tuples [analyse, present] & [architecture, method]. For the gold standard-evaluation, we ran our system on a test set of 121 scientific articles drawn from the CmpLg corpus (Teufel, 1999) – entirely different texts from the ones the system was trained on. Documents were manually annotated by the second author for (possibly more than one) goal-type sentence; annotation of that type has been previously shown to be reliable at K=.71 (Teufel, 1999). Our evaluation recorded how often the system’s highest-ranked candidate was indeed a goal-type sentence; as this is a precision-critical task, we do not measure recall here. We compared our system against our reimplementation of Ravichandran and Hovy’s (2002) paraphrase learning. The seed words were of the form {goal-verb, goal-noun}, and we submitted each of the 4 combinations of the seed pair to Google Scholar. From the top 1000 documents for each query, we harvested 3965 sentences containing both the goal-verb and the goal-noun. By considering all possible substrings, an extensive list of candidate patterns was assembled. Patterns with single occurrences were discarded, leaving a list of 5580 patterns (examples in Fig. 8). In order to rank the patterns by precision, the goal-verbs were submitted as queries and the top 1000 documents were downloaded for each. From these, we <verb> a <noun> for of a new <noun> to <verb> the In this section , we <verb> the <noun> of the <noun> <verb> in this paper is to <verb> the <noun> after Figure 8: Examples of patterns extracted using Ravichandran and Hovy’s (2002) method Method Correct sentences Our system with bootstrapping 88 (73%) Ravichandran and Hovy (2002) 58 (48%) Our system, no bootstrapping, WordNet 50 (41%) Our system, no bootstrapping, seeds only 37 (30%) Figure 9: Gold standard evaluation: results the precision of each pattern was calculated by dividing the number of strings matching the pattern instantiated with both the goal-verb and all WordNet synonyms of the goal-noun, by the number of strings matching the patterns instantiated with the goal-verb only. An important point here is that while the tight semantic coupling between the question and answer terms in the original method accurately identifies all the positive and negative examples, we can only approximate this by using a sensible synonym set for the seed goal-nouns. 
For each document in the test set, the sentence containing the pattern with the highest precision (if any) was extracted as the goal sentence. We also compared our system to two baselines. We replaced the lists obtained from the lexical bootstrapping module with a) just the seed pair and b) the seed pair and all the WordNet synonyms of the components of the seed pair9. The results of these experiments are given in Fig. 9. All differences are statistically significant with the χ2 test at p=.01 (except those between Ravichandran/Hovy and our nonbootstrapping/WordNet system). Our bootstrapping system outperforms the Ravichandran and Hovy algorithm by 34%. This is not surprising, because this algorithm was not designed to perform well in tasks where there is no clear negative context. The results also show that bootstrapping outperforms a general thesaurus such as WordNet. Out of the 33 articles where our system’s favourite was not an annotated goal-type sentence, only 15 are due to bootstrapping errors (i.e., to an incorrect ranking of the lexical variants), corre9Bootstrapping should in principle do better than a thesaurus, as some of our correctly identified variants are not true synonyms (e.g., theory vs. method), and as noise through overgeneration of unrelated senses might occur unless automatic word sense diambiguation is performed. 926 System chose: but should have chosen: derive set compare model illustrate algorithm present formalisation discuss measures present variations describe modifications propose measures accommodate material describe approach examine material present study Figure 10: Wrong bootstrapping decisions Ceiling System Baseline Exp. A 3.91 3.08 1.58 Exp.B 4.33 3.67 2.50 Figure 11: Extrinsic evaluation: judges’ scores sponding to a 88% accuracy of the bootstrapping module. Examples from those 15 error cases are given in Fig. 10. The other errors were due to the cue phrase not being a transitive verb–direct object pattern (e.g. we show that, our goal is and we focus on), so the system could not have found anything (11 cases, or an 80% accuracy), ungrammatical English or syntactic construction too complex, resulting in a lack of RASP detection of the crucial grammatical relation (2) and failure of the semantic filter to catch non-goal contexts (5). 5 Human evaluation We next perform two human experiments to indirectly evaluate the quality of the automatically generated cue phrase variants. Given an abstract of an article and a sentence extracted from the article, judges are asked to assign a score ranging from 1 (low) to 5 (high) depending on how well the sentence expresses the goal of that article (Exp. A), or the continuation of previous work (Exp. B). Each experiment involves 24 articles drawn randomly from a subset of 80 articles in the CmpLg corpus that contain manual annotation for goaltype and continuation-type sentences. The experiments use three external judges (graduate students in computational linguistics), and a Latin Square experimental design with three conditions: Baseline (see below), System-generated and Ceiling (extracted from the gold standard annotation used in Teufel (1999)). Judges were not told how the sentences were generated, and no judge saw an item in more than one condition. The baseline for Experiment A was a random selection of sentences with the highest TF*IDF scores, because goal-type sentences typically contain many content-words. 
The baseline for experiment B (continuation-type) were randomly selected sentences containing citations, because they often co-occur with statements of continuation. In both cases, the length of the baseline sentence was controlled for by the average lengths of the gold standard and the system-extracted sentences in the document. Fig. 11 shows that judges gave an average score of 3.08 to system-extracted sentences in Exp. A, compared with a baseline of 1.58 and a ceiling of 3.9110; in Exp. B, the system scored 3.67, with a higher baseline of 2.50 and a ceiling of 4.33. According to the Wilcoxon signed-ranks test at α = .01, the system is indistinguishable from the gold standard, but significantly different from the baseline, in both experiments. Although this study is on a small scale, it indicates that humans judged sentences obtained with our method as almost equally characteristic of their rhetorical function as human-chosen sentences, and much better than non-trivial baselines. 6 Conclusion In this paper we have investigated the automatic acquisition of semi-fixed cue phrases as a bootstrapping task which requires very little manual input for each cue phrase and yet generalises to a wide range of syntactic and lexical variants in running text. Our system takes a few seeds of the type of cue phrase as input, and bootstraps lexical variants from a large corpus. It filters out many semantically invalid contexts, and finds cue phrases in various syntactic variants. The system achieved 80% precision of goal-type phrases of the targeted syntactic shape (88% if only the bootstrapping module is evaluated), and good quality ratings from human judges. We found Google Scholar to perform better than BNC as source for finding hypotheses for lexical variants, which may be due to the larger amount of data available to Google Scholar. This seems to outweigh the disadvantage of only being able to use POS patterns with Google Scholar, as opposed to robust parsing with the BNC. In the experiments reported, we bootstrap only from one type of cue phrase (transitive verbs and direct objects). This type covers a large proportion of the cue phrases needed practically, but our algorithm should in principle work for any kind of semi-fixed cue phrase, as long as they have two core concepts and a syntactic and semantic 10This score seems somewhat low, considering that these were the best sentences available as goal descriptions, according to the gold standard. 927 CUE PHRASE: “(previous) methods fail” (Subj–Verb) VARIANTS SEED 1: methodology, approach, technique. . . VARIANTS SEED 2: be cursed, be incapable of, be restricted to, be troubled, degrade, fall prey to, . . . CUE PHRASE: “advantage over previous methods” (NP–PP postmod + adj–noun premod.) VARIANTS SEED 1: benefit, breakthrough, edge, improvement, innovation, success, triumph. . . VARIANTS SEED 2: available, better-known, cited, classic, common, conventional, current, customary, established, existing, extant,. . . Figure 12: Cues with other syntactic relationships relation between them. Examples for such other types of phrases are given in Fig. 12; the second cue phrase involves a complex syntactic relationship between the two seeds (or possibly it could be considered as a cue phrase with three seeds). We will next investigate if the positive results presented here can be maintained for other syntactic contexts and for cue phrases with more than two seeds. The syntactic variant extractor could be enhanced in various ways, eg. 
by resolving anaphora in cue phrases. A more sophisticated model of syntactically weighted vector space (Pado and Lapata, 2003) may help improve the lexical acquisition phase. Another line for future work is bootstrapping meaning across cue phrases within the same rhetorical class, e.g. to learn that we propose a method for X and we aim to do X are equivalent. As some papers will contain both variants of the cue phrase, with very similar material (X) in the vicinity, they could be used as starting point for experiments to validate cue phrase equivalence. 7 Acknowledgements This work was funded by the EPSRC projects CITRAZ (GR/S27832/01, “Rhetorical Citation Maps and Domain-independent Argumentative Zoning”) and SCIBORG (EP/C010035/1, “Extracting the Science from Scientific Publications”). References Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the 5th ACM International Conference on Digital Libraries. Regina Barzilay and Lillian Lee. 2002. Bootstrapping lexical choice via multiple-sequence alignment. In Proc. of EMNLP. Ted Briscoe and John Carroll. 2002. Robust accurate statistical annotation of general text. In Proc. of LREC. Lou Burnard, 1995. Users Reference Guide, British National Corpus Version 1.0. Oxford University, UK. John Carroll, Guido Minnen, and Ted Briscoe. 1999. Corpus annotation for parser evaluation. In Proceedings of Linguistically Interpreted Corpora (LINC-99), EACLworkshop. Ute Essen and Volker Steinbiss. 1992. Co-occurrence smoothing for stochastic language modelling. In Proc. of ICASSP. Donald Hindle. 1990. Noun classification from predicateargument structures. In Proc. of the ACL. Edvard Hovy and Chin-Yew Lin. 1998. Automated text summarization and the Summarist system. In Proc. of the TIPSTER Text Program. Ken Hyland. 1998. Persuasion and context: The pragmatics of academic metadiscourse. Journal of Pragmatics, 30(4):437–455. Christian Jacquemin, Judith Klavans, and Evelyn Tzoukermann. 1997. Expansion of multi-word terms for indexing and retrieval using morphology and syntax. In Proc. of the ACL. Julian Kupiec, Jan O. Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proc. of SIGIR-95. Lillian Lee. 1999. Measures of distributional similarity. In Proc. of the ACL. Jianhua Lin. 1991. Divergence measures based on the Shannon entropy. IEEE transactions on Information Theory, 37(1):145–151. Frederique Lisacek, Christine Chichester, Aaron Kaplan, and Sandor Agnes. 2005. Discovering paradigm shift patterns in biomedical abstracts: Application to neurodegenerative diseases. In Proc. of the SMBM. George Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine Miller. 1990. Five papers on WordNet. Technical report, Cognitive Science Laboratory, Princeton University. Greg Myers. 1992. In this paper we report...—speech acts and scientific facts. Journal of Pragmatics, 17(4):295– 313. Sebastian Pado and Mirella Lapata. 2003. Constructing semantic space models from parsed corpora. In Proc. of ACL. Chris D. Paice. 1981. The automatic generation of literary abstracts: an approach based on the identification of self-indicating phrases. In Robert Norman Oddy, Stephen E. Robertson, Cornelis Joost van Rijsbergen, and P. W. Williams, editors, Information Retrieval Research, Butterworth, London, UK. Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proc. of the ACL. Deepak Ravichandran and Eduard Hovy. 2002. 
Learning surface text patterns for a question answering system. In Proc. of the ACL. Ellen Riloff. 1993. Automatically constructing a dictionary for information extraction tasks. In Proc. of AAAI-93. Simone Teufel. 1998. Meta-discourse markers and problemstructuring in scientific articles. In Proceedings of the ACL-98 Workshop on Discourse Structure and Discourse Markers. Simone Teufel. 1999. Argumentative Zoning: Information Extraction from Scientific Text. Ph.D. thesis, School of Cognitive Science, University of Edinburgh, UK. Cornelis Joost van Rijsbergen. 1979. Information Retrieval. Butterworth, London, UK, 2nd edition. Hua Wu and Ming Zhou. 2003. Synonymous collocation extraction using translation information. In Proc. of the ACL. 928 | 2006 | 116 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 929–936, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Semantic Role Labeling via FrameNet, VerbNet and PropBank Ana-Maria Giuglea and Alessandro Moschitti Department of Computer Science University of Rome ”Tor Vergata” Rome, Italy [email protected] [email protected] Abstract This article describes a robust semantic parser that uses a broad knowledge base created by interconnecting three major resources: FrameNet, VerbNet and PropBank. The FrameNet corpus contains the examples annotated with semantic roles whereas the VerbNet lexicon provides the knowledge about the syntactic behavior of the verbs. We connect VerbNet and FrameNet by mapping the FrameNet frames to the VerbNet Intersective Levin classes. The PropBank corpus, which is tightly connected to the VerbNet lexicon, is used to increase the verb coverage and also to test the effectiveness of our approach. The results indicate that our model is an interesting step towards the design of more robust semantic parsers. 1 Introduction During the last years a noticeable effort has been devoted to the design of lexical resources that can provide the training ground for automatic semantic role labelers. Unfortunately, most of the systems developed until now are confined to the scope of the resource used for training. A very recent example in this sense was provided by the CONLL 2005 shared task (Carreras and M`arquez, 2005) on PropBank (PB) (Kingsbury and Palmer, 2002) role labeling. The systems that participated in the task were trained on the Wall Street Journal corpus (WSJ) and tested on portions of WSJ and Brown corpora. While the best F-measure recorded on WSJ was 80%, on the Brown corpus, the F-measure dropped below 70%. The most significant causes for this performance decay were highly ambiguous and unseen predicates (i.e. predicates that do not have training examples). The same problem was again highlighted by the results obtained with and without the frame information in the Senseval-3 competition (Litkowski, 2004) of FrameNet (Johnson et al., 2003) role labeling task. When such information is not used by the systems, the performance decreases by 10 percent points. This is quite intuitive as the semantics of many roles strongly depends on the focused frame. Thus, we cannot expect a good performance on new domains in which this information is not available. A solution to this problem is the automatic frame detection. Unfortunately, our preliminary experiments showed that given a FrameNet (FN) predicate-argument structure, the task of identifying the associated frame can be performed with very good results when the verb predicates have enough training examples, but becomes very challenging otherwise. The predicates belonging to new application domains (i.e. not yet included in FN) are especially problematic since there is no training data available. Therefore, we should rely on a semantic context alternative to the frame (Giuglea and Moschitti, 2004). Such context should have a wide coverage and should be easily derivable from FN data. A very good candidate seems to be the Intersective Levin class (ILC) (Dang et al., 1998) that can be found as well in other predicate resources like PB and VerbNet (VN) (Kipper et al., 2000). 
In this paper we have investigated the above claim by designing a semi-automatic algorithm that assigns ILCs to FN verb predicates and by carrying out several semantic role labeling (SRL) experiments in which we replace the frame with the ILC information. We used support vector ma929 chines (Vapnik, 1995) with (a) polynomial kernels to learn the semantic role classification and (b) Tree Kernels (Moschitti, 2004) for learning both frame and ILC classification. Tree kernels were applied to the syntactic trees that encode the subcategorization structures of verbs. This means that, although FN contains three types of predicates (nouns, adjectives and verbs), we only concentrated on the verb predicates and their roles. The results show that: (1) ILC can be derived with high accuracy for both FN and Probank and (2) ILC can replace the frame feature with almost no loss in the accuracy of the SRL systems. At the same time, ILC provides better predicate coverage as it can also be learned from other corpora (e.g. PB). In the remainder of this paper, Section 2 summarizes previous work done on FN automatic role detection. It also explains in more detail why models based exclusively on this corpus are not suitable for free-text parsing. Section 3 focuses on VN and PB and how they can enhance the robustness of our semantic parser. Section 4 describes the mapping between frames and ILCs whereas Section 5 presents the experiments that support our thesis. Finally, Section 6 summarizes the conclusions. 2 Automatic Semantic Role Labeling One of the goals of the FN project is to design a linguistic ontology that can be used for the automatic processing of semantic information. The associated hierarchy contains an extensive semantic analysis of verbs, nouns, adjectives and situations in which they are used, called frames. The basic assumption on which the frames are built is that each word evokes a particular situation with specific participants (Fillmore, 1968). The word that evokes a particular frame is called target word or predicate and can be an adjective, noun or verb. The participant entities are defined using semantic roles and they are called frame elements. Several models have been developed for the automatic detection of the frame elements based on the FN corpus (Gildea and Jurafsky, 2002; Thompson et al., 2003; Litkowski, 2004). While the algorithms used vary, almost all the previous studies divide the task into: 1) the identification of the verb arguments to be labeled and 2) the tagging of each argument with a role. Also, most of the models agree on the core features as being: Predicate, Headword, Phrase Type, Governing Category, Position, Voice and Path. These are the initial features adopted by Gildea and Jurafsky (2002) (henceforth G&J) for both frame element identification and role classification. One difference among previous machinelearning models is whether they used the frame information or not. The impact of the frame feature over unseen predicates and words is particularly interesting for us. The results obtained by G&J provide some interesting insights in this direction. In one of their experiments, they used the frame to generalize from predicates seen in the training data to unseen predicates, which belonged to the same frame. The overall performance increased showing that when no training data is available for a target word we can use data from the same frame. Other studies suggest that the frame is crucial when trying to eliminate the major sources of errors. 
In their error analysis, (Thompson et al., 2003) pinpoints that the verb arguments with headwords that are rare in a particular frame but not rare over the whole corpus are especially hard to classify. For these cases the frame is very important because it provides the context information needed to distinguish between different word senses. Overall, the experiments presented in G&J’s study correlated with the results obtained in the Senseval-3 competition show that the frame feature increases the performance and decreases the amount of annotated examples needed in training (i.e. frame usage improves the generalization ability of the learning algorithm). On the other hand, the results obtained without the frame information are very poor. These results show that having broader frame coverage is very important for robust semantic parsing. Unfortunately, the 321 frames that contain at least one verb predicate cover only a small fraction of the English verb lexicon and of the possible domains. Also from these 321 frames only 100 were considered to have enough training data and were used in Senseval-3 (see (Litkowski, 2004) for more details). Our approach for solving such problems involves the usage of a frame-like feature, namely the Intersective Levin class (ILC). We show that the ILC can replace the frame with almost no loss in performance. At the same time, ILC provides better coverage as it can be learned also from other 930 corpora (e.g. PB). The next section provides the theoretical support for the unified usage of FN, VN and PB, explaining why and how it is possible to link them. 3 Linking FrameNet to VerbNet and PropBank In general, predicates belonging to the same FN frame have a coherent syntactic behavior that is also different from predicates pertaining to other frames (G&J). This finding is consistent with theories of linking that claim that the syntactic behavior of a verb can be predicted from its semantics (Levin, 1993). This insight justifies the attempt to use ILCs instead of the frame feature when classifying FN semantic roles (Giuglea and Moschitti, 2004). The main advantage of using Levin classes comes from the fact that other resources like PB and the VN lexicon contain this kind of information. Thus, we can train an ILC classifier also on the PB corpus, considerably increasing the verb knowledge base at our disposal. Another advantage derives from the syntactic criteria that were applied in defining the Levin’s clusters. As shown later in this article, the syntactic nature of these classes makes them easier to classify than frames when using only syntactic and lexical features. More precisely, Levin’s clusters are formed according to diathesis alternation criteria which are variations in the way verbal arguments are grammatically expressed when a specific semantic phenomenon arises. For example, two different types of diathesis alternations are the following: (a) Middle Alternation [Subject, Agent The butcher] cuts [Direct Object, Patient the meat]. [Subject, Patient The meat] cuts easily. (b) Causative/inchoative Alternation [Subject, Agent Janet] broke [Direct Object, Patient the cup]. [Subject, Patient The cup] broke. In both cases, what is alternating is the grammatical function that the Patient role takes when changing from the transitive use of the verb to the intransitive one. The semantic phenomenon accompanying these types of alternations is the change of focus from the entity performing the action to the theme of the event. 
Levin documented 79 alternations which constitute the building blocks for the verb classes. Although alternations are chosen as the primary means for identifying the classes, additional properties related to subcategorization, morphology and extended meanings of verbs are taken into account as well. Thus, from a syntactic point of view, the verbs in one Levin class have a regular behavior, different from the verbs pertaining to other classes. Also, the classes are semantically coherent and all verbs belonging to one class share the same participant roles. This constraint of having the same semantic roles is further ensured inside the VN lexicon which is constructed based on a more refined version of the Levin’s classification, called Intersective Levin classes (ILCs) (Dang et al., 1998). The lexicon provides a regular association between the syntactic and semantic properties of each of the described classes. It also provides information about the syntactic frames (alternations) in which the verbs participate and the set of possible semantic roles. One corpus associated with the VN lexicon is PB. The annotation scheme of PB ensures that the verbs belonging to the same Levin class share similarly labeled arguments. Inside one ILC, to one argument corresponds one semantic role numbered sequentially from ARG0 to ARG5. The adjunct roles are labeled ARGM. Levin classes were constructed based on regularities exhibited at grammatical level and the resulting clusters were shown to be semantically coherent. As opposed, the FN frames were built on semantic bases, by putting together verbs, nouns and adjectives that evoke the same situations. Although different in conception, the FN verb clusters and VN verb clusters have common properties1: 1. Different syntactic properties between distinct verb clusters (as proven by the experiments in G&J) 2. A shared set of possible semantic roles for all verbs pertaining to the same cluster. Having these insights, we have assigned a correspondent VN class not to each verb predicate but rather to each frame. In doing this we have applied the simplifying assumption that a frame has a 1See section 4.4 for more details 931 unique corresponding Levin class. Thus, we have created a one-to-many mapping between the ILCs and the frames. In order to create a pair ⟨FN frame, VN class⟩, our mapping algorithm checks both the syntactic and semantic consistency by comparing the role frequency distributions on different syntactic positions for the two candidates. The algorithm is described in detail in the next section. 4 Mapping FrameNet frames to VerbNet classes The mapping algorithm consists of three steps: (a) we link the frames and ILCs that have the largest number of verbs in common and we create a set of pairs ⟨FN frame, VN class⟩(see Table 1); (b) we refine the pairs obtained in the previous step based on diathesis alternation criteria, i.e. the verbs pertaining to the FN frame have to undergo the same diathesis alternation that characterize the corresponding VN class (see Table 2) and (c) we manually check the resulting mapping. 4.1 The mapping algorithm Given a frame, F, we choose as candidate for the mapping the ILC, C, that has the largest number of verbs in common with it (see Table 1, line (I)). If the number is greater or equal than three we form a pair ⟨F, C⟩that will be tested in the second step of the algorithm. Only the frames that have more than 3 verb lexical units are candidates for this step (frames with less than 3 members cannot pass condition (II)). 
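To make the candidate-selection step concrete, the sketch below transcribes conditions (I) and (II) directly (the pseudo-code itself is given in Table 1 below); representing frames and classes simply as sets of verb lemmas is an assumption made only for this illustration.

```python
def compute_candidate_pairs(fn_frames, vn_classes, min_overlap=3):
    """Step (a) of the mapping: for each FrameNet frame F, find the VerbNet
    class C* sharing the most verbs with F (condition I) and keep the pair
    <F, C*> only if they have at least min_overlap verbs in common (condition II).
    fn_frames and vn_classes map a frame/class name to its set of verb lemmas."""
    pairs = []
    for frame, frame_verbs in fn_frames.items():
        best_class, best_overlap = None, 0
        for vn_class, class_verbs in vn_classes.items():
            overlap = len(frame_verbs & class_verbs)      # (I)  |F ∩ C|
            if overlap > best_overlap:
                best_class, best_overlap = vn_class, overlap
        if best_overlap >= min_overlap:                   # (II) |F ∩ C*| >= 3
            pairs.append((frame, best_class))
    return pairs
```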
This excludes a number of 60 frames that will be subsequently manually mapped. In order to assign a VN class to a frame, we have to verify that the verbs belonging to the FN frame participate in the same diathesis alternation criteria used to define the VN class. Thus, the pairs ⟨F, C⟩formed in step 1 of the mapping algorithm have to undergo a validation step that verifies the similarity between the enclosed FN frame and VN class. This validation process has several sub-steps: First, we make use of the property (2) of the Levin classes and FN frames presented in the previous section. According to this property, all verbs pertaining to one frame or ILC have the same participant roles. Thus, a first test of compatibility between a frame and a Levin class is that they share the same participant roles. As FN is annotated with frame-specific semantic roles, we manually mapped these roles into the VN set of theINPUT V N = {C|C is a V erbNet class} V N Class C = {v|c is a verb of C} FN = {F|F is a FrameNet frame} FN frame F = {v|v is a verb of F} OUTPUT Pairs = {⟨F, C⟩|F ∈FN, C ∈V N : F maps to C } COMPUTE PAIRS: Let Pairs = ∅ for each F ∈FN (I) compute C∗= arg maxC∈V N |F ∩C| (II) if |F ∩C∗| ≥3 then Pairs = Pairs ∪⟨F, C∗⟩ Table 1: Linking FrameNet frames and VerbNet classes. TR = {θi : θi is the i −th theta role of VerbNet } for each ⟨F, C⟩∈Pairs −→ A F = ⟨o1, .., on⟩, oi = #⟨θi, F, pos =adjacent⟩ −→ D F = ⟨o1, .., on⟩, oi = #⟨θi, F, pos =distant⟩ −→ A C = ⟨o1, .., on⟩, oi = #⟨θi, C, pos =adjacent⟩ −→ D C = ⟨o1, .., on⟩, oi = #⟨θi, C, pos =distant⟩ ScoreF,C = 2 3 × −→ A F ·−→ A C ¯¯¯ ¯¯¯−→ A F ¯¯¯ ¯¯¯× ¯¯¯ ¯¯¯−→ A C¯¯¯ ¯¯¯ + 1 3 × −→ D F ·−→ D C ¯¯¯ ¯¯¯−→ D F ¯¯¯ ¯¯¯× ¯¯¯ ¯¯¯−→ D C¯¯¯ ¯¯¯ Table 2: Mapping algorithm - refining step. matic roles. Given a frame, we assigned thematic roles to all frame elements that are associated with verbal predicates. For example the Speaker, Addressee, Message and Topic roles from the Telling frame were respectively mapped into the Agent, Recipient, Theme and Topic theta roles. Second, we build a frequency distribution of VN thematic roles on different syntactic positions. Based on our observation and previous studies (Merlo and Stevenson, 2001), we assume that each ILC has a distinct frequency distribution of roles on different grammatical slots. As we do not have matching grammatical functions in FN and VN, we approximate that subjects and direct objects are more likely to appear on positions adjacent to the predicate, while indirect objects appear on more distant positions. The same intuition is successfully used by G&J to design the Position feature. For each thematic role θi we acquired from VN and FN data the frequencies with which θi appears on an adjacent A or distant D positions in a given frame or VN class (i.e. #⟨θi, class, position⟩). Therefore, for each frame and class, we obtain two vectors with thematic role frequencies corresponding respectively to the adjacent and distant positions (see Table 2). We compute a score for each 932 Score No. of Frames Not mapped Correct Overall Correct [0,0.5] 118 48.3% 82.5% (0.5,0.75] 69 0 84% (0.75,1] 72 0 100% 89.6% Table 3: Results of the mapping algorithm. pair ⟨F, C⟩using the normalized scalar product. The core arguments, which tend to occupy adjacent positions, show a minor syntactic variability and are more reliable than adjunct roles. To account for this in the overall score, we multiply the adjacent and the distant scores by 2/3 and 1/3, respectively. 
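Written out in conventional notation, the score defined in Table 2 is a weighted sum of two normalised scalar products (cosines), one over the adjacent-position vectors and one over the distant-position vectors:

\[
\mathit{Score}_{F,C} \;=\; \frac{2}{3}\cdot\frac{\vec{A}_F\cdot\vec{A}_C}{\lVert\vec{A}_F\rVert\,\lVert\vec{A}_C\rVert} \;+\; \frac{1}{3}\cdot\frac{\vec{D}_F\cdot\vec{D}_C}{\lVert\vec{D}_F\rVert\,\lVert\vec{D}_C\rVert}
\]

where \(\vec{A}_F\) and \(\vec{A}_C\) (respectively \(\vec{D}_F\) and \(\vec{D}_C\)) are the thematic-role frequency vectors observed on adjacent (respectively distant) positions for the frame F and the class C.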
This limits the impact of adjunct roles like Temporal and Location. The above frequency vectors are computed for FN directly from the corpus of predicate-argument structure examples associated with each frame. The examples associated with the VN lexicon are extracted from the PB corpus. In order to do this we apply a preprocessing step in which each label Arg0..5 is replaced with its corresponding thematic role given the ILC of the predicate. We assign the same roles to the adjuncts all over PB as they are general for all verb classes. The only exception is ARGM-DIR that can correspond to Source, Goal or Path. We assign different roles to this adjunct based on the prepositions. We ignore some adjuncts like ARGM-ADV or ARGM-DIS because they cannot bear a thematic role. 4.2 Mapping Results We found that only 133 VN classes have correspondents among FN frames. Moreover, from the frames mapped with an automatic score smaller than 0.5 almost a half did not match any of the existing VN classes2. A summary of the results is depicted in Table 3. The first column contains the automatic score provided by the mapping algorithm when comparing frames with ILCs. The second column contains the number of frames for each score interval. The third column contains the percentage of frames that did not have a corresponding VN class and finally the fourth and fifth columns contain the accuracy of the mapping algorithm for each interval score and for the whole task, respectively. We mention that there are 3,672 distinct verb senses in PB and 2,351 distinct verb senses in 2The automatic mapping is improved by manually assigning the FN frames of the pairs that receive a score lower than 0.5. FN. Only 501 verb senses are in common between the two corpora which means 13.64% of PB and 21.31% of FN. Thus, by training an ILC classifier on both PB and FN we extend the number of available verb senses to 5,522. 4.3 Discussion In the literature, other studies compared the Levin classes with the FN frames, e.g. (Baker and Ruppenhofer, 2002; Giuglea and Moschitti, 2004; Shi and Mihalcea, 2005). Their findings suggest that although the two set of clusters are roughly equivalent there are also several types of mismatches: 1. Levin classes that are narrower than the corresponding frames, 2. Levin classes that are broader that the corresponding frames and 3. Overlapping groups. For our task, point 2 does not pose a problem. Points 1 and 3 however suggest that there are cases in which to one FN frame corresponds more than one Levin class. By investigating such cases, we noted that the mapping algorithm consistently assigns scores below 75% to cases that match problem 1 (two Levin classes inside one frame) and below 50% to cases that match problem 3 (more than two Levin classes inside one frame). Thus, to increase the accuracy of our results, a first step should be to assign independently an ILC to each of the verbs pertaining to frames with score lower than 0.75%. Nevertheless the current results are encouraging as they show that the algorithm is achieving its purpose by successfully detecting syntactic incoherences that can be subsequently corrected manually. Also, in the next section we will show that our current mapping achieves very good results, giving evidence for the effectiveness of the Levin class feature. 5 Experiments In the previous sections we have presented the algorithm for annotating the verb predicates of FrameNet (FN) with Intersective Levin classes (ILCs). 
In order to show the effectiveness of this annotation and of the ILCs in general we have performed several experiments. First, we trained (1) an ILC multiclassifier from FN, (2) an ILC multiclassifier from PB and (3) a 933 Run 51.3.2 Cooking 45.3 Characterize 29.2 Other_cos 45.4 Say 37.7 Correspond 36.1 Multiclassifier PB #Train Instances PB #Test Instances 262 5 6 5 2,945 134 2,207 149 9,707 608 259 20 52,172 2,742 PB Results 75 33.33 96.3 97.24 100 88.89 92.96 FN #Train Instances FN #Test Instances 5,381 1,343 138 35 765 40 721 184 1,860 1,343 557 111 46,734 11,650 FN Results 96.36 72.73 95.73 92.43 94.43 78.23 92.63 Table 4: F1s of some individual ILC classifiers and the overall multiclassifier accuracy (180 classes on PB and 133 on FN). Body_part Crime Degree Agent Multiclassifier FN #Train Instances FN #Test Instances 1,511 356 39 5 765 187 6,441 1,643 102,724 25,615 LF+Gold Frame 90.91 88.89 70.51 93.87 90.8 LF+Gold ILC 90.80 88.89 71.52 92.01 88.23 LF+Automatic Frame 84.87 88.89 70.10 87.73 85.64 LF+Automatic ILC 85.08 88.89 69.62 87.74 84.45 LF 79.76 75.00 64.17 80.82 80.99 Table 5: F1s of some individual FN role classifiers and the overall multiclassifier accuracy (454 roles). frame multiclassifier from FN. We compared the results obtained when trying to classify the VN class with the results obtained when classifying frame. We show that ILCs are easier to detect than FN frames. Our second set of experiments regards the automatic labeling of FN semantic roles on FN corpus when using as features: gold frame, gold ILC, automatically detected frame and automatically detected ILC. We show that in all situations in which the VN class feature is used, the accuracy loss, compared to the usage of the frame feature, is negligible. This suggests that the ILC can successfully replace the frame feature for the task of semantic role labeling. Another set of experiments regards the generalization property of the ILC. We show the impact of this feature when very few training data is available and its evolution when adding more and more training examples. We again perform the experiments for: gold frame, gold ILC, automatically detected frame and automatically detected ILC. Finally, we simulate the difficulty of free text by annotating PB with FN semantic roles. We used PB because it covers a different set of verbal predicates and also because it is very different from FN at the level of vocabulary and sometimes even syntax. These characteristics make PB a difficult testbed for the semantic role models trained on FN. In the following section we present the results obtained for each of the experiments mentioned above. 5.1 Experimental setup The corpora available for the experiments were PB and FN. PB contains about 54,900 predicates and gold parse trees. We used sections from 02 to 22 (52,172 predicates) to train the ILC classifiers and Section 23 (2,742 predicates) for testing purposes. The number of ILCs is 180 in PB and 133 on FN, i.e. the classes that we were able to map. For the experiments on FN corpus, we extracted 58,384 sentences from the 319 frames that contain at least one verb annotation. There are 128,339 argument instances of 454 semantic roles. In our evaluation we use only verbal predicates. Moreover, as there is no fixed split between training and testing, we randomly selected 20% of sentences for testing and 80% for training. The sentences were processed using Charniak’s parser (Charniak, 2000) to generate parse trees automatically. 
The classification models were implemented by means of the SVM-light-TK software available at http://ai-nlp.info.uniroma2.it/moschitti which encodes tree kernels in the SVM-light software (Joachims, 1999). We used the default parameters. The classification performance was evaluated using the F1 measure for the individual role and ILC classifiers and the accuracy for the multiclassifiers. 934 5.2 Automatic VerbNet class vs. automatic FrameNet frame detection In these experiments, we classify ILCs on PB and frames on FN. For the training stage we use SVMs with Tree Kernels. The main idea of tree kernels is the modeling of a KT (T1,T2) function which computes the number of common substructures between two trees T1 and T2. Thus, we can train SVMs with structures drawn directly from the syntactic parse tree of the sentence. The kernel that we employed in our experiments is based on the SCF structure devised in (Moschitti, 2004). We slightly modified SCF by adding the headwords of the arguments, useful for representing the selectional preferences (more details are given in (Giuglea and Moschitti, 2006). For frame detection on FN, we trained our classifier on 46,734 training instances and tested on 11,650 testing instances, obtaining an accuracy of 91.11%. For ILC detection the results are depicted in Table 4. The first six columns report the F1 measure of some verb class classifiers whereas the last column shows the global multiclassifier accuracy. We note that ILC detection is more accurate than the frame detection on both FN and PB. Additionally, the ILC results on PB are similar with those obtained for the ILCs on FN. This suggests that the training corpus does not have a major influence. Also, the SCF-based tree kernel seems to be robust in what concerns the quality of the parse trees. The performance decay is very small on FN that uses automatic parse trees with respect to PB that contains gold parse trees. 5.3 Automatic semantic role labeling on FrameNet In the experiments involving semantic role labeling, we used SVMs with polynomial kernels. We adopted the standard features developed for semantic role detection by Gildea and Jurafsky (see Section 2). Also, we considered some of the features designed by (Pradhan et al., 2005): First and Last Word/POS in Constituent, Subcategorization, Head Word of Prepositional Phrases and the Syntactic Frame feature from (Xue and Palmer, 2004). For the rest of the paper, we will refer to these features as being literature features (LF). The results obtained when using the literature features alone or in conjunction with the gold frame feature, gold ILC, automatically detected frame feature and automatically detected ILC are depicted in Table 5. 30 40 50 60 70 80 90 10 20 30 40 50 60 70 80 90 100 % Training Data Accuracy LF+ILC LF LF+Automatic ILC Trained on PB LF+Automatic ILC Trained on FN Figure 1: Semantic role learning curve. The first four columns report the F1 measure of some role classifiers whereas the last column shows the global multiclassifier accuracy. The first row contains the number of training and testing instances and each of the other rows contains the performance obtained for different feature combinations. The results are reported for the labeling task as the argument-boundary detection task is not affected by the frame-like features (G&J). We note that automatic frame produces an accuracy very close to the one obtained with automatic ILC suggesting that this is a very good candidate for replacing the frame feature. 
Also, both automatic features are very effective and they decrease the error rate by 20%. To test the impact of ILC on SRL with different amount of training data, we additionally draw the learning curves with respect to different features: LF, LF+ (gold) ILC, LF+automatic ILC trained on PB and LF+automatic ILC trained on FN. As can be noted, the automatic ILC information provided by the ILC classifiers (trained on FN or PB) performs almost as good as the gold ILC. 5.4 Annotating PB with FN semantic roles To show that our approach can be suitable for semantic role free-text annotation, we have automatically classified PB sentences3 with the FN semantic-role classifiers. In order to measure the quality of the annotation, we randomly selected 100 sentences and manually verified them. We measured the performance obtained with and without the automatic ILC feature. The sentences contained 189 arguments from which 35 were incorrect when ILC was used compared to 72 incorrect in the absence of this feature, i.e. an accuracy of 81% with ILC versus 62% without it. This demonstrates the importance of the ILC feature 3The results reported are only for role classification. 935 outside the scope of FN where the frame feature is not available. 6 Conclusions In this paper we have shown that the ILC feature can successfully replace the FN frame feature. By doing that we could interconnect FN to VN and PB obtaining better verb coverage and a more robust semantic parser. Our good results show that we have defined an effective framework which is a promising step toward the design of more robust semantic parsers. In the future, we intend to measure the effectiveness of our system by testing FN SRL on a larger portion of PB or on other corpora containing a larger verb set. References Collin Baker and Josef Ruppenhofer. 2002. Framenets frames vs. levins verb classes. In 28th Annual Meeting of the Berkeley Linguistics Society. Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. In Proceedings of CoNLL-2005. Eugene Charniak. 2000. A maximum-entropyinspired parser. In Proceedings of NACL00, Seattle, Washington. Hoa Trang Dang, Karin Kipper, Martha Palmer, and Joseph Rosenzweig. 1998. Investigating regular sense extensions based on intersective levin classes. In Coling-ACL98. Charles J. Fillmore. 1968. The case for case. In Universals in Linguistic Theory. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistic. Ana-Maria Giuglea and Alessandro Moschitti. 2004. Knowledge discovering using FrameNet, VerbNet and PropBank. In Proceedings of Workshop on Ontology and Knowledge Discovering at ECML 2004, Pisa, Italy. Ana-Maria Giuglea and Alessandro Moschitti. 2006. Shallow semantic parsing based on FrameNet, VerbNet and PropBank. In Proceedings of the 17th European Conference on Artificial Intelligence, Riva del Garda, Italy. T. Joachims. 1999. Making large-scale SVM learning practical. In B. Sch¨olkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning. Christopher Johnson, Miriam Petruck, Collin Baker, Michael Ellsworth, Josef Ruppenhofer, and Charles Fillmore. 2003. Framenet: Theory and practice. Berkeley, California. Paul Kingsbury and Martha Palmer. 2002. From Treebank to PropBank. In LREC02). Karin Kipper, Hoa Trang Dang, and Martha Palmer. 2000. Class-based construction of a verb lexicon. In AAAI00. Beth Levin. 1993. 
English Verb Classes and Alternations A Preliminary Investigation. Chicago: University of Chicago Press. Kenneth Litkowski. 2004. Senseval-3 task automatic labeling of semantic roles. In Senseval-3. Paola Merlo and Suzanne Stevenson. 2001. Automatic verb classification based on statistical distribution of argument structure. CL Journal. Alessandro Moschitti. 2004. A study on convolution kernels for shallow semantic parsing. In ACL04, Barcelona, Spain. Sameer Pradhan, Kadri Hacioglu, Valeri Krugler, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2005. Support vector learning for semantic argument classification. Machine Learning Journal. Lei Shi and Rada Mihalcea. 2005. Putting pieces together: Combining FrameNet, VerbNet and WordNet for robust semantic parsing. In Proceedings of Cicling 2005, Mexico. Cynthia A. Thompson, Roger Levy, and Christopher Manning. 2003. A generative model for semantic role labeling. In 14th European Conference on Machine Learning. V. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer. Nianwen Xue and Martha Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of EMNLP 2004, Barcelona, Spain. Association for Computational Linguistics. 936 | 2006 | 117 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 937–944, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Multilingual Legal Terminology on the Jibiki Platform: The LexALP Project Gilles S´erasset, Francis Brunet-Manquat Universit´e Joseph Fourier, Laboratoire CLIPS-IMAG, BP 53 38041 Grenoble Cedex 9 - France, [email protected] [email protected] Elena Chiocchetti EURAC Research Viale Druso 1 39100 Bozen/Bolzano - Italy [email protected] Abstract This paper presents the particular use of “Jibiki” (Papillon’s web server development platform) for the LexALP1 project. LexALP’s goal is to harmonise the terminology on spatial planning and sustainable development used within the Alpine Convention2, so that the member states are able to cooperate and communicate efficiently in the four official languages (French, German, Italian and Slovene). To this purpose, LexALP uses the Jibiki platform to build a term bank for the contrastive analysis of the specialised terminology used in six different national legal systems and four different languages. In this paper we present how a generic platform like Jibiki can cope with a new kind of dictionary. 1 Introduction One of the most time-consuming hindrances to supranational law drafting and convention negotiation is the lack of understanding among negotiators and technical writers. This is not only due to the fact that different languages are involved, but mainly to the inherent differences in the legal systems. Countries that speak the same language (like France and part of Switzerland) may use the same word to represent different legal concepts3, 1Legal Language Harmonisation System for Environment and Spatial Planning within the Multilingual Alps 2http://www.convenzionedellealpi.org 3E.g.: In the German-speaking province of Bolzano Italy the Landeshauptmann is the president of the provincial council, with much more limited competence that the Austrian Landeshauptmann, who is head of one of the states (Bundesland) that are part of the Austrian federation. as defined in their respective legal traditions. The same concept may be referred to in different ways according to the legal system4. Also, terms that may superficially seem to be translations of each other can represent different legal notions5. In order to concretely address these problems, several institutions representing translators, terminologists, legal experts and computational linguists joined in the LexALP project, co-funded by EU’s INTERREG IIIb Alpine Space programme. The objective of the project is to compare the specialised terminology of six different national legal systems (Austria, France, Germany, Italy, Switzerland and Slovenia) and three supranational systems (EU law, international law and the particular framework of the Alpine Convention) in the four official languages of the Al-pine Convention, which is an international framework agreement signed by all countries of the Alpine arc and the EU. This contrastive analysis serves as a basis for the work of a group of experts (the Harmonising Group) who will determine translation equivalents in French, Italian, German and Slovene (one-toone correspondence) in the fields of spatial planning and sustainable development for use within the Convention, thus optimising the understanding between the Alpine states at supranational level. The tools that are to be developed for these objectives comprise a corpus bank and a term bank. 
The corpus bank is developed by adapting the bistro system (Streiter et al., 2006; Streiter et al., 2004). The term bank is based on the Jibiki plat4See for instance the European Union use of chien drogue while French legislation calls them chien renifleur. 5For example, in Italy an elezione suppletiva is commonly held whenever an elected deputy or senator either resigns or dies. In Germany in such cases the first non-elected candidate is called to parliament. Ersatzwahlen are a rare phenomenon, foreseen in some very specific cases. 937 form (Mangeot et al., 2003; S´erasset, 2004). This paper details the way the Jibiki platform is used in order to cope with a new dictionary structure. The platform provides dictionary access and edition services without any new and specific development. After a brief overview of the Jibiki platform, we describe the choices made by the LexALP team for the structure and organisation of their term bank. Then, we show how this structure is described using Jibiki metadata description languages. Finally, we give some details on the resulting LexALP Information System. 2 Jibiki, The Papillon Dictionary Development Platform 2.1 Overview The Jibiki platform has been designed to support the collaborative development of multilingual dictionaries. This platform is used as the basis of the Papillon project web site6. This platform offers several services to its users: • access to many different dictionaries from a single easy to use query form, • advance search for particular dictionary entries through an advanced search form, • creation and edition of dictionary entries. What makes the Jibiki platform quite unique is the fact that it provides these services regardless of the dictionary structure. In other words it may be used by any dictionary builder to give access and collaboratively edit any dictionary, provided that the resulting dictionary will be freely accessible online. 2.2 Jibiki Platform Architecture The Jibiki platform is a framework used to set up a web server dedicated to the collaborative development of multilingual dictionaries. All services provided by the platform are organised as classical 3-tier architectures with a presentation layer (in charge of the interface with users), a business layer (which provides the services per se) and a data layer (in charge of the storage of persistent data). In order to adapt the Jibiki platform to a new dictionary, the dictionary manager does not have 6http://www.papillon-dictionary.org/ Papillon Application (java + enhydra presentation layer serveur HTTP (apache) Relational database (PostgreSQL) XML-UTF8 HTML CSS javascript + CGI WML xhtml chtml business layer data layer J D B C Lexie axie Dico Historique Utilisateur ... Data validation Mailing list archive Users/Groups Contributions management Volume Information sharing requests management Information Message Figure 1: The Jibiki platform general architecture to write specific java code nor specific dynamic web pages. The only necessary information used by the platform consists in: • a description of the dictionary volumes and their relations, • a mapping between the envisaged dictionary structure and a simple hypothetical dictionary structure (called CDM)7, • the definition of the XML structure of each envisaged dictionary volume by way of XML schemas, • the development of a specific edition interface as a standard xhtml form (that can be adapted from an automatically generated draft). 
3 The LexALP Terminology Structure 3.1 Overview The objective of the LexALP project is to compare the specialised terminology of six different national legal systems and three supranational systems in four different languages, and to harmonise it, thus optimising communication between the Alpine states at supranational level. To achieve this objective, the terminology of the Alpine Convention is described and compared to the equivalent terms used in national legislation. The resulting terminology entries feed a specific term bank that will support the harmonisation work. As the project deals with legal terms, which refer to concepts that are proper of the considered national law or international convention, equivalence problems are the norm, given that concepts are not “stable” between the different national legislations. Standard terminology techniques for other fields can not be applied to the field of law, where the standardisation approach (Felber, 1987; 7This mapping is sufficient for simple dictionary access 938 Felber, 1994) is not applicable. For this, we chose to use “acceptions” as they are defined in the Papillon dictionary (S´erasset, 1994) to represent the equivalence links between concepts of the different legal systems (Arntz, 1993). Italian Slovene German French inneralpiner Verkehr znotrajalpski promet transport intra-alpin circulation intra-alpine trafic intra-alpin traffico intraalpino trasporto intraalpino Figure 2: An Alpine Convention concept in four languages The example given in figure 2 shows a concept defined in the Alpine Convention. This concept has the same definition in the four languages of the Alpine Convention but is expressed by different denominations. The Alpine Convention also uses the terms “circulation intra-alpine” or “transport intra-alpin” which are identified as synonyms by the terminologist. This illustrates the first goal of the LexALP project. In different texts, the same concept may be realised by different terms in the same language. This may lead to inefficient communication. Hence, a single term has to be determined as part of a harmonised quadruplet of translation equivalents. The other denominations will be represented in the term bank as non-harmonised synonyms in order to direct drafting and translating within the Alpine Convention towards a more clear and consistent terminology use for interlingual and supranational communication. In this example, the lexicographers and jurists did not identify any existing concept in the different national laws that could be considered close enough to the concept analysed. This is coherent with the minutes from the French National Assembly which clearly states that the term “trafic intraalpin” (among others) should be clarified by a declaration to be added to the Alpine Convention. Figure 3 shows an analogous quadrilingual example where the Alpine Convention concept may be related to a legal term defined in the French laws. In this example the French term is distinguished from the Alpine Convention terms, because these concepts belong to different legal sysItalian Slovene German French principio di precauzione Vorsorgeprinzip nacelo preventive principe de précaution principe de précaution Figure 3: A quadrilingual term extracted from the Alpine Convention with reference to its equivalent at French national level tems (and are not identically defined in them). Hence, the terminologists created distinct acceptions, one for each concept. These acceptions are related by a translation link. 
This illustrates the second goal of the project, which is to help with the fine comprehension of the Alpine Convention and with the detailed knowledge necessary to evaluate the implementation and implementability of the convention in the different legal systems. As a by-product of the project, one can see that there is an indirect relation between concepts from different national legal systems (by way of their respective relation to the concepts of the Alpine Convention). However, establishing these indirect relations is not one of the main objectives of the LexALP project and would require more direct contrastive analysis. 3.2 Macro- and Micro- Structures The LexALP term bank consists in 5 volumes (for French, German, Italian, Slovene and English) containing all term descriptions (grammatical information, definition, contexts etc.). The translation links are established through a central acception volume. Figure 2 and 3 show examples of terms extracted from the Alpine Convention, synonymy links in the French and Italian volumes, as well as inter-lingual relations by way of acceptions. All language volumes share the same microstructure. This structure is stored in XML. Figure 4 shows the xml structure of the French term “trafic intra-alpin”, as defined in the Alpine Convention. The term entry is associated to a unique identifier used to establish relations between volume entries. Each term entry belongs to one (and only one) legal system. The example term belongs to the Alpine Convention legal 939 <entry id="fra.trafic_intra-alpin.1010743.e" lang="fra" legalSystem="AC" process_status="FINALISED" status="HARMONISED"> <term>trafic intra-alpin</term> <grammar>n.m.</grammar> <domain>Transport</domain> <usage frequency="common" geographical-code="INT" technical="false"/> <relatedTerm isHarmonised="false" relationToTerm="Synonym" termref=""> transport intra-alpin </relatedTerm> <relatedTerm isHarmonised="false" relationToTerm="Synonym" termref=""> circulation intra-alpine </relatedTerm> <definition> [T]rafic constitu´e de trajets ayant leur point de d´epart et/ou d’arriv´ee `a l’int´erieur de l’espace alpin. </definition> <source url="">Prot. Transp., art. 2</source> <context url="http://www..."> Des projets routiers `a grand d´ebit pour le trafic intra-alpin peuvent ˆetre r´ealis´es, si [...]. </context> </entry> Figure 4: XML form of the term “trafic intraalpin”. system8 (code AC). The set of known legal systems includes of course countries belonging to the Alpine Space (Austria, France, Germany, Italy, Slovenia and Switzerland9) but also international treaties or conventions. The entry also bears the information on its status (harmonised or rejected) and its process status (to be processed, provisionally processed or finalised). The term itself and its part of speech is also given, with the general domain to which the term belongs, along with some usage notes. In these usage notes, the attribute geographical-code allows for discrimination between terms defined in national (or federal) laws and terms defined in regional laws as in some of the countries involved legislative power is distributed at different levels. Then the term may be related to other terms. These relations may lead to simple strings of texts (as in the given example) or to autonomous term entries in the dictionary by the use of the termref attribute. The relation itself is specified in the relationToTerm attribute. 
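As a purely illustrative note on how such entries can be consumed programmatically (this is not part of the LexALP or Jibiki code base; the flat output layout and the whitespace normalisation are arbitrary choices), the fields of the Figure 4 entry can be read with standard XML processing:

```python
import xml.etree.ElementTree as ET

def read_term_entry(entry_xml):
    """Extract the main fields of a LexALP term entry such as the one in
    Figure 4. Element and attribute names follow the schema shown above."""
    entry = ET.fromstring(entry_xml)
    definition = entry.findtext("definition", "")
    return {
        "id": entry.get("id"),
        "lang": entry.get("lang"),
        "legal_system": entry.get("legalSystem"),
        "status": entry.get("status"),
        "term": entry.findtext("term"),
        "grammar": entry.findtext("grammar"),
        "domain": entry.findtext("domain"),
        "definition": " ".join(definition.split()),
        "synonyms": [rel.text.strip() for rel in entry.findall("relatedTerm")
                     if rel.get("relationToTerm") == "Synonym" and rel.text],
        "source": entry.findtext("source"),
    }
```

Applied to the entry above, this yields, among other fields, the harmonised term "trafic intra-alpin" together with its two non-harmonised synonyms.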
The current schema allows for the representation of relations 8Strictly speaking, the Alpine Convention does not constitute a legal system per se. 9Also Liechtenstein and Monaco are parties to the Alpine Convention, however, their legal systems are not terminologically processed within LexALP. between concepts (synonymy, hyponymy and hyperonymy), as well as relations between graphies (variant, abbreviation, acronym, etc.). Then, a definition and a context may be given. Both should be extracted from legal texts, which must be identified in the source field. An interlingual acception (or axie) is a place holder for relations. Each interlingual acception may be linked to several term entries in the language volumes through termref elements and to other interlingual acceptions through axieref elements, as illustrated in figure 5. <axie id="axi..1011424.e"> <termref idref="ita.traffico_intraalpino.1010654.e" lang="ita"/> <termref idref="fra.trafic_intra-alpin.1010743.e" lang="fra"/> <termref idref="deu.inneralpiner_Verkehr.1011065.e" lang="deu"/> <termref idref="slo.znotrajalpski_promet.1011132.e" lang="slo"/> <axieref idref=""/> <misc></misc> </axie> Figure 5: XML form of the interlingual acception illustated in figure 2. 4 LexALP Information System 4.1 Overview Building such a term bank can only be envisaged as a collaborative work involving terminologists, translators and legal experts from all the involved countries. Hence, the LexALP consortium has set up a centralised information system that is used to gather all textual and terminological data. This information system is organized in two main parts. The first one is dedicated to corpus management. It allows the users to upload legal texts that will serve to bootstrap the terminology work (by way of candidate term extraction) and to let terminologists find occurrences of the term they are working on, in order for them to provide definitions or contexts. The second part is dedicated to terminology work per se. It has been developed with the Jibiki platform described in section 2. In this section, we show the LexALP Information System functionality, along with the metadata required to implement it with Jibiki. 940 4.2 Dictionary Browsing The first main service consists in browsing the currently developed dictionary. It consists in two different query interfaces (see figures 6 and 7) and a unique result presentation interface (see figure 10). Figure 6: Simple search interface present on all pages of the LexALP Information System <dictionary-metadata [...] d:category="multilingual" d:fullname="LexALP multilingual Term Base" d:name="LexALP" d:owner="LexALP consortium" d:type="pivot"> <languages> <source-language d:lang="deu"/> <source-language d:lang="fra"/> <target-language d:lang="deu"/> <target-language d:lang="fra"/> [...] </languages> [...] <volumes> <volume-metadata-ref name="LexALP_fra" source-language="fra" xlink:href="LexALP_fra-metadata.xml"/> <volume-metadata-ref name="LexALP_deu" source-language="deu" xlink:href="LexALP_deu-metadata.xml"/> [...] <volume-metadata-ref name="LexALP_axi" source-language="axi" xlink:href="LexALP_axi-metadata.xml"/> </volumes> <xsl-stylesheet name="LexALP" default="true" xlink:href="LexALP-view.xsl"/> <xsl-stylesheet name="short-list" xlink:href="short-list-view.xsl"/> </dictionary-metadata> Figure 8: Excerpt of the dictionary descriptor In the provided examples, the user of the system specifies an entry (a term), or part of it, and a language in which the search is to be done. 
The expected behaviour may only be achieved if : • the system knows in which volume the search is to be performed, • the system knows where, in the volume entry, the headword is to be found, • the system is able to produce a presentation for the retrieved XML structures. However, as the Jibiki platform is entirely independent of the underlying dictionary structure <volume-metadata [...] dbname="lexalpfra" dictname="LexALP" name="LexALP_fra" source-language="fra"> <cdm-elements> <cdm-entry-id index="true" xpath="/volume/entry/@id"/> <cdm-headword d:lang="fra" index="true" xpath="/volume/entry/term/text()"/> <cdm-pos d:lang="fra" index="true" xpath="/volume/entry/grammar/text()"/> [...] </cdm-elements> <xmlschema-ref xlink:href="lexalp.xsd"/> <template-entry-ref xlink:href="lexalp_fra-template.xml"/> <template-interface-ref xlink:href="lexalp-interface.xhtml"/> </volume-metadata> Figure 9: Excerpt of a volume descriptor (which makes it highly adaptable), the expected result may only be achieved if additional metadata is added to the system. These pieces of information are to be found in the mandatory dictionary descriptor. It consists in a structure defined in the Dictionary Metadata Language (DML), as set of metadata structures and a specific XML namespace defined in (Mangeot, 2001). Figure 8 gives an excerpt of this descriptor. The metadata first identify the dictionary by giving it a name and a type. In this example the dictionary is a pivot dictionary (DML also defines monolingual and bilingual dictionary types). The descriptor also defines the set of source and target languages. Finally, the dictionary is defined as a set of volumes, each volume being described in another file. As the LexALP dictionary is a pivot dictionary, there should be a volume for the artificial language axi, which is the pivot volume. Figure 9 shows an excerpt of the description of the French volume of the LexALP dictionary. After specifying the name of the dictionary, the descriptor provides a set of cdm-elements. These elements are used to identify standard dictionary elements (that can be found in several dictionaries) in the specific dictionary structure. For instance, the descriptor tells the system that the headword of the dictionary (cdm-headword) is to be found by applying the specified xpath10 to the dictionary structure. With this set of metadata, the system knows that: 10an xpath is a standard way to extract a sub-part of any XML structure 941 Figure 7: Advanced search interface • requests on French should be directed to the LexALP fra volume, • the requested headword will be found in the text of the term element of the volume entry element, Hence, the system can easily perform a request and retrieve the desired XML entries. The only remaining step is to produce a presentation for the user, based on the retrieved entries. This is achieved by way of a xsl11 stylesheet. This stylesheet is specified either on the dictionary level (for common presentations) or on the volume level (for volume specific presentation). In the given example, the dictionary administrator provided two presentations called LexALP (the default one, as shown in figure 10) and short-list, both of them defined in the dictionary descriptor. This mechanism allows for the definition of presentation outputs in xhtml (for online browsing) or for presentation output in pdf (for dictionary export and print). 
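The decoupling provided by the cdm-elements can be illustrated with a toy lookup function. This is not Jibiki code: ElementTree's restricted XPath support means the descriptor paths such as /volume/entry/term/text() are reduced here to relative element names, and a <volume> root containing <entry> elements is assumed.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified rendering of the Figure 9 descriptor:
# each CDM element is reduced to a path relative to an <entry>.
CDM_LEXALP_FRA = {
    "cdm-headword": "term",     # stands for /volume/entry/term/text()
    "cdm-pos": "grammar",       # stands for /volume/entry/grammar/text()
}

def lookup(volume_xml, cdm_elements, headword):
    """Generic headword lookup: the engine knows nothing about the
    volume-specific entry structure and only follows the path that the
    descriptor declares for cdm-headword."""
    volume = ET.fromstring(volume_xml)
    path = cdm_elements["cdm-headword"]
    return [entry for entry in volume.findall("entry")
            if entry.findtext(path, "").strip() == headword]
```

Pointing the same function at another volume only requires that volume's own descriptor entry; no code change is involved, which is essentially the property the platform relies on.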
4.3 Dictionary Edition The second main service provided by the Jibiki platform is to allow terminologists to collaboratively develop the envisaged dictionary. In this sense, Jibiki is quite unique as it federates, on the very same platform the construction and diffusion of a structured dictionary. As before, Jibiki may be used to edit any dictionary. Hence, it needs some metadata information in order to work: • the complete definition of the dictionary entry structures by way of an XML schema, • a template describing an empty entry structure, 11XSL is a standard way to transform an XML structure into another structure (XML or not). Current XML structure Empty XHTML form Instanciate Form Instanciated XHTML form Online edition Network CGI decoding Figure 11: Basic flow chart of the editing service • a xhtml form used to edit a dictionary entry structure (which can be adapted from an automatically generated one). When this information is known, the Jibiki platform provides a specific web page to edit a dictionary entry structure. As shown in figure 11, the XML structure is projected into the given empty XHTML form. This form is served as a standard web page on the client browser. After manual editing, the resulting form is sent back to the Jibiki platform as CGI12 data. The Jibiki platform decodes this data and modifies the edited XML structure accordingly. Then the process iterates as long as necessary. Figure 12 shows an example of such a dynamically created web page. After each update, the resulting XML structure is stored in the dictionary database. However, it is not available to other users until it is marked as finished by the contributor (by clicking on the save button). If the contributor leaves the web page without saving the entry, he will be able to retrieve it and finish his contribution later. 12Common Gateway Interface 942 Figure 10: Query result presentation interface Figure 12: Edition interface of a LexALP French entry 943 At each step of the contribution (after each update) and at each step of dictionary editing (after each save), the previous state is saved and the contributor (or the dictionary administrator) is able to browse the history of changes and to revert the entry to a previous version. 5 Conclusion In this article we give some details on the way the Jibiki platform allows the diffusion and the online editing of a dictionary, regardless of his structure (monolingual, bilingual (directed or not) or multilingual (multi-bilingual or pivot based)). Initially developed to support the editing of the Papillon multilingual dictionary13, the Jibiki platform proved useful for the development of other very different dictionaries. It is currently used for the development of the GDEF (Grand Dictionnaire Estonien-Franc¸ais) project14 an Estonian French bilingual dictionary. This article also shows the use of the platform for the development of a European term bank for legal terms on spatial planning and sustainable development in the LexALP project. Adapting the Jibiki platform to a new dictionary requires the definition of several metadata information, taking the form of several XML files. While not trivial, this metadata definition does not require any competence in computer development. This adaptation may therefore also be done by experimented linguists. Moreover, when the dictionary microstructure needs to evolve, this evolution does not require any programming. Hence the Jibiki platform gives linguists great liberty in their decisions. 
Another positive aspect of Jibiki is that it integrates diffusion and editing services on the same platform. This allows for a tighter collaboration between linguists and users and also allows for the involvement of motivated users to the editing process. The Jibiki platform is freely available for use by any willing team of lexicographer/terminologists, provided that the resulting dictionary data will be freely available for online browsing. In this article, we also presented the choices made by the LexALP consortium to structure a term bank used for the description and harmonisation of legal terms in the domain of spacial plan13http://www.papillon-dictionary.org/ 14http://estfra.ee/ ning and sustainable development of the Alpine Space. In such a domain, classical techniques used in multilingual terminology cannot be used as the term cannot be defined by reference to a stable/shared semantic level (each country having its own set of non-equivalent legal concepts). References Reiner Arntz. 1993. Terminological equivalence and translation. In H. Sonneveld and K. Loening, editors, Terminology. Applications in Interdisciplinary Communication, pages 5–19. Amsterdam et Philadelphia, John Benjamins Publishing Company. Helmut Felber, 1987. Manuel de terminologie. UNESCO, Paris. Helmut Felber. 1994. Terminology research: Its relation to the theory of science. ALFA, 8(7):163–172. Mathieu Mangeot, Gilles S´erasset, and Mathieu Lafourcade. 2003. Construction collaborative d’une base lexicale multilingue, le projet Papillon. TAL, 44(2):151–176. Mathieu Mangeot. 2001. Environnements centralis´es et distribu´es pour lexicographes et lexicologues en contexte multilingue. Th`ese de nouveau doctorat, sp´ecialit´e informatique, Universit´e Joseph Fourier Grenoble I, Septembre. Gilles S´erasset. 1994. Interlingual lexical organisation for multilingual lexical databases in nadia. In Makoto Nagao, editor, COLING-94, volume 1, pages 278–282, August. Gilles S´erasset. 2004. A generic collaborative platform for multilingual lexical database development. In Gilles S´erasset, editor, COLING 2004 Multilingual Linguistic Resources, pages 73–79, Geneva, Switzerland, August 28. COLING. Oliver Streiter, Leonhard Voltmer, Isabella Ties, and Natascia Ralli. 2004. BISTRO, the online platform for terminology management: structuring terminology without entry structures. In The translation of domain specific languages and multilingual terminology, number 3 in Linguistica Antverpiensia New Series. Hoger Instituut voor Vertalers en Tolken, Hogeschool Antwerpen. Oliver Streiter, Leonhard Voltmer, Isabella Ties, Natascia Ralli, and Verena Lyding. 2006. BISTRO: Data structure, term tools and interface. Terminology Science and Research, 16. 944 | 2006 | 118 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 945–952, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Leveraging Reusability: Cost-effective Lexical Acquisition for Large-scale Ontology Translation G. Craig Murray Bonnie J. Dorr Jimmy Lin Institute for Advanced Computer Studies University of Maryland {gcraigm,bdorr,jimmylin}@umd.edu Jan Hajič Pavel Pecina Institute for Formal and Applied Linguistics Charles University {hajic,pecina}@ufal.mff.cuni.cz Abstract Thesauri and ontologies provide important value in facilitating access to digital archives by representing underlying principles of organization. Translation of such resources into multiple languages is an important component for providing multilingual access. However, the specificity of vocabulary terms in most ontologies precludes fully-automated machine translation using general-domain lexical resources. In this paper, we present an efficient process for leveraging human translations when constructing domain-specific lexical resources. We evaluate the effectiveness of this process by producing a probabilistic phrase dictionary and translating a thesaurus of 56,000 concepts used to catalogue a large archive of oral histories. Our experiments demonstrate a cost-effective technique for accurate machine translation of large ontologies. 1 Introduction Multilingual access to digital collections is an important problem in today’s increasingly interconnected world. Although technologies such as cross-language information retrieval and machine translation help humans access information they could not otherwise find or understand, they are often inadequate for highly specific domains. Most digital collections of any significant size use a system of organization that facilitates easy access to collection contents. Generally, the organizing principles are captured in the form of a controlled vocabulary of keyword phrases (descriptors) representing specific concepts. These descriptors are usually arranged in a hierarchic thesaurus or ontology, and are assigned to collection items as a means of providing access (either via searching for keyword phases, browsing the hierarchy, or a combination both). MeSH (Medical Subject Headings) serves as a good example of such an ontology; it is a hierarchicallyarranged collection of controlled vocabulary terms manually assigned to medical abstracts in a number of databases. It provides multilingual access to the contents of these databases, but maintaining translations of such a complex structure is challenging (Nelson, et al, 2004). For the most part, research in multilingual information access focuses on the content of digital repositories themselves, often neglecting significant knowledge that is explicitly encoded in the associated ontologies. However, information systems cannot utilize such ontologies by simply applying off-the-shelf machine translation. General-purpose translation resources provide insufficient coverage of the vocabulary contained within these domain-specific ontologies. This paper tackles the question of how one might efficiently translate a large-scale ontology to facilitate multilingual information access. If we need humans to assist in the translation process, how can we maximize access while minimizing cost? Because human translation is associated with a certain cost, it is preferable not to incur costs of retranslation whenever components of translated text are reused. 
Moreover, when exhaustive human translation is not practical, the most “useful” components should be translated first. Identifying reusable elements and prioritizing their translation based on utility is essential to maximizing effectiveness and reducing cost. 945 We present a process of prioritized translation that balances the issues discussed above. Our work is situated in the context of the MALACH project, an NSF-funded effort to improve multilingual information access to large archives of spoken language (Gustman, et al., 2002). Our process leverages a small set of manuallyacquired English-Czech translations to translate a large ontology of keyword phrases, thereby providing Czech speakers access to 116,000 hours of video testimonies in 32 languages. Starting from an initial out-of-vocabulary (OOV) rate of 85%, we show that a small set of prioritized translations can be elicited from human informants, aligned, decomposed and then recombined to cover 90% of the access value in a complex ontology. Moreover, we demonstrate that prioritization based on hierarchical position and frequency of use facilitates extremely efficient reuse of human input. Evaluations show that our technique is able to boost performance of a simple translation system by 65%. 2 The Problem The USC Shoah Foundation Institute for Visual History and Education manages what is presently the world's largest archive of videotaped oral histories (USC, 2006). The archive contains 116,000 hours of video from the testimonies of over 52,000 survivors, liberators, rescuers and witnesses of the Holocaust. If viewed end to end, the collection amounts to 13 years of continuous video. The Shoah Foundation uses a hierarchically arranged thesaurus of 56,000 keyword phrases representing domain-specific concepts. These are assigned to time-points in the video testimonies as a means of indexing the video content. Although the testimonies in the collection represent 32 different languages, the thesaurus used to catalog them is currently available only in English. Our task was to translate this resource to facilitate multilingual access, with Czech as the first target language. Our first pass at automating thesaurus translation revealed that only 15% of the words in the vocabulary could be found in an available aligned corpus (Čmejrek, et al., 2004). The rest of the vocabulary was not available from general resources. Lexical information for translating these terms had to be acquired from human input. Reliable access to digital archives requires accuracy. Highly accurate human translations incur a cost that is generally proportional to the number of words being translated. However, the keyword phrases in the Shoah Foundation’s archive occur in a Zipfian distribution—a relatively small number of terms provide access to a large portion of the video content. Similarly, a great number of highly specific terms describe only a small fraction of content. Therefore, not every keyword phrase in the thesaurus carries the same value for access to the archive. The hierarchical arrangement of keyword phrases presents another issue: some concepts, while not of great value for access to segments of video, may be important for organizing other concepts and for browsing the hierarchy. These factors must be balanced in developing a cost-effective process that maximizes utility. 3 Our Solution This paper presents a prioritized human-in-theloop approach to translating large-scale ontologies that is fast, efficient, and cost effective. 
Using this approach, we collected 3,000 manual translations of keyword phrases and reused the translated terms to generate a lexicon for automated translation of the rest of the thesaurus. The process begins by prioritizing keyword phrases for manual translation in terms of their value in accessing the collection and the reusability of their component terms. Translations collected from one human informant are then checked and aligned to the original English terms by a second informant. From these alignments we induce a probabilistic English-Czech phrase dictionary. To test the effectiveness of this process we implemented a simple translation system that utilizes the newly generated lexical resources. Section 4 reports on two evaluations of the translation output that quantify the effectiveness of our human-in-the-loop approach. 3.1 Maximizing Value and Reusability To quantify their utility, we defined two values for each keyword phrase in the thesaurus: a thesaurus value, representing the importance of the keyword phrase for providing access to the collection, and a translation value, representing the usefulness of having the keyword phrase translated. These values are not identical, but the second is related to the first. Thesaurus value: Keyword phrases in the Shoah Foundation’s thesaurus are arranged into a poly-hierarchy in which child nodes may have multiple parents. Internal (non-leaf) nodes of the hierarchy are used to organize concepts and support concept browsing. Some internal nodes are also used to index video content. Leaf nodes are 946 very specific and are only used to index video content. Thus, the usefulness of any keyword phrase for providing access to the digital collection is directly related to the concept’s position in the thesaurus hierarchy. A fragment of the hierarchy is shown in Figure 1. The keyword phrase “Auschwitz IIBirkenau (Poland: Death Camp)”, which describes a Nazi death camp, is assigned to 17,555 video segments in the collection. It has broader (parent) terms and narrower (child) terms. Some of the broader and narrower terms are also assigned to segments, but not all. Notably, “German death camps” is not assigned to any video segments. However, “German death camps” has very important narrower terms including “Auschwitz II-Birkenau” and others. From this example, we can see that an internal node is valuable in providing access to its children, even if the keyword phrase itself is not assigned to any segments. The value we assign to any term must reflect this fact. If we were to reduce cost by translating only the nodes assigned to video segments, we would neglect nodes that are crucial for browsing. However, if we value a node by the sum value of all its children, grandchildren, etc., the resulting calculation would bias the top of the hierarchy. Any prioritization based on this method would lead to translation of the top of the hierarchy first. Given limited resources, leaf nodes might never be translated. Support for searching and browsing calls for different approaches to prioritization. To strike a balance between these factors, we calculate a thesaurus value, which represents the importance of each keyword phrase to the thesaurus as a whole. This value is computed as: ( ) ( ) k children h s count h k children i i k k ∑∈ + = ) ( For leaf nodes in our thesaurus, this value is simply the number of video segments to which the concept has been assigned. 
For parent nodes, the thesaurus value is the number of segments (if any) to which the node has been assigned, plus the average of the thesaurus value of any child nodes. This recursive calculation yields a microaveraged value that represents the reachability of segments via downward edge traversals from a given node in the hierarchy. That is, it gives a kind of weighted value for the number of segments described by a given keyword phrase or its narrower-term keyword phrases. For example, in Figure 2 each of the leaf nodes n3, n4, and n5 have values based solely on the number of segments to which they are assigned. Node n1 has value both as an access point to the segments at s2 and as an access point to the keyword phrases at nodes n3 and n4. Other internal nodes, such as n2 have value only in providing access to other nodes/keyword phrases. Working from the bottom of the hierarchy up to the primary node (n0) we can compute the thesaurus value for each node in the hierarchy. In our example, we start with nodes n3 through n5, counting the number of the segments that have been assigned each keyword phrase. Then we move up to nodes n1 and n2. At n1 we count the number of segments s2 to which n1 was assigned and add that count to the average of the thesaurus values for n3, and n4. At n2 we simply average the thesaurus values for n4 and n5. The final values quantify how valuable the translation of any given keyword phrase would be in providing access to video segments. Translation value: After obtaining the thesaurus value for each node, we can compute the translation value for each word in the vocabulary Figure 2. Bottom-up micro-averaging Figure 1. Sample keyword phrase with broader and narrower terms Auschwitz II-Birkenau (Poland : Death Camp) Assigned to 17555 video segments Has as broader term phrases: Cracow (Poland : Voivodship) [ 534 narrower terms] [ 204 segments] German death camps [ 6 narrower terms] [ 0 segments] Has seven narrower term phrases including: Block 25 (Auschwitz II-Birkenau) [leaf node] [ 35 segments] Kanada (Auschwitz II-Birkenau) [leaf node] [ 378 segments] ... disinfection chamber (Auschwitz II-Birkenau) [leaf node] [ 9 segments] primary keyword segments n2 n4 n3 n0 n5 keyword phrases s2 n1 s1 s3 s4 947 as the sum of the thesaurus value for every keyword phrase that contains that word: tw= ∑ Κ ∈ w k kh where Kw={x | phrase x contains w} For example, the word “Auschwitz” occurs in 35 concepts. As a candidate for translation, it carries a large impact, both in terms of the number of keyword phrases that contains this word, and the potential value of those keyword phrases (once they are translated) in providing access to segments in the archive. The end result is a list of vocabulary words and the impact that correct translation of each word would have on the overall value of the translated thesaurus. We elicited human translations of entire keyword phrases rather than individual vocabulary terms. Having humans translate individual words without their surrounding context would have been less efficient. Also, the value any keyword phrase holds for translation is only indirectly related to its own value as a point of access to the collection (i.e., its thesaurus value). Some keyword phrases contain words with high translation value, but the keyword phrase itself has low thesaurus value. Thus, the value gained by translating any given phrase is more accurately estimated by the total value of any untranslated words it contains. 
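To make the two quantities concrete, the sketch below computes the thesaurus values h_k bottom-up and then the translation values t_w. It is our own illustration: `children`, `segments` and `phrase_words` are hypothetical stand-ins for the real thesaurus data (narrower terms, segment counts, and the words of each keyword phrase).

```python
def thesaurus_values(children, segments):
    """h_k = count(s_k) + the average of h over k's children (0 for leaves).
    `children` maps a node to its narrower terms; `segments` maps a node to
    the number of video segments it indexes."""
    memo = {}
    def h(k):
        if k not in memo:
            kids = children.get(k, [])
            avg = sum(h(c) for c in kids) / len(kids) if kids else 0.0
            memo[k] = segments.get(k, 0) + avg
        return memo[k]
    for node in set(children) | set(segments):
        h(node)
    return memo

def translation_values(phrase_words, h):
    """t_w = the sum of h_k over every keyword phrase k that contains w."""
    t = {}
    for k, words in phrase_words.items():
        for w in set(words):
            t[w] = t.get(w, 0.0) + h.get(k, 0.0)
    return t
```

Because the thesaurus is a poly-hierarchy, the memoisation ensures each node is evaluated only once even when it has several parents.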
Therefore, we prioritized the order of keyword phrase translations based on the translation value of the untranslated words in each keyword phrase. Our next step was to iterate through the thesaurus keyword phrases, prioritizing their translation based on the assumption that any words contained in a keyword phrase of higher priority would already have been translated. Starting from the assumption that the entire thesaurus is untranslated, we select the one keyword phrase that contains the most valuable un-translated words—we simply add up the translation value of all the untranslated words in each keyword phrase, and select the keyword phrase with the highest value. We add this keyword phrase to a prioritized list of items to be manually translated and we remove it from the list of untranslated phrases. We update our vocabulary list and, assuming translations of all the words in the prior keyword phrase to now be translated (neglecting issues such as morphology), we again select the keyword phrase that contains the most valuable untranslated words. We iterate the process until all vocabulary terms have been included at least one keyword phrases on the prioritized list. Ultimately we end up with an ordered list of the keyword phrases that should be translated to cover the entire vocabulary, with the most important words being covered first. A few words about additional characteristics of this approach: note that it is greedy and biased toward longer keyword phrases. As a result, some words may be translated more than once because they appear in more than one keyword phrase with high translation value. This side effect is actually desirable. To build an accurate translation dictionary, it is helpful to have more than one translation of frequently occuring words, especially for morphologically rich languages such as Czech. Our technique makes the operational assumption that translations of a word gathered in one context can be reused in another context. Obviously this is not always true, but contexts of use are relatively stable in controlled vocabularies. Our evaluations address the acceptability of this operational assumption and demonstrate that the technique yields acceptable translations. Following this process model, the most important elements of the thesaurus will be translated first, and the most important vocabulary terms will quickly become available for automated translation of keyword phrases with high thesaurus value that do not make it onto the prioritized list for manual translation (i.e., low translation value). The overall access value of the thesaurus rises very quickly after initial translations. With each subsequent human translation of keyword phrases on the prioritized list, we gain tremendous value in terms of providing non-English access to the collection of video testimonies. Figure 3 shows this rate of gain. It can be seen that prioritization based on translation value gives a much higher yield of total access than prioritization based on thesaurus value. Figure 3. Gain rate of access value based on number of human translations Gain rate of prioritized translation schemes 0% 20% 40% 60% 80% 100% 0 500 1000 1500 2000 number of translations percent of total access value priority by thesaurus value priority by translation value 948 3.2 Alignment and Decomposition Following the prioritization scheme above, we obtained professional translations for the top 3000 English keyword phrases. 
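(For concreteness, the greedy selection loop of Section 3.1 can be written as follows; the sketch reuses the hypothetical `phrase_words` and t_w values from above and is our own illustration rather than the authors' implementation.)

```python
def prioritize(phrase_words, t):
    """Repeatedly pick the keyword phrase whose still-untranslated words have
    the largest total translation value t_w, then treat those words as
    translated.  `phrase_words` maps a phrase id to the set of its words."""
    untranslated = set().union(*phrase_words.values())
    remaining = dict(phrase_words)
    priority = []
    while untranslated and remaining:
        gains = {k: sum(t.get(w, 0.0) for w in ws & untranslated)
                 for k, ws in remaining.items()}
        best = max(gains, key=gains.get)
        if gains[best] <= 0.0:
            break                     # every remaining word is already covered
        priority.append(best)
        untranslated -= remaining.pop(best)
    return priority
```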
We tokenized these translations and presented them to another bilingual Czech speaker for verification and alignment. This second informant marked each Czech word in a translated keyword phrase with a link to the equivalent English word(s). Multiple links were used to convey the relationship between a single word in one language and a string of words in another. The output of the alignment process was then used to build a probabilistic dictionary of words and phrases. Figure 4. Sample alignment Figure 4 shows an example of an aligned tranlsation. The word “stills” is recorded as a translation for “statické snímky” and “kláštery” is recorded as a translation for “convents and monasteries.” We count the number of occurrences of each alignment in all of the translations and calculate probabilities for each Czech word or phrase given an English word or phrase. For example, in the top 3000 keyword phrases “stills” appears 29 times. It was aligned with “statické snímky” 28 times and only once with “statické záběry”, giving us a translation probability of 28/29=0.9655 for “statické snímky”. Human translation of the 3000 English keyword phrases into Czech took approximately 70 hours, and the alignments took 55 hours. The overall cost of human input (translation and alignment) was less than 1000 €. The projected cost of full translation for the entire thesaurus would have been close to 20000 € and would not have produced any reusable resources. Naturally, costs for building resources in this manner will vary, but in our case the cost savings is approximately twenty fold. 3.3 Machine Translation To demonstrate the effectiveness of our approach, we show that a probabilistic dictionary, induced through the process we just described, facilitates high quality machine translation of the rest of the thesaurus. We evaluated translation quality using a relatively simple translation system. However, more sophisticated systems can draw equal benefit from the same lexical resources. Our translation system implemented a greedy coverage algorithm with a simple back-off strategy. It first scans the English input to find the longest matching substring in our dictionary, and replaces it with the most likely Czech translation. Building on the example above, the system looks up “monasteries and convents stills” in the dictionary, finds no translation, and backs off to “monasteries and convents”, which is translated to “kláštery”. Had this phrase translation not been found, the system would have attempted to find a match for the individual tokens. Failing a match in our dictionary, the system then backs off to the Prague Czech-English Dependency Treebank dictionary, a much larger dictionary with broader scope. If no match is found in either dictionary for the full token, we stem the token and look for matches based on the stem. Finally, tokens whose translations can not be found are simply passed through untranslated. A minimal set of heuristic rules was applied to reordering the Czech tokens but the output is primarily phrase by phrase/word by word translation. Our evaluation scores below will partially reflect the simplicity of our system. Our system is simple by design. Any improvement or degradation to the input of our system has direct influence on the output. Thus, measures of translation accuracy for our system can be directly interpreted as quality measures for the lexical resources used and the process by which they were developed. 
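A minimal version of this greedy longest-match strategy with back-off is sketched below. It is our own illustration: the dictionary objects, the stemmer, and the exact order of the stemming back-off are simplifying assumptions, not the authors' code.

```python
def translate(tokens, phrase_dict, backoff_dict=None, stem=lambda w: w):
    """Greedy longest-match translation.  `phrase_dict` maps an English
    phrase (a tuple of tokens) to its most probable Czech translation;
    `backoff_dict` is an optional broader word dictionary (e.g. from the
    Prague Czech-English Dependency Treebank); `stem` is any stemmer."""
    out, i = [], 0
    while i < len(tokens):
        # Try the longest phrase starting at position i first.
        for j in range(len(tokens), i, -1):
            phrase = tuple(tokens[i:j])
            if phrase in phrase_dict:
                out.append(phrase_dict[phrase])
                i = j
                break
        else:
            word = tokens[i]
            if backoff_dict and word in backoff_dict:
                out.append(backoff_dict[word])
            elif backoff_dict and stem(word) in backoff_dict:
                out.append(backoff_dict[stem(word)])
            else:
                out.append(word)          # pass the token through untranslated
            i += 1
    return " ".join(out)
```

Any reordering heuristics would be applied to the output afterwards; they are omitted here.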
4 Evaluation We performed two different types of evaluation to validate our process. First, we compared our system output to human reference translations using Bleu (Papineni, et al., 2002), a widelyaccepted objective metric for evaluation of machine translations. Second, we showed corrected and uncorrected machine translations to Czech speakers and collected subjective judgments of fluency and accuracy. For evaluation purposes, we selected 418 keyword phrases to be used as target translations. These phrases were selected using a stratified sampling technique so that different levels of thesaurus value would be represented. There was no overlap between these keyword phrases and the 3000 prioritized keyword phrases used to build our lexicon. Prior to machine translation we obtained at least two independent humangenerated reference translations for each of the 418 keyword phrases. monasteries convents and stills ( ) statické kláštery snímky ( ) 949 After collecting the first 2500 prioritized translations, we induced a probabilistic dictionary and generated machine translations of the 418 target keyword phrases. These were then corrected by native Czech speakers, who adjusted word order, word choice, and morphology. We use this set of human-corrected machine translations as a second reference for evaluation. Measuring the difference between our uncorrected machine translations (MT) and the humangenerated reference establishes how accurate our translations are compared to an independently established target. Measuring the difference between our MT and the human-corrected machine translations (corrected MT) establishes how acceptable our translations are. We also measured the difference between corrected MT and the human-generated translations. We take this to be an upper bound on realistic system performance. The results from our objective evaluation are shown in Figure 5. Each set of bars in the graph shows performance after adding a different number of aligned translations into the lexicon (i.e., performance after adding 500, 1000, ..., 3000 aligned translations.) The zero condition is our baseline: translations generated using only the dictionary available in the Prague Czech-English Dependency Treebank. Three different reference sets are shown: human-generated, corrected MT, and a combination of the two. There is a notable jump in Bleu score after the very first translations are added into our probabilistic dictionary. Without any elicitation and alignment we got a baseline score of 0.46 (against the human-generated reference translations). After the aligned terms from only 500 translations were added to our dictionary, our Bleu score rose to 0.66. After aligned terms from 3000 translations were added, we achieved 0.69. Using corrected MT as the reference our Bleu scores improve from 0.48 to 0.79. If human-generated and human-corrected references are both considered to be correct translations, the improvement goes from .49 to .80. Regardless of the reference set, there is a consistent performance improvement as more and more translations are added. We found the same trend using the TER metric on a smaller data set (Murray, et al., 2006). The fact that the Bleu scores continue to rise indicates that our approach is successful in quickly expanding the lexicon with accurate translations. It is important to point out that Bleu scores are not meaningful in an absolute sense; the scores here should be interpreted with respect to each other. 
The trend in scores strongly indicates that our prioritization scheme is effective for generating a high-quality translation lexicon at relatively low cost. To determine an upper bound on machine performance, we compared our corrected MT output to the initial human-generated reference translations, which were collected prior to machine translation. Corrected MT achieved a Bleu score of 0.82 when compared to the human-generated reference translations. This upper bound is the “limit” indicated in Figure 5. To determine the impact of external resources, we removed the Prague Czech-English Dependency Treebank dictionary as a back-off resource and retranslated keyword phrases using only the lexicons induced from our aligned translations. The results of this experiment showed only marginal degradation of the output. Even when as few as 500 aligned translations were used for our dictionary, we still achieved a Bleu score of 0.65 against the human reference translations. This means that even for languages where prior resources are not available our prioritization scheme successfully addresses the OOV problem. In our subjective evaluation, we presented a random sample of our system output to seven Distribution of Subjective Judgment Scores 0% 20% 40% 60% 80% 100% 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 fluency accuracy fluency accuracy MT Corrected MT Judgment scores Percent of scores Bleu Scores After Increasing Translations 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0 500 1000 1500 2000 2500 3000 Number of Translations Bleu-4 corrected human reference both limit Figure 5. Objective evaluation results Figure 6. Subjective evaluation results 950 native Czech speakers and collected judgments of accuracy and fluency using a 5-point Likert scale (1=good, 3=neutral, 5=bad). An overview of the results is presented in Figure 6. Scores are shown for corrected and uncorrected MT. In all cases, the mode is 1 (i.e., good fluency and good accuracy). 59% of the machine translated phrases were rated 2 or better for fluency. 66% were rated 2 or better for accuracy. Only a small percentage of the translations had meanings that were far from the intended meaning. Disfluencies were primarily due to errors in morphology and word order. 5 Related Work Several studies have taken a knowledgeacquisition approach to collecting multilingual word pairs. For example, Sadat et al. (2003) automatically extracted bilingual word pairs from comparable corpora. This approach is based on the simple assumption that if two words are mutual translations, then their most frequent collocates are likely to be mutual translations as well. However, the approach requires large comparable corpora, the collection of which presents non-trivial challenges. Others have made similar mutual-translation assumptions for lexical acquisition (Echizen-ya, et al., 2005; Kaji & Aizono, 1996; Rapp, 1999; Tanaka & Iwasaki, 1996). Most make use of either parallel corpora or a bilingual dictionary for the task of bilingual term extraction. Echizen-ya, et al. (2005) avoided using a bilingual dictionary, but required a parallel corpus to achieve their goal; whereas Fung (2000) and others have relied on pre-existing bilingual dictionaries. In either case, large bilingual resources of some kind are required. In addition, these approaches focused on the extraction of single-word pairs, not phrasal units. 
Many recent approaches to dictionary and thesaurus translation are geared toward providing domain-specific thesauri to specialists in a particular field, e.g., medical terminology (Déjean, et al., 2005) and agricultural terminology (Chun & Wenlin, 2002). Researchers on these projects are faced with either finding human translators who are specialized enough to manage the domain-particular translations—or applying automatic techniques to large-scale parallel corpora where data sparsity poses a problem for lowfrequency terms. Data sparsity is also an issue for more general state-of-the-art bilingual alignment approaches (Brown, et al., 2000; Och & Ney, 2003; Wantanabe & Sumita, 2003). 6 Conclusion The task of translating large ontologies can be recast as a problem of implementing fast and efficient processes for acquiring task-specific lexical resources. We developed a method for prioritizing keyword phrases from an English thesaurus of concepts and elicited Czech translations for a subset of the keyword phrases. From these, we decomposed phrase elements for reuse in an English-Czech probabilistic dictionary. We then applied the dictionary in machine translation of the rest of the thesaurus. Our results show an overall improvement in machine translation quality after collecting only a few hundred human translations. Translation quality continued to rise as more and more human translations were added. The test data used in our evaluations are small relative to the overall task. However, we fully expect these results to hold across larger samples and for more sophisticated translation systems. We leveraged the reusability of translated words to translate a thesaurus of 56,000 keyword phrases using information gathered from only 3000 manual translations. Our probabilistic dictionary was acquired at a fraction of the cost of manually translating the entire thesaurus. By prioritizing human translations based on the translation value of the words and the thesaurus value of the keyword phrases in which they appear, we optimized the rate of return on investment. This allowed us to choose a trade-off point between cost and utility. For this project we chose to stop human translation at a point where less than 0.01% of the value of the thesaurus would be gained from each additional human translation. This choice produced a high-quality lexicon with significant positive impact on machine translation systems. For other applications, a different trade-off point will be appropriate, depending on the initial OOV rate and the importance of detailed coverage. The value of our work lies in the process model we developed for cost-effective elicitation of lexical resources. The metrics we established for assessing the impact of each translation item are key to our approach. We use these to optimize the value gained from each human translation. In our case the items were keyword phrases arranged in a hierarchical thesaurus that describes an ontology of concepts. The operational value of these keyword phrases was determined by the access they provide to video segments in a large archive of oral histories. However, our technique is not limited to this application. 951 We have shown that careful prioritization of elicited human translations facilitates costeffective thesaurus translation with minimal human input. Our use of a prioritization scheme addresses the most important deficiencies in the vocabulary first. 
We induced a framework where the utility of lexical resources gained from each additional human translation becomes smaller and smaller. Under such a framework, choosing the number of human translation to elicit becomes merely a function of the financial resources available for the task. Acknowledgments Our thanks to Doug Oard for his contribution to this work. Thanks also to our Czech informants: Robert Fischmann, Eliska Kozakova, Alena Prunerova and Martin Smok; and to Soumya Bhat for her programming efforts. This work was supported in part by NSF IIS Award 0122466 and NSF CISE RI Award EIA0130422. Additional support also came from grants of the MSMT CR #1P05ME786 and #MSM0021620838, and the Grant Agency of the CR #GA405/06/0589. References Brown, P. F., Della-Pietra, V. J., Della-Pietra, S. A., & Mercer, R. L. (1993). The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2), 263-311. Chun, C., & Wenlin, L. (2002). The translation of agricultural multilingual thesaurus. In Proceedings of the Third Asian Conference for Information Technology in Agriculture. Beijing, China: Chinese Academy of Agricultural Sciences (CAAS) and Asian Federation for Information Technology in Agriculture (AFITA). Čmejrek, M., Cuřín, J., Havelka, J., Hajič, J., & Kubon, V. (2004). Prague Czech-English dependecy treebank: Syntactically annotated resources for machine translation. In 4th International Conference on Language Resources and Evaluation Lisbon, Portugal. Déjean, H., Gaussier, E., Renders, J.-M., & Sadat, F. (2005). Automatic processing of multilingual medical terminology: Applications to thesaurus enrichment and cross-language information retrieval. Artificial Intelligence in Medicine, 33(2 ), 111-124. Echizen-ya, H., Araki, K., & Momouchi, Y. (2005). Automatic acquisition of bilingual rules for extraction of bilingual word pairs from parallel corpora. In Proceedings of the ACL-SIGLEX Workshop on Deep Lexical Acquisition (pp. 87-96). Fung, P. (2000). A statistical view of bilingual lexicon extraction: From parallel corpora to non-parallel corpora. In Jean Veronis (ed.), Parallel Text Processing. Dordrecht: Kluwer Academic Publishers. Gustman, Soergel, Oard, Byrne, Picheny, Ramabhadran, & Greenberg. (2002). Supporting access to large digital oral history archives. In Proceedings of the Joint Conference on Digital Libraries. Portland, Oregon. (pp. 18-27). Kaji, H., & Aizono, T. (1996). Extracting word correspondences from bilingual corpora based on word co-occurrence information. In Proceedings of COLING '96 (pp. 23-28). Murray, G. C., Dorr, B., Lin, J., Hajič, J., & Pecina, P. (2006). Leveraging recurrent phrase structure in large-scale ontology translation. In Proceedings of the 11th Annual Conference of the European Association for Machine Translation. Oslo, Norway. Nelson, S. J., Schopen, M., Savage, A. G., Schulman, J.-L., & Arluk, N. (2004). The MeSH translation maintenance system: Structure, interface design, and implementation. In Proceedings of the 11th World Congress on Medical Informatics. (pp. 6769). Amsterdam: IOS Press. Och, F. J., & Ney, H. (2003). A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1), 19-51. Papineni, K., Roukos, S., Ward, T., & Zhu, W.-J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (pp. 331-318). Rapp, R. (1999). 
Automatic identification of word translations from unrelated English and German corpora. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. (pp. 519-526). Sadat, F., Yoshikawa, M., & Uemura, S. (2003). Enhancing cross-language information retrieval by an automatic acquisition of bilingual terminology from comparable corpora . In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 397-398). Tanaka, K., & Iwasaki, H. (1996). Extraction of lexical translations from non-aligned corpora. In Proceedings of COLING '96. (pp. 580-585). USC. (2006) USC Shoah Foundation Institute for Visual History and Education, [online] http://www.usc.edu/schools/college/vhi Wantanabe, T., & Sumita, E. (2003). Example-based decoding for statistical machine translation. In Proceedings of MT Summit IX (pp. 410-417). 952 | 2006 | 119 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 89–96, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Estimating Class Priors in Domain Adaptation for Word Sense Disambiguation Yee Seng Chan and Hwee Tou Ng Department of Computer Science National University of Singapore 3 Science Drive 2, Singapore 117543 chanys,nght @comp.nus.edu.sg Abstract Instances of a word drawn from different domains may have different sense priors (the proportions of the different senses of a word). This in turn affects the accuracy of word sense disambiguation (WSD) systems trained and applied on different domains. This paper presents a method to estimate the sense priors of words drawn from a new domain, and highlights the importance of using well calibrated probabilities when performing these estimations. By using well calibrated probabilities, we are able to estimate the sense priors effectively to achieve significant improvements in WSD accuracy. 1 Introduction Many words have multiple meanings, and the process of identifying the correct meaning, or sense of a word in context, is known as word sense disambiguation (WSD). Among the various approaches to WSD, corpus-based supervised machine learning methods have been the most successful to date. With this approach, one would need to obtain a corpus in which each ambiguous word has been manually annotated with the correct sense, to serve as training data. However, supervised WSD systems faced an important issue of domain dependence when using such a corpus-based approach. To investigate this, Escudero et al. (2000) conducted experiments using the DSO corpus, which contains sentences drawn from two different corpora, namely Brown Corpus (BC) and Wall Street Journal (WSJ). They found that training a WSD system on one part (BC or WSJ) of the DSO corpus and applying it to the other part can result in an accuracy drop of 12% to 19%. One reason for this is the difference in sense priors (i.e., the proportions of the different senses of a word) between BC and WSJ. For instance, the noun interest has these 6 senses in the DSO corpus: sense 1, 2, 3, 4, 5, and 8. In the BC part of the DSO corpus, these senses occur with the proportions: 34%, 9%, 16%, 14%, 12%, and 15%. However, in the WSJ part of the DSO corpus, the proportions are different: 13%, 4%, 3%, 56%, 22%, and 2%. When the authors assumed they knew the sense priors of each word in BC and WSJ, and adjusted these two datasets such that the proportions of the different senses of each word were the same between BC and WSJ, accuracy improved by 9%. In another work, Agirre and Martinez (2004) trained a WSD system on data which was automatically gathered from the Internet. The authors reported a 14% improvement in accuracy if they have an accurate estimate of the sense priors in the evaluation data and sampled their training data according to these sense priors. The work of these researchers showed that when the domain of the training data differs from the domain of the data on which the system is applied, there will be a decrease in WSD accuracy. To build WSD systems that are portable across different domains, estimation of the sense priors (i.e., determining the proportions of the different senses of a word) occurring in a text corpus drawn from a domain is important. McCarthy et al. (2004) provided a partial solutionby describing a method to predict the predominant sense, or the most frequent sense, of a word in a corpus. 
Using the noun interest as an example, their method will try to predict that sense 1 is the predominant sense in the BC part of the DSO corpus, while sense 4 is the predominant sense in the WSJ part of the 89 corpus. In our recent work (Chan and Ng, 2005b), we directly addressed the problem by applying machine learning methods to automatically estimate the sense priors in the target domain. For instance, given the noun interest and the WSJ part of the DSO corpus, we attempt to estimate the proportion of each sense of interest occurring in WSJ and showed that these estimates help to improve WSD accuracy. In our work, we used naive Bayes as the training algorithm to provide posterior probabilities, or class membership estimates, for the instances in the target domain. These probabilities were then used by the machine learning methods to estimate the sense priors of each word in the target domain. However, it is known that the posterior probabilities assigned by naive Bayes are not reliable, or not well calibrated (Domingos and Pazzani, 1996). These probabilities are typically too extreme, often being very near 0 or 1. Since these probabilities are used in estimating the sense priors, it is important that they are well calibrated. In this paper, we explore the estimation of sense priors by first calibrating the probabilities from naive Bayes. We also propose using probabilities from another algorithm (logistic regression, which already gives well calibrated probabilities) to estimate the sense priors. We show that by using well calibrated probabilities, we can estimate the sense priors more effectively. Using these estimates improves WSD accuracy and we achieve results that are significantly better than using our earlier approach described in (Chan and Ng, 2005b). In the following section, we describe the algorithm to estimate the sense priors. Then, we describe the notion of being well calibrated and discuss why using well calibrated probabilities helps in estimating the sense priors. Next, we describe an algorithm to calibrate the probability estimates from naive Bayes. Then, we discuss the corpora and the set of words we use for our experiments before presenting our experimental results. Next, we propose using the well calibrated probabilities of logistic regression to estimate the sense priors, and perform significance tests to compare our various results before concluding. 2 Estimation of Priors To estimate the sense priors, or a priori probabilities of the different senses in a new dataset, we used a confusion matrix algorithm (Vucetic and Obradovic, 2001) and an EM based algorithm (Saerens et al., 2002) in (Chan and Ng, 2005b). Our results in (Chan and Ng, 2005b) indicate that the EM based algorithm is effective in estimating the sense priors and achieves greater improvements in WSD accuracy compared to the confusion matrix algorithm. Hence, to estimate the sense priors in our current work, we use the EM based algorithm, which we describe in this section. 2.1 EM Based Algorithm Most of this section is based on (Saerens et al., 2002). Assume we have a set of labeled data D with n classes and a set of N independent instances from a new data set. The likelihood of these N instances can be defined as:
! " (1) Assuming the within-class densities # , i.e., the probabilities of observing given the class , do not change from the training set D to the new data set, we can define: $ % # . To determine the a priori probability estimates & ' of the new data set that will maximize the likelihood of (1) with respect to ! , we can apply the iterative procedure of the EM algorithm. In effect, through maximizing the likelihood of (1), we obtain the a priori probability estimates as a by-product. Let us now define some notations. When we apply a classifier trained on D on an instance drawn from the new data set D ( , we get & ' , which we define as the probability of instance being classified as class by the classifier trained on D . Further, let us define & ' as the a priori probabilities of class in D . This can be estimated by the class frequency of in D . We also define & ) * + ' and & ) * + ' as estimates of the new a priori and a posteriori probabilities at step s of the iterative EM procedure. Assuming we initialize & ) , + ' - & ' , then for each instance in D ( and each class , the EM 90 algorithm provides the following iterative steps: & ) * + ' & ' & ) + & ) +
& ! & ) + & ) + (2) & ) * + ' & ) * + ! (3) where Equation (2) represents the expectation Estep, Equation (3) represents the maximization Mstep, and N represents the number of instances in D ( . Note that the probabilities & ' and & ' in Equation (2) will stay the same throughout the iterations for each particular instance and class . The new a posteriori probabilities & ) * + ' at step s in Equation (2) are simply the a posteriori probabilities in the conditions of the labeled data, & ' , weighted by the ratio of the new priors & ) * + ' to the old priors & ' . The denominator in Equation (2) is simply a normalizing factor. The a posteriori & ) * + ! and a priori probabilities & ) * + ' are re-estimated sequentially during each iteration s for each new instance and each class , until the convergence of the estimated probabilities & ) * + ' . This iterative procedure will increase the likelihoodof (1) at each step. 2.2 Using A Priori Estimates If a classifier estimates posterior class probabilities & ! when presented with a new instance from D ( , it can be directly adjusted according to estimated a priori probabilities & ' on D ( : & * ! & ' & ) + & ) +
& ' & ) + & ) + (4) where & ' denotes the a priori probability of class from D and & * ' denotes the adjusted predictions. 3 Calibration of Probabilities In our eariler work (Chan and Ng, 2005b), the posterior probabilities assigned by a naive Bayes classifier are used by the EM procedure described in the previous section to estimate the sense priors & ' in a new dataset. However, it is known that the posterior probabilities assigned by naive Bayes are not well calibrated (Domingos and Pazzani, 1996). It is important to use an algorithm which gives well calibrated probabilities, if we are to use the probabilities in estimating the sense priors. In this section, we will first describe the notion of being well calibrated before discussing why having well calibrated probabilities helps in estimating the sense priors. Finally, we will introduce a method used to calibrate the probabilities from naive Bayes. 3.1 Well Calibrated Probabilities Assume for each instance , a classifier outputs a probability S between 0 and 1, of belonging to class . The classifier is wellcalibrated if the empirical class membership probability ' S - converges to the probability value S as the number of examples classified goes to infinity (Zadrozny and Elkan, 2002). Intuitively, if we consider all the instances to which the classifier assigns a probability S of say 0.6, then 60% of these instances should be members of class . 3.2 Being Well Calibrated Helps Estimation To see why using an algorithm which gives well calibrated probabilities helps in estimating the sense priors, let us rewrite Equation (3), the Mstep of the EM procedure, as the following: & ) *! + ' " # $ " %'&)( # * ) +-, + /. & ) * + ' (5) where S = 0 )!1 2 denotes the set of posterior probability values for class , and S & denotes the posterior probability of class assigned by the classifier for instance & . Based on '1 , we can imagine that we have 3 bins, where each bin is associated with a specific value. Now, distribute all the instances in the new dataset D ( into the 3 bins according to their posterior probabilities 4 . Let B 5 , for 6 3 , denote the set of instances in bin 6 . Note that B 79888:7 B 5 798:887 B 1 = . Now, let 5 denote the proportion of instances with true class label in B 5 . Given a well calibrated algorithm, 5 ; 5 by definition and Equation (5) can be rewritten as: & ) * + ' < B 7 888=7 1 B 1 B 7>8:88=7 1 B 1 (6) 91 Input: training set sorted in ascending order of Initialize While k such that
, with m Figure 1: PAV algorithm. where denotes the number of instances in D ( with true class label . Therefore, & ) *! + ' reflects the proportion of instances in D ( with true class label . Hence, using an algorithm which gives well calibrated probabilities helps in the estimation of sense priors. 3.3 Isotonic Regression Zadrozny and Elkan (2002) successfully used a method based on isotonic regression (Robertson et al., 1988) to calibrate the probability estimates from naive Bayes. To compute the isotonic regression, they used the pair-adjacent violators (PAV) (Ayer et al., 1955) algorithm, which we show in Figure 1. Briefly, what PAV does is to initially view each data value as a level set. While there are two adjacent sets that are out of order (i.e., the left level set is above the right one) then the sets are combined and the mean of the data values becomes the value of the new level set. PAV works on binary class problems. In a binary class problem, we have a positive class and a negative class. Now, let /.102. , where represent N examples and is the probability of belonging to the positive class, as predicted by a classifier. Further, let 3 represent the true label of . For a binary class problem, we let 3 if is a positive example and 3 54 if is a negative example. The PAV algorithm takes in a set of 3 , sorted in ascending order of and returns a series of increasing step-values,where each step-value 6 7 5 (denoted by m in Figure 1) is associated with a lowest boundary value and a highest boundary value 5 . We performed 10-fold crossvalidation on the training data to assign values to . We then applied the PAV algorithm to obtain values for 6 . To obtain the calibrated probability estimate for a test instance , we find the boundary values and 5 where . S . 5 and assign 6 7 5 as the calibrated probability estimate. To apply PAV on a multiclass problem, we first reduce the problem into a number of binary class problems. For reducing a multiclass problem into a set of binary class problems, experiments in (Zadrozny and Elkan, 2002) suggest that the oneagainst-all approach works well. In one-againstall, a separate classifier is trained for each class , where examples belonging to class are treated as positive examples and all other examples are treated as negative examples. A separate classifier is then learnt for each binary class problem and the probability estimates from each classifier are calibrated. Finally, the calibrated binary-class probability estimates are combined to obtain multiclass probabilities, computed by a simple normalization of the calibrated estimates from each binary classifier, as suggested by Zadrozny and Elkan (2002). 4 Selection of Dataset In this section, we discuss the motivations in choosing the particular corpora and the set of words used in our experiments. 4.1 DSO Corpus The DSO corpus (Ng and Lee, 1996) contains 192,800 annotated examples for 121 nouns and 70 verbs, drawn from BC and WSJ. BC was built as a balanced corpus and contains texts in various categories such as religion, fiction, etc. In contrast, the focus of the WSJ corpus is on financial and business news. Escudero et al. (2000) exploited the difference in coverage between these two corpora to separate the DSO corpus into its BC and WSJ parts for investigating the domain dependence of several WSD algorithms. Following their setup, we also use the DSO corpus in our experiments. 
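As a concrete illustration of the calibration step described in Section 3.3, a minimal implementation of the PAV fit of Figure 1 and of the boundary lookup might look as follows. This is our own sketch with our own variable names; the merge condition simply follows the prose description of pooling adjacent level sets that are out of order.

```python
import bisect

def pav_fit(scores, labels):
    """Fit the PAV step function on held-out (score, 0/1 label) pairs from
    cross-validation.  Returns the upper score boundary and the pooled mean
    label (the calibrated probability) of each level set."""
    blocks = []                       # each block: [value, weight, low, high]
    for s, y in sorted(zip(scores, labels)):
        blocks.append([float(y), 1, s, s])
        # Pool while the left level set is not below the right one.
        while len(blocks) > 1 and blocks[-2][0] >= blocks[-1][0]:
            v2, w2, lo2, hi2 = blocks.pop()
            v1, w1, lo1, hi1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2, lo1, hi2])
    uppers = [b[3] for b in blocks]
    values = [b[0] for b in blocks]
    return uppers, values

def pav_calibrate(score, uppers, values):
    """Calibrated probability for a new score: the value of the level set
    whose boundaries contain the score (clamped to the last set)."""
    i = bisect.bisect_left(uppers, score)
    return values[min(i, len(values) - 1)]
```

One such calibrator is fitted per binary one-against-all problem, and the calibrated outputs are then normalised across classes as described above.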
The widely used SEMCOR (SC) corpus (Miller et al., 1994) is one of the few currently available manually sense-annotated corpora for WSD. SEMCOR is a subset of BC. Since BC is a balanced corpus, and training a classifier on a general corpus before applying it to a more specific corpus is a natural scenario, we will use examples from BC as training data, and examples from WSJ as evaluation data, or the target dataset. 4.2 Parallel Texts Scalability is a problem faced by current supervised WSD systems, as they usually rely on manually annotated data for training. To tackle this problem, in one of our recent work (Ng et al., 2003), we had gathered training data from parallel texts and obtained encouraging results in our 92 evaluation on the nouns of SENSEVAL-2 English lexical sample task (Kilgarriff, 2001). In another recent evaluation on the nouns of SENSEVAL2 English all-words task (Chan and Ng, 2005a), promising results were also achieved using examples gathered from parallel texts. Due to the potential of parallel texts in addressing the issue of scalability, we also drew training data for our earlier sense priors estimation experiments (Chan and Ng, 2005b) from parallel texts. In addition, our parallel texts training data represents a natural domain difference with the test data of SENSEVAL2 English lexical sample task, of which 91% is drawn from the British National Corpus (BNC). As part of our experiments, we followed the experimental setup of our earlier work (Chan and Ng, 2005b), using the same 6 English-Chinese parallel corpora (Hong Kong Hansards, Hong Kong News, Hong Kong Laws, Sinorama, Xinhua News, and English translation of Chinese Treebank), available from LinguisticData Consortium. To gather training examples from these parallel texts, we used the approach we described in (Ng et al., 2003) and (Chan and Ng, 2005b). We then evaluated our estimation of sense priors on the nouns of SENSEVAL-2 English lexical sample task, similar to the evaluation we conducted in (Chan and Ng, 2005b). Since the test data for the nouns of SENSEVAL-3 English lexical sample task (Mihalcea et al., 2004) were also drawn from BNC and represented a difference in domain from the parallel texts we used, we also expanded our evaluation to these SENSEVAL-3 nouns. 4.3 Choice of Words Research by (McCarthy et al., 2004) highlighted that the sense priors of a word in a corpus depend on the domain from which the corpus is drawn. A change of predominant sense is often indicative of a change in domain, as different corpora drawn from different domains usually give different predominant senses. For example, the predominant sense of the noun interest in the BC part of the DSO corpus has the meaning “a sense of concern with and curiosity about someone or something”. In the WSJ part of the DSO corpus, the noun interest has a different predominant sense with the meaning “a fixed charge for borrowing money”, reflecting the business and finance focus of the WSJ corpus. Estimation of sense priors is important when there is a significant change in sense priors between the training and target dataset, such as when there is a change in domain between the datasets. Hence, in our experiments involving the DSO corpus, we focused on the set of nouns and verbs which had different predominant senses between the BC and WSJ parts of the corpus. This gave us a set of 37 nouns and 28 verbs. 
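The selection criterion just described can be sketched as follows. This is an illustrative helper written by us, not the authors' code; it assumes that, for each word, the sense labels of its annotated examples in the two corpora are available as simple lists.

    from collections import Counter

    def predominant_sense(sense_labels):
        """sense_labels: list of sense labels of one word in one corpus."""
        return Counter(sense_labels).most_common(1)[0][0]

    def words_with_sense_shift(bc_data, wsj_data):
        """bc_data / wsj_data: dict mapping each word to its list of sense labels."""
        selected = []
        for word in bc_data:
            if word in wsj_data and \
               predominant_sense(bc_data[word]) != predominant_sense(wsj_data[word]):
                selected.append(word)
        return selected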
For experiments involving the nouns of SENSEVAL-2 and SENSEVAL-3 English lexical sample task, we used the approach we described in (Chan and Ng, 2005b) of sampling training examples from the parallel texts using the natural (empirical) distribution of examples in the parallel texts. Then, we focused on the set of nouns having different predominant senses between the examples gathered from parallel texts and the evaluation data for the two SENSEVAL tasks. This gave a set of 6 nouns for SENSEVAL-2 and 9 nouns for SENSEVAL3. For each noun, we gathered a maximum of 500 parallel text examples as training data, similar to what we had done in (Chan and Ng, 2005b). 5 Experimental Results Similar to our previous work (Chan and Ng, 2005b), we used the supervised WSD approach described in (Lee and Ng, 2002) for our experiments, using the naive Bayes algorithm as our classifier. Knowledge sources used include partsof-speech, surrounding words, and local collocations. This approach achieves state-of-the-art accuracy. All accuracies reported in our experiments are micro-averages over all test examples. In (Chan and Ng, 2005b), we used a multiclass naive Bayes classifier (denoted by NB) for each word. Following this approach, we noted the WSD accuracies achieved without any adjustment, in the column L under NB in Table 1. The predictions & ' of these naive Bayes classifiers are then used in Equation (2) and (3) to estimate the sense priors & ' , before being adjusted by these estimated sense priors based on Equation (4). The resulting WSD accuracies after adjustment are listed in the column EM in Table 1, representing the WSD accuracies achievable by following the approach we described in (Chan and Ng, 2005b). Next, we used the one-against-all approach to reduce each multiclass problem into a set of binary class problems. We trained a naive Bayes classifier for each binary problem and calibrated the probabilities from these binary classifiers. The WSD 93 Classifier NB NBcal Method L EM EM ) L EM EM ) DSO nouns 44.5 46.1 46.6 45.8 47.0 51.1 DSO verbs 46.7 48.3 48.7 46.9 49.5 50.8 SE2 nouns 61.7 62.4 63.0 62.3 63.2 63.5 SE3 nouns 53.9 54.9 55.7 55.4 58.8 58.4 Table 1: Micro-averaged WSD accuracies using the various methods. The different naive Bayes classifiers are: multiclass naive Bayes (NB) and naive Bayes with calibrated probabilities (NBcal). Dataset True L EM L EM ) L DSO nouns 11.6 1.2 (10.3%) 5.3 (45.7%) DSO verbs 10.3 2.6 (25.2%) 3.9 (37.9%) SE2 nouns 3.0 0.9 (30.0%) 1.2 (40.0%) SE3 nouns 3.7 3.4 (91.9%) 3.0 (81.1%) Table 2: Relative accuracy improvement based on calibrated probabilities. accuracies of these calibrated naive Bayes classifiers (denoted by NBcal) are given in the column L under NBcal.1 The predictions of these classifiers are then used to estimate the sense priors & ' , before being adjusted by these estimates based on Equation (4). The resulting WSD accuracies after adjustment are listed in column EM 5 in Table 1. The results show that calibrating the probabilities improves WSD accuracy. In particular, EM 5 achieves the highest accuracy among the methods described so far. To provide a basis for comparison, we also adjusted the calibrated probabilities by the true sense priors ' of the test data. The increase in WSD accuracy thus obtained is given in the column True L in Table 2. Note that this represents the maximum possible increase in accuracy achievable provided we know these true sense priors ' . 
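For concreteness, the estimation and adjustment steps referred to above (Equations (2)-(4), following the EM procedure of Saerens et al. (2002)) can be sketched as follows. This is our own illustrative rendering, not the paper's implementation; the variable names and the fixed iteration count are assumptions.

    def estimate_priors(posteriors, train_priors, iterations=100):
        """posteriors: N x n matrix of p(sense_i | instance_k) from the base classifier;
        train_priors: sense priors p(sense_i) observed in the training data."""
        n_inst, n_senses = len(posteriors), len(train_priors)
        new_priors = list(train_priors)                   # initialise with the training priors
        for _ in range(iterations):                       # stopping criterion is an assumption
            # E-step (Eq. 2): rescale each posterior by new_prior / train_prior and renormalise.
            resp = []
            for post in posteriors:
                scaled = [post[i] * new_priors[i] / train_priors[i] for i in range(n_senses)]
                z = sum(scaled)
                resp.append([s / z for s in scaled])
            # M-step (Eq. 3): new prior = average responsibility over all instances.
            new_priors = [sum(r[i] for r in resp) / n_inst for i in range(n_senses)]
        return new_priors

    def adjust(post, train_priors, new_priors):
        """Eq. (4): adjusted prediction for one instance, using the estimated priors."""
        scaled = [post[i] * new_priors[i] / train_priors[i] for i in range(len(post))]
        z = sum(scaled)
        return [s / z for s in scaled]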
In the column EM 5 in Table 2, we list the increase in WSD accuracy when adjusted by the sense priors & ! which were automatically estimated using the EM procedure. The relative improvements obtained with using & ! (compared against using ' ) are given as percentages in brackets. As an example, according to Table 1 for the DSO verbs, EM 5 gives an improvement of 49.5% 46.9% = 2.6% in WSD accuracy, and the relative improvement compared to using the true sense priors is 2.6/10.3 = 25.2%, as shown in Table 2. Dataset EM EM EM ) DSO nouns 0.621 0.586 0.293 DSO verbs 0.651 0.602 0.307 SE2 nouns 0.371 0.307 0.214 SE3 nouns 0.693 0.632 0.408 Table 3: KL divergence between the true and estimated sense distributions. 6 Discussion The experimental results show that the sense priors estimated using the calibrated probabilities of naive Bayes are effective in increasing the WSD accuracy. However, using a learning algorithm which already gives well calibrated posteriorprobabilities may be more effective in estimating the sense priors. One possible algorithm is logistic regression, which directly optimizes for getting approximations of the posterior probabilities. Hence, its probability estimates are already well calibrated (Zhang and Yang, 2004; NiculescuMizil and Caruana, 2005). In the rest of this section, we first conduct experiments to estimate sense priors using the predictions of logistic regression. Then, we perform significance tests to compare the various methods. 6.1 Using Logistic Regression We trained logistic regression classifiers and evaluated them on the 4 datasets. However, the WSD accuracies of these unadjusted logistic regression classifiers are on average about 4% lower than those of the unadjusted naive Bayes classifiers. One possible reason is that being a discriminative learner, logistic regression requires more training examples for its performance to catch up to, and possibly overtake the generative naive Bayes learner (Ng and Jordan, 2001). Although the accuracy of logistic regression as a basic classifier is lower than that of naive Bayes, its predictions may still be suitable for estimating 1Though not shown, we also calculated the accuracies of these binary classifiers without calibration, and found them to be similar to the accuracies of the multiclass naive Bayes shown in the column L under NB in Table 1. 94 Method comparison DSO nouns DSO verbs SE2 nouns SE3 nouns NB-EM ) vs. NB-EM NBcal-EM vs. NB-EM NBcal-EM vs. NB-EM ) NBcal-EM ) vs. NB-EM NBcal-EM ) vs. NB-EM ) NBcal-EM ) vs. NBcal-EM Table 4: Paired t-tests between the various methods for the 4 datasets. sense priors. To gauge how well the sense priors are estimated, we measure the KL divergence between the true sense priors and the sense priors estimated by using the predictions of (uncalibrated) multiclass naive Bayes, calibrated naive Bayes, and logistic regression. These results are shown in Table 3 and the column EM shows that using the predictions of logistic regression to estimate sense priors consistently gives the lowest KL divergence. Results of the KL divergence test motivate us to use sense priors estimated by logistic regression on the predictions of the naive Bayes classifiers. To elaborate, we first use the probability estimates & ' of logistic regression in Equations (2) and (3) to estimate the sense priors & ' . These estimates & ' and the predictions & ' of the calibrated naive Bayes classifier are then used in Equation (4) to obtain the adjusted predictions. 
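Before turning to the resulting accuracies, the divergence measure reported in Table 3 can be written down explicitly. The helper below is ours; in particular, the epsilon guard against zero estimated probabilities is an assumption, since the paper does not state how zero estimates are handled.

    import math

    def kl_divergence(p, q, epsilon=1e-12):
        """KL divergence D(p || q) between the true sense distribution p and an estimate q,
        both given as lists of probabilities over the same sense inventory."""
        return sum(pi * math.log(pi / max(qi, epsilon))
                   for pi, qi in zip(p, q) if pi > 0)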
The resulting WSD accuracy is shown in the column EM under NBcal in Table 1. Corresponding results when the predictions & ' of the multiclass naive Bayes is used in Equation (4), are given in the column EM under NB. The relative improvements against using the true sense priors, based on the calibrated probabilities, are given in the column EM L in Table 2. The results show that the sense priors provided by logistic regression are in general effective in further improving the results. In the case of DSO nouns, this improvement is especially significant. 6.2 Significance Test Paired t-tests were conducted to see if one method is significantly better than another. The t statistic of the difference between each test instance pair is computed, giving rise to a p value. The results of significance tests for the various methods on the 4 datasets are given in Table 4, where the symbols “ ”, “ ”, and “ ” correspond to p-value 0.05, (0.01, 0.05], and . 0.01 respectively. The methods in Table 4 are represented in the form a1-a2, where a1 denotes adjusting the predictions of which classifier, and a2 denotes how the sense priors are estimated. As an example, NBcal-EM specifies that the sense priors estimated by logistic regression is used to adjust the predictions of the calibrated naive Bayes classifier, and corresponds to accuracies in column EM under NBcal in Table 1. Based on the significance tests, the adjusted accuracies of EM and EM 5 in Table 1 are significantly better than their respective unadjusted L accuracies, indicating that estimating the sense priors of a new domain via the EM approach presented in this paper significantly improves WSD accuracy compared to just using the sense priors from the old domain. NB-EM represents our earlier approach in (Chan and Ng, 2005b). The significance tests show that our current approach of using calibrated naive Bayes probabilities to estimate sense priors, and then adjusting the calibrated probabilities by these estimates (NBcal-EM 5 ) performs significantly better than NB-EM (refer to row 2 of Table 4). For DSO nouns, though the results are similar, the p value is a relatively low 0.06. Using sense priors estimated by logistic regression further improves performance. For example, row 1 of Table 4 shows that adjusting the predictions of multiclass naive Bayes classifiers by sense priors estimated by logistic regression (NBEM ) performs significantly better than using sense priors estimated by multiclass naive Bayes (NB-EM ). Finally, using sense priors estimated by logistic regression to adjust the predictions of calibrated naive Bayes (NBcal-EM ) in general performs significantly better than most other methods, achieving the best overall performance. In addition, we implemented the unsupervised method of (McCarthy et al., 2004), which calculates a prevalence score for each sense of a word to predict the predominant sense. As in our earlier work (Chan and Ng, 2005b), we normalized the prevalence score of each sense to obtain estimated sense priors for each word, which we then used 95 to adjust the predictions of our naive Bayes classifiers. We found that the WSD accuracies obtained with the method of (McCarthy et al., 2004) are on average 1.9% lower than our NBcal-EM method, and the difference is statistically significant. 7 Conclusion Differences in sense priors between training and target domain datasets will result in a loss of WSD accuracy. In this paper, we show that using well calibrated probabilities to estimate sense priors is important. 
By calibrating the probabilities of the naive Bayes algorithm, and using the probabilities given by logistic regression (which is already well calibrated), we achieved significant improvements in WSD accuracy over previous approaches. References Eneko Agirre and David Martinez. 2004. Unsupervised WSD based on automatically retrieved examples: The importance of bias. In Proc. of EMNLP04. Miriam Ayer, H. D. Brunk, G. M. Ewing, W. T. Reid, and Edward Silverman. 1955. An empirical distribution function for sampling with incomplete information. Annals of Mathematical Statistics, 26(4). Yee Seng Chan and Hwee Tou Ng. 2005a. Scaling up word sense disambiguation via parallel texts. In Proc. of AAAI05. Yee Seng Chan and Hwee Tou Ng. 2005b. Word sense disambiguation with distribution estimation. In Proc. of IJCAI05. Pedro Domingos and Michael Pazzani. 1996. Beyond independence: Conditions for the optimality of the simple Bayesian classifier. In Proc. of ICML-1996. Gerard Escudero, Lluis Marquez, and German Rigau. 2000. An empirical study of the domain dependence of supervised word sense disambiguation systems. In Proc. of EMNLP/VLC00. Adam Kilgarriff. 2001. English lexical sample task description. In Proc. of SENSEVAL-2. Yoong Keok Lee and Hwee Tou Ng. 2002. An empirical evaluation of knowledge sources and learning algorithms for word sense disambiguation. In Proc. of EMNLP02. Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004. Finding predominant word senses in untagged text. In Proc. of ACL04. Rada Mihalcea, Timothy Chklovski, and Adam Kilgarriff. 2004. The senseval-3 english lexical sample task. In Proc. of SENSEVAL-3. George A. Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G. Thomas. 1994. Using a semantic concordance for sense identification. In Proc. of ARPA Human Language Technology Workshop. Andrew Y. Ng and Michael I. Jordan. 2001. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In Proc. of NIPS14. Hwee Tou Ng and Hian Beng Lee. 1996. Integrating multiple knowledge sources to disambiguate word sense: An exemplar-based approach. In Proc. of ACL96. Hwee Tou Ng, Bin Wang, and Yee Seng Chan. 2003. Exploiting parallel texts for word sense disambiguation: An empirical study. In Proc. of ACL03. Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting good probabilities with supervised learning. In Proc. of ICML05. Tim Robertson, F. T. Wright, and R. L. Dykstra. 1988. Chapter 1. Isotonic Regression. In Order Restricted Statistical Inference. John Wiley & Sons. Marco Saerens, Patrice Latinne, and Christine Decaestecker. 2002. Adjusting the outputs of a classifier to new a priori probabilities: A simple procedure. Neural Computation, 14(1). Slobodan Vucetic and Zoran Obradovic. 2001. Classification on data with biased class distribution. In Proc. of ECML01. Bianca Zadrozny and Charles Elkan. 2002. Transforming classifier scores into accurate multiclass probability estimates. In Proc. of KDD02. Jian Zhang and Yiming Yang. 2004. Probabilistic score estimation with piecewise logistic regression. In Proc. of ICML04. 96 | 2006 | 12 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 953–960, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Accurate Collocation Extraction Using a Multilingual Parser Violeta Seretan Language Technology Laboratory University of Geneva 2, rue de Candolle, 1211 Geneva [email protected] Eric Wehrli Language Technology Laboratory University of Geneva 2, rue de Candolle, 1211 Geneva [email protected] Abstract This paper focuses on the use of advanced techniques of text analysis as support for collocation extraction. A hybrid system is presented that combines statistical methods and multilingual parsing for detecting accurate collocational information from English, French, Spanish and Italian corpora. The advantage of relying on full parsing over using a traditional window method (which ignores the syntactic information) is first theoretically motivated, then empirically validated by a comparative evaluation experiment. 1 Introduction Recent computational linguistics research fully acknowledged the stringent need for a systematic and appropriate treatment of phraseological units in natural language processing applications (Sag et al., 2002). Syntagmatic relations between words — also called multi-word expressions, or “idiosyncratic interpretations that cross word boundaries” (Sag et al., 2002, 2) — constitute an important part of the lexicon of a language: according to Jackendoff (1997), they are at least as numerous as the single words, while according to Mel’ˇcuk (1998) they outnumber single words ten to one. Phraseological units include a wide range of phenomena, among which we mention compound nouns (dead end), phrasal verbs (ask out), idioms (lend somebody a hand), and collocations (fierce battle, daunting task, schedule a meeting). They pose important problems for NLP applications, both text analysis and text production perspectives being concerned. In particular, collocations1 are highly problematic, for at least two reasons: first, because their linguistic status and properties are unclear (as pointed out by McKeown and Radev (2000), their definition is rather vague, and the distinction from other types of expressions is not clearly drawn); second, because they are prevalent in language. Mel’ˇcuk (1998, 24) claims that “collocations make up the lions share of the phraseme inventory”, and a recent study referred in (Pearce, 2001) showed that each sentence is likely to contain at least one collocation. Collocational information is not only useful, but also indispensable in many applications. In machine translation, for instance, it is considered “the key to producing more acceptable output” (Orliac and Dillinger, 2003, 292). This article presents a system that extracts accurate collocational information from corpora by using a syntactic parser that supports several languages. After describing the underlying methodology (section 2), we report several extraction results for English, French, Spanish and Italian (section 3). Then we present in sections 4 and 5 a comparative evaluation experiment proving that a hybrid approach leads to more accurate results than a classical approach in which syntactic information is not taken into account. 
2 Hybrid Collocation Extraction We consider that syntactic analysis of source corpora is an inescapable precondition for collocation extraction, and that the syntactic structure of source text has to be taken into account in order to ensure the quality and interpretability of results. 1To put it simply, collocations are non-idiomatical, but restricted, conventional lexical combinations. 953 As a matter of fact, some of the existing collocation extraction systems already employ (but only to a limited extent) linguistic tools in order to support the collocation identification in text corpora. For instance, lemmatizers are often used for recognizing all the inflected forms of a lexical item, and POS taggers are used for ruling out certain categories of words, e.g., in (Justeson and Katz, 1995). Syntactic analysis has long since been recognized as a prerequisite for collocation extraction (for instance, by Smadja2), but the traditional systems simply ignored it because of the lack, at that time, of efficient and robust parsers required for processing large corpora. Oddly enough, this situation is nowadays perpetuated, in spite of the dramatic advances in parsing technology. Only a few exceptions exists, e.g., (Lin, 1998; Krenn and Evert, 2001). One possible reason for this might be the way that collocations are generally understood, as a purely statistical phenomenon. Some of the bestknown definitions are the following: “Collocations of a given word are statements of the habitual and customary places of that word” (Firth, 1957, 181); “arbitrary and recurrent word combination” (Benson, 1990); or “sequences of lexical items that habitually co-occur” (Cruse, 1986, 40). Most of the authors make no claims with respect to the grammatical status of the collocation, although this can indirectly inferred from the examples they provide. On the contrary, other definitions state explicitly that a collocation is an expression of language: “co-occurrence of two or more lexical items as realizations of structural elements within a given syntactic pattern” (Cowie, 1978); “a sequence of two or more consecutive words, that has characteristics of a syntactic and semantic unit” (Choueka, 1988). Our approach is committed to these later definitions, hence the importance we lend to using appropriate extraction methodologies, based on syntactic analysis. The hybrid method we developed relies on the parser Fips (Wehrli, 2004), that implements the Government and Binding formalism and supports several languages (besides the ones mentioned in 2“Ideally, in order to identify lexical relations in a corpus one would need to first parse it to verify that the words are used in a single phrase structure. However, in practice, freestyle texts contain a great deal of nonstandard features over which automatic parsers would fail. This fact is being seriously challenged by current research (...), and might not be true in the near future” (Smadja, 1993, 151). the abstract, a few other are also partly dealt with). We will not present details about the parser here; what is relevant for this paper is the type of syntactic structures it uses. Each constituent is represented by a simplified X-bar structure (without intermediate level), in which to the lexical head is attached a list of left constituents (its specifiers) and right constituents (its complements), and each of these are in turn represented by the same type of structure, recursively. Generally speaking, a collocation extraction can be seen as a two-stage process: I. 
in stage one, collocation candidates are identified from the text corpora, based on criteria which are specific to each system; II. in stage two, the candidates are scored and ranked using specific association measures (a review can be found in (Manning and Sch¨utze, 1999; Evert, 2004; Pecina, 2005)). According to this description, in our approach the parser is used in the first stage of extraction, for identifying the collocation candidates. A pair of lexical items is selected as a candidate only if there is a syntactic relation holding between the two items (one being the head of the current parse structure, and the other the lexical head of its specifier/complement). Therefore, the criterion we employ for candidate selection is the syntactic proximity, as opposed to the linear proximity used by traditional, window-based methods. As the parsing goes on, the syntactic word pairs are extracted from the parse structures created, from each head-specifier or head-complement relation. The pairs obtained are then partitioned according to their syntactic configuration (e.g., noun + adjectival or nominal specifier, noun + argument, noun + adjective in predications, verb + adverbial specifier, verb + argument (subject, object), verb + adjunt, etc). Finally, the loglikelihood ratios test (henceforth LLR) (Dunning, 1993) is applied on each set of pairs. We call this method hybrid, since it combines syntactic and statistical information (about word and cooccurrence frequency). The following examples — which, like all the examples in this paper, are actual extraction results — demonstrate the potential of our system to detect collocation candidates, even if subject to complex syntactic transformations. 954 1.a) raise question: The question of political leadership has been raised several times by previous speakers. 1.b) play role: What role can Canada’s immigration program play in helping developing nations... ? 1.c) make mistake: We could look back and probably see a lot of mistakes that all parties including Canada perhaps may have made. 3 Multilingual Extraction Results In this section, we present several extraction results obtained with the system presented in section 2. The experiments were performed on data in the four languages, and involved the following corpora: for English and French, a subpart or the Hansard Corpus of proceedings from the Canadian Parliament; for Italian, documents from the Swiss Parliament; and for Spanish, a news corpus distributed by the Linguistic Data Consortium. Some statistics on these corpora, some processing details and quantitative results are provided in Table 1. The first row lists the corpora size (in tokens); the next three rows show some parsing statistics3, and the last rows display the number of collocation candidates extracted and of candidates for which the LLR score could be computed4. Statistics English French Spanish Italian tokens 3509704 1649914 1023249 287804 sentences 197401 70342 67502 12008 compl. parse 139498 50458 13245 4511 avg. length 17.78 23.46 15.16 23.97 pairs 725025 370932 162802 58258 (extracted) 276670 147293 56717 37914 pairs 633345 308410 128679 47771 (scored) 251046 131384 49495 30586 Table 1: Extraction statistics In Table 2 we list the top collocations (of length two) extracted for each language. We do not specifically discuss here multilingual issues in collocation extraction; these are dealt with in a separate paper (Seretan and Wehrli, 2006). 
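The association measure applied in stage two, the log-likelihood ratios test (Dunning, 1993), can be sketched for a single candidate pair as follows. This is one standard formulation of Dunning's statistic, written by us for illustration; the count names are ours, and the score is left undefined when a contingency cell is zero, in line with the remark in footnote 4 below.

    import math

    def llr(k11, k12, k21, k22):
        """Log-likelihood ratio score for one pair (w1, w2) within one syntactic
        configuration: k11 = count(w1, w2), k12 = count(w1, not-w2),
        k21 = count(not-w1, w2), k22 = count(not-w1, not-w2)."""
        if 0 in (k11, k12, k21, k22):
            return None                         # undefined when a contingency cell is zero
        n = k11 + k12 + k21 + k22
        row1, row2 = k11 + k12, k21 + k22
        col1, col2 = k11 + k21, k12 + k22
        def term(k, row, col):                  # observed count times log(observed / expected)
            return k * math.log(k * n / (row * col))
        return 2.0 * (term(k11, row1, col1) + term(k12, row1, col2)
                      + term(k21, row2, col1) + term(k22, row2, col2))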
3The low rate of completely parsed sentences for Spanish and Italian are due to the relatively reduced coverage of the parsers of these two languages (under development). However, even if a sentence is not assigned a complete parse tree, some syntactic pairs can still be collected from the partial parses. 4The log-likelihood ratios score is undefined for those pairs having a cell of the contingency table equal to 0. Language Key1 Key2 LLR score English federal government 7229.69 reform party 6530.69 house common 6006.84 minister finance 5829.05 acting speaker 5551.09 red book 5292.63 create job 4131.55 right Hon 4117.52 official opposition 3640.00 deputy speaker 3549.09 French premier ministre 4317.57 bloc qu´eb´ecois 3946.08 discours trˆone 3894.04 v´erificateur g´en´eral 3796.68 parti r´eformiste 3615.04 gouvernement f´ed´eral 3461.88 missile croisi`ere 3147.42 Chambre commune 3083.02 livre rouge 2536.94 secr´etaire parlementaire 2524.68 Spanish banco central 4210.48 mill´on d´olar 3312.68 mill´on peso 2335.00 libre comercio 2169.02 nuevo peso 1322.06 tasa inter´es 1179.62 deuda externo 1119.91 c´amara representante 1015.07 asamblea ordinario 992.85 papel comercial 963.95 Italian consiglio federale 3513.19 scrivere consiglio 594.54 unione europeo 479.73 servizio pubblico 452.92 milione franco 447.63 formazione continuo 388.80 iniziativa popolare 383.68 testo interpellanza 377.46 punto vista 373.24 scrivere risposta 348.77 Table 2: Top ten collocations extracted for each language The collocation pairs obtained were further processed with a procedure of long collocations extraction described elsewhere (Seretan et al., 2003). Some examples of collocations of length 3, 4 and 5 obtained are: minister of Canadian heritage, house proceed to statement by, secretary to leader of gouvernment in house of common (En), question adresser `a ministre, programme de aide `a r´enovation r´esidentielle, agent employer force susceptible causer (Fr), bolsa de comercio local, peso en cuota de fondo de inversi´on, permitir uso de papel de deuda esterno (Sp), consiglio federale disporre, creazione di nuovo posto di lavoro, costituire fattore penalizzante per regione (It)5. 5Note that the output of the procedure contains lemmas rather than inflected forms. 955 4 Comparative Evaluation Hypotheses 4.1 Does Parsing Really Help? Extracting collocations from raw text, without preprocessing the source corpora, offers some clear advantages over linguistically-informed methods such as ours, which is based on the syntactic analysis: speed (in contrast, parsing large corpora of texts is expected to be much more time consuming), robustness (symbolic parsers are often not robust enough for processing large quantities of data), portability (no need to a priori define syntactic configurations for collocations candidates). On the other hand, these basic systems suffer from the combinatorial explosion if the candidate pairs are chosen from a large search space. To cope with this problem, a candidate pair is usually chosen so that both words are inside a context (‘collocational’) window of a small length. A 5word window is the norm, while longer windows prove impractical (Dias, 2003). It has been argued that a window size of 5 is actually sufficient for capturing most of the collocational relations from texts in English. But there is no evidence sustaining that the same holds for other languages, like German or the Romance ones that exhibit freer word order. 
Therefore, as window-based systems miss the ‘long-distance’ pairs, their recall is presumably lower than that of parse-based systems. However, the parser could also miss relevant pairs due to inherent analysis errors. As for precision, the window systems are susceptible to return more noise, produced by the grammatically unrelated pairs inside the collocational window. By dividing the number of grammatical pairs by the total number of candidates considered, we obtain the overall precision with respect to grammaticality; this result is expected to be considerably worse in the case of basic method than for the parse-based methods, just by virtue of the parsing task. As for the overall precision with respect to collocability, we expect the proportional figures to be preserved. This is because the parser-based methods return less, but better pairs (i.e., only the pairs identified as grammatical), and because collocations are a subset of the grammatical pairs. Summing up, the evaluation hypothesis that can be stated here is the following: parse-based methods outperform basic methods thanks to a drastic reduction of noise. While unquestionable under the assumption of perfect parsing, this hypothesis has to be empirically validated in an actual setting. 4.2 Is More Data Better Than Better Data? The hypothesis above refers to the overall precision and recall, that is, relative to the entire list of selected candidates. One might argue that these numbers are less relevant for practice than they are from a theoretical (evaluation) perspective, and that the exact composition of the list of candidates identified is unimportant if only the top results (i.e., those pairs situated above a threshold) are looked at by a lexicographer or an application. Considering a threshold for the n-best candidates works very much in the favor of basic methods. As the amount of data increases, there is a reduction of the noise among the best-scored pairs, which tend to be more grammatical because the likelihood of encountering many similar noisy pairs is lower. However, as the following example shows, noisy pairs may still appear in top, if they occur often in a longer collocation: 2.a) les essais du missile de croisi`ere 2.b) essai - croisi`ere The pair essai - croisi`ere is marked by the basic systems as a collocation because of the recurrent association of the two words in text as part or the longer collocation essai du missile de croisi`ere. It is an grammatically unrelated pair, while the correct pairs reflecting the right syntactic attachment are essai missile and missile (de) croisi`ere. We mentioned that parsing helps detecting the ‘long-distance’ pairs that are outside the limits of the collocational window. Retrieving all such complex instances (including all the extraposition cases) certainly augment the recall of extraction systems, but this goal might seem unjustified, because the risk of not having a collocation represented at all diminishes as more and more data is processed. One might think that systematically missing long-distance pairs might be very simply compensated by supplying the system with more data, and thus that larger data is a valid alternative to performing complex processing. 
While we agree that the inclusion of more data compensates for the ‘difficult’ cases, we do consider this truly helpful in deriving collocational information, for the following reasons: (1) more data means more noise for the basic methods; (2) some collocations might systematically appear in 956 a complex grammatical environment (such as passive constructions or with additional material inserted between the two items); (3) more importantly, the complex cases not taken into account alter the frequency profile of the pairs concerned. These observations entitle us to believe that, even when more data is added, the n-best precision might remain lower for the basic methods with respect to the parse-based ones. 4.3 How Real the Counts Are? Syntactic analysis (including shallower levels of linguistic analysis traditionally used in collocation extraction, such as lemmatization, POS tagging, or chunking) has two main functions. On the one hand, it guides the extraction system in the candidate selection process, in order to better pinpoint the pairs that might form collocations and to exclude the ones considered as inappropriate (e.g., the pairs combining function words, such as a preposition followed by a determiner). On the other, parsing supports the association measures that will be applied on the selected candidates, by providing more exact frequency information on words — the inflected forms count as instances of the same lexical item — and on their co-occurrence frequency — certain pairs might count as instance of the same pair, others do not. In the following example, the pair loi modifier is an instance of a subject-verb collocation in 3.a), and of a verb-object collocation type in 3.b). Basic methods are unable to distinguish between the two types, and therefore count them as equivalent. 3.a) Loi modifiant la Loi sur la responsabilit´e civile 3.b) la loi devrait ˆetre modifi´ee Parsing helps to create a more realistic frequency profile for the candidate pairs, not only because of the grammaticality constraint it applies on the pairs (wrong pairs are excluded), but also because it can detect the long-distance pairs that are outside the collocational window. Given that the association measures rely heavily on the frequency information, the erroneous counts have a direct influence on the ranking of candidates and, consequently, on the top candidates returned. We believe that in order to achieve a good performance, extraction systems should be as close as possible to the real frequency counts and, of course, to the real syntactic interpretation provided in the source texts6. Since parser-based methods rely on more accurate frequency information for words and their cooccurrence than window methods, it follows that the n-best list obtained with the first methods will probably show an increase in quality over the second. To conclude this section, we enumerate the hypotheses that have been formulated so far: (1) Parse methods provide a noise-freer list of collocation candidates, in comparison with the window methods; (2) Local precision (of best-scored results) with respect to grammaticality is higher for parse methods, since in basic methods some noise still persists, even if more data is included; (3) Local precision with respect to collocability is higher for parse methods, because they use a more realistic image of word co-occurrence frequency. 
5 Comparative Evaluation We compare our hybrid method (based on syntactic processing of texts) against the window method classically used in collocation extraction, from the point of view of their precision with respect to grammaticality and collocability. 5.1 The Method The n-best extraction results, for a given n (in our experiment, n varies from 50 to 500 at intervals of 50) are checked in each case for grammatical well-formedness and for lexicalization. By lexicalization we mean the quality of a pair to constitute (part of) a multi-word expression — be it compound, collocation, idiom or another type of syntagmatic lexical combination. We avoid giving collocability judgments since the classification of multi-word expressions cannot be made precisely and with objective criteria (McKeown and Radev, 2000). We rather distinguish between lexicalizable and trivial combinations (completely regular productions, such as big house, buy bread, that do not deserve a place in the lexicon). As in (Choueka, 1988) and (Evert, 2004), we consider that a dominant feature of collocations is that they are unpredictable for speakers and therefore have to be stored into a lexicon. 6To exemplify this point: the pair d´eveloppement humain (which has been detected as a collocation by the basic method) looks like a valid expression, but the source text consistently offers a different interpretation: d´eveloppement des ressources humaines. 957 Each collocation from the n-best list at the different levels considered is therefore annotated with one of the three flags: 1. ungrammatical; 2. trivial combination; 3. multi-word expression (MWE). On the one side, we evaluate the results of our hybrid, parse-based method; on the other, we simulate a window method, by performing the following steps: POS-tag the source texts; filter the lexical items and retain only the open-class POS; consider all their combinations within a collocational window of length 5; and, finally, apply the log-likelihood ratios test on the pairs of each configuration type. In accordance with (Evert and Kermes, 2003), we consider that the comparative evaluation of collocation extraction systems should not be done at the end of the extraction process, but separately for each stage: after the candidate selection stage, for evaluating the quality (in terms of grammaticality) of candidates proposed; and after the application of collocability measures, for evaluating the measures applied. In each of these cases, different evaluation methodologies and resources are required. In our case, since we used the same measure for the second stage (the log-likelihood ratios test), we could still compare the final output of basic and parse-based methods, as given by the combination of the first stage with the same collocability measure. Again, similarly to Krenn and Evert (2001), we believe that the homogeneity of data is important for the collocability measures. We therefore applied the LLR test on our data after first partitioning it into separate sets, according to the syntactical relation holding in each candidate pair. As the data used in the basic method contains no syntactic information, the partitioning was done based on POS-combination type. 5.2 The Data The evaluation experiment was performed on the whole French corpus used in the extraction experiment (section 2), that is, a subpart of the Hansard corpus of Canadian Parliament proceedings. 
It contains 112 text files totalling 8.43 MB, with an average of 628.1 sentences/file and 23.46 tokens/sentence (as detected by the parser). The total number of tokens is 1, 649, 914. On the one hand, the texts were parsed and 370, 932 candidate pairs were extracted using the hybrid method we presented. Among the pairs extracted, 11.86% (44, 002 pairs) were multi-word expressions identified at parse-time, since present in the parser’s lexicon. The log-likelihood ratios test was applied on the rest of pairs. A score could be associated to 308, 410 of these pairs (corresponding to 131, 384 types); for the others, the score was undefined. On the other hand, the texts were POS-tagged using the same parser as in the first case. If in the first case the candidate pairs were extracted during the parsing, in the second they were generated after the open-class filtering. From 673, 789 POSfiltered tokens, a number of 1, 024, 888 combinations (560, 073 types) were created using the 5length window criterion, while taking care not to cross a punctuation mark. A score could be associated to 1, 018, 773 token pairs (554, 202 types), which means that the candidate list is considerably larger than in the first case. The processing time was more than twice longer than in the first case, because of the large amount of data to handle. 5.3 Results The 500 best-scored collocations retrieved with the two methods were manually checked by three human judges and annotated, as explained in 5.1, as either ungrammatical, trivial or MWE. The agreement statistics on the annotations for each method are shown in Table 3. Method Agr. 1,2,3 1,2 1,3 2,3 parse observed 285 365 362 340 k-score 55.4% 62.6% 69% 64% window observed 226 339 327 269 k-score 43.1% 63.8% 61.1% 48% Table 3: Inter-annotator agreement For reporting n-best precision results, we used as reference set the annotated pairs on which at least two of the three annotators agreed. That is, from the 500 initial pairs retrieved with each method, 497 pairs were retained in the first case (parse method), and 483 pairs in the second (window method). Table 4 shows the comparative evaluation results for precision at different levels in the list of best-scored pairs, both with respect to grammaticality and to collocability (or, more exactly, the potential of a pair to constitute a MWE). The numbers show that a drastic reduction of noise is achieved by parsing the texts. The error rate with 958 Precision (gram.) Precision (MWE) n window parse window parse 50 94.0 96.0 80.0 72.0 100 91.0 98.0 75.0 74.0 150 87.3 98.7 72.7 73.3 200 85.5 98.5 70.5 74.0 250 82.8 98.8 67.6 69.6 300 82.3 98.7 65.0 69.3 350 80.3 98.9 63.7 67.4 400 80.0 99.0 62.5 67.0 450 79.6 99.1 61.1 66.0 500 78.3 99.0 60.1 66.0 Table 4: Comparative evaluation results respect to grammaticality is, on average, 15.9% for the window method; with parsing, it drops to 1.5% (i.e., 10.6 times smaller). This result confirms our hypothesis regarding the local precision which was stated in section 4.2. Despite the inherent parsing errors, the noise reduction is substantial. It is also worth noting that we compared our method against a rather high baseline, as we made a series of choices susceptible to alleviate the candidates identification with the window-based method: we filtered out function words, we used a parser for POS-tagging (that eliminated POS-ambiguity), and we filtered out cross-punctuation pairs. 
As for the MWE precision, the window method performs better for the first 100 pairs7); on the remaining part, the parsing-based method is on average 3.7% better. The precision curve for the window method shows a more rapid degradation than it does for the other. Therefore we can conclude that parsing is especially advantageous if one investigates more that the first hundred results (as it seems reasonable for large extraction experiments). In spite of the rough classification we used in annotation, we believe that the comparison performed is nonetheless meaningful since results should be first checked for grammaticality and ’triviality’ before defining more difficult tasks such as collocability. 6 Conclusion In this paper, we provided both theoretical and empirical arguments in the favor of performing syntactic analysis of texts prior to the extraction of collocations with statistical methods. 7A closer look at the data revealed that this might be explained by some inconsistencies between annotations. Part of the extraction work that, like ours, relies on parsing was cited in section 2. Most often, it concerns chunking rather than complete parsing; specific syntactic configurations (such as adjective-noun, preposition-noun-verb); and languages other than the ones we deal with (usually, English and German). Parsing has been also used after extraction (Smadja, 1993) for filtering out invalid results. We believe that this is not enough and that parsing is required prior to the application of statistical tests, for computing a realistic frequency profile for the pairs tested. As for evaluation, unlike most of the existing work, we are not concerned here with comparing the performance of association measures (cf. (Evert, 2004; Pecina, 2005) for comprehensive references), but with a contrastive evaluation of syntactic-based and standard extraction methods, combined with the same statistical computation. Our study finally clear the doubts on the usefulness of parsing for collocation extraction. Previous work that quantified the influence of parsing on the quality of results suggested the performance for tagged and parsed texts is similar (Evert and Kermes, 2003). This result applies to a quite rigid syntactic pattern, namely adjective-noun in German. But a preceding study on noun-verb pairs (Breidt, 1993) came to the conclusion that good precision can only be achieved for German with parsing. Its author had to simulate parsing because of the lack, at the time, of parsing tools for German. Our report, that concerns an actual system and a large data set, validates Breidt’s finding for a new language (French). Our experimental results confirm the hypotheses put forth in section 4, and show that parsing (even if imperfect) benefits to extraction, notably by a drastic reduction of the noise in the top of the significance list. In future work, we consider investigating other levels of the significance list, extending the evaluation to other languages, comparing against shallow-parsing methods instead of the window method, and performing recall-based evaluation as well. Acknowledgements We would like to thank Jorge Antonio Leoni de Leon, Mar Ndiaye, Vincenzo Pallotta and Yves Scherrer for participating to the annotation task. We are also grateful to Gabrielle Musillo and to the anonymous reviewers of an earlier version of 959 this paper for useful comments and suggestions. References Morton Benson. 1990. Collocations and generalpurpose dictionaries. International Journal of Lexicography, 3(1):23–35. 
Elisabeth Breidt. 1993. Extraction of V-N-collocations from text corpora: A feasibility study for German. In Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, Columbus, U.S.A. Yaacov Choueka. 1988. Looking for needles in a haystack, or locating interesting collocational expressions in large textual databases expressions in large textual databases. In Proceedings of the International Conference on User-Oriented ContentBased Text and Image Handling, pages 609–623, Cambridge, MA. Anthony P. Cowie. 1978. The place of illustrative material and collocations in the design of a learner’s dictionary. In P. Strevens, editor, In Honour of A.S. Hornby, pages 127–139. Oxford: Oxford University Press. D. Alan Cruse. 1986. Lexical Semantics. Cambridge University Press, Cambridge. Ga¨el Dias. 2003. Multiword unit hybrid extraction. In Proceedings of the ACL Workshop on Multiword Expressions, pages 41–48, Sapporo, Japan. Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61–74. Stefan Evert and Hannah Kermes. 2003. Experiments on candidate data for collocation extraction. In Companion Volume to the Proceedings of the 10th Conference of The European Chapter of the Association for Computational Linguistics, pages 83–86, Budapest, Hungary. Stefan Evert. 2004. The Statistics of Word Cooccurrences: Word Pairs and Collocations Word Pairs and Collocations. Ph.D. thesis, University of Stuttgart. John Rupert Firth. 1957. Papers in Linguistics 19341951. Oxford Univ. Press, Oxford. Ray Jackendoff. 1997. The Architecture of the Language Faculty. MIT Press, Cambridge, MA. John S. Justeson and Slava M. Katz. 1995. Technical terminology: Some linguistis properties and an algorithm for identification in text. Natural Language Engineering, 1:9–27. Brigitte Krenn and Stefan Evert. 2001. Can we do better than frequency? A case study on extracting PP-verb collocations. In Proceedings of the ACL Workshop on Collocations, pages 39–46, Toulouse, France. Dekang Lin. 1998. Extracting collocations from text corpora. In First Workshop on Computational Terminology, pages 57–63, Montreal. Christopher Manning and Heinrich Sch¨utze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, Mass. Kathleen R. McKeown and Dragomir R. Radev. 2000. Collocations. In Robert Dale, Hermann Moisl, and Harold Somers, editors, A Handbook of Natural Language Processing, pages 507–523. Marcel Dekker, New York, U.S.A. Igor Mel’ˇcuk. 1998. Collocations and lexical functions. In Anthony P. Cowie, editor, Phraseology. Theory, Analysis, and Applications, pages 23–53. Claredon Press, Oxford. Brigitte Orliac and Mike Dillinger. 2003. Collocation extraction for machine translation. In Proceedings of Machine Translation Summit IX, pages 292–298, New Orleans, Lousiana, U.S.A. Darren Pearce. 2001. Synonymy in collocation extraction. In WordNet and Other Lexical Resources: Applications, Extensions and Customizations (NAACL 2001 Workshop), pages 41–46, Carnegie Mellon University, Pittsburgh. Pavel Pecina. 2005. An extensive empirical study of collocation extraction methods. In Proceedings of the ACL Student Research Workshop, pages 13–18, Ann Arbor, Michigan, June. Association for Computational Linguistics. Ivan A. Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword expressions: A pain in the neck for NLP. 
In Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics (CICLING 2002), pages 1–15, Mexico City. Violeta Seretan and Eric Wehrli. 2006. Multilingual collocation extraction: Issues and solutions solutions. In Proceedings or COLING/ACL Workshop on Multilingual Language Resources and Interoperability, Sydney, Australia, July. To appear. Violeta Seretan, Luka Nerima, and Eric Wehrli. 2003. Extraction of multi-word collocations using syntactic bigram composition. In Proceedings of the Fourth International Conference on Recent Advances in NLP (RANLP-2003), pages 424–431, Borovets, Bulgaria. Frank Smadja. 1993. Retrieving collocations form text: Xtract. Computational Linguistics, 19(1):143– 177. Eric Wehrli. 2004. Un mod`ele multilingue d’analyse syntaxique. In A. Auchlin et al., editor, Structures et discours - M´elanges offerts `a Eddy Roulet, pages 311–329. ´Editions Nota bene, Qu´ebec. 960 | 2006 | 120 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 961–968, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Scalable Inference and Training of Context-Rich Syntactic Translation Models Michel Galley*, Jonathan Graehl†, Kevin Knight†‡, Daniel Marcu†‡, Steve DeNeefe†, Wei Wang‡ and Ignacio Thayer† *Columbia University Dept. of Computer Science New York, NY 10027 [email protected], {graehl,knight,marcu,sdeneefe}@isi.edu, [email protected], [email protected] †University of Southern California Information Sciences Institute Marina del Rey, CA 90292 ‡Language Weaver, Inc. 4640 Admiralty Way Marina del Rey, CA 90292 Abstract Statistical MT has made great progress in the last few years, but current translation models are weak on re-ordering and target language fluency. Syntactic approaches seek to remedy these problems. In this paper, we take the framework for acquiring multi-level syntactic translation rules of (Galley et al., 2004) from aligned tree-string pairs, and present two main extensions of their approach: first, instead of merely computing a single derivation that minimally explains a sentence pair, we construct a large number of derivations that include contextually richer rules, and account for multiple interpretations of unaligned words. Second, we propose probability estimates and a training procedure for weighting these rules. We contrast different approaches on real examples, show that our estimates based on multiple derivations favor phrasal re-orderings that are linguistically better motivated, and establish that our larger rules provide a 3.63 BLEU point increase over minimal rules. 1 Introduction While syntactic approaches seek to remedy wordordering problems common to statistical machine translation (SMT) systems, many of the earlier models—particularly child re-ordering models— fail to account for human translation behavior. Galley et al. (2004) alleviate this modeling problem and present a method for acquiring millions of syntactic transfer rules from bilingual corpora, which we review below. Here, we make the following new contributions: (1) we show how to acquire larger rules that crucially condition on more syntactic context, and show how to compute multiple derivations for each training example, capturing both large and small rules, as well as multiple interpretations for unaligned words; (2) we develop probability models for these multilevel transfer rules, and give estimation methods for assigning probabilities to very large rule sets. We contrast our work with (Galley et al., 2004), highlight some severe limitations of probability estimates computed from single derivations, and demonstrate that it is critical to account for many derivations for each sentence pair. We also use real examples to show that our probability models estimated from a large number of derivations favor phrasal re-orderings that are linguistically well motivated. An empirical evaluation against a state-of-the-art SMT system similar to (Och and Ney, 2004) indicates positive prospects. Finally, we show that our contextually richer rules provide a 3.63 BLEU point increase over those of (Galley et al., 2004). 2 Inferring syntactic transformations We assume we are given a source-language (e.g., French) sentence f, a target-language (e.g., English) parse tree π, whose yield e is a translation of f, and a word alignment a between f and e. 
Our aim is to gain insight into the process of transforming π into f and to discover grammaticallygrounded translation rules. For this, we need a formalism that is expressive enough to deal with cases of syntactic divergence between source and target languages (Fox, 2002): for any given (π, f, a) triple, it is useful to produce a derivation that minimally explains the transformation between π and f, while remaining consistent with a. Galley et al. (2004) present one such formalism (henceforth “GHKM”). 2.1 Tree-to-string alignments It is appealing to model the transformation of π into f using tree-to-string (xRs) transducers, since their theory has been worked out in an extensive literature and is well understood (see, e.g., (Graehl and Knight, 2004)). Formally, transformational rules ri presented in (Galley et al., 2004) are equivalent to 1-state xRs transducers mapping a given pattern (subtree to match in π) to a right hand side string. We will refer to them as lhs(ri) and rhs(ri), respectively. For example, some xRs 961 rules may describe the transformation of does not into ne ... pas in French. A particular instance may look like this: VP(AUX(does), RB(not), x0:VB) →ne, x0, pas lhs(ri) can be any arbitrary syntax tree fragment. Its leaves are either lexicalized (e.g. does) or variables (x0, x1, etc). rhs(ri) is represented as a sequence of target-language words and variables. Now we give a brief overview of how such transformational rules are acquired automatically in GHKM.1 In Figure 1, the (π, f, a) triple is represented as a directed graph G (edges going downward), with no distinction between edges of π and alignments. Each node of the graph is labeled with its span and complement span (the latter in italic in the figure). The span of a node n is defined by the indices of the first and last word in f that are reachable from n. The complement span of n is the union of the spans of all nodes n′ in G that are neither descendants nor ancestors of n. Nodes of G whose spans and complement spans are nonoverlapping form the frontier set F ∈G. What is particularly interesting about the frontier set? For any frontier of graph G containing a given node n ∈F, spans on that frontier define an ordering between n and each other frontier node n′. For example, the span of VP[4-5] either precedes or follows, but never overlaps the span of any node n′ on any graph frontier. This property does not hold for nodes outside of F. For instance, PP[4-5] and VBG[4] are two nodes of the same graph frontier, but they cannot be ordered because of their overlapping spans. The purpose of xRs rules in this framework is to order constituents along sensible frontiers in G, and all frontiers containing undefined orderings, as between PP[4-5] and VBG[4], must be disregarded during rule extraction. To ensure that xRs rules are prevented from attempting to re-order any such pair of constituents, these rules are designed in such a way that variables in their lhs can only match nodes of the frontier set. Rules that satisfy this property are said to be induced by G.2 For example, rule (d) in Table 1 is valid according to GHKM, since the spans corresponding to 1Note that we use a slightly different terminology. 
2Specifically, an xRs rule ri is extracted from G by taking a subtree γ ∈π as lhs(ri), appending a variable to each leaf node of γ that is internal to π, adding those variables to rhs(ri), ordering them in accordance to a, and if necessary inserting any word of f to ensure that rhs(ri) is a sequence of contiguous spans (e.g., [4-5][6][7-8] for rule (f) in Table 1). DT CD VBP NNS IN NNP NP NNS VBG 3 2 2 1 7-8 4 4 5 9 1 2 3 4 5 6 7 8 9 3 1-2,4-9 2 1-9 2 1-9 1 2-9 7-8 1-5,9 4 1-9 4 1-9 5 1-4,7-9 9 1-8 1-2 3-9 NP 7-8 1-5,9 NP 5 1-4, 7-9 PP 4-5 1-4,7-9 VP 4-5 1-3,7-9 NP 4-8 1-3,9 VP 3-8 1-2,9 S 1-9 7! "#$ %& '( ) *+ , . These people include astronauts coming from France . . 7 Figure 1: Spans and complement-spans determine what rules are extracted. Constituents in gray are members of the frontier set; a minimal rule is extracted from each of them. (a) S(x0:NP, x1:VP, x2:.) →x0, x1, x2 (b) NP(x0:DT, CD(7), NNS(people)) →x0, 7º (c) DT(these) →Ù (d) VP(x0:VBP, x1:NP) →x0, x1 (e) VBP(include) →-ì (f) NP(x0:NP, x1:VP) →x1, , x0 (g) NP(x0:NNS) →x0 (h) NNS(astronauts) →*, X (i) VP(VBG(coming), PP(IN(from), x0:NP)) →eê, x0 (j) NP(x0:NNP) →x0 (k) NNP(France) →Õý (l) .(.) →. Table 1: A minimal derivation corresponding to Figure 1. its rhs constituents (VBP[3] and NP[4-8]) do not overlap. Conversely, NP(x0:DT, x1:CD:, x2:NNS) is not the lhs of any rule extractible from G, since its frontier constituents CD[2] and NNS[2] have overlapping spans.3 Finally, the GHKM procedure produces a single derivation from G, which is shown in Table 1. The concern in GHKM was to extract minimal rules, whereas ours is to extract rules of any arbitrary size. Minimal rules defined over G are those that cannot be decomposed into simpler rules induced by the same graph G, e.g., all rules in Table 1. We call minimal a derivation that only contains minimal rules. Conversely, a composed rule results from the composition of two or more minimal rules, e.g., rule (b) and (c) compose into: NP(DT(these), CD(7), NNS(people)) →Ù, 7º 3It is generally reasonable to also require that the root n of lhs(ri) be part of F, because no rule induced by G can compose with ri at n, due to the restrictions imposed on the extraction procedure, and ri wouldn’t be part of any valid derivation. 962 OR NP(x0:NP, x1:VP) ! x1, !, x0 VP(x0:VBP, x1:NP) ! x0 , x1 S(x0:NP, x1:VP, x2:.) ! x0 , x1, x2 NP(x0:DT CD(7), NNS(people)) ! x0, 7" .(.) ! . DT(these) ! # VBP(include) ! $%& NP(x0:NP, x1:VP) ! x1, x0 NP(x0:NP, x1:VP) ! x1, x0 VP(VBG(coming), PP(IN(from), x0:NP)) ! '(, x0, ! VP(VBG(coming), PP(IN(from), x0:NP)) ! '(, x0 NP(x0:NNS) ! x0 NP(x0:NNS) ! !, x0 NP(x0:NNP) ! x0, ! NNP(France) ! )* NNS(astronauts) ! +,, OR OR NNS(astronauts) !!,+,, OR NP(x0:NNP) ! x0 NP(x0:NNP) ! x0 NNP(France) ! )*, ! NP(x0:NNS) ! x0 VP(VBG(coming), PP(IN(from), x0:NP)) ! '(, x0 coming from NNS IN NNP NP VP NP VBG PP NP 7-8 5 7-8 5 7-8 4 4 5 4 5 6 7 8 4 4 4-5 4-5 4-8 NNP(France) !)*, ! NP(x0:NNP) ! x0, ! VP(VBG(coming), PP(IN(from), x0:NP)) ! '(, x0, ! NNS(astronauts) ! !, +,, NP(x0:NNS) ! !, x0 NP(x0:NP, x1:VP) ! x1, !, x0 (a) (b) '( )* ! +, astronauts France Figure 2: (a) Multiple ways of aligning to constituents in the tree. (b) Derivation corresponding to the parse tree in Figure 1, which takes into account all alignments of pictured in (a). Note that these properties are dependent on G, and the above rule would be considered a minimal rule in a graph G′ similar to G, but additionally containing a word alignment between 7 and Ù. 
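The span and complement-span bookkeeping of Figure 1, the frontier-set test, and the extraction step summarised in footnote 2 can be sketched roughly as follows. This is an illustrative re-implementation under simplifying assumptions, not the GHKM reference code: it reuses the TreeNode sketch above, and it skips the insertion of unaligned source words, which is the subject of the next subsection.

from itertools import count

def closed(span):
    # contiguous closure of a span, e.g. {4, 7, 8} -> {4, 5, 6, 7, 8}
    return set(range(min(span), max(span) + 1)) if span else set()

def assign_spans(node, alignment, counter=None):
    # bottom-up: node.span = source positions aligned to target leaves under node
    if counter is None:
        counter = count()                         # enumerates target leaves left to right
    if node.is_leaf():
        i = next(counter)
        node.span = {s for (s, t) in alignment if t == i}
    else:
        node.span = set()
        for child in node.children:
            assign_spans(child, alignment, counter)
            node.span |= child.span

def assign_complement_spans(node, comp=None):
    # top-down: complement span = union of (closed) spans of all nodes that are
    # neither ancestors nor descendants, accumulated from siblings at each level
    node.comp = set() if comp is None else set(comp)
    for i, child in enumerate(node.children):
        siblings = set().union(*[closed(c.span) for j, c in enumerate(node.children) if j != i])
        assign_complement_spans(child, node.comp | siblings)

def frontier_set(node, out=None):
    # frontier set F: nodes whose closed span does not overlap their complement span
    out = [] if out is None else out
    if node.span and not (closed(node.span) & node.comp):
        out.append(node)
    for child in node.children:
        frontier_set(child, out)
    return out

def minimal_rule(node, frontier, f):
    # lhs: the subtree rooted at node, cut at the first frontier descendants,
    # which become variables x0, x1, ...; rhs: those variables and the aligned
    # source words covered by the rule, in source order (unaligned words skipped)
    fids = {id(n) for n in frontier}
    variables = []

    def build_lhs(n):
        if n is not node and id(n) in fids:
            variables.append(n)
            return "x%d:%s" % (len(variables) - 1, n.label)
        if n.is_leaf():
            return n.label
        return "%s(%s)" % (n.label, " ".join(build_lhs(c) for c in n.children))

    lhs = build_lhs(node)
    starts = {min(v.span): "x%d" % i for i, v in enumerate(variables) if v.span}
    inside_vars = set().union(*[closed(v.span) for v in variables])
    rhs = []
    for pos in sorted(closed(node.span)):
        if pos in starts:
            rhs.append(starts[pos])
        elif pos in node.span and pos not in inside_vars:
            rhs.append(f[pos])
    return "%s -> %s" % (lhs, " ".join(rhs))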
We will see in Sections 3 and 5 why extracting only minimal rules can be highly problematic. 2.2 Unaligned words While the general theory presented in GHKM accounts for any kind of derivation consistent with G, it does not particularly discuss the case where some words of the source-language string f are not aligned to any word of e, thus disconnected from the rest of the graph. This case is highly frequent: 24.1% of Chinese words in our 179 million word English-Chinese bilingual corpus are unaligned, and 84.8% of Chinese sentences contain at least one unaligned word. The question is what to do with such lexical items, e.g., in Figure 2(a). The approach of building one minimal derivation for G as in the algorithm described in GHKM assumes that we commit ourselves to a particular heuristic to attach the unaligned item to a certain constituent of π, e.g., highest attachment (in the example, is attached to NP[4-8] and the heuristic generates rule (f)). A more reasonable approach is to invoke the principle of insufficient reason and make no a priori assumption about what is a “correct” way of assigning the item to a constituent, and return all derivations that are consistent with G. In Section 4, we will see how to use corpus evidence to give preference to unaligned-word attachments that are the most consistent across the data. Figure 2(a) shows the six possible ways of attaching to constituents of π: besides the highest attachment (rule (f)), can move along the ancestors of France, since it is to the right of the translation of that word, and be considered to be part of an NNP, NP, or VP rule. We make the same reasoning to the left: can either start the NNS of astronauts, or start an NP. Our account of all possible ways of consistently attaching to constituents means we must extract more than one derivation to explain transformations in G, even if we still restrict ourselves to minimal derivations (a minimal derivation for G is unique if and only if no source-language word in G is unaligned). While we could enumerate all derivations separately, it is much more efficient both in time and space to represent them as a derivation forest, as in Figure 2(b). Here, the forest covers all minimal derivations that correspond to G. It is necessary to ensure that for each derivation, each unaligned item (here ) appears only once in the rules of that derivation, as shown in Figure 2 (which satisfies the property). That requirement will prove to be critical when we address the problem of estimating probabilities for our rules: if we allowed in our example to spuriously generate s in multiple successive steps of the same derivation, we would not only represent the transformation incorrectly, but also -rules would be disproportionately represented, leading to strongly biased estimates. We will now see how to ensure this constraint is satisfied in our rule extraction and derivation building algorithm. 963 2.3 Algorithm The linear-time algorithm presented in GHKM is only a particular case of the more general one we describe here, which is used to extract all rules, minimal and composed, induced by G. Similarly to the GHKM algorithm, ours performs a topdown traversal of G, but differs in the operations it performs at each node n ∈F: we must explore all subtrees rooted at n, find all consistent ways of attaching unaligned words of f, and build valid derivations in accordance to these attachments. 
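A derivation forest like the one in Figure 2(b) can be packed as an AND/OR graph. The sketch below is one assumed representation, not the authors' data structure: an OR-node collects the alternative rules deriving a given source span and syntactic category (mirroring the (x, y, c) indexing introduced next), and each rule node points to one OR-node per variable.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RuleNode:                                   # AND-node: one xRs rule application
    rule: str                                     # e.g. "NP(x0:NNS) -> x0"
    children: List["OrNode"] = field(default_factory=list)   # one OR-node per variable

@dataclass
class OrNode:                                     # OR-node: alternative derivations of
    span: Tuple[int, int]                         # a source span [x, y] ...
    category: str                                 # ... under a syntactic category c
    alternatives: List[RuleNode] = field(default_factory=list)

def count_derivations(or_node):
    # number of distinct derivations packed below this OR-node
    total = 0
    for alt in or_node.alternatives:
        n = 1
        for child in alt.children:
            n *= count_derivations(child)
        total += n
    return total

Because OR-nodes are shared between derivations, a compact forest can pack a very large number of alternatives, which is why the procedures below work on the packed structure rather than on an explicit list of derivations.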
We use a table or-dforest[x, y, c] to store ORnodes, in which each OR-node can be uniquely defined by a syntactic category c and a span [x, y] (which may cover unaligned words of f). This table is used to prevent the same partial derivation to be followed multiple times (the in-degrees of OR-nodes generally become large with composed rules). Furthermore, to avoid over-generating unaligned words, the root and variables in each rule are represented with their spans. For example, in Figure 2(b), the second and third child of the topmost OR-node respectively span across [4-5][6-8] and [4-6][7-8] (after constituent reordering). In the former case, will eventually be realized in an NP, and in the latter case, in a VP. The preprocessing step consists of assigning spans and complement spans to nodes of G, in the first case by a bottom-up exploration of the graph, and in the latter by a top-down traversal. To assign complement spans, we assign the complement span of any node n to each of its children, and for each of them, add the span of the child to the complement span of all other children. In another traversal of G, we determine the minimal rule extractible from each node in F. We explore all tree fragments rooted at n by maintaining an open and a closed queue of rules extracted from n (qo and qc). At each step, we pick the smallest rule in qo, and for each of its variable nodes, try to discover new rules (‘successor rules’) by means of composition with minimal rules, until a given threshold on rule size or maximum number of rules in qc is reached. There may be more that one successor per rule, since we must account for all possible spans than can be assigned to non-lexical leaves of a rule. Once a threshold is reached, or if the open queue is empty, we connect a new OR-node to all rules that have just been extracted from n, and add it to or-dforest. Finally, we proceed recursively, and extract new rules from each node at the frontier of the minimal rule rooted at n. Once all nodes of F have been processed, the or-dforest table contains a representation encoding only valid derivations. 3 Probability models The overall goal of our translation system is to transform a given source-language sentence f into an appropriate translation e in the set E of all possible target-language sentences. In a noisy-channel approach to SMT, we uses Bayes’ theorem and choose the English sentence ˆe ∈E that maximizes:4 ˆe = arg max e ∈E n Pr(e) · Pr(f|e) o (1) Pr(e) is our language model, and Pr(f|e) our translation model. In a grammatical approach to MT, we hypothesize that syntactic information can help produce good translation, and thus introduce dependencies on target-language syntax trees. The function to optimize becomes: ˆe = arg max e∈E n Pr(e)· X π∈τ(e) Pr(f|π)·Pr(π|e) o (2) τ(e) is the set of all English trees that yield the given sentence e. Estimating Pr(π|e) is a problem equivalent to syntactic parsing and thus is not discussed here. Estimating Pr(f|π) is the task of syntax-based translation models (SBTM). Given a rule set R, our SBTM makes the common assumption that left-most compositions of xRs rules θi = r1 ◦... ◦rn are independent from one another in a given derivation θi ∈Θ, where Θ is the set of all derivations constructible from G = (π, f, a) using rules of R. 
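Under this independence assumption, the sum over derivations that appears in the estimate defined next (before its normalisation) can be computed recursively over the OR-graph sketch above. This is an illustrative fragment; p is an assumed dictionary mapping a rule string to its probability p(rhs|lhs).

def inside(or_node, p, memo=None):
    # sum over all derivations below this OR-node of the product of the
    # probabilities of the rules they contain (memoised on shared OR-nodes)
    memo = {} if memo is None else memo
    if id(or_node) not in memo:
        total = 0.0
        for alt in or_node.alternatives:
            score = p[alt.rule]
            for child in alt.children:
                score *= inside(child, p, memo)
            total += score
        memo[id(or_node)] = total
    return memo[id(or_node)]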
Assuming that Λ is the set of all subtree decompositions of π corresponding to derivations in Θ, we define the estimate: Pr(f|π) = 1 |Λ| X θi∈Θ Y rj∈θi p(rhs(rj)|lhs(rj)) (3) under the assumption: X rj∈R:lhs(rj)=lhs(ri) p(rhs(rj)|lhs(rj)) = 1 (4) It is important to notice that the probability distribution defined in Equation 3 requires a normalization factor (|Λ|) in order to be tight, i.e., sum to 1 over all strings fi ∈F that can be derived 4We denote general probability distributions with Pr(·) and use p(·) for probabilities assigned by our models. 964 X a Y b a b c c (!,f1,a1): X a Y b b a c c (!,f2,a2): Figure 3: Example corpus. from π. A simple example suffices to demonstrate it is not tight without normalization. Figure 3 contains a sample corpus from which four rules can be extracted: r1: X(a, Y(b, c)) →a’, b’, c’ r2: X(a, Y(b, c)) →b’, a’, c’ r3: X(a, x0:Y) →a’, x0 r4: Y(b, c) →b’, c’ From Equation 4, the probabilities of r3 and r4 must be 1, and those of r1 and r2 must sum to 1. Thus, the total probability mass, which is distributed across two possible output strings a’b’c’ and b’a’c’, is: p(a’b’c’|π) + p(b’a’c’|π) = p1 + p3 · p4 + p2 = 2, where pi = p(rhs(ri)|lhs(ri)). It is relatively easy to prove that the probabilities of all derivations that correspond to a given decomposition λi ∈Λ sum to 1 (the proof is omitted due to constraints on space). From this property we can immediately conclude that the model described by Equation 3 is tight.5 We examine two estimates p(rhs(r)|lhs(r)). The first one is the relative frequency estimator conditioning on left hand sides: p(rhs(r)|lhs(r)) = f(r) P r′:lhs(r′)=lhs(r) f(r′) (5) f(r) represents the number of times rule r occurred in the derivations of the training corpus. One of the major negative consequences of extracting only minimal rules from a corpus is that an estimator such as Equation 5 can become extremely biased. This again can be observed from Figure 3. In the minimal-rule extraction of GHKM, only three rules are extracted from the example corpus, i.e. rules r2, r3, and r4. Let’s assume now that the triple (π, f1, a1) is represented 99 times, and (π, f2, a2) only once. Given a tree π, the model trained on that corpus can generate the two strings a’b’c’ and b’a’c’ only through two derivations, r3 ◦r4 and r2, respectively. Since all rules in that example have probability 1, and 5If each tree fragment in π is the lhs of some rule in R, then we have |Λ| = 2n, where n is the number of nodes of the frontier set F ∈G (each node is a binary choice point). given that the normalization factor |Λ| is 2, both probabilities p(a’b’c’|π) and p(b’a’c’|π) are 0.5. On the other hand, if all rules are extracted and incorporated into our relative-frequency probability model, r1 seriously counterbalances r2 and the probability of a’b’c’ becomes: 1 2 ·( 99 100 +1) = .995 (since it differs from .99, the estimator remains biased, but to a much lesser extent). An alternative to the conditional model of Equation 3 is to use a joint model conditioning on the root node instead of the entire left hand side: p(r|root(r)) = f(r) P r′:root(r′)=root(r) f(r′) (6) This can be particularly useful if no parser or syntax-based language model is available, and we need to rely on the translation model to penalize ill-formed parse trees. Section 6 will describe an empirical evaluation based on this estimate. 
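Both estimators are easy to state in code. The sketch below is illustrative only: the rule strings and the 99-to-1 counts are stand-ins for the example of Figure 3, with composed as well as minimal rules extracted, and the final line reproduces the 0.995 figure from the discussion above.

from collections import Counter

def relative_freq(counts, key):
    # counts: occurrences of (lhs, rhs) pairs in the training derivations
    # key(lhs): condition on the full lhs (Eq. 5) or on its root label (Eq. 6)
    denom = Counter()
    for (lhs, rhs), c in counts.items():
        denom[key(lhs)] += c
    return {(lhs, rhs): c / denom[key(lhs)] for (lhs, rhs), c in counts.items()}

# counts mimicking Figure 3 with (pi, f1, a1) seen 99 times and (pi, f2, a2)
# once, when composed as well as minimal rules are extracted
counts = Counter({
    ("X(a,Y(b,c))", "a' b' c'"): 99,   # r1 (composed; absent under minimal-only extraction)
    ("X(a,Y(b,c))", "b' a' c'"): 1,    # r2
    ("X(a,x0:Y)",   "a' x0"):    99,   # r3
    ("Y(b,c)",      "b' c'"):    99,   # r4
})
p = relative_freq(counts, key=lambda lhs: lhs)                       # Eq. (5)
p_root = relative_freq(counts, key=lambda lhs: lhs.split("(")[0])    # Eq. (6)

# with |Lambda| = 2: p(a'b'c' | pi) = (p[r1] + p[r3] * p[r4]) / 2 = 0.995
print((p[("X(a,Y(b,c))", "a' b' c'")]
       + p[("X(a,x0:Y)", "a' x0")] * p[("Y(b,c)", "b' c'")]) / 2)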
4 EM training In our previous discussion of parameter estimation, we did not explore the possibility that one derivation in a forest may be much more plausible than the others. If we knew which derivation in each forest was the “true” derivation, then we could straightforwardly collect rule counts off those derivations. On the other hand, if we had good rule probabilities, we could compute the most likely (Viterbi) derivations for each training example. This is a situation in which we can employ EM training, starting with uniform rule probabilities. For each training example, we would like to: (1) score each derivation θi as a product of the probabilities of the rules it contains, (2) compute a conditional probability pi for each derivation θi (conditioned on the observed training pair) by normalizing those scores to add to 1, and (3) collect weighted counts for each rule in each θi, where the weight is pi. We can then normalize the counts to get refined probabilities, and iterate; the corpus likelihood is guaranteed to improve with each iteration. While it is infeasible to enumerate the millions of derivations in each forest, Graehl and Knight (2004) demonstrate an efficient algorithm. They also analyze how to train arbitrary tree transducers into two steps. The first step is to build a derivation forest for each training example, where the forest contains those derivations licensed by the (already supplied) transducer’s rules. The second step employs EM on those derivation forests, running in time proportional to the size of the 965 Best minimal-rule derivation (Cm) p(r) (a) S(x0:NP-C x1:VP x2:.) →x0 x1 x2 .845 (b) NP-C(x0:NPB) →x0 .82 (c) NPB(DT(the) x0:NNS) →x0 .507 (d) NNS(gunmen) →ªK .559 (e) VP(VBD(were) x0:VP-C) →x0 .434 (f) VP-C(x0:VBN x1:PP) →x1 x0 .374 (g) PP(x0:IN x1:NP-C) →x0 x1 .64 (h) IN(by) →« .0067 (i) NP-C(x0:NPB) →x0 .82 (j) NPB(DT(the) x0:NN) →x0 .586 (k) NN(police) →f¹ .0429 (l) VBN(killed) →ûÙ .0072 (m) .(.) →. .981 . The gunmen were killed by the police . DT VBD VBN DT NN NP PP VP-C VP S NNS IN NP . !" #$ % &' Best composed-rule derivation (C4) p(r) (o) S(NP-C(NPB(DT(the) NNS(gunmen))) x0:VP .(.)) →ªK x0 . 1 (p) VP(VBD(were) VP-C(x0:VBN PP(IN(by) x1:NP-C))) →« x1 x0 0.00724 (q) NP-C(NPB(DT(the) NN(police))) →f¹ 0.173 (r) VBN(killed) →ûÙ 0.00719 Figure 4: Two most probable derivations for the graph on the right: the top table restricted to minimal rules; the bottom one, much more probable, using a large set of composed rules. Note: the derivations are constrained on the (π, f, a) triple, and thus include some non-literal translations with relatively low probabilities (e.g. killed, which is more commonly translated as {¡). rule nb. of nb. of derivEMset rules nodes time time Cm 4M 192M 2 h. 4 h. C3 142M 1255M 52 h. 34 h. C4 254M 2274M 134 h. 60 h. Table 2: Rules and derivation nodes for a 54M-word, 1.95M sentence pair English-Chinese corpus, and time to build derivations (on 10 cluster nodes) and run 50 EM iterations. forests. We only need to borrow the second step for our present purposes, as we construct our own derivation forests when we acquire our rule set. A major challenge is to scale up this EM training to large data sets. We have been able to run EM for 50 iterations on our Chinese-English 54million word corpus. The derivation forests for this corpus contain 2.2 billion nodes; the largest forest contains 1.1 million nodes. The outcome is to assign probabilities to over 254 million rules. Our EM runs with either lhs normalization or lhsroot normalization. 
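For a toy forest, one EM iteration of the kind described above can be sketched by explicit enumeration. This is illustrative only: enumerating derivations is exactly what is infeasible at corpus scale, where the forest-based training of Graehl and Knight (2004) is used instead. The sketch assumes the OrNode representation and string-valued rules used earlier, with the lhs recoverable by splitting on " -> ", and is run starting from uniform rule probabilities.

from collections import Counter
from math import prod

def derivations(or_node):
    # all rule sequences packed below this OR-node (toy forests only)
    for alt in or_node.alternatives:
        partial = [[]]
        for child in alt.children:
            partial = [d + sub for d in partial for sub in derivations(child)]
        for d in partial:
            yield [alt.rule] + d

def em_iteration(forests, p):
    # E-step: posterior-weighted rule counts, one packed forest per training example
    expected = Counter()
    for root in forests:
        ds = list(derivations(root))
        scores = [prod(p[r] for r in d) for d in ds]
        z = sum(scores)
        for d, s in zip(ds, scores):
            for r in d:
                expected[r] += s / z              # posterior weight of this derivation
    # M-step: renormalise the fractional counts per lhs
    denom = Counter()
    for rule, c in expected.items():
        denom[rule.split(" -> ")[0]] += c
    return {rule: c / denom[rule.split(" -> ")[0]] for rule, c in expected.items()}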
In the former case, each lhs has an average of three corresponding rhs’s that compete with each other for probability mass. 5 Model coverage We now present some examples illustrating the benefit of composed rules. We trained three p(rhs(ri)|lhs(ri)) models on a 54 million-word English-Chinese parallel corpus (Table 2): the first one (Cm) with only minimal rules, and the two others (C3 and C4) additionally considering composed rules with no more than three, respectively four, internal nodes in lhs(ri). We evaluated these models on a section of the NIST 2002 evaluation corpus, for which we built derivation forests and lhs: S(x0:NP-C VP(x1:VBD x2:NP-C) x3:.) corpus rhsi p(rhsi|lhs) Chinese x1 x0 x2 x3 .3681 (minimal) x0 x1 , x3 x2 .0357 x2 , x0 x1 x3 .0287 x0 x1 , x3 x2 . .0267 Chinese x0 x1 x2 x3 .9047 (composed) x0 x1 , x2 x3 .016 x0 , x1 x2 x3 .0083 x0 x1 x2 x3 .0072 Arabic x1 x0 x2 x3 .5874 (composed) x0 x1 x2 x3 .4027 x1 x2 x0 x3 .0077 x1 x0 x2 " x3 .0001 Table 3: Our model transforms English subject-verb-object (SVO) structures into Chinese SVO and into Arabic VSO. With only minimal rules, Chinese VSO is wrongly preferred. extracted the most probable one (Viterbi) for each sentence pair (based on an automatic alignment produced by GIZA). We noticed in general that Viterbi derivations according to C4 make extensive usage of composed rules, as it is the case in the example in Figure 4. It shows the best derivation according to Cm and C4 on the unseen (π,f,a) triple displayed on the right. The second derivation (log p = −11.6) is much more probable than the minimal one (log p = −17.7). In the case of Cm, we can see that many small rules must be applied to explain the transformation, and at each step, the decision regarding the re-ordering of constituents is made with little syntactic context. For example, from the perspective of a decoder, the word by is immediately transformed into a preposition (IN), but it is in general useful to know which particular function word is present in the sentence to motivate good re-orderings in the up966 lhs1: NP-C(x0:NPB PP(IN(of) x1:NP-C)) (NP-of-NP) lhs2: PP(IN(of) NP-C(x0:NPB PP(IN(of) NP-C(x1:NPB x2:VP)))) (of-NP-of-NP-VP) lhs3: VP(VBD(said) SBAR-C(IN(that) x0:S-C)) (said-that-S) lhs4: SBAR(WHADVP(WRB(when)) S-C(x0:NP-C VP(VBP(are) x1:VP-C))) (when-NP-are-VP) rhs1i p(rhs1i|lhs1) rhs2i p(rhs2i|lhs2) rhs3i p(rhs3i|lhs3) rhs4i p(rhs4i|lhs4) x1 x0 .54 x2 x1 x0 .6754 ô , x0 .6062 ( x1 x0 ö .6618 x0 x1 .2351 ( x2 x1 x0 .035 ô x0 .1073 S x1 x0 ö .0724 x1 x0 .0334 x2 x1 x0 , .0263 h: , x0 .0591 ( x1 x0 ö , .0579 x1 x0 .026 x2 x1 x0 .0116 Ö ô , x0 .0234 , ( x1 x0 ö .0289 Table 4: Translation probabilities promote linguistically motivated constituent re-orderings (for lhs1 and lhs2), and enable non-constituent (lhs3) and non-contiguous (lhs4) phrasal translations. per levels of the tree. A rule like (e) is particularly unfortunate, since it allows the word were to be added without any other evidence that the VP should be in passive voice. On the other hand, the composed-rule derivation of C4 incorporates more linguistic evidence in its rules, and re-orderings are motivated by more syntactic context. Rule (p) is particularly appropriate to create a passive VP construct, since it expects a Chinese passive marker («), an NP-C, and a verb in its rhs, and creates the were ... by construction at once in the left hand side. 5.1 Syntactic translation tables We evaluate the promise of our SBTM by analyzing instances of translation tables (t-table). 
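Given a scored rule set, t-table views of the kind shown below are straightforward to produce; this assumed helper (illustrative only, reusing the dictionary p from the relative-frequency sketch earlier) groups rules by lhs and lists the most probable right hand sides.

from collections import defaultdict

def t_table(p, lhs, k=4):
    # p maps (lhs, rhs) pairs to probabilities; return the k most probable
    # rhs alternatives for the given lhs
    by_lhs = defaultdict(list)
    for (l, rhs), prob in p.items():
        by_lhs[l].append((prob, rhs))
    return sorted(by_lhs[lhs], reverse=True)[:k]

for prob, rhs in t_table(p, "X(a,Y(b,c))"):
    print("%-12s %.4f" % (rhs, prob))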
Table 3 shows how a particular form of SVO construction is transformed into Chinese, which is also an SVO language. While the t-table for Chinese composed rules clearly gives good estimates for the “correct” x0 x1 ordering (p = .9), i.e. subject before verb, the t-table for minimal rules unreasonably gives preference to verb-subject ordering (x1 x0, p = .37), because the most probable transformation (x0 x1) does not correspond to a minimal rule. We obtain different results with Arabic, an VSO language, and our model effectively learns to move the subject after the verb (p = .59). lhs1 in Table 4 shows that our model is able to learn large-scale constituent re-orderings, such as re-ordering NPs in a NP-of-NP construction, and put the modifier first as it is more commonly the case in Chinese (p = .54). If more syntactic context is available as in lhs2, our model provides much sharper estimates, and appropriately reverses the order of three constituents with high probability (p = .68), inserting modifiers first (possessive markers are needed here for better syntactic disambiguation). A limitation of earlier syntax-based systems is their poor handling of non-constituent phrases. Table 4 shows that our model can learn rules for such phrases, e.g., said that (lhs3). While the that has no direct translation, our model effectively learns to separate ô (said) from the relative clause with a comma, which is common in Chinese. Another promising prospect of our model seems to lie in its ability to handle non-contiguous phrases, a feature that state of the art systems such as (Och and Ney, 2004) do not incorporate. The when-NP-are-VP construction of lhs4 presents such a case. Our model identifies that are needs to be deleted, that when translates into the phrase ( ... ö, and that the NP needs to be moved after the VP in Chinese (p = .66). 6 Empirical evaluation The task of our decoder is to find the most likely English tree π that maximizes all models involved in Equation 2. Since xRs rules can be converted to context-free productions by increasing the number of non-terminals, we implemented our decoder as a standard CKY parser with beam search. Its rule binarization is described in (Zhang et al., 2006). We compare our syntax-based system against an implementation of the alignment template (AlTemp) approach to MT (Och and Ney, 2004), which is widely considered to represent the state of the art in the field. We registered both systems in the NIST 2005 evaluation; results are presented in Table 5. With a difference of 6.4 BLEU points for both language pairs, we consider the results of our syntax-based system particularly promising, since these are the highest scores to date that we know of using linguistic syntactic transformations. Also, on the one hand, our AlTemp system represents quite mature technology, and incorporates highly tuned model parameters. On the other hand, our syntax decoder is still work in progress: only one model was used during search, i.e., the EM-trained root-normalized SBTM, and as yet no language model is incorporated in the search (whereas the search in the AlTemp system uses two phrase-based translation models and 967 Syntactic AlTemp Arabic-to-English 40.2 46.6 Chinese-to-English 24.3 30.7 Table 5: BLEU-4 scores for the 2005 NIST test set. Cm C3 C4 Chinese-to-English 24.47 27.42 28.1 Table 6: BLEU-4 scores for the 2002 NIST test set, with rules of increasing sizes. 12 other feature functions). 
Furthermore, our decoder doesn’t incorporate any syntax-based language model, and admittedly our ability to penalize ill-formed parse trees is still limited. Finally, we evaluated our system on the NIST02 test set with the three different rule sets (see Table 6). The performance with our largest rule set represents a 3.63 BLEU point increase (14.8% relative) compared to using only minimal rules, which indicates positive prospects for using even larger rules. While our rule inference algorithm scales to higher thresholds, one important area of future work will be the improvement of our decoder, conjointly with analyses of the impact in terms of BLEU of contextually richer rules. 7 Related work Similarly to (Poutsma, 2000; Wu, 1997; Yamada and Knight, 2001; Chiang, 2005), the rules discussed in this paper are equivalent to productions of synchronous tree substitution grammars. We believe that our tree-to-string model has several advantages over tree-to-tree transformations such as the ones acquired by Poutsma (2000). While tree-to-tree grammars are richer formalisms that provide the potential benefit of rules that are linguistically better motivated, modeling the syntax of both languages comes as an extra cost, and it is admittedly more helpful to focus our syntactic modeling effort on the target language (e.g., English) in cases where it has syntactic resources (parsers and treebanks) that are considerably more available than for the source language. Furthermore, we think there is, overall, less benefit in modeling the syntax of the source language, since the input sentence is fixed during decoding and is generally already grammatical. With the notable exception of Poutsma, most related works rely on models that are restricted to synchronous context-free grammars (SCFG). While the state-of-the-art hierarchical SMT system (Chiang, 2005) performs well despite stringent constraints imposed on its context-free grammar, we believe its main advantage lies in its ability to extract hierarchical rules across phrasal boundaries. Context-free grammars (such as Penn Treebank and Chiang’s grammars) make independence assumptions that are arguably often unreasonable, but as our work suggests, relaxations of these assumptions by using contextually richer rules results in translations of increasing quality. We believe it will be beneficial to account for this finding in future work in syntax-based SMT and in efforts to improve upon (Chiang, 2005). 8 Conclusions In this paper, we developed probability models for the multi-level transfer rules presented in (Galley et al., 2004), showed how to acquire larger rules that crucially condition on more syntactic context, and how to pack multiple derivations, including interpretations of unaligned words, into derivation forests. We presented some theoretical arguments for not limiting extraction to minimal rules, validated them on concrete examples, and presented experiments showing that contextually richer rules provide a 3.63 BLEU point increase over the minimal rules of (Galley et al., 2004). Acknowledgments We would like to thank anonymous reviewers for their helpful comments and suggestions. This work was partially supported under the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR001106-C-0022. References D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. of ACL. H. Fox. 2002. Phrasal cohesion and statistical machine translation. In Proc. of EMNLP, pages 304–311. M. Galley, M. 
Hopkins, K. Knight, and D. Marcu. 2004. What’s in a translation rule? In Proc. of HLT/NAACL-04. J. Graehl and K. Knight. 2004. Training tree transducers. In Proc. of HLT/NAACL-04, pages 105–112. F. Och and H. Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–449. A. Poutsma. 2000. Data-oriented translation. In Proc. of COLING, pages 635–641. D. Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–404. K. Yamada and K. Knight. 2001. A syntax-based statistical translation model. In Proc. of ACL, pages 523–530. H. Zhang, L. Huang, D. Gildea, and K. Knight. 2006. Synchronous binarization for machine translation. In Proc. of HLT/NAACL.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 969–976, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Modelling lexical redundancy for machine translation David Talbot and Miles Osborne School of Informatics, University of Edinburgh 2 Buccleuch Place, Edinburgh, EH8 9LW, UK [email protected], [email protected] Abstract Certain distinctions made in the lexicon of one language may be redundant when translating into another language. We quantify redundancy among source types by the similarity of their distributions over target types. We propose a languageindependent framework for minimising lexical redundancy that can be optimised directly from parallel text. Optimisation of the source lexicon for a given target language is viewed as model selection over a set of cluster-based translation models. Redundant distinctions between types may exhibit monolingual regularities, for example, inflexion patterns. We define a prior over model structure using a Markov random field and learn features over sets of monolingual types that are predictive of bilingual redundancy. The prior makes model selection more robust without the need for language-specific assumptions regarding redundancy. Using these models in a phrase-based SMT system, we show significant improvements in translation quality for certain language pairs. 1 Introduction Data-driven machine translation (MT) relies on models that can be efficiently estimated from parallel text. Token-level independence assumptions based on word-alignments can be used to decompose parallel corpora into manageable units for parameter estimation. However, if training data is scarce or language pairs encode significantly different information in the lexicon, such as Czech and English, additional independence assumptions may assist the model estimation process. Standard statistical translation models use separate parameters for each pair of source and target types. In these models, distinctions in either lexicon that are redundant to the translation process will result in unwarranted model complexity and make parameter estimation from limited parallel data more difficult. A natural way to eliminate such lexical redundancy is to group types into homogeneous clusters that do not differ significantly in their distributions over types in the other language. Cluster-based translation models capture the corresponding independence assumptions. Previous work on bilingual clustering has focused on coarse partitions of the lexicon that resemble automatically induced part-of-speech classes. These were used to model generic word-alignment patterns such as noun-adjective re-ordering between English and French (Och, 1998). In contrast, we induce fine-grained partitions of the lexicon, conceptually closer to automatic lemmatisation, optimised specifically to assign translation probabilities. Unlike lemmatisation or stemming, our method specifically quantifies lexical redundancy in a bilingual setting and does not make language-specific assumptions. We tackle the problem of redundancy in the translation lexicon via Bayesian model selection over a set of cluster-based translation models. We search for the model, defined by a clustering of the source lexicon, that maximises the marginal likelihood of target tokens in parallel data. In this optimisation, source types are combined into clusters if their distributions over target types are too similar to warrant distinct parameters. 
Redundant distinctions between types may exhibit regularities within a language, for instance, inflexion patterns. These can be used to guide model selection. Here we show that the inclusion of a model ‘prior’ over the lexicon structure leads to more robust translation models. Although a priori we do not know which monolingual features characterise redundancy for a given language pair, by defining a model over the prior monolingual 969 space of source types and cluster assignments, we can introduce an inductive bias that allows clustering decisions in different parts of the lexicon to influence one another via monolingual features. We use an EM-type algorithm to learn weights for a Markov random field parameterisation of this prior over lexicon structure. We obtain significant improvements in translation quality as measured by BLEU, incorporating these optimised model within a phrase-based SMT system for three different language pairs. The MRF prior improves the results and picks up features that appear to agree with linguistic intuitions of redundancy for the language pairs considered. 2 Lexical redundancy between languages In statistical MT, the source and target lexicons are usually defined as the sets of distinct types observed in the parallel training corpus for each language. Such models may not be optimal for certain language pairs and training regimes. A word-level statistical translation model approximates the probability Pr(E|F) that a source type indexed by F will be translated as a target type indexed by E. Standard models, e.g. Brown et al. (1993), consist of discrete probability distributions with separate parameters for each unique pairing of a source and target types; no attempt is made to leverage structure within the event spaces E and F during parameter estimation. This results in a large number of parameters that must be estimated from limited amounts of parallel corpora. We refer to distinctions made between lexical types in one language that do not result in different distributions over types in the other language as lexically redundant for the language pair. Since the role of the translation model is to determine a distribution over target types given a source type, when the corresponding target distributions do not vary significantly over a set of source types, the model gains nothing by maintaining a distinct set of parameters for each member of this set. Lexical redundancy may arise when languages differ in the specificity with which they refer to the same concepts. For instance, colours of the spectrum may be partitioned differently (e.g. blue in English v.s. sinii and goluboi in Russian). It will also arise when languages explicitly encode different information in the lexicon. For example, translating from French to English, a standard model would treat the following pairs of source and target types as distinct events with entirely unrelated parameters: (vert, green), (verte, green), (verts, green) and (vertes, green). Here the French types differ only in their final suffixes due to adjectival agreement. Since there is no equivalent mechanism in English, these distinctions are redundant with respect to this target language. Distinctions that are redundant in the source lexicon when translating into one language may, however, be significant when translating into another. For instance, the French adjectival number agreement (the addition of an s) may be significant when translating to Russian which also marks adjectives for number (the inflexion to -ye). 
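As a toy numerical illustration of this notion of redundancy (the counts and the similarity measure below are invented for exposition; the criterion actually used later is the Bayesian one developed in Section 3): source types whose conditional distributions over target types nearly coincide are candidates for conflation, while types with clearly different target distributions are not.

from collections import Counter

def target_dist(counts, src):
    # conditional distribution over target types for one source type
    row = {e: c for (f, e), c in counts.items() if f == src}
    total = sum(row.values())
    return {e: c / total for e, c in row.items()}

def total_variation(p, q):
    # 0 when the two distributions coincide, 1 when they are disjoint
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))

counts = Counter({("vert", "green"): 40, ("verte", "green"): 21,
                  ("verts", "green"): 12, ("vertes", "green"): 7,
                  ("verre", "glass"): 30, ("verre", "green"): 1})
print(total_variation(target_dist(counts, "vert"), target_dist(counts, "verte")))  # 0.0
print(total_variation(target_dist(counts, "vert"), target_dist(counts, "verre")))  # ~0.97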
We can remove redundancy from the translation model by conflating redundant types, e.g. vert .= {vert, verte, verts, vertes}, and averaging bilingual statistics associated with these events. 3 Eliminating redundancy in the model Redundancy in the translation model can be viewed as unwarranted model complexity. A cluster-based translation model defined via a hardclustering of the lexicon can reduce this complexity by introducing additional independence assumptions: given the source cluster label, cj, the target type, ei, is assumed to be independent of the exact source type, fj, observed, i.e., p(ei|fj) ≈ p(ei|cj). Optimising the model for lexical redundancy can be viewed as model selection over a set of such cluster-based translation models. We formulate model search as a maximum a posteriori optimisation: the data-dependent term, p(D|C), quantifies evidence provided for a model, C, by bilingual training data, D, while the prior, p(C), can assert a preference for a particular model structure (clustering of the source lexicon) on the basis of monolingual features. Both terms have parameters that are estimated from data. Formally, we search for C∗, C∗ = arg maxC p(C|D) = arg maxC p(C)p(D|C). (1) Evaluating the data-dependent term, p(D|C), for different partitions of the source lexicon, we can compare how well different models predict the target tokens aligned in a parallel corpus. This term will prefer models that group together source types with similar distributions over target types. By using the marginal likelihood (integrating out the parameters of the translation model) to calculate 970 p(D|C), we can account explicitly for the complexity of the translation model and compare models with different numbers of clusters as well as different assignments of types to clusters. In addition to an implicit uniform prior over cluster labels as in k-means clustering (e.g. Chou (1991)), we also consider a Markov random field (MRF) parameterisation of the p(C) term to capture monolingual regularities in the lexicon. The MRF induces dependencies between clustering decisions in different parts of the lexicon via a monolingual feature space biasing the search towards models that exhibit monolingual regularities. Rather than assuming a priori knowledge of redundant distinctions in the source language, we use an EM algorithm to update parameters for features defined over sets of source types on the basis of existing cluster assignments. While initially the model search will be guided only by information from the bilingual statistics in p(D|C), monolingual regularities in the lexicon, such as inflexion patterns, may gradually be propagated through the model as p(C) becomes informative. Our experiments suggest that the MRF prior enables more robust model selection. As stated, the model selection procedure accounts for redundancy in the source lexicon using the target distributions. The target lexicon can be optimised analogously. Clustering target types allows the implementation of independence assumptions asserting that the exact specification of a target type is independent of the source type given knowledge of the target cluster label. For example, when translating an English adjective into French it may be more efficient to use the translation model to specify only that the translation lies within a certain set of French adjectives, corresponding to a single lemma, and have the language model select the exact form. 
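Both factors of the optimisation just described can be made concrete with a short sketch (illustrative only; the data structures and pseudo-count handling are assumptions). Summing co-occurrence counts over a cluster gives the statistics behind the cluster-conditional translation probabilities p(e|c_f), and the data-dependent term scores a cluster by the marginal likelihood of the target tokens aligned to it; under a Dirichlet prior on multinomial parameters this marginal has the standard Dirichlet-multinomial closed form used below. The merge score at the end is the gain in log marginal likelihood from modelling two clusters' data together, the kind of quantity that drives the search.

from collections import Counter
from math import lgamma

def cluster_stats(cooc, clustering):
    # sufficient statistics: co-occurrence counts #(e, f) summed over all
    # source types f assigned to the same cluster
    stats = Counter()
    for (f, e), c in cooc.items():
        stats[(clustering[f], e)] += c
    return stats

def p_e_given_cluster(stats, cluster, e):
    # cluster-conditional translation probability p(e | c_f), unsmoothed for illustration
    total = sum(c for (cl, _), c in stats.items() if cl == cluster)
    return stats[(cluster, e)] / total

def log_marginal(counts_e, alpha):
    # Dirichlet-multinomial marginal likelihood of one cluster's target counts;
    # alpha maps each target type to its (tied, background-proportional) pseudo-count
    a0, n0 = sum(alpha.values()), sum(counts_e.values())
    return (lgamma(a0) - lgamma(a0 + n0)
            + sum(lgamma(alpha[e] + counts_e.get(e, 0)) - lgamma(alpha[e]) for e in alpha))

def merge_gain(counts_i, counts_j, alpha):
    # log Bayes factor in favour of modelling the two clusters' data together
    merged = Counter(counts_i) + Counter(counts_j)
    return (log_marginal(merged, alpha)
            - log_marginal(counts_i, alpha) - log_marginal(counts_j, alpha))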
Our experiments suggest that it can be useful to account for redundancy in both languages in this way; this can be incorporated simply within our optimisation procedure. In Section 3.1 we describe the bilingual marginal likelihood, p(D|C), clustering procedure; in Section 3.2 we introduce the MRF parameterisation of the prior, p(C), over model structure; and in Section 3.3, we describe algorithmic approximations. 3.1 Bilingual model selection Assume we are optimising the source lexicon (the target lexicon is optimised analogously). A clustering of the lexicon is a unique mapping CF : F →CF defined for all f ∈F where, in addition to all source types observed in the parallel training corpus, F may include items seen in other monolingual corpora (and, in the case of the source lexicon only, the development and test data). The standard SMT lexicon can be viewed as a clustering with each type observed in the parallel training corpus assigned to a distinct cluster and all other types assigned to a single ‘unknown word’ cluster. We optimise a conditional model of target tokens from word-aligned parallel corpora, D = {Dc0, ..., DcN }, where Dci represents the set of target words that were aligned to the set of source types in cluster ci. We assume that each target token in the corpus is generated conditionally i.i.d. given the cluster label of the source type to which it is aligned. Sufficient statistics for this model consist of co-occurrence counts of source and target types summed across each source cluster, #cf (e) .= X f′∈cf #(e, f′). (2) Maximising the likelihood of the data under this model would require us to specify the number of clusters (the size of the lexicon) in advance. Instead we place a Dirichlet prior parameterised by α1 over the translation model parameters of each cluster, µcf,e, defining the conditional distributions over target types. Given a clustering, the Dirichlet prior, and independent parameters, the distribution over data and parameters factorises, p(D, µ|CF , α) = Y cf∈CF p(Dcf , µcf |cf, α) ∝ Y cf∈CF Y e∈E µ α−1+#cf (e) cf,e We optimise cluster assignments with respect to the marginal likelihood which averages the likelihood of the set of counts assigned to a cluster, Dcf , under the current model over the prior, p(Dcf |α, cf) = Z p(µcf |α)p(Dcf |µcf , cf)dµcf . This can be evaluated analytically for a Dirichlet prior with multinomial parameters. Assuming a (fixed) uniform prior over model structure, p(C), model selection involves iteratively re-assigning source types to clusters such as to maximise the marginal likelihood. Reassignments may alter the total number of clusters 1Distinct from the prior over model structure, p(C). 971 at any point. Updates can be calculated locally, for instance, given the sets of target tokens Dci and Dcj aligned to source types currently in clusters ci and cj, the change in log marginal likelihood if clusters ci and cj are merged into cluster ¯c is, ∆ci,cj→¯c = log p(D¯c|α, ¯c) p(Dci|α, ci)p(Dcj|α, cj), (3) which is a Bayes factor in favour of the hypothesis that Dci and Dcj were sampled from the same distribution (Wolpert, 1995). Unlike its equivalent in maximum likelihood clustering, Eq.(3) may assume positive values favouring a smaller number of clusters when the data does not support a more complex hypothesis. The more complex model, with ci and cj modelled separately, is penalised for being able to model a wider range of data sets. 
The hyperparameter, α, is tied across clusters and taken to be proportional to the marginal (the ‘background’) distribution over target types in the corpus. Under this prior, source types aligned to the same target types, will be clustered together more readily if these target types are less frequent in the corpus as a whole. 3.2 Markov random field model prior As described above we consider a Markov random field (MRF) parameterisation of the prior over model structure, p(C). This defines a distribution over cluster assignments of the source lexicon as a whole based solely on monolingual characteristics of the lexical types and the relations between their respective cluster assignments. Viewed as graph, each variable in the MRF is modelled as conditionally independent of all other variables given the values of its neighbours (the Markov property; (Geman and Geman, 1984)). Each variable in the MRF prior corresponds to a lexical source type and its cluster assignment. Fig. 1 shows a section of the complete model including the MRF prior for a Welsh source lexicon; shading denotes cluster assignments and English target tokens are shown as directed nodes.2 From the Markov property it follows that this prior decomposes over neighbourhoods, pMRF(C)∝e β P f∈F P f′∈Nf P i λiψi(f,f′,cf,c′ f) Here Nf is the set of neighbours of source type f; i indexes a set of functions ψi(·) that pick out features of a clique; each function has a parameter λi 2The plates represent repeated sampling; each Welsh source type may be aligned to multiple English tokens. Figure 1: Model with Markov random field prior #(f) #(f) #(f) #(f) car car #(f) wales wales car gar cymru gymru bar mar that we learn from the data; these are tied across the graph. β is a free parameter used to control the overall contribution of the prior in Eq. (1). Here features are defined over pairs of types but higherorder interactions can also be modelled. We only consider ‘positive’ prior knowledge that is indicative of redundancy among source types. Hence all features are non-zero only when their arguments are assigned to the same cluster. Features can be defined over any aspects of the lexicon; in our experiments we use binary features over constrained string edits between types. The following feature would be 1, for instance, if the Welsh types cymru and gymru (see Fig. 1), were assigned to the same cluster.3 ψ1(fi = (c ∼) ∧fj = (g ∼) ∧ci = cj) Setting the parameters of the MRF prior over this feature space by hand would require a priori knowledge of redundancies for the language pair. In the absence of such knowledge, we use an iterative EM algorithm to update the parameters on the basis of the previous solution to the bilingual clustering procedure. EM parameter estimation forces the cluster assignments of the MRF prior to agree with those obtained on the basis of bilingual data using monolingual features alone. Since features are tied across the MRF, patterns that characterise redundant relations between types will be re-enforced across the model. For instance (see Fig. 1), if cymru and gymru are clustered together, the parameter for feature ψ1, shown above, may increase. This induces a prior preference for car and gar to form a cluster on subsequent iterations. A similar feature defined for mar and gar in the a priori string edit feature space, on the other hand, may remain uninformative if not observed frequently on pairs of types assigned to the same clusters. 
In this way, the model learns to 3Here ∼matches a common substring of both arguments. 972 generalise language-specific redundancy patterns from a large a priori feature space. Changes in the prior due to re-assignments can be calculated locally and combined with the marginal likelihood. 3.3 Algorithmic approximations The model selection procedure is an EM algorithm. Each source type is initially assigned to its own cluster and the MRF parameters, λi, are initialised to zero. A greedy E-step iteratively reassigns each source type to the cluster that maximises Eq. (1); cluster statistics are updated after any re-assignment. To reduce computation, we only consider re-assignments that would cause at least one (non-zero) feature in the MRF to fire, or to clusters containing types sharing target wordalignments with the current type; types may also be re-assigned to a cluster of their own at any iteration. When clustering both languages simultaneously, we average ‘target’ statistics over the number of events in each ‘target’ cluster in Eq. (2). We re-estimate the MRF parameters after each pass through the vocabulary. These are updated according to MLE using a pseudolikelihood approximation (Besag, 1986). Since MRF parameters can only be non-zero for features observed on types clustered together during an E-step, we use lazy instantiation to work with a large implicit feature set defined by a constrained string edit. The algorithm has two free parameters: α determining the strength of the Dirichlet prior used in the marginal likelihood, p(D|C), and β which determines the contribution of pMRF(C) to Eq. (1). 4 Experiments Phrase-based SMT systems have been shown to outperform word-based approaches (Koehn et al., 2003). We evaluate the effects of lexicon model selection on translation quality by considering two applications within a phrase-based SMT system. 4.1 Applications to phrase-based SMT A phrase-based translation model can be estimated in two stages: first a parallel corpus is aligned at the word-level and then phrase pairs are extracted (Koehn et al., 2003). Aligning tokens in parallel sentences using the IBM Models (Brown et al., 1993), (Och and Ney, 2003) may require less information than full-blown translation since the task is constrained by the source and target tokens present in each sentence pair. In the phrase-level translation table, however, the model must assign Source Tokens Types Singletons Test OOV Czech 468K 54K 29K 6K 469 French 5682K 53K 19K 16K 112 Welsh 4578K 46K 18K 15K 64 Table 1: Parallel corpora used in the experiments. probabilities to a potentially unconstrained set of target phrases. We anticipate the optimal model sizes to be different for these two tasks. We can incorporate an optimised lexicon at the word-alignment stage by mapping tokens in the training corpus to their cluster labels. The mapping will not change the number of tokens in a sentence, hence the word-alignments can be associated with the original corpus (see Exp. 1). To extrapolate a mapping over phrases from our type-level models we can map each type within a phrase to its corresponding cluster label. This, however, results in a large number of distinct phrases being collapsed down to a single ‘clustered phrase’. Using these directly may spread probability mass too widely. Instead we use them to smooth the phrase translation model (see Exp. 2). Here we consider a simple interpolation scheme; they could also be used within a backoff model (Yang and Kirchhoff, 2006). 
4.2 Experimental set-up The system we use is described in (Koehn, 2004). The phrase-based translation model includes phrase-level and lexical weightings in both directions. We use the decoder’s default behaviour for unknown words copying them verbatim to the output. Smoothed trigram language models are estimated on training sections of the parallel corpus. We used the parallel sections of the Prague Treebank (Cmejrek et al., 2004), French and English sections of the Europarl corpus (Koehn, 2005) and parallel text from the Welsh Assembly4 (see Table1). The source languages, Czech, French and Welsh, were chosen on the basis that they may exhibit different degrees of redundancy with respect to English and that they differ morphologically. Only the Czech corpus has explicit morphological annotation. 4.3 Models All models used in the experiments are defined as mappings of the source and target vocabularies. The target vocabulary includes all distinct types 4This Welsh-English parallel text is in the public domain. Contact the first author for details. 973 seen in the training corpus; the source vocabulary also includes types seen only in development and test data. Free parameters were set to maximize our evaluation metric, BLEU, on development data. The results are reported on the test sets (see Table 1). The baseline mappings used were: • standard: the identity mapping; • max-pref: a prefix of no more than n letters; • min-freq: a prefix with a frequency of at least n in the parallel training corpus. • lemmatize: morphological lemmas (Czech) standard corresponds to the standard SMT lexicon. max-pref and min-freq are both simple stemming algorithms that can be applied to raw text. These mappings result in models defined over fewer distinct events that will have higher frequencies; min-freq optimises the latter directly. We optimise over (possibly different) values of n for source and target languages. The lemmatize mapping which maps types to their lemmas was only applicable to the Czech corpus. The optimised lexicon models define mappings directly via their clusterings of the vocabulary. We consider the following four models: • src: clustered source lexicon; • src+mrf: as src with MRF prior; • src+trg: clustered source and target lexicons; • src+trg+mrf: as src+trg with MRF priors. In each case we optimise over α (a single value for both languages) and, when using the MRF prior, over β (a single value for both languages). 4.4 Experiments The two sets of experiments evaluate the baseline models and optimised lexicon models during word-alignment and phrase-level translation model estimation respectively. • Exp. 1: map the parallel corpus, perform word-alignment; estimate the phrase translation model using the original corpus. • Exp. 2: smooth the phrase translation model, p(e|f) = #(e, f) + γ#(ce, cf) #(f) + γ#(cf) Here e, f and ce, cf are phrases mapped under the standard model and the model being tested respectively; γ is set once for all experiments on development data. Wordalignments were generated using the optimal max-pref mapping for each training set. 5 Results Table 2 shows the changes in BLEU when we incorporate the lexicon mappings during the wordalignment process. The standard SMT lexicon model is not optimal, as measured by BLEU, for any of the languages or training set sizes considered. Increases over this baseline, however, diminish with more training data. 
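Referring back to Exp. 2, the interpolation used there amounts to the following sketch (illustrative; the count tables, the cluster mapping and gamma are stand-ins for the quantities defined above).

def smoothed_phrase_prob(e, f, counts, cluster_counts, cluster_of, gamma):
    # p(e|f) = (#(e, f) + gamma * #(c_e, c_f)) / (#(f) + gamma * #(c_f))
    ce, cf = cluster_of(e), cluster_of(f)         # phrases mapped type-by-type to cluster labels
    num = counts.get((f, e), 0) + gamma * cluster_counts.get((cf, ce), 0)
    den = (sum(c for (src, _), c in counts.items() if src == f)
           + gamma * sum(c for (src, _), c in cluster_counts.items() if src == cf))
    return num / den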
For both Czech and Welsh, the explicit model selection procedure that we have proposed results in better translations than all of the baseline models when the MRF prior is used; again these increases diminish with larger training sets. We note that the stemming baseline models appear to be more effective for Czech than for Welsh. The impact of the MRF prior is also greater for smaller training sets. Table 3 shows the results of using these models to smooth the phrase translation table.5 With the exception of Czech, the improvements are smaller than for Exp 1. For all source languages and models we found that it was optimal to leave the target lexicon unmapped when smoothing the phrase translation model. Using lemmatize for word-alignment on the Czech corpus gave BLEU scores of 32.71 and 37.21 for the 10K and 21K training sets respectively; used to smooth the phrase translation model it gave scores of 33.96 and 37.18. 5.1 Discussion Model selection had the largest impact for smaller data sets suggesting that the complexity of the standard model is most excessive in sparse data conditions. The larger improvements seen for Czech and Welsh suggest that these languages encode more redundant information in the lexicon with respect to English. Potential sources could be grammatical case markings (Czech) and mutation patterns (Welsh). The impact of the MRF prior for smaller data sets suggests it overcomes sparsity in the bilingual statistics during model selection. The location of redundancies, in the form of case markings, at the ends of words in Czech as assumed by the stemming algorithms may explain why these performed better on this language than 5The standard model in Exp. 2 is equivalent to the optimised max-pref in Exp. 1. 974 Table 2: BLEU scores with optimised lexicon applied during word-alignment (Exp. 1) Czech-English French-English Welsh-English Model 10K sent. 21K 10K 25K 100K 250K 10K 25K 100K 250K standard 32.31 36.17 20.76 23.17 26.61 27.63 35.45 39.92 45.02 46.47 max-pref 34.18 37.34 21.63 23.94 26.45 28.25 35.88 41.03 44.82 46.11 min-freq 33.95 36.98 21.22 23.77 26.74 27.98 36.23 40.65 45.38 46.35 src 33.95 37.27 21.43 24.42 26.99 27.82 36.98 40.98 45.81 46.45 src+mrf 33.97 37.89 21.63 24.38 26.74 28.39 37.36 41.13 46.50 46.56 src+trg 34.24 38.28 22.05 24.02 26.53 27.80 36.83 41.31 45.22 46.51 src+trg+mrf 34.70 38.44 22.33 23.95 26.69 27.75 37.56 42.19 45.18 46.48 Table 3: BLEU scores with optimised lexicon used to smooth phrase-based translation model (Exp. 2) Czech-English French-English Welsh-English Model 10K sent. 21K 10K 25K 100K 250K 10K 25K 100K 250K (standard)5 34.18 37.34 21.63 23.94 26.45 28.25 35.88 41.03 44.82 46.11 max-pref 35.63 38.81 22.49 24.10 26.99 28.26 37.31 40.09 45.57 46.41 min-freq 34.65 37.75 21.14 23.41 26.29 27.47 36.40 40.84 45.75 46.45 src 34.38 37.98 21.28 24.17 26.88 28.35 36.94 39.99 45.75 46.65 src+mrf 36.24 39.70 22.02 24.10 26.82 28.09 37.81 41.04 46.16 46.51 Table 4: System output (Welsh 25K; Exp. 2) Src ehangu o ffilm i deledu. Ref an expansion from film into television. standard expansion of footage to deledu. max-pref expansion of ffilm to television. src+mrf expansion of film to television. 
Src yw gwarchod cymru fel gwlad brydferth Ref safeguarding wales as a picturesque country standard protection of wales as a country brydferth max-pref protection of wales as a country brydferth src+mrf protecting wales as a beautiful country Src cynhyrchu canlyniadau llai na pherffaith Ref produces results that are less than perfect standard produce results less than pherffaith max-pref produce results less than pherffaith src+mrf generates less than perfect results Src y dynodiad o graidd y broblem Ref the identification of the nub of the problem standard the dynodiad of the heart of the problem max-pref the dynodiad of the heart of the problem src+mrf the identified crux of the problem on Welsh. The highest scoring features in the MRF (see Table 5) show that Welsh redundancies, on the other hand, are primarily between initial characters. Inspection of system output confirms that OOV types could be mapped to known Welsh words with the MRF prior but not via stemming (see Table 4). For each language pair the MRF learned features that capture intuitively redundant patterns: adjectival endings for French, case markings for Czech, and mutation patterns for Welsh. The greater improvements in Exp. 1 were mirrored by higher compression rates for these lexicons (see Table. 6) supporting the conjecture that word-alignment requires less information than full-blown translation. The results of the lemmaTable 5: Features learned by MRF prior Czech French Welsh (∼, ∼m) (∼, ∼s) (c ∼, g ∼) (∼, ∼u) (∼, ∼e) (d ∼, dd ∼) (∼, ∼a) (∼, ∼es) (d ∼, t ∼) (∼, ∼ch) (∼e, ∼es) (b ∼, p ∼) (∼, ∼ho) (∼e, ∼er) (c ∼, ch ∼) (∼a, ∼u) (∼e, ∼ent) (b ∼, f ∼) Note: Features defined over pairs of source types assigned to the same cluster; here ∼matches a common substring. Table 6: Optimal lexicon size (ratio of raw vocab.) Czech French Welsh Word-alignment 0.26 0.22 0.24 TM smoothing 0.28 0.38 0.51 tize model on Czech show the model selection procedure improving on a simple supervised baseline. 6 Related Work Previous work on automatic bilingual word clustering has been motivated somewhat differently and not made use of cluster-based models to assign translation probabilities directly (Wang et al., 1996), (Och, 1998). There is, however, a large body of work using morphological analysis to define cluster-based translation models similar to ours but in a supervised manner (Zens and Ney, 2004), (Niessen and Ney, 2004). These approaches have used morphological annotation (e.g. lemmas and part of speech tags) to provide explicit supervision. They have also involved manually specifying which morphological distinc975 tions are redundant (Goldwater and McClosky, 2005). In contrast, we attempt to learn both equivalence classes and redundant relations automatically. Our experiments with orthographic features suggest that some morphological redundancies can be acquired in an unsupervised fashion. The marginal likelihood hard-clustering algorithm that we propose here for translation model selection can be viewed as a Bayesian k-means algorithm and is an application of Bayesian model selection techniques, e.g., (Wolpert, 1995). The Markov random field prior over model structure extends the fixed uniform prior over clusters implicit in k-means clustering and is common in computer vision (Geman and Geman, 1984). Recently Basu et al. (2004) used an MRF to embody hard constraints within semi-supervised clustering. 
In contrast, we use an iterative EM algorithm to learn soft constraints within the ‘prior’ monolingual space based on the results of clustering with bilingual statistics. 7 Conclusions and Future Work We proposed a framework for modelling lexical redundancy in machine translation and tackled optimisation of the lexicon via Bayesian model selection over a set of cluster-based translation models. We showed improvements in translation quality incorporating these models within a phrasebased SMT sytem. Additional gains resulted from the inclusion of an MRF prior over model structure. We demonstrated that this prior could be used to learn weights for monolingual features that characterise bilingual redundancy. Preliminary experiments defining MRF features over morphological annotation suggest this model can also identify redundant distinctions categorised linguistically (for instance, that morphological case is redundant on Czech nouns and adjectives with respect to English, while number is redundant only on adjectives). In future work we will investigate the use of linguistic resources to define feature sets for the MRF prior. Lexical redundancy would ideally be addressed in the context of phrases, however, computation and statistical estimation may then be significantly more challenging. Acknowledgements The authors would like to thank Philipp Koehn for providing training scripts used in this work; and Steve Renals, Mirella Lapata and members of the Edinburgh SMT Group for valuable comments. This work was supported by an MRC Priority Area Studentship to the School of Informatics, University of Edinburgh. References Sugato Basu, Mikhail Bilenko, and Raymond J. Mooney. 2004. A probabilistic framework for semi-supervised clustering. In Proc. of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2004). Julian Besag. 1986. The statistical analysis of dirty pictures. Journal of the Royal Society Series B, 48(2):259–302. Peter Brown, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer. 1993. The mathematics of machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Philip A. Chou. 1991. Optimal partitioning for classification and regression trees. IEEE Trans. on Pattern Analysis and Machine Intelligence, 13(4). M. Cmejrek, J. Curin, J. Havelka, J. Hajic, and V. Kubon. 2004. Prague Czech-English dependency treebank: Syntactically annotated resources for machine translation. In 4th International Conference on Language Resources and Evaluation, Lisbon, Portugal S. Geman and D. Geman. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. on Pattern Analysis and Machine Intelligence, 6:721–741. Sharon Goldwater and David McClosky. 2005. Improving statistical MT through morphological analysis. In Proc. of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002). Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the HLT/NAACL 2003. Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine translation models. In Proceedings of the AMTA 2004. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit 2005. S. Niessen and H. Ney. 2004. Statistical machine translation with scarce resources using morpho-syntactic information. Computational Linguistics, 30(2):181–204. Franz Josef Och and Hermann Ney. 2003. 
A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. F.-J. Och. 1998. An efficient method for determining bilingual word classes. In Proc. of the European Chapter of the Association for Computational Linguistics 1998. Ye-Yi Wang, John Lafferty, and Alex Waibel. 1996. Word clustering with parallel spoken language corpora. In Proc. of 4th International Conference on Spoken Language Processing, ICSLP 96, Philadelphia, PA. D.H. Wolpert. 1995. Determining whether two data sets are from the same distribution. In 15th International Workshop on Maximum Entropy and Bayesian Methods. Mei Yang and Katrin Kirchhoff. 2006. Phrase-based backoff models for machine translation of highly inflected languages. In Proc. of the European Chapter of the Association for Computational Linguistics 2006. R. Zens and H. Ney. 2004. Improvements in phrase-based statistical machine translation. In Proc. of the Human Language Technology Conference (HLT-NAACL 2004).
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 977–984, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Empirical Lower Bounds on the Complexity of Translational Equivalence ∗ Benjamin Wellington Computer Science Dept. New York University New York, NY 10003 {lastname}@cs.nyu.edu Sonjia Waxmonsky Computer Science Dept. University of Chicago† Chicago, IL, 60637 [email protected] I. Dan Melamed Computer Science Dept. New York University New York, NY, 10003 {lastname}@cs.nyu.edu Abstract This paper describes a study of the patterns of translational equivalence exhibited by a variety of bitexts. The study found that the complexity of these patterns in every bitext was higher than suggested in the literature. These findings shed new light on why “syntactic” constraints have not helped to improve statistical translation models, including finitestate phrase-based models, tree-to-string models, and tree-to-tree models. The paper also presents evidence that inversion transduction grammars cannot generate some translational equivalence relations, even in relatively simple real bitexts in syntactically similar languages with rigid word order. Instructions for replicating our experiments are at http://nlp.cs.nyu.edu/GenPar/ACL06 1 Introduction Translational equivalence is a mathematical relation that holds between linguistic expressions with the same meaning. The most common explicit representations of this relation are word alignments between sentences that are translations of each other. The complexity of a given word alignment can be measured by the difficulty of decomposing it into its atomic units under certain constraints detailed in Section 2. This paper describes a study of the distribution of alignment complexity in a variety of bitexts. The study considered word alignments both in isolation and in combination with independently generated parse trees for one or both sentences in each pair. Thus, the study ∗Thanks to David Chiang, Liang Huang, the anonymous reviewers, and members of the NYU Proteus Project for helpful feedback. This research was supported by NSF grant #’s 0238406 and 0415933. † SW made most of her contribution while at NYU. is relevant to finite-state phrase-based models that use no parse trees (Koehn et al., 2003), tree-tostring models that rely on one parse tree (Yamada and Knight, 2001), and tree-to-tree models that rely on two parse trees (Groves et al., 2004, e.g.). The word alignments that are the least complex on our measure coincide with those that can be generated by an inversion transduction grammar (ITG). Following Wu (1997), the prevailing opinion in the research community has been that more complex patterns of word alignment in real bitexts are mostly attributable to alignment errors. However, the experiments in Section 3 show that more complex patterns occur surprisingly often even in highly reliable alignments in relatively simple bitexts. As discussed in Section 4, these findings shed new light on why “syntactic” constraints have not yet helped to improve the accuracy of statistical machine translation. Our study used two kinds of data, each controlling a different confounding variable. First, we wanted to study alignments that contained as few errors as possible. So unlike some other studies (Zens and Ney, 2003; Zhang et al., 2006), we used manually annotated alignments instead of automatically generated ones. 
The results of our experiments on these data will remain relevant regardless of improvements in technology for automatic word alignment. Second, we wanted to measure how much of the complexity is not attributable to systematic translation divergences, both in the languages as a whole (SVO vs. SOV), and in specific constructions (English not vs. French ne. . . pas). To eliminate this source of complexity of translational equivalence, we used English/English bitexts. We are not aware of any previous studies of word alignments in monolingual bitexts. Even manually annotated word alignments vary in their reliability. For example, annotators sometimes link many words in one sentence to many 977 (a) that , I believe we all find unacceptable , regardless of political party , je pense que , independamment de notre parti , nous trouvons tous cela inacceptable (b) (Y / Y,Y) −−> (D C / D,C) * (S / S) −−> (X A / X A X) (X / X,X) −−> (Y B / B Y,Y) X A Y B A D C B A B D A C Y A Y B X A X S S believe party pense unacc that cela parti inacc Figure 1: (a) Part of a word alignment. (b) Derivation of this word alignment using only binary and nullary productions requires one gap per nonterminal, indicated by commas in the production rules. words in the other, instead of making the effort to tease apart more fine-grained distinctions. A study of such word alignments might say more about the annotation process than about the translational equivalence relation in the data. The inevitable noise in the data motivated us to focus on lower bounds, complementary to Fox (2002), who wrote that her results “should be looked on as more of an upper bound.” (p. 307) As explained in Section 3, we modified all unreliable alignments so that they cannot increase the complexity measure. Thus, we arrived at complexity measurements that were underestimates, but reliably so. It is almost certain that the true complexity of translational equivalence is higher than what we report. 2 A Measure of Alignment Complexity Any translation model can memorize a training sentence pair as a unit. For example, given a sentence pair like (he left slowly / slowly he left) with the correct word alignment, a phrase-based translation model can add a single 3-word biphrase to its phrase table. However, this biphrase would not help the model predict translations of the individual words in it. That’s why phrase-based models typically decompose such training examples into their sub-biphrases and remember them too. Decomposing the translational equivalence relations in the training data into smaller units of knowledge can improve a model’s ability to generalize (Zhang et al., 2006). In the limit, to maximize the chances of covering arbitrary new data, a model should decompose the training data into the smallest possible units, and learn from them.1 For phrasebased models, this stipulation implies phrases of length one. If the model is a synchronous rewriting system, then it should be able to generate every training sentence pair as the yield of a binary1Many popular models learn from larger units at the same time, but the size of the smallest learnable unit is what’s important for our purposes. branching synchronous derivation tree, where every word-to-word link is generated by a different derivation step. For example, a model that uses production rules could generate the previous example using the synchronous productions (S, S) →(X Y / Y X); (X, X) →(U V / U V); (Y, Y) →(slowly, slowly); (U, U) →(he, he); and (V, V) →(left, left). 
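As an illustration (ours, not code from any of the systems under study), the consistency criterion that phrase-based models use to enumerate sub-biphrases can be sketched in a few lines; the function name, the 0-based link encoding, and the length limit are arbitrary choices, and the usual extension over unaligned boundary words is omitted.

```python
def extract_biphrases(src, tgt, links, max_len=3):
    """Enumerate the sub-biphrases of a word-aligned sentence pair.

    A source span and a target span form a consistent biphrase if the
    links of each span stay inside the other span and the spans contain
    at least one link.  (Extension over unaligned words is omitted.)
    """
    phrases = []
    for i1 in range(len(src)):
        for i2 in range(i1, min(i1 + max_len, len(src))):
            tgt_positions = [j for (i, j) in links if i1 <= i <= i2]
            if not tgt_positions:
                continue
            j1, j2 = min(tgt_positions), max(tgt_positions)
            if j2 - j1 >= max_len:
                continue
            src_positions = [i for (i, j) in links if j1 <= j <= j2]
            if all(i1 <= i <= i2 for i in src_positions):
                phrases.append((" ".join(src[i1:i2 + 1]),
                                " ".join(tgt[j1:j2 + 1])))
    return phrases

# The running example, with its one-to-one word alignment.
src = ["he", "left", "slowly"]
tgt = ["slowly", "he", "left"]
links = [(0, 1), (1, 2), (2, 0)]          # (source position, target position)
for s, t in extract_biphrases(src, tgt, links):
    print(f"({s} / {t})")
```

Applied to the example above, this yields the three single-word biphrases, (he left / he left), and the whole sentence pair, but no biphrase that cuts across the crossing link.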
A problem arises when this kind of decomposition is attempted for the alignment in Figure 1(a). If each link is represented by its own nonterminal, and production rules must be binary-branching, then some of the nonterminals involved in generating this alignment need discontinuities, or gaps. Figure 1(b) illustrates how to generate the sentence pair and its word alignment in this manner. The nonterminals X and Y have one discontinuity each. More generally, for any positive integer k, it is possible to construct a word alignment that cannot be generated using binary production rules whose nonterminals all have fewer than k gaps (Satta and Peserico, 2005). Our study measured the complexity of a word alignment as the minimum number of gaps needed to generate it under the following constraints: 1. Each step of the derivation generates no more than two different nonterminals. 2. Each word-to-word link is generated from a separate nonterminal.2 Our measure of alignment complexity is analogous to what Melamed et al. (2004) call “fanout.”3 The least complex alignments on this measure — those that can be generated with zero gaps — are precisely those that can be generated by an 2If we imagine that each word is generated from a separate nonterminal as in GCNF (Melamed et al., 2004), then constraint 2 becomes a special case of constraint 1. 3For grammars that generate bitexts, fan-out is equal to the maximum number of allowed gaps plus two. 978 bitext # SPs min median max 95% C.I. Chinese/English 491 4 24 52 .02 Romanian/English 200 2 19 76 .03 Hindi/English 90 1 10 40 .04 Spanish/English 199 4 23 49 .03 French/English 447 2 15 29 .01 Eng/Eng MTEval 5253 2 26 92 .01 Eng/Eng fiction 6263 2 15 97 .01 Table 1: Number of sentence pairs and minimum/median/maximum sentence lengths in each bitext. All failure rates reported later have a 95% confidence interval that is no wider than the value shown for each bitext. ITG. For the rest of the paper, we restrict our attention to binary derivations, except where explicitly noted otherwise. To measure the number of gaps needed to generate a given word alignment, we used a bottom-up hierarchical alignment algorithm to infer a binary synchronous parse tree that was consistent with the alignment, using as few gaps as possible. A hierarchical alignment algorithm is a type of synchronous parser where, instead of constraining inferences by the production rules of a grammar, the constraints come from word alignments and possibly other sources (Wu, 1997; Melamed and Wang, 2005). A bottom-up hierarchical aligner begins with word-to-word links as constituents, where some of the links might be to nothing (“NULL”). It then repeatedly composes constituents with other constituents to make larger ones, trying to find a constituent that covers the entire input. One of the important design choices in this kind of study is how to treat multiple links attached to the same word token. Word aligners, both human and automatic, are often inconsistent about whether they intend such sets of links to be disjunctive or conjunctive. In accordance with its focus on lower bounds, the present study treated them as disjunctive, to give the hierarchical alignment algorithm more opportunities to use fewer gaps. This design decision is one of the main differences between our study and that of Fox (2002), who treated links to the same word conjunctively. By treating many-to-one links disjunctively, our measure of complexity ignored a large class of discontinuities. 
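For the special case of one-to-one alignments, the zero-gap condition can be tested with a simple shift-reduce reduction in the spirit of the bottom-up composition just described (and of the binarizability checks used for synchronous binarization by Zhang et al., 2006). The sketch below is ours: it ignores NULL and many-to-one links, and it only answers whether zero gaps suffice, whereas the measurements reported below also need the minimum number of gaps, which the hierarchical aligner computes.

```python
def zero_gap_alignable(perm):
    """True iff the permutation can be built entirely from binary
    constituents whose spans are contiguous on both sides, i.e. iff it
    can be generated with zero gaps (equivalently, by an ITG).

    perm[i] is the target position linked to source position i
    (0-based, exactly one link per word)."""
    stack = []                                # contiguous target intervals
    for j in perm:
        stack.append((j, j))
        # greedily merge the top two constituents while their target
        # spans are adjacent, i.e. while their union stays contiguous
        while len(stack) > 1:
            lo2, hi2 = stack[-2]
            lo1, hi1 = stack[-1]
            if hi2 + 1 == lo1 or hi1 + 1 == lo2:
                stack[-2:] = [(min(lo1, lo2), max(hi1, hi2))]
            else:
                break
    return len(stack) == 1

print(zero_gap_alignable([1, 2, 0]))      # True:  (he left slowly / slowly he left)
print(zero_gap_alignable([2, 0, 3, 1]))   # False: the (3,1,4,2) pattern of Figure 1
```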
Many types of discontinuous constituents exist in text independently of any translation. Simard et al. (2005) give examples such as English verb-particle constructions, and the French negation ne. . . pas. The disparate elements of such constituents would usually be aligned to the same word in a translation. However, when PP NP b) V S left George Friday George left on Friday VP S NP V PP left George Friday George left on Friday on on a) Figure 2: a) With a parse tree constraining the top sentence, a hierarchical alignment is possible without gaps. b) With a parse tree constraining the bottom sentence, no such alignment exists. our hierarchical aligner saw two words linked to one word, it ignored one of the two links. Our lower bounds would be higher if they accounted for this kind of discontinuity. 3 Experiments 3.1 Data We used two monolingual bitexts and five bilingual bitexts. The Romanian/English and Hindi/English data came from Martin et al. (2005). For Chinese/English and Spanish/English, we used the data from Ayan et al. (2005). The French/English data were those used by Mihalcea and Pedersen (2003). The monolingual bitext labeled “MTEval” in the tables consists of multiple independent translations from Chinese to English (LDC, 2002). The other monolingual bitext, labeled “fiction,” consists of two independent translations from French to English of Jules Verne’s novel 20,000 Leagues Under the Sea, sentencealigned by Barzilay and McKeown (2001). From the monolingual bitexts, we removed all sentence pairs where either sentence was longer than 100 words. Table 1 gives descriptive statistics for the remaining data. The table also shows the upper bound of the 95% confidence intervals for the coverage rates reported later. The results of experiments on different bitexts are not directly comparable, due to the varying genres and sentence lengths. 3.2 Constraining Parse Trees One of the main independent variables in our experiments was the number of monolingual parse trees used to constrain the hierarchical alignments. To induce models of translational equivalence, some researchers have tried to use such trees to constrain bilingual constituents: The span of every node in the constraining parse tree must coincide with the relevant monolingual span of some 979 crew astronauts included S NP VP NP VP VP S NP PP the in are crew included astronauts the Figure 3: A word alignment that cannot be generated without gaps in a manner consistent with both parse trees. node in the bilingual derivation tree. These additional constraints can thwart attempts at hierarchical alignment that might have succeeded otherwise. Figure 2a shows a word alignment and a parse tree that can be hierarchically aligned without gaps. George and left can be composed in both sentences into a constituent without crossing any phrase boundaries in the tree, as can on and Friday. These two constituents can then be composed to cover the entire sentence pair. On the other hand, if a constraining tree is applied to the other sentence as shown in Figure 2b, then the word alignment and tree constraint conflict. The projection of the VP is discontinuous in the top sentence, so the links that it covers cannot be composed into a constituent without gaps. On the other hand, if a gap is allowed, then the VP can compose as on Friday . . . left in the top sentence, where the ellipsis represents a gap. This VP can then compose with the NP complete a synchronous parse tree. 
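A weaker but easily implemented diagnostic, given one constraining parse tree, is to check each constituent independently: project its span through the word alignment and count the contiguous runs of positions on the other side. The sketch below is ours, not the synchronous parser used in the experiments, and the word order assumed for the unparsed sentence is a reconstruction of the figure; the check also ignores the nesting requirement of a full hierarchical alignment. Still, it shows why the VP of Figure 2b cannot compose without a gap.

```python
def target_runs(span, links):
    """Positions on the unparsed side linked to the source span
    [lo, hi], grouped into maximal contiguous runs.  One run means the
    constituent projects contiguously; k runs mean k - 1 gaps."""
    positions = sorted({j for (i, j) in links if span[0] <= i <= span[1]})
    runs = []
    for j in positions:
        if runs and j == runs[-1][-1] + 1:
            runs[-1].append(j)
        else:
            runs.append([j])
    return runs

# Schematic reconstruction of Figure 2b: the parsed sentence is
# "George left on Friday"; the other sentence is assumed to order the
# same words as "left George on Friday", each linked to its copy.
parsed = ["George", "left", "on", "Friday"]
other = ["left", "George", "on", "Friday"]
links = [(0, 1), (1, 0), (2, 2), (3, 3)]      # (parsed position, other position)
constituents = {"NP": (0, 0), "V": (1, 1), "PP": (2, 3), "VP": (1, 3), "S": (0, 3)}
for label, span in constituents.items():
    runs = target_runs(span, links)
    status = "contiguous" if len(runs) == 1 else f"needs {len(runs) - 1} gap(s)"
    print(label, runs, status)
```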
Some authors have applied constraining parse trees to both sides of the bitext. The example in Figure 3 can be hierarchically aligned using either one of the two constraining trees, but gaps are necessary to align it with both trees. 3.3 Methods We parsed the English side of each bilingual bitext and both sides of each English/English bitext using an off-the-shelf syntactic parser (Bikel, 2004), which was trained on sections 02-21 of the Penn English Treebank (Marcus et al., 1993). Our bilingual bitexts came with manually annotated word alignments. For the monolingual bitexts, we used an automatic word aligner based on a cognate heuristic and a list of 282 function words compiled by hand. The aligner linked two words to each other only if neither of them was on the function word list and their longest common subsequence ratio (Melamed, 1995) was at least 0.75. Words that were not linked to another word in this manner were linked to NULL. For the purposes of this study, a word aligned to NULL is a non-constraint, because it can always be composed without a gap with some constituent that is adjacent to it on just one side of the bitext. The number of automatically induced non-NULL links was lower than what would be drawn by hand. We modified the word alignments in all bitexts to minimize the chances that alignment errors would lead to an over-estimate of alignment complexity. All of the modifications involved adding links to NULL. Due to our disjunctive treatment of conflicting links, the addition of a link to NULL can decrease but cannot increase the complexity of an alignment. For example, if we added the links (cela, NULL) and (NULL, that) to the alignment in Figure 1, the hierarchical alignment algorithm could use them instead of the link between cela and that. It could thus generate the modified alignment without using a gap. We added NULL links in two situations. First, if a subset of the links in an alignment formed a many-to-many mapping but did not form a bipartite clique (i.e. every word on one side linked to every word on the other side), then we added links from each of these words to NULL. Second, if n words on one side of the bitext aligned to m words on the other side with m > n then we added NULL links for each of the words on the side with m words. After modifying the alignments and obtaining monolingual parse trees, we measured the alignment complexity of each bitext using a hierarchical alignment algorithm, as described in Section 2. Separate measurements were taken with zero, one, and two constraining parse trees. The synchronous parser in the GenPar toolkit4 can be configured for all of these cases (Burbank et al., 2005). Unlike Fox (2002) and Galley et al. (2004), we measured failure rates per corpus rather than per sentence pair or per node in a constraining tree. This design was motivated by the observation that if a translation model cannot correctly model a certain word alignment, then it is liable to make incorrect inferences about arbitrary parts of that alignment, not just the particular word links involved in a complex pattern. The failure rates we report represent lower bounds on the fraction of training data 4http://nlp.cs.nyu.edu/GenPar 980 # of gaps allowed → 0/0 0/1 or 1/0 Chinese/English 26 = 5% 0 = 0% Romanian/English 1 = 0% 0 = 0% Hindi/English 2 = 2% 0 = 0% Spanish/English 3 = 2% 0 = 0% French/English 3 = 1% 0 = 0% Table 2: Failure rates for hierarchical alignment of bilingual bitexts under word alignment constraints only. 
# of gaps allowed on non-English side → 0 1 2 Chinese/English 298 = 61% 28 = 6% 0 = 0% Romanian/English 82 = 41% 6 = 3% 1 = 0% Hindi/English 33 = 37% 1 = 1% 0 = 0% Spanish/English 75 = 38% 4 = 2% 0 = 0% French/English 67 = 15% 2 = 0% 0 = 0% Table 3: Failure rates for hierarchical alignment of bilingual bitexts under the constraints of a word alignment and a monolingual parse tree on the English side. that is susceptible to misinterpretation by overconstrained translation models. 3.4 Summary Results Table 2 shows the lower bound on alignment failure rates with and without gaps for five languages paired with English. This table represents the case where the only constraints are from word alignments. Wu (1997) has “been unable to find real examples” of cases where hierarchical alignment would fail under these conditions, at least in “fixed-word-order languages that are lightly inflected, such as English and Chinese.” (p. 385). In contrast, we found examples in all bitexts that could not be hierarchically aligned without gaps, including at least 5% of the Chinese/English sentence pairs. Allowing constituents with a single gap on one side of the bitext decreased the observed failure rate to zero for all five bitexts. Table 3 shows what happened when we used monolingual parse trees to restrict the compositions on the English side. The failure rates were above 35% for four of the five language pairs, and 61% for Chinese/English! Again, the failure rate fell dramatically when one gap was allowed on the unconstrained (non-English) side of the bitext. Allowing two gaps on the non-English side led to almost complete coverage of these word alignments. Table 3 does not specify the number of gaps allowed on the English side, because varying this parameter never changed the outcome. The only way that a gap on that side could increase coverage is if there was a node in the constraining parse tree that # of gaps → 0/0 0/1 0/2 0 CTs 171 = 3% 0 = 0% 0 = 0% 1 CTs 1792 = 34% 143 = 3% 7 = 0% 2 CTs 3227 = 61% 3227 = 61% 3227 = 61% Table 4: Failure rates for hierarchical alignment of the MTEval bitext, over varying numbers of gaps and constraining trees (CTs). # of gaps → 0/0 0/1 0/2 0 CTs 23 = 0% 0 = 0% 0 = 0% 1 CTs 655 = 10% 22 = 0% 1 = 0% 2 CTs 1559 = 25% 1559 = 25% 1559 = 25% Table 5: Failure rates for hierarchical alignment of the fiction bitext, over varying numbers of gaps and constraining trees (CTs). had at least four children whose translations were in one of the complex permutations. The absence of such cases in the data implies that the failure rates under the constraints of one parse tree would be identical even if we allowed production rules of rank higher than two. Table 4 shows the alignment failure rates for the MTEval bitext. With word alignment constraints only, 3% of the sentence pairs could not be hierarchically aligned without gaps. Allowing a single gap on one side decreased this failure rate to zero. With a parse tree constraining constituents on one side of the bitext and with no gaps, alignment failure rates rose from 3% to 34%, but allowing a single gap on the side of the bitext that was not constrained by a parse tree brought the failure rate back down to 3%. With two constraining trees the failure rate was 61%, and allowing gaps did not lower it, for the same reasons that allowing gaps on the tree-constrained side made no difference in Table 3. The trends in the fiction bitext (Table 5) were similar to those in the MTEval bitext, but the coverage was always higher, for two reasons. 
First, the median sentence size was lower in the fiction bitext. Second, the MTEval translators were instructed to translate as literally as possible, but the fiction translators paraphrased to make the fiction more interesting. This freedom in word choice reduced the frequency of cognates and thus imposed fewer constraints on the hierarchical alignment, which resulted in looser estimates of the lower bounds. We would expect the opposite effect with hand-aligned data (Galley et al., 2004). To study how sentence length correlates with the complexity of translational equivalence, we took subsets of each bitext while varying the max981 0 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 10 20 30 40 50 60 70 80 90 100 failure rate maximum length of shortest sentence 0 constraining trees Chinese/Eng MTeval fiction 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 10 20 30 40 50 60 70 80 90 100 failure rate maximum length of shorter sentence 1 constraining tree Chinese/Eng Romanian/Eng Hindi/Eng Spanish/Eng MTeval French/Eng fiction 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 10 20 30 40 50 60 70 80 90 100 failure rate maximum length of shorter sentence 2 constraining trees MTeval fiction Figure 4: Failure rates for hierarchical alignment without gaps vs. maximum length of shorter sentence. category → 1 2 3 valid reordering 12 10 5 parser error n/a 16 25 same word used differently 15 4 0 erroneous cognates 3 0 0 total sample size 30 30 30 initial failure rate (%) 3.25 31.9 38.4 % false negatives 60±7 66±7 84±3 adjusted failure rate (%) 1.3±.22 11±2.2 6±1.1 Table 6: Detailed analysis of hierarchical alignment failures in MTEval bitext. imum length of the shorter sentence in each pair.5 Figure 4 plots the resulting alignment failure rates with and without constraining parse trees. The lines in these graphs are not comparable to each other because of the variety of genres involved. 3.5 Detailed Failure Analysis We examined by hand 30 random sentence pairs from the MTEval bitext in each of three different categories: (1) the set of sentence pairs that could not be hierarchically aligned without gaps, even without constraining parse trees; (2) the set of sentence pairs that could not be hierarchically aligned without gaps with one constraining parse tree, but that did not fall into category 1; and (3) the set of sentence pairs that could not be hierarchically aligned without gaps with two constraining parse trees, but that did not fall into category 1 or 2. Table 6 shows the results of this analysis. In category 1, 60% of the word alignments that could not be hierarchically aligned without gaps were caused by word alignment errors. E.g.: 1a GlaxoSmithKline’s second-best selling drug may have to face competition. 1b Drug maker GlaxoSmithKline may have to face competition on its second best selling product. The word drug appears in both sentences, but for different purposes, so drug and drug should not 5The length of the shorter sentence is the upper bound on the number of non-NULL word alignments. have been linked.6 Three errors were caused by words like targeted and started, which our word alignment algorithm deemed cognates. 12 of the hierarchical alignment failures in this category were true failures. For example: 2a Cheney denied yesterday that the mission of his trip was to organize an assault on Iraq, while in Manama. 2b Yesterday in Manama, Cheney denied that the mission of his trip was to organize an assault on Iraq. The alignment pattern of the words in bold is the familiar (3,1,4,2) permutation, as in Figure 1. 
Most of the 12 true failures were due to movement of prepositional phrases. The freedom of movement for such modifiers would be greater in bitexts that involve languages with less rigid word order than English. Of the 30 sentence pairs in category 2, 16 could not be hierarchically aligned due to parser errors and 4 due to faulty word alignments. 10 were due to valid word reordering. In the following example, a co-referring pronoun causes the word alignment to fail with a constraining tree on the second sentence: 3a But Chretien appears to have changed his stance after meeting with Bush in Washington last Thursday. 3b But after Chretien talked to Bush last Thursday in Washington, he seemed to change his original stance. 25 of the 30 sentence pairs in category 3 failed to align due to parser error. 5 examples failed because of valid word reordering. 1 of the 5 reorderings was due to a difference between active voice and passive voice, as in Figure 3. The last row of Table 6 takes the various reasons for alignment failure into account. It estimates what the failure rates would be if the monolingual parses and word alignments were perfect, with 95% confidence intervals. These revised rates emphasize the importance of reliable word alignments for this kind of study. 6This sort of error is likely to happen with other word alignment algorithms too, because words and their common translations are likely to be linked even if they’re not translationally equivalent in the given sentence. 982 4 Discussion Figure 1 came from a real bilingual bitext, and Example 2 in Section 3.5 came from a real monolingual bitext.7 Neither of these examples can be hierarchically aligned correctly without gaps, even without constraining parse trees. The received wisdom in the literature led us to expect no such examples in bilingual bitexts, let alone in monolingual bitexts. See http://nlp.cs.nyu.edu/GenPar/ACL06 for more examples. The English/English lower bounds are very loose, because the automatic word aligner would not link words that were not cognates. Alignment failure rates on a hand aligned bitext would be higher. We conclude that the ITG formalism cannot account for the “natural” complexity of translational equivalence, even when translation divergences are factored out. Perhaps our most surprising results were those involving one constraining parse tree. These results explain why constraints from independently generated monolingual parse trees have not improved statistical translation models. For example, Koehn et al. (2003) reported that “requiring constituents to be syntactically motivated does not lead to better constituent pairs, but only fewer constituent pairs, with loss of a good amount of valuable knowledge.” This statement is consistent with our findings. However, most of the knowledge loss could be prevented by allowing a gap. With a parse tree constraining constituents on the English side, the coverage failure rate was 61% for the Chinese/English bitext (top row of Table 3), but allowing a gap decreased it to 6%. Zhang and Gildea (2004) found that their alignment method, which did not use external syntactic constraints, outperformed the model of Yamada and Knight (2001). However, Yamada and Knight’s model could explain only the data that would pass the nogap test in our experiments with one constraining tree (first column of Table 3). Zhang and Gildea’s conclusions might have been different if Yamada and Knight’s model were allowed to use discontinuous constituents. 
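As an aside on the looseness just noted, the linking rule of Section 3.3 is easy to reproduce. The sketch below is ours (tokenization and other details of the actual aligner are not specified beyond the stated rule); it shows both why targeted and started were wrongly deemed cognates and why paraphrase pairs such as appears/seemed go unlinked, which is what loosens the monolingual lower bounds.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two strings."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def lcsr(a, b):
    """Longest common subsequence ratio (Melamed, 1995)."""
    return lcs_length(a, b) / max(len(a), len(b))

def linkable(w1, w2, function_words=frozenset(), threshold=0.75):
    """The linking rule of Section 3.3: link two words only if neither
    is a function word and their LCSR is at least 0.75."""
    return (w1 not in function_words and w2 not in function_words
            and lcsr(w1, w2) >= threshold)

print(lcsr("targeted", "started"))        # 0.75 -> wrongly linked (category 1)
print(linkable("appears", "seemed"))      # False -> paraphrases go unlinked
```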
The second row of Table 4 suggests that when constraining parse trees are used without gaps, at least 34% of training sentence pairs are likely to introduce noise into the model, even if systematic syntactic differences between languages are factored out. We should not 7The examples were shortened for the sake of space and clarity. 0 10 20 30 40 50 60 70 80 90 100 0 10 20 30 40 50 60 70 cumulative %age of sentences span length Figure 5: Lengths of spans covering words in (3,1,4,2) permutations. be surprised when such constraints do more harm than good. To increase the chances that a translation model can explain complex word alignments, some authors have proposed various ways of extending a model’s domain of locality. For example, Callison-Burch et al. (2005) have advocated for longer phrases in finite-state phrase-based translation models. We computed the phrase length that would be necessary to cover the words involved in each (3,1,4,2) permutation in the MTEval bitext. Figure 5 shows the cumulative percentage of these cases that would be covered by phrases up to a certain length. Only 9 of the 171 cases (5.2%) could be covered by phrases of length 10 or less. Analogous techniques for tree-structured translation models involve either allowing each nonterminal to generate both terminals and other nonterminals (Groves et al., 2004; Chiang, 2005), or, given a constraining parse tree, to “flatten” it (Fox, 2002; Zens and Ney, 2003; Galley et al., 2004). Both of these approaches can increase coverage of the training data, but, as explained in Section 2, they risk losing generalization ability. Our study suggests that there might be some benefits to an alternative approach using discontinuous constituents, as proposed, e.g., by Melamed et al. (2004) and Simard et al. (2005). The large differences in failure rates between the first and second columns of Table 3 are largely independent of the tightness of our lower bounds. Synchronous parsing with discontinuities is computationally expensive in the worst case, but recently invented data structures make it feasible for typical inputs, as long as the number of gaps allowed per constituent is fixed at a small maximum (Waxmonsky and Melamed, 2006). More research is needed to investigate the trade-off between these costs and benefits. 983 5 Conclusions This paper presented evidence of phenomena that can lead to complex patterns of translational equivalence in bitexts of any language pair. There were surprisingly many examples of such patterns that could not be analyzed using binary-branching structures without discontinuities. Regardless of the languages involved, the translational equivalence relations in most real bitexts of non-trivial size cannot be generated by an inversion transduction grammar. The low coverage rates without gaps under the constraints of independently generated monolingual parse trees might be the main reason why “syntactic” constraints have not yet increased the accuracy of SMT systems. Allowing a single gap in bilingual phrases or other types of constituent can improve coverage dramatically. References Necip Ayan, Bonnie J. Dorr, and Christof Monz. 2005. Alignment link projection using transformationbased learning. In EMNLP. Regina Barzilay and Kathleen McKeown. 2001. Extracting paraphrases from a parallel corpus. In ACL. Andrea Burbank, Marine Carpuat, Stephen Clark, Markus Dreyer and Pamela Fox, Declan Groves, Keith Hall, Mary Hearne, I. Dan Melamed, Yihai Shen, Andy Way, Ben Wellington, and Dekai Wu. 2005. 
Final Report on Statistical Machine Translation by Parsing. JHU CLSP. http://www.clsp.jhu.edu/ws2005 /groups/statistical/report.html Dan Bikel. 2004. A distributional analysis of a lexicalized statistical parsing model. In EMNLP. Chris Callison-Burch, Colin Bannard, and Josh Scroeder. 2005. Scaling phrase-based statistical machine translation to larger corpora and longer phrases. In ACL. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In ACL. Bonnie Dorr. 1994. Machine translation divergences: A formal description and proposed solution. Computational Linguistics 20(4):597–633. Heidi Fox. 2002. Phrasal cohesion and statistical machine translation. In EMNLP. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In HLT-NAACL. Declan Groves, Mary Hearne, and Andy Way. 2004. Robust sub-sentential alignment of phrase-structure trees. In COLING. Philipp Koehn, Franz Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In NAACL. Mitchell Marcus, Beatrice Santorini, and Mary-Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Joel Martin, Rada Mihalcea, and Ted Pedersen. 2005. Word alignments for languages with scarce resources. In ACL Workshop on Building and Using Parallel Texts. I. Dan Melamed. 1995. Automatic evaluation and uniform filter cascades for inducing N-best translation lexicons. In ACL Workshop on Very Large Corpora. I. Dan Melamed, Giorgio Satta, and Benjamin Wellington. 2004. Generalized multitext grammars. In ACL. I. Dan Melamed and Wei Wang. 2005. Generalized Parsers for Machine Translation. NYU Proteus Project Technical Report 05-001 http://nlp.cs.nyu.edu/pubs/. Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. In HLT-NAACL Workshop on Building and Using Parallel Texts. LDC. 2002. NIST MT evaluation data, Linguistic Data Consortium catalogue # LDC2002E53. http://projects.ldc.upenn.edu /TIDES/mt2003.html. Giorgio Satta and Enoch Peserico. 2005. Some computational complexity results for synchronous context-free grammars. In EMNLP. Michel Simard, Nicola Cancedda, Bruno Cavestro, Marc Dymetman, Eric Guassier, Cyril Goutte, and Kenji Yamada. 2005. Translating with noncontiguous phrases. In EMNLP. Sonjia Waxmonsky and I. Dan Melamed. 2006. A dynamic data structure for parsing with discontinuous constituents. NYU Proteus Project Technical Report 06-001 http://nlp.cs.nyu.edu/pubs/. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–404. Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical translation model. In ACL. Richard Zens and Hermann Ney. 2003. A comparative study on reordering constraints in statistical machine translation. In ACL. Hao Zhang and Daniel Gildea. 2004. Syntax-based alignment: Supervised or unsupervised? In COLING. Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for machine translation. In HLT-NAACL. 984 | 2006 | 123 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 985–992, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Hierarchical Bayesian Language Model based on Pitman-Yor Processes Yee Whye Teh School of Computing, National University of Singapore, 3 Science Drive 2, Singapore 117543. [email protected] Abstract We propose a new hierarchical Bayesian n-gram model of natural languages. Our model makes use of a generalization of the commonly used Dirichlet distributions called Pitman-Yor processes which produce power-law distributions more closely resembling those in natural languages. We show that an approximation to the hierarchical Pitman-Yor language model recovers the exact formulation of interpolated Kneser-Ney, one of the best smoothing methods for n-gram language models. Experiments verify that our model gives cross entropy results superior to interpolated Kneser-Ney and comparable to modified Kneser-Ney. 1 Introduction Probabilistic language models are used extensively in a variety of linguistic applications, including speech recognition, handwriting recognition, optical character recognition, and machine translation. Most language models fall into the class of n-gram models, which approximate the distribution over sentences using the conditional distribution of each word given a context consisting of only the previous n −1 words, P(sentence) ≈ T Y i=1 P(wordi | wordi−1 i−n+1) (1) with n = 3 (trigram models) being typical. Even for such a modest value of n the number of parameters is still tremendous due to the large vocabulary size. As a result direct maximum-likelihood parameter fitting severely overfits to the training data, and smoothing methods are indispensible for proper training of n-gram models. A large number of smoothing methods have been proposed in the literature (see (Chen and Goodman, 1998; Goodman, 2001; Rosenfeld, 2000) for good overviews). Most methods take a rather ad hoc approach, where n-gram probabilities for various values of n are combined together, using either interpolation or back-off schemes. Though some of these methods are intuitively appealing, the main justification has always been empirical—better perplexities or error rates on test data. Though arguably this should be the only real justification, it only answers the question of whether a method performs better, not how nor why it performs better. This is unavoidable given that most of these methods are not based on internally coherent Bayesian probabilistic models, which have explicitly declared prior assumptions and whose merits can be argued in terms of how closely these fit in with the known properties of natural languages. Bayesian probabilistic models also have additional advantages—it is relatively straightforward to improve these models by incorporating additional knowledge sources and to include them in larger models in a principled manner. Unfortunately the performance of previously proposed Bayesian language models had been dismal compared to other smoothing methods (Nadas, 1984; MacKay and Peto, 1994). In this paper, we propose a novel language model based on a hierarchical Bayesian model (Gelman et al., 1995) where each hidden variable is distributed according to a Pitman-Yor process, a nonparametric generalization of the Dirichlet distribution that is widely studied in the statistics and probability theory communities (Pitman and Yor, 1997; Ishwaran and James, 2001; Pitman, 2002). 
985 Our model is a direct generalization of the hierarchical Dirichlet language model of (MacKay and Peto, 1994). Inference in our model is however not as straightforward and we propose an efficient Markov chain Monte Carlo sampling scheme. Pitman-Yor processes produce power-law distributions that more closely resemble those seen in natural languages, and it has been argued that as a result they are more suited to applications in natural language processing (Goldwater et al., 2006). We show experimentally that our hierarchical Pitman-Yor language model does indeed produce results superior to interpolated Kneser-Ney and comparable to modified Kneser-Ney, two of the currently best performing smoothing methods (Chen and Goodman, 1998). In fact we show a stronger result—that interpolated Kneser-Ney can be interpreted as a particular approximate inference scheme in the hierarchical Pitman-Yor language model. Our interpretation is more useful than past interpretations involving marginal constraints (Kneser and Ney, 1995; Chen and Goodman, 1998) or maximum-entropy models (Goodman, 2004) as it can recover the exact formulation of interpolated Kneser-Ney, and actually produces superior results. (Goldwater et al., 2006) has independently noted the correspondence between the hierarchical Pitman-Yor language model and interpolated Kneser-Ney, and conjectured improved performance in the hierarchical Pitman-Yor language model, which we verify here. Thus the contributions of this paper are threefold: in proposing a langauge model with excellent performance and the accompanying advantages of Bayesian probabilistic models, in proposing a novel and efficient inference scheme for the model, and in establishing the direct correspondence between interpolated Kneser-Ney and the Bayesian approach. We describe the Pitman-Yor process in Section 2, and propose the hierarchical Pitman-Yor language model in Section 3. In Sections 4 and 5 we give a high level description of our sampling based inference scheme, leaving the details to a technical report (Teh, 2006). We also show how interpolated Kneser-Ney can be interpreted as approximate inference in the model. We show experimental comparisons to interpolated and modified Kneser-Ney, and the hierarchical Dirichlet language model in Section 6 and conclude in Section 7. 2 Pitman-Yor Process Pitman-Yor processes are examples of nonparametric Bayesian models. Here we give a quick description of the Pitman-Yor process in the context of a unigram language model; good tutorials on such models are provided in (Ghahramani, 2005; Jordan, 2005). Let W be a fixed and finite vocabulary of V words. For each word w ∈W let G(w) be the (to be estimated) probability of w, and let G = [G(w)]w∈W be the vector of word probabilities. We place a Pitman-Yor process prior on G: G ∼PY(d, θ, G0) (2) where the three parameters are: a discount parameter 0 ≤d < 1, a strength parameter θ > −d and a mean vector G0 = [G0(w)]w∈W . G0(w) is the a priori probability of word w: before observing any data, we believe word w should occur with probability G0(w). In practice this is usually set uniformly G0(w) = 1/V for all w ∈W. Both θ and d can be understood as controlling the amount of variability around G0 in different ways. When d = 0 the Pitman-Yor process reduces to a Dirichlet distribution with parameters θG0. There is in general no known analytic form for the density of PY(d, θ, G0) when the vocabulary is finite. 
However this need not deter us as we will instead work with the distribution over sequences of words induced by the Pitman-Yor process, which has a nice tractable form and is sufficient for our purpose of language modelling. To be precise, notice that we can treat both G and G0 as distributions over W, where word w ∈W has probability G(w) (respectively G0(w)). Let x1, x2, . . . be a sequence of words drawn independently and identically (i.i.d.) from G. We shall describe the Pitman-Yor process in terms of a generative procedure that produces x1, x2, . . . iteratively with G marginalized out. This can be achieved by relating x1, x2, . . . to a separate sequence of i.i.d. draws y1, y2, . . . from the mean distribution G0 as follows. The first word x1 is assigned the value of the first draw y1 from G0. Let t be the current number of draws from G0 (currently t = 1), ck be the number of words assigned the value of draw yk (currently c1 = 1), and c· = Pt k=1 ck be the current number of draws from G. For each subsequent word xc·+1, we either assign it the value of a previous draw yk with probability ck−d θ+c· (increment ck; set xc·+1 ←yk), or we assign it the value of a new draw from G0 986 10 0 10 1 10 2 10 3 10 4 10 5 10 6 10 0 10 1 10 2 10 3 10 4 10 5 10 0 10 1 10 2 10 3 10 4 10 5 10 6 10 0 10 1 10 2 10 3 10 4 10 5 10 0 10 1 10 2 10 3 10 4 10 5 10 6 0 0.2 0.4 0.6 0.8 1 10 0 10 1 10 2 10 3 10 4 10 5 10 6 0 0.2 0.4 0.6 0.8 1 Figure 1: First panel: number of unique words as a function of the number of words drawn on a log-log scale, with d = .5 and θ = 1 (bottom), 10 (middle) and 100 (top). Second panel: same, with θ = 10 and d = 0 (bottom), .5 (middle) and .9 (top). Third panel: proportion of words appearing only once, as a function of the number of words drawn, with d = .5 and θ = 1 (bottom), 10 (middle), 100 (top). Last panel: same, with θ = 10 and d = 0 (bottom), .5 (middle) and .9 (top). with probability θ+dt θ+c· (increment t; set ct = 1; draw yt ∼G0; set xc·+1 ←yt). The above generative procedure produces a sequence of words drawn i.i.d. from G, with G marginalized out. It is informative to study the Pitman-Yor process in terms of the behaviour it induces on this sequence of words. Firstly, notice the rich-gets-richer clustering property: the more words have been assigned to a draw from G0, the more likely subsequent words will be assigned to the draw. Secondly, the more we draw from G0, the more likely a new word will be assigned to a new draw from G0. These two effects together produce a power-law distribution where many unique words are observed, most of them rarely. In particular, for a vocabulary of unbounded size and for d > 0, the number of unique words scales as O(θT d) where T is the total number of words. For d = 0, we have a Dirichlet distribution and the number of unique words grows more slowly at O(θ log T). Figure 1 demonstrates the power-law behaviour of the Pitman-Yor process and how this depends on d and θ. In the first two panels we show the average number of unique words among 10 sequences of T words drawn from G, as a function of T, for various values of θ and d. We see that θ controls the overall number of unique words, while d controls the asymptotic growth of the number of unique words. In the last two panels, we show the proportion of words appearing only once among the unique words; this gives an indication of the proportion of words that occur rarely. We see that the asymptotic behaviour depends on d but not on θ, with larger d’s producing more rare words. 
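The procedure is straightforward to simulate. The sketch below (ours, not code from this work) draws from a large uniform base distribution and counts unique words; the vocabulary size and parameter values are arbitrary, but the simulation qualitatively reproduces the behaviour in Figure 1: roughly power-law growth of the number of unique words when d > 0 and much slower growth when d = 0.

```python
import random

def generate(num_words, d, theta, draw_from_base):
    """Simulate the generative procedure described above: each new word
    reuses an existing draw y_k from G0 with probability (c_k - d)/(theta + c),
    or is assigned a fresh draw from G0 with probability (theta + d*t)/(theta + c)."""
    counts, draws, words = [], [], []
    for _ in range(num_words):
        weights = [c - d for c in counts] + [theta + d * len(draws)]
        k = random.choices(range(len(weights)), weights=weights)[0]
        if k == len(draws):                  # new draw from the mean distribution
            draws.append(draw_from_base())
            counts.append(0)
        counts[k] += 1
        words.append(draws[k])
    return words

random.seed(0)
base = lambda: random.randrange(10 ** 6)     # a large uniform G0
for d in (0.0, 0.5, 0.9):
    words = generate(100_000, d=d, theta=10.0, draw_from_base=base)
    print(f"d = {d}: {len(set(words))} unique words in {len(words)} draws")
```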
This procedure for generating words drawn from G is often referred to as the Chinese restaurant process (Pitman, 2002). The metaphor is as follows. Consider a sequence of customers (corresponding to the words draws from G) visiting a Chinese restaurant with an unbounded number of tables (corresponding to the draws from G0), each of which can accommodate an unbounded number of customers. The first customer sits at the first table, and each subsequent customer either joins an already occupied table (assign the word to the corresponding draw from G0), or sits at a new table (assign the word to a new draw from G0). 3 Hierarchical Pitman-Yor Language Models We describe an n-gram language model based on a hierarchical extension of the Pitman-Yor process. An n-gram language model defines probabilities over the current word given various contexts consisting of up to n −1 words. Given a context u, let Gu(w) be the probability of the current word taking on value w. We use a Pitman-Yor process as the prior for Gu[Gu(w)]w∈W , in particular, Gu ∼PY(d|u|, θ|u|, Gπ(u)) (3) where π(u) is the suffix of u consisting of all but the earliest word. The strength and discount parameters are functions of the length |u| of the context, while the mean vector is Gπ(u), the vector of probabilities of the current word given all but the earliest word in the context. Since we do not know Gπ(u) either, We recursively place a prior over Gπ(u) using (3), but now with parameters θ|π(u)|, d|π(u)| and mean vector Gπ(π(u)) instead. This is repeated until we get to G∅, the vector of probabilities over the current word given the 987 empty context ∅. Finally we place a prior on G∅: G∅∼PY(d0, θ0, G0) (4) where G0 is the global mean vector, given a uniform value of G0(w) = 1/V for all w ∈W. Finally, we place a uniform prior on the discount parameters and a Gamma(1, 1) prior on the strength parameters. The total number of parameters in the model is 2n. The structure of the prior is that of a suffix tree of depth n, where each node corresponds to a context consisting of up to n−1 words, and each child corresponds to adding a different word to the beginning of the context. This choice of the prior structure expresses our belief that words appearing earlier in a context have (a priori) the least importance in modelling the probability of the current word, which is why they are dropped first at successively higher levels of the model. 4 Hierarchical Chinese Restaurant Processes We describe a generative procedure analogous to the Chinese restaurant process of Section 2 for drawing words from the hierarchical PitmanYor language model with all Gu’s marginalized out. This gives us an alternative representation of the hierarchical Pitman-Yor language model that is amenable to efficient inference using Markov chain Monte Carlo sampling and easy computation of the predictive probabilities for test words. The correspondence between interpolated KneserNey and the hierarchical Pitman-Yor language model is also apparent in this representation. Again we may treat each Gu as a distribution over the current word. The basic observation is that since Gu is Pitman-Yor process distributed, we can draw words from it using the Chinese restaurant process given in Section 2. Further, the only operation we need of its parent distribution Gπ(u) is to draw words from it too. Since Gπ(u) is itself distributed according to a Pitman-Yor process, we can use another Chinese restaurant process to draw words from that. 
This is recursively applied until we need draws from the global mean distribution G0, which is easy since it is just uniform. We refer to this as the hierarchical Chinese restaurant process. Let us introduce some notations. For each context u we have a sequence of words xu1, xu2, . . . drawn i.i.d. from Gu and another sequence of words yu1, yu2, . . . drawn i.i.d. from the parent distribution Gπ(u). We use l to index draws from Gu and k to index the draws from Gπ(u). Define tuwk = 1 if yuk takes on value w, and tuwk = 0 otherwise. Each word xul is assigned to one of the draws yuk from Gπ(u). If yuk takes on value w define cuwk as the number of words xul drawn from Gu assigned to yuk, otherwise let cuwk = 0. Finally we denote marginal counts by dots. For example, cu·k is the number of xul’s assigned the value of yuk, cuw· is the number of xul’s with value w, and tu·· is the current number of draws yuk from Gπ(u). Notice that we have the following relationships among the cuw·’s and tuw·: ( tuw· = 0 if cuw· = 0; 1 ≤tuw· ≤cuw· if cuw· > 0; (5) cuw· = X u′:π(u′)=u tu′w· (6) Pseudo-code for drawing words using the hierarchical Chinese restaurant process is given as a recursive function DrawWord(u), while pseudocode for computing the probability that the next word drawn from Gu will be w is given in WordProb(u,w). The counts are initialized at all cuwk = tuwk = 0. Function DrawWord(u): Returns a new word drawn from Gu. If u = 0, return w ∈W with probability G0(w). Else with probabilities proportional to: cuwk −d|u|tuwk: assign the new word to yuk. Increment cuwk; return w. θ|u| + d|u|tu··: assign the new word to a new draw yuknew from Gπ(u). Let w ←DrawWord(π(u)); set tuwknew = cuwknew = 1; return w. Function WordProb(u,w): Returns the probability that the next word after context u will be w. If u = 0, return G0(w). Else return cuw·−d|u|tuw· θ|u|+cu·· + θ|u|+d|u|tu·· θ|u|+cu·· WordProb(π(u),w). Notice the self-reinforcing property of the hierarchical Pitman-Yor language model: the more a word w has been drawn in context u, the more likely will we draw w again in context u. In fact word w will be reinforced for other contexts that share a common suffix with u, with the probability of drawing w increasing as the length of the 988 common suffix increases. This is because w will be more likely under the context of the common suffix as well. The hierarchical Chinese restaurant process is equivalent to the hierarchical Pitman-Yor language model insofar as the distribution induced on words drawn from them are exactly equal. However, the probability vectors Gu’s have been marginalized out in the procedure, replaced instead by the assignments of words xul to draws yuk from the parent distribution, i.e. the seating arrangement of customers around tables in the Chinese restaurant process corresponding to Gu. In the next section we derive tractable inference schemes for the hierarchical Pitman-Yor language model based on these seating arrangements. 5 Inference Schemes In this section we give a high level description of a Markov chain Monte Carlo sampling based inference scheme for the hierarchical PitmanYor language model. Further details can be obtained at (Teh, 2006). We also relate interpolated Kneser-Ney to the hierarchical Pitman-Yor language model. Our training data D consists of the number of occurrences cuw· of each word w after each context u of length exactly n −1. This corresponds to observing word w drawn cuw· times from Gu. 
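Before turning to the posterior, the pseudo-code above can be rendered concretely as follows. This sketch (ours) keeps the seating arrangements explicit but fixes the discount and strength parameters rather than sampling them, so it illustrates the representation and the WordProb recursion only, not the full inference scheme described in this section.

```python
import random
from collections import defaultdict

class HPYLM:
    """Minimal rendering of the DrawWord / WordProb pseudo-code above,
    with fixed (not resampled) discounts and strengths."""

    def __init__(self, vocab, discounts, strengths):
        self.vocab = list(vocab)
        self.d = discounts                  # d_m for context length m
        self.theta = strengths              # theta_m for context length m
        self.tables = defaultdict(list)     # context u -> list of [word, c_uwk]

    @staticmethod
    def parent(u):
        """pi(u): drop the earliest word; None stands for the base G0."""
        return u[1:] if len(u) > 0 else None

    def word_prob(self, u, w):
        """WordProb(u, w): predictive probability of w after context u."""
        if u is None:
            return 1.0 / len(self.vocab)
        d, theta = self.d[len(u)], self.theta[len(u)]
        c_u = sum(c for _, c in self.tables[u])
        t_u = len(self.tables[u])
        parent_prob = self.word_prob(self.parent(u), w)
        if c_u == 0:
            return parent_prob
        c_uw = sum(c for word, c in self.tables[u] if word == w)
        t_uw = sum(1 for word, _ in self.tables[u] if word == w)
        return (c_uw - d * t_uw + (theta + d * t_u) * parent_prob) / (theta + c_u)

    def add_word(self, u, w):
        """Seat a customer for an observed word w in restaurant u: choose
        among the tables already serving w and a new table, with weights
        proportional to those in DrawWord; a new table passes w on to pi(u)."""
        if u is None:
            return
        d, theta = self.d[len(u)], self.theta[len(u)]
        own = [k for k, (word, _) in enumerate(self.tables[u]) if word == w]
        if own:
            weights = [max(0.0, self.tables[u][k][1] - d) for k in own]
            weights.append((theta + d * len(self.tables[u]))
                           * self.word_prob(self.parent(u), w))
            k = random.choices(own + [None], weights=weights)[0]
        else:
            k = None
        if k is None:
            self.tables[u].append([w, 1])
            self.add_word(self.parent(u), w)
        else:
            self.tables[u][k][1] += 1

# Toy usage: a bigram model (contexts of length 0 and 1, oldest word first).
random.seed(1)
lm = HPYLM(vocab=["the", "cat", "sat", "on", "mat"],
           discounts=[0.5, 0.8], strengths=[1.0, 1.0])
text = "the cat sat on the mat".split()
for i, w in enumerate(text):
    lm.add_word(tuple(text[max(0, i - 1):i]), w)
print(lm.word_prob(("the",), "cat"))
print(lm.word_prob(("the",), "sat"))
```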
Given the training data $\mathcal{D}$, we are interested in the posterior distribution over the latent vectors $\mathbf{G} = \{G_v : \text{all contexts } v\}$ and parameters $\Theta = \{\theta_m, d_m : 0 \le m \le n-1\}$:

$$p(\mathbf{G}, \Theta \mid \mathcal{D}) = p(\mathbf{G}, \Theta, \mathcal{D}) / p(\mathcal{D}) \qquad (7)$$

As mentioned previously, the hierarchical Chinese restaurant process marginalizes out each $G_u$, replacing it with the seating arrangement in the corresponding restaurant, which we shall denote by $S_u$. Let $\mathbf{S} = \{S_v : \text{all contexts } v\}$. We are thus interested in the equivalent posterior over seating arrangements instead:

$$p(\mathbf{S}, \Theta \mid \mathcal{D}) = p(\mathbf{S}, \Theta, \mathcal{D}) / p(\mathcal{D}) \qquad (8)$$

The most important quantities we need for language modelling are the predictive probabilities: what is the probability of a test word $w$ after a context $u$? This is given by

$$p(w \mid u, \mathcal{D}) = \int p(w \mid u, \mathbf{S}, \Theta)\, p(\mathbf{S}, \Theta \mid \mathcal{D})\, d(\mathbf{S}, \Theta) \qquad (9)$$

where the first probability on the right is the predictive probability under a particular setting of seating arrangements $\mathbf{S}$ and parameters $\Theta$, and the overall predictive probability is obtained by averaging this with respect to the posterior over $\mathbf{S}$ and $\Theta$ (second probability on right). We approximate the integral with samples $\{\mathbf{S}^{(i)}, \Theta^{(i)}\}_{i=1}^{I}$ drawn from $p(\mathbf{S}, \Theta \mid \mathcal{D})$:

$$p(w \mid u, \mathcal{D}) \approx \frac{1}{I} \sum_{i=1}^{I} p(w \mid u, \mathbf{S}^{(i)}, \Theta^{(i)}) \qquad (10)$$

while $p(w \mid u, \mathbf{S}, \Theta)$ is given by the function WordProb($u$,$w$):

$$p(w \mid \emptyset, \mathbf{S}, \Theta) = 1/V \qquad (11)$$

$$p(w \mid u, \mathbf{S}, \Theta) = \frac{c_{uw\cdot} - d_{|u|} t_{uw\cdot}}{\theta_{|u|} + c_{u\cdot\cdot}} + \frac{\theta_{|u|} + d_{|u|} t_{u\cdot\cdot}}{\theta_{|u|} + c_{u\cdot\cdot}}\, p(w \mid \pi(u), \mathbf{S}, \Theta) \qquad (12)$$

where the counts are obtained from the seating arrangement $S_u$ in the Chinese restaurant process corresponding to $G_u$. We use Gibbs sampling to obtain the posterior samples $\{\mathbf{S}, \Theta\}$ (Neal, 1993). Gibbs sampling keeps track of the current state of each variable of interest in the model, and iteratively resamples the state of each variable given the current states of all other variables. It can be shown that the states of the variables converge to the required samples from the posterior distribution after a sufficient number of iterations. Specifically for the hierarchical Pitman-Yor language model, the variables consist of, for each $u$ and each word $x_{ul}$ drawn from $G_u$, the index $k_{ul}$ of the draw from $G_{\pi(u)}$ to which $x_{ul}$ is assigned. In the Chinese restaurant metaphor, this is the index of the table at which the $l$th customer sat in the restaurant corresponding to $G_u$. If $x_{ul}$ has value $w$, it can only be assigned to draws from $G_{\pi(u)}$ that have value $w$ as well. This can either be a pre-existing draw with value $w$, or it can be a new draw taking on value $w$. The relevant probabilities are given in the functions DrawWord($u$) and WordProb($u$,$w$), where we treat $x_{ul}$ as the last word drawn from $G_u$. This gives:

$$p(k_{ul} = k \mid \mathbf{S}^{-ul}, \Theta) \propto \frac{\max(0,\, c^{-ul}_{u x_{ul} k} - d)}{\theta + c^{-ul}_{u\cdot\cdot}} \qquad (13)$$

$$p(k_{ul} = k^{\mathrm{new}} \text{ with } y_{u k^{\mathrm{new}}} = x_{ul} \mid \mathbf{S}^{-ul}, \Theta) \propto \frac{\theta + d\, t^{-ul}_{u\cdot\cdot}}{\theta + c^{-ul}_{u\cdot\cdot}}\, p(x_{ul} \mid \pi(u), \mathbf{S}^{-ul}, \Theta) \qquad (14)$$

where the superscript $-ul$ means the corresponding set of variables or counts with $x_{ul}$ excluded. The parameters $\Theta$ are sampled using an auxiliary variable sampler as detailed in (Teh, 2006). The overall sampling scheme for an $n$-gram hierarchical Pitman-Yor language model takes $O(nT)$ time and requires $O(M)$ space per iteration, where $T$ is the number of words in the training set, and $M$ is the number of unique $n$-grams. During test time, the computational cost is $O(nI)$, since the predictive probabilities (12) require $O(n)$ time to calculate for each of the $I$ samples. The hierarchical Pitman-Yor language model produces discounts that grow gradually as a function of n-gram counts.
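In code, one sweep of this Gibbs sampler visits every customer, removes it from its table, and reseats it according to (13) and (14). The sketch below covers only the reseating choice; it assumes the caller has already removed the customer (producing the -ul counts), supplies the parent predictive probability, and maintains the table data structures elsewhere. All function and variable names are ours, not the paper's.

import random

def reseat_customer(word_tables, total_tables, d, theta, p_parent):
    """Choose a table for one removed customer with label w, following (13)-(14).
    The shared denominator (theta + c_{u..}^{-ul}) cancels, so only numerators matter.
    word_tables : counts c^{-ul}_{uwk} for the tables in restaurant u already serving w
    total_tables: t^{-ul}_{u..}, the total number of tables in restaurant u
    p_parent    : p(w | pi(u), S^{-ul}, Theta), e.g. computed by word_prob above
    Returns an index into word_tables, or None to open a new table."""
    weights = [max(0.0, count - d) for count in word_tables]      # equation (13)
    weights.append((theta + d * total_tables) * p_parent)         # equation (14)
    r = random.uniform(0.0, sum(weights))
    for k, w in enumerate(weights):
        r -= w
        if r <= 0.0:
            return k if k < len(word_tables) else None
    return None

Opening a new table also sends the word up as a new customer in the parent restaurant (mirroring DrawWord), and removing a customer undoes both steps; the strength and discount parameters are then resampled separately with the auxiliary-variable scheme of (Teh, 2006).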
Notice that although each Pitman-Yor process Gu only has one discount parameter, the predictive probabilities (12) produce different discount values since tuw· can take on different values for different words w. In fact tuw· will on average be larger if cuw· is larger; averaged over the posterior, the actual amount of discount will grow slowly as the count cuw· grows. This is shown in Figure 2 (left), where we see that the growth of discounts is sublinear. The correspondence to interpolated Kneser-Ney is now straightforward. If we restrict tuw· to be at most 1, that is, tuw· = min(1, cuw·) (15) cuw· = X u′:π(u′)=u tu′w· (16) we will get the same discount value so long as cuw· > 0, i.e. absolute discounting. Further supposing that the strength parameters are all θ|u| = 0, the predictive probabilities (12) now directly reduces to the predictive probabilities given by interpolated Kneser-Ney. Thus we can interpret interpolated Kneser-Ney as the approximate inference scheme (15,16) in the hierarchical Pitman-Yor language model. Modified Kneser-Ney uses the same values for the counts as in (15,16), but uses a different valued discount for each value of cuw· up to a maximum of c(max). Since the discounts in a hierarchical Pitman-Yor language model are limited to between 0 and 1, we see that modified Kneser-Ney is not an approximation of the hierarchical PitmanYor language model. 6 Experimental Results We performed experiments on the hierarchical Pitman-Yor language model on a 16 million word corpus derived from APNews. This is the same dataset as in (Bengio et al., 2003). The training, validation and test sets consist of about 14 million, 1 million and 1 million words respectively, while the vocabulary size is 17964. For trigrams with n = 3, we varied the training set size between approximately 2 million and 14 million words by six equal increments, while we also experimented with n = 2 and 4 on the full 14 million word training set. We compared the hierarchical Pitman-Yor language model trained using the proposed Gibbs sampler (HPYLM) against interpolated KneserNey (IKN), modified Kneser-Ney (MKN) with maximum discount cut-off c(max) = 3 as recommended in (Chen and Goodman, 1998), and the hierarchical Dirichlet language model (HDLM). For the various variants of Kneser-Ney, we first determined the parameters by conjugate gradient descent in the cross-entropy on the validation set. At the optimal values, we folded the validation set into the training set to obtain the final n-gram probability estimates. This procedure is as recommended in (Chen and Goodman, 1998), and takes approximately 10 minutes on the full training set with n = 3 on a 1.4 Ghz PIII. For HPYLM we inferred the posterior distribution over the latent variables and parameters given both the training and validation sets using the proposed Gibbs sampler. Since the posterior is well-behaved and the sampler converges quickly, we only used 125 iterations for burn-in, and 175 iterations to collect posterior samples. On the full training set with n = 3 this took about 1.5 hours. Perplexities on the test set are given in Table 1. As expected, HDLM gives the worst performance, while HPYLM performs better than IKN. Perhaps surprisingly HPYLM performs slightly worse than MKN. We believe this is because HPYLM is not a perfect model for languages and as a result posterior estimates of the parameters are not optimized for predictive performance. 
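The reduction in (15) and (16) can also be written out directly: capping every t_uw. at one and setting every strength parameter to zero turns the predictive probability (12) into absolute discounting interpolated with the lower-order distribution, which is interpolated Kneser-Ney. A minimal sketch, reusing the count-table layout of the earlier word_prob sketch; for exact Kneser-Ney the tables for shorter contexts must already hold the type counts of equation (16), which is assumed here.

def ikn_word_prob(u, w, c, d, V):
    """Interpolated Kneser-Ney as the approximation (15)-(16) to the HPYLM."""
    if u is None:
        return 1.0 / V
    cu = c.get(u, {})
    c_all = sum(cu.values())
    parent = u[1:] if len(u) > 0 else None
    p_lower = ikn_word_prob(parent, w, c, d, V)
    if c_all == 0:
        return p_lower
    dk = d[len(u)]
    t_w = 1 if cu.get(w, 0) > 0 else 0               # t_{uw.} = min(1, c_{uw.})
    t_all = sum(1 for v in cu.values() if v > 0)     # number of distinct continuations of u
    return (max(0.0, cu.get(w, 0) - dk * t_w) / c_all
            + dk * t_all / c_all * p_lower)

With every strength set to zero, both denominators of (12) reduce to c_{u..}, which is what appears here.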
On the other hand parameters in the Kneser-Ney variants are optimized using cross-validation, so are given optimal values for prediction. To validate this conjecture, we also experimented with HPYCV, a hierarchical Pitman-Yor language model where the parameters are obtained by fitting them in a slight generalization of IKN where the strength param990 T n IKN MKN HPYLM HPYCV HDLM 2e6 3 148.8 144.1 145.7 144.3 191.2 4e6 3 137.1 132.7 134.3 132.7 172.7 6e6 3 130.6 126.7 127.9 126.4 162.3 8e6 3 125.9 122.3 123.2 121.9 154.7 10e6 3 122.0 118.6 119.4 118.2 148.7 12e6 3 119.0 115.8 116.5 115.4 144.0 14e6 3 116.7 113.6 114.3 113.2 140.5 14e6 2 169.9 169.2 169.6 169.3 180.6 14e6 4 106.1 102.4 103.8 101.9 136.6 Table 1: Perplexities of various methods and for various sizes of training set T and length of ngrams. eters θ|u|’s are allowed to be positive and optimized over along with the discount parameters using cross-validation. Seating arrangements are Gibbs sampled as in Section 5 with the parameter values fixed. We find that HPYCV performs better than MKN (except marginally worse on small problems), and has best performance overall. Note that the parameter values in HPYCV are still not the optimal ones since they are obtained by cross-validation using IKN, an approximation to a hierarchical Pitman-Yor language model. Unfortunately cross-validation using a hierarchical Pitman-Yor language model inferred using Gibbs sampling is currently too costly to be practical. In Figure 2 (right) we broke down the contributions to the cross-entropies in terms of how many times each word appears in the test set. We see that most of the differences between the methods appear as differences among rare words, with the contribution of more common words being negligible. HPYLM performs worse than MKN on words that occurred only once (on average) and better on other words, while HPYCV is reversed and performs better than MKN on words that occurred only once or twice and worse on other words. 7 Discussion We have described using a hierarchical PitmanYor process as a language model and shown that it gives performance superior to state-of-the-art methods. In addition, we have shown that the state-of-the-art method of interpolated KneserNey can be interpreted as approximate inference in the hierarchical Pitman-Yor language model. In the future we plan to study in more detail the differences between our model and the variants of Kneser-Ney, to consider other approximate inference schemes, and to test the model on larger data sets and on speech recognition. The hierarchical Pitman-Yor language model is a fully Bayesian model, thus we can also reap other benefits of the paradigm, including having a coherent probabilistic model, ease of improvements by building in prior knowledge, and ease in using as part of more complex models; we plan to look into these possible improvements and extensions. The hierarchical Dirichlet language model of (MacKay and Peto, 1994) was an inspiration for our work. Though (MacKay and Peto, 1994) had the right intuition to look at smoothing techniques as the outcome of hierarchical Bayesian models, the use of the Dirichlet distribution as a prior was shown to lead to non-competitive cross-entropy results. Our model is a nontrivial but direct generalization of the hierarchical Dirichlet language model that gives state-of-the-art performance. We have shown that with a suitable choice of priors (namely the Pitman-Yor process), Bayesian methods can be competitive with the best smoothing techniques. 
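The perplexities of Table 1 and the per-frequency cross-entropy breakdown behind Figure 2 (right) are both simple functions of the per-word predictive probabilities. The following sketch shows one straightforward way to compute them, with `prob` standing in for the sample-averaged predictive probability of equation (10); the exact grouping and normalization used for the figure are assumptions here, not taken from the paper.

import math
from collections import Counter, defaultdict

def perplexity_and_breakdown(test_ngrams, prob):
    """test_ngrams: list of (context, word) pairs covering the test set.
    prob(context, word): predictive probability of the word given its context.
    Returns (perplexity, {test-word frequency: cross-entropy contribution})."""
    freq = Counter(w for _, w in test_ngrams)
    n = len(test_ngrams)
    total = 0.0
    by_count = defaultdict(float)
    for u, w in test_ngrams:
        logp = math.log(prob(u, w), 2)
        total += logp
        by_count[freq[w]] -= logp / n        # contribution of words occurring freq[w] times
    cross_entropy = -total / n               # bits per word
    return 2.0 ** cross_entropy, dict(by_count)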
The hierarchical Pitman-Yor process is a natural generalization of the recently proposed hierarchical Dirichlet process (Teh et al., 2006). The hierarchical Dirichlet process was proposed to solve a different problem—that of clustering, and it is interesting to note that such a direct generalization leads us to a good language model. Both the hierarchical Dirichlet process and the hierarchical Pitman-Yor process are examples of Bayesian nonparametric processes. These have recently received much attention in the statistics and machine learning communities because they can relax previously strong assumptions on the parametric forms of Bayesian models yet retain computational efficiency, and because of the elegant way in which they handle the issues of model selection and structure learning in graphical models. Acknowledgement I wish to thank the Lee Kuan Yew Endowment Fund for funding, Joshua Goodman for answering many questions regarding interpolated KneserNey and smoothing techniques, John Blitzer and Yoshua Bengio for help with datasets, Anoop Sarkar for interesting discussion, and Hal Daume III, Min Yen Kan and the anonymous reviewers for 991 0 10 20 30 40 50 0 1 2 3 4 5 6 Count of n−grams Average Discount IKN MKN HPYLM 2 4 6 8 10 −0.01 −0.005 0 0.005 0.01 0.015 0.02 0.025 0.03 Cross−Entropy Differences from MKN Count of words in test set IKN MKN HPYLM HPYCV Figure 2: Left: Average discounts as a function of n-gram counts in IKN (bottom line), MKN (middle step function), and HPYLM (top curve). Right: Break down of cross-entropy on test set as a function of the number of occurrences of test words. Plotted is the sum over test words which occurred c times of cross-entropies of IKN, MKN, HPYLM and HPYCV, where c is as given on the x-axis and MKN is used as a baseline. Lower is better. Both panels are for the full training set and n = 3. helpful comments. References Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. S.F. Chen and J.T Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Computer Science Group, Harvard University. A. Gelman, J. Carlin, H. Stern, and D. Rubin. 1995. Bayesian data analysis. Chapman & Hall, London. Z. Ghahramani. 2005. Nonparametric Bayesian methods. Tutorial presentation at the UAI Conference. S. Goldwater, T.L. Griffiths, and M. Johnson. 2006. Interpolating between types and tokens by estimating power-law generators. In Advances in Neural Information Processing Systems, volume 18. J.T. Goodman. 2001. A bit of progress in language modeling. Technical Report MSR-TR-2001-72, Microsoft Research. J.T. Goodman. 2004. Exponential priors for maximum entropy models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. H. Ishwaran and L.F. James. 2001. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161–173. M.I. Jordan. 2005. Dirichlet processes, Chinese restaurant processes and all that. Tutorial presentation at the NIPS Conference. R. Kneser and H. Ney. 1995. Improved backingoff for m-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, volume 1. D.J.C. MacKay and L.C.B. Peto. 1994. A hierarchical Dirichlet language model. Natural Language Engineering. A. Nadas. 1984. Estimation of probabilities in the language model of the IBM speach recognition system. 
IEEE Transactions on Acoustics, Speech and Signal Processing, 32(4):859–861. R.M. Neal. 1993. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto. J. Pitman and M. Yor. 1997. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25:855–900. J. Pitman. 2002. Combinatorial stochastic processes. Technical Report 621, Department of Statistics, University of California at Berkeley. Lecture notes for St. Flour Summer School. R. Rosenfeld. 2000. Two decades of statistical language modeling: Where do we go from here? Proceedings of the IEEE, 88(8). Y.W. Teh, M.I. Jordan, M.J. Beal, and D.M. Blei. 2006. Hierarchical Dirichlet processes. To appear in Journal of the American Statistical Association. Y.W. Teh. 2006. A Bayesian interpretation of interpolated Kneser-Ney. Technical Report TRA2/06, School of Computing, National University of Singapore.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 993–1000, Sydney, July 2006. c⃝2006 Association for Computational Linguistics A Phonetic-Based Approach to Chinese Chat Text Normalization Yunqing Xia, Kam-Fai Wong Department of S.E.E.M. The Chinese University of Hong Kong Shatin, Hong Kong {yqxia, kfwong}@se.cuhk.edu.hk Wenjie Li Department of Computing The Hong Kong Polytechnic University Kowloon, Hong Kong [email protected] Abstract Chatting is a popular communication media on the Internet via ICQ, chat rooms, etc. Chat language is different from natural language due to its anomalous and dynamic natures, which renders conventional NLP tools inapplicable. The dynamic problem is enormously troublesome because it makes static chat language corpus outdated quickly in representing contemporary chat language. To address the dynamic problem, we propose the phonetic mapping models to present mappings between chat terms and standard words via phonetic transcription, i.e. Chinese Pinyin in our case. Different from character mappings, the phonetic mappings can be constructed from available standard Chinese corpus. To perform the task of dynamic chat language term normalization, we extend the source channel model by incorporating the phonetic mapping models. Experimental results show that this method is effective and stable in normalizing dynamic chat language terms. 1 Introduction Internet facilitates online chatting by providing ICQ, chat rooms, BBS, email, blogs, etc. Chat language becomes ubiquitous due to the rapid proliferation of Internet applications. Chat language text appears frequently in chat logs of online education (Heard-White, 2004), customer relationship management (Gianforte, 2003), etc. On the other hand, wed-based chat rooms and BBS systems are often abused by solicitors of terrorism, pornography and crime (McCullagh, 2004). Thus there is a social urgency to understand online chat language text. Chat language is anomalous and dynamic. Many words in chat text are anomalous to natural language. Chat text comprises of ill-edited terms and anomalous writing styles. We refer chat terms to the anomalous words in chat text. The dynamic nature reflects that chat language changes more frequently than natural languages. For example, many popular chat terms used in last year have been discarded and replaced by new ones in this year. Details on these two features are provided in Section 2. The anomalous nature of Chinese chat language is investigated in (Xia et al., 2005). Pattern matching and SVM are proposed to recognize the ambiguous chat terms. Experiments show that F-1 measure of recognition reaches 87.1% with the biggest training set. However, it is also disclosed that quality of both methods drops significantly when training set is older. The dynamic nature is investigated in (Xia et al., 2006a), in which an error-driven approach is proposed to detect chat terms in dynamic Chinese chat terms by combining standard Chinese corpora and NIL corpus (Xia et al., 2006b). Language texts in standard Chinese corpora are used as negative samples and chat text pieces in the NIL corpus as positive ones. The approach calculates confidence and entropy values for the input text. Then threshold values estimated from the training data are applied to identify chat terms. Performance equivalent to the methods in existence is achieved consistently. However, the issue of normalization is addressed in their work. 
Dictionary based chat term normalization is not a good solution because the dictionary cannot cover new chat terms appearing in the dynamic chat language. In the early stage of this work, a method based on source channel model is implemented for chat term normalization. The problem we encounter is addressed as follows. To deal with the anomalous nature, a chat language corpus is constructed with chat text collected from the Internet. How993 ever, the dynamic nature renders the static corpus outdated quickly in representing contemporary chat language. The dilemma is that timely chat language corpus is nearly impossible to obtain. The sparse data problem and dynamic problem become crucial in chat term normalization. We believe that some information beyond character should be discovered to help addressing these two problems. Observation on chat language text reveals that most Chinese chat terms are created via phonetic transcription, i.e. Chinese Pinyin in our case. A more exciting finding is that the phonetic mappings between standard Chinese words and chat terms remain stable in dynamic chat language. We are thus enlightened to make use of the phonetic mapping models, in stead of character mapping models, to design a normalization algorithm to translate chat terms to their standard counterparts. Different from the character mapping models constructed from chat language corpus, the phonetic mapping models are learned from a standard language corpus because they attempt to model mappings probabilities between any two Chinese characters in terms of phonetic transcription. Now the sparse data problem can thus be appropriately addressed. To normalize the dynamic chat language text, we extend the source channel model by incorporating phonetic mapping models. We believe that the dynamic problem can be resolved effectively and robustly because the phonetic mapping models are stable. The remaining sections of this paper are organized as follows. In Section 2, features of chat language are analyzed with evidences. In Section 3, we present methodology and problems of the source channel model approach to chat term normalization. In Section 4, we present definition, justification, formalization and parameter estimation for the phonetic mapping model. In Section 5, we present the extended source channel model that incorporates the phonetic mapping models. Experiments and results are presented in Section 6 as well as discussions and error analysis. We conclude this paper in Section 7. 2 Feature Analysis and Evidences Observation on NIL corpus discloses the anomalous and dynamic features of chat language. 2.1 Anomalous Chat language is explicitly anomalous in two aspects. Firstly, some chat terms are anomalous entries to standard dictionaries. For example, “介 里(here, jie4 li3)” is not a standard word in any contemporary Chinese dictionary while it is often used to replace “这里(here, zhe4 li3)” in chat language. Secondly, some chat terms can be found in standard dictionaries while their meanings in chat language are anomalous to the dictionaries. For example, “偶(even, ou3)” is often used to replace “我(me, wo2)” in chat text. But the entry that “偶” occupies in standard dictionary is used to describe even numbers. The latter case is constantly found in chat text, which makes chat text understanding fairly ambiguous because it is difficult to find out whether these terms are used as standard words or chat terms. 
2.2 Dynamic Chat text is deemed dynamic due to the fact that a large proportion of chat terms used in last year may become obsolete in this year. On the other hand, ample new chat terms are born. This feature is not as explicit as the anomalous nature. But it is as crucial. Observation on chat text in NIL corpus reveals that chat term set changes along with time very quickly. An empirical study is conducted on five chat text collections extracted from YESKY BBS system (bbs.yesky.com) within different time periods, i.e. Jan. 2004, July 2004, Jan. 2005, July 2005 and Jan. 2006. Chat terms in each collection are picked out by hand together with their frequencies so that five chat term sets are obtained. The top 500 chat terms with biggest frequencies in each set are selected to calculate reoccurring rates of the earlier chat term sets on the later ones. Set Jul-04 Jan-05 Jul-05 Jan-06 Avg. Jan-04 0.882 0.823 0.769 0.706 0.795 Jul-04 - 0.885 0.805 0.749 0.813 Jan-05 - - 0.891 0.816 0.854 Jul-05 - - - 0.875 0.875 Table 1. Chat term re-occurring rates. The rows represent the earlier chat term sets and the columns the later ones. The surprising finding in Table 1 is that 29.4% of chat terms are replaced with new ones within two years and about 18.5% within one year. The changing speed is much faster than that in standard language. This thus proves that chat text is dynamic indeed. The dynamic nature renders the static corpus outdated quickly. It poses a challenging issue on chat language processing. 994 3 Source Channel Model and Problems The source channel model is implemented as baseline method in this work for chat term normalization. We brief its methodology and problems as follows. 3.1 The Model The source channel model (SCM) is a successful statistical approach in speech recognition and machine translation (Brown, 1990). SCM is deemed applicable to chat term normalization due to similar task nature. In our case, SCM aims to find the character string n i ic C ,..., 2 ,1 } { = = that the given input chat text n j it T ,..., 2,1 } { = = is most probably translated to, i.e. i i c t → , as follows. ) ( ) ( ) | ( max arg ) | ( max arg ˆ T p C p C T p T C p C C C = = (1) Since ) (T p is a constant for C , so Cˆ should also maximize ) ( ) | ( C p C T p . Now ) | ( T C p is decomposed into two components, i.e. chat term translation observation model ) | ( C T p and language model ) (C p . The two models can be both estimated with maximum likelihood method using the trigram model in NIL corpus. 3.2 Problems Two problems are notable in applying SCM in chat term normalization. First, data sparseness problem is serious because timely chat language corpus is expensive thus small due to dynamic nature of chat language. NIL corpus contains only 12,112 pieces of chat text created in eight months, which is far from sufficient to train the chat term translation model. Second, training effectiveness is poor due to the dynamic nature. Trained on static chat text pieces, the SCM approach would perform poorly in processing chat text in the future. Robustness on dynamic chat text thus becomes a challenging issue in our research. Updating the corpus with recent chat text constantly is obviously not a good solution to the above problems. We need to find some information beyond character to help addressing the sparse data problem and dynamic problem. 
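For concreteness, the baseline decision rule of equation (1) amounts to scoring each candidate standard string C by log p(T|C) + log p(C) and keeping the best-scoring candidate. The sketch below makes two simplifying assumptions that are ours rather than the paper's: the channel probability factorizes over aligned character pairs (the paper maps each t_i to a c_i one to one but estimates its models from NIL-corpus trigrams), and the candidate set is supplied explicitly; all probabilities are assumed smoothed so that the logarithms are defined.

import math

def scm_score(chat_chars, cand_chars, trans_prob, lm_prob):
    """Equation (1): log p(T|C) + log p(C) for one candidate normalization C.
    trans_prob(t, c): channel probability of chat character t given standard character c
    lm_prob(history, c): trigram language model probability of c given its history"""
    score = 0.0
    for t, c in zip(chat_chars, cand_chars):              # channel model p(T|C)
        score += math.log(trans_prob(t, c))
    for i, c in enumerate(cand_chars):                    # language model p(C)
        history = tuple(cand_chars[max(0, i - 2):i])
        score += math.log(lm_prob(history, c))
    return score

def scm_normalize(chat_chars, candidates, trans_prob, lm_prob):
    """Pick the candidate C maximizing the source channel score."""
    return max(candidates, key=lambda c: scm_score(chat_chars, c, trans_prob, lm_prob))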
Fortunately, observation on chat terms provides us convincing evidence that the underlying phonetic mappings exist between most chat terms and their standard counterparts. The phonetic mappings are found promising in resolving the two problems. 4 Phonetic Mapping Model 4.1 Definition of Phonetic Mapping Phonetic mapping is the bridge that connects two Chinese characters via phonetic transcription, i.e. Chinese Pinyin in our case. For example, “介 ⎯ ⎯ ⎯ ⎯ → ⎯ ) 56 .0, , ( jie zhe 这” is the phonetic mapping connecting “这(this, zhe4)” and “介(interrupt, jie4)”, in which “zhe” and “jie” are Chinese Pinyin for “这” and “介” respectively. 0.56 is phonetic similarity between the two Chinese characters. Technically, the phonetic mappings can be constructed between any two Chinese characters within any Chinese corpus. In chat language, any Chinese character can be used in chat terms, and phonetic mappings are applied to connect chat terms to their standard counterparts. Different from the dynamic character mappings, the phonetic mappings can be produced with standard Chinese corpus before hand. They are thus stable over time. 4.2 Justifications on Phonetic Assumption To make use of phonetic mappings in normalization of chat language terms, an assumption must be made that chat terms are mainly formed via phonetic mappings. To justify the assumption, two questions must be answered. First, how many percent of chat terms are created via phonetic mappings? Second, why are the phonetic mapping models more stable than character mapping models in chat language? Mapping type Count Percentage Chinese word/phrase 9370 83.3% English capital 2119 7.9% Arabic number 1021 8.0% Other 1034 0.8% Table 2. Chat term distribution in terms of mapping type. To answer the first question, we look into chat term distribution in terms of mapping type in Table 2. It is revealed that 99.2 percent of chat terms in NIL corpus fall into the first four phonetic mapping types that make use of phonetic mappings. In other words, 99.2 percent of chat terms can be represented by phonetic mappings. 0.8% chat terms come from the OTHER type, emoticons for instance. The first question is undoubtedly answered with the above statistics. To answer the second question, an observation is conducted again on the five chat term sets described in Section 2.2. We create phonetic map995 pings manually for the 500 chat terms in each set. Then five phonetic mapping sets are obtained. They are in turn compared against the standard phonetic mapping set constructed with Chinese Gigaword. Percentage of phonetic mappings in each set covered by the standard set is presented in Table 3. Set Jan-04 Jul-04 Jan-05 Jul-05 Jan-06 percentage 98.7 99.3 98.9 99.3 99.1 Table 3. Percentages of phonetic mappings in each set covered by standard set. By comparing Table 1 and Table 3, we find that phonetic mappings remain more stable than character mappings in chat language text. This finding is convincing to justify our intention to design effective and robust chat language normalization method by introducing phonetic mappings to the source channel model. Note that about 1% loss in these percentages comes from chat terms that are not formed via phonetic mappings, emoticons for example. 4.3 Formalism The phonetic mapping model is a five-tuple, i.e. > < ) | ( Pr ), ( ), ( , , C T C pt T pt C T pm , which comprises of chat term character T , standard counterpart character C , phonetic transcription of T and C , i.e. 
) (T pt and ) (C pt , and the mapping probability ) | ( Pr C T pm that T is mapped to C via the phonetic mapping ( ) C T C T C pt T pt pm ⎯ ⎯ ⎯ ⎯ ⎯ ⎯ ⎯ → ⎯ ) | ( Pr ), ( ), ( (hereafter briefed by C T M⎯→ ⎯ ). As they manage mappings between any two Chinese characters, the phonetic mapping models should be constructed with a standard language corpus. This results in two advantages. One, sparse data problem can be addressed appropriately because standard language corpus is used. Two, the phonetic mapping models are as stable as standard language. In chat term normalization, when the phonetic mapping models are used to represent mappings between chat term characters and standard counterpart characters, the dynamic problem can be addressed in a robust manner. Differently, the character mapping model used in the SCM (see Section 3.1) connects two Chinese characters directly. It is a three-tuple, i.e. > < ) | ( Pr , , C T C T cm , which comprises of chat term character T , standard counterpart character C and the mapping probability ) | ( Pr C T cm that T is mapped to C via this character mapping. As they must be constructed from chat language training samples, the character mapping models suffer from data sparseness problem and dynamic problem. 4.4 Parameter Estimation Two questions should be answered in parameter estimation. First, how are the phonetic mapping space constructed? Second, how are the phonetic mapping probabilities estimated? To construct the phonetic mapping models, we first extract all Chinese characters from standard Chinese corpus and use them to form candidate character mapping models. Then we generate phonetic transcription for the Chinese characters and calculate phonetic probability for each candidate character mapping model. We exclude those character mapping models holding zero probability. Finally, the character mapping models are converted to phonetic mapping models with phonetic transcription and phonetic probability incorporated. The phonetic probability is calculated by combining phonetic similarity and character frequencies in standard language as follows. ( ) ( ) ∑ × × = i i i slc slc pm A A ps A fr A A ps A fr A A ob ) , ( ) ( ) , ( ) ( ) , ( Pr (2) In Equation (2) } { iA is the character set in which each element iA is similar to character A in terms of phonetic transcription. ) (c frslc is a function returning frequency of given character c in standard language corpus and ) , ( 2 1 c c ps phonetic similarity between character 1c and 2 c . Phonetic similarity between two Chinese characters is calculated based on Chinese Pinyin as follows. ))) ( ( )), ( ( ( ))) ( ( )), ( ( ( )) ( ), ( ( ) , ( A py final A py final Sim A py initial A py initial Sim A py A py Sim A A ps × = = (3) In Equation (3) ) (c py is a function that returns Chinese Pinyin of given character c , and ) (x initial and ) (x final return initial (shengmu) and final (yunmu) of given Chinese Pinyin x respectively. For example, Chinese Pinyin for the Chinese character “这” is “zhe”, in which “zh” is initial and “e” is final. When initial or final is 996 empty for some Chinese characters, we only calculate similarity of the existing parts. An algorithm for calculating similarity of initial pairs and final pairs is proposed in (Li et al., 2003) based on letter matching. Problem of this algorithm is that it always assigns zero similarity to those pairs containing no common letter. For example, initial similarity between “ch” and “q” is set to zero with this algorithm. 
But in fact, pronunciations of the two initials are very close to each other in Chinese speech. So non-zero similarity values should be assigned to these special pairs before hand (e.g., similarity between “ch” and “q” is set to 0.8). The similarity values are agreed by some native Chinese speakers. Thus Li et al.’s algorithm is extended to output a pre-defined similarity value before letter matching is executed in the original algorithm. For example, Pinyin similarity between “chi” and “qi” is calculated as follows. 8.0 1 8.0 ) , ( ) , ( ) ( = × = × = i i Sim q ch Sim chi,qi Sim 5 Extended Source Channel Model We extend the source channel model by inserting phonetic mapping models n i i m M ,..., 2 ,1 } { = = into equation (1), in which chat term character it is mapped to standard character ic via i m , i.e. i m i c t i⎯→ ⎯ . The extended source channel model (XSCM) is mathematically addressed as follows. ) ( ) ( ) | ( ) , | ( max arg ) , | ( max arg ˆ , , T p C p C M p C M T p T M C p C M C M C = = (4) Since ) (T p is a constant, Cˆ and Mˆ should also maximize ) ( ) | ( ) , | ( C p C M p C M T p . Now three components are involved in XSCM, i.e. chat term normalization observation model ) , | ( C M T p , phonetic mapping model ) | ( C M p and language model ) (C p . Chat Term Normalization Observation Model. We assume that mappings between chat terms and their standard Chinese counterparts are independent of each other. Thus chat term normalization probability can be calculated as follows. ∏ = i i i i c m t p C M T p ) , | ( ) , | ( (5) The ) , | ( i i i c m t p ’s are estimated using maximum likelihood estimation method with Chinese character trigram model in NIL corpus. Phonetic Mapping Model. We assume that the phonetic mapping models depend merely on the current observation. Thus the phonetic mapping probability is calculated as follows. ∏ = i i i c m p C M p ) | ( ) | ( (6) in which ) | ( i i c m p ’s are estimated with equation (2) and (3) using a standard Chinese corpus. Language Model. The language model ) (C p ’s can be estimated using maximum likelihood estimation method with Chinese character trigram model on NIL corpus. In our implementation, Katz Backoff smoothing technique (Katz, 1987) is used to handle the sparse data problem, and Viterbi algorithm is employed to find the optimal solution in XSCM. 6 Evaluation 6.1 Data Description Training Sets Two types of training data are used in our experiments. We use news from Xinhua News Agency in LDC Chinese Gigaword v.2 (CNGIGA) (Graf et al., 2005) as standard Chinese corpus to construct phonetic mapping models because of its excellent coverage of standard Simplified Chinese. We use NIL corpus (Xia et al., 2006b) as chat language corpus. To evaluate our methods on size-varying training data, six chat language corpora are created based on NIL corpus. We select 6056 sentences from NIL corpus randomly to make the first chat language corpus, i.e. C#1. In every next corpus, we add extra 1,211 random sentences. So 7,267 sentences are contained in C#2, 8,478 in C#3, 9,689 in C#4, 10,200 in C#5, and 12,113 in C#6. Test Sets Test sets are used to prove that chat language is dynamic and XSCM is effective and robust in normalizing dynamic chat language terms. Six time-varying test sets, i.e. T#1 ~ T#6, are created in our experiments. They contain chat language sentences posted from August 2005 to Jan 2006. We randomly extract 1,000 chat language sentences posted in each month. 
So the timestamps of the six test sets are in temporal order, with the timestamp of T#1 the earliest and that of T#6 the newest. The normalized sentences are created by hand and used as the standard normalization answers.
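Before turning to the evaluation, the Pinyin similarity of Section 4.4 (equation (3), together with the pre-defined values for acoustically close initial pairs, e.g. Sim(ch, q) = 0.8, so that Sim(chi, qi) = 0.8 × 1.0 = 0.8) can be sketched as follows. The `letter_match_sim` helper is only a stand-in for the Li et al. (2003) letter-matching score, and the table of pre-defined pairs is illustrative rather than the full list used in the system.

# Illustrative pre-defined similarities for initial pairs that share no letters
# but sound alike; the paper gives ch/q = 0.8 as an example.
PREDEFINED_INITIAL_SIM = {("ch", "q"): 0.8, ("q", "ch"): 0.8}

def letter_match_sim(a, b):
    """Stand-in for the Li et al. (2003) letter-matching similarity:
    here simply the fraction of matching positions (an assumption)."""
    if not a and not b:
        return 1.0
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def initial_sim(a, b):
    return PREDEFINED_INITIAL_SIM.get((a, b), letter_match_sim(a, b))

def pinyin_sim(initial_a, final_a, initial_b, final_b):
    """Equation (3): the product of initial (shengmu) similarity and final
    (yunmu) similarity; empty parts are simply skipped."""
    sim = 1.0
    if initial_a or initial_b:
        sim *= initial_sim(initial_a, initial_b)
    if final_a or final_b:
        sim *= letter_match_sim(final_a, final_b)
    return sim

# pinyin_sim("ch", "i", "q", "i") == 0.8, reproducing the chi/qi example above.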
0.69 0.71 0.73 0.75 0.77 0.79 0.81 0.83 0.85 0.87 0.89 0.91 T#1 T#2 T#3 T#4 T#5 T#6 SCM-C#1 SCM-C#2 SCM-C#3 SCM-C#4 SCM-C#5 SCM-C#6 XSCM-C#1 XSCM-C#2 XSCM-C#3 XSCM-C#4 XSCM-C#5 XSCM-C#6 Figure 1. Tendency on f-1 measure in SCM and XSCM on six test sets with six chat language corpora. The second tendency is f-1 measure of both methods on same test sets drops when trained with size-varying chat language corpora. For example, both SCM and XSCM perform best on the largest training chat language corpus C#6 and worst on the smallest corpus C#1. This tendency reveals that both methods favor bigger training chat language corpus. So extending the chat language corpus should be one choice to improve quality of chat language term normalization. The last tendency is found on quality gap between SCM and XSCM. We calculate f-1 measure gaps between two methods using same training sets on same test sets (see Figure 2). Then the tendency is made clear. Quality gap between SCM and XSCM becomes bigger when test set 998 becomes newer. On the oldest test set T#1, the gap is smallest, while on the newest test set T#6, the gap reaches biggest value, i.e. around 0.09. This tendency reveals excellent capability of XSCM in addressing dynamic problem using the phonetic mapping models. 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 T#1 T#2 T#3 T#4 T#5 T#6 C#1 C#2 C#3 C#4 C#5 C#6 Figure 2. Tendency on f-1 measure gap in SCM and XSCM on six test sets with six chat language corpora. 6.4 Experiment II: SCM vs. XSCM Using Size-varying Chat Language Corpora and CNGIGA In this experiment we investigate on quality of SCM and XSCM when a real standard Chinese language corpus is incorporated. We want to prove that the dynamic problem can be addressed effectively and robustly when CNGIGA is used as standard Chinese corpus. We train the two methods on CNGIGA and each of the six chat language corpora, i.e. C#1 ~ C#6. We then test the two methods on six timevarying test sets, i.e. T#1 ~ T#6. F-1 measure values produced by SCM and XSCM in this experiment are present in Table 4. Test Set T#1 T#2 T#3 T#4 T#5 T#6 C#1 0.849 0.840 0.820 0.790 0.735 0.703 C#2 0.850 0.841 0.824 0.798 0.743 0.714 C#3 0.850 0.843 0.824 0.797 0.747 0.720 C#4 0.851 0.844 0.829 0.805 0.748 0.727 C#5 0.852 0.846 0.833 0.811 0.758 0.734 S C M C#6 0.854 0.849 0.837 0.816 0.763 0.740 C#1 0.880 0.878 0.883 0.878 0.881 0.878 C#2 0.883 0.883 0.888 0.882 0.884 0.880 C#3 0.885 0.885 0.890 0.884 0.887 0.883 C#4 0.890 0.888 0.893 0.888 0.893 0.887 C#5 0.893 0.892 0.897 0.892 0.897 0.892 X S C M C#6 0.898 0.896 0.900 0.897 0.901 0.896 Table 4. F-1 measure by SCM and XSCM on six test sets with six chat language corpora and CNGIGA. Three observations are conducted on our results. First, according to Table 4, f-1 measure of SCM with same training chat language corpora drops on time-varying test sets, but XSCM produces much better f-1 measure consistently using CNGIGA and same training chat language corpora (see Figure 3). This proves that phonetic mapping models are helpful in XSCM method. The phonetic mapping models contribute in two aspects. On the one hand, they improve quality of chat term normalization on individual test sets. On the other hand, satisfactory robustness is achieved consistently. 0.69 0.71 0.73 0.75 0.77 0.79 0.81 0.83 0.85 0.87 0.89 0.91 T#1 T#2 T#3 T#4 T#5 T#6 SCM-C#1 SCM-C#2 SCM-C#3 SCM-C#4 SCM-C#5 SCM-C#6 XSCM-C#1 XSCM-C#2 XSCM-C#3 XSCM-C#4 XSCM-C#5 XSCM-C#6 ` Figure 3. 
Tendency on f-1 measure in SCM and XSCM on six test sets with six chat language corpora and CNGIGA. The second observation is conducted on phonetic mapping models constructed with CNGIGA. We find that 4,056,766 phonetic mapping models are constructed in this experiment, while only 1,303,227 models are constructed with NIL corpus in Experiment I. This reveals that coverage of standard Chinese corpus is crucial to phonetic mapping modeling. We then compare two character lists constructed with two corpora. The 100 characters most frequently used in NIL corpus are rather different from those extracted from CNGIGA. We can conclude that phonetic mapping models should be constructed with a sound corpus that can represent standard language. The last observation is conducted on f-1 measure achieved by same methods on same test sets using size-varying training chat language corpora. Both methods produce best f-1 measure with biggest training chat language corpus C#6 on same test sets. This again proves that bigger training chat language corpus could be helpful to improve quality of chat language term normalization. One question might be asked whether quality of XSCM converges on size of the training chat language corpus. This question remains open due to limited chat language corpus available to us. 6.5 Error Analysis Typical errors in our experiments belong mainly to the following two types. 999 Err.1 Ambiguous chat terms Example-1: 我还是8 米 In this example, XSCM finds no chat term while the correct normalization answer is “我还 是不明 (I still don’t understand)”. Error illustrated in Example-1 occurs when chat terms “8(eight, ba1)” and “米(meter, mi3)” appear in a chat sentence together. In chat language, “米” in some cases is used to replace “明(understand, ming2)”, while in other cases, it is used to represent a unit for length, i.e. meter. When number “8” appears before “米”, it is difficult to tell whether they are chat terms within sentential context. In our experiments, 93 similar errors occurred. We believe this type of errors can be addressed within discoursal context. Err.2 Chat terms created in manners other than phonetic mapping Example-2: 忧虑ing In this example, XSCM does not recognize “ing” while the correct answer is “(正在)忧虑 (I’m worrying)”. This is because chat terms created in manners other than phonetic mapping are excluded by the phonetic assumption in XSCM method. Around 1% chat terms fall out of phonetic mapping types. Besides chat terms holding same form as showed in Example-2, we find that emoticon is another major exception type. Fortunately, dictionary-based method is powerful enough to handle the exceptions. So, in a real system, the exceptions are handled by an extra component. 7 Conclusions To address the sparse data problem and dynamic problem in Chinese chat text normalization, the phonetic mapping models are proposed in this paper to represent mappings between chat terms and standard words. Different from character mappings, the phonetic mappings are constructed from available standard Chinese corpus. We extend the source channel model by incorporating the phonetic mapping models. Three conclusions can be made according to our experiments. Firstly, XSCM outperforms SCM with same training data. Secondly, XSCM produces higher performance consistently on time-varying test sets. Thirdly, both SCM and XSCM perform best with biggest training chat language corpus. Some questions remain open to us regarding optimal size of training chat language corpus in XSCM. 
Does the optimal size exist? Then what is it? These questions will be addressed in our future work. Moreover, bigger context will be considered in chat term normalization, discourse for instance. Acknowledgement Research described in this paper is partially supported by the Chinese University of Hong Kong under the Direct Grant Scheme project (2050330) and Strategic Grant Scheme project (4410001). References Brown, P. F., J. Cocke, S. A. D. Pietra, V. J. D. Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer and P. S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, v.16 n.2, p.79-85. Gianforte, G.. 2003. From Call Center to Contact Center: How to Successfully Blend Phone, Email, Web and Chat to Deliver Great Service and Slash Costs. RightNow Technologies. Graf, D., K. Chen, J.Kong and K. Maeda. 2005. Chinese Gigaword Second Edition. LDC Catalog Number LDC2005T14. Heard-White, M., Gunter Saunders and Anita Pincas. 2004. Report into the use of CHAT in education. Final report for project of Effective use of CHAT in Online Learning, Institute of Education, University of London. James, F.. 2000. Modified Kneser-Ney Smoothing of n-gram Models. RIACS Technical Report 00.07. Katz, S. M.. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, 35(3):400-401. Li, H., W. He and B. Yuan. 2003. An Kind of Chinese Text Strings' Similarity and its Application in Speech Recognition. Journal of Chinese Information Processing, 2003 Vol.17 No.1 P.60-64. McCullagh, D.. 2004. Security officials to spy on chat rooms. News provided by CNET Networks. November 24, 2004. Xia, Y., K.-F. Wong and W. Gao. 2005. NIL is not Nothing: Recognition of Chinese Network Informal Language Expressions. 4th SIGHAN Workshop at IJCNLP'05, pp.95-102. Xia, Y. and K.-F. Wong. 2006a. Anomaly Detecting within Dynamic Chinese Chat Text. EACL’06 NEW TEXT workshop, pp.48-55. Xia, Y., K.-F. Wong and W. Li. 2006b. Constructing A Chinese Chat Text Corpus with A Two-Stage Incremental Annotation Approach. LREC’06. 1000 | 2006 | 125 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1001–1008, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Discriminative Pruning of Language Models for Chinese Word Segmentation Jianfeng Li Haifeng Wang Dengjun Ren Guohua Li Toshiba (China) Research and Development Center 5/F., Tower W2, Oriental Plaza, No.1, East Chang An Ave., Dong Cheng District Beijing, 100738, China {lijianfeng, wanghaifeng, rendengjun, liguohua}@rdc.toshiba.com.cn Abstract This paper presents a discriminative pruning method of n-gram language model for Chinese word segmentation. To reduce the size of the language model that is used in a Chinese word segmentation system, importance of each bigram is computed in terms of discriminative pruning criterion that is related to the performance loss caused by pruning the bigram. Then we propose a step-by-step growing algorithm to build the language model of desired size. Experimental results show that the discriminative pruning method leads to a much smaller model compared with the model pruned using the state-of-the-art method. At the same Chinese word segmentation F-measure, the number of bigrams in the model can be reduced by up to 90%. Correlation between language model perplexity and word segmentation performance is also discussed. 1 Introduction Chinese word segmentation is the initial stage of many Chinese language processing tasks, and has received a lot of attention in the literature (Sproat et al., 1996; Sun and Tsou, 2001; Zhang et al., 2003; Peng et al., 2004). In Gao et al. (2003), an approach based on source-channel model for Chinese word segmentation was proposed. Gao et al. (2005) further developed it to a linear mixture model. In these statistical models, language models are essential for word segmentation disambiguation. However, an uncompressed language model is usually too large for practical use since all realistic applications have memory constraints. Therefore, language model pruning techniques are used to produce smaller models. Pruning a language model is to eliminate a number of parameters explicitly stored in it, according to some pruning criteria. The goal of research for language model pruning is to find criteria or methods, using which the model size could be reduced effectively, while the performance loss is kept as small as possible. A few criteria have been presented for language model pruning, including count cut-off (Jelinek, 1990), weighted difference factor (Seymore and Rosenfeld, 1996), KullbackLeibler distance (Stolcke, 1998), rank and entropy (Gao and Zhang, 2002). These criteria are general for language model pruning, and are not optimized according to the performance of language model in specific tasks. In recent years, discriminative training has been introduced to natural language processing applications such as parsing (Collins, 2000), machine translation (Och and Ney, 2002) and language model building (Kuo et al., 2002; Roark et al., 2004). To the best of our knowledge, it has not been applied to language model pruning. In this paper, we propose a discriminative pruning method of n-gram language model for Chinese word segmentation. It differentiates from the previous pruning approaches in two respects. First, the pruning criterion is based on performance variation of word segmentation. Second, the model of desired size is achieved by adding valuable bigrams to a base model, instead of by pruning bigrams from an unpruned model. 
We define a misclassification function that approximately represents the likelihood that a sentence will be incorrectly segmented. The 1001 variation value of the misclassification function caused by adding a parameter to the base model is used as the criterion for model pruning. We also suggest a step-by-step growing algorithm that can generate models of any reasonably desired size. We take the pruning method based on Kullback-Leibler distance as the baseline. Experimental results show that our method outperforms the baseline significantly with small model size. With the F-Measure of 96.33%, number of bigrams decreases by up to 90%. In addition, by combining the discriminative pruning method with the baseline method, we obtain models that achieve better performance for any model size. Correlation between language model perplexity and system performance is also discussed. The remainder of the paper is organized as follows. Section 2 briefly discusses the related work on language model pruning. Section 3 proposes our discriminative pruning method for Chinese word segmentation. Section 4 describes the experimental settings and results. Result analysis and discussions are also presented in this section. We draw the conclusions in section 5. 2 Related Work A simple way to reduce the size of an n-gram language model is to exclude those n-grams occurring infrequently in training corpus. It is named as count cut-off method (Jelinek, 1990). Because counts are always integers, the size of the model can only be reduced to discrete values. Gao and Lee (2000) proposed a distributionbased pruning. Instead of pruning n-grams that are infrequent in training data, they prune ngrams that are likely to be infrequent in a new document. Experimental results show that it is better than traditional count cut-off method. Seymore and Rosenfeld (1996) proposed a method to measure the difference of the models before and after pruning each n-gram, and the difference is computed as: )] | ( log ) | ( [log ) , ( j i j i i j h w P h w P w h N − ′ × − (1) Where P(wi|hj) denotes the conditional probabilities assigned by the original model, and P′(wi|hj) denotes the probabilities in the pruned model. N(hj, wi) is the discounted frequency of ngram event hjwi. Seymore and Rosenfeld (1996) showed that this method is more effective than the traditional cut-off method. Stolcke (1998) presented a more sound criterion for computing the difference of models before and after pruning each n-gram, which is called relative entropy or Kullback-Leibler distance. It is computed as: ∑ − ′ − j i h w j i j i j i h w P h w P h w P , )] | ( log ) | ( )[log , ( (2) The sum is over all words wi and histories hj. This criterion removes some of the approximations employed in Seymore and Rosenfeld (1996). In addition, Stolcke (1998) presented a method for efficient computation of the Kullback-Leibler distance of each n-gram. In Gao and Zhang (2002), three measures are studied for the purpose of language model pruning. They are probability, rank, and entropy. Among them, probability is very similar to that proposed by Seymore and Rosenfeld (1996). Gao and Zhang (2002) also presented a method of combining two criteria, and showed the combination of rank and entropy achieved the smallest models. 3 Discriminative Pruning for Chinese Word Segmentation 3.1 Problem Definition In this paper, discussions are restricted to bigram language model P(wy|wx). 
In a bigram model, three kinds of parameters are involved: bigram probability Pm(wy|wx) for seen bigram wxwy in training corpus, unigram probability Pm(w) and backoff coefficient αm(w) for any word w. For any wx and wy in the vocabulary, bigram probability P(wy|wx) is computed as: ⎩ ⎨ ⎧ = × > = 0 ) , ( ) ( ) ( 0 ) , ( ) | ( ) | ( y x y m x m y x x y m x y w w c if w P w w w c if w w P w w P α (3) As equation (3) shows, the probability of an unseen bigram is computed by the product of the unigram probability and the corresponding backoff coefficient. If we remove a seen bigram from the model, we can still yield a bigram probability for it, by regarding it as an unseen bigram. Thus, we can reduce the number of bigram probabilities explicitly stored in the model. By doing this, model size decreases. This is the foundation for bigram model pruning. The research issue is to find an effective criterion to compute "importance" of each bigram. Here, "importance" indicates the performance loss caused by pruning the bigram. Generally, given a target model size, the method for language model pruning is described in Figure 1. In fact, deciding which bigrams should be excluded from the model is equivalent to deciding 1002 which bigrams should be included in the model. Hence, we suggest a growing algorithm through which a model of desired size can also be achieved. It is illustrated in Figure 2. Here, two terms are introduced. Full-bigram model is the unpruned model containing all seen bigrams in training corpus. And base model is currently the unigram model. For the discriminative pruning method suggested in this paper, growing algorithm instead of pruning algorithm is applied to generate the model of desired size. In addition, "importance" of each bigram indicates the performance improvement caused by adding a bigram into the base model. Figure 1. Language Model Pruning Algorithm Figure 2. Growing Algorithm for Language Model Pruning 3.2 Discriminative Pruning Criterion Given a Chinese character string S, a word segmentation system chooses a sequence of words W* as the segmentation result, satisfying: )) | ( log ) ( (log max arg * W S P W P W W + = (4) The sum of the two logarithm probabilities in equation (4) is called discriminant function: ) | ( log ) ( log ) , ; , ( W S P W P W S g + = Γ Λ (5) Where Г denotes a language model that is used to compute P(W), and Λ denotes a generative model that is used to compute P(S|W). In language model pruning, Λ is an invariable. The discriminative pruning criterion is inspired by the comparison of segmented sentences using full-bigram model ГF and using base model ГB. Given a sentence S, full-bigram model chooses as the segmentation result, and base model chooses as the segmentation result, satisfying: B * F W * B W ) , ; , ( max arg * F W F W S g W Γ Λ = (6) 1. Given the desired model size, compute the number of bigrams that should be pruned. The number is denoted as m; 2. Compute "importance" of each bigram; 3. Sort all bigrams in the language model, according to their "importance"; 4. Remove m most "unimportant" bigrams from the model; 5. Re-compute backoff coefficients in the model. ) , ; , ( max arg * B W B W S g W Γ Λ = (7) Here, given a language model Г, we define a misclassification function representing the difference between discriminant functions of and : * F W * B W ) , ; , ( ) , ; , ( ) , ; ( * * Γ Λ − Γ Λ = Γ Λ F B W S g W S g S d (8) The misclassification function reflects which one of and is inclined to be chosen as the segmentation result. 
If , we may extract some hints from the comparison of them, and select a few valuable bigrams. By adding these bigrams to base model, we should make the model choose the correct answer between and . If , no hints can be extracted. * F W * B W * * B F W W ≠ * F W * B W * * B F W W = 1. Given the desired model size, compute the number of bigrams that should be added into the base model. The number is denoted as n; 2. Compute "importance" of each bigram included in the full-bigram model but excluded from the base model; 3. Sort the bigrams according to their "importance"; 4. Add n most "important" bigrams into the base model; 5. Re-compute backoff coefficients in the base model. Let W0 be the known correct word sequence. Under the precondition , we describe our method in the following three cases. * * B F W W ≠ Case 1: and 0 * W WF = 0 * W WB ≠ Here, full-bigram model chooses the correct answer, while base model does not. Based on equation (6), (7) and (8), we know that d(S;Λ,ГB) > 0 and d(S;Λ,ГF) < 0. It implies that adding bigrams into base model may lead the misclassification function from positive to negative. Which bigram should be added depends on the variation of misclassification function caused by adding it. If adding a bigram makes the misclassification function become smaller, it should be added with higher priority. We add each bigram individually to ГB, and then compute the variation of the misclassification function. Let Г′ denotes the model after addB 1003 ing bigram wxwy into ГB B. According to equation (5) and (8), we can write the misclassification function using ГB and Г′ separately: B ) | ( log ) ( log ) | ( log ) ( log ) , ; ( * * * * F F B B B B B W S P W P W S P W P S d Λ Λ − − + = Γ Λ (9) ) | ( log ) ( log ) | ( log ) ( log ) , ; ( * * * * F F B B W S P W P W S P W P S d Λ Λ − ′ − + ′ = Γ′ Λ (10) Where PB(.), P′(.), P B ] ] Λ(.) represent probabilities in base model, model Г′ and model Λ separately. The variation of the misclassification function is computed as: )] ( log ) ( [log )] ( log ) ( [log ) , ; ( ) , ; ( ) ; ( * * * * B B B F B F B y x W P W P W P W P S d S d w w S d − ′ − − ′ = Γ′ Λ − Γ Λ = Δ (11) Because the only difference between base model and model Г′ is that model Г′ involves the bigram probability P′(wy|wx), we have: )] ( log ) ( log ) | ( )[log , ( ] | ( log ) | ( [log ) ( log ) ( log * * )1 ( * ) ( * )1 ( * ) ( * * x B y B x y y x F i i F i F B i F i F F B F w w P w w P w w W n w w P w w P W P W P α − − ′ = − ′ = − ′ ∑ − − (12) Where denotes the number of times the bigram w ) , ( * y x F w w W n xwy appears in sequence . Note that in equation (12), base model is treated as a bigram model instead of a unigram model. The reason lies in two respects. First, the unigram model can be regarded as a particular bigram model by setting all backoff coefficients to 1. Second, the base model is not always a unigram model during the step-by-step growing algorithm, which will be discussed in the next subsection. * F W In fact, bigram probability P′(wy|wx) is extracted from full-bigram model, so P′(wy|wx) = PF(wy|wx). In addition, similar deductions can be conducted to the second bracket in equation (11). Thus, we have: [ [ ) ( log ) ( log ) | ( log ) , ( ) , ( ) ; ( * * x B y B x y F y x B y x F y x w w P w w P w w W n w w W n w w S d α − − × − = Δ (13) Note that d(S;Λ,Г) approximately indicates the likelihood that S will be incorrectly segmented, so Δd(S;wxwy) represents the performance improvement caused by adding wxwy. 
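The variation in equation (13) therefore needs only the two segmentations of S and three model parameters for the candidate bigram. A sketch of the computation, assuming dictionary-based model parameters (an assumed representation, not from the paper):

import math

def delta_d(w_x, w_y, seg_full, seg_base, p_full_bigram, p_base_unigram, alpha_base):
    # Equation (13):
    #   delta_d(S; w_x w_y) = -[n(W*_F, w_x w_y) - n(W*_B, w_x w_y)]
    #                          * [log P_F(w_y|w_x) - log P_B(w_y) - log alpha_B(w_x)]
    def n_occurrences(words):
        return sum(1 for a, b in zip(words, words[1:]) if (a, b) == (w_x, w_y))
    count_diff = n_occurrences(seg_full) - n_occurrences(seg_base)
    logprob_diff = (math.log(p_full_bigram[(w_x, w_y)])     # log P_F(w_y|w_x)
                    - math.log(p_base_unigram[w_y])         # - log P_B(w_y)
                    - math.log(alpha_base.get(w_x, 1.0)))   # - log alpha_B(w_x)
    return -count_diff * logprob_diff

The sign with which this quantity enters a bigram's overall importance depends on which of the three cases the sentence falls into, as described next.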
Thus, "importance" of bigram wxwy on S is computed as: ) ; ( ) ; ( y x y x w w S d S w w imp Δ = (14) Case 2: and 0 * W WF ≠ 0 * W WB = Here, it is just contrary to case 1. In this way, we have: ) ; ( ) ; ( y x y x w w S d S w w imp Δ − = (15) Case 3: * 0 * B F W W W ≠ ≠ In case 1 and 2, bigrams are added so that discriminant function of correct word sequence becomes bigger, and that of incorrect word sequence becomes smaller. In case 3, both and are incorrect. Thus, the misclassification function in equation (8) does not represent the likelihood that S will be incorrectly segmented. Therefore, variation of the misclassification function in equation (13) can not be used to measure the "importance" of a bigram. Here, sentence S is ignored, and the "importance" of all bigrams on S are zero. * F W * B W The above three cases are designed for one sentence. The "importance" of each bigram on the whole training corpus is the sum of its "importance" on each single sentence, as equation (16) shows. ∑ = S y x y x S w w imp w w imp ) ; ( ) ( (16) To sum up, the "importance" of each bigram is computed as Figure 3 shows. 1. For each wxwy, set imp(wxwy) = 0; 2. For each sentence in training corpus: For each wxwy: if W and W : 0 * W F = B ≠ F ≠ B = 0 * W imp(wxwy) += Δd(S;wxwy); else if W and W : 0 * W 0 * W imp(wxwy) −= Δd(S;wxwy); Figure 3. Calculation of "Importance" of Bigrams We illustrate the process of computing "importance" of bigrams with a simple example. Suppose S is " 这(zhe4) 样(yang4) 才(cai2) 能 (neng2) 更(geng4) 方(fang1) 便(bian4)". The segmented result using full-bigram model is "这 样(zhe4yang4)/才(cai2)/能(neng2)/更(geng4)/方 便(fang1bian4)", which is the correct word sequence. The segmented result using base model 1004 is " 这样(zhe4yang4)/ 才能(cai2neng2)/ 更 (geng4)/ 方便(fang1bian4)". Obviously, it matches case 1. For bigram "这样(zhe4yang4)才 (cai2)", it occurs in once, and does not occur in . According to equation (13), its "importance" on sentence S is: * F W * B W imp(这样(zhe4yang4)才(cai2);S) = logPF(才(cai2)|这样(zhe4yang4)) − [logPB(才(cai2)) + logα B B B(这样(zhe4yang4))] For bigram " 更(geng4) 方便(fang1bian4)", since it occurs once both in and , its "importance" on S is zero. * F W * B W 3.3 Step-by-step Growing Given the target model size, we can add exact number of bigrams to the base model at one time by using the growing algorithm illustrated in Figure 2. But it is more suitable to adopt a stepby-step growing algorithm illustrated in Figure 4. As shown in equation (13), the "importance" of each bigram depends on the base model. Initially, the base model is set to the unigram model. With bigrams added in, it becomes a growing bigram model. Thus, and * B W ) ( log x B w α will change. So, the added bigrams will affect the calculation of "importance" of bigrams to be added. Generally, adding more bigrams at one time will lead to more negative impacts. Thus, it is expected that models produced by step-by-step growing algorithm may achieve better performance than growing algorithm, and smaller step size will lead to even better performance. Figure 4. Step-by-step Growing Algorithm 4 Experiments 4.1 Experiment Settings The training corpus comes from People's daily 2000, containing about 25 million Chinese characters. It is manually segmented into word sequences, according to the word segmentation specification of Peking University (Yu et al., 2003). The testing text that is provided by Peking University comes from the second international Chinese word segmentation bakeoff organized by SIGHAN. 
The testing text is a part of People's daily 2001, consisting of about 170K Chinese characters. The vocabulary is automatically extracted from the training corpus, and the words occurring only once are removed. Finally, about 67K words are included in the vocabulary. The fullbigram model and the unigram model are trained by CMU language model toolkit (Clarkson and Rosenfeld, 1997). Without any count cut-off, the full-bigram model contains about 2 million bigrams. The word segmentation system is developed based on a source-channel model similar to that described in (Gao et al., 2003). Viterbi algorithm is applied to find the best word segmentation path. 4.2 Evaluation Metrics The language models built in our experiments are evaluated by two metrics. One is F-Measure of the word segmentation result; the other is language model perplexity. For F-Measure evaluation, we firstly segment the raw testing text using the model to be evaluated. Then, the segmented result is evaluated by comparing with the gold standard set. The evaluation tool is also from the word segmentation bakeoff. F-Measure is calculated as: 1. Given step size s; 2. Set the base model to be the unigram model; 3. Segment corpus with full-bigram model; 4. Segment corpus with base model; 5. Compute "importance" of each bigram included in the full-bigram model but excluded from the base model; 6. Sort the bigrams according to their "importance"; 7. Add s bigrams with the biggest "importance" to the base model; 8. Re-compute backoff coefficients in the base model; 9. If the base model is still smaller than the desired size, go to step 4; otherwise, stop. F-Measure Recall Precision Recall Precision 2 + × × = (17) For perplexity evaluation, the language model to be evaluated is used to provide the bigram probabilities for each word in the testing text. The perplexity is the mean logarithm probability as shown in equation (18): ∑= − − = N i i i w w P N M PP 1 1 2 ) | ( log 1 2 ) ( (18) 4.3 Comparison of Pruning Methods The Kullback-Leibler Distance (KLD) based method is the state-of-the-art method, and is 1005 taken as the baseline1. Pruning algorithm illustrated in Figure 1 is used for KLD based pruning. Growing algorithms illustrated in Figure 2 and Figure 4 are used for discriminative pruning method. Growing algorithms are not applied to KLD based pruning, because the computation of KLD is independent of the base model. At step 1 for KLD based pruning, m is set to produce ten models containing 10K, 20K, …, 100K bigrams. We apply each of the models to the word segmentation system, and evaluate the segmented results with the evaluation tool. The F-Measures of the ten models are illustrated in Figure 5, denoted by "KLD". For the discriminative pruning criterion, the growing algorithm illustrated in Figure 2 is firstly used. Unigram model acts as the base model. At step 1, n is set to 10K, 20K, …, 100K separately. At step 2, "importance" of each bigram is computed following Figure 3. Ten models are produced and evaluated. The F-Measures are also illustrated in Figure 5, denoted by "Discrim". By adding bigrams step by step as illustrated in Figure 4, and setting step size to 10K, 5K, and 2K separately, we obtain other three series of models, denoted by "Step-10K", "Step-5K" and "Step-2K" in Figure 5. We also include in Figure 5 the performance of the count cut-off method. Obviously, it is inferior to other methods. 
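The two metrics behind these comparisons reduce to a few lines. Equation (17) is the usual harmonic mean of precision and recall over segmented words, and equation (18) exponentiates the negative mean base-2 log probability of the test words. A minimal sketch, assuming the segmentation scorer already yields precision and recall and that the per-word log2 probabilities are available (names are illustrative):

def f_measure(precision, recall):
    # Equation (17)
    return 2 * precision * recall / (precision + recall)

def perplexity(log2_probs):
    # Equation (18): PP(M) = 2 ** ( -(1/N) * sum_i log2 P(w_i | w_{i-1}) )
    return 2 ** (-sum(log2_probs) / len(log2_probs))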
96.0 96.1 96.2 96.3 96.4 96.5 96.6 1 2 3 4 5 6 7 8 9 10 Bigram Num(10K) F-Measure(%) KLD Discrim Step-10K Step-5K Step-2K Cut-off Figure 5. Performance Comparison of Different Pruning Methods First, we compare the performance of "KLD" and "Discrim". When the model size is small, 1 Our pilot study shows that the method based on KullbackLeibler distance outperforms methods based on other criteria introduced in section 2. such as those models containing less than 70K bigrams, the performance of "Discrim" is better than "KLD". For the models containing more than 70K bigrams, "KLD" gets better performance than "Discrim". The reason is that the added bigrams affect the calculation of "importance" of bigrams to be added, which has been discussed in section 3.3. If we add the bigrams step by step, better performance is achieved. From Figure 5, it can be seen that all of the models generated by step-bystep growing algorithm outperform "KLD" and "Discrim" consistently. Compared with the baseline KLD based method, step-by-step growing methods result in at least 0.2 percent improvement for each model size. Comparing "Step-10K", "Step-5K" and "Step2K", they perform differently before the 60Kbigram point, and perform almost the same after that. The reason is that they are approaching their saturation states, which will be discussed in section 4.5. Before 60K-bigram point, smaller step size yields better performance. An example of detailed comparison result is shown in Table 1, where the F-Measure is 96.33%. The last column shows the relative model sizes with respect to the KLD pruned model. It shows that with the F-Measure of 96.33%, number of bigrams decreases by up to 90%. # of bigrams % of KLD KLD 100,000 100% Step-10K 25,000 25% Step-5K 15,000 15% Step-2K 10,000 10% Table 1. Comparison of Number of Bigrams at F-Measure 96.33% 4.4 Correlation between Perplexity and FMeasure Perplexities of the models built above are evaluated over the gold standard set. Figure 6 shows how the perplexities vary with the bigram numbers in models. Here, we notice that the KLD models achieve the lowest perplexities. It is not a surprising result, because the goal of KLD based pruning is to minimize the Kullback-Leibler distance that can be interpreted as a relative change of perplexity (Stolcke, 1998). Now we compare Figure 5 and Figure 6. Perplexities of KLD models are much lower than that of the other models, but their F-Measures are much worse than that of step-by-step growing 1006 models. It implies that lower perplexity does not always lead to higher F-Measure. However, when the comparison is restricted in a single pruning method, the case is different. For each pruning method, as more bigrams are included in the model, the perplexity curve falls, and the F-Measure curve rises. It implies there are correlations between them. We compute the Pearson product-moment correlation coefficient for each pruning method, as listed in Table 2. It shows that the correlation between perplexity and F-Measure is very strong. To sum up, the correlation between language model perplexity and system performance (here represented by F-Measure) depends on whether the models come from the same pruning method. If so, the correlation is strong. Otherwise, the correlation is weak. 300 350 400 450 500 550 600 650 700 1 2 3 4 5 6 7 8 9 10 Bigram Num(10K) Perplexity KLD Discrim Step-10K Step-5K Step-2K Cut-off Figure 6. 
Perplexity Comparison of Different Pruning Methods Pruning Method Correlation Cut-off -0.990 KLD -0.991 Discrim -0.979 Step-10K -0.985 Step-5K -0.974 Step-2K -0.995 Table 2. Correlation between Perplexity and F-Measure 4.5 Combination of Saturated Model and KLD The above experimental results show that stepby-step growing models achieve the best performance when less than 100K bigrams are added in. Unfortunately, they can not grow up into any desired size. A bigram has no chance to be added into the base model, unless it appears in the mis-aligned part of the segmented corpus, where ≠ . It is likely that not all bigrams have the opportunity. As more and more bigrams are added into the base model, the segmented training corpus using the current base model approaches to that using the full-bigram model. Gradually, none bigram can be added into the current base model. At that time, the model stops growing, and reaches its saturation state. The model that reaches its saturation state is named as saturated model. In our experiments, three step-by-step growing models reach their saturation states when about 100K bigrams are added in. * F W * B W By combining with the baseline KLD based method, we obtain models that outperform the baseline for any model size. We combine them as follows. If the desired model size is smaller than that of the saturated model, step-by-step growing is applied. Otherwise, Kullback-Leibler distance is used for further growing over the saturated model. For instance, by growing over the saturated model of "Step-2K", we obtain combined models containing from 100K to 2 million bigrams. The performance of the combined models and that of the baseline KLD models are illustrated in Figure 7. It shows that the combined model performs consistently better than KLD model over all of bigram numbers. Finally, the two curves converge at the performance of the full-bigram model. 96.3 96.4 96.5 96.6 96.7 96.8 96.9 97.0 10 30 50 70 90 110 130 150 170 190 207 Bigram Num(10K) F-Measure(%) KLD Combined Model Figure 7. Performance Comparison of Combined Model and KLD Model 5 Conclusions and Future Work A discriminative pruning criterion of n-gram language model for Chinese word segmentation was proposed in this paper, and a step-by-step growing algorithm was suggested to generate the model of desired size based on a full-bigram model and a base model. Experimental results 1007 showed that the discriminative pruning method achieves significant improvements over the baseline KLD based method. At the same F-measure, the number of bigrams can be reduced by up to 90%. By combining the saturated model and the baseline KLD based method, we achieved better performance for any model size. Analysis shows that, if the models come from the same pruning method, the correlation between perplexity and performance is strong. Otherwise, the correlation is weak. The pruning methods discussed in this paper focus on bigram pruning, keeping unigram probabilities unchanged. The future work will attempt to prune bigrams and unigrams simultaneously, according to a same discriminative pruning criterion. And we will try to improve the efficiency of the step-by-step growing algorithm. In addition, the method described in this paper can be extended to other applications, such as IME and speech recognition, where language models are applied in a similar way. References Philip Clarkson and Ronald Rosenfeld. 1997. Statistical Language Modeling Using the CMUCambridge Toolkit. In Proc. 
of the 5th European Conference on Speech Communication and Technology (Eurospeech-1997), pages 2707-2710.
Michael Collins. 2000. Discriminative Reranking for Natural Language Parsing. In Machine Learning: Proc. of the 17th International Conference (ICML-2000), pages 175-182.
Jianfeng Gao and Kai-Fu Lee. 2000. Distribution-based Pruning of Backoff Language Models. In Proc. of the 38th Annual Meeting of the Association for Computational Linguistics (ACL-2000), pages 579-585.
Jianfeng Gao, Mu Li, and Chang-Ning Huang. 2003. Improved Source-channel Models for Chinese Word Segmentation. In Proc. of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-2003), pages 272-279.
Jianfeng Gao, Mu Li, Andi Wu, and Chang-Ning Huang. 2005. Chinese Word Segmentation and Named Entity Recognition: A Pragmatic Approach. Computational Linguistics, 31(4): 531-574.
Jianfeng Gao and Min Zhang. 2002. Improving Language Model Size Reduction using Better Pruning Criteria. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-2002), pages 176-182.
Fredrick Jelinek. 1990. Self-organized Language Modeling for Speech Recognition. In Alexander Waibel and Kai-Fu Lee (Eds.), Readings in Speech Recognition, pages 450-506.
Hong-Kwang Jeff Kuo, Eric Fosler-Lussier, Hui Jiang, and Chin-Hui Lee. 2002. Discriminative Training of Language Models for Speech Recognition. In Proc. of the 27th International Conference on Acoustics, Speech and Signal Processing (ICASSP-2002), pages 325-328.
Franz Josef Och and Hermann Ney. 2002. Discriminative Training and Maximum Entropy Models for Statistical Machine Translation. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-2002), pages 295-302.
Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese Segmentation and New Word Detection using Conditional Random Fields. In Proc. of the 20th International Conference on Computational Linguistics (COLING-2004), pages 562-568.
Brian Roark, Murat Saraclar, Michael Collins, and Mark Johnson. 2004. Discriminative Language Modeling with Conditional Random Fields and the Perceptron Algorithm. In Proc. of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-2004), pages 47-54.
Kristie Seymore and Ronald Rosenfeld. 1996. Scalable Backoff Language Models. In Proc. of the 4th International Conference on Spoken Language Processing (ICSLP-1996), pages 232-235.
Richard Sproat, Chilin Shih, William Gale, and Nancy Chang. 1996. A Stochastic Finite-state Word-segmentation Algorithm for Chinese. Computational Linguistics, 22(3): 377-404.
Andreas Stolcke. 1998. Entropy-based Pruning of Backoff Language Models. In Proc. of the DARPA News Transcription and Understanding Workshop, pages 270-274.
Maosong Sun and Benjamin K. Tsou. 2001. A Review and Evaluation on Automatic Segmentation of Chinese. Contemporary Linguistics, 3(1): 22-32.
Shiwen Yu, Huiming Duan, Xuefeng Zhu, Bin Swen, and Baobao Chang. 2003. Specification for Corpus Processing at Peking University: Word Segmentation, POS Tagging and Phonetic Notation. Journal of Chinese Language and Computing, 13(2): 121-158.
Hua-Ping Zhang, Hong-Kui Yu, De-Yi Xiong, and Qun Liu. 2003. HHMM-based Chinese Lexical Analyzer ICTCLAS. In Proc. of the ACL-2003 Workshop on Chinese Language Processing (SIGHAN), pages 184-187.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1009–1016, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Novel Association Measures Using Web Search with Double Checking Hsin-Hsi Chen Ming-Shun Lin Yu-Chuan Wei Department of Computer Science and Information Engineering National Taiwan University Taipei, Taiwan [email protected];{mslin,ycwei}@nlg.csie.ntu.edu.tw Abstract A web search with double checking model is proposed to explore the web as a live corpus. Five association measures including variants of Dice, Overlap Ratio, Jaccard, and Cosine, as well as CoOccurrence Double Check (CODC), are presented. In the experiments on Rubenstein-Goodenough’s benchmark data set, the CODC measure achieves correlation coefficient 0.8492, which competes with the performance (0.8914) of the model using WordNet. The experiments on link detection of named entities using the strategies of direct association, association matrix and scalar association matrix verify that the double-check frequencies are reliable. Further study on named entity clustering shows that the five measures are quite useful. In particular, CODC measure is very stable on wordword and name-name experiments. The application of CODC measure to expand community chains for personal name disambiguation achieves 9.65% and 14.22% increase compared to the system without community expansion. All the experiments illustrate that the novel model of web search with double checking is feasible for mining associations from the web. 1 Introduction In statistical natural language processing, resources used to compute the statistics are indispensable. Different kinds of corpora have made available and many language models have been experimented. One major issue behind the corpus-based approaches is: if corpora adopted can reflect the up-to-date usage. As we know, languages are live. New terms and phrases are used in daily life. How to capture the new usages is an important research topic. The Web is a heterogeneous document collection. Huge-scale and dynamic nature are characteristics of the Web. Regarding the Web as a live corpus becomes an active research topic recently. How to utilize the huge volume of web data to measure association of information is an important issue. Resnik and Smith (2003) employ the Web as parallel corpora to provide bilingual sentences for translation models. Keller and Lapata (2003) show that bigram statistics for English language is correlated between corpus and web counts. Besides, how to get the word counts and the word association counts from the web pages without scanning over the whole collections is indispensable. Directly managing the web pages is not an easy task when the Web grows very fast. Search engine provides some way to return useful information. Page counts for a query denote how many web pages containing a specific word or a word pair roughly. Page count is different from word frequency, which denotes how many occurrences a word appear. Lin and Chen (2004) explore the use of the page counts provided by different search engines to compute the statistics for Chinese segmentation. In addition to the page counts, snippets returned by web search, are another web data for training. A snippet consists of a title, a short summary of a web page and a hyperlink to the web page. Because of the cost to retrieve the full web pages, short summaries are always adopted (Lin, Chen, and Chen, 2005). 
Various measures have been proposed to compute the association of objects of different granularity, such as terms and documents. Rodríguez and Egenhofer (2003) compute semantic similarity from WordNet and the SDTS ontology by word matching, feature matching and semantic neighborhood matching. Li et al. (2003) investigate how information sources can be used effectively, and propose a new similarity measure combining the shortest path length, depth and local density using WordNet. Matsuo et al. (2004) exploit the Jaccard coefficient to build a "Web of Trust" for an academic community.

This paper measures the association of terms using snippets returned by web search. A web search with double checking model is proposed in Section 2 to provide the statistics for various association measures. Common words and personal names are used for the experiments in Sections 3 and 4, respectively. Section 5 demonstrates how to derive communities from the Web using association measures and employ them to disambiguate personal names. Finally, Section 6 provides concluding remarks.

2 A Web Search with Double Checking Model

Instead of simple web page counts and complex web page collection, we propose a novel model, Web Search with Double Checking (WSDC), to analyze snippets. In the WSDC model, two objects X and Y are postulated to have an association if we can find Y from X (a forward process) and find X from Y (a backward process) by web search. The forward process counts the total occurrences of Y in the top N snippets of query X, denoted as f(Y@X). Similarly, the backward process counts the total occurrences of X in the top N snippets of query Y, denoted as f(X@Y). The forward and backward processes form a double-check operation. Under the WSDC model, the association scores between X and Y are defined by the following formulas:

$\mathrm{VariantDice}(X,Y)=\begin{cases}0 & \text{if } f(Y@X)=0 \text{ or } f(X@Y)=0\\ \dfrac{f(Y@X)+f(X@Y)}{f(X)+f(Y)} & \text{otherwise}\end{cases}$   (1)

$\mathrm{VariantCosine}(X,Y)=\dfrac{\min(f(X@Y),\,f(Y@X))}{\sqrt{f(X)\times f(Y)}}$   (2)

$\mathrm{VariantJaccard}(X,Y)=\dfrac{\min(f(X@Y),\,f(Y@X))}{f(X)+f(Y)-\max(f(X@Y),\,f(Y@X))}$   (3)

$\mathrm{VariantOverlap}(X,Y)=\dfrac{\min(f(X@Y),\,f(Y@X))}{\min(f(X),\,f(Y))}$   (4)

$\mathrm{CODC}(X,Y)=\begin{cases}0 & \text{if } f(Y@X)=0 \text{ or } f(X@Y)=0\\ e^{\log\left(\frac{f(Y@X)}{f(X)}\times\frac{f(X@Y)}{f(Y)}\right)^{\alpha}} & \text{otherwise}\end{cases}$   (5)

where f(X) is the total occurrences of X in the top N snippets of query X and, similarly, f(Y) is the total occurrences of Y in the top N snippets of query Y. Formulas (1)-(4) are variants of the Dice, Cosine, Jaccard, and Overlap Ratio association measures. Formula (5) is a function CODC (Co-Occurrence Double-Check), which measures the association in the interval [0,1]. In the extreme cases, when f(Y@X)=0 or f(X@Y)=0, CODC(X,Y)=0; and when f(Y@X)=f(X) and f(X@Y)=f(Y), CODC(X,Y)=1. In the first case, X and Y have no association. In the second case, X and Y have the strongest association.

3 Association of Common Words

We employ Rubenstein-Goodenough's (1965) benchmark data set to compare the performance of the various association measures. The data set consists of 65 word pairs. The similarities between words, called Rubenstein and Goodenough ratings (RG ratings), were rated on a scale of 0.0 to 4.0, from "semantically unrelated" to "highly synonymous", by 51 human subjects.
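Before turning to the correlation results, here is a direct transcription of formulas (1)-(5) as they would be computed from the double-check counts. This sketch is not from the paper: the argument names are illustrative, f(X) and f(Y) are assumed to be positive, and the exponent in formula (5) is read as (f(Y@X)/f(X) x f(X@Y)/f(Y))^alpha so that the score stays real-valued in [0,1] -- that reading is an assumption.

import math

def variant_dice(f_yx, f_xy, f_x, f_y):
    # Formula (1); f_yx = f(Y@X), f_xy = f(X@Y), f_x = f(X), f_y = f(Y)
    if f_yx == 0 or f_xy == 0:
        return 0.0
    return (f_yx + f_xy) / (f_x + f_y)

def variant_cosine(f_yx, f_xy, f_x, f_y):
    return min(f_xy, f_yx) / math.sqrt(f_x * f_y)            # Formula (2)

def variant_jaccard(f_yx, f_xy, f_x, f_y):
    return min(f_xy, f_yx) / (f_x + f_y - max(f_xy, f_yx))   # Formula (3)

def variant_overlap(f_yx, f_xy, f_x, f_y):
    return min(f_xy, f_yx) / min(f_x, f_y)                   # Formula (4)

def codc(f_yx, f_xy, f_x, f_y, alpha=0.15):
    # Formula (5), under the real-valued reading noted above
    if f_yx == 0 or f_xy == 0:
        return 0.0
    return ((f_yx / f_x) * (f_xy / f_y)) ** alpha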
The Pearson product-moment correlation coefficient, rxy, between the RG ratings X and the association scores Y computed by a model shown as follows measures the performance of the model. y x i n i i xy s s n y y x x r )1 ( ) )( ( 1 − − − = ∑ = (6) Where x and y are the sample means of xi and yi, and sx and sy are sample standard deviations of xi and yi and n is total samples. Most approaches (Resink, 1995; Lin, 1998; Li et al., 2003) used 28 word pairs only. Resnik (1995) obtained information content from WordNet and achieved correlation coefficient 0.745. Lin (1998) proposed an informationtheoretic similarity measure and achieved a correlation coefficient of 0.8224. Li et al. (2003) combined semantic density, path length and depth effect from WordNet and achieved the correlation coefficient 0.8914. 1010 100 200 300 400 500 600 700 800 900 VariantDice 0.5332 0.5169 0.5352 0.5406 0.5306 0.5347 0.5286 0.5421 0.5250 VariantOverlap 0.5517 0.6516 0.6973 0.7173 0.6923 0.7259 0.7473 0.7556 0.7459 VariantJaccard 0.5533 0.6409 0.6993 0.7229 0.6989 0.738 0.7613 0.7599 0.7486 VariantCosine 0.5552 0.6459 0.7063 0.7279 0.6987 0.7398 0.7624 0.7594 0.7501 CODC (α=0.15) 0.5629 0.6951 0.8051 0.8473 0.8438 0.8492 0.8222 0.8291 0.8182 Jaccard Coeff* 0.5847 0.5933 0.6099 0.5807 0.5463 0.5202 0.4855 0.4549 0.4622 Table 1. Correlation Coefficients of WSDC Model on Word-Word Experiments Model RG Rating Resnik (1995) Lin (1998) Li et al (2003) VariantCosine (#snippets=700) WSDC CODC(α=0.15, #snippets=600) WSDC Correlation Coefficient - 0.7450 0.8224 0.8914 0.7624 0.8492 chord-smile 0.02 1.1762 0.20 0 0 0 rooster-voyage 0.04 0 0 0 0 0 noon-string 0.04 0 0 0 0 0 glass-magician 0.44 1.0105 0.06 0 0 0 monk-slave 0.57 2.9683 0.18 0.350 0 0 coast-forest 0.85 0 0.16 0.170 0.0019 0.1686 monk-oracle 0.91 2.9683 0.14 0.168 0 0 lad-wizard 0.99 2.9683 0.20 0.355 0 0 forest-graveyard 1 0 0 0.132 0 0 food-rooster 1.09 1.0105 0.04 0 0 0 coast-hill 1.26 6.2344 0.58 0.366 0 0 car-journey 1.55 0 0 0 0.0014 0.2049 crane-implement 2.37 2.9683 0.39 0.366 0 0 brother-lad 2.41 2.9355 0.20 0.355 0.0027 0.1811 bird-crane 2.63 9.3139 0.67 0.472 0 0 bird-cock 2.63 9.3139 0.83 0.779 0.0058 0.2295 food-fruit 2.69 5.0076 0.24 0.170 0.0025 0.2355 brother-monk 2.74 2.9683 0.16 0.779 0.0027 0.1956 asylum-madhouse 3.04 15.666 0.97 0.779 0.0015 0.1845 furnace-stove 3.11 1.7135 0.18 0.585 0.0035 0.1982 magician-wizard 3.21 13.666 1 0.999 0.0031 0.2076 journey-voyage 3.58 6.0787 0.89 0.779 0.0086 0.2666 coast-shore 3.6 10.808 0.93 0.779 0.0139 0.2923 implement-tool 3.66 6.0787 0.80 0.778 0.0033 0.2506 boy-lad 3.82 8.424 0.85 0.778 0.0101 0.2828 Automobile-car 3.92 8.0411 1 1 0.0144 0.4229 Midday-noon 3.94 12.393 1 1 0.0097 0.2994 gem-jewel 3.94 14.929 1 1 0.0107 0.3530 Table 2. Comparisons of WSDC with Models in Previous Researches In our experiments on the benchmark data set, we used information from the Web rather than WordNet. Table 1 summarizes the correlation coefficients between the RG rating and the association scores computed by our WSDC model. We consider the number of snippets from 100 to 900. The results show that CODC > VariantCosine > VariantJaccard > VariantOverlap > VariantDice. CODC measure achieves the best performance 0.8492 when α=0.15 and total snippets to be analyzed are 600. Matsuo et al. (2004) used Jaccard coefficient to calculate similarity between personal names using the Web. The coefficient is defined as follows. 
1011 ) ( ) ( ) , ( Y X f Y X f Y X Coff Jaccard ∪ ∩ = (7) Where f(X∩Y) is the number of pages including X’s and Y’s homepages when query “X and Y” is submitted to a search engine; f(X∪Y) is the number of pages including X’s or Y’s homepages when query “X or Y” is submitted to a search engine. We revised this formula as follows and evaluated it with Rubenstein-Goodenough’s benchmark. ) ( ) ( ) , ( * Y X f Y X f Y X Coff Jaccard s s ∪ ∩ = (8) Where fs(X∩Y) is the number of snippets in which X and Y co-occur in the top N snippets of query “X and Y”; fs(X∪Y) is the number of snippets containing X or Y in the top N snippets of query “X or Y”. We test the formula on the same benchmark. The last row of Table 1 shows that Jaccard Coeff* is worse than other models when the number of snippets is larger than 100. Table 2 lists the results of previous researches (Resink, 1995; Lin, 1998; Li et al., 2003) and our WSDC models using VariantCosine and CODC measures. The 28 word pairs used in the experiments are shown. CODC measure can compete with Li et al. (2003). The word pair “carjourney” whose similarity value is 0 in the papers (Resink, 1995; Lin, 1998; Li et al., 2003) is captured by our model. In contrast, our model cannot deal with the two word pairs “craneimplement” and “bird-crane”. 4 Association of Named Entities Although the correlation coefficient of WSDC model built on the web is a little worse than that of the model built on WordNet, the Web provides live vocabulary, in particular, named entities. We will demonstrate how to extend our WSDC method to mine the association of personal names. That will be difficult to resolve with previous approaches. We design two experiments – say, link detection test and named entity clustering, to evaluate the association of named entities. Given a named-entity set L, we define a link detection test to check if any two named entities NEi and NEj (i≠j) in L have a relationship R using the following three strategies. • Direct Association: If the double check frequency of NEi and NEj is larger than 0, Figure 1. Three Strategies for Link Detection i.e., f(NEj@NEi)>0 and f(NEi@NEj)>0, then the link detection test says “yes”, i.e., NEi and NEj have direct association. Otherwise, the test says “no”. Figure 1(a) shows the direct association. • Association Matrix: Compose an n×n binary matrix M=(mij), where mij=1 if f(NEj@NEi)>0 and f(NEi@NEj)>0; mij=0 if f(NEj@NEi)=0 or f(NEi@NEj)=0; and n is total number of named entities in L. Let Mt be a transpose matrix of M. The matrix A=M×Mt is an association matrix. Here the element aij in A means that total aij common named entities are associated with both NEi and NEj directly. Figure 1(b) shows a one-layer indirect association. Here, aij=3. We can define NEi and NEj have an indirect association if aij is larger than a threshold λ. That is, NEi and NEj should associate with at least λ common named entities directly. The strategy of association matrix specifies: if aij≥λ, then the link detection test says “yes”, otherwise it says “no”. In the example shown in Figure 1(b), NEi and NEj are indirectly associated when 0<λ≤3. • Scalar Association Matrix: Compose a binary association matrix B from the association matrix A as: bij=1 if aij>0 and bij=0 if aij=0. The matrix S= B×Bt is a scalar as1012 sociation matrix. NEi and NEj may indirectly associate with a common named entity NEk. Figure 1(c) shows a two-layer indirect association. The ∑= × = n k kj ik ij b b s 1 denotes how many such an NEk there are. 
In the example of Figure 1(c), two named entities indirectly associate NEi and NEj at the same time. We can define NEi and NEj have an indirect association if sij is larger than a threshold δ. In other words, if sij >δ, then the link detection test says “yes”, otherwise it says “no”. To evaluate the performance of the above three strategies, we prepare a test set extracted from domz web site (http://dmoz.org), the most comprehensive human-edited directory of the Web. The test data consists of three communities: actor, tennis player, and golfer, shown in Table 3. Total 220 named entities are considered. The golden standard of link detection test is: we compose 24,090 (=220×219/2) named entity pairs, and assign “yes” to those pairs belonging to the same community. Category Path in domz.org # of Person Names Top: Sports: Golf: Golfers 10 Top: Sports: Tennis: Players: Female (+Male) 90 Top: Arts: People: Image Galleries: Female (+Male): Individual 120 Table 3. Test Set for Association Evaluation of Named Entities When collecting the related values for computing the double check frequencies for any named entity pair (NEi and NEj), i.e., f(NEj@NEi), f(NEi@NEj), f(NEi), and f(NEj), we consider naming styles of persons. For example, “Alba, Jessica” have four possible writing: “Alba, Jessica”, “Jessica Alba”, “J. Alba” and “Alba, J.” We will get top N snippets for each naming style, and filter out duplicate snippets as well as snippets of ULRs including dmoz.org and google.com. Table 4 lists the experimental results of link detection on the test set. The precisions of two baselines are: guessing all “yes” (46.45%) and guessing all “no” (53.55%). All the three strategies are better than the two baselines and the performance becomes better when the numbers of snippets increase. The strategy of direct association shows that using double checks to measure the association of named entities also gets good effects as the association of common words. For the strategy of association matrix, the best performance 90.14% occurs in the case of 900 snippets and λ=6. When larger number of snippets is used, a larger threshold is necessary to achieve a better performance. Figure 2(a) illustrates the relationship between precision and threshold (λ). The performance decreases when λ>6. The performance of the strategy of scalar association matrix is better than that of the strategy of association matrix in some λ and δ. Figure 2(b) shows the relationship between precision and threshold δ for some number of snippets and λ. In link detection test, we only consider the binary operation of double checks, i.e., f(NEj@NEi) > 0 and f(NEi@NEj) > 0, rather than utilizing the magnitudes of f(NEj@NEi) and f(NEi@NEj). Next we employ the five formulas proposed in Section 2 to cluster named entities. The same data set as link detection test is adopted. An agglomerative average-link clustering algorithm is used to partition the given 220 named entities based on Formulas (1)-(5). Four-fold crossvalidation is employed and B-CUBED metric (Bagga and Baldwin, 1998) is adopted to evaluate the clustering results. Table 5 summarizes the experimental results. CODC (Formula 5), which behaves the best in computing association of common words, still achieves the better performance on different numbers of snippets in named entity clustering. The F-scores of the other formulas are larger than 95% when more snippets are considered to compute the double check frequencies. 
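Both the clustering above and the link-detection results reported next rest on the binary double check, and the three link-detection strategies can be phrased as a few matrix products over that relation. The sketch below is illustrative only; the predicate interface, the handling of the diagonal, and the function names are assumptions, not from the paper.

def association_matrices(names, double_check):
    # double_check(a, b) -> True when f(b@a) > 0 and f(a@b) > 0 (the binary double check)
    n = len(names)
    M = [[1 if i != j and double_check(names[i], names[j]) else 0 for j in range(n)]
         for i in range(n)]                      # direct association (diagonal set to 0 here)
    # A = M x M^T: a_ij = number of names directly associated with both NE_i and NE_j
    A = [[sum(M[i][k] * M[j][k] for k in range(n)) for j in range(n)] for i in range(n)]
    B = [[1 if a > 0 else 0 for a in row] for row in A]
    # S = B x B^T: s_ij = number of names NE_k indirectly associated with both NE_i and NE_j
    S = [[sum(B[i][k] * B[j][k] for k in range(n)) for j in range(n)] for i in range(n)]
    return M, A, S

def detect_link(i, j, M, A, S, strategy="scalar", lam=1, delta=1):
    if strategy == "direct":
        return M[i][j] == 1
    if strategy == "matrix":
        return A[i][j] >= lam        # at least lambda common direct associates
    return S[i][j] > delta           # scalar association matrix with threshold delta

With lambda and delta chosen as in Table 4, a pair is reported as linked whenever the corresponding predicate fires.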
Strategies 100 200 300 400 500 600 700 800 900 Direct Association 59.20% 62.86% 65.72% 67.88% 69.83% 71.35% 72.05% 72.46% 72.55% Association Matrix 71.53% (λ=1) 79.95% (λ=1) 84.00% (λ=2) 86.08% (λ=3) 88.13% (λ=4) 89.67% (λ=5) 89.98% (λ=5) 90.09% (λ=6) 90.14% (λ=6) Scalar Association Matrix 73.93% (λ=1, δ=6) 82.69% (λ=2, δ=9) 86.70% (λ=4, δ=9) 88.61% (λ=5, δ=10) 90.90% (λ=6, δ=12) 91.93% (λ=7, δ=12) 91.90% (λ=7, δ=18) 92.20% (λ=10, δ=16) 92.35% (λ=10, δ=18) Table 4. Performance of Link Detection of Named Entities 1013 (a) (b) Figure 2. (a) Performance of association matrix strategy. (b) Performance of scalar association matrix strategy (where λ is fixed and its values reference to scalar association matrix in Table 4) 100 200 300 400 500 600 700 800 900 P 91.70% 88.71% 87.02% 87.49% 96.90% 100.00% 100.00% 100.00% 100.00% R 55.80% 81.10% 87.70% 93.00% 89.67% 93.61% 94.42% 94.88% 94.88% VariantDice F 69.38% 84.73% 87.35% 90.16% 93.14% 96.69% 97.12% 97.37% 97.37% P 99.13% 87.04% 85.35% 85.17% 88.16% 88.16% 88.16% 97.59% 98.33% R 52.16% 81.10% 86.24% 93.45% 92.03% 93.64% 92.82% 90.82% 93.27% VariantOverlap F 68.35% 83.96% 85.79% 89.11% 90.05% 90.81% 90.43% 94.08% 95.73% P 99.13% 97.59% 98.33% 95.42% 97.59% 88.16% 95.42% 100.00% 100.00% R 55.80% 77.53% 84.91% 88.67% 87.18% 90.58% 88.67% 93.27% 91.64% VariantJaccard F 71.40% 86.41% 91.12% 91.92% 92.09% 89.35% 91.92% 96.51% 95.63% P 84.62% 97.59% 85.35% 85.17% 88.16% 88.16% 88.16% 98.33% 98.33% R 56.22% 78.92% 86.48% 93.45% 92.03% 93.64% 93.64% 93.27% 93.27% VariantCosine F 67.55% 87.26% 85.91% 89.11% 90.05% 90.81% 90.81% 95.73% 95.73% P 91.70% 87.04% 87.02% 95.93% 98.33% 95.93% 95.93% 94.25% 94.25% R 55.80% 81.10% 90.73% 94.91% 94.91% 96.52% 98.24% 98.24% 98.24% CODC (α=0.15) F 69.38% 83.96% 88.83% 95.41% 96.58% 96.22% 97.07% 96.20% 96.20% Table 5. Performance of Various Scoring Formulas on Named Entity Clustering 5 Disambiguation Using Association of Named Entities This section demonstrates how to employ association mined from the Web to resolve the ambiguities of named entities. Assume there are n named entities, NE1, NE2, …, and NEn, to be disambiguated. A named entity NEj has m accompanying names, called cue names later, CNj1, CNj2, …, CNjm. We have two alternatives to use the cue names. One is using them directly, i.e., NEj is represented as a community of cue names Community(NEj)={CNj1, CNj2, …, CNjm}. The other is to expand the cue names CNj1, CNj2, …, CNjm for NEj using the web data as follows. Let CNj1 be an initial seed. Figure 3 sketches the concept of community expansion. (1) Collection: We submit a seed to Google, and select the top N returned snippets. Then, we use suffix trees to extract possible patterns (Lin and Chen, 2006). (2) Validation: We calculate CODC score of each extracted pattern (denoted Bi) with the seed A. If CODC(A,Bi) is strong enough, i.e., larger than a 1014 threshold θ, we employ Bi as a new seed and repeat steps (1) and (2). This procedure stops either expected number of nodes is collected or maximum number of layers is reached. (3) Union: The community initiated by the seed CNji is denoted by Community(CNji)={Bji1, Bji2, …, BBjir}, where Bjik is a new seed. The Cscore score, community score, of Bjik B is the CODC score of Bjik with its parent divided by the layer it is located. We repeat Collection and Validation steps until all the cue names CNji (1≤i≤m) of NEj are processed. Finally, we have ) ( ) ( 1 ji m i j CN Community NE Community = ∪ = Figure 3. 
A Community for a Seed “王建民” (“Chien-Ming Wang”) In a cascaded personal name disambiguation system (Wei, 2006), association of named entities is used with other cues such as titles, common terms, and so on. Assume k clusters, c1 c2 ... ck, have been formed using title cue, and we try to place NE1, NE2, …, and NEl into a suitable cluster. The cluster c is selected by the similarity measure defined below. ) ( ) ( 1 ) , ( 1 i i s i q j pn score C pn count r c NE score × = ∑= (9) ) , ( max arg ) k q 1( c q q j c NE score c ≤ ≤ = (10) Where pn1, pn2, …, pns are names which appear in both Community(NEj) and Community(cq); count(pni) is total occurrences of pni in Community(cq); r is total occurrences of names in Community(NEj); Cscore(pni) is community score of pni. If score(NEj, c ) is larger than a threshold, then NEj is placed into cluster c . In other words, NEj denotes the same person as those in c . We let the new Community( c ) be the old Community(c )∪{CNj1, CNj2, …, CNjm}. Otherwise, NEj is left undecided. To evaluate the personal name disambiguation, we prepare three corpora for an ambiguous name “ 王建民” (Chien-Ming Wang) from United Daily News Knowledge Base (UDN), Google Taiwan (TW), and Google China (CN). Table 6 summarizes the statistics of the test data sets. In UDN news data set, 37 different persons are mentioned. Of these, 13 different persons occur more than once. The most famous person is a pitcher of New York Yankees, which occupies 94.29% of 2,205 documents. In TW and CN web data sets, there are 24 and 107 different persons. The majority in TW data set is still the New York Yankees’s “Chien-Ming Wang”. He appears in 331 web pages, and occupies 88.03%. Comparatively, the majority in CN data set is a research fellow of Chinese Academy of Social Sciences, and he only occupies 18.29% of 421 web pages. Total 36 different “Chien-Ming Wang”s occur more than once. Thus, CN is an unbiased corpus. UDN TW CN # of documents 2,205 376 421 # of persons 37 24 107 # of persons of occurrences>1 13 9 36 Majority 94.29% 88.03% 18.29% Table 6. Statistics of Test Corpora M1 M2 P 0.9742 0.9674 (↓0.70%) R 0.9800 0.9677 (↓1.26%) UDN F 0.9771 0.9675 (↓0.98%) P 0.8760 0.8786 (↑0.07%) R 0.6207 0.7287 (↑17.40%) TW F 0.7266 0.7967 (↑9.65%) P 0.4910 0.5982 (↑21.83%) R 0.8049 0.8378 (↑4.09%) CN F 0.6111 0.6980 (↑14.22%) Table 7. Disambiguation without/with Community Expansion 1015 Table 7 shows the performance of a personal name disambiguation system without (M1)/with (M2) community expansion. In the news data set (i.e., UDN), M1 is a little better than M2. Compared to M1, M2 decreases 0.98% of F-score. In contrast, in the two web data sets (i.e., TW and CN), M2 is much better than M1. M2 has 9.65% and 14.22% increases compared to M1. It shows that mining association of named entities from the Web is very useful to disambiguate ambiguous names. The application also confirms the effectiveness of the proposed association measures indirectly. 6 Concluding Remarks This paper introduces five novel association measures based on web search with double checking (WSDC) model. In the experiments on association of common words, Co-Occurrence Double Check (CODC) measure competes with the model trained from WordNet. In the experiments on the association of named entities, which is hard to deal with using WordNet, WSDC model demonstrates its usefulness. The strategies of direct association, association matrix, and scalar association matrix detect the link between two named entities. 
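Before discussing Table 7, note that the cluster-assignment rule of equations (9) and (10), which produces the community-expansion results below, is small enough to sketch directly. The dictionary-of-counts representation of communities, the cscore mapping, and the helper names are assumptions, not from the paper.

def score(ne_community, cluster_community, cscore):
    # Equation (9): pn ranges over names shared by Community(NE_j) and Community(c_q);
    # count(pn) = occurrences of pn in the cluster's community,
    # r = total occurrences of names in the entity's community, Cscore(pn) = cscore[pn]
    shared = set(ne_community) & set(cluster_community)
    r = sum(ne_community.values())
    if r == 0:
        return 0.0
    return sum(cluster_community[pn] * cscore[pn] for pn in shared) / r

def assign(ne_community, clusters, cscore, threshold):
    # Equation (10): pick the best-scoring cluster; leave the entity undecided
    # when even the best score does not exceed the threshold.
    best = max(clusters, key=lambda c: score(ne_community, clusters[c], cscore))
    return best if score(ne_community, clusters[best], cscore) > threshold else None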
The experiments verify that the double-check frequencies are reliable. Further study on named entity clustering shows that the five measures – say, VariantDice, VariantOverlap, ariantJaccard, VariantCosine and CODC, are quite useful. In particular, CODC is very stable on word-word and namename experiments. Finally, WSDC model is used to expand community chains for a specific personal name, and CODC measures the association of community member and the personal name. The application on personal name disambiguation shows that 9.65% and 14.22% increase compared to the system without community expansion. Acknowledgements Research of this paper was partially supported by National Science Council, Taiwan, under the contracts 94-2752-E-001-001-PAE and 95-2752E-001-001-PAE. References A. Bagga and B. Baldwin. 1998. Entity-Based CrossDocument Coreferencing Using the Vector Space Model. Proceedings of 36th COLING-ACL Conference, 79-85. F. Keller and M. Lapata. 2003. Using the Web to Obtain Frequencies for Unseen Bigrams. Computational Linguistics, 29(3): 459-484. Y. Li, Z.A. Bandar and D. McLean. 2003. An Approach for Measuring Semantic Similarity between Words Using Multiple Information Sources. IEEE Transactions on Knowledge and Data Engineering, 15(4): 871-882. D. Lin. 1998. An Information-Theoretic Definition of Similarity. Proceedings of the Fifteenth International Conference on Machine Learning, 296-304. H.C. Lin and H.H. Chen. 2004. Comparing Corpusbased Statistics and Web-based Statistics: Chinese Segmentation as an Example. Proceedings of 16th ROCLING Conference, 89-100. M.S. Lin, C.P. Chen and H.H. Chen. 2005. An Approach of Using the Web as a Live Corpus for Spoken Transliteration Name Access. Proceedings of 17th ROCLING Conference, 361-370. M.S. Lin and H.H. Chen. 2006. Constructing a Named Entity Ontology from Web Corpora. Proceedings of 5th International Conference on Language Resources and Evaluation. Y. Matsuo, H. Tomobe, K. Hasida, and M. Ishizuka. 2004. Finding Social Network for Trust Calculation. Proceedings of 16th European Conference on Artificial Intelligence, 510-514. P. Resnik. 1995. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. Proceedings of the 14th International Joint Conference on Artificial Intelligence, 448-453. P. Resnik and N.A. Smith. 2003. The Web as a Parallel Corpus. Computational Linguistics, 29(3): 349380. M.A. Rodríguez and M.J. Egenhofer. 2003. Determining Semantic Similarity among Entity Classes from Different Ontologies. IEEE Transactions on Knowledge and Data Engineering, 15(2): 442-456. H. Rubenstein and J.B. Goodenough. 1965. Contextual Correlates of Synonymy. Communications of the ACM, 8(10): 627-633. Y.C. Wei. 2006. A Study of Personal Name Disambiguation. Master Thesis, Department of Computer Science and Information Engineering, National Taiwan University, Taiwan. 1016 | 2006 | 127 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1017–1024, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Semantic Retrieval for the Accurate Identification of Relational Concepts in Massive Textbases Yusuke Miyao∗ Tomoko Ohta∗ Katsuya Masuda∗ Yoshimasa Tsuruoka† Kazuhiro Yoshida∗ Takashi Ninomiya‡ Jun’ichi Tsujii∗† ∗Department of Computer Science, University of Tokyo †School of Informatics, University of Manchester ‡Information Technology Center, University of Tokyo Hongo 7-3-1, Bunkyo-ku, Tokyo 113-0033 JAPAN {yusuke,okap,kmasuda,tsuruoka,kyoshida,ninomi,tsujii}@is.s.u-tokyo.ac.jp Abstract This paper introduces a novel framework for the accurate retrieval of relational concepts from huge texts. Prior to retrieval, all sentences are annotated with predicate argument structures and ontological identifiers by applying a deep parser and a term recognizer. During the run time, user requests are converted into queries of region algebra on these annotations. Structural matching with pre-computed semantic annotations establishes the accurate and efficient retrieval of relational concepts. This framework was applied to a text retrieval system for MEDLINE. Experiments on the retrieval of biomedical correlations revealed that the cost is sufficiently small for real-time applications and that the retrieval precision is significantly improved. 1 Introduction Rapid expansion of text information has motivated the development of efficient methods of accessing information in huge texts. Furthermore, user demand has shifted toward the retrieval of more precise and complex information, including relational concepts. For example, biomedical researchers deal with a massive quantity of publications; MEDLINE contains approximately 15 million references to journal articles in life sciences, and its size is rapidly increasing, at a rate of more than 10% yearly (National Library of Medicine, 2005). Researchers would like to be able to search this huge textbase for biomedical correlations such as protein-protein or gene-disease associations (Blaschke and Valencia, 2002; Hao et al., 2005; Chun et al., 2006). However, the framework of traditional information retrieval (IR) has difficulty with the accurate retrieval of such relational concepts because relational concepts are essentially determined by semantic relations between words, and keyword-based IR techniques are insufficient to describe such relations precisely. The present paper demonstrates a framework for the accurate real-time retrieval of relational concepts from huge texts. Prior to retrieval, we prepare a semantically annotated textbase by applying NLP tools including deep parsers and term recognizers. That is, all sentences are annotated in advance with semantic structures and are stored in a structured database. User requests are converted on the fly into patterns of these semantic annotations, and texts are retrieved by matching these patterns with the pre-computed semantic annotations. The accurate retrieval of relational concepts is attained because we can precisely describe relational concepts using semantic annotations. In addition, real-time retrieval is possible because semantic annotations are computed in advance. This framework has been implemented for a text retrieval system for MEDLINE. 
We first apply a deep parser (Miyao and Tsujii, 2005) and a dictionary-based term recognizer (Tsuruoka and Tsujii, 2004) to MEDLINE and obtain annotations of predicate argument structures and ontological identifiers of genes, gene products, diseases, and events. We then provide a search engine for these annotated sentences. User requests are converted into queries of region algebra (Clarke et al., 1995) extended with variables (Masuda et al., 2006) on these annotations. A search engine for the extended region algebra efficiently finds sentences having semantic annotations that match the input queries. In this paper, we evaluate this system with respect to the retrieval of biomedical correlations 1017 Symbol CRP Name C-reactive protein, pentraxin-related Species Homo sapiens Synonym MGC88244, PTX1 Product C-reactive protein precursor, C-reactive protein, pentraxin-related protein External links EntrezGene:1401, GDB:119071, ... Table 1: An example GENA entry and examine the effects of using predicate argument structures and ontological identifiers. The need for the discovery of relational concepts has been investigated intensively in Information Extraction (IE). However, little research has targeted on-demand retrieval from huge texts. One difficulty is that IE techniques such as pattern matching and machine learning require heavier processing in order to be applied on the fly. Another difficulty is that target information must be formalized beforehand and each system is designed for a specific task. For instance, an IE system for protein-protein interactions is not useful for finding gene-disease associations. Apart from IE research, enrichment of texts with various annotations has been proposed and is becoming a new research area for information management (IBM, 2005; TEI, 2004). The present study basically examines this new direction in research. The significant contribution of the present paper, however, is to provide the first empirical results of this framework for a real task with a huge textbase. 2 Background: Resources and Tools for Semantic Annotations The proposed system for the retrieval of relational concepts is a product of recent developments in NLP resources and tools. In this section, ontology databases, deep parsers, and search algorithms for structured data are introduced. 2.1 Ontology databases Ontology databases are collections of words and phrases in specific domains. Such databases have been constructed extensively for the systematic management of domain knowledge by organizing textual expressions of ontological entities that are detached from actual sentences. For example, GENA (Koike and Takagi, 2004) is a database of genes and gene products that is semi-automatically collected from well-known databases, including HUGO, OMIM, Genatlas, Locuslink, GDB, MGI, FlyBase, WormBase, Figure 1: An output of HPSG parsing Figure 2: A predicate argument structure CYGD, and SGD. Table 1 shows an example of a GENA entry. “Symbol” and “Name” denote short forms and nomenclatures of genes, respectively. “Species” represents the organism species in which this gene is observed. “Synonym” is a list of synonyms and name variations. “Product” gives a list of products of this gene, such as proteins coded by this gene. “External links” provides links to other databases, and helps to obtain detailed information from these databases. 
For biomedical terms other than genes/gene products, the Unified Medical Language System (UMLS) meta-thesaurus (Lindberg et al., 1993) is a large database that contains various names of biomedical and health-related concepts. Ontology databases provide mappings between textual expressions and entities in the real world. For example, Table 1 indicates that CRP, MGC88244, and PTX1 denote the same gene conceptually. Hence, these resources enable us to canonicalize variations of textual expressions of ontological entities. 2.2 Parsing technologies Recently, state-of-the-art CFG parsers (Charniak and Johnson, 2005) can compute phrase structures of natural sentences at fairly high accuracy. These parsers have been used in various NLP tasks including IE and text mining. In addition, parsers that compute deeper analyses, such as predicate argument structures, have become available for 1018 the processing of real-world sentences (Miyao and Tsujii, 2005). Predicate argument structures are canonicalized representations of sentence meanings, and express the semantic relations of words explicitly. Figure 1 shows an output of an HPSG parser (Miyao and Tsujii, 2005) for the sentence “A normal serum CRP measurement does not exclude deep vein thrombosis.” The dotted lines express predicate argument relations. For example, the ARG1 arrow coming from “exclude” points to the noun phrase “A normal serum CRP measurement”, which indicates that the subject of “exclude” is this noun phrase, while such relations are not explicitly represented by phrase structures. Predicate argument structures are beneficial for our purpose because they can represent relational concepts in an abstract manner. For example, the relational concept of “CRP excludes thrombosis” can be represented as a predicate argument structure, as shown in Figure 2. This structure is universal in various syntactic expressions, such as passivization (e.g., “thrombosis is excluded by CRP”) and relativization (e.g., “thrombosis that CRP excludes”). Hence, we can abstract surface variations of sentences and describe relational concepts in a canonicalized form. 2.3 Structural search algorithms Search algorithms for structured texts have been studied extensively, and examples include XML databases with XPath (Clark and DeRose, 1999) and XQuery (Boag et al., 2005), and region algebra (Clarke et al., 1995). The present study focuses on region algebra extended with variables (Masuda et al., 2006) because it provides an efficient search algorithm for tags with cross boundaries. When we annotate texts with various levels of syntactic/semantic structures, cross boundaries are inherently nonnegligible. In fact, as described in Section 3, our system exploits annotations of predicate argument structures and ontological entities, which include substantial cross boundaries. Region algebra is defined as a set of operators on regions, i.e., word sequences. Table 2 shows operators of the extended region algebra, where A and B denote regions, and results of operations are also regions. For example, “A & B” denotes a region that includes both A and B. Four containment operators, >, >>, <, and <<, represent ancestor/descendant relations in XML. For example, “A > B” indicates that A is an ancestor of B. 
In [tag] Region covered with “<tag>” A > B A containing B A >> B A containing B (A is not nested) A < B A contained by B A << B A contained by B (B is not nested) A - B Starting with A and ending with B A & B A and B A | B A or B Table 2: Operators of the extended region algebra [sentence] >> (([word arg1="$subject"] > exclude) & ([phrase id="$subject"] > CRP)) Figure 3: A query of the extended region algebra Figure 4: Matching with the query in Figure 3 search algorithms for region algebra, the cost of retrieving the first answer is constant, and that of an exhaustive search is bounded by the lowest frequency of a word in a query (Clarke et al., 1995). Variables in the extended region algebra allow us to express shared structures and are necessary in order to describe predicate argument structures. For example, Figure 3 shows a formula in the extended region algebra that represents the predicate argument structure of “CRP excludes something.” This formula indicates that a sentence contains a region in which the word “exclude” exists, the first argument (“arg1”) phrase of which includes the word “CRP.” A predicate argument relation is expressed by the variable, “$subject.” Figure 4 shows a situation in which this formula is satisfied. Three horizontal bars describe regions covered by <sentence>, <phrase>, and <word> tags, respectively. The dotted line denotes the relation expressed by this variable. Given this formula as a query, a search engine can retrieve sentences having semantic annotations that satisfy this formula. 3 A Text Retrieval System for MEDLINE While the above resources and tools have been developed independently, their collaboration opens up a new framework for the retrieval of relational concepts, as described below (Figure 5). Off-line processing: Prior to retrieval, a deep parser is applied to compute predicate argument 1019 Figure 5: Framework of semantic retrieval structures, and a term recognizer is applied to create mappings from textual expressions into identifiers in ontology databases. Semantic annotations are stored and indexed in a structured database for the extended region algebra. On-line processing: User input is converted into queries of the extended region algebra. A search engine retrieves sentences having semantic annotations that match the queries. This framework is applied to a text retrieval engine for MEDLINE. MEDLINE is an exhaustive database covering nearly 4,500 journals in the life sciences and includes the bibliographies of articles, about half of which have abstracts. Research on IE and text mining in biomedical science has focused mainly on MEDLINE. In the present paper, we target all articles indexed in MEDLINE at the end of 2004 (14,785,094 articles). The following sections explain in detail off-/on-line processing for the text retrieval system for MEDLINE. 3.1 Off-line processing: HPSG parsing and term recognition We first parsed all sentences using an HPSG parser (Miyao and Tsujii, 2005) to obtain their predicate argument structures. Because our target is biomedical texts, we re-trained a parser (Hara et al., 2005) with the GENIA treebank (Tateisi et al., 2005), and also applied a bidirectional part-ofspeech tagger (Tsuruoka and Tsujii, 2005) trained with the GENIA treebank as a preprocessor. Because parsing speed is still unrealistic for parsing the entire MEDLINE on a single machine, we used two geographically separated computer clusters having 170 nodes (340 Xeon CPUs). 
These clusters are separately administered and not dedicated for use in the present study. In order to effectively use such an environment, GXP (Taura, 2004) was used to connect these clusters and distribute the load among them. Our processes were given the lowest priority so that our task would not disturb other users. We finished parsing the entire MEDLINE in nine days (Ninomiya et al., 2006). # entries (genes) 517,773 # entries (gene products) 171,711 # entries (diseases) 148,602 # expanded entries 4,467,855 Table 3: Sizes of ontologies used for term recognition Event type Expressions influence effect, affect, role, response, ... regulation mediate, regulate, regulation, ... activation induce, activate, activation, ... Table 4: Event expression ontology Next, we annotated technical terms, such as genes and diseases, to create mappings to ontological identifiers. A dictionary-based term recognition algorithm (Tsuruoka and Tsujii, 2004) was applied for this task. First, an expanded term list was created by generating name variations of terms in GENA and the UMLS meta-thesaurus1. Table 3 shows the size of the original database and the number of entries expanded by name variations. Terms in MEDLINE were then identified by the longest matching of entries in this expanded list with words/phrases in MEDLINE. The necessity of ontologies is not limited to nominal expressions. Various verbs are used for expressing events. For example, activation events of proteins can be expressed by “activate,” “enhance,” and other event expressions. Although the numbers of verbs and their event types are much smaller than those of technical terms, verbal expressions are important for the description of relational concepts. Since ontologies of event expressions in this domain have not yet been constructed, we developed an ontology from scratch. We investigated 500 abstracts extracted from MEDLINE, and classified 167 frequent expressions, including verbs and their nominalized forms, into 18 event types. Table 4 shows a part of this ontology. These expressions in MEDLINE were automatically annotated with event types. As a result, we obtained semantically annotated MEDLINE. Table 5 shows the size of the original MEDLINE and semantic annotations. Figure 6 shows semantic annotations for the sentence in Figure 1, where “-” indicates nodes of XML,2 1We collected disease names by specifying a query with the semantic type as “Disease or Syndrome.” 2Although this example is shown in XML, this textbase contains tags with cross boundaries because tags for predicate argument structures and technical terms may overlap. 
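A dictionary-based, longest-match term recognizer of the kind used above can be sketched roughly as follows. The tiny dictionary and the greedy left-to-right strategy are illustrative assumptions, not the actual implementation of Tsuruoka and Tsujii (2004); the gene identifier follows the annotation in Figure 6, while the disease identifier is hypothetical.

# Greedy longest-match lookup of dictionary terms over a tokenized sentence.
# The real expanded list holds millions of name variations from GENA/UMLS.
term_dict = {
    ("crp",): "gene:GHS003134",
    ("c-reactive", "protein"): "gene:GHS003134",
    ("deep", "vein", "thrombosis"): "disease:D0001",   # hypothetical ID
}
max_len = max(len(k) for k in term_dict)

def recognize_terms(tokens):
    """Return (start, end, identifier) spans, preferring the longest match."""
    spans, i = [], 0
    lowered = [t.lower() for t in tokens]
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            key = tuple(lowered[i:i + n])
            if key in term_dict:
                spans.append((i, i + n, term_dict[key]))
                i += n
                break
        else:
            i += 1
    return spans

sent = "A normal serum CRP measurement does not exclude deep vein thrombosis".split()
print(recognize_terms(sent))
# [(3, 4, 'gene:GHS003134'), (8, 11, 'disease:D0001')]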
1020 # papers 14,785,094 # abstracts 7,291,857 # sentences 70,935,630 # words 1,462,626,934 # successfully parsed sentences 69,243,788 # predicate argument relations 1,510,233,701 # phrase tags 3,094,105,383 # terms (genes) 84,998,621 # terms (gene products) 27,471,488 # terms (diseases) 19,150,984 # terms (event expressions) 51,810,047 Size of the original MEDLINE 9.3 GByte Size of the semantic annotations 292 GByte Size of the index file for region algebra 954 GByte Table 5: Sizes of the original and semantically annotated MEDLINE textbases - <sentence sentence_id="e6e525"> - <phrase id="0" cat="S" head="15" lex_head="18"> - <phrase id="1" cat="NP" head="4" lex_head="14"> - <phrase id="2" cat="DT" head="3" lex_head="3"> - <word id="3" pos="DT" cat="DT" base="a" arg1="4"> - A - <phrase id="4" cat="NP" head="7" lex_head="14"> - <phrase id="5" cat="AJ" head="6" lex_head="6"> - <word id="6" pos="JJ" cat="AJ" base="normal" arg1="7"> - normal - <phrase id="7" cat="NP" head="10" lex_head="14"> - <phrase id="8" cat="NP" head="9" lex_head="9"> - <word id="9" pos="NN" cat="NP" base="serum" mod="10"> - serum - <phrase id="10" cat="NP" head="13" lex_head="14"> - <phrase id="11" cat="NP" head="12" lex_head="12"> - <entity_name id="entity-1" type="gene" gene_id="GHS003134" gene_symbol="CRP" gene_name="C-reactive protein, pentraxin-related" species="Homo sapiens" db_site="EntrezGene:1401|GDB:119071|GenAtlas:CRP"> - <word id="12" pos="NN" cat="NP" base="crp" mod="13"> - CRP - <phrase id="13" cat="NP" head="14" lex_head="14"> - <word id="14" pos="NN" cat="NP" base="measurement"> - measurement - <phrase id="15" cat="VP" head="16" lex_head="18"> - <phrase id="16" cat="VP" head="17" lex_head="18"> - <phrase id="17" cat="VP" head="18" lex_head="18"> - <word id="18" pos="VBZ" cat="VP" base="do" arg1="1" arg2="21"> - does - <phrase id="19" cat="AV" head="20" lex_head="20"> - <word id="20" pos="RB" cat="AV" base="not" arg1="21"> - not - <phrase id="21" cat="VP" head="22" lex_head="23"> - <phrase id="22" cat="VP" head="23" lex_head="23"> - <word id="23" pos="VB" cat="VP" base="exclude" arg1="1" arg2="24"> - exclude ... Figure 6: A semantically annotated sentence although the latter half of the sentence is omitted because of space limitations. Sentences are annotated with four tags,3 “phrase,” “word,” “sentence,” and “entity name,” and their attributes as given in Table 6. Predicate argument structures are annotated as attributes, “mod” and “argX,” which point to the IDs of the argument phrases. For example, in Figure 6, the <word> tag for “exclude” has the attributes arg1="1" and arg2="24", which denote the IDs of the subject and object phrases, respectively. 3Additional tags exist for representing document structures such as “title” (details omitted). 
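To show how the arg1/arg2 attributes encode predicate argument structure, the following sketch walks a simplified version of the annotations in Figure 6 and recovers a (subject, predicate, object) triple. The flat dictionary representation and the content of the omitted object phrase are assumptions for illustration, not the system's storage format.

# Simplified view of Figure 6: each word carries its base form and, where
# present, arg1/arg2 attributes that point at phrase IDs.
phrases = {
    "1": "A normal serum CRP measurement",
    "24": "deep vein thrombosis",           # object phrase (assumed; omitted above)
}
words = [
    {"base": "do",      "arg1": "1",  "arg2": "21"},
    {"base": "not",     "arg1": "21"},
    {"base": "exclude", "arg1": "1",  "arg2": "24"},
]

def extract_triples(words, phrases):
    """Collect (subject phrase, predicate, object phrase) triples."""
    triples = []
    for w in words:
        subj, obj = w.get("arg1"), w.get("arg2")
        if subj in phrases and obj in phrases:
            triples.append((phrases[subj], w["base"], phrases[obj]))
    return triples

print(extract_triples(words, phrases))
# [('A normal serum CRP measurement', 'exclude', 'deep vein thrombosis')]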
Tag Attributes phrase id, cat, head, lex head word id, cat, pos, base, mod, argX, rel type sentence sentence id entity name id, type, gene id/disease id, gene symbol, gene name, species, db site Attribute Description id unique identifier cat syntactic category head head daughter’s ID lex head lexical head’s ID pos part-of-speech base base form of the word mod ID of modifying phrase argX ID of the X-th argument of the word rel type event type sentence id sentence’s ID type whether gene, gene prod, or disease gene id ID in GENA disease id ID in the UMLS meta-thesaurus gene symbol short form of the gene gene name nomenclature of the gene species species that have this gene db site links to external databases Table 6: Tags (upper) and attributes (lower) for semantic annotations 3.2 On-line processing The off-line processing described above results in much simpler on-line processing. User input is converted into queries of the extended region algebra, and the converted queries are entered into a search engine for the extended region algebra. The implementation of a search engine is described in detail in Masuda et al. (2006). Basically, given subject x, object y, and verb v, the system creates the following query: [sentence] >> ([word arg1="$subject" arg2="$object" base="v"] & ([phrase id="$subject"] > x) & ([phrase id="$object"] > y)) Ontological identifiers are substituted for x, y, and v, if possible. Nominal keywords, i.e., x and y, are replaced by [entity_name gene_id="n"] or [entity_name disease_id="n"], where n is the ontological identifier of x or y. For verbal keywords, base="v" is replaced by rel_type="r", where r is the event type of v. 4 Evaluation Our system is evaluated with respect to speed and accuracy. Speed is indispensable for real-time interactive text retrieval systems, and accuracy is key for the motivation of semantic retrieval. That is, our motivation for employing semantic retrieval 1021 Query No. User input 1 something inhibit ERK2 2 something trigger diabetes 3 adiponectin increase something 4 TNF activate IL6 5 dystrophin cause disease 6 macrophage induce something 7 something suppress MAP phosphorylation 8 something enhance p53 (negative) Table 7: Queries for experiments [sentence] >> ([word rel_type="activation"] & [entity_name type="gene" gene_id="GHS019685"] & [entity_name type="gene" gene_id="GHS009426"]) [sentence] >> ([word arg1="$subject" arg2="$object" rel_type="activation"] & ([phrase id="$subject"] > [entity_name type="gene" gene_id="GHS019685"]) & ([phrase cat="np" id="$object"] > [entity_name type="gene" gene_id="GHS009426"])) Figure 7: Queries of the extended region algebra for Query 4-3 (upper: keyword search, lower: semantic search) was to provide a device for the accurate identification of relational concepts. In particular, high precision is desired in text retrieval from huge texts because users want to extract relevant information, rather than collect exhaustive information. We have two parameters to vary: whether to use predicate argument structures and whether to use ontological identifiers. The effect of using predicate argument structures is evaluated by comparing “keyword search” with “semantic search.” The former is a traditional style of IR, in which sentences are retrieved by matching words in a query with words in sentences. The latter is a new feature of the present system, in which sentences are retrieved by matching predicate argument relations in a query with those in a semantically annotated textbase. 
The effect of using ontological identifiers is assessed by changing queries of the extended region algebra. When we use the term ontology, nominal keywords in queries are replaced with ontological identifiers in GENA and the UMLS meta-thesaurus. When we use the event expression ontology, verbal keywords in queries are replaced with event types. Table 7 is a list of queries used in the following experiments. Words in italics indicate a class of words: “something” indicates that any word can appear, and disease indicates that any disease expression can appear. These queries were selected by a biologist, and express typical relational concepts that a biologist may wish to find. Queries 1, 3, and 4 represent relations of genes/proteins, where ERK2, adiponectin, TNF, and IL6 are genes/proteins. Queries 2 and 5 describe relations concerning diseases, and Query 6 is a query that is not relevant to genes or diseases. Query 7 expresses a complex relation concerning a specific phenomena, i.e., phosphorylation, of MAP. Query 8 describes a relation concerning a gene, i.e., p53, while “(negative)” indicates that the target of retrieval is negative mentions. This is expressed by “not” modifying a predicate. For example, Query 4 attempts to retrieve sentences that mention the protein-protein interaction “TNF activates IL6.” This is converted into queries of the extended region algebra given in Figure 7. The upper query is for keyword search and only specifies the appearances of the three words. Note that the keywords are translated into the ontological identifiers, “activation,” “GHS019685,” and “GHS009426.” The lower query is for semantic search. The variables in “arg1” and “arg2” indicate that “GHS019685” and “GHS009426” are the subject and object, respectively, of “activation”. Table 8 summarizes the results of the experiments. The postfixes of query numbers denote whether ontological identifiers are used. X-1 used no ontologies, and X-2 used only the term ontology. X-3 used both the term and event expression ontologies4. Comparison of X-1 and X-2 clarifies the effect of using the term ontology. Comparison of X-2 and X-3 shows the effect of the event expression ontology. The results for X-3 indicate the maximum performance of the current system. This table shows that the time required for the semantic search for the first answer, shown as “time (first)” in seconds, was reasonably short. Thus, the present framework is acceptable for real-time text retrieval. The numbers of answers increased when we used the ontologies, and this result indicates the efficacy of both ontologies for obtaining relational concepts written in various expressions. Accuracy was measured by judgment by a biologist. At most 100 sentences were retrieved for each query, and the results of keyword search and semantic search were merged and shuffled. A biologist judged the shuffled sentences (1,839 sentences in total) without knowing whether the sen4Query 5-1 is not tested because “disease” requires the term ontology, and Query 6-2 is not tested because “macrophage” is not assigned an ontological identifier. 1022 Query Keyword search Semantic search No. # ans. time (first/all) precision n-precision # ans. 
time (first/all) precision relative recall 1-1 252 0.00/ 1.5 74/100 (74%) 74/100 (74%) 143 0.01/ 2.5 96/100 (96%) 51/74 (69%) 1-2 348 0.00/ 1.9 61/100 (61%) 61/100 (61%) 174 0.01/ 3.1 89/100 (89%) 42/61 (69%) 1-3 884 0.00/ 3.2 50/100 (50%) 50/100 (50%) 292 0.01/ 5.3 91/100 (91%) 21/50 (42%) 2-1 125 0.00/ 1.8 45/100 (45%) 9/ 27 (33%) 27 0.02/ 2.9 23/ 27 (85%) 17/45 (38%) 2-2 113 0.00/ 2.9 40/100 (40%) 10/ 26 (38%) 26 0.06/ 4.0 22/ 26 (85%) 19/40 (48%) 2-3 6529 0.00/ 12.1 42/100 (42%) 42/100 (42%) 662 0.01/1527.4 76/100 (76%) 8/42 (19%) 3-1 287 0.00/ 1.5 20/100 (20%) 4/ 30 (13%) 30 0.05/ 2.4 23/ 30 (80%) 6/20 (30%) 3-2 309 0.01/ 2.1 21/100 (21%) 4/ 32 (13%) 32 0.10/ 3.5 26/ 32 (81%) 6/21 (29%) 3-3 338 0.01/ 2.2 24/100 (24%) 8/ 39 (21%) 39 0.05/ 3.6 32/ 39 (82%) 8/24 (33%) 4-1 4 0.26/ 1.5 0/ 4 (0%) 0/ 0 (—) 0 2.44/ 2.4 0/ 0 (—) 0/ 0 (—) 4-2 195 0.01/ 2.5 9/100 (9%) 1/ 6 (17%) 6 0.09/ 4.1 5/ 6 (83%) 2/ 9 (22%) 4-3 2063 0.00/ 7.5 5/100 (5%) 5/ 94 (5%) 94 0.02/ 10.5 89/ 94 (95%) 2/ 5 (40%) 5-2 287 0.08/ 6.3 73/100 (73%) 73/100 (73%) 116 0.05/ 14.7 97/100 (97%) 37/73 (51%) 5-3 602 0.01/ 15.9 50/100 (50%) 50/100 (50%) 122 0.05/ 14.2 96/100 (96%) 23/50 (46%) 6-1 10698 0.00/ 42.8 14/100 (14%) 14/100 (14%) 1559 0.01/3014.5 65/100 (65%) 10/14 (71%) 6-3 42106 0.00/3379.5 11/100 (11%) 11/100 (11%) 2776 0.01/5100.1 61/100 (61%) 5/11 (45%) 7 87 0.04/ 2.7 34/ 87 (39%) 7/ 15 (47%) 15 0.05/ 4.2 10/ 15 (67%) 10/34 (29%) 8 1812 0.01/ 7.6 19/100 (19%) 17/ 84 (20%) 84 0.20/ 29.2 73/ 84 (87%) 7/19 (37%) Table 8: Number of retrieved sentences, retrieval time, and accuracy tence was retrieved by keyword search or semantic search. Without considering which words actually matched the query, a sentence is judged to be correct when any part of the sentence expresses all of the relations described by the query. The modality of sentences was not distinguished, except in the case of Query 8. These evaluation criteria may be disadvantageous for the semantic search because its ability to exactly recognize the participants of relational concepts is not evaluated. Table 8 shows the precision attained by keyword/semantic search and n-precision, which denotes the precision of the keyword search, in which the same number, n, of outputs is taken as the semantic search. The table also gives the relative recall of the semantic search, which represents the ratio of sentences that are correctly output by the semantic search among those correctly output by the keyword search. This does not necessarily represent the true recall because sentences not output by keyword search are excluded. However, this is sufficient for the comparison of keyword search and semantic search. The results show that the semantic search exhibited impressive improvements in precision. The precision was over 80% for most queries and was nearly 100% for Queries 4 and 5. This indicates that predicate argument structures are effective for representing relational concepts precisely, especially for relations in which two entities are involved. Relative recall was approximately 30– 50%, except for Query 2. In the following, we will investigate the reasons for the residual errors. 
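For clarity, the two measures specific to Table 8 can be written down directly, as in the sketch below; the judged-output data structures are invented for illustration.

# n-precision: precision of the keyword search over its first n outputs,
# where n is the number of sentences returned by the semantic search.
# Relative recall: fraction of the keyword search's correct sentences that
# the semantic search also returned correctly.
def n_precision(keyword_judged, n):
    """keyword_judged: ranked list of (sentence_id, is_correct) pairs."""
    top = keyword_judged[:n]
    return sum(ok for _, ok in top) / len(top) if top else 0.0

def relative_recall(keyword_judged, semantic_judged):
    kw_correct = {sid for sid, ok in keyword_judged if ok}
    sem_correct = {sid for sid, ok in semantic_judged if ok}
    if not kw_correct:
        return 0.0
    return len(kw_correct & sem_correct) / len(kw_correct)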
Table 9 shows the classifications of the errors of Disregarding of noun phrase structures 45 Term recognition errors 33 Parsing errors 11 Other reasons 8 Incorrect human judgment 7 Nominal expressions 41 Phrasal verb expressions 26 Inference required 24 Coreference resolution required 19 Parsing errors 16 Other reasons 15 Incorrect human judgment 10 Table 9: Error analysis (upper: 104 false positives, lower: 151 false negatives) semantic retrieval. The major reason for false positives was that our queries ignore internal structures of noun phrases. The system therefore retrieved noun phrases that do not directly mention target entities. For example, “the increased mortality in patients with diabetes was caused by . . .” does not indicate the trigger of diabetes. Another reason was term recognition errors. For example, the system falsely retrieved sentences containing “p40,” which is sometimes, but not necessarily used as a synonym for “ERK2.” Machine learning-based term disambiguation will alleviate these errors. False negatives were caused mainly by nominal expressions such as “the inhibition of ERK2.” This is because the present system does not convert user input into queries on nominal expressions. Another major reason, phrasal verb expressions such as “lead to,” is also a shortage of our current strategy of query creation. Because semantic annotations already in1023 clude linguistic structures of these expressions, the present system can be improved further by creating queries on such expressions. 5 Conclusion We demonstrated a text retrieval system for MEDLINE that exploits pre-computed semantic annotations5. Experimental results revealed that the proposed system is sufficiently efficient for realtime text retrieval and that the precision of retrieval was remarkably high. Analysis of residual errors showed that the handling of noun phrase structures and the improvement of term recognition will increase retrieval accuracy. Although the present paper focused on MEDLINE, the NLP tools used in this system are domain/task independent. This framework will thus be applicable to other domains such as patent documents. The present framework does not conflict with conventional IR/IE techniques, and integration with these techniques is expected to improve the accuracy and usability of the proposed system. For example, query expansion and relevancy feedback can be integrated in a straightforward way in order to improve accuracy. Document ranking is useful for the readability of retrieved results. IE systems can be applied off-line, in the manner of the deep parser in our system, for annotating sentences with target information of IE. Such annotations will enable us to retrieve higher-level concepts, such as relationships among relational concepts. Acknowledgment This work was partially supported by Grant-in-Aid for Scientific Research on Priority Areas “Systems Genomics” (MEXT, Japan), Genome Network Project (NIG, Japan), and Solution-Oriented Research for Science and Technology (JST, Japan). References C. Blaschke and A. Valencia. 2002. The frame-based module of the SUISEKI information extraction system. IEEE Intelligent Systems, 17(2):14–20. S. Boag, D. Chamberlin, M. F. Fern´andez, D. Florescu, J. Robie, and J. Sim´eon. 2005. XQuery 1.0: An XML query language. E. Charniak and M. Johnson. 2005. Coarse-to-fine nbest parsing and MaxEnt discriminative reranking. In Proc. ACL 2005. 5A web-based demo of our system is available on-line at: http://www-tsujii.is.s.u-tokyo.ac.jp/medie/ H.-W. Chun, Y. 
Tsuruoka, J.-D. Kim, R. Shiba, N. Nagata, T. Hishiki, and J. Tsujii. 2006. Extraction of gene-disease relations from MedLine using domain dictionaries and machine learning. In Proc. PSB 2006, pages 4–15. J. Clark and S. DeRose. 1999. XML Path Language (XPath) version 1.0. C. L. A. Clarke, G. V. Cormack, and F. J. Burkowski. 1995. An algebra for structured text search and a framework for its implementation. The Computer Journal, 38(1):43–56. Y. Hao, X. Zhu, M. Huang, and M. Li. 2005. Discovering patterns to extract protein-protein interactions from the literature: Part II. Bioinformatics, 21(15):3294–3300. T. Hara, Y. Miyao, and J. Tsujii. 2005. Adapting a probabilistic disambiguation model of an HPSG parser to a new domain. In Proc. IJCNLP 2005. IBM, 2005. Unstructed Information Management Architecture (UIMA) SDK User’s Guide and Reference. A. Koike and T. Takagi. 2004. Gene/protein/family name recognition in biomedical literature. In Proc. Biolink 2004, pages 9–16. D. A. Lindberg, B. L. Humphreys, and A. T. McCray. 1993. The Unified Medical Language System. Methods in Inf. Med., 32(4):281–291. K. Masuda, T. Ninomiya, Y. Miyao, T. Ohta, and J. Tsujii. 2006. Nested region algebra extended with variables. In Preparation. Y. Miyao and J. Tsujii. 2005. Probabilistic disambiguation models for wide-coverage HPSG parsing. In Proc. 43rd ACL, pages 83–90. National Library of Medicine. 2005. Fact Sheet MEDLINE. Available at http://www.nlm.nih. gov/pubs/factsheets/medline.html. T. Ninomiya, Y. Tsuruoka, Y. Miyao, K. Taura, and J. Tsujii. 2006. Fast and scalable HPSG parsing. Traitement automatique des langues (TAL), 46(2). Y. Tateisi, A. Yakushiji, T. Ohta, and J. Tsujii. 2005. Syntax annotation for the GENIA corpus. In Proc. IJCNLP 2005, Companion volume, pages 222–227. K. Taura. 2004. GXP : An interactive shell for the grid environment. In Proc. IWIA2004, pages 59–67. TEI Consortium, 2004. Text Encoding Initiative. Y. Tsuruoka and J. Tsujii. 2004. Improving the performance of dictionary-based approaches in protein name recognition. Journal of Biomedical Informatics, 37(6):461–470. Y. Tsuruoka and J. Tsujii. 2005. Bidirectional inference with the easiest-first strategy for tagging sequence data. In Proc. HLT/EMNLP 2005, pages 467–474. 1024 | 2006 | 128 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1025–1032, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Exploring Distributional Similarity Based Models for Query Spelling Correction Mu Li Microsoft Research Asia 5F Sigma Center Zhichun Road, Haidian District Beijing, China, 100080 [email protected] Muhua Zhu School of Information Science and Engineering Northeastern University Shenyang, Liaoning, China, 110004 [email protected] Yang Zhang School of Computer Science and Technology Tianjin University Tianjin, China, 300072 [email protected] Ming Zhou Microsoft Research Asia 5F Sigma Center Zhichun Road, Haidian District Beijing, China, 100080 [email protected] Abstract A query speller is crucial to search engine in improving web search relevance. This paper describes novel methods for use of distributional similarity estimated from query logs in learning improved query spelling correction models. The key to our methods is the property of distributional similarity between two terms: it is high between a frequently occurring misspelling and its correction, and low between two irrelevant terms only with similar spellings. We present two models that are able to take advantage of this property. Experimental results demonstrate that the distributional similarity based models can significantly outperform their baseline systems in the web query spelling correction task. 1 Introduction Investigations into query log data reveal that more than 10% of queries sent to search engines contain misspelled terms (Cucerzan and Brill, 2004). Such statistics indicate that a good query speller is crucial to search engine in improving web search relevance, because there is little opportunity that a search engine can retrieve many relevant contents with misspelled terms. The problem of designing a spelling correction program for web search queries, however, poses special technical challenges and cannot be well solved by general purpose spelling correction methods. Cucerzan and Brill (2004) discussed in detail specialties and difficulties of a query spell checker, and illustrated why the existing methods could not work for query spelling correction. They also identified that no single evidence, either a conventional spelling lexicon or term frequency in the query logs, can serve as criteria for validate queries. To address these challenges, we concentrate on the problem of learning improved query spelling correction model by integrating distributional similarity information automatically derived from query logs. The key contribution of our work is identifying that we can successfully use the evidence of distributional similarity to achieve better spelling correction accuracy. We present two methods that are able to take advantage of distributional similarity information. The first method extends a string edit-based error model with confusion probabilities within a generative source channel model. The second method explores the effectiveness of our approach within a discriminative maximum entropy model framework by integrating distributional similarity-based features. Experimental results demonstrate that both methods can significantly outperform their baseline systems in the spelling correction task for web search queries. 
1025 The rest of the paper is structured as follows: after a brief overview of the related work in Section 2, we discuss the motivations for our approach, and describe two methods that can make use of distributional similarity information in Section 3. Experiments and results are presented in Section 4. The last section contains summaries and outlines promising future work. 2 Related Work The method for web query spelling correction proposed by Cucerzan and Brill (2004) is essentially based on a source channel model, but it requires iterative running to derive suggestions for very-difficult-to-correct spelling errors. Word bigram model trained from search query logs is used as the source model, and the error model is approximated by inverse weighted edit distance of a correction candidate from its original term. The weights of edit operations are interactively optimized based on statistics from the query logs. They observed that an edit distance-based error model only has less impact on the overall accuracy than the source model. The paper reports that un-weighted edit distance will cause the overall accuracy of their speller’s output to drop by around 2%. The work of Ahmad and Kondrak (2005) tried to employ an unsupervised approach to error model estimation. They designed an EM (Expectation Maximization) algorithm to optimize the probabilities of edit operations over a set of search queries from the query logs, by exploiting the fact that there are more than 10% misspelled queries scattered throughout the query logs. Their method is concerned with single character edit operations, and evaluation was performed on an isolated word spelling correction task. There are two lines of research in conventional spelling correction, which deal with non-word errors and real-word errors respectively. Nonword error spelling correction is concerned with the task of generating and ranking a list of possible spelling corrections for each query word not found in a lexicon. While traditionally candidate ranking is based on manually tuned scores such as assigning weights to different edit operations or leveraging candidate frequencies, some statistical models have been proposed for this ranking task in recent years. Brill and Moore (2000) presented an improved error model over the one proposed by Kernigham et al. (1990) by allowing generic string-to-string edit operations, which helps with modeling major cognitive errors such as the confusion between le and al. Toutanova and Moore (2002) further explored this via explicit modeling of phonetic information of English words. Both these two methods require misspelled/correct word pairs for training, and the latter also needs a pronunciation lexicon. Realword spelling correction is also referred to as context sensitive spelling correction, which tries to detect incorrect usage of valid words in certain contexts (Golding and Roth, 1996; Mangu and Brill, 1997). Distributional similarity between words has been investigated and successfully applied in many natural language tasks such as automatic semantic knowledge acquisition (Dekang Lin, 1998) and language model smoothing (Essen and Steinbiss, 1992; Dagan et al., 1997). An investigation on distributional similarity functions can be found in (Lillian Lee, 1999). 3 Distributional Similarity-Based Models for Query Spelling Correction 3.1 Motivation Most of the previous work on spelling correction concentrates on the problem of designing better error models based on properties of character strings. 
This direction ever evolves from simple Damerau-Levenshtein distance (Damerau, 1964; Levenshtein, 1966) to probabilistic models that estimate string edit probabilities from corpus (Church and Gale, 1991; Mayes et al, 1991; Ristad and Yianilos, 1997; Brill and Moore, 2000; and Ahmad and Kondrak, 2005). In the mentioned methods, however, the similarities between two strings are modeled on the average of many misspelling-correction pairs, which may cause many idiosyncratic spelling errors to be ignored. Some of those are typical word-level cognitive errors. For instance, given the query term adventura, a character string-based error model usually assigns similar similarities to its two most probable corrections adventure and aventura. Taking into account that adventure has a much higher frequency of occurring, it is most likely that adventure would be generated as a suggestion. However, our observation into the query logs reveals that adventura in most cases is actually a common misspelling of aventura. Two annotators were asked to judge 36 randomly sampled queries that contain more than one term, and they agreed upon that 35 of them should be aventura. To solve this problem, we consider alternative methods to make use of the information beyond a 1026 term’s character strings. Distributional similarity provides such a dimension to view the possibility that one word can be replaced by another based on the statistics of words co-occuring with them. Distributional similarity has been proposed to perform tasks such as language model smoothing and word clustering, but to the best of our knowledge, it has not been explored in estimating similarities between misspellings and their corrections. In this section, we will only involve the consine metric for illustration purpose. Query logs can serve as an excellent corpus for distributional similarity estimation. This is because query logs are not only an up-to-date term base, but also a comprehensive spelling error repository (Cucerzan and Brill, 2004; Ahmad and Kondrak, 2005). Given enough size of query logs, some misspellings, such as adventura, will occur so frequently that we can obtain reliable statistics of their typical usage. Essential to our method is the observation of high distributional similarity between frequently occurring spelling errors and their corrections, but low between irrelevant terms. For example, we observe that adventura occurred more than 3,300 times in a set of logged queries that spanned three months, and its context was similar to that of aventura. Both of them usually appeared after words like peurto and lyrics, and were followed by mall, palace and resort. Further computation shows that, in the tf (term frequency) vector space based on surrounding words, the cosine value between them is approximately 0.8, which indicates these two terms are used in a very similar way among all the users trying to search aventura. The cosine between adventura and adventure is less than 0.03 and basically we can conclude that they are two irrelevant terms, although their spellings are similar. Distributional similarity is also helpful to address another challenge for query spelling correction: differentiating valid OOV terms from frequently occurring misspellings. InLex Freq Cosine vaccum No 18,430 vacuum Yes 158,428 0.99 seraphin No 1,718 seraphim Yes 14,407 0.30 Table 1. 
Statistics of two word pairs with similar spellings Table 1 lists detailed statistics of two word pairs, each of pair of words have similar spelling, lexicon and frequency properties. But the distributional similarity between each pair of words provides the necessary information to make correction classification that vacuum is a spelling error while seraphin is a valid OOV term. 3.2 Problem Formulation In this work, we view the query spelling correction task as a statistical sequence inference problem. Under the probabilistic model framework, it can be conceptually formulated as follows. Given a correction candidate set C for a query string q: } ) , ( | { δ < = c q EditDist c C in which each correction candidate c satisfies the constraint that the edit distance between c and q is less than a given threshold δ, the model is to find c* in C with the highest probability: ) | ( max arg * q c P c C c∈ = (1) In practice, the correction candidate set C is not generated from the entire query string directly. Correction candidates are generated for each term of a query first, and then C is constructed by composing the candidates of individual terms. The edit distance threshold δ is set for each term proportionally to the length of the term. 3.3 Source Channel Model Source channel model has been widely used for spelling correction (Kernigham et al., 1990; Mayes, Damerau et al., 1991; Brill and More, 2000; Ahmad and Kondrak, 2005). Instead of directly optimize (1), source channel model tries to solve an equivalent problem by applying Bayes’s rule and dropping the constant denominator: ) ( ) | ( max arg * c P c q P c C c∈ = (2) In this approach, two component generative models are involved: source model P(c) that generates the user’s intended query c and error model P(q|c) that generates the real query q given c. These two component models can be independently estimated. In practice, for a multi-term query, the source model can be approximated with an n-gram statistical language model, which is estimated with tokenized query logs. Taking bigram model for example, c is a correction candidate containing n terms, nc c c c … 2 1 = , then P(c) can be written as the product of consecutive bigram probabilities: ∏ − = ) | ( ) ( 1 i i c c P c P 1027 Similarly, the error model probability of a query is decomposed into generation probabilities of individual terms which are assumed to be independently generated: ∏ = ) | ( ) | ( i i c q P c q P Previous proposed methods for error model estimation are all based on the similarity between the character strings of qi and ci as described in 3.1. Here we describe a distributional similaritybased method for this problem. Essentially there are different ways to estimate distributional similarity between two words (Dagan et al., 1997), and the one we propose to use is confusion probability (Essen and Steinbiss, 1992). Formally, confusion probability cP estimates the possibility that one word w1 can be replaced by another word w2: ∑ = w c w P w w P w P w w P w w P ) ( ) | ( ) ( ) | ( ) | ( 2 2 1 1 2 (3) where w belongs to the set of words that cooccur with both w1 and w2. From the spelling correction point of view, given w1 to be a valid word and w2 one of its spelling errors, ) | ( 1 2 w w Pc actually estimates opportunity that w1 is misspelled as w2 in query logs. Compared to other similarity measures such as cosine or Euclidean distance, confusion probability is of interest because it defines a probabilistic distribution rather than a generic measure. 
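The formulation above can be sketched concretely as follows: candidates are generated per query term within an edit-distance threshold, and a bigram source model scores a composed candidate. The vocabulary, the length-proportional threshold ratio, and the smoothing constant are placeholders; the actual system estimates its bigram model from tokenized query logs.

import math

# Candidate generation per term: C_t = {c : EditDist(t, c) <= delta},
# with delta proportional to the term length (Section 3.2).
def edit_distance(a, b):
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def term_candidates(term, vocabulary, ratio=0.3):
    delta = max(1, int(ratio * len(term)))        # threshold grows with length
    return [w for w in vocabulary if edit_distance(term, w) <= delta] or [term]

def bigram_logprob(candidate_terms, bigram):
    """log P(c) = sum_i log P(c_i | c_{i-1}); "<s>" marks the query start."""
    logp, prev = 0.0, "<s>"
    for w in candidate_terms:
        logp += math.log(bigram.get((prev, w), 1e-9))   # crude floor for unseen pairs
        prev = w
    return logp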
This property makes it more theoretically sound to be used as error model probability in the Bayesian framework of the source channel model. Thus it can be applied and evaluated independently. However, before using confusion probability as our error model, we have to solve two problems: probability renormalization and smoothing. Unlike string edit-based error models, which distribute a major portion of probability over terms with similar spellings, confusion probability distributes probability over the entire vocabulary in the training data. This property may cause the problem of unfair comparison between different correction candidates if we directly use (3) as the error model probability. This is because the synonyms of different candidates may share different portion of confusion probabilities. This problem can be solved by re-normalizing the probabilities only over a term’s possible correction candidates and itself. To obtain better estimation, here we also require that the frequency of a correction candidate should be higher than that of the query term, based on the observation that correct spellings generally occur more often in query logs. Formally, given a word w and its correction candidate set C, the confusion probability of a word w′ conditioned on w can be redefined as ∉ ′ ∈ ′ ′ ′ ′ = ′ ∑∈ C w C w w c P w w P w w P C c c c c 0 ) | ( ) | ( ) | ( (4) where ) | ( w w Pc ′ ′ is the original definition of confusion probability. In addition, we might also have the zeroprobability problem when the query term has not appeared or there are few context words for it in the query logs. In such cases there is no distributional similarity information available to any known terms. To solve this problem, we define the final error model probability as the linear combination of confusion probability and a string edit-based error model probability ) | ( c q Ped : ) | ( ) 1( ) | ( ) | ( c q P c q P c q P ed c λ λ − + = (5) where λ is the interpolation parameter between 0 and 1 that can be experimentally optimized on a development data set. 3.4 Maximum Entropy Model Theoretically we are more interested in building a unified probabilistic spelling correction model that is able to leverage all available features, which could include (but not limited to) traditional character string-based typographical similarity, phonetic similarity and distributional similarity proposed in this work. The maximum entropy model (Berger et al., 1996) provides us with a well-founded framework for this purpose, which has been extensively used in natural lan guage processing tasks ranging from part-ofspeech tagging to machine translation. For our task, the maximum entropy model defines a posterior probabilistic distribution ) | ( q c P over a set of feature functions fi (q, c) defined on an input query q and its correction candidate c: ∑ ∑ ∑ = = = c N i i i N i i i q c f q c f q c P 1 1 ) , ( exp ) , ( exp ) | ( λ λ (6) 1028 where λs are feature weights, which can be optimized by maximizing the posterior probability on the training set: ∑ ∈ = TD q t q t P ) , ( ) | ( log max arg * λ λ λ where TD denotes the set of training samples in the form of query-truth pairs presented to the training algorithm. We use the Generalized Iterative Scaling (GIS) algorithm (Darroch and Ratcliff, 1972) to learn the model parameter λs of the maximum entropy model. GIS training requires normalization over all possible prediction classes as shown in the denominator in equation (6). 
Since the potential number of correction candidates may be huge for multi-term queries, it would not be practical to perform the normalization over the entire search space. Instead, we use a method to approximate the sum over the n-best list (a list of most probable correction candidates). This is similar to what Och and Ney (2002) used for their maximum entropy-based statistical machine translation training. 3.4.1 Features Features used in our maximum entropy model are classified into two categories I) baseline features and II) features supported by distributional similarity evidence. Below we list the feature templates. Category I: 1. Language model probability feature. This is the only real-valued feature with feature value set to the logarithm of source model probability: ) ( log ) , ( c P c q f prob = 2. Edit distance-based features, which are generated by checking whether the weighted Levenshtein edit distance between a query term and its correction is in certain range; All the following features, including this one, are binary features, and have the feature function of the following form: = otherwise satisfied constraint c q fn 0 1 ) , ( in which the feature value is set to 1 when the constraints described in the template are satisfied; otherwise the feature value is set to 0. 3. Frequency-based features, which are generated by checking whether the frequencies of a query term and its correction candidate are above certain thresholds; 4. Lexicon-based features, which are generated by checking whether a query term and its correction candidate are in a conventional spelling lexicon; 5. Phonetic similarity-based features, which are generated by checking whether the edit distance between the metaphones (Philips, 1990) of a query term and its correction candidate is below certain thresholds. Category II: 6. Distributional similarity based term features, which are generated by checking whether a query term’s frequency is higher than certain thresholds but there are no candidates for it with higher frequency and high enough distributional similarity. This is usually an indicator that the query term is valid and not covered by the spelling lexicon. The frequency thresholds are enumerated from 10,000 to 50,000 with the interval 5,000. 7. Distributional similarity based correction candidate features, which are generated by checking whether a correction candidate’s frequency is higher than the query term or the correction candidate is in the lexicon, and at the same time the distributional similarity is higher than certain thresholds. This generally gives the evidence that the query term may be a common misspelling of the current candidate. The distributional similarity thresholds are enumerated from 0.6 to 1 with the interval 0.1. 4 Experimental Results 4.1 Dataset We randomly sampled 7,000 queries from daily query logs of MSN Search and they were manually labeled by two annotators. For each query identified to contain spelling errors, corrections were given by the annotators independently. From the annotation results that both annotators agreed upon 3,061 queries were extracted, which were further divided into a test set containing 1,031 queries and a training set containing 2,030 queries. In the test set there are 171 queries identified containing spelling errors with an error rate of 16.6%. The numbers on the training set is 312 and 15.3%, respectively. The average length of queries on training set is 2.8 terms and on test set it is 2.6. 
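A simplified rendering of a few of these feature templates is sketched below. The threshold grids follow the description above, but the helper arguments (frequency table, lexicon, similarity, weighted edit distance) are assumed to be computed elsewhere, only templates 2, 4, and 7 are shown, and the feature naming scheme is invented.

# Binary feature templates for a (query term, correction candidate) pair;
# the real-valued log P(c) language model feature is handled separately.
def candidate_features(query_term, cand, freq, lexicon, sim, edit_dist):
    """freq: term -> query-log count; lexicon: spelling lexicon (a set);
    sim: distributional similarity; edit_dist: weighted edit distance."""
    feats = {}
    # Edit-distance range features (template 2).
    for low, high in [(0, 1), (1, 2), (2, 3)]:
        if low <= edit_dist < high:
            feats[f"edit_in_[{low},{high})"] = 1
    # Lexicon-based features (template 4).
    feats[f"inlex_q={query_term in lexicon}_c={cand in lexicon}"] = 1
    # Distributional-similarity candidate features (template 7): the candidate
    # is more frequent or in the lexicon, and similarity clears a threshold.
    plausible = freq.get(cand, 0) > freq.get(query_term, 0) or cand in lexicon
    for s in (0.6, 0.7, 0.8, 0.9, 1.0):
        if plausible and sim >= s:
            feats[f"sim>={s}"] = 1
    return feats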
1029 In our experiments, a term bigram model is used as the source model. The bigram model is trained with query log data of MSN Search during the period from October 2004 to June 2005. Correction candidates are generated from a term base extracted from the same set of query logs. For each of the experiments, the performance is evaluated by the following metrics: Accuracy: The number of correct outputs generated by the system divided by the total number of queries in the test set; Recall: The number of correct suggestions for misspelled queries generated by the system divided by the total number of misspelled queries in the test set; Precision: The number of correct suggestions for misspelled queries generated by the system divided by the total number of suggestions made by the system. 4.2 Results We first investigated the impact of the interpolation parameter λ in equation (5) by applying the confusion probability-based error model on training set. For the string edit-based error model probability ) | ( c q Ped , we used a heuristic score computed as the inverse of weighted edit distance, which is similar to the one used by Cucerzan and Brill (2004). Figure 1 shows the accuracy metric at different settings of λ. The accuracy generally gains improvements before λ reaches 0.9. This shows that confusion probability plays a more important role in the combination. As a result, we empirically set λ= 0.9 in the following experiments. 88% 89% 89% 90% 90% 91% 91% 0.05 0.15 0.25 0.35 0.45 0.55 0.65 0.75 0.85 0.95 lambda accuracy Figure 1. Accuracy with different λs To evaluate whether the distributional similarity can contribute to performance improvements, we conducted the following experiments. For source channel model, we compared the confusion probability-based error model (SC-SimCM) against two baseline error model settings, which are source model only (SC-NoCM) and the heuristic string edit-based error model (SC-EdCM) we just described. Two maximum entropy models were trained with different feature sets. MENoSim is the model trained only with baseline features. It serves as the baseline for ME-Full, which is trained with all the features described in 3.4.1. In training ME-Full, cosine distance is used as the similarity measure examined by feature functions. In all the experiments we used the standard viterbi algorithm to search for the best output of source channel model. The n-best list for maximum entropy model training and testing is generated based on language model scores of correction candidates, which can be easily obtained by running the forward-viterbi backward-A* algorithm. On a 3.0GHZ Pentium4 personal computer, the system can process 110 queries per second for source channel model and 86 queries per second for maximum entropy model, in which 20 best correction candidates are used. Model Accuracy Recall Precision SC-NoCM 79.7% 63.3% 40.2% SC-EdCM 84.1% 62.7% 47.4% SC-SimCM 88.2% 57.4% 58.8% ME-NoSim 87.8% 52.0% 60.0% ME-Full 89.0% 60.4% 62.6% Table 2. Performance results for different models Table 2 details the performance scores for the experiments, which shows that both of the two distributional similarity-based models boost accuracy over their baseline settings. SC-SimCM achieves 26.3% reduction in error rate over SCEdCM, which is significant to the 0.001 level (paired t-test). ME-Full outperforms ME-NoSim in all three evaluation measures, with 9.8% reduction in error rate and 16.2% improvement in recall, which is significant to the 0.01 level. 
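The three evaluation measures used in this section can be computed directly as in the sketch below; the per-query record format (gold truth, original query, optional suggestion) is an assumption made for illustration.

def evaluate(items):
    """items: dicts with 'query', 'truth', and 'suggestion' (None = no change)."""
    total = len(items)
    misspelled = [i for i in items if i["truth"] != i["query"]]
    suggestions = [i for i in items if i["suggestion"] is not None]

    # Accuracy: final output (suggestion, or the unchanged query) equals the truth.
    output_ok = sum((i["suggestion"] or i["query"]) == i["truth"] for i in items)
    # Correct suggestions made for genuinely misspelled queries.
    correct_sugg = sum(i["suggestion"] == i["truth"] for i in misspelled
                       if i["suggestion"] is not None)

    accuracy = output_ok / total
    recall = correct_sugg / len(misspelled) if misspelled else 0.0
    precision = correct_sugg / len(suggestions) if suggestions else 0.0
    return accuracy, recall, precision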
It is interesting to note that the accuracy of SC-SimCM is slightly better than ME-NoSim, although ME-NoSim makes use of a rich set of features. ME-NoSim tends to keep queries with frequently misspelled terms unchanged (e.g. caffine extractions from soda) to reduce false alarms (e.g. bicycle suggested for biocycle). We also investigated the performance of the models discussed above at different recall. Figure 2 and Figure 3 show the precision-recall curves and accuracy-recall curves of different models. We observed that the performance of SC-SimCM and ME-NoSim are very close to each other and ME-Full consistently yields better performance over the entire P-R curve. 1030 40% 45% 50% 55% 60% 65% 70% 75% 80% 85% 35% 40% 45% 50% 55% 60% recall precision ME-Full ME-NoSim SC-EdCM SC-SimCM SC-NoCM Figure 2. Precision-recall curve of different models 82% 83% 84% 85% 86% 87% 88% 89% 90% 91% 35% 40% 45% 50% 55% 60% recall accuracy ME-Full ME-NoSim SC-EdCM SC-SimCM SC-NoCM Figure 3. Accuracy-recall curve of different models We performed a study on the impact of training size to ensure all models are trained with enough data. 40% 50% 60% 70% 80% 90% 200 400 600 800 1000 1600 2000 ME-Full Recall ME-Full Accuracy ME-NoSim Recall ME-NoSim Accuracy Figure 4. Accuracy of maximum entropy models trained with different number of samples Figure 4 shows the accuracy of the two maximum entropy models as functions of number of training samples. From the results we can see that after the number of training samples reaches 600 there are only subtle changes in accuracy and recall. Therefore basically it can be concluded that 2,000 samples are sufficient to train a maximum entropy model with the current feature sets. 5 Conclusions and Future Work We have presented novel methods to learn better statistical models for the query spelling correction task by exploiting distributional similarity information. We explained the motivation of our methods with the statistical evidence distilled from query log data. To evaluate our proposed methods, two probabilistic models that can take advantage of such information are investigated. Experimental results show that both methods can achieve significant improvements over their baseline settings. A subject of future research is exploring more effective ways to utilize distributional similarity even beyond query logs. Currently for lowfrequency terms in query logs there are no reliable distribution similarity evidence available for them. A promising method of dealing with this in next steps is to explore information in the resulting page of a search engine, since the snippets in the resulting page can provide far greater detailed information about terms in a query. References Farooq Ahmad and Grzegorz Kondrak. 2005. Learning a spelling error model from search query logs. Proceedings of EMNLP 2005, pages 955-962. Adam L. Beger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computation Linguistics, 22(1):39-72. Eric Brill and Robert C. Moore. 2000. An improved error model for noisy channel spelling correction. Proceedings of 38th annual meeting of the ACL, pages 286-293. Kenneth W. Church and William A. Gale. 1991. Probability scoring for spelling correction. In Statistics and Computing, volume 1, pages 93-103. Silviu Cucerzan and Eric Brill. 2004. Spelling correction as an iterative process that exploits the collective knowledge of web users. Proceedings of EMNLP’04, pages 293-300. 
Ido Dagan, Lillian Lee and Fernando Pereira. 1997. Similarity-Based Methods for Word Sense Disambiguation. Proceedings of the 35th annual meeting of ACL, pages 56-63. Fred Damerau. 1964. A technique for computer detection and correction of spelling errors. Communication of the ACM 7(3):659-664. J. N. Darroch and D. Ratcliff. 1972. Generalized iterative scaling for long-linear models. Annals of Mathematical Statistics, 43:1470-1480. Ute Essen and Volker Steinbiss. 1992. Co-occurrence smoothing for stochastic language modeling. Proceedings of ICASSP, volume 1, pages 161-164. Andrew R. Golding and Dan Roth. 1996. Applying winnow to context-sensitive spelling correction. Proceedings of ICML 1996, pages 182-190. Mark D. Kernighan, Kenneth W. Church and William A. Gale. 1990. A spelling correction program 1031 based on a noisy channel model. Proceedings of COLING 1990, pages 205-210. Karen Kukich. 1992. Techniques for automatically correcting words in text. ACM Computing Surveys. 24(4): 377-439 Lillian Lee. 1999. Measures of distributional similarity. Proceedings of the 37th annual meeting of ACL, pages 25-32. V. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. Soviet Physice – Doklady 10: 707-710. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. Proceedings of COLING-ACL 1998, pages 768-774. Lidia Mangu and Eric Brill. 1997. Automatic rule acquisition for spelling correction. Proceedings of ICML 1997, pages 734-741. Eric Mayes, Fred Damerau and Robert Mercer. 1991. Context based spelling correction. Information processing and management 27(5): 517-522. Franz Och and Hermann Ney. 2002. Discriminative training and maimum entropy models for statistical machine translation. Proceedings of the 40th annual meeting of ACL, pages 295-302. Lawrence Philips. 1990. Hanging on the metaphone. Computer Language Magazine, 7(12): 39. Eric S. Ristad and Peter N. Yianilos. 1997. Learning string edit distance. Proceedings of ICML 1997. pages 287-295 Kristina Toutanova and Robert Moore. 2002. Pronunciation modeling for improved spelling correction. Proceedings of the 40th annual meeting of ACL, pages 144-151. 1032 | 2006 | 129 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 97–104, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Ensemble Methods for Unsupervised WSD Samuel Brody School of Informatics University of Edinburgh [email protected] Roberto Navigli Dipartimento di Informatica Universit`a di Roma “La Sapienza” [email protected] Mirella Lapata School of Informatics University of Edinburgh [email protected] Abstract Combination methods are an effective way of improving system performance. This paper examines the benefits of system combination for unsupervised WSD. We investigate several voting- and arbiterbased combination strategies over a diverse pool of unsupervised WSD systems. Our combination methods rely on predominant senses which are derived automatically from raw text. Experiments using the SemCor and Senseval-3 data sets demonstrate that our ensembles yield significantly better results when compared with state-of-the-art. 1 Introduction Word sense disambiguation (WSD), the task of identifying the intended meanings (senses) of words in context, holds promise for many NLP applications requiring broad-coverage language understanding. Examples include summarization, question answering, and text simplification. Recent studies have also shown that WSD can benefit machine translation (Vickrey et al., 2005) and information retrieval (Stokoe, 2005). Given the potential of WSD for many NLP tasks, much work has focused on the computational treatment of sense ambiguity, primarily using data-driven methods. Most accurate WSD systems to date are supervised and rely on the availability of training data, i.e., corpus occurrences of ambiguous words marked up with labels indicating the appropriate sense given the context (see Mihalcea and Edmonds 2004 and the references therein). A classifier automatically learns disambiguation cues from these hand-labeled examples. Although supervised methods typically achieve better performance than unsupervised alternatives, their applicability is limited to those words for which sense labeled data exists, and their accuracy is strongly correlated with the amount of labeled data available (Yarowsky and Florian, 2002). Furthermore, obtaining manually labeled corpora with word senses is costly and the task must be repeated for new domains, languages, or sense inventories. Ng (1997) estimates that a high accuracy domain independent system for WSD would probably need a corpus of about 3.2 million sense tagged words. At a throughput of one word per minute (Edmonds, 2000), this would require about 27 person-years of human annotation effort. This paper focuses on unsupervised methods which we argue are useful for broad coverage sense disambiguation. Unsupervised WSD algorithms fall into two general classes: those that perform token-based WSD by exploiting the similarity or relatedness between an ambiguous word and its context (e.g., Lesk 1986); and those that perform type-based WSD, simply by assigning all instances of an ambiguous word its most frequent (i.e., predominant) sense (e.g., McCarthy et al. 2004; Galley and McKeown 2003). The predominant senses are automatically acquired from raw text without recourse to manually annotated data. 
The motivation for assigning all instances of a word to its most prevalent sense stems from the observation that current supervised approaches rarely outperform the simple heuristic of choosing the most common sense in the training data, despite taking local context into account (Hoste et al., 2002). Furthermore, the approach allows sense inventories to be tailored to specific domains. The work presented here evaluates and compares the performance of well-established unsupervised WSD algorithms. We show that these algorithms yield sufficiently diverse outputs, thus motivating the use of combination methods for improving WSD performance. While combination approaches have been studied previously for supervised WSD (Florian et al., 2002), their use in an unsupervised setting is, to our knowledge, novel. We examine several existing and novel combination methods and demonstrate that our combined systems consistently outperform the 97 state-of-the-art (e.g., McCarthy et al. 2004). Importantly, our WSD algorithms and combination methods do not make use of training material in any way, nor do they use the first sense information available in WordNet. In the following section, we briefly describe the unsupervised WSD algorithms considered in this paper. Then, we present a detailed comparison of their performance on SemCor (Miller et al., 1993). Next, we introduce our system combination methods and report on our evaluation experiments. We conclude the paper by discussing our results. 2 The Disambiguation Algorithms In this section we briefly describe the unsupervised WSD algorithms used in our experiments. We selected methods that vary along the following dimensions: (a) the type of WSD performed (i.e., token-based vs. type-based), (b) the representation and size of the context surrounding an ambiguous word (i.e., graph-based vs. word-based, document vs. sentence), and (c) the number and type of semantic relations considered for disambiguation. We base most of our discussion below on the WordNet sense inventory; however, the approaches are not limited to this particular lexicon but could be adapted for other resources with traditional dictionary-like sense definitions and alternative structure. Extended Gloss Overlap Gloss Overlap was originally introduced by Lesk (1986) for performing token-based WSD. The method assigns a sense to a target word by comparing the dictionary definitions of each of its senses with those of the words in the surrounding context. The sense whose definition has the highest overlap (i.e., words in common) with the context words is assumed to be the correct one. Banerjee and Pedersen (2003) augment the dictionary definition (gloss) of each sense with the glosses of related words and senses. The extended glosses increase the information available in estimating the overlap between ambiguous words and their surrounding context. The range of relationships used to extend the glosses is a parameter, and can be chosen from any combination of WordNet relations. For every sense sk of the target word we estimate: SenseScore(sk) = ∑ Rel∈Relations Overlap(context,Rel(sk)) where context is a simple (space separated) concatenation of all words wi for −n ≤i ≤n,i ̸= 0 in a context window of length ±n around the target word w0. The overlap scoring mechanism is also parametrized and can be adjusted to take the into account gloss length or to ignore function words. Distributional and WordNet Similarity McCarthy et al. 
(2004) propose a method for automatically ranking the senses of ambiguous words from raw text. Key in their approach is the observation that distributionally similar neighbors often provide cues about a word’s senses. Assuming that a set of neighbors is available, sense ranking is equivalent to quantifying the degree of similarity among the neighbors and the sense descriptions of the polysemous word. Let N(w) = {n1,n2,...,nk} be the k most (distributionally) similar words to an ambiguous target word w and senses(w) = {s1,s2,...sn} the set of senses for w. For each sense si and for each neighbor n j, the algorithm selects the neighbor’s sense which has the highest WordNet similarity score (wnss) with regard to si. The ranking score of sense si is then increased as a function of the WordNet similarity score and the distributional similarity score (dss) between the target word and the neighbor: RankScore(si) =∑ n j∈Nw dss(w,n j) wnss(si,n j) ∑ s′ i∈senses(w) wnss(s′ i,n j) where wnss(si,n j) = max nsx∈senses(n j)wnss(si,nsx). The predominant sense is simply the sense with the highest ranking score (RankScore) and can be consequently used to perform type-based disambiguation. The method presented above has four parameters: (a) the semantic space model representing the distributional properties of the target words (it is acquired from a large corpus representative of the domain at hand and can be augmented with syntactic relations such as subject or object), (b) the measure of distributional similarity for discovering neighbors (c) the number of neighbors that the ranking score takes into account, and (d) the measure of sense similarity. Lexical Chains Lexical cohesion is often represented via lexical chains, i.e., sequences of related words spanning a topical text unit (Morris and Hirst, 1991). Algorithms for computing lexical chains often perform WSD before inferring which words are semantically related. Here we describe one such disambiguation algorithm, proposed by Galley and McKeown (2003), while omitting the details of creating the lexical chains themselves. Galley and McKeown’s (2003) method consists of two stages. First, a graph is built representing all possible interpretations of the target words 98 in question. The text is processed sequentially, comparing each word against all words previously read. If a relation exists between the senses of the current word and any possible sense of a previous word, a connection is formed between the appropriate words and senses. The strength of the connection is a function of the type of relationship and of the distance between the words in the text (in terms of words, sentences and paragraphs). Words are represented as nodes in the graph and semantic relations as weighted edges. Again, the set of relations being considered is a parameter that can be tuned experimentally. In the disambiguation stage, all occurrences of a given word are collected together. For each sense of a target word, the strength of all connections involving that sense are summed, giving that sense a unified score. The sense with the highest unified score is chosen as the correct sense for the target word. In subsequent stages the actual connections comprising the winning unified score are used as a basis for computing the lexical chains. The algorithm is based on the “one sense per discourse” hypothesis and uses information from every occurrence of the ambiguous target word in order to decide its appropriate sense. 
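A minimal sketch of the disambiguation stage just described, where the edge weights are assumed to already encode relation type and textual distance (illustrative code, not Galley and McKeown's implementation):

```python
from collections import defaultdict

def choose_senses(edges):
    """Sum the strength of all connections involving each sense of a word and
    pick, for every word, the sense with the highest unified score.

    edges: iterable of ((word1, sense1), (word2, sense2), weight) triples.
    """
    unified = defaultdict(float)          # (word, sense) -> summed strength
    for (w1, s1), (w2, s2), weight in edges:
        unified[(w1, s1)] += weight
        unified[(w2, s2)] += weight

    best = {}                             # word -> (sense, score)
    for (word, sense), score in unified.items():
        if word not in best or score > best[word][1]:
            best[word] = (sense, score)
    return {word: sense for word, (sense, _) in best.items()}

# Toy example: two senses of "bank" connected to senses of other words.
edges = [
    (("bank", "bank#1"), ("money", "money#1"), 1.0),
    (("bank", "bank#1"), ("loan", "loan#1"), 0.5),
    (("bank", "bank#2"), ("river", "river#1"), 0.7),
]
print(choose_senses(edges))   # e.g. {'bank': 'bank#1', 'money': 'money#1', ...}
```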
It is therefore a type-based algorithm, since it tries to determine the sense of the word in the entire document/discourse at once, and not separately for each instance. Structural Semantic Interconnections Inspired by lexical chains, Navigli and Velardi (2005) developed Structural Semantic Interconnections (SSI), a WSD algorithm which makes use of an extensive lexical knowledge base. The latter is primarily based on WordNet and its standard relation set (i.e., hypernymy, meronymy, antonymy, similarity, nominalization, pertainymy) but is also enriched with collocation information representing semantic relatedness between sense pairs. Collocations are gathered from existing resources (such as the Oxford Collocations, the Longman Language Activator, and collocation web sites). Each collocation is mapped to the WordNet sense inventory in a semi-automatic manner (Navigli, 2005) and transformed into a relatedness edge. Given a local word context C = {w1,...,wn}, SSI builds a graph G = (V,E) such that V = nS i=1 senses(wi) and (s,s′) ∈E if there is at least one interconnection j between s (a sense of the word) and s′ (a sense of its context) in the lexical knowledge base. The set of valid interconnections is determined by a manually-created context-free Method WSD Context Relations LexChains types document first-order Overlap tokens sentence first-order Similarity types corpus higher-order SSI tokens sentence higher-order Table 1: Properties of the WSD algorithms grammar consisting of a small number of rules. Valid interconnections are computed in advance on the lexical database, not at runtime. Disambiguation is performed in an iterative fashion. At each step, for each sense s of a word in C (the set of senses of words yet to be disambiguated), SSI determines the degree of connectivity between s and the other senses in C: SSIScore(s) = ∑ s′∈C\{s} ∑ j∈Interconn(s,s′) 1 length(j) ∑ s′∈C\{s} |Interconn(s,s′)| where Interconn(s,s′) is the set of interconnections between senses s and s′. The contribution of a single interconnection is given by the reciprocal of its length, calculated as the number of edges connecting its ends. The overall degree of connectivity is then normalized by the number of contributing interconnections. The highest ranking sense s of word wi is chosen and the senses of wi are removed from the context C. The procedure terminates when either C is the empty set or there is no sense such that its SSIScore exceeds a fixed threshold. Summary The properties of the different WSD algorithms just described are summarized in Table 1. The methods vary in the amount of data they employ for disambiguation. SSI and Extended Gloss Overlap (Overlap) rely on sentencelevel information for disambiguation whereas McCarthy et al. (2004) (Similarity) and Galley and McKeown (2003) (LexChains) utilize the entire document or corpus. This enables the accumulation of large amounts of data regarding the ambiguous word, but does not allow separate consideration of each individual occurrence of that word. LexChains and Overlap take into account a restricted set of semantic relations (paths of length one) between any two words in the whole document, whereas SSI and Similarity use a wider set of relations. 99 3 Experiment 1: Comparison of Unsupervised Algorithms for WSD 3.1 Method We evaluated the disambiguation algorithms outlined above on two tasks: predominant sense acquisition and token-based WSD. 
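Before turning to the experiments, a small sketch of the SSIScore computation defined above may help; the search for interconnections in the lexical knowledge base is abstracted into a lookup function, and the threshold value is a placeholder rather than the one used by the authors:

```python
def ssi_score(sense, context_senses, interconn):
    """SSIScore: each interconnection contributes the reciprocal of its length
    (number of edges); the sum is normalised by the number of contributing
    interconnections. interconn(s, s2) is assumed to return the lengths of all
    interconnections between senses s and s2 in the lexical knowledge base."""
    total, count = 0.0, 0
    for other in context_senses:
        if other == sense:
            continue
        lengths = interconn(sense, other)
        total += sum(1.0 / l for l in lengths)
        count += len(lengths)
    return total / count if count else 0.0

def disambiguate_step(pending_senses, context_senses, interconn, threshold=0.1):
    """One SSI iteration: choose the highest-scoring sense among the words
    still to be disambiguated, provided it exceeds a fixed threshold."""
    if not pending_senses:
        return None
    scored = [(ssi_score(s, context_senses, interconn), s) for s in pending_senses]
    best_score, best_sense = max(scored)
    return best_sense if best_score > threshold else None
```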
As previously explained, Overlap and SSI were not designed for acquiring predominant senses (see Table 1), but a token-based WSD algorithm can be trivially modified to acquire predominant senses by disambiguating every occurrence of the target word in context and selecting the sense which was chosen most frequently. Type-based WSD algorithms simply tag all occurrences of a target word with its predominant sense, disregarding the surrounding context. Our first set of experiments was conducted on the SemCor corpus, on the same 2,595 polysemous nouns (53,674 tokens) used as a test set by McCarthy et al. (2004). These nouns were attested in SemCor with a frequency > 2 and occurred in the British National Corpus (BNC) more than 10 times. We used the WordNet 1.7.1 sense inventory. The following notation describes our evaluation measures: W is the set of all noun types in the SemCor corpus (|W| = 2,595), and Wf is the set of noun types with a dominant sense. senses(w) is the set of senses for noun type w, while fs(w) and fm(w) refer to w’s first sense according to the SemCor gold standard and our algorithms, respectively. Finally, T(w) is the set of tokens of w and senses(t) denotes the sense assigned to token t according to SemCor. We first measure how well our algorithms can identify the predominant sense, if one exists: Accps = |{w ∈Wf | fs(w) = fm(w)}| |Wf | A baseline for this task can be easily defined for each word type by selecting a sense at random from its sense inventory and assuming that this is the predominant sense: Baselinesr = 1 |Wf | ∑ w ∈Wf 1 |senses(w)| We evaluate the algorithms’ disambiguation performance by measuring the ratio of tokens for which our models choose the right sense: Accwsd = ∑ w∈W |{t ∈T(w)| fm(w) = senses(t)}| ∑ w∈W |T(w)| In the predominant sense detection task, in case of ties in SemCor, any one of the predominant senses was considered correct. Also, all algorithms were designed to randomly choose from among the top scoring options in case of a tie in the calculated scores. This introduces a small amount of randomness (less than 0.5%) in the accuracy calculation, and was done to avoid the pitfall of defaulting to the first sense listed in WordNet, which is usually the actual predominant sense (the order of senses in WordNet is based primarily on the SemCor sense distribution). 3.2 Parameter Settings We did not specifically tune the parameters of our WSD algorithms on the SemCor corpus, as our goal was to use hand labeled data solely for testing purposes. We selected parameters that have been considered “optimal” in the literature, although admittedly some performance gains could be expected had parameter optimization taken place. For Overlap, we used the semantic relations proposed by Banerjee and Pedersen (2003), namely hypernyms, hyponyms, meronyms, holonyms, and troponym synsets. We also adopted their overlap scoring mechanism which treats each gloss as a bag of words and assigns an n word overlap the score of n2. Function words were not considered in the overlap computation. For LexChains, we used the relations reported in Galley and McKeown (2003). These are all first-order WordNet relations, with the addition of the siblings – two words are considered siblings if they are both hyponyms of the same hypernym. The relations have different weights, depending on their type and the distance between the words in the text. These weights were imported from Galley and McKeown into our implementation without modification. 
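A compact sketch of the evaluation measures defined in Section 3.1 above (Accps, the random-sense baseline, and Accwsd); the data structures are illustrative:

```python
def acc_ps(gold_first, predicted_first):
    """Accps: proportion of word types with a dominant sense for which the
    predicted first sense matches the SemCor first sense."""
    words = list(gold_first)
    correct = sum(1 for w in words if predicted_first.get(w) == gold_first[w])
    return correct / len(words)

def baseline_sr(sense_inventory, words_with_dominant_sense):
    """Random-sense baseline: expected accuracy when one sense per word type
    is picked at random from its sense inventory."""
    ws = list(words_with_dominant_sense)
    return sum(1.0 / len(sense_inventory[w]) for w in ws) / len(ws)

def acc_wsd(tokens, predicted_first):
    """Accwsd: proportion of tokens whose gold sense equals the predicted
    first sense of their word type; tokens is a list of (word, gold_sense)."""
    correct = sum(1 for w, s in tokens if predicted_first.get(w) == s)
    return correct / len(tokens)
```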
Because the SemCor corpus is relatively small (less than 700,00 words), it is not ideal for constructing a neighbor thesaurus appropriate for McCarthy et al.’s (2004) method. The latter requires each word to participate in a large number of cooccurring contexts in order to obtain reliable distributional information. To overcome this problem, we followed McCarthy et al. and extracted the neighbor thesaurus from the entire BNC. We also recreated their semantic space, using a RASPparsed (Briscoe and Carroll, 2002) version of the BNC and their set of dependencies (i.e., VerbObject, Verb-Subject, Noun-Noun and AdjectiveNoun relations). Similarly to McCarthy et al., we used Lin’s (1998) measure of distributional similarity, and considered only the 50 highest ranked 100 Method Accps Accwsd/dir Accwsd/ps Baseline 34.5 – 23.0 LexChains 48.3∗†$ – 40.7∗#†$ Overlap 49.4∗†$ 36.5$ 42.5∗†$ Similarity 54.9∗ – 46.5∗$ SSI 53.7∗ 42.7 47.9∗ UpperBnd 100 – 68.4 Table 2: Results of individual disambiguation algorithms on SemCor nouns2 (∗: sig. diff. from Baseline, †: sig. diff. from Similarity, $: sig diff. from SSI, #: sig. diff. from Overlap, p < 0.01) neighbors for a given target word. Sense similarity was computed using the Lesk’s (Banerjee and Pedersen, 2003) similarity measure1. 3.3 Results The performance of the individual algorithms is shown in Table 2. We also include the baseline discussed in Section 3 and the upper bound of defaulting to the first (i.e., most frequent) sense provided by the manually annotated SemCor. We report predominant sense accuracy (Accps), and WSD accuracy when using the automatically acquired predominant sense (Accwsd/ps). For tokenbased algorithms, we also report their WSD performance in context, i.e., without use of the predominant sense (Accwsd/dir). As expected, the accuracy scores in the WSD task are lower than the respective scores in the predominant sense task, since detecting the predominant sense correctly only insures the correct tagging of the instances of the word with that first sense. All methods perform significantly better than the baseline in the predominant sense detection task (using a χ2-test, as indicated in Table 2). LexChains and Overlap perform significantly worse than Similarity and SSI, whereas LexChains is not significantly different from Overlap. Likewise, the difference in performance between SSI and Similarity is not significant. With respect to WSD, all the differences in performance are statistically significant. 1This measure is identical to the Extended gloss Overlap from Section 2, but instead of searching for overlap between an extended gloss and a word’s context, the comparison is done between two extended glosses of two synsets. 2The LexChains results presented here are not directly comparable to those reported by Galley and McKeown (2003), since they tested on a subset of SemCor, and included monosemous nouns. They also used the first sense in SemCor in case of ties. The results for the Similarity method are slightly better than those reported by McCarthy et al. (2004) due to minor improvements in implementation. Overlap LexChains Similarity LexChains 28.05 Similarity 35.87 33.10 SSI 30.48 31.67 37.14 Table 3: Algorithms’ pairwise agreement in detecting the predominant sense (as % of all words) Interestingly, using the predominant sense detected by the Gloss Overlap and the SSI algorithm to tag all instances is preferable to tagging each instance individually (compare Accwsd/dir and Accwsd/ps for Overlap and SSI in Table 2). 
This means that a large part of the instances which were not tagged individually with the predominant sense were actually that sense. A close examination of the performance of the individual methods in the predominant-sense detection task shows that while the accuracy of all the methods is within a range of 7%, the actual words for which each algorithm gives the correct predominant sense are very different. Table 3 shows the degree of overlap in assigning the appropriate predominant sense among the four methods. As can be seen, the largest amount of overlap is between Similarity and SSI, and this corresponds approximately to 2 3 of the words they correctly label. This means that each of these two methods gets more than 350 words right which the other labels incorrectly. If we had an “oracle” which would tell us which method to choose for each word, we would achieve approximately 82.4% in the predominant sense task, giving us 58% in the WSD task. We see that there is a large amount of complementation between the algorithms, where the successes of one make up for the failures of the others. This suggests that the errors of the individual methods are sufficiently uncorrelated, and that some advantage can be gained by combining their predictions. 4 Combination Methods An important finding in machine learning is that a set of classifiers whose individual decisions are combined in some way (an ensemble) can be more accurate than any of its component classifiers, provided that the individual components are relatively accurate and diverse (Dietterich, 1997). This simple idea has been applied to a variety of classification problems ranging from optical character recognition to medical diagnosis, part-of-speech tagging (see Dietterich 1997 and van Halteren et al. 2001 for overviews), and notably supervised 101 WSD (Florian et al., 2002). Since our effort is focused exclusively on unsupervised methods, we cannot use most machine learning approaches for creating an ensemble (e.g., stacking, confidence-based combination), as they require a labeled training set. We therefore examined several basic ensemble combination approaches that do not require parameter estimation from training data. We define Score(Mi,s j) as the (normalized) score which a method Mi gives to word sense s j. The predominant sense calculated by method Mi for word w is then determined by: PS(Mi,w) = argmax sj∈senses(w) Score(Mi,s j) All ensemble methods receive a set {Mi}k i=1 of individual methods to combine, so we denote each ensemble method by MethodName({Mi}k i=1). Direct Voting Each ensemble component has one vote for the predominant sense, and the sense with the most votes is chosen. The scoring function for the voting ensemble is defined as: Score(Voting({Mi}k i=1),s)) = k ∑ i=1 eq[s,PS(Mi,w)] where eq[s,PS(Mi,w)] = 1 if s = PS(Mi,w) 0 otherwise Probability Mixture Each method provides a probability distribution over the senses. These probabilities (normalized scores) are summed, and the sense with the highest score is chosen: Score(ProbMix({Mi}k i=1),s)) = k ∑ i=1 Score(Mi,s) Rank-Based Combination Each method provides a ranking of the senses for a given target word. For each sense, its placements according to each of the methods are summed and the sense with the lowest total placement (closest to first place) wins. Score(Ranking({Mi}k i=1),s)) = k ∑ i=1 (−1)·Placei(s) where Placei(s) is the number of distinct scores that are larger or equal to Score(Mi,s). 
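The three ensembles above can be sketched in a few lines (illustrative code; each component method is assumed to supply a normalised score for every sense of the target word):

```python
def predominant(scores):
    """PS(Mi, w): the sense with the highest score for one method."""
    return max(scores, key=scores.get)

def voting(method_scores):
    """Direct voting: one vote per method for its predominant sense."""
    votes = {}
    for scores in method_scores:
        s = predominant(scores)
        votes[s] = votes.get(s, 0) + 1
    return max(votes, key=votes.get)

def probability_mixture(method_scores):
    """Sum the normalised scores each method assigns to each sense."""
    total = {}
    for scores in method_scores:
        for s, p in scores.items():
            total[s] = total.get(s, 0.0) + p
    return max(total, key=total.get)

def rank_based(method_scores):
    """Sum each sense's placement per method (1 = best) and choose the sense
    with the lowest total placement."""
    placement_sum = {}
    for scores in method_scores:
        for s, p in scores.items():
            # Place_i(s): number of distinct scores >= Score(Mi, s)
            place = len({q for q in scores.values() if q >= p})
            placement_sum[s] = placement_sum.get(s, 0) + place
    return min(placement_sum, key=placement_sum.get)

# Toy example: two methods scoring three senses of a target word.
m1 = {"s1": 0.5, "s2": 0.3, "s3": 0.2}
m2 = {"s1": 0.2, "s2": 0.6, "s3": 0.2}
print(voting([m1, m2]), probability_mixture([m1, m2]), rank_based([m1, m2]))
```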
Arbiter-based Combination One WSD method can act as an arbiter for adjudicating disagreements among component systems. It makes sense for the adjudicator to have reasonable performance on its own. We therefore selected Method Accps Accwsd/ps Similarity 54.9 46.5 SSI 53.5 47.9 Voting 57.3†$ 49.8†$ PrMixture 57.2†$ 50.4†$‡ Rank-based 58.1†$ 50.3†$‡ Arbiter-based 56.3†$ 48.7†$‡ UpperBnd 100 68.4 Table 4: Ensemble Combination Results (†: sig. diff. from Similarity, $: sig. diff. from SSI, ‡: sig. diff. from Voting, p < 0.01) SSI as the arbiter since it had the best accuracy on the WSD task (see Table 2). For each disagreed word w, and for each sense s of w assigned by any of the systems in the ensemble {Mi}k i=1, we calculate the following score: Score(Arbiter({Mi}k i=1),s) = SSIScore∗(s) where SSIScore∗(s) is a modified version of the score introduced in Section 2 which exploits as a context for s the set of agreed senses and the remaining words of each sentence. We exclude from the context used by SSI the senses of w which were not chosen by any of the systems in the ensemble . This effectively reduces the number of senses considered by the arbiter and can positively influence the algorithm’s performance, since it eliminates noise coming from senses which are likely to be wrong. 5 Experiment 2: Ensembles for Unsupervised WSD 5.1 Method and Parameter Settings We assess the performance of the different ensemble systems on the same set of SemCor nouns on which the individual methods were tested. For the best ensemble, we also report results on disambiguating all nouns in the Senseval-3 data set. We focus exclusively on nouns to allow comparisons with the results obtained from SemCor. We used the same parameters as in Experiment 1 for constructing the ensembles. As discussed earlier, token-based methods can disambiguate target words either in context or using the predominant sense. SSI was employed in the predominant sense setting in our arbiter experiment. 5.2 Results Our results are summarized in Table 4. As can be seen, all ensemble methods perform significantly 102 Ensemble Accps Accwsd/ps Rank-based 58.1 50.3 Overlap 57.6 (−0.5) 49.7 (−0.6) LexChains 57.2 (−0.7) 50.2 (−0.1) Similarity 56.3 (−1.8) 49.4 (−0.9) SSI 56.3 (−1.8) 48.2 (−2.1) Table 5: Decrease in accuracy as a result of removal of each method from the rank-based ensemble. better than the best individual methods, i.e., Similarity and SSI. On the WSD task, the voting, probability mixture, and rank-based ensembles significantly outperform the arbiter-based one. The performances of the probability mixture, and rankbased combinations do not differ significantly but both ensembles are significantly better than voting. One of the factors contributing to the arbiter’s worse performance (compared to the other ensembles) is the fact that in many cases (almost 30%), none of the senses suggested by the disagreeing methods is correct. In these cases, there is no way for the arbiter to select the correct sense. We also examined the relative contribution of each component to overall performance. Table 5 displays the drop in performance by eliminating any particular component from the rank-based ensemble (indicated by −). The system that contributes the most to the ensemble is SSI. Interestingly, Overlap and Similarity yield similar improvements in WSD accuracy (0.6 and 0.9, respectively) when added to the ensemble. Figure 1 shows the WSD accuracy of the best single methods and the ensembles as a function of the noun frequency in SemCor. 
We can see that there is at least one ensemble outperforming any single method in every frequency band and that the rank-based ensemble consistently outperforms Similarity and SSI in all bands. Although Similarity has an advantage over SSI for low and medium frequency words, it delivers worse performance for high frequency words. This is possibly due to the quality of neighbors obtained for very frequent words, which are not semantically distinct enough to reliably discriminate between different senses. Table 6 lists the performance of the rank-based ensemble on the Senseval-3 (noun) corpus. We also report results for the best individual method, namely SSI, and compare our results with the best unsupervised system that participated in Senseval-3. The latter was developed by Strapparava et al. (2004) and performs domain-driven disambiguation (IRST-DDD). Specifically, the approach compares the domain of the context surrounding the target word with the domains of its senses and uses a version of WordNet augmented with domain labels (e.g., economy, geography). Figure 1: WSD accuracy (%) as a function of noun frequency in SemCor (frequency bands 1-4, 5-9, 10-19, 20-99 and 100+; curves for Similarity, SSI, Arbiter, Voting, ProbMix and Ranking). Method Precision Recall Fscore Baseline 36.8 36.8 36.8 SSI 62.5 62.5 62.5 IRST-DDD 63.3 62.2 61.2 Rank-based 63.9 63.9 63.9 UpperBnd 68.7 68.7 68.7 Table 6: Results of individual disambiguation algorithms and rank-based ensemble on Senseval-3 nouns. Our baseline selects the first sense randomly and uses it to disambiguate all instances of a target word. Our upper bound defaults to the first sense from SemCor. We report precision, recall and Fscore. In cases where precision and recall figures coincide, the algorithm has 100% coverage. As can be seen, the rank-based ensemble outperforms both SSI and the IRST-DDD system. This is an encouraging result, suggesting that there may be advantages in developing diverse classes of unsupervised WSD algorithms for system combination. The results in Table 6 are higher than those reported for SemCor (see Table 4). This is expected since the Senseval-3 data set contains monosemous nouns as well. Taking solely polysemous nouns into account, SSI's Fscore is 53.39% and the rank-based ensemble's 55.0%. We further note that not all of the components in our ensemble are optimal. Predominant senses for Lesk and LexChains were estimated from the Senseval-3 data; however, a larger corpus would probably yield more reliable estimates. 6 Conclusions and Discussion In this paper we have presented an evaluation study of four well-known approaches to unsupervised WSD. Our comparison involved type- and token-based disambiguation algorithms relying on different kinds of WordNet relations and different amounts of corpus data. Our experiments revealed two important findings. First, type-based disambiguation yields results superior to a token-based approach. Using predominant senses is preferable to disambiguating instances individually, even for token-based algorithms. Second, the outputs of the different approaches examined here are sufficiently diverse to motivate combination methods for unsupervised WSD. We defined several ensembles on the predominant sense outputs of individual methods and showed that combination systems outperformed their best components both on the SemCor and Senseval-3 data sets.
The work described here could be usefully employed in two tasks: (a) to create preliminary annotations, thus supporting the “annotate automatically, correct manually” methodology used to provide high volume annotation in the Penn Treebank project; and (b) in combination with supervised WSD methods that take context into account; for instance, such methods could default to an unsupervised system for unseen words or words with uninformative contexts. In the future we plan to integrate more components into our ensembles. These include not only domain driven disambiguation algorithms (Strapparava et al., 2004) but also graph theoretic ones (Mihalcea, 2005) as well as algorithms that quantify the degree of association between senses and their co-occurring contexts (Mohammad and Hirst, 2006). Increasing the number of components would allow us to employ more sophisticated combination methods such as unsupervised rank aggregation algorithms (Tan and Jin, 2004). Acknowledgements We are grateful to Diana McCarthy for her help with this work and to Michel Galley for making his code available to us. Thanks to John Carroll and Rob Koeling for insightful comments and suggestions. The authors acknowledge the support of EPSRC (Brody and Lapata; grant EP/C538447/1) and the European Union (Navigli; Interop NoE (508011)). References Banerjee, Satanjeev and Ted Pedersen. 2003. Extended gloss overlaps as a measure of semantic relatedness. In Proceedings of the 18th IJCAI. Acapulco, pages 805–810. Briscoe, Ted and John Carroll. 2002. Robust accurate statistical annotation of general text. In Proceedings of the 3rd LREC. Las Palmas, Gran Canaria, pages 1499–1504. Dietterich, T. G. 1997. Machine learning research: Four current directions. AI Magazine 18(4):97–136. Edmonds, Philip. 2000. Designing a task for SENSEVAL-2. Technical note. Florian, Radu, Silviu Cucerzan, Charles Schafer, and David Yarowsky. 2002. Combining classifiers for word sense disambiguation. Natural Language Engineering 1(1):1–14. Galley, Michel and Kathleen McKeown. 2003. Improving word sense disambiguation in lexical chaining. In Proceedings of the 18th IJCAI. Acapulco, pages 1486–1488. Hoste, V´eronique, Iris Hendrickx, Walter Daelemans, and Antal van den Bosch. 2002. Parameter optimization for machine-learning of word sense disambiguation. Language Engineering 8(4):311–325. Lesk, Michael. 1986. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the 5th SIGDOC. New York, NY, pages 24–26. Lin, Dekang. 1998. An information-theoretic definition of similarity. In Proceedings of the 15th ICML. Madison, WI, pages 296–304. McCarthy, Diana, Rob Koeling, Julie Weeds, and John Carroll. 2004. Finding predominant senses in untagged text. In Proceedings of the 42th ACL. Barcelona, Spain, pages 280–287. Mihalcea, Rada. 2005. Unsupervised large-vocabulary word sense disambiguation with graph-based algorithms for sequence data labeling. In Proceedings of the HLT/EMNLP. Vancouver, BC, pages 411–418. Mihalcea, Rada and Phil Edmonds, editors. 2004. Proceedings of the SENSEVAL-3. Barcelona, Spain. Miller, George A., Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. In Proceedings of the ARPA HLT Workshop. Morgan Kaufman, pages 303–308. Mohammad, Saif and Graeme Hirst. 2006. Determining word sense dominance using a thesaurus. In Proceedings of the EACL. Trento, Italy, pages 121–128. Morris, Jane and Graeme Hirst. 1991. 
Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics 1(17):21–43. Navigli, Roberto. 2005. Semi-automatic extension of largescale linguistic knowledge bases. In Proceedings of the 18th FLAIRS. Florida. Navigli, Roberto and Paola Velardi. 2005. Structural semantic interconnections: a knowledge-based approach to word sense disambiguation. PAMI 27(7):1075–1088. Ng, Tou Hwee. 1997. Getting serious about word sense disambiguation. In Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How?. Washington, DC, pages 1–7. Stokoe, Christopher. 2005. Differentiating homonymy and polysemy in information retrieval. In Proceedings of the HLT/EMNLP. Vancouver, BC, pages 403–410. Strapparava, Carlo, Alfio Gliozzo, and Claudio Giuliano. 2004. Word-sense disambiguation for machine translation. In Proceedings of the SENSEVAL-3. Barcelona, Spain, pages 229–234. Tan, Pang-Ning and Rong Jin. 2004. Ordering patterns by combining opinions from multiple sources. In Proceedings of the 10th KDD. Seattle, WA, pages 22–25. van Halteren, Hans, Jakub Zavrel, and Walter Daelemans. 2001. Improving accuracy in word class tagging through combination of machine learning systems. Computational Linguistics 27(2):199–230. Vickrey, David, Luke Biewald, Marc Teyssier, and Daphne Koller. 2005. Word-sense disambiguation for machine translation. In Proceedings of the HLT/EMNLP. Vancouver, BC, pages 771–778. Yarowsky, David and Radu Florian. 2002. Evaluating sense disambiguation across diverse parameter spaces. Natural Language Engineering 9(4):293–310. 104 | 2006 | 13 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1033–1040, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Robust PCFG-Based Generation using Automatically Acquired LFG Approximations Aoife Cahill1 and Josef van Genabith1,2 1 National Centre for Language Technology (NCLT) School of Computing, Dublin City University, Dublin 9, Ireland 2 Center for Advanced Studies, IBM Dublin, Ireland {acahill,josef}@computing.dcu.ie Abstract We present a novel PCFG-based architecture for robust probabilistic generation based on wide-coverage LFG approximations (Cahill et al., 2004) automatically extracted from treebanks, maximising the probability of a tree given an f-structure. We evaluate our approach using stringbased evaluation. We currently achieve coverage of 95.26%, a BLEU score of 0.7227 and string accuracy of 0.7476 on the Penn-II WSJ Section 23 sentences of length ≤20. 1 Introduction Wide coverage grammars automatically extracted from treebanks are a corner-stone technology in state-of-the-art probabilistic parsing. They achieve robustness and coverage at a fraction of the development cost of hand-crafted grammars. It is surprising to note that to date, such grammars do not usually figure in the complementary operation to parsing – natural language surface realisation. Research on statistical natural language surface realisation has taken three broad forms, differing in where statistical information is applied in the generation process. Langkilde (2000), for example, uses n-gram word statistics to rank alternative output strings from symbolic hand-crafted generators to select paths in parse forest representations. Bangalore and Rambow (2000) use n-gram word sequence statistics in a TAG-based generation model to rank output strings and additional statistical and symbolic resources at intermediate generation stages. Ratnaparkhi (2000) uses maximum entropy models to drive generation with word bigram or dependency representations taking into account (unrealised) semantic features. Valldal and Oepen (2005) present a discriminative disambiguation model using a hand-crafted HPSG grammar for generation. Belz (2005) describes a method for building statistical generation models using an automatically created generation treebank for weather forecasts. None of these probabilistic approaches to NLG uses a full treebank grammar to drive generation. Bangalore et al. (2001) investigate the effect of training size on performance while using grammars automatically extracted from the PennII Treebank (Marcus et al., 1994) for generation. Using an automatically extracted XTAG grammar, they achieve a string accuracy of 0.749 on their test set. Nakanishi et al. (2005) present probabilistic models for a chart generator using a HPSG grammar acquired from the Penn-II Treebank (the Enju HPSG). They investigate discriminative disambiguation models following Valldal and Oepen (2005) and their best model achieves coverage of 90.56% and a BLEU score of 0.7723 on Penn-II WSJ Section 23 sentences of length ≤20. In this paper we present a novel PCFG-based architecture for probabilistic generation based on wide-coverage, robust Lexical Functional Grammar (LFG) approximations automatically extracted from treebanks (Cahill et al., 2004). In Section 2 we briefly describe LFG (Kaplan and Bresnan, 1982). Section 3 presents our generation architecture. 
Section 4 presents evaluation results on the Penn-II WSJ Section 23 test set using string-based metrics. Section 5 compares our approach with alternative approaches in the literature. Section 6 concludes and outlines further research. 2 Lexical Functional Grammar Lexical Functional Grammar (LFG) (Kaplan and Bresnan, 1982) is a constraint-based theory of grammar. It (minimally) posits two levels of representation, c(onstituent)-structure and f(unctional)structure. C-structure is represented by contextfree phrase-structure trees, and captures surface 1033 S ↑=↓ NP VP (↑SUBJ)= ↓ ↑=↓ NNP V SBAR ↑=↓ ↑=↓ (↑COMP)= ↓ They believe S (↑PRED) = ‘pro’ (↑PRED) = ‘believe’ ↑=↓ (↑NUM) = PL (↑TENSE) = present (↑PERS) = 3 NP VP (↑SUBJ)= ↓ ↑=↓ NNP V ↑=↓ ↑=↓ John resigned (↑PRED) = ‘John’ (↑PRED) = ‘resign’ (↑NUM) = SG (↑TENSE) = PAST (↑PERS) = 3 f1: PRED ‘BELIEVE⟨(↑SUBJ)(↑COMP)⟩’ SUBJ f2: PRED ‘PRO’ NUM PL PERS 3 COMP f3: SUBJ f4: PRED ‘JOHN’ NUM SG PERS 3 PRED RESIGN⟨(↑SUBJ)⟩’ TENSE PAST TENSE PRESENT Figure 1: C- and f-structures for the sentence They believe John resigned. grammatical configurations such as word order. The nodes in the trees are annotated with functional equations (attribute-value structure constraints) which are resolved to produce an fstructure. F-structures are recursive attributevalue matrices, representing abstract syntactic functions. F-structures approximate to basic predicate-argument-adjunct structures or dependency relations. Figure 1 shows the c- and fstructures for the sentence “They believe John resigned”. 3 PCFG-Based Generation for Treebank-Based LFG Resources Cahill et al. (2004) present a method to automatically acquire wide-coverage robust probabilistic LFG approximations1 from treebanks. The method is based on an automatic f-structure annotation algorithm that associates nodes in treebank trees with f-structure equations. For each tree, the equations are collected and passed on to a constraint solver which produces an f-structure for the tree. Cahill et al. (2004) present two parsing architectures: the pipeline and the integrated parsing architecture. In the pipeline architecture, a PCFG (or a history-based lexicalised generative parser) is extracted from the treebank and used to parse unseen text into trees, the resulting trees are annotated with f-structure equations by the f-structure annotation algorithm and a constraint solver produces an f-structure. In the in1The resources are approximations in that (i) they do not enforce LFG completeness and coherence constraints and (ii) PCFG-based models can only approximate LFG and similar constraint-based formalisms (Abney, 1997). tegrated architecture, first the treebank trees are automatically annotated with f-structure information, f-structure annotated PCFGs with rules of the form NP(↑OBJ=↓)→DT(↑=↓) NN(↑=↓) are extracted, syntactic categories followed by equations are treated as monadic CFG categories during grammar extraction and parsing, unseen text is parsed into trees with f-structure annotations, the annotations are collected and a constraint solver produces an f-structure. The generation architecture presented here builds on the integrated parsing architecture resources of Cahill et al. (2004). 
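To illustrate the kind of input representation involved, the f-structure on the right of Figure 1 could be encoded as a nested attribute-value structure roughly as follows (a sketch; the attribute names follow the figure, but the encoding itself is not the authors' data format):

```python
# Illustrative encoding of the f-structure in Figure 1: grammatical functions
# (SUBJ, COMP) embed further f-structures, atomic features carry plain values.
f_structure = {
    "PRED": "believe<SUBJ,COMP>",
    "TENSE": "present",
    "SUBJ": {"PRED": "pro", "NUM": "pl", "PERS": 3},
    "COMP": {
        "PRED": "resign<SUBJ>",
        "TENSE": "past",
        "SUBJ": {"PRED": "John", "NUM": "sg", "PERS": 3},
    },
}

GRAMMATICAL_FUNCTIONS = {"SUBJ", "OBJ", "COMP", "ADJUNCT"}

def local_features(fstr):
    """The set of features at one level of the f-structure; this is the set
    that the generation rules described below are conditioned on."""
    return set(fstr.keys())

def subsidiary_fstructures(fstr):
    """(grammatical function, sub-f-structure) pairs embedded at this level."""
    return [(gf, v) for gf, v in fstr.items()
            if gf in GRAMMATICAL_FUNCTIONS and isinstance(v, dict)]

print(local_features(f_structure))   # e.g. {'PRED', 'TENSE', 'SUBJ', 'COMP'}
print([gf for gf, _ in subsidiary_fstructures(f_structure["COMP"])])  # ['SUBJ']
```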
The generation process takes an f-structure (such as the f-structure on the right in Figure 1) as input and outputs the most likely f-structure annotated tree (such as the tree on the left in Figure 1) given the input fstructure argmaxTreeP(Tree|F-Str) where the probability of a tree given an fstructure is decomposed as the product of the probabilities of all f-structure annotated productions contributing to the tree but where in addition to conditioning on the LHS of the production (as in the integrated parsing architecture of Cahill et al. (2004)) each production X →Y is now also conditioned on the set of f-structure features Feats φ-linked2 to the LHS of the rule. For an f-structure annotated tree Tree and f-structure F-Str with Φ(Tree)=F-Str:3 2φ links LFG’s c-structure to f-structure in terms of manyto-one functions from tree nodes into f-structure. 3Φ resolves the equations in Tree into F-Str (if satisfiable) in terms of the piece-wise function φ. 1034 Conditioning F-Structure Features Grammar Rules Probability {PRED, SUBJ, COMP, TENSE} VP(↑=↓) →VBD(↑=↓) SBAR(↑COMP=↓) 0.4998 {PRED, SUBJ, COMP, TENSE} VP(↑=↓) →VBP(↑=↓) SBAR(↑COMP=↓) 0.0366 {PRED, SUBJ, COMP, TENSE} VP(↑=↓) →VBD(↑=↓) , S(↑COMP=↓) 6.48e-6 {PRED, SUBJ, COMP, TENSE} VP(↑=↓) →VBD(↑=↓) S(↑COMP=↓) 3.88e-6 {PRED, SUBJ, COMP, TENSE} VP(↑=↓) →VBP(↑=↓) , SBARQ(↑COMP=↓) 7.86e-7 {PRED, SUBJ, COMP, TENSE} VP(↑=↓) →VBD(↑=↓) SBARQ(↑COMP=↓) 1.59e-7 Table 1: Example VP Generation rules automatically extracted from Sections 02–21 of the Penn-II Treebank P(Tree|F-Str) := Y X →Y in Tree φ(X) = Feats P(X →Y |X, Feats) (1) P(X →Y |X, Feats) = P(X →Y, X, Feats) P(X, Feats) = (2) P(X →Y, Feats) P(X, Feats) ≈#(X →Y, Feats) #(X →. . . , Feats) (3) and where probabilities are estimated using a simple MLE and rule counts (#) from the automatically f-structure annotated treebank resource of Cahill et al. (2004). Lexical rules (rules expanding preterminals) are conditioned on the full set of (atomic) feature-value pairs φ-linked to the RHS. The intuition for conditioning rules in this way is that local f-structure components of the input f-structure drive the generation process. This conditioning effectively turns the f-structure annotated PCFGs of Cahill et al. (2004) into probabilistic generation grammars. For example, in Figure 1 (where φ-links are represented as arrows), we automatically extract the rule S(↑=↓) → NP(↑SUBJ=↓) VP(↑=↓) conditioned on the feature set {PRED,SUBJ,COMP,TENSE}. The probability of the rule is then calculated by counting the number of occurrences of that rule (and the associated set of features), divided by the number of occurrences of rules with the same LHS and set of features. Table 1 gives example VP rule expansions with their probabilities when we train a grammar from Sections 02–21 of the Penn Treebank. 3.1 Chart Generation Algorithm The generation algorithm is based on chart generation as first introduced by Kay (1996) with Viterbi-pruning. The generation grammar is first converted into Chomsky Normal Form (CNF). We recursively build a chart-like data structure in a bottom-up fashion. In contrast to packing of locally equivalent edges (Carroll and Oepen, 2005), in our approach if two chart items have equivalent rule left-hand sides and lexical coverage, only the most probable one is kept. Each grammatical function-labelled (sub-)f-structure in the overall fstructure indexes a (sub-)chart. 
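A sketch of the estimation step in (3), treating the automatically f-structure annotated treebank as a stream of (LHS, RHS, conditioning feature set) rule occurrences; this is illustrative code rather than the authors' implementation:

```python
from collections import Counter

def estimate_generation_grammar(rule_occurrences):
    """MLE over f-structure annotated productions. Each occurrence is a
    (lhs, rhs, feats) triple, where feats is a frozenset of the f-structure
    features phi-linked to the LHS, e.g.
    ('VP(up=down)', ('VBD(up=down)', 'SBAR(up-COMP=down)'),
     frozenset({'PRED', 'SUBJ', 'COMP', 'TENSE'}))."""
    joint = Counter()      # counts of (lhs, feats, rhs)
    marginal = Counter()   # counts of (lhs, feats)
    for lhs, rhs, feats in rule_occurrences:
        joint[(lhs, feats, rhs)] += 1
        marginal[(lhs, feats)] += 1
    return {
        (lhs, feats, rhs): count / marginal[(lhs, feats)]
        for (lhs, feats, rhs), count in joint.items()
    }

def rule_probability(grammar, lhs, rhs, feats):
    """P(X -> Y | X, Feats) for an f-structure annotated production."""
    return grammar.get((lhs, feats, rhs), 0.0)
```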
The chart for each f-structure generates the most probable tree for that f-structure, given the internal set of conditioning f-structure features and its grammatical function label. At each level, grammatical function indexed charts are initially unordered. Charts are linearised by generation grammar rules once the charts themselves have produced the most probable tree for the chart. Our example in Figure 1 generates the following grammatical function indexed, embedded and (at each level of embedding) unordered (sub-)chart configuration: SUBJ f : 2 COMP f :3 SUBJ f : 4 TOP f : 1 For each local subchart, the following algorithm is applied: Add lexical rules While subchart is Changing Apply unary productions Apply binary productions Propagate compatible rules 3.2 A Worked Example As an example, we step through the construction of the COMP-indexed chart at level f3 of the f-structure in Figure 1. For lexical rules, we check the feature set at the sub-f-structure level and the values of the features. Only features associated with lexical material are considered. The SUBJ-indexed sub-chart f4 is constructed by first adding the rule NNP(↑=↓) → John(↑PRED=‘John’,↑NUM=pl,↑PERS=3). If more than one lexical rule corresponds to a particular set of features and values in the f-structure, we add all rules with different LHS categories. If two or more 1035 rules with equal LHS categories match the feature set, we only add the most probable one. Unary productions are applied if the RHS of the unary production matches the LHS of an item already in the chart and the feature set of the unary production matches the conditioning feature set of the local sub-f-structure. In our example, this results in the rule NP(↑SUBJ=↓) →NNP(↑=↓), conditioned on {NUM, PERS, PRED}, being added to the sub-chart at level f4 (the probability associated with this item is the probability of the rule multiplied by the probability of the previous chart item which combines with the new rule). When a rule is added to the chart, it is automatically associated with the yield of the rule, allowing us to propagate chunks of generated material upwards in the chart. If two items in the chart have the same LHS (and the same yield independent of word order), only the item with the highest probability is kept. This Viterbi-style pruning ensures that processing is efficient. At sub-chart f4 there are no binary rules that can be applied. At this stage, it is not possible to add any more items to the sub-chart, therefore we propagate items in the chart that are compatible with the sub-chart index SUBJ. In our example, only the rule NP(↑SUBJ=↓) →NNP(↑=↓) (which yields the string John) is propagated to the next level up in the overall chart for consideration in the next iteration. If the yield of an item being propagated upwards in the chart is subsumed by an element already at that level, the subsumed item is removed. This results in efficiently treating the well known problem originally described in Kay (1996), where one unnecessarily retains sub-optimal strings. For example, generating the string “The very tall strong athletic man”, one does not want to keep variations such as “The very tall man”, or “The athletic man”, if one can generate the entire string. Our method ensures that only the most probable tree with the longest yield will be propagated upwards. The COMP-indexed chart at level f3 of the fstructure is constructed in a similar fashion. First the lexical rule V(↑=↓) →resigned is added. 
Next, conditioning on {PRED, SUBJ, TENSE}, the unary rule VP(↑=↓) →V(↑=↓) (with yield resigned) is added. We combine the new VP(↑=↓) rule with the NP(↑SUBJ=↓) already present from the previous iteration to enable us to add the rule S(↑=↓) →NP(↑SUBJ=↓) VP(↑=↓), conditioned on {PRED, SUBJ, TENSE}. The yield of this rule is John resigned. Next, conditioning on the same feature set, we add the rule SBAR(↑comp=↓) → S(↑=↓) with yield John resigned to the chart. It is not possible to add any more new rules, so at this stage, only the SBAR(↑COMP=↓) rule with yield John resigned is propagated up to the next level. The process continues until at the outermost level of the f-structure, there are no more rules to be added to the chart. At this stage, we search for the most probable rule with TOP as its LHS category and return the yield of this rule as the output of the generation process. Generation fails if there is no rule with LHS TOP at this level in the chart. 3.3 Lexical Smoothing Currently, the only smoothing in the system applies at the lexical level. Our backoff uses the built-in lexical macros4 of the automatic fstructure annotation algorithm of Cahill et al. (2004) to identify potential part-of-speech categories corresponding to a particular set of features. Following Baayen and Sproat (1996) we assume that unknown words have a probability distribution similar to hapax legomena. We add a lexical rule for each POS tag that corresponds to the fstructure features at that level to the chart with a probability computed from the original POS tag probability distribution multiplied by a very small constant. This means that lexical rules seen during training have a much higher probability than lexical rules added during the smoothing phase. Lexical smoothing has the advantage of boosting coverage (as shown in Tables 3, 4, 5 and 6 below) but slightly degrades the quality of the strings generated. We believe that the tradeoff in terms of quality is worth the increase in coverage. Smoothing is not carried out when there is no suitable phrasal grammar rule that applies during the process of generation. This can lead to the generation of partial strings, since some f-structure components may fail to generate a corresponding string. In such cases, generation outputs the concatenation of the strings generated by the remaining components. 4 Experiments We train our system on WSJ Sections 02–21 of the Penn-II Treebank and evaluate against the raw 4The lexical macros associate POS tags with sets of features, for example the tag NNS (plural noun) is associated with the features ↑PRED=$LEMMA and ↑NUM=pl. 1036 S. length ≤20 ≤25 ≤30 ≤40 all Training 16667 23597 29647 36765 39832 Test 1034 1464 1812 2245 2416 Table 2: Number of training and test sentences per sentence length strings from Section 23. We use Section 22 as our development set. As part of our evaluation, we experiment with sentences of varying length (20, 25, 30, 40, all), both in training and testing. Table 2 gives the number of training and test sentences for each sentence length. In each case, we use the automatically generated f-structures from Cahill et al. (2004) from the original Section 23 treebank trees as f-structure input to our generation experiments. We automatically mark adjunct and coordination scope in the input f-structure. Notice that these automatically generated f-structures are not “perfect”, i.e. 
they are not guaranteed to be complete and coherent (Kaplan and Bresnan, 1982): a local f-structure may contain material that is not supposed to be there (incoherence) and/or may be missing material that is supposed to be there (incompleteness). The results presented below show that our method is robust with respect to the quality of the f-structure input and will always attempt to generate partial output rather than fail. We consider this an important property as pristine generation input cannot always be guaranteed in realistic application scenarios, such as probabilistic transfer-based machine translation where generation input may contain a certain amount of noise. 4.1 Pre-Training Treebank Transformations During the development of the generation system, we carried out error analysis on our development set WSJ Section 22 of the Penn-II Treebank. We identified some initial pre-training transformations to the treebank that help generation. Punctuation: Punctuation is not usually encoded in f-structure representations. Because our architecture is completely driven by rules conditioned by f-structure information automatically extracted from an f-structure annotated treebank, its placement of punctuation is not principled. This led to anomalies such as full stops appearing mid sentence and quotation marks appearing in undesired locations. One partial solution to this was to reduce the amount of punctuation that the system trained on. We removed all punctuation apart from commas and full stops from the training data. We did not remove any punctuation from the evaluation test set (Section 23), but our system will ever only produce commas and full stops. In the evaluation (Tables 3, 4, 5 and 6) we are penalised for the missing punctuation. To solve the problem of full stops appearing mid sentence, we carry out a punctuation post-processing step on all generated strings. This removes mid-sentence full stops and adds missing full stops at the end of generated sentences prior to evaluation. We are working on a more appropriate solution allowing the system to generate all punctuation. Case: English does not have much case marking, and for parsing no special treatment was encoded. However, when generating, it is very important that the first person singular pronoun is I in the nominative case and me in the accusative. Given the original grammar used in parsing, our generation system was not able to distinguish nominative from accusative contexts. The solution we implemented was to carry out a grammar transformation in a pre-processing step, to automatically annotate personal pronouns with their case information. This resulted in phrasal and lexical rules such as NP(↑SUBJ) →PRPˆnom(↑=↓) and PRPˆnom(↑=↓) →I and greatly improved the accuracy of the pronouns generated. 4.2 String-Based Evaluation We evaluate the output of our generation system against the raw strings of Section 23 using the Simple String Accuracy and BLEU (Papineni et al., 2002) evaluation metrics. Simple String Accuracy is based on the string edit distance between the output of the generation system and the gold standard sentence. BLEU is the weighted average of n-gram precision against the gold standard sentences. We also measure coverage as the percentage of input f-structures that generate a string. For evaluation, we automatically expand all contracted words. We only evaluate strings produced by the system (similar to Nakanishi et al. (2005)). We conduct a total of four experiments. 
The parameters we investigate are lexical smoothing (Section 3.3) and partial output. Partial output is a robustness feature for cases where a sub-fstructure component fails to generate a string and the system outputs a concatenation of the strings generated by the remaining components, rather than fail completely. 1037 Sentence length of Evaluation Section 23 Sentences of length: Training Data Metric ≤20 ≤25 ≤30 ≤40 all ≤20 BLEU 0.6812 0.6601 0.6373 0.6013 0.5793 String Accuracy 0.7274 0.7052 0.6875 0.6572 0.6431 Coverage 96.52 95.83 94.59 93.76 93.92 ≤25 BLEU 0.6915 0.6800 0.6696 0.6396 0.6233 String Accuracy 0.7262 0.7095 0.6983 0.6731 0.6618 Coverage 96.52 95.83 94.59 93.76 93.92 ≤30 BLEU 0.6979 0.6881 0.6792 0.6576 0.6445 String Accuracy 0.7317 0.7169 0.7075 0.6853 0.6749 Coverage 97.97 97.95 97.41 97.15 97.31 ≤40 BLEU 0.7045 0.6951 0.6852 0.6715 0.6605 String Accuracy 0.7349 0.7212 0.7074 0.6881 0.6788 Coverage 98.45 98.36 98.01 97.82 97.93 all BLEU 0.7077 0.6974 0.6859 0.6734 0.6651 String Accuracy 0.7373 0.7221 0.7087 0.6894 0.6808 Coverage 98.65 98.5 98.12 97.95 98.05 Table 3: Generation +partial output +lexical smoothing Sentence length of Evaluation Section 23 Sentences of length: Training Data Metric ≤20 ≤25 ≤30 ≤40 all all BLEU 0.6253 0.6097 0.5887 0.5730 0.5590 String Accuracy 0.6886 0.6688 0.6513 0.6317 0.6207 Coverage 91.20 91.19 90.84 90.33 90.11 Table 4: Generation +partial output -lexical smoothing Varying the length of the sentences included in the training data (Tables 3 and 5) shows that results improve (both in terms of coverage and string quality) as the length of sentence included in the training data increases. Tables 3 and 5 give the results for the experiments including lexical smoothing and varying partial output. Table 3 (+partial, +smoothing) shows that training on sentences of all lengths and evaluating all strings (including partial outputs), our system achieves coverage of 98.05%, a BLEU score of 0.6651 and string accuracy of 0.6808. Table 5 (-partial, +smoothing) shows that coverage drops to 89.49%, BLEU score increases to 0.6979 and string accuracy to 0.7012, when the system is trained on sentences of all lengths. Similarly, for strings ≤20, coverage drops from 98.65% to 95.26%, BLEU increases from 0.7077 to 0.7227 and String Accuracy from 0.7373 to 0.7476. Including partial output increases coverage (by more than 8.5 percentage points for all sentences) and hence robustness while slightly decreasing quality. Tables 3 (+partial, +smoothing) and 4 (+partial, -smoothing) give results for the experiments including partial output but varying lexical smoothing. With no lexical smoothing (Table 4), the system (trained on all sentence lengths) produces strings for 90.11% of the input f-structures and achieves a BLEU score of 0.5590 and string accuracy of 0.6207. Switching off lexical smoothing has a negative effect on all evaluation metrics (coverage and quality), because many more strings produced are now partial (since for PRED values unseen during training, no lexical entries are added to the chart). Comparing Tables 5 (-partial, +smoothing) and 6 (-partial, -smoothing), where the system does not produce any partial outputs and lexical smoothing is varied, shows that training on all sentence lengths, BLEU score increases from 0.6979 to 0.7147 and string accuracy increases from 0.7012 to 0.7192. At the same time, coverage drops dramatically from 89.49% (Table 5) to 47.60% (Table 6). 
Comparing Tables 4 and 6 shows that while partial output almost doubles coverage, this comes at a price of a severe drop in quality (BLEU score drops from 0.7147 to 0.5590). On the other hand, comparing Tables 5 and 6 shows that lexical smoothing achieves a similar increase in coverage with only a very slight drop in quality.

                                     Sentence length of evaluation (Section 23)
Training Data   Metric               ≤20      ≤25      ≤30      ≤40      all
≤20             BLEU                 0.7326   0.7185   0.7165   0.7082   0.7052
                String Accuracy      0.76     0.7428   0.7363   0.722    0.7175
                Coverage             85.49    81.56    77.26    71.94    69.08
≤25             BLEU                 0.7300   0.7235   0.7218   0.7118   0.7077
                String Accuracy      0.7517   0.7382   0.7315   0.7172   0.7116
                Coverage             89.65    87.77    84.38    80.31    78.56
≤30             BLEU                 0.7207   0.7125   0.7107   0.6991   0.6946
                String Accuracy      0.747    0.7336   0.7275   0.711    0.7045
                Coverage             93.23    92.14    89.74    86.59    85.18
≤40             BLEU                 0.7221   0.7140   0.7106   0.7016   0.6976
                String Accuracy      0.746    0.7331   0.7236   0.7072   0.7001
                Coverage             94.58    93.85    91.89    89.62    88.33
all             BLEU                 0.7227   0.7145   0.7095   0.7011   0.6979
                String Accuracy      0.7476   0.7331   0.7239   0.7077   0.7012
                Coverage             95.26    94.40    92.55    90.69    89.49

Table 5: Generation -partial output +lexical smoothing

                                     Sentence length of evaluation (Section 23)
Training Data   Metric               ≤20      ≤25      ≤30      ≤40      all
all             BLEU                 0.7272   0.7237   0.7201   0.7160   0.7147
                String Accuracy      0.7547   0.7436   0.7361   0.7237   0.7192
                Coverage             61.99    57.38    53.64    47.60    47.60

Table 6: Generation -partial output -lexical smoothing

5 Discussion

Nakanishi et al. (2005) achieve 90.56% coverage and a BLEU score of 0.7723 on Section 23 sentences, restricted to length ≤20 for efficiency reasons. Langkilde-Geary’s (2002) best system achieves 82.8% coverage, a BLEU score of 0.924 and string accuracy of 0.945 against Section 23 sentences of all lengths. Callaway (2003) achieves 98.7% coverage and a string accuracy of 0.6607 on sentences of all lengths. Our best results for sentences of length ≤20 are coverage of 95.26%, BLEU score of 0.7227 and string accuracy of 0.7476. For all sentence lengths, our best results are coverage of 89.49%, a BLEU score of 0.6979 and string accuracy of 0.7012.

Using hand-crafted grammar-based generation systems (Langkilde-Geary, 2002; Callaway, 2003), it is possible to achieve very high results. However, hand-crafted systems are expensive to construct and not easily ported to new domains or other languages. Our methodology, on the other hand, is based on resources automatically acquired from treebanks and easily ported to new domains and languages, simply by retraining on suitable data. Recent work on the automatic acquisition of multilingual LFG resources from treebanks for Chinese, German and Spanish (Burke et al., 2004; Cahill et al., 2005; O’Donovan et al., 2005) has shown that given a suitable treebank, it is possible to automatically acquire high quality LFG resources in a very short space of time. The generation architecture presented here is easily ported to those different languages and treebanks.

6 Conclusion and Further Work

We present a new architecture for stochastic LFG surface realisation using the automatically annotated treebanks and extracted PCFG-based LFG approximations of Cahill et al. (2004). Our model maximises the probability of a tree given an f-structure, supporting a simple and efficient implementation that scales to wide-coverage treebank-based resources. An improved model would maximise the probability of a string given an f-structure by summing over trees with the same yield.
More research is required to implement such a model efficiently using packed representations (Carroll and Oepen, 2005). Simple PCFGbased models, while effective and computationally efficient, can only provide approximations to LFG and similar constraint-based formalisms (Abney, 1997). Research on discriminative disambiguation methods (Valldal and Oepen, 2005; Nakanishi et al., 2005) is important. Kaplan and Wedekind (2000) show that for certain linguistically interesting classes of LFG (and PATR etc.) grammars, generation from f-structures yields a context free language. Their proof involves the notion of a 1039 “refinement” grammar where f-structure information is compiled into CFG rules. Our probabilistic generation grammars bear a conceptual similarity to Kaplan and Wedekind’s “refinement” grammars. It would be interesting to explore possible connections between the treebank-based empirical work presented here and the theoretical constructs in Kaplan and Wedekind’s proofs. We presented a full set of generation experiments on varying sentence lengths training on Sections 02–21 of the Penn Treebank and evaluating on Section 23 strings. Sentences of length ≤20 achieve coverage of 95.26%, BLEU score of 0.7227 and string accuracy of 0.7476 against the raw Section 23 text. Sentences of all lengths achieve coverage of 89.49%, BLEU score of 0.6979 and string accuracy of 0.7012. Our method is robust and can cope with noise in the f-structure input to generation and will attempt to produce partial output rather than fail. Acknowledgements We gratefully acknowledge support from Science Foundation Ireland grant 04/BR/CS0370 for the research reported in this paper. References Stephen Abney. 1997. Stochastic Attribute-Value Grammars. Computational Linguistics, 23(4):597–618. Harald Baayen and Richard Sproat. 1996. Estimating lexical priors for low-frequency morphologically ambiguous forms. Computational Linguistics, 22(2):155–166. Srinivas Bangalore and Owen Rambow. 2000. Exploiting a probabilistic hierarchical model for generation. In Proceedings of COLING 2000, pages 42–48, Saarbrcken, Germany. Srinivas Bangalore, John Chen, and Owen Rambow. 2001. Impact of quality and quantity of corpora on stochastic generation. In Proceedings of EMNLP 2001, pages 159– 166. Anja Belz. 2005. Statistical generation: Three methods compared and evaluated. In Proceedings of the 10th European Workshop on Natural Language Generation (ENLG’ 05), pages 15–23, Aberdeen, Scotland. Michael Burke, Olivia Lam, Rowena Chan, Aoife Cahill, Ruth O’Donovan, Adams Bodomo, Josef van Genabith, and Andy Way. 2004. Treebank-Based Acquisition of a Chinese Lexical-Functional Grammar. In Proceedings of the 18th Pacific Asia Conference on Language, Information and Computation, pages 161–172, Tokyo, Japan. Aoife Cahill, Michael Burke, Ruth O’Donovan, Josef van Genabith, and Andy Way. 2004. Long-Distance Dependency Resolution in Automatically Acquired WideCoverage PCFG-Based LFG Approximations. In Proceedings of ACL-04, pages 320–327, Barcelona, Spain. Aoife Cahill, Martin Forst, Michael Burke, Mairead McCarthy, Ruth O’Donovan, Christian Rohrer, Josef van Genabith, and Andy Way. 2005. Treebank-based acquisition of multilingual unification grammar resources. Journal of Research on Language and Computation; Special Issue on “Shared Representations in Multilingual Grammar Engineering”, pages 247–279. Charles B. Callaway. 2003. Evaluating coverage for large symbolic NLG grammars. 
In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, pages 811–817, Acapulco, Mexico. John Carroll and Stephan Oepen. 2005. High efficiency realization for a wide-coverage unification grammar. In Proceedings of IJCNLP05, pages 165–176, Jeju Island, Korea. Ron Kaplan and Joan Bresnan. 1982. Lexical Functional Grammar, a Formal System for Grammatical Representation. In Joan Bresnan, editor, The Mental Representation of Grammatical Relations, pages 173–281. MIT Press, Cambridge, MA. Ron Kaplan and Juergen Wedekind. 2000. LFG Generation produces Context-free languages. In Proceedings of COLING 2000, pages 141–148, Saarbruecken, Germany. Martin Kay. 1996. Chart Generation. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 200–204, Santa Cruz, CA. Irene Langkilde-Geary. 2002. An empirical verification of coverage and correctness for a general-purpose sentence generator. In Second International Natural Language Generation Conference, pages 17–24, Harriman, NY. Irene Langkilde. 2000. Forest-based statistical sentence generation. In Proceedings of NAACL 2000, pages 170–177, Seattle, WA. Mitchell Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. 1994. The Penn Treebank: Annotating Predicate Argument Structure. In Proceedings of the ARPA Workshop on Human Language Technology, pages 110–115, Princton, NJ. Hiroko Nakanishi, Yusuke Miyao, and Jun’ichi Tsujii. 2005. Probabilistic models for disambiguation of an HPSGbased chart generator. In Proceedings of the International Workshop on Parsing Technology, Vancouver, Canada. Ruth O’Donovan, Aoife Cahill, Josef van Genabith, and Andy Way. 2005. Automatic Acquisition of Spanish LFG Resources from the CAST3LB Treebank. In Proceedings of LFG 05, pages 334–352, Bergen, Norway. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of ACL 2002, pages 311–318, Philadelphia, PA. Adwait Ratnaparkhi. 2000. Trainable methods for natural language generation. In Proceedings of NAACL 2000, pages 194–201, Seattle, WA. Erik Valldal and Stephan Oepen. 2005. Maximum Entropy Models for Realization Reranking. In Proceedings of the 10th Machine Translation Summit, pages 109–116, Phuket, Thailand. 1040 | 2006 | 130 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1041–1048, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Incremental generation of spatial referring expressions in situated dialog∗ John D. Kelleher Dublin Institute of Technology Dublin, Ireland [email protected] Geert-Jan M. Kruijff DFKI GmbH Saarbr¨ucken, Germany [email protected] Abstract This paper presents an approach to incrementally generating locative expressions. It addresses the issue of combinatorial explosion inherent in the construction of relational context models by: (a) contextually defining the set of objects in the context that may function as a landmark, and (b) sequencing the order in which spatial relations are considered using a cognitively motivated hierarchy of relations, and visual and discourse salience. 1 Introduction Our long-term goal is to develop conversational robots with whom we can interact through natural, fluent, visually situated dialog. An inherent aspect of visually situated dialog is reference to objects located in the physical environment (Moratz and Tenbrink, 2006). In this paper, we present a computational approach to the generation of spatial locative expressions in such situated contexts. The simplest form of locative expression is a prepositional phrase, modifying a noun phrase to locate an object. (1) illustrates the type of locative we focus on generating. In this paper we use the term target (T) to refer to the object that is being located by a spatial expression and the term landmark (L) to refer to the object relative to which the target’s location is described. (1) a. the book [T] on the table [L] Generating locative expressions is part of the general field of generating referring expressions (GRE). Most GRE algorithms deal with the same problem: given a domain description and a target object, generate a description of the target object that distinguishes it from the other objects in the domain. We use distractor objects to indicate the ∗The research reported here was supported by the CoSy project, EU FP6 IST ”Cognitive Systems” FP6-004250-IP. objects in the context excluding the target that at a given point in processing fulfill the description of the target object that has been generated. The description generated is said to be distinguishing if the set of distractor objects is empty. Several GRE algorithms have addressed the issue of generating locative expressions (Dale and Haddock, 1991; Horacek, 1997; Gardent, 2002; Krahmer and Theune, 2002; Varges, 2004). However, all these algorithms assume the GRE component has access to a predefined scene model. For a conversational robot operating in dynamic environments this assumption is unrealistic. If a robot wishes to generate a contextually appropriate reference it cannot assume the availability of a fixed scene model, rather it must dynamically construct one. However, constructing a model containing all the relationships between all the entities in the domain is prone to combinatorial explosion, both in terms of the number objects in the context (the location of each object in the scene must be checked against all the other objects in the scene) and number of inter-object spatial relations (as a greater number of spatial relations will require a greater number of comparisons between each pair of objects).1 Also, the context free a priori construction of such an exhaustive scene model is cognitively implausible. 
[1] In English, the vast majority of spatial locatives are binary; some notable exceptions include between, amongst, etc. However, we will not deal with these exceptions in this paper.

Psychological research indicates that spatial relations are not preattentively perceptually available (Treisman and Gormican, 1988); their perception requires attention (Logan, 1994; Logan, 1995). Subjects appear to construct contextually dependent reduced relational scene models, not exhaustive context-free models.

Contributions We present an approach to incrementally generating locative expressions. It addresses the issue of combinatorial explosion inherent in relational scene model construction by incrementally creating a series of reduced scene models. Within each scene model only one spatial relation is considered and only a subset of objects are considered as candidate landmarks. This reduces both the number of relations that must be computed over each object pair and the number of object pairs. The decision as to which relations should be included in each scene model is guided by a cognitively motivated hierarchy of spatial relations. The set of candidate landmarks in a given scene is dependent on the set of objects in the scene that fulfil the description of the target object and the relation that is being considered.

Overview §2 presents some relevant background data. §3 presents our GRE approach. §4 illustrates the framework on a worked example and expands on some of the issues relevant to the framework. We end with conclusions.

2 Data

If we consider that English has more than eighty spatial prepositions (omitting compounds such as right next to) (Landau, 1996), the combinatorial aspect of relational scene model construction becomes apparent. It should be noted that for our purposes, the situation is somewhat easier because a distinction can be made between static and dynamic prepositions: static prepositions primarily[2] denote the location of an object, dynamic prepositions primarily denote the path of an object (Jackendoff, 1983; Herskovits, 1986), see (2). However, even focusing just on the set of static prepositions does not remove the combinatorial issues affecting the construction of a scene model.

(2) a. the tree is behind [static] the house
    b. the man walked across [dyn.] the road

[2] Static prepositions can be used in dynamic contexts, e.g. the man ran behind the house, and dynamic prepositions can be used in static ones, e.g. the tree lay across the road.

In general, static prepositions can be divided into two sets: topological and projective. Topological prepositions are the category of prepositions referring to a region that is proximal to the landmark; e.g., at, near, etc. Often, the distinctions between the semantics of the different topological prepositions are based on pragmatic constraints, e.g. the use of at licences the target to be in contact with the landmark, whereas the use of near does not. Projective prepositions describe a region projected from the landmark in a particular direction; e.g., to the right of, to the left of. The specification of the direction is dependent on the frame of reference being used (Herskovits, 1986). Static prepositions have both qualitative and quantitative semantic properties. The qualitative aspect is evident when they are used to denote an object by contrasting its location with that of the distractor objects.
Using Figure 1 as visual context, the locative expression the circle on the left of the square illustrates the contrastive semantics of a projective preposition, as only one of the circles in the scene is located in that region. Taking Figure 2, the locative expression the circle near the black square shows the contrastive semantics of a topological preposition. Again, of the two circles in the scene only one of them may be appropriately described as being near the black square, the other circle is more appropriately described as being near the white square. The quantitative aspect is evident when a static preposition denotes an object using a relative scale. In Figure 3 the locative the circle to the right of the square shows the relative semantics of a projective preposition. Although both the circles are located to the right of the square we can distinguish them based on their location in the region. Figure 3 also illustrates the relative semantics of a topological preposition Figure 3. We can apply a description like the circle near the square to either circle if none other were present. However, if both are present we can interpret the reference based on relative proximity to the landmark the square. Figure 1: Visual context illustrating contrastive semantics of projective prepositions Figure 2: Visual context illustrating contrastive semantics of topological prepositions Figure 3: Visual context illustrating relative semantics of topological and projective prepositions 1042 3 Approach We base our GRE approach on an extension of the incremental algorithm (Dale and Reiter, 1995). The motivation for basing our approach on this algorithm is its polynomial complexity. The algorithm iterates through the properties of the target and for each property computes the set of distractor objects for which (a) the conjunction of the properties selected so far, and (b) the current property hold. A property is added to the list of selected properties if it reduces the size of the distractor object set. The algorithm succeeds when all the distractors have been ruled out, it fails if all the properties have been processed and there are still some distractor objects. The algorithm can be refined by ordering the checking of properties according to fixed preferences, e.g. first a taxonomic description of the target, second an absolute property such as colour, third a relative property such as size. (Dale and Reiter, 1995) also stipulate that the type description of the target should be included in the description even if its inclusion does not make the target distinguishable. We extend the original incremental algorithm in two ways. First we integrate a model of object salience by modifying the condition under which a description is deemed to be distinguishing: it is, if all the distractors have been ruled out or if the salience of the target object is greater than the highest salience score ascribed to any of the current distractors. This is motivated by the observation that people can easily resolve underdetermined references using salience (Duwe and Strohner, 1997). We model the influence of visual and discourse salience using a function salience(L), Equation 1. The function returns a value between 0 and 1 to represent the relative salience of a landmark L in the scene. 
The relative salience of an object is the average of its visual salience (Svis) and discourse salience (Sdisc), salience(L) = (Svis(L) + Sdisc(L))/2 (1) Visual salience Svis is computed using the algorithm of (Kelleher and van Genabith, 2004). Computing a relative salience for each object in a scene is based on its perceivable size and its centrality relative to the viewer focus of attention, returning scores in the range of 0 to 1. The discourse salience (Sdisc) of an object is computed based on recency of mention (Hajicov´a, 1993) except we represent the maximum overall salience in the scene as 1, and use 0 to indicate that the landmark is not salient in the current context. Algorithm 1 gives the basic algorithm with salience. Algorithm 1 The Basic Incremental Algorithm Require: T = target object; D = set of distractor objects. Initialise: P = {type, colour, size}; DESC = {} for i = 0 to |P| do if T salience() >MAXDISTRACTORSALIENCE then Distinguishing description generated if type(x) ̸∈DESC then DESC = DESC ∪type(x) end if return DESC else D′ = {x : x ∈D, P i(x) = P i(T)} if |D′| < |D| then DESC = DESC ∪P i(T) D = {x : x ∈D, P i(x) = P i(T)} end if end if end for Failed to generate distinguishing description return DESC Secondly, we extend the incremental algorithm in how we construct the context model used by the algorithm. The context model determines to a large degree the output of the incremental algorithm. However, Dale and Reiter do not define how this set should be constructed, they only write: “[w]e define the context set to be the set of entities that the hearer is currently assumed to be attending to” (Dale and Reiter, 1995, pg. 236). Before applying the incremental algorithm we must construct a context model in which we can check whether or not the description generated distinguishes the target object. To constrain the combinatorial explosion in relational scene model construction we construct a series of reduced scene models, rather than one complex exhaustive model. This construction is driven by a hierarchy of spatial relations and the partitioning of the context model into objects that may and may not function as landmarks. These two components are developed below. §3.1 discusses a hierarchy of spatial relations, and §3.2 presents a classification of landmarks and uses these groupings to create a definition of a distinguishing locative description. In §3.3 we give the generation algorithm integrating these components. 3.1 Cognitive Ordering of Contexts Psychological research indicates that spatial relations are not preattentively perceptually available (Treisman and Gormican, 1988). Rather, their perception requires attention (Logan, 1994; 1043 Logan, 1995). These findings point to subjects constructing contextually dependent reduced relational scene models, rather than an exhaustive context free model. Mimicking this, we have developed an approach to context model construction that constrains the combinatorial explosion inherent in the construction of relational context models by incrementally building a series of reduced context models. Each context model focuses on a different spatial relation. The ordering of the spatial relations is based on the cognitive load of interpreting the relation. Below we motivate and develop the ordering of relations used. We can reasonably asssume that it takes less effort to describe one object than two. 
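To make Algorithm 1 above concrete, the following is a minimal Python sketch of the salience-extended incremental algorithm. The object representation (s_vis and s_disc scores and a props dictionary of feature values) is an assumption made for the example; it is not the authors' implementation.

def salience(obj):
    # Equation (1): relative salience as the average of visual and
    # discourse salience, both assumed to lie in [0, 1].
    return (obj.s_vis + obj.s_disc) / 2.0

def basic_incremental(target, distractors, properties=("type", "colour", "size")):
    """Sketch of Algorithm 1, the salience-extended incremental algorithm.
    Returns (description, distractors not yet ruled out); an empty second
    element signals a distinguishing description."""
    desc = {}
    remaining = list(distractors)
    for p in properties:
        max_distractor_salience = max((salience(d) for d in remaining), default=0.0)
        if not remaining or salience(target) > max_distractor_salience:
            desc.setdefault("type", target.props["type"])  # always include the type
            return desc, []  # distinguishing: any remaining distractors are less salient
        shared = [d for d in remaining if d.props.get(p) == target.props[p]]
        if len(shared) < len(remaining):  # property p rules out some distractors
            desc[p] = target.props[p]
            remaining = shared
    return desc, remaining  # non-empty remainder: no distinguishing description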
Following the Principle of Minimal Cooperative Effort (Clark and Wilkes-Gibbs, 1986), one should only use a locative expression when there is no distinguishing description of the target object using a simple feature based approach. Also, the Principle of Sensitivity (Dale and Reiter, 1995) states that when producing a referring expression, one should prefer features the hearer is known to be able to interpret and see. This points to a preference, due to cognitive load, for descriptions that identify an object using purely physical and easily perceivable features ahead of descriptions that use spatial expressions. Experimental results support this (van der Sluis and Krahmer, 2004). Similarly, we can distinguish between the cognitive loads of processing different forms of spatial relations. In comparing the cognitive load associated with different spatial relations it is important to recognize that they are represented and processed at several levels of abstraction. For example, the geometric level, where metric properties are dealt with, the functional level, where the specific properties of spatial entities deriving from their functions in space are considered, and the pragmatic level, which gathers the underlying principles that people use in order to discard wrong relations or to deduce more information (Edwards and Moulin, 1998). Our discussion is grounded at the geometric level. Focusing on static prepositions, we assume topological prepositions have a lower perceptual load than projective ones, as perceiving two objects being close to each other is easier than the processing required to handle frame of reference ambiguity (Carlson-Radvansky and Irwin, 1994; Carlson-Radvansky and Logan, 1997). Figure 4 lists the preferences, further Figure 4: Cognitive load discerning objects type as the easiest to process, before absolute gradable predicates (e.g. color), which is still easier than relative gradable predicates (e.g. size) (Dale and Reiter, 1995). We can refine the topological versus projective preference further if we consider their contrastive and relative uses of these relations (§2). Perceiving and interpreting a contrastive use of a spatial relation is computationally easier than judging a relative use. Finally, within projective prepositions, psycholinguistic data indicates a perceptually based ordering of the relations: above/below are easier to percieve and interpret than in front of/behind which in turn are easier than to the right of/to the left of (Bryant et al., 1992; Gapp, 1995). In sum, we propose the following ordering: topological contrastive < topological relative < projective constrastive < projective relative. For each level of this hierarchy we require a computational model of the semantics of the relation at that level that accomodates both contrastive and relative representations. In §2 we noted that the distinctions between the semantics of the different topological prepositions is often based on functional and pragmatic issues.3 Currently, however, more psycholinguistic data is required to distinguish the cognitive load associated with the different topological prepositions. We use the model of topological proximity developed in (Kelleher et al., 2006) to model all the relations at this level. Using this model we can define the extent of a region proximal to an object. 
If the target or one of the distractor objects is the only object within the region of proximity around a given landmark this is taken to model a contrastive use of a topological relation relative to that landmark. If the landmark’s region of proximity contains more than one object from the target and distractor object set then it is a relative use of a topological relation. We handle the issue of frame of reference ambiguity and model the semantics of projective prepostions using the framework developed in (Kelleher et al., 2006). Here again, the contrastive-relative distinc3See inter alia (Talmy, 1983; Herskovits, 1986; Vandeloise, 1991; Fillmore, 1997; Garrod et al., 1999) for more discussion on these differences 1044 tion is dependent on the number of objects within the region of space defined by the preposition. 3.2 Landmarks and Descriptions If we want to use a locative expression, we must choose another object in the scene to function as landmark. An implicit assumption in selecting a landmark is that the hearer can easily identify and locate the object within the context. A landmark can be: the speaker (3)a, the hearer (3)b, the scene (3)c, an object in the scene (3)d, or a group of objects in the scene (3)e.4 (3) a. the ball on my right [speaker] b. the ball to your left [hearer] c. the ball on the right [scene] d. the ball to the left of the box [an object in the scene] e. the ball in the middle [group of objects] Currently, we need new empirical research to see if there is a preference order between these landmark categories. Intuitively, in most situations, either of the interlocutors are ideal landmarks because the speaker can naturally assume that the hearer is aware of the speaker’s location and their own. Focusing on instances where an object in the scene is used as a landmark, several authors (Talmy, 1983; Landau, 1996; Gapp, 1995) have noted a target-landmark asymmetry: generally, the landmark object is more permanently located, larger, and taken to have greater geometric complexity. These characteristics are indicative of salient objects and empirical results support this correlation between object salience and landmark selection (Beun and Cremers, 1998). However, the salience of an object is intrinsically linked to the context it is embedded in. For example, in Figure 5 the ball has a relatively high salience, because it is a singleton, despite the fact that it is smaller and geometrically less complex than the other figures. Moreover, in this scene it is the only object that can function as a landmark without recourse to using the scene itself or a grouping of objects. Clearly, deciding which objects in a given context are suitable to function as landmarks is a complex and contextually dependent process. Some of the factors effecting this decision are object 4See (Gorniak and Roy, 2004) for further discussion on the use of spatial extrema of the scene and groups of objects in the scene as landmarks Figure 5: Landmark salience salience and the functional relationships between objects. However, one basic constraint on landmark selection is that the landmark should be distinguishable from the target. For example, given the context in Figure 5 and all other factors being equal, using a locative such as the man to the left of the man would be much less helpful than using the man to the right of the ball. 
Following this observation, we treat an object as a candidate landmark if the following conditions are met: (1) the object is not the target, and (2) it is not in the distractor set either. Furthermore, a target landmark is a member of the candidate landmark set that stands in relation to the target. A distractor landmark is a member of the candidate landmark set that stands in the considered relation to a distractor object. We then define a distinguishing locative description as a locative description where there is target landmark that can be distinguished from all the members of the set of distractor landmarks under the relation used in the locative. 3.3 Algorithm We first try to generate a distinguishing description using Algorithm 1. If this fails, we divide the context into three components: the target, the distractor objects, and the set of candidate landmarks. We then iterate through the set of candidate landmarks (using a salience ordering if there is more than one, cf. Equation 1) and try to create a distinguishing locative description. The salience ordering of the landmarks is inspired by (Conklin and McDonald, 1982) who found that the higher the salience of an object the more likely it appears in the description of the scene it was embedded in. For each candidate landmark we iterate through the hierarchy of relations, checking for each relation whether the candidate can function as a target landmark under that relation. If so we create a context model that defines the set of target and distractor landmarks. We create a distinguishing locative description by using the basic incremental algorithm to distinguish the target landmark from the distractor landmarks. If we succeed in generating a distinguishing locative description we return 1045 the description and stop. Algorithm 2 The Locative Incremental Algorithm DESC = Basic-Incremental-Algorithm(T,D) if DESC ̸= Distinguishing then create CL the set of candidate landmarks CL = {x : x ̸= T, DESC(x) = false} for i = 0 to |CL| by salience(CL) do for j = 0 to |R| do if Rj (T, CLi)=true then TL = {CLi} DL = {z : z ∈CL, Rj (D, z) = true} LANDDESC = Basic-IncrementalAlgorithm(TL, DL) if LANDDESC = Distinguishing then Distinguishing locative generated return {DESC,Rj ,LANDDESC} end if end if end for end for end if FAIL If we cannot create a distinguishing locative description we face two choices: (1) iterate on to the next relation in the hierarchy, (2) create an embedded locative description distinguishing the landmark. We adopt (1) over (2), preferring the dog to the right of the car over the dog near the car to the right of the house. However, we can generate these longer embedded descriptions if needed, by replacing the call to the basic incremental algorithm for the landmark object with a call to the whole locative expression generation algorithm, using the target landmark as the target object and the set of distractor landmarks as the distractors. An important point in this context is the issue of infinite regression (Dale and Haddock, 1991). A compositional GRE system may in certain contexts generate an infinite description, trying to distinguish the landmark in terms of the target, and the target in terms of the landmark, cf. (4). But, this infinite recursion can only occur if the context is not modified between calls to the algorithm. This issue does not affect Algorithm 2 as each call to the algorithm results in the domain being partitioned into those objects we can and cannot use as landmarks. 
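The following minimal Python sketch shows how Algorithm 2 can be organised around the partitioning just described. The interfaces are illustrative assumptions rather than the authors' code: holds(rel, a, b) tests a spatial relation, and basic_step(target, distractors) is any feature-based step that returns a description together with the distractors it failed to rule out (for example, the basic_incremental sketch given earlier).

def locative_incremental(target, scene, relations, holds, basic_step):
    """Sketch of Algorithm 2, the locative incremental algorithm.
    relations is the cognitively ordered hierarchy of spatial relations;
    holds(rel, a, b) tests whether object a stands in relation rel to
    landmark b; basic_step(target, distractors) is a feature-based step
    such as the basic_incremental sketch above."""
    distractors = [o for o in scene if o is not target]
    desc, remaining = basic_step(target, distractors)
    if not remaining:
        return desc  # a purely feature-based description suffices
    # Candidate landmarks: neither the target nor an object that still
    # fulfils the description generated so far (i.e. not a distractor).
    candidates = [o for o in scene if o is not target and o not in remaining]
    # Consider the most salient candidate landmarks first (Equation (1)).
    for lm in sorted(candidates, key=lambda o: (o.s_vis + o.s_disc) / 2.0, reverse=True):
        for rel in relations:
            if not holds(rel, target, lm):
                continue  # lm is not a target landmark under rel
            # Distractor landmarks: candidates standing in rel to some distractor.
            distractor_lms = [z for z in candidates
                              if any(holds(rel, d, z) for d in remaining)]
            lm_desc, lm_remaining = basic_step(lm, distractor_lms)
            if not lm_remaining:  # the landmark itself can be distinguished
                return {"target": desc, "relation": rel, "landmark": lm_desc}
    return None  # no distinguishing locative description found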
This not only reduces the number of object pairs that relations must be computed for, but also means that we need to create a distinguishing description for a landmark on a context that is a strict subset of the context the target description was generated in. This way the algorithm cannot distinguish a landmark using its target. (4) the bowl on the table supporting the bowl on the table supporting the bowl ... 3.4 Complexity The computational complexity of the incremental algorithm is O(nd*nl), with nd the number of distractors, and nl the number of attributes in the final referring description (Dale and Reiter, 1995). This complexity is independent of the number of attributes to be considered. Algorithm 2 is bound by the same complexity. For the average case, however, we see the following. For one, with every increase in nl, we see a strict decrease in nd: the more attributes we need, the fewer distractors we strictly have due to the partitioning into distractor and target landmarks. On the other hand, we have the dynamic construction of a context model. This latter factor is not considered in (Dale and Reiter, 1995), meaning we would have to multiply O(nd*nl) with a constant Kctxt for context construction. Depending on the size of this constant, we may see an advantage of our algorithm in that we only consider a single spatial relation each time we construct a context model, we avoid an exponential number of comparisons: we need to make at most nd * (nd −1) comparisons (and only nd if relations are symmetric). 4 Discussion We examplify the approach on the visual scene on the left of Figure 6. This context consists of two red boxes R1 and R2 and two blue balls B1 and B2. Imagine that we want to refer to B1. We begin by calling Algorithm 2. This in turn calls Algorithm 1, returning the property ball. This is not sufficient to create a distinguishing description as B2 is also a ball. In this context the set of candidate landmarks equals {R1,R2}. We take R1 as first candidate landmark, and check for topological proximity in the scene as modeled in (Kelleher et al., 2006). The image on the right of Figure 6 illustrates the resulting scene analysis: the green region on the left defines the area deemed to be proximal to R1, and the yellow region on the right defines the area proximal to R2. Clearly, B1 is in the area proximal to R1, making R1 a target landmark. As none of the distractors (i.e., B2) are located in a region that is proximal to a candidate landmark there are no distractor landmarks. As a result when the basic incremental algorithm is called to create a distinguishing description for the target landmark R1 it will return box and this will be deemed to be a distinguishing locative description. The overall algorithm will then return 1046 Figure 6: A visual scene and the topological analsis of R1 and R2 the vector {ball, proximal, box} which would result in the realiser generating a reference of the form: the ball near the box.5 The relational hierarchy used by the framework has some commonalities with the relational subsumption hierarchy proposed in (Krahmer and Theune, 2002). However, there are two important differences between them. First, an implication of the subsumption hierarchy proposed in (Krahmer and Theune, 2002) is that the semantics of the relations at lower levels in the hierarchy are subsumed by the semantics of their parent relations. 
For example, in the portion of the subsumption hierarchy illustrated in (Krahmer and Theune, 2002) the relation next to subsumes the relations left of and right of. By contrast, the relational hierarchy developed here is based solely on the relative cognitive load associated with the semantics of the spatial relations and makes not claims as to the semantic relationships between the semantics of the spatial relations. Secondly, (Krahmer and Theune, 2002) do not use their relational hierarchy to guide the construction of domain models. By providing a basic contextual definition of a landmark we are able to partition the context in an appropriate manner. This partitioning has two advantages. One, it reduces the complexity of the context model construction, as the relationships between the target and the distractor objects or between the distractor objects themselves do not need to be computed. Two, the context used during the generation of a landmark description is always a subset of the context used for a target (as the target, its distractors and the other objects in the domain that do not stand in relation to the target or distractors under the relation being considered are excluded). As a result the framework avoids the issue of infinite recusion. Furthermore, the target-landmark relationship is automat5For more examples, see the videos available at http://www.dfki.de/cosy/media/. ically included as a property of the landmark as its feature based description need only distinguish it from objects that stand in relation to one of the distractor objects under the same spatial relationship. In future work we will focus on extending the framework to handle some of the issues effecting the incremental algorithm, see (van Deemter, 2001). For example, generating locative descriptions containing negated relations, conjunctions of relations and involving sets of objects (sets of targets and landmarks). 5 Conclusions We have argued that an if a conversational robot functioning in dynamic partially known environments needs to generate contextually appropriate locative expressions it must be able to construct a context model that explicitly marks the spatial relations between objects in the scene. However, the construction of such a model is prone to the issue of combinatorial explosion both in terms of the number objects in the context (the location of each object in the scene must be checked against all the other objects in the scene) and number of inter-object spatial relations (as a greater number of spatial relations will require a greater number of comparisons between each pair of objects. We have presented a framework that addresses this issue by: (a) contextually defining the set of objects in the context that may function as a landmark, and (b) sequencing the order in which spatial relations are considered using a cognitively motivated hierarchy of relations. Defining the set of objects in the scene that may function as a landmark reduces the number of object pairs that a spatial relation must be computed over. Sequencing the consideration of spatial relations means that in each context model only one relation needs to be checked and in some instances the agent need not compute some of the spatial relations, as it may have succeeded in generating a distinguishing locative using a relation earlier in the sequence. A further advantage of our approach stems from the partitioning of the context into those objects that may function as a landmark and those that may not. 
As a result of this partitioning the algorithm avoids the issue of infinite recursion, as the partitioning of the context stops the algorithm from distinguishing a landmark using its target. We have employed the approach in a system for Human-Robot Interaction, in the setting of object 1047 manipulation in natural scenes. For more detail, see (Kruijff et al., 2006a; Kruijff et al., 2006b). References R.J. Beun and A. Cremers. 1998. Object reference in a shared domain of conversation. Pragmatics and Cognition, 6(1/2):121–152. D.J. Bryant, B. Tversky, and N. Franklin. 1992. Internal and external spatial frameworks representing described scenes. Journal of Memory and Language, 31:74–98. L.A. Carlson-Radvansky and D. Irwin. 1994. Reference frame activation during spatial term assignment. Journal of Memory and Language, 33:646–671. L.A. Carlson-Radvansky and G.D. Logan. 1997. The influence of reference frame selection on spatial template construction. Journal of Memory and Language, 37:411–437. H. Clark and D. Wilkes-Gibbs. 1986. Referring as a collaborative process. Cognition, 22:1–39. E. Jeffrey Conklin and David D. McDonald. 1982. Salience: the key to the selection problem in natural language generation. In ACL Proceedings, 20th Annual Meeting, pages 129–135. R. Dale and N. Haddock. 1991. Generating referring expressions involving relations. In Proceeding of the Fifth Conference of the European ACL, pages 161–166, Berlin, April. R. Dale and E. Reiter. 1995. Computational interpretations of the Gricean maxims in the generation of referring expressions. Cognitive Science, 19(2):233–263. I. Duwe and H. Strohner. 1997. Towards a cognitive model of linguistic reference. Report: 97/1 - Situierte K¨unstliche Kommunikatoren 97/1, Univerist¨at Bielefeld. G. Edwards and B. Moulin. 1998. Towards the simulation of spatial mental images using the vorono¨ı model. In P. Oliver and K.P. Gapp, editors, Representation and processing of spatial expressions, pages 163–184. Lawrence Erlbaum Associates. C. Fillmore. 1997. Lecture on Deixis. CSLI Publications. K.P. Gapp. 1995. Angle, distance, shape, and their relationship to projective relations. In Proceedings of the 17th Conference of the Cognitive Science Society. C Gardent. 2002. Generating minimal definite descriptions. In Proceedings of the 40th International Confernce of the Association of Computational Linguistics (ACL-02), pages 96–103. S. Garrod, G. Ferrier, and S. Campbell. 1999. In and on: investigating the functional geometry of spatial prepositions. Cognition, 72:167–189. P. Gorniak and D. Roy. 2004. Grounded semantic composition for visual scenes. Journal of Artificial Intelligence Research, 21:429–470. E. Hajicov´a. 1993. Issues of sentence structure and discourse patterns. In Theoretical and Computational Linguistics, volume 2, Charles University, Prague. A Herskovits. 1986. Language and spatial cognition: An interdisciplinary study of prepositions in English. Studies in Natural Language Processing. Cambridge University Press. H. Horacek. 1997. An algorithm for generating referential descriptions with flexible interfaces. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, Madrid. R. Jackendoff. 1983. Semantics and Cognition. Current Studies in Linguistics. The MIT Press. J. Kelleher and J. van Genabith. 2004. A false colouring real time visual salency algorithm for reference resolution in simulated 3d environments. AI Review, 21(3-4):253–267. J.D. Kelleher, G.J.M. Kruijff, and F. Costello. 2006. 
Proximity in context: An empirically grounded computational model of proximity for processing topological spatial expressions. In Proceedings ACL/COLING 2006. E. Krahmer and M. Theune. 2002. Efficient context-sensitive generation of referring expressions. In K. van Deemter and R. Kibble, editors, Information Sharing: Reference and Presupposition in Language Generation and Interpretation. CLSI Publications, Standford. G.J.M. Kruijff, J.D. Kelleher, G. Berginc, and A. Leonardis. 2006a. Structural descriptions in human-assisted robot visual learning. In Proceedings of the 1st Annual Conference on Human-Robot Interaction (HRI’06). G.J.M. Kruijff, J.D. Kelleher, and Nick Hawes. 2006b. Information fusion for visual reference resolution in dynamic situated dialogue. In E. Andr´e, L. Dybkjaer, W. Minker, H.Neumann, and M. Weber, editors, Perception and Interactive Technologies (PIT 2006). Springer Verlag. B Landau. 1996. Multiple geometric representations of objects in language and language learners. In P Bloom, M. Peterson, L Nadel, and M. Garrett, editors, Language and Space, pages 317–363. MIT Press, Cambridge. G. D. Logan. 1994. Spatial attention and the apprehension of spatial realtions. Journal of Experimental Psychology: Human Perception and Performance, 20:1015–1036. G.D. Logan. 1995. Linguistic and conceptual control of visual spatial attention. Cognitive Psychology, 12:523–533. R. Moratz and T. Tenbrink. 2006. Spatial reference in linguistic human-robot interaction: Iterative, empirically supported development of a model of projective relations. Spatial Cognition and Computation. L. Talmy. 1983. How language structures space. In H.L. Pick, editor, Spatial orientation. Theory, research and application, pages 225–282. Plenum Press. A. Treisman and S. Gormican. 1988. Feature analysis in early vision: Evidence from search assymetries. Psychological Review, 95:15–48. K. van Deemter. 2001. Generating referring expressions: Beyond the incremental algorithm. In 4th Int. Conf. on Computational Semantics (IWCS-4), Tilburg. I van der Sluis and E Krahmer. 2004. The influence of target size and distance on the production of speech and gesture in multimodal referring expressions. In Proceedings of International Conference on Spoken Language Processing (ICSLP04). C. Vandeloise. 1991. Spatial Prepositions: A Case Study From French. The University of Chicago Press. S. Varges. 2004. Overgenerating referring expressions involving relations and booleans. In Proceedings of the 3rd International Conference on Natural Language Generation, University of Brighton. 1048 | 2006 | 131 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1049–1056, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Learning to Predict Case Markers in Japanese Hisami Suzuki Kristina Toutanova1 Microsoft Research One Microsoft Way, Redmond WA 98052 USA {hisamis,kristout}@microsoft.com Abstract Japanese case markers, which indicate the grammatical relation of the complement NP to the predicate, often pose challenges to the generation of Japanese text, be it done by a foreign language learner, or by a machine translation (MT) system. In this paper, we describe the task of predicting Japanese case markers and propose machine learning methods for solving it in two settings: (i) monolingual, when given information only from the Japanese sentence; and (ii) bilingual, when also given information from a corresponding English source sentence in an MT context. We formulate the task after the well-studied task of English semantic role labelling, and explore features from a syntactic dependency structure of the sentence. For the monolingual task, we evaluated our models on the Kyoto Corpus and achieved over 84% accuracy in assigning correct case markers for each phrase. For the bilingual task, we achieved an accuracy of 92% per phrase using a bilingual dataset from a technical domain. We show that in both settings, features that exploit dependency information, whether derived from gold-standard annotations or automatically assigned, contribute significantly to the prediction of case markers.1 1 Introduction: why predict case? Generation of grammatical elements such as inflectional endings and case markers has become an important component technology, particularly in the context of machine translation (MT). In an English-to-Japanese MT system, for example, Japanese case markers, which indicate grammatical relations (e.g., subject, object, location) of the complement noun phrase to the predicate, are among the most difficult to generate appropriately. This is because the case markers often do not correspond to any word in the source language as many grammatical relations are expressed via word order in English. It is also difficult because the mapping between the case markers and the grammatical 1 Author names arranged alphabetically relations they express is very complex. For the same reasons, generation of case markers is challenging to foreign language learners. This difficulty in generation, however, does not mean the choice of case markers is insignificant: when a generated sentence contains mistakes in grammatical elements, they often lead to severe unintelligibility, sometimes resulting in a different semantic interpretation from the intended one. Therefore, having a model that makes reasonable predictions about which case marker to generate given the content words of a sentence, is expected to help MT and generation in general, particularly when the source (or native) and the target languages are morphologically divergent. But how reliably can we predict case markers in Japanese using the information that exists only in the sentence? Consider the example in Figure 1. This sentence contains two case markers, kara 'from' and ni, the latter not corresponding to any word in English. If we were to predict the case markers in this sentence, there are multiple valid answers for each decision, many of which correspond to different semantic relations. 
For example, for the first case marker slot in Figure 1 filled by kara, wa (topic marker), ni 'in' or no case marker at all are all reasonable choices, while other markers such as wo (object marker), de 'at', made 'until', etc. are not considered reasonable. For the second slot filled by ni, ga (subject marker) is also a grammatically reasonable choice, making Einstein the subject of idolize, thus changing the meaning of the sentence. As is obvious in this example, the choice among the correct answers is determined by the speaker's intent in uttering the sentence, and is therefore impossible to recover from the content words or the sentence structure alone. At the same time, many impossible or unlikely case marking decisions can be eliminated by a case prediction model. Combined with an external component (for example an MT component) that can resolve semantic and intentional ambiguity, a case prediction model can be quite useful in sentence generation. This paper discusses the task of case marker assignment in two distinct but related settings. After defining the task in Section 2 and describing our models in Section 3, we first discuss the monolingual task in Sections 4, whose goal is to predict the case markers 1049 using Japanese sentences and their dependency structure alone. We formulated this task after the well-studied task of semantic role labeling in English (e.g., Gildea and Jurafsky, 2002; Carreras and Màrques, 2005), whose goal is to assign one of 20 semantic role labels to each phrase in a sentence with respect to a given predicate, based on the annotations provided by PropBank (Palmer et al., 2005). Though the task of case marker prediction is more ambiguous and subject to uncertainty than the semantic role labeling task, we obtained some encouraging results which we present in Section 4. Next, in Section 5, we describe the bilingual task, in which information about case assignment can be extracted from a corresponding source language sentence. Though the process of MT introduces uncertainties in generating the features we use, we show that the benefit of using dependency structure in our models is far greater than not using it even when the assigned structure is not perfect. 2 The task of case prediction In this section, we define the task of case prediction. We start with the description of the case markers we used in this study. 2.1 Nominal particles in Japanese Traditionally, Japanese nominal postpositions are classified into the following three categories (e.g., Teramura, 1991; Masuoka and Takubo, 1992): Case particles (or case markers). They indicate grammatical relations of the complement NP to the predicate. As they are jointly determined by the NP and the predicate, case markers often do not allow a simple mapping to a word in another language, which makes their generation more difficult. The relationship between the case marker and the grammatical relation it indicates is not straightforward either: a case marker can (and often does) indicate multiple grammatical relations as in Ainshutain-ni akogareru "idolize Einstein" where ni marks the Object relation, and in Tokyo-ni sumu "live in Tokyo" where ni indicates Location. Conversely, the same grammatical relation may be indicated by different case markers: both ni and de in Tokyo-ni sumu "live in Tokyo" and Tokyo-de au "meet in Tokyo" indicate the Location relation. We included 10 case markers as the primary target of prediction, as shown in the first 10 lines of Table 1. Conjunctive particles. 
These particles are used to conjoin words and phrases, corresponding to English "and" and "or". As their occurrence is not predictable from the sentence structure alone, we did not include them in the current prediction task. Focus particles. These particles add focus to a phrase against a given background or contextual knowledge, for example shika and mo in pasuta-shika tabenakatta "ate only pasta" and pasuta-mo tabeta "also ate pasta", corresponding to only and also respectively. Note that they often replace case markers: in the above examples, the object marker wo is no longer present when shika or mo is used. As they add information to the predicate-argument structure and are in principle not predictable given the sentence structure alone, we did not consider them as the target of our task. One exception is the topic marker wa, which we included as a target of prediction for the following reasons: Some linguists recognize wa as a topic marker, separately from other focus particles (e.g. Masuoka and Takubo, 1992). The main function of wa is to introduce a topic in the sentence, which is to a some extent predictable from the structure of the sentence. wa is extremely frequent in Japanese text. For example, it accounts for 13.2% of all postpositions in Kyoto University Text Corpus (henceforth Kyoto Corpus, Kurohashi and Nagao, 1997), making it the third most frequent postposition after no (20.57%) and wo (13.5%). Generating wa appropriately thus greatly enhances the readability of the text. Unlike other focus particles such as shika and mo, wa does not translate into any word in English, which makes it difficult to generate by using the information from the source language. Therefore, in addition to the 10 true case markers, we also included wa as a case marker in our study.2 Furthermore, we also included the combination of case particles plus wa as a secondary target of prediction. The case markers that can appear followed by wa are indicated by a check mark in the column "+wa" in Table 1. Thus there are seven secondary targets: niwa, karawa, towa, dewa, ewa, madewa, yoriwa. Therefore, we have in total 18 case particles to assign to phrases. 2.2 Task definition The case prediction task we are solving is as follows. We are given a sentence as a list of bunsetsu together 2 This set comprises the majority (92.5%) of the nominal particles, while conjunctive and focus particles account for only 7.5% of the nominal particles in Kyoto Corpus. Figure 1. Example of case markers in Japanese (taken from the Kyoto Corpus). Square brackets indicate bunsetsu (phrase) boundaries, to be discussed below. Arrows between phrases indicate dependency relations. 1050 with a dependency structure. For our monolingual experiments, we used the dependency structure annotation in the Kyoto Corpus; for our bilingual experiments, we used automatically derived dependency structure (Quirk et al., 2005). Each bunsetsu (or simply phrase in this paper) is defined as consisting of one content word (or n-content words in the case of compounds with n-components) plus any number of function words (including particles, auxiliaries and affixes). Case markers are classified as function words, and there is at most one case marker per phrase.3 In testing, the case marker for each phrase is hidden; the task is to assign to each phrase one of the 18 case markers defined above or NONE; NONE indicates that the phrase does not have a case marker. 
2.3 Related work Though the task of case marker prediction as formulated in this paper is novel, similar tasks have been defined in the past. The semantic role labeling task mentioned in Section 1 is one example; the task of function tag assignment in English (e.g., Blaheta and Charniak, 2000) is another. These tasks are similar to the case prediction task in that they try to assign semantic or function tags to a parsed structure. However, there is one major difference between these tasks and the current task: semantic role labels and function tags can for the most part be uniquely determined given the sentence and its parse structure; decisions about case markers, on the other hand, are highly ambiguous given the sentence structure alone, as mentioned in Section 1. This makes our task more ambiguous than the related tasks. As a concrete comparison, the two most frequent semantic role labels (ARG0 and ARG1) account for 60% of the labeled arguments in PropBank 3 One exception is that no can appear after certain case markers; in such cases, we considered no to be the case for the phrase. 4 no is typically not considered as a case marker but rather as a conjunctive particle indicating adnominal relation; however, as no can also be used to indicate the subject in a relative clause, we included it in our study. (Carreras and Màrquez, 2005), whereas our 2 most frequent case markers (no and wo) account for only 43% of the case-marked phrases. We should also note that semantic role labels and function tags have been artificially defined in accordance with theoretical decisions about what annotations should be useful for natural language understanding tasks; in contrast, the case markers are part of the surface sentence string and do not reflect any theoretical decisions. The task of case prediction in Japanese has previously focused on recovering implicit case relations, which result when noun phrases are relativized or topicalized (e.g., Baldwin, 2000; Kawahara et al., 2004; Murata and Isahara, 2005). Their goal is different form ours, as we aim to generate surface forms of case markers rather than recover deeper case relations for which surface case marker are often used as a proxy. In the context of sentence generation, Gamon et al. (2002) used a decision tree to classify nouns into one of the four cases in German, as part of their sentence realization from a semantic representation, achieving high accuracy (87% to 93.5%). Again, this is a substantially easier task than ours, because there are only four classes and one of them (nominative) accounts for 70% of all cases. Uchimoto et al. (2002), which is the work most related to ours, propose a model of generating function words (not limited to case markers) from "keywords" or headwords of phrases in Japanese. The components of their model are based on n-gram language models using the surface word strings and bunsetsu dependency information, and the results they report are not comparable to ours, as they limit their test sentences to the ones consisting only of two or three content words. We will see in the next section that our models are also quite different from theirs as we employ a much richer set of features. 3 Classifiers for case prediction We implemented two types of models for the task of case prediction: local models, which choose the case marker of each phrase independently of the case markers of other phrases, and joint models, which incorporate dependencies among the case markers of dependents of the same head phrase. 
We describe the two types of models in turn.

3.1 Local classifiers

Following the standard practice in semantic role labeling, we divided the case prediction task into the tasks of identification and classification (Gildea and Jurafsky, 2002; Pradhan et al., 2004). In the identification task, we assign to each phrase one of two labels: HASCASE, meaning that the phrase has a case marker, or NONE, meaning that it does not have a case.

case marker   grammatical functions (e.g.)     +wa
ga            subject; object
wo            object; path
no            genitive; subject
ni            dative object, location          ✓
kara          source                           ✓
to            quotative, reciprocal, as        ✓
de            location, instrument, cause      ✓
e             goal, direction                  ✓
made          goal (up to, until)              ✓
yori          source, object of comparison     ✓
wa            topic
Table 1. Case markers included in this study
In the case classification task, we assign one of the 18 case markers to each phrase that has been labeled with HASCASE by the identification model. We train a binary classifier for identification and a multi-class classifier (with 18 classes) for classification. We obtain a classifier for the complete task by chaining the two classifiers. Let PID(c|b) and PCLS(c|b) denote the probability of class c for bunsetsu b according to the identification and classification models, respectively. We define the probability distribution over classes of the complete model for case assignment as follows:

PCaseAssign(NONE | b) = PID(NONE | b)
PCaseAssign(l | b) = PID(HASCASE | b) * PCLS(l | b)

Here, l denotes one of the 18 case markers. We employ this decomposition mainly for efficiency in training: that is, the decomposition allows us to train the classification models on a subset of training examples consisting only of those phrases that have a case marker, following Toutanova et al. (2005). Among various machine learning methods that can be used to train the classifiers, we chose log-linear models for both identification and classification tasks, as they produce probability distributions which allows chaining of the two component models and easy integration into an MT system.

3.2 Joint classifiers

Toutanova et al. (2005) report a substantial improvement in performance on the semantic role labeling task by building a joint classifier, which takes the labels of other phrases into account when classifying a given phrase. This is motivated by the fact that the argument structure is a joint structure, with strong dependencies among arguments. Since the case markers also reflect the argument structure to some extent, we implemented a joint classifier for the case prediction task as well. We applied the joint classifiers in the framework of N-best reranking (Collins, 2000), following Toutanova et al. (2005). That is, we produced N-best (N=5 in our experiments) case assignment sequence candidates for a set of sister phrases using the local models, and trained a joint classifier that learns to choose the best candidate from the set of sisters. The oracle accuracy of the 5-best candidate list was 95.9% per phrase.

4 Monolingual case prediction task

In this section we describe our models trained and evaluated using the gold-standard dependency annotations provided by the Kyoto Corpus. These annotations allow us to define a rich set of features exploring the syntactic structure.

4.1 Features

The basic local model features we used for the identification and classification models are listed in Table 2. They consist of features for a phrase, for its parent phrase and for their relations. Only one feature (GrandparentNounSubPos) currently refers to the grandparent of the phrase; all other features are between the phrase, its parent and its sibling nodes, and are a superset of the dependency-based features used by Hacioglu (2004) for the semantic labeling task. In addition to these basic features, we added 20 combined features, some of which are shown at the bottom of Table 2. For the joint model, we implemented only two types of features: sequence of non-NONE case markers for a set of sister phrases, and repetition of non-NONE case markers. These features are intended to capture regularities in the sequence of case markers of phrases that modify the same head phrase.
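Before turning to how these features are encoded, the following is a minimal sketch of the chained local model from Section 3.1. It is our illustration rather than the authors' implementation: scikit-learn's LogisticRegression stands in for their Gaussian-prior log-linear models, and feature extraction is left abstract.

```python
# A sketch of chaining the identification and classification models (Section 3.1).
# Not the authors' code; LogisticRegression is used as a stand-in log-linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

id_model = LogisticRegression(max_iter=1000)    # HASCASE vs. NONE
cls_model = LogisticRegression(max_iter=1000)   # 18-way classifier over case markers

def train(X, has_case, case_labels):
    """X: feature vectors for all phrases; has_case: 1 if the phrase is case-marked;
    case_labels: markers of the case-marked phrases only (CLS is trained on that subset)."""
    id_model.fit(X, has_case)
    cls_model.fit(X[has_case == 1], case_labels)

def case_distribution(x):
    """Return P_CaseAssign(. | b) for one phrase, following the decomposition above."""
    x = np.asarray(x).reshape(1, -1)
    p_hascase = id_model.predict_proba(x)[0][list(id_model.classes_).index(1)]
    dist = {"NONE": 1.0 - p_hascase}                       # P_ID(NONE | b)
    for label, p in zip(cls_model.classes_, cls_model.predict_proba(x)[0]):
        dist[label] = p_hascase * p                        # P_ID(HASCASE | b) * P_CLS(l | b)
    return dist
```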
All of these features are represented as binary features: that is, when the value of a feature is not binary, we have treated the combination of the feature name plus the value as a unique feature. With a count cut-off of 2 (i.e., features must occur at least twice to be in the model), we have 724,264 features in the identification model, and 3,963,096 features in the classification model. The number of joint features in the joint model is 3,808. All models are trained using a Gaussian prior.

Basic features for phrases (self, parent):
  HeadPOS, PrevHeadPOS, NextHeadPOS
  PrevPOS, Prev2POS, NextPOS, Next2POS
  HeadNounSubPos: time, formal nouns, adverbial
  HeadLemma
  HeadWord, PrevHeadWord, NextHeadWord
  PrevWord, Prev2Word, NextWord, Next2Word
  LastWordLemma (excluding case markers)
  LastWordInfl (excluding case markers)
  IsFiniteClause
  IsDateExpression
  IsNumberExpression
  HasPredicateNominal
  HasNominalizer
  HasPunctuation: comma, period
  HasFiniteClausalModifier
  RelativePosition: sole, first, mid, last
  NSiblings (number of siblings)
  Position (absolute position among siblings)
  Voice: pass, caus, passcaus
  Negation
Basic features for phrase relations (parent-child pair):
  DependencyType: D, P, A, I
  Distance: linear distance in bunsetsu, 1, 2-5, >6
  Subcat: POS tag of parent + POS tag of all children + indication for current
Combined features (selected):
  HeadPOS + HeadLemma
  ParentLemma + HeadLemma
  Position + NSiblings
  IsFiniteClause + GrandparentNounSubPos
Table 2: Basic and combined features for local classifiers

4.2 Data and baselines

We divided the Kyoto Corpus (version 3.0) into the following three sections:
Training: contains news articles of January 1, 3-11 and editorial articles of January-August; 24,263 sentences, 234,474 phrases.
Devtest: contains news articles of January 12-13 and editorial article of September. 4,833 sentences, 47,580 phrases.
Test: contains news articles of January 14-17 and editorial articles of October-December. 9,287 sentences, 89,982 phrases.
The devtest set was used only for tuning model parameters and for performing error analysis. As no previous work exists on the task of predicting case markers on the Kyoto Corpus, it is important to establish a good baseline. The simplest baseline of always selecting the most frequent label (NONE) gives us an accuracy of 47.5% on the test set. Out of the non-NONE case markers, the most frequent is no, which occurs in 26.6% of all case-marked phrases. A more reasonable baseline is to use a language model to predict case. We trained and tested two language models: the first model, called KCLM, is trained on the same data as our log-linear models (24,263 sentences); the second model, called BigCLM, is trained on much more data from the same domain (826,373 sentences), taking advantage of the fact that language models do not require dependency annotation for training. The language models were trained using the CMU language modeling toolkit with default parameter settings (Clarkson and Rosenfeld, 1997). We tested the language model baselines using the same task set-up as for our classifier: for each phrase, each of the 18 possible case markers and NONE is evaluated. The position for insertion of a case marker in each phrase is given according to our task set-up, i.e., at the end of a phrase preceding any punctuation. We choose the case assignment of the sequence of phrases in the sentence that maximizes the language model probability of the resulting sentence. We computed the most likely case assignment sequence using a dynamic programming algorithm.
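For concreteness, here is a toy sketch of the language-model baseline decoder. It is our illustration, not the authors' code: it enumerates assignments exhaustively, which is only feasible for very short sentences, whereas the dynamic programming search mentioned above scales to realistic ones; lm_logprob is a stand-in for any n-gram scorer, and punctuation handling is simplified.

```python
# Toy decoder for the language-model baseline (a sketch, not the authors' code).
from itertools import product

CASE_MARKERS = ["ga", "wo", "no", "ni", "kara", "to", "de", "e", "made", "yori", "wa",
                "niwa", "karawa", "towa", "dewa", "ewa", "madewa", "yoriwa"]
CANDIDATES = CASE_MARKERS + [None]            # None = leave the phrase without a case marker

def realize(phrases, assignment):
    """Insert each chosen marker at the end of its phrase (punctuation handling simplified)."""
    words = []
    for phrase, marker in zip(phrases, assignment):
        words.extend(phrase)
        if marker is not None:
            words.append(marker)
    return words

def best_assignment(phrases, lm_logprob):
    """Pick the marker sequence whose realized sentence gets the highest LM score."""
    return max(product(CANDIDATES, repeat=len(phrases)),
               key=lambda assignment: lm_logprob(realize(phrases, assignment)))
```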
4.3 Results and discussion

The results of running our models on case marker prediction are shown in Table 3. The first three rows correspond to the components of the local model: the identification task (Id, for all phrases), the classification task (Cls, only for case-marked phrases) and the complete task (Both, for all phrases).

Models                Task   Training   Test
log-linear            Id     99.8       96.9
log-linear            Cls    96.6       74.3
log-linear (local)    Both   98.0       83.9
log-linear (joint)    Both   97.8       84.3
baseline (frequency)  Both   48.2       47.5
baseline (KCLM)       Both   93.9       67.0
baseline (BigCLM)     Both   -          78.0
Table 3: Accuracy of case prediction models (%)

The accuracy on the complete task using the local model is 83.9%; the joint model improves it to 84.3%. The improvement due to the joint model is small in absolute percentage points (0.4%), but is statistically significant according to a test for the difference of proportions (p < 0.05). The use of a joint classifier did not lead to as large an improvement over the local classifier as for the semantic role labeling task. There are several reasons for this that we can think of. First, we have only used a limited set of features for the joint model, i.e., case sequence and repetition features. A more extensive use of global features might lead to a larger improvement. Secondly, unlike the task of semantic role labeling, where there are about 20 phrases that need to be labeled with respect to a predicate, about 50% of all phrases in the Kyoto Corpus do not have sister nodes. This means that these phrases cannot take advantage of the joint classifier using the current model formulation. Finally, case markers are much shallower than semantic role labels in the level of linguistic analysis, and so are inherently subject to more variations, including missing arguments (so-called zero pronouns) and repeated case markers corresponding to different semantic roles.

From Table 3, it is clear that our models outperform the baseline model significantly. The language model trained on the same data has much lower performance (67.0% vs. 84.3%), which shows that our system is exploiting the training data much more efficiently by looking at the dependency and other syntactic features. An inspection of the 500 most highly weighted features also indicates that phrase dependency-based features are very useful for both identification and classification. Given much more data, though, the language model improves significantly to 78%, but our classifier still achieves a 29% error reduction over it. The differences between the language models and the log-linear models are statistically significant at level p < 0.01 according to a test for the difference of proportions.

Figure 2 plots the recall and precision for the frequently occurring (>500) cases.
[Figure 2: Precision and recall per case marker (frequency in parentheses): niwa (523), dewa (548), kara (868), de (2582), to (3664), ga (5797), wa (5937), ni (6457), wo (7782), no (12570), NONE (42756).]
We achieve good results on NONE and no, which are the least ambiguous decisions. Cases such as ni, wa, ga, and de are highly confusable with other markers as they indicate multiple grammatical relations, and the performance of our models on them is therefore limited. As expected, performance (especially recall) on secondary targets (dewa, niwa) suffers greatly due to the ambiguity with their primary targets.

5 Bilingual case prediction task: simulating case prediction in MT

Incorporating a case prediction model into MT requires taking additional factors into consideration, compared to the monolingual task described above.
On the one hand, we need to extend our model to handle the additional knowledge source, i.e., the source sentence. This can potentially provide very useful features to our model, which are not available in the monolingual task. On the other hand, since gold-standard dependency annotation is not available in the MT context, we must deal with the imperfections in structural annotations. In this section, we describe our case prediction models in the context of English-to-Japanese MT. In this setting, dependency information for the target language (Japanese) is available only through projection of a dependency structure from the source language (English) in a tree-to-string-based statistical MT system (Quirk et al., 2005). We conducted experiments using the English source sentences and the reference translations in Japanese: that is, our task is to predict the case markers of the Japanese reference translations correctly using all other words in the reference sentence, information from the source sentence through word alignment, and the Japanese dependency structure projected via an MT component. Ultimately, our goal is to improve the case marker assignment of a candidate translation using a case prediction model; the experiments described in this section on reference translations serve as an important preliminary step toward achieving that final goal. We will show in this section that even the automatically derived syntactic information is very useful in assigning case markers in the target language, and that utilizing the information from the source language also greatly contributes to reducing case marking errors. 5.1 Data and task set-up The dataset we used is a collection of parallel English-Japanese sentences from a technical (computer) domain. We used 15,000 sentence pairs for training, 5,000 for development, and 4,241 for testing. The parallel sentences were word-aligned using GIZA++ (Och and Ney, 2000), and submitted to a tree-to-string-based MT system (Quirk et al., 2005) which utilizes the dependency structure of the source language and projects dependency structure to the target language. Figure 3 shows an example of an aligned sentence pair: on the source (English) side, part-of-speech (POS) tags and word dependency structure are assigned (solid arcs). The alignments between English and Japanese words are indicated by the dotted lines. In order to create phrase-level dependency structures like the ones utilized in the Kyoto Corpus monolingual task, we derived some additional information for the Japanese sentence in the following manner. Figure 3. Aligned English-Japanese sentence pair First, we tagged the sentence using an automatic tagger with a set of 19 POS tags. We used these POS tags to parse the words into phrases (bunsetsu): each bunsetsu consists of one content word plus any number of function words, where content and function words are defined via POS. We then constructed a phrase-level dependency structure using a breadth-first traversal of the word dependency structure projected from English. These phrase dependencies are indicated by bold arcs in Figure 3. The case markers to be predicted (wa and de in this case) are underlined. The task of case marker prediction is the same as described in Section 2: to assign one of the 18 case markers described in Section 2 or NONE to each phrase. 5.2 Baseline models We implemented the baseline models discussed in Section 4.2 for this domain as well. 
The most frequent case assignment is again NONE, which accounts for 62.0% of the test set. The frequency of NONE is higher in this task than in the Kyoto Corpus, because our bunsetsu-parsing algorithm prefers to err on the side of making too many rather than too few phrases. This is because our final goal is to generate all case markers, and if we mistakenly joined two bunsetsu into one, our case assigner would be able to propose only one case marker for the resulting bunsetsu, which would be necessarily wrong if both bunsetsu had case markers. The most frequent case marker is again no, which occurs in 29.4% of all case-marked phrases. As in the monolingual task, we trained two trigram language models: one was trained on the training set of our case prediction models (15,000 sentences); another was trained on a much larger set of 450,000 sentences from the same domain. The results of these baselines are discussed in Section 5.4.

5.3 Log-linear models

The models we built for this task are log-linear models as described in Section 3. In order to isolate the impact of information from the source language available for the case prediction task, we built two kinds of models: monolingual models, which do not use any information from the source English sentences, and bilingual models, which use information from the source. Both models are local models in the sense discussed in Section 3. Table 4 shows the features used in the monolingual and bilingual models, along with the examples (the value of the feature for the phrase [saabisu wa] in Figure 3); in addition to these, we also provided some feature combinations for both monolingual and bilingual models. Many of the monolingual features (i.e., the first 11 lines in Table 4) are also present in Table 2. Note that lexically based features are of greater importance for this task, as the dependency information available in this context is of much poorer quality than that provided by the Kyoto Corpus. In addition to the features in Table 2, we added a Direction feature (with values left and right), and an Alternative Parent feature. Alternative parents are all words which are the parents of any word in the phrase, according to the word-based dependency tree, with the constraint that case markers cannot be alternative parents. This feature captures the information that is potentially lost in the process of building a phrase dependency structure from word dependency information in the target language. The bottom half of Table 4 shows bilingual features. The features of the source sentence are obtained through word alignments. We create features from the source words aligned to the head of the phrase, to the head of the parent phrase, or to any alternative parents. If any word in the phrase is aligned to a preposition in the source language, our model can use the information as well. In addition to word- and POS-features for aligned source words, we also refer to the corresponding dependency between the phrase and its parent phrase in the English source.
If the head of the Japanese phrase is aligned to a single source word s1, and the head of its parent phrase is aligned to a single source word s2, we extract the relationship between s1 and s2, and define subcategorization, direction, distance, and number of siblings features, in order to capture the grammatical relation in the source, which is more reliable than in the projected target dependency structure.

Monolingual features
Feature                                          Example
HeadWord/HeadPOS                                 saabisu/NN
PrevWord/PrevPOS                                 kono/AND
Prev2Word/Prev2WordPOS                           none/none
NextWord/NextPOS                                 seefu/NN
Next2Word/Next2POS                               moodo/NN
PrevHeadWord/PrevHeadPOS                         kono/AND
NextHeadWord/NextHeadPOS                         seefu/NN
ParentHeadWord/ParentHeadPOS                     kaishi/VN
Subcat: POS tags of all sisters and parent       NN-c,NN,VN-h
NSiblings (including self)                       2
Distance                                         1
Direction                                        left
Alternative Parent Word/POS                      saabisu/NN
Bilingual features
Feature                                                                          Example
Word/POS of source words aligned to the head of the phrase                       service/NN
Word/POS of all source words aligned to any word in the phrase                   service/NN
Word/POS of all source words aligned to the head word of the parent phrase       started/VERB
Word/POS of all source words aligned to alternative parent words of the phrase   service/NN, started/VERB
All source preposition words                                                     in
Word/POS of parent of source word aligned to any word in the phrase              started/VERB
Aligned Subcat                                                                   NN-c,VERB,VERB,VERB-h,PREP
Aligned NSiblings                                                                4
Aligned Distance                                                                 2
Aligned Direction                                                                left
Table 4: Monolingual and bilingual features

5.4 Results and discussion

Table 5 summarizes the results on the complete case assignment task in the MT context.

Model                   Test data
baseline (frequency)    62.0
baseline (15kLM)        79.0
baseline (450kLM)       83.6
log-linear monolingual  85.3
log-linear bilingual    92.3
Table 5: Accuracy of bilingual case prediction (%)

Compared to the language model trained on the same data (15kLM), our monolingual model performs significantly better, achieving a 30% error reduction (85.3% vs. 79.0%). Our monolingual model outperforms even the language model trained on 30 times more data (85.3% vs. 83.6%), with an error reduction of 10%. The difference is statistically significant at level p < 0.01 according to a test for the difference of proportions. This means that even though the projected dependency information is not perfect, it is still useful for the case prediction task. When we add the bilingual features, the error rate of our model is cut almost in half: the bilingual model achieves an error reduction of 48% over the monolingual model (92.3% vs. 85.3%, statistically significant at level p < 0.01). This result is very encouraging: it indicates that information from the source sentence can be exploited very effectively to improve the accuracy of case assignment. The usefulness of the source language information is also obvious when we inspect which case markers had the largest gains in accuracy due to this information: the top three cases were kara (0.28 to 0.65, a 57% gain), dewa (0.44 to 0.65, a 32% gain) and to (0.64 to 0.85, a 24% gain), all of which have translations as English prepositions. Markers such as ga (subject marker, 0.68 to 0.74, an 8% gain) and wo (object marker, 0.83 to 0.86, a 3.5% gain), on the other hand, showed only a limited gain.
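To make the alignment-based features just discussed more tangible, here is a simplified sketch (not the authors' code) of how a few of the bilingual features in Table 4 could be read off a word alignment; the function names, the (Japanese-index, English-index) alignment format, and the PREP tag test are our own assumptions.

```python
# Simplified extraction of alignment-based features (a sketch, not the authors' code).
def aligned_source_words(ja_indices, alignment, en_words, en_pos):
    """Source (word, POS) pairs aligned to any of the given Japanese word indices."""
    return [(en_words[e], en_pos[e]) for j, e in sorted(alignment) if j in ja_indices]

def bilingual_features(phrase_indices, head_index, parent_head_index,
                       alignment, en_words, en_pos):
    feats = {}
    feats["SrcAlignedToHead"] = aligned_source_words({head_index}, alignment, en_words, en_pos)
    feats["SrcAlignedToPhrase"] = aligned_source_words(set(phrase_indices), alignment, en_words, en_pos)
    feats["SrcAlignedToParentHead"] = aligned_source_words({parent_head_index}, alignment, en_words, en_pos)
    feats["SrcPrepositionsInPhrase"] = [w for w, p in feats["SrcAlignedToPhrase"] if p == "PREP"]
    return feats
```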
The results show that the models we proposed, which explore syntax-based features and features from the source language in the bilingual task, can effectively predict case markers. There are a number of extensions and next steps we can think of at this point, the most immediate and important one of which is to incorporate the proposed model in an end-to-end MT system to make improvements in the output of MT. We would also like to perform a more extensive analysis of features and feature ablation experiments. Finally, we would also like to extend the proposed model to include languages with inflectional morphology and the prediction of grammatical elements in general. Acknowledgements We would like to thank the anonymous reviewers for their comments, and Bob Moore, Arul Menezes, Chris Quirk, and Lucy Vanderwende for helpful discussions. References Baldwin, T. 2004. Making Sense of Japanese Relative Clause Constructions, In Proceedings of the 2nd Workshop on Text Meaning and Interpretation. Blaheta, D. and E. Charniak. 2000. Assigning function tags to parsed text. In Proceedings of NAACL, pp.234-240. Carreras, X. and L. Màrquez. 2005. Introduction to the CoNLL-2005 Shared Task: Semantic Role Labeling. In Proceedings of CoNLL-2005. Clarkson, P.R. and R. Rosenfeld. 1997. Statistical Language Modeling Using the CMU-Cambridge Toolkit. In Proceedings of ESCA Eurospeech, pp. 2007-2010. Collins, M. 2000. Discriminative reranking for natural language parsing. In Proceedings of ICML. Gamon, M., E. Ringger, S. Corston-Oliver and R. Moore. 2002. Machine-learned Context for Linguistic Operations in German Sentence Realization. In Proceeding of ACL. Gildea, D. and D. Jurafsky. 2002. Automatic Labeling of Semantic Roles. In Computational Linguistics 28(3): 245-288. Hacioglu, K. 2004. Semantic Role Labeling using Dependency Trees. In Proceedings of COLING 2004. Kawahara, D., N. Kaji and S. Kurohashi. 2000. Japanese Case Structure Analysis by Unsupervised Construction of a Case Frame Dictionary. In Proceedings of COLING, pp. 432-438. Kurohashi, S. and M.Nagao. 1997. Kyoto University Text Corpus Project. In Proceedings of ANLP, pp.115-118. Masuoka, T. and Y. Takubo. 1992. Kiso Nihongo Bunpou (Fundamental Japanese grammar), revised version. Kuroshio Shuppan, Tokyo. Murata, M., and H. Isahara. 2005. Japanese Case Analysis Based on Machine Learning Method that Uses Borrowed Supervised Data. In Proceedings of IEEE NLP-KE-2005, pp.774-779. Och, F.J. and H. Ney. 2000. Improved statistical alignment models. In Proceedings of ACL: pp.440-447. Palmer, M., D. Gildea and P. Kingsbury. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. In Computational Linguistics 31(1). Pradhan, S., W. Ward, K. Hacioglu, L. Martin, D. Jurafsky. 2004. Shallow Semantic Parsing Using Support Vector Machines. In Proceedings of HLT/NAACL. Quirk, C., A. Menezes and C. Cherry. 2005. Dependency Tree Translation: Syntactically Informed Phrasal SMT. In Proceedings of ACL. Teramura, H. 1991. Nihongo-no shintakusu-to imi (Japanese syntax and meaning). Volume III. Kuroshio Shuppan, Tokyo. Toutanova, K., A. Haghighi and C. D. Manning. 2005. Joint Learning Improves Semantic Role Labeling. In Proceeding of ACL, pp.589-596. Uchimoto, K., S. Sekine and H. Isahara. 2002. Text Generation from Keywords. In Proceedings of COLING 2002, pp.1037-1043. 1056 | 2006 | 132 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1057–1064, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Are These Documents Written from Different Perspectives? A Test of Different Perspectives Based On Statistical Distribution Divergence Wei-Hao Lin Language Technologies Institute School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 U.S.A. [email protected] Alexander Hauptmann Language Technologies Institute School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 U.S.A. [email protected] Abstract In this paper we investigate how to automatically determine if two document collections are written from different perspectives. By perspectives we mean a point of view, for example, from the perspective of Democrats or Republicans. We propose a test of different perspectives based on distribution divergence between the statistical models of two collections. Experimental results show that the test can successfully distinguish document collections of different perspectives from other types of collections. 1 Introduction Conflicts arise when two groups of people take very different perspectives on political, socioeconomical, or cultural issues. For example, here are the answers that two presidential candidates, John Kerry and George Bush, gave during the third presidential debate in 2004 in response to a question on abortion: (1) Kerry: What is an article of faith for me is not something that I can legislate on somebody who doesn’t share that article of faith. I believe that choice is a woman’s choice. It’s between a woman, God and her doctor. And that’s why I support that. (2) Bush: I believe the ideal world is one in which every child is protected in law and welcomed to life. I understand there’s great differences on this issue of abortion, but I believe reasonable people can come together and put good law in place that will help reduce the number of abortions. After reading the above transcripts some readers may conclude that one takes a “pro-choice” perspective while the other takes a “pro-life” perspective, the two dominant perspectives in the abortion controversy. Perspectives, however, are not always manifested when two pieces of text together are put together. For example, the following two sentences are from Reuters newswire: (3) Gold output in the northeast China province of Heilongjiang rose 22.7 pct in 1986 from 1985’s level, the New China News Agency said. (4) Exco Chairman Richard Lacy told Reuters the acquisition was being made from Bank of New York Co Inc, which currently holds a 50.1 pct, and from RMJ partners who hold the remainder. A reader would not from this pair of examples perceive as strongly contrasting perspectives as the Kerry-Bush answers. Instead, as the Reuters annotators did, one would label Example 3 as “gold” and Example 4 as “acquisition”, that is, as two topics instead of two perspectives. Why does the contrast between Example 1 and Example 2 convey different perspectives, but the contrast between Example 3 and Example 4 result in different topics? How can we define the impalpable “different perspectives” anyway? The definition of “perspective” in the dictionary is “subjective evaluation of relative significance,”1 but can we have a computable definition to test the existence of different perspectives? 1The American Heritage Dictionary of the English Language, 4th ed. 
We are interested in identifying “ideological perspectives” (Verdonk, 2002), not first-person or secondperson “perspective” in narrative. 1057 The research question about the definition of different perspectives is not only scientifically intriguing, it also enables us to develop important natural language processing applications. Such a computational definition can be used to detect the emergence of contrasting perspectives. Media and political analysts regularly monitor broadcast news, magazines, newspapers, and blogs to see if there are public opinion splitting. The huge number of documents, however, make the task extremely daunting. Therefore an automated test of different perspectives will be very valuable to information analysts. We first review the relevant work in Section 2. We take a model-based approach to develop a computational definition of different perspectives. We first develop statistical models for the two document collections, A and B, and then measure the degree of contrast by calculating the “distance” between A and B. How document collections are statistically modeled and how distribution difference is estimated are described in Section 3. The document corpora are described in Section 4. In Section 5, we evaluate how effective the proposed test of difference perspectives based on statistical distribution. The experimental results show that the distribution divergence can successfully separate document collections of different perspectives from other kinds of collection pairs. We also investigate if the pattern of distribution difference is due to personal writing or speaking styles. 2 Related Work There has been interest in understanding how beliefs and ideologies can be represented in computers since mid-sixties of the last century (Abelson and Carroll, 1965; Schank and Abelson, 1977). The Ideology Machine (Abelson, 1973) can simulate a right-wing ideologue, and POLITICS (Carbonell, 1978) can interpret a text from conservative or liberal ideologies. In this paper we take a statistics-based approach, which is very different from previous work that rely very much on manually-constructed knowledge base. Note that what we are interested in is to determine if two document collections are written from different perspectives, not to model individual perspectives. We aim to capture the characteristics, specifically the statistical regularities of any pairs of document collections with opposing perspectives. Given a pair of document collections A and B, our goal is not to construct classifiers that can predict if a document was written from the perspective of A or B (Lin et al., 2006), but to determine if the document collection pair (A, B) convey opposing perspectives. There has been growing interest in subjectivity and sentiment analysis. There are studies on learning subjective language (Wiebe et al., 2004), identifying opinionated documents (Yu and Hatzivassiloglou, 2003) and sentences (Riloff et al., 2003; Riloff and Wiebe, 2003), and discriminating between positive and negative language (Turney and Littman, 2003; Pang et al., 2002; Dave et al., 2003; Nasukawa and Yi, 2003; Morinaga et al., 2002). There are also research work on automatically classifying movie or product reviews as positive or negative (Nasukawa and Yi, 2003; Mullen and Collier, 2004; Beineke et al., 2004; Pang and Lee, 2004; Hu and Liu, 2004). 
Although we expect by its very nature much of the language used when expressing a perspective to be subjective and opinionated, the task of labeling a document or a sentence as subjective is orthogonal to the test of different perspectives. A subjectivity classifier may successfully identify all subjective sentences in the document collection pair A and B, but knowing the number of subjective sentences in A and B does not necessarily tell us if they convey opposing perspectives. We utilize the subjectivity patterns automatically extracted from foreign news documents (Riloff and Wiebe, 2003), and find that the percentages of the subjective sentences in the bitterlemons corpus (see Section 4) are similar (65.6% in the Palestinian documents and 66.2% in the Israeli documents). The high but almost equivalent number of subjective sentences in the two perspectives suggests that perspective is largely expressed in subjective language, but the subjectivity ratio is not enough to tell if two document collections are written from the same (Palestinian vs. Palestinian) or different perspectives (Palestinian vs. Israeli).2

2 However, the close subjectivity ratio doesn't mean that subjectivity can never help identify document collections of opposing perspectives. For example, the accuracy of the test of different perspectives may be improved by focusing on only subjective sentences.

3 Statistical Distribution Divergence

We take a model-based approach to measure to what degree, if any, two document collections are different. A document is represented as a point in a V-dimensional space, where V is the vocabulary size. Each coordinate is the frequency of a word in a document, i.e., term frequency. Although the vector representation, commonly known as a bag of words, is oversimplified and ignores rich syntactic and semantic structures, more sophisticated representations require more data to obtain reliable models. Practically, the bag-of-words representation has been very effective in many tasks, including text categorization (Sebastiani, 2002) and information retrieval (Lewis, 1998). We assume that a collection of N documents, y1, y2, ..., yN, is sampled from the following process:

θ ~ Dirichlet(α)
yi ~ Multinomial(ni, θ)

We first sample a V-dimensional vector θ from a Dirichlet prior distribution with a hyperparameter α, and then sample a document yi repeatedly from a Multinomial distribution conditioned on the parameter θ, where ni is the document length of the ith document in the collection and is assumed to be known and fixed. We are interested in comparing the parameter θ after observing document collections A and B:

p(θ|A) = p(A|θ)p(θ) / p(A) = Dirichlet(θ | α + Σ_{yi∈A} yi)

The posterior distribution p(θ|·) is a Dirichlet distribution, since a Dirichlet distribution is a conjugate prior for a Multinomial distribution. How should we measure the difference between two posterior distributions p(θ|A) and p(θ|B)? One common way to measure the difference between two distributions is Kullback-Leibler (KL) divergence (Kullback and Leibler, 1951), defined as follows:

D(p(θ|A) || p(θ|B)) = ∫ p(θ|A) log [ p(θ|A) / p(θ|B) ] dθ     (5)

Directly calculating KL divergence according to (5) involves a difficult high-dimensional integral. As an alternative, we approximate KL divergence using Monte Carlo methods as follows:

1. Sample θ1, θ2, ..., θM from Dirichlet(θ | α + Σ_{yi∈A} yi).
2. Return D̂ = (1/M) Σ_{i=1..M} log [ p(θi|A) / p(θi|B) ] as a Monte Carlo estimate of D(p(θ|A) || p(θ|B)).
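A compact sketch of this estimator in Python (our illustration, not the authors' code): it assumes a symmetric Dirichlet prior, takes the word-count vectors summed over each collection as input, and relies on numpy and scipy for the Dirichlet sampler and log-density.

```python
# Monte Carlo estimate of D(p(theta|A) || p(theta|B)); a sketch, not the authors' code.
import numpy as np
from scipy.stats import dirichlet

def mc_kl(counts_a, counts_b, alpha=1.0, M=1000, seed=0):
    post_a = alpha + np.asarray(counts_a, dtype=float)   # Dirichlet posterior parameters for A
    post_b = alpha + np.asarray(counts_b, dtype=float)   # Dirichlet posterior parameters for B
    rng = np.random.default_rng(seed)
    thetas = rng.dirichlet(post_a, size=M)               # theta_1 ... theta_M ~ p(theta | A)
    log_ratio = np.array([dirichlet.logpdf(t, post_a) - dirichlet.logpdf(t, post_b)
                          for t in thetas])
    return log_ratio.mean(), log_ratio.std(ddof=1)       # estimate D_hat and its sample std
```

The sample standard deviation returned here is what enters the confidence interval discussed later in the experiments.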
Algorithms for sampling from a Dirichlet distribution can be found in (Ripley, 1987). As M → ∞, the Monte Carlo estimate will converge to the true KL divergence by the Law of Large Numbers.

4 Corpora

To evaluate how well KL divergence between posterior distributions can discern a document collection pair of different perspectives, we collect two corpora of documents that were written or spoken from different perspectives and one newswire corpus that covers various topics, as summarized in Table 1. No stemming is performed; no stopwords are removed.

Corpus                     Subset        |D|    avg. |d|    V
bitterlemons               Palestinian    290    748.7     10309
                           Israeli        303    822.4     11668
                           Pal. Editor    144    636.2      6294
                           Pal. Guest     146    859.6      8661
                           Isr. Editor    152    819.4      8512
                           Isr. Guest     151    825.5      8812
2004 Presidential Debate   Kerry          178    124.7      2554
                           Bush           176    107.8      2393
                           1st Kerry       33    216.3      1274
                           1st Bush        41    155.3      1195
                           2nd Kerry       73    103.8      1472
                           2nd Bush        75     89.0      1333
                           3rd Kerry       72    104.0      1408
                           3rd Bush        60     98.8      1281
Reuters-21578              ACQ           2448    124.7     14293
                           CRUDE          634    214.7      9009
                           EARN          3987     81.0     12430
                           GRAIN          628    183.0      8236
                           INTEREST       513    176.3      6056
                           MONEY-FX       801    197.9      8162
                           TRADE          551    255.3      8175
Table 1: The number of documents |D|, average document length avg. |d|, and vocabulary size V of the three corpora.

The first perspective corpus consists of articles published on the bitterlemons website (http://www.bitterlemons.org/) from late 2001 to early 2005. The website is set up to "contribute to mutual understanding [between Palestinians and Israelis] through the open exchange of ideas" (http://www.bitterlemons.org/about/about.html). Every week an issue about the Israeli-Palestinian conflict is selected for discussion (e.g., "Disengagement: unilateral or coordinated?"), and a Palestinian editor and an Israeli editor each contribute one article addressing the issue. In addition, the Israeli and Palestinian editors interview a guest to express their views on the issue, resulting in a total of four articles in a weekly edition. The perspective from which each article is written is labeled as either Palestinian or Israeli by the editors. The second perspective corpus consists of the transcripts of the three Bush-Kerry presidential debates in 2004. The transcripts are from the website of the Commission on Presidential Debates (http://www.debates.org/pages/debtrans.html). Each spoken document is roughly an answer to a question or a rebuttal. The transcripts are segmented by the speaker tags already in the transcripts. All words from moderators are discarded. The topical corpus contains newswire from Reuters in 1987. Reuters-21578 (http://www.ics.uci.edu/~kdd/databases/reuters21578/reuters21578.html) is one of the most common testbeds for text categorization. Each document belongs to none, one, or more of the 135 categories (e.g., "Mergers" and "U.S. Dollars"). The number of documents in each category is not evenly distributed (median 9.0, mean 105.9). To estimate statistics reliably, we only consider categories with more than 500 documents, resulting in a total of seven categories (ACQ, CRUDE, EARN, GRAIN, INTEREST, MONEY-FX, and TRADE).

5 Experiments

A test of different perspectives is acute when it can draw distinctions between document collection pairs of different perspectives and document collection pairs of the same perspective and others. We thus evaluate the proposed test of different perspectives in the following four types of document collection pairs (A, B):
Different Perspectives (DP): A and B are written from different perspectives. For example, A is written from the Palestinian perspective and B is written from the Israeli perspective in the bitterlemons corpus.
Same Perspective (SP): A and B are written from the same perspective. For example, A and B consist of the words spoken by Kerry.
Different Topics (DT): A and B are written on different topics. For example, A is about acquisition (ACQ) and B is about crude oil (CRUDE).
Same Topic (ST): A and B are written on the same topic. For example, A and B are both about earnings (EARN).
The effectiveness of the proposed test of different perspectives can thus be measured by how well the distribution divergence of DP document collection pairs is separated from the distribution divergence of SP, DT, and ST document collection pairs. The smaller the overlap of the ranges of distribution divergence, the sharper the test of different perspectives. To account for the large variation in the number of words and vocabulary size across corpora, we normalize the total number of words in a document collection to be the same K, and consider only the top C% most frequent words in the document collection pair. We vary the values of K and C, and find that K changes the absolute scale of KL divergence but does not change the rankings of the four conditions. Rankings among the four conditions are consistent when C is small. We only report results for K = 1000, C = 10 in the paper due to the space limit. There are two kinds of variance in the estimation of the divergence between two posterior distributions, and both should be carefully checked. The first kind of variance is due to Monte Carlo methods. We assess the Monte Carlo variance by calculating a 100α percent confidence interval as follows:

[ D̂ − Φ⁻¹(α/2) · σ̂/√M ,  D̂ + Φ⁻¹(1 − α/2) · σ̂/√M ]

where σ̂² is the sample variance of θ1, θ2, ..., θM, and Φ⁻¹(·) is the inverse of the standard normal cumulative density function. The second kind of variance is due to the intrinsic uncertainties of the data generating processes. We assess the second kind of variance by collecting 1000 bootstrapped samples, that is, sampling with replacement, from each document collection pair.

5.1 Quality of Monte Carlo Estimates

The Monte Carlo estimates of the KL divergence for several document collection pairs are listed in Table 2. A complete list of the results is omitted due to the space limit.

A            B            D̂        95% CI
ACQ          ACQ          2.76      [2.62, 2.89]
Palestinian  Palestinian  3.00      [3.54, 3.85]
Palestinian  Israeli      27.11     [26.64, 27.58]
Israeli      Palestinian  28.44     [27.97, 28.91]
Kerry        Bush         58.93     [58.22, 59.64]
ACQ          EARN         615.75    [610.85, 620.65]
Table 2: The Monte Carlo estimate D̂ and 95% confidence interval (CI) of the Kullback-Leibler divergence of several document collection pairs (A, B) with the number of Monte Carlo samples M = 1000.

We can see that the 95% confidence intervals capture the Monte Carlo estimates of the KL divergence well. Note that KL divergence is not symmetric. The KL divergence of the pair (Israeli, Palestinian) is not necessarily the same as that of (Palestinian, Israeli). KL divergence is greater than zero (Cover and Thomas, 1991) and equal to zero only when document collections A and B are exactly the same. Here (ACQ, ACQ) is close to but not exactly zero because they are different samples of documents in the ACQ category. Since the CIs of the Monte Carlo estimates are reasonably tight, we assume them to be exact and ignore the errors from Monte Carlo methods.
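As a small companion to the mc_kl sketch above (again ours, not the authors' code), a standard symmetric normal-approximation interval can be formed from the per-sample statistics; it mirrors the calculation described in this section rather than reproducing the exact formula.

```python
# Normal-approximation confidence interval for the Monte Carlo KL estimate (a sketch).
from scipy.stats import norm

def mc_confidence_interval(d_hat, sample_std, M, level=0.95):
    z = norm.ppf(0.5 + level / 2.0)          # e.g. about 1.96 for a 95% interval
    half_width = z * sample_std / (M ** 0.5)
    return d_hat - half_width, d_hat + half_width
```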
5.2 Test of Different Perspectives

We now present the main result of the paper. We calculate the KL divergence between posterior distributions of document collection pairs in the four conditions using Monte Carlo methods, and plot the results in Figure 1.

[Figure 1: The KL divergence of the document collection pairs in four conditions: Different Perspectives (DP), Same Perspective (SP), Different Topics (DT), and Same Topic (ST). Note that the x axis is in log scale. The Monte Carlo estimates D̂ of the pairs in the DP condition are plotted as rugs. D̂ of the pairs in the other conditions are omitted to avoid clutter and summarized in one-dimensional densities using Kernel Density Estimation. The vertical lines are drawn at the points with equivalent densities.]

The test of different perspectives based on statistical distribution divergence is shown to be very acute. The KL divergence of the document collection pairs in the DP condition falls mostly in the middle range, and is well separated from the high KL divergence of the pairs in the DT condition and from the low KL divergence of the pairs in the SP and ST conditions. Therefore, by simply calculating the KL divergence of a document collection pair, we can reliably predict that they are written from different perspectives if the value of the KL divergence falls in the middle range, from different topics if the value is very large, and from the same topic or perspective if the value is very small.

5.3 Personal Writing Styles or Perspectives?

One may suspect that the mid-range distribution divergence is attributable to personal speaking or writing styles and has nothing to do with different perspectives. The doubt is expected because half of the bitterlemons corpus is written by one Palestinian editor and one Israeli editor (see Table 1), and the debate transcripts come from only two candidates. We test the hypothesis by computing the distribution divergence of the document collection pair (Israeli Guest, Palestinian Guest), that is, a Different Perspectives (DP) pair. There are more than 200 different authors in the Israeli Guest and Palestinian Guest collections. If the distribution divergence of the pair with diverse authors falls out of the middle range, it will support the view that mid-range divergence is due to writing styles. On the other hand, if the distribution divergence still falls in the middle range, we can be more confident that the effect is attributable to different perspectives. We compare the distribution divergence of the pair (Israeli Guest, Palestinian Guest) with the others in Figure 2.

[Figure 2: The average KL divergence of document collection pairs in the bitterlemons Guest subset (Israeli Guest vs. Palestinian Guest) and the ST, SP, DP, and DT conditions. The horizontal lines are the same as those in Figure 1.]

The results show that the distribution divergence of the (Israeli Guest, Palestinian Guest) pair, like the other pairs in the DP condition, still falls in the middle range, and is well separated from SP and ST in the low range and DT in the high range. The decrease in KL divergence due to writing or speaking styles is noticeable, and the overall effect due to different perspectives is strong enough to make the test robust. We thus conclude that the test of different perspectives based on distribution divergence indeed captures different perspectives, not personal writing or speaking styles.
5.4 Origins of Differences

While the effectiveness of the test of different perspectives is demonstrated in Figure 1, one may wonder why the distribution divergence of the document collection pairs with different perspectives falls in the middle range, and what causes the large and small divergence of the document collection pairs with different topics (DT) and the same topic (ST) or perspective (SP), respectively. In other words, where do the differences come from? We answer the question by taking a closer look at the causes of the distribution divergence in our model. We compare the expected marginal difference of θ between the two posterior distributions p(θ|A) and p(θ|B). The marginal distribution of the i-th coordinate of θ, that is, the i-th word in the vocabulary, is a Beta distribution, and thus the expected value can be easily calculated. We plot ∆θ = E[θi|A] − E[θi|B] against E[θi|A] for each condition in Figure 3.

[Figure 3: The ∆θ vs. θ plots of typical document collection pairs in the four conditions; the horizontal line is ∆θ = 0. Panels: (a) Same Topic (ST), (b) Same Perspective (SP), (c) two examples of Different Perspective (DP), (d) two examples of Different Topics (DT).]

How ∆θ deviates from zero partially explains the different patterns of distribution divergence in Figure 1. In Figure 3d we see that ∆θ increases as θ increases, and the deviation from zero is much greater than in the Same Perspective (Figure 3b) and Same Topic (Figure 3a) conditions. The large ∆θ not only accounts for the large distribution divergence of the document pairs in the DT condition, but also shows that words that are frequent in one topic are less likely to be frequent in the other topic. At the other extreme, document collection pairs of the Same Perspective (SP) or Same Topic (ST) show very little difference in θ, which matches our intuition that documents of the same perspective or the same topic use the same vocabulary in a very similar way. The manner in which ∆θ varies with the value of θ in the Different Perspective (DP) condition is unique. The ∆θ in Figure 3c is not as small as in the SP and ST conditions, but at the same time not as large as in the DT condition, resulting in the mid-range distribution divergence in Figure 1. Why do document collections of different perspectives distribute this way? Partly because articles from different perspectives focus on closely related issues (the Palestinian-Israeli conflict in the bitterlemons corpus, or the political and economic issues in the debate corpus), the authors of different perspectives write or speak in a similar vocabulary, but with emphasis on different words.

6 Conclusions

In this paper we develop a computational test of different perspectives based on statistical distribution divergence between the statistical models of document collections. We show that the proposed test can successfully separate document collections of different perspectives from other types of document collection pairs. The distribution divergence falling in the middle range cannot simply be attributed to personal writing or speaking styles.
From the plot of multinomial parameter difference we offer insights into where the different patterns of distribution divergence come from. Although we validate the test of different perspectives by comparing the DP condition with DT, SP, and ST conditions, the comparisons are by no means exhaustive, and the distribution divergence of some document collection pairs may also fall in the middle range. We plan to investigate more types of document collections pairs, e.g., the document collections from different text genres (Kessler et al., 1997). Acknowledgment We would like thank the anonymous reviewers for useful comments and suggestions. This material is based on work supported by the Advanced Research and Development Activity (ARDA) under contract number NBCHC040037. 1063 References Robert P. Abelson and J. Douglas Carroll. 1965. Computer simulation of individual belief systems. The American Behavioral Scientist, 8:24–30, May. Robert P. Abelson, 1973. Computer Models of Thought and Language, chapter The Structure of Belief Systems, pages 287–339. W. H. Freeman and Company. Philip Beineke, Trevor Hastie, and Shivakumar Vaithyanathan. 2004. The sentimental factor: Improving review classification via human-provided information. In Proceedings of the Association for Computational Linguistics (ACL-2004). Jaime G. Carbonell. 1978. POLITICS: Automated ideological reasoning. Cognitive Science, 2(1):27– 51. Thomas M. Cover and Joy A. Thomas. 1991. Elements of Information Theory. Wiley-Interscience. Kushal Dave, Steve Lawrence, and David M. Pennock. 2003. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. In Proceedings of the 12th International World Wide Web Conference (WWW2003). Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the 2004 ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Brett Kessler, Geoffrey Nunberg, and Hinrich Sch¨utze. 1997. Automatic detection of text genre. In Proceedings of the 35th Conference on Association for Computational Linguistics, pages 32–38. S. Kullback and R. A. Leibler. 1951. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79–86, March. David D. Lewis. 1998. Naive (Bayes) at forty: The independence assumption in information retrieval. In Proceedings of the 9th European Conference on Machine Learning (ECML). Wei-Hao Lin, Theresa Wilson, Janyce Wiebe, and Alexander Hauptmann. 2006. Which side are you on? identifying perspectives at the document and sentence levels. In Proceedings of Tenth Conference on Natural Language Learning (CoNLL). S. Morinaga, K. Yamanishi, K. Tateishi, and T. Fukushima. 2002. Mining product reputations on the web. In Proceedings of the 2002 ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Tony Mullen and Nigel Collier. 2004. Sentiment analysis using support vector machines with diverse information sources. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2004). T. Nasukawa and J. Yi. 2003. Sentiment analysis: Capturing favorability using natural language processing. In Proceedings of the 2nd International Conference on Knowledge Capture (K-CAP 2003). Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the Association for Computational Linguistics (ACL-2004). Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? 
Sentiment classification using machine learning techniques. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2002). Ellen Riloff and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2003). Ellen Riloff, Janyce Wiebe, and Theresa Wilson. 2003. Learning subjective nouns using extraction pattern bootstrapping. In Proceedings of the 7th Conference on Natural Language Learning (CoNLL-2003). B. D. Ripley. 1987. Stochastic Simulation. Wiley. Roger C. Schank and Robert P. Abelson. 1977. Scripts, plans, goals, and understanding: an inquiry into human knowledge structures. Lawrene Erlbaum Associates. Fabrizio Sebastiani. 2002. Machine learning in automated text categorization. ACM Computing Surveys, 34(1):1–47, March. Peter Turney and Michael L. Littman. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Transactions on Information Systems (TOIS), 21(4):315–346. Peter Verdonk. 2002. Stylistics. Oxford University Press. Janyce Wiebe, Theresa Wilson, Rebecca Bruce, Matthew Bell, and Melanie Martin. 2004. Learning subjective language. Computational Linguistics, 30(3). Hong Yu and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2003). 1064 | 2006 | 133 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1065–1072, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Word Sense and Subjectivity Janyce Wiebe Department of Computer Science University of Pittsburgh [email protected] Rada Mihalcea Department of Computer Science University of North Texas [email protected] Abstract Subjectivity and meaning are both important properties of language. This paper explores their interaction, and brings empirical evidence in support of the hypotheses that (1) subjectivity is a property that can be associated with word senses, and (2) word sense disambiguation can directly benefit from subjectivity annotations. 1 Introduction There is growing interest in the automatic extraction of opinions, emotions, and sentiments in text (subjectivity), to provide tools and support for various NLP applications. Similarly, there is continuous interest in the task of word sense disambiguation, with sense-annotated resources being developed for many languages, and a growing number of research groups participating in large-scale evaluations such as SENSEVAL. Though both of these areas are concerned with the semantics of a text, over time there has been little interaction, if any, between them. In this paper, we address this gap, and explore possible interactions between subjectivity and word sense. There are several benefits that would motivate such a joint exploration. First, at the resource level, the augmentation of lexical resources such as WordNet (Miller, 1995) with subjectivity labels could support better subjectivity analysis tools, and principled methods for refining word senses and clustering similar meanings. Second, at the tool level, an explicit link between subjectivity and word sense could help improve methods for each, by integrating features learned from one into the other in a pipeline approach, or through joint simultaneous learning. In this paper we address two questions about word sense and subjectivity. First, can subjectivity labels be assigned to word senses? To address this question, we perform two studies. The first (Section 3) investigates agreement between annotators who manually assign the labels subjective, objective, or both to WordNet senses. The second study (Section 4) evaluates a method for automatic assignment of subjectivity labels to word senses. We devise an algorithm relying on distributionally similar words to calculate a subjectivity score, and show how it can be used to automatically assess the subjectivity of a word sense. Second, can automatic subjectivity analysis be used to improve word sense disambiguation? To address this question, the output of a subjectivity sentence classifier is input to a word-sense disambiguation system, which is in turn evaluated on the nouns from the SENSEVAL-3 English lexical sample task (Section 5). The results of this experiment show that a subjectivity feature can significantly improve the accuracy of a word sense disambiguation system for those words that have both subjective and objective senses. A third obvious question is, can word sense disambiguation help automatic subjectivity analysis? However, due to space limitations, we do not address this question here, but rather leave it for future work. 2 Background Subjective expressions are words and phrases being used to express opinions, emotions, evaluations, speculations, etc. (Wiebe et al., 2005). 
A general covering term for such states is private state, “a state that is not open to objective obser1065 vation or verification” (Quirk et al., 1985).1 There are three main types of subjective expressions:2 (1) references to private states: His alarm grew. He absorbed the information quickly. He was boiling with anger. (2) references to speech (or writing) events expressing private states: UCC/Disciples leaders roundly condemned the Iranian President’s verbal assault on Israel. The editors of the left-leaning paper attacked the new House Speaker. (3) expressive subjective elements: He would be quite a catch. What’s the catch? That doctor is a quack. Work on automatic subjectivity analysis falls into three main areas. The first is identifying words and phrases that are associated with subjectivity, for example, that think is associated with private states and that beautiful is associated with positive sentiments (e.g., (Hatzivassiloglou and McKeown, 1997; Wiebe, 2000; Kamps and Marx, 2002; Turney, 2002; Esuli and Sebastiani, 2005)). Such judgments are made for words. In contrast, our end task (in Section 4) is to assign subjectivity labels to word senses. The second is subjectivity classification of sentences, clauses, phrases, or word instances in the context of a particular text or conversation, either subjective/objective classifications or positive/negative sentiment classifications (e.g.,(Riloff and Wiebe, 2003; Yu and Hatzivassiloglou, 2003; Dave et al., 2003; Hu and Liu, 2004)). The third exploits automatic subjectivity analysis in applications such as review classification (e.g., (Turney, 2002; Pang and Lee, 2004)), mining texts for product reviews (e.g., (Yi et al., 2003; Hu and Liu, 2004; Popescu and Etzioni, 2005)), summarization (e.g., (Kim and Hovy, 2004)), information extraction (e.g., (Riloff et al., 2005)), 1Note that sentiment, the focus of much recent work in the area, is a type of subjectivity, specifically involving positive or negative opinion, emotion, or evaluation. 2These distinctions are not strictly needed for this paper, but may help the reader appreciate the examples given below. and question answering (e.g., (Yu and Hatzivassiloglou, 2003; Stoyanov et al., 2005)). Most manual subjectivity annotation research has focused on annotating words, out of context (e.g., (Heise, 2001)), or sentences and phrases in the context of a text or conversation (e.g., (Wiebe et al., 2005)). The new annotations in this paper are instead targeting the annotation of word senses. 3 Human Judgment of Word Sense Subjectivity To explore our hypothesis that subjectivity may be associated with word senses, we developed a manual annotation scheme for assigning subjectivity labels to WordNet senses,3 and performed an inter-annotator agreement study to assess its reliability. Senses are classified as S(ubjective), O(bjective), or B(oth). Classifying a sense as S means that, when the sense is used in a text or conversation, we expect it to express subjectivity; we also expect the phrase or sentence containing it to be subjective. We saw a number of subjective expressions in Section 2. A subset is repeated here, along with relevant WordNet senses. In the display of each sense, the first part shows the synset, gloss, and any examples. The second part (marked with =>) shows the immediate hypernym. His alarm grew. 
alarm, dismay, consternation – (fear resulting from the awareness of danger) => fear, fearfulness, fright – (an emotion experienced in anticipation of some specific pain or danger (usually accompanied by a desire to flee or fight)) He was boiling with anger. seethe, boil – (be in an agitated emotional state; “The customer was seething with anger”) => be – (have the quality of being; (copula, used with an adjective or a predicate noun); “John is rich”; “This is not a good answer”) What’s the catch? catch – (a hidden drawback; “it sounds good but what’s the catch?”) => drawback – (the quality of being a hindrance; “he pointed out all the drawbacks to my plan”) That doctor is a quack. quack – (an untrained person who pretends to be a physician and who dispenses medical advice) => doctor, doc, physician, MD, Dr., medico Before specifying what we mean by an objective sense, we give examples. 3All our examples and data used in the experiments are from WordNet 2.0. 1066 The alarm went off. alarm, warning device, alarm system – (a device that signals the occurrence of some undesirable event) => device – (an instrumentality invented for a particular purpose; “the device is small enough to wear on your wrist”; “a device intended to conserve water”) The water boiled. boil – (come to the boiling point and change from a liquid to vapor; “Water boils at 100 degrees Celsius”) => change state, turn – (undergo a transformation or a change of position or action; “We turned from Socialism to Capitalism”; “The people turned against the President when he stole the election”) He sold his catch at the market. catch, haul – (the quantity that was caught; “the catch was only 10 fish”) => indefinite quantity – (an estimated quantity) The duck’s quack was loud and brief. quack – (the harsh sound of a duck) => sound – (the sudden occurrence of an audible event; “the sound awakened them”) While we expect phrases or sentences containing subjective senses to be subjective, we do not necessarily expect phrases or sentences containing objective senses to be objective. Consider the following examples: Will someone shut that damn alarm off? Can’t you even boil water? While these sentences contain objective senses of alarm and boil, the sentences are subjective nonetheless. But they are not subjective due to alarm and boil, but rather to punctuation, sentence forms, and other words in the sentence. Thus, classifying a sense as O means that, when the sense is used in a text or conversation, we do not expect it to express subjectivity and, if the phrase or sentence containing it is subjective, the subjectivity is due to something else. Finally, classifying a sense as B means it covers both subjective and objective usages, e.g.: absorb, suck, imbibe, soak up, sop up, suck up, draw, take in, take up – (take in, also metaphorically; “The sponge absorbs water well”; “She drew strength from the minister’s words”) Manual subjectivity judgments were added to a total of 354 senses (64 words). One annotator, Judge 1 (a co-author), tagged all of them. A second annotator (Judge 2, who is not a co-author) tagged a subset for an agreement study, presented next. 3.1 Agreement Study For the agreement study, Judges 1 and 2 independently annotated 32 words (138 senses). 16 words have both S and O senses and 16 do not (according to Judge 1). Among the 16 that do not have both S and O senses, 8 have only S senses and 8 have only O senses. All of the subsets are balanced between nouns and verbs. 
Table 1 shows the contingency table for the two annotators’ judgments on this data. In addition to S, O, and B, the annotation scheme also permits U(ncertain) tags. S O B U Total S 39 O O 4 43 O 3 73 2 4 82 B 1 O 3 1 5 U 3 2 O 3 8 Total 46 75 5 12 138 Table 1: Agreement on balanced set (Agreement: 85.5%, κ: 0.74) Overall agreement is 85.5%, with a Kappa (κ) value of 0.74. For 12.3% of the senses, at least one annotator’s tag is U. If we consider these cases to be borderline and exclude them from the study, percent agreement increases to 95% and κ rises to 0.90. Thus, annotator agreement is especially high when both are certain. Considering only the 16-word subset with both S and O senses (according to Judge 1), κ is .75, and for the 16-word subset for which Judge 1 gave only S or only O senses, κ is .73. Thus, the two subsets are of comparable difficulty. The two annotators also independently annotated the 20 ambiguous nouns (117 senses) of the SENSEVAL-3 English lexical sample task used in Section 5. For this tagging task, U tags were not allowed, to create a definitive gold standard for the experiments. Even so, the κ value for them is 0.71, which is not substantially lower. The distributions of Judge 1’s tags for all 20 words can be found in Table 3 below. We conclude this section with examples of disagreements that illustrate sources of uncertainty. First, uncertainty arises when subjective senses are missing from the dictionary. The labels for the senses of noun assault are (O:O,O:O,O:O,O:UO).4 For verb assault there is a subjective sense: attack, round, assail, lash out, snipe, assault (attack in speech or writing) “The editors of the left-leaning paper attacked the new House Speaker” However, there is no corresponding sense for 4I.e., the first three were labeled O by both annotators. For the fourth sense, the second annotator was not sure but was leaning toward O. 1067 noun assault. A missing sense may lead an annotator to try to see subjectivity in an objective sense. Second, uncertainty can arise in weighing hypernym against sense. It is fine for a synset to imply just S or O, while the hypernym implies both (the synset specializes the more general concept). However, consider the following, which was tagged (O:UB). attack – (a sudden occurrence of an uncontrollable condition; “an attack of diarrhea”) => affliction – (a cause of great suffering and distress) While the sense is only about the condition, the hypernym highlights subjective reactions to the condition. One annotator judged only the sense (giving tag O), while the second considered the hypernym as well (giving tag UB). 4 Automatic Assessment of Word Sense Subjectivity Encouraged by the results of the agreement study, we devised a method targeting the automatic annotation of word senses for subjectivity. The main idea behind our method is that we can derive information about a word sense based on information drawn from words that are distributionally similar to the given word sense. This idea relates to the unsupervised word sense ranking algorithm described in (McCarthy et al., 2004). Note, however, that (McCarthy et al., 2004) used the information about distributionally similar words to approximate corpus frequencies for word senses, whereas we target the estimation of a property of a given word sense (the “subjectivity”). Starting with a given ambiguous word w, we first find the distributionally similar words using the method of (Lin, 1998) applied to the automatically parsed texts of the British National Corpus. 
Let DSW = dsw1, dsw2, ..., dswn be the list of top-ranked distributionally similar words, sorted in decreasing order of their similarity. Next, for each sense wsi of the word w, we determine the similarity with each of the words in the list DSW, using a WordNet-based measure of semantic similarity (wnss). Although a large number of such word-to-word similarity measures exist, we chose to use the (Jiang and Conrath, 1997) measure, since it was found both to be efficient and to provide the best results in previous experiments involving word sense ranking (McCarthy et al., 2004)5. For distributionally similar words 5Note that unlike the above measure of distributional simAlgorithm 1 Word Sense Subjectivity Score Input: Word sense wi Input: Distributionally similar words DSW = {dswj|j = 1..n} Output: Subjectivity score subj(wi) 1: subj(wi) = 0 2: totalsim = 0 3: for j = 1 to n do 4: Instsj = all instances of dswj in the MPQA corpus 5: for k in Instsj do 6: if k is in a subj. expr. in MPQA corpus then 7: subj(wi) += sim(wi,dswj) 8: else if k is not in a subj. expr. in MPQA corpus then 9: subj(wi) -= sim(wi,dswj) 10: end if 11: totalsim += sim(wi,dswj) 12: end for 13: end for 14: subj(wi) = subj(wi) / totalsim that are themselves ambiguous, we use the sense that maximizes the similarity score. The similarity scores associated with each word dswj are normalized so that they add up to one across all possible senses of w, which results in a score described by the following formula: sim(wsi, dswj) = wnss(wsi,dswj) P i′∈senses(w) wnss(wsi′ ,dswj) where wnss(wsi, dswj) = max k∈senses(dswj) wnss(wsi, dswk j ) A selection process can also be applied so that a distributionally similar word belongs only to one sense. In this case, for a given sense wi we use only those distributionally similar words with whom wi has the highest similarity score across all the senses of w. We refer to this case as similarityselected, as opposed to similarity-all, which refers to the use of all distributionally similar words for all senses. Once we have a list of similar words associated with each sense wsi and the corresponding similarity scores sim(wsi, dswj), we use an annotated corpus to assign subjectivity scores to the senses. The corpus we use is the MPQA Opinion Corpus, which consists of over 10,000 sentences from the world press annotated for subjective expressions (all three types of subjective expressions described in Section 2).6 ilarity which measures similarity between words, rather than word senses, here we needed a similarity measure that also takes into account word senses as defined in a sense inventory such as WordNet. 6The MPQA corpus is described in (Wiebe et al., 2005) and available at www.cs.pitt.edu/mpqa/databaserelease/. 1068 Algorithm 1 is our method for calculating sense subjectivity scores. The subjectivity score is a value in the interval [-1,+1] with +1 corresponding to highly subjective and -1 corresponding to highly objective. It is a sum of sim scores, where sim(wi,dswj) is added for each instance of dswj that is in a subjective expression, and subtracted for each instance that is not in a subjective expression. Note that the annotations in the MPQA corpus are for subjective expressions in context. Thus, the data is somewhat noisy for our task, because, as discussed in Section 3, objective senses may appear in subjective expressions. 
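For concreteness, the normalized similarity above can be read as sim(ws_i, dsw_j) = wnss(ws_i, dsw_j) / Σ_{i' ∈ senses(w)} wnss(ws_i', dsw_j), with wnss(ws_i, dsw_j) = max_{k ∈ senses(dsw_j)} wnss(ws_i, ws_j^k). The following Python sketch re-expresses this normalization together with Algorithm 1. It is an illustration under stated assumptions rather than the authors' implementation: the helpers senses, jiang_conrath, mpqa_instances, and in_subjective_expression are placeholders for the WordNet sense inventory, the Jiang-Conrath measure, and the MPQA corpus lookups described in the text.

    # Sketch of Algorithm 1 (word sense subjectivity score), assuming:
    #   senses(word)                 -> list of WordNet senses of `word`
    #   jiang_conrath(s1, s2)        -> Jiang-Conrath similarity of two senses
    #   mpqa_instances(word)         -> occurrences of `word` in the MPQA corpus
    #   in_subjective_expression(k)  -> True if occurrence k falls inside an
    #                                   annotated subjective expression
    # These helpers are placeholders, not real library calls.

    def wnss(sense, dsw):
        """wnss(ws_i, dsw_j): max similarity over the senses of the similar word."""
        return max(jiang_conrath(sense, s) for s in senses(dsw))

    def normalized_sim(word, sense, dsw):
        """sim(ws_i, dsw_j): wnss normalized over all senses of the target word."""
        total = sum(wnss(s, dsw) for s in senses(word))
        return wnss(sense, dsw) / total if total > 0 else 0.0

    def subjectivity_score(word, sense, dsw_list):
        """Score in [-1, +1]: +1 ~ highly subjective, -1 ~ highly objective."""
        subj, total_sim = 0.0, 0.0
        for dsw in dsw_list:                      # distributionally similar words
            sim = normalized_sim(word, sense, dsw)
            for k in mpqa_instances(dsw):
                if in_subjective_expression(k):
                    subj += sim                   # evidence for subjectivity
                else:
                    subj -= sim                   # evidence for objectivity
                total_sim += sim
        return subj / total_sim if total_sim > 0 else 0.0

In these terms, the similarity-selected variant restricts dsw_list, for each sense, to those distributionally similar words whose similarity is highest for that sense, while similarity-all passes the full list to every sense.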
Nonetheless, we hypothesized that subjective senses tend to appear more often in subjective expressions than objective senses do, and use the appearance of words in subjective expressions as evidence of sense subjectivity. (Wiebe, 2000) also makes use of an annotated corpus, but in a different approach: given a word w and a set of distributionally similar words DSW, that method assigns a subjectivity score to w equal to the conditional probability that any member of DSW is in a subjective expression. Moreover, the end task of that work was to annotate words, while our end task is the more difficult problem of annotating word senses for subjectivity. 4.1 Evaluation The evaluation of the algorithm is performed against the gold standard of 64 words (354 word senses) using Judge 1’s annotations, as described in Section 3. For each sense of each word in the set of 64 ambiguous words, we use Algorithm 1 to determine a subjectivity score. A subjectivity label is then assigned depending on the value of this score with respect to a pre-selected threshold. While a threshold of 0 seems like a sensible choice, we perform the evaluation for different thresholds ranging across the [-1,+1] interval, and correspondingly determine the precision of the algorithm at different points of recall7. Note that the word senses for which none of the distributionally similar words are found in the MPQA corpus are not 7Specifically, in the list of word senses ranked by their subjectivity score, we assign a subjectivity label to the top N word senses. The precision is then determined as the number of correct subjectivity label assignments out of all N assignments, while the recall is measured as the correct subjective senses out of all the subjective senses in the gold standard data set. By varying the value of N from 1 to the total number of senses in the corpus, we can derive precision and recall curves. included in this evaluation (excluding 82 senses), since in this case a subjectivity score cannot be calculated. The evaluation is therefore performed on a total of 272 word senses. As a baseline, we use an “informed” random assignment of subjectivity labels, which randomly assigns S labels to word senses in the data set, such that the maximum number of S assignments equals the number of correct S labels in the gold standard data set. This baseline guarantees a maximum recall of 1 (which under true random conditions might not be achievable). Correspondingly, given the controlled distribution of S labels across the data set in the baseline setting, the precision is equal for all eleven recall points, and is determined as the total number of correct subjective assignments divided by the size of the data set8. Number Break-even Algorithm of DSW point similarity-all 100 0.41 similarity-selected 100 0.50 similarity-all 160 0.43 similarity-selected 160 0.50 baseline 0.27 Table 2: Break-even point for different algorithm and parameter settings There are two aspects of the sense subjectivity scoring algorithm that can influence the label assignment, and correspondingly their evaluation. First, as indicated above, after calculating the semantic similarity of the distributionally similar words with each sense, we can either use all the distributionally similar words for the calculation of the subjectivity score of each sense (similarityall), or we can use only those that lead to the highest similarity (similarity-selected). Interestingly, this aspect can drastically affect the algorithm accuracy. 
The setting where a distributionally similar word can belong only to one sense significantly improves the algorithm performance. Figure 1 plots the interpolated precision for eleven points of recall, for similarity-all, similarity-selected, and baseline. As shown in this figure, the precisionrecall curves for our algorithm are clearly above the “informed” baseline, indicating the ability of our algorithm to automatically identify subjective word senses. Second, the number of distributionally similar words considered in the first stage of the algorithm can vary, and might therefore influence the 8In other words, this fraction represents the probability of making the correct subjective label assignment by chance. 1069 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Precision Recall Precision recall curves selected all baseline Figure 1: Precision and recall for automatic subjectivity annotations of word senses (DSW=160). output of the algorithm. We experiment with two different values, namely 100 and 160 top-ranked distributionally similar words. Table 2 shows the break-even points for the four different settings that were evaluated,9 with results that are almost double compared to the informed baseline. As it turns out, for weaker versions of the algorithm (i.e., similarity-all), the size of the set of distributionally similar words can significantly impact the performance of the algorithm. However, for the already improved similarity-selected algorithm version, this parameter does not seem to have influence, as similar results are obtained regardless of the number of distributionally similar words. This is in agreement with the finding of (McCarthy et al., 2004) that, in their word sense ranking method, a larger set of neighbors did not influence the algorithm accuracy. 5 Automatic Subjectivity Annotations for Word Sense Disambiguation The final question we address is concerned with the potential impact of subjectivity on the quality of a word sense classifier. To answer this question, we augment an existing data-driven word sense disambiguation system with a feature reflecting the subjectivity of the examples where the ambiguous word occurs, and evaluate the performance of the new subjectivity-aware classifier as compared to the traditional context-based sense classifier. We use a word sense disambiguation system that integrates both local and topical features. 9The break-even point (Lewis, 1992) is a standard measure used in conjunction with precision-recall evaluations. It represents the value where precision and recall become equal. Specifically, we use the current word and its partof-speech, a local context of three words to the left and right of the ambiguous word, the parts-ofspeech of the surrounding words, and a global context implemented through sense-specific keywords determined as a list of at most five words occurring at least three times in the contexts defining a certain word sense. This feature set is similar to the one used by (Ng and Lee, 1996), as well as by a number of SENSEVAL systems. The parameters for sense-specific keyword selection were determined through cross-fold validation on the training set. The features are integrated in a Naive Bayes classifier, which was selected mainly for its performance in previous work showing that it can lead to a state-of-the-art disambiguation system given the features we consider (Lee and Ng, 2002). 
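The ranking-based evaluation just described (order senses by subjectivity score, label the top N as subjective, sweep N, and report the break-even point where precision and recall meet) can be sketched as follows. This is an illustrative reconstruction, not the evaluation script behind Table 2; scored_senses, a list of (sense, score, gold label) triples, is an assumed input.

    # Sketch of the ranking-based evaluation: label the top-N scored senses as
    # subjective, compute precision/recall at each N, and report the break-even
    # point. Assumes at least one gold-subjective sense in `scored_senses`.

    def precision_recall_curve(scored_senses):
        ranked = sorted(scored_senses, key=lambda t: t[1], reverse=True)
        total_subjective = sum(1 for _, _, gold in ranked if gold)
        curve, correct = [], 0
        for n, (_, _, gold) in enumerate(ranked, start=1):
            correct += 1 if gold else 0
            curve.append((n, correct / n, correct / total_subjective))
        return curve                      # (N, precision, recall) triples

    def break_even_point(curve):
        n, p, r = min(curve, key=lambda t: abs(t[1] - t[2]))
        return (p + r) / 2                # value where precision ~ recall

A procedure of this form underlies the precision-recall curves of Figure 1 and the break-even points reported in Table 2.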
The experiments are performed on the set of ambiguous nouns from the SENSEVAL-3 English lexical sample evaluation (Mihalcea et al., 2004). We use the rule-based subjective sentence classifier of (Riloff and Wiebe, 2003) to assign an S, O, or B label to all the training and test examples pertaining to these ambiguous words. This subjectivity annotation tool targets sentences, rather than words or paragraphs, and therefore the tool is fed with sentences. We also include a surrounding context of two additional sentences, because the classifier considers some contextual information. Our hypothesis motivating the use of a sentence-level subjectivity classifier is that instances of subjective senses are more likely to be in subjective sentences, and thus that sentence subjectivity is an informative feature for the disambiguation of words having both subjective and objective senses. For each ambiguous word, we perform two separate runs: one using the basic disambiguation system described earlier, and another using the subjectivity-aware system that includes the additional subjectivity feature. Table 3 shows the results obtained for these 20 nouns, including word sense disambiguation accuracy for the two different systems, the most frequent sense baseline, and the subjectivity/objectivity split among the word senses (according to Judge 1). The words in the top half of the table are the ones that have both S and O senses, and those in the bottom are the ones that do not. If we were to use Judge 2’s tags instead of Judge 1’s, only one word would change: source would move from the top to the bottom of the table. 1070 Sense Data Classifier Word Senses subjectivity train test Baseline basic + subj. Words with subjective senses argument 5 3-S 2-O 221 111 49.4% 51.4% 54.1% atmosphere 6 2-S 4-O 161 81 65.4% 65.4% 66.7% difference 5 2-S 3-O 226 114 40.4% 54.4% 57.0% difficulty 4 2-S 2-O 46 23 17.4% 47.8% 52.2% image 7 2-S 5-O 146 74 36.5% 41.2% 43.2% interest 7 1-S 5-O 1-B 185 93 41.9% 67.7% 68.8% judgment 7 5-S 2-O 62 32 28.1% 40.6% 43.8% plan 3 1-S 2-O 166 84 81.0% 81.0% 81.0% sort 4 1-S 2-O 1-B 190 96 65.6% 66.7% 67.7% source 9 1-S 8-O 64 32 40.6% 40.6% 40.6% Average 46.6% 55.6% 57.5% Words with no subjective senses arm 6 6-O 266 133 82.0% 85.0% 84.2% audience 4 4-O 200 100 67.0% 74.0% 74.0% bank 10 10-O 262 132 62.6% 62.6% 62.6% degree 7 5-O 2-B 256 128 60.9% 71.1% 71.1% disc 4 4-O 200 100 38.0% 65.6% 66.4% organization 7 7-O 112 56 64.3% 64.3% 64.3% paper 7 7-O 232 117 25.6% 49.6% 48.0% party 5 5-O 230 116 62.1% 62.9% 62.9% performance 5 5-O 172 87 26.4% 34.5% 34.5% shelter 5 5-O 196 98 44.9% 65.3% 65.3% Average 53.3% 63.5% 63.3% Average for all words 50.0% 59.5% 60.4% Table 3: Word Sense Disambiguation with and without subjectivity information, for the set of ambiguous nouns in SENSEVAL-3 For the words that have both S and O senses, the addition of the subjectivity feature alone can bring a significant error rate reduction of 4.3% (p < 0.05 paired t-test). Interestingly, no improvements are observed for the words with no subjective senses; on the contrary, the addition of the subjectivity feature results in a small degradation. Overall for the entire set of ambiguous words, the error reduction is measured at 2.2% (significant at p < 0.1 paired t-test). In almost all cases, the words with both S and O senses show improvement, while the others show small degradation or no change. This suggests that if a subjectivity label is available for the words in a lexical resource (e.g. 
using Algorithm 1 from Section 4), such information can be used to decide on using a subjectivity-aware system, thereby improving disambiguation accuracy. One of the exceptions is disc, which had a small benefit, despite not having any subjective senses. As it happens, the first sense of disc is phonograph record. phonograph record, phonograph recording, record, disk, disc, platter – (sound recording consisting of a disc with continuous grooves; formerly used to reproduce music by rotating while a phonograph needle tracked in the grooves) The improvement can be explained by observing that many of the training and test sentences containing this sense are labeled subjective by the classifier, and indeed this sense frequently occurs in subjective sentences such as “This is anyway a stunning disc.” Another exception is the noun plan, which did not benefit from the subjectivity feature, although it does have a subjective sense. This can perhaps be explained by the data set for this word, which seems to be particularly difficult, as the basic classifier itself could not improve over the most frequent sense baseline. The other word that did not benefit from the subjectivity feature is the noun source, for which its only subjective sense did not appear in the sense-annotated data, leading therefore to an “objective only” set of examples. 6 Conclusion and Future Work The questions posed in the introduction concerning the possible interaction between subjectivity and word sense found answers throughout the paper. As it turns out, a correlation can indeed be established between these two semantic properties of language. Addressing the first question of whether subjectivity is a property that can be assigned to word senses, we showed that good agreement (κ=0.74) can be achieved between human annotators labeling the subjectivity of senses. When uncertain cases are removed, the κ value is even higher (0.90). Moreover, the automatic subjectivity scoring mechanism that we devised was able to successfully assign subjectivity labels to senses, significantly outperforming an “informed” baseline associated with the task. While much work remains to be done, this first attempt has proved the feasibility of correctly assigning subjectivity labels to the fine-grained level of word senses. The second question was also positively answered: the quality of a word sense disambiguation system can be improved with the addition of subjectivity information. Section 5 provided evidence that automatic subjectivity classification may improve word sense disambiguation performance, but mainly for words with both subjective and objective senses. As we saw, performance may even degrade for words that do not. Tying the pieces of this paper together, once the senses in a dictionary have been assigned subjectivity labels, a word sense disambiguation system could consult them to decide whether it should consider or ignore the subjectivity feature. There are several other ways our results could impact future work. Subjectivity labels would be a useful source of information when manually augmenting the lexical knowledge in a dictionary, 1071 e.g., when choosing hypernyms for senses or deciding which senses to eliminate when defining a coarse-grained sense inventory (if there is a subjective sense, at least one should be retained). Adding subjectivity labels to WordNet could also support automatic subjectivity analysis. 
First, the input corpus could be sense tagged and the subjectivity labels of the assigned senses could be exploited by a subjectivity recognition tool. Second, a number of methods for subjectivity or sentiment analysis start with a set of seed words and then search through WordNet to find other subjective words (Kamps and Marx, 2002; Yu and Hatzivassiloglou, 2003; Hu and Liu, 2004; Kim and Hovy, 2004; Esuli and Sebastiani, 2005). However, such searches may veer off course down objective paths. The subjectivity labels assigned to senses could be consulted to keep the search traveling along subjective paths. Finally, there could be different strategies for exploiting subjectivity annotations and word sense. While the current setting considered a pipeline approach, where the output of a subjectivity annotation system was fed to the input of a method for semantic disambiguation, future work could also consider the role of word senses as a possible way of improving subjectivity analysis, or simultaneous annotations of subjectivity and word meanings, as done in the past for other language processing problems. Acknowledgments We would like to thank Theresa Wilson for annotating senses, and the anonymous reviewers for their helpful comments. This work was partially supported by ARDA AQUAINT and by the NSF (award IIS-0208798). References K. Dave, S. Lawrence, and D. Pennock. 2003. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. In Proc. WWW-2003, Budapest, Hungary. Available at http://www2003.org. A. Esuli and F. Sebastiani. 2005. Determining the semantic orientation of terms through gloss analysis. In Proc. CIKM-2005. V. Hatzivassiloglou and K. McKeown. 1997. Predicting the semantic orientation of adjectives. In Proc. ACL-97, pages 174–181. D. Heise. 2001. Project magellan: Collecting crosscultural affective meanings via the internet. Electronic Journal of Sociology, 5(3). M. Hu and B. Liu. 2004. Mining and summarizing customer reviews. In Proceedings of ACM SIGKDD. J. Jiang and D. Conrath. 1997. Semantic similarity based on corpus statistics and lexical tax onomy. In Proceedings of the International Conference on Research in Computational Linguistics, Taiwan. J. Kamps and M. Marx. 2002. Words with attitude. In Proc. 1st International WordNet Conference. S.M. Kim and E. Hovy. 2004. Determining the sentiment of opinions. In Proc. Coling 2004. Y.K. Lee and H.T. Ng. 2002. An empirical evaluation of knowledge sources and learning algo rithms for word sense disambiguation. In Proc. EMNLP 2002. D. Lewis. 1992. An evaluation of phrasal and clustered representations on a text categorization task. In Proceedings of ACM SIGIR. D. Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL, Montreal, Canada. D. McCarthy, R. Koeling, J. Weeds, and J. Carroll. 2004. Finding predominant senses in untagged text. In Proc. ACL 2004. R. Mihalcea, T. Chklovski, and A. Kilgarriff. 2004. The Senseval-3 English lexical sample task. In Proc. ACL/SIGLEX Senseval-3. G. Miller. 1995. Wordnet: A lexical database. Communication of the ACM, 38(11):39–41. H.T. Ng and H.B. Lee. 1996. Integrating multiple knowledge sources to disambiguate word se nse: An examplar-based approach. In Proc. ACL 1996. B. Pang and L. Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summariza tion based on minimum cuts. In Proc. ACL 2004. A. Popescu and O. Etzioni. 2005. Extracting product features and opinions from reviews. In Proc. 
of HLT/EMNLP 2005. R. Quirk, S. Greenbaum, G. Leech, and J. Svartvik. 1985. A Comprehensive Grammar of the English Language. Longman, New York. E. Riloff and J. Wiebe. 2003. Learning extraction patterns for subjective expressions. In Proc. EMNLP 2003. E. Riloff, J. Wiebe, and W. Phillips. 2005. Exploiting subjectivity classification to improve information ex traction. In Proc. AAAI 2005. V. Stoyanov, C. Cardie, and J. Wiebe. 2005. Multiperspective question answering using the opqa corpus. In Proc. HLT/EMNLP 2005. P. Turney. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proc. ACL 2002. J. Wiebe, T. Wilson, and C. Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 1(2). J. Wiebe. 2000. Learning subjective adjectives from corpora. In Proc. AAAI 2000. J. Yi, T. Nasukawa, R. Bunescu, and W. Niblack. 2003. Sentiment analyzer: Extracting sentiments about a given topic using natu ral language processing techniques. In Proc. ICDM 2003. H. Yu and V. Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proc. EMNLP 2003. 1072 | 2006 | 134 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1073–1080, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Improving QA Accuracy by Question Inversion John Prager IBM T.J. Watson Res. Ctr. Yorktown Heights N.Y. 10598 [email protected] Pablo Duboue IBM T.J. Watson Res. Ctr. Yorktown Heights N.Y. 10598 [email protected] Jennifer Chu-Carroll IBM T.J. Watson Res. Ctr. Yorktown Heights N.Y. 10598 [email protected] Abstract This paper demonstrates a conceptually simple but effective method of increasing the accuracy of QA systems on factoid-style questions. We define the notion of an inverted question, and show that by requiring that the answers to the original and inverted questions be mutually consistent, incorrect answers get demoted in confidence and correct ones promoted. Additionally, we show that lack of validation can be used to assert no-answer (nil) conditions. We demonstrate increases of performance on TREC and other question-sets, and discuss the kinds of future activities that can be particularly beneficial to approaches such as ours. 1 Introduction Most QA systems nowadays consist of the following standard modules: QUESTION PROCESSING, to determine the bag of words for a query and the desired answer type (the type of the entity that will be offered as a candidate answer); SEARCH, which will use the query to extract a set of documents or passages from a corpus; and ANSWER SELECTION, which will analyze the returned documents or passages for instances of the answer type in the most favorable contexts. Each of these components implements a set of heuristics or hypotheses, as devised by their authors (cf. Clarke et al. 2001, ChuCarroll et al. 2003). When we perform failure analysis on questions incorrectly answered by our system, we find that there are broadly speaking two kinds of failure. There are errors (we might call them bugs) on the implementation of the said heuristics: errors in tagging, parsing, named-entity recognition; omissions in synonym lists; missing patterns, and just plain programming errors. This class can be characterized by being fixable by identifying incorrect code and fixing it, or adding more items, either explicitly or through training. The other class of errors (what we might call unlucky) are at the boundaries of the heuristics; situations were the system did not do anything “wrong,” in the sense of bug, but circumstances conspired against finding the correct answer. Usually when unlucky errors occur, the system generates a reasonable query and an appropriate answer type, and at least one passage containing the right answer is returned. However, there may be returned passages that have a larger number of query terms and an incorrect answer of the right type, or the query terms might just be physically closer to the incorrect answer than to the correct one. ANSWER SELECTION modules typically work either by trying to prove the answer is correct (Moldovan & Rus, 2001) or by giving them a weight produced by summing a collection of heuristic features (Radev et al., 2000); in the latter case candidates having a larger number of matching query terms, even if they do not exactly match the context in the question, might generate a larger score than a correct passage with fewer matching terms. 
To be sure, unlucky errors are usually bugs when considered from the standpoint of a system with a more sophisticated heuristic, but any system at any point in time will have limits on what it tries to do; therefore the distinction is not absolute but is relative to a heuristic and system. It has been argued (Prager, 2002) that the success of a QA system is proportional to the impedance match between the question and the knowledge sources available. We argue here similarly. Moreover, we believe that this is true not only in terms of the correct answer, but the distracters,1 or incorrect answers too. In QA, an unlucky incorrect answer is not usually predictable in advance; it occurs because of a coincidence of terms and syntactic contexts that cause it to be preferred over the correct answer. It has no connection with the correct answer and is only returned because its enclosing passage so happens to exist in the same corpus as the correct answer context. This would lead us to believe that if a 1 We borrow the term from multiple-choice test design. 1073 different corpus containing the correct answer were to be processed, while there would be no guarantee that the correct answer would be found, it would be unlikely (i.e. very unlucky) if the same incorrect answer as before were returned. We have demonstrated elsewhere (Prager et al. 2004b) how using multiple corpora can improve QA performance, but in this paper we achieve similar goals without using additional corpora. We note that factoid questions are usually about relations between entities, e.g. “What is the capital of France?”, where one of the arguments of the relationship is sought and the others given. We can invert the question by substituting the candidate answer back into the question, while making one of the given entities the socalled wh-word, thus “Of what country is Paris the capital?” We hypothesize that asking this question (and those formed from other candidate answers) will locate a largely different set of passages in the corpus than the first time around. As will be explained in Section 3, this can be used to decrease the confidence in the incorrect answers, and also increase it for the correct answer, so that the latter becomes the answer the system ultimately proposes. This work is part of a continuing program of demonstrating how meta-heuristics, using what might be called “collateral” information, can be used to constrain or adjust the results of the primary QA system. In the next Section we review related work. In Section 3 we describe our algorithm in detail, and in Section 4 present evaluation results. In Section 5 we discuss our conclusions and future work. 2 Related Work Logic and inferencing have been a part of QuestionAnswering since its earliest days. The first such systems were natural-language interfaces to expert systems, e.g., SHRDLU (Winograd, 1972), or to databases, e.g., LIFER/LADDER (Hendrix et al. 1977). CHAT-80 (Warren & Pereira, 1982), for instance, was a DCG-based NL-query system about world geography, entirely in Prolog. In these systems, the NL question is transformed into a semantic form, which is then processed further. Their overall architecture and system operation is very different from today’s systems, however, primarily in that there was no text corpus to process. Inferencing is a core requirement of systems that participate in the current PASCAL Recognizing Textual Entailment (RTE) challenge (see http://www.pascal-network.org/Challenges/RTE and .../RTE2). 
It is also used in at least two of the more visible end-to-end QA systems of the present day. The LCC system (Moldovan & Rus, 2001) uses a Logic Prover to establish the connection between a candidate answer passage and the question. Text terms are converted to logical forms, and the question is treated as a goal which is “proven”, with realworld knowledge being provided by Extended WordNet. The IBM system PIQUANT (ChuCarroll et al., 2003) used Cyc (Lenat, 1995) in answer verification. Cyc can in some cases confirm or reject candidate answers based on its own store of instance information; in other cases, primarily of a numerical nature, Cyc can confirm whether candidates are within a reasonable range established for their subtype. At a more abstract level, the use of inversions discussed in this paper can be viewed as simply an example of finding support (or lack of it) for candidate answers. Many current systems (see, e.g. (Clarke et al., 2001; Prager et al. 2004b)) employ redundancy as a significant feature of operation: if the same answer appears multiple times in an internal top-n list, whether from multiple sources or multiple algorithms/agents, it is given a confidence boost, which will affect whether and how it gets returned to the end-user. The work here is a continuation of previous work described in (Prager et al. 2004a,b). In the former we demonstrated that for a certain kind of question, if the inverted question were given, we could improve the F-measure of accuracy on a question set by 75%. In this paper, by contrast, we do not manually provide the inverted question, and in the second evaluation presented here we do not restrict the question type. 3 Algorithm 3.1 System Architecture A simplified block-diagram of our PIQUANT system is shown in Figure 1. The outer block on the left, QS1, is our basic QA system, in which the QUESTION PROCESSING (QP), SEARCH (S) and ANSWER SELECTION (AS) subcomponents are indicated. The outer block on the right, QS2, is another QA-System that is used to answer the inverted questions. In principle QS2 could be QS1 but parameterized differently, or even an entirely different system, but we use another instance of QS1, as-is. The block in the middle is our Constraints Module CM, which is the subject of this paper. 1074 The Question Processing component of QS2 is not used in this context since CM simulates its output by modifying the output of QP in QS1, as described in Section 3.3. 3.2 Inverting Questions Our open-domain QA system employs a namedentity recognizer that identifies about a hundred types. Any of these can be answer types, and there are corresponding sets of patterns in the QUESTION PROCESSING module to determine the answer type sought by any question. When we wish to invert a question, we must find an entity in the question whose type we recognize; this entity then becomes the sought answer for the inverted question. We call this entity the inverted or pivot term. Thus for the question: (1) “What was the capital of Germany in 1985?” Germany is identified as a term with a known type (COUNTRY). Then, given the candidate answer <CANDANS>, the inverted question becomes (2) “Of what country was < CANDANS> the capital in 1985?” Some questions have more than one invertible term. Consider for example: (3) “Who was the 33rd president of the U.S.?” This question has 3 inversion points: (4) “What number president of the U.S. 
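Before turning to the inversion details, a rough sketch of the data flow in Figure 1 may help: QS1 answers the original question, and for each top candidate the constraints module builds an inverted question for QS2 and records whether the pivot term is recovered. The qa_answer function, the QFrame dictionary layout, and the rerank hook are assumptions of this sketch (the substitutions follow Section 3.3 below); none of it is the PIQUANT code itself.

    # High-level sketch of the constraints architecture in Figure 1. `qa_answer`
    # returns a ranked list of {"text": ..., "score": ...} candidates; it and
    # `rerank` are illustrative placeholders.

    def invert_qframe(qframe, pivot, pivot_type, candidate):
        """Build the inverted QFrame by the substitutions of Section 3.3."""
        keywords = [candidate if t == pivot else t for t in qframe["keywords"]]
        rels = [tuple(pivot_type if t == pivot else
                      candidate if t == qframe["answer_type"] else t
                      for t in rel)
                for rel in qframe["relationships"]]
        return {"keywords": keywords, "answer_type": pivot_type,
                "relationships": rels}

    def constrained_answers(qframe, pivot, pivot_type, top_k=2):
        candidates = qa_answer(qframe)[:top_k]        # QS1: original question
        validations = {}
        for cand in candidates:
            inverted = invert_qframe(qframe, pivot, pivot_type, cand["text"])
            inv_answers = qa_answer(inverted)         # QS2: inverted question
            # A candidate is validated if the pivot term shows up among the
            # inverted answers; here a simple exact match stands in for the
            # term-equivalence check discussed in Section 5.4.
            rank = next((i for i, a in enumerate(inv_answers, 1)
                         if a["text"] == pivot), -1)
            validations[cand["text"]] = rank
        return rerank(candidates, validations)        # e.g., Algorithm A below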
was <CANDANS>?” (5) “Of what country was <CANDANS> the 33rd president?” (6) “<CANDANS> was the 33rd what of the U.S.?” Having more than one possible inversion is in theory a benefit, since it gives more opportunity for enforcing consistency, but in our current implementation we just pick one for simplicity. We observe on training data that, in general, the smaller the number of unique instances of an answer type, the more likely it is that the inverted question will be correctly answered. We generated a set NELIST of the most frequently-occurring named-entity types in questions; this list is sorted in order of estimated cardinality. It might seem that the question inversion process can be quite tricky and can generate possibly unnatural phrasings, which in turn can be difficult to reparse. However, the examples given above were simply English renditions of internal inverted structures – as we shall see the system does not need to use a natural language representation of the inverted questions. Some questions are either not invertible, or, like “How did X die?” have an inverted form (“Who died of cancer?”) with so many correct answers that we know our algorithm is unlikely to benefit us. However, as it is constituted it is unlikely to hurt us either, and since it is difficult to automatically identify such questions, we don’t attempt to intercept them. As reported in (Prager et al. 2004a), an estimated 79% of the questions in TREC question sets can be inverted meaningfully. This places an upper limit on the gains to be achieved with our algorithm, but is high enough to be worth pursuing. Figure 1. Constraints Architecture. QS1 and QS2 are (possibly identical) QA systems. Answers Question QS1 QA system QP question proc. S search AS answer selection QS2 QA system QP question proc. S search AS answer selection CM constraints module 1075 3.3 Inversion Algorithm As shown in the previous section, not all questions have easily generated inverted forms (even by a human). However, we do not need to explicate the inverted form in natural language in order to process the inverted question. In our system, a question is processed by the QUESTION PROCESSING module, which produces a structure called a QFrame, which is used by the subsequent SEARCH and ANSWER SELECTION modules. The QFrame contains the list of terms and phrases in the question, along with their properties, such as POS and NE-type (if it exists), and a list of syntactic relationship tuples. When we have a candidate answer in hand, we do not need to produce the inverted English question, but merely the QFrame that would have been generated from it. Figure 1 shows that the CONSTRAINTS MODULE takes the QFrame as one of its inputs, as shown by the link from QP in QS1 to CM. This inverted QFrame can be generated by a set of simple transformations, substituting the pivot term in the bag of words with a candidate answer <CANDANS>, the original answer type with the type of the pivot term, and in the relationships the pivot term with its type and the original answer type with <CANDANS>. When relationships are evaluated, a type token will match any instance of that type. Figure 2 shows a simplified view of the original QFrame for “What was the capital of Germany in 1945?”, and Figure 3 shows the corresponding Inverted QFrame. COUNTRY is determined to be a better type to invert than YEAR, so “Germany” becomes the pivot. In Figure 3, the token <CANDANS> might take in turn “Berlin”, “Moscow”, “Prague” etc. Figure 2. Simplified QFrame Figure 3. 
Simplified Inverted QFrame. The output of QS2 after processing the inverted QFrame is a list of answers to the inverted question, which by extension of the nomenclature we call “inverted answers.” If no term in the question has an identifiable type, inversion is not possible. 3.4 Profiting From Inversions Broadly speaking, our goal is to keep or re-rank the candidate answer hit-list on account of inversion results. Suppose that a question Q is inverted around pivot term T, and for each candidate answer Ci, a list of “inverted” answers {Cij} is generated as described in the previous section. If T is on one of the {Cij}, then we say that Ci is validated. Validation is not a guarantee of keeping or improving Ci’s position or score, but it helps. Most cases of failure to validate are called refutation; similarly, refutation of Ci is not a guarantee of lowering its score or position. It is an open question how to adjust the results of the initial candidate answer list in light of the results of the inversion. If the scores associated with candidate answers (in both directions) were true probabilities, then a Bayesian approach would be easy to develop. However, they are not in our system. In addition, there are quite a few parameters that describe the inversion scenario. Suppose Q generates a list of the top-N candidates {Ci}, with scores {Si}. If this inversion method were not to be used, the top candidate on this list, C1, would be the emitted answer. The question generated by inverting about T and substituting Ci is QTi. The system is fixed to find the top 10 passages responsive to QTi, and generates an ordered list Cij of candidate answers found in this set. Each inverted question QTi is run through our system, generating inverted answers {Cij}, with scores {Sij}, and whether and where the pivot term T shows up on this list, represented by a list of positions {Pi}, where Pi is defined as: Pi = j if Cij = T, for some j Pi = -1 otherwise We added to the candidate list the special answer nil, representing “no answer exists in the corpus.” As described earlier, we had observed from training data that failure to validate candidates of certain types (such as Person) would not necessarily be a real refutation, so we established a set of types SOFTREFUTATION which would contain the broadest of our types. At the other end of the spectrum, we observed that certain narrow candidate types such as UsState would definitely be refuted if validation didn’t occur. These are put in set MUSTCONSTRAIN. Our goal was to develop an algorithm for recomputing all the original scores {Si} from some combination (based on either arithmetic or decision-trees) of Keywords: {1945, <CANDANS>, capital} AnswerType: COUNTRY Relationships: {(COUNTRY, capital), (capital, <CANDANS>), (capital, 1945)} Keywords: {1945, Germany, capital} AnswerType: CAPITAL Relationships: {(Germany, capital), (capital, CAPITAL), (capital, 1945)} 1076 {Si} and {Sij} and membership of SOFTREFUTATION and MUSTCONSTRAIN. Reliably learning all those weights, along with set membership, was not possible given only several hundred questions of training data. We therefore focused on a reduced problem. We observed that when run on TREC question sets, the frequency of the rank of our top answer fell off rapidly, except with a second mode when the tail was accumulated in a single bucket. Our numbers for TRECs 11 and 12 are shown in Table 1. Top answer rank TREC11 TREC12 1 170 108 2 35 32 3 23 14 4 7 7 5 14 9 elsewhere 251 244 % correct 34 26 Table 1. 
Baseline statistics for TREC11-12. We decided to focus on those questions where we got the right answer in second place (for brevity, we’ll call these second-place questions). Given that TREC scoring only rewards first-place answers, it seemed that with our incremental approach we would get most benefit there. Also, we were keen to limit the additional response time incurred by our approach. Since evaluating the top N answers to the original question with the Constraints process requires calling the QA system another N times per question, we were happy to limit N to 2. In addition, this greatly reduced the number of parameters we needed to learn. For the evaluation, which consisted of determining if the resulting top answer was right or wrong, it meant ultimately deciding on one of three possible outcomes: the original top answer, the original second answer, or nil. We hoped to promote a significant number of second-place finishers to top place and introduce some nils, with minimal disturbance of those already in first place. We used TREC11 data for training, and established a set of thresholds for a decision-tree approach to determining the answer, using Weka (Witten & Frank, 2005). We populated sets SOFTREFUTATION and MUSTCONSTRAIN by manual inspection. The result is Algorithm A, where (i ∈ {1,2}) and o The Ci are the original candidate answers o The ak are learned parameters (k ∈ {1..13}) o Vi means the ith answer was validated o Pi was the rank of the validating answer to question QTi o Ai was the score of the validating answer to QTi. Algorithm A. Answer re-ranking using constraints validation data. 1. If C1 = nil and V2, return C2 2. If V1 and A1 > a1, return C1 3. If not V1 and not V2 and type(T) ∈ MUSTCONSTRAIN, return nil 4. If not V1 and not V2 and type(T) ∉SOFTREFUTATION, if S1 > a2,, return C1 else nil 5. If not V2, return C1 6. If not V1 and V2 and A2 > a3 and P2 < a4 and S1-S2 < a5 and S2 > a6, return C2 7. If V1 and V2 and (A2 - P2/a7) > (A1 - P1/a7) and A1 < a8 and P1 > a9 and A2 < a10 and P2 > a11 and S1-S2 < a12 and (S2 - P2/a7) > a13, return C2 8. else return C1 4 Evaluation Due to the complexity of the learned algorithm, we decided to evaluate in stages. We first performed an evaluation with a fixed question type, to verify that the purely arithmetic components of the algorithm were performing reasonably. We then evaluated on the entire TREC12 factoid question set. 4.1 Evaluation 1 We created a fixed question set of 50 questions of the form “What is the capital of X?”, for each state in the U.S. The inverted question “What state is Z the capital of?” was correctly generated in each case. We evaluated against two corpora: the AQUAINT corpus, of a little over a million newswire documents, and the CNS corpus, with about 37,000 documents from the Center for Nonproliferation Studies in Monterey, CA. We expected there to be answers to most questions in the former corpus, so we hoped there our method would be useful in converting 2nd place answers to first place. The latter corpus is about WMDs, so we expected there to be holes in the state capital coverage2, for which nil identification would be useful.3 2 We manually determined that only 23 state capitals were attested to in the CNS corpus, compared with all in AQUAINT. 3 We added Tbilisi to the answer key for “What is the capital of Georgia?”, since there was nothing in the question to disambiguate Georgia. 1077 The baseline is our regular search-based QA-System without the Constraint process. 
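A procedural rendering of Algorithm A may make the decision sequence easier to follow. The thresholds a1..a13 are the learned parameters listed above; the arguments for the original scores (S1, S2), validation flags (V1, V2), validating ranks (P1, P2), and validating answer scores (A1, A2) mirror the definitions in Section 3.4. The function below is a sketch with illustrative names, not the system's code.

    # Sketch of Algorithm A. `a` is a dict of learned thresholds a[1]..a[13];
    # MUST_CONSTRAIN and SOFT_REFUTATION are the type sets described in the
    # text; candidates are represented as strings, with "nil" for no-answer.

    def algorithm_a(C1, C2, S1, S2, V1, V2, P1, P2, A1, A2, a,
                    pivot_type, MUST_CONSTRAIN, SOFT_REFUTATION):
        if C1 == "nil" and V2:                                       # rule 1
            return C2
        if V1 and A1 > a[1]:                                         # rule 2
            return C1
        if not V1 and not V2 and pivot_type in MUST_CONSTRAIN:       # rule 3
            return "nil"
        if not V1 and not V2 and pivot_type not in SOFT_REFUTATION:  # rule 4
            return C1 if S1 > a[2] else "nil"
        if not V2:                                                   # rule 5
            return C1
        if (not V1 and V2 and A2 > a[3] and P2 < a[4]
                and S1 - S2 < a[5] and S2 > a[6]):                   # rule 6
            return C2
        if (V1 and V2 and (A2 - P2 / a[7]) > (A1 - P1 / a[7])
                and A1 < a[8] and P1 > a[9] and A2 < a[10] and P2 > a[11]
                and S1 - S2 < a[12] and (S2 - P2 / a[7]) > a[13]):   # rule 7
            return C2
        return C1                                                    # rule 8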
In this baseline system there was no special processing for nil questions, other than if the search (which always contained some required terms) returned no documents. Our results are shown in Table 2. AQUAINT baseline AQUAINT w/constraints CNS baseline CNS w/constraints Firsts (non-nil) 39/50 43/50 7/23 4/23 Total nils 0/0 0/0 0/27 16/27 Total firsts 39/50 43/50 7/50 20/50 % correct 78 86 14 40 Table 2. Evaluation on AQUAINT and CNS corpora. On the AQUAINT corpus, four out of seven 2nd place finishers went to first place. On the CNS corpus 16 out of a possible 26 correct no-answer cases were discovered, at a cost of losing three previously correct answers. The percentage correct score increased by a relative 10.3% for AQUAINT and 186% for CNS. In both cases, the error rate was reduced by about a third. 4.2 Evaluation 2 For the second evaluation, we processed the 414 factoid questions from TREC12. Of special interest here are the questions initially in first and second places, and in addition any questions for which nils were found. As seen in Table 1, there were 32 questions which originally evaluated in rank 2. Of these, four questions were not invertible because they had no terms that were annotated with any of our named-entity types, e.g. #2285 “How much does it cost for gastric bypass surgery?” Of the remaining 28 questions, 12 were promoted to first place. In addition, two new nils were found. On the down side, four out of 108 previous first place answers were lost. There was of course movement in the ranks two and beyond whenever nils were introduced in first place, but these do not affect the current TREC-QA factoid correctness measure, which is whether the top answer is correct or not. These results are summarized in Table 3. While the overall percentage improvement was small, note that only second–place answers were candidates for re-ranking, and 43% of these were promoted to first place and hence judged correct. Only 3.7% of originally correct questions were casualties. To the extent that these percentages are stable across other collections, as long as the size of the set of second-place answers is at least about 1/10 of the set of first-place answers, this form of the Constraint process can be applied effectively. Baseline Constraints Firsts (non-nil) 105 113 nils 3 5 Total firsts 108 118 % correct 26.1 28.5 Table 3. Evaluation on TREC12 Factoids. 5 Discussion The experiments reported here pointed out many areas of our system which previous failure analysis of the basic QA system had not pinpointed as being too problematic, but for which improvement should help the Constraints process. In particular, this work brought to light a matter of major significance, term equivalence, which we had not previously focused on too much (and neither had the QA community as a whole). We will discuss that in Section 5.4. Quantitatively, the results are very encouraging, but it must be said that the number of questions that we evaluated were rather small, as a result of the computational expense of the approach. From Table 1, we conclude that the most mileage is to be achieved by our QA-System as a whole by addressing those questions which did not generate a correct answer in the first one or two positions. We have performed previous analyses of our system’s failure modes, and have determined that the passages that are output from the SEARCH component contain the correct answer 70-75% of the time. The ANSWER SELECTION module takes these passages and proposes a candidate answer list. 
Since the CONSTRAINTS MODULE’s operation can be viewed as a re-ranking of the output of ANSWER SELECTION, it could in principle boost the system’s accuracy up to that 70-75% level. However, this would either require a massive training set to establish all the parameters and weights required for all the possible reranking decisions, or a new model of the answer-list distribution. 5.1 Probability-based Scores Our ANSWER SELECTION component assigns scores to candidate answers on the basis of the number of terms and term-term syntactic relationships from the 1078 original question found in the answer passage (where the candidate answer and wh-word(s) in the question are identified terms). The resulting numbers are in the range 0-1, but are not true probabilities (e.g. where answers with a score of 0.7 would be correct 70% of the time). While the generated scores work well to rank candidates for a given question, inter-question comparisons are not generally meaningful. This made the learning of a decision tree (Algorithm A) quite difficult, and we expect that when addressed, will give better performance to the Constraints process (and maybe a simpler algorithm). This in turn will make it more feasible to re-rank the top 10 (say) original answers, instead of the current 2. 5.2 Better confidences Even if no changes to the ranking are produced by the Constraints process, then the mere act of validation (or not) of existing answers can be used to adjust confidence scores. In TREC2002 (Voorhees, 2003), there was an evaluation of responses according to systems’ confidences in their own answers, using the Average Precision (AP) metric. This is an important consideration, since it is generally better for a system to say “I don’t know” than to give a wrong answer. On the TREC12 questions set, our AP score increased 2.1% with Constraints, using the algorithm we presented in (Chu-Carroll et al. 2002). 5.3 More complete NER Except in pure pattern-based approaches, e.g. (Brill, 2002), answer types in QA systems typically correspond to the types identifiable by their named-entity recognizer (NER). There is no agreed-upon number of classes for an NER system, even approximately. It turns out that for best coverage by our CONSTRAINTS MODULE, it is advantageous to have a relatively large number of types. It was mentioned in Section 4.2 that certain questions were not invertible because no terms in them were of a recognizable type. Even when questions did have typed terms, if the types were very high-level then creating a meaningful inverted question was problematic. For example, for QA without Constraints it is not necessary to know the type of “MTV” in “When was MTV started?”, but if it is only known to be a Name then the inverted question “What <Name> was started in 1980?” could be too general to be effective. 5.4 Establishing Term Equivalence The somewhat surprising condition that emerged from this effort was the need for a much more complete ability than had previously been recognized for the system to establish the equivalence of two terms. Redundancy has always played a large role in QA systems – the more occurrences of a candidate answer in retrieved passages the higher the answer’s score is made to be. Consequently, at the very least, a string-matching operation is needed for checking equivalence, but other techniques are used to varying degrees. 
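As an illustration only (none of these heuristics are prescribed by the paper), the equivalence test discussed here might start from simple string normalization, with hooks left for the richer, type-dependent checks described next; the function and parameter names below are hypothetical.

```python
def normalize(term):
    """Case-fold and collapse punctuation and whitespace before comparison."""
    return " ".join(term.lower().replace(".", " ").replace(",", " ").split())

def is_equivalent(term_a, term_b, term_type=None):
    """String-matching baseline for deciding whether two terms co-refer.
    term_type is an unused hook here: lemmatization, WordNet synonym/hypernym
    checks, and type-specific tolerances (numbers, dates, place granularity)
    would be layered on per type."""
    a, b = normalize(term_a), normalize(term_b)
    if a == b or a in b or b in a:
        return True
    # "Ferdinand Marcos" vs. "President Marcos": neither string contains the
    # other, so this baseline misses the match, which is exactly the failure
    # mode discussed in the text.
    return False
```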
It has long been known in IR that stemming or lemmatization is required for successful term matching, and in NLP applications such as QA, resources such as WordNet (Miller, 1995) are employed for checking synonym and hypernym relationships; Extended WordNet (Moldovan & Novischi, 2002) has been used to establish lexical chains between terms. However, the Constraints work reported here has highlighted the need for more extensive equivalence testing. In direct QA, when an ANSWER SELECTION module generates two (or more) equivalent correct answers to a question (e.g. “Ferdinand Marcos” vs. “President Marcos”; “French” vs. “France”), and fails to combine them, it is observed that as long as either one is in first place then the question is correct and might not attract more attention from developers. It is only when neither is initially in first place, but combining the scores of correct candidates boosts one to first place that the failure to merge them is relevant. However, in the context of our system, we are comparing the pivot term from the original question to the answers to the inverted questions, and failure here will directly impact validation and hence the usefulness of the entire approach. As a consequence, we have identified the need for a component whose sole purpose is to establish the equivalence, or generally the kind of relationship, between two terms. It is clear that the processing will be very type-dependent – for example, if two populations are being compared, then a numerical difference of 5% (say) might not be considered a difference at all; for “Where” questions, there are issues of granularity and physical proximity, and so on. More examples of this problem were given in (Prager et al. 2004a). Moriceau (2006) reports a system that addresses part of this problem by trying to rationalize different but “similar” answers to the user, but does not extend to a general-purpose equivalence identifier. 6 Summary We have extended earlier Constraints-based work through the method of question inversion. The approach uses our QA system recursively, by taking candidate answers and attempts to validate them through asking the inverted questions. The outcome 1079 is a re-ranking of the candidate answers, with the possible insertion of nil (no answer in corpus) as the top answer. While we believe the approach is general, and can work on any question and arbitrary candidate lists, due to training limitations we focused on two restricted evaluations. In the first we used a fixed question type, and showed that the error rate was reduced by 36% and 30% on two very different corpora. In the second evaluation we focused on questions whose direct answers were correct in the second position. 43% of these questions were subsequently judged correct, at a cost of only 3.7% of originally correct questions. While in the future we would like to extend the Constraints process to the entire answer candidate list, we have shown that applying it only to the top two can be beneficial as long as the second-place answers are at least a tenth as numerous as first-place answers. We also showed that the application of Constraints can improve the system’s confidence in its answers. We have identified several areas where improvement to our system would make the Constraints process more effective, thus getting a double benefit. In particular we feel that much more attention should be paid to the problem of determining if two entities are the same (or “close enough”). 
7 Acknowledgments This work was supported in part by the Disruptive Technology Office (DTO)’s Advanced Question Answering for Intelligence (AQUAINT) Program under contract number H98230-04-C-1577. We would like to thank the anonymous reviewers for their helpful comments. References Brill, E., Dumais, S. and Banko M. “An analysis of the AskMSR question-answering system.” In Proceedings of EMNLP 2002. Chu-Carroll, J., J. Prager, C. Welty, K. Czuba and D. Ferrucci. “A Multi-Strategy and Multi-Source Approach to Question Answering”, Proceedings of the 11th TREC, 2003. Clarke, C., Cormack, G., Kisman, D. and Lynam, T. “Question answering by passage selection (Multitext experiments for TREC-9)” in Proceedings of the 9th TREC, pp. 673-683, 2001. Hendrix, G., Sacerdoti, E., Sagalowicz, D., Slocum J.: Developing a Natural Language Interface to Complex Data. VLDB 1977: 292 Lenat, D. 1995. "Cyc: A Large-Scale Investment in Knowledge Infrastructure." Communications of the ACM 38, no. 11. Miller, G. “WordNet: A Lexical Database for English”, Communications of the ACM 38(11) pp. 39-41, 1995. Moldovan, D. and Novischi, A, “Lexical Chains for Question Answering”, COLING 2002. Moldovan, D. and Rus, V., “Logic Form Transformation of WordNet and its Applicability to Question Answering”, Proceedings of the ACL, 2001. Moriceau, V. “Numerical Data Integration for Cooperative Question-Answering”, in EACL Workshop on Knowledge and Reasoning for Language Processing (KRAQ’06), Trento, Italy, 2006. Prager, J.M., Chu-Carroll, J. and Czuba, K. "Question Answering using Constraint Satisfaction: QA-by-Dossier-with-Constraints", Proc. 42nd ACL, pp. 575-582, Barcelona, Spain, 2004(a). Prager, J.M., Chu-Carroll, J. and Czuba, K. "A Multi-Strategy, Multi-Question Approach to Question Answering" in New Directions in Question-Answering, Maybury, M. (Ed.), AAAI Press, 2004(b). Prager, J., "A Curriculum-Based Approach to a QA Roadmap"' LREC 2002 Workshop on Question Answering: Strategy and Resources, Las Palmas, May 2002. Radev, D., Prager, J. and Samn, V. "Ranking Suspected Answers to Natural Language Questions using Predictive Annotation", Proceedings of ANLP 2000, pp. 150-157, Seattle, WA. Voorhees, E. “Overview of the TREC 2002 Question Answering Track”, Proceedings of the 11th TREC, Gaithersburg, MD, 2003. Warren, D., and F. Pereira "An efficient easily adaptable system for interpreting natural language queries," Computational Linguistics, 8:3-4, 110122, 1982. Winograd, T. Procedures as a representation for data in a computer program for under-standing natural language. Cognitive Psychology, 3(1), 1972. Witten, I.H. & Frank, E. Data Mining. Practical Machine Learning Tools and Techniques. Elsevier Press, 2005. 1080 | 2006 | 135 |
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1081–1088, Sydney, July 2006. c⃝2006 Association for Computational Linguistics Reranking Answers for Definitional QA Using Language Modeling Yi Chen School of Software Engineering Chongqing University Chongqing, China, 400044 [email protected] Ming Zhou Microsoft Research Asia 5F Sigma Center, No.49 Zhichun Road, Haidian Bejing, China, 100080 [email protected] Shilong Wang College of Mechanical Engineering Chongqing University Chongqing, China, 400044 [email protected] Abstract* Statistical ranking methods based on centroid vector (profile) extracted from external knowledge have become widely adopted in the top definitional QA systems in TREC 2003 and 2004. In these approaches, terms in the centroid vector are treated as a bag of words based on the independent assumption. To relax this assumption, this paper proposes a novel language model-based answer reranking method to improve the existing bag-ofwords model approach by considering the dependence of the words in the centroid vector. Experiments have been conducted to evaluate the different dependence models. The results on the TREC 2003 test set show that the reranking approach with biterm language model, significantly outperforms the one with the bag-ofwords model and unigram language model by 14.9% and 12.5% respectively in F-Measure(5). 1 Introduction In recent years, QA systems in TREC (Text REtrieval Conference) have made remarkable progress (Voorhees, 2002). The task of TREC QA before 2003 has mainly focused on the factoid questions, in which the answer to the question is a number, a person name, or an organization name, or the like. Questions like “Who is Colin Powell?” or “What is mold?” are definitional questions *This work was finished while the first author was visiting Microsoft Research Asia during March 2005-March 2006 as a component of the project of AskBill Chatbot led by Dr. Ming Zhou. (Voorhees, 2003). Statistics from 2,516 Frequently Asked Questions (FAQ) extracted from Internet FAQ Archives1 show that around 23.6% are definitional questions. This indicates that definitional questions occur frequently and are important question types. TREC started the evaluation for definitional QA in 2003. The definitional QA systems in TREC are required to extract definitional nuggets/sentences that contain the highly descriptive information about the question target from a given large corpus. For definitional question, statistical ranking methods based on centroid vector (profile) extracted from external resources, such as the online encyclopedia, are widely adopted in the top systems in TREC 2003 and 2004 (Xu et al., 2003; Blair-Goldensohn et al., 2003; Wu et al., 2004). In these systems, for a given question, a vector is formed consisting of the most frequent co-occurring terms with the question target as the question profile. Candidate answers extracted from a given large corpus are ranked based on their similarity to the question profile. The similarity is normally the TFIDF score in which both the candidate answer and the question profile are treated as a bag of words in the framework of Vector Space Model (VSM). VSM is based on an independence assumption, which assumes that terms in a vector are statistically independent from one another. Although this assumption makes the development of retrieval models easier and the retrieval operation tractable, it does not hold in textual data. 
For example, for question “Who is Bill Gates?” words “born” and “1955” in the candidate answer are not independent. In this paper, we are interested in considering the term dependence to improve the answer reranking for definitional QA. Specifically, the 1 http://www.faqs.org/faqs/ 1081 language model is utilized to capture the term dependence. A language model is a probability distribution that captures the statistical regularities of natural language use. In a language model, key elements are the probabilities of word sequences, denoted as P(w1, w2, ..., wn) or P (w1,n) for short. Recently, language model has been successfully used for information retrieval (IR) (Ponte and Croft, 1998; Song and Croft, 1998; Lafferty et al., 2001; Gao et al., 2004; Cao et al., 2005). Our natural thinking is to apply language model to rank the candidate answers as it has been applied to rank search results in IR task. The basic idea of our research is that, given a definitional question q, an ordered centroid OC which is learned from the web and a language model LM(OC) which is trained with it. Candidate answers can be ranked by probability estimated by LM(OC). A series of experiments on standard TREC 2003 collection have been conducted to evaluate bigram and biterm language models. Results show that both these two language models produce promising results by capturing the term dependence and biterm model achieves the best performance. Biterm language model interpolating with unigram model significantly improves the VSM and unigram model by 14.9% and 12.5% in F-Measure(5). In the rest of this paper, Section 2 reviews related work. Section 3 presents details of the proposed method. Section 4 introduces the structure of our experimental system. We show the experimental results in Section 5, and conclude the paper in Section 6. 2 Related Work Web information has been widely used for answer reranking and validation. For factoid QA task, AskMSR (Brill et al., 2001) ranks the answers by counting the occurrences of candidate answers returned from a search engine. Similarly, DIOGENE (Magnini et al., 2002) applies search engines to validate candidate answers. For definitional QA task, Lin (2002) presented an approach in which web-based answer reranking is combined with dictionary-based (e.g., WordNet) reranking, which leads to a 25% increase in mean reciprocal rank (MRR). Xu et al. (2003) proposed a statistical ranking method based on centroid vector (i.e., vector of words and frequencies) learned from the online encyclopedia (i.e., Wikipedia2) and the web. Candi 2 http://www.wikipedia.org date answers were reranked based on their similarity (TFIDF score) to the centroid vector. Similar techniques were explored in (BlairGoldensohn et al., 2003). In this paper, we explore the dependence among terms in centroid vector for improving the answer reranking for definitional QA. In recent years, language modeling has been widely employed in IR (Ponte and Croft, 1998; Song and Croft, 1998; Miller and Zhai, 1999; Lafferty and Zhai, 2001). The basic idea is to compute the conditional probability P(Q|D), i.e., the probability of generating a query Q given the observation of a document D. The searched documents are ranked in descending order of this probability. Song and Croft (1998) proposed a general language model to incorporate word dependence by using bigrams. 
Srikanth and Srihari (2002) introduced biterm language models, which are similar to the bigram model except that the constraint on term order is relaxed, and observed improved performance. Gao et al. (2004) presented a new method of capturing word dependencies, in which they extended state-of-the-art language modeling approaches to information retrieval by introducing a dependence structure learned from training data. Cao et al. (2005) proposed a novel dependence model that incorporates both WordNet relationships and co-occurrence within the language modeling framework for IR. In our approach, we propose bigram and biterm models to capture the term dependence in the centroid vector.
Applying language modeling to the QA task has not been widely researched. Zhang and Lee (2003) proposed a method using language models for passage retrieval for factoid QA. They trained two language models, one a question-topic language model and the other a passage language model, and used the divergence between the two to rank passages. In this paper, we focus on reranking answers for definitional questions. Among other ranking approaches, Xu et al. (2005) formalized ranking definitions as a classification problem, and Cui et al. (2004) proposed soft patterns to rank answers for definitional QA.
3 Reranking Answers Using Language Model
3.1 Model background
In practice, a language model is often approximated by N-gram models.
Unigram: P(w_{1,n}) = P(w_1)P(w_2)...P(w_n)    (1)
Bigram: P(w_{1,n}) = P(w_1)P(w_2|w_1)...P(w_n|w_{n-1})    (2)
The unigram model makes a strong assumption that each word occurs independently. The bigram model takes the local context into consideration and has been shown to work better than the unigram language model in IR (e.g., Song and Croft, 1998). Biterm language models are similar to bigram language models except that the constraint of term order is relaxed. Therefore, a document containing "information retrieval" and a document containing "retrieval (of) information" will be assigned the same generation probability. The biterm probabilities can be approximated using the frequency of occurrence of terms. Three approximation methods were proposed in Srikanth and Srihari (2002). The so-called min-Adhoc approximation truly relaxes the constraint of word order and outperformed the other two approximation methods in their experiments:
P_BT(w_i|w_{i-1}) ≈ [ C(w_{i-1}, w_i) + C(w_i, w_{i-1}) ] / min{ C(w_{i-1}), C(w_i) }    (3)
Equation (3) is the min-Adhoc approximation, where C(X) gives the number of occurrences of the string X.
3.2 Reranking based on language model
In our approach, we adopt bigram and biterm language models. As a smoothing approach, linear interpolation of unigrams and bigrams is employed. Given a candidate answer A = t_1 t_2 ... t_i ... t_n and a bigram or biterm back-off language model OC trained on the ordered centroid, the probability of generating A can be estimated by Equation (4):
P(A|OC) = P(t_1, ..., t_n | OC) = P(t_1|OC) ∏_{i=2}^{n} [ λ P(t_i|OC) + (1-λ) P(t_i|t_{i-1}, OC) ]    (4)
where OC stands for the language model of the ordered centroid and λ is the mixture weight combining the unigram and bigram (or biterm) probabilities. Taking the logarithm and then the exponential of Equation (4), we get Equation (5):
Score(A) = exp[ log P(t_1|OC) + Σ_{i=2}^{n} log( λ P(t_i|OC) + (1-λ) P(t_i|t_{i-1}, OC) ) ]    (5)
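A minimal sketch of this interpolated scoring is given below, using MLE counts over the ordered centroid and the min-Adhoc estimate of Equation (3) for the biterm case. The whitespace-style tokenization, the probability floor used to avoid log(0), and the default value of λ are illustrative choices rather than details from the paper, and the brevity penalty of the next subsection is omitted.

```python
import math
from collections import Counter

def train_counts(oc_tokens):
    """Unigram and adjacent-pair counts over the ordered-centroid token stream."""
    unigrams = Counter(oc_tokens)
    pairs = Counter(zip(oc_tokens, oc_tokens[1:]))
    return unigrams, pairs, len(oc_tokens)

def score(answer_tokens, unigrams, pairs, total, lam=0.6, biterm=True):
    """Equation (5): log P(t1|OC) plus the sum over i >= 2 of
    log[ lam*P(ti|OC) + (1-lam)*P(ti|t_{i-1},OC) ], then exponentiated."""
    floor = 1e-12  # guard against log(0); an illustrative choice

    def p_uni(t):
        return unigrams[t] / total if total else 0.0

    def p_cond(prev, t):
        if biterm:  # min-Adhoc approximation, Equation (3)
            num = pairs[(prev, t)] + pairs[(t, prev)]
            den = min(unigrams[prev], unigrams[t])
        else:       # plain bigram MLE
            num, den = pairs[(prev, t)], unigrams[prev]
        return num / den if den else 0.0

    log_p = math.log(max(p_uni(answer_tokens[0]), floor))
    for prev, t in zip(answer_tokens, answer_tokens[1:]):
        mix = lam * p_uni(t) + (1 - lam) * p_cond(prev, t)
        log_p += math.log(max(mix, floor))
    return math.exp(log_p)
```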
We observe that this formula penalizes verbose candidate answers. This can be alleviated by adding a brevity penalty, BP, which is inspired by machine translation evaluation (Papineni et al., 2001):
BP = exp( min{ 1, 1 - L_ref/L_A } )    (6)
where L_ref is a constant standing for the length of the reference answer (i.e., the centroid vector) and L_A is the length of the candidate answer. Combining Equations (5) and (6), we get the final scoring function:
FinalScore(A) = BP × Score(A) = exp( min{ 1, 1 - L_ref/L_A } ) × exp[ log P(t_1|OC) + Σ_{i=2}^{n} log( λ P(t_i|OC) + (1-λ) P(t_i|t_{i-1}, OC) ) ]    (7)
3.3 Parameter estimation
In Equation (7), we need to estimate three parameters: P(t_i|OC), P(t_i|t_{i-1}, OC) and λ. For P(t_i|OC) and P(t_i|t_{i-1}, OC), maximum likelihood estimation (MLE) is employed:
P(t_i|OC) = Count_OC(t_i) / N_OC    (8)
P(t_i|t_{i-1}, OC) = Count_OC(t_{i-1}, t_i) / Count_OC(t_{i-1})    (9)
where Count_OC(X) is the number of occurrences of the string X in the ordered centroid and N_OC stands for the total number of tokens in the ordered centroid. For the biterm language model, we use the above-mentioned min-Adhoc approximation (Srikanth and Srihari, 2002):
P_BT(t_i|t_{i-1}, OC) = [ Count_OC(t_{i-1}, t_i) + Count_OC(t_i, t_{i-1}) ] / min{ Count_OC(t_{i-1}), Count_OC(t_i) }    (10)
For the unigram we do not need smoothing, because we only consider terms in the centroid vector. Recall that the bigram and biterm probabilities have already been smoothed by interpolation. The weight λ can be learned from a training corpus using an Expectation Maximization (EM) algorithm. Specifically, we estimate λ by maximizing the likelihood of all training instances, given the bigram or biterm model:
λ* = argmax_λ Σ_{j=1}^{|INS|} log P(t_1^(j) ... t_{l_j}^(j) | OC) = argmax_λ Σ_{j=1}^{|INS|} Σ_{i=2}^{l_j} log[ λ P(t_i^(j)) + (1-λ) P(t_i^(j)|t_{i-1}^(j)) ]    (11)
BP and P(t_1) are ignored because they do not affect λ. λ can be estimated with the following EM iterative procedure:
1) Initialize λ to a random estimate between 0 and 1, e.g., 0.5;
2) Update λ using
λ^(r+1) = (1/|INS|) Σ_{j=1}^{|INS|} (1/l_j) Σ_{i=2}^{l_j} [ λ^(r) P(t_i^(j)) ] / [ λ^(r) P(t_i^(j)) + (1-λ^(r)) P(t_i^(j)|t_{i-1}^(j)) ]    (12)
where INS denotes all training instances, |INS| gives the number of training instances (used as a normalization factor), and l_j gives the number of tokens in the jth instance in the training data;
3) Repeat Step 2 until λ converges.
We use the TREC 2004 test set as our training data (the TREC-13 test data includes 65 definition questions; NIST dropped one in the official evaluation) and set λ to 0.4 for the bigram model and 0.6 for the biterm model according to the experimental results.
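A small sketch of this EM procedure is given below, assuming p_uni and p_cond implement Equations (8)-(10). The fixed iteration count and the per-instance normalization by l_j follow the update of Equation (12) as reconstructed above; they are illustrative, not the authors' exact implementation.

```python
def estimate_lambda(instances, p_uni, p_cond, n_iter=20):
    """EM re-estimation of the interpolation weight (Equation (12)).
    `instances` is a list of token sequences drawn from the ordered centroid;
    p_uni(t) and p_cond(prev, t) are the probabilities of Equations (8)-(10)."""
    lam = 0.5                                   # Step 1: initial estimate
    for _ in range(n_iter):                     # Step 3: iterate until convergence
        outer = 0.0
        for tokens in instances:                # Step 2: update lambda
            inner = 0.0
            for prev, t in zip(tokens, tokens[1:]):
                num = lam * p_uni(t)
                den = num + (1.0 - lam) * p_cond(prev, t)
                if den > 0.0:
                    inner += num / den
            outer += inner / max(len(tokens), 1)
        lam = outer / len(instances)
    return lam
```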
4 System Architecture
[Figure 1. System architecture: Stage 1 learns the ordered centroid for the target (e.g., "born Nov 14 1900" for "Aaron Copland") from the Web and trains the language model; Stage 2 extracts candidate answers from AQUAINT and reranks them with the language model; Stage 3 removes redundant answers to produce the final answers (e.g., "American composer").]
We propose a three-stage approach for answer extraction. It involves: 1) learning a language model from the web; 2) adopting the language model to rerank candidate answers; 3) removing redundancies. Figure 1 shows the five main modules.
Learning ordered centroid:
1) Query expansion. Definitional questions are normally short (e.g., "Who is Bill Gates?"), so query expansion is used to refine the query intention. First, we reformulate the query by simply adding clue words to the question: for a "Who is ...?" question we add the word "biography", and for a "What is ...?" question we add "is usually", "refers to", etc. We learn these clue words using a method similar to the one proposed in (Ravichandran and Hovy, 2002). Second, we query a web search engine (Google, http://www.google.com) with the reformulated query and learn the top-R (we empirically set R=5) terms most frequently co-occurring with the target in the returned snippets as query expansion terms;
2) Learning centroid vector (profile). We query Google again with the target and the expansion terms learned in the previous step, download the top-N snippets (we empirically set N=500, a trade-off between the number of snippets and the time complexity), and split the snippets into sentences. We then retain the sentences that contain the target, denoted as W. Finally, we learn the top-M (we empirically set M=350) most frequent co-occurring terms (stemmed) from W, weighted by Equation (13) (Cui et al., 2004), as the centroid vector:
Weight(t) = [ log(Co(t, T) + 1) / ( log(Count(t) + 1) + log(Count(T) + 1) ) ] × idf(t)    (13)
where Co(t, T) denotes the number of sentences in which t co-occurs with the target T, and Count(t) gives the number of sentences containing the word t. We also use the inverse document frequency of t, idf(t), as a measure of the global importance of the word (we approximate words' IDF with statistics from the British National Corpus site, http://www.itri.brighton.ac.uk/~Adam.Kilgarriff/bncreadme.html);
3) Extracting ordered centroid. For each sentence in W, we retain the terms in the centroid vector as the ordered centroid list. Words not contained in the centroid vector are treated as "stop words" and ignored. E.g., for "Who is Aaron Copland?", the ordered centroid list is built as follows (the extracted words are shown after the arrow):
1. Today's Highlight in History: On November 14, 1900, Aaron Copland, one of America's leading 20th century composers, was born in New York City. ⇒ November 14 1900 Aaron Copland America composer born New York City
2. ...
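A compact sketch of this centroid construction follows, under simplifying assumptions not made explicit in the paper: whitespace tokenization, no stemming, and an externally supplied idf table (approximated, as above, from BNC). The fallback of 1.0 for unseen idf values is likewise only an illustrative choice.

```python
import math
from collections import Counter

def centroid_vector(sentences, target, idf, top_m=350):
    """Score candidate terms with Equation (13) and keep the top-M as the centroid.
    `sentences` are the snippet sentences; `idf` maps a term to its inverse
    document frequency."""
    co = Counter()        # Co(t, T): sentences where t co-occurs with the target
    count = Counter()     # Count(t): sentences containing t
    count_target = 0      # Count(T): sentences containing the target
    for sent in sentences:
        tokens = set(sent.lower().split())
        count.update(tokens)
        if target.lower() in sent.lower():
            count_target += 1
            co.update(tokens)

    def weight(t):
        return (math.log(co[t] + 1)
                / (math.log(count[t] + 1) + math.log(count_target + 1))
                * idf.get(t, 1.0))

    top_terms = sorted(co, key=weight, reverse=True)[:top_m]
    return {t: weight(t) for t in top_terms}

def ordered_centroid(sentences_with_target, centroid):
    """Step 3: keep, in order, only the centroid terms of each sentence in W."""
    return [[t for t in sent.lower().split() if t in centroid]
            for sent in sentences_with_target]
```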
Extracting candidate answers: We extract candidates from the AQUAINT corpus by 1) querying the AQUAINT corpus with the target and retrieving relevant documents; and 2) splitting the documents into sentences and extracting the sentences containing the target. Here, in order to improve recall, simple heuristic rules are used to handle the problem of coreference resolution: if a sentence is deemed to contain the target and its next sentence starts with "he", "she", "it", or "they", then the next sentence is also retained.
Training language models: As mentioned above, we train language models using the obtained ordered centroid for each question.
Answer reranking: Once the language models and the candidate answers are ready for a given question, the candidate answers are reranked based on the probabilities of the language models generating the candidate answers.
Removing redundancies: Repetitive and similar candidate sentences are removed. Given a reranked candidate answer set CA, redundancy removal is conducted as follows:
Step 1: Initially set the result A = {}, get the top (j=1) element from CA and add it to A; set j=2.
Step 2: Get the jth element from CA, denoted as CAj. Compute the cosine similarity between CAj and each element i of A, expressed as sij. Then let sik = max{s1j, s2j, ..., sij}; if sik < a threshold (we set it to 0.75), then add CAj to the set A.
Step 3: If the length of A exceeds a predefined threshold, exit; otherwise, set j=j+1 and go to Step 2.
Figure 2. Algorithm for removing redundancy.
5 Experiment & Evaluation
In order to obtain a comparable evaluation, we apply our approach to the TREC 2003 definitional QA task. More details are given in the following sections.
5.1 Experiment setup
5.1.1 Dataset
We employ the dataset from the TREC 2003 QA task. It includes the AQUAINT corpus of more than 1 million news articles from the New York Times (1998-2000), Associated Press (1998-2000) and Xinhua News Agency (1996-2000), and 50 definitional question/answer pairs. Of these 50 definitional questions, 30 are about people (e.g., Aaron Copland), 10 are about organizations (e.g., Friends of the Earth) and 10 are about other entities (e.g., quasars). We employ Lemur (a free IR tool, http://www.lemurproject.org/) to retrieve relevant documents from the AQUAINT corpus. For each query, we return the top 500 documents.
5.1.2 Evaluation metrics
We adopt the evaluation metrics used in the TREC definitional QA task (Voorhees, 2003 and 2004). TREC provides a list of essential and acceptable nuggets for answering each question; we use these nuggets to assess our approach. In this process, two human assessors examine how many essential and acceptable nuggets are covered in the returned answers. Every question is scored using nugget recall (NR) and an approximation to nugget precision (NP) based on answer length. The final score for a definition response is computed with the F-Measure. In TREC 2003, the β parameter was set to 5, indicating that recall is 5 times as important as precision (Voorhees, 2003):
F(β=5) = ( (5^2 + 1) × NP × NR ) / ( 5^2 × NP + NR )    (14)
in which
NR = (# essential nuggets returned in answer) / (# essential nuggets)    (15)
NP = 1 if length < allowance, and NP = 1 - (length - allowance)/length otherwise    (16)
where allowance = 100 × (# essential + # acceptable nuggets returned) and length = # non-whitespace characters in the strings returned.
5.1.3 Baseline system
We employ a TFIDF heuristic-based approach as our baseline system, in which the candidate answers and the centroid are treated as bags of words:
weight_i = TF_i × IDF_i = TF_i × ln(N / DF_i)    (17)
where TF_i gives the occurrences of term i, DF_i is the number of documents containing term i, and N gives the total number of documents. For comparison purposes, the unigram model is adopted; its scoring function is similar to Equation (7), the main difference being that only the unigram probability P(t_i|OC) is used in the unigram-based scoring function. For all systems, we empirically set the threshold of answer length to 12 sentences for people targets (e.g., Aaron Copland) and 10 sentences for other targets (e.g., quasars).
5.2 Performance evaluation
As the first evaluation, we assess the performance obtained by our language model method against the baseline system without query expansion (QE). The evaluation results are shown in Table 1.
                   Average NR        Average NP        F(5)
Baseline (TFIDF)   0.469             0.221             0.432
Unigram            0.508 (+8.3%)     0.204 (-7.7%)     0.459 (+6.3%)
Bigram             0.554 (+18.1%)    0.234 (+5.9%)     0.505 (+16.9%)
Biterm             0.567 (+20.9%)    0.222 (+0.5%)     0.511 (+18.3%)
Table 1. Comparisons without QE.
From Table 1, it is easy to observe that the unigram-, bigram- and biterm-based approaches improve F(5) by 6.3%, 16.9% and 18.3% over the baseline system, respectively.
At the same time, the bigram and biterm improves the 7 We also use British National Corpus (BNC) to estimate it. 1085 F(5) by 10.0% and 11.3% against the unigram respectively. The unigram slightly outperform the baseline. We also notice that the biterm model improves slightly over the bigram model since it ignores the order of term-occurrence. This observation coincides with the experimental results of Srikanth and Srihari (2002). These results show that the bigram and biterm models outperform the VSM model and the unigram model dramatically. It is a clear indication that the language model which takes into account the term dependence among centroid vector is an effective way to rerank answers. As mentioned above, QE is involved in our system. In the second evaluation, we assess the performance obtained by the language model method against the baseline system with QE. We list the evaluation results in Table 2. Average NR Average NP F(5) Baseline (QE) 0.508 0.207 0.462 Unigram (QE) 0.518 (+2.0%) 0.223 (+7.7%) 0.472 (+2.2%) Bigram (QE) 0.573 (+12.8%) 0.228 (+10.1%) 0.518 (+12.1%) Biterm (QE) 0.582 (+14.6%) 0.240 (+15.9%) 0.531 (+14.9%) Table 2. Comparisons with QE. From Table 2, we observe that, with QE, the bigram and biterm still outperform the baseline system (VSM) significantly by 12.1% (p8=0.03) and 14.9% (p=0.004) in F(5). Furthermore, the bigram and biterm perform significantly better than the unigram by 9.7% (p=0.07) and 12.5% (p=0.02) in F(5) respectively. This indicates that the term dependence is effective in keeping improving the performance. It is easy to observe that the baseline is close to the unigram model since both two systems are based on the independent assumption. We also notice that the biterm model improves slightly over the bigram model. At the same time, all of the four systems improve the performance against the corresponding system without QE. The main reason is that the qualities of the centroid vector can be enhanced with QE. We are interested in the performance comparison with or without QE for each system. Through comparison it is found that the baseline system relies on QE more heavily than our approach does. With QE, the baseline system improves the performance by 6.9% and the language model approaches improve the performance by 2.8%, 2.6% and 3.9%, respectively. 8 T-Test has been performed. F(5) performance comparison between the baseline model and the biterm model for each of 50 TREC questions is shown in Figure 3. QE is used in both the baseline system and the biterm system. F(5) performance comparision for each question (Both with QE) 0 0.2 0.4 0.6 0.8 1 1.2 1 4 7 10 13 16 19 22 25 28 31 34 37 40 43 46 49 Question ID F-5 Score Baseline Our Biterm LM Figure 3. Biterm vs. Baseline. We are also interested in the comparison with the systems in TREC 2003. The best F(5) score returned by our proposed approach is 0.531, which is close to the top 1 run in TREC 2003 (Voorhees, 2003). The F(5) score of the best system is 0.555, reported by BBN’s system (Xu et al., 2003). In BBN’s experiments, the centroid vector was learned from the human made external knowledge resources, such as encyclopedia and the web. Table 3 gives the comparison between our biterm model-based system with the BBN’s run with different β values. F( β ) Score Run Tag β =1 β =2 β =3 β =4 β =5 BBN 0.310 0.423 0.493 0.532 0.555 Ours 0.288 0.382 0.470 0.509 0.531 Table 3. Comparison with BBN’s run. 5.3 Case study A positive example returned by our proposed approach is given below. 
For Qid: 2304: “Who is Niels Bohr?”, the reference answers are given in Table 4 (only vital nuggets are listed): vital Danish vital Nuclear physicist vital Helped create atom bomb vital Nobel Prize winner Table 4. Reference answers for question “Who is Niels Bohr?”. Answers returned by the baseline system and our proposed system are presented in Table 5. System Returned answers (Partly) Baseline system 1. ..., Niels Bohr, the great Danish scientist 2. ...the German physicist Werner Heisenberg and the Danish physicist 1086 Niels Bohr 3. ...took place between the Danish physicist Niels Bohr and his onetime protege, the German scientist ... 4. ... two great physicists, the Dane Niels Bohr and Werner Heisenberg ... 5. ... Proposed system 1. ...physicist Werner Heisenberg travel to ... his colleague and old mentor, Niels Bohr, the great Danish scientist 2. ... two great physicists, the Dane Niels Bohr and Werner Heisen-berg ... 3. Today's Birthdays: ... Danish nuclear physicist and Nobel Prize winner Niels Bohr (1885-1962) 4. the Danish atomic physicist, and his German pupil, Werner Heisenberg, the author of the uncertainty principle 5. ... Table 5. Baseline vs. our system for question “Who is Niels Bohr?”. From Table 5, it can be seen that the baseline system returned only one vital nugget: Danish (here we don’t think that physicist is equal to nuclear physicist semantically). Our proposed system returned three vital nuggets: Danish, Nuclear physicist, and Nobel Prize winner. The answer sentence “Today's Birthdays: ... Danish nuclear physicist and Nobel Prize winner Niels Bohr (1885-1962)” contains more descriptive information for the question target “Niels Bohr” and is ranked 3rd in the top 12 answers in our proposed system. 5.4 Error analysis Although we have shown that the language model-based approach significantly improves the system performance, there is still plenty of room for improvement. 1) Sparseness of search results derogated the learning of the ordered centroid: E.g.: Qid 2348: “What is the medical condition shingles?”, in which we treat the words “medical condition shingles” as the question target. We found that few sentences contain the target “medical condition shingles”. We found utilizing multiple search engines, such as MSN9, AltaVista10 might alleviate this problem. Besides, more effective smoothing techniques could be promising. 2) Term ambiguity: for some queries, the irrelated documents are returned. E.g., for Qid 2267: “Who is Alexander Pope?”, all documents returned from the IR tool Lemur for 9 http://www.msn.com 10 http://www.altavista.com this question are about “Pope John Paul II”, not “Alexander Pope”. This may be caused by the ambiguity of the word “Pope”. In this case, term disambiguation or adding some constraint terms which are learned from the web to the query to the AQUAINT corpus might be helpful. 6 Conclusions and Future Work In this paper, we presented a novel answer reranking method for definitional question. We use bigram and biterm language models to capture the term dependence. Our contribution can be summarized as follows: 1) Word dependence is explored from ordered centroid learned from snippets of a search engine; 2) Bigram and biterm models are presented to capture the term dependence and rerank candidate answers for definitional QA; 3) Evaluation results show that both bigram and biterm models outperform the VSM and unigram model significantly on TREC 2003 test set. 
In our experiments, centroid words were learned from the returned snippets of a web search engine. In the future, we are interested in enhancing the centroid learning using human knowledge sources such as encyclopedia. In addition, we will explore new smoothing techniques to enhance the interpolation method in our current approach. 7 Acknowledgements The authors are grateful to Dr. Cheng Niu, Yunbo Cao for their valuable suggestions on the draft of this paper. We are indebted to Shiqi Zhao, Shenghua Bao, Wei Yuan for their valuable discussions about this paper. We also thank Dwight for his assistance to polish the English. Thanks also go to anonymous reviewers whose comments have helped improve the final version of this paper. References E. Brill, J. Lin, M. Banko, S. Dumais and A. Ng. 2001. Data-Intensive Question Answering. In Proceedings of the Tenth Text Retrieval Conference (TREC 2001), Gaithersburg, MD, pp. 183-189. S. Blair-Goldensohn, K.R. McKeown and A. Hazen Schlaikjer. 2003. A Hybrid Approach for QA Track Definitional Questions. In Proceedings of the Tenth Text Retrieval Conference (TREC 2003), pp. 336-343. 1087 S. F. Chen and J. T. Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th Annual Meeting of the ACL, pp. 310-318. Hang Cui, Min-Yen Kan and Tat-Seng Chua. 2004. Unsupervised Learning of Soft Patterns for Definitional Question Answering. In Proceedings of the Thirteenth World Wide Web conference (WWW 2004), New York, pp. 90-99. Guihong Cao, Jian-Yun Nie, and Jing Bai. 2005. Integrating Word Relationships into Language Models. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development of Information Retrieval (SIGIR 2005), Salvador, Brazil. Jianfeng Gao, Jian-Yun Nie, Guangyuan Wu and Guihong Cao. 2004. Dependence language model for information retrieval. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development of Information Retrieval (SIGIR 2004), Sheffield, UK. Chin-Yew Lin. 2002. The Effectiveness of Dictionary and Web-Based Answer Reranking. In Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002), Taipei, Taiwan. Lafferty, J. and Zhai, C. 2001. Document language models, query models, and risk minimization for information retrieval. In W.B. Croft, D.J. Harper, D.H. Kraft, & J. Zobel (Eds.), In Proceedings of the 24th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, New Orleans, Louisiana, New York, pp.111-119. Magnini, B., Negri, M., Prevete, R., and Tanev, H. 2002. Is It the Right Answer? Exploiting Web Redundancy for Answer Validation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-2002), Philadelphia, PA. Miller, D., Leek, T., and Schwartz, R. 1999. A hidden Markov model information retrieval system. In Proceedings of the 22nd Annual International ACM SIGIR Conference, pp. 214-221. K. Papineni, S. Roukos, T. Ward, and W.J. Zhu. 2001. Bleu: a Method for Automatic Evaluation of Machine Translation. IBM Research Report rc22176 (w0109022), Thomas J. Watson Research Center. Ponte, J., and Croft, W.B. 1998. A language modeling approach to information retrieval. In Proceedings of the 21st Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, New York, pp.275-281. J. Prager, D. Radev, and K. Czuba. 2001. Answering what-is questions by virtual annotation. 
In Proceedings of the Human Language Technology Conference (HLT 2001), San Diego, CA. Deepak Ravichandran and Eduard Hovy. 2002. Learning Surface Text Patterns for a Question Answering System. In Proceedings of the 40th Annual Meeting of the ACL, pp. 41-47. Song, F., and Croft, W.B. 1999. A general language model for information retrieval. In Proceedings of the 22nd Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, New York, pp.279-280. Srikanth, M. and Srihari, R. 2002. Biterm language models for document retrieval. In Proceedings of the 2002 ACM SIGIR Conference on Research and Development in Information Retrieval, Tampere, Finland. Ellen M. Voorhees. 2002. Overview of the TREC 2002 question answering track. In Proceedings of the Eleventh Text REtrieval Conference (TREC 2002). Ellen M. Voorhees. 2003. Overview of the TREC 2003 question answering track. In Proceedings of the Twelfth Text REtrieval Conference (TREC 2003). Ellen M. Voorhees. 2004. Overview of the TREC 2004 question answering track. In Proceedings of the Twelfth Text REtrieval Conference (TREC 2004). Lide Wu, Xuanjing Huang, Lan You, Zhushuo Zhang, Xin Li, and Yaqian Zhou. 2004. FDUQA on TREC2004 QA Track. In Proceedings of the Thirteenth Text REtrieval Conference (TREC 2004). Jinxi Xu, Ana Licuanan, and Ralph Weischedel. 2003. TREC2003 QA at BBN: Answering definitional questions. In Proceedings of the Twelfth Text REtrieval Conference (TREC 2003). Jun Xu, Yunbo Cao, Hang Li and Min Zhao. 2005. Ranking Definitions with Supervised Learning Methods. In Proceedings of 14th International World Wide Web Conference (WWW 2005), Industrial and Practical Experience Track, Chiba, Japan, pp.811-819. Zhang D. and Lee WS. 2003. A Language Modeling Approach to Passage Question Answering. In Proceedings of The 12th Text Retrieval Conference (TREC2003), NIST, Gaithersburg. Zhai, C, and Lafferty, J. 2001. A Study of Smoothing Methods for Language Models Applied to Information Retrieval. In Proceedings of the 2001 ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 334-342. 1088 | 2006 | 136 |