Convolution Kernels with Feature Selection for Natural Language Processing Tasks Jun Suzuki, Hideki Isozaki and Eisaku Maeda NTT Communication Science Laboratories, NTT Corp. 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto,619-0237 Japan {jun, isozaki, maeda}@cslab.kecl.ntt.co.jp Abstract Convolution kernels, such as sequence and tree kernels, are advantageous for both the concept and accuracy of many natural language processing (NLP) tasks. Experiments have, however, shown that the over-fitting problem often arises when these kernels are used in NLP tasks. This paper discusses this issue of convolution kernels, and then proposes a new approach based on statistical feature selection that avoids this issue. To enable the proposed method to be executed efficiently, it is embedded into an original kernel calculation process by using sub-structure mining algorithms. Experiments are undertaken on real NLP tasks to confirm the problem with a conventional method and to compare its performance with that of the proposed method. 1 Introduction Over the past few years, many machine learning methods have been successfully applied to tasks in natural language processing (NLP). Especially, state-of-the-art performance can be achieved with kernel methods, such as Support Vector Machine (Cortes and Vapnik, 1995). Examples include text categorization (Joachims, 1998), chunking (Kudo and Matsumoto, 2002) and parsing (Collins and Duffy, 2001). Another feature of this kernel methodology is that it not only provides high accuracy but also allows us to design a kernel function suited to modeling the task at hand. Since natural language data take the form of sequences of words, and are generally analyzed using discrete structures, such as trees (parsed trees) and graphs (relational graphs), discrete kernels, such as sequence kernels (Lodhi et al., 2002), tree kernels (Collins and Duffy, 2001), and graph kernels (Suzuki et al., 2003a), have been shown to offer excellent results. These discrete kernels are related to convolution kernels (Haussler, 1999), which provides the concept of kernels over discrete structures. Convolution kernels allow us to treat structural features without explicitly representing the feature vectors from the input object. That is, convolution kernels are well suited to NLP tasks in terms of both accuracy and concept. Unfortunately, experiments have shown that in some cases there is a critical issue with convolution kernels, especially in NLP tasks (Collins and Duffy, 2001; Cancedda et al., 2003; Suzuki et al., 2003b). That is, the over-fitting problem arises if large “substructures” are used in the kernel calculations. As a result, the machine learning approach can never be trained efficiently. To solve this issue, we generally eliminate large sub-structures from the set of features used. However, the main reason for using convolution kernels is that we aim to use structural features easily and efficiently. If use is limited to only very small structures, it negates the advantages of using convolution kernels. This paper discusses this issue of convolution kernels, and proposes a new method based on statistical feature selection. The proposed method deals only with those features that are statistically significant for kernel calculation, large significant substructures can be used without over-fitting. Moreover, the proposed method can be executed efficiently by embedding it in an original kernel calculation process by using sub-structure mining algorithms. 
In the next section, we provide a brief overview of convolution kernels. Section 3 discusses one issue of convolution kernels, the main topic of this paper, and introduces some conventional methods for solving this issue. In Section 4, we propose a new approach based on statistical feature selection to offset the issue of convolution kernels using an example consisting of sequence kernels. In Section 5, we briefly discuss the application of the proposed method to other convolution kernels. In Section 6, we compare the performance of conventional methods with that of the proposed method by using real NLP tasks: question classification and sentence modality identification. The experimental results described in Section 7 clarify the advantages of the proposed method. 2 Convolution Kernels Convolution kernels have been proposed as a concept of kernels for discrete structures, such as sequences, trees and graphs. This framework defines the kernel function between input objects as the convolution of “sub-kernels”, i.e. the kernels for the decompositions (parts) of the objects. Let X and Y be discrete objects. Conceptually, convolution kernels K(X, Y ) enumerate all substructures occurring in X and Y and then calculate their inner product, which is simply written as: K(X, Y ) = ⟨φ(X), φ(Y )⟩= X i φi(X) · φi(Y ). (1) φ represents the feature mapping from the discrete object to the feature space; that is, φ(X) = (φ1(X), . . . , φi(X), . . .). With sequence kernels (Lodhi et al., 2002), input objects X and Y are sequences, and φi(X) is a sub-sequence. With tree kernels (Collins and Duffy, 2001), X and Y are trees, and φi(X) is a sub-tree. When implemented, these kernels can be efficiently calculated in quadratic time by using dynamic programming (DP). Finally, since the size of the input objects is not constant, the kernel value is normalized using the following equation. ˆK(X, Y ) = K(X, Y ) p K(X, X) · K(Y, Y ) (2) The value of ˆK(X, Y ) is from 0 to 1, ˆK(X, Y ) = 1 if and only if X = Y . 2.1 Sequence Kernels To simplify the discussion, we restrict ourselves hereafter to sequence kernels. Other convolution kernels are briefly addressed in Section 5. Many kinds of sequence kernels have been proposed for a variety of different tasks. This paper basically follows the framework of word sequence kernels (Cancedda et al., 2003), and so processes gapped word sequences to yield the kernel value. Let Σ be a set of finite symbols, and Σn be a set of possible (symbol) sequences whose sizes are n or less that are constructed by symbols in Σ. The meaning of “size” in this paper is the number of symbols in the sub-structure. Namely, in the case of sequence, size n means length n. S and T can represent any sequence. si and tj represent the ith and jth symbols in S and T, respectively. Therefore, a S T 1 2 1 1 2 1 λ + λ λ 1 λ λ 1 1 1 1 a, b, c, aa, ab, ac, ba, bc, aba, aac, abc, bac, abac abc S = abac T = prod. 1 0 1 0 1 0 0 1 0 2 1 1 0 1 3 λ λ + 0 λ 0 0 λ 0 (a, b, c, ab, ac, bc, abc) (a, b, c, aa, ab, ac, ba, bc, aba, aac, abc, bac, abac) u 3 5 3λ λ + + kernel value λ sequences sub-sequences 1 0 0 Figure 1: Example of sequence kernel output sequence S can be written as S = s1 . . . si . . . s|S|, where |S| represents the length of S. If sequence u is contained in sub-sequence S[i : j] def = si . . . sj of S (allowing the existence of gaps), the position of u in S is written as i = (i1 : i|u|). The length of S[i] is l(i) = i|u| −i1 + 1. 
For example, if u = ab and S = cacbd, then i = (2 : 4) and l(i) = 4 −2 + 1 = 3. By using the above notations, sequence kernels can be defined as: KSK(S, T) = X u∈Σn X i|u=S[i] λγ(i) X j|u=T [j] λγ(j), (3) where λ is the decay factor that handles the gap present in a common sub-sequence u, and γ(i) = l(i)−|u|. In this paper, | means “such that”. Figure 1 shows a simple example of the output of this kernel. However, in general, the number of features |Σn|, which is the dimension of the feature space, becomes very high, and it is computationally infeasible to calculate Equation (3) explicitly. The efficient recursive calculation has been introduced in (Cancedda et al., 2003). To clarify the discussion, we redefine the sequence kernels with our notation. The sequence kernel can be written as follows: KSK(S, T) = n X m=1 X 1≤i≤|S| X 1≤j≤|T | Jm(Si, Tj). (4) where Si and Tj represent the sub-sequences Si = s1, s2, . . . , si and Tj = t1, t2, . . . , tj, respectively. Let Jm(Si, Tj) be a function that returns the value of common sub-sequences if si = tj. Jm(Si, Tj) = J′ m−1(Si, Tj) · I(si, tj) (5) I(si, tj) is a function that returns a matching value between si and tj. This paper defines I(si, tj) as an indicator function that returns 1 if si = tj, otherwise 0. Then, J′ m(Si, Tj) and J′′ m(Si, Tj) are introduced to calculate the common gapped sub-sequences between Si and Tj. J′ m(Si, Tj) =      1 if m = 0, 0 if j = 0 and m > 0, λJ′ m(Si, Tj−1) + J′′ m(Si, Tj−1) otherwise (6) J′′ m(Si, Tj) =    0 if i = 0, λJ′′ m(Si−1, Tj) + Jm(Si−1, Tj) otherwise (7) If we calculate Equations (5) to (7) recursively, Equation (4) provides exactly the same value as Equation (3). 3 Problem of Applying Convolution Kernels to NLP tasks This section discusses an issue that arises when applying convolution kernels to NLP tasks. According to the original definition of convolution kernels, all the sub-structures are enumerated and calculated for the kernels. The number of substructures in the input object usually becomes exponential against input object size. As a result, all kernel values ˆK(X, Y ) are nearly 0 except the kernel value of the object itself, ˆK(X, X), which is 1. In this situation, the machine learning process becomes almost the same as memory-based learning. This means that we obtain a result that is very precise but with very low recall. To avoid this, most conventional methods use an approach that involves smoothing the kernel values or eliminating features based on the sub-structure size. For sequence kernels, (Cancedda et al., 2003) use a feature elimination method based on the size of sub-sequence n. This means that the kernel calculation deals only with those sub-sequences whose size is n or less. For tree kernels, (Collins and Duffy, 2001) proposed a method that restricts the features based on sub-trees depth. These methods seem to work well on the surface, however, good results are achieved only when n is very small, i.e. n = 2. The main reason for using convolution kernels is that they allow us to employ structural features simply and efficiently. When only small sized substructures are used (i.e. n = 2), the full benefits of convolution kernels are missed. Moreover, these results do not mean that larger sized sub-structures are not useful. In some cases we already know that larger sub-structures are significant features as regards solving the target problem. 
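To make the feature space behind Equation (3) concrete, the following is a brute-force sketch (ours, not the authors' implementation) that enumerates every gapped sub-sequence explicitly. It is exponential in the sequence length and is only meant to illustrate what the efficient recursion of Equations (4) to (7) computes.

```python
from collections import defaultdict
from itertools import combinations

def gapped_feature_map(seq, n, lam):
    """phi(seq): for each sub-sequence u of size <= n, sum lam**gamma(i) over
    all occurrences i of u in seq, where gamma(i) = l(i) - |u| is the gap length."""
    phi = defaultdict(float)
    for m in range(1, n + 1):
        for idx in combinations(range(len(seq)), m):   # index tuples i1 < ... < im
            u = tuple(seq[k] for k in idx)
            gap = (idx[-1] - idx[0] + 1) - m            # l(i) - |u|
            phi[u] += lam ** gap
    return phi

def naive_sequence_kernel(S, T, n, lam):
    """K_SK(S, T) of Equation (3), computed by explicit enumeration."""
    phi_S, phi_T = gapped_feature_map(S, n, lam), gapped_feature_map(T, n, lam)
    return sum(v * phi_T[u] for u, v in phi_S.items() if u in phi_T)
```

For S = abac and T = abc with n = 4, this evaluates to λ³ + 3λ + 5, which appears to match the kernel value sketched in Figure 1; the normalized kernel of Equation (2) is then obtained by dividing by the square root of K(S, S) · K(T, T).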
That is, these significant larger sub-structures, Table 1: Contingency table and notation for the chisquared value c ¯c P row u Ouc = y Ou¯c Ou = x ¯u O¯uc O¯u¯c O¯u P column Oc = M O¯c N which the conventional methods cannot deal with efficiently, should have a possibility of improving the performance furthermore. The aim of the work described in this paper is to be able to use any significant sub-structure efficiently, regardless of its size, to solve NLP tasks. 4 Proposed Feature Selection Method Our approach is based on statistical feature selection in contrast to the conventional methods, which use sub-structure size. For a better understanding, consider the twoclass (positive and negative) supervised classification problem. In our approach we test the statistical deviation of all the sub-structures in the training samples between the appearance of positive samples and negative samples. This allows us to select only the statistically significant sub-structures when calculating the kernel value. Our approach, which uses a statistical metric to select features, is quite natural. We note, however, that kernels are calculated using the DP algorithm. Therefore, it is not clear how to calculate kernels efficiently with a statistical feature selection method. First, we briefly explain a statistical metric, the chisquared (χ2) value, and provide an idea of how to select significant features. We then describe a method for embedding statistical feature selection into kernel calculation. 4.1 Statistical Metric: Chi-squared Value There are many kinds of statistical metrics, such as chi-squared value, correlation coefficient and mutual information. (Rogati and Yang, 2002) reported that chi-squared feature selection is the most effective method for text classification. Following this information, we use χ2 values as statistical feature selection criteria. Although we selected χ2 values, any other statistical metric can be used as long as it is based on the contingency table shown in Table 1. We briefly explain how to calculate the χ2 value by referring to Table 1. In the table, c and ¯c represent the names of classes, c for the positive class S T 1 2 1 1 2 1 λ + λ λ 1 λ λ 1 ( ) 2 u χ 0.1 0.5 1.2 1 1 1 1.5 0.9 0.8 a, b, c, aa, ab, ac, ba, bc, aba, aac, abc, bac, abac abc S = abac T = prod. 1 0 1 0 1 0 0 1 0 2 1 1 0 1 3 λ λ + 0 λ 0 0 λ 0 1.0 τ = threshold 2.5 1 1 λ (a, b, c, ab, ac, bc, abc) (a, b, c, aa, ab, ac, ba, bc, aba, aac, abc, bac, abac) u 3 5 3λ λ + + 2 λ + 0 0 0 0 2 1 1 0 1 3 λ λ + 0 λ 0 0 λ 0 kernel value kernel value under the feature selection feature selection λ sequences sub-sequences 1 0 0 0 Figure 2: Example of statistical feature selection and ¯c for the negative class. Ouc, Ou¯c, O¯uc and O¯u¯c represent the number of u that appeared in the positive sample c, the number of u that appeared in the negative sample ¯c, the number of u that did not appear in c, and the number of u that did not appear in ¯c, respectively. Let y be the number of samples of positive class c that contain sub-sequence u, and x be the number of samples that contain u. Let N be the total number of (training) samples, and M be the number of positive samples. Since N and M are constant for (fixed) data, χ2 can be written as a function of x and y, χ2(x, y) = N(Ouc · O¯u¯c −O¯uc · Ou¯c)2 Ou · O¯u · Oc · O¯c . (8) χ2 expresses the normalized deviation of the observation from the expectation. We simply represent χ2(x, y) as χ2(u). 4.2 Feature Selection Criterion The basic idea of feature selection is quite natural. 
First, we decide the threshold τ of the χ2 value. If χ2(u) < τ holds, that is, u is not statistically significant, then u is eliminated from the features and the value of u is presumed to be 0 for the kernel value. The sequence kernel with feature selection (FSSK) can be defined as follows: KFSSK(S, T) = X τ≤χ2(u)|u∈Σn X i|u=S[i] λγ(i) X j|u=T [j] λγ(j). (9) The difference between Equations (3) and (9) is simply the condition of the first summation. FSSK selects significant sub-sequence u by using the condition of the statistical metric τ ≤χ2(u). Figure 2 shows a simple example of what FSSK calculates for the kernel value. 4.3 Efficient χ2(u) Calculation Method It is computationally infeasible to calculate χ2(u) for all possible u with a naive exhaustive method. In our approach, we use a sub-structure mining algorithm to calculate χ2(u). The basic idea comes from a sequential pattern mining technique, PrefixSpan (Pei et al., 2001), and a statistical metric pruning (SMP) method, Apriori SMP (Morishita and Sese, 2000). By using these techniques, all the significant sub-sequences u that satisfy τ ≤χ2(u) can be found efficiently by depth-first search and pruning. Below, we briefly explain the concept involved in finding the significant features. First, we denote uv, which is the concatenation of sequences u and v. Then, u is a specific sequence and uv is any sequence that is constructed by u with any suffix v. The upper bound of the χ2 value of uv can be defined by the value of u (Morishita and Sese, 2000). χ2(uv)≤max χ2(yu, yu), χ2(xu −yu, 0)  =bχ2(u) where xu and yu represent the value of x and y of u. This inequation indicates that if bχ2(u) is less than a certain threshold τ, all sub-sequences uv can be eliminated from the features, because no subsequence uv can be a feature. The PrefixSpan algorithm enumerates all the significant sub-sequences by using a depth-first search and constructing a TRIE structure to store the significant sequences of internal results efficiently. Specifically, PrefixSpan algorithm evaluates uw, where uw represents a concatenation of a sequence u and a symbol w, using the following three conditions. 1. τ ≤χ2(uw) 2. τ > χ2(uw), τ > bχ2(uw) 3. τ > χ2(uw), τ ≤bχ2(uw) With 1, sub-sequence uw is selected as a significant feature. With 2, sub-sequence uw and arbitrary subsequences uwv, are less than the threshold τ. Then w is pruned from the TRIE, that is, all uwv where v represents any suffix pruned from the search space. With 3, uw is not selected as a significant feature because the χ2 value of uw is less than τ, however, uwv can be a significant feature because the upperbound χ2 value of uwv is greater than τ, thus the search is continued to uwv. 
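The statistic of Equation (8), the Morishita-Sese upper bound, and the resulting three-way decision can be written compactly as below. This is a sketch of the pruning test only, with our own variable names (x = O_u, y = O_uc); it does not reproduce the full PrefixSpan traversal.

```python
def chi2(x, y, N, M):
    """Equation (8): chi-squared value of a sub-sequence occurring in x samples,
    y of them positive, out of N training samples of which M are positive.
    Contingency cells: O_uc = y, O_u,not-c = x - y, O_not-u,c = M - y,
    O_not-u,not-c = N - x - M + y."""
    den = x * (N - x) * M * (N - M)
    if den == 0:
        return 0.0
    return N * (y * (N - x - M + y) - (M - y) * (x - y)) ** 2 / den

def chi2_upper_bound(x, y, N, M):
    """Upper bound on chi2 of any extension uv of u (Morishita and Sese, 2000)."""
    return max(chi2(y, y, N, M), chi2(x - y, 0, N, M))

def decide(x, y, N, M, tau):
    """Conditions 1-3 of Section 4.3 for a candidate sub-sequence uw."""
    if chi2(x, y, N, M) >= tau:
        return "select"      # condition 1: uw is a statistically significant feature
    if chi2_upper_bound(x, y, N, M) < tau:
        return "prune"       # condition 2: no extension uwv can become significant
    return "continue"        # condition 3: uw itself fails, but some uwv may succeed
```

Note that a selected uw is presumably still expanded further during the depth-first search, since its upper bound is at least its own χ² value and so condition 2 cannot hold for it.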
Figure 3 shows a simple example of PrefixSpan with SMP that searches for the significant features a b c c d b c a b a c a c d a b d a b c c d b c b a c a c d a b d ⊥ a b c d b c 1.0 τ = b: c: d: +1 -1 +1 -1 -1 a u = w = ( ) 2 uw χ ( ) 2ˆ uw χ TRIE representation x y +1 -1 +1 -1 +1 ab u = d c … w 2 3 1 1 2 1 +1 -1 +1 -1 -1 class training data suffix c: d: w = x y 1 1 1 0 5.0 0.0 5.0 0.8 5.0 0.8 2.2 2.2 1.9 0.1 1.9 1.9 0.8 0.8 5.0 2.2 a: b: c: d: +1 -1 +1 -1 -1 u = Λ w = x y 5 4 4 2 2 2 2 0 c d 1.9 1.9 0.8 0.8 … a b c c d b c a b a c a c d a b d suffix suffix a b c c d b c b a c a c d a b d 5 N = 2 M = 2 3 1 4 5 search order pruned pruned Figure 3: Efficient search for statistically significant sub-sequences using the PrefixSpan algorithm with SMP by using a depth-first search with a TRIE representation of the significant sequences. The values of each symbol represent χ2(u) and bχ2(u) that can be calculated from the number of xu and yu. The TRIE structure in the figure represents the statistically significant sub-sequences that can be shown in a path from ⊥to the symbol. We exploit this TRIE structure and PrefixSpan pruning method in our kernel calculation. 4.4 Embedding Feature Selection in Kernel Calculation This section shows how to integrate statistical feature selection in the kernel calculation. Our proposed method is defined in the following equations. KFSSK(S, T) = n X m=1 X 1≤i≤|S| X 1≤j≤|T | Km(Si, Tj) (10) Let Km(Si, Tj) be a function that returns the sum value of all statistically significant common subsequences u if si = tj. Km(Si, Tj) = X u∈Γm(Si,Tj) Ju(Si, Tj), (11) where Γm(Si, Tj) represents a set of sub-sequences whose size |u| is m and that satisfy the above condition 1. The Γm(Si, Tj) is defined in detail in Equation (15). Then, let Ju(Si, Tj), J ′ u(Si, Tj) and J ′′ u (Si, Tj) be functions that calculate the value of the common sub-sequences between Si and Tj recursively, as well as equations (5) to (7) for sequence kernels. We introduce a special symbol Λ to represent an “empty sequence”, and define Λw = w and |Λw| = 1. Juw(Si, Tj) =    J ′ u(Si, Tj) · I(w) if uw ∈bΓ|uw|(Si, Tj), 0 otherwise (12) where I(w) is a function that returns a matching value of w. In this paper, we define I(w) is 1. bΓm(Si, Tj) has realized conditions 2 and 3; the details are defined in Equation (16). J ′ u(Si, Tj) =      1 if u = Λ, 0 if j = 0 and u ̸= Λ, λJ ′ u(Si, Tj−1) + J ′′ u (Si, Tj−1) otherwise (13) J ′′ u (Si, Tj) =    0 if i = 0, λJ ′′ u (Si−1, Tj) + Ju(Si−1, Tj) otherwise (14) The following five equations are introduced to select a set of significant sub-sequences. Γm(Si, Tj) and bΓm(Si, Tj) are sets of sub-sequences (features) that satisfy condition 1 and 3, respectively, when calculating the value between Si and Tj in Equations (11) and (12). Γm(Si, Tj) = {u | u ∈bΓm(Si, Tj), τ ≤χ2(u)} (15) bΓm(Si, Tj) =    Ψ(bΓ′ m−1(Si, Tj), si) if si = tj ∅ otherwise (16) Ψ(F, w) = {uw | u ∈F, τ ≤bχ2(uw)}, (17) where F represents a set of sub-sequences. Notice that Γm(Si, Tj) and bΓm(Si, Tj) have only subsequences u that satisfy τ ≤χ2(uw) or τ ≤ bχ2(uw), respectively, if si = tj(= w); otherwise they become empty sets. The following two equations are introduced for recursive set operations to calculate Γm(Si, Tj) and bΓm(Si, Tj). 
bΓ′ m(Si, Tj) =        {Λ} if m = 0, ∅ if j = 0 and m > 0, bΓ′ m(Si, Tj−1) ∪bΓ′′ m(Si, Tj−1) otherwise (18) bΓ′′ m(Si, Tj) =    ∅ if i = 0 , bΓ′′ m(Si−1, Tj) ∪bΓm(Si−1, Tj) otherwise (19) In the implementation, Equations (11) to (14) can be performed in the same way as those used to calculate the original sequence kernels, if the feature selection condition of Equations (15) to (19) has been removed. Then, Equations (15) to (19), which select significant features, are performed by the PrefixSpan algorithm described above and the TRIE representation of statistically significant features. The recursive calculation of Equations (12) to (14) and Equations (16) to (19) can be executed in the same way and at the same time in parallel. As a result, statistical feature selection can be embedded in oroginal sequence kernel calculation based on a dynamic programming technique. 4.5 Properties The proposed method has several important advantages over the conventional methods. First, the feature selection criterion is based on a statistical measure, so statistically significant features are automatically selected. Second, according to Equations (10) to (18), the proposed method can be embedded in an original kernel calculation process, which allows us to use the same calculation procedure as the conventional methods. The only difference between the original sequence kernels and the proposed method is that the latter calculates a statistical metric χ2(u) by using a sub-structure mining algorithm in the kernel calculation. Third, although the kernel calculation, which unifies our proposed method, requires a longer training time because of the feature selection, the selected sub-sequences have a TRIE data structure. This means a fast calculation technique proposed in (Kudo and Matsumoto, 2003) can be simply applied to our method, which yields classification very quickly. In the classification part, the features (subsequences) selected in the learning part must be known. Therefore, we store the TRIE of selected sub-sequences and use them during classification. 5 Proposed Method Applied to Other Convolution Kernels We have insufficient space to discuss this subject in detail in relation to other convolution kernels. However, our proposals can be easily applied to tree kernels (Collins and Duffy, 2001) by using string encoding for trees. We enumerate nodes (labels) of tree in postorder traversal. After that, we can employ a sequential pattern mining technique to select statistically significant sub-trees. This is because we can convert to the original sub-tree form from the string encoding representation. Table 2: Parameter values of proposed kernels and Support Vector Machines parameter value soft margin for SVM (C) 1000 decay factor of gap (λ) 0.5 threshold of χ2 (τ) 2.7055 3.8415 As a result, we can calculate tree kernels with statistical feature selection by using the original tree kernel calculation with the sequential pattern mining technique introduced in this paper. Moreover, we can expand our proposals to hierarchically structured graph kernels (Suzuki et al., 2003a) by using a simple extension to cover hierarchical structures. 6 Experiments We evaluated the performance of the proposed method in actual NLP tasks, namely English question classification (EQC), Japanese question classification (JQC) and sentence modality identification (MI) tasks. 
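The two τ values in Table 2 are the standard critical values of the χ² distribution with one degree of freedom at the 10% and 5% significance levels, as the next paragraph explains; if SciPy is available they can be reproduced directly:

```python
from scipy.stats import chi2 as chi2_dist

# Critical values of chi-squared with one degree of freedom
print(chi2_dist.ppf(0.90, df=1))   # ~2.7055, the tau used for FSSK1 (10% level)
print(chi2_dist.ppf(0.95, df=1))   # ~3.8415, the tau used for FSSK2 (5% level)
```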
We compared the proposed method (FSSK) with a conventional method (SK), as discussed in Section 3, and with bag-of-words (BOW) Kernel (BOW-K)(Joachims, 1998) as baseline methods. Support Vector Machine (SVM) was selected as the kernel-based classifier for training and classification. Table 2 shows some of the parameter values that we used in the comparison. We set thresholds of τ = 2.7055 (FSSK1) and τ = 3.8415 (FSSK2) for the proposed methods; these values represent the 10% and 5% level of significance in the χ2 distribution with one degree of freedom, which used the χ2 significant test. 6.1 Question Classification Question classification is defined as a task similar to text categorization; it maps a given question into a question type. We evaluated the performance by using data provided by (Li and Roth, 2002) for English and (Suzuki et al., 2003b) for Japanese question classification and followed the experimental setting used in these papers; namely we use four typical question types, LOCATION, NUMEX, ORGANIZATION, and TIME TOP for JQA, and “coarse” and “fine” classes for EQC. We used the one-vs-rest classifier of SVM as the multi-class classification method for EQC. Figure 4 shows examples of the question classification data used here. question types input object : word sequences ([ ]: information of chunk and ⟨⟩: named entity) ABBREVIATION what,[B-NP] be,[B-VP] the,[B-NP] abbreviation,[I-NP] for,[B-PP] Texas,[B-NP],⟨B-GPE⟩?,[O] DESCRIPTION what,[B-NP] be,[B-VP] Aborigines,[B-NP] ?,[O] HUMAN who,[B-NP] discover,[B-VP] America,[B-NP],⟨B-GPE⟩?,[O] Figure 4: Examples of English question classification data Table 3: Results of the Japanese question classification (F-measure) (a) TIME TOP (b) LOCATION (c) ORGANIZATION (d) NUMEX n FSSK1 FSSK2 SK BOW-K 1 2 3 4 ∞ - .961 .958 .957 .956 - .961 .956 .957 .956 - .946 .910 .866 .223 .902 .909 .886 .855 1 2 3 4 ∞ - .795 .793 .798 .792 - .788 .799 .804 .800 - .791 .775 .732 .169 .744 .768 .756 .747 1 2 3 4 ∞ - .709 .720 .720 .723 - .703 .710 .716 .720 - .705 .668 .594 .035 .641 690 .636 .572 1 2 3 4 ∞ - .912 .915 .908 .908 - .913 .916 .911 .913 - .912 .885 .817 .036 .842 .852 .807 .726 6.2 Sentence Modality Identification For example, sentence modality identification techniques are used in automatic text analysis systems that identify the modality of a sentence, such as “opinion” or “description”. The data set was created from Mainichi news articles and one of three modality tags, “opinion”, “decision” and “description” was applied to each sentence. The data size was 1135 sentences consisting of 123 sentences of “opinion”, 326 of “decision” and 686 of “description”. We evaluated the results by using 5-fold cross validation. 7 Results and Discussion Tables 3 and 4 show the results of Japanese and English question classification, respectively. Table 5 shows the results of sentence modality identification. n in each table indicates the threshold of the sub-sequence size. n = ∞means all possible subsequences are used. First, SK was consistently superior to BOW-K. This indicates that the structural features were quite efficient in performing these tasks. In general we can say that the use of structural features can improve the performance of NLP tasks that require the details of the contents to perform the task. Most of the results showed that SK achieves its maximum performance when n = 2. The performance deteriorates considerably once n exceeds 4. This implies that SK with larger sub-structures degrade classification performance. 
These results show the same tendency as the previous studies discussed in Section 3. Table 6 shows the precision and recall of SK when n = ∞. As shown in Table 6, the classifier offered high precision but low recall. This is evidence of over-fitting in learning. As shown by the above experiments, FSSK proTable 6: Precision and recall of SK: n = ∞ Precision Recall F MI:Opinion .917 .209 .339 JQA:LOCATION .896 .093 .168 vided consistently better performance than the conventional methods. Moreover, the experiments confirmed one important fact. That is, in some cases maximum performance was achieved with n = ∞. This indicates that sub-sequences created using very large structures can be extremely effective. Of course, a larger feature space also includes the smaller feature spaces, Σn ⊂Σn+1. If the performance is improved by using a larger n, this means that significant features do exist. Thus, we can improve the performance of some classification problems by dealing with larger substructures. Even if optimum performance was not achieved with n = ∞, difference between the performance of smaller n are quite small compared to that of SK. This indicates that our method is very robust as regards substructure size; It therefore becomes unnecessary for us to decide sub-structure size carefully. This indicates our approach, using large sub-structures, is better than the conventional approach of eliminating sub-sequences based on size. 8 Conclusion This paper proposed a statistical feature selection method for convolution kernels. Our approach can select significant features automatically based on a statistical significance test. Our proposed method can be embedded in the DP based kernel calculation process for convolution kernels by using substructure mining algorithms. Table 4: Results of English question classification (Accuracy) (a) coarse (b) fine n FSSK1 FSSK2 SK BOW-K 1 2 3 4 ∞ - .908 .914 .916 .912 - .902 .896 .902 .906 - .912 .914 .912 .892 .728 .836 .864 .858 1 2 3 4 ∞ - .852 .854 .852 .850 - .858 .856 .854 .854 - .850 .840 .830 .796 .754 .792 .790 .778 Table 5: Results of sentence modality identification (F-measure) (a) opinion (b) decision (c) description n FSSK1 FSSK2 SK BOW-K 1 2 3 4 ∞ - .734 .743 .746 .751 - .740 .748 .750 .750 - .706 .672 .577 .058 .507 .531 .438 .368 1 2 3 4 ∞ - .828 .858 .854 .857 - .824 .855 .859 .860 - .816 .834 .830 .339 .652 .708 .686 .665 1 2 3 4 ∞ - .896 .906 .910 .910 - .894 .903 .909 .909 - .902 .913 .910 .808 .819 .839 .826 .793 Experiments show that our method is superior to conventional methods. Moreover, the results indicate that complex features exist and can be effective. Our method can employ them without over-fitting problems, which yields benefits in terms of concept and performance. References N. Cancedda, E. Gaussier, C. Goutte, and J.-M. Renders. 2003. Word-Sequence Kernels. Journal of Machine Learning Research, 3:1059–1082. M. Collins and N. Duffy. 2001. Convolution Kernels for Natural Language. In Proc. of Neural Information Processing Systems (NIPS’2001). C. Cortes and V. N. Vapnik. 1995. Support Vector Networks. Machine Learning, 20:273–297. D. Haussler. 1999. Convolution Kernels on Discrete Structures. In Technical Report UCS-CRL99-10. UC Santa Cruz. T. Joachims. 1998. Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In Proc. of European Conference on Machine Learning (ECML ’98), pages 137– 142. T. Kudo and Y. Matsumoto. 2002. Japanese Dependency Analysis Using Cascaded Chunking. In Proc. 
of the 6th Conference on Natural Language Learning (CoNLL 2002), pages 63–69. T. Kudo and Y. Matsumoto. 2003. Fast Methods for Kernel-based Text Analysis. In Proc. of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-2003), pages 24–31. X. Li and D. Roth. 2002. Learning Question Classifiers. In Proc. of the 19th International Conference on Computational Linguistics (COLING 2002), pages 556–562. H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, and C. Watkins. 2002. Text Classification Using String Kernel. Journal of Machine Learning Research, 2:419–444. S. Morishita and J. Sese. 2000. Traversing Itemset Lattices with Statistical Metric Pruning. In Proc. of ACM SIGACT-SIGMOD-SIGART Symp. on Database Systems (PODS’00), pages 226– 236. J. Pei, J. Han, B. Mortazavi-Asl, and H. Pinto. 2001. PrefixSpan: Mining Sequential Patterns Efficiently by Prefix-Projected Pattern Growth. In Proc. of the 17th International Conference on Data Engineering (ICDE 2001), pages 215–224. M. Rogati and Y. Yang. 2002. High-performing Feature Selection for Text Classification. In Proc. of the 2002 ACM CIKM International Conference on Information and Knowledge Management, pages 659–661. J. Suzuki, T. Hirao, Y. Sasaki, and E. Maeda. 2003a. Hierarchical Directed Acyclic Graph Kernel: Methods for Natural Language Data. In Proc. of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-2003), pages 32–39. J. Suzuki, Y. Sasaki, and E. Maeda. 2003b. Kernels for Structured Natural Language Data. In Proc. of the 17th Annual Conference on Neural Information Processing Systems (NIPS2003).
Improving Pronoun Resolution by Incorporating Coreferential Information of Candidates Xiaofeng Yang†‡ Jian Su† Guodong Zhou† Chew Lim Tan‡ †Institute for Infocomm Research 21 Heng Mui Keng Terrace, Singapore, 119613 {xiaofengy,sujian,zhougd} @i2r.a-star.edu.sg ‡ Department of Computer Science National University of Singapore, Singapore, 117543 {yangxiao,tancl}@comp.nus.edu.sg Abstract Coreferential information of a candidate, such as the properties of its antecedents, is important for pronoun resolution because it reflects the salience of the candidate in the local discourse. Such information, however, is usually ignored in previous learning-based systems. In this paper we present a trainable model which incorporates coreferential information of candidates into pronoun resolution. Preliminary experiments show that our model will boost the resolution performance given the right antecedents of the candidates. We further discuss how to apply our model in real resolution where the antecedents of the candidate are found by a separate noun phrase resolution module. The experimental results show that our model still achieves better performance than the baseline. 1 Introduction In recent years, supervised machine learning approaches have been widely explored in reference resolution and achieved considerable success (Ge et al., 1998; Soon et al., 2001; Ng and Cardie, 2002; Strube and Muller, 2003; Yang et al., 2003). Most learning-based pronoun resolution systems determine the reference relationship between an anaphor and its antecedent candidate only from the properties of the pair. The knowledge about the context of anaphor and antecedent is nevertheless ignored. However, research in centering theory (Sidner, 1981; Grosz et al., 1983; Grosz et al., 1995; Tetreault, 2001) has revealed that the local focusing (or centering) also has a great effect on the processing of pronominal expressions. The choices of the antecedents of pronouns usually depend on the center of attention throughout the local discourse segment (Mitkov, 1999). To determine the salience of a candidate in the local context, we may need to check the coreferential information of the candidate, such as the existence and properties of its antecedents. In fact, such information has been used for pronoun resolution in many heuristicbased systems. The S-List model (Strube, 1998), for example, assumes that a co-referring candidate is a hearer-old discourse entity and is preferred to other hearer-new candidates. In the algorithms based on the centering theory (Brennan et al., 1987; Grosz et al., 1995), if a candidate and its antecedent are the backwardlooking centers of two subsequent utterances respectively, the candidate would be the most preferred since the CONTINUE transition is always ranked higher than SHIFT or RETAIN. In this paper, we present a supervised learning-based pronoun resolution system which incorporates coreferential information of candidates in a trainable model. For each candidate, we take into consideration the properties of its antecedents in terms of features (henceforth backward features), and use the supervised learning method to explore their influences on pronoun resolution. In the study, we start our exploration on the capability of the model by applying it in an ideal environment where the antecedents of the candidates are correctly identified and the backward features are optimally set. 
The experiments on MUC-6 (1995) and MUC-7 (1998) corpora show that incorporating coreferential information of candidates boosts the system performance significantly. Further, we apply our model in the real resolution where the antecedents of the candidates are provided by separate noun phrase resolution modules. The experimental results show that our model still outperforms the baseline, even with the low recall of the non-pronoun resolution module. The remaining of this paper is organized as follows. Section 2 discusses the importance of the coreferential information for candidate evaluation. Section 3 introduces the baseline learning framework. Section 4 presents and evaluates the learning model which uses backward features to capture coreferential information, while Section 5 proposes how to apply the model in real resolution. Section 6 describes related research work. Finally, conclusion is given in Section 7. 2 The Impact of Coreferential Information on Pronoun Resolution In pronoun resolution, the center of attention throughout the discourse segment is a very important factor for antecedent selection (Mitkov, 1999). If a candidate is the focus (or center) of the local discourse, it would be selected as the antecedent with a high possibility. See the following example, <s> Gitano1 has pulled offa clever illusion2 with its3 advertising4. <s> <s> The campaign5 gives its6 clothes a youthful and trendy image to lure consumers into the store. <s> Table 1: A text segment from MUC-6 data set In the above text, the pronoun “its6” has several antecedent candidates, i.e., “Gitano1”, “a clever illusion2”, “its3”, “its advertising4” and “The campaign5”. Without looking back, “The campaign5” would be probably selected because of its syntactic role (Subject) and its distance to the anaphor. However, given the knowledge that the company Gitano is the focus of the local context and “its3” refers to “Gitano1”, it would be clear that the pronoun “its6” should be resolved to “its3” and thus “Gitano1”, rather than other competitors. To determine whether a candidate is the “focus” entity, we should check how the status (e.g. grammatical functions) of the entity alternates in the local context. Therefore, it is necessary to track the NPs in the coreferential chain of the candidate. For example, the syntactic roles (i.e., subject) of the antecedents of “its3” would indicate that “its3” refers to the most salient entity in the discourse segment. In our study, we keep the properties of the antecedents as features of the candidates, and use the supervised learning method to explore their influence on pronoun resolution. Actually, to determine the local focus, we only need to check the entities in a short discourse segment. That is, for a candidate, the number of its adjacent antecedents to be checked is limited. Therefore, we could evaluate the salience of a candidate by looking back only its closest antecedent instead of each element in its coreferential chain, with the assumption that the closest antecedent is able to provide sufficient information for the evaluation. 3 The Baseline Learning Framework Our baseline system adopts the common learning-based framework employed in the system by Soon et al. (2001). In the learning framework, each training or testing instance takes the form of i{ana, candi}, where ana is the possible anaphor and candi is its antecedent candidate1. An instance is associated with a feature vector to describe their relationships. 
As listed in Table 2, we only consider those knowledge-poor and domain-independent features which, although superficial, have been proved efficient for pronoun resolution in many previous systems. During training, for each anaphor in a given text, a positive instance is created by paring the anaphor and its closest antecedent. Also a set of negative instances is formed by paring the anaphor and each of the intervening candidates. Based on the training instances, a binary classifier is generated using C5.0 learning algorithm (Quinlan, 1993). During resolution, each possible anaphor ana, is paired in turn with each preceding antecedent candidate, candi, from right to left to form a testing instance. This instance is presented to the classifier, which will then return a positive or negative result indicating whether or not they are co-referent. The process terminates once an instance i{ana, candi} is labelled as positive, and ana will be resolved to candi in that case. 4 The Learning Model Incorporating Coreferential Information The learning procedure in our model is similar to the above baseline method, except that for each candidate, we take into consideration its closest antecedent, if possible. 4.1 Instance Structure During both training and testing, we adopt the same instance selection strategy as in the baseline model. The only difference, however, is the structure of the training or testing instances. Specifically, each instance in our model is composed of three elements like below: 1In our study candidates are filtered by checking the gender, number and animacy agreements in advance. Features describing the candidate (candi) 1. candi DefNp 1 if candi is a definite NP; else 0 2. candi DemoNP 1 if candi is an indefinite NP; else 0 3. candi Pron 1 if candi is a pronoun; else 0 4. candi ProperNP 1 if candi is a proper name; else 0 5. candi NE Type 1 if candi is an “organization” named-entity; 2 if “person”, 3 if other types, 0 if not a NE 6. candi Human the likelihood (0-100) that candi is a human entity (obtained from WordNet) 7. candi FirstNPInSent 1 if candi is the first NP in the sentence where it occurs 8. candi Nearest 1 if candi is the candidate nearest to the anaphor; else 0 9. candi SubjNP 1 if candi is the subject of the sentence it occurs; else 0 Features describing the anaphor (ana): 10. ana Reflexive 1 if ana is a reflexive pronoun; else 0 11. ana Type 1 if ana is a third-person pronoun (he, she,. . . ); 2 if a single neuter pronoun (it,. . . ); 3 if a plural neuter pronoun (they,. . . ); 4 if other types Features describing the relationships between candi and ana: 12. SentDist Distance between candi and ana in sentences 13. ParaDist Distance between candi and ana in paragraphs 14. CollPattern 1 if candi has an identical collocation pattern with ana; else 0 Table 2: Feature set for the baseline pronoun resolution system i{ana, candi, ante-of-candi} where ana and candi, similar to the definition in the baseline model, are the anaphor and one of its candidates, respectively. The new added element in the instance definition, anteof-candi, is the possible closest antecedent of candi in its coreferential chain. The ante-ofcandi is set to NIL in the case when candi has no antecedent. Consider the example in Table 1 again. For the pronoun “it6”, three training instances will be generated, namely, i{its6, The compaign5, NIL}, i{its6, its advertising4, NIL}, and i{its6, its3, Gitano1}. 
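A minimal sketch (ours) of how the instances i{ana, candi, ante-of-candi} could be assembled, assuming markables are given in textual order with gold coreference chains; the Markable fields and helper names are illustrative, the gender/number/animacy filter is omitted, and the actual feature extraction is not shown.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Markable:
    text: str
    is_pronoun: bool
    chain_id: Optional[int]   # gold coreference chain id, None if non-anaphoric

def closest_antecedent_index(markables: List[Markable], idx: int) -> Optional[int]:
    """Index of the closest preceding markable in the same chain, or None."""
    if markables[idx].chain_id is None:
        return None
    for j in range(idx - 1, -1, -1):
        if markables[j].chain_id == markables[idx].chain_id:
            return j
    return None

def training_instances(markables: List[Markable]):
    """One positive instance per pronominal anaphor (paired with its closest
    antecedent) and one negative per intervening candidate; each instance also
    carries the candidate's own closest antecedent (None plays the role of NIL)."""
    instances = []   # tuples (ana, candi, ante_of_candi, label)
    for i, ana in enumerate(markables):
        if not ana.is_pronoun:
            continue
        k = closest_antecedent_index(markables, i)
        if k is None:
            continue
        for j in range(i - 1, k - 1, -1):
            a = closest_antecedent_index(markables, j)
            ante_of_candi = markables[a] if a is not None else None
            instances.append((ana, markables[j], ante_of_candi, int(j == k)))
    return instances
```

For the pronoun "its6" in the Table 1 segment, this procedure yields exactly the three instances listed above.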
4.2 Backward Features In addition to the features adopted in the baseline system, we introduce a set of backward features to describe the element ante-of-candi. The ten features (15-24) are listed in Table 3 with their respective possible values. Like feature 1-9, features 15-22 describe the lexical, grammatical and semantic properties of ante-of-candi. The inclusion of the two features Apposition (23) and candi NoAntecedent (24) is inspired by the work of Strube (1998). The feature Apposition marks whether or not candi and ante-of-candi occur in the same appositive structure. The underlying purpose of this feature is to capture the pattern that proper names are accompanied by an appositive. The entity with such a pattern may often be related to the hearers’ knowledge and has low preference. The feature candi NoAntecedent marks whether or not a candidate has a valid antecedent in the preceding text. As stipulated in Strube’s work, co-referring expressions belong to hearer-old entities and therefore have higher preference than other candidates. When the feature is assigned value 1, all the other backward features (15-23) are set to 0. 4.3 Results and Discussions In our study we used the standard MUC6 and MUC-7 coreference corpora. In each data set, 30 “dry-run” documents were annotated for training as well as 20-30 documents for testing. The raw documents were preprocessed by a pipeline of automatic NLP components (e.g. NP chunker, part-of-speech tagger, named-entity recognizer) to determine the boundary of the NPs, and to provide necessary information for feature calculation. In an attempt to investigate the capability of our model, we evaluated the model in an optimal environment where the closest antecedent of each candidate is correctly identified. MUC6 and MUC-7 can serve this purpose quite well; the annotated coreference information in the data sets enables us to obtain the correct closest Features describing the antecedent of the candidate (ante-of-candi): 15. ante-candi DefNp 1 if ante-of-candi is a definite NP; else 0 16. ante-candi IndefNp 1 if ante-of-candi is an indefinite NP; else 0 17. ante-candi Pron 1 if ante-of-candi is a pronoun; else 0 18. ante-candi Proper 1 if ante-of-candi is a proper name; else 0 19. ante-candi NE Type 1 if ante-of-candi is an “organization” named-entity; 2 if “person”, 3 if other types, 0 if not a NE 20. ante-candi Human the likelihood (0-100) that ante-of-candi is a human entity 21. ante-candi FirstNPInSent 1 if ante-of-candi is the first NP in the sentence where it occurs 22. ante-candi SubjNP 1 if ante-of-candi is the subject of the sentence where it occurs Features describing the relationships between the candidate (candi) and ante-of-candi: 23. Apposition 1 if ante-of-candi and candi are in an appositive structure Features describing the candidate (candi): 24. candi NoAntecedent 1 if candi has no antecedent available; else 0 Table 3: Backward features used to capture the coreferential information of a candidate antecedent for each candidate and accordingly generate the training and testing instances. In the next section we will further discuss how to apply our model into the real resolution. Table 4 shows the performance of different systems for resolving the pronominal anaphors 2 in MUC-6 and MUC-7. Default learning parameters for C5.0 were used throughout the experiments. 
In this table we evaluated the performance based on two kinds of measurements: • “Recall-and-Precision”: Recall = #positive instances classified correctly #positive instances Precision = #positive instances classified correctly #instances classified as positive The above metrics evaluate the capability of the learned classifier in identifying positive instances3. F-measure is the harmonic mean of the two measurements. • “Success”: Success = #anaphors resolved correctly #total anaphors The metric4 directly reflects the pronoun resolution capability. The first and second lines of Table 4 compare the performance of the baseline system (Base2The first and second person pronouns are discarded in our study. 3The testing instances are collected in the same ways as the training instances. 4In the experiments, an anaphor is considered correctly resolved only if the found antecedent is in the same coreferential chain of the anaphor. ante-candi_SubjNP = 1: 1 (49/5) ante-candi_SubjNP = 0: :..candi_SubjNP = 1: :..SentDist = 2: 0 (3) : SentDist = 0: : :..candi_Human > 0: 1 (39/2) : : candi_Human <= 0: : : :..candi_NoAntecedent = 0: 1 (8/3) : : candi_NoAntecedent = 1: 0 (3) : SentDist = 1: : :..ante-candi_Human <= 50 : 0 (4) : ante-candi_Human > 50 : 1 (10/2) : candi_SubjNP = 0: :..candi_Pron = 1: 1 (32/7) candi_Pron = 0: :..candi_NoAntecedent = 1: :..candi_FirstNPInSent = 1: 1 (6/2) : candi_FirstNPInSent = 0: ... candi_NoAntecedent = 0: ... Figure 1: Top portion of the decision tree learned on MUC-6 with the backward features line) and our system (Optimal), where DTpron and DTpron−opt are the classifiers learned in the two systems, respectively. The results indicate that our system outperforms the baseline system significantly. Compared with Baseline, Optimal achieves gains in both recall (6.4% for MUC-6 and 4.1% for MUC-7) and precision (1.3% for MUC-6 and 9.0% for MUC-7). For Success, we also observe an apparent improvement by 4.7% (MUC-6) and 3.5% (MUC-7). Figure 1 shows the portion of the pruned decision tree learned for MUC-6 data set. It visualizes the importance of the backward features for the pronoun resolution on the data set. From Testing Backward feature MUC-6 MUC-7 Experiments classifier assigner* R P F S R P F S Baseline DTpron NIL 77.2 83.4 80.2 70.0 71.9 68.6 70.2 59.0 Optimal DTpron−opt (Annotated) 83.6 84.7 84.1 74.7 76.0 77.6 76.8 62.5 RealResolve-1 DTpron−opt DTpron−opt 75.8 83.8 79.5 73.1 62.3 77.7 69.1 53.8 RealResolve-2 DTpron−opt DTpron 75.8 83.8 79.5 73.1 63.0 77.9 69.7 54.9 RealResolve-3 DT ′ pron DTpron 79.3 86.3 82.7 74.7 74.7 67.3 70.8 60.8 RealResolve-4 DT ′ pron DT ′ pron 79.3 86.3 82.7 74.7 74.7 67.3 70.8 60.8 Table 4: Results of different systems for pronoun resolution on MUC-6 and MUC-7 (*Here we only list backward feature assigner for pronominal candidates. In RealResolve-1 to RealResolve-4, the backward features for non-pronominal candidates are all found by DTnon−pron.) the tree we could find that: 1.) Feature ante-candi SubjNP is of the most importance as the root feature of the tree. The decision tree would first examine the syntactic role of a candidate’s antecedent, followed by that of the candidate. This nicely proves our assumption that the properties of the antecedents of the candidates provide very important information for the candidate evaluation. 2.) Both features ante-candi SubjNP and candi SubjNP rank top in the decision tree. 
That is, for the reference determination, the subject roles of the candidate’s referent within a discourse segment will be checked in the first place. This finding supports well the suggestion in centering theory that the grammatical relations should be used as the key criteria to rank forward-looking centers in the process of focus tracking (Brennan et al., 1987; Grosz et al., 1995). 3.) candi Pron and candi NoAntecedent are to be examined in the cases when the subject-role checking fails, which confirms the hypothesis in the S-List model by Strube (1998) that co-refereing candidates would have higher preference than other candidates in the pronoun resolution. 5 Applying the Model in Real Resolution In Section 4 we explored the effectiveness of the backward feature for pronoun resolution. In those experiments our model was tested in an ideal environment where the closest antecedent of a candidate can be identified correctly when generating the feature vector. However, during real resolution such coreferential information is not available, and thus a separate module has algorithm PRON-RESOLVE input: DTnon−pron: classifier for resolving non-pronouns DTpron: classifier for resolving pronouns begin: M1..n:= the valid markables in the given document Ante[1..n] := 0 for i = 1 to N for j = i - 1 downto 0 if (Mi is a non-pron and DTnon−pron(i{Mi, Mj}) == + ) or (Mi is a pron and DTpron(i{Mi, Mj, Ante[j]}) == +) then Ante[i] := Mj break return Ante Figure 2: The pronoun resolution algorithm by incorporating coreferential information of candidates to be employed to obtain the closest antecedent for a candidate. We describe the algorithm in Figure 2. The algorithm takes as input two classifiers, one for the non-pronoun resolution and the other for pronoun resolution. Given a testing document, the antecedent of each NP is identified using one of these two classifiers, depending on the type of NP. Although a separate nonpronoun resolution module is required for the pronoun resolution task, this is usually not a big problem as these two modules are often integrated in coreference resolution systems. We just use the results of the one module to improve the performance of the other. 5.1 New Training and Testing Procedures For a pronominal candidate, its antecedent can be obtained by simply using DTpron−opt. For Training Procedure: T1. Train a non-pronoun resolution classifier DTnon−pron and a pronoun resolution classifier DTpron, using the baseline learning framework (without backward features). T2. Apply DTnon−pron and DTpron to identify the antecedent of each non-pronominal and pronominal markable, respectively, in a given document. T3. Go through the document again. Generate instances with backward features assigned using the antecedent information obtained in T2. T4. Train a new pronoun resolution classifier DT ′ pron on the instances generated in T3. Testing Procedure: R1. For each given document, do T2∼T3. R2. Resolve pronouns by applying DT ′ pron. Table 5: New training and testing procedures a non-pronominal candidate, we built a nonpronoun resolution module to identify its antecedent. The module is a duplicate of the NP coreference resolution system by Soon et al. (2001)5 , which uses the similar learning framework as described in Section 3. In this way, we could do pronoun resolution just by running PRON-RESOLVE(DTnon−pron, DTpron−opt), where DTnon−pron is the classifier of the non-pronoun resolution module. 
One problem, however, is that DTpron−opt is trained on the instances whose backward features are correctly assigned. During real resolution, the antecedent of a candidate is found by DTnon−pron or DTpron−opt, and the backward feature values are not always correct. Indeed, for most noun phrase resolution systems, the recall is not very high. The antecedent sometimes can not be found, or is not the closest one in the preceding coreferential chain. Consequently, the classifier trained on the “perfect” feature vectors would probably fail to output anticipated results on the noisy data during real resolution. Thus we modify the training and testing procedures of the system. For both training and testing instances, we assign the backward feature values based on the results from separate NP resolution modules. The detailed procedures are described in Table 5. 5Details of the features can be found in Soon et al. (2001) algorithm REFINE-CLASSIFIER begin: DT1 pron := DT ′ pron for i = 1 to ∞ Use DTi pron to update the antecedents of pronominal candidates and the corresponding backward features; Train DTi+1 pron based on the updated training instances; if DTi+1 pron is not better than DTi pron then break; return DTi pron Figure 3: The classifier refining algorithm The idea behind our approach is to train and test the pronoun resolution classifier on instances with feature values set in a consistent way. Here the purpose of DTpron and DTnon−pron is to provide backward feature values for training and testing instances. From this point of view, the two modules could be thought of as a preprocessing component of our pronoun resolution system. 5.2 Classifier Refining If the classifier DT ′ pron outperforms DTpron as expected, we can employ DT ′ pron in place of DTpron to generate backward features for pronominal candidates, and then train a classifier DT ′′ pron based on the updated training instances. Since DT ′ pron produces more correct feature values than DTpron, we could expect that DT ′′ pron will not be worse, if not better, than DT ′ pron. Such a process could be repeated to refine the pronoun resolution classifier. The algorithm is described in Figure 3. In algorithm REFINE-CLASSIFIER, the iteration terminates when the new trained classifier DTi+1 pron provides no further improvement than DTi pron. In this case, we can replace DTi+1 pron by DTi pron during the i+1(th) testing procedure. That means, by simply running PRON-RESOLVE(DTnon−pron,DTi pron), we can use for both backward feature computation and instance classification tasks, rather than applying DTpron and DT ′ pron subsequently. 5.3 Results and Discussions In the experiments we evaluated the performance of our model in real pronoun resolution. The performance of our model depends on the performance of the non-pronoun resolution classifier, DTnon−pron. Hence we first examined the coreference resolution capability of DTnon−pron based on the standard scoring scheme by Vilain et al. (1995). For MUC-6, the module obtains 62.2% recall and 78.8% precision, while for MUC-7, it obtains 50.1% recall and 75.4% precision. The poor recall and comparatively high precision reflect the capability of the state-ofthe-art learning-based NP resolution systems. The third block of Table 4 summarizes the performance of the classifier DTpron−opt in real resolution. 
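(As a concrete rendering of Figure 2, the PRON-RESOLVE routine that the RealResolve systems rely on might look as follows in Python; the boolean classifier interfaces and all names are our assumptions, not the authors' code.)

```python
from typing import Callable, List, Optional, Sequence

def pron_resolve(
    markables: Sequence,                                          # valid markables, textual order
    is_pronoun: Callable[[object], bool],
    dt_non_pron: Callable[[object, object], bool],                # i{Mi, Mj} -> positive?
    dt_pron: Callable[[object, object, Optional[object]], bool],  # i{Mi, Mj, Ante[j]} -> positive?
) -> List[Optional[object]]:
    """Resolve each markable by testing candidates from right to left; the pronoun
    classifier additionally sees the antecedent already found for the candidate."""
    ante: List[Optional[object]] = [None] * len(markables)
    for i, m_i in enumerate(markables):
        for j in range(i - 1, -1, -1):
            m_j = markables[j]
            positive = (dt_pron(m_i, m_j, ante[j]) if is_pronoun(m_i)
                        else dt_non_pron(m_i, m_j))
            if positive:
                ante[i] = m_j
                break
    return ante
```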
In the systems RealResolve-1 and RealResolve-2, the antecedents of pronominal candidates are found by DTpron−opt and DTpron respectively, while in both systems the antecedents of non-pronominal candidates are by DTnon−pron. As shown in the table, compared with the Optimal where the backward features of testing instances are optimally assigned, the recall rates of two systems drop largely by 7.8% for MUC-6 and by about 14% for MUC-7. The scores of recall are even lower than those of Baseline. As a result, in comparison with Optimal, we see the degrade of the F-measure and the success rate, which confirms our hypothesis that the classifier learned on perfect training instances would probably not perform well on the noisy testing instances. The system RealResolve-3 listed in the fifth line of the table uses the classifier trained and tested on instances whose backward features are assigned according to the results from DTnon−pron and DTpron. From the table we can find that: (1) Compared with Baseline, the system produces gains in recall (2.1% for MUC-6 and 2.8% for MUC-7) with no significant loss in precision. Overall, we observe the increase in F-measure for both data sets. If measured by Success, the improvement is more apparent by 4.7% (MUC-6) and 1.8% (MUC-7). (2) Compared with RealResolve-1(2), the performance decrease of RealResolve-3 against Optimal is not so large. Especially for MUC-6, the system obtains a success rate as high as Optimal. The above results show that our model can be successfully applied in the real pronoun resolution task, even given the low recall of the current non-pronoun resolution module. This should be owed to the fact that for a candidate, its adjacent antecedents, even not the closest one, could give clues to reflect its salience in the local discourse. That is, the model prefers a high precision to a high recall, which copes well with the capability of the existing non-pronoun resolution module. In our experiments we also tested the classifier refining algorithm described in Figure 3. We found that for both MUC-6 and MUC-7 data set, the algorithm terminated in the second round. The comparison of DT2 pron and DT1 pron (i.e. DT ′ pron) showed that these two trees were exactly the same. The algorithm converges fast probably because in the data set, most of the antecedent candidates are non-pronouns (89.1% for MUC-6 and 83.7% for MUC-7). Consequently, the ratio of the training instances with backward features changed may be not substantial enough to affect the classifier generation. Although the algorithm provided no further refinement for DT ′ pron, we can use DT ′ pron, as suggested in Section 5.2, to calculate backward features and classify instances by running PRON-RESOLVE(DTnon−pron, DT ′ pron). The results of such a system, RealResolve-4, are listed in the last line of Table 4. For both MUC6 and MUC-7, RealResolve-4 obtains exactly the same performance as RealResolve-3. 6 Related Work To our knowledge, our work is the first effort that systematically explores the influence of coreferential information of candidates on pronoun resolution in learning-based ways. Iida et al. (2003) also take into consideration the contextual clues in their coreference resolution system, by using two features to reflect the ranking order of a candidate in Salience Reference List (SRL). However, similar to common centering models, in their system the ranking of entities in SRL is also heuristic-based. 
The coreferential chain length of a candidate, or its variants such as occurrence frequency and TFIDF, has been used as a salience factor in some learning-based reference resolution systems (Iida et al., 2003; Mitkov, 1998; Paul et al., 1999; Strube and Muller, 2003). However, for an entity, the coreferential length only reflects its global salience in the whole text(s), instead of the local salience in a discourse segment which is nevertheless more informative for pronoun resolution. Moreover, during resolution, the found coreferential length of an entity is often incomplete, and thus the obtained length value is usually inaccurate for the salience evaluation. 7 Conclusion and Future Work In this paper we have proposed a model which incorporates coreferential information of candidates to improve pronoun resolution. When evaluating a candidate, the model considers its adjacent antecedent by describing its properties in terms of backward features. We first examined the effectiveness of the model by applying it in an optimal environment where the closest antecedent of a candidate is obtained correctly. The experiments show that it boosts the success rate of the baseline system for both MUC-6 (4.7%) and MUC-7 (3.5%). Then we proposed how to apply our model in the real resolution where the antecedent of a non-pronoun is found by an additional non-pronoun resolution module. Our model can still produce Success improvement (4.7% for MUC-6 and 1.8% for MUC-7) against the baseline system, despite the low recall of the non-pronoun resolution module. In the current work we restrict our study only to pronoun resolution. In fact, the coreferential information of candidates is expected to be also helpful for non-pronoun resolution. We would like to investigate the influence of the coreferential factors on general NP reference resolution in our future work. References S. Brennan, M. Friedman, and C. Pollard. 1987. A centering approach to pronouns. In Proceedings of the 25th Annual Meeting of the Association for Compuational Linguistics, pages 155–162. N. Ge, J. Hale, and E. Charniak. 1998. A statistical approach to anaphora resolution. In Proceedings of the 6th Workshop on Very Large Corpora. B. Grosz, A. Joshi, and S. Weinstein. 1983. Providing a unified account of definite noun phrases in discourse. In Proceedings of the 21st Annual meeting of the Association for Computational Linguistics, pages 44–50. B. Grosz, A. Joshi, and S. Weinstein. 1995. Centering: a framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–225. R. Iida, K. Inui, H. Takamura, and Y. Matsumoto. 2003. Incorporating contextual cues in trainable models for coreference resolution. In Proceedings of the 10th Conference of EACL, Workshop ”The Computational Treatment of Anaphora”. R. Mitkov. 1998. Robust pronoun resolution with limited knowledge. In Proceedings of the 17th Int. Conference on Computational Linguistics, pages 869–875. R. Mitkov. 1999. Anaphora resolution: The state of the art. Technical report, University of Wolverhampton. MUC-6. 1995. Proceedings of the Sixth Message Understanding Conference. Morgan Kaufmann Publishers, San Francisco, CA. MUC-7. 1998. Proceedings of the Seventh Message Understanding Conference. Morgan Kaufmann Publishers, San Francisco, CA. V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 104–111, Philadelphia. M. Paul, K. 
Yamamoto, and E. Sumita. 1999. Corpus-based anaphora resolution towards antecedent preference. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, Workshop ”Coreference and It’s Applications”, pages 47–52. J. R. Quinlan. 1993. C4.5: Programs for machine learning. Morgan Kaufmann Publishers, San Francisco, CA. C. Sidner. 1981. Focusing for interpretation of pronouns. American Journal of Computational Linguistics, 7(4):217–231. W. Soon, H. Ng, and D. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. M. Strube and C. Muller. 2003. A machine learning approach to pronoun resolution in spoken dialogue. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 168–175, Japan. M. Strube. 1998. Never look back: An alternative to centering. In Proceedings of the 17th Int. Conference on Computational Linguistics and 36th Annual Meeting of ACL, pages 1251–1257. J. R. Tetreault. 2001. A corpus-based evaluation of centering and pronoun resolution. Computational Linguistics, 27(4):507–520. M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the Sixth Message understanding Conference (MUC-6), pages 45–52, San Francisco, CA. Morgan Kaufmann Publishers. X. Yang, G. Zhou, J. Su, and C. Tan. 2003. Coreference resolution using competition learning approach. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Japan.
A Mention-Synchronous Coreference Resolution Algorithm Based on the Bell Tree Xiaoqiang Luo and Abe Ittycheriah Hongyan Jing and Nanda Kambhatla and Salim Roukos 1101 Kitchawan Road Yorktown Heights, NY 10598, U.S.A. {xiaoluo,abei,hjing,nanda,roukos}@us.ibm.com Abstract This paper proposes a new approach for coreference resolution which uses the Bell tree to represent the search space and casts the coreference resolution problem as finding the best path from the root of the Bell tree to the leaf nodes. A Maximum Entropy model is used to rank these paths. The coreference performance on the 2002 and 2003 Automatic Content Extraction (ACE) data will be reported. We also train a coreference system using the MUC6 data and competitive results are obtained. 1 Introduction In this paper, we will adopt the terminologies used in the Automatic Content Extraction (ACE) task (NIST, 2003). Coreference resolution in this context is defined as partitioning mentions into entities. A mention is an instance of reference to an object, and the collection of mentions referring to the same object in a document form an entity. For example, in the following sentence, mentions are underlined: “The American Medical Association voted yesterday to install the heir apparent as its president-elect, rejecting a strong, upstart challenge by a District doctor who argued that the nation’s largest physicians’ group needs stronger ethics and new leadership.” “American Medical Association”, “its” and “group” belong to the same entity as they refer to the same object. Early work of anaphora resolution focuses on finding antecedents of pronouns (Hobbs, 1976; Ge et al., 1998; Mitkov, 1998), while recent advances (Soon et al., 2001; Yang et al., 2003; Ng and Cardie, 2002; Ittycheriah et al., 2003) employ statistical machine learning methods and try to resolve reference among all kinds of noun phrases (NP), be it a name, nominal, or pronominal phrase – which is the scope of this paper as well. One common strategy shared by (Soon et al., 2001; Ng and Cardie, 2002; Ittycheriah et al., 2003) is that a statistical model is trained to measure how likely a pair of mentions corefer; then a greedy procedure is followed to group mentions into entities. While this approach has yielded encouraging results, the way mentions are linked is arguably suboptimal in that an instant decision is made when considering whether two mentions are linked or not. In this paper, we propose to use the Bell tree to represent the process of forming entities from mentions. The Bell tree represents the search space of the coreference resolution problem – each leaf node corresponds to a possible coreference outcome. We choose to model the process from mentions to entities represented in the Bell tree, and the problem of coreference resolution is cast as finding the “best” path from the root node to leaves. A binary maximum entropy model is trained to compute the linking probability between a partial entity and a mention. The rest of the paper is organized as follows. In Section 2, we present how the Bell tree can be used to represent the process of creating entities from mentions and the search space. We use a maximum entropy model to rank paths in the Bell tree, which is discussed in Section 3. After presenting the search strategy in Section 4, we show the experimental results on the ACE 2002 and 2003 data, and the Message Understanding Conference (MUC) (MUC, 1995) data in Section 5. We compare our approach with some recent work in Section 6. 
2 Bell Tree: From Mention to Entity

Let us consider traversing the mentions in a document from beginning (left) to end (right). The process of forming entities from mentions can be represented by a tree structure. The root node is the initial state of the process, which consists of a partial entity containing the first mention of a document. The second mention is added in the next step by either linking to the existing entity or starting a new entity.

Figure 1: Bell tree representation for three mentions: numbers in [] denote a partial entity. In-focus entities are marked on the solid arrows, and active mentions are marked by *. Solid arrows signify that a mention is linked with an in-focus partial entity, while dashed arrows indicate the starting of a new entity.

A second layer of nodes is created to represent the two possible outcomes. Subsequent mentions are added to the tree in the same manner. The process is mention-synchronous in that each layer of tree nodes is created by adding one mention at a time. Since the number of tree leaves is the number of possible coreference outcomes and it equals the Bell Number (Bell, 1934), the tree is called the Bell tree. The Bell Number B(n) is the number of ways of partitioning n distinguishable objects (i.e., mentions) into non-empty disjoint subsets (i.e., entities). The Bell Number has a "closed" formula B(n) = (1/e) Σ_{k=0}^{∞} k^n / k!, and it increases rapidly as n increases: B(20) is already about 5.2 × 10^13. Clearly, an efficient search strategy is necessary, and it will be addressed in Section 4. Figure 1 illustrates how the Bell tree is created for a document with three mentions. The initial node consists of the first partial entity [1] (i.e., node (a) in Figure 1). Next, mention 2 becomes active (marked by "*" in node (a)) and can either link with the partial entity [1], resulting in a new node (b1), or start a new entity, creating another node (b2). The partial entity which the active mention considers linking with is said to be in-focus. In-focus entities are highlighted on the solid arrows in Figure 1. Similarly, mention 3 will be active in the next stage and can take five possible actions, which create the five possible coreference results shown in nodes (c1) through (c5). Under the derivation illustrated in Figure 1, each leaf node in the Bell tree corresponds to a possible coreference outcome, and there is no other way to form entities. The Bell tree therefore clearly represents the search space of the coreference resolution problem. Coreference resolution can thus be cast equivalently as finding the "best" leaf node. Since the search space is large (even for a document with a moderate number of mentions), it is difficult to estimate a distribution over leaves directly. Instead, we choose to model the process from mentions to entities, or in other words, to score paths from the root to leaves in the Bell tree. A nice property of the Bell tree representation is that the number of linking or starting steps is the same for all the hypotheses. This makes it easy to rank them using the "local" linking and starting probabilities, as the number of factors is the same. The Bell tree representation is also incremental in that mentions are added sequentially. This makes it easy to design a decoder and search algorithm.

3 Coreference Model

3.1 Linking and Starting Model

We use a binary conditional model to compute the probability that an active mention links with an in-focus partial entity.
The conditions include all the partially-formed entities before, the focus entity index, and the active mention. Formally, let {m_i : 1 ≤ i ≤ n} be the n mentions in a document. Mention index i represents the order in which a mention appears in the document. Let e_j be an entity, and let g : i ↦ j be the (many-to-one) map from mention index i to entity index j. For an active mention index t (1 ≤ t ≤ n), define

J_t = { j : j = g(i) for some 1 ≤ i ≤ t−1 },

the set of indices of the partially-established entities to the left of m_t (note that J_1 = ∅), and

E_t = { e_j : j ∈ J_t },

the set of the partially-established entities. The link model is then

P(L = 1 | E_t, m_t, A_t = j),   (1)

the probability linking the active mention m_t with the in-focus entity e_j. The random variable A_t takes value from the set J_t and signifies which entity is in focus; L takes a binary value and is 1 if m_t links with e_j. As an example, for the branch from (b2) to (c4) in Figure 1, the active mention is "3", the set of partial entities to the left of "3" is E_3 = {[1], [2]}, and the in-focus entity is the second partial entity "[2]". Probability P(L = 1 | E_3, m_3, A_3 = 2) measures how likely mention "3" links with the entity "[2]." The model P(L = 1 | E_t, m_t, A_t = j) only computes how likely m_t links with e_j; it does not say anything about the possibility that m_t starts a new entity. Fortunately, the starting probability can be computed using link probabilities (1), as shown now. Since starting a new entity means that m_t does not link with any entity in E_t, the probability of starting a new entity, P(L = 0 | E_t, m_t), can be computed as

P(L = 0 | E_t, m_t)   (2)
  = Σ_{j ∈ J_t} P(L = 0, A_t = j | E_t, m_t)
  = 1 − Σ_{j ∈ J_t} P(A_t = j | E_t, m_t) P(L = 1 | E_t, m_t, A_t = j).   (3)

(3) indicates that the probability of starting an entity can be computed using the linking probabilities P(L = 1 | E_t, m_t, A_t = j), provided that the marginal P(A_t = j | E_t, m_t) is known. In this paper, P(A_t = j | E_t, m_t) is approximated as:

P(A_t = j | E_t, m_t) = 1 if j = argmax_{i ∈ J_t} P(L = 1 | E_t, m_t, A_t = i), and 0 otherwise.   (4)

With the approximation (4), the starting probability (3) is

P(L = 0 | E_t, m_t) ≈ 1 − max_{j ∈ J_t} P(L = 1 | E_t, m_t, A_t = j).   (5)

The linking model (1) and the approximated starting model (5) can be used to score paths in the Bell tree. For example, the score for the path (a)-(b2)-(c4) in Figure 1 is the product of the start probability from (a) to (b2) and the linking probability from (b2) to (c4). Since (5) is an approximation, not a true probability, a constant α is introduced to balance the linking probability and the starting probability, and the starting probability becomes:

P′(L = 0 | E_t, m_t) = α · P(L = 0 | E_t, m_t).   (6)

If α < 1, it penalizes creating new entities; therefore, α is called the start penalty. The start penalty α can be used to balance entity misses and false alarms.

3.2 Model Training and Features

The model P(L = 1 | E_t, m_t, A_t = j) depends on all the partial entities E_t, which can be very expensive. After making some modeling assumptions, we can approximate it as:

P(L = 1 | E_t, m_t, A_t = j)   (7)
  ≈ P(L = 1 | e_j, m_t)   (8)
  ≈ max_{m_i ∈ e_j} P(L = 1 | m_i, m_t).   (9)

From (7) to (8), entities other than the one in focus, e_j, are assumed to have no influence on the decision of linking m_t with e_j. (9) further assumes that the entity-mention score can be obtained from the maximum mention-pair score. The model (9) is very similar to the model in (Morton, 2000; Soon et al., 2001; Ng and Cardie, 2002), while (8) has more conditions.
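To make the relationship between these scores concrete, the following Python sketch (not the paper's code) computes the entity-mention back-off of (9) and the penalized starting probability of (5)-(6). The function link_prob, standing for the trained mention-pair model P(L = 1 | m_i, m_t), is a hypothetical placeholder.

def entity_mention_score(entity_mentions, active_mention, link_prob):
    # Eq. (9): back off the entity-mention score to the best mention-pair score.
    return max(link_prob(m_i, active_mention) for m_i in entity_mentions)

def starting_score(partial_entities, active_mention, link_prob, alpha=1.0):
    # Eqs. (5)-(6): approximate the starting probability and apply the start
    # penalty alpha; alpha < 1 makes creating a new entity less likely.
    # partial_entities: list of entities, each a list of mentions seen so far.
    if not partial_entities:       # J_1 is empty, so the first mention must start
        return 1.0
    best_link = max(entity_mention_score(e, active_mention, link_prob)
                    for e in partial_entities)
    return alpha * (1.0 - best_link)

Because every hypothesis in the Bell tree accumulates exactly one such factor per mention, the products of these scores along different paths remain directly comparable.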
We use a maximum entropy model (Berger et al., 1996) for both the mention-pair model (9) and the entity-mention model (8):

P(L | m_i, m_t) = exp( Σ_k λ_k g_k(m_i, m_t, L) ) / Z(m_i, m_t),   (10)
P(L | e_j, m_t) = exp( Σ_k λ_k g_k(e_j, m_t, L) ) / Z(e_j, m_t),   (11)

where g_k(·, ·, L) is a feature and λ_k is its weight; Z(·, ·) is a normalizing factor that ensures that (10) or (11) is a probability. An effective training algorithm exists (Berger et al., 1996) once the set of features {g_k(·, ·, L)} is selected. The basic features used in the models are tabulated in Table 1. Features in the lexical category are applicable to non-pronominal mentions only. Distance features characterize how far apart the two mentions are, either by the number of tokens, by the number of sentences, or by the number of mentions in between. Syntactic features are derived from parse trees output by a maximum entropy parser (Ratnaparkhi, 1997). The "Count" feature calculates how many times a mention string is seen. For pronominal mentions, attributes such as gender, number, possessiveness and reflexiveness are also used. Apart from the basic features in Table 1, composite features can be generated by taking conjunctions of basic features. For example, a distance feature together with the reflexiveness of a pronoun mention can help to capture the fact that the antecedent of a reflexive pronoun is often closer than that of a non-reflexive pronoun. The same set of basic features in Table 1 is used in the entity-mention model, but the feature definitions are slightly different. Lexical features, including the acronym features, and the apposition feature are computed by testing any mention in the entity e_j against the active mention m_t. The editing distance for (e_j, m_t) is defined as the minimum distance over any non-pronoun mention and the active mention. Distance features are computed by taking the minimum between mentions in the entity and the active mention. In the ACE data, mentions are annotated with three levels: NAME, NOMINAL and PRONOUN. For each ACE entity, a canonical mention is defined as the longest NAME mention if available; or, if the entity does not have a NAME mention, the most recent NOMINAL mention; if there is no NAME or NOMINAL mention, the most recent pronoun mention. In the entity-mention model, the "ncd", "spell" and "count" features are computed over the canonical mention of the in-focus entity and the active mention. Conjunction features are used in the entity-mention model too.
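As an illustration of how features of the kind listed in Table 1 (shown below) feed into the model of (10), the sketch below computes a few lexical and distance features for a mention pair and turns a weight table into a linking probability. This is an assumption-laden example rather than the paper's feature extractor: the dictionary layout of a mention, the quantization buckets, and the (feature, label) weight keys are illustrative choices.

import math

def quantize(d, buckets=(0, 1, 2, 3, 5, 10, 25)):
    # Map a raw distance onto a coarse bucket, as the "quantized" features do.
    return max(b for b in buckets if d >= b)

def mention_pair_features(m_i, m_t):
    # A small subset of Table 1-style features; mentions are assumed to be
    # dicts with 'text', 'token_pos' and 'sent_pos' keys (illustrative layout).
    s_i, s_t = m_i["text"].lower(), m_t["text"].lower()
    return {
        "exact_strm": 1 if s_i == s_t else 0,
        "left_subsm": 1 if s_i.startswith(s_t) or s_t.startswith(s_i) else 0,
        "right_subsm": 1 if s_i.endswith(s_t) or s_t.endswith(s_i) else 0,
        "token_dist=%d" % quantize(abs(m_t["token_pos"] - m_i["token_pos"])): 1,
        "sent_dist=%d" % quantize(abs(m_t["sent_pos"] - m_i["sent_pos"])): 1,
    }

def link_probability(features, weights):
    # Binary maximum entropy model in the spirit of eq. (10); weights is a
    # dict keyed by (feature_name, label) with label in {0, 1}.
    s1 = sum(weights.get((f, 1), 0.0) * v for f, v in features.items())
    s0 = sum(weights.get((f, 0), 0.0) * v for f, v in features.items())
    z = math.exp(s1) + math.exp(s0)      # normalizer Z(m_i, m_t)
    return math.exp(s1) / z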
Table 1: Basic features used in the maximum entropy model.

Lexical:
  exact_strm: 1 if two mentions have the same spelling; 0 otherwise
  left_subsm: 1 if one mention is a left substring of the other; 0 otherwise
  right_subsm: 1 if one mention is a right substring of the other; 0 otherwise
  acronym: 1 if one mention is an acronym of the other; 0 otherwise
  edit_dist: quantized editing distance between two mention strings
  spell: pair of actual mention strings
  ncd: number of different capitalized words in two mentions
Distance:
  token_dist: how many tokens two mentions are apart (quantized)
  sent_dist: how many sentences two mentions are apart (quantized)
  gap_dist: how many mentions in between the two mentions in question (quantized)
Syntax:
  POS_pair: POS-pair of two mention heads
  apposition: 1 if two mentions are appositive; 0 otherwise
Count:
  count: pair of (quantized) numbers, each counting how many times a mention string is seen
Pronoun:
  gender: pair of attributes of {female, male, neutral, unknown}
  number: pair of attributes of {singular, plural, unknown}
  possessive: 1 if a pronoun is possessive; 0 otherwise
  reflexive: 1 if a pronoun is reflexive; 0 otherwise

The mention-pair model is appealing for its simplicity: features are easy to compute over a pair of mentions; its drawback is that information outside the mention pair is ignored. Suppose a document has three mentions "Mr. Clinton", "Clinton" and "she", appearing in that order. When considering the mention pair "Clinton" and "she", the model may tend to link them because of their proximity; but this mistake can easily be avoided if "Mr. Clinton" and "Clinton" have already been put into the same entity and the model knows that "Mr. Clinton" refers to a male while "she" is female. Since gender and number information is propagated at the entity level, the entity-mention model is able to check gender consistency when considering the active mention "she".

3.3 Discussion

There is an in-focus entity in the condition of the linking model (1), while the starting model (2) conditions on all the entities to the left. The disparity is intentional, as the starting action is influenced by all established entities on the left. (4) is not the only way P(A_t = j | E_t, m_t) can be approximated. For example, one could use a uniform distribution over J_t. We experimented with several approximation schemes, including a uniform distribution, and (4) worked the best and is adopted here. One may consider training P(A_t = j | E_t, m_t) directly and using it to score paths in the Bell tree. The problem is that 1) the size of J_t, from which A_t takes its value, is variable; 2) the start action depends on all entities in E_t, which makes it difficult to train P(A_t = j | E_t, m_t) directly.

4 Search Issues

As shown in Section 2, the search space of the coreference problem can be represented by the Bell tree. Thus, the search problem reduces to creating the Bell tree while keeping track of path scores and picking the top-N best paths. This is exactly what is described in Algorithm 1. In Algorithm 1, H contains all the hypotheses, or paths from the root to the current layer of nodes. The variable S(E) stores the cumulative score for a coreference result E. At line 1, H is initialized with a single entity consisting of mention m_1, which corresponds to the root node of the Bell tree in Figure 1.
Lines 2 to 15 loop over the remaining mentions (m_2 to m_n), and for each mention m_t the algorithm extends each result E in H (or a path in the Bell tree) by either linking m_t with an existing entity e_j (lines 5 to 10), or starting an entity [m_t] (lines 11 to 14). The loop from line 2 to 12 corresponds to creating a new layer of nodes for the active mention m_t in the Bell tree. The quantity h computed in line 4 and the thresholds in lines 6 and 11 have to do with pruning, which will be discussed shortly. The last line returns the top N results, where E^(k) denotes the result ranked k-th by S(·): S(E^(1)) ≥ S(E^(2)) ≥ ... ≥ S(E^(N)).

Algorithm 1 Search Algorithm
Input: mentions M = {m_i : i = 1, ..., n};
Output: top N entity results
1: Initialize: H := { E : E = {[m_1]} }; S(E) := 1
2: for t = 2 to n
3:   foreach node E ∈ H
4:     compute h
5:     foreach j ∈ J_t
6:       if ( P(L = 1 | E, m_t, A_t = j) ≥ r · h ) {
8:         Extend E to E_{t,j} by linking m_t with e_j
9:         S(E_{t,j}) := S(E) · P(L = 1 | E, m_t, A_t = j)
10:      }
11:    if ( P(L = 0 | E, m_t) ≥ r · h ) {
12:      Extend E to E_{t,0} by starting [m_t]
13:      S(E_{t,0}) := S(E) · P′(L = 0 | E, m_t)
14:    }
15:  H := { E_{t,0} } ∪ { E_{t,j} : j ∈ J_t }
16: return { E^(1), E^(2), ..., E^(N) }

The complexity of the search Algorithm 1 is the total number of nodes in the Bell tree, which is Σ_{t=1}^{n} B(t), where B(t) is the Bell Number. Since the Bell number increases rapidly as a function of the number of mentions, pruning is necessary. We prune the search space in the following places:

Local pruning: any children with a score below a fixed factor r of the maximum score are pruned. This is done at lines 6 and 11 in Algorithm 1. The operation in line 4 is: h := max( { P(L = 0 | E, m_t) } ∪ { P(L = 1 | E, m_t, A_t = j) : j ∈ J_t } ). Block 8-9 is carried out only if P(L = 1 | E, m_t, A_t = j) ≥ r · h, and block 12-13 is carried out only if P(L = 0 | E, m_t) ≥ r · h.

Global pruning: similar to local pruning, except that this is done using the cumulative score S(E). Pruning based on the global scores is carried out at line 15 of Algorithm 1.

Limit hypotheses: we set a limit on the maximum number of live paths. This is useful when a document contains many mentions, in which case an excessive number of paths may survive local and global pruning.

Whenever available, we check the compatibility of entity types between the in-focus entity and the active mention. A hypothesis with incompatible entity types is discarded. In the ACE annotation, every mention has an entity type. Therefore we can eliminate hypotheses in which two mentions of different types are placed in the same entity.

5 Experiments

5.1 Performance Metrics

The official performance metric for the ACE task is the ACE-value. The ACE-value is computed by first calculating the weighted cost of entity insertions, deletions and substitutions; the cost is then normalized against the cost of a nominal coreference system which outputs no entities; the ACE-value is obtained by subtracting the normalized cost from 1. Weights are designed to emphasize NAME entities, while PRONOUN entities (i.e., entities consisting of only pronominal mentions) carry very low weights. A perfect coreference system will get a 100% ACE-value, while a system that outputs no entities will get a 0% ACE-value. Thus, the ACE-value can be interpreted as the percentage of value a system has, relative to the perfect system. Since the ACE-value is an entity-level metric and is weighted heavily toward NAME entities, we also measure our system's performance by an entity-constrained mention F-measure (henceforth "ECM-F"). The metric first aligns the system entities with the reference entities so that the number of common mentions is maximized.
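Returning to the search procedure of Section 4, the following Python sketch builds the Bell tree layer by layer in the spirit of Algorithm 1, keeping the cumulative score of each hypothesis and applying the local pruning factor r together with a cap on live hypotheses. The particular pruning values and the list-of-lists hypothesis representation are assumptions rather than the settings used in the paper; the two scoring functions are the ones sketched earlier.

def bell_tree_search(mentions, link_score, start_score, n_best=1,
                     prune_r=0.01, max_hyps=200):
    # Beam search over the Bell tree (cf. Algorithm 1).
    #   link_score(entity, mention)   -> P(L=1 | e_j, m_t)
    #   start_score(entities, mention) -> P'(L=0 | E_t, m_t)
    # Each hypothesis is a pair (list_of_entities, cumulative_score).
    hyps = [([[mentions[0]]], 1.0)]                      # line 1: H = {[m_1]}, S = 1
    for m_t in mentions[1:]:                             # lines 2-15
        new_hyps = []
        for entities, score in hyps:
            link_p = [link_score(e, m_t) for e in entities]
            start_p = start_score(entities, m_t)
            h = max(link_p + [start_p])                  # line 4: local-pruning reference
            for j, p in enumerate(link_p):               # lines 5-10: link m_t with e_j
                if p >= prune_r * h:
                    extended = [e + [m_t] if k == j else list(e)
                                for k, e in enumerate(entities)]
                    new_hyps.append((extended, score * p))
            if start_p >= prune_r * h:                   # lines 11-14: start [m_t]
                new_hyps.append((entities + [[m_t]], score * start_p))
        new_hyps.sort(key=lambda hyp: hyp[1], reverse=True)
        hyps = new_hyps[:max_hyps]                       # global pruning / hypothesis cap
    return hyps[:n_best]                                 # line 16: top-N results

With pruning effectively disabled (prune_r = 0 and a large max_hyps), the number of hypotheses after processing n mentions is exactly the Bell Number B(n), e.g. five for the three-mention document of Figure 1.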
Each system entity is constrained to align with at most one reference entity, and vice versa. For example, suppose that a reference document contains three entities: *\[ + ]G [ +  G + ( ]G [ + ]5 while a system outputs four entities: * [ + G +  ]^G [ + ( ]G [ + ]G [ + ]5 , then the best alignment (from reference to system) would be [ + ]  [ + G + ] , [ +  G + ( ]  [ + ( ] and other entities are not aligned. The number of common mentions of the best alignment is  (i.e., + and + ( ), which leads to a mention recall   and precision   . The ECM-F measures the percentage of mentions that are in the “right” entities. For tests on the MUC data, we report both F-measure using the official MUC score (Vilain et al., 1995) and ECM-F. The MUC score counts the common links between the reference and the system output. 5.2 Results on the ACE data The system is first developed and tested using the ACE data. The ACE coreference system is trained with & documents (about &b words) of ACE 2002 training data. A separate b documents (  words) is used as the development-test (Devtest) set. In 2002, NIST released two test sets in February (Feb02) and September (Sep02), respectively. Statistics of the three test sets is summarized in Table 2. We will report coreference results on the true mentions of the three test sets. TestSet #-docs #-words #-mentions #-entities Devtest 90 50426 7470 2891 Feb02 97 52677 7665 3104 Sep02 186 69649 10577 4355 Table 2: Statistics of three test sets. For the mention-pair model, training events are generated for all compatible mention-pairs, which results in about b events, about &'  of which are positive examples. The full mention-pair model uses about & & features; Most are conjunction features. For the entity-mention model, events are generated by walking through the Bell tree. Only events on the true path (i.e., positive examples) and branches emitting from a node on the true path to a node not on the true path (i.e., negative examples) are generated. For example, in Figure 1, suppose that the path (a)-(b2)-(c4) is the truth, then positive training examples are starting event from (a) to (b2) and linking event from (b2) to (c4); While the negative examples are linking events from (a) to (b1), (b2) to (c3), and the starting event from (b2) to (c5). This scheme generates about c events, out of which about & are positive training examples. The full entity-mention model has about #"  features, due to less number of conjunction features and training examples. Coreference results on the true mentions of the Devtest, Feb02, and Sep02 test sets are tabulated in Table 3. These numbers are obtained with a fixed search beam b and pruning threshold    " #& (widening the search beam or using a smaller pruning threshold did not change results significantly). The mention-pair model in most cases performs better than the mention-entity model by both ACE-value and ECM-F measure although none of the differences is statistically significant (pair-wise t-test) at p-value #"  . Note that, however, the mention-pair model uses  times more features than the entity-pair model. We also observed that, because the score between the infocus entity and the active mention is computed by (9) in the mention-pair model, the mention-pair sometimes mistakenly places a male pronoun and female pronoun into the same entity, while the same mistake is avoided in the entity-mention model. 
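Returning to the ECM-F metric of Section 5.1, the constrained alignment that maximizes the number of common mentions can be computed exactly as an assignment problem. The sketch below uses scipy's Hungarian-algorithm solver; representing each entity as a set of hashable mention identifiers is an assumed data layout, not the paper's implementation.

import numpy as np
from scipy.optimize import linear_sum_assignment

def ecm_f(reference, system):
    # Entity-constrained mention F-measure.
    # reference, system: lists of entities, each entity a set of mention ids.
    # Each system entity may align with at most one reference entity and vice
    # versa; the alignment maximizing the number of common mentions is found.
    if not reference or not system:
        return 0.0
    overlap = np.array([[len(r & s) for s in system] for r in reference])
    rows, cols = linear_sum_assignment(-overlap)         # maximize total overlap
    common = overlap[rows, cols].sum()
    n_ref = sum(len(r) for r in reference)
    n_sys = sum(len(s) for s in system)
    recall = common / n_ref if n_ref else 0.0
    precision = common / n_sys if n_sys else 0.0
    if recall + precision == 0:
        return 0.0
    return 2 * recall * precision / (recall + precision)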
Using the canonical mentions when computing some features (e.g., “spell”) in the entity-mention model is probably not optimal and it is an area that needs further research. When the same mention-pair model is used to score the ACE 2003 evaluation data, an ACE-value  c "  is obtained on the system1 mentions. After retrained with Chinese and Arabic data (much less training data than English), the system got  "   and "  ACE-value on the system mentions of ACE 2003 evaluation data for Chinese and Arabic, respectively. The results for all three languages are among the top-tier submission systems. Details of the mention detection and coreference system can be found in (Florian et al., 2004). Since the mention-pair model is better, subsequent analyses are done with the mention pair model only. 5.2.1 Feature Impact To see how each category of features affects the performance, we start with the aforementioned mentionpair model, incrementally remove each feature category, retrain the system and test it on the Devtest set. The result is summarized in Table 4. The last column lists the number of features. The second row is the full mention-pair model, the third through seventh row correspond to models by removing the syntactic features (i.e., POS tags and apposition features), count features, distance features, mention type and level information, and pair of mention-spelling features. If a basic feature is removed, conjunction features using that basic feature are also removed. It is striking that the smallest system consisting of only c features (string and substring match, acronym, edit distance and number of different capitalized words) can get as much as  #"  ACE-value. Table 4 shows clearly that these lexical features and the distance features are the most important. Sometimes the ACE-value increases after removing a set of features, but the ECM-F measure tracks nicely the trend that the more features there are, the better the performance is. This is because the ACE-value 1System mentions are output from a mention detection system. −2.5 −2 −1.5 −1 −0.5 0 0.65 0.7 0.75 0.8 0.85 0.9 log α ACE−value or ECM−F ECM−F ACE−value Figure 2: Performance vs. log start penalty is a weighted metric. A small fluctuation of NAME entities will impact the ACE-value more than many NOMINAL or PRONOUN entities. Model ACE-val(%) ECM-F(%) #-features Full 89.8 73.20 (  2.9) 171K -syntax 89.0 72.6 (  2.5) 71K -count 89.4 72.0 (  3.3) 70K -dist 86.7 *66.2 (  3.9) 24K -type/level 86.8 65.7 (  2.2) 5.4K -spell 86.0 64.4 (  1.9) 39 Table 4: Impact of feature categories. Numbers after  are the standard deviations. * indicates that the result is significantly (pair-wise t-test) different from the line above at  #"  . 5.2.2 Effect of Start Penalty As discussed in Section 3.1, the start penalty  can be used to balance the entity miss and false alarm. To see this effect, we decode the Devtest set by varying the start penalty and the result is depicted in Figure 2. The ACE-value and ECM-F track each other fairly well. Both achieve the optimal when     J #"  . 5.3 Experiments on the MUC data To see how the proposed algorithm works on the MUC data, we test our algorithm on the MUC6 data. To minimize the change to the coreference system, we first map the MUC data into the ACE style. The original MUC coreference data does not have entity types (i.e., “ORGANIZATION”, “LOCATION” etc), required in the ACE style. Part of entity types can be recovered from the corresponding named-entity annotations. 
The recovered named-entity label is propagated to all mentions belonging to the same entity. There are 504 out of 2072 mentions of the MUC6 formal test set and 695 out of 2141 mentions of the MUC6 dry-run test set that cannot be assigned labels by this procedure. A Devtest Feb02 Sep02 Model ACE-val(%) ECM-F(%) ACE-val(%) ECM-F(%) ACE-val(%) ECM-F(%) MP 89.8 73.2 (  2.9) 90.0 73.1 (  4.0) 88.0 73.1 (  6.8) EM 89.9 71.7 (  2.4) 88.2 70.8 (  3.9) 87.6 72.4 (  6.2) Table 3: Coreference results on true mentions: MP – mention-pair model; EM – entity-mention model; ACE-val: ACE-value; ECM-F: Entity-constrained Mention F-measure. MP uses & & features while EM uses only  "  features. None of the ECM-F differences between MP and EM is statistically significant at   #"  . generic type “UNKNOWN” is assigned to these mentions. Mentions that can be found in the named-entity annotation are assumed to have the ACE mention level “NAME”; All other mentions other than English pronouns are assigned the level “NOMINAL.” After the MUC data is mapped into the ACE-style, the same set of feature templates is used to train a coreference system. Two coreference systems are trained on the MUC6 data: one trained with 30 dry-run test documents (henceforth “MUC6-small”); the other trained with 191 “dryrun-train” documents that have both coreference and named-entity annotations (henceforth “MUC6-big”) in the latest LDC release. To use the official MUC scorer, we convert the output of the ACE-style coreference system back into the MUC format. Since MUC does not require entity label and level, the conversion from ACE to MUC is “lossless.” Table 5 tabulates the test results on the true mentions of the MUC6 formal test set. The numbers in the table represent the optimal operating point determined by ECM-F. The MUC scorer cannot be used since it inherently favors systems that output fewer number of entities (e.g., putting all mentions of the MUC6 formal test set into one entity will yield a &'b recall and  "   precision of links, which gives an #"  F-measure). The MUC6-small system compares favorably with the similar experiment in Harabagiu et al. (2001) in which an  &b"  F-measure is reported. When measured by the ECM-F measure, the MUC6-small system has the same level of performance as the ACE system, while the MUC6-big system performs better than the ACE system. The results show that the algorithm works well on the MUC6 data despite some information is lost in the conversion from the MUC format to the ACE format. System MUC F-measure ECM-F MUC6-small 83.9% 72.1% MUC6-big 85.7% 76.8% Table 5: Results on the MUC6 formal test set. 6 Related Work There exists a large body of literature on the topic of coreference resolution. We will compare this study with some relevant work using machine learning or statistical methods only. Soon et al. (2001) uses a decision tree model for coreference resolution on the MUC6 and MUC7 data. Leaves of the decision tree are labeled with “link” or “not-link” in training. At test time, the system checks a mention against all its preceding mentions, and the first one labeled with “link” is picked as the antecedent. Their work is later enhanced by (Ng and Cardie, 2002) in several aspects: first, the decision tree returns scores instead of a hard-decision of “link” or “not-link” so that Ng and Cardie (2002) is able to pick the “best” candidate on the left, as opposed the first in (Soon et al., 2001); Second, Ng and Cardie (2002) expands the feature sets of (Soon et al., 2001). 
The model in (Yang et al., 2003) expands the conditioning scope by including a competing candidate. Neither (Soon et al., 2001) nor (Ng and Cardie, 2002) searches for the global optimal entity in that they make locally independent decisions during search. In contrast, our decoder always searches for the best result ranked by the cumulative score (subject to pruning), and subsequent decisions depend on earlier ones. Recently, McCallum and Wellner (2003) proposed to use graphical models for computing probabilities of entities. The model is appealing in that it can potentially overcome the limitation of mention-pair model in which dependency among mentions other than the two in question is ignored. However, models in (McCallum and Wellner, 2003) compute directly the probability of an entity configuration conditioned on mentions, and it is not clear how the models can be factored to do the incremental search, as it is impractical to enumerate all possible entities even for documents with a moderate number of mentions. The Bell tree representation proposed in this paper, however, provides us with a naturally incremental framework for coreference resolution. Maximum entropy method has been used in coreference resolution before. For example, Kehler (1997) uses a mention-pair maximum entropy model, and two methods are proposed to compute entity scores based on the mention-pair model: 1) a distribution over entity space is deduced; 2) the most recent mention of an entity, together with the candidate mention, is used to compute the entity-mention score. In contrast, in our mention pair model, an entity-mention pair is scored by taking the maximum score among possible mention pairs. Our entity-mention model eliminates the need to synthesize an entity-mention score from mention-pair scores. Morton (2000) also uses a maximum entropy mention-pair model, and a special “dummy” mention is used to model the event of starting a new entity. Features involving the dummy mention are essentially computed with the single (normal) mention, and therefore the starting model is weak. In our model, the starting model is obtained by “complementing” the linking scores. The advantage is that we do not need to train a starting model. To compensate the model inaccuracy, we introduce a “starting penalty” to balance the linking and starting scores. To our knowledge, the paper is the first time the Bell tree is used to represent the search space of the coreference resolution problem. 7 Conclusion We propose to use the Bell tree to represent the process of forming entities from mentions. The Bell tree represents the search space of the coreference resolution problem. We studied two maximum entropy models, namely the mention-pair model and the entitymention model, both of which can be used to score entity hypotheses. A beam search algorithm is used to search the best entity result. State-of-the-art performance has been achieved on the ACE coreference data across three languages. Acknowledgments This work was partially supported by the Defense Advanced Research Projects Agency and monitored by SPAWAR under contract No. N66001-99-2-8916. The views and findings contained in this material are those of the authors and do not necessarily reflect the position of policy of the Government and no official endorsement should be inferred. We also would like to thank the anonymous reviewers for suggestions of improving the paper. References E.T. Bell. 1934. Exponential numbers. Amer. Math. Monthly, pages 411–419. Adam L. Berger, Stephen A. 
Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71, March. R Florian, H Hassan, A Ittycheriah, H Jing, N Kambhatla, X Luo, N Nicolov, and S Roukos. 2004. A statistical model for multilingual entity detection and tracking. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 1–8, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. Niyu Ge, John Hale, and Eugene Charniak. 1998. A statistical approach to anaphora resolution. In Proc. of the sixth Workshop on Very Large Corpora. Sanda M. Harabagiu, Razvan C. Bunescu, and Steven J. Maiorano. 2001. Text and knowledge mining for coreference resolution. In Proc. of NAACL. J. Hobbs. 1976. Pronoun resolution. Technical report, Dept. of Computer Science, CUNY, Technical Report TR76-1. A. Ittycheriah, L. Lita, N. Kambhatla, N. Nicolov, S. Roukos, and M. Stys. 2003. Identifying and tracking entity mentions in a maximum entropy framework. In HLT-NAACL 2003: Short Papers, May 27 - June 1. Andrew Kehler. 1997. Probabilistic coreference in information extraction. In Proc. of EMNLP. Andrew McCallum and Ben Wellner. 2003. Toward conditional models of identity uncertainty with application to proper noun coreference. In IJCAI Workshop on Information Integration on the Web. R. Mitkov. 1998. Robust pronoun resolution with limited knowledge. In Procs. of the 17th Internaltional Conference on Computational Linguistics, pages 869–875. Thomas S. Morton. 2000. Coreference for NLP applications. In In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics. MUC-6. 1995. Proceedings of the Sixth Message Understanding Conference(MUC-6), San Francisco, CA. Morgan Kaufmann. Vincent Ng and Claire Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proc. of ACL, pages 104–111. NIST. 2003. The ACE evaluation plan. www.nist.gov/speech/tests/ace/index.htm. Adwait Ratnaparkhi. 1997. A Linear Observed Time Statistical Parser Based on Maximum Entropy Models. In Second Conference on Empirical Methods in Natural Language Processing, pages 1 – 10. Wee Meng Soon, Hwee Tou Ng, and Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. M. Vilain, J. Burger, J. Aberdeen, D. Connolly, , and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In In Proc. of MUC6, pages 45–52. Xiaofeng Yang, Guodong Zhou, Jian Su, and Chew Lim Tan. 2003. Coreference resolution using competition learning approach. In Proc. of the &Q ACL.
LEARNING TO RESOLVE BRIDGING REFERENCES Massimo Poesio,♣Rahul Mehta,♣Axel Maroudas,♣and Janet Hitzeman♠ ♣Dept. of Comp. Science, University of Essex, UK poesio at essex dot ac dot uk ♠MITRE Corporation, USA hitz at mitre dot org Abstract We use machine learning techniques to find the best combination of local focus and lexical distance features for identifying the anchor of mereological bridging references. We find that using first mention, utterance distance, and lexical distance computed using either Google or WordNet results in an accuracy significantly higher than obtained in previous experiments. 1 Introduction BRIDGING REFERENCES (BR) (Clark, 1977)– anaphoric expressions that cannot be resolved purely on the basis of string matching and thus require the reader to ’bridge’ the gap using commonsense inferences–are arguably the most interesting and, at the same time, the most challenging problem in anaphora resolution. Work such as (Poesio et al., 1998; Poesio et al., 2002; Poesio, 2003) provided an experimental confirmation of the hypothesis first put forward by Sidner (1979) that BRIDGING DESCRIPTIONS (BD)1 are more similar to pronouns than to other types of definite descriptions, in that they are sensitive to the local rather than the global focus (Grosz and Sidner, 1986). This previuous work also suggested that simply choosing the entity whose description is lexically closest to that of the bridging description among those in the current focus space gives poor results; in fact, better results are obtained by always choosing as ANCHOR of the bridging reference2 the first-mentioned entity of the previous sentence (Poesio, 2003). But neither source of information in isolation resulted in an accuracy over 40%. In short, this earlier work suggested that a combination of salience and lexical / 1We will use the term bridging descriptions to indicate bridging references realized by definite descriptions, equated here with noun phrases with determiner the, like the top. 2Following (Poesio and Vieira, 1998), we use the term ‘anchor’ as as a generalization of the term ANTECEDENT, to indicate the discourse entity which an anaphoric expression either realizes, or is related to by an associative relation; reserving ‘antecedent’ for the cases of identity. commonsense information is needed to choose the most likely anchor; the problem remained of how to combine this information. In the work described in this paper, we used machine learning techniques to find the best combination of local focus features and lexical distance features, focusing on MEREOLOGICAL bridging references:3 references referring to parts of an object already introduced (the cabinet), such as the panels or the top (underlined) in the following example from the GNOME corpus (Poesio et al., 2004). (1) The combination of rare and expensive materials used on [this cabinet]i indicates that it was a particularly expensive commission. The four Japanese lacquer panels date from the mid- to late 1600s and were created with a technique known as kijimaki-e. For this type of lacquer, artisans sanded plain wood to heighten its strong grain and used it as the background of each panel. They then added the scenic elements of landscape, plants, and animals in raised lacquer. Although this technique was common in Japan, such large panels were rarely incorporated into French eighteenth-century furniture. Heavy Ionic pilasters, whose copper-filled flutes give an added rich color and contrast to the giltbronze mounts, flank the panels. 
Yellow jasper, a semiprecious stone, rather than the usual marble, forms the top. 2 Two sources of information for bridging reference resolution 2.1 Lexical information The use of different sources of lexical knowledge for resolving bridging references has been investigated in a series of papers by Poesio et al. all using as dataset the Bridging Descriptions (BDs) contained in the corpus used by Vieira and Poesio 3We make use of the classification of bridging references proposed by Vieira and Poesio (2000). ‘Mereological’ bridging references are one of the the ‘WordNet’ bridging classes, which cover cases where the information required to bridge the gap may be found in a resource such as WordNet (Fellbaum, 1998): synonymy, hyponymy, and meronymy. (2000). In these studies, the lexical distance between a BD and its antecedent was used to choose the anchor for the BD among the antecedents in the previous five sentences. In (Poesio et al., 1997; Vieira and Poesio, 2000) WordNet 1.6 was used as a lexical resource, with poor or mediocre results. These results were due in part to missing entries and / or relations; in part to the fact that because of the monotonic organization of information in WordNet, complex searches are required even to find apparently close associations (like that between wheel and car). Similar results using WordNet 1.6 were reported at around the same time by other groups - e.g., (Humphreys et al., 1997; Harabagiu and Moldovan, 1998) and have been confirmed by more recent studies studying both hyponymy (Markert et al., 2003) and more specifically mereological BDs. Poesio (2003) found that none of the 58 mereological references in the GNOME corpus (discussed below) had a direct mereological link to their anchor: for example, table is not listed as a possible holonym of drawer, nor is house listed as a possible holonym for furniture. Garcia-Almanza (2003) found that only 16 of these 58 mereological references could be resolved by means of more complex searches in WordNet, including following the hypernymy hierarchy for both the anchor and the bridging reference, and a ’spreading activation’ search. Poesio et al. (1998) explored the usefulness of vector-space representations of lexical meaning for BDs that depended on lexical knowledge about hyponymy and synonymy. The HAL model discussed in Lund et al. (1995) was used to find the anchor of the BDs in the dataset already used by Poesio et al. (1997). However, using vectorial representations did not improve the results for the ‘WordNet’ BDs: for the synonymy cases the results were comparable to those obtained with WordNet (4/12, 33%), but for the hyponymy BDs (2/14, as opposed to 8/14 with WordNet) and especially for mereological references (2/12) they were clearly worse. On the other hand, the post-hoc analysis of results suggested that the poor results were in part due to the lack of mechanisms for choosing the most salient (or most recent) BDs. The poor results for mereological BDs with both WordNet and vectorial representations indicated that a different approach was needed to acquire information about part-of relations. Grefenstette’s work on semantic similarity (Grefenstette, 1993) and Hearst’s work on acquiring taxonomic information (Hearst, 1998) suggested that certain syntactic constructions could be usefully viewed as reflecting underlying semantic relations. 
In (Ishikawa, 1998; Poesio et al., 2002) it was proposed that syntactic patterns (henceforth: CONSTRUCTIONS) such as the wheel of the car could indicate that wheel and car stood in a part-of relation.4 Vectorbased lexical representations whose elements encoded the strength of associations identified by means of constructions like the one discussed were constructed from the British National Corpus, using Abney’s CASS chunker. These representations were then used to choose the anchor of BDs, using again the same dataset and the same methods as in the previous two attempts, and using mutual information to determine the strength of association. The results on mereological BDs–recall .67, precision=.73–were drastically better than those obtained with WordNet or with simple vectorial representations. The results with the three types of lexical resources and the different types of BDs in the Vieira / Poesio dataset are summarized in Table 1. Finally, a number of researchers recently argued for using the Web as a way of addressing data sparseness (Keller and Lapata, 2003). The Web has proven a useful resource for work in anaphora resolution as well. Uryupina (2003) used the Web to estimate ‘Definiteness probabilities’ used as a feature to identify discourse-new definites. Markert et al. (2003) used the Web and the construction method to extract information about hyponymy used to resolve other-anaphora (achieving an f value of around 67%) as well as the BDs in the Vieira-Poesio dataset (their results for these cases were not better than those obtained by (Vieira and Poesio, 2000)). Markert et al. also found a sharp difference between using the Web as a a corpus and using the BNC, the results in the latter case being significantly worse than when using WordNet. Poesio (2003) used the Web to choose between the hypotheses concerning the anchors of mereological BDs in the GNOME corpus generated on the basis of Centering information (see below). 2.2 Salience One of the motivations behind Grosz and Sidner’s (1986) distinction between two aspects of the attentional state - the LOCAL FOCUS and the GLOBAL FOCUS–is the difference between the interpretive preferences of pronouns and definite descriptions. According to Grosz and Sidner, the interpretation for pronouns is preferentially found in the local focus, whereas that of definite descriptions is preferentially found in the global focus. 4A similar approach was pursued in parallel by Berland and Charniak (1999). Synonymy Hyponymy Meronymy Total WN Total BDs BDs in Vieira / Poesio corpus 12 14 12 38 204 Using WordNet 4 (33.3%) 8(57.1%) 3(33.3%) 15 (39%) 34 (16.7%) Using HAL Lexicon 4 (33.3%) 2(14.3%) 2(16.7%) 8 (22.2%) 46(22.7%) Using Construction Lexicon 1 (8.3%) 0 8(66.7%) 9 (23.7%) 34(16.7%) Table 1: BD resolution results using only lexical distance with WordNet, HAL-style vectorial lexicon, and construction-based lexicon. However, already Sidner (1979) hypothesized that BDs are different from other definite descriptions, in that the local focus is preferred for their interpretation. As already mentioned, the error analysis of Poesio et al. (1998) supported this finding: the study found that the strategy found to be optimal for anaphoric definite descriptions by Vieira and Poesio (2000), considering as equally likely all antecedents in the previous five-sentence window (as opposed to preferring closer antecedents), gave poor results for bridging references; entities introduced in the last two sentences and ‘main entities’ were clearly preferred. 
The following example illustrates how the local focus affects the interpretation of a mereological BD, the sides, in the third sentence. (2) [Cartonnier (Filing Cabinet)]i with Clock [This piece of mid-eighteenth-century furniture]i was meant to be used like a modern filing cabinet; papers were placed in [leatherfronted cardboard boxes]j (now missing) that were fitted into the open shelves. [A large table]k decorated in the same manner would have been placed in front for working with those papers. Access to [the cartonnier]i’s lower half can only be gained by the doors at the sides, because the table would have blocked the front. The three main candidate anchors in this example– the cabinet, the boxes, and the table–all have sides. However, the actual anchor, the cabinet, is clearly the Backward-Looking Center (CB) (Grosz et al., 1995) of the first sentence after the title;5 and if we assume that entities can be indirectly realized– see (Poesio et al., 2004)–the cabinet is the CB of all three sentences, including the one containing the BR, and therefore a preferred candidate. In (Poesio, 2003), the impact on associative BD resolution of both relatively simple salience features (such as distance and order or mention) and of more complex ones (such as whether the anchor was a CB or not) was studied using the GNOME corpus (discussed below) and the CB-tracking techniques developed to compare alternative ways of instantiating 5The CB is Centering theory’s (Grosz et al., 1995) implementation of the notion of ‘topic’ or ‘main entity’. the parameters of Centering by Poesio et al. (2004). Poesio (2003) analyzed, first of all, the distance between the BD and the closest mention of the anchor, finding that of the 169 associative BDs, 77.5% had an anchor occurring either in the same sentence (59) or the previous one (72); and that only 4.2% of anchors were realized more than 5 sentences back. These percentages are very similar to those found with pronouns (Hobbs, 1978). Next, Poesio analyzed the order of mention of the anchors of the 72 associative BD whose anchor was in the previous sentence, finding that 49/72, 68%, were realized in first position. This finding is consistent with the preference for first-mentioned entities (as opposed to the most recent ones) repeatedly observed in the psychological literature on anaphora (Gernsbacher and Hargreaves, 1988; Gordon et al., 1993). Finally, Poesio examined the hypothesis that finding the anchor of a BD involves knowing which entities are the CB and the CP in the sense of Centering (Grosz et al., 1995). He found that CB(U-1) is the anchor of 37/72 of the BDs whose anchor is in the previous utterance (51.3%), and only 33.6% overall. (CP(U-1) was the anchor for 38.2% associative BDs.) Clearly, simply choosing the CB (or the CP) of the previous sentence as the anchor doesn’t work very well. However, Poesio also found that 89% of the anchors of associative BDs had been CBs or CPs. This suggested that while knowing the local focus isn’t sufficient to determine the anchor of a BD, restricting the search for anchors to CBs and CPs only might increase the precision of the BD resolution process. This hypothesis was supported by a preliminary test with 20 associative BDs. 
The anchor for a BD with head noun NBD was chosen among the subset of all potential antecedents (PA) in the previous five sentences that had been CBs or CPs by calling Google (by hand) with the query “the NBD of the NPA”, where NPA is the head noun of the potential antecedent, and choosing the PA with the highest hit count. 14 mereological BDs (70%) were resolved correctly this way. 3 Methods The results just discussed suggest that lexical information and salience information combine to determine the anchor of associative BRs. The goal of the experiments discussed in this paper was to test more thoroughly this hypothesis using machine learning techniques to combine the two types of information, using a larger dataset than used in this previous work, and using completely automatic techniques. We concentrated on mereological BDs, but our methods could be used to study other types of bridging references, using, e.g., the constructions used by Markert et al. (2003).6 3.1 The corpus We used for these experiments the GNOME corpus, already used in (Poesio, 2003). An important property of this corpus for the purpose of studying BR resolution is that fewer types of BDs are annotated than in the original Vieira / Poesio dataset, but the annotation is reliable (Poesio et al., 2004).7 The corpus also contains more mereological BDs and BRs than the original dataset used by Poesio and Vieira. The GNOME corpus contains about 500 sentences and 3000 NPs. A variety of semantic and discourse information has been annotated (the manual is available from the GNOME project’s home page at http://www.hcrc.ed.ac.uk/ ˜ gnome). Four types of anaphoric relations were annotated: identity (IDENT), set membership (ELEMENT), subset (SUBSET), and ‘generalized possession’ (POSS), which also includes part-of relations. A total of 2073 anaphoric relations were annotated; these include 1164 identity relations (including those realized with synonyms and hyponyms) and 153 POSS relations. Bridging references are realized by noun phrases of different types, including indefinites (as in I bought a book and a page fell out (Prince, 1981)). Of the 153 mereological references, 58 mereological references are realized by definite descriptions. 6In (Poesio, 2003), bridging descriptions based on set relations (element, subset) were also considered, but we found that this class of BDs required completely different methods. 7A serious problem when working with bridging references is the fact that subjects, when asked for judgments about bridging references in general, have a great deal of difficulty in agreeing on which expressions in the corpus are bridging references, and what their anchors are (Poesio and Vieira, 1998). This finding raises a number of interesting theoretical questions concerning the extent of agreement on semantic judgments, but also the practical question of whether it is possible to evaluate the performance of a system on this task. Subsequent work found, however, that restricting the type of bridging inferences required does make it possible for annotators to agree among themselves (Poesio et al., 2004). In the GNOME corpus only a few types of associative relations are marked, but these can be marked reliably, and do include part-of relations like that between the top and the cabinet that we are concerned with. 3.2 Features Our classifiers use two types of input features. Lexical features Only one lexical feature was used: lexical distance, but extracted from two different lexical sources. 
Google distance was computed as in (Poesio, 2003) (see also Markert et al. (2003)): given head nouns NBD of the BD and NPA of a potential antecedent, Google is called (via the Google API) with a query of the form “the NBD of the NPA” (e.g., the sides of the table) and the number of hits NHits is computed. Then

Google distance = 1 if NHits = 0, and 1/NHits otherwise.

The query “the NBD of NPA” (e.g., the amount of cream) is used when NPA is used as a mass noun (information about mass vs count is annotated in the GNOME corpus). If the potential antecedent is a pronoun, the head of the closest realization of the same discourse entity is used.

We also reconsidered WordNet (1.7.1) as an alternative way of establishing lexical distance, but made a crucial change from the studies reported above. Both earlier studies such as (Poesio et al., 1997) and more recent ones (Poesio, 2003; Garcia-Almanza, 2003) had shown that mereological information in WordNet is extremely sparse. However, these studies also showed that information about hypernyms is much more extensive. This suggested trading precision for recall with an alternative way of using WordNet to compute lexical distance: instead of requiring the path between the head predicate of the associative BD and the head predicate of the potential antecedent to contain at least one mereological link (various strategies for performing a search of this type were considered in (Garcia-Almanza, 2003)), consider only hypernymy and hyponymy links. To compute our second measure of lexical distance between NBD and NPA defined as above, WordNet distance, the following algorithm was used. Let distance(s, s′) be the number of hypernym links between concepts s and s′. Then

1. Get from WordNet all the senses of both NBD and NPA;
2. Get the hypernym tree of each of these senses;
3. For each pair of senses si of NBD and sj of NPA, find the Most Specific Common Subsumer s_comm(i,j) (this is the closest concept which is a hypernym of both senses).
4. The ShortestWNDistance between NBD and NPA is then computed as the shortest distance between any of the senses of NBD and any of the senses of NPA:
   ShtstWNDist(NBD, NPA) = min over i,j of [ distance(si, s_comm(i,j)) + distance(s_comm(i,j), sj) ]
5. Finally, a normalized WordNet distance in the range 0..1 is then obtained by dividing ShtstWNDist by a MaxWNDist factor (30 in our experiments). WordNet distance = 1 if no path between the concepts was found; that is,

   WN distance = 1 if no path was found, and ShtstWNDist/MaxWNDist otherwise.

Salience features
In choosing the salience features we took into account the results in (Poesio, 2003), but we only used features that were easy to compute, hoping that they would approximate the more complex features used in (Poesio, 2003). The first of these features was utterance distance, the distance between the utterance in which the BR occurs and the utterance containing the potential antecedent. (Sentences are used as utterances, as suggested by the results of (Poesio et al., 2004).) As discussed above, studies such as (Poesio, 2003) suggested that bridging references were sensitive to distance, in the same way as pronouns (Hobbs, 1978; Clark and Sengul, 1979). This finding was confirmed in our study; all anchors of the 58 mereological BDs occurred within the previous five sentences, and 47/58 (81%) in the previous two. (It is interesting to note that no anchor occurred in the same sentence as the BD.)
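As an illustration (not the authors' implementation), the two lexical distance measures defined above could be computed along the following lines. The sketch assumes NLTK's WordNet interface as a stand-in for WordNet 1.7.1, and leaves the Google hit count to a caller-supplied value, since the original Google API is no longer the natural choice.

```python
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

MAX_WN_DIST = 30  # normalisation factor used in the paper


def google_distance(n_hits):
    """Google distance from a raw hit count for the query 'the NBD of the NPA'."""
    return 1.0 if n_hits == 0 else 1.0 / n_hits


def hypernym_depths(sense):
    """Map every ancestor of a sense (including the sense itself) to the
    minimum number of hypernym links needed to reach it."""
    depths = {}
    for path in sense.hypernym_paths():           # each path runs from the root down to the sense
        for d, ancestor in enumerate(reversed(path)):
            depths[ancestor] = min(d, depths.get(ancestor, d))
    return depths


def wordnet_distance(n_bd, n_pa):
    """Normalised shortest hypernym-path distance between any sense of the BD
    head noun and any sense of the potential antecedent's head noun."""
    best = None
    for s_bd in wn.synsets(n_bd, pos=wn.NOUN):
        d_bd = hypernym_depths(s_bd)
        for s_pa in wn.synsets(n_pa, pos=wn.NOUN):
            d_pa = hypernym_depths(s_pa)
            for subsumer in set(d_bd) & set(d_pa):          # common subsumers
                dist = d_bd[subsumer] + d_pa[subsumer]
                best = dist if best is None else min(best, dist)
    if best is None:
        return 1.0                                          # no path found
    return min(best, MAX_WN_DIST) / MAX_WN_DIST             # capped to stay in 0..1


print(wordnet_distance("side", "table"))
```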
The second salience feature was boolean: whether the potential antecedent had been realized in first mention position in a sentence (Poesio, 2003; Gernsbacher and Hargreaves, 1988; Gordon et al., 1993). Two forms of this feature were tried: local first mention (whether the entity had been realized in first position within the previous five sentences) and global first mention (whether it had been realized in first position anywhere). 269 entities are realized in first position in the five sentences preceding one of the 58 BDs; 298 entities are realized in first position anywhere in the preceding text. For 31/58 of the anchors of mereological BDs, 53.5%, local first mention = 1; global first mention = 1 for 33/58 of anchors, 56.9%. 3.3 Training Methods Constructing the data set The data set used to train and test BR resolution consisted of a set of positive instances (the actual anchors of the mereological BRs) and a set of negative instances (other entities mentioned in the previous five sentences of the text). However, preliminary tests showed that simply including all potential antecedents as negative instances would make the data set too unbalanced, particularly when only bridging descriptions were considered: in this case we would have had 58 positive instances vs. 1672 negative ones. We therefore developed a parametric script that could create datasets with different positive / negative ratios - 1:1, 1:2, 1:3 - by including, with each positive instance, a varying number of negative instances (1, 2, 3, ...) randomly chosen among the other potential antecedents, the number of negative instances to be included for each positive one being a parameter chosen by the experimenter. We report the results obtained with 1:1 and 1:3 ratios. The dataset thus constructed was used for both training and testing, by means of a 10-fold crossvalidation. Types of Classifiers Used Multi-layer perceptrons (MLPs) have been claimed to work well with small datasets; we tested both our own implementation of an MLP with back-propagation in MatLab 6.5, experimenting with different configurations, and an off-the-shelf MLP included in the Weka Machine Learning Library8, Weka-NN. The best configuration for our own MLP proved to be one with a sigle hidden layer and 10 hidden nodes. We also used the implementation of a Naive Bayes classifier included in the Weka MLL, as Modjeska et al. (2003) reported good results. 4 Experimental Results In the first series of experiments only mereological Bridging Descriptions were considered (i.e., only bridging references realized by the-NPs). In a second series of experiments we considered all 153 mereological BRs, including ones realized with indefinites. Finally, we tested a classifier trained on balanced data (1:1 and 1:3) to find the anchors of BDs among all possible anchors. 4.1 Experiment 1: Mereological descriptions The GNOME corpus contains 58 mereological BDs. The five sentences preceding these 58 BDs contain a total of 1511 distinct entities for which a head could be recovered, possibly by examining their antecedents. This means an average of 26 distinct potential antecedents per BD, and 5.2 entities per sentence. The simplest baselines for the task of finding 8The library is available from http://www.cs.waikato.ac.nz/ml/weka/. the anchor are therefore 4% (by randomly choosing one antecedent among those in the previous five sentences) and 19.2% (by randomly choosing one antecedent among those in the previous sentence only). 
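The parametric construction of the training set described in Section 3.3 can be sketched as follows. This is only an illustration of the sampling idea, not the original script; it assumes that the feature dictionaries for each bridging description and its candidate antecedents have already been computed.

```python
import random


def build_instances(cases, ratio=1, seed=0):
    """cases: one entry per bridging description, given as a pair
    (anchor_features, other_candidate_features), where features are the
    lexical and salience feature dictionaries described above.
    Returns a labelled data set with the requested positive:negative
    ratio (1:1, 1:2, 1:3, ...)."""
    rng = random.Random(seed)
    data = []
    for anchor_feats, other_feats in cases:
        data.append((anchor_feats, 1))                            # the true anchor
        k = min(ratio, len(other_feats))
        data.extend((f, 0) for f in rng.sample(other_feats, k))   # sampled negatives
    rng.shuffle(data)
    return data


# e.g. a 1:3 data set for the 58 bridging descriptions:
# train = build_instances(cases, ratio=3)
```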
As 4.6 entities on average were realized in first mention position in the five sentences preceding a BD (269/58), choosing randomly among the first-mentioned entities gives a slighly higher accuracy of 21.3%. A few further baselines can be established by examining each feature separately. Google didn’t return any hits for 1089 out of 1511 distinct PAs, and no hit for 24/58 anchors; in 8/58 of cases (13.8%) the entity with the minimum Google distance is the correct anchor. We saw before that the method for computing WordNet distance used in (Poesio, 2003) didn’t find a path for any of the mereological BDs; however, not trying to follow mereological links worked much better, achieving the same accuracy as Google distance (8/58, 13.8%) and finding connections for much higher percentages of concepts: no path could be found for only 10/58 of actual anchors, and for 503/1511 potential antecedents. Pairwise combinations of these features were also considered. The best such combination, choosing the first mentioned entity in the previous sentence, achieves an accuracy of 18/58, 31%. These baseline results are summarized in the following table. Notice how even the best baselines achieve pretty low accuracy, and how even simple ’salience’ measures work better than lexical distance measures. Baseline Accuracy Random choice between entities in previous 5 4% Random choice between entities in previous 1 19% Random choice between First Ment. 21.3% entities in previous 5 Entity with min Google distance 13.8% Entity with min WordNet distance 13.8% FM entity in previous sentence 31% Min Google distance in previous sentence 17.2% Min WN distance in previous sentence 25.9% FM and Min Google distance 12% FM and Min WN distance 24.1% Table 2: Baselines for the BD task The features utterance distance, local first mention, and global f.m. were used in all machine learning experiments. But since one of our goals was to compare different lexical resources, only one lexical distance feature was used in the first two experiment. The three classifiers were trained to classify a potential antecedent as either ‘anchor’ or ‘not anchor’. The classification results with Google distance and WN distance for all three classifiers and the 1:1 data set (116 instances in total, 58 real anchor, 58 negative instances), for all elements of the data set, and averaging across the 10 cross-validations, are shown in Table 3. WN Distance Google Distance (Correct) (Correct) Our own MLP 92(79.3%) 89(76.7%) Weka NN 91(78.4%) 86(74.1%) Weka Naive Bayes 88(75.9%) 85(73.3%) Table 3: Classification results for BDs These results are clearly better than those obtained with any of the baseline methods discussed above. The differences between WN distance and Google distance, and that between our own MLP and the Weka implementation of Naive Bayes, are also significant (by a sign test, p ≤.05), whereas the pairwise differences between our own MLP and Weka’s NN, and between this and the Naive Bayes classifier, aren’t. In other words, although we find little difference between using WordNet and Google to compute lexical distance, using WordNet leads to slightly better results for BDs. 
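The significance comparisons just reported use a sign test over paired per-item outcomes. A minimal sketch, assuming boolean correctness vectors for the two classifiers being compared (ties, i.e. items both classifiers get right or both get wrong, are discarded as usual):

```python
from math import comb


def sign_test(correct_a, correct_b):
    """Two-sided sign test on paired outcomes (True/False per item)."""
    plus = sum(a and not b for a, b in zip(correct_a, correct_b))
    minus = sum(b and not a for a, b in zip(correct_a, correct_b))
    n = plus + minus
    if n == 0:
        return 1.0
    k = min(plus, minus)
    # exact binomial tail under p = 0.5, doubled for a two-sided test
    p = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * p)
```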
The next table shows precision, recall and f-values for the positive data points, for the feature sets using WN distance and Google distance, respectively: Precision Recall F-value WN features 75.4% 84.5% 79.6% Google features 70.6% 86.2% 77.6% Table 4: Precision and recall for positive instances Using a 1:3 dataset (3 negative data points for each anchor), overall accuracy increases (to 82% using Google distance) and accuracy with Google distance is better than with Wordnet distance (80.6%); however, the precision and recall figures for the positive data points get much worse: 56.7% with Google, 55.7% with Wordnet. 4.2 All mereological references Clearly, 58 positive instances is a fairly small dataset. In order to have a larger dataset, we included every bridging reference in the corpus, including those realized with indefinite NPs, thus bringing the total to 153 positive instances. We then ran a second series of experiments using the same methods as before. The results were slightly lower than those for BDs only, but in this case there was no difference between using Google and using WN. Fmeasure on positive instances was 76.3% with WN, 75.8% with Google. 4.3 A harder test In a last experiment, we used classifiers trained on balanced and moderately unbalanced data to determine the anchor of 6 randomly chosen BDs among WN Distance Google Distance (Correct) (Correct) Weka NN 227(74.2%) 230(75.2%) Table 5: Classification results for all BDs all of their 346 possible antecedents in context. For these experiments, we also tried to use both Google and WordNet simultaneously. The results for BDs are shown in Table 6. The first column of the table specifies the lexical resource used; the second the degree of balance; the next two columns percentage correct and F value on a testing set with the same balance as the training set; the final two columns perc. correct and F value on the harder test set. The best results,F=.5, are obtained using both Google and WN distance, and using a larger (if unbalanced) training corpus. These results are not as good as those obtained (by hand) by Poesio (which, however, used a complete focus tracking mechanism), but the F measure is still 66% higher than that obtained with the highest baseline (FM only), and not far off from the results obtained with direct anaphoric definite descriptions (e.g., by (Poesio and Alexandrov-Kabadjov, 2004)). It’s also conforting to note that results with the harder test improve the more data are used, which suggests that better results could be obtained with a larger corpus. 5 Related work In recent years there has been a lot of work to develop anaphora resolution algorithms using both symbolic and statistical methods that could be quantitatively evaluated (Humphreys et al., 1997; Ng and Cardie, 2002) but this work focused on identity relations; bridging references were explicitly excluded from the MUC coreference task because of the problems with reliability discussed earlier. Thus, most work on bridging has been theoretical, like the work by Asher and Lascarides (1998). Apart from the work by Poesio et al., the main other studies attempting quantitative evaluations of bridging reference resolution are (Markert et al., 1996; Markert et al., 2003). Markert et al. (1996) also argue for the need to use both Centering information and conceptual knowledge, and attempt to characterize the ‘best’ paths on the basis of an analysis of part-of relations, but use a hand-coded, domain-dependent knowledge base. Markert et al. 
(2003) focus on other anaphora, using Hearst’ patterns to mine information about hyponymy from the Web, but do not use focusing knowledge. 6 Discussion and Conclusions The two main results of this study are, first of all, that combining ’salience’ features with ’lexical’ features leads to much better results than using either method in isolation; and that these results are an improvement over those previously reported in the literature. A secondary, but still interesting, result is that using WordNet in a different way –taking advantage of its extensive information about hypernyms to obviate its lack of information about meronymy–obviates the problems previously reported in the literature on using WordNet for resolving mereological bridging references, leading to results comparable to those obtained using Google. (Of course, from a practical perspective Google may still be preferrable, particularly for languages for which no WordNet exists.) The main limitation of the present work is that the number of BDs and BRs considered, while larger than in our previous studies, is still fairly small. Unfortunately, creating a reasonably accurate gold standard for this type of semantic interpretation process is slow work. Our first priority will be therefore to extend the data set, including also the original cases studied by Poesio and Vieira. Current and future work will also include incorporating the methods tested here in an actual anaphora resolution system, the GUITAR system (Poesio and Alexandrov-Kabadjov, 2004). We are also working on methods for automatically recognizing bridging descriptions, and dealing with other types of (non-associative) bridging references based on synonymy and hyponymy. Acknowledgments The creation of the GNOME corpus was supported by the EPSRC project GNOME, GR/L51126/01. References N. Asher and A. Lascarides. 1998. Bridging. Journal of Semantics, 15(1):83–13. M. Berland and E. Charniak. 1999. Finding parts in very large corpora. In Proc. of the 37th ACL. H. H. Clark and C. J. Sengul. 1979. In search of referents for nouns and pronouns. Memory and Cognition, 7(1):35–41. H. H. Clark. 1977. Bridging. In P. N. JohnsonLaird and P.C. Wason, editors, Thinking: Readings in Cognitive Science. Cambridge. C. Fellbaum, editor. 1998. WordNet: An electronic lexical database. The MIT Press. A. Garcia-Almanza. 2003. Using WordNet for mereological anaphora resolution. Master’s thesis, University of Essex. Lex Res Balance Perc on bal F on bal Perc on Hard F on Hard WN 1:1 70.2% .7 80.2% .2 1:3 75.9% .4 91.7% 0 Google 1:1 64.4% .7 63.6% .1 1.3 79.8% .5 88.4% .3 WN + 1:1 66.3% .6 65.3% .2 Google 1.3 77.9% .4 92.5% .5 Table 6: Results using a classifier trained on balanced data on unbalanced ones. M. A. Gernsbacher and D. Hargreaves. 1988. Accessing sentence participants. Journal of Memory and Language, 27:699–717. P. C. Gordon, B. J. Grosz, and L. A. Gillion. 1993. Pronouns, names, and the centering of attention in discourse. Cognitive Science, 17:311–348. G. Grefenstette. 1993. SEXTANT: extracting semantics from raw text. Heuristics. B. J. Grosz and C. L. Sidner. 1986. Attention, intention, and the structure of discourse. Computational Linguistics, 12(3):175–204. B. J. Grosz, A. K. Joshi, and S. Weinstein. 1995. Centering. Computational Linguistics, 21(2):202–225. S. Harabagiu and D. Moldovan. 1998. Knowledge processing on extended WordNet. In (Fellbaum, 1998), pages 379–405. M. A. Hearst. 1998. Automated discovery of Wordnet relations. In (Fellbaum, 1998). J. R. Hobbs. 1978. 
Resolving pronoun references. Lingua, 44:311–338. K. Humphreys, R. Gaizauskas, S. Azzam, C. Huyck, B. Mitchell, and H. Cunningham Y. Wilks. 1997. Description of the LaSIE-II System as used for MUC-7. In Proc. of the 7th Message Understanding Conference (MUC-7). T. Ishikawa. 1998. Acquisition of associative information and resolution of bridging descriptions. Master’s thesis, University of Edinburgh. F. Keller and M. Lapata. 2003. Using the Web to obtain frequencies for unseen bigrams. Computational Linguistics, 29(3). K. Lund, C. Burgess, and R. A. Atchley. 1995. Semantic and associative priming in highdimensional semantic space. In Proc. of the 17th Conf. of the Cogn. Science Soc., pages 660–665. K. Markert, M. Strube, and U. Hahn. 1996. Inferential realization constraints on functional anaphora in the centering model. In Proc. of 18th Conf. of the Cog. Science Soc., pages 609–614. K. Markert, M. Nissim, and N.. Modjeska. 2003. Using the Web for nominal anaphora resolution. In Proc. of the EACL Workshop on the Computational Treatment of Anaphora, pages 39–46. N. Modjeska, K. Markert, and M. Nissim. 2003. Using the Web in ML for anaphora resolution. In Proc. of EMNLP-03, pages 176–183. V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Meeting of the ACL. M. Poesio and R. Vieira. 1998. A corpus-based investigation of definite description use. Computational Linguistics, 24(2):183–216, June. M. Poesio, R. Vieira, and S. Teufel. 1997. Resolving bridging references in unrestricted text. In R. Mitkov, editor, Proc. of the ACL Workshop on Robust Anaphora Resolution, pages 1–6, Madrid. M. Poesio, S. Schulte im Walde, and C. Brew. 1998. Lexical clustering and definite description interpretation. In Proc. of the AAAI Spring Symposium on Learning for Discourse, pages 82–89. M. Poesio, T. Ishikawa, S. Schulte im Walde, and R. Vieira. 2002. Acquiring lexical knowledge for anaphora resolution. In Proc. of the 3rd LREC. M. Poesio and M. Alexandrov-Kabadjov. 2004. A general-purpose, off the shelf anaphoric resolver. In Proc. of the 4th LREC, Lisbon. M. Poesio, R. Stevenson, B. Di Eugenio, and J. M. Hitzeman. 2004. Centering: A parametric theory and its instantiations. Comp. Linguistics. 30(3). M. Poesio. 2003. Associative descriptions and salience. In Proc. of the EACL Workshop on Computational Treatments of Anaphora. E. F. Prince. 1981. Toward a taxonomy of givennew information. In P. Cole, editor, Radical Pragmatics, pages 223–256. Academic Press. C. L. Sidner. 1979. Towards a computational theory of definite anaphora comprehension in English discourse. Ph.D. thesis, MIT. O. Uryupina. 2003. High-precision identification of discourse-new and unique noun phrases. In Proc. of ACL 2003 Stud. Workshop, pages 80–86. R. Vieira and M. Poesio. 2000. An empiricallybased system for processing definite descriptions. Computational Linguistics, 26(4), December.
Constructivist Development of Grounded Construction Grammars Luc Steels University of Brussels (VUB AI Lab) SONY Computer Science Lab - Paris 6 Rue Amyot, 75005 Paris [email protected] Abstract The paper reports on progress in building computational models of a constructivist approach to language development. It introduces a formalism for construction grammars and learning strategies based on invention, abduction, and induction. Examples are drawn from experiments exercising the model in situated language games played by embodied artificial agents. 1 Introduction The constructivist approach to language learning proposes that ”children acquire linguistic competence (...) only gradually, beginning with more concrete linguistic structures based on particular words and morphemes, and then building up to more abstract and productive structures based on various types of linguistic categories, schemas, and constructions.” (TomaselloBrooks, 1999), p. 161. The approach furthermore assumes that language development is (i) grounded in cognition because prior to (or in a co-development with language) there is an understanding and conceptualisation of scenes in terms of events, objects, roles that objects play in events, and perspectives on the event, and (ii) grounded in communication because language learning is intimately embedded in interactions with specific communicative goals. In contrast to the nativist position, defended, for example, by Pinker (Pinker, 1998), the constructivist approach does not assume that the semantic and syntactic categories as well as the linking rules (specifying for example that the agent of an action is linked to the subject of a sentence) are universal and innate. Rather, semantic and syntactic categories as well as the way they are linked is built up in a gradual developmental process, starting from quite specific ‘verb-island constructions’. Although the constructivist approach appears to explain a lot of the known empirical data about child language acquisition, there is so far no worked out model that details how constructivist language development works concretely, i.e. what kind of computational mechanisms are implied and how they work together to achieve adult (or even child) level competence. Moreover only little work has been done so far to build computational models for handling the sort of ’construction grammars’ assumed by this approach. Both challenges inform the research discussed in this paper. 2 Abductive Learning In the constructivist literature, there is often the implicit assumption that grammatical development is the result of observational learning, and several research efforts are going on to operationalise this approach for acquiring grounded lexicons and grammars (see e.g. (Roy, 2001)). The agents are given pairs with a real world situation, as perceived by the sensori-motor apparatus, and a language utterance. For example, an image of a ball is shown and at the same time a stretch of speech containing the word “ball”. Based on a generalisation process that uses statistical pattern recognition algorithms or neural networks, the learner then gradually extracts what is common between the various situations in which the same word or construction is used, thus progressively building a grounded lexicon and grammar of a language. The observational learning approach has had some success in learning words for objects and acquiring simple grammatical constructions, but there seem to be two inherent limitations. 
First, there is the well known poverty of the stimulus argument, widely accepted in linguistics, which says that there is not enough data in the sentences normally available to the language learner to arrive at realistic lexicons and grammars, let alone learn at the same time the categorisations and conceptualisations of the world implied by the language. This has lead many linguists to adopt the nativist position mentioned earlier. The nativist position could in principle be integrated in an observational learning framework by introducing strong biases on the generalisation process, incorporating the constraints of universal grammar, but it has been difficult to identify and operationalise enough of these constraints to do concrete experiments in realistic settings. Second, observational learning assumes that the language system (lexicon and grammar) exists as a fixed static system. However, observations of language in use shows that language users constantly align their language conventions to suit the purposes of specific conversations (ClarkBrennan, 1991). Natural languages therefore appear more to be like complex adaptive systems, similar to living systems that constantly adapt and evolve. This makes it difficult to rely exclusively on statistical generalisation. It does not capture the inherently creative nature of language use. This paper explores an alternative approach, which assumes a much more active stance from language users based on the Peircian notion of abduction (Fann, 1970). The speaker first attempts to use constructions from his existing inventory to express whatever he wants to express. However when that fails or is judged unsatisfactory, the speaker may extend his existing repertoire by inventing new constructions. These new constructions should be such that there is a high chance that the hearer may be able to guess their meaning. The hearer also uses as much as possible constructions stored in his own inventory to make sense of what is being said. But when there are unknown constructions, or the meanings do not fit with the situation being talked about, the hearer makes an educated guess about what the meaning of the unknown language constructions could be, and adds them as new hypotheses to his own inventory. Abductive constructivist learning hence relies crucially on the fact that both agents have sufficient common ground, share the same situation, have established joint attention, and share communicative goals. Both speaker and hearer use themselves as models of the other in order to guess how the other one will interpret a sentence or why the speaker says things in a particular way. Because both speaker and hearer are taking risks making abductive leaps, a third activity is needed, namely induction, not in the sense of statistical generalisation as in observational learning but in the sense of Peirce (Fann, 1970): A hypothesis arrived at by making educated guesses is tested against further data coming from subsequent interactions. When a construction leads to a successful interaction, there is some evidence that this construction is (or could become) part of the set of conventions adopted by the group, and language users should therefore prefer it in the future. When the construction fails, the language user should avoid it if alternatives are available. Implementing these visions of language learning and use is obviously an enormous challenge for computational linguistics. 
It requires not only cognitiveand communicative grounding, but also grammar formalisms and associated parsing and production algorithms which are extremely flexible, both from the viewpoint of getting as far as possible in the interpretation or production process despite missing rules or incompatibilities in the inventories of speaker and hearer, and from the viewpoint of supporting continuous change. 3 Language Games The research reported here uses a methodological approach which is quite common in Artificial Life research but still relatively novel in (computational) linguistics: Rather than attempting to develop simulations that generate natural phenomena directly, as one does when using Newton’s equations to simulate the trajectory of a ball falling from a tower, we engage in computational simulations and robotic experiments that create (new) artificial phenomena that have some of the characteristics of natural phenomena and hence are seen as explaining them. Specifically, we implement artificial agents with components modeling certain cognitive operations (such as introducing a new syntactic category, computing an analogy between two events, etc.), and then see what language phenomena result if these agents exercise these components in embodied situated language games. This way we can investigate very precisely what causal factors may underly certain phenomena and can focus on certain aspects of (grounded) language use without having to face the vast full complexity of real human languages. A survey of work which follows a similar methodology is found in (CangelosiParisi, 2003). The artificial agents used in the experiments driving our research observe real-world scenes through their cameras. The scenes consist of interactions between puppets, as shown in figure 1. These scenes enact common events like movement of people and objects, actions such as push or pull, give or take, etc. In order to achieve the cognitive grounding assumed in constructivistlanguage learning, the scenes are processed by a battery of relatively standard machine vision algorithms that segment objects based on color and movement, track objects in real-time, and compute a stream of lowlevel features indicating which objects are touching, in which direction objects are moving, etc. These low-level features are input to an eventrecognition system that uses an inventory of hierarchical event structures and matches them against the data streaming in from low-level vision, similar to the systems described in (SteelsBaillie, 2003). Figure 1: Scene enacted with puppets so that typical interactions between humans involving agency can be perceived and described. In order to achieve the communicative grounding required for constructivist learning, agents go through scripts in which they play various language games, similar to the setups described in (Steels, 2003). These language games are deliberately quite similar to the kind of scenes and interactions used in a lot of child language research. A language game is a routinised interaction between two agents about a shared situation in the world that involves the exchange of symbols. Agents take turns playing the role of speaker and hearer and give each other feedback about the outcome of the game. In the game further used in this paper, one agent describes to another agent an event that happened in the most recently experienced scene. The game succeeds if the hearer agrees that the event being described occurred in the recent scene. 
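Schematically, one round of the description game can be sketched as below. The agent interface used here (choose_topic, conceptualise, produce, interpret, agrees, update_scores) is an assumed abstraction for the purpose of illustration, not the system's actual code.

```python
def play_description_game(speaker, hearer, scene):
    """One round of the description game over a shared scene."""
    topic = speaker.choose_topic(scene)              # an event in the recent scene
    meaning = speaker.conceptualise(topic, scene)    # facts discriminating the topic
    utterance = speaker.produce(meaning)             # may trigger invention
    hypothesis = hearer.interpret(utterance, scene)  # may trigger abduction
    success = hypothesis is not None and hearer.agrees(hypothesis, scene)
    speaker.update_scores(success)                   # induction: adjust rule scores
    hearer.update_scores(success)
    return success
```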
4 The Lexicon Visual processing and event recognition results in a world model in the form of a series of facts describing the scene. To play the description game, the speaker selects one event as the topic and then seeks a series of facts which discriminate this event and its objects against the other events and objects in the context. We use a standard predicate calculus-style representation for meanings. A semantic structure consists of a set of units where each unit has a referent, which is the object or event to which the unit draws attention, and a meaning, which is a set of clauses constraining the referent. A semantic structure with one unit is for example written down as follows: [1] unit1 ev1 fall(ev1,true), fall-1(ev1,obj1),ball(obj1) where unit1 is the unit, ev1 the referent, and fall(ev1, true), fall-1(ev1,obj1), ball(obj1) the meaning. The different arguments of an event are decomposed into different predicates. For example, for “John gives a book to Mary”, there would be four clauses: give(ev1,true) for the event itself, give-1(ev1, John), for the one who gives, give-2(ev1,book1), for the object given, and give-3(ev1,Mary), for the recipient. This representation is more flexible and makes it possible to add new components (like the manner of an event) at any time. Syntactic structures mirror semantic structures. They also consist of units and the name of units are shared with semantic structures so that crossreference between them is straightforward. The form aspects of the sentence are represented in a declarative predicate calculus style, using the units as arguments. For example, the following unit is constrained as introducing the string “fall”: [2] unit1 string(unit1, “fall”) The rule formalism we have developed uses ideas from several existing formalisms, particularly unification grammars and is most similar to the Embodied Construction Grammars proposed in (BergenChang, 2003). Lexical rules link parts of semantic structure with parts of syntactic structure. All rules are reversable. When producing, the left side of a rule is matched against the semantic structure and, if there is a match, the right side is unified with the syntactic structure. Conversely when parsing, the right side is matched against the syntactic structure and the left side unified with the semantic structure. Here is a lexical entry for the word ”fall”. [3] ?unit ?ev fall(?ev,?state), fall-1(?ev,?obj)  ?unit string(?unit,“fall”) It specifies that a unit whose meaning is fall(?ev,?state), fall-1(?ev,?obj) is expressed with the string “fall”. Variables are written down with a question mark in front. Their scope is restricted to the structure or rule in which they appear and rule application often implies the renaming of certain variables to take care of the scope constraints. Here is a lexical entry for “ball”: [4] ?unit ?obj ball(?obj)  ?unit string(?unit,“ball”) Lexicon lookup attempts to find the minimal set of rules that covers the total semantic structure. New units may get introduced (both in the syntactic and semantic structure) if the meaning of a unit is broken down in the lexicon into more than one word. Thus, the original semantic structure in [1] results after the application of the two rules [3] and [4] in the following syntactic and semantic structures: [5] unit1 ev1 fall(ev1,true), fall-1(ev1,obj1) unit2 obj1 ball(obj1) —– unit1 string(unit1, “fall”) unit2 string(unit2, “ball”) If this syntactic structure is rendered, it produces the utterance “fall ball”. No syntax is implied yet. 
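A much-simplified sketch of this production direction: lexical rules are reduced here to a predicate set paired with a word string, without the unit and variable machinery of the full formalism, and lexicon lookup greedily covers the meaning with the highest-scoring rules. Any uncovered residue is exactly where invention would apply.

```python
from dataclasses import dataclass


@dataclass
class LexicalRule:
    word: str
    predicates: frozenset      # predicates covered, e.g. {"fall", "fall-1"}
    score: float = 0.5


def produce(meaning, lexicon):
    """Greedy lexicon lookup: cover the predicates of the meaning with the
    highest-scoring rules; a non-empty residue triggers invention."""
    uncovered = {clause[0] for clause in meaning}     # clause = (predicate, args...)
    words = []
    for rule in sorted(lexicon, key=lambda r: -r.score):
        if rule.predicates <= uncovered:
            words.append(rule.word)
            uncovered -= rule.predicates
    return words, uncovered


lexicon = [LexicalRule("fall", frozenset({"fall", "fall-1"})),
           LexicalRule("ball", frozenset({"ball"}))]
meaning = [("fall", "ev1", True), ("fall-1", "ev1", "obj1"), ("ball", "obj1")]
print(produce(meaning, lexicon))   # (['fall', 'ball'], set())
```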
In the reverse direction, the parser starts with the two units forming the syntactic structure in [5] and application of the rules produces the following semantic structure: [6] unit1 ?ev fall(?ev,?state), fall-1(?ev,?obj) unit2 ?obj1 ball(?obj1) The semantic structure in [6] now contains variables for the referent of each unit and for the various predicate-arguments in their meanings. The interpretation process matches these variables against the facts in the world model. If a single consistent series of bindings can be found, then interpretation is successful. For example, assume that the facts in the meaning part of [1] are in the world model then matching [6] against them results in the bindings: [7] ?ev/ev1, ?state/true, ?obj/obj1, ?obj1/obj1 When the same word or the same meaning is covered by more than one rule, a choice needs to be made. Competing rules may develop if an agent invented a new word for a particular meaning but is later confronted with another word used by somebody else for the same meaning. Every rule has a score and in production and parsing, rules with the highest score are preferred. When the speaker performs lexicon lookup and rules were found to cover the complete semantic structure, no new rules are needed. But when some part is uncovered, the speaker should create a new rule. We have experimented so far with a simple strategy where agents lump together the uncovered facts in a unit and create a brand new word, consisting of a randomly chosen configuration of syllables. For example, if no word for ball(obj1) exists yet to cover the semantic structure in [1], a new rule such as [4] can be constructed by the speaker and subsequently used. If there is no word at all for the whole semantic structure in [1], a single word covering the whole meaning will be created, giving the effect of holophrases. The hearer first attempts to parse as far as possible the given sentence, and then interprets the resulting semantic structure, possibly using joint attention or other means that may help to find the intended interpretation. If this results in a unique set of bindings, the language game is deemed successful. But if there were parts of the sentence which were not covered by any rule, then the hearer can use abductive learning. The first critical step is to guess as well as possible the meaning of the unknown word(s). Thus suppose the sentence is “fall ball”, resulting in the semantic structure: [8] unit1 ?ev fall(?ev,?state), fall-1(?ev,?obj) If this structure is matched, bindings for ?ev and ?obj are found. The agent can now try to find the possible meaning of the unknown word “ball”. He can assume that this meaning must somehow help in the interpretation process. He therefore conceptualises the same way as if he would be the speaker and constructs a distinctive description that draws attention to the event in question, for example by constraining the referent of ?obj with an additional predicate. Although there are usually several ways in which obj1 differs from other objects in the context. There is a considerable chance that the predicate ball is chosen and hence ball(?obj) is abductively inferred as the meaning of “ball” resulting in a rule like [4]. Agents use induction to test whether the rules they created by invention and abduction have been adopted by the group. Every rule has a score, which is local to each agent. 
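The interpretation step described above — finding a consistent set of bindings for the variables in the parsed semantic structure against the facts in the world model, as in examples [6] and [7] — can be sketched with a small backtracking matcher. Clauses and facts are represented as tuples of (predicate, arguments), and variables as strings beginning with '?'; this is an illustrative simplification of the system's unification machinery.

```python
def match(clauses, facts, bindings=None):
    """Return one consistent binding of variables such that every clause
    unifies with some fact, or None if no such binding exists."""
    bindings = dict(bindings or {})
    if not clauses:
        return bindings
    first, rest = clauses[0], clauses[1:]
    for fact in facts:
        if fact[0] != first[0] or len(fact) != len(first):
            continue
        trial, ok = dict(bindings), True
        for arg, value in zip(first[1:], fact[1:]):
            if isinstance(arg, str) and arg.startswith("?"):
                if trial.get(arg, value) != value:    # conflicting binding
                    ok = False
                    break
                trial[arg] = value
            elif arg != value:
                ok = False
                break
        if ok:
            result = match(rest, facts, trial)
            if result is not None:
                return result
    return None


facts = [("fall", "ev1", True), ("fall-1", "ev1", "obj1"), ("ball", "obj1")]
clauses = [("fall", "?ev", "?state"), ("fall-1", "?ev", "?obj"), ("ball", "?obj1")]
print(match(clauses, facts))
# {'?ev': 'ev1', '?state': True, '?obj': 'obj1', '?obj1': 'obj1'}
```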
When the speaker or hearer has success with a particular rule, its score is increased and the score of competing rules is decreased, thus implementing lateral inhibition. When there is a failure, the score of the rule that was used is decreased. Because the agents prefer rules with the highest score, there is a positive feedback in the system. The more a word is used for a particular meaning, the more success that word will have. Figure 2: Winner-take-all effect in words competing for same meaning. The x-axis plots language games and the y-axis the use frequency. Scores rise in all the agents for these words and so progressively we see a winner-take-all effect with one word dominating for the expression of a particular meaning (see figure 2). Many experiments have by now been performed showing that this kind of lateral inhibition dynamics allows a population of agents to negotiate a shared inventory of formmeaning pairs for content words (Steels, 2003). 5 Syntactisation The reader may have noticed that the semantic structure in [6] resulting from parsing the sentence “fall ball”, includes two variables which will both get bound to the same object, namely ?obj, introduced by the predicate fall-1(?ev,?obj), and ?obj1, introduced by the predicate ball(?obj1). We say that in this case ?obj and ?obj1 form an equality. Just from parsing the two words, the hearer cannot know that the object involved in the fall event is the same as the object introduced by ball. He can only figure this out when looking at the scene (i.e. the world model). In fact, if there are several balls in the scene and only one of them is falling, there is no way to know which object is intended. And even if the hearer can figure it out, it is still desirable that the speaker should provide extra-information about equalities to optimise the hearer’s interpretation efforts. A major thesis of the present paper is that resolving equivalences between variables is the main motor for the introduction of syntax. To achieve it, the agents could, as a first approximation, use rules like the following one, to be applied after all lexical rules have been applied: [9] ?unit1 ?ev1 fall-1(?ev1,?obj2) ?unit2 ?obj2 ball(?obj2)  ?unit1 string(?unit1, ”fall”) ?unit2 string(?unit2, ”ball”) This rule is formally equivalent to the lexical rules discussed earlier in the sense that it links parts of a semantic structure with parts of a syntactic structure. But now more than one unit is involved. Rule [9] will do the job, because when unifying its right side with the semantic structure (in parsing) ?obj2 unifies with the variables ?obj (supplied by ”fall”) and ?obj1 (supplied by ”ball”) and this forces them to be equivalent. Note that ?unit1 in [9] only contains those parts of the original meaning that involve the variables which need to be made equal. The above rule works but is completely specific to this case. It is an example of the ad hoc ‘verb-island’ constructions reported in an early stage of child language development. Obviously it is much more desirable to have a more general rule, which can be achieved by introducing syntactic and semantic categories. A semantic category (such as agent, perfective, countable, male) is a categorisation of a conceptual relation, which is used to constrain the semantic side of grammatical rules. A syntactic category (such as noun, verb, nominative) is a categorisation of a word or a group of words, which can be used to constrain the syntactic side of grammatical rules. 
A rule using categories can be formed by taking rule [9] above and turning all predicates or content words into semantic or syntactic categories. [10] ?unit1 ?ev1 semcat1(?ev1,?obj2) ?unit2 ?obj2 semcat2(?obj2)  ?unit1 syncat1 (?unit1) ?unit2 syncat2(?unit2) The agent then needs to create sem-rules to categorise a predicate as belonging to a semantic category, as in: [11] ?unit1 ?ev1 fall-1(?ev1,?obj2)   ?unit1 ?ev1 semcat1(?ev1,?obj1) and syn-rules to categorise a word as belonging to a syntactic category, as in: [12] ?unit1 string(?unit1,”fall”)   ?unit1 ?ev1 syncat1(?unit1) These rules have arrows going only in one direction because they are only applied in one way.1 During production, the sem-rules are applied first, then the lexical rules, next the syn-rules and then the gram1Actually if word morphology is integrated, syn-rules need to be bi-directional, but this topic is not discussed further here due to space limitations. matical rules. In parsing, the lexical rules are applied first (in reverse direction), then the syn-rules and the sem-rules, and only then the grammatical rules (in reverse direction). The complete syntactic and semantic structures for example [9] look as follows: [13] unit1 ?ev1 fall(?ev1,?state), fall-1(?ev1,?obj), semcat1(?ev1,?obj) unit2 ?obj1 ball(?obj1), semcat2(?obj1) —– unit1 string(unit1, “fall”), syncat-1(unit1) unit2 string(unit2, “ball”), syncat-2(unit2) The right side of rule [10] matches with this syntactic structure, and if the left side of rule [10] is unified with the semantic structure in [13] the variable ?obj2 unifies with ?obj and ?obj1, thus resolving the equality before semantic interpretation (matching against the world model) starts. How can language users develop such rules? The speaker can detect equalities that need to be resolved by re-entrance: Before rendering a sentence and communicating it to the hearer, the speaker reparses his own sentence and interprets it against the facts in his own world model. If the resulting set of bindings contains variables that are bound to the same object after interpretation, then these equalities are candidates for the construction of a rule and new syntactic and semantic categories are made as a side effect. Note how the speaker uses himself as a model of the hearer and fixes problems that the hearer might otherwise encounter. The hearer can detect equalities by first interpreting the sentence based on the constructions that are already part of his own inventory and the shared situation and prior joint attention. These equalities are candidates for new rules to be constructed by the hearer, and they again involve the introduction of syntactic and semantic categories. Note that syntactic and semantic categories are always local to an agent. The same lateral inhibition dynamics is used for grammatical rules as for lexical rules, and so is also a positive feedback loop leading to a winner-take-all effect for grammatical rules. 6 Hierarchy Natural languages heavily use categories to tighten rule application, but they also introduce additional syntactic markings, such as word order, function words, affixes, morphological variation of word forms, and stress or intonation patterns. These markings are often used to signal to which category certain words belong. They can be easily incorporated in the formalism developed so far by adding additional descriptors of the units in the syntactic structure. 
For example, rule [10] can be expanded with word order constraints and the introduction of a particle “ba”: [14] ?unit1 ?ev1 semcat1(?ev1,?obj2) ?unit2 ?obj2 semcat2(?obj2)  ?unit1 syncat1 (?unit1) ?unit2 syncat2(?unit2) ?unit3 string (?unit3, “ba”) ?unit4 syn-subunits (  ?unit1, ?unit2, ?unit3  ), preceeds(?unit2, ?unit3) Note that it was necessary to introduce a superunit ?unit4 in order to express the word order constraints between the ba-particle and the unit that introduces the object. Applying this rule as well as the synrules and sem-rules discussed earlier to the semantic structure in [5] yields: [13] unit1 ev1 fall(ev1,true), fall-1(ev1,obj), semcat1(ev1,obj) unit2 obj1 ball(obj1), semcat2(obj1) —– unit1 string(unit1, “fall”), syncat-1(unit1) unit2 string(unit2, “ball”), syncat-2(unit2) unit3 string(unit3, “ba”) unit4 syn-subunits(  unit1,unit2,unit3  ), preceeds(unit2,unit3) When this syntactic structure is rendered, it produces ”fall ball ba”, or equivalently ”ball ba fall”, because only the order between “ball” and “ba” is constrained. Obviously the introduction of additional syntactic features makes the learning of grammatical rules more difficult. Natural languages appear to have meta-level strategies for invention and abduction. For example, a language (like Japanese) tends to use particles for expressing the roles of objects in events and this usage is a strategy both for inventing the expression of a new relation and for guessing what the use of an unknown word in the sentence might be. Another language (like Swahili) uses morphological variations similar to Latin for the same purpose and thus has ended up with a rich set of affixes. In our experiments so far, we have implemented such strategies directly, so that invention and abduction is strongly constrained. We still need to work out a formalism for describing these strategies as metarules and research the associated learning mechanisms. Figure 3: The graph shows the dependency structure as well as the phrase-structure emerging through the application of multiple rules When the same word participates in several rules, we automatically get the emergence of hierarchical structures. For example, suppose that two predicates are used to draw attention to obj1 in [5]: ball and red. If the lexicon has two separate words for each predicate, then the initial semantic structure would introduce different variables so that the meaning after parsing ”fall ball ba red” would be: [15] fall(?ev,?state), fall-1(?ev,?obj), ball (?obj), red(?obj2) To resolve the equality between ?obj and ?obj2, the speaker could create the following rule: [14] ?unit1 ?obj semcat3(?obj) ?unit2 ?obj semcat4(?obj)  ?unit1 syncat3(?unit1) ?unit2 syncat4(?unit2) ?unit3 syn-subunits (  unit1,unit2  ), preceeds(unit1,unit2) The predicate ball is declared to belong to semcat4 and the word “ball” to syncat4. The predicate red belongs to semcat3 and the word “red” to syncat3. Rendering the syntactic structure after application of this rule gives the sentence ”fall red ball ba”. A hierarchical structure (figure 3) emerges because “ball” participates in two rules. 7 Re-use Agents obviously should not invent new conventions from scratch every time they need one, but rather use as much as possible existing categorisations and hence existing rules. This simple economy principle quickly leads to the kind of syntagmatic and paradigmatic regularities that one finds in natural grammars. 
For example, if the speaker wants to express that a block is falling, no new semantic or syntactic categories or linking rules are needed but block can simply be declared to belong to semcat4 and “block” to syncat3 and rule [14] applies. Re-use should be driven by analogy. In one of the largest experiments we have carried out so far, agents had a way to compute the similarity between two event-structures by pairing the primitive operations making up an event. For example, a pick-up action is decomposed into: an object moving into the direction of another stationary object, the first object then touching the second object, and next the two objects moving together in (roughly) the opposite direction. A put-down action has similar subevents, except that their ordering is different. The roles of the objects involved (the hand, the object being picked up) are identical and so their grammatical marking could be re-used with very low risk of being misunderstood. When a speaker reuses a grammatical marking for a particular semantic category, this gives a strong hint to the hearer what kind of analogy is expected. By using these invention and abduction strategies, semantic categories like agent or patient gradually emerged in the artificial grammars. Figure 4 visualises the result of this experiment (after 700 games between 2 agents taking turns). The x-axis (randomly) ranks the different predicate-argument relations, the y-axis their markers. Without re-use, every argument would have its own marker. Now several markers (such as “va” or “zu”) cover more than one relation. Figure 4: More compact grammars result from reuse based on semantic analogies. 8 Conclusions The paper reports significant steps towards the computational modeling of a constructivist approach to language development. It has introduced aspects of a construction grammar formalism that is designed to handle the flexibility required for emergent developing grammars. It also proposed that invention, abduction, and induction are necessary and sufficient for language learning. Much more technical work remains to be done but already significant experimental results have been obtained with embodied agents playing situated language games. Most of the open questions concern under what circumstances syntactic and semantic categories should be re-used. Research funded by Sony CSL with additional funding from ESF-OMLL program, EU FET-ECAgents and CNRS OHLL. References Bergen, B.K. and N.C. Chang. 2003. Embodied Construction Grammar in Simulation-Based Language Understanding. TR 02-004, ICSI, Berkeley. Cangelosi, and D. Parisi 2003. Simulating the Evolution of Language. Springer-Verlag, Berlin. Clark, H. and S. Brennan 1991. Grounding in communication. In: Resnick, L. J. Levine and S. Teasley (eds.) Perspectives on Socially Shared Cognition. APA Books, Washington. p. 127-149. Fann, K.T. 1970. Peirce’s Theory of Abduction Martinus Nijhoff, The Hague. Roy, D. 2001. Learning Visually Grounded Words and Syntax of Natural Spoken Language. Evolution of communication 4(1). Pinker, S. 1998. Learnability and Cognition: The acquisition of Argument Structure. The MIT Press, Cambridge Ma. Steels, L. 2003 Evolving grounded communication for robots. Trends in Cognitive Science. Volume 7, Issue 7, July 2003 , pp. 308-312. Steels, L. and J-C. Baillie 2003. Shared Grounding of Event Descriptions by Autonomous Robots. Journal of Robotics and Autonomous Systems 43, 2003, pp. 163-173. Tomasello, M. and P.J. Brooks 1999. 
Early syntactic development: A Construction Grammar approach. In: Barrett, M. (ed.) (1999) The Development of Language. Psychology Press, London. pp. 161-190.
Learning Noun Phrase Anaphoricity to Improve Coreference Resolution: Issues in Representation and Optimization Vincent Ng Department of Computer Science Cornell University Ithaca, NY 14853-7501 [email protected] Abstract Knowledge of the anaphoricity of a noun phrase might be profitably exploited by a coreference system to bypass the resolution of non-anaphoric noun phrases. Perhaps surprisingly, recent attempts to incorporate automatically acquired anaphoricity information into coreference systems, however, have led to the degradation in resolution performance. This paper examines several key issues in computing and using anaphoricity information to improve learning-based coreference systems. In particular, we present a new corpus-based approach to anaphoricity determination. Experiments on three standard coreference data sets demonstrate the effectiveness of our approach. 1 Introduction Noun phrase coreference resolution, the task of determining which noun phrases (NPs) in a text refer to the same real-world entity, has long been considered an important and difficult problem in natural language processing. Identifying the linguistic constraints on when two NPs can co-refer remains an active area of research in the community. One significant constraint on coreference, the non-anaphoricity constraint, specifies that a nonanaphoric NP cannot be coreferent with any of its preceding NPs in a given text. Given the potential usefulness of knowledge of (non-)anaphoricity for coreference resolution, anaphoricity determination has been studied fairly extensively. One common approach involves the design of heuristic rules to identify specific types of (non-)anaphoric NPs such as pleonastic pronouns (e.g., Paice and Husk (1987), Lappin and Leass (1994), Kennedy and Boguraev (1996), Denber (1998)) and definite descriptions (e.g., Vieira and Poesio (2000)). More recently, the problem has been tackled using unsupervised (e.g., Bean and Riloff (1999)) and supervised (e.g., Evans (2001), Ng and Cardie (2002a)) approaches. Interestingly, existing machine learning approaches to coreference resolution have performed reasonably well without anaphoricity determination (e.g., Soon et al. (2001), Ng and Cardie (2002b), Strube and M¨uller (2003), Yang et al. (2003)). Nevertheless, there is empirical evidence that resolution systems might further be improved with anaphoricity information. For instance, our coreference system mistakenly identifies an antecedent for many non-anaphoric common nouns in the absence of anaphoricity information (Ng and Cardie, 2002a). Our goal in this paper is to improve learningbased coreference systems using automatically computed anaphoricity information. In particular, we examine two important, yet largely unexplored, issues in anaphoricity determination for coreference resolution: representation and optimization. Constraint-based vs. feature-based representation. How should the computed anaphoricity information be used by a coreference system? From a linguistic perspective, knowledge of nonanaphoricity is most naturally represented as “bypassing” constraints, with which the coreference system bypasses the resolution of NPs that are determined to be non-anaphoric. But for learning-based coreference systems, anaphoricity information can be simply and naturally accommodated into the machine learning framework by including it as a feature in the instance representation. Local vs. global optimization. 
Should the anaphoricity determination procedure be developed independently of the coreference system that uses the computed anaphoricity information (local optimization), or should it be optimized with respect to coreference performance (global optimization)? The principle of software modularity calls for local optimization. However, if the primary goal is to improve coreference performance, global optimization appears to be the preferred choice. Existing work on anaphoricity determination for anaphora/coreference resolution can be characterized along these two dimensions. Interestingly, most existing work employs constraintbased, locally-optimized methods (e.g., Mitkov et al. (2002) and Ng and Cardie (2002a)), leaving the remaining three possibilities largely unexplored. In particular, to our knowledge, there have been no attempts to (1) globally optimize an anaphoricity determination procedure for coreference performance and (2) incorporate anaphoricity into coreference systems as a feature. Consequently, as part of our investigation, we propose a new corpus-based method for achieving global optimization and experiment with representing anaphoricity as a feature in the coreference system. In particular, we systematically evaluate all four combinations of local vs. global optimization and constraint-based vs. feature-based representation of anaphoricity information in terms of their effectiveness in improving a learning-based coreference system. Results on three standard coreference data sets are somewhat surprising: our proposed globally-optimized method, when used in conjunction with the constraint-based representation, outperforms not only the commonly-adopted locallyoptimized approach but also its seemingly more natural feature-based counterparts. The rest of the paper is structured as follows. Section 2 focuses on optimization issues, discussing locally- and globally-optimized approaches to anaphoricity determination. In Section 3, we give an overview of the standard machine learning framework for coreference resolution. Sections 4 and 5 present the experimental setup and evaluation results, respectively. We examine the features that are important to anaphoricity determination in Section 6 and conclude in Section 7. 2 The Anaphoricity Determination System: Local vs. Global Optimization In this section, we will show how to build a model of anaphoricity determination. We will first present the standard, locally-optimized approach and then introduce our globally-optimized approach. 2.1 The Locally-Optimized Approach In this approach, the anaphoricity model is simply a classifier that is trained and optimized independently of the coreference system (e.g., Evans (2001), Ng and Cardie (2002a)). Building a classifier for anaphoricity determination. A learning algorithm is used to train a classifier that, given a description of an NP in a document, decides whether or not the NP is anaphoric. Each training instance represents a single NP and consists of a set of features that are potentially useful for distinguishing anaphoric and non-anaphoric NPs. The classification associated with a training instance — one of ANAPHORIC or NOT ANAPHORIC — is derived from coreference chains in the training documents. Specifically, a positive instance is created for each NP that is involved in a coreference chain but is not the head of the chain. A negative instance is created for each of the remaining NPs. Applying the classifier. 
To determine the anaphoricity of an NP in a test document, an instance is created for it as during training and presented to the anaphoricity classifier, which returns a value of ANAPHORIC or NOT ANAPHORIC. 2.2 The Globally-Optimized Approach To achieve global optimization, we construct a parametric anaphoricity model with which we optimize the parameter1 for coreference accuracy on heldout development data. In other words, we tighten the connection between anaphoricity determination and coreference resolution by using the parameter to generate a set of anaphoricity models from which we select the one that yields the best coreference performance on held-out data. Global optimization for a constraint-based representation. We view anaphoricity determination as a problem of determining how conservative an anaphoricity model should be in classifying an NP as (non-)anaphoric. Given a constraint-based representation of anaphoricity information for the coreference system, if the model is too liberal in classifying an NP as non-anaphoric, then many anaphoric NPs will be misclassified, ultimately leading to a deterioration of recall and of the overall performance of the coreference system. On the other hand, if the model is too conservative, then only a small fraction of the truly non-anaphoric NPs will be identified, and so the resulting anaphoricity information may not be effective in improving the coreference system. The challenge then is to determine a “good” degree of conservativeness. As a result, we can design a parametric anaphoricity model whose conservativeness can be adjusted via a conservativeness parameter. To achieve global optimization, we can simply tune this parameter to optimize for coreference performance on held-out development data. Now, to implement this conservativeness-based anaphoricity determination model, we propose two methods, each of which is built upon a different definition of conservativeness. Method 1: Varying the Cost Ratio Our first method exploits a parameter present in many off-the-shelf machine learning algorithms for 1We can introduce multiple parameters for this purpose, but to simply the optimization process, we will only consider single-parameter models in this paper. training a classifier — the cost ratio (cr), which is defined as follows. cr := cost of misclassifying a positive instance cost of misclassifying a negative instance Inspection of this definition shows that cr provides a means of adjusting the relative misclassification penalties placed on training instances of different classes. In particular, the larger cr is, the more conservative the classifier is in classifying an instance as negative (i.e., non-anaphoric). Given this observation, we can naturally define the conservativeness of an anaphoricity classifier as follows. We say that classifier A is more conservative than classifier B in determining an NP as non-anaphoric if A is trained with a higher cost ratio than B. Based on this definition of conservativeness, we can construct an anaphoricity model parameterized by cr. Specifically, the parametric model maps a given value of cr to the anaphoricity classifier trained with this cost ratio. (For the purpose of training anaphoricity classifiers with different values of cr, we use RIPPER (Cohen, 1995), a propositional rule learning algorithm.) It should be easy to see that increasing cr makes the model more conservative in classifying an NP as non-anaphoric. 
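As a concrete illustration of this mapping from cr to a trained classifier, consider the following minimal sketch. It is a stand-in rather than the paper's setup: RIPPER is replaced by any learner that accepts per-class misclassification costs (here scikit-learn's class_weight option on logistic regression), and the feature matrix and labels are assumed to come from the instance representation of Section 2.1.

```python
# Minimal sketch of Method 1: a parametric anaphoricity model indexed by the
# cost ratio cr. The paper trains RIPPER with varying cost ratios; here a
# learner with per-class misclassification costs stands in, so the specific
# learner and its options are assumptions of this sketch.
from sklearn.linear_model import LogisticRegression

ANAPHORIC, NOT_ANAPHORIC = 1, 0

def train_anaphoricity_classifier(X_train, y_train, cr=1.0):
    """Return the anaphoricity classifier associated with cost ratio cr.

    cr = cost of misclassifying an ANAPHORIC (positive) instance
         / cost of misclassifying a NOT_ANAPHORIC (negative) instance.
    Larger cr penalizes false NOT_ANAPHORIC decisions more heavily, i.e. the
    classifier becomes more conservative about labeling an NP non-anaphoric.
    """
    clf = LogisticRegression(
        max_iter=1000,
        class_weight={ANAPHORIC: cr, NOT_ANAPHORIC: 1.0},
    )
    clf.fit(X_train, y_train)
    return clf
```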
With this parametric model, we can tune cr to optimize for coreference performance on held-out data. Method 2: Varying the Classification Threshold We can also define conservativeness in terms of the number of NPs classified as non-anaphoric for a given set of NPs. Specifically, given two anaphoricity models A and B and a set of instances I to be classified, we say that A is more conservative than B in determining an NP as non-anaphoric if A classifies fewer instances in I as non-anaphoric than B. Again, this definition is consistent with our intuition regarding conservativeness. We can now design a parametric anaphoricity model based on this definition. First, we train in a supervised fashion a probablistic model of anaphoricity PA(c | i), where i is an instance representing an NP and c is one of the two possible anaphoricity values. (In our experiments, we use maximum entropy classification (MaxEnt) (Berger et al., 1996) to train this probability model.) Then, we can construct a parametric model making binary anaphoricity decisions from PA by introducing a threshold parameter t as follows. Given a specific t (0 ≤t ≤1) and a new instance i, we define an anaphoricity model Mt A in which Mt A(i) = NOT ANAPHORIC if and only if PA(c = NOT ANAPHORIC | i) ≥t. It should be easy to see that increasing t yields progressively more conservative anaphoricity models. Again, t can be tuned using held-out development data. Global optimization for a feature-based representation. We can similarly optimize our proposed conservativeness-based anaphoricity model for coreference performance when anaphoricity information is represented as a feature for the coreference system. Unlike in a constraint-based representation, however, we cannot expect that the recall of the coreference system would increase with the conservativeness parameter. The reason is that we have no control over whether or how the anaphoricity feature is used by the coreference learner. In other words, the behavior of the coreference system is less predictable in comparison to a constraint-based representation. Other than that, the conservativenessbased anaphoricity model is as good to use for global optimization with a feature-based representation as with a constraint-based representation. We conclude this section by pointing out that the locally-optimized approach to anaphoricity determination is indeed a special case of the global one. Unlike the global approach in which the conservativeness parameter values are tuned based on labeled data, the local approach uses “default” parameter values. For instance, when RIPPER is used to train an anaphoricity classifier in the local approach, cr is set to the default value of one. Similarly, when probabilistic anaphoricity decisions generated via a MaxEnt model are converted to binary anaphoricity decisions for subsequent use by a coreference system, t is set to the default value of 0.5. 3 The Machine Learning Framework for Coreference Resolution The coreference system to which our automatically computed anaphoricity information will be applied implements the standard machine learning approach to coreference resolution combining classification and clustering. Below we will give a brief overview of this standard approach. Details can be found in Soon et al. (2001) or Ng and Cardie (2002b). Training an NP coreference classifier. 
After a pre-processing step in which the NPs in a document are automatically identified, a learning algorithm is used to train a classifier that, given a description of two NPs in the document, decides whether they are COREFERENT or NOT COREFERENT. Applying the classifier to create coreference chains. Test texts are processed from left to right. Each NP encountered, NPj, is compared in turn to each preceding NP, NPi. For each pair, a test instance is created as during training and is presented to the learned coreference classifier, which returns a number between 0 and 1 that indicates the likelihood that the two NPs are coreferent. The NP with the highest coreference likelihood value among the preceding NPs with coreference class values above 0.5 is selected as the antecedent of NPj; otherwise, no antecedent is selected for NPj. 4 Experimental Setup In Section 2, we examined how to construct locallyand globally-optimized anaphoricity models. Recall that, for each of these two types of models, the resulting (non-)anaphoricity information can be used by a learning-based coreference system either as hard bypassing constraints or as a feature. Hence, given a coreference system that implements the twostep learning approach shown above, we will be able to evaluate the four different combinations of computing and using anaphoricity information for improving the coreference system described in the introduction. Before presenting evaluation details, we will describe the experimental setup. Coreference system. In all of our experiments, we use our learning-based coreference system (Ng and Cardie, 2002b). Features for anaphoricity determination. In both the locally-optimized and the globallyoptimized approaches to anaphoricity determination described in Section 2, an instance is represented by 37 features that are specifically designed for distinguishing anaphoric and non-anaphoric NPs. Space limitations preclude a description of these features; see Ng and Cardie (2002a) for details. Learning algorithms. For training coreference classifiers and locally-optimized anaphoricity models, we use both RIPPER and MaxEnt as the underlying learning algorithms. However, for training globally-optimized anaphoricity models, RIPPER is always used in conjunction with Method 1 and MaxEnt with Method 2, as described in Section 2.2. In terms of setting learner-specific parameters, we use default values for all RIPPER parameters unless otherwise stated. For MaxEnt, we always train the feature-weight parameters with 100 iterations of the improved iterative scaling algorithm (Della Pietra et al., 1997), using a Gaussian prior to prevent overfitting (Chen and Rosenfeld, 2000). Data sets. We use the Automatic Content Extraction (ACE) Phase II data sets.2 We choose ACE rather than the more widely-used MUC corpus (MUC-6, 1995; MUC-7, 1998) simply because 2See http://www.itl.nist.gov/iad/894.01/ tests/ace for details on the ACE research program. BNEWS NPAPER NWIRE Number of training texts 216 76 130 Number of test texts 51 17 29 Number of training insts (for anaphoricity) 20567 21970 27338 Number of training insts (for coreference) 97036 148850 122168 Table 1: Statistics of the three ACE data sets ACE provides much more labeled data for both training and testing. However, our system was set up to perform coreference resolution according to the MUC rules, which are fairly different from the ACE guidelines in terms of the identification of markables as well as evaluation schemes. 
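Returning briefly to the clustering step of Section 3, the antecedent-selection procedure can be made concrete with a short sketch. The `coref_likelihood` hook stands in for the learned coreference classifier and is an assumption of this illustration, not the system's actual interface.

```python
# Minimal sketch of antecedent selection: each NP is linked to the most
# likely preceding NP whose coreference likelihood exceeds 0.5, if any.
def select_antecedents(nps, coref_likelihood, threshold=0.5):
    antecedent = {}                     # index of NP_j -> index of its antecedent (or None)
    for j in range(len(nps)):
        best_i, best_score = None, threshold
        for i in range(j):              # compare NP_j to every preceding NP_i
            score = coref_likelihood(nps[i], nps[j])
            if score > best_score:      # keep the most likely candidate above the threshold
                best_i, best_score = i, score
        antecedent[j] = best_i          # None means no antecedent is selected for NP_j
    return antecedent
```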
Since our goal is to evaluate the effect of anaphoricity information on coreference resolution, we make no attempt to modify our system to adhere to the rules specifically designed for ACE. The coreference corpus is composed of three data sets made up of three different news sources: Broadcast News (BNEWS), Newspaper (NPAPER), and Newswire (NWIRE). Statistics collected from these data sets are shown in Table 1. For each data set, we train an anaphoricity classifier and a coreference classifier on the (same) set of training texts and evaluate the coreference system on the test texts. 5 Evaluation In this section, we will compare the effectiveness of four approaches to anaphoricity determination (see the introduction) in improving our baseline coreference system. 5.1 Coreference Without Anaphoricity As mentioned above, we use our coreference system as the baseline system where no explicit anaphoricity determination system is employed. Results using RIPPER and MaxEnt as the underlying learners are shown in rows 1 and 2 of Table 2 where performance is reported in terms of recall, precision, and F-measure using the model-theoretic MUC scoring program (Vilain et al., 1995). With RIPPER, the system achieves an F-measure of 56.3 for BNEWS, 61.8 for NPAPER, and 51.7 for NWIRE. The performance of MaxEnt is comparable to that of RIPPER for the BNEWS and NPAPER data sets but slightly worse for the NWIRE data set. 5.2 Coreference With Anaphoricity The Constraint-Based, Locally-Optimized (CBLO) Approach. As mentioned before, in constraint-based approaches, the automatically computed non-anaphoricity information is used as System Variation BNEWS NPAPER NWIRE Experiments L R P F C R P F C R P F C 1 No RIP 57.4 55.3 56.3 60.0 63.6 61.8 53.2 50.3 51.7 2 Anaphoricity ME 60.9 52.1 56.2 65.4 58.6 61.8 54.9 46.7 50.4 3 ConstraintRIP 42.5 77.2 54.8 cr=1 46.7 79.3 58.8† cr=1 42.1 64.2 50.9 cr=1 4 Based, RIP 45.4 72.8 55.9 t=0.5 52.2 75.9 61.9 t=0.5 36.9 61.5 46.1† t=0.5 5 LocallyME 44.4 76.9 56.3 cr=1 50.1 75.7 60.3 cr=1 43.9 63.0 51.7 cr=1 6 Optimized ME 47.3 70.8 56.7 t=0.5 57.1 70.6 63.1∗ t=0.5 38.1 60.0 46.6† t=0.5 7 FeatureRIP 53.5 61.3 57.2 cr=1 58.7 69.7 63.7∗ cr=1 54.2 46.8 50.2† cr=1 8 Based, RIP 58.3 58.3 58.3∗ t=0.5 63.5 57.0 60.1† t=0.5 63.4 35.3 45.3† t=0.5 9 LocallyME 59.6 51.6 55.3† cr=1 65.6 57.9 61.5 cr=1 55.1 46.2 50.3 cr=1 10 Optimized ME 59.6 51.6 55.3† t=0.5 66.0 57.7 61.6 t=0.5 54.9 46.7 50.4 t=0.5 11 ConstraintRIP 54.5 68.6 60.8∗ cr=5 58.4 68.8 63.2∗ cr=4 50.5 56.7 53.4∗ cr=3 12 Based, RIP 54.1 67.1 59.9∗ t=0.7 56.5 68.1 61.7 t=0.65 50.3 53.8 52.0 t=0.7 13 GloballyME 54.8 62.9 58.5∗ cr=5 62.4 65.6 64.0∗ cr=3 52.2 57.0 54.5∗ cr=3 14 Optimized ME 54.1 60.6 57.2 t=0.7 61.7 64.0 62.8∗ t=0.7 52.0 52.8 52.4∗ t=0.7 15 FeatureRIP 60.8 56.1 58.4∗ cr=8 62.2 61.3 61.7 cr=6 54.6 49.4 51.9 cr=8 16 Based, RIP 59.7 57.0 58.3∗ t=0.6 63.6 59.1 61.3 t=0.8 56.7 48.4 52.3 t=0.7 17 GloballyME 59.9 51.0 55.1† cr=9 66.5 57.1 61.4 cr=1 56.3 46.9 51.2∗ cr=10 18 Optimized ME 59.6 51.6 55.3† t=0.95 65.9 57.5 61.4 t=0.95 56.5 46.7 51.1∗ t=0.5 Table 2: Results of the coreference systems using different approaches to anaphoricity determination on the three ACE test data sets. Information on which Learner (RIPPER or MaxEnt) is used to train the coreference classifier, as well as performance results in terms of Recall, Precision, F-measure and the corresponding Conservativeness parameter are provided whenever appropriate. The strongest result obtained for each data set is boldfaced. 
In addition, results that represent statistically significant gains and drops with respect to the baseline are marked with an asterisk (*) and a dagger (†), respectively. hard bypassing constraints, with which the coreference system attempts to resolve only NPs that the anaphoricity classifier determines to be anaphoric. As a result, we hypothesized that precision would increase in comparison to the baseline system. In addition, we expect that recall will drop owing to the anaphoricity classifier’s misclassifications of truly anaphoric NPs. Consequently, overall performance is not easily predictable: F-measure will improve only if gains in precision can compensate for the loss in recall. Results are shown in rows 3-6 of Table 2. Each row corresponds to a different combination of learners employed in training the coreference and anaphoricity classifiers.3 As mentioned in Section 2.2, locally-optimized approaches are a special case of their globally-optimized counterparts, with the conservativeness parameter set to the default value of one for RIPPER and 0.5 for MaxEnt. In comparison to the baseline, we see large gains in precision at the expense of recall. Moreover, CBLO does not seem to be very effective in improving the baseline, in part due to the dramatic loss in recall. In particular, although we see improvements in F-measure in five of the 12 experiments in this group, only one of them is statistically significant.4 3Bear in mind that different learners employed in training anaphoricity classifiers correspond to different parametric methods. For ease of exposition, however, we will refer to the method simply by the learner it employs. 4The Approximate Randomization test described in Noreen Worse still, F-measure drops significantly in three cases. The Feature-Based, Locally-Optimized (FBLO) Approach. The experimental setting employed here is essentially the same as that in CBLO, except that anaphoricity information is incorporated into the coreference system as a feature rather than as constraints. Specifically, each training/test coreference instance i(NPi,NPj) (created from NPj and a preceding NP NPi) is augmented with a feature whose value is the anaphoricity of NPj as computed by the anaphoricity classifier. In general, we hypothesized that FBLO would perform better than the baseline: the addition of an anaphoricity feature to the coreference instance representation might give the learner additional flexibility in creating coreference rules. Similarly, we expect FBLO to outperform its constraint-based counterpart: since anaphoricity information is represented as a feature in FBLO, the coreference learner can incorporate the information selectively rather than as universal hard constraints. Results using the FBLO approach are shown in rows 7-10 of Table 2. Somewhat unexpectedly, this approach is not effective in improving the baseline: F-measure increases significantly in only two of the 12 cases. Perhaps more surprisingly, we see significant drops in F-measure in five cases. To get a bet(1989) is applied to determine if the differences in the Fmeasure scores between two coreference systems are statistically significant at the 0.05 level or higher. 
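In outline, the paired approximate randomization test works by repeatedly swapping the two systems' per-document outputs and recomputing the score difference. The sketch below assumes each system's output is summarized as per-document MUC count tuples; this count-based interface is an assumption of the illustration rather than a detail taken from Noreen (1989) or the paper.

```python
import random

# Minimal sketch of a paired Approximate Randomization test on the F-measure
# difference between two coreference systems. Each system's output is assumed
# to be a list of per-document tuples
# (recall_num, recall_den, precision_num, precision_den).
def corpus_f(counts):
    rn = sum(c[0] for c in counts); rd = sum(c[1] for c in counts)
    pn = sum(c[2] for c in counts); pd = sum(c[3] for c in counts)
    recall = rn / rd if rd else 0.0
    precision = pn / pd if pd else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def approximate_randomization(counts_a, counts_b, trials=9999, seed=0):
    rng = random.Random(seed)
    observed = abs(corpus_f(counts_a) - corpus_f(counts_b))
    extreme = 0
    for _ in range(trials):
        shuffled_a, shuffled_b = [], []
        for ca, cb in zip(counts_a, counts_b):
            if rng.random() < 0.5:       # swap the two systems' outputs on this document
                ca, cb = cb, ca
            shuffled_a.append(ca); shuffled_b.append(cb)
        if abs(corpus_f(shuffled_a) - corpus_f(shuffled_b)) >= observed:
            extreme += 1
    return (extreme + 1) / (trials + 1)  # estimated p-value
```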
System Variation BNEWS (dev) NPAPER (dev) NWIRE (dev) Experiments L R P F C R P F C R P F C 1 ConstraintRIP 62.6 76.3 68.8 cr=5 65.5 73.0 69.1 cr=4 56.1 58.9 57.4 cr=3 2 Based, RIP 62.5 75.5 68.4 t=0.7 63.0 71.7 67.1 t=0.65 56.7 54.8 55.7 t=0.7 3 GloballyME 63.1 71.3 66.9 cr=5 66.2 71.8 68.9 cr=3 57.9 59.7 58.8 cr=3 4 Optimized ME 62.9 70.8 66.6 t=0.7 61.4 74.3 67.3 t=0.65 58.4 55.3 56.8 t=0.7 Table 3: Results of the coreference systems using a constraint-based, globally-optimized approach to anaphoricity determination on the three ACE held-out development data sets. Information on which Learner (RIPPER or MaxEnt) is used to train the coreference classifier as well as performance results in terms of Recall, Precision, F-measure and the corresponding Conservativeness parameter are provided whenever appropriate. The strongest result obtained for each data set is boldfaced. ter idea of why F-measure decreases, we examine the relevant coreference classifiers induced by RIPPER. We find that the anaphoricity feature is used in a somewhat counter-intuitive manner: some of the induced rules posit a coreference relationship between NPj and a preceding NP NPi even though NPj is classified as non-anaphoric. These results seem to suggest that the anaphoricity feature is an irrelevant feature from a machine learning point of view. In comparison to CBLO, the results are mixed: there does not appear to be a clear winner in any of the three data sets. Nevertheless, it is worth noticing that the CBLO systems can be characterized as having high precision/low recall, whereas the reverse is true for FBLO systems in general. As a result, even though CBLO and FBLO systems achieve similar performance, the former is the preferred choice in applications where precision is critical. Finally, we note that there are other ways to encode anaphoricity information in a coreference system. For instance, it is possible to represent anaphoricity as a real-valued feature indicating the probability of an NP being anaphoric rather than as a binary-valued feature. Future work will examine alternative encodings of anaphoricity. The Constraint-Based, Globally-Optimized (CBGO) Approach. As discussed above, we optimize the anaphoricity model for coreference performance via the conservativeness parameter. In particular, we will use this parameter to maximize the F-measure score for a particular data set and learner combination using held-out development data. To ensure a fair comparison between global and local approaches, we do not rely on additional development data in the former; instead we use 2 3 of the original training texts for acquiring the anaphoricity and coreference classifiers and the remaining 1 3 for development for each of the data sets. As far as parameter tuning is concerned, we tested values of 1, 2, . . . , 10 as well as their reciprocals for cr and 0.05, 0.1, . . . , 1.0 for t. In general, we hypothesized that CBGO would outperform both the baseline and the locallyoptimized approaches, since coreference performance is being explicitly maximized. Results using CBGO, which are shown in rows 11-14 of Table 2, are largely consistent with our hypothesis. The best results on all of the three data sets are achieved using this approach. In comparison to the baseline, we see statistically significant gains in F-measure in nine of the 12 experiments in this group. Improvements stem primarily from large gains in precision accompanied by smaller drops in recall. 
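The parameter search behind these results is simple to state in code. The sketch below selects the conservativeness value (cr for Method 1, t for Method 2) whose anaphoricity model yields the best coreference F-measure on the development texts; the two hooks `build_anaphoricity_model` and `coref_f_on_dev` are assumed interfaces, not the paper's actual code.

```python
# Minimal sketch of the global-optimization step: grid search over the
# conservativeness parameter, scored by coreference F-measure on held-out
# development data.
def tune_conservativeness(param_grid, build_anaphoricity_model, coref_f_on_dev):
    best_value, best_f, best_model = None, -1.0, None
    for value in param_grid:
        model = build_anaphoricity_model(value)   # e.g. classifier trained with cr=value
        f_score = coref_f_on_dev(model)           # run the coreference system on dev texts
        if f_score > best_f:
            best_value, best_f, best_model = value, f_score, model
    return best_value, best_model

# Grids reported in the paper: 1..10 and their reciprocals for cr,
# and 0.05, 0.10, ..., 1.0 for t.
CR_GRID = list(range(1, 11)) + [1.0 / i for i in range(2, 11)]
T_GRID = [round(0.05 * k, 2) for k in range(1, 21)]
```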
Perhaps more importantly, CBGO never produces results that are significantly worse than those of the baseline systems on these data sets, unlike CBLO and FBLO. Overall, these results suggest that CBGO is more robust than the locally-optimized approaches in improving the baseline system. As can be seen, CBGO fails to produce statistically significant improvements over the baseline in three cases. The relatively poorer performance in these cases can potentially be attributed to the underlying learner combination. Fortunately, we can use the development data not only for parameter tuning but also in predicting the best learner combination. Table 3 shows the performance of the coreference system using CBGO on the development data, along with the value of the conservativeness parameter used to achieve the results in each case. Using the notation Learner1/Learner2 to denote the fact that Learner1 and Learner2 are used to train the underlying coreference classifier and anaphoricity classifier respectively, we can see that the RIPPER/RIPPER combination achieves the best performance on the BNEWS development set, whereas MaxEnt/RIPPER works best for the other two. Hence, if we rely on the development data to pick the best learner combination for use in testing, the resulting coreference system will outperform the baseline in all three data sets and yield the bestperforming system on all but the NPAPER data sets, achieving an F-measure of 60.8 (row 11), 63.2 (row 11), and 54.5 (row 13) for the BNEWS, NPAPER, 1 2 3 4 5 6 7 8 9 10 50 55 60 65 70 75 80 85 cr Score Recall Precision F−measure Figure 1: Effect of cr on the performance of the coreference system for the NPAPER development data using RIPPER/RIPPER and NWIRE data sets, respectively. Moreover, the high correlation between the relative coreference performance achieved by different learner combinations on the development data and that on the test data also reflects the stability of CBGO. In comparison to the locally-optimized approaches, CBGO achieves better F-measure scores in almost all cases. Moreover, the learned conservativeness parameter in CBGO always has a larger value than the default value employed by CBLO. This provides empirical evidence that the CBLO anaphoricity classifiers are too liberal in classifying NPs as non-anaphoric. To examine the effect of the conservativeness parameter on the performance of the coreference system, we plot in Figure 1 the recall, precision, Fmeasure curves against cr for the NPAPER development data using the RIPPER/RIPPER learner combination. As cr increases, recall rises and precision drops. This should not be surprising, since (1) increasing cr causes fewer anaphoric NPs to be misclassified and allows the coreference system to find a correct antecedent for some of them, and (2) decreasing cr causes more truly non-anaphoric NPs to be correctly classified and prevents the coreference system from attempting to resolve them. The best F-measure in this case is achieved when cr=4. The Feature-Based, Globally-Optimized (FBGO) Approach. The experimental setting employed here is essentially the same as that in the CBGO setting, except that anaphoricity information is incorporated into the coreference system as a feature rather than as constraints. Specifically, each training/test instance i(NPi,NPj) is augmented with a feature whose value is the computed anaphoricity of NPj. 
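To make the feature-based setup concrete, the augmentation can be sketched as follows. The dictionary-based instance representation and the `is_anaphoric` method are assumptions of this sketch rather than the system's actual interfaces.

```python
# Minimal sketch of the feature-based representation: the coreference
# instance i(NP_i, NP_j) gains one extra feature holding the computed
# anaphoricity of NP_j.
def augment_with_anaphoricity(coref_features, np_j, anaphoricity_model):
    augmented = dict(coref_features)              # keep the original coreference features
    augmented["np_j_is_anaphoric"] = int(anaphoricity_model.is_anaphoric(np_j))
    return augmented
```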
The development data is used to select the anaphoricity model (and hence the parameter value) that yields the best-performing coreference system. This model is then used to compute the anaphoricity value for the test instances. As mentioned before, we use the same parametric anaphoricity model as in CBGO for achieving global optimization. Since the parametric model is designed with a constraint-based representation in mind, we hypothesized that global optimization in this case would not be as effective as in CBGO. Nevertheless, we expect that this approach is still more effective in improving the baseline than the locally-optimized approaches. Results using FBGO are shown in rows 15-18 of Table 2. As expected, FBGO is less effective than CBGO in improving the baseline, underperforming its constraint-based counterpart in 11 of the 12 cases. In fact, FBGO is able to significantly improve the corresponding baseline in only four cases. Somewhat surprisingly, FBGO is by no means superior to the locally-optimized approaches with respect to improving the baseline. These results seem to suggest that global optimization is effective only if we have a “good” parameterization that is able to take into account how anaphoricity information will be exploited by the coreference system. Nevertheless, as discussed before, effective global optimization with a feature-based representation is not easy to accomplish. 6 Analyzing Anaphoricity Features So far we have focused on computing and using anaphoricity information to improve the performance of a coreference system. In this section, we examine which anaphoricity features are important in order to gain linguistic insights into the problem. Specifically, we measure the informativeness of a feature by computing its information gain (see p.22 of Quinlan (1993) for details) on our three data sets for training anaphoricity classifiers. Overall, the most informative features are HEAD MATCH (whether the NP under consideration has the same head as one of its preceding NPs), STR MATCH (whether the NP under consideration is the same string as one of its preceding NPs), and PRONOUN (whether the NP under consideration is a pronoun). The high discriminating power of HEAD MATCH and STR MATCH is a probable consequence of the fact that an NP is likely to be anaphoric if there is a lexically similar noun phrase preceding it in the text. The informativeness of PRONOUN can also be expected: most pronominal NPs are anaphoric. Features that determine whether the NP under consideration is a PROPER NOUN, whether it is a BARE SINGULAR or a BARE PLURAL, and whether it begins with an “a” or a “the” (ARTICLE) are also highly informative. This is consistent with our intuition that the (in)definiteness of an NP plays an important role in determining its anaphoricity. 7 Conclusions We have examined two largely unexplored issues in computing and using anaphoricity information for improving learning-based coreference systems: representation and optimization. In particular, we have systematically evaluated all four combinations of local vs. global optimization and constraint-based vs. feature-based representation of anaphoricity information in terms of their effectiveness in improving a learning-based coreference system. 
Extensive experiments on the three ACE coreference data sets using a symbolic learner (RIPPER) and a statistical learner (MaxEnt) for training coreference classifiers demonstrate the effectiveness of the constraint-based, globally-optimized approach to anaphoricity determination, which employs our conservativeness-based anaphoricity model. Not only does this approach improve a “no anaphoricity” baseline coreference system, it is more effective than the commonly-adopted locally-optimized approach without relying on additional labeled data. Acknowledgments We thank Regina Barzilay, Claire Cardie, Bo Pang, and the anonymous reviewers for their invaluable comments on earlier drafts of the paper. This work was supported in part by NSF Grant IIS–0208028. References David Bean and Ellen Riloff. 1999. Corpus-based identification of non-anaphoric noun phrases. In Proceedings of the ACL, pages 373–380. Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. Stanley Chen and Ronald Rosenfeld. 2000. A survey of smoothing techniques for ME models. IEEE Transactions on Speech on Audio Processing, 8(1):37–50. William Cohen. 1995. Fast effective rule induction. In Proceedings of ICML. Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. 1997. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380–393. Michel Denber. 1998. Automatic resolution of anaphora in English. Technical report, Eastman Kodak Co. Richard Evans. 2001. Applying machine learning toward an automatic classification of it. Literary and Linguistic Computing, 16(1):45–57. Christopher Kennedy and Branimir Boguraev. 1996. Anaphor for everyone: Pronominal anaphora resolution without a parser. In Proceedings of COLING, pages 113–118. Shalom Lappin and Herbert Leass. 1994. An algorithm for pronominal anaphora resolution. Computational Linguistics, 20(4):535–562. Ruslan Mitkov, Richard Evans, and Constantin Orasan. 2002. A new, fully automatic version of Mitkov’s knowledge-poor pronoun resolution method. In Al. Gelbukh, editor, Computational Linguistics and Intelligent Text Processing, pages 169–187. MUC-6. 1995. Proceedings of the Sixth Message Understanding Conference (MUC-6). MUC-7. 1998. Proceedings of the Seventh Message Understanding Conference (MUC-7). Vincent Ng and Claire Cardie. 2002a. Identifying anaphoric and non-anaphoricnoun phrases to improve coreference resolution. In Proceedings of COLING, pages 730–736. Vincent Ng and Claire Cardie. 2002b. Improving machine learning approaches to coreference resolution. In Proceedings of the ACL, pages 104–111. Eric W. Noreen. 1989. Computer Intensive Methods for Testing Hypothesis: An Introduction. John Wiley & Sons. Chris Paice and Gareth Husk. 1987. Towards the automatic recognition of anaphoric features in English text: the impersonal pronoun ’it’. Computer Speech and Language, 2. J. Ross Quinlan. 1993. C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann. Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. Michael Strube and Christoph M¨uller. 2003. A machine learning approach to pronoun resolution in spoken dialogue. In Proceedings of the ACL, pages 168–175. Renata Vieira and Massimo Poesio. 2000. An empirically-based system for processing definite descriptions. 
Computational Linguistics, 26(4):539–593. Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the Sixth Message Understanding Conference (MUC-6), pages 45–52. Xiaofeng Yang, Guodong Zhou, Jian Su, and Chew Lim Tan. 2003. Coreference resolution using competitive learning approach. In Proceedings of the ACL, pages 176–183.
A Joint Source-Channel Model for Machine Transliteration Li Haizhou, Zhang Min, Su Jian Institute for Infocomm Research 21 Heng Mui Keng Terrace, Singapore 119613 {hli,sujian,mzhang}@i2r.a-star.edu.sg Abstract Most foreign names are transliterated into Chinese, Japanese or Korean with approximate phonetic equivalents. The transliteration is usually achieved through intermediate phonemic mapping. This paper presents a new framework that allows direct orthographical mapping (DOM) between two different languages, through a joint source-channel model, also called n-gram transliteration model (TM). With the n-gram TM model, we automate the orthographic alignment process to derive the aligned transliteration units from a bilingual dictionary. The n-gram TM under the DOM framework greatly reduces system development effort and provides a quantum leap in improvement in transliteration accuracy over that of other state-of-the-art machine learning algorithms. The modeling framework is validated through several experiments for English-Chinese language pair. 1 Introduction In applications such as cross-lingual information retrieval (CLIR) and machine translation, there is an increasing need to translate out-of-vocabulary words from one language to another, especially from alphabet language to Chinese, Japanese or Korean. Proper names of English, French, German, Russian, Spanish and Arabic origins constitute a good portion of out-of-vocabulary words. They are translated through transliteration, the method of translating into another language by preserving how words sound in their original languages. For writing foreign names in Chinese, transliteration always follows the original romanization. Therefore, any foreign name will have only one Pinyin (romanization of Chinese) and thus in Chinese characters. In this paper, we focus on automatic Chinese transliteration of foreign alphabet names. Because some alphabet writing systems use various diacritical marks, we find it more practical to write names containing such diacriticals as they are rendered in English. Therefore, we refer all foreign-Chinese transliteration to English-Chinese transliteration, or E2C. Transliterating English names into Chinese is not straightforward. However, recalling the original from Chinese transliteration is even more challenging as the E2C transliteration may have lost some original phonemic evidences. The Chinese-English backward transliteration process is also called back-transliteration, or C2E (Knight & Graehl, 1998). In machine transliteration, the noisy channel model (NCM), based on a phoneme-based approach, has recently received considerable attention (Meng et al. 2001; Jung et al, 2000; Virga & Khudanpur, 2003; Knight & Graehl, 1998). In this paper we discuss the limitations of such an approach and address its problems by firstly proposing a paradigm that allows direct orthographic mapping (DOM), secondly further proposing a joint source-channel model as a realization of DOM. Two other machine learning techniques, NCM and ID3 (Quinlan, 1993) decision tree, also are implemented under DOM as reference to compare with the proposed n-gram TM. This paper is organized as follows: In section 2, we present the transliteration problems. In section 3, a joint source-channel model is formulated. In section 4, several experiments are carried out to study different aspects of proposed algorithm. In section 5, we relate our algorithms to other reported work. Finally, we conclude the study with some discussions. 
2 Problems in transliteration Transliteration is a process that takes a character string in source language as input and generates a character string in the target language as output. The process can be seen conceptually as two levels of decoding: segmentation of the source string into transliteration units; and relating the source language transliteration units with units in the target language, by resolving different combinations of alignments and unit mappings. A unit could be a Chinese character or a monograph, a digraph or a trigraph and so on for English. 2.1 Phoneme-based approach The problems of English-Chinese transliteration have been studied extensively in the paradigm of noisy channel model (NCM). For a given English name E as the observed channel output, one seeks a posteriori the most likely Chinese transliteration C that maximizes P(C|E). Applying Bayes rule, it means to find C to maximize P(E,C) = P(E | C)*P(C) (1) with equivalent effect. To do so, we are left with modeling two probability distributions: P(E|C), the probability of transliterating C to E through a noisy channel, which is also called transformation rules, and P(C), the probability distribution of source, which reflects what is considered good Chinese transliteration in general. Likewise, in C2E backtransliteration, we would find E that maximizes P(E,C) = P(C | E)*P(E) (2) for a given Chinese name. In eqn (1) and (2), P(C) and P(E) are usually estimated using n-gram language models (Jelinek, 1991). Inspired by research results of grapheme-tophoneme research in speech synthesis literature, many have suggested phoneme-based approaches to resolving P(E|C) and P(C|E), which approximates the probability distribution by introducing a phonemic representation. In this way, we convert the names in the source language, say E, into an intermediate phonemic representation P, and then convert the phonemic representation into the target language, say Chinese C. In E2C transliteration, the phoneme-based approach can be formulated as P(C|E) = P(C|P)P(P|E) and conversely we have P(E|C) = P(E|P)P(P|C) for C2E back-transliteration. Several phoneme-based techniques have been proposed in the recent past for machine transliteration using transformation-based learning algorithm (Meng et al. 2001; Jung et al, 2000; Virga & Khudanpur, 2003) and using finite state transducer that implements transformation rules (Knight & Graehl, 1998), where both handcrafted and data-driven transformation rules have been studied. However, the phoneme-based approaches are limited by two major constraints, which could compromise transliterating precision, especially in English-Chinese transliteration: 1) Latin-alphabet foreign names are of different origins. For instance, French has different phonic rules from those of English. The phoneme-based approach requires derivation of proper phonemic representation for names of different origins. One may need to prepare multiple language-dependent grapheme-to-phoneme (G2P) conversion systems accordingly, and that is not easy to achieve (The Onomastica Consortium, 1995). For example, /Lafontant/ is transliterated into 拉丰唐(La-FengTang) while /Constant/ becomes 康斯坦特(KangSi-Tan-Te) ,where syllable /-tant/ in the two names are transliterated differently depending on the names’ language of origin. 
2) Suppose that language dependent graphemeto-phoneme systems are attainable, obtaining Chinese orthography will need two further steps: a) conversion from generic phonemic representation to Chinese Pinyin; b) conversion from Pinyin to Chinese characters. Each step introduces a level of imprecision. Virga and Khudanpur (2003) reported 8.3% absolute accuracy drops when converting from Pinyin to Chinese characters, due to homophone confusion. Unlike Japanese katakana or Korean alphabet, Chinese characters are more ideographic than phonetic. To arrive at an appropriate Chinese transliteration, one cannot rely solely on the intermediate phonemic representation. 2.2 Useful orthographic context To illustrate the importance of contextual information in transliteration, let’s take name /Minahan/ as an example, the correct segmentation should be /Mi-na-han/, to be transliterated as 米纳-汉 (Pinyin: Mi-Na-Han). English /mi- -na- -han/ Chinese 米 纳 汉 Pinyin Mi Nan Han However, a possible segmentation /Min-ah-an/ could lead to an undesirable syllabication of 明阿-安 (Pinyin: Min-A-An). English /min- -ah- -an/ Chinese 明 阿 安 Pinyin Min A An According to the transliteration guidelines, a wise segmentation can be reached only after exploring the combination of the left and right context of transliteration units. From the computational point of view, this strongly suggests using a contextual n-gram as the knowledge base for the alignment decision. Another example will show us how one-to-many mappings could be resolved by context. Let’s take another name /Smith/ as an example. Although we can arrive at an obvious segmentation /s-mi-th/, there are three Chinese characters for each of /s-/, /-mi-/ and /-th/. Furthermore, /s-/ and /-th/ correspond to overlapping characters as well, as shown next. English /s- -mi- -th/ Chinese 1 史 米 斯 Chinese 2 斯 密 史 Chinese 3 思 麦 瑟 A human translator will use transliteration rules between English syllable sequence and Chinese character sequence to obtain the best mapping 史密-斯, as indicated in italic in the table above. To address the issues in transliteration, we propose a direct orthographic mapping (DOM) framework through a joint source-channel model by fully exploring orthographic contextual information, aiming at alleviating the imprecision introduced by the multiple-step phoneme-based approach. 3 Joint source-channel model In view of the close coupling of the source and target transliteration units, we propose to estimate P(E,C) by a joint source-channel model, or n-gram transliteration model (TM). For K aligned transliteration units, we have ) ... , , ... , ( ) , ( 2 1 2 1 K K c c c e e e P C E P = ) , ... , , , ( 2 1 K c e c e c e P > < > < > < = (3) ∏ = − > < > < = K k k k c e c e P 1 1 1 ) , | , ( which provides an alternative to the phonemebased approach for resolving eqn. (1) and (2) by eliminating the intermediate phonemic representation. Unlike the noisy-channel model, the joint source-channel model does not try to capture how source names can be mapped to target names, but rather how source and target names can be generated simultaneously. In other words, we estimate a joint probability model that can be easily marginalized in order to yield conditional probability models for both transliteration and back-transliteration. Suppose that we have an English name m x x x ... 2 1 = α and a Chinese transliteration ny y y ... 2 1 = β where ix are letters and jy are Chinese characters. Oftentimes, the number of letters is different from the number of Chinese characters. 
A Chinese character may correspond to a letter substring in English or vice versa. m i i x x x x x x x ... ... 2 1 3 2 1 + + n j y y y y ... ... 2 1 where there exists an alignment γ with > =< > < 1 1 1 , , y x c e > =< > < 2 3 2 2 , , y x x c e … and > =< > < n m K y x c e , , . A transliteration unit correspondence > < c e, is called a transliteration pair. Then, the E2C transliteration can be formulated as ) , , ( max arg , γ β α β γ β P = (4) and similarly the C2E back-transliteration as ) , , ( max arg , γ β α α γ α P = (5) An n-gram transliteration model is defined as the conditional probability, or transliteration probability, of a transliteration pair k c e > < , depending on its immediate n predecessor pairs: ) , , ( ) , ( γ β α P C E P = ∏ = − + − > < > < = K k k n k k c e c e P 1 1 1) , | , ( (6) 3.1 Transliteration alignment A bilingual dictionary contains entries mapping English names to their respective Chinese transliterations. Like many other solutions in computational linguistics, it is possible to automatically analyze the bilingual dictionary to acquire knowledge in order to map new English names to Chinese and vice versa. Based on the transliteration formulation above, a transliteration model can be built with transliteration unit’s ngram statistics. To obtain the statistics, the bilingual dictionary needs to be aligned. The maximum likelihood approach, through EM algorithm (Dempster, 1977), allows us to infer such an alignment easily as described in the table below. The aligning process is different from that of transliteration given in eqn. (4) or (5) in that, here we have fixed bilingual entries, α and β . The aligning process is just to find the alignment segmentation γ between the two strings that maximizes the joint probability: ) , , ( max arg γ β α γ γ P = (7) A set of transliteration pairs that is derived from the aligning process forms a transliteration table, which is in turn used in the transliteration decoding. As the decoder is bounded by this table, it is important to make sure that the training database covers as much as possible the potential transliteration patterns. Here are some examples of resulting alignment pairs. 斯|s 尔|l 特|t 德|d 克|k 布|b 格|g 尔|r 尔|ll 克|c 罗|ro 里|ri 曼|man 姆|m 普|p 德|de 拉|ra 尔|le 阿|a 伯|ber 拉|la 森|son 顿|ton 特|tt 雷|re 科|co 奥|o 埃|e 马|ma 利|ley 利|li 默|mer Knowing that the training data set will never be sufficient for every n-gram unit, different smoothing approaches are applied, for example, by using backoff or class-based models, which can be found in statistical language modeling literatures (Jelinek, 1991). 3.2 DOM: n-gram TM vs. NCM Although in the literature, most noisy channel models (NCM) are studied under phoneme-based paradigm for machine transliteration, NCM can also be realized under direct orthographic mapping (DOM). Next, let’s look into a bigram case to see what n-gram TM and NCM present to us. For E2C conversion, re-writing eqn (1) and eqn (6) , we have ∏ = − ≈ K k k k k k c c P c e P P 1 1) | ( ) | ( ) , , ( γ β α (8) ) , , ( γ β α P ) , | , ( 1 1 − = > < > < ≈∏ k k K k c e c e P (9) The formulation of eqn. (8) could be interpreted as a hidden Markov model with Chinese characters as its hidden states and English transliteration units as the observations (Rabiner, 1989). The number of parameters in the bigram TM is potentially 2 T , while in the noisy channel model (NCM) it’s 2 C T + , where T is the number of transliteration pairs and C is the number of Chinese transliteration units. In eqn. 
(9), the current transliteration depends on both Chinese and English transliteration history while in eqn. (8), it depends only on the previous Chinese unit. As 2 2 C T T + >> , an n-gram TM gives a finer description than that of NCM. The actual size of models largely depends on the availability of training data. In Table 1, one can get an idea of how they unfold in a real scenario. With adequately sufficient training data, n-gram TM is expected to outperform NCM in the decoding. A perplexity study in section 4.1 will look at the model from another perspective. 4 The experiments1 We use a database from the bilingual dictionary “Chinese Transliteration of Foreign Personal Names” which was edited by Xinhua News Agency and was considered the de facto standard of personal name transliteration in today’s Chinese press. The database includes a collection of 37,694 unique English entries and their official Chinese transliteration. The listing includes personal names of English, French, Spanish, German, Arabic, Russian and many other origins. The database is initially randomly distributed into 13 subsets. In the open test, one subset is withheld for testing while the remaining 12 subsets are used as the training materials. This process is repeated 13 times to yield an average result, which is called the 13-fold open test. After experiments, we found that each of the 13-fold open tests gave consistent error rates with less than 1% deviation. Therefore, for simplicity, we randomly select one of the 13 subsets, which consists of 2896 entries, as the standard open test set to report results. In the close test, all data entries are used for training and testing. 1 demo at http://nlp.i2r.a-star.edu.sg/demo.htm The Expectation-Maximization algorithm 1. Bootstrap initial random alignment 2. Expectation: Update n-gram statistics to estimate probability distribution 3. Maximization: Apply the n-gram TM to obtain new alignment 4. Go to step 2 until the alignment converges 5. Derive a list transliteration units from final alignment as transliteration table 4.1 Modeling The alignment of transliteration units is done fully automatically along with the n-gram TM training process. To model the boundary effects, we introduce two extra units <s> and </s> for start and end of each name in both languages. The EM iteration converges at 8th round when no further alignment changes are reported. Next are some statistics as a result of the model training: # close set bilingual entries (full data) 37,694 # unique Chinese transliteration (close) 28,632 # training entries for open test 34,777 # test entries for open test 2,896 # unique transliteration pairs T 5,640 # total transliteration pairs T W 119,364 # unique English units E 3,683 # unique Chinese units C 374 # bigram TM ) , | , ( 1 − > < > < k k c e c e P 38,655 # NCM Chinese bigram ) | ( 1 − k k c c P 12,742 Table 1. Modeling statistics The most common metric for evaluating an ngram model is the probability that the model assigns to test data, or perplexity (Jelinek, 1991). For a test set W composed of V names, where each name has been aligned into a sequence of transliteration pair tokens, we can calculate the probability of test set ∏ = = V v v v v P W p 1 ) , , ( ) ( γ β α by applying the n-gram models to the token sequence. The cross-entropy ) (W H p of a model on data W is defined as ) ( log 1 ) ( 2 W p W W H T p − = where T W is the total number of aligned transliteration pair tokens in the data W. 
The perplexity ) (W PPp of a model is the reciprocal of the average probability assigned by the model to each aligned pair in the test set W as ) ( 2 ) ( W H p p W PP = . Clearly, lower perplexity means that the model describes better the data. It is easy to understand that closed test always gives lower perplexity than open test. TM open NCM open TM closed NCM closed 1-gram 670 729 655 716 2-gram 324 512 151 210 3-gram 306 487 68 127 Table 2. Perplexity study of bilingual database We have the perplexity reported in Table 2 on the aligned bilingual dictionary, a database of 119,364 aligned tokens. The NCM perplexity is computed using n-gram equivalents of eqn. (8) for E2C transliteration, while TM perplexity is based on those of eqn (9) which applies to both E2C and C2E. It is shown that TM consistently gives lower perplexity than NCM in open and closed tests. We have good reason to expect TM to provide better transliteration results which we expect to be confirmed later in the experiments. The Viterbi algorithm produces the best sequence by maximizing the overall probability, ) , , ( γ β α P . In CLIR or multilingual corpus alignment (Virga and Khudanpur, 2003), N-best results will be very helpful to increase chances of correct hits. In this paper, we adopted an N-best stack decoder (Schwartz and Chow, 1990) in both TM and NCM experiments to search for N-best results. The algorithm also allows us to apply higher order n-gram such as trigram in the search. 4.2 E2C transliteration In this experiment, we conduct both open and closed tests for TM and NCM models under DOM paradigm. Results are reported in Table 3 and Table 4. open (word) open (char) closed (word) closed (char) 1-gram 45.6% 21.1% 44.8% 20.4% 2-gram 31.6% 13.6% 10.8% 4.7% 3-gram 29.9% 10.8% 1.6% 0.8% Table 3. E2C error rates for n-gram TM tests. open (word) open (char) closed (word) closed (char) 1-gram 47.3% 23.9% 46.9% 22.1% 2-gram 39.6% 20.0% 16.4% 10.9% 3-gram 39.0% 18.8% 7.8% 1.9% Table 4. E2C error rates for n-gram NCM tests In word error report, a word is considered correct only if an exact match happens between transliteration and the reference. The character error rate is the sum of deletion, insertion and substitution errors. Only the top choice in N-best results is used for error rate reporting. Not surprisingly, one can see that n-gram TM, which benefits from the joint source-channel model coupling both source and target contextual information into the model, is superior to NCM in all the test cases. 4.3 C2E back-transliteration The C2E back-transliteration is more challenging than E2C transliteration. Not many studies have been reported in this area. It is common that multiple English names are mapped into the same Chinese transliteration. In Table 1, we see only 28,632 unique Chinese transliterations exist for 37,694 English entries, meaning that some phonemic evidence is lost in the process of transliteration. To better understand the task, let’s compare the complexity of the two languages presented in the bilingual dictionary. Table 1 also shows that the 5,640 transliteration pairs are cross mappings between 3,683 English and 374 Chinese units. In order words, on average, for each English unit, we have 1.53 = 5,640/3,683 Chinese correspondences. In contrast, for each Chinese unit, we have 15.1 = 5,640/374 English back-transliteration units! Confusion is increased tenfold going backward. The difficulty of back-transliteration is also reflected by the perplexity of the languages as in Table 5. 
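The perplexity figures above can be reproduced, in outline, by summing log-probabilities over the aligned transliteration-pair tokens. The sketch below assumes a `pair_log2prob(pair, history)` hook for the n-gram model; this interface is illustrative, not the authors' implementation.

```python
# Minimal sketch of cross-entropy and perplexity over a set W of aligned
# names, following H_p(W) = -(1/|W_T|) * log2 p(W) and PP_p(W) = 2^{H_p(W)}.
# `pair_log2prob(pair, history)` is an assumed hook returning the n-gram
# model's log2 probability of a transliteration pair given its history.
def perplexity(aligned_names, pair_log2prob):
    total_log2p, n_tokens = 0.0, 0
    for name in aligned_names:           # each name is a list of <e, c> pair tokens
        history = []
        for pair in name:
            total_log2p += pair_log2prob(pair, tuple(history))
            history.append(pair)
            n_tokens += 1
    cross_entropy = -total_log2p / n_tokens
    return 2.0 ** cross_entropy
```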
Based on the same alignment tokenization, we estimate the monolingual language perplexity for Chinese and English independently using the n-gram language models ) | ( 1 1 − + − k n k k c c P and ) | ( 1 1 − + − k n k k e e P . Without surprise, Chinese names have much lower perplexity than English names thanks to fewer Chinese units. This contributes to the success of E2C but presents a great challenge to C2E backtransliteration. 1-gram 2-gram 3-gram Chinese 207/206 97/86 79/45 English 710/706 265/152 234/67 Table 5 language perplexity comparison (open/closed test) open (word) open (letter) closed (word) closed (letter) 1 gram 82.3% 28.2% 81% 27.7% 2 gram 63.8% 20.1% 40.4% 12.3% 3 gram 62.1% 19.6% 14.7% 5.0% Table 6. C2E error rate for n-gram TM tests E2C open E2C closed C2E open C2E closed 1-best 29.9% 1.6% 62.1% 14.7% 5-best 8.2% 0.94% 43.3% 5.2% 10-best 5.4% 0.90% 24.6% 4.8% Table 7. N-best word error rates for 3-gram TM tests A back-transliteration is considered correct if it falls within the multiple valid orthographically correct options. Experiment results are reported in Table 6. As expected, C2E error rate is much higher than that of E2C. In this paper, the n-gram TM model serves as the sole knowledge source for transliteration. However, if secondary knowledge, such as a lookup table of valid target transliterations, is available, it can help reduce error rate by discarding invalid transliterations top-down the N choices. In Table 7, the word error rates for both E2C and C2E are reported which imply potential error reduction by secondary knowledge source. The N-best error rates are reduced significantly at 10-best level as reported in Table 7. 5 Discussions It would be interesting to relate n-gram TM to other related framework. 5.1 DOM: n-gram TM vs. ID3 In section 4, one observes that contextual information in both source and target languages is essential. To capture them in the modeling, one could think of decision tree, another popular machine learning approach. Under the DOM framework, here is the first attempt to apply decision tree in E2C and C2E transliteration. With the decision tree, given a fixed size learning vector, we used top-down induction trees to predict the corresponding output. Here we implement ID3 (Quinlan, 1993) algorithm to construct the decision tree which contains questions and return values at terminal nodes. Similar to n-gram TM, for unseen names in open test, ID3 has backoff smoothing, which lies on the default case which returns the most probable value as its best guess for a partial tree path according to the learning set. In the case of E2C transliteration, we form a learning vector of 6 attributes by combining 2 left and 2 right letters around the letter of focus ke and 1 previous Chinese unit 1 − kc . The process is illustrated in Table 8, where both English and Chinese contexts are used to infer a Chinese character. Similarly, 4 attributes combining 1 left, 1 centre and 1 right Chinese character and 1 previous English unit are used for the learning vector in C2E test. An aligned bilingual dictionary is needed to build the decision tree. To minimize the effects from alignment variation, we use the same alignment results from section 4. Two trees are built for two directions, E2C and C2E. The results are compared with those 3-gram TM in Table 9. 2 − ke 1 − ke ke 1 + ke 2 + ke 1 − kc kc _ _ N I C _ > 尼 _ N I C E 尼 > _ N I C E _ _ > 斯 I C E _ _ 斯 > _ Table 8. 
E2C transliteration using ID3 decision tree for transliterating Nice to 尼斯 (尼|NI 斯|CE) open closed ID3 E2C 39.1% 9.7% 3-gram TM E2C 29.9% 1.6% ID3 C2E 63.3% 38.4% 3-gram TM C2E 62.1% 14.7% Table 9. Word error rate ID3 vs. 3-gram TM One observes that n-gram TM consistently outperforms ID3 decision tree in all tests. Three factors could have contributed: 1) English transliteration unit size ranges from 1 letter to 7 letters. The fixed size windows in ID3 obviously find difficult to capture the dynamics of various ranges. n-gram TM seems to have better captured the dynamics of transliteration units; 2) The backoff smoothing of n-gram TM is more effective than that of ID3; 3) Unlike n-gram TM, ID3 requires a separate aligning process for bilingual dictionary. The resulting alignment may not be optimal for tree construction. Nevertheless, ID3 presents another successful implementation of DOM framework. 5.2 DOM vs. phoneme-based approach Due to lack of standard data sets, it is difficult to compare the performance of the n-gram TM to that of other approaches. For reference purpose, we list some reported studies on other databases of E2C transliteration tasks in Table 10. As in the references, only character and Pinyin error rates are reported, we only include our character and Pinyin error rates for easy reference. The reference data are extracted from Table 1 and 3 of (Virga and Khudanpur 2003). As we have not found any C2E result in the literature, only E2C results are compared here. The first 4 setups by Virga et al all adopted the phoneme-based approach in the following steps: 1) English name to English phonemes; 2) English phonemes to Chinese Pinyin; 3) Chinese Pinyin to Chinese characters. It is obvious that the n-gram TM compares favorably to other techniques. n-gram TM presents an error reduction of 74.6%=(42.5-10.8)/42.5% for Pinyin over the best reported result, Huge MT (Big MT) test case, which is noteworthy. The DOM framework shows a quantum leap in performance with n-gram TM being the most successful implementation. The n-gram TM and ID3 under direct orthographic mapping (DOM) paradigm simplify the process and reduce the chances of conversion errors. As a result, n-gram TM and ID3 do not generate Chinese Pinyin as intermediate results. It is noted that in the 374 legitimate Chinese characters for transliteration, character to Pinyin mapping is unique while Pinyin to character mapping could be one to many. Since we have obtained results in character already, we expect less Pinyin error than character error should a character-to-Pinyin mapping be needed. System Trainin g size Test size Pinyin errors Char errors Meng et al 2,233 1,541 52.5% N/A Small MT 2,233 1,541 50.8% 57.4% Big MT 3,625 250 49.1% 57.4% Huge MT (Big MT) 309,01 9 3,122 42.5% N/A 3-gram TM/DOM 34,777 2,896 < 10.8% 10.8% ID3/DOM 34,777 2,896 < 15.6% 15.6% Table 10. Performance reference in recent studies 6 Conclusions In this paper, we propose a new framework (DOM) for transliteration. n-gram TM is a successful realization of DOM paradigm. It generates probabilistic orthographic transformation rules using a data driven approach. By skipping the intermediate phonemic interpretation, the transliteration error rate is reduced significantly. Furthermore, the bilingual aligning process is integrated into the decoding process in n-gram TM, which allows us to achieve a joint optimization of alignment and transliteration automatically. 
Unlike other related work where pre-alignment is needed, the new framework greatly reduces the development efforts of machine transliteration systems. Although the framework is implemented on an English-Chinese personal name data set, without loss of generality, it well applies to transliteration of other language pairs such as English/Korean and English/Japanese. It is noted that place and company names are sometimes translated in combination of transliteration and meanings, for example, /Victoria-Fall/ becomes 维多利亚瀑布 (Pinyin:Wei Duo Li Ya Pu Bu). As the proposed framework allows direct orthographical mapping, it can also be easily extended to handle such name translation. We expect to see the proposed model to be further explored in other related areas. References Dempster, A.P., N.M. Laird and D.B.Rubin, 1977. Maximum likelihood from incomplete data via the EM algorithm, J. Roy. Stat. Soc., Ser. B. Vol. 39, pp138 Helen M. Meng, Wai-Kit Lo, Berlin Chen and Karen Tang. 2001. Generate Phonetic Cognates to Handle Name Entities in English-Chinese cross-language spoken document retrieval, ASRU 2001 Jelinek, F. 1991, Self-organized language modeling for speech recognition, In Waibel, A. and Lee K.F. (eds), Readings in Speech Recognition, Morgan Kaufmann., San Mateo, CA K. Knight and J. Graehl. 1998. Machine Transliteration, Computational Linguistics 24(4) Paola Virga, Sanjeev Khudanpur, 2003. Transliteration of Proper Names in Crosslingual Information Retrieval. ACL 2003 workshop MLNER Quinlan J. R. 1993, C4.5 Programs for machine learning, Morgan Kaufmann , San Mateo, CA Rabiner, Lawrence R. 1989, A tutorial on hidden Markov models and selected applications in speech recognition, Proceedings of the IEEE 77(2) Schwartz, R. and Chow Y. L., 1990, The N-best algorithm: An efficient and Exact procedure for finding the N most likely sentence hypothesis, Proceedings of ICASSP 1990, Albuquerque, pp 81-84 Sung Young Jung, Sung Lim Hong and Eunok Paek, 2000, An English to Korean Transliteration Model of Extended Markov Window, Proceedings of COLING The Onomastica Consortium, 1995. The Onomastica interlanguage pronunciation lexicon, Proceedings of EuroSpeech, Madrid, Spain, Vol. 1, pp829-832 Xinhua News Agency, 1992, Chinese transliteration of foreign personal names, The Commercial Press
2004
21
Collocation Translation Acquisition Using Monolingual Corpora Yajuan LÜ Microsoft Research Asia 5F Sigma Center, No. 49 Zhichun Road, Haidian District, Beijing, China, 100080 [email protected] Ming ZHOU Microsoft Research Asia 5F Sigma Center, No. 49 Zhichun Road, Haidian District, Beijing, China, 100080 [email protected] Abstract Collocation translation is important for machine translation and many other NLP tasks. Unlike previous methods using bilingual parallel corpora, this paper presents a new method for acquiring collocation translations by making use of monolingual corpora and linguistic knowledge. First, dependency triples are extracted from Chinese and English corpora with dependency parsers. Then, a dependency triple translation model is estimated using the EM algorithm based on a dependency correspondence assumption. The generated triple translation model is used to extract collocation translations from two monolingual corpora. Experiments show that our approach outperforms the existing monolingual corpus based methods in dependency triple translation and achieves promising results in collocation translation extraction. 1 Introduction A collocation is an arbitrary and recurrent word combination (Benson, 1990). Previous work in collocation acquisition varies in the kinds of collocations they detect. These range from twoword to multi-word, with or without syntactic structure (Smadja 1993; Lin, 1998; Pearce, 2001; Seretan et al. 2003). In this paper, a collocation refers to a recurrent word pair linked with a certain syntactic relation. For instance, <solve, verb-object, problem> is a collocation with a syntactic relation verb-object. Translation of collocations is difficult for nonnative speakers. Many collocation translations are idiosyncratic in the sense that they are unpredictable by syntactic or semantic features. Consider Chinese to English translation. The translations of “解决” can be “solve” or “resolve”. The translations of “问题” can be “problem” or “issue”. However, translations of the collocation “解决 ~ 问题” as “solve~problem” or “resolve~ issue” is preferred over “solve~issue” or “resolve ~problem”. Automatically acquiring these collocation translations will be very useful for machine translation, cross language information retrieval, second language learning and many other NLP applications. (Smadja et al., 1996; Gao et al., 2002; Wu and Zhou, 2003). Some studies have been done for acquiring collocation translations using parallel corpora (Smadja et al, 1996; Kupiec, 1993; Echizen-ya et al., 2003). These works implicitly assume that a bilingual corpus on a large scale can be obtained easily. However, despite efforts in compiling parallel corpora, sufficient amounts of such corpora are still unavailable. Instead of heavily relying on bilingual corpora, this paper aims to solve the bottleneck in a different way: to mine bilingual knowledge from structured monolingual corpora, which can be more easily obtained in a large volume. Our method is based on the observation that despite the great differences between Chinese and English, the main dependency relations tend to have a strong direct correspondence (Zhou et al., 2001). Based on this assumption, a new translation model based on dependency triples is proposed. The translation probabilities are estimated from two monolingual corpora using the EM algorithm with the help of a bilingual translation dictionary. Experimental results show that the proposed triple translation model outperforms the other three models in comparison. 
The obtained triple translation model is also used for collocation translation extraction. Evaluation results demonstrate the effectiveness of our method. The remainder of this paper is organized as follows. Section 2 provides a brief description on the related work. Section 3 describes our triple translation model and training algorithm. Section 4 extracts collocation translations from two independent monolingual corpora. Section 5 evaluates the proposed method, and the last section draws conclusions and presents the future work. 2 Related work There has been much previous work done on monolingual collocation extraction. They can in general be classified into two types: window-based and syntax-based methods. The former extracts collocations within a fixed window (Church and Hanks 1990; Smadja, 1993). The latter extracts collocations which have a syntactic relationship (Lin, 1998; Seretan et al., 2003). The syntax-based method becomes more favorable with recent significant increases in parsing efficiency and accuracy. Several metrics have been adopted to measure the association strength in collocation extraction. Thanopoulos et al. (2002) give comparative evaluations on these metrics. Most previous research in translation knowledge acquisition is based on parallel corpora (Brown et al., 1993). As for collocation translation, Smadja et al. (1996) implement a system to extract collocation translations from a parallel EnglishFrench corpus. English collocations are first extracted using the Xtract system, then corresponding French translations are sought based on the Dice coefficient. Echizen-ya et al. (2003) propose a method to extract bilingual collocations using recursive chain-link-type learning. In addition to collocation translation, there is also some related work in acquiring phrase or term translations from parallel corpus (Kupiec, 1993; Yamamoto and Matsumoto 2000). Since large aligned bilingual corpora are hard to obtain, some research has been conducted to exploit translation knowledge from non-parallel corpora. Their work is mainly on word level. Koehn and Knight (2000) presents an approach to estimating word translation probabilities using unrelated monolingual corpora with the EM algorithm. The method exhibits promising results in selecting the right translation among several options provided by bilingual dictionary. Zhou et al.(2001) proposes a method to simulate translation probability with a cross language similarity score, which is estimated from monolingual corpora based on mutual information. The method achieves good results in word translation selection. In addition, (Dagan and Itai, 1994) and (Li, 2002) propose using two monolingual corpora for word sense disambiguation. (Fung, 1998) uses an IR approach to induce new word translations from comparable corpora. (Rapp, 1999) and (Koehn and Knight, 2002) extract new word translations from non-parallel corpus. (Cao and Li, 2002) acquire noun phrase translations by making use of web data. (Wu and Zhou, 2003) also make full use of large scale monolingual corpora and limited bilingual corpora for synonymous collocation extraction. 3 Training a triple translation model from monolingual corpora In this section, we first describe the dependency correspondence assumption underlying our approach. Then a dependency triple translation model and the monolingual corpus based training algorithm are proposed. The obtained triple translation model will be used for collocation translation extraction in next section. 
3.1 Dependency correspondence between Chinese and English A dependency triple consists of a head, a dependant, and a dependency relation. Using a dependency parser, a sentence can be analyzed into dependency triples. We represent a triple as (w1,r,w2), where w1 and w2 are words and r is the dependency relation. It means that w2 has a dependency relation r with w1. For example, a triple (overcome, verb-object, difficulty) means that “difficulty” is the object of the verb “overcome”. Among all the dependency relations, we only consider the following three key types that we think, are the most important in text analysis and machine translation: verb-object (VO), nounadj(AN), and verb- adv(AV). It is our observation that there is a strong correspondence in major dependency relations in the translation between English and Chinese. For example, an object-verb relation in Chinese (e.g.(克服, VO, 困难)) is usually translated into the same verb-object relation in English(e.g. (overcome, VO, difficulty)). This assumption has been experimentally justified based on a large and balanced bilingual corpus in our previous work (Zhou et al., 2001). We come to the conclusion that more than 80% of the above dependency relations have a one-one mapping between Chinese and English. We can conclude that there is indeed a very strong correspondence between Chinese and English in the three considered dependency relations. This fact will be used to estimate triple translation model using two monolingual corpora. 3.2 Triple translation model According to Bayes’s theorem, given a Chinese triple ) , , ( 2 1 c r c c c tri = , and the set of its candidate English triple translations ) , , ( 2 1 e r e e e tri = , the best English triple ) ˆ , , ˆ( ˆ 2 1 e r e e e tri = is the one that maximizes the Equation (1): ) | ( ) ( max arg ) ( / ) | ( ) ( max arg ) | ( max arg ˆ tri tri tri e tri tri tri tri e tri tri e tri e c p e p c p e c p e p c e p e tri tri tri = = = (1) where ) ( tri e p is usually called the language model and ) | ( tri tri e c p is usually called the translation model. Language Model The language model ) ( tri e p is calculated with English triples database. In order to tackle with the data sparseness problem, we smooth the language model with an interpolation method, as described below. When the given English triple occurs in the corpus, we can calculate it as in Equation (2). N e r e freq e p e tri ) , , ( ) ( 2 1 = (2) where ) , , ( 2 1 e r e freq e represents the frequency of triple tri e . N represents the total counts of all the English triples in the training corpus. For an English triple ) , , ( 2 1 e r e e e tri = , if we assume that two words 1e and 2e are conditionally independent given the relation er , Equation (2) can be rewritten as in (3)(Lin, 1998). ) | ( ) | ( ) ( ) ( 2 1 e e e tri r e p r e p r p e p = (3) where N r freq r p e e ,*) (*, ) ( = , ,*) (*, ,*) , ( ) | ( 1 1 e e e r freq r e freq r e p = , ,*) (*, ) , (*, ) | ( 2 2 2 e e r freq e r freq r e p = . The wildcard symbol * means it can be any word or relation. With Equations (2) and (3), we get the interpolated language model as shown in (4). ) | ( ) | ( ) ( ) 1( ) ( ) ( 2 1 e e e tri tri r e p r e p r p N e freq e p λ λ − + = (4) where 1 0 < < λ . λ is calculated as below: ) ( 1 1 1 tri e freq + − = λ (5) Translation Model We simplify the translation model according the following two assumptions. 
Assumption 1: Given an English triple tri e , and the corresponding Chinese dependency relation cr , 1c and 2c are conditionally independent. We have: ) | ( ) , | ( ) , | ( ) | , , ( ) | ( 2 1 2 1 tri c tri c tri c tri c tri tri e r p e r c p e r c p e c r c p e c p = = (6) Assumption 2: For an English triple tri e , assume that ic only depends on {1,2}) (i ∈ ie , and cr only depends on er . Equation (6) is rewritten as: ) | ( ) | ( ) | ( ) | ( ) , | ( ) , | ( ) | ( 2 2 1 1 2 1 e c tri e tri c tri c tri tri r r p e c p e c p e r p e r c p e r c p e c p = = (7) Notice that ) | ( 1 1 e c p and ) | ( 2 2 e c p are translation probabilities within triples, they are different from the unrestricted probabilities such as the ones in IBM models (Brown et al., 1993). We distinguish translation probability between head ( ) | ( 1 1 e c p ) and dependant ( ) | ( 2 2 e c p ). In the rest of the paper, we use ) | ( e c phead and ) | ( e c pdep to denote the head translation probability and dependant translation probability respectively. As the correspondence between the same dependency relation across English and Chinese is strong, we simply assume 1 ) | ( = e c r r p for the corresponding er and cr , and 0 ) | ( = e c r r p for the other cases. ) | ( 1 1 e c phead and ) | ( 2 2 e c pdep cannot be estimated directly because there is no triple-aligned corpus available. Here, we present an approach to estimating these probabilities from two monolingual corpora based on the EM algorithm. 3.3 Estimation of word translation probability using the EM algorithm Chinese and English corpora are first parsed using a dependency parser, and two dependency triple databases are generated. The candidate English translation set of Chinese triples is generated through a bilingual dictionary and the assumption of strong correspondence of dependency relations. There is a risk that unrelated triples in Chinese and English can be connected with this method. However, as the conditions that are used to make the connection are quite strong (i.e. possible word translations in the same triple structure), we believe that this risk, is not very severe. Then, the expectation maximization (EM) algorithm is introduced to iteratively strengthen the correct connections and weaken the incorrect connections. EM Algorithm According to section 3.2, the translation probabilities from a Chinese triple tri c to an English triple tri e can be computed using the English triple language model ) ( tri e p and a translation model from English to Chinese ) | ( tri tri e c p . The English language model can be estimated using Equation (4) and the translation model can be calculated using Equation (7). The translation probabilities ) | ( e c phead and ) | ( e c pdep are initially set to a uniform distribution as follows: ⎪⎩ ⎪⎨ ⎧ Γ ∈ Γ = = otherwise c if e c p e c p e e dep head ,0 ) ( , 1 ) | ( ) | ( (8) Where e Γ represents the translation set of the English word e. Then, the word translation probabilities are estimated iteratively using the EM algorithm. Figure 1 gives a formal description of the EM algorithm. Figure 1: EM algorithm The basic idea is that under the restriction of the English triple language model ) ( tri e p and translation dictionary, we wish to estimate the translation probabilities ) | ( e c phead and ) | ( e c pdep that best explain the Chinese triple database as a translation from the English triple database. 
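To make the estimation procedure concrete before walking through its behaviour, the following sketch puts Equations (3)-(5) and (7), the uniform initialisation of Equation (8), and the loop of Figure 1 into minimal Python. It is an illustration only: the English triple count table e_freq, the candidate generator and the probability dictionaries are placeholder data structures rather than the paper's implementation, and the marginal counts scanned inside lm_prob would in practice be precomputed.

```python
from collections import defaultdict

def lm_prob(e_tri, e_freq, N):
    """Interpolated triple language model p(e1, r, e2)."""
    e1, r, e2 = e_tri
    f_tri = e_freq.get(e_tri, 0)
    lam = 1.0 - 1.0 / (1.0 + f_tri)                       # Equation (5)
    # back-off p(r) * p(e1|r) * p(e2|r) of Equation (3); these marginal
    # counts would normally be precomputed once, not rescanned per call
    f_r   = sum(c for (x1, rr, x2), c in e_freq.items() if rr == r)
    f_e1r = sum(c for (x1, rr, x2), c in e_freq.items() if rr == r and x1 == e1)
    f_e2r = sum(c for (x1, rr, x2), c in e_freq.items() if rr == r and x2 == e2)
    backoff = (f_r / N) * (f_e1r / f_r) * (f_e2r / f_r) if f_r else 0.0
    return lam * f_tri / N + (1.0 - lam) * backoff        # Equation (4)

def triple_score(c_tri, e_tri, p_head, p_dep, e_freq, N):
    """p(e_tri) * p_head(c1|e1) * p_dep(c2|e2), the quantity maximised in
    Equation (1) under the factorization of Equation (7); the relation is
    assumed to carry over unchanged, i.e. p(r_c|r_e) = 1."""
    (c1, _, c2), (e1, _, e2) = c_tri, e_tri
    return (lm_prob(e_tri, e_freq, N)
            * p_head.get((c1, e1), 0.0)
            * p_dep.get((c2, e2), 0.0))

def em_train(c_triples, candidates, p_head, p_dep, e_freq, N, iterations=10):
    """EM loop of Figure 1.  `candidates` maps a Chinese triple to the
    English triples permitted by the dictionary and the shared relation;
    p_head / p_dep start from the uniform initialisation of Equation (8)."""
    for _ in range(iterations):
        score_head = defaultdict(float)
        score_dep = defaultdict(float)
        for c_tri in c_triples:
            c1, _, c2 = c_tri
            post = {e_tri: triple_score(c_tri, e_tri, p_head, p_dep, e_freq, N)
                    for e_tri in candidates[c_tri]}
            z = sum(post.values())
            if z == 0.0:
                continue
            for (e1, _, e2), p in post.items():           # fractional counts
                score_head[(c1, e1)] += p / z
                score_dep[(c2, e2)] += p / z
        p_head = _normalise_per_e(score_head)             # M-step
        p_dep = _normalise_per_e(score_dep)
    return p_head, p_dep

def _normalise_per_e(scores):
    totals = defaultdict(float)
    for (c, e), s in scores.items():
        totals[e] += s
    return {(c, e): (s / totals[e] if totals[e] else 0.0)
            for (c, e), s in scores.items()}
```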
In each iteration, the normalized triple translation probabilities are used to update the word translation probabilities. Intuitively, after finding the most probable translation of the Chinese triple, we can collect counts for the word translation it contains. Since the English triple language model provides context information for the disambiguation of the Chinese words, only the appropriate occurrences are counted. Now, with the language model estimated using Equation (4) and the translation probabilities estimated using EM algorithm, we can compute the best triple translation for a given Chinese triple using Equations (1) and (7). 4 Collocation translation extraction from two monolingual corpora This section describes how to extract collocation translation from independent monolingual corpora. First, collocations are extracted from a monolingual triples database. Then, collocation translations are acquired using the triple translation model obtained in section 3. 4.1 Monolingual collocation extraction As introduced in section 2, much work has been done to extract collocations. Among all the measure metrics, log likelihood ratio (LLR) has proved to give better results (Duning, 1993; Thanopoulos et al., 2002). In this paper, we take LLR as the metric to extract collocations from a dependency triple database. For a given Chinese triple ) , , ( 2 1 c r c c c tri = , the LLR score is calculated as follows: N N d c d c d b d b c a c a b a b a d d c c b b a a Logl log ) log( ) ( ) log( ) ( ) log( ) ( ) log( ) ( log log log log + + + − + + − + + − + + − + + + = (9) where, . ), , , ( ) , (*, ), , , ( ,*) , ( ), , , ( 2 1 2 2 1 1 2 1 c b a N d c r c freq c r freq c c r c freq r c freq b c r c freq a c c c c c − − − = − = − = = N is the total counts of all Chinese triples. Those triples whose LLR values are larger than a given threshold are taken as a collocation. This syntax-based collocation has the advantage that it can represent both adjacent and long distance word association. Here, we only extract the three main types of collocation that have been mentioned in section 3.1. 4.2 Collocation translation extraction For the acquired collocations, we try to extract their translations from the other monolingual Train language model for English triple ) ( tri e p ; Initialize word translation probabilities ) | ( e c phead and ) | ( e c pdep uniformly as in Equation (8); Iterate Set ) | ( e c scorehead and ) | ( e c scoredep to 0 for all dictionary entries (c,e); for all Chinese triples ) , , ( 2 1 c r c c c tri = for all candidate English triple translations ) , , ( 2 1 e r e e e tri = compute triple translation probability ) | ( tri tri c e p by ) | ( ) | ( ) | ( ) ( 2 2 1 1 e c dep head tri r r p e c p e c p e p end for normalize ) | ( tri tri c e p , so that their sum is 1; for all triple translation ) , , ( 2 1 e r e e e tri = add ) | ( tri tri c e p to ) | ( 1 1 e c scorehead add ) | ( tri tri c e p to ) | ( 2 2 e c scoredep endfor endfor for all translation pairs (c,e) set ) | ( e c phead to normalized ) | ( e c scorehead ; set ) | ( e c pdep to normalized ) | ( e c scoredep ; endfor enditerate corpus using the triple translation model trained with the method proposed in section 3. Our objective is to acquire collocation translations as translation knowledge for a machine translation system, so only highly reliable collocation translations are extracted. Figure 2 describes the algorithm for Chinese-English collocation translation extraction. 
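Schematically, the three steps of Figure 2 amount to a round-trip check. The sketch below is a hypothetical rendering rather than the authors' code; the two directional scorers are assumed to come from the C-E and E-C triple translation models of Section 3, for example as argmaxes over a triple score such as the one sketched earlier.

```python
def extract_collocation_translations(c_collocations, best_e_for_c, best_c_for_e):
    """best_e_for_c(c_tri): best English triple under the C-E triple model
    best_c_for_e(e_tri): best Chinese triple under the E-C triple model
    Keeps a translation only if it round-trips back to the same Chinese
    collocation (the criterion of Figure 2)."""
    pairs = {}
    for c_col in c_collocations:
        e_best = best_e_for_c(c_col)
        if e_best is not None and best_c_for_e(e_best) == c_col:
            pairs[c_col] = e_best
    return pairs
```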
It can be seen that the best English triple candidate is extracted as the translation of the given Chinese collocation only if the Chinese collocation is also the best translation candidate of the English triple. But the English triple is not necessarily a collocation. English collocation translations can be extracted in a similar way. Figure 2: Collocation translation extraction 4.3 Implementation of our approach Our English corpus is from Wall Street Journal (1987-1992) and Associated Press (1988-1990), and the Chinese corpus is from People’s Daily (1980-1998). The two corpora are parsed using the NLPWin parser1 (Heidorn, 2000). The statistics for three main types of dependency triples are shown in tables 1 and 2. Token refers to the total number of triple occurrences and Type refers to the number of unique triples in the corpus. Statistic for the extracted Chinese collocations and the collocation translations is shown in Table 3. Class #Type #Token VO 1,579,783 19,168,229 AN 311,560 5,383,200 AV 546,054 9,467,103 Table 1: Chinese dependency triples 1 The NLPWin parser is a rule-based parser developed at Microsoft research, which parses several languages including Chinese and English. Its output can be a phrase structure parse tree or a logical form which is represented with dependency triples. Class #Type #Token VO 1,526,747 8,943,903 AN 1,163,440 6,386,097 AV 215,110 1,034,410 Table 2: English dependency triples Class #Type #Translated VO 99,609 28,841 AN 35,951 12,615 AV 46,515 6,176 Table 3: Extracted Chinese collocations and E-C translation pairs The translation dictionaries we used in training and translation are combined from two dictionaries: HITDic and NLPWinDic 2 . The final E-C dictionary contains 126,135 entries, and C-E dictionary contains 91,275 entries. 5 Experiments and evaluation To evaluate the effectiveness of our methods, two experiments have been conducted. The first one compares our method with three other monolingual corpus based methods in triple translation. The second one evaluates the accuracy of the acquired collocation translation. 5.1 Dependency triple translation Triple translation experiments are conducted from Chinese to English. We randomly selected 2000 Chinese triples (whose frequency is larger than 2) from the dependency triple database. The standard translation answer sets were built manually by three linguistic experts. For each Chinese triple, its English translation set contain English triples provided by anyone of the three linguists. Among 2000 candidate triples, there are 101 triples that can’t be translated into English triples with same relation. For example, the Chinese triple (讲, VO, 价钱) should be translated into “bargain”. The two words in triple cannot be translated separately. We call this kind of collocation translation no-compositional translations. Our current model cannot deal with this kind of translation. In addition, there are also 157 error dependency triples, which result from parsing mistakes. We filtered out these two kinds of triples and got a standard test set with 1,742 Chinese triples and 4,645 translations in total. We compare our triple translation model with three other models on the same standard test set with the same translation dictionary. As the 2 These two dictionaries are built by Harbin Institute of Technology and Microsoft Research respectively. For each Chinese collocation col c : a. 
Acquire the best English triple translation tri eˆ using C-E triple translation model: ) | ( ) ( max arg ˆ tri tri tri e tri e c p e p e tri = b. For the acquired tri eˆ , calculate the best Chinese triple translation tri cˆ using E-C triple translation model: ) | ˆ( ) ( max arg ˆ tri tri tri c tri c e p c p c tri = c. If col c = tri cˆ , add col c Ù tri eˆ to collocation translation database. baseline experiment, Model A selects the highestfrequency translation for each word in triple; Model B selects translation with the maximal target triple probability, as proposed in (Dagan 1994); Model C selects translation using both language model and translation model, but the translation probability is simulated by a similarity score which is estimated from monolingual corpus using mutual information measure (Zhou et al., 2001). And our triple translation model is model D. Suppose ) , , ( 2 1 c r c c c tri = is the Chinese triple to be translated. The four compared models can be formally expressed as follows: Model A: )) ( ( max arg , )), ( ( max arg ( 2 ) ( 1 ) ( max 2 2 1 1 e freq r e freq e c Trans e e c Trans e ∈ ∈ = Model B: ) , , ( max arg ) ( max arg 2 1 ) ( ) ( max 2 2 1 1 e r e p e p e e c Trans e c Trans e tri etri ∈ ∈ = = Model C: )) , Sim( ) , Sim( ) ( ( max arg )) | ( likelyhood ) ( ( max arg 2 2 1 1 ) ( ) ( max 2 2 1 1 c e c e e p e c e p e tri c Trans e c Trans e tri tri tri etri × × = × = ∈ ∈ where, ) , Sim( c e is similarity score between e and c (Zhou et al., 2001). Model D (our model): )) | ( ) | ( ) | ( ) ( ( max arg )) | ( ) ( ( max arg 2 2 1 1 ) ( ) ( max 2 2 1 1 e c dep head tri c Trans e c Trans e tri tri tri e r r p e c p e c p e p e c p e p e tri ∈ ∈ = = Accuracy(%) Cove- Rage(%) Top 1 Top 3 Oracle (%) Model A 17.21 ---- Model B 33.56 53.79 Model C 35.88 57.74 Model D 83.98 36.91 58.58 66.30 Table 4: Translation results comparison The evaluation results on the standard test set are shown in Table 4, where coverage is the percentages of triples which can be translated. Some triples can’t be translated by Model B, C and D because of the lack of dictionary translations or data sparseness in triples. In fact, the coverage of Model A is 100%. It was set to the same as others in order to compare accuracy using the same test set. The oracle score is the upper bound accuracy under the conditions of current translation dictionary and standard test set. Top N accuracy is defined as the percentage of triples whose selected top N translations include correct translations. We can see that both Model C and Model D achieve better results than Model B. This shows that the translation model trained from monolingual corpora really helps to improve the performance of translation. Our model also outperforms Model C, which demonstrates the probabilities trained by our EM algorithm achieve better performance than heuristic similarity scores. In fact, our evaluation method is very rigorous. To avoid bias in evaluation, we take human translation results as standard. The real translation accuracy is reasonably better than the evaluation results. But as we can see, compared to the oracle score, the current models still have much room for improvement. And coverage is also not high due to the limitations of the translation dictionary and the sparse data problem. 5.2 Collocation translation extraction 47,632 Chinese collocation translations are extracted with the method proposed in section 4. We randomly selected 1000 translations for evaluation. 
Three linguistic experts tag the acceptability of the translation. Those translations that are tagged as acceptable by at least two experts are evaluated as correct. The evaluation results are shown in Table 5. Total Acceptance Accuracy (%) VO 590 373 63.22 AN 292 199 68.15 AV 118 60 50.85 All 1000 632 63.20 ColTrans 334 241 72.16 Table 5: Extracted collocation translation results We can see that the extracted collocation translations achieve a much better result than triple translation. The average accuracy is 63.20% and the collocations with relation AN achieve the highest accuracy of 68.15%. If we only consider those Chinese collocations whose translations are also English collocations, we obtain an even better accuracy of 72.16% as shown in the last row of Table 5. The results justify our idea that we can acquire reliable translation for collocation by making use of triple translation model in two directions. These acquired collocation translations are very valuable for translation knowledge building. Manually crafting collocation translations can be time-consuming and cannot ensure high quality in a consistent way. Our work will certainly improve the quality and efficiency of collocation translation acquisition. 5.3 Discussion Although our approach achieves promising results, it still has some limitations to be remedied in future work. (1) Translation dictionary extension Due to the limited coverage of the dictionary, a correct translation may not be stored in the dictionary. This naturally limits the coverage of triple translations. Some research has been done to expand translation dictionary using a non-parallel corpus (Rapp, 1999; Keohn and Knight, 2002). It can be used to improve our work. (2) Noise filtering of parsers Since we use parsers to generate dependency triple databases, this inevitably introduces some parsing mistakes. From our triple translation test data, we can see that 7.85% (157/2000) types of triples are error triples. These errors will certainly influence the translation probability estimation in the training process. We need to find an effective way to filter out mistakes and perform necessary automatic correction. (3) Non-compositional collocation translation. Our model is based on the dependency correspondence assumption, which assumes that a triple’s translation is also a triple. But there are still some collocations that can’t be translated word by word. For example, the Chinese triple (富有, VO, 成效) usually be translated into “be effective”; the English triple (take, VO, place) usually be translated into “发生”. The two words in triple cannot be translated separately. Our current model cannot deal with this kind of non-compositional collocation translation. Melamed (1997) and Lin (1999) have done some research on noncompositional phrases discovery. We will consider taking their work as a complement to our model. 6 Conclusion and future work This paper proposes a novel method to train a triple translation model and extract collocation translations from two independent monolingual corpora. Evaluation results show that it outperforms the existing monolingual corpus based methods in triple translation, mainly due to the employment of EM algorithm in cross language translation probability estimation. By making use of the acquired triple translation model in two directions, promising results are achieved in collocation translation extraction. 
Our work also demonstrates the possibility of making full use of monolingual resources, such as corpora and parsers for bilingual tasks. This can help overcome the bottleneck of the lack of a large-scale bilingual corpus. This approach is also applicable to comparable corpora, which are also easier to access than bilingual corpora. In future work, we are interested in extending our method to solving the problem of noncompositional collocation translation. We are also interested in incorporating our triple translation model for sentence level translation. 7 Acknowledgements The authors would like to thank John Chen, Jianfeng Gao and Yunbo Cao for their valuable suggestions and comments on a preliminary draft of this paper. References Morton Benson. 1990. Collocations and generalpurpose dictionaries. International Journal of Lexicography. 3(1):23–35 Yunbo Cao, Hang Li. 2002. Base noun phrase translation using Web data and the EM algorithm. The 19th International Conference on Computational Linguistics. pp.127-133 Kenneth W. Church and Patrick Hanks. 1990. Word association norms, mutural information, and lexicography. Computational Linguistics, 16(1):22-29 Ido Dagan and Alon Itai. 1994. Word sense disambiguation using a second language monolingual corpus. Computational Linguistics, 20(4):563-596 Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics. 19(1):61-74 Hiroshi Echizen-ya, Kenji Araki, Yoshi Momouchi, Koji Tochinai. 2003. Effectiveness of automatic extraction of bilingual collocations using recursive chain-link-type learning. The 9th Machine Translation Summit. pp.102-109 Pascale Fung, and Yee Lo Yuen. 1998. An IR approach for translating new words from nonparallel, comparable Texts. The 36th annual conference of the Association for Computational Linguistics. pp. 414-420 Jianfeng Gao, Jianyun Nie, Hongzhao He, Weijun Chen, Ming Zhou. 2002. Resolving query translation ambiguity using a decaying cooccurrence model and syntactic dependence relations. The 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. pp.183 - 190 G. Heidorn. 2000. Intelligent writing assistant. In R. Dale, H. Moisl, and H. Somers, editors, A Handbook of Natural Language Processing: Techniques and Applications for the Processing of Language as Text. Marcel Dekker. Philipp Koehn and Kevin Knight. 2000. Estimating word translation probabilities from unrelated monolingual corpora using the EM algorithm. National Conference on Artificial Intelligence. pp.711-715 Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. Unsupervised Lexical Acquisition: Workshop of the ACL Special Interest Group on the Lexicon. pp. 9-16 Julian Kupiec. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. The 31st Annual Meeting of the Association for Computational Linguistics, pp. 23-30 Cong Li, Hang Li. 2002. Word translation disambiguation using bilingual bootstrapping. The 40th annual conference of the Association for Computational Linguistics. pp: 343-351 Dekang Lin. 1998. Extracting collocation from Text corpora. First Workshop on Computational Terminology. pp. 57-63 Dekang Lin 1999. Automatic identification of noncompositional phrases. The 37th Annual Meeting of the Association for Computational Linguistics. pp.317--324 Ilya Dan Melamed. 1997. Automatic discovery of non-compositional compounds in parallel data. 
The 2nd Conference on Empirical Methods in Natural Language Processing. pp. 97~108 Brown P.F., Pietra, S.A.D., Pietra, V. J. D., and Mercer R. L. 1993. The mathematics of machine translation: parameter estimation. Computational Linguistics, 19(2):263-313 Reinhard Rapp. 1999. Automatic identification of word translations from unrelated English and German corpora. The 37th annual conference of the Association for Computational Linguistics. pp. 519-526 Violeta Seretan, Luka Nerima, Eric Wehrli. 2003. Extraction of Multi-Word collocations using syntactic bigram composition. International Conference on Recent Advances in NLP. pp. 424-431 Frank Smadja. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19(1):143-177 Frank Smadja, Kathleen R. Mckeown, Vasileios Hatzivassiloglou. 1996. Translation collocations for bilingual lexicons: a statistical approach. Computational Linguistics, 22:1-38 Aristomenis Thanopoulos, Nikos Fakotakis, George Kokkinakis. 2002. Comparative evaluation of collocation extraction metrics. The 3rd International Conference on Language Resource and Evaluation. pp.620-625 Hua Wu, Ming Zhou. 2003. Synonymous collocation extraction using translation Information. The 41th annual conference of the Association for Computational Linguistics. pp. 120-127 Kaoru Yamamoto, Yuji Matsumoto. 2000. Acquisition of phrase-level bilingual correspondence using dependency structure. The 18th International Conference on Computational Linguistics. pp. 933-939 Ming Zhou, Ding Yuan and Changning Huang. 2001. Improving translation selection with a new translation model trained by independent monolingual corpora. Computaional Linguistics & Chinese Language Processing. 6(1): 1-26
2004
22
Statistical Machine Translation with Word- and Sentence-Aligned Parallel Corpora Chris Callison-Burch David Talbot Miles Osborne School on Informatics University of Edinburgh 2 Buccleuch Place Edinburgh, EH8 9LW [email protected] Abstract The parameters of statistical translation models are typically estimated from sentence-aligned parallel corpora. We show that significant improvements in the alignment and translation quality of such models can be achieved by additionally including wordaligned data during training. Incorporating wordlevel alignments into the parameter estimation of the IBM models reduces alignment error rate and increases the Bleu score when compared to training the same models only on sentence-aligned data. On the Verbmobil data set, we attain a 38% reduction in the alignment error rate and a higher Bleu score with half as many training examples. We discuss how varying the ratio of word-aligned to sentencealigned data affects the expected performance gain. 1 Introduction Machine translation systems based on probabilistic translation models (Brown et al., 1993) are generally trained using sentence-aligned parallel corpora. For many language pairs these exist in abundant quantities. However for new domains or uncommon language pairs extensive parallel corpora are often hard to come by. Two factors could increase the performance of statistical machine translation for new language pairs and domains: a reduction in the cost of creating new training data, and the development of more efficient methods for exploiting existing training data. Approaches such as harvesting parallel corpora from the web (Resnik and Smith, 2003) address the creation of data. We take the second, complementary approach. We address the problem of efficiently exploiting existing parallel corpora by adding explicit word-level alignments between a number of the sentence pairs in the training corpus. We modify the standard parameter estimation procedure for IBM Models and HMM variants so that they can exploit these additional wordlevel alignments. Our approach uses both word- and sentence-level alignments for training material. In this paper we: 1. Describe how the parameter estimation framework of Brown et al. (1993) can be adapted to incorporate word-level alignments; 2. Report significant improvements in alignment error rate and translation quality when training on data with word-level alignments; 3. Demonstrate that the inclusion of word-level alignments is more effective than using a bilingual dictionary; 4. Show the importance of amplifying the contribution of word-aligned data during parameter estimation. This paper shows that word-level alignments improve the parameter estimates for translation models, which in turn results in improved statistical translation for languages that do not have large sentence-aligned parallel corpora. 2 Parameter Estimation Using Sentence-Aligned Corpora The task of statistical machine translation is to choose the source sentence, e, that is the most probable translation of a given sentence, f, in a foreign language. Rather than choosing e∗that directly maximizes p(e|f), Brown et al. (1993) apply Bayes’ rule and select the source sentence: e∗ = arg max e p(e)p(f|e). (1) In this equation p(e) is a language model probability and is p(f|e) a translation model probability. A series of increasingly sophisticated translation models, referred to as the IBM Models, was defined in Brown et al. (1993). 
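As a minimal illustration of Equation (1), and not a component of the systems discussed here, candidate selection under the noisy-channel decomposition is simply an argmax over the sum of language-model and translation-model log-probabilities; the candidate list and both scorers are assumed to be supplied by the caller.

```python
def select_translation(f, candidates, log_lm, log_tm):
    """log_lm(e):    language-model log-probability of candidate e
    log_tm(f, e): translation-model log-probability of f given e
    Returns the candidate e maximising log p(e) + log p(f|e)."""
    return max(candidates, key=lambda e: log_lm(e) + log_tm(f, e))
```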
The translation model, p(f|e) defined as a marginal probability obtained by summing over word-level alignments, a, between the source and target sentences: p(f|e) = X a p(f, a|e). (2) While word-level alignments are a crucial component of the IBM models, the model parameters are generally estimated from sentence-aligned parallel corpora without explicit word-level alignment information. The reason for this is that word-aligned parallel corpora do not generally exist. Consequently, word level alignments are treated as hidden variables. To estimate the values of these hidden variables, the expectation maximization (EM) framework for maximum likelihood estimation from incomplete data is used (Dempster et al., 1977). The previous section describes how the translation probability of a given sentence pair is obtained by summing over all alignments p(f|e) = P a p(f, a|e). EM seeks to maximize the marginal log likelihood, log p(f|e), indirectly by iteratively maximizing a bound on this term known as the expected complete log likelihood, ⟨log p(f, a|e)⟩q(a),1 log p(f|e) = log X a p(f, a|e) (3) = log X a q(a)p(f, a|e) q(a) (4) ≥ X a q(a) log p(f, a|e) q(a) (5) = ⟨log p(f, a|e)⟩q(a) + H(q(a)) where the bound in (5) is given by Jensen’s inequality. By choosing q(a) = p(a|f, e) this bound becomes an equality. This maximization consists of two steps: • E-step: calculate the posterior probability under the current model of every permissible alignment for each sentence pair in the sentence-aligned training corpus; • M-step: maximize the expected log likelihood under this posterior distribution, ⟨log p(f, a|e)⟩q(a), with respect to the model’s parameters. While in standard maximum likelihood estimation events are counted directly to estimate parameter settings, in EM we effectively collect fractional counts of events (here permissible alignments weighted by their posterior probability), and use these to iteratively update the parameters. 1Here ⟨·⟩q(·) denotes an expectation with respect to q(·). Since only some of the permissible alignments make sense linguistically, we would like EM to use the posterior alignment probabilities calculated in the E-step to weight plausible alignments higher than the large number of bogus alignments which are included in the expected complete log likelihood. This in turn should encourage the parameter adjustments made in the M-step to converge to linguistically plausible values. Since the number of permissible alignments for a sentence grows exponentially in the length of the sentences for the later IBM Models, a large number of informative example sentence pairs are required to distinguish between plausible and implausible alignments. Given sufficient data the distinction occurs because words which are mutual translations appear together more frequently in aligned sentences in the corpus. Given the high number of model parameters and permissible alignments, however, huge amounts of data will be required to estimate reasonable translation models from sentence-aligned data alone. 3 Parameter Estimation Using Word- and Sentence-Aligned Corpora As an alternative to collecting a huge amount of sentence-aligned training data, by annotating some of our sentence pairs with word-level alignments we can explicitly provide information to highlight plausible alignments and thereby help parameters converge upon reasonable settings with less training data. 
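To make the role of fractional counts concrete, here is a minimal IBM Model 1 style EM loop over sentence-aligned pairs only; it is a deliberate simplification (no NULL word, no fertility or HMM components) and not the GIZA++ implementation used later in the paper.

```python
from collections import defaultdict

def model1_em(bitext, iterations=5):
    """bitext: list of (f_words, e_words) sentence pairs.
    Returns t[(f, e)] = p(f|e) estimated with fractional counts."""
    f_vocab = {f for fs, _ in bitext for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))        # uniform initialisation
    for _ in range(iterations):
        count = defaultdict(float)                     # expected counts c(f, e)
        total = defaultdict(float)                     # sum over f of c(f, e)
        for fs, es in bitext:
            for f in fs:
                z = sum(t[(f, e)] for e in es)         # posterior normaliser
                for e in es:
                    frac = t[(f, e)] / z               # fractional count
                    count[(f, e)] += frac
                    total[e] += frac
        for (f, e) in count:                           # M-step
            t[(f, e)] = count[(f, e)] / total[e]
    return dict(t)
```

Each source word spreads a unit of count over the target words of its sentence pair in proportion to the current posterior; with word-aligned data those posteriors collapse onto the single annotated alignment, which is the modification developed next.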
Since word-alignments are inherent in the IBM translation models it is straightforward to incorporate this information into the parameter estimation procedure. For sentence pairs with explicit wordlevel alignments marked, fractional counts over all permissible alignments need not be collected. Instead, whole counts are collected for the single hand annotated alignment for each sentence pair which has been word-aligned. By doing this the expected complete log likelihood collapses to a single term, the complete log likelihood (p(f, a|e)), and the Estep is circumvented. The parameter estimation procedure now involves maximizing the likelihood of data aligned only at the sentence level and also of data aligned at the word level. The mixed likelihood function, M, combines the expected information contained in the sentence-aligned data with the complete information contained in the word-aligned data. M = Ns X s=1 (1 −λ)⟨log p(fs, as|es)⟩q(as) + Nw X w=1 λ log p(fw, aw|ew) (6) Here s and w index the Ns sentence-aligned sentences and Nw word-aligned sentences in our corpora respectively. Thus M combines the expected complete log likelihood and the complete log likelihood. In order to control the relative contributions of the sentence-aligned and word-aligned data in the parameter estimation procedure, we introduce a mixing weight λ that can take values between 0 and 1. 3.1 The impact of word-level alignments The impact of word-level alignments on parameter estimation is closely tied to the structure of the IBM Models. Since translation and word alignment parameters are shared between all sentences, the posterior alignment probability of a source-target word pair in the sentence-aligned section of the corpus that were aligned in the word-aligned section will tend to be relatively high. In this way, the alignments from the word-aligned data effectively percolate through to the sentencealigned data indirectly constraining the E-step of EM. 3.2 Weighting the contribution of word-aligned data By incorporating λ, Equation 6 becomes an interpolation of the expected complete log likelihood provided by the sentence-aligned data and the complete log likelihood provided by word-aligned data. The use of a weight to balance the contributions of unlabeled and labeled data in maximum likelihood estimation was proposed by Nigam et al. (2000). λ quantifies our relative confidence in the expected statistics and observed statistics estimated from the sentence- and word-aligned data respectively. Standard maximum likelihood estimation (MLE) which weighs all training samples equally, corresponds to an implicit value of lambda equal to the proportion of word-aligned data in the whole of the training set: λ = Nw Nw+Ns . However, having the total amount of sentence-aligned data be much larger than the amount of word-aligned data implies a value of λ close to zero. This means that M can be maximized while essentially ignoring the likelihood of the word-aligned data. Since we believe that the explicit word-alignment information will be highly effective in distinguishing plausible alignments in the corpus as a whole, we expect to see benefits by setting λ to amplify the contribution of the wordaligned data set particularly when this is a relatively small portion of the corpus. 4 Experimental Design To perform our experiments with word-level alignements we modified GIZA++, an existing and freely available implementation of the IBM models and HMM variants (Och and Ney, 2003). 
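Before the implementation details, the sketch below illustrates, at the level of counts, the λ weighting of Equation (6): expected counts gathered from sentence-aligned pairs and whole counts read off the word-aligned pairs are interpolated and then renormalised. The function and argument names are hypothetical; this is a schematic of the idea rather than the actual GIZA++ modification.

```python
from collections import defaultdict

def m_step_mixed(expected_counts, observed_counts, lam):
    """expected_counts: fractional counts from sentence-aligned pairs (E-step)
    observed_counts: whole counts from the hand word-aligned pairs
    lam: weight on the word-aligned data, 0 < lam < 1."""
    mixed = defaultdict(float)
    for (f, e), c in expected_counts.items():
        mixed[(f, e)] += (1.0 - lam) * c
    for (f, e), c in observed_counts.items():
        mixed[(f, e)] += lam * c
    total = defaultdict(float)
    for (f, e), c in mixed.items():
        total[e] += c
    # renormalise per target word to obtain the updated table t(f|e)
    return {(f, e): (c / total[e] if total[e] else 0.0)
            for (f, e), c in mixed.items()}
```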
Our modifications involved circumventing the E-step for sentences which had word-level alignments and incorporating these observed alignment statistics in the M-step. The observed and expected statistics were weighted accordingly by λ and (1 −λ) respectively as were their contributions to the mixed log likelihood. In order to measure the accuracy of the predictions that the statistical translation models make under our various experimental settings, we choose the alignment error rate (AER) metric, which is defined in Och and Ney (2003). We also investigated whether improved AER leads to improved translation quality. We used the alignments created during our AER experiments as the input to a phrase-based decoder. We translated a test set of 350 sentences, and used the Bleu metric (Papineni et al., 2001) to automatically evaluate machine translation quality. We used the Verbmobil German-English parallel corpus as a source of training data because it has been used extensively in evaluating statistical translation and alignment accuracy. This data set comes with a manually word-aligned set of 350 sentences which we used as our test set. Our experiments additionally required a very large set of word-aligned sentence pairs to be incorporated in the training set. Since previous work has shown that when training on the complete set of 34,000 sentence pairs an alignment error rate as low as 6% can be achieved for the Verbmobil data, we automatically generated a set of alignments for the entire training data set using the unmodified version of GIZA++. We wanted to use automatic alignments in lieu of actual hand alignments so that we would be able to perform experiments using large data sets. We ran a pilot experiment to test whether our automatic would produce similar results to manual alignments. We divided our manual word alignments into training and test sets and compared the performance of models trained on human aligned data against models trained on automatically aligned data. A Size of training corpus Model .5k 2k 8k 16k Model 1 29.64 24.66 22.64 21.68 HMM 18.74 15.63 12.39 12.04 Model 3 26.07 18.64 14.39 13.87 Model 4 20.59 16.05 12.63 12.17 Table 1: Alignment error rates for the various IBM Models trained with sentence-aligned data 100-fold cross validation showed that manual and automatic alignments produced AER results that were similar to each other to within 0.1%.2 Having satisfied ourselves that automatic alignment were a sufficient stand-in for manual alignments, we performed our main experiments which fell into the following categories: 1. Verifying that the use of word-aligned data has an impact on the quality of alignments predicted by the IBM Models, and comparing the quality increase to that gained by using a bilingual dictionary in the estimation stage. 2. Evaluating whether improved parameter estimates of alignment quality lead to improved translation quality. 3. Experimenting with how increasing the ratio of word-aligned to sentence-aligned data affected the performance. 4. Experimenting with our λ parameter which allows us to weight the relative contributions of the word-aligned and sentence-aligned data, and relating it to the ratio experiments. 5. Showing that improvements to AER and translation quality held for another corpus. 5 Results 5.1 Improved alignment quality As a staring point for comparison we trained GIZA++ using four different sized portions of the Verbmobil corpus. 
For each of those portions we output the most probable alignments of the testing data for Model 1, the HMM, Model 3, and Model 2Note that we stripped out probable alignments from our manually produced alignments. Probable alignments are large blocks of words which the annotator was uncertain of how to align. The many possible word-to-word translations implied by the manual alignments led to lower results than with the automatic alignments, which contained fewer word-to-word translation possibilities. Size of training corpus Model .5k 2k 8k 16k Model 1 21.43 18.04 16.49 16.20 HMM 14.42 10.47 9.09 8.80 Model 3 20.56 13.25 10.82 10.51 Model 4 14.19 10.13 7.87 7.52 Table 2: Alignment error rates for the various IBM Models trained with word-aligned data 4,3 and evaluated their AERs. Table 1 gives alignment error rates when training on 500, 2000, 8000, and 16000 sentence pairs from Verbmobil corpus without using any word-aligned training data. We obtained much better results when incorporating word-alignments with our mixed likelihood function. Table 2 shows the results for the different corpus sizes, when all of the sentence pairs have been word-aligned. The best performing model in the unmodified GIZA++ code was the HMM trained on 16,000 sentence pairs, which had an alignment error rate of 12.04%. In our modified code the best performing model was Model 4 trained on 16,000 sentence pairs (where all the sentence pairs are word-aligned) with an alignment error rate of 7.52%. The difference in the best performing models represents a 38% relative reduction in AER. Interestingly, we achieve a lower AER than the best performing unmodified models using a corpus that is one-eight the size of the sentence-aligned data. Figure 1 show an example of the improved alignments that are achieved when using the word aligned data. The example alignments were held out sentence pairs that were aligned after training on 500 sentence pairs. The alignments produced when the training on word-aligned data are dramatically better than when training on sentence-aligned data. We contrasted these improvements with the improvements that are to be had from incorporating a bilingual dictionary into the estimation process. For this experiment we allowed a bilingual dictionary to constrain which words can act as translations of each other during the initial estimates of translation probabilities (as described in Och and Ney (2003)). As can be seen in Table 3, using a dictionary reduces the AER when compared to using GIZA++ without a dictionary, but not as dramatically as integrating the word-alignments. We further tried combining a dictionary with our word-alignments but found that the dictionary results in only very minimal improvements over using word-alignments alone. 3We used the default training schemes for GIZA++, and left model smoothing parameters at their default settings. Then assume . Dann reserviere ich zwei Einzelzimmer I will reserve two single , nehme rooms , I ich mal an . (a) Sentence-aligned Then assume . Dann reserviere ich zwei Einzelzimmer I will reserve two single , nehme rooms , I ich mal an . (b) Word-aligned Then assume . Dann reserviere ich zwei Einzelzimmer I will reserve two single , nehme rooms , I ich mal an . 
(c) Reference
Figure 1: Example alignments using sentence-aligned training data (a), using word-aligned data (b), and a reference manual alignment (c)

Size of training corpus
Model   | .5k   | 2k    | 8k    | 16k
Model 1 | 23.56 | 20.75 | 18.69 | 18.37
HMM     | 15.71 | 12.15 | 9.91  | 10.13
Model 3 | 22.11 | 16.93 | 13.78 | 12.33
Model 4 | 17.07 | 13.60 | 11.49 | 10.77
Table 3: The improved alignment error rates when using a dictionary instead of word-aligned data to constrain word translations

Size  | Sentence-aligned AER | Sentence-aligned Bleu | Word-aligned AER | Word-aligned Bleu
500   | 20.59 | 0.211 | 14.19 | 0.233
2000  | 16.05 | 0.247 | 10.13 | 0.260
8000  | 12.63 | 0.265 | 7.87  | 0.278
16000 | 12.17 | 0.270 | 7.52  | 0.282
Table 4: Improved AER leads to improved translation quality

5.2 Improved translation quality

The fact that using word-aligned data in estimating the parameters for machine translation leads to better alignments is predictable. A more significant result is whether it leads to improved translation quality. In order to test that our improved parameter estimates lead to better translation quality, we used a state-of-the-art phrase-based decoder to translate a held-out set of German sentences into English. The phrase-based decoder extracts phrases from the word alignments produced by GIZA++, and computes translation probabilities based on the frequency of one phrase being aligned with another (Koehn et al., 2003). We trained a language model using the 34,000 English sentences from the training set.

Ratio | AER when λ = Standard MLE | AER when λ = .9
0.1   | 11.73 | 9.40
0.2   | 10.89 | 8.66
0.3   | 10.23 | 8.13
0.5   | 8.65  | 8.19
0.7   | 8.29  | 8.03
0.9   | 7.78  | 7.78
Table 5: The effect of weighting word-aligned data more heavily than its proportion in the training data (corpus size 16000 sentence pairs)

Table 4 shows that using word-aligned data leads to better translation quality than using sentence-aligned data. Particularly, significantly less data is needed to achieve a high Bleu score when using word alignments. Training on a corpus of 8,000 sentence pairs with word alignments results in a higher Bleu score than when training on a corpus of 16,000 sentence pairs without word alignments.

5.3 Weighting the word-aligned data

We have seen that using training data consisting entirely of word-aligned sentence pairs leads to better alignment accuracy and translation quality. However, because manually word-aligning sentence pairs costs more than just using sentence-aligned data, it is unlikely that we will ever want to label an entire corpus. Instead we will likely have a relatively small portion of the corpus word-aligned. We want to be sure that this small amount of data labeled with word alignments does not get overwhelmed by a larger amount of unlabeled data.
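For example, with the 16,000-pair corpus and 20% of the pairs word-aligned (Nw = 3,200, Ns = 12,800), standard MLE implicitly sets λ = Nw/(Nw + Ns) = 0.2, so the hand-aligned pairs carry only a fifth of the total weight; the corresponding row of Table 5 shows that raising λ to 0.9 instead lowers AER from 10.89% to 8.66%.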
5.4 Ratio of word- to sentence-aligned data We also varied the ratio of word-aligned to sentence-aligned data, and evaluated the AER and Bleu scores, and assigned high value to λ (= 0.9). Figure 3 shows how AER improves as more word-aligned data is added. Each curve on the graph represents a corpus size and shows its reduction in error rate as more word-aligned data is added. For example, the bottom curve shows the performance of a corpus of 16,000 sentence pairs which starts with an AER of just over 12% with no word-aligned training data and decreases to an AER of 7.5% when all 16,000 sentence pairs are word-aligned. This curve essentially levels off after 30% of the data is word-aligned. This shows that a small amount of word-aligned data is very useful, and if we wanted to achieve a low AER, we would only have to label 4,800 examples with their word alignments rather than the entire corpus. Figure 4 shows how the Bleu score improves as more word-aligned data is added. This graph also 4At λ = 1 (not shown in Figure 2) the data that is only sentence-aligned is ignored, and the AER is therefore higher. 0.06 0.08 0.1 0.12 0.14 0.16 0.18 0.2 0.22 0 0.2 0.4 0.6 0.8 1 Alignment error rate Ratio of word-aligned to sentence-aligned data 500 sentence pairs 2000 sentence pairs 8000 sentence pairs 16000 sentence pairs Figure 3: The effect on AER of varying the ratio of word-aligned to sentence-aligned data 0.2 0.21 0.22 0.23 0.24 0.25 0.26 0.27 0.28 0.29 0 0.2 0.4 0.6 0.8 1 Bleu Score Ratio of word-aligned to sentence-aligned data 500 sentence pairs 2000 sentence pairs 8000 sentence pairs 16000 sentence pairs Figure 4: The effect on Bleu of varying the ratio of word-aligned to sentence-aligned data reinforces the fact that a small amount of wordaligned data is useful. A corpus of 8,000 sentence pairs with only 800 of them labeled with word alignments achieves a higher Bleu score than a corpus of 16,000 sentence pairs with no word alignments. 5.5 Evaluation using a larger training corpus We additionally tested whether incorporating wordlevel alignments into the estimation improved results for a larger corpus. We repeated our experiments using the Canadian Hansards French-English parallel corpus. Figure 6 gives a summary of the improvements in AER and Bleu score for that corpus, when testing on a held out set of 484 hand aligned sentences. On the whole, alignment error rates are higher and Bleu scores are considerably lower for the Hansards corpus. This is probably due to the differences in the corpora. Whereas the Verbmobil corpus has a small vocabulary (<10,000 per lanSentence-aligned Word-aligned Size AER Bleu AER Bleu 500 33.65 0.054 25.73 0.064 2000 25.97 0.087 18.57 0.100 8000 19.00 0.115 14.57 0.120 16000 16.59 0.126 13.55 0.128 Table 6: Summary results for AER and translation quality experiments on Hansards data guage), the Hansards has ten times that many vocabulary items and has a much longer average sentence length. This made it more difficult for us to create a simulated set of hand alignments; we measured the AER of our simulated alignments at 11.3% (which compares to 6.5% for our simulated alignments for the Verbmobil corpus). Nevertheless, the trend of decreased AER and increased Bleu score still holds. For each size of training corpus we tested we found better results using the word-aligned data. 
6 Related Work

Och and Ney (2003) is the most extensive analysis to date of how many different factors contribute towards improved alignment error rates, but the inclusion of word-alignments is not considered. Och and Ney do not give any direct analysis of how improved word alignment accuracy contributes toward better translation quality, as we do here. Mihalcea and Pedersen (2003) described a shared task where the goal was to achieve the best AER. A number of different methods were tried, but none of them used word-level alignments. Since the best performing system used an unmodified version of GIZA++, we would expect that our modified version would show enhanced performance. Naturally this would need to be tested in future work. Melamed (1998) describes the process of manually creating a large set of word-level alignments of sentences in a parallel text. Nigam et al. (2000) described the use of a weight to balance the respective contributions of labeled and unlabeled data to a mixed likelihood function. Corduneanu (2002) provides a detailed discussion of the instability of maximum likelihood solutions estimated from a mixture of labeled and unlabeled data.

7 Discussion and Future Work

In this paper we have shown that, with an appropriate modification of EM, significant gains can be had through labeling word alignments in a bilingual corpus. Because of this, significantly less data is required to achieve a low alignment error rate or a high Bleu score. This holds even when using noisy word alignments such as our automatically created set. One should take our research into account when trying to efficiently create a statistical machine translation system for a language pair for which a parallel corpus is not available. Germann (2001) describes the cost of building a Tamil-English parallel corpus from scratch, and finds that the cost of using professional translations is prohibitively high. In our experience it is quicker to manually word-align translated sentence pairs than to translate a sentence, and word-level alignment can be done by someone who might not be fluent enough to produce translations. It might therefore be possible to achieve a higher performance at a fraction of the cost by hiring a non-professional to produce word-alignments after a limited set of sentences have been translated. We plan to investigate whether it is feasible to use active learning to select which examples will be most useful when aligned at the word level. Section 5.4 shows that word-aligning a fraction of the sentence pairs in a training corpus, rather than the entire training corpus, can still yield most of the benefits described in this paper. One would hope that by selectively sampling which sentences are to be manually word-aligned we would achieve nearly the same performance as word-aligning the entire corpus.

Acknowledgements

The authors would like to thank Franz Och, Hermann Ney, and Richard Zens for providing the Verbmobil data, and Linear B for providing its phrase-based decoder.

References

Peter Brown, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer. 1993. The mathematics of machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311, June.
Adrian Corduneanu. 2002. Stable mixing of complete and incomplete information. Master's thesis, Massachusetts Institute of Technology, February.
A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1–38, Nov.
Ulrich Germann. 2001.
Building a statistical machine translation system from scratch: How much bang for the buck can we expect? In ACL 2001 Workshop on Data-Driven Machine Translation, Toulouse, France, July 7.
Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT/NAACL.
I. Dan Melamed. 1998. Manual annotation of translational equivalence: The blinker project. Cognitive Science Technical Report 98/07, University of Pennsylvania.
Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. In Rada Mihalcea and Ted Pedersen, editors, HLT-NAACL 2003 Workshop: Building and Using Parallel Texts.
Kamal Nigam, Andrew K. McCallum, Sebastian Thrun, and Tom M. Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103–134.
Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51, March.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. IBM Research Report RC22176(W0109-022), IBM.
Philip Resnik and Noah Smith. 2003. The web as a parallel corpus. Computational Linguistics, 29(3):349–380, September.
Finding Ideographic Representations of Japanese Names Written in Latin Script via Language Identification and Corpus Validation Yan Qu Clairvoyance Corporation 5001 Baum Boulevard, Suite 700 Pittsburgh, PA 15213-1854, USA [email protected] Gregory Grefenstette∗∗∗∗ LIC2M/LIST/CEA 18, route du Panorama, BP 6 Fontenay-aux-Roses, 92265 France [email protected] Abstract Multilingual applications frequently involve dealing with proper names, but names are often missing in bilingual lexicons. This problem is exacerbated for applications involving translation between Latin-scripted languages and Asian languages such as Chinese, Japanese and Korean (CJK) where simple string copying is not a solution. We present a novel approach for generating the ideographic representations of a CJK name written in a Latin script. The proposed approach involves first identifying the origin of the name, and then back-transliterating the name to all possible Chinese characters using language-specific mappings. To reduce the massive number of possibilities for computation, we apply a three-tier filtering process by filtering first through a set of attested bigrams, then through a set of attested terms, and lastly through the WWW for a final validation. We illustrate the approach with English-to-Japanese back-transliteration. Against test sets of Japanese given names and surnames, we have achieved average precisions of 73% and 90%, respectively. 1 Introduction Multilingual processing in the real world often involves dealing with proper names. Translations of names, however, are often missing in bilingual resources. This absence adversely affects multilingual applications such as machine translation (MT) or cross language information retrieval (CLIR) for which names are generally good discriminating terms for high IR performance (Lin et al., 2003). For language pairs with different writing systems, such as Japanese and English, and for which simple string-copying of a name from one language to another is not a solution, researchers have studied techniques for transliteration, i.e., phonetic translation across languages. For example, European names are often transcribed in Japanese using the syllabic katakana alphabet. Knight and Graehl (1998) used a bilingual English-katakana dictionary, a katakana-to-English phoneme mapping, and the CMU Speech Pronunciation Dictionary to create a series of weighted finite-state transducers between English words and katakana that produce and rank transliteration candidates. Using similar methods, Qu et al. (2003) showed that integrating automatically discovered transliterations of unknown katakana sequences, i.e. those not included in a large Japanese-English dictionary such as EDICT1, improves CLIR results. Transliteration of names between alphabetic and syllabic scripts has also been studied for languages such as Japanese/English (Fujii & Ishikawa, 2001), English/Korean (Jeong et al., 1999), and English/Arabic (Al-Onaizan and Knight, 2002). In work closest to ours, Meng et al (2001), working in cross-language retrieval of phonetically transcribed spoken text, studied how to transliterate names into Chinese phonemes (though not into Chinese characters). Given a list of identified names, Meng et al. first separated the names into Chinese names and English names. Romanized Chinese names were detected by a leftto-right longest match segmentation method, using the Wade-Giles2 and the pinyin syllable inventories in sequence. 
If a name could be segmented successfully, then the name was considered a Chinese name. As their spoken document collection had already been transcribed into pinyin, retrieval was based on pinyin-to-pinyin matching; pinyin to Chinese character conversion was not addressed. Names other than Chinese names were considered as foreign names and were converted into Chinese phonemes using a language model derived from a list of English-Chinese equivalents, both sides of which were represented in phonetic equivalents. ∗ The work was done by the author while at Clairvoyance Corporation. 1 http://www.csse.monash.edu.au/~jwb/edict.html 2 http://lcweb.loc.gov/catdir/pinyin/romcover.html The above English-to-Japanese or English-toChinese transliteration techniques, however, only solve a part of the name translation problem. In multilingual applications such as CLIR and Machine Translation, all types of names must be translated. Techniques for name translation from Latin scripts into CJK scripts often depend on the origin of the name. Some names are not transliterated into a nearly deterministic syllabic script but into ideograms that can be associated with a variety of pronunciations. For example, Chinese, Korean and Japanese names are usually written using Chinese characters (or kanji) in Japanese, while European names are transcribed using katakana characters, with each character mostly representing one syllable. In this paper, we describe a method for converting a Japanese name written with a Latin alphabet (or romanji), back into Japanese kanji3. Transcribing into Japanese kanji is harder than transliteration of a foreign name into syllabic katakana, since one phoneme can correspond to hundreds of possible kanji characters. For example, the sound “kou” can be mapped to 670 kanji characters. Our method for back-transliterating Japanese names from English into Japanese consists of the following steps: (1) language identification of the origins of names in order to know what languagespecific transliteration approaches to use, (2) generation of possible transliterations using sound and kanji mappings from the Unihan database (to be described in section 3.1) and then transliteration validation through a three-tier filtering process by filtering first through a set of attested bigrams, then through a set of attested terms, and lastly through the Web. The rest of the paper is organized as follows: in section 2, we describe and evaluate our name origin identifier; section 3 presents in detail the steps for back transliterating Japanese names written in Latin script into Japanese kanji representations; section 4 presents the evaluation setup and section 5 discusses the evaluation results; we conclude the paper in section 6. 2 Language Identification of Names Given a name in English for which we do not have a translation in a bilingual English-Japanese dictionary, we first have to decide whether the name is of Japanese, Chinese, Korean or some European origin. In order to determine the origin of names, we created a language identifier for names, using a trigram language identification 3 We have applied the same technique to Chinese and Korean names, though the details are not presented here. method (Cavner and Trenkle, 1994). During training, for Chinese names, we used a list of 11,416 Chinese names together with their frequency information4. For Japanese names, we used the list of 83,295 Japanese names found in ENAMDICT5. For English names, we used the list of 88,000 names found at the US. Census site6. 
(We did not obtain any training data for Korean names, so origin identification for Korean names is not available.) Each list of names7 was converted into trigrams; the trigrams for each list were then counted and normalized by dividing the count of the trigram by the number of all the trigrams. To identify a name as Chinese, Japanese or English (Other, actually), we divide the name into trigrams, and sum up the normalized trigram counts from each language. A name is identified with the language which provides the maximum sum of normalized trigrams in the word. Table 1 presents the results of this simple trigram-based language identifier over the list of names used for training the trigrams. The following are examples of identification errors: Japanese names recognized as English, e.g., aa, abason, abire, aebakouson; Japanese names recognized as Chinese, e.g., abeseimei, abei, adan, aden, afun, agei, agoin. These errors show that the language identifier can be improved, possibly by taking into account language-specific features, such as the number of syllables in a name. For origin detection of Japanese names, the current method works well enough for a first pass with an accuracy of 92%. Input names As JAP As CHI As ENG Accuracy Japanese 76816 5265 1212 92% Chinese 1147 9947 321 87% English 12115 14893 61701 70% Table 1: Accuracy of language origin identification for names in the training set (JAP, CHI, and ENG stand for Japanese, Chinese, and English, respectively) 4 http://www.geocities.com/hao510/namelist/ 5 http://www.csse.monash.edu.au/~jwb/ enamdict_doc.html 6 http://www.census.gov/genealogy/names 7 Some names appear in multiple name lists: 452 of the names are found both in the Japanese name list and in the Chinese name list; 1529 names appear in the Japanese name list and the US Census name list; and 379 names are found both in the Chinese name list and the US Census list. 3 English-Japanese Back-Transliteration Once the origin of a name in Latin scripts is identified, we apply language-specific rules for back-transliteration. For non-Asian names, we use a katakana transliteration method as described in (Qu et al., 2003). For Japanese and Chinese names, we use the method described below. For example, “koizumi” is identified as a name of Japanese origin and thus is back-transliterated to Japanese using Japanese specific phonetic mappings between romanji and kanji characters. 3.1 Romanji-Kanji Mapping To obtain the mappings between kanji characters and their romanji representations, we used the Unihan database, prepared by the Unicode Consortium 8 . The Unihan database, which currently contains 54,728 kanji characters found in Chinese, Japanese, and Korean, provides rich information about these kanji characters, such as the definition of the character, its values in different encoding systems, and the pronunciation(s) of the character in Chinese (listed under the feature kMandarin in the Unihan database), in Japanese (both the On reading and the Kun reading 9 : kJapaneseKun and kJapaneseOn), and in Korean (kKorean). For example, for the kanji character , coded with Unicode hexadecimal character 91D1, the Unihan database lists 49 features; we list below its pronunciations in Japanese, Chinese, and Korean: U+91D1 kJapaneseKun KANE U+91D1 kJapaneseOn KIN KON U+91D1 kKorean KIM KUM U+91D1 kMandarin JIN1 JIN4 In the example above, is represented in its Unicode scalar value in the first column, with a feature name in the second column and the values of the feature in the third column. 
The Japanese Kun reading of is KANE, while the Japanese On readings of is KIN and KON. From the Unicode database, we construct mappings between Japanese readings of a character in romanji and the kanji characters in its Unicode representation. As kanji characters in Japanese names can have either the Kun reading or the On 8 http://www.unicode.org/charts/unihan.html 9 Historically, when kanji characters were introduced into the Japanese writing system, two methods of transcription were used. One is called “on-yomi” (i.e., On reading), where the Chinese sounds of the characters were adopted for Japanese words. The other method is called “kun-yomi” (i.e., Kun reading), where a kanji character preserved its meaning in Chinese, but was pronounced using the Japanese sounds. reading, we consider both readings as candidates for each kanji character. The mapping table has a total of 5,525 entries. A typical mapping is as follows: kou U+4EC0 U+5341 U+554F U+5A09 U+5B58 U+7C50 U+7C58 ...... in which the first field specifies a pronunciation in romanji, while the rest of the fields specifies the possible kanji characters into which the pronunciation can be mapped. There is a wide variation in the distribution of these mappings. For example, kou can be the pronunciation of 670 kanji characters, while the sound katakumi can be mapped to only one kanji character. 3.2 Romanji Name Back-Transliteration In theory, once we have the mappings between romanji characters and the kanji characters, we can first segment a Japanese name written in romanji and then apply the mappings to back-transliterate the romanji characters into all possible kanji representations. However, for some segmentation, the number of the possible kanji combinations can be so large as to make the problem computationally intractable. For example, consider the short Japanese name “koizumi.” This name can be segmented into the romanji characters “ko-i-zu-mi” using the Romanji-Kanji mapping table described in section 3.1, but this segmentation then has 182*230*73*49 (over 149 million) possible kanji combinations. Here, 182, 239, 73, and 49 represents the numbers of possible kanji characters for the romanji characters “ko”, “i”, “zu”, and “mi”, respectively. In this study, we present an efficient procedure for back-transliterating romanji names to kanji characters that avoids this complexity. The procedure consists of the following steps: (1) romanji name segmentation, (2) kanji name generation, (3) kanji name filtering via monolingual Japanese corpus, and (4) kanjiromanji combination filtering via WWW. Our procedure relies on filtering using corpus statistics to reduce the hypothesis space in the last three steps. We illustrate the steps below using the romanji name “koizumi” (  . 3.2.1 Romanji Name Segmentation With the romanji characters from the RomanjiKanji mapping table, we first segment a name recognized as Japanese into sequences of romanji characters. Note that a greedy segmentation method, such as the left-to-right longest match method, often results in segmentation errors. For example, for “koizumi”, the longest match segmentation method produces segmentation “koizu-mi”, while the correct segmentation is “koizumi”. Motivated by this observation, we generate all the possible segmentations for a given name. 
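A minimal sketch of this exhaustive segmentation step, assuming the romanji keys of the Romanji-Kanji mapping table of Section 3.1 serve as the syllable inventory (the inventory shown in the usage comment is a small hypothetical subset):

def segmentations(name, syllables):
    # Return every way of splitting `name` into syllables that appear
    # as keys of the Romanji-Kanji mapping table.
    if not name:
        return [[]]
    results = []
    for i in range(1, len(name) + 1):
        prefix = name[:i]
        if prefix in syllables:
            for rest in segmentations(name[i:], syllables):
                results.append([prefix] + rest)
    return results

# With a toy inventory {"ko", "koi", "i", "izumi", "zu", "mi"} this
# reproduces exactly the three segmentations of "koizumi" listed below.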
The possible segmentations for “koizumi” are: ko-izumi koi-zu-mi ko-i-zu-mi 3.2.2 Kanji Name Segmentation Using the same Romanji-Kanji mapping table, we obtain the possible kanji combinations for a segmentation of a romanji name produced by the previous step. For the segmentation “ko-izumi”, we have a total of 546 (182*3) combinations (we use the Unicode scale value to represent the kanji characters and use spaces to separate them): U+5C0F U+6CC9 U+53E4 U+6CC9 ...... We do not produce all possible combinations. As we have discussed earlier, such a generation method can produce so many combinations as to make computation infeasible for longer segmentations. To control this explosion, we eliminate unattested combinations using a bigram model of the possible kanji sequences in Japanese. From the Japanese evaluation corpus of the NTCIR-4 CLIR track 10 , we collected bigram statistics by first using a statistical part-of-speech tagger of Japanese (Qu et al., 2004). All valid Japanese terms and their frequencies from the tagger output were extracted. From this term list, we generated kanji bigram statistics (as well as an attested term list used below in step 3). With this bigram-based model, our hypothesis space is significantly reduced. For example, with the segmentation “ko-i-zu-mi”, even though “ko-i” can have 182*230 possible combinations, we only retain the 42 kanji combinations that are attested in the corpus. Continuing with the romanji segments “i-zu”, we generate the possible kanji combinations for “i-zu” that can continue one of the 42 candidates for “koi”. This results in only 6 candidates for the segments “ko-i-zu”. Lastly, we consider the romanji segments “zumi”, and retain with only 4 candidates for the segmentation “ko-i-zu-mi” whose bigram sequences are attested in our language model: U+5C0F U+53F0 U+982D U+8EAB U+5B50 U+610F U+56F3 U+5B50 U+5C0F U+610F U+56F3 U+5B50 U+6545 U+610F U+56F3 U+5B50 10 http://research.nii.ac.jp/ntcir-ws4/clir/index.html Thus, for the segmentation “ko-i-zu-mi”, the bigram-based language model effectively reduces the hypothesis space from 182*230*73*49 possible kanji combinations to 4 candidates. For the other alternative segmentation “koi-zu-mi”, no candidates can be generated by the language model. 3.2.3 Corpus-based Kanji name Filtering In this step, we use a monolingual Japanese corpus to validate whether the kanji name candidates generated by step (2) are attested in the corpus. Here, we simply use Japanese term list extracted from the segmented NTCIR-4 corpus created for the previous step to filter out unattested kanji combinations. For the segmentation “koizumi”, the following kanji combinations are attested in the corpus (preceded by their frequency in the corpus): 4167  koizumi 16  koizumi 4  koizumi None of the four kanji candidates from the alternate segmentation “ko-i-zu-mi” is attested in the corpus. While step 2 filters out candidates using bigram sequences, step 3 uses corpus terms in their entirety to validate candidates. 3.2.4 Romanji-Kanji Combination Validation Here, we take the corpus-validated kanji candidates (but for which we are not yet sure if they correspond to the same reading as the original Japanese name written in romanji) and use the Web to validate the pairings of kanji-romanji combinations (e.g.,   AND koizumi). This is motivated by two observations. First, in contrast to monolingual corpus, Web pages are often mixedlingual. It is often possible to find a word and its translation on the same Web pages. 
Second, person names and specialized terminology are among the most frequent mixed-lingual items. Thus, we would expect that the appearance of both representations in close proximity on the same pages gives us more confidence in the kanji representations. For example, with the Google search engine, all three kanji-romanji combinations for “koizumi” are attested: 23,600 pages - koizumi 302 pages - koizumi 1 page - koizumi Among the three, the   koizumi combination is the most common one, being the name of the current Japanese Prime Minister. 4 Evaluation In this section, we describe the gold standards and evaluation measures for evaluating the effectiveness of the above method for backtransliterating Japanese names. 4.1 Gold Standards Based on two publicly accessible name lists and a Japanese-to-English name lexicon, we have constructed two Gold Standards. The Japanese-toEnglish name lexicon is ENAMDICT 11 , which contains more than 210,000 Japanese-English name translation pairs. Gold Standard – Given Names (GS-GN): to construct a gold standard for Japanese given names, we obtained 7,151 baby names in romanji from http://www.kabalarians.com/. Of these 7,151 names, 5,115 names have kanji translations in the ENAMDICT12. We took the 5115 romanji names and their kanji translations in the ENAMDICT as the gold standard for given names. Gold Standard – Surnames (GS-SN): to construct a gold standard for Japanese surnames, we downloaded 972 surnames in romanji from http://business.baylor.edu/Phil_VanAuken/Japanes eSurnames.html. Of these names, 811 names have kanji translations in the ENAMDICT. We took these 811 romanji surnames and their kanji translations in the ENAMDICT as the gold standard for Japanese surnames. 4.2 Evaluation Measures Each name in romanji in the gold standards has at least one kanji representation obtained from the ENAMDICT. For each name, precision, recall, and F measures are calculated as follows: • Precision: number of correct kanji output / total number of kanji output • Recall: number of correct kanji output / total number of kanji names in gold standard • F-measure: 2*Precision*Recall / (Precision + Recall) Average Precision, Average Recall, and Average F-measure are computed over all the names in the test sets. 5 Evaluation Results and Analysis 5.1 Effectiveness of Corpus Validation Table 2 and Table 3 present the precision, recall, and F statistics for the gold standards GS-GN and 11 http://mirrors.nihongo.org/monash/ enamdict_doc.html 12 The fact that above 2000 of these names were missing from ENAMDICT is a further justification for a name translation method as described in this paper. GS-SN, respectively. For given names, corpus validation produces the best average precision of 0.45, while the best average recall is a low 0.27. With the additional step of Web validation of the romanji-kanji combinations, the average precision increased by 62.2% to 0.73, while the best average recall improved by 7.4% to 0.29. We observe a similar trend for surnames. The results demonstrate that, through a large, mixed-lingual corpus such as the Web, we can improve both precision and recall for automatically transliterating romanji names back to kanji. Avg Prec Avg Recall F (1) Corpus 0.45 0.27 0.33 (2) Web (over (1)) 0.73 (+62.2%) 0.29 (+7.4%) 0.38 (+15.2%) Table 2: The best Avg Precision, Avg Recall, and Avg F statistics achieved through corpus validation and Web validation for GS-GN. 
Avg Prec Avg Recall F (1) Corpus 0.69 0.44 0.51 (2) Web (over (1)) 0.90 (+23.3%) 0.45 (+2.3%) 0.56 (+9.8%) Table 3: The best Avg Precision, Avg Recall, and Avg F statistics achieved through corpus validation and Web validation for GS-SN. We also observe that the performance statistics for the surnames are significantly higher than those of the given names, which might reflect the different degrees of flexibility in using surnames and given names in Japanese. We would expect that the surnames form a somewhat closed set, while the given names belong to a more open set. This may account for the higher recall for surnames. 5.2 Effectiveness of Corpus Validation If the big, mixed-lingual Web can deliver better validation than the limited-sized monolingual corpus, why not use it at every stage of filtering? Technically, we could use the Web as the ultimate corpus for validation at any stage when a corpus is required. In practice, however, each Web access involves additional computation time for file IO, network connections, etc. For example, accessing Google took about 2 seconds per name13; gathering 13 We inserted a 1 second sleep between calls to the search engine so as not to overload the engine. statistics for about 30,000 kanji-romanji combinations14 took us around 15 hours. In the procedure described in section 3.2, we have aimed to reduce computation complexity and time at several stages. In step 2, we use bigrambased language model from a corpus to reduce the hypothesis space. In step 3, we use corpus filtering to obtain a fast validation of the candidates, before passing the output to the Web validation in step 4. Table 4 illustrates the savings achieved through these steps. GS-GN GS-SN All possible 2.0e+017 296,761,622,763 2gram model 21,306,322 (-99.9%) 2,486,598 (-99.9%) Corpus validate 30,457 (-99.9%) 3,298 (-99.9%) Web validation 20,787 (-31.7%) 2,769 (-16.0%) Table 4: The numbers of output candidates of each step to be passed to the next step. The percentages specify the amount of reduction in hypothesis space. 5.3 Thresholding Effects We have examined whether we should discard the validated candidates with low frequencies either from the corpus or the Web. The cutoff points examined include initial low frequency range 1 to 10 and then from 10 up to 400 in with increments of 5. Figure 1 and Figure 2 illustrate that, to achieve best overall performance, it is beneficial to discard candidates with very low frequencies, e.g., frequencies below 5. Even though we observe a stabling trend after reaching certain threshold points for these validation methods, it is surprising to see that, for the corpus validation method with GS-GN, with stricter thresholds, average precisions are actually decreasing. We are currently investigating this exception. 5.4 Error Analysis Based on a preliminary error analysis, we have identified three areas for improvements. First, our current method does not account for certain phonological transformations when the On/Kun readings are concatenated together. Consider the name “matsuda” (  ). The segmentation step correctly segmented the romanji to “matsu-da”. However, in the Unihan database, 14 At this rate, checking the 21 million combinations remaining after filtering with bigrams using the Web (without the corpus filtering step) would take more than a year. the Kun reading of  is “ta”, while its On reading is “den”. 
Therefore, using the mappings from the Unihan database, we failed to obtain the mapping between the pronunciation “da” and the kanji  , which resulted in both low precision and recall for “matsuda”. This suggests for introducing language-specific phonological transformations or alternatively fuzzy matching to deal with the mismatch problem. Avg Precision - GS_GN 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 1 6 15 50 100 150 200 250 300 350 400 Threshold for frequency cutoff Avg Precision corpus+web corpus Figure 1: Average precisions achieved via both corpus and corpus+Web validation with different frequency-based cutoff thresholds for GS-GN Avg Precision - GS_SN 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1 6 15 50 100 150 200 250 300 350 400 Threshold for frequency cutoff Avg Precision corpus+web corpus Figure 2: Average precisions achieved via both corpus and corpus+Web validation with different frequency-based cutoff thresholds for GS-SN Second, ENAMDICT contains mappings between kanji and romanji that are not available from the Unihan database. For example, for the name “hiroshi” in romanji, based on the mappings from the Unihan database, we can obtain two possible segmentations: “hiro-shi” and “hi-ro-shi”. Our method produces two- and three-kanji character sequences that correspond to these romanji characters. For example, corpus validation produces the following kanji candidates for “hiroshi”: 2  hiroshi 10  hiroshi 5  hiroshi 1  hiroshi 2    hiroshi 11   hiroshi 33    hiroshi 311   hiroshi ENAMDCIT, however, in addition to the 2- and 3-character kanji names, also contains 1-character kanji names, whose mappings are not found in the Unihan database, e.g.,  Hiroshi  Hiroshi  Hiroshi  Hiroshi  Hiroshi  Hiroshi This suggests the limitation of relying solely on the Unihan database for building mappings between romanji characters and kanji characters. Other mapping resources, such as ENAMDCIT, should be considered in our future work. Third, because the statistical part-of-speech tagger we used for Japanese term identification does not have a lexicon of all possible names in Japanese, some unknown names, which are incorrectly separated into individual kanji characters, are therefore not available for correct corpus-based validation. We are currently exploring methods using overlapping character bigrams, instead of the tagger-produced terms, as the basis for corpus-based validation and filtering. 6 Conclusions In this study, we have examined a solution to a previously little treated problem of transliterating CJK names written in Latin scripts back into their ideographic representations. The solution involves first identifying the origins of the CJK names and then back-transliterating the names to their respective ideographic representations with language-specific sound-to-character mappings. We have demonstrated that a simple trigram-based language identifier can serve adequately for identifying names of Japanese origin. During back-transliteration, the possibilities can be massive due to the large number of mappings between a Japanese sound and its kanji representations. To reduce the complexity, we apply a three-tier filtering process which eliminates most incorrect candidates, while still achieving an F measure of 0.38 on a test set of given names, and an F measure of 0.56 on a test of surnames. 
The three filtering steps involve using a bigram model derived from a large segmented Japanese corpus, then using a list of attested corpus terms from the same corpus, and lastly using the whole Web as a corpus. The Web is used to validate the backtransliterations using statistics of pages containing both the candidate kanji translation as well as the original romanji name. Based on the results of this study, our future work will involve testing the effectiveness of the current method in real CLIR applications, applying the method to other types of proper names and other language pairs, and exploring new methods for improving precision and recall for romanji name back-transliteration. In cross-language applications such as English to Japanese retrieval, dealing with a romaji name that is missing in the bilingual lexicon should involve (1) identifying the origin of the name for selecting the appropriate language-specific mappings, and (2) automatically generating the back-transliterations of the name in the right orthographic representations (e.g., Katakana representations for foreign Latin-origin names or kanji representations for native Japanese names). To further improve precision and recall, one promising technique is fuzzy matching (Meng et al, 2001) for dealing with phonological transformations in name generation that are not considered in our current approach (e.g., “matsuda” vs “matsuta”). Lastly, we will explore whether the proposed romanji to kanji backtransliteration approach applies to other types of names such as place names and study the effectiveness of the approach for backtransliterating romanji names of Chinese origin and Korean origin to their respective kanji representations. References Yaser Al-Onaizan and Kevin Knight. 2002. Machine Transliteration of Names in Arabic Text. Proc. of ACL Workshop on Computational Approaches to Semitic Languages William B. Cavnar and John M. Trenkle. 1994. Ngram based text categorization. In 3rd Annual Symposium on Document Analysis and Information Retrieval, 161-175 Atsushi Fujii and Tetsuya Ishikawa. 2001. Japanese/English Cross-Language Information Retrieval: Exploration of Query Translation and Transliteration. Computer and the Humanities, 35( 4): 389–420 K. S. Jeong, Sung-Hyon Myaeng, J. S. Lee, and K. S. Choi. 1999. Automatic identification and back-transliteration of foreign words for information retrieval. Information Processing and Management, 35(4): 523-540 Kevin Knight and Jonathan Graehl. 1998. Machine Transliteration. Computational Linguistics: 24(4): 599-612 Wen-Cheng Lin, Changhua Yang and Hsin-Hsi Chen. 2003. Foreign Name Backward Transliteration in Chinese-English CrossLanguage Image Retrieval, In Proceedings of CLEF 2003 Workshop, Trondheim, Norway. Helen Meng, Wai-Kit Lo, Berlin Chen, and Karen Tang. 2001. Generating Phonetic Cognates to Handel Named Entities in English-Chinese Cross-Language Spoken Document Retrieval. In Proc of the Automatic Speech Recognition and Understanding Workshop (ASRU 2001) Trento, Italy, Dec. Yan Qu, Gregory Grefenstette, David A. Evans. 2003. Automatic transliteration for Japanese-toEnglish text retrieval. In Proceedings of SIGIR 2003: 353-360 Yan Qu, Gregory Grefenstette, David A. Hull, David A. Evans, Toshiya Ueda, Tatsuo Kato, Daisuke Noda, Motoko Ishikawa, Setsuko Nara, and Kousaku Arita. 2004. JustsystemClairvoyance CLIR Experiments at NTCIR-4 Workshop. In Proceedings of the NTCIR-4 Workshop.
Extracting Regulatory Gene Expression Networks from PubMed Jasmin ˇSari´c EML Research gGmbH Heidelberg, Germany [email protected] Lars J. Jensen EMBL Heidelberg, Germany [email protected] Rossitza Ouzounova EMBL Heidelberg, Germany [email protected] Isabel Rojas EML Research gGmbH Heidelberg, Germany [email protected] Peer Bork EMBL Heidelberg, Germany [email protected] Abstract We present an approach using syntactosemantic rules for the extraction of relational information from biomedical abstracts. The results show that by overcoming the hurdle of technical terminology, high precision results can be achieved. From abstracts related to baker’s yeast, we manage to extract a regulatory network comprised of 441 pairwise relations from 58,664 abstracts with an accuracy of 83–90%. To achieve this, we made use of a resource of gene/protein names considerably larger than those used in most other biology related information extraction approaches. This list of names was included in the lexicon of our retrained part-of-speech tagger for use on molecular biology abstracts. For the domain in question an accuracy of 93.6–97.7% was attained on POS-tags. The method is easily adapted to other organisms than yeast, allowing us to extract many more biologically relevant relations. 1 Introduction and related work A massive amount of information is buried in scientific publications (more than 500,000 publications per year). Therefore, the need for information extraction (IE) and text mining in the life sciences is drastically increasing. Most of the ongoing work is being dedicated to deal with PubMed1 abstracts. The technical terminology of biomedicine presents the main challenge of applying IE to such a corpus (Hobbs, 2003). The goal of our work is to extract from biological abstracts which proteins are responsible for regulating the expression (i.e. transcription or translation) of which genes. This means to extract a specific type of pairwise relations between biological entities. This differs from the BioCreAtIvE competition tasks2 that aimed at classifying entities (gene products) into classes based on Gene Ontology (Ashburner et al., 2000). A task closely related to ours, which has received some attention over the past five years, is the extraction of protein–protein interactions from abstracts. This problem has mainly been addressed by statistical “bag of words” approaches (Marcotte et al., 2001), with the notable exception of Blaschke et al. (1999). All of the approaches differ significantly from ours by only attempting to extract the type of interaction and the participating proteins, disregarding agens and patiens. Most NLP based studies tend to have been focused on extraction of events involving one particular verb, e.g. bind (Thomas et al., 2000) or inhibit (Pustejovsky et al., 2002). From a biological point of view, there are two problems with such approaches: 1) the meaning of the extracted events 1PubMed is a bibliographic database covering life sciences with a focus on biomedicine, comprising around 12 × 106 articles, roughly half of them including abstract (http: //www.ncbi.nlm.nih.gov/PubMed/). 2Critical Assessment of Information Extraction systems in Biology, http://www.mitre.org/public/ biocreative/ will depend strongly on the selectional restrictions and 2) the same meaning can be expressed using a number of different verbs. 
In contrast and alike (Friedman et al., 2001), we instead set out to handle only one specific biological problem and, in return, extract the related events with their whole range of syntactic variations. The variety in the biological terminology used to describe regulation of gene expression presents a major hurdle to an IE approach; in many cases the information is buried to such an extent that even a human reader is unable to extract it unless having a scientific background in biology. In this paper we will show that by overcoming the terminological barrier, high precision extraction of entity relations can be achieved within the field of molecular biology. 2 The biological task and our approach To extract relations, one should first recognize the named entities involved. This is particularly difficult in molecular biology where many forms of variation frequently occur. Synonymy is very common due to lack of standardization of gene names; BYP1, CIF1, FDP1, GGS1, GLC6, TPS1, TSS1, and YBR126C are all synonyms for the same gene/protein. Additionally, these names are subject to orthographic variation originating from differences in capitalization and hyphenation as well as syntactic variation of multiword terms (e.g. riboflavin synthetase beta chain = beta chain of riboflavin synthetase). Moreover, many names are homonyms since a gene and its gene product are usually named identically, causing cross-over of terms between semantic classes. Finally, paragrammatical variations are more frequent in life science publications than in common English due to the large number of publications by non-native speakers (Netzel et al., 2003). Extracting that a protein regulates the expression of a gene is a challenging problem as this fact can be expressed in a variety of ways—possibly mentioning neither the biological process (expression) nor any of the two biological entities (genes and proteins). Figure 1 shows a simplified ontology providing an overview of the biological entities involved in gene expression, their ontological relationships, and how they can interact with Gene Transcript Gene product Stable RNA Promoter Binding site Upstream activating sequence Upstream repressing sequence mRNA Protein Transcription regulator Transcription activator Transcription repressor is a part of produces binds to Figure 1: A simplified ontology for transcription regulation. The background color used for each term signifies its semantic role in relations: regulator (white), target (black), or either (gray). one another. An ontology is a great help when writing extraction rules, as it immediately suggests a large number of relevant relations to be extracted. Examples include “promoter contains upstream activating sequence” and “transcription regulator binds to promoter”, both of which follow from indirect relationships via binding site. It is often not known whether the regulation takes place at the level of gene transcription or translation or by an indirect mechanism. For this reason, and for simplicity, we decided against trying to extract how the regulation of expression takes place. We do, however, strictly require that the extracted relations provide information about a protein (the regulator, R) regulating the expression of a gene (the target, X), for which reason three requirements must be fulfilled: 1. It must be ascertained that the sentence mentions gene expression. “The protein R activates X” fails this requirement, as R might instead activate X post-translationally. 
Thus, whether the event should be extracted or not depends on the type of the accusative object X (e.g. gene or gene product). Without a head noun specifying the type, X remains ambiguous, leaving the whole relation underspecified, for which reason it should not be extracted. It should be noted that two thirds of the gene/protein names mentioned in our corpus are ambiguous for this reason. 2. The identity of the regulator (R) must be known. “The X promoter activates X expression” fails this requirement, as it is not known which transcription factor activates the expression when binding to the X promoter. Linguistically this implies that noun chunks of certain semantic types should be disallowed as agens. 3. The identity of the target (X) must be known. “The transcription factor R activates R dependent expression” fails this requirement, as it is not know which gene’s expression is dependent on R. The semantic types allowed for patiens should thus also be restricted. The two last requirements are important to avoid extraction from non-informative sentences that— despite them containing no information—occur quite frequently in scientific abstracts. The coloring of the entities in Figure 1 helps discern which relations are meaningful and which are not. The ability to genetically modify an organism in experiments brings about further complication to IE: biological texts often mention what takes place when an organism is artificially modified in a particular way. In some cases such modification can reverse part of the meaning of the verb: from the sentence “Deletion of R increased X expression” one can conclude that R represses expression of X. The key point is to identify that “deletion of R” implies that the sentence describes an experiment in which R has been removed, but that R would normally be present and that the biological impact of R is thus the opposite of what the verb increased alone would suggest. In other cases the verb will lose part of its meaning: “Mutation of R increased X expression” implies that R regulates expression X, but we cannot infer whether R is an activator or a repressor. In this case mutation is dealt in a manner similar to deletion in the previous example. Finally, there are those relations that should be completely avoided as they exist only because they have been artificially introduced through genetic engineering. In our extraction method we address all three cases. We have opted for a rule based approach (implemented as finite state automata) to extract the relations for two reasons. The first is, that a rule based approach allows us to directly ensure that the three requirements stated above are fulfilled for the extracted relations. This is desired to attain high accuracy on the extracted relations, which is what matters to the biologist. Hence, we focus in our evaluation on the semantic correctness of our method rather than on its grammatical correctness. As long as grammatical errors do not result in semantic errors, we do not consider it an error. Conversely, even a grammatically correct extraction is considered an error if it is semantically wrong. Our second reason for choosing a rule based approach is that our approach is theory-driven and highly interdisciplinary, involving computational linguists, bioinformaticians, and biologists. The rule based approach allows us to benefit more from the interplay of scientists with different backgrounds, as known biological constraints can be explicitly incorporated in the extraction rules. 
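To make the flavour of such rules concrete before turning to the implementation, the following toy pattern is a sketch only, not the authors' cascaded CASS grammar described later in Sections 3.6 and 3.7; the tag names and the helper function are invented for illustration. It fires only when the three requirements above are met: the sentence mentions expression, and both regulator and target are named entities.

import re

# Hypothetical tags: nnpg = gene/protein name, vactivate = activation verb,
# nexpr = expression noun; tokens are assumed to arrive as word/TAG pairs.
PATTERN = re.compile(
    r"(?P<regulator>\S+)/nnpg \S+/vactivate "
    r"(?:the/dt )?expression/nexpr of/in "
    r"(?:the/dt )?(?P<target>\S+)/nnpg(?: \S+/gene)?"
)

def extract_up_regulation(tagged_sentence):
    m = PATTERN.search(tagged_sentence)
    return (m.group("regulator"), "up-regulates", m.group("target")) if m else None

print(extract_up_regulation(
    "HAP1/nnpg induces/vactivate the/dt expression/nexpr "
    "of/in the/dt CYC1/nnpg gene/gene"))
# -> ('HAP1', 'up-regulates', 'CYC1')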
3 Methods Table 1 shows an overview of the architecture of our IE system. It is organized in levels such that the output of one level is the input of the next one. The following sections describe each level in detail. 3.1 The corpus The PubMed resource was downloaded on January 19, 2004. 58,664 abstracts related to the yeast Saccharomyces cerevisiae were extracted by looking for occurrences of the terms “Saccharomyces cerevisiae”, “S. cerevisiae”, “Baker’s yeast”, “Brewer’s yeast”, and “Budding yeast” in the title/abstract or as head of a MeSH term3. These abstracts were filtered to obtain the 15,777 that mention at least two names (see section 3.4) and subsequently divided into a training and an evaluation set of 9137 and 6640 abstracts respectively. 3Medical Subject Headings (MeSH) is a controlled vocabulary for manually annoting PubMed articles. Level Component L0 Tokenization and multiwords Word and sentence boundaries are detected and multiwords are recognized and recomposed to one token. L1 POS-Tagging A part-of-speech tag is assigned to each word (or multiword) of the tokenized corpus. L2 Semantic labeling A manually built taxonomy is used to assign semantic labels to tokens. The taxonomy consists of gene names, cue words relevant for entity recognition, and classes of verbs for relation extraction. L3 Named entity chunking Based on the POS-tags and the semantic labels, a cascaded chunk grammar recognizes noun chunks relevant for the gene transcription domain, e.g. [nxgene The GAL4 gene ]. L4 Relation chunking Relations between entities are recognized, e.g. The expression of the cytochrome genes CYC1 and CYC7 is controlled by HAP1. L5 Output and visualization Information is gathered from the recognised patterns and transformed into pre-defined records. From the example in L4 we extract that HAP1 regulates the expression of CYC1 and CYC7. Table 1: Overview over the extraction architecture 3.2 Tokenization and multiword detection The process of tokenization consists of two steps (Grefenstette and Tapanainen, 1994): segmentation of the input text into a sequence of tokens and the detection of sentential boundaries. We use the tokenizer developed by Helmut Schmid at IMS (University of Stuttgart) because it combines a high accuracy (99.56% on the Brown corpus) with unsupervised learning (i.e. no manually labelled data is needed) (Schmid, 2000). The determination of token boundaries in technical or scientific texts is one of the main challenges within information extraction or retrieval. On the one hand, technical terms contain special characters such as brackets, colons, hyphens, slashes, etc. On the other hand, they often appear as multiword expressions which makes it hard to detect the left and right boundaries of the terms. Although a lot of work has been invested in the detection of technical terms within biology related texts (see Nenadi´c et al. (2003) or Yamamoto et al. (2003) for representative results) this task is not yet solved to a satisfying extent. As we are interested in very special terms and high precision results we opted for multiword detection based on semi-automatical acquisition of multiwords (see sections 3.4 and 3.5). 3.3 Part-of-speech tagging To improve the accuracy of POS-tagging on PubMed abstracts, TreeTagger (Schmid, 1994) was retrained on the GENIA 3.0 corpus (Kim et al., 2003). Furthermore, we expanded the POStagger lexicon with entries relevant for our application such as gene names (see section 3.4) and multiwords (see section 3.5). 
As tag set we use the UPenn tag set (Santorini, 1991) plus some minor extensions for distinguishing auxiliary verbs. The GENIA 3.0 corpus consists of PubMed abstracts and has 466,179 manually annotated tokens. For our application we made two changes in the annotation. The first one concerns seemingly undecideable cases like in/or annotated as in|cc. These were split into three tokens: in, /, and or each annotated with its own tag. This was done because TreeTagger is not able to annotate two POS-tags for one token. The second set of changes was to adapt the tag set so that vb... is used for derivates of to be, vh... for derivates of to have, and vv... for all other verbs. 3.4 Recognizing gene/protein names To be able to recognize gene/protein names as such, and to associate them with the appropriate database identifiers, a list of synonymous names and identifiers in six eukaryotic model organisms was compiled from several sources (available from http://www.bork.embl. de/synonyms/). For S. cerevisiae specifically, 51,640 uniquely resolvable names and identifiers were obtained from Saccharomyces Genome Database (SGD) and SWISS-PROT (Dwight et al., 2002; Boeckmann et al., 2003). Before matching these names against the POStagged corpus, the list of names was expanded to include different orthographic variants of each name. Firstly, the names were allowed to have various combinations of uppercase and lowercase letters: all uppercase, all lowercase, first letter uppercase, and (for multiword names) first letter of each word uppercase. In each of these versions, we allowed whitespace to be replaced by hyphen, and hyphen to be removed or replaced by whitespace. In addition, from each gene name a possible protein name was generated by appending the letter p. The resulting list containing all orthographic variations comprises 516,799 entries. The orthographically expanded name list was fed into the multiword detection, the POS-tagger lexicon, and was subsequently matched against the POS-tagged corpus to retag gene/protein names as such (nnpg). By accepting only matches to words tagged as common nouns (nn), the problem of homonymy was reduced since e.g. the name MAP can occur as a verb as well. 3.5 Semantic tagging In addition to the recognition of the gene and protein names, we recognize several other terms and annotate them with semantic tags. This set of semantically relevant terms mainly consists of nouns and verbs, as well as some few prepositions like from, or adjectives like dependent. The first main set of terms consists of nouns, which are classified as follows: • Relevant concepts in our ontology: gene, protein, promoter, binding site, transcription factor, etc. (153 entries). • Relational nouns, like nouns of activation (e.g. derepression and positive regulation), nouns of repression (e.g. suppression and negative regulation), nouns of regulation (e.g. affect and control) (69 entries). • Triggering experimental (artificial) contexts: mutation, deletion, fusion, defect, vector, plasmids, etc. (11 entries). • Enzymes: gyrase, kinase, etc. (569 entries). • Organism names extracted from the NCBI taxonomy of organisms (Wheeler et al., 2004) (20,746 entries). The second set of terms contains 50 verbs and their inflections. They were classified according to their relevance in gene transcription. These verbs are crucial for the extraction of relations between entities: • Verbs of activation e.g. enhance, increase, induce, and positively regulate. • Verbs of repression e.g. 
block, decrease, downregulate, and down regulate. • Verbs of regulation e.g. affect and control. • Other selected verbs like code (or encode) and contain where given their own tags. Each of the terms consisting of more than one word was utilized for multiword recognition. We also have have two additional classes of words to prevent false positive extractions. The first contains words of negation, like not, cannot, etc. The other contains nouns that are to be distinguished from other common nouns to avoid them being allowed within named entitities, e.g. allele and diploid. 3.6 Extraction of named entities In the preceding steps we classified relevant nouns according to semantic criteria. This allows us to chunk noun phrases generalizing over both POStags and semantic tags. Syntacto-semantic chunking was performed to recognize named entities using cascades of finite state rules implemented as a CASS grammar (Abney, 1996). As an example we recognize gene noun phrases: [nx gene [dt the] [nnpg CYC1] [gene gene] [in in] [yeast Saccharomyces cerevisiae]] Other syntactic variants, as for example “the glucokinase gene GLK1” are recognized too. Similarly, we detect at this early level noun chunks denoting other biological entities such as proteins, activators, repressors, transcription factors etc. Subsequently, we recognize more complex noun chunks on the basis of the simpler ones, e.g. promoters, upstream activating/repressing sequences (UAS/URS), binding sites. At this point it becomes important to distinguish between agens and patiens forms of certain entities. Since a binding site is part of a target gene, it can be referred to either by the name of this gene or by the name of the regulator protein that binds to it. It is thus necessary to discriminate between “binding site of” and “binding site for”. As already mentioned, we have annotated a class of nouns that trigger experimental context. On the basis of these we identify noun chunks mentioning, as for example deletion, mutation, or overexpression of genes. At a fairly late stage we recognize events that can occur as arguments for verbs like “expression of”. 3.7 Extraction of relations between entities This step of processing concerns the recognition of three types of relations between the recognized named entities: up-regulation, down-regulation, and (underspecified) regulation of expression. We combine syntactic properties (subcategorization restrictions) and semantic properties (selectional restrictions) of the relevant verbs to map them to one of the three relation types. The following shows a reduced bracketed structure consting of three parts, a promoter chunk, a verbal complex chunk, and a UAS chunk in patiens: [nx prom the ATR1 promoter region] [contain contains] [nx uas pt [dt−a a] [bs binding site] [for for] [nx activator the GCN4 activator protein]]. From this we extract that the GCN4 protein activates the expression of the ATR1 gene. We identify passive constructs too e.g. “RNR1 expression is reduced by CLN1 or CLN2 overexpression”. In this case we extract two pairwise relations, namely that both CLN1 and CLN2 down-regulate the expression of the RNR1 gene. We also identify nominalized relations as exemplified by “the binding of GCN4 protein to the SER1 promoter in vitro”. 4 Results Using our relation extraction rules, we were able to extract 422 relation chunks from our complete corpus. Since one entity chunk can mention several different named entities, these corresponded to a total of 597 extracted pairwise relations. 
However, several relation chunks may mention the same pairwise relation, so the 597 extracted relations reduce to 441 unique pairwise relations comprised of 126 up-regulations, 90 down-regulations, and 225 regulations of unknown direction. Figure 2 displays these 441 relations as a regulatory network in which the nodes represent genes or proteins and the arcs are expression regulation relations. Known transcription factors according to the Saccharomyces Genome Database (SGD) (Dwight et al., 2002) are denoted by black nodes. From a biological point of view, it is reassuring that these tend to correspond to proteins serving as regulators in our relations.
Figure 2: The extracted network of gene regulation. The extracted relations are shown as a directed graph, in which each node corresponds to a gene or protein and each arc represents a pairwise relation. The arcs point from the regulator to the target and the type of regulation is specified by the type of arrow head. Known transcription factors are highlighted as black nodes.
4.1 Evaluation of relation extraction To evaluate the accuracy of the extracted relations, we manually inspected all relations extracted from the evaluation corpus using the TIGERSearch visualization tool (Lezius, 2002). The accuracy of the relations was evaluated at the semantic rather than the grammatical level. We thus carried out the evaluation in such a way that relations were counted as correct if they extracted the correct biological conclusion, even if the analysis of the sentence is not what would be desired from a linguistic point of view. Conversely, a relation was counted as an error if the biological conclusion was wrong. 75 of the 90 relation chunks (83%) extracted from the evaluation corpus were entirely correct, meaning that the relation corresponded to expression regulation, the regulator (R) and the regulatee (X) were correctly identified, and the direction of regulation (up or down) was correct if extracted. A further 6 relation chunks extracted the wrong direction of regulation but were otherwise correct; our accuracy increases to 90% if we allow for this minor type of error. Approximately half of the errors made by our method stem from overlooked genetic modifications—although mentioned in the sentence, the extracted relation is not biologically relevant. 4.2 Entity recognition For the sake of consistency, we have also evaluated our ability to correctly identify named entities at the level of semantic rather than grammatical correctness. Manual inspection of 500 named entities from the evaluation corpus revealed 14 errors, which corresponds to an estimated accuracy of just over 97%. Surprisingly, many of these errors were committed when recognizing proteins, for which our accuracy was only 95%. Phrases such as “telomerase associated protein” (which got confused with “telomerase protein” itself) were responsible for about half of these errors. Among the 153 entities involved in relations no errors were detected, which is fewer than expected from our estimated accuracy on entity recognition (99% confidence according to a hypergeometric test). This suggests that the templates used for relation extraction are unlikely to match those sentence constructs on which the entity recognition goes wrong. False identification of named entities is thus unlikely to have an impact on the accuracy of relation extraction. 4.3 POS-tagging and tokenization We compared the POS-tagging performance of two parameter files on 55,166 tokens from the GENIA corpus that were not used for retraining.
Using the retrained tagger, 93.6% of the tokens were correctly tagged, 4.1% carried questionable tags (e.g. confusing proper nouns for common nouns), and 2.3% were clear tagging errors. This compares favourably to the 85.7% correct, 8.5% questionable tags, and 5.8% errors obtained when using the Standard English parameter file. Retraining thus reduced the error rate more than two-fold. Of 198 sentences evaluated, the correct sentence boundary was detected in all cases. In addition, three abbreviations incorrectly resulted in sentence marker, corresponding to an overall precision of 98.5%. 5 Conclusions We have developed a method that allows us to extract information on regulation of gene expression from biomedical abstracts. This is a highly relevant biological problem, since much is known about it although this knowledge has yet to be collected in a database. Also, knowledge on how gene expression is regulated is crucial for interpreting the enormous amounts of gene expression data produced by high-throughput methods like spotted microarrays and GeneChips. Although we developed and evaluated our method on abstracts related to baker’s yeast only, we have successfully applied the method to other organisms including humans (to be published elsewhere). The main adaptation required was to replace the list of synonymous gene/protein names to reflect the change of organism. Furthermore, we also intend to reuse the recognition of named entities to extract other, specific types of interactions between biological entities. Acknowledgments The authors wish to thank Sean Hooper for help with Figure 2. Jasmin ˇSari´c is funded by the Klaus Tschira Foundation gGmbH, Heidelberg (http: //www.kts.villa-bosch.de). Lars Juhl Jensen is funded by the Bundesministerium f¨ur Forschung und Bildung, BMBF-01-GG-9817. References S. Abney. 1996. Partial parsing via finite-state cascades. In Proceedings of the ESSLLI ’96 Robust Parsing Workshop, pages 8–15, Prague, Czech Republic. M. Ashburner, C. A. Ball, J. A. Blake, D. Botstein, H. Butler, J. M. Cherry, A. P. Davis, K. Dolinski, S. S. Dwight, J. T. Eppig, M. A. Harris, D. P. Hill, L. Issel-Tarver, A. Kasarskis, S. Lewis, J. C. Matese, J. E. Richardson, M. Ringwald, G. M. Rubin, and G. Sherlock. 2000. Gene Ontology: tool for the unification of biology. Nature Genetics, 25:25–29. C. Blaschke, M. A. Andrade, C. Ouzounis, and A. Valencia. 1999. Automatic extraction of biological information from scientific text: protein–protein interactions. In Proc., Intelligent Systems for Molecular Biology, volume 7, pages 60–67, Menlo Park, CA. AAAI Press. B. Boeckmann, A. Bairoch, R. Apweiler, M. C. Blatter, A. Estreicher, E. Gasteiger, M. J. Martin, K Michoud, C. O’Donovan, I. Phan, S. Pilbout, and M. Schneider. 2003. The SWISS-PROT protein knowledgebase and its supplement TrEMBL in 2003. Nucleic Acids Res., 31:365–370. S. S. Dwight, M. A. Harris, K. Dolinski, C. A. Ball, G. Binkley, K. R. Christie, D. G. Fisk, L. IsselTarver, M. Schroeder, G. Sherlock, A. Sethuraman, S. Weng, D. Botstein, and J. M. Cherry. 2002. Saccharomyces Genome Database (SGD) provides secondary gene annotation using the Gene Ontology (GO). Nucleic Acids Res., 30:69–72. C. Friedman, P. Kra, H. Yu, M. Krauthammer, and A. Rzhetsky. 2001. GENIES: a natural-language processing system for the extraction of molecular pathways from journal articles. Bioinformatics, 17 Suppl. 1:S74–S82. G. Grefenstette and P. Tapanainen. 1994. What is a word, what is a sentence? problems of tokenization. 
In The 3rd International Conference on Computational Lexicography, pages 79–87. J. R. Hobbs. 2003. Information extraction from biomedical text. J. Biomedical Informatics. J.-D. Kim, T. Ohta, Y. Tateisi, and J. Tsujii. 2003. GENIA corpus—a semantically annotated corpus for bio-textmining. Bioinformatics, 19 suppl. 1:i180– i182. W. Lezius. 2002. TIGERSearch—ein Suchwerkzeug f¨ur Baumbanken. In S. Busemann, editor, Proceedings der 6. Konferenz zur Verarbeitung natrlicher Sprache (KONVENS 2002), Saarbr¨ucken, Germany. E. M. Marcotte, I. Xenarios, and D. Eisenberg. 2001. Mining literature for protein–protein interactions. Bioinformatics, 17:359–363. G. Nenadi´c, S. Rice, I. Spasi´c, S. Ananiadou, and B. Stapley. 2003. Selecting text features for gene name classification: from documents to terms. In S. Ananiadou and J. Tsujii, editors, Proceedings of the ACL 2003 Workshop on Natural Language Processing in Biomedicine, pages 121–128. R. Netzel, Perez-Iratxeta C., P. Bork, and M. A. Andrade. 2003. The way we write. EMBO Rep., 4:446–451. J. Pustejovsky, J. Casta˜no, J. Zhang, M. Kotecki, and B. Cochran. 2002. Robust relational parsing over biomedical literature: Extracting inhibit relations. In Proceedings of the Seventh Pacific Symposium on Biocomputing, pages 362–373, Hawaii. World Scientific. B. Santorini. 1991. Part-of-speech tagging guidelines for the penn treebank project. Technical report, University of Pennsylvania. H. Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In International Conference on New Methods in Language Processing, Manchester, UK. H. Schmid. 2000. Unsupervised learning of period disambiguation for tokenisation. Technical report, Institut fr Maschinelle Sprachverarbeitung, University of Stuttgart. J. Thomas, D. Milward, C. Ouzounis, S. Pulman, and M. Carroll. 2000. Automatic extraction of protein interactions from scientific abstracts. In Proceedings of the Fifth Pacific Symposium on Biocomputing, pages 707–709, Hawaii. World Scientific. D. L. Wheeler, D. M. Church, R. Edgar, S. Federhen, W. Helmberg, Madden T. L., Pontius J. U., Schuler G. D., Schriml L. M., E. Sequeira, T. O. Suzek, T. A. Tatusova, and L. Wagner. 2004. Database resources of the national center for biotechnology information: update. Nucleic Acids Res., 32:D35–40. K. Yamamoto, T. Kudo, A. Konagaya, and Y. Matsumoto. 2003. Protein name tagging for biomedical annotation in text. In S. Ananiadou and J. Tsujii, editors, Proceedings of the ACL 2003 Workshop on Natural Language Processing in Biomedicine, pages 65–72.
Linguistic Profiling for Author Recognition and Verification Hans van Halteren Language and Speech, Univ. of Nijmegen P.O. Box 9103 NL-6500 HD, Nijmegen, The Netherlands [email protected] Abstract A new technique is introduced, linguistic profiling, in which large numbers of counts of linguistic features are used as a text profile, which can then be compared to average profiles for groups of texts. The technique proves to be quite effective for authorship verification and recognition. The best parameter settings yield a False Accept Rate of 8.1% at a False Reject Rate equal to zero for the verification task on a test corpus of student essays, and a 99.4% 2-way recognition accuracy on the same corpus. 1 Introduction There are several situations in language research or language engineering where we are in need of a specific type of extra-linguistic information about a text (document) and we would like to determine this information on the basis of linguistic properties of the text. Examples are the determination of the language variety or genre of a text, or a classification for document routing or information retrieval. For each of these applications, techniques have been developed focusing on specific aspects of the text, often based on frequency counts of functions words in linguistics and of content words in language engineering. In the technique we are introducing in this paper, linguistic profiling, we make no a priori choice for a specific type of word (or more complex feature) to be counted. Instead, all possible features are included and it is determined by the statistics for the texts under consideration, and the distinction to be made, how much weight, if any, each feature is to receive. Furthermore, the frequency counts are not used as absolute values, but rather as deviations from a norm, which is again determined by the situation at hand. Our hypothesis is that this technique can bring a useful contribution to all tasks where it is necessary to distinguish one group of texts from another. In this paper the technique is tested for one specific type of group, namely the group of texts written by the same author. 2 Tasks and Application Scenarios Traditionally, work on the attribution of a text to an author is done in one of two environments. The first is that of literary and/or historical research where attribution is sought for a work of unknown origin (e.g. Mosteller & Wallace, 1984; Holmes, 1998). As secondary information generally identifies potential authors, the task is authorship recognition: selection of one author from a set of known authors. Then there is forensic linguistics, where it needs to be determined if a suspect did or did not write a specific, probably incriminating, text (e.g. Broeders, 2001; Chaski, 2001). Here the task is authorship verification: confirming or denying authorship by a single known author. We would like to focus on a third environment, viz. that of the handling of large numbers of student essays. For some university courses, students have to write one or more essays every week and submit them for grading. Authorship recognition is needed in the case the sloppy student, who forgets to include his name in the essay. If we could link such an essay to the correct student ourselves, this would prevent delays in handling the essay. 
Authorship verification is needed in the case of the fraudulous student, who has decided that copying is much less work than writing an essay himself, which is only easy to spot if the original is also submitted by the original author. In both scenarios, the test material will be sizable, possibly around a thousand words, and at least several hundred. Training material can be sufficiently available as well, as long as text collection for each student is started early enough. Many other authorship verification scenarios do not have the luxury of such long stretches of test text. For now, however, we prefer to test the basic viability of linguistic profiling on such longer stretches. Afterwards, further experiments can show how long the test texts need to be to reach an acceptable recognition/verification quality. 2.1 Quality Measures For recognition, quality is best expressed as the percentage of correct choices when choosing between N authors, where N generally depends on the attribution problem at hand. We will use the percentage of correct choices between two authors, in order to be able to compare with previous work. For verification, quality is usually expressed in terms of erroneous decisions. When the system is asked to verify authorship for the actual author of a text and decides that the text was not written by that author, we speak of a False Reject. The False Reject Rate (FRR) is the percentage of cases in which this happens, the percentage being taken from the cases which should be accepted. Similarly, the False Accept Rate (FAR) is the percentage of cases where somebody who has not written the test text is accepted as having written the text. With increasing threshold settings, FAR will go down, while FRR goes up. The behaviour of a system can be shown by one of several types of FAR/FRR curve, such as the Receiver Operating Characteristic (ROC). Alternatively, if a single number is preferred, a popular measure is the Equal Error Rate (EER), viz. the threshold value where FAR is equal to FRR. However, the EER may be misleading, since it does not take into account the consequences of the two types of errors. Given the example application, plagiarism detection, we do not want to reject, i.e. accuse someone of plagiarism, unless we are sure. So we would like to measure the quality of the system with the False Accept Rate at the threshold at which the False Reject Rate becomes zero. 2.2 The Test Corpus Before using linguistic profiling for any real task, we should test the technique on a benchmark corpus. The first component of the Dutch Authorship Benchmark Corpus (ABC-NL1) appears to be almost ideal for this purpose. It contains widely divergent written texts produced by firstyear and fourth-year students of Dutch at the University of Nijmegen. The ABC-NL1 consists of 72 Dutch texts by 8 authors, controlled for age and educational level of the authors, and for register, genre and topic of the texts. It is assumed that the authors’ language skills were advanced, but their writing styles were as yet at only weakly developed and hence very similar, unlike those in literary attribution problems. Each author was asked to write nine texts of about a page and a half. In the end, it turned out that some authors were more productive than others, and that the text lengths varied from 628 to 1342 words. The authors did not know that the texts were to be used for authorship attribution studies, but instead assumed that their writing skill was measured. 
The topics for the nine texts were fixed, so that each author produced three argumentative non-fiction texts, on the television program Big Brother, the unification of Europe and smoking, three descriptive non-fiction texts, about soccer, the (then) upcoming new millennium and the most recent book they read, and three fiction texts, namely a fairy tale about Little Red Riding Hood, a murder story at the university and a chivalry romance. The ABC-NL1 corpus is not only well-suited because of its contents. It has also been used in previously published studies into authorship attribution. A ‘traditional’ authorship attribution method, i.e. using the overall relative frequencies of the fifty most frequent function words and a Principal Components Analysis (PCA) on the correlation matrix of the corresponding 50dimensional vectors, fails completely (Baayen et al., 2002). The use of Linear Discriminant Analysis (LDA) on overall frequency vectors for the 50 most frequent words achieves around 60% correct attributions when choosing between two authors, which can be increased to around 80% by the application of cross-sample entropy weighting (Baayen et al., 2002). Weighted Probability Distribution Voting (WPDV) modeling on the basis of a very large number of features achieves 97.8% correct attributions (van Halteren et al., To Appear). Although designed to produce a hard recognition task, the latter result show that very high recognition quality is feasible. Still, this appears to be a good test corpus to examine the effectiveness of a new technique. 3 Linguistic Profiling In linguistic profiling, the occurrences in a text are counted of a large number of linguistic features, either individual items or combinations of items. These counts are then normalized for text length and it is determined how much (i.e. how many standard deviations) they differ from the mean observed in a profile reference corpus. For the authorship task, the profile reference corpus consists of the collection of all attributed and non-attributed texts, i.e. the entire ABC-NL1 corpus. For each text, the deviation scores are combined into a profile vector, on which a variety of distance measures can be used to position the text in relation to any group of other texts. 3.1 Features Many types of linguistic features can be profiled, such as features referring to vocabulary, lexical patterns, syntax, semantics, pragmatics, information content or item distribution through a text. However, we decided to restrict the current experiments to a few simpler types of features to demonstrate the overall techniques and methodology for profiling before including every possible type of feature. In this paper, we first show the results for lexical features and continue with syntactic features, since these are the easiest ones to extract automatically for these texts. Other features will be the subject of further research. 3.2 Authorship Score Calculation In the problem at hand, the system has to decide if an unattributed text is written by a specific author, on the basis of attributed texts by that and other authors. We test our system’s ability to make this distinction by means of a 9-fold crossvalidation experiment. In each set of runs of the system, the training data consists of attributed texts for eight of the nine essay topics. The test data consists of the unattributed texts for the ninth essay topic. 
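As a rough sketch of this leave-one-topic-out protocol — the data layout (texts keyed by topic and author) and the function name are our own assumptions, not part of the system described — each run marks the training texts of one candidate author as positive and everything else as "not by this author":

```python
def verification_runs(texts, topics, authors):
    """texts: dict mapping (topic, author) -> attributed text sample.
    For each held-out topic, train on the other eight topics and score
    that topic's unattributed texts against one candidate author."""
    for held_out in topics:                      # 9-fold cross-validation
        train = {k: v for k, v in texts.items() if k[0] != held_out}
        test = [v for k, v in texts.items() if k[0] == held_out]
        for candidate in authors:                # one run per candidate author
            # Training labels: "by this author" vs. "not by this author"
            positives = [v for (_, a), v in train.items() if a == candidate]
            negatives = [v for (_, a), v in train.items() if a != candidate]
            yield held_out, candidate, positives, negatives, test
```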
This means that for all runs, the test data is not included in the training data and is about a different topic than what is present in the training material. During each run within a set, the system only receives information about whether each training text is written by one specific author. All other texts are only marked as “not by this author”. 3.3 Raw Score The system first builds a profile to represent text written by the author in question. This is simply the featurewise average of the profile vectors of all text samples marked as being written by the author in question. The system then determines a raw score for all text samples in the list. Rather than using the normal distance measure, we opted for a non-symmetric measure which is a weighted combination of two factors: a) the difference between sample score and author score for each feature and b) the sample score by itself. This makes it possible to assign more importance to features whose count deviates significantly from the norm. The following distance formula is used:
∆T = ( Σi |Ti − Ai|^D · |Ti|^S )^(1/(D+S))
In this formula, Ti and Ai are the values for the ith feature for the text sample profile and the author profile respectively, and D and S are the weighting factors that can be used to assign more or less importance to the two factors described. We will see below how the effectiveness of the measure varies with their setting. The distance measure is then transformed into a score by the formula
ScoreT = ( Σi |Ti|^(D+S) )^(1/(D+S)) − ∆T
In this way, the score will grow with the similarity between text sample profile and author profile. Also, the first component serves as a correction factor for the length of the text sample profile vector. 3.4 Normalization and Renormalization The order of magnitude of the score values varies with the setting of D and S. Furthermore, the values can fluctuate significantly with the sample collection. To bring the values into a range which is suitable for subsequent calculations, we express them as the number of standard deviations they differ from the mean of the scores of the text samples marked as not being written by the author in question. In the experiments described in this paper, a rather special condition holds. In all tests, we know that the eight test samples are comparable in that they address the same topic, and that the author to be verified produced exactly one of the eight test samples. Under these circumstances, we should expect one sample to score higher than the others in each run, and we can profit from this knowledge by performing a renormalization, viz. to the number of standard deviations the score differs from the mean of the scores of the unattributed samples. However, this renormalization only makes sense in the situation that we have a fixed set of authors who each produced one text for each topic. This is in fact yet a different task than those mentioned above, say authorship sorting. Therefore, we will report on the results with renormalization, but only as additional information. The main description of the results will focus on the normalized scores. 4 Profiling with Lexical Features The most straightforward features that can be used are simply combinations of tokens in the text. 4.1 Lexical features Sufficiently frequent tokens, i.e. those that were observed at least a certain amount of times (in this case 5) in some language reference corpus (in this case the Eindhoven corpus; uit den Boogaart, 1975) are used as features by themselves.
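Before continuing with the lexical features, here is a small sketch of the raw score defined in Section 3.3 above. It assumes that each profile is a dict mapping features to their deviation from the reference-corpus mean, which is our own representation; the default D and S anticipate the best lexical-feature settings reported below.

```python
def profile_distance(text_profile, author_profile, D=0.575, S=0.15):
    """Delta_T = ( sum_i |T_i - A_i|^D * |T_i|^S ) ** (1 / (D + S))"""
    total = sum(abs(t - author_profile.get(f, 0.0)) ** D * abs(t) ** S
                for f, t in text_profile.items())
    return total ** (1.0 / (D + S))

def profile_score(text_profile, author_profile, D=0.575, S=0.15):
    """Score_T = ( sum_i |T_i|^(D+S) ) ** (1/(D+S)) - Delta_T, so the score
    grows with similarity and is corrected for the profile vector length."""
    length_term = sum(abs(t) ** (D + S) for t in text_profile.values())
    return length_term ** (1.0 / (D + S)) - profile_distance(
        text_profile, author_profile, D, S)

# Toy profiles: feature -> deviation (in standard deviations) from the norm
author = {"len=10-19": 1.2, "wcw=<OTHER>": -0.4, "#H#Adv": 0.8}
sample = {"len=10-19": 1.0, "wcw=<OTHER>": -0.2, "#H#Adv": 0.5}
print(profile_score(sample, author))
```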
For tokens below this frequency threshold, we determine a token pattern consisting of the sequence of character types, e.g., the token “Uefa-cup” is represented by the pattern “#L#6+/CL-L”, where the first “L” indicates low frequency, 6+ the size bracket, and the sequence “CL-L” a capital letter followed by one or more lower case letters followed by a hyphen and again one or more lower case letters. For lower case words, the final three letters of the word are included too, e.g. “waarmaken” leads to “#L#6+/L/ken”. These patterns were originally designed for English and Dutch and will probably have to be extended when other languages are being handled. In addition to the form of the token, we also use the potential syntactic usage of the token as a feature. We apply the first few modules of a morphosyntactic tagger (in this case Wotan-Lite; Van Halteren et al., 2001) to the text, which determine which word class tags could apply to each token. For known words, the tags are taken from a lexicon; for unknown words, they are estimated on the basis of the word patterns described above. The three (if present) most likely tags are combined into a feature, e.g. “niet” leads to “#H#Adv(stell,onverv)-N(ev,neut)” and “waarmaken” to “#L#V(inf)-N(mv,neut)V(verldw, onverv)”. Note that the most likely tags are determined on the basis of the token itself and that the context is not consulted. The modules of the tagger which do context dependent disambiguation are not applied. On top of the individual token and tag features we use all possible bi- and trigrams which can be built with them, e.g. the token combination “kon niet waarmaken” leads to features such as “wcw=#H#kon#H#Adv(stell,onverv)-N(ev,neut) #L#6+/L/ken”. Since the number of features quickly grows too high for efficient processing, we filter the set of features by demanding that a feature occurs in a set minimum number of texts in the profile reference corpus (in this case two). A feature which is filtered out instead contributes to a rest category feature, e.g. the feature above would contribute to “wcw=<OTHER>”. For the current corpus, this filtering leads to a feature set of about 100K features. The lexical features currently also include features for utterance length. Each utterance leads to two such features, viz. the exact length (e.g. “len=15”) and the length bracket (e.g. “len=10-19”). 4.2 Results with lexical features A very rough first reconnaissance of settings for D and S suggested that the best results could be achieved with D between 0.1 and 2.4 and S between 0.0 and 1.0. Further examination of this area leads to FAR_FRR=0 scores ranging down to around 15%. Figure 1 shows the scores at various settings for D and S. The z-axis is inverted (i.e. 1 − FAR_FRR=0 is used) to show better scores as peaks rather than troughs. The most promising area is the ridge along the trough at D=0.0, S=0.0. A closer investigation of this area shows that the best settings are D=0.575 and S=0.15. The FAR_FRR=0 score here is 14.9%, i.e. there is a threshold setting such that if all texts by the authors themselves are accepted, only 14.9% of texts by other authors are falsely accepted. The very low value for S is surprising. It indicates that it is undesirable to give too much attention to features which deviate much in the sample being measured; still, in the area in question, the score does peak at a positive S value, indicating that some such weighting does have effect.
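The token patterns described in Section 4.1 can be approximated with a few lines of code. Only the two example patterns are attested in the text, so the exact size brackets and the reduction of character types to C/L codes are assumptions of this sketch:

```python
import re

def size_bracket(token):
    """Hypothetical size brackets; only the '6+' bracket is attested."""
    n = len(token)
    return "6+" if n >= 6 else str(n)

def char_type_pattern(token):
    """Collapse the token into character-type codes: C = capital letter,
    L = one or more lower-case letters; other characters kept as-is
    (a simplification of the real pattern inventory)."""
    pattern = re.sub(r"[a-z]+", "L", token)
    return re.sub(r"[A-Z]", "C", pattern)

def low_freq_token_feature(token):
    feat = f"#L#{size_bracket(token)}/{char_type_pattern(token)}"
    if token.islower():                 # lower-case words also keep their
        feat += "/" + token[-3:]        # final three letters
    return feat

print(low_freq_token_feature("Uefa-cup"))   # '#L#6+/CL-L'
print(low_freq_token_feature("waarmaken"))  # '#L#6+/L/ken'
```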
Successful low values for S can also be seen in the hill leading around D=1.0, S=0.3, which peaks at an FAR_FRR=0 score of around 17 percent. From the shape of the surface it would seem that an investigation of the area across the S=0.0 divide might still be worthwhile, which is in contradiction with the initial finding that negative values produce no useful results.
Figure 1: The variation of FAR (or rather 1-FAR) as a function of D and S, with D ranging from 0.1 to 2.4 and S from 0.0 to 1.0.
5 Beyond Lexical Features As stated above, once the basic viability of the technique was confirmed, more types of features would be added. As yet, this is limited to syntactic features. We will first describe the system quality using only syntactic features, and then describe the results when using lexical and syntactic features in combination. 5.1 Syntactic Features We used the Amazon parser to derive syntactic constituent analyses of each utterance (Coppen, 2003). We did not use the full rewrites, but rather constituent N-grams. The N-grams used were:
• left hand side label, examining constituent occurrence
• left hand side label plus one label from the right hand side, examining dominance
• left hand side label plus two labels from the right hand side, in their actual order, examining dominance and linear precedence
For each label, two representations are used. The first is only the syntactic constituent label, the second is the constituent label plus the head word. This is done for each part of the N-grams independently, leading to 2, 4 and 8 features respectively for the three types of N-gram. Furthermore, each feature is used once by itself, once with an additional marking for the depth of the rewrite in the analysis tree, once with an additional marking for the length of the rewrite, and once with both these markings. This means another multiplication factor of four for a total of 8, 16 and 32 features respectively. After filtering for minimum number of observations, again at least an observation in two different texts, there are about 900K active syntactic features, nine times as many as for the lexical features. Investigation of the results for various settings has not been as exhaustive as for the lexical features. The best settings so far, D=1.3, S=1.4, yield an FAR_FRR=0 of 24.8%, much worse than the 14.9% seen for lexical features. 5.2 Combining Lexical and Syntactic Features From the FAR_FRR=0 score, it would seem that syntactic features are not worth pursuing any further, since they perform much worse than lexical ones. However, they might still be useful if we combine their scores with those for the lexical features. For now, rather than calculating new combined profiles, we just added the scores from the two individual systems. The combination of the best two individual systems leads to an FAR_FRR=0 of 10.3%, a solid improvement over lexical features by themselves. However, the best individual systems are not necessarily the best combiners. The best combination systems produce FAR_FRR=0 measurements down to 8.1%, with settings in different parts of the parameter space. It should be observed that the improvement gained by combination is linked to the chosen quality measure. If we examine the ROC-curves for several types of systems (plotting the FAR against the FRR; Figure 2), we see that the combination curves as a whole do not differ much from the lexical feature curve. In fact, the EER for the ‘best’ combination system is worse than that for the best lexical feature system.
Figure 2: ROC (FAR plotted against FRR) for a varying threshold at good settings of D and S for different types of features. The top pane shows the whole range (0 to 1) for FAR and FRR. The bottom pane shows the area from 0.0 to 0.2.
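Both quality measures contrasted here — FAR at the threshold where FRR becomes zero, and the EER — can be computed directly from two lists of verification scores. The sketch below assumes that higher scores mean "more likely the claimed author"; all numbers are invented for illustration.

```python
def far_at_zero_frr(author_scores, impostor_scores):
    """FAR at the most permissive threshold that still accepts every
    text actually written by the author (i.e. FRR = 0)."""
    threshold = min(author_scores)
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def equal_error_rate(author_scores, impostor_scores):
    """Approximate EER: sweep thresholds and return the point where
    FAR and FRR are closest."""
    best = None
    for t in sorted(author_scores + impostor_scores):
        frr = sum(s < t for s in author_scores) / len(author_scores)
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

author = [0.62, 0.55, 0.71]            # invented verification scores
impostor = [0.30, 0.58, 0.41, 0.49]
print(far_at_zero_frr(author, impostor), equal_error_rate(author, impostor))
```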
This observation means that we should be very much aware of the relative importance of FAR and FRR in any specific application when determining the ‘optimal’ features and parameters. 6 Parameter Settings A weak point in the system so far is that there is no automatic parameter selection. The best results reported above are the ones at optimal settings. One would hope that optimal settings on training/tuning data will remain good settings for new data. Further experiments on other data will have to shed more light on this. Another choice which cannot yet be made automatically is that of a threshold. So far, the presentation in this paper has been based on a single threshold for all author/text combinations. That there is an enormous potential for improvement can be shown by assuming a few more informed methods of threshold selection. The first method uses the fact that, in our experiments, there is always one true author and seven false ones. This means we can choose the threshold at some point below the highest of the eight scores. We can hold on to the single threshold strategy if we first renormalize, as described in Section 3.4, and then choose a single value to threshold the renormalized values against. The second method assumes that we will be able to find an optimal threshold for each individual run of the system. The maximum effect of this can be estimated with an oracle providing the optimal threshold. Basically, since the oracle threshold will be at the score for the text by the author, we are examining how many texts by other authors score better than the text by the actual author. Table 1 compares the results for the best settings for these two new scenarios with the results presented above. Renormalizing already greatly improves the results. Interestingly, in this scenario, the syntactic features outperform the lexical ones, something which certainly merits closer investigation after the parameter spaces have been charted more extensively. The full potential of profiling becomes clear in the Oracle threshold scenario, which shows extremely good scores. Still, this potential will yet have to be realized by finding the right automatic threshold determination mechanism. 7 Comparison to Previous Authorship Attribution Work Above, we focused on the authorship verification task, since it is the harder problem, given that the potential group of authors is unknown. However, as mentioned in Section 2, previous work with this data has focused on the authorship recognition problem, to be exact on selecting the correct author out of two potential authors. We repeat the previously published results in Table 2, together with linguistic profiling scores, both for the 2-way and for the 8-way selection problem. To do attribution with linguistic profiling, we calculated the author scores for each author from the set for a given text, and then selected the author with the highest score. The results are shown in Table 2, using lexical or syntactic features or both, and with and without renormalization. The Oracle scenario is not applicable as we are comparing rather than thresholding. In each case, the best results are not just found at a single parameter setting, but rather over a larger area in the parameter space.
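A minimal sketch of the two threshold-selection ideas discussed in Section 6 — renormalization against the unattributed samples of a run, and an oracle threshold placed at the true author's score. The eight example scores are invented; only the standard library is used.

```python
from statistics import mean, pstdev

def renormalize(scores):
    """Express each score as the number of standard deviations it lies
    above the mean of the unattributed samples in the same run."""
    mu, sigma = mean(scores), pstdev(scores)
    return [(s - mu) / sigma for s in scores] if sigma else [0.0] * len(scores)

def oracle_far(scores, author_index):
    """With the oracle threshold at the true author's score, the FAR is
    simply the fraction of other samples scoring at least as high."""
    others = [s for i, s in enumerate(scores) if i != author_index]
    return sum(s >= scores[author_index] for s in others) / len(others)

run_scores = [1.2, -0.3, 0.4, 2.9, -1.0, 0.1, 0.8, -0.6]  # eight test samples
print(renormalize(run_scores))
print(oracle_far(run_scores, author_index=3))  # 0.0 if the author scores highest
```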
The fact that good results are found over a larger area of the parameter space means that the choice of optimal parameters will be more robust with regard to changes in authors and text types. We also observe that the optimal settings for recognition are very different from those for verification. A more detailed examination of the results is necessary to draw conclusions about these differences, which is again not possible until the parameter spaces have been charted more exhaustively.
                                          Lexical    Syntactic    Combination
Single threshold                           14.9%       24.8%         8.1%
Single threshold after renormalization      9.3%        6.0%         2.4%
Oracle threshold per run                    0.8%        1.6%         0.2%
Table 1: Best FAR_FRR=0 scores for verification with various feature types and threshold selection mechanisms.
                                          2-way errors /504   2-way % correct   8-way errors /72   8-way % correct
50 function words, PCA                            -                ±50%                -                  -
followed by LDA                                   -                ±60%                -                  -
LDA with cross-sample entropy weighting           -                ±80%                -                  -
all tokens, WPDV modeling                         -                97.8%               -                  -
Lexical                                           6                98.8%               5                 93%
Syntactic                                        14                98.2%              10                 86%
Combined                                          3                99.4%               2                 97%
Lexical (renorm.)                                 1                99.8%               1                 99%
Syntactic (renorm.)                               4                99.2%               3                 96%
Combined (renorm.)                                0               100.0%               0                100%
Table 2: Authorship recognition quality for various methods.
All results with normalized scores are already better than the previously published results. When applying renormalization, which might be claimed to be justified in this particular authorship attribution problem, the combination system reaches the incredible level of making no mistakes at all. 8 Conclusion Linguistic profiling has certainly shown its worth for authorship recognition and verification. At the best settings found so far, a profiling system using a combination of lexical and syntactic features is able to select the correct author for 97% of the texts in the test corpus. It is also able to perform the verification task in such a way that it rejects no texts that should be accepted, while accepting only 8.1% of the texts that should be rejected. Using additional knowledge about the test corpus can improve this to 100% and 2.4%. The next step in the investigation of linguistic profiling for this task should be a more exhaustive charting of the parameter space, and especially the search for an automatic parameter selection procedure. Another avenue of future research is the inclusion of even more types of features. Here, however, it would be useful to define an even harder verification task, as the current system scores already very high and further improvements might be hard to measure. With the current corpus, the task might be made harder by limiting the size of the test texts. Other corpora might also serve to provide more obstinate data, although it must be said that the current test corpus was already designed specifically for this purpose. Use of further corpora will also help with parameter space charting, as they will show the similarities and/or differences in behaviour between data sets. Finally, with the right types of corpora, the worth of the technique for actual application scenarios could be investigated. So there are several possible routes to further improvement. Still, the current quality of the system is already such that the system could be applied as is. Certainly for authorship recognition and verification, as we hope to show by our participation in Patrick Juola’s Ad-hoc Authorship Attribution Contest (to be presented at ALLC/ACH 2004), for language verification (cf.
van Halteren and Oostdijk, 2004), and possibly also for other text classification tasks, such as language or language variety recognition, genre recognition, or document classification for IR purposes. References Harald Baayen, Hans van Halteren, Anneke Neijt, and Fiona Tweedie. 2002. An Experiment in Authorship Attribution. Proc. JADT 2002, pp. 69-75. Ton Broeders. 2001. Forensic Speech and Audio Analysis, Forensic Linguistics 1998-2001 – A Review. Proc. 13th Interpol Forensic Science Symposium, Lyon, France. C. Chaski. 2001. Empirical Evaluations of LanguageBased Author Identification Techniques. Forensic Linguistics 8(1): 1-65. Peter Arno Coppen. 2003. Rejuvenating the Amazon parser. Poster presentation CLIN2003, Antwerp, Dec. 19, 2003. David Holmes. 1998. Authorship attribution. Literary and Linguistic Computing 13(3):111-117. F. Mosteller, and D.L. Wallace. 1984. Applied Bayesian and Classical Inference in the Case of the Federalist Papers (2nd edition). Springer Verlag, New York. P. C. Uit den Boogaart. 1975. Woordfrequenties in geschreven en gesproken Nederlands. Oosthoek, Scheltema & Holkema, Utrecht. Hans van Halteren, Jakub Zavrel, and Walter Daelemans. 2001. Improving accuracy in word class tagging through the combination of machine learning systems. Computational Linguistics 27(2):199-230. Hans van Halteren and Nelleke Oostdijk, 2004. Linguistic Profiling of Texts for the Purpose of Language Verification. Proc. COLING 2004. Hans van Halteren, Marco Haverkort, Harald Baayen, Anneke Neijt, and Fiona Tweedie. To appear. New Machine Learning Methods Demonstrate the Existence of a Human Stylome. Journal of Quantitative Linguistics.
An Empirical Study of Information Synthesis Tasks Enrique Amig´o Julio Gonzalo V´ıctor Peinado Anselmo Pe˜nas Felisa Verdejo Departamento de Lenguajes y Sistemas Inform´aticos Universidad Nacional de Educaci´on a Distancia c/Juan del Rosal, 16 - 28040 Madrid - Spain {enrique,julio,victor,anselmo,felisa}@lsi.uned.es Abstract This paper describes an empirical study of the “Information Synthesis” task, defined as the process of (given a complex information need) extracting, organizing and inter-relating the pieces of information contained in a set of relevant documents, in order to obtain a comprehensive, non redundant report that satisfies the information need. Two main results are presented: a) the creation of an Information Synthesis testbed with 72 reports manually generated by nine subjects for eight complex topics with 100 relevant documents each; and b) an empirical comparison of similarity metrics between reports, under the hypothesis that the best metric is the one that best distinguishes between manual and automatically generated reports. A metric based on key concepts overlap gives better results than metrics based on n-gram overlap (such as ROUGE) or sentence overlap. 1 Introduction A classical Information Retrieval (IR) system helps the user finding relevant documents in a given text collection. In most occasions, however, this is only the first step towards fulfilling an information need. The next steps consist of extracting, organizing and relating the relevant pieces of information, in order to obtain a comprehensive, non redundant report that satisfies the information need. In this paper, we will refer to this process as Information Synthesis. It is normally understood as an (intellectually challenging) human task, and perhaps the Google Answer Service1 is the best general purpose illustration of how it works. In this service, users send complex queries which cannot be answered simply by inspecting the first two or three documents returned by a search engine. These are a couple of real, representative examples: a) I’m looking for information concerning the history of text compression both before and with computers. 1http://answers.google.com b) Provide an analysis on the future of web browsers, if any. Answers to such complex information needs are provided by experts which, commonly, search the Internet, select the best sources, and assemble the most relevant pieces of information into a report, organizing the most important facts and providing additional web hyperlinks for further reading. This Information Synthesis task is understood, in Google Answers, as a human task for which a search engine only provides the initial starting point. Our midterm goal is to develop computer assistants that help users to accomplish Information Synthesis tasks. From a Computational Linguistics point of view, Information Synthesis can be seen as a kind of topic-oriented, informative multi-document summarization, where the goal is to produce a single text as a compressed version of a set of documents with a minimum loss of relevant information. Unlike indicative summaries (which help to determine whether a document is relevant to a particular topic), informative summaries must be helpful to answer, for instance, factual questions about the topic. In the remainder of the paper, we will use the term “reports” to refer to the summaries produced in an Information Synthesis task, in order to distinguish them from other kinds of summaries. 
Topic-oriented multi-document summarization has already been studied in other evaluation initiatives which provide testbeds to compare alternative approaches (Over, 2003; Goldstein et al., 2000; Radev et al., 2000). Unfortunately, those studies have been restricted to very small summaries (around 100 words) and small document sets (10-20 documents). These are relevant summarization tasks, but hardly representative of the Information Synthesis problem we are focusing on. The first goal of our work has been, therefore, to create a suitable testbed that permits qualitative and quantitative studies on the information synthesis task. Section 2 describes the creation of such a testbed, which includes the manual generation of 72 reports by nine different subjects across 8 complex topics with 100 relevant documents per topic. Using this testbed, our second goal has been to compare alternative similarity metrics for the Information Synthesis task. A good similarity metric provides a way of evaluating Information Synthesis systems (comparing their output with manually generated reports), and should also shed some light on the common properties of manually generated reports. Our working hypothesis is that the best metric will best distinguish between manual and automatically generated reports. We have compared several similarity metrics, including a few baseline measures (based on document, sentence and vocabulary overlap) and a state-of-the-art measure to evaluate summarization systems, ROUGE (Lin and Hovy, 2003). We also introduce another proximity measure based on key concept overlap, which turns out to be substantially better than ROUGE for a relevant class of topics. Section 3 describes these metrics and the experimental design to compare them; in Section 4, we analyze the outcome of the experiment, and Section 5 discusses related work. Finally, Section 6 draws the main conclusions of this work. 2 Creation of an Information Synthesis testbed We refer to Information Synthesis as the process of generating a topic-oriented report from a nontrivial amount of relevant, possibly interrelated documents. The first goal of our work is the generation of a testbed (ISCORPUS) with manually produced reports that serve as a starting point for further empirical studies and evaluation of information synthesis systems. This section describes how this testbed has been built. 2.1 Document collection and topic set The testbed must have a certain number of features which, altogether, differentiate the task from current multi-document summarization evaluations: Complex information needs. Since Information Synthesis is a step which immediately follows a document retrieval process, it seems natural to start with standard IR topics as used in evaluation conferences such as TREC (http://trec.nist.gov), CLEF (http://www.clef-campaign.org) or NTCIR (http://research.nii.ac.jp/ntcir/). The title/description/narrative topics commonly used in such evaluation exercises are especially well suited for an Information Synthesis task: they are complex and well defined, unlike, for instance, typical web queries. We have selected the Spanish CLEF 2001-2003 news collection testbed (Peters et al., 2002), because Spanish is the native language of the subjects recruited for the manual generation of reports. Out of the CLEF topic set, we have chosen the eight topics with the largest number of documents manually judged as relevant from the assessment pools.
We have slightly reworded the topics to change the document retrieval focus (“Find documents that...”) into an information synthesis wording (“Generate a report about...”). Table 1 shows the eight selected topics. C042: Generate a report about the invasion of Haiti by UN/US soldiers. C045: Generate a report about the main negotiators of the Middle East peace treaty between Israel and Jordan, giving detailed information on the treaty. C047: What are the reasons for the military intervention of Russia in Chechnya? C048: Reasons for the withdrawal of United Nations (UN) peace- keeping forces from Bosnia. C050: Generate a report about the uprising of Indians in Chiapas (Mexico). C085: Generate a report about the operation “Turquoise”, the French humanitarian program in Rwanda. C056: Generate a report about campaigns against racism in Europe. C080: Generate a report about hunger strikes attempted in order to attract attention to a cause. Table 1: Topic set This set of eight CLEF topics has two differentiated subsets: in a majority of cases (first six topics), it is necessary to study how a situation evolves in time; the importance of every event related to the topic can only be established in relation with the others. The invasion of Haiti by UN and USA troops (C042) is an example of such a topic. We will refer to them as “Topic Tracking” (TT) reports, because they resemble the kind of topics used in such task. The last two questions (56 and 80), however, resemble Information Extraction tasks: essentially, the user has to detect and describe instances of a generic event (cases of hunger strikes and campaigns against racism in Europe); hence we will refer to them as “IE” reports. Topic tracking reports need a more elaborated treatment of the information in the documents, and therefore are more interesting from the point of view of Information Synthesis. We have, however, decided to keep the two IE topics; first, because they also reflect a realistic synthesis task; and second, because they can provide contrastive information as compared to TT reports. Large document sets. All the selected CLEF topics have more than one hundred documents judged as relevant by the CLEF assessors. For homogeneity, we have restricted the task to the first 100 documents for each topic (using a chronological order). Complex reports. The elaboration of a comprehensive report requires more space than is allowed in current multi-document summarization experiences. We have established a maximum of fifty sentences per summary, i.e., half a sentence per document. This limit satisfies three conditions: a) it is large enough to contain the essential information about the topic, b) it requires a substantial compression effort from the user, and c) it avoids defaulting to a “first sentence” strategy by lazy (or tired) users, because this strategy would double the maximum size allowed. We decided that the report generation would be an extractive task, which consists of selecting sentences from the documents. Obviously, a realistic information synthesis process also involves rewriting and elaboration of the texts contained in the documents. Keeping the task extractive has, however, two major advantages: first, it permits a direct comparison to automatic systems, which will typically be extractive; and second, it is a simpler task which produces less fatigue. 2.2 Generation of manual reports Nine subjects between 25 and 35 years-old were recruited for the manual generation of reports. 
All of them self-reported university degrees and a large experience using search engines and performing information searches. All subjects were given an in-place detailed description of the task in order to minimize divergent interpretations. They were told that, in a first step, they had to generate reports with a maximum of information about every topic within the fifty sentence space limit. In a second step, which would take place six months afterwards, they would be examined from each of the eight topics. The only documentation allowed during the exam would be the reports generated in the first phase of the experiment. Subjects scoring best would be rewarded. These instructions had two practical effects: first, the competitive setup was an extra motivation for achieving better results. And second, users tried to take advantage of all available space, and thus most reports were close to the fifty sentences limit. The time limit per topic was set to 30 minutes, which is tight for the information synthesis task, but prevents the effects of fatigue. We implemented an interface to facilitate the generation of extractive reports. The system displays a list with the titles of relevant documents in chronological order. Clicking on a title displays the full document, where the user can select any sentence(s) and add them to the final report. A different frame displays the selected sentences (also in chronological order), together with one bar indicating the remaining time and another bar indicating the remaining space. The 50 sentence limit can be temporarily exceeded and, when the 30 minute limit has been reached, the user can still remove sentences from the report until the sentence limit is reached back. 2.3 Questionnaires After summarizing every topic, the following questionnaire was filled in by every user: • Who are the main people involved in the topic? • What are the main organizations participating in the topic? • What are the key factors in the topic? Users provided free-text answers to these questions, with their freshly generated summary at hand. We did not provide any suggestions or constraints at this point, except that a maximum of eight slots were available per question (i.e. a maximum of 8X3 = 24 key concepts per topic, per user). This is, for instance, the answer of one user for the topic 42 about the invasion of Haiti by UN and USA troops in 1994: People Organizations Jean Bertrand Aristide ONU (UN) Clinton EEUU (USA) Raoul Cedras OEA (OAS) Philippe Biambi Michel Josep Francois Factors militares golpistas (coup attempting soldiers) golpe militar (coup attempt) restaurar la democracia (reinstatement of democracy) Finally, a single list of key concepts is generated for each topic, joining all the different answers. Redundant concepts (e.g. “war” and “conflict”) were inspected and collapsed by hand. These lists of key concepts constitute the gold standard for the similarity metric described in Section 3.2.5. Besides identifying key concepts, users also filled in the following questionnaire: • Were you familiarized with the topic? • Was it hard for you to elaborate the report? • Did you miss the possibility of introducing annotations or rewriting parts of the report by hand? • Do you consider that you generated a good report? • Are you tired? Out of the answers provided by users, the most remarkable facts are that: • only in 6% of the cases the user missed “a lot” the possibility of rewriting/adding comments to the topic. 
The fact that reports are made extractively did not seem to be a significant problem for our users. • in 73% of the cases, the user was quite or very satisfied about his summary. These are indications that the practical constraints imposed on the task (time limit and extractive nature of the summaries) do not necessarily compromise the representativeness of the testbed. The time limit is very tight, but the temporal arrangement of documents and their highly redundant nature facilitates skipping repetitive material (some pieces of news are discarded just by looking at the title, without examining the content). 2.4 Generation of baseline reports We have automatically generated baseline reports in two steps: • For every topic, we have produced 30 tentative baseline reports using DUC style criteria: – 18 summaries consist only of picking the first sentence out of each document in 18 different document subsets. The subsets are formed using different strategies, e.g. the most relevant documents for the query (according to the Inquery search engine), one document per day, the first or last 50 documents in chronological order, etc. – The other 12 summaries consist of a) picking the first n sentences out of a set of selected documents (with different values for n and different sets of documents) and b) taking the full content of a few documents. In both cases, document sets are formed with similar criteria as above. • Out of these 30 baseline reports, we have selected the 10 reports which have the highest sentence overlap with the manual summaries. The second step increases the quality of the baselines, making the task of differentiating manual and baseline reports more challenging. 3 Comparison of similarity metrics Formal aspects of a summary (or report), such as legibility, grammatical correctness, informativeness, etc., can only be evaluated manually. However, automatic evaluation metrics can play a useful role in the evaluation of how well the information from the original sources is preserved (Mani, 2001). Previous studies have shown that it is feasible to evaluate the output of summarization systems automatically (Lin and Hovy, 2003). The process is based in similarity metrics between texts. The first step is to establish a (manual) reference summary, and then the automatically generated summaries are ranked according to their similarity to the reference summary. The challenge is, then, to define an appropriate proximity metric for reports generated in the information synthesis task. 3.1 How to compare similarity metrics without human judgments? The QARLA estimation In tasks such as Machine Translation and Summarization, the quality of a proximity metric is measured in terms of the correlation between the ranking produced by the metric, and a reference ranking produced by human judges. An optimal similarity metric should produce the same ranking as human judges. In our case, acquiring human judgments about the quality of the baseline reports is too costly, and probably cannot be done reliably: a fine-grained evaluation of 50-sentence reports summarizing sets of 100 documents is a very complex task, which would probably produce different rankings from different judges. We believe there is a cheaper and more robust way of comparing similarity metrics without using human assessments. We assume a simple hypothesis: the best metric should be the one that best discriminates between manual and automatically generated reports. 
In other words, a similarity metric that cannot distinguish manual and automatic reports cannot be a good metric. Then, all we need is an estimation of how well a similarity metric separates manual and automatic reports. We propose to use the probability that, given any manual report Mref, any other manual report M is closer to Mref than any other automatic report A:
QARLA(sim) = P( sim(M, Mref) > sim(A, Mref) ), where M, Mref ∈ M, A ∈ A
where M is the set of manually generated reports, A is the set of automatically generated reports, and “sim” is the similarity metric being evaluated. We refer to this value as the QARLA (Quality criterion for reports evaluation metrics) estimation. QARLA has two interesting features:
• No human assessments are needed to compute QARLA. Only a set of manually produced summaries and a set of automatic summaries, for each topic considered. This reduces the cost of creating the testbed and, in addition, eliminates the possible bias introduced by human judges.
• It is easy to collect enough data to achieve statistically significant results. For instance, our testbed provides 720 combinations per topic to estimate the QARLA probability (we have nine manual plus ten automatic summaries per topic).
A good QARLA value does not guarantee that a similarity metric will produce the same rankings as human judges, but a good similarity metric must have a good QARLA value: it is unlikely that a measure that cannot distinguish between manual and automatic summaries can still produce high-quality rankings of automatic summaries by comparison to manual reference summaries. 3.2 Similarity metrics We have compared five different metrics using the QARLA estimation. The first three are meant as baselines; the fourth is the standard similarity metric used to evaluate summaries (ROUGE); and the last one, introduced in this paper, is based on the overlapping of key concepts. 3.2.1 Baseline 1: Document co-selection metric The following metric estimates the similarity of two reports from the set of documents which are represented in both reports (i.e. at least one sentence in each report belongs to the document):
DocSim(Mr, M) = |Doc(Mr) ∩ Doc(M)| / |Doc(Mr)|
where Mr is the reference report, M a second report, and Doc(Mr), Doc(M) are the documents to which the sentences in Mr, M belong. 3.2.2 Baselines 2 and 3: Sentence co-selection The more sentences in common between two reports, the more similar their content will be. We can measure Recall (how many sentences from the reference report are also in the contrastive report) and Precision (how many sentences from the contrastive report are also in the reference report):
SentenceSimR(Mr, M) = |S(Mr) ∩ S(M)| / |S(Mr)|
SentenceSimP(Mr, M) = |S(Mr) ∩ S(M)| / |S(M)|
where S(Mr), S(M) are the sets of sentences in the reports Mr (reference) and M (contrastive). 3.2.3 Baseline 4: Perplexity A language model is a probability distribution over word sequences obtained from some training corpora (see e.g. Manning and Schutze, 1999). Perplexity is a measure of the degree of surprise of a text or corpus given a language model. In our case, we build a language model LM(Mr) for the reference report Mr, and measure the perplexity of the contrastive report M as compared to that language model:
PerplexitySim(Mr, M) = 1 / Perp(LM(Mr), M)
We have used the Good-Turing discount algorithm to compute the language models (Clarkson and Rosenfeld, 1997).
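The QARLA estimation defined above can be transcribed almost directly. The sketch below plugs in sentence co-selection recall as the similarity and represents each report as a set of sentence identifiers, which is an assumption about the data format rather than part of the original setup.

```python
from itertools import permutations

def sentence_sim_recall(report, reference):
    """SentenceSimR: fraction of the reference report's sentences that
    also appear in the other report."""
    return len(report & reference) / len(reference)

def qarla(manual_reports, automatic_reports, sim=sentence_sim_recall):
    """P( sim(M, Mref) > sim(A, Mref) ) over all combinations with
    M, Mref distinct manual reports and A an automatic report."""
    wins = total = 0
    for m_ref, m in permutations(manual_reports, 2):
        for a in automatic_reports:
            wins += sim(m, m_ref) > sim(a, m_ref)
            total += 1
    return wins / total

# Toy reports as sets of sentence ids; with 9 manual and 10 automatic
# reports per topic this loop yields the 720 combinations mentioned above.
manual = [{1, 2, 3, 4}, {2, 3, 4, 5}, {1, 3, 4, 6}]
automatic = [{7, 8, 9}, {1, 7, 8}]
print(qarla(manual, automatic))
```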
Note that this is also a baseline metric, because it only measures whether the content of the contrastive report is compatible with the reference report, but it does not consider coverage: a single sentence from the reference report will have a low perplexity, even if it covers only a small fraction of the whole report. This problem is mitigated by the fact that we are comparing reports of approximately the same size and without repeated sentences.

3.2.4 ROUGE metric

The distance between two summaries can be established as a function of their vocabulary (unigrams) and how this vocabulary is used (n-grams). From this point of view, some of the measures used in the evaluation of Machine Translation systems, such as BLEU (Papineni et al., 2002), have been imported into the summarization task. BLEU is based on the precision and n-gram co-occurrence between an automatic translation and a reference manual translation. (Lin and Hovy, 2003) tried to apply BLEU as a measure to evaluate summaries, but the results were not as good as in Machine Translation. Indeed, some of the characteristics that define a good translation are not related to the features of a good summary; Lin and Hovy therefore proposed a recall-based variation of BLEU, known as ROUGE. The idea is the same: the quality of a proposed summary can be calculated as a function of the n-grams it has in common with the units of a model summary. The units can be sentences or discourse units:

ROUGE_n = (Sum over C in MU, Sum over n-gram in C, of Count_m) / (Sum over C in MU, Sum over n-gram in C, of Count)

where MU is the set of model units, Count_m is the maximum number of n-grams co-occurring in a peer summary and a model unit, and Count is the number of n-grams in the model unit. It has been established that unigram- and bigram-based metrics produce a better ranking of automatic summaries (more similar to a human-produced ranking) than metrics based on n-grams with n > 2. For our experiment, we have only considered unigrams (lemmatized words, excluding stop words), which gives good results with standard summaries (Lin and Hovy, 2003).

3.2.5 Key concepts metric

Two summaries generated by different subjects may differ in the documents that contribute to the summary, in the sentences that are chosen, and even in the information that they provide. In our Information Synthesis settings, where topics are complex and the number of documents to summarize is large, it is reasonable to expect that similarity measures based on document, sentence or n-gram overlap will not give large similarity values between pairs of manually generated summaries. Our hypothesis is that two manual reports, even if they differ in their information content, will have the same (or very similar) key concepts; if this is true, comparing the key concepts of two reports can be a better similarity measure than the previous ones. In order to measure the overlap of key concepts between two reports, we create a vector kc for every report, such that every element in the vector represents the frequency of a key concept in the report in relation to the size of the report:

kc(M)_i = freq(C_i, M) / |words(M)|

where freq(C_i, M) is the number of times the key concept C_i appears in the report M, and |words(M)| is the number of words in the report.
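As a concrete rendering of the recall-oriented n-gram overlap of Section 3.2.4, restricted to the unigram case used here, the following is a minimal sketch; it assumes the peer summary and the model units have already been tokenized, lemmatized and stop-word filtered, which is not shown:

```python
from collections import Counter

def ngram_counts(tokens, n=1):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(peer_tokens, model_units, n=1):
    """model_units: list of token lists (sentences or discourse units)."""
    peer = ngram_counts(peer_tokens, n)
    match = total = 0
    for unit in model_units:
        counts = ngram_counts(unit, n)
        total += sum(counts.values())
        # clipped count of n-grams co-occurring in the peer summary and the unit
        match += sum(min(c, peer[g]) for g, c in counts.items())
    return match / total if total else 0.0
```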
The key concept similarity NICOS (Nuclear Informative Concept Similarity) between two reports M and Mr can then be defined as the inverse of the Euclidean distance between their associated concept vectors:

NICOS(M, Mr) = 1 / |kc(Mr) - kc(M)|

In our experiment, the dimensions of the kc vectors correspond to the list of key concepts provided by our test subjects (see Section 2.3). This list is our gold standard for every topic.

4 Experimental results

Figure 1 shows, for every topic (horizontal axis), the QARLA estimation obtained for each similarity metric, i.e., the probability of a manual report being closer to another manual report than to an automatic report. Table 2 shows the average QARLA measure across all topics.

Metric          TT topics   IE topics
Perplexity      0.19        0.60
DocSim          0.20        0.34
SentenceSimR    0.29        0.52
SentenceSimP    0.38        0.57
ROUGE           0.54        0.53
NICOS           0.77        0.52

Table 2: Average QARLA

For the six TT topics, the key concept similarity NICOS performs 43% better than ROUGE, and all baselines give poor results (all their QARLA probabilities are below chance, QARLA < 0.5). A nonparametric Wilcoxon sign test confirms that the difference between NICOS and ROUGE is highly significant (p < 0.005). This is an indication that the Information Synthesis task, as we have defined it, should not be studied as a standard summarization problem. It also confirms our hypothesis that key concepts tend to be stable across different users, and may help to generate the reports.

The behavior of the two Information Extraction (IE) topics is substantially different from that of the TT topics. While the ROUGE measure remains stable (0.53 versus 0.54), the key concept similarity is much worse on IE topics (0.52 versus 0.77). On the other hand, all baselines improve, and some of them (SentenceSim precision and perplexity) give better results than both ROUGE and NICOS. Of course, no reliable conclusion can be obtained from only two IE topics. But the observed differences suggest that TT and IE may need different approaches, both to the automatic generation of reports and to their evaluation.

Figure 1: Comparison of similarity metrics by topic

One possible reason for this different behavior is that IE topics do not have a set of consistent key concepts; every case of a hunger strike, for instance, involves different people, organizations and places. The average number of different key concepts is 18.7 for TT topics and 28.5 for IE topics, a difference that reveals less agreement between subjects, supporting this argument.

5 Related work

Besides the measures included in our experiment, there are other criteria for comparing summaries which could also be tested for Information Synthesis:

Annotation of relevant sentences in a corpus. (Khandelwal et al., 2001) propose a task, called "Temporal Summarization", that combines summarization and topic tracking. The paper describes the creation of an evaluation corpus in which the most relevant sentences in a set of related news were annotated. Summaries are evaluated with a measure called "novel recall", based on sentences selected by a summarization system and sentences manually associated to events in the corpus. The agreement rate between subjects in the identification of key events and the sentence annotation does not correspond with the agreement between reports that we have obtained in our experiments. There are, at least, two reasons to explain this:

• (Khandelwal et al., 2001) work on an average of 43 documents, half the size of the topics in our corpus.
• Although there are topics in both experiments, the information needs in our testbed are more complex (e.g. motivations for the invasion of Chechnya) Factoids. One of the problems in the evaluation of summaries is the versatility of human language. Two different summaries may contain the same information. In (Halteren and Teufel, 2003), the content of summaries is manually represented, decomposing sentences in factoids or simple facts. They also annotate the composition, generalization and implication relations between extracted factoids. The resulting measure is different from unigram based similarity. The main problem of factoids, as compared to other metrics, is that they require a costly manual processing of the summaries to be evaluated. 6 Conclusions In this paper, we have reported an empirical study of the “Information Synthesis” task, defined as the process of (given a complex information need) extracting, organizing and relating the pieces of information contained in a set of relevant documents, in order to obtain a comprehensive, non redundant report that satisfies the information need. We have obtained two main results: • The creation of an Information Synthesis testbed (ISCORPUS) with 72 reports manually generated by 9 subjects for 8 complex topics with 100 relevant documents each. • The empirical comparison of candidate metrics to estimate the similarity between reports. Our empirical comparison uses a quantitative criterion (the QARLA estimation) based on the hypothesis that a good similarity metric will be able to distinguish between manual and automatic reports. According to this measure, we have found evidence that the Information Synthesis task is not a standard multi-document summarization problem: state-ofthe-art similarity metrics for summaries do not perform equally well with the reports in our testbed. Our most interesting finding is that manually generated reports tend to have the same key concepts: a similarity metric based on overlapping key concepts (NICOS) gives significantly better results than metrics based on language models, n-gram coocurrence and sentence overlapping. This is an indication that detecting relevant key concepts is a promising strategy in the process of generating reports. Our results, however, has also some intrinsic limitations. Firstly, manually generated summaries are extractive, which is good for comparison purposes, but does not faithfully reflect a natural process of human information synthesis. Another weakness is the maximum time allowed per report: 30 minutes seems too little to examine 100 documents and extract a decent report, but allowing more time would have caused an excessive fatigue to users. Our volunteers, however, reported a medium to high satisfaction with the results of their work, and in some occasions finished their task without reaching the time limit. ISCORPUS is available at: http://nlp.uned.es/ISCORPUS Acknowledgments This research has been partially supported by a grant of the Spanish Government, project HERMES (TIC-2000-0335-C03-01). We are indebted to E. Hovy for his comments on an earlier version of this paper, and C. Y. Lin for his assistance with the ROUGE measure. Thanks also to our volunteers for their valuable cooperation. References P. Clarkson and R. Rosenfeld. 1997. Statistical language modeling using the CMU-Cambridge toolkit. In Proceeding of Eurospeech ’97, Rhodes, Greece. J. Goldstein, V. O. Mittal, J. G. Carbonell, and J. P. Callan. 2000. 
Creating and Evaluating Multi-Document Sentence Extract Summaries. In Proceedings of Ninth International Conferences on Information Knowledge Management (CIKM´00), pages 165–172, McLean, VA. H. V. Halteren and S. Teufel. 2003. Examining the Consensus between Human Summaries: Initial Experiments with Factoids Analysis. In HLT/NAACL-2003 Workshop on Automatic Summarization, Edmonton, Canada. V. Khandelwal, R. Gupta, and J. Allan. 2001. An Evaluation Corpus for Temporal Summarization. In Proceedings of the First International Conference on Human Language Technology Research (HLT 2001), Tolouse, France. C. Lin and E. H. Hovy. 2003. Automatic Evaluation of Summaries Using N-gram Co-ocurrence Statistics. In Proceeding of the 2003 Language Technology Conference (HLT-NAACL 2003), Edmonton, Canada. I. Mani. 2001. Automatic Summarization, volume 3 of Natural Language Processing. John Benjamins Publishing Company, Amsterdam/Philadelphia. C. D. Manning and H. Schutze. 1999. Foundations of statistical natural language processing. MIT Press, Cambridge Mass. P. Over. 2003. Introduction to DUC-2003: An Intrinsic Evaluation of Generic News Text Summarization Systems. In Proceedings of Workshop on Automatic Summarization (DUC 2003). K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311– 318, Philadelphia. C. Peters, M. Braschler, J. Gonzalo, and M. Kluck, editors. 2002. Evaluation of Cross-Language Information Retrieval Systems, volume 2406 of Lecture Notes in Computer Science. SpringerVerlag, Berlin-Heidelberg-New York. D. R. Radev, J. Hongyan, and M. Budzikowska. 2000. Centroid-Based Summarization of Multiple Documents: Sentence Extraction, UtilityBased Evaluation, and User Studies. In Proceedings of the Workshop on Automatic Summarization at the 6th Applied Natural Language Processing Conference and the 1st Conference of the North American Chapter of the Association for Computational Linguistics, Seattle, WA, April.
Mining metalinguistic activity in corpora to create lexical resources using Information Extraction techniques: the MOP system Carlos Rodríguez Penagos Language Engineering Group, Engineering Institute UNAM, Ciudad Universitaria A.P. 70-472 Coyoacán 04510 Mexico City, México [email protected] Abstract This paper describes and evaluates MOP, an IE system for automatic extraction of metalinguistic information from technical and scientific documents. We claim that such a system can create special databases to bootstrap compilation and facilitate update of the huge and dynamically changing glossaries, knowledge bases and ontologies that are vital to modern-day research. 1 Introduction Availability of large-scale corpora has made it possible to mine specific knowledge from free or semi-structured text, resulting in what many consider by now a reasonably mature NLP technology. Extensive research in Information Extraction (IE) techniques, especially with the series of Message Understanding Conferences of the nineties, has focused on tasks such as creating and updating databases of corporate join ventures or terrorist and guerrilla attacks, while the ACQUILEX project used similar methods for creating lexical databases using the highly structured environment of machine-readable dictionary entries and other resources. Gathering knowledge from unstructured text often requires manually crafting knowledgeengineering rules both complex and deeply dependent of the domain at hand, although some successful experiences using learning algorithms have been reported (Fisher et al., 1995; Chieu et al., 2003). Although mining specific semantic relations and subcategorization information from free-text has been successfully carried out in the past (Hearst, 1999; Manning, 1993), automatically extracting lexical resources (including terminological definitions) from text in special domains has been a field less explored, but recent experiences (Klavans et al., 2001; Rodríguez, 2001; Cartier, 1998) show that compiling the extensive resources that modern scientific and technical disciplines need in order to manage the explosive growth of their knowledge, is both feasible and practical. A good example of this NLP-based processing need is the MedLine abstract database maintained by the National Library of Medicine1 (NLM), which incorporates around 40,000 Health Sciences papers each month. Researchers depend on these electronic resources to keep abreast of their rapidly changing field. In order to maintain and update vital indexing references such as the Unified Medical Language System (UMLS) resources, the MeSH and SPECIALIST vocabularies, the NLM staff needs to review 400,000 highly-technical papers each year. Clearly, neology detection, terminological information update and other tasks can benefit from applications that automatically search text for information, e.g., when a new term is introduced or an existing one is modified due to data or theory-driven concerns, or, in general, when new information about sublanguage usage is being put forward. But the usefulness of robust NLP applications for special-domain text goes beyond glossary updates. The kind of categorization information implicit in many definitions can help improve anaphora resolution, semantic typing or acronym identification in these corpora, as well as enhance “semantic rerendering” of special-domain ontologies and thesaurii (Pustejovsky et al., 2002). 
In this paper we describe and evaluate the MOP2 IE system, implemented to automatically create Metalinguistic Information Databases (MIDs) from large collections of special-domain 1 http://www.nlm.nih.gov/ 2 Metalinguistic Operation Processor research papers. Section 2 will lay out the theory, methodology and the empirical research grounding the application, while Section 3 will describe the first phase of the MOP tasks: accurate location of good candidate metalinguistic sentences for further processing. We experimented both with manually coded rules and with learning algorithms for this task. Section 4 focuses on the problem of identifying and organizing into a useful database structure the different linguistic constituents of the candidate predications, a phase similar to what are known in the IE literature as Named-Entity recognition, Element and Scenario template fill-up tasks. Finally, Section 5 discusses results and problems of our experiments, as well as future lines of research. 2 Metalanguage and term evolution in scientific disciplines 2.1 Explicit Metalinguistic Operations Preliminary empirical work to explore how researchers modify the terminological framework of their highly complex conceptual systems, included manual review of a corpus of 19 sociology articles (138,183 words) published in various British, American and Canadian academic journals with strict peer-review policies. We look at how term manipulation was done as well as how metalinguistic activity was signaled in text, both by lexical and paralinguistic means. Some of the indicators found included verbs and verbal phrases like called, known as, defined as, termed, coined, dubbed, and descriptors such as term and word. Other non-lexical markers included quotation marks, apposition and text formatting. A collection of potential metalinguistic patterns identified in the exploratory Sociology corpus was expanded (using other verbal tenses and forms) to 116 queries sent to the scientific and learned domains of the British National Corpus. The resulting 10,937 sentences were manually classified as metalinguistic or otherwise, with 5,407 (49.6% of total) found to be truly metalinguistic sentences. The presence of three components described below (autonym, informative segment and markers/operators) was the criteria for classification. Reliability of human subjects for this task has not been reported in the literature, and was not evaluated in our experiments. Careful analysis of this extensive corpus presented some interesting facts about what we have termed “Explicit Metalinguistic Operations” (or EMOs) in specialized discourse: A) EMOs usually do not follow the genusdifferentia scheme of aristotelian definitions, nor conform to the rigid and artificial structure of dictionary entries. More often than not, specific information about language use and term definition is provided by sentences such as: (1) This means that they ingest oxygen from the air via fine hollow tubes, known as tracheae, in which the term trachea is linked to the description fine hollow tubes in the context of a globally nonmetalinguistic sentence. Partial and heterogeneous information, rather that a complete definition, are much more common. B) Introduction of metalinguistic information in discourse is highly regular, regardless of the specific domain. 
This can be credited to the fact that the writer needs to mark these sentences for special processing by the reader, as they dissect across two different semiotic levels: a metalanguage and its object language, to use the terminology of logic where these concepts originate.3 Its constitutive markedness means that most of the times these sentences will have at least two indicators present, for example a verb and a descriptor, or quotation marks, or even have preceding sentences that announce them in some way. These formal and cognitive properties of EMOs facilitate the task of locating them accurately in text. C) EMOs can be further analyzed into 3 distinct components, each with its own properties and linguistic realizations: i) An autonym (see note 3): One or more selfreferential lexical items that are the logical or grammatical subject of a predication that needs not be a complete grammatical sentence. 3 At a very basic semiotic level natural language has to be split (at least methodologically) into two distinct systems that share the same rules and elements: a metalanguage, which is a language that is used to talk about another one, and an object language, which in turn can refer to and describe objects in the mind or in the physical world. The two are isomorphic and this accounts for reflexivity, the property of referring to itself, as when linguistic items are mentioned instead of being used normally in an utterance. Rey-Debove (1978) and Carnap (1934) call this condition autonymy. ii) An informative segment: a contribution of relevant information about the meaning, status, coding or interpretation of a linguistic unit. Informative segments constitute what we state about the autonymical element. iii) Markers/Operators: Elements used to mark or made prominent whole discourse operation, on account of its non-referential, metalinguistic nature. They are usually lexical, typographic or pragmatic elements that articulate autonyms and informative segments into a predication. Thus, in a sentence such as (2), the [autonym] is marked in square brackets, the {informational segment} in curly brackets and the <markeroperators> in angular brackets: (2) {The bit sequences representing quanta of knowledge} <will be called “>[Kenes]<”>, {a neologism intentionally similar to 'genes'}. 2.2 Defaults, knowledge and knowledge of language The 5,400 metalinguistic sentences from our BNC-based test corpus (henceforth, the EMO corpus) reflect an important aspect of scientific sublanguages, and of the scientific enterprise in general. Whenever scientists and scholars advance the state of the art of a discipline, the language they use has to evolve and change, and this buildup is carried out under metalinguistic control. Previous knowledge is transformed into new scientific common ground and ontological commitments are introduced and defended when semantic reference is established. That is why when we want to structure and acquire new knowledge we have to go through a resource-costly cognitive process that integrates, within coherent conceptual structures, a considerable amount of new and very complex lexical items and terms. It has to be pointed out that non-specialized language is not abundant4 in these kinds of metalinguistic exchanges because (unless in the context of language acquisition) we usually rely on a lexical competence that, although subsequently modified and enhanced, reaches the plateau of a generalized lexicon relatively early in our adult life. 
Technical terms can be thought of as semantic anomalies, in the sense that they are ad hoc 4 Our study shows that they represent between 1 and 6% of all sentences across different domains. constructs strongly bounded to a model, a domain or a context, and are not, by definition, part of the far larger linguistic competence from a first native language. The information provided by EMOs is not usually inferable from previous one available to the speaker’s community or expert group, and does not depend on general language competence by itself, but nevertheless is judged important and relevant enough to warrant the additional processing effort involved. Conventional resources like lexicons and dictionaries compile established meaning definitions. They can be seen as repositories of the default, core lexical information of words or terms used by a community (that is, the information available to an average, idealized speaker). A Metalinguistic Information Database (MID), on the other hand, compiles the real-time data provided by metalanguage analysis of leading-edge research papers, and can be conceptualized as an anti-dictionary: a listing of exceptions, special contexts and specific usage, of instances where meaning, value or pragmatic conditions have been spotlighted by discourse for cognitive reasons. The non-default and highly relevant information from MIDs could provide the material for new interpretation rules in reasoning applications, when inferences won’t succeed because the states of the lexicoconceptual system have changed. When interpreting text, regular lexical information is applied by default under normal conditions, but more specific pragmatic or discursive information can override it if necessary, or if context demands so (Lascarides & Copestake, 1995). A neologism or a word in an unexpected technical sense could stump a NLP system that assumes it will be able to use default information from a machine-readable dictionary. 3 Locating metalinguistic information in text: two approaches When implementingan IE application to mine metalinguistic information from text, the first issue to tackle is how to obtain a reliable set of candidate sentences from free text for input into the next phases of extraction. From our initial corpus analysis we selected 44 patterns that showed the best reliability for being EMO indicators. We start our processing5 by tokenizing text, which then is 5 Our implementation is Python-based, using the run through a cascade of finite-state devices based on identification patterns that extract a candidate set for filtering. Our filtering strategies in effect distinguish between useful results such as (3) from non-metalinguistic instances like (4): (3) Since the shame that was elicited by the coding procedure was seldom explicitly mentioned by the patient or the therapist, Lewis called it unacknowledged shame. (4) It was Lewis (1971;1976) who called attention to emotional elements in what until then had been construed as a perceptual phenomenon . For this task, we experimented with two strategies: First, we used corpus-based collocations to discard non-metalinguistic instances, for example the presence of attention in sentence (4) next to the marker called. Since immediate co-text seems important for this classification task, we also implemented learning algorithms that were trained on a subset from our EMO corpus, using as vectors either POS tags or word forms, at 1, 2, and 3 positions adjacent before and after our markers. 
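A rough sketch of the two strategies just outlined follows: marker patterns propose candidate sentences, and corpus-derived stop collocations (such as "called attention" in example (4)) filter out non-metalinguistic uses. The pattern and collocation lists below are small illustrative samples, not the 44 patterns actually used by the system.

```python
import re

# Illustrative subsets only; the real pattern set was derived from corpus study.
MARKER_PATTERNS = [
    r"\bso[- ]called\b", r"\bknown as\b", r"\btermed\b", r"\bcoined\b",
    r"\bdubbed\b", r"\bdefined as\b", r"\bcall(?:s|ed)?\b",
]
STOP_COLLOCATIONS = [
    r"\bcall(?:s|ed)? (?:attention|for|upon|into question)\b",  # assumed examples
]

def candidate_emos(sentences):
    """Yield (sentence, matched markers) pairs that survive collocation filtering."""
    for sent in sentences:
        markers = [p for p in MARKER_PATTERNS if re.search(p, sent, re.I)]
        if markers and not any(re.search(c, sent, re.I) for c in STOP_COLLOCATIONS):
            yield sent, markers
```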
These approaches are representative of wider paradigmatic approaches to NLP: symbolic and statistic techniques, each with their own advantages and limitations. Our evaluations of the MOP system are based on test runs over 3 document sets: a) our original exploratory corpus of sociology research papers [5581 sentences, 243 EMOs]; b) an online histology textbook [5146 sentences, 69 EMOs] ; and c) a small sample from the MedLine abstract database [1403 sentences, 10 EMOs]. Using collocational information, our first approach fared very well, presenting good precision numbers, but not so encouraging recall. The sociology corpus, for example, gave 0.94 precision (P) and 0.68 recall (R), while the histology one presented 0.9 P and 0.5 R. These low recall numbers reflect the fact that we only selected a subset of the most reliable and common metalinguistic patterns, and our list is not exhaustive. Example (5) shows one kind of metalinguistic sentence (with a copulative structure) attested in corpora, NLTK toolkit (nltk.sf.net) developed by E. Loper and S. Byrd at the University of Pennsylvania, although we have replaced stochastic POS taggers with an implementation of the Brill algorithm by Hugo Liu at MIT. Our output files follow XML standards to ensure transparency, portability and accessibility but that the system does not attempt to extract or process: (5) “Intercursive” power , on the other hand , is power in Weber's sense of constraint by an actor or group of actors over others. In order to better compare our two strategies, we decided to also zoom in on a more limited subset of verb forms for extraction (namely, calls, called, call), which presented ratios of metalinguistic relevance in our MOP corpus, ranging from 100% positives (for the pattern so called + quotation marks) to 77% (called, by itself) to 31% (call). Restricted to these verbs, our metrics show precision and recall rates of around 0.97, and an overall F-measure of 0.97.6 Of 5581 sentences (96 of which were metalinguistic sentences signaled by our cluster of verbs), 83 were extracted, with 13 (or 15.6% of candidates) filtered-out by collocations. For our learning experiments (an approach we have called contextual feature language models), we selected two well-known algorithms that showed promise for this classification task.7 The naive Bayes (NB) algorithm estimates the conditional probability of a set of features given a label, using the product of the probabilities of the individual features given that label. The Maximum Entropy model establishes a probability distribution that favors entropy, or uniformity, subject to the constraints encoded in the feature-label correlation. When training our ME classifiers, Generalized (GISMax) and Improved Iterative Scaling (IISMax) algorithms are used to estimate the optimal maximum entropy of a feature set, given a corpus. 1,371 training sentences were converted into labeled vectors, for example using 3 positions and POS tags: ('VB WP NNP', 'calls', 'DT NN NN') /'YES'@[102]. The different number of positions considered to the left and right of the markers in our training corpus, as well as the nature of the features selected (there are many more word-types than POS tags) ensured that our 3-part vector introduced a wide range of features against our 2 possible YES-NO labels for processing by our algorithms. 
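The contextual-feature classification can be sketched with NLTK's built-in classifiers; the feature encoding below mirrors the 3-part vectors shown above, while the two training examples are invented placeholders standing in for the 1,371 labeled vectors actually used.

```python
import nltk

def contextual_features(left_context, marker, right_context):
    # left/right context: POS tags or word forms at 1-3 positions around the marker
    return {"left": left_context, "marker": marker, "right": right_context}

# Labeled vectors in the style of ('VB WP NNP', 'calls', 'DT NN NN') / 'YES':
train = [
    (contextual_features("VB WP NNP", "calls", "DT NN NN"), "YES"),
    (contextual_features("NNP WP VBD", "called", "NN TO JJ"), "NO"),  # placeholder
]

nb = nltk.NaiveBayesClassifier.train(train)
# A Maximum Entropy classifier can be trained on the same featuresets, e.g. with
# nltk.MaxentClassifier.train(train, algorithm="iis"), when its numerical
# dependencies are available.
label = nb.classify(contextual_features("NNP NNP VBD", "called", "PRP JJ NN"))
```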
Although our test runs using only collocations showed initially that structural regularities would perform well, both with our restricted lemma cluster and with our wider set of verbs and markers, our intuitions about improvement with more features (more positions to the right or left of the markers) or a more controlled and grammatically restricted environment (a finite set of surrounding POS tags) turned out to be overly optimistic. Nevertheless, stochastic approaches that used short-range features did perform very well, in line with the hand-coded approach. The results of the different algorithms, restricted to the lexeme call, are presented in Table 1, while Figures 1 and 2 present the best results in the learning experiments for the complete set of patterns used in the collocation approach, over two of our evaluation corpora.

[Footnote 6: With a β factor of 1.0, and within the sociology document set. Footnote 7: See Ratnaparkhi (1997) and Berger et al. (1996) for a formal description of these algorithms.]

Type     Positions  Tags/Words  Features  Accuracy  Precision  Recall
GISMax   1          W           1254      0.97      0.96       0.98
IISMax   1          T           136       0.95      0.96       0.94
IISMax   1          W           1252      0.92      0.97       0.9
GISMax   1          T           138       0.91      0.9        0.96
GISMax   2          T           796       0.88      0.93       0.92
IISMax   2          T           794       0.86      0.95       0.89
IISMax   3          W           4290      0.87      0.85       0.98
GISMax   3          W           4292      0.87      0.85       0.98
IISMax   2          W           3186      0.86      0.87       0.95
GISMax   2          W           3188      0.86      0.87       0.95
NB       1          T           136       0.88      0.97       0.84
NB       2          T           794       0.87      0.96       0.84
NB       3          W           4290      0.73      0.86       0.77

Table 1. Best metrics for the "call" lexeme, sorted by F-measure and classifier accuracy

Figure 1. Best metrics (P/R/F) for the Sociology corpus: NB (3/T), IIS (1/W), GIS (1/W).
Figure 2. Best metrics (P/R/F) for the Histology corpus: NB (3/W), IIS (3/W), GIS (1/W).
Figures 1 & 2. Best results for filtering algorithms. [Footnote 8: Legend: P: Precision; R: Recall; F: F-Measure. NB: naïve Bayes; IIS: Maximum Entropy trained with Improved Iterative Scaling; GIS: Maximum Entropy trained with Generalized Iterative Scaling. (Positions/Feature type)]

Both Knowledge-Engineering and supervised learning approaches can be adequate for the extraction of metalinguistic sentences, although learning algorithms can be helpful when procedural rules have not been compiled; they also allow easier porting of systems to new thematic domains. We plan further research into stochastic approaches to fine-tune them for the task. One issue that merits special attention is why some of the algorithms and features work well with one corpus, but not so well with another. This fact is in line with observations in Nigam et al. (1999) that naive Bayes and Maximum Entropy do not show fundamental baseline superiorities, but are dependent on other factors. A hybrid approach that combines hand-crafted collocations with classifiers customized to each pattern's behavior and morpho-syntactic contexts in corpora might offer better results in future experiments.

4 Processing EMOs to compile metalinguistic information databases

Once we have extracted candidate EMOs, the MOP system conforms to a general processing architecture shown in Figure 3. POS tagging is followed by shallow parsing that attempts limited PP-attachment. The resulting chunks are then tagged semantically as Autonyms, Agents, Markers, Anaphoric elements, or simply as Noun Chunks, using heuristics based on syntactic, pragmatic and argument-structure observation of the extraction patterns.
Next, a predicate processing phase selects the most likely surface realization of informational segments, autonyms and makers-operators, and proceeds to fill the templates in our databases. This was done by following different processing routes customized for each pattern using corpus analysis as well as FrameNet data from Name conferral and Name bearing frames to establish relevant arguments and linguistic realizations. Figure 3. MOP Architecture As mentioned earlier, informational segments present many realizations that distance them from the clarity, completeness and conciseness of lexicographic entries. In fact, they may show up as full-fledged clauses (6), as inter- or intrasentential anaphoric elements (7 and 8, the first one a relative clause), supply a categorization descriptor (9), or even (10) restrict themselves semantically to what we could call a sententiallyunrealized “existential variable” (with logical form ›x) indicating only that certain discourse entity is being introduced. (6) In 1965 the term soliton was coined to describe waves with this remarkable behaviour. (7) This leap brings cultural citizenship in line with what has been called the politics of citizenship . (8) They are called “endothermic compounds.” (9) One of the most enduring aspects of all social theories are those conceptual entities known as structures or groups. (10) A ›x so called cell-type-specific TF can be used by closely related cells, e.g., in erythrocytes and megakaryocytes. We have not included an anaphora-resolution module in our present system, so that instances 7, 8 and 10 will only display in the output as unresolved surface element or as existential variable place-holders,9 but these issues will be explored in future versions of the system. Nevertheless, much more common occurrences as in (11) and (12) are enough to create MIDs quite useful for lexicographers and for NLP lexical resources. (11) The Jovian magnetic field exerts an influence out to near a surface, called the "magnetopause". (12) Here we report the discovery of a soluble decoy receptor, termed decoy receptor 3 (DcR3)... The correct database entry for example 12 is presented in Table 4. Reference: MedLine sample # 6 Autonym: decoy receptor 3 (DcR3) Information a soluble decoy receptor Markers/ Operators: termed Table 4. Sample entry of MID The final processing stage presents metrics shown in Figure 4, using a ß factor of 1.0 to estimate F-measures. To better reflect overall performance in all template slots, we introduced a threshold of similarity of 65% for comparison between a golden standard slot entry and the one provided by the application. Thus, if the autonym or the informational segment is at least 2/3 of the correct response, it is counted as a positive, in many cases leveling the field for the expected errors in the prepositional phrase- or acronym- attachment algorithms, but accounting for a (basically) correct selection of superficial sentence segments. 9 For sentence (8) the system would retrieve a previous sentence: (“A few have positive enthalpies of formation”). to define “endothermic compounds”. Corpus Tokenization Candidate extraction MID Candidate Filtering Collocations ♦ Learning POS tagging & Partial parsing Semantic labeling Database template fillup 5 Results, comparisons and discussion The DEFINDER system (Klavans et al, 2001) at Columbia University is, to my knowledge, the only one fully comparable with MOP, both in scope and goals, but some basic differences between them exist. 
First, DEFINDER examines user-oriented documents that are bound to contain fully-developed definitions for the layman, since the general goal of the PERSIVAL project is to present medical information to patients in a less technical language than that of the reference literature. MOP focuses on leading-edge research papers that present the less predictable informational templates of highly technical language. Secondly, by the very nature of DEFINDER's goals, its qualitative evaluation criteria include readability, usefulness and completeness as judged by lay subjects, criteria which we have not adopted here. Neither have we determined coverage against existing online dictionaries, as they have done.

Taking into account the above-mentioned differences between the two systems' methods and goals, MOP compares well with the 0.8 Precision and 0.75 Recall of DEFINDER. While the resulting MOP "definitions" generally do not present high readability or completeness, these informational segments are not meant to be read by laymen, but used by domain lexicographers reviewing existing glossaries for neological change, or, for example, in machine-readable form by applications that attempt automatic categorization for semantic re-rendering of an expert ontology, since definitional contexts provide sortal information as a natural part of the process of precisely situating a term or concept within the meaning network of interrelated lexical items.

The Metalinguistic Information Databases in their present form are not, strictly speaking, lexical knowledge bases comparable with the highly structured and sophisticated resources that use inheritance and typed features, like LKB (Copestake et al., 1993). MIDs are semi-structured resources (midway between raw corpora and structured lexical bases) that can be further processed to convert them into usable data sources, along the lines suggested by Vossen and Copestake (1993) for the syntactic kernels of lexicographic definitions, or by Pustejovsky et al. (2002) using corpus analytics to increase the semantic type coverage of the NLM UMLS ontology. Another interesting possibility is to use a dynamically-updated MID to trace the conceptual and terminological evolution of a discipline.

We believe that the low recall rates in our tests are in part due to the fact that we are dealing with the wider realm of metalinguistic information, as opposed to structured definitional sentences that have been distilled by an expert for consumer-oriented documents. We have opted in favor of exploiting less standardized, non-default metalinguistic information that is being put forward in text because it cannot be assumed to be part of the collective expert-domain competence (Section 2.1). In doing so, we have exposed our system to the less predictable and highly charged lexical environment of leading-edge research literature, the cauldron where knowledge and terminological systems are forged in real time, and where scientific meaning and interpretation are constantly debated, modified and agreed upon.

Figure 4. Metrics for the 3 corpora (# of records / global F-measure): Histology (35/0.71), Sociology (143/0.77), MedLine (10/0.78); precision and recall are reported globally, for informational segments, and for autonyms.

We have not performed major customization of the system (like enriching the tagging lexicon with medical terms), in order to preserve the ability to use the system across different domains. Domain customization may improve metrics, but at a cost to portability.
The implementation we have described here undoubtedly shows room for improvement in some areas, including: adding other patterns for better overall recall rates, deeper parsing for more accurate semantic typing of sentence arguments, etc. Also, the issue of which learning algorithms can better perform the initial filtering of EMO candidates is still very much an open question. Applications that can turn MIDs into truly useful lexical resources by further processing them need to be written. We plan to continue development of our proof-of-concept system to explore those areas. DEFINDER and MOP both show great potential as robust lexical acquisition systems capable of handling the vast electronic resources available today to researchers and laymen alike, helping to make them more accessible and useful. In doing so, they are also fulfilling the promise of NLP techniques as mature and practical technologies. References ACQUILEX projects, final report available at: http://www.cl.cam.ac.uk/Research/NL/acquilex/ Berger, A., S. Della Pietra et al., 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, vol. 22, no. 1. Carnap, R. 1934. The Logical Syntax of Language. Routledge and Kegan, Londres 1964. Cartier, E. 1998. Analyse Automatique des textes: l’example des informations définitoires. RIFRA 1998. Sfax, Tunisia. Chieu, Hai Leong, Ng, Hwee Tou, & Lee, Yoong Keok. 2003. Closing the Gap: Learning-Based Information Extraction Rivaling KnowledgeEngineering Methods. 41st ACL. Sapporo, Japan. Copestake, A., Sanfilippo, A., Briscoe, T. and de Pavia, V. 1993. The ACQUILEX LKB: An introduction. In: Inheritance, Defaults and the Lexicon. Cambridge University Press. Fisher, D., S. Soderland, J. McCarthy, F. Feng, and W. Lehnert. 1995. Description of the UMass system as used for MUC-6. In Proceedings of MUC-6 Hearst, M. 1998. Automated discovery of wordnet relations. In Christiane Fellbaum, editor, WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA Klavans, J. and S. Muresan. 2001. Evaluation of the DEFINDER System for Fully Automatic Glossary Construction, proceedings of the American Medical Informatics Association Symposium 2001 Lascarides, A. and Copestake A. 1995. The Pragmatics of Word Meaning, Proceedings of the AAAI Spring Symposium Series: Representation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity and Generativity, Stanford CA. Manning, Ch. 1993. Automatic acquisition of a large subcategorization dictionary from corpora, In Proceedings of the 31st ACL, Columbus, OH. Nigam, K., Lafferty, J., and McCallum, A. 1999. Using Maximum Entropy for Text Classification, IJCAI-99 Workshop on Machine Learning for Information Filtering, pp. 61-67 Pustejovsky J., A. Rumshisky and J. Castaño. 2002. Rerendering Semantic Ontologies: Automatic Extensions to UMLS through Corpus Analytics. LREC 2002 Workshop on Ontologies and Lexical Knowledge Bases. Las Palmas, Canary Islands, Spain. Ratnaparkhi A. 1997. A Simple Introduction to Maximum Entropy Models for Natural Language Processing, TR 97-08, Institute for Research in Cognitive Science, University of Pennsylvania Rey-Debove, J. 1978. Le Métalangage. Le Robert, Paris. Rodríguez, C. 2001. Parsing Metalinguistic Knowledge from Texts, Selected papers from CICLING-2000 Collection in Computer Science (CCC); National Polytechnic Institute (IPN), Mexico. Vossen, P. and Copestake, A. 1993. Untangling Definition Structure into Knowledge Representation. In: Inheritance, Defaults and the Lexicon.
Optimizing Typed Feature Structure Grammar Parsing through Non-Statistical Indexing Cosmin Munteanu and Gerald Penn University of Toronto 10 King’s College Rd. Toronto M5S 3G4 Canada mcosmin,gpenn  @cs.toronto.edu Abstract This paper introduces an indexing method based on static analysis of grammar rules and type signatures for typed feature structure grammars (TFSGs). The static analysis tries to predict at compile-time which feature paths will cause unification failure during parsing at run-time. To support the static analysis, we introduce a new classification of the instances of variables used in TFSGs, based on what type of structure sharing they create. The indexing actions that can be performed during parsing are also enumerated. Non-statistical indexing has the advantage of not requiring training, and, as the evaluation using large-scale HPSGs demonstrates, the improvements are comparable with those of statistical optimizations. Such statistical optimizations rely on data collected during training, and their performance does not always compensate for the training costs. 1 Introduction Developing efficient all-paths parsers has been a long-standing goal of research in computational linguistics. One particular class still in need of parsing time improvements is that of TFSGs. While simpler formalisms such as context-free grammars (CFGs) also face slow all-paths parsing times when the size of the grammar increases significantly, TFSGs (which generally have fewer rules than largescale CFGs) become slow as a result of the complex structures used to describe the grammatical categories. In HPSGs (Pollard and Sag, 1994), one category description could contain hundreds of feature values. This has been a barrier in transferring CFGsuccessful techniques to TFSG parsing. For TFSG chart parsers, one of the most timeconsuming operations is the retrieval of categories from the chart during rule completion (closing of constituents in the chart under a grammar rule). Looking in the chart for a matching edge for a daughter is accomplished by attempting unifications with edges stored in the chart, resulting in many failed unifications. The large and complex structure of TFS descriptions (Carpenter, 1992) leads to slow unification times, affecting the parsing times. Thus, failing unifications must be avoided during retrieval from the chart. To our knowledge, there have been only four methods proposed for improving the retrieval component of TFSG parsing. One (Penn and Munteanu, 2003) addresses only the cost of copying large categories, and was found to reduce parsing times by an average of 25% on a large-scale TFSG (MERGE). The second, a statistical method known as quickcheck (Malouf et al., 2000), determines the paths that are likely to cause unification failure by profiling a large sequence of parses over representative input, and then filters unifications at run-time by first testing these paths for type consistency. This was measured as providing up to a 50% improvement in parse times on the English Resource Grammar (Flickinger, 1999, ERG). The third (Penn, 1999b) is a similar but more conservative approach that uses the profile to re-order sister feature values in the internal data structure. This was found to improve parse times on the ALE HPSG by up to 33%. The problem with these statistical methods is that the improvements in parsing times may not justify the time spent on profiling, particularly during grammar development. 
The static analysis method introduced here does not use profiling, although it does not preclude it either. Indeed, an evaluation of statistical methods would be more relevant if measured on top of an adequate extent of non-statistical optimizations. Although quick-check is thought to produce parsing time improvements, its evaluation used a parser with only a superficial static analysis of chart indexing. That analysis, rule filtering (Kiefer et al., 1999), reduces parse times by filtering out mother-daughter unifications that can be determined to fail at compile-time. True indexing organizes the data (in this case, chart edges) to avoid unnecessary retrievals altogether, does not require the operations that it performs to be repeated once full unification is deemed necessary, and offers the support for easily adding information extracted from further static analysis of the grammar rules, while maintaining the same indexing strategy. Flexibility is one of the reasons for the successful employment of indexing in databases (Elmasri and Navathe, 2000) and automated reasoning (Ramakrishnan et al., 2001). In this paper, we present a general scheme for indexing TFS categories during parsing (Section 3). We then present a specific method for statically analyzing TFSGs based on the type signature and the structure of category descriptions in the grammar rules, and prove its soundness and completeness (Section 4.2.1). We describe a specific indexing strategy based on this analysis (Section 4), and evaluate it on two large-scale TFSGs (Section 5). The result is a purely non-statistical method that is competitive with the improvements gained by statistical optimizations, and is still compatible with further statistical improvements. 2 TFSG Terminology TFSs are used as formal representatives of rich grammatical categories. In this paper, the formalism from (Carpenter, 1992) will be used. A TFSG is defined relative to a fixed set of types and set of features, along with constraints, called appropriateness conditions. These are collectively known as the type signature (Figure 3). For each type, appropriateness specifies all and only the features that must have values defined in TFSs of that type. It also specifies the types of the values that those features can take. The set of types is partially ordered, and has a unique most general type ( – “bottom”). This order is called subsumption (  ): more specific (higher) types inherit appropriate features from their more general (lower) supertypes. Two types t1 and t2 unify (t1  t2  ) iff they have a least upper bound in the hierarchy. Besides a type signature, TFSGs contain a set of grammar (phrase) rules and lexical descriptions. A simple example of a lexical description is: john  SYNSEM:  SYN: np  SEM: j , while an example of a phrase rule is given in Figure 1.  SYN: s  SEM:  VPSem AGENT: NPSem    SYN: np  AGR: Agr  SEM : NPSem ,  SYN: vp  AGR: Agr  SEM: VPSem . Figure 1: A phrase rule stating that the syntactic category s can be combined from np and vp if their values for agr are the same. The semantics of s is that of the verb phrase, while the semantics of the noun phrase serves as agent. 2.1 Typed Feature Structures A TFS (Figure 2) is like a recursively defined record in a programming language: it has a type and features with values that can be TFSs, all obeying the appropriateness conditions of the type signature. TFSs can also be seen as rooted graphs, where arcs correspond to features and nodes to substructures. 
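The least-upper-bound type unification just described can be illustrated with a toy sketch over a hand-coded fragment of a signature like the one in Figure 3; this is for exposition only, and is not the term representation used in the actual parser.

```python
# Each type maps to its immediate supertypes ("bot" is the most general type).
PARENTS = {
    "bot": [], "pers": ["bot"], "num": ["bot"], "gend": ["bot"],
    "first": ["pers"], "second": ["pers"], "third": ["pers"],
    "singular": ["num"], "plural": ["num"],
    "masculine": ["gend"], "feminine": ["gend"], "neuter": ["gend"],
}

def supertypes(t):
    """All types subsuming t, including t itself."""
    seen, stack = {t}, list(PARENTS[t])
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(PARENTS[p])
    return seen

def subsumes(general, specific):
    return general in supertypes(specific)

def type_unify(t1, t2):
    """Least upper bound of t1 and t2 under subsumption, or None if none exists."""
    uppers = [t for t in PARENTS if subsumes(t1, t) and subsumes(t2, t)]
    lub = [t for t in uppers if all(subsumes(t, u) for u in uppers)]
    return lub[0] if lub else None

assert type_unify("pers", "third") == "third"
assert type_unify("singular", "plural") is None   # no common subtype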
A node typing function θ  q associates a type to every node q in a TFS. Every TFS F has a unique starting or root node, qF. For a given TFS, the feature value partial function δ  f q specifies the node reachable from q by feature f when one exists. The path value partial function δ  π q specifies the node reachable from q by following a path of features π when one exists. TFSs can be unified as well. The result represents the most general consistent combination of the information from two TFSs. That information includes typing (by unifying the types), feature values (by recursive unification), and structure sharing (by an equivalence closure taken over the nodes of the arguments). For large TFSs, unification is computationally expensive, since all the nodes of the two TFSs are visited. In this process, many nodes are collapsed into equivalence classes because of structure sharing. A node x in a TFS F with root qF and a node x in a TFS F with root qF  are equivalent (  ) with respect to F  F iff x  qF and x  qF  , or if there is a path π such that δF  F   π qF  x and δF  F   π qF   x . NUMBER: PERSON: GENDER: masculine third [1]singular NUMBER: PERSON: GENDER: third neuter [1] throwing THROWER: index THROWN: index Figure 2: A TFS. Features are written in uppercase, while types are written with bold-face lowercase. Structure sharing is indicated by numerical tags, such as [1]. THROWER: THROWN: index index masculine feminine neuter singular plural first second third num gend pers PERSON: GENDER: NUMBER: pers num gend throwing index Figure 3: A type signature. For each type, appropriateness declares the features that must be defined on TFSs of that type, along with the type restrictions applying to their values. 2.2 Structure Sharing in Descriptions TFSGs are typically specified using descriptions, which logically denote sets of TFSs. Descriptions can be more terse because they can assume all of the information about their TFSs that can be inferred from appropriateness. Each non-disjunctive description can be associated with a unique most general feature structure in its denotation called a most general satisfier (MGSat). While a formal presentation can be found in (Carpenter, 1992), we limit ourselves to an intuitive example: the TFS from Figure 2 is the MGSat of the description: throwing  THROWER:  PERSON: third  NUMBER:  singular  Nr GENDER : masculine  THROWN :  PERSON : third  NUMBER : Nr GENDER : neuter . Descriptions can also contain variables, such as Nr. Structure sharing is enforced in descriptions through the use of variables. In TFSGs, the scope of a variable extends beyond a single description, resulting in structure sharing between different TFSs. In phrase structure rules (Figure 1), this sharing can occur between different daughter categories in a rule, or between a mother and a daughter. Unless the term description is explicitly used, we will use “mother” and “daughter” to refer to the MGSat of a mother or daughter description. We can classify instances of variables based on what type of structure sharing they create. Internal variables are the variables that represent internal structure sharing (such as in Figure 2). The occurrences of such variables are limited to a single category in a phrase structure rule. External variables are the variables used to share structure between categories. If a variable is used for structure sharing both inside a category and across categories, then it is also considered an external variable. 
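The graph view of TFSs introduced above (node typing, path values, and structure sharing) can likewise be sketched minimally: nodes carry a type and a feature map, the path-value function follows features from a node when defined, and structure sharing corresponds to reusing the same node object, as for the shared NUMBER value [1] in Figure 2. The class and field names below are illustrative, not the parser's internal data structures.

```python
class Node:
    def __init__(self, type_, **features):
        self.type = type_          # theta(q), the node typing function
        self.features = features   # feature name -> Node, i.e. delta(f, q)

def delta(path, node):
    """Return the node reached from `node` by the feature path, or None."""
    for f in path:
        if node is None or f not in node.features:
            return None
        node = node.features[f]
    return node

# The TFS of Figure 2: THROWER and THROWN share their NUMBER value ([1]).
shared_num = Node("singular")
tfs = Node("throwing",
           THROWER=Node("index", PERSON=Node("third"),
                        NUMBER=shared_num, GENDER=Node("masculine")),
           THROWN=Node("index", PERSON=Node("third"),
                       NUMBER=shared_num, GENDER=Node("neuter")))
assert delta(["THROWER", "NUMBER"], tfs) is delta(["THROWN", "NUMBER"], tfs)
```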
For a specific category, two kinds of external variable instances can be distinguished, depending on their occurrence relative to the parsing control strategy: active external variables and inactive external variables. Active external variables are instances of external variables that are shared between the description of a category D and one or more descriptions of categories in the same rule as D visited by the parser before D as the rule is extended (completed). Inactive external variables are the external variable instances that are not active. For example, in bottom-up left-to-right parsing, all of a mother’s external variable instances would be active because, being external, they also occur in one of the daughter descriptions. Similarly, all of the leftmost daughter’s external variable instances would be inactive because this is the first description used by the parser. In Figure 1, Agr is an active external variable in the second daughter, but it is inactive in the first daughter. The active external variable instances are important for path indexing (Section 4.2), because they represent the points at which the parser must copy structure between TFSs. They are therefore substructures that must be provided to a rule by the parsing chart if these unifications could potentially fail. They also represent shared nodes in the MGSats of a rule’s category descriptions. In our definitions, we assume without loss of generality that parsing proceeds bottom-up, with left-to-right of rule daughters. This is the ALE system’s (Carpenter and Penn, 1996) parsing strategy. Definition 1. If D1  Dn are daughter descriptions in a rule and the rules are extended from left to right, then Ext  MGSat  Di is the set of nodes shared between MGSat  Di and MGSat  D1  MGSat  Di  1 . For a mother description M, Ext  MGSat  M is the set of nodes shared with any daughter in the same rule. Because the completion of TFSG rules can cause the categories to change in structure (due to external variable sharing), we need some extra notation to refer to a phrase structure rule’s categories at different times during a single application of that rule. By  M we symbolize the mother M after M’s rule is completed (all of the rule’s daughters are matched with edges in the chart).  D symbolizes the daughter D after all daughters to D’s left in D’s rule were unified with edges from the chart. An important relation exists between M and  M: if qM is M’s root and qM is  M’s root, then  x  M  x   M such that  π for which δ  π qM  x and δ  π qM  x, θ  x θ  x . In other words, extending the rule extends the information states of its categories monotonically. A similar relation exists between D and  D. The set of all nodes x in M such that  π for which δ  π qM  x and δ  π qM  x will be denoted by x  1 (and likewise for nodes in D). There may be more than one node in x  1 because of unifications that occur during the extension of M to  M. 3 The Indexing Timeline Indexing can be applied at several moments during parsing. We introduce a general strategy for indexed parsing, with respect to what actions should be taken at each stage. Three main stages can be identified. The first one consists of indexing actions that can be taken off-line (along with other optimizations that can be performed at compile-time). The second and third stages refer to actions performed at run time. Stage 1. In the off-line phase, a static analysis of grammar rules can be performed. 
The complete content of mothers and daughters may not be accessible, due to variables that will be instantiated during parsing, but various sources of information, such as the type signature, appropriateness specifications, and the types and features of mother and daughter descriptions, can be analyzed and an appropriate indexing scheme can be specified. This phase of indexing may include determining: (1a) which daughters in which rules will certainly not unify with a specific mother, and (1b) what information can be extracted from categories during parsing that can constitute indexing keys. It is desirable to perform as much analysis as possible off-line, since the cost of any action taken during run time prolongs the parsing time.

Stage 2. During parsing, after a rule has been completed, all variables in the mother have been extended as far as they can be before insertion into the chart. This offers the possibility of further investigating the mother’s content and extracting supplemental information from the mother that contributes to the indexing keys. However, the choice of such investigative actions must be carefully studied, since it might burden the parsing process.

Stage 3. While completing a rule, for each daughter a matching edge is searched for in the chart. At this moment, the daughter’s active external variables have been extended as far as they can be before unification with a chart edge. The information identified in stage (1b) can be extracted and unified as a precursor to the remaining steps involved in category unification. These steps also take place at this stage.

4 TFSG Indexing

To reduce the time spent on failures when searching for an edge in the chart, each edge (edge’s category) has an associated index key which uniquely identifies the set of daughter categories that can potentially match it. When completing a rule, edges unifying with a specific daughter are searched for in the chart. Instead of visiting all edges in the chart, the daughter’s index key selects a restricted number of edges for traversal, thus reducing the number of unification attempts. The passive edges added to the chart represent specializations of rules’ mothers. When a rule is completed, its mother M is added to the chart according to M’s indexing scheme, which is the set of index keys of daughters that might possibly unify with M. The index is implemented as a hash, where the hash function applied to a daughter yields the daughter’s index key (a selection of chart edges). For a passive edge representing M, M’s indexing scheme provides the collection of hash entries where it will be added. Each daughter is associated with a unique index key. During parsing, a specific daughter is searched for in the chart by visiting only those edges that have a matching key, thus reducing the time needed for traversing the chart. The index keys can be computed off-line (when daughters are indexed by position), or during parsing.

4.1 Positional Indexing

In positional indexing, the index key for each daughter is represented by its position (rule number and daughter position in the rule). The structure of the index can be determined at compile-time (first stage). For each mother M in the grammar, a collection L(M) = {(Ri, Dj) | the daughter at position Dj of rule Ri can match M} is created (M’s indexing scheme), where each element of L(M) represents the rule number Ri and daughter position Dj inside rule Ri (1 ≤ j ≤ arity(Ri)) of a category that can match with M.
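The following sketch illustrates the shape of this scheme: an off-line computation of L(M) for every mother, and a chart organized as a hash from positional keys (Ri, Dj) to passive edges. It is a schematic Python rendering under our own simplifications; in particular, can_match stands in for the compile-time compatibility filter, which, as discussed next, can only be a conservative approximation.

```python
from collections import defaultdict

def build_indexing_schemes(rules, can_match):
    """For each rule's mother M, compute L(M) = {(i, j) | the j-th daughter of
    rule i could unify with M}.  `rules` maps rule ids to (mother, daughters);
    `can_match` is the off-line compatibility filter."""
    schemes = {}
    for r, (mother, _) in rules.items():
        keys = []
        for i, (_, daughters) in rules.items():
            # daughters are numbered from 1; only daughters with j >= 2 are
            # ever looked up in the chart under the EFD-style strategy
            for j, daughter in enumerate(daughters, start=1):
                if j >= 2 and can_match(mother, daughter):
                    keys.append((i, j))
        schemes[r] = keys
    return schemes

class PositionalChart:
    """The chart as a hash from positional keys (i, j) to passive edges."""
    def __init__(self):
        self.entries = defaultdict(list)

    def insert(self, edge, scheme):
        # an edge for mother M is stored once per key in L(M)
        for key in scheme:
            self.entries[key].append(edge)

    def candidates(self, i, j):
        # only edges that can possibly unify with daughter j of rule i
        return self.entries[(i, j)]
```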
For TFSGs it is not possible to compute off-line the exact list of mother-daughter matching pairs, but it is possible to rule out certain non-unifiable pairs before parsing — a compromise that pays off with a very low index management time. During parsing, each time an edge (representing a rule’s mother M) is added to the chart, it is inserted into the hash entries associated with the positions (Ri, Dj) from the list L(M) (the number of entries where M is inserted is |L(M)|). The entry associated with the key (Ri, Dj) will contain only categories that can possibly unify with the daughter at position (Ri, Dj) in the grammar. Because our parsing algorithm closes categories depth-first under leftmost daughter matching, only daughters Di with i ≥ 2 are searched for in the chart (and consequently, indexed). We used the EFD-based modification of this algorithm (Penn and Munteanu, 2003), which needs no active edges, and requires a constant two copies per edge, rather than the standard one copy per retrieval found in Prolog parsers. Without this, the cost of copying TFS categories would have overwhelmed the benefit of the index.

4.2 Path Indexing

Path indexing is an extension of positional indexing. Although it shares the same underlying principle as the path indexing used in automated reasoning (Ramakrishnan et al., 2001), its functionality is related to quick-check: extract a vector of types from a mother (which will become an edge) and a daughter, and test the unification of the two vectors before attempting to unify the edge and the daughter. Path indexing differs from quick-check in that it identifies these paths by a static analysis of grammar rules, performed off-line and with no training required. Path indexing is also built on top of positional indexing, therefore the vector of types can be different for each potentially unifiable mother-daughter pair.

4.2.1 Static Analysis of Grammar Rules

Similar to the abstract interpretation used in program verification (Cousot and Cousot, 1992), the static analysis tries to predict a run-time phenomenon (specifically, unification failures) at compile-time. It tries to identify nodes in a mother that carry no relevant information with respect to unification with a particular daughter. For a mother M unifiable with a daughter D, these nodes will be grouped in a set StaticCut(M, D). Intuitively, these nodes can be left out or ignored while computing the unification of M̄ and D̄. The StaticCut can be divided into two subsets:

StaticCut(M, D) = RigidCut(M, D) ∪ VariableCut(M, D)

The RigidCut represents nodes that can be left out because neither they, nor one of their δ-ancestors (nodes from which they are reachable by some path π), can have their type values changed by means of external variable sharing. The VariableCut represents nodes that are either externally shared, or have an externally shared ancestor, but still can be left out.

Definition 2. RigidCut(M, D) is the largest subset of nodes x ∈ M such that, for all y ∈ D for which x ≈ y:

1. x ∉ Ext(M), y ∉ Ext(D),
2. for all x′ ∈ M such that there is a π with δ(π, x′) = x, x′ ∉ Ext(M), and
3. for all y′ ∈ D such that there is a π with δ(π, y′) = y, y′ ∉ Ext(D).

Definition 3. VariableCut(M, D) is the largest subset of nodes x ∈ M such that:

1. x ∉ RigidCut(M, D), and
2. for all y ∈ D for which x ≈ y, and for all types s ⊒ θ(x) and t ⊒ θ(y), s ⊔ t exists.

In words, a node can be left out even if it is externally shared (or has an externally shared ancestor) if all possible types this node can have unify with all possible types its corresponding nodes in D can have.
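To make Condition 2 of Definition 3 concrete, the sketch below checks it over a small, hypothetical type signature. The subtypes and unifiable helpers are our own stand-ins for the signature operations of a real system, not grammar code.

```python
def variable_cut_condition(tx, ty, subtypes, unifiable):
    """Condition 2 of Definition 3: every type that x could specialize to must
    unify with every type that y could specialize to."""
    return all(unifiable(s, t) for s in subtypes(tx) for t in subtypes(ty))

# A toy signature: t0 subsumes t1 and t2; t1 subsumes the incompatible t3 and t4.
subs = {"t0": {"t0", "t1", "t2", "t3", "t4"},
        "t1": {"t1", "t3", "t4"},
        "t2": {"t2"}, "t3": {"t3"}, "t4": {"t4"}}
subtypes = lambda t: subs[t]
# two types unify iff they have a common subtype (an upper bound exists)
unifiable = lambda s, t: bool(subs[s] & subs[t])

print(variable_cut_condition("t2", "t2", subtypes, unifiable))  # True
print(variable_cut_condition("t1", "t1", subtypes, unifiable))  # False: t3 vs. t4
```

The second call fails for the same reason a node like x4 is excluded in Figure 4 below: the two corresponding nodes could later specialize to incompatible subtypes.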
Due to structure sharing, the types of nodes in M and D can change during parsing, by being specialized to one of their subtypes. Condition 2 ensures that the types of these nodes will remain compatible (have a least upper bound), even if they specialize during rule completion. An intuitive example (real-life examples cannot be reproduced here — a category in a typical TFSG can have hundreds of nodes) is presented in Figure 4.

Figure 4: Given the above type signature, mother M and daughter D (externally shared nodes are pointed to by dashed arrows), nodes x1, x2, and x3 from M can be left out when unifying M with D during parsing. x1 and x3 ∈ RigidCut(M, D), while x2 ∈ VariableCut(M, D) (θ(y2) can promote only to t7, thus x2 and y2 will always be compatible). x4 is not included in the StaticCut, because if θ(y5) promotes to t5, then θ(y4) will promote to t5 (not unifiable with t3).

When computing the unification between a mother and a daughter during parsing, the same outcome (success or failure) will be reached by using a reduced representation of the mother (written M̄sD), with the nodes in StaticCut(M, D) removed from M̄.

Proposition 1. For a mother M and a daughter D, if M ⊔ D ≠ ⊥ before parsing, and M̄ (as an edge in the chart) and D̄ exist, then during parsing: (1) M̄sD ⊔ D̄ ≠ ⊥ ⇒ M̄ ⊔ D̄ ≠ ⊥, (2) M̄sD ⊔ D̄ = ⊥ ⇒ M̄ ⊔ D̄ = ⊥.

Proof. The second part (M̄sD ⊔ D̄ = ⊥ ⇒ M̄ ⊔ D̄ = ⊥) of Proposition 1 has a straightforward proof: if M̄sD ⊔ D̄ = ⊥, then there is an equivalence class z̄ in M̄sD ⊔ D̄ such that there is no t for which, for all x ∈ z̄, t ⊒ θ(x). Since M̄sD ⊆ M̄, there is such a class in M̄ ⊔ D̄ as well, and therefore M̄ ⊔ D̄ = ⊥. The first part of the proposition will be proven by showing that, for every z in M̄ ⊔ D̄, a consistent type can be assigned to z̄, where z̄ is the set of nodes in M̄ and D̄ equivalent to z with respect to the unification of M̄ and D̄.¹ Three lemmata need to be formulated:

Lemma 1. If x̄ ∈ M̄ and x ∈ x̄⁻¹, then θ(x) ⊑ θ(x̄). Similarly, for ȳ ∈ D̄ and y ∈ ȳ⁻¹, θ(y) ⊑ θ(ȳ).

Lemma 2. If types t0, t1, …, tn are such that for every t′0 ⊒ t0 and every i ∈ [1, n], t′0 ⊔ ti ≠ ⊥, then there is a t ⊒ t0 such that for every i ∈ [1, n], t ⊒ ti.

¹Because we do not assume inequated TFSs (Carpenter, 1992) here, unification failure must result from type inconsistency.

Lemma 3. If x̄ ∈ M̄ and ȳ ∈ D̄ for which x̄ ≈ ȳ, then there are x ∈ x̄⁻¹ and y ∈ ȳ⁻¹ such that x ≈ y.

In proving the first part of Proposition 1, four cases are identified: Case A: |z̄ ∩ M̄| = 1 and |z̄ ∩ D̄| = 1; Case B: |z̄ ∩ M̄| = 1 and |z̄ ∩ D̄| > 1; Case C: |z̄ ∩ M̄| > 1 and |z̄ ∩ D̄| = 1; Case D: |z̄ ∩ M̄| > 1 and |z̄ ∩ D̄| > 1. Case A is trivial, and D is a generalization of B and C.

Case B. It will be shown that there is a t ∈ Type such that, for all ȳ ∈ z̄ ∩ D̄ and for the x̄ ∈ z̄ ∩ M̄, t ⊒ θ(ȳ) and t ⊒ θ(x̄).

Subcase B.i: x̄ ∈ M̄, x̄ ∉ M̄sD. For all ȳ ∈ z̄ ∩ D̄, ȳ ≈ x̄. Therefore, according to Lemma 3, there are x ∈ x̄⁻¹ and y ∈ ȳ⁻¹ such that x ≈ y. Thus, according to Condition 2 of Definition 3, for all s ⊒ θ(y) and t ⊒ θ(x), s ⊔ t ≠ ⊥. But according to Lemma 1, θ(y) ⊑ θ(ȳ) and θ(x) ⊑ θ(x̄). Therefore, for all ȳ ∈ z̄ ∩ D̄ and all s ⊒ θ(ȳ), t ⊒ θ(x̄), s ⊔ t ≠ ⊥, and hence, for all ȳ ∈ z̄ ∩ D̄ and all t ⊒ θ(x̄), t ⊔ θ(ȳ) ≠ ⊥. Thus, according to Lemma 2, there is a t ⊒ θ(x̄) such that, for all ȳ ∈ z̄ ∩ D̄, t ⊒ θ(ȳ).

Subcase B.ii: x̄ ∈ M̄, x̄ ∈ M̄sD. Since M̄sD ⊔ D̄ ≠ ⊥, there is a t ⊒ θ(x̄) such that, for all ȳ ∈ z̄ ∩ D̄, t ⊒ θ(ȳ).

Case C. It will be shown that there is a t ⊒ θ(ȳ) such that, for all x̄ ∈ z̄, t ⊒ θ(x̄). Let ȳ ∈ z̄ ∩ D̄. The set z̄ ∩ M̄ can be divided into two subsets: Sii = {x̄ ∈ z̄ ∩ M̄ | x̄ ∈ M̄sD}, and Si = {x̄ ∈ z̄ ∩ M̄ | x̄ ∈ M̄, x̄ ∉ M̄sD, and x ∈ VariableCut(M, D)}. If x were in RigidCut(M, D), then necessarily |z̄ ∩ M̄| would be 1.
Since Sii   M sD and  M sD   D  , then  t  θ  y such that  x  Sii t  θ  x (*). However,  x  Sii, x  y. Therefore, according to Lemma 3,  x  Sii  x  x  1  y  y  1 such that x  y. Thus, since x  VariableCut  M D , Condition 2 of Definition 3 holds, and therefore, according to Lemma 1,  s1  θ  x  s2  θ  y s1  s2  . More than this, since t  θ  y (for the type t from (*)),  s1  θ  x  s 2  t s1  s 2  , and hence,  s 2  t s 2  θ  x  . Thus, according to Lemma 2 and to (*),  t  t  θ  y such that  x  Sii t  θ  x  Thus,  t such that  x  z , t  θ  x . While Proposition 1 could possibly be used by grammar developers to simplify TFSGs themselves at the source-code level, here we only exploit it for internally identifying index keys for more efficient chart parsing with the existing grammar. There may be better static analyses, and better uses of this static analysis. In particular, future work will focus on using static analysis to determine smaller representations (by cutting nodes in Static Cuts) of the chart edges themselves. 4.2.2 Building the Path Index The indexing schemes used in path indexing are built on the same principles as those in positional indexing. The main difference is the content of the indexing keys, which now includes a third element. Each mother M has its indexing scheme defined as: L  M   Ri D j Vi  j  . The pair  Ri D j is the positional index key (as in positional indexing), while Vi  j is the path index vector containing type values extracted from M. A different set of types is extracted for each mother-daughter pair. So, path indexing uses a two-layer indexing method: the positional key for daughters, and types extracted from the typed feature structure. Each daughter’s index key is now given by L  D j   Ri Vi  j  , where Ri is the rule number of a potentially matching mother, and Vi  j is the path index vector containing types extracted from D j. The types extracted for the indexing vectors are those of nodes found at the end of indexing paths. A path π is an indexing path for a motherdaughter pair  M D iff: (1) π is defined for both M and D, (2)  x  StaticCut  M D  f s.t. δ  f x  δ  π qM (qM is M’s root), and (3) δ  π qM   StaticCut  M D . Indexing paths are the “frontiers” of the non-statically-cut nodes of M. A similar key extraction could be performed during Stage 2 of indexing (as outlined in Section 3), using  M rather than M. We have found that this online path discovery is generally too expensive to be performed during parsing, however. As stated in Proposition 1, the nodes in StaticCut  M D do not affect the success/failure of  M   D. Therefore, the types of first nodes not included in StaticCut  M D along each path π that stems from the root of M and D are included in the indexing key, since these nodes might contribute to the success/failure of the unification. It should be mentioned that the vectors Vi  j are filled with values extracted from  M after M’s rule is completed, and from  D after all daughters to the left of D are unified with edges in the chart. As an example, assuming that the indexing paths are THROWER:PERSON, THROWN, and THROWN:GENDER, the path index vector for the TFS shown in Figure 2 is  third index neuter . 4.2.3 Using the Path Index Inserting and retrieving edges from the chart using path indexing is similar to the general method presented at the beginning of this section. The first layer of the index is used to insert a mother as an edge into appropriate chart entries, according to the positional keys for the daughters it can match. 
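The sketch below illustrates the two operations behind these path index vectors: extracting the types found at the ends of the indexing paths of a category, and testing two vectors for compatibility by pairwise type unification. It uses our own flat encoding of a TFS (a node table and a (feature, node) table) and assumes a unifiable test over the signature is given; it is not the actual implementation.

```python
def follow(feats, root, path):
    """delta(pi, q): follow a feature path from the root; None when undefined."""
    node = root
    for f in path:
        node = feats.get((f, node))
        if node is None:
            return None
    return node

def path_index_vector(types, feats, root, indexing_paths):
    """Types at the ends of the indexing paths (None where a path is undefined)."""
    return [types.get(follow(feats, root, p)) for p in indexing_paths]

def vectors_compatible(v_edge, v_daughter, unifiable):
    """Second-layer filter: corresponding types must pairwise unify."""
    return all(s is None or t is None or unifiable(s, t)
               for s, t in zip(v_edge, v_daughter))

# The TFS of Figure 2 with the indexing paths THROWER:PERSON, THROWN,
# and THROWN:GENDER yields the vector ['third', 'index', 'neuter'].
types = {"n0": "throwing", "n1": "index", "n2": "index", "n3": "singular",
         "p1": "third", "p2": "third", "g1": "masculine", "g2": "neuter"}
feats = {("THROWER", "n0"): "n1", ("THROWN", "n0"): "n2",
         ("PERSON", "n1"): "p1", ("NUMBER", "n1"): "n3", ("GENDER", "n1"): "g1",
         ("PERSON", "n2"): "p2", ("NUMBER", "n2"): "n3", ("GENDER", "n2"): "g2"}
paths = [("THROWER", "PERSON"), ("THROWN",), ("THROWN", "GENDER")]
print(path_index_vector(types, feats, "n0", paths))  # ['third', 'index', 'neuter']
```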
Along with the mother, its path index vector is inserted into the chart. When searching for a matching edge for a daughter, the search is restricted by the first indexing layer to a single entry in the chart (labeled with the positional index key for the daughter). The second layer restricts searches to the edges that have a compatible path index vector. The compatibility is defined as type unification: the type pointed to by the element Vi  j  n of an edge’s vector Vi  j should unify with the type pointed to by the element Vi  j  n of the path index vector Vi  j of the daughter on position D j in a rule Ri. 5 Experimental Evaluation Two TFSGs were used to evaluate the performance of indexing: a pre-release version of the MERGE grammar, and the ALE port of the ERG (in its final form). MERGE is an adaptation of the ERG which uses types more conservatively in favour of relations, macros and complex-antecedent constraints. This pre-release version has 17 rules, 136 lexical items, 1157 types, and 144 introduced features. The ERG port has 45 rules, 1314 lexical entries, 4305 types and 155 features. MERGE was tested on 550 sentences of lengths between 6 and 16 words, extracted from the Wall Street Journal annotated parse trees (where phrases not covered by MERGE’s vocabulary were replaced by lexical entries having the same parts of speech), and from MERGE’s own test corpus. ERG was tested on 1030 sentences of lengths between 6 and 22 words, extracted from the Brown Corpus and from the Wall Street Journal annotated parse trees. Rather than use the current version of ALE, TFSs were encoded as Prolog terms as prescribed in (Penn, 1999a), where the number of argument positions is the number of colours needed to colour the feature graph. This was extended to allow for the enforcement of type constraints during TFS unification. Types were encoded as attributed variables in SICStus Prolog (Swedish Institute of Computer Science, 2004). 5.1 Positional and path indexing evaluation The average and best improvements in parsing times of positional and path indexing over the same EFDbased parser without indexing are presented in Table 1. The parsers were implemented in SICStus 3.10.1 for Solaris 8, running on a Sun Server with 16 GB of memory and 4 UltraSparc v.9 processors at 1281 MHz. For MERGE, parsing times range from 10 milliseconds to 1.3 seconds. For ERG, parsing times vary between 60 milliseconds and 29.2 seconds. Positional Index Path Index average best average best MERGE 1.3% 50% 1.3% 53.7% ERG 13.9% 36.5% 12% 41.6% Table 1: Parsing time improvements of positional and path indexing over the non-indexed EFD parser. 5.2 Comparison with statistical optimizations Non-statistical optimizations can be seen as a first step toward a highly efficient parser, while statistical optimization can be applied as a second step. However, one of the purposes of non-statistical indexing is to eliminate the burden of training while offering comparable improvements in parsing times. A quick-check parser was also built and evaluated and the set-up times for the indexed parsers and the quick-check parser were compared (Table 2). Quick-check was trained on a 300-sentence training corpus, as prescribed in (Malouf et al., 2000). The training corpus included 150 sentences also used in testing. The number of paths in path indexing is different for each mother-daughter pair, ranging from 1 to 43 over the two grammars. 
Positional Path Quick Index Index Check Compiling grammar 6’30” Compiling index 2” 1’33” Training 3h28’14” Total set-up time: 6’32” 8’3” 3h34’44” Table 2: The set-up times for non-statistically indexed parsers and statistically optimized parsers for MERGE. As seen in Table 3, quick-check alone surpasses positional and path indexing for the ERG. However, it is outperformed by them on the MERGE, recording slower times than even the baseline. But the combination of quick-check and path indexing is faster than quick-check alone on both grammars. Path indexing at best provided no decrease in performance over positional indexing alone in these experiments, attesting to the difficulty of maintaining efficient index keys in an implementation. Positional Path Quick Quick + Indexing Indexing Check Path MERGE 1.3% 1.3% -4.5% -4.3% ERG 13.9% 12% 19.8% 22% Table 3: Comparison of average improvements over nonindexed parsing among all parsers. The quick-check evaluation presented in (Malouf et al., 2000) uses only sentences with a length of at most 10 words, and the authors do not report the set-up times. Quick-check has an additional advantage in the present comparison, because half of the training sentences were included in the test corpus. While quick-check improvements on the ERG confirm other reports on this method, it must be Grammar Successful Failed unifications Failure rate reduction (vs. no index) unifications EFD Positional Path Quick Positional Path Quick non-indexed Index Index Check Index Index Check MERGE 159 755 699 552 370 7.4% 26.8% 50.9% ERG 1078 215083 109080 108610 18040 49.2% 49.5% 91.6% Table 4: The number of successful and failed unifications for the non-indexed, positional indexing, path indexing, and quick-check parsers, over MERGE and ERG (collected on the slowest sentence in the corresponding test sets.) noted that quick-check appears to be parochially very well-suited to the ERG (indeed quick-check was developed alongside testing on the ERG). Although the recommended first 30 most probable failure-causing paths account for a large part of the failures recorded in training on both grammars (94% for ERG and 97% for MERGE), only 51 paths caused failures at all for MERGE during training, compared to 216 for the ERG. Further training with quick-check for determining a better vector length for MERGE did not improve its performance. This discrepancy in the number of failure-causing paths could be resulting in an overfitted quick-check vector, or, perhaps the 30 paths chosen for MERGE really are not the best 30 (quick-check uses a greedy approximation). In addition, as shown in Table 4, the improvements made by quick-check on the ERG are explained by the drastic reduction of (chart lookup) unification failures during parsing relative to the other methods. It appears that nothing short of a drastic reduction is necessary to justify the overhead of maintaining the index, which is the largest for quick-check because some of its paths must be traversed at run-time — path indexing only uses paths available at compile-time in the grammar source. Note that path indexing outperforms quick-check on MERGE in spite of its lower failure reduction rate, because of its smaller overhead. 6 Conclusions and Future Work The indexing method proposed here is suitable for several classes of unification-based grammars. The index keys are determined statically and are based on an a priori analysis of grammar rules. 
A major advantage of such indexing methods is the elimination of the lengthy training processes needed by statistical methods. Our experimental evaluation demonstrates that indexing by static analysis is a promising alternative to optimizing parsing with TFSGs, although the time consumed by on-line maintenance of the index is a significant concern — echoes of an observation that has been made in applications of term indexing to databases and programming languages (Graf, 1996). Further work on efficient implementations and data structures is therefore required. Indexing by static analysis of grammar rules combined with statistical methods also can provide a higher aggregate benefit. The current static analysis of grammar rules used as a basis for indexing does not consider the effect of the universally quantified constraints that typically augment the signature and grammar rules. Future work will investigate this extension as well. References B. Carpenter and G. Penn. 1996. Compiling typed attribute-value logic grammars. In H. Bunt and M. Tomita, editors, Recent Advances in Parsing Technologies, pages 145–168. Kluwer. B. Carpenter. 1992. The Logic of Typed Feature Structures. Cambridge University Press. P. Cousot and R. Cousot. 1992. Abstract interpretation and application to logic programs. Journal of Logic Programming, 13(2–3). R. Elmasri and S. Navathe. 2000. Fundamentals of database systems. Addison-Wesley. D. Flickinger. 1999. The English Resource Grammar. http://lingo.stanford.edu/erg.html. P. Graf. 1996. Term Indexing. Springer. B. Kiefer, H.U. Krieger, J. Carroll, and R. Malouf. 1999. A bag of useful techniques for efficient and robust parsing. In Proceedings of the 37th Annual Meeting of the ACL. R. Malouf, J. Carrol, and A. Copestake. 2000. Efficient feature structure operations without compilation. Natural Language Engineering, 6(1). G. Penn and C. Munteanu. 2003. A tabulationbased parsing method that reduces copying. In Proceedings of the 41st Annual Meeting of the ACL, Sapporo, Japan. G. Penn. 1999a. An optimised Prolog encoding of typed feature structures. Technical Report 138, SFB 340, T¨ubingen. G. Penn. 1999b. Optimising don’t-care nondeterminism with statistical information. Technical Report 140, SFB 340, T¨ubingen. C. Pollard and I. Sag. 1994. Head-driven Phrase Structure Grammar. The University of Chicago Press. I.V. Ramakrishnan, R. Sekar, and A. Voronkov. 2001. Term indexing. In Handbook of Automated Reasoning, volume II, chapter 26. Elsevier Science. Swedish Institute of Computer Science. 2004. SICStus Prolog 3.11.0. http://www.sics.se/sicstus.
Head-Driven Parsing for Word Lattices Christopher Collins Department of Computer Science University of Toronto Toronto, ON, Canada [email protected] Bob Carpenter Alias I, Inc. Brooklyn, NY, USA [email protected] Gerald Penn Department of Computer Science University of Toronto Toronto, ON, Canada [email protected] Abstract We present the first application of the head-driven statistical parsing model of Collins (1999) as a simultaneous language model and parser for largevocabulary speech recognition. The model is adapted to an online left to right chart-parser for word lattices, integrating acoustic, n-gram, and parser probabilities. The parser uses structural and lexical dependencies not considered by ngram models, conditioning recognition on more linguistically-grounded relationships. Experiments on the Wall Street Journal treebank and lattice corpora show word error rates competitive with the standard n-gram language model while extracting additional structural information useful for speech understanding. 1 Introduction The question of how to integrate high-level knowledge representations of language with automatic speech recognition (ASR) is becoming more important as (1) speech recognition technology matures, (2) the rate of improvement of recognition accuracy decreases, and (3) the need for additional information (beyond simple transcriptions) becomes evident. Most of the currently best ASR systems use an n-gram language model of the type pioneered by Bahl et al. (1983). Recently, research has begun to show progress towards application of new and better models of spoken language (Hall and Johnson, 2003; Roark, 2001; Chelba and Jelinek, 2000). Our goal is integration of head-driven lexicalized parsing with acoustic and n-gram models for speech recognition, extracting high-level structure from speech, while simultaneously selecting the best path in a word lattice. Parse trees generated by this process will be useful for automated speech understanding, such as in higher semantic parsing (Ng and Zelle, 1997). Collins (1999) presents three lexicalized models which consider long-distance dependencies within a sentence. Grammar productions are conditioned on headwords. The conditioning context is thus more focused than that of a large n-gram covering the same span, so the sparse data problems arising from the sheer size of the parameter space are less pressing. However, sparse data problems arising from the limited availability of annotated training data become a problem. We test the head-driven statistical lattice parser with word lattices from the NIST HUB-1 corpus, which has been used by others in related work (Hall and Johnson, 2003; Roark, 2001; Chelba and Jelinek, 2000). Parse accuracy and word error rates are reported. We present an analysis of the effects of pruning and heuristic search on efficiency and accuracy and note several simplifying assumptions common to other reported experiments in this area, which present challenges for scaling up to realworld applications. This work shows the importance of careful algorithm and data structure design and choice of dynamic programming constraints to the efficiency and accuracy of a head-driven probabilistic parser for speech. We find that the parsing model of Collins (1999) can be successfully adapted as a language model for speech recognition. In the following section, we present a review of recent works in high-level language modelling for speech recognition. We describe the word lattice parser developed in this work in Section 3. 
Section 4 is a description of current evaluation metrics, and suggestions for new metrics. Experiments on strings and word lattices are reported in Section 5, and conclusions and opportunities for future work are outlined in Section 6.

2 Previous Work

The largest improvements in word error rate (WER) have been seen with n-best list rescoring. The best n hypotheses of a simple speech recognizer are processed by a more sophisticated language model and re-ranked. This method is algorithmically simpler than parsing lattices, as one can use a model developed for strings, which need not operate strictly left to right. However, we confirm the observation of (Ravishankar, 1997; Hall and Johnson, 2003) that parsing word lattices saves computation time by only parsing common substrings once. Chelba (2000) reports WER reduction by rescoring word lattices with scores of a structured language model (Chelba and Jelinek, 2000), interpolated with trigram scores. Word predictions of the structured language model are conditioned on the two previous phrasal heads not yet contained in a bigger constituent. This is a computationally intensive process, as the dependencies considered can be of arbitrarily long distances. All possible sentence prefixes are considered at each extension step. Roark (2001) reports on the use of a lexicalized probabilistic top-down parser for word lattices, evaluated both on parse accuracy and WER. Our work is different from Roark (2001) in that we use a bottom-up parsing algorithm with dynamic programming based on the parsing model II of Collins (1999). Bottom-up chart parsing, through various forms of extensions to the CKY algorithm, has been applied to word lattices for speech recognition (Hall and Johnson, 2003; Chappelier and Rajman, 1998; Chelba and Jelinek, 2000). Full acoustic and n-best lattices filtered by trigram scores have been parsed. Hall and Johnson (2003) use a best-first probabilistic context free grammar (PCFG) to parse the input lattice, pruning to a set of local trees (candidate partial parse trees), which are then passed to a version of the parser of Charniak (2001) for more refined parsing. Unlike (Roark, 2001; Chelba, 2000), Hall and Johnson (2003) achieve improvement in WER over the trigram model without interpolating its lattice parser probabilities directly with trigram probabilities.

3 Word Lattice Parser

Parsing models based on headword dependency relationships have been reported, such as the structured language model of Chelba and Jelinek (2000). These models use much less conditioning information than the parsing models of Collins (1999), and do not provide Penn Treebank format parse trees as output. In this section we outline the adaptation of the Collins (1999) parsing model to word lattices. The intended action of the parser is illustrated in Figure 1, which shows parse trees built directly upon a word lattice.

Figure 1: Example of a partially-parsed word lattice. Different paths through the lattice are simultaneously parsed. The example shows two final parses, one of low probability (S*) and one of high probability (S).

3.1 Parameterization

The parameterization of model II of Collins (1999) is used in our word lattice parser. Parameters are
maximum likelihood estimates of conditional probabilities — the probability of some event of interest (e.g., a left-modifier attachment) given a context (e.g., parent non-terminal, distance, headword). One notable difference between the word lattice parser and the original implementation of Collins (1999) is the handling of part-of-speech (POS) tagging of unknown words (words seen fewer than 5 times in training). The conditioning context of the parsing model parameters includes POS tagging. Collins (1999) falls back to the POS tagging of Ratnaparkhi (1996) for words seen fewer than 5 times in the training corpus. As the tagger of Ratnaparkhi (1996) cannot tag a word lattice, we cannot back off to this tagging. We rely on the tag assigned by the parsing model in all cases. Edges created by the bottom-up parsing are assigned a score which is the product of the inside and outside probabilities of the Collins (1999) model. 3.2 Parsing Algorithm The algorithm is a variation of probabilistic online, bottom-up, left-to-right Cocke-KasamiYounger parsing similar to Chappelier and Rajman (1998). Our parser produces trees (bottom-up) in a rightbranching manner, using unary extension and binary adjunction. Starting with a proposed headword, left modifiers are added first using right-branching, then right modifiers using left-branching. Word lattice edges are iteratively added to the agenda. Complete closure is carried out, and the next word edge is added to the agenda. This process is repeated until all word edges are read from the lattice, and at least one complete parse is found. Edges are each assigned a score, used to rank parse candidates. For parsing of strings, the score for a chart edge is the product of the scores of any child edges and the score for the creation of the new edge, as given by the model parameters. This score, defined solely by the parsing model, will be referred to as the parser score. The total score for chart edges for the lattice parsing task is a combination of the parser score, an acoustic model score, and a trigram model score. Scaling factors follow those of (Chelba and Jelinek, 2000; Roark, 2001). 3.3 Smoothing and Pruning The parameter estimation techniques (smoothing and back-off) of Collins (1999) are reimplemented. Additional techniques are required to prune the search space of possible parses, due to the complexity of the parsing algorithm and the size of the word lattices. The main technique we employ is a variation of the beam search of Collins (1999) to restrict the chart size by excluding low probability edges. The total score (combined acoustic and language model scores) of candidate edges are compared against edge with the same span and category. Proposed edges with score outside the beam are not added to the chart. The drawback to this process is that we can no longer guarantee that a model-optimal solution will be found. In practice, these heuristics have a negative effect on parse accuracy, but the amount of pruning can be tuned to balance relative time and space savings against precision and recall degradation (Collins, 1999). Collins (1999) uses a fixed size beam (10 000). We experiment with several variable beam (ˆb) sizes, where the beam is some function of a base beam (b) and the edge width (the number of terminals dominated by an edge). The base beam starts at a low beam size and increases iteratively by a specified increment if no parse is found. This allows parsing to operate quickly (with a minimal number of edges added to the chart). 
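As a rough sketch of this pruning regime — a width-sensitive beam test against the best same-span, same-category edge, plus iterative widening of the base beam when no parse is found — the fragment below shows one way it can be organized. The particular beam-growth function and the use of log scores here are our own illustrative assumptions, not the exact settings of the implementation.

```python
import math

def variable_beam(base_beam, edge_width):
    """A width-sensitive beam: wider edges get a wider beam.  (The particular
    growth function is an illustrative assumption.)"""
    return base_beam * math.log((edge_width + 2) ** 2)

def keep_edge(candidate_logscore, best_logscore, base_beam, edge_width):
    """Admit a candidate only if it scores within the variable beam of the best
    edge already in the chart with the same span and category."""
    return candidate_logscore >= best_logscore - variable_beam(base_beam, edge_width)

def parse_with_growing_beam(parse_once, base_beam, increment, max_tries):
    """If no complete parse is found, widen the base beam and reparse."""
    for i in range(max_tries):
        result = parse_once(base_beam + i * increment)
        if result is not None:
            return result
    return None
```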
However, if many iterations are required to obtain a parse, the utility of starting with a low beam and iterating becomes questionable (Goodman, 1997). The base beam is limited to control the increase in the chart size. The selection of the base beam, beam increment, and variable beam function is governed by the familiar speed/accuracy trade-off.¹ The variable beam function found to allow fast convergence with minimal loss of accuracy is:

b̂ = b · log((w + 2)²)   (1)

¹Details of the optimization can be found in Collins (2004).

Charniak et al. (1998) introduce overparsing as a technique to improve parse accuracy by continuing parsing after the first complete parse tree is found. The technique is employed by Hall and Johnson (2003) to ensure that early stages of parsing do not strongly bias later stages. We adapt this idea to a single stage process. Due to the restrictions of beam search and thresholds, the first parse found by the model may not be the model optimal parse (i.e., we cannot guarantee best-first search). We therefore employ a form of overparsing — once a complete parse tree is found, we further extend the base beam by the beam increment and parse again. We continue this process as long as extending the beam results in an improved best parse score.

4 Expanding the Measures of Success

Given the task of simply generating a transcription of speech, WER is a useful and direct way to measure language model quality for ASR. WER is the count of incorrect words in hypothesis Ŵ per word in the true string W. For measurement, we must assume prior knowledge of W and the best alignment of the reference and hypothesis strings.² Errors are categorized as insertions, deletions, or substitutions.

Word Error Rate = 100 × (Insertions + Substitutions + Deletions) / (Total Words in Correct Transcript)   (2)

²SCLITE (http://www.nist.gov/speech/tools/) by NIST is the most commonly used alignment tool.

It is important to note that most models — Mangu et al. (2000) is an innovative exception — minimize sentence error. Sentence error rate is the percentage of sentences for which the proposed utterance has at least one error. Models (such as ours) which optimize prediction of test sentences Wt, generated by the source, minimize the sentence error. Thus even though WER is useful practically, it is formally not the appropriate measure for the commonly used language models. Unfortunately, as a practical measure, sentence error rate is not as useful — it is not as fine-grained as WER. Perplexity is another measure of language model quality, measurable independent of ASR performance (Jelinek, 1997). Perplexity is related to the entropy of the source model which the language model attempts to estimate. These measures, while informative, do not capture success of extraction of high-level information from speech. Task-specific measures should be used in tandem with extensional measures such as perplexity and WER. Roark (2002), when reviewing parsing for speech recognition, discusses a modelling trade-off between producing parse trees and producing strings. Most models are evaluated either with measures of success for parsing or for word recognition, but rarely both. Parsing models are difficult to implement as word-predictive language models due to their complexity. Generative random sampling is equally challenging, so the parsing correlate of perplexity is not easy to measure. Traditional (i.e., n-gram) language models do not produce parse trees, so parsing metrics are not useful.
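For concreteness, Equation 2 amounts to the following calculation once an alignment has produced the three error counts; the alignment itself (e.g., as computed by SCLITE) is assumed given, and the function is simply a restatement of the formula.

```python
def word_error_rate(insertions, substitutions, deletions, ref_words):
    """Equation 2: WER as a percentage of the words in the reference transcript."""
    return 100.0 * (insertions + substitutions + deletions) / ref_words

# e.g. one substitution and one deletion against a 20-word reference:
print(word_error_rate(insertions=0, substitutions=1, deletions=1, ref_words=20))  # 10.0
```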
However, Roark (2001) argues for using parsing metrics, such as labelled precision and recall,3 along with WER, for parsing applications in ASR. Weighted WER (Weber et al., 1997) is also a useful measurement, as the most often ill-recognized words are short, closed-class words, which are not as important to speech understanding as phrasal head words. We will adopt the testing strategy of Roark (2001), but find that measurement of parse accuracy and WER on the same data set is not possible given currently available corpora. Use of weighted WER and development of methods to simultaneously measure WER and parse accuracy remain a topic for future research. 5 Experiments The word lattice parser was evaluated with several metrics — WER, labelled precision and recall, crossing brackets, and time and space resource usage. Following Roark (2001), we conducted evaluations using two experimental sets — strings and word lattices. We optimized settings (thresholds, variable beam function, base beam value) for parsing using development test data consisting of strings for which we have annotated parse trees. The parsing accuracy for parsing word lattices was not directly evaluated as we did not have annotated parse trees for comparison. Furthermore, standard parsing measures such as labelled precision and recall are not directly applicable in cases where the number of words differs between the proposed parse tree and the gold standard. Results show scores for parsing strings which are lower than the original implementation of Collins (1999). The WER scores for this, the first application of the Collins (1999) model to parsing word lattices, are comparable to other recent work in syntactic language modelling, and better than a simple trigram model trained on the same data. 3Parse trees are commonly scored with the PARSEVAL set of metrics (Black et al., 1991). 5.1 Parsing Strings The lattice parser can parse strings by creating a single-path lattice from the input (all word transitions are assigned an input score of 1.0). The lattice parser was trained on sections 02-21 of the Wall Street Journal portion of the Penn Treebank (Taylor et al., 2003) Development testing was carried out on section 23 in order to select model thresholds and variable beam functions. Final testing was carried out on section 00, and the PARSEVAL measures (Black et al., 1991) were used to evaluate the performance. The scores for our experiments are lower than the scores of the original implementation of model II (Collins, 1999). This difference is likely due in part to differences in POS tagging. Tag accuracy for our model was 93.2%, whereas for the original implementation of Collins (1999), model II achieved tag accuracy of 96.75%. In addition to different tagging strategies for unknown words, mentioned above, we restrict the tag-set considered by the parser for each word to those suggested by a simple first-stage tagger.4 By reducing the tag-set considered by the parsing model, we reduce the search space and increase the speed. However, the simple tagger used to narrow the search also introduces tagging error. The utility of the overparsing extension can be seen in Table 1. Each of the PARSEVAL measures improves when overparsing is used. 5.2 Parsing Lattices The success of the parsing model as a language model for speech recognition was measured both by parsing accuracy (parsing strings with annotated reference parses), and by WER. 
WER is measured by parsing word lattices and comparing the sentence yield of the highest scoring parse tree to the reference transcription (using NIST SCLITE for alignment and error calculation).⁵ We assume the parsing performance achieved by parsing strings carries over approximately to parsing word lattices. Two different corpora were used in training the parsing model on word lattices:

- sections 02-21 of the WSJ Penn Treebank (the same sections as used to train the model for parsing strings) [1 million words]
- section “1987” of the BLLIP corpus (Charniak et al., 1999) [20 million words]

⁴The original implementation (Collins, 1999) of this model considered all tags for all words.
⁵To properly model language using a parser, one should sum parse tree scores for each sentence hypothesis, and choose the sentence with the best sum of parse tree scores. We choose the yield of the parse tree with the highest score. Summation is too computationally expensive given the model — we do not even generate all possible parse trees, but instead restrict generation using dynamic programming.

Exp.  OP  LP (%)  LR (%)  CB    0 CB (%)  ≤2 CB (%)
Ref   N   88.7    89.0    0.95  65.7      85.6
1     N   79.4    80.6    1.89  46.2      74.5
2     Y   80.8    81.4    1.70  44.3      80.4

Table 1: Results for parsing section 0 (≤ 40 words) of the WSJ Penn Treebank: OP = overparsing, LP/LR = labelled precision/recall. CB is the average number of crossing brackets per sentence. 0 CB and ≤2 CB are the percentage of sentences with 0 or ≤ 2 crossing brackets respectively. Ref is model II of (Collins, 1999).

The BLLIP corpus is a collection of Penn Treebank-style parses of the three-year (1987-1989) Wall Street Journal collection from the ACL/DCI corpus (approximately 30 million words).⁶ The parses were automatically produced by the parser of Charniak (2001). As the memory usage of our model corresponds directly to the amount of training data used, we were restricted by available memory to use only one section (1987) of the total corpus. Using the BLLIP corpus, we expected to get lower quality parse results due to the higher parse error of the corpus, when compared to the manually annotated Penn Treebank. The WER was expected to improve, as the BLLIP corpus has much greater lexical coverage. The training corpora were modified using a utility by Brian Roark to convert newspaper text to speech-like text, before being used as training input to the model. Specifically, all numbers were converted to words (60 → sixty) and all punctuation was removed. We tested the performance of our parser on the word lattices from the NIST HUB-1 evaluation task of 1993. The lattices are derived from a set of utterances produced from Wall Street Journal text — the same domain as the Penn Treebank and the BLLIP training data. The word lattices were previously pruned to the 50-best paths by Brian Roark, using the A* decoding of Chelba (2000). The word lattices of the HUB-1 corpus are directed acyclic graphs in the HTK Standard Lattice Format (SLF), consisting of a set of vertices and a set of edges. Vertices, or nodes, are defined by a time-stamp and labelled with a word. The set of labelled, weighted edges represents the word utterances. A word w is hypothesized over edge e if e ends at a vertex v labelled w. Edges are associated with transition probabilities and are labelled with an acoustic score and a language model score.

⁶The sentences of the HUB-1 corpus are a subset of those in BLLIP. We removed all HUB-1 sentences from the BLLIP corpus used in training.

The lattices of the HUB-
1 corpus are annotated with trigram scores trained using a 20 thousand word vocabulary and 40 million word training sample. The word lattices have a unique start and end point, and each complete path through a lattice represents an utterance hypothesis. As the parser operates in a left-to-right manner, and closure is performed at each node, the input lattice edges must be processed in topological order. Input lattices were sorted before parsing. This corpus has been used in other work on syntactic language modelling (Chelba, 2000; Roark, 2001; Hall and Johnson, 2003). The word lattices of the HUB-1 corpus are annotated with an acoustic score, a, and a trigram probability, lm, for each edge. The input edge score stored in the word lattice is:

log(Pinput) = α log(a) + β log(lm)   (3)

where a is the acoustic score and lm is the trigram score stored in the lattice. The total edge weight in the parser is a scaled combination of these scores with the parser score derived with the model parameters:

log(w) = α log(a) + β log(lm) + s   (4)

where w is the edge weight, and s is the score assigned by the parameters of the parsing model. We optimized performance on a development subset of test data, yielding α = 1/16 and β = 1. There is an important difference in the tokenization of the HUB-1 corpus and the Penn Treebank format. Clitics (i.e., he’s, wasn’t) are split from their hosts in the Penn Treebank (i.e., he ’s, was n’t), but not in the word lattices. The Treebank format cannot easily be converted into the lattice format, as often the two parts fall into different parse constituents. We used the lattices modified by Chelba (2000) in dealing with this problem — contracted words are split into two parts and the edge scores redistributed. We followed Hall and Johnson (2003) and used the Treebank tokenization for measuring the WER. The model was tested with and without overparsing. We see from Table 2 that overparsing has little effect on the WER. The word sequence most easily parsed by the model (i.e., generating the first complete parse tree) is likely also the word sequence found by overparsing. Although overparsing may have little effect on WER, we know from the experiments on strings that overparsing increases parse accuracy. This introduces a speed-accuracy trade-off: depending on what type of output is required from the model (parse trees or strings), the additional time and resource requirements of overparsing may or may not be warranted.
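A minimal sketch of the edge-weight combination of Equations 3 and 4, in the log domain; the variable names are ours, and the example values are illustrative only.

```python
import math

def lattice_edge_score(a, lm, alpha, beta):
    """Equation 3: the input edge score stored in the word lattice."""
    return alpha * math.log(a) + beta * math.log(lm)

def total_edge_weight(a, lm, parser_score, alpha, beta):
    """Equation 4: lattice score plus the parser's own log score s."""
    return lattice_edge_score(a, lm, alpha, beta) + parser_score

# with the scaling reported above (alpha = 1/16, beta = 1):
w = total_edge_weight(a=1e-40, lm=1e-3, parser_score=-12.5, alpha=1.0 / 16, beta=1.0)
```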
5.4 Time and Space Requirements The algorithms and data structures were designed to minimize parameter lookup times and memory usage by the chart and parameter set (Collins, 2004). To increase parameter lookup speed, all parameter values are calculated for all levels of back-off at training time. By contrast, (Collins, 1999) calculates parameter values by looking up event counts at run-time. The implementation was then optimized using a memory and processor profiler and debugger. Parsing the complete set of HUB-1 lattices (213 sentences, a total of 3,446 words) on average takes approximately 8 hours, on an Intel Pentium 4 (1.6GHz) Linux system, using 1GB memory. Memory requirements for parsing lattices is vastly greater than equivalent parsing of a single sentence, as chart size increases with the number of divergent paths in a lattice. Additional analysis of resource issues can be found in Collins (2004). 5.5 Comparison to Previous Work The results of our best experiments for lattice- and list-parsing are compared with previous results in Table 3. The oracle WER7 for the HUB-1 corpus is 3.4%. For the pruned 50-best lattices, the oracle WER is 7.8%. We see that by pruning the lattices using the trigram model, we already introduce additional error. Because of the memory usage and time required for parsing word lattices, we were unable to test our model on the original “acoustic” HUB-1 lattices, and are thus limited by the oracle WER of the 50-best lattices, and the bias introduced by pruning using a trigram model. Where available, we also present comparative scores of the sentence error rate (SER) — the percentage of sentences in the test set for which there was at least one recognition error. Note that due to the small (213 samples) size of the HUB-1 corpus, the differences seen in SER may not be significant. We see an improvement in WER for our parsing model alone (α  β  0) trained on 1 million words of the Penn Treebank compared to a trigram model trained on the same data — the “Treebank Trigram” noted in Table 3. This indicates that the larger context considered by our model allows for performance improvements over the trigram model alone. Further improvement is seen with the combination of acoustic, parsing, and trigram scores (α  1  16 β  1). However, the combination of the parsing model (trained on 1M words) with the lattice trigram (trained on 40M words) resulted in a higher WER than the lattice trigram alone. This indicates that our 1M word training set is not sufficient to permit effective combination with the lattice trigram. When the training of the head-driven parsing model was extended to the BLLIP 1987 corpus (20M words), the combination of models (α  1  16 β  1) achieved additional improvement in WER over the lattice trigram alone. The current best-performing models, in terms of WER, for the HUB-1 corpus, are the models of Roark (2001), Charniak (2001) (applied to n-best lists by Hall and Johnson (2003)), and the SLM of Chelba and Jelinek (2000) (applied to n-best lists by Xu et al. (2002)). However, n-best list parsing, as seen in our evaluation, requires repeated analysis of common subsequences, a less efficient process than directly parsing the word lattice. The reported results of (Roark, 2001) and (Chelba, 2000) are for parsing models interpolated with the lattice trigram probabilities. Hall and John7The WER of the hypothesis which best matches the true utterance, i.e., the lowest WER possible given the hypotheses set. 
Training Size Lattice/List OP WER Number of Edges S D I T (per word) 1M Lattice N 10.4 3.3 1.5 15.2 1788 1M List N 10.4 3.2 1.4 15.0 10211 1M Lattice Y 10.3 3.2 1.4 14.9 2855 1M List Y 10.2 3.2 1.4 14.8 16821 20M Lattice N 9.0 3.1 1.0 13.1 1735 20M List N 9.0 3.1 1.0 13.1 9999 20M Lattice Y 9.0 3.1 1.0 13.1 2801 20M List Y 9.0 3.3 0.9 13.3 16030 Table 2: Results for parsing HUB-1 n-best word lattices and lists: OP = overparsing, S = substutitions (%), D = deletions (%), I = insertions (%), T = total WER (%). Variable beam function: ˆb  b  log  w  2  2  . Training corpora: 1M = Penn Treebank sections 02-21; 20M = BLLIP section 1987. Model n-best List/Lattice Training Size WER (%) SER (%) Oracle (50-best lattice) Lattice 7.8 Charniak (2001) List 40M 11.9 Xu (2002) List 20M 12.3 Roark (2001) (with EM) List 2M 12.7 Hall (2003) Lattice 30M 13.0 Chelba (2000) Lattice 20M 13.0 Current (α  1  16 β  1) List 20M 13.1 71.0 Current (α  1  16 β  1) Lattice 20M 13.1 70.4 Roark (2001) (no EM) List 1M 13.4 Lattice Trigram Lattice 40M 13.7 69.0 Current (α  1  16 β  1) List 1M 14.8 74.3 Current (α  1  16 β  1) Lattice 1M 14.9 74.0 Current (α  β  0) Lattice 1M 16.0 75.5 Treebank Trigram Lattice 1M 16.5 79.8 No language model Lattice 16.8 84.0 Table 3: Comparison of WER for parsing HUB-1 words lattices with best results of other works. SER = sentence error rate. WER = word error rate. “Speech-like” transformations were applied to all training corpora. Xu (2002) is an implementation of the model of Chelba (2000) for n-best list parsing. Hall (2003) is a lattice-parser related to Charniak (2001). son (2003) does not use the lattice trigram scores directly. However, as in other works, the lattice trigram is used to prune the acoustic lattice to the 50 best paths. The difference in WER between our parser and those of Charniak (2001) and Roark (2001) applied to word lists may be due in part to the lower PARSEVAL scores of our system. Xu et al. (2002) report inverse correlation between labelled precision/recall and WER. We achieve 73.2/76.5% LP/LR on section 23 of the Penn Treebank, compared to 82.9/82.4% LP/LR of Roark (2001) and 90.1/90.1% LP/LR of Charniak (2000). Another contributing factor to the accuracy of Charniak (2001) is the size of the training set — 20M words larger than that used in this work. The low WER of Roark (2001), a top-down probabilistic parsing model, was achieved by training the model on 1 million words of the Penn Treebank, then performing a single pass of Expectation Maximization (EM) on a further 1.2 million words. 6 Conclusions In this work we present an adaptation of the parsing model of Collins (1999) for application to ASR. The system was evaluated over two sets of data: strings and word lattices. As PARSEVAL measures are not applicable to word lattices, we measured the parsing accuracy using string input. The resulting scores were lower than that original implementation of the model. Despite this, the model was successful as a language model for speech recognition, as measured by WER and ability to extract high-level information. Here, the system performs better than a simple n-gram model trained on the same data, while simultaneously providing syntactic information in the form of parse trees. WER scores are comparable to related works in this area. The large size of the parameter set of this parsing model necessarily restricts the size of training data that may be used. 
In addition, the resource requirements currently present a challenge for scaling up from the relatively sparse word lattices of the NIST HUB-1 corpus (created in a lab setting by professional readers) to lattices created with spontaneous speech in non-ideal conditions. An investigation into the relevant importance of each parameter for the speech recognition task may allow a reduction in the size of the parameter space, with minimal loss of recognition accuracy. A speedup may be achieved, and additional training data could be used. Tuning of parameters using EM has lead to improved WER for other models. We encourage investigation of this technique for lexicalized head-driven lattice parsing. Acknowledgements This research was funded in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Advice on training and test data was provided by Keith Hall of Brown University. References L. R. Bahl, F. Jelinek, and R. L. Mercer. 1983. A maximum likelihood approach to continuous speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5:179–190. E. Black, S. Abney, D. Flickenger, C. Gdaniec, R. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. A procedure for quantitatively comparing the syntactic coverage of English grammars. In Proceedings of Fourth DARPA Speech and Natural Language Workshop, pages 306– 311. J.-C. Chappelier and M. Rajman. 1998. A practical bottom-up algorithm for on-line parsing with stochastic context-free grammars. Technical Report 98-284, Swiss Federal Institute of Technology, July. Eugene Charniak, Sharon Goldwater, and Mark Johnson. 1998. Edge-Based Best-First Chart Parsing. In 6th Annual Workshop for Very Large Corpora, pages 127–133. Eugene Charniak, Don Blaheta, Niyu Ge, Keith Hall, John Hale, and Mark Johnson. 1999. BLLIP 1987-89 WSJ Corpus Release 1. Linguistic Data Consortium. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 2000 Conference of the North American Chapter of the Association for Computational Linguistics, pages 132–129, New Brunswick, U.S.A. Eugene Charniak. 2001. Immediate-head parsing for language models. In Proceedings of the 39th Annual Meeting of the ACL. Ciprian Chelba and Frederick Jelinek. 2000. Structured language modeling. Computer Speech and Language, 14:283–332. Ciprian Chelba. 2000. Exploiting Syntactic Structure for Natural Language Modeling. Ph.D. thesis, Johns Hopkins University. Christopher Collins. 2004. Head-Driven Probabilistic Parsing for Word Lattices. M.Sc. thesis, University of Toronto. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Joshua Goodman. 1997. Global thresholding and multiple-pass parsing. In Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing. Keith Hall and Mark Johnson. 2003. Language modeling using efficient best-first bottom-up parsing. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop. Frederick Jelinek. 1997. Information Extraction From Speech And Text. MIT Press. Lidia Mangu, Eric Brill, and Andreas Stolcke. 2000. Finding consensus in speech recognition: Word error minimization and other applications of confusion networks. Computer Speech and Language, 14(4):373– 400. Hwee Tou Ng and John Zelle. 1997. Corpus-based approaches to semantic interpretation in natural language processing. 
AI Magazine, 18:45–54. A. Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Conference on Empirical Methods in Natural Language Processing, May. Mosur K. Ravishankar. 1997. Some results on search complexity vs accuracy. In DARPA Speech Recognition Workshop, pages 104–107, February. Brian Roark. 2001. Robust Probabilistic Predictive Syntactic Processing: Motivations, Models, and Applications. Ph.D. thesis, Brown University. Brian Roark. 2002. Markov parsing: Lattice rescoring with a statistical parser. In Proceedings of the 40th Annual Meeting of the ACL, pages 287–294. Ann Taylor, Mitchell Marcus, and Beatrice Santorini, 2003. The Penn TreeBank: An Overview, chapter 1. Kluwer, Dordrecht, The Netherlands. Hans Weber, Jörg Spilker, and Günther Görz. 1997. Parsing n best trees from a word lattice. Künstliche Intelligenz, pages 279–288. Peng Xu, Ciprian Chelba, and Frederick Jelinek. 2002. A study on richer syntactic dependencies in structured language modeling. In Proceedings of the 40th Annual Meeting of the ACL, pages 191–198.
Balancing Clarity and Efficiency in Typed Feature Logic through Delaying Gerald Penn University of Toronto 10 King’s College Rd. Toronto M5S 3G4 Canada [email protected] Abstract The purpose of this paper is to re-examine the balance between clarity and efficiency in HPSG design, with particular reference to the design decisions made in the English Resource Grammar (LinGO, 1999, ERG). It is argued that a simple generalization of the conventional delay statements used in logic programming is sufficient to restore much of the functionality and concomitant benefit that the ERG elected to forego, with an acceptable although still perceptible computational cost. 1 Motivation By convention, current HPSGs consist, at the very least, of a deductive backbone of extended phrase structure rules, in which each category is a description of a typed feature structure (TFS), augmented with constraints that enforce the principles of grammar. These principles typically take the form of statements, “for all TFSs, ψ holds,” where ψ is usually an implication. Historically, HPSG used a much richer set of formal descriptive devices, however, mostly on analogy to developments in the use of types and description logics in programming language theory (A¨ıt-Ka´ci, 1984), which had served as the impetus for HPSG’s invention (Pollard, 1998). This included logic-programming-style relations (H¨ohfeld and Smolka, 1988), a powerful description language in which expressions could denote sets of TFSs through the use of an explicit disjunction operator, and the full expressive power of implications, in which antecedents of the abovementioned ψ principles could be arbitrarily complex. Early HPSG-based natural language processing systems faithfully supported large chunks of this richer functionality, in spite of their inability to handle it efficiently — so much so that when the designers of the ERG set out to select formal descriptive devices for their implementation with the aim of “balancing clarity and efficiency,” (Flickinger, 2000), they chose to include none of these amenities. The ERG uses only phrase-structure rules and type-antecedent constraints, pushing all would-be description-level disjunctions into its type system or rules. In one respect, this choice was successful, because it did at least achieve a respectable level of efficiency. But the ERG’s selection of functionality has acquired an almost liturgical status within the HPSG community in the intervening seven years. Keeping this particular faith, moreover, comes at a considerable cost in clarity, as will be argued below. This paper identifies what it is precisely about this extra functionality that we miss (modularity, Section 2), determines what it would take at a minimum computationally to get it back (delaying, Section 3), and attempts to measure exactly how much that minimal computational overhead would cost (about 4 µs per delay, Section 4). This study has not been undertaken before; the ERG designers’ decision was based on largely anecdotal accounts of performance relative to then-current implementations that had not been designed with the intention of minimizing this extra cost (indeed, the ERG baseline had not yet been devised). 2 Modularity: the cost in clarity Semantic types and inheritance serve to organize the constraints and overall structure of an HPSG grammar. 
This is certainly a familiar, albeit vague justification from programming languages research, but the comparison between HPSG and modern programming languages essentially ends with this statement. Programming languages with inclusional polymorphism (subtyping) invariably provide functions or relations and allow these to be reified as methods within user-defined subclasses/subtypes. In HPSG, however, values of features must necessarily be TFSs themselves, and the only method (implicitly) provided by the type signature to act on these values is unification. In the absence of other methods and in the absence of an explicit disjunction operator, the type signature itself has the responsibility of not only declaring definitional subfin-wh-fill-rel-clinf-wh-fill-rel-cl red-rel-cl simp-inf-rel-cl fin-hd-fill-ph inf-hd-fill-ph wh-rel-cl non-wh-rel-cl hd-fill-ph hd-comp-ph inter-cl rel-cl hd-adj-ph hd-nexus-ph clause non-hd-ph hd-ph headed phrase phrase Figure 1: Relative clauses in the ERG (partial). class relationships, but expressing all other nondefinitional disjunctions in the grammar (as subtyping relationships). It must also encode the necessary accoutrements for implementing all other necessary means of combination as unification, such as difference lists for appending lists, or the so-called qeq constraints of Minimal Recursion Semantics (Copestake et al., 2003) to encode semantic embedding constraints. Unification, furthermore, is an inherently nonmodular, global operation because it can only be defined relative to the structure of the entire partial order of types (as a least upper bound). Of course, some partial orders are more modularizable than others, but legislating the global form that type signatures must take on is not an easy property to enforce without more local guidance. The conventional wisdom in programming languages research is indeed that types are responsible for mediating the communication between modules. A simple type system such as HPSG’s can thus only mediate very simple communication. Modern programming languages incorporate some degree of parametric polymorphism, in addition to subtyping, in order to accommodate more complex communication. To date, HPSG’s use of parametric types has been rather limited, although there have been some recent attempts to apply them to the ERG (Penn and Hoetmer, 2003). Without this, one obtains type signatures such as Figure 1 (a portion of the ERG’s for relative clauses), in which both the semantics of the subtyping links themselves (normally, subset inclusion) and the multi-dimensionality of the empirical domain’s analysis erode into a collection of arbitrary naming conventions that are difficult to validate or modify. A more avant-garde view of typing in programming languages research, inspired by the CurryHoward isomorphism, is that types are equivalent to relations, which is to say that a relation can mediate communication between modules through its arguments, just as a parametric type can through its parameters. The fact that we witness some of these mediators as types and others as relations is simply an intensional reflection of how the grammar writer thinks of them. In classical HPSG, relations were generally used as goals in some proof resolution strategy (such as Prolog’s SLD resolution), but even this has a parallel in the world of typing. 
Using the type signature and principles of Figure 2, for exappendbase appendrec Arg1: e list Arg1:ne list Junk:append append Arg1: list Arg2: list Arg3: list ⊥ appendbase =⇒Arg2 : L ∧Arg3 : L. appendrec =⇒Arg1 : [H|L1] ∧ Arg2 : L2 ∧Arg3 : [H|L3] ∧ Junk : (append ∧A1 : L1 ∧ A2 : L2 ∧Arg3 : L3). Figure 2: Implementing SLD resolution over the append relation as sort resolution. ample, we can perform proof resolution by attempting to sort resolve every TFS to a maximally specific type. This is actually consistent with HPSG’s use of feature logic, although most TFS-based NLP systems do not sort resolve because type inference under sort resolution is NP-complete (Penn, 2001). Phrase structure rules, on the other hand, while they can be encoded inside a logic programming relation, are more naturally viewed as algebraic generators. In this respect, they are more similar to the immediate subtyping declarations that grammar writers use to specify type signatures — both chart parsing and transitive closure are instances of allsource shortest-path problems on the same kind of algebraic structure, called a closed semi-ring. The only notion of modularity ever proven to hold of phrase structure rule systems (Wintner, 2002), furthermore, is an algebraic one. 3 Delaying: the missing link of functionality If relations are used in the absence of recursive data structures, a grammar could be specified using relations, and the relations could then be unfolded offline into relation-free descriptions. In this usage, relations are just macros, and not at all inefficient. Early HPSG implementations, however, used quite a lot of recursive structure where it did not need to be, and the structures they used, such as lists, buried important data deep inside substructures that made parsing much slower. Provided that grammar writers use more parsimonious structures, which is a good idea even in the absence of relations, there is nothing wrong with the speed of logic programming relations (Van Roy, 1990). Recursive datatypes are also prone to nontermination problems, however. This can happen when partially instantiated and potentially recursive data structures are submitted to a proof resolution procedure which explores the further instantiations of these structures too aggressively. Although this problem has received significant attention over the last fifteen years in the constraint logic programming (CLP) community, no true CLP implementation yet exists for the logic of typed feature structures (Carpenter, 1992, LTFS). Some aspects of general solution strategies, including incremental entailment simplification (A¨ıt-Kaci et al., 1992), deterministic goal expansion (Doerre, 1993), and guard statements for relations (Doerre et al., 1996) have found their way into the less restrictive sorted feature constraint systems from which LTFS descended. The CUF implementation (Doerre et al., 1996), notably, allowed for delay statements to be attached to relation definitions, which would wait until each argument was at least as specific as some variable-free, disjunction-free description before resolving. In the remainder of this section, a method is presented for reducing delays on any inequationfree description, including variables and disjunctions, to the SICStus Prolog when/2 primitive (Sections 3.4). This method takes full advantage of the restrictions inherent to LTFS (Section 3.1) to maximize run-time efficiency. 
In addition, by delaying calls to subgoals individually rather than the (universally quantified) relation definitions themselves,1 we can also use delays to postpone non-deterministic search on disjunctive descriptions (Section 3.3) and to implement complexantecedent constraints (Section 3.2). As a result, this single method restores all of the functionality we were missing. For simplicity, it will be assumed that the target language of our compiler is Prolog itself. This is inconsequential to the general proposal, although implementing logic programs in Prolog certainly involves less effort. 1Delaying relational definitions is a subcase of this functionality, which can be made more accessible through some extra syntactic sugar. 3.1 Restrictions inherent to LTFS LTFS is distinguished by its possession of appropriateness conditions that mediate the occurrence of features and types in these records. Appropriateness conditions stipulate, for every type, a finite set of features that can and must have values in TFSs of that type. This effectively forces TFSs to be finitebranching terms with named attributes. Appropriateness conditions also specify a type to which the value of an appropriate feature is restricted (a value restriction). These conditions make LTFS very convenient for linguistic purposes because the combination of typing with named attributes allows for a very terse description language that can easily make reference to a sparse amount of information in what are usually extremely large structures/records: Definition: Given a finite meet semi-lattice of types, Type, a fixed finite set of features, Feat, and a countable set of variables, Var, Φ is the least set of descriptions that contains: • v, v ∈Var, • τ, τ ∈Type, • F : φ, F ∈Feat, φ ∈Φ, • φ1 ∧φ2, φ1, φ2 ∈Φ, and • φ1 ∨φ2, φ1, φ2 ∈Φ. A nice property of this description language is that every non-disjunctive description with a nonempty denotation has a unique most general TFS in its denotation. This is called its most general satisfier. We will assume that appropriateness guarantees that there is a unique most general type, Intro(F) to which a given feature, F, is appropriate. This is called unique feature introduction. Where unique feature introduction is not assumed, it can be added automatically in O(F ·T) time, where F is the number of features and T is the number of types (Penn, 2001). Meet semi-latticehood can also be restored automatically, although this involves adding exponentially many new types in the worst case. 3.2 Complex Antecedent Constraints It will be assumed here that all complex-antecedent constraints are implicitly universally quantified, and are of the form: α =⇒(γ ∧ρ) where α, γ are descriptions from the core description language, Φ, and ρ is drawn from a definite clause language of relations, whose arguments are also descriptions from Φ. As mentioned above, the ERG uses the same form, but where α can only be a type description, τ, and ρ is the trivial goal, true. The approach taken here is to allow for arbitrary antecedents, α, but still to interpret the implications of principles using subsumption by α, i.e., for every TFS (the implicit universal quantification is still there), either the consequent holds, or the TFS is not subsumed by the most general satisfier of α. The subsumption convention dates back to the TDL (Krieger and Sch¨afer, 1994) and ALE (Carpenter and Penn, 1996) systems, and has earlier antecedents in work that applied lexical rules by subsumption (Krieger and Nerbone, 1991). 
The ConTroll constraint solver (Goetz and Meurers, 1997) attempted to handle complex antecedents, but used a classical interpretation of implication and no deductive phrase-structure backbone, which created a very large search space with severe non-termination problems. Within CLP more broadly, there is some related work on guarded constraints (Smolka, 1994) and on inferring guards automatically by residuation of implicational rules (Smolka, 1991), but implicit universal quantification of all constraints seems to be unique to linguistics. In most CLP, constraints on a class of terms or objects must be explicitly posted to a store for each member of that class. If a constraint is not posted for a particular term, then it does not apply to that term. The subsumption-based approach is sound with respect to the classical interpretation of implication for those principles where the classical interpretation really is the correct one. For completeness, some additional resolution method (in the form of a logic program with relations) must be used. As is normally the case in CLP, deductive search is used alongside constraint resolution. Under such assumptions, our principles can be converted to: trigger(α) =⇒v ∧whenfs((v = α), ((v = γ)∧ρ)) Thus, with an implementation of type-antecedent constraints and an implementation of whenfs/2 (Section 3.3), which delays the goal in its second argument until v is subsumed by (one of) the most general satisfier(s) of description α, all that remains is a method for finding the trigger, the most efficient type antecedent to use, i.e., the most general one that will not violate soundness. trigger(α) can be defined as follows: • trigger(v) = ⊥, • trigger(τ) = τ, • trigger(F : φ) = Intro(F), • trigger(φ1∧φ2) = trigger(φ1)⊔trigger(φ2), and • trigger(φ1∨φ2) = trigger(φ1)⊓trigger(φ2), where ⊔and ⊓are respectively unification and generalization in the type semi-lattice. In this and the next two subsections, we can use Figure 3 as a running example of the various stages of compilation of a typical complex-antecedent constraint, namely the Finiteness Marking Principle for German (1). This constraint is stated relative to the signature shown in Figure 4. The description to the left of the arrow in Figure 3 (1) selects TFSs whose substructure on the path SYNSEM:LOC:CAT satisfies two requirements: its HEAD value has type verb, and its MARKING value has type fin. The principle says that every TFS that satisfies that description must also have a SYNSEM: LOC: CAT: HEAD: VFORM value of type bse. To find the trigger in Figure 3 (1), we can observe that the antecedent is a feature value description (F:φ), so the trigger is Intro(SYNSEM), the unique introducer of the SYNSEM feature, which happens to be the type sign. We can then transform this constraint as above (Figure 3 (2)). The cons and goal operators in (2)–(5) are ALE syntax, used respectively to separate the type antecedent of a constraint from the description component of the consequent (in this case, just the variable, X), and to separate the description component of the consequent from its relational attachment. We know that any TFS subsumed by the original antecedent will also be subsumed by the most general TFS of type sign, because sign introduces SYNSEM. 3.3 Reducing Complex Conditionals Let us now implement our delay predicate, whenfs(V=Desc,Goal). 
Without loss of generality, it can be assumed that the first argument is actually drawn from a more general conditional language, including those of the form Vi = Desci closed under conjunction and disjunction. It can also be assumed that the variables of each Desci are distinct. Such a complex conditional can easily be converted into a normal form in which each atomic conditional contains a non-disjunctive description. Conjunction and disjunction of atomic conditionals then reduce as follows (using the Prolog convention of comma for AND and semi-colon for OR): whenfs((VD1,VD2),Goal) :whenfs(VD1,whenfs(VD2,Goal)). whenfs((VD1;VD2),Goal) :whenfs(VD1,(Trigger = 0 -> Goal ; true)), whenfs(VD2,(Trigger = 1 -> Goal ; true)). The binding of the variable Trigger is necessary to ensure that Goal is only resolved once in case the (1) synsem:loc:cat:(head:verb,marking:fin) =⇒synsem:loc:cat:head:vform:bse. (2) sign cons X goal whenfs((X=synsem:loc:cat:(head:verb,marking:fin)), (X=synsem:loc:cat:head:vform:bse)). (3) sign cons X goal whentype(sign,X,(farg(synsem,X,SynVal), whentype(synsem,SynVal,(farg(loc,SynVal,LocVal), whentype(local,LocVal,(farg(cat,LocVal,CatVal), whenfs((CatVal=(head:verb,marking:fin)), (X=synsem:loc:cat:head:vform:bse)))))))). (4) sign cons X goal (whentype(sign,X,(farg(synsem,X,SynVal), whentype(synsem,SynVal,(farg(loc,SynVal,LocVal), whentype(local,LocVal,(farg(cat,LocVal,CatVal), whentype(category,CatVal,(farg(head,CatVal,HdVal), whentype(verb,HdVal, whentype(category,CatVal,(farg(marking,CatVal,MkVal), whentype(fin,MkVal, (X=synsem:loc:cat:head:vform:bse)))))))))))))). (5) sign cons X goal (farg(synsem,X,SynVal), farg(loc,SynVal,LocVal), farg(cat,LocVal,CatVal), farg(head,CatVal,HdVal), whentype(verb,HdVal,(farg(marking,CatVal,MkVal), whentype(fin,MkVal, (X=synsem:loc:cat:head:vform:bse))))). (6) sign(e list( ),e list( ),SynVal,DelayVar) (7) whentype(Type,FS,Goal) :functor(FS,CurrentType,Arity), (sub type(Type,CurrentType) -> call(Goal) ; arg(Arity,FS,DelayVar), whentype(Type,DelayVar,Goal)). Figure 3: Reduction stages for the Finiteness Marking Principle. bse ind fin inf verb noun vform marking head VFORM:vform sign QRETR:list QSTORE:list SYNSEM:synsem synsem LOC:local category HEAD:head MARKING:marking local CAT:category ⊥ Figure 4: Part of the signature underlying the constraint in Figure 3. goals for both conditionals eventually unsuspend. For atomic conditionals, we must thread two extra arguments, VsIn, and VsOut, which track which variables have been seen so far. Delaying on atomic type conditionals is implemented by a special whentype/3 primitive (Section 3.4), and feature descriptions reduce using unique feature introduction: whenfs(V=T,Goal,Vs,Vs) :type(T) -> whentype(T,V,Goal). whenfs(V=(F:Desc),Goal,VsIn,VsOut):unique introducer(F,Intro), whentype(Intro,V, (farg(F,V,FVal), whenfs(FVal=Desc,Goal,VsIn, VsOut))). farg(F,V,FVal) binds FVal to the argument position of V that corresponds to the feature F once V has been instantiated to a type for which F is appropriate. In the variable case, whenfs/4 simply binds the variable when it first encounters it, but subsequent occurrences of that variable create a suspension using Prolog when/2, checking for identity with the previous occurrences. This implements a primitive delay on structure sharing (Section 3.4): whenfs(V=X,Goal,VsIn,VsOut) :var(X), (select(VsIn,X,VsOut) -> % not first X - wait when(?=(V,X), ((V==X) -> call(Goal) ; true)) ; % first X - bind VsOut=VsIn,V=X,call(Goal)). 
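The trigger computation of Section 3.2, which the reductions above rely on to find a sound type antecedent, is itself just a small recursion over description syntax. The following Python sketch is purely illustrative: the description classes and the sig helper (with intro, unify and generalize over the type semi-lattice) are assumptions made here for exposition, not part of the ALE or SICStus implementation.

# Illustrative sketch only: trigger(alpha) from Section 3.2 as a recursion
# over description ASTs.  The Desc classes and the 'sig' helper (intro,
# unify = type unification, generalize = generalization in the type
# semi-lattice) are assumptions, not the ALE implementation.
from dataclasses import dataclass
from typing import Union

@dataclass
class Var:            # a variable v
    name: str

@dataclass
class Type:           # a type description tau
    name: str

@dataclass
class Feat:           # F : phi
    feat: str
    arg: "Desc"

@dataclass
class Conj:           # phi1 /\ phi2
    left: "Desc"
    right: "Desc"

@dataclass
class Disj:           # phi1 \/ phi2
    left: "Desc"
    right: "Desc"

Desc = Union[Var, Type, Feat, Conj, Disj]

def trigger(d: Desc, sig) -> str:
    """Most general type antecedent that is sound for the description d."""
    if isinstance(d, Var):
        return sig.bottom                      # bottom: the most general type
    if isinstance(d, Type):
        return d.name
    if isinstance(d, Feat):
        return sig.intro(d.feat)               # unique introducer of the feature
    if isinstance(d, Conj):                    # conjunction: unify the triggers
        return sig.unify(trigger(d.left, sig), trigger(d.right, sig))
    if isinstance(d, Disj):                    # disjunction: generalize the triggers
        return sig.generalize(trigger(d.left, sig), trigger(d.right, sig))
    raise TypeError(f"not a description: {d!r}")

On the Finiteness Marking Principle of Figure 3 (1), this recursion returns the unique introducer of SYNSEM, i.e. the type sign, which is exactly the antecedent used in Figure 3 (2).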
In practice, whenfs/2 can be partially evaluated by a compiler. In the running example, Figure 3, we can compile the whenfs/2 subgoal in (2) into simpler whentype/2 subgoals, that delay until X reaches a particular type. The second case of whenfs/4 tells us that this can be achieved by successively waiting for the types that introduce each of the features, SYNSEM, LOC, and CAT. As shown in Figure 4, those types are sign, synsem and local, respectively (Figure 3 (3)). The description that CatVal is suspended on is a conjunction, so we successively suspend on each conjunct. The type that introduces both HEAD and MARKING is category (4). In practice, static analysis can greatly reduce the complexity of the resulting relational goals. In this case, static analysis of the type system tells us that all four of these whentype/2 calls can be eliminated (5), since X must be a sign in this context, synsem is the least appropriate type of any SYNSEM value, local is the least appropriate type of any LOC value, and category is the least appropriate type of any CAT value. 3.4 Primitive delay statements The two fundamental primitives typically provided for Prolog terms, e.g., by SICStus Prolog when/2, are: (1) suspending until a variable is instantiated, and (2) suspending until two variables are equated or inequated. The latter corresponds exactly to structure-sharing in TFSs, and to shared variables in descriptions; its implementation was already discussed in the previous section. The former, if carried over directly, would correspond to delaying until a variable is promoted to a type more specific than ⊥, the most general type in the type semilattice. There are degrees of instantiation in LTFS, however, corresponding to long subtyping chains that terminate in ⊥. A more general and useful primitive in a typed language with such chains is suspending until a variable is promoted to a particular type. whentype(Type,X,Goal), i.e., delaying subgoal Goal until variable X reaches Type, is then the non-universally-quantified cousin of the type-antecedent constraints that are already used in the ERG. How whentype(Type,X,Goal) is implemented depends on the data structure used for TFSs, but in Prolog they invariably use the underlying Prolog implementation of when/2. In ALE, for example, TFSs are represented with reference chains that extend every time their type changes. One can simply wait for a variable position at the end of this chain to be instantiated, and then compare the new type to Type. Figure 3 (6) shows a schematic representation of a sign-typed TFS with SYNSEM value SynVal, and two other appropriate feature values. Acting upon this as its second argument, the corresponding definition of whentype(Type,X,Goal) in Figure 3 (7) delays on the variable in the extra, fourth argument position. This variable will be instantiated to a similar term when this TFS promotes to a subtype of sign. As described above, delaying until the antecedent of the principle in Figure 3 (1) is true or false ultimately reduces to delaying until various feature values attain certain types using whentype/3. A TFS may not have substructures that are specific enough to determine whether an antecedent holds or not. In this case, we must wait until it is known whether the antecedent is true or false before applying the consequent. If we reach a deadlock, where several constraints are suspended on their antecedents, then we must use another resolution method to begin testing more specific extensions of the TFS in turn. 
The choice of these other methods characterizes a true CLP solution for LTFS, all of which are enabled by the method presented in this paper. In the case of the signature in Figure 4, one of these methods may test whether a marking-typed substructure is consistent with either fin or inf. If it is consistent with fin, then this branch of the search may unsuspend the Finiteness Marking Principle on a sign-typed TFS that contains this substructure.
4 Measuring the cost of delaying
How much of a cost do we pay for using delaying? In order to answer this question definitively, we would need to reimplement a large-scale grammar which was substantially identical in every way to the ERG but for its use of delay statements. The construction of such a grammar is outside the scope of this research programme, but we do have access to MERGE,2 which was designed to have the same extensional coverage of English as the ERG. Internally, MERGE is quite unlike the ERG. Its TFSs are far larger because each TFS category carries inside it the phrase structure daughters of the rule that created it. It also has far fewer types, more feature values, a heavy reliance on lists, about a third as many phrase structure rules with daughter categories that are an average of 32% larger, and many more constraints. Because of these differences, this version of MERGE runs on average about 300 times slower than the ERG. On the other hand, MERGE uses delaying for all three of the purposes that have been discussed in this paper: complex antecedents, explicit whenfs/2 calls to avoid non-termination problems, and explicit whenfs/2 calls to avoid expensive non-deterministic searches. While there is currently no delay-free grammar to compare it to, we can pop open the hood on our implementation and measure delaying relative to other system functions on MERGE with its test suite. The results are shown in Figure 5.

Function              avg. µs per call   avg. # calls per sent.   avg. % parse time
PS rules                        1458                  410                 0.41
Chart access                    13.3                13426                 0.12
Relations                        4.0              1380288                 1.88
Delays                           2.6              3633406                 6.38
Path compression                 2.0               955391                 1.31
Constraints                      1.6              1530779                 1.62
Unification                      1.5             37187128                38.77
Dereferencing                    0.5            116731777                38.44
Add type MGSat                   0.3              5131391                 0.97
Retrieve feat. val.             0.02             19617973                 0.21
Figure 5: Run-time allocation of functionality in MERGE. Times were measured on an HP Omnibook XE3 laptop with an 850MHz Pentium II processor and 512MB of RAM, running SICStus Prolog 3.11.0 on Windows 98 SE.

2The author sincerely thanks Kordula DeKuthy and Detmar Meurers for their assistance in providing the version of MERGE (0.9.6) and its test suite (1347 sentences, average word length 6.3, average chart size 410 edges) for this evaluation. MERGE is still under development.

These results show that while the per-call cost of delaying is on a par with that of other system functions such as constraint enforcement and relational goal resolution, delaying takes between three and five times more of the sentence parse time because it is called so often. This reflects, in part, design decisions of the MERGE grammar writers, but it also underscores the importance of having an efficient implementation of delaying for large-scale use. Even if delaying could be eliminated entirely from this grammar at no cost, however, a 6% reduction in parsing speed would not, in the present author's view, warrant the loss of modularity in a grammar of this size.
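The relative-cost numbers in Figure 5 come from instrumenting the Prolog system itself; purely as an illustration of the bookkeeping involved, a wrapper along the following lines accumulates per-function call counts and wall-clock times and reports average microseconds per call and share of total parse time. This is a hypothetical Python sketch, not the SICStus instrumentation actually used for the table.

# Hypothetical profiling sketch: count calls and accumulate time per function,
# then report average microseconds per call and share of total parse time.
import time
from collections import defaultdict
from functools import wraps

_stats = defaultdict(lambda: [0, 0.0])        # function name -> [calls, total seconds]

def profiled(name):
    """Wrap a system function so that its calls and wall-clock time are recorded."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                entry = _stats[name]
                entry[0] += 1
                entry[1] += time.perf_counter() - t0
        return wrapper
    return deco

def report(total_parse_seconds, sentences):
    for name, (calls, secs) in sorted(_stats.items(), key=lambda kv: -kv[1][1]):
        per_call_us = 1e6 * secs / calls if calls else 0.0
        calls_per_sent = calls / sentences
        share = 100.0 * secs / total_parse_seconds
        print(f"{name:20s} {per_call_us:9.2f} us/call {calls_per_sent:12.0f} calls/sent {share:6.2f} %")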
5 Conclusion It has been shown that a simple generalization of conventional delay statements to LTFS, combined with a subsumption-based interpretation of implicational constraints and unique feature introduction are sufficient to restore much of the functionality and concomitant benefit that has been routinely sacrificed in HPSG in the name of parsing efficiency. While a definitive measurement of the computational cost of this functionality has yet to emerge, there is at least no apparent indication from the experiments that we can conduct that disjunction, complex antecedents and/or a judicious use of recursion pose a significant obstacle to tractable grammar design when the right control strategy (CLP with subsumption testing) is adopted. References H. A¨ıt-Kaci, A. Podelski, and G. Smolka. 1992. A feature-based constraint system for logic programming with entailment. In Proceedings of the International Conference on Fifth Generation Computer Systems. H. A¨ıt-Ka´ci. 1984. A Lattice-theoretic Approach to Computation based on a Calculus of Partially Ordered Type Structures. Ph.D. thesis, University of Pennsylvania. B. Carpenter and G. Penn. 1996. Compiling typed attribute-value logic grammars. In H. Bunt and M. Tomita, editors, Recent Advances in Parsing Technologies, pages 145–168. Kluwer. B. Carpenter. 1992. The Logic of Typed Feature Structures. Cambridge. A. Copestake, D. Flickinger, C. Pollard, and I. Sag. 2003. Minimal Recursion Semantics: An introduction. Journal submission, November 2003. J. Doerre, M. Dorna, J. Junger, and K. Schneider, 1996. The CUF User’s Manual. IMS Stuttgart, 2.0 edition. J. Doerre. 1993. Generalizing Earley deduction for constraint-based grammars. Technical Report R1.2.A, DYANA Deliverable. D. Flickinger. 2000. On building a more efficient grammar by exploiting types. Natural Language Engineering, 6(1):15–28. T. Goetz and W.D. Meurers. 1997. Interleaving universal principles and relational constraints over typed feature logic. In Proceedings of the 35th ACL / 8th EACL, pages 1–8. M. H¨ohfeld and G. Smolka. 1988. Definite relations over constraint languages. LILOG Report 53, IBM Deutschland. H.-U. Krieger and J. Nerbone. 1991. Feature-based inheritance networks for computational lexicons. In Proceedings of the ACQUILEX Workshop on Default Inheritance in the Lexicon, number 238 in University of Cambridge, Computer Laboratory Technical Report. H.-U. Krieger and U. Sch¨afer. 1994. TDL — a type description language for HPSG part 1: Overview. Technical Report RR-94-37, Deutsches Forschungszentrum f¨ur K¨unstliche Intelligenz (DFKI), November. LinGO. 1999. The LinGO grammar and lexicon. Available on-line at http://lingo.stanford.edu. G. Penn and K. Hoetmer. 2003. In search of epistemic primitives in the english resource grammar. In Proceedings of the 10th International Conference on Head-driven Phrase Structure Grammar, pages 318–337. G. Penn. 2001. Tractability and structural closures in attribute logic signatures. In Proceedings of the 39th ACL, pages 410–417. C. J. Pollard. 1998. Personal communiciation to the author. G. Smolka. 1991. Residuation and guarded rules for constraint logic programming. Technical Report RR-91-13, DFKI. G. Smolka. 1994. A calculus for higher-order concurrent constraint programming with deep guards. Technical Report RR-94-03, DFKI. P. Van Roy. 1990. Can Logic Programming Execute as Fast as Imperative Programming? Ph.D. thesis, University of California, Berkeley. S. Wintner. 2002. Modular context-free grammars. Grammars, 5(1):41–63.
Minimal Recursion Semantics as Dominance Constraints: Translation, Evaluation, and Analysis Ruth Fuchss,1 Alexander Koller,1 Joachim Niehren,2 and Stefan Thater1 1 Dept. of Computational Linguistics, Saarland University, Saarbrücken, Germany ∗ 2 INRIA Futurs, Lille, France {fuchss,koller,stth}@coli.uni-sb.de Abstract We show that a practical translation of MRS descriptions into normal dominance constraints is feasible. We start from a recent theoretical translation and verify its assumptions on the outputs of the English Resource Grammar (ERG) on the Redwoods corpus. The main assumption of the translation— that all relevant underspecified descriptions are nets—is validated for a large majority of cases; all non-nets computed by the ERG seem to be systematically incomplete. 1 Introduction Underspecification is the standard approach to dealing with scope ambiguity (Alshawi and Crouch, 1992; Pinkal, 1996). The readings of underspecified expressions are represented by compact and concise descriptions, instead of being enumerated explicitly. Underspecified descriptions are easier to derive in syntax-semantics interfaces (Egg et al., 2001; Copestake et al., 2001), useful in applications such as machine translation (Copestake et al., 1995), and can be resolved by need. Two important underspecification formalisms in the recent literature are Minimal Recursion Semantics (MRS) (Copestake et al., 2004) and dominance constraints (Egg et al., 2001). MRS is the underspecification language which is used in large-scale HPSG grammars, such as the English Resource Grammar (ERG) (Copestake and Flickinger, 2000). The main advantage of dominance constraints is that they can be solved very efficiently (Althaus et al., 2003; Bodirsky et al., 2004). Niehren and Thater (2003) defined, in a theoretical paper, a translation from MRS into normal dominance constraints. This translation clarified the precise relationship between these two related formalisms, and made the powerful meta-theory of dominance constraints accessible to MRS. Their goal was to also make the large grammars for MRS ∗Supported by the CHORUS project of the SFB 378 of the DFG. and the efficient constraint solvers for dominance constraints available to the other formalism. However, Niehren and Thater made three technical assumptions: 1. that EP-conjunction can be resolved in a preprocessing step; 2. that the qeq relation in MRS is simply dominance; 3. and (most importantly) that all linguistically correct and relevant MRS expressions belong to a certain class of constraints called nets. This means that it is not obvious whether their result can be immediately applied to the output of practical grammars like the ERG. In this paper, we evaluate the truth of these assumptions on the MRS expressions which the ERG computes for the sentences in the Redwoods Treebank (Oepen et al., 2002). The main result of our evaluation is that 83% of the Redwoods sentences are indeed nets, and 17% aren’t. A closer analysis of the non-nets reveals that they seem to be systematically incomplete, i. e. they predict more readings than the sentence actually has. This supports the claim that all linguistically correct MRS expressions are indeed nets. We also verify the other two assumptions, one empirically and one by proof. Our results are practically relevant because dominance constraint solvers are much faster and have more predictable runtimes when solving nets than the LKB solver for MRS (Copestake, 2002), as we also show here. 
In addition, nets might be useful as a debugging tool to identify potentially problematic semantic outputs when designing a grammar. Plan of the Paper. We first recall the definitions of MRS (§2) and dominance constraints (§3). We present the translation from MRS-nets to dominance constraints (§4) and prove that it can be extended to MRS-nets with EP-conjunction (§5). Finally we evaluate the net hypothesis and the qeq assumption on the Redwoods corpus, and compare runtimes (§6). 2 Minimal Recursion Semantics This section presents a definition of Minimal Recursion Semantics (MRS) (Copestake et al., 2004) including EP-conjunctions with a merging semantics. Full MRS with qeq-semantics, top handles, and event variables will be discussed in the last paragraph. MRS Syntax. MRS constraints are conjunctive formulas over the following vocabulary: 1. An infinite set of variables ranged over by h. Variables are also called handles. 2. An infinite set of constants x,y,z denoting indivual variables of the object language. 3. A set of function symbols ranged over by P, and a set of quantifier symbols ranged over by Q. Pairs Qx are further function symbols. 4. The binary predicate symbol ‘=q’. MRS constraints have three kinds of literals, two kinds of elementary predications (EPs) in the first two lines and handle constraints in the third line: 1. h : P(x1,...,xn,h1,...,hm), where n,m ≥0 2. h : Qx(h1,h2) 3. h1 =q h2 In EPs, label positions are on the left of ‘:’ and argument positions on the right. Let M be a set of literals. The label set lab(M) contains all handles of M that occur in label but not in argument position, and the argument handle set arg(M) contains all handles of M that occur in argument but not in label position. Definition 1 (MRS constraints). An MRS constraint (MRS for short) is a finite set M of MRSliterals such that: M1 every handle occurs at most once in argument position in M, M2 handle constraints h =q h′ always relate argument handles h to labels h′, and M3 for every constant (individual variable) x in argument position in M there is a unique literal of the form h : Qx(h1,h2) in M. We say that an MRS M is compact if every handle h in M is either a label or an argument handle. Compactness simplifies the following proofs, but it is no serious restriction in practice. We usually represent MRSs as directed graphs: the nodes of the graph are the handles of the MRS, EPs are represented as solid lines, and handle constraints are represented as dotted lines. For instance, the following MRS is represented by the graph on the left of Fig. 1. {h5 : somey(h6,h8),h7 : book(y),h1 : everyx(h2,h4), h3 : student(x),h9 : read(x,y),h2 =q h3,h6 =q h7} everyx somey studentx booky readx,y everyx somey studentx booky readx,y everyx somey studentx booky readx,y Figure 1: An MRS and its two configurations. Note that the relation between bound variables and their binders is made explicit by binding edges drawn as dotted lines (cf. C2 below); transitively redundand binding edges (e. g., from somey to booky) however are omited. MRS Semantics. Readings of underspecified representations correspond to configurations of MRS constraints. Intuitively, a configuration is an MRS where all handle constraints have been resolved by plugging the “tree fragments” into each other. Let M be an MRS and h,h′ be handles in M. 
We say that h immediately outscopes h′ in M if there is an EP in M with label h and argument handle h′, and we say that h outscopes h′ in M if the pair (h,h′) belongs to the reflexive transitive closure of the immediate outscope relation of M. Definition 2 (MRS configurations). An MRS M is a configuration if it satisfies conditions C1 and C2: C1 The graph of M is a tree of solid edges: (i) all handles are labels i. e., arg(M) = /0 and M contains no handle constraints, (ii) handles don’t properly outscope themselve, and (iii) all handles are pairwise connected by EPs in M. C2 If h : Qx(h1,h2) and h′ : P(...,x,...) belong to M, then h outscopes h′ in M i. e., binding edges in the graph of M are transitively redundant. We say that a configuration M is configuration of an MRS M′ if there exists a partial substitution σ : lab(M′) ⇝arg(M′) that states how to identify labels with argument handles of M′ so that: C3 M = {σ(E) | E is an EP in M′}, and C4 for all h =q h′ in M′, h outscopes σ(h′) in M. The value σ(E) is obtained by substituting all labels in dom(σ) in E while leaving all other handels unchanged. The MRS on the left of Fig. 1, for instance, has two configurations given to the right. EP-conjunctions. Definitions 1 and 2 generalize the idealized definition of MRS of Niehren and Thater (2003) by EP-conjunctions with a merging semantics. An MRS M contains an EP-conjunction if it contains different EPs with the same label h.The intuition is that EP-conjunctions are interpreted by object language conjunctions. P1, P2 P3 {h1 : P1(h2),h1 : P2(h3),h4 : P3 h2 =q h4,h3 =q h4} Figure 2: An unsolvable MRS with EP-conjunction P1 P3 P2 P1 P2, P3 configures Figure 3: A solvable MRS without merging-free configaration Fig. 2 shows an MRS with an EP-conjunction and its graph. The function symbols of both EPs are conjoined and their arguments are merged into a set. The MRS does not have configurations since the argument handles of the merged EPs cannot jointly outscope the node P4. We call a configuration merging if it contains EPconjunctions, and merging-free otherwise. Merging configurations are needed to solve EP-conjuctions such as {h : P1, h : P2}. Unfortunately, they can also solve MRSs without EP-conjunctions, such as the MRS in Fig. 3. The unique configuration of this MRS is a merging configuration: the labels of P1 and P2 must be identified with the only available argument handle. The admission of merging configurations may thus have important consequences for the solution space of arbitrary MRSs. Standard MRS. Standard MRS requires three further extensions: (i) qeq-semantics, (ii) tophandles, and (iii) event variables. These extensions are less relevant for our comparision. The qeq-semantics restricts the interpretation of handle constraints beyond dominance. Let M be an MRS with handles h,h′. We say that h is qeq h′ in M if either h = h′, or there is an EP h : Qx(h0,h1) in M and h1 is qeq h′ in M. Every qeq-configuration is a configuration as defined above, but not necessarily vice versa. The qeq-restriction is relevant in theory but will turn out unproblematic in practice (see §6). Standard MRS requires the existence of top handles in all MRS constraints. This condition doesn’t matter for MRSs with connected graphs (see (Bodirsky et al., 2004) for the proof idea). MRSs with unconnected graphs clearly do not play any role in practical underspecified semantics. Finally, MRSs permit events variables e,e′ as a second form of constants. 
They are treated equally to individual variables except that they cannot be bound by quantifiers. 3 Dominance Constraints Dominance constraints are a general framework for describing trees. For scope underspecification, they are used to describe the syntax trees of object language formulas. Dominance constraints are the core language underlying CLLS (Egg et al., 2001) which adds parallelism and binding constraints. Syntax and semantics. We assume a possibly infinite signature Σ = {f,g,...} of function symbols with fixed arities (written ar( f)) and an infinite set of variables ranged over by X,Y,Z. A dominance constraint ϕ is a conjunction of dominance, inequality, and labeling literals of the following form, where ar( f) = n: ϕ ::= X ◁∗Y | X ̸= Y | X : f(X1,...,Xn) | ϕ∧ϕ′ Dominance constraints are interpreted over finite constructor trees i. e., ground terms constructed from the function symbols in Σ. We identify ground terms with trees that are rooted, ranked, edgeordered and labeled. A solution for a dominance constraint ϕ consists of a tree τ and an assignment α that maps the variables in ϕ to nodes of τ such that all constraints are satisfied: labeling literals X : f(X1,...,Xn) are satisfied iff α(X) is labeled with f and its daughters are α(X1),...,α(Xn) in this order; dominance literals X ◁∗Y are satisfied iff α(X) dominates α(Y) in τ; and inequality literals X ̸= Y are satisfied iff α(X) and α(Y) are distinct nodes. Solved forms. Satisfiable dominance constraints have infinitely many solutions. Constraint solvers for dominance constraints therefore do not enumerate solutions but solved forms i. e., “tree shaped” constraints. To this end, we consider (weakly) normal dominance constraints (Bodirsky et al., 2004). We call a variable a hole of ϕ if it occurs in argument position in ϕ and a root of ϕ otherwise. Definition 3. A dominance constraint ϕ is normal if it satisfies the following conditions. N1 (a) each variable of ϕ occurs at most once in the labeling literals of ϕ. (b) each variable of ϕ occurs at least once in the labeling literals of ϕ. N2 for distinct roots X and Y of ϕ, X ̸= Y is in ϕ. N3 (a) if X ◁∗Y occurs in ϕ, Y is a root in ϕ. (b) if X ◁∗Y occurs in ϕ, X is a hole in ϕ. We call ϕ weakly normal if it satisfies the above properties except for N1 (b) and N3 (b). Note that Definition 3 imposes compactness: the height of tree fragments is always one. This is not everyx somey studentx booky readx,y everyx somey studentx booky readx,y everyx somey studentx booky readx,y Figure 4: A normal dominance constraint (left) and its two solved forms (right). a serious restriction, as weakly normal dominance constraints can be compactified, provided that dominance links relate either roots or holes with roots. Weakly normal dominance constraints ϕ can be represented by dominance graphs. The dominance graph of ϕ is a directed graph G = (V,ET ⊎ED) defined as follows. The nodes of G are the variables of ϕ. Labeling literals X : f(X1,...,Xk) are represented by tree edges (X,Xi) ∈ET, for 1 ≤i ≤k, and dominance literals X ◁∗X′ are represented by dominance edges (X,X′) ∈ED. Inequality literals are not represented in the graph. In pictures, labeling literals are drawn with solid lines and dominance edges with dotted lines. We say that a constraint ϕ is in solved form if its graph is in solved form. A graph G is in solved form iff it is a forest. The solved forms of G are solved forms G′ which are more specific than G i. 
e., they differ only in their dominance edges and the reachability relation of G extends the reachability of G′. A minimal solved form is a solved form which is minimal with respect to specificity. Simple solved forms are solved forms where every hole has exactly one outgoing dominance edge. Fig. 4 shows as a concrete example the translation of the MRS description in Fig. 1 together with its two minimal solved forms. Both solved forms are simple. 4 Translating Merging-Free MRS-Nets This section defines MRS-nets without EPconjunctions, and sketches their translation to normal dominance constraints. We define nets equally for MRSs and dominance constraints. The key semantic property of nets is that different notions of solutions coincide. In this section, we show that merging-free configurations coincides to minimal solved forms. §5 generalizes the translation by adding EP-conjunctions and permitting merging semantics. Pre-translation. An MRS constraint M can be represented as a corresponding dominance constraint ϕM as follows: The variables of ϕM are the handles of M, and the literals of ϕM correspond ... ... ... ... ... (a) strong (b) weak (c) island Figure 5: Fragment Schemata of Nets those of M in the following sence: h : P(x1,...,xn,h1,...,hk) →h : Px1,...,xn(h1,...,hk) h : Qx(h1,h2) →h : Qx(h1,h2) h =q h′ →h ◁∗h′ Additionally, dominance literals h ◁∗h′ are added to ϕM for all h,h′ s. t. h : Qx(h1,h2) and h′ : P(...,x,...) belong to M (cf. C2), and literals h ̸= h′ are added to ϕM for all h,h′ in distinct label position in M. Lemma 1. If a compact MRS M does not contain EP-conjunctions then ϕM is weakly normal, and the graph of M is the transitive reduction of the graph of ϕM. Nets. A hypernormal path (Althaus et al., 2003) in a constraint graph is a path in the undirected graph that contains for every leaf X at most one incident dominance edge. Let ϕ be a weakly normal dominance constraint and let G be the constraint graph of ϕ. We say that ϕ is a dominance net if the transitive reduction G′ of G is a net. G′ is a net if every tree fragment F of G′ satisfies one of the following three conditions, illustrated in Fig. 5: Strong. Every hole of F has exactly one outgoing dominance edge, and there is no weak root-to-root dominance edge. Weak. Every hole except for the last one has exactly one outgoing dominance edge; the last hole has no outgoing dominance edge, and there is exactly one weak root-to-root dominance edge. Island. The fragment has one hole X, and all variables which are connected to X by dominance edges are connected by a hypernormal path in the graph where F has been removed. We say that an MRS M is an MRS-net if the pretranslation of its literals results in a dominance net ϕM. We say that an MRS-net M is connected if ϕM is connected; ϕM is connected if the graph of ϕM is connected. Note that this notion of MRS-nets implies that MRS-nets cannot contain EP-conjunctions as otherwise the resulting dominance constraint would not be weakly normal. §5 shows that EP-conjunctions can be resolved i. e., MRSs with EP-conjunctions can be mapped to corresponding MRSs without EPconjunctions. If M is an MRS-net (without EP-conjunctions), then M can be translated into a corresponding dominance constraint ϕ by first pre-translating M into a ϕM and then normalizing ϕM by replacing weak root-to-root dominance edges in weak fragments by dominance edges which start from the open last hole. Theorem 1 (Niehren and Thater, 2003). Let M be an MRS and ϕM be the translation of M. 
If M is a connected MRS-net, then the merging-free configurations of M bijectively correspond to the minimal solved forms of the ϕM. The following section generalizes this result to MRS-nets with a merging semantics. 5 Merging and EP-Conjunctions We now show that if an MRS is a net, then all its configurations are merging-free, which in particular means that the translation can be applied to the more general version of MRS with a merging semantics. Lemma 2 (Niehren and Thater, 2003). All minimal solved forms of a connected dominance net are simple. Lemma 3. If all solved forms of a normal dominance constraint are simple, then all of its solved forms are minimal. Theorem 2. The configurations of an MRS-net M are merging-free. Proof. Let M′ be a configuration of M and let σ be the underlying substitution. We construct a solved form ϕM′ as follows: the labeling literals of ϕM′ are the pre-translations of the EPs in M, and ϕM′ has a dominance literal h′ ◁∗h iff (h,h′) ∈σ, and inequality literals X ̸= Y for all distinct roots in ϕM′. By condition C1 in Def. 2, the graph of M′ is a tree, hence the graph of ϕM′ must also be a tree i. e., ϕM′ is a solved form. ϕM′ must also be more specific than the graph of ϕM because the graph of M′ satisfies all dominance requirements of the handle constraints in M, hence ϕM′ is a solved form of ϕM. M clearly solved ϕM′. By Lemmata 2 and 3, ϕM′ must be simple and minimal because ϕM is a net. But then M′ cannot contain EP-conjunctions i. e., M′ is merging-free. The merging semantics of MRS is needed to solve EP-conjunctions. As we have seen, the merging semantics is not relevant for MRS constraints which are nets. This also verifies Niehren and Thater’s (2003) assumption that EP-conjunctions are “syntactic sugar” which can be resolved in a preprocessing step: EP-conjunctions can be resolved by exhaustively applying the following rule which adds new literals to make the implicit conjunction explicit: h : E1(h1,...,hn),h : E2(h′ 1,...,h′ m) ⇒ h : ‘E1&E2’(h1,...,hn,h′ 1,...,h′ m), where E(h1,...,hn) stands for an EP with argument handles h1,...,hn, and where ‘E1&E2’ is a complex function symbol. If this rule is applied exhaustively to an MRS M, we obtain an MRS M′ without EPconjunctions. It should be intuitively clear that the configurations of M and M′ correspond; Therefore, the configurations of M also correspond to the minimal solved forms of the translation of M′. 6 Evaluation The two remaining assumptions underlying the translation are the “net-hypothesis” that all linguistically relevant MRS expressions are nets, and the “qeq-hypothesis” that handle constraints can be given a dominance semantics practice. In this section, we empirically show that both assumptions are met in practice. As an interesting side effect, we also compare the run-times of the constraint-solvers we used, and we find that the dominance constraint solver typically outperforms the MRS solver, often by significant margins. Grammar and Resources. We use the English Resource Grammar (ERG), a large-scale HPSG grammar, in connection with the LKB system, a grammar development environment for typed feature grammars (Copestake and Flickinger, 2000). We use the system to parse sentences and output MRS constraints which we then translate into dominance constraints. As a test corpus, we use the Redwoods Treebank (Oepen et al., 2002) which contains 6612 sentences. 
We exclude the sentences that cannot be parsed due to memory capacities or words and grammatical structures that are not included in the ERG, or which produce ill-formed MRS expressions (typically violating M1) and thus base our evaluation on a corpus containing 6242 sentences. In case of syntactic ambiguity, we only use the first reading output by the LKB system. To enumerate the solutions of MRS constraints and their translations, we use the MRS solver built into the LKB system and a solver for weakly normal dominance constraints (Bodirsky et al., 2004), ... (a) open hole (b) ill-formed island Figure 6: Two classes of non-nets which is implemented in C++ and uses LEDA, a class library for efficient data types and algorithms (Mehlhorn and Näher, 1999). 6.1 Relevant Constraints are Nets We check for 6242 constraints whether they constitute nets. It turns out that 5200 (83.31%) constitute nets while 1042 (16.69%) violate one or more netconditions. Non-nets. The evaluation shows that the hypothesis that all relevant constraints are nets seems to be falsified: there are constraints that are not nets. However, a closer analysis suggests that these constraints are incomplete and predict more readings than the sentence actually has. This can also be illustrated with the average number of solutions: For the Redwoods corpus in combination with the ERG, nets have 1836 solutions on average, while non-nets have 14039 solutions, which is a factor of 7.7. The large number of solutions for non-nets is due to the “structural weakness” of non-nets; often, non-nets have only merging configurations. Non-nets can be classified into two categories (see Fig. 6): The first class are violated “strong” fragments which have holes without outgoing dominance edge and without a corresponding root-toroot dominance edge. The second class are violated “island” fragments where several outgoing dominance edges from one hole lead to nodes which are not hypernormally connected. There are two more possibilities for violated “weak” fragments— having more than one weak dominance edge or having a weak dominance edge without empty hole—, but they occur infrequently (4.4%). If those weak fragments were normalized, they would constitute violated island fragments, so we count them as such. 124 (11.9%) of the non-nets contain empty holes, 762 (73.13%) contain violated island fragments, and 156 (14.97%) contain both. Those constraints that contain only empty holes and no violated island fragments cannot be configured, as in configurations, all holes must be filled. Fragments with open holes occur frequently, but not in all contexts, for constraints representing for example time specifications (e. g., “from nine to twelve” or “a three o’clock flight”) or intensional expressions (e. g., “Is it?” or “I suppose”). Illavailablee, ax ay cafeteriax saunay ande,x,y prop ax ay cafeteriax saunay, ande,x,y availablee prop ax ay cafeteriax saunay ande,x,y availablee prop ϕ1 ϕ2 Figure 7: An MRS for “A sauna and a cafeteria are available” (top) and two of sixteen merging configurations (below). ax ay cafeteriax saunay ande,x,y availablee prop Figure 8: The “repaired” MRS from Fig. 7 formed island fragments are often triggered by some kind of coordination, like “a restaurant and/or a sauna” or “a hundred and thirty Marks”, also implicit ones like “one hour thirty minutes” or “one thirty”. 
Constraints with both kinds of violated fragments emerge when there is some input that yields an open hole and another part of the input yields a violated island fragment (for example in constructions like “from nine to eleven thirty” or “the ten o’clock flight Friday or Thursday”, but not necessarily as obviously as in those examples). The constraint on the left in Fig. 7 gives a concrete example for violated island fragments. The topmost fragment has outgoing dominance edges to otherwise unconnected subconstraints ϕ1 and ϕ2. Under the merging-free semantics of the MRS dialect used in (Niehren and Thater, 2003) where every hole has to be filled exactly once, this constraint cannot be configured: there is no hole into which “available” could be plugged. However, standard MRS has merging configuration where holes can be filled more than once. For the constraint in Fig. 7 this means that “available” can be merged in almost everywhere, only restricted by the “qeq-semantics” which forbids for instance “available” to be merged with “sauna.” In fact, the MRS constraint solver derives sixteen configurations for the constraint, two of which are given in Fig. 7, although the sentence has only two scope readings. We conjecture that non-nets are semantically “incomplete” in the sense that certain constraints are missing. For instance, an alternative analysis for the above constraint is given in Fig. 8. The constraint adds an additional argument handle to “and” and places a dominance edge from this handle to “available.” In fact, the constraint is a net; it has exactly two readings. 6.2 Qeq is dominance For all nets, the dominance constraint solver calculates the same number of solutions as the MRS solver does, with 3 exceptions that hint at problems in the syntax-semantics interface. As every configuration that satisfies proper qeq-constraints is also a configuration if handle constraints are interpreted under the weaker notion of dominance, the solutions computed by the dominance constraint solver and the MRS solver must be identical for every constraint. This means that the additional expressivity of proper qeq-constraints is not used in practice, which in turn means that in practice, the translation is sound and correct even for the standard MRS notion of solution, given the constraint is a net. 6.3 Comparison of Runtimes The availability of a large body of underspecified descriptions both in MRS and in dominance constraint format makes it possible to compare the solvers for the two underspecification formalisms. We measured the runtimes on all nets using a Pentium III CPU at 1.3 GHz. The tests were run in a multi-user environment, but as the MRS and dominance measurements were conducted pairwise, conditions were equal for every MRS constraint and corresponding dominance constraint. The measurements for all MRS-nets with less than thirty dominance edges are plotted in Fig. 9. Inputs are grouped according to the constraint size. The filled circles indicate average runtimes within each size group for enumerating all solutions using the dominance solver, and the empty circles indicate the same for the LKB solver. The brackets around each point indicate maximum and minimum runtimes in that group. Note that the vertical axis is logarithmic. We excluded cases in which one or both of the solvers did not return any results: There were 173 sentences (3.33% of all nets) on which the LKB solver ran out of memory, and 1 sentence (0.02%) that took the dominance solver more than two minutes to solve. 
The graph shows that the dominance constraint solver is generally much faster than the LKB solver: the average runtime is lower by a factor of 50 for constraints of size 10, and this grows to a factor of 500 for constraints of size 25. Our experiments show that the dominance solver outperforms the LKB solver on 98% of the cases. In addition, its runtimes are much more predictable, as the brackets in the graph are also shorter by two or three orders of magnitude, and the standard deviation is much smaller (not shown).

Figure 9: Comparison of runtimes for the MRS and dominance constraint solvers (time in ms, logarithmic scale, against size in number of dominance edges; DC solver (LEDA) vs. MRS solver).

7 Conclusion

We developed Niehren and Thater's (2003) theoretical translation into a practical system for translating MRS into dominance constraints, applied it systematically to MRSs produced by the English Resource Grammar for the Redwoods treebank, and evaluated the results. We showed that:

1. most "real life" MRS expressions are MRS-nets, which means that the translation is correct in these cases;
2. for nets, merging is not necessary (or even possible);
3. the practical translation works perfectly for all MRS-nets from the corpus; in particular, the =q relation can be taken as synonymous with dominance in practice.

Because the translation works so well in practice, we were able to compare the runtimes of MRS and dominance constraint solvers on the same inputs. This evaluation shows that the dominance constraint solver outperforms the MRS solver and displays more predictable runtimes. A researcher working with MRS can now solve MRS nets using the efficient dominance constraint solvers.

A small but significant number of the MRS constraints derived by the ERG are not nets. We have argued that these constraints seem to be systematically incomplete, and their correct completions are indeed nets. A more detailed evaluation is an important task for future research, but if our "net hypothesis" is true, a system that tests whether all outputs of a grammar are nets (or a formal "safety criterion" that would prove this theoretically) could be a useful tool for developing and debugging grammars.

From a more abstract point of view, our evaluation contributes to the fundamental question of what expressive power an underspecification formalism needs. It turned out that the distinction between qeq and dominance hardly plays a role in practice. If the net hypothesis is true, it also follows that merging is not necessary because EP-conjunctions can be converted into ordinary conjunctions. More research along these lines could help unify different underspecification formalisms and the resources that are available for them.

Acknowledgments

We are grateful to Ann Copestake for many fruitful discussions, and to our reviewers for helpful comments.

References

H. Alshawi and R. Crouch. 1992. Monotonic semantic interpretation. In Proc. 30th ACL, pages 32–39.

Ernst Althaus, Denys Duchier, Alexander Koller, Kurt Mehlhorn, Joachim Niehren, and Sven Thiel. 2003. An efficient graph algorithm for dominance constraints. Journal of Algorithms, 48:194–219.

Manuel Bodirsky, Denys Duchier, Joachim Niehren, and Sebastian Miele. 2004. An efficient algorithm for weakly normal dominance constraints. In ACM-SIAM Symposium on Discrete Algorithms. The ACM Press.

Ann Copestake and Dan Flickinger. 2000. An open-source grammar development environment and broad-coverage English grammar using HPSG. In Conference on Language Resources and Evaluation.
Ann Copestake, Dan Flickinger, Rob Malouf, Susanne Riehemann, and Ivan Sag. 1995. Translation using Minimal Recursion Semantics. Leuven.

Ann Copestake, Alex Lascarides, and Dan Flickinger. 2001. An algebra for semantic construction in constraint-based grammars. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pages 132–139, Toulouse, France.

Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan Sag. 2004. Minimal recursion semantics: An introduction. Journal of Language and Computation. To appear.

Ann Copestake. 2002. Implementing Typed Feature Structure Grammars. CSLI Publications, Stanford, CA.

Markus Egg, Alexander Koller, and Joachim Niehren. 2001. The Constraint Language for Lambda Structures. Logic, Language, and Information, 10:457–485.

K. Mehlhorn and S. Näher. 1999. The LEDA Platform of Combinatorial and Geometric Computing. Cambridge University Press, Cambridge. See also http://www.mpi-sb.mpg.de/LEDA/.

Joachim Niehren and Stefan Thater. 2003. Bridging the gap between underspecification formalisms: Minimal recursion semantics as dominance constraints. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics.

Stephan Oepen, Kristina Toutanova, Stuart Shieber, Christopher Manning, Dan Flickinger, and Thorsten Brants. 2002. The LinGO Redwoods treebank: Motivation and preliminary applications. In Proceedings of the 19th International Conference on Computational Linguistics (COLING'02), pages 1253–1257.

Manfred Pinkal. 1996. Radical underspecification. In 10th Amsterdam Colloquium, pages 587–606.
Learning with Unlabeled Data for Text Categorization Using Bootstrapping and Feature Projection Techniques

Youngjoong Ko, Dept. of Computer Science, Sogang Univ., Sinsu-dong 1, Mapo-gu, Seoul, 121-742, Korea, [email protected]
Jungyun Seo, Dept. of Computer Science, Sogang Univ., Sinsu-dong 1, Mapo-gu, Seoul, 121-742, Korea, [email protected]

Abstract

A wide range of supervised learning algorithms has been applied to Text Categorization. However, the supervised learning approaches have some problems. One of them is that they require a large, often prohibitive, number of labeled training documents for accurate learning. Generally, acquiring class labels for training data is costly, while gathering a large quantity of unlabeled data is cheap. We here propose a new automatic text categorization method for learning from only unlabeled data using a bootstrapping framework and a feature projection technique. In our experiments, our method showed performance reasonably comparable to that of a supervised method. If our method is used in a text categorization task, building text categorization systems will become significantly faster and less expensive.

1 Introduction

Text categorization is the task of classifying documents into a certain number of pre-defined categories. Many supervised learning algorithms have been applied to this area. These algorithms today are reasonably successful when provided with enough labeled or annotated training examples. For example, there are Naive Bayes (McCallum and Nigam, 1998), Rocchio (Lewis et al., 1996), Nearest Neighbor (kNN) (Yang et al., 2002), TCFP (Ko and Seo, 2002), and Support Vector Machine (SVM) (Joachims, 1998).

However, the supervised learning approach has some difficulties. One key difficulty is that it requires a large, often prohibitive, number of labeled training data for accurate learning. Since a labeling task must be done manually, it is a painfully time-consuming process. Furthermore, since the application area of text categorization has diversified from newswire articles and web pages to E-mails and newsgroup postings, it is also a difficult task to create training data for each application area (Nigam et al., 1998). In this light, we consider learning algorithms that do not require such a large amount of labeled data.

While labeled data are difficult to obtain, unlabeled data are readily available and plentiful. Therefore, this paper advocates using a bootstrapping framework and a feature projection technique with just unlabeled data for text categorization. The input to the bootstrapping process is a large amount of unlabeled data and a small amount of seed information to tell the learner about the specific task. In this paper, we consider seed information in the form of title words associated with categories. In general, since unlabeled data are much less expensive and easier to collect than labeled data, our method is useful for text categorization tasks including online data sources such as web pages, E-mails, and newsgroup postings.

To automatically build up a text classifier with unlabeled data, we must solve two problems: how we can automatically generate labeled training documents (machine-labeled data) from only title words, and how we can handle incorrectly labeled documents in the machine-labeled data. This paper provides solutions for these problems. For the first problem, we employ the bootstrapping framework. For the second, we use the TCFP classifier with robustness from noisy data (Ko and Seo, 2004).
How can labeled training data be automatically created from unlabeled data and title words? On their own, unlabeled data may seem uninformative for building a text classifier, because they do not contain the most important piece of information, their category. Thus we must assign a class to each document in order to use supervised learning approaches. Since text categorization is a task based on pre-defined categories, we know the categories for classifying documents. Knowing the categories means that we can choose at least one representative title word for each category. This is the starting point of our proposed method. As we carry out a bootstrapping task from these title words, we can finally obtain labeled training data.

Suppose, for example, that we are interested in classifying newsgroup postings for the 'Autos' category. Above all, we can select 'automobile' as a title word, and automatically extract keywords ('car', 'gear', 'transmission', 'sedan', and so on) using co-occurrence information. In our method, we use a context (a sequence of 60 words) as the unit of meaning for bootstrapping from title words; it is generally intermediate in size between a sentence and a document. We then extract core contexts that include at least one of the title words and the keywords. We call them centroid-contexts because they are regarded as contexts with the core meaning of each category. From the centroid-contexts, we can gain many words that contextually co-occur with the title words and keywords: 'driver', 'clutch', 'trunk', and so on. They are words in first-order co-occurrence with the title words and the keywords. To gather more vocabulary, we extract contexts that are similar to centroid-contexts by a similarity measure; they contain words in second-order co-occurrence with the title words and the keywords. We finally construct the context-cluster of each category as the combination of centroid-contexts and contexts selected by the similarity measure. Using the context-clusters as labeled training data, a Naive Bayes classifier can be built. Since the Naive Bayes classifier can label all unlabeled documents with their category, we can finally obtain labeled training data (machine-labeled data).

When the machine-labeled data are used to learn a text classifier, there is another difficulty: they contain more incorrectly labeled documents than manually labeled data. Thus we develop and employ the TCFP classifier, which is robust to noisy data.

The rest of this paper is organized as follows. Section 2 reviews previous work. Sections 3 and 4 explain the proposed method in detail. Section 5 is devoted to the analysis of the empirical results. The final section describes conclusions and future work.

2 Related Works

In general, related approaches for using unlabeled data in text categorization have two directions: one builds classifiers from a combination of labeled and unlabeled data (Nigam, 2001; Bennett and Demiriz, 1999), and the other employs clustering algorithms for text categorization (Slonim et al., 2002). Nigam studied an Expectation Maximization (EM) technique for combining labeled and unlabeled data for text categorization in his dissertation. He showed that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training data with a large pool of unlabeled data. Bennett and Demiriz achieved small improvements on some UCI data sets using SVM.
It seems that SVMs assume that decision boundaries lie between classes in low-density regions of instance space, and the unlabeled examples help find these areas. Slonim suggested clustering techniques for unsupervised document classification. Given a collection of unlabeled data, he attempted to find clusters that are highly correlated with the true topics of documents by unsupervised clustering methods. In his paper, Slonim proposed a new clustering method, the sequential Information Bottleneck (sIB) algorithm.

3 The Bootstrapping Algorithm for Creating Machine-labeled Data

The bootstrapping framework described in this paper consists of the following steps. Each module is described in detail in the following sections.

1. Preprocessing: contexts are separated from unlabeled documents and content words are extracted from them.
2. Constructing context-clusters for training:
   - keywords of each category are created;
   - centroid-contexts are extracted and verified;
   - context-clusters are created by a similarity measure.
3. Learning a classifier: a Naive Bayes classifier is learned by using the context-clusters.

3.1 Preprocessing

The preprocessing module has two main roles: extracting content words and reconstructing the collected documents into contexts. We use the Brill POS tagger to extract content words (Brill, 1995). Generally, the supervised learning approach with labeled data regards a document as a unit of meaning. But since we can use only the title words and unlabeled data, we define a context as the unit of meaning and employ it as the meaning unit to bootstrap the meaning of each category. In our system, we regard a sequence of 60 content words within a document as a context. To extract contexts from a document, we use a sliding window technique (Maarek et al., 1991). The window slides from the first word of the document to the last, with a window size of 60 words and an interval of 30 words between windows. Therefore, the final output of preprocessing is a set of context vectors that are represented as the content words of each context.

3.2 Constructing Context-Clusters for Training

At first, we automatically create keywords from a title word for each category using co-occurrence information. Then centroid-contexts are extracted using the title word and keywords; they contain at least one of the title words and keywords. Finally, we can gain more information about each category by assigning the remaining contexts to each context-cluster using a similarity measure technique; the remaining contexts are those that do not contain any keywords or title words.

3.2.1 Creating Keyword Lists

The starting point of our method is that we have title words and collected documents. A title word can present the main meaning of each category but it could be insufficient in representing any category for text categorization. Thus we need to find words that are semantically related to a title word, and we define them as keywords of each category.

The score of semantic similarity between a title word, T, and a word, W, is calculated by the cosine metric as follows:

sim(T, W) = Σ_{i=1}^{n} t_i × w_i / ( sqrt(Σ_{i=1}^{n} t_i^2) × sqrt(Σ_{i=1}^{n} w_i^2) )    (1)

where t_i and w_i represent the occurrence (binary value: 0 or 1) of words T and W in the i-th document respectively, and n is the total number of documents in the collected documents. This method calculates the similarity score between words based on the degree of their co-occurrence in the same document.
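To make the preprocessing and keyword-scoring steps concrete, the sketch below extracts 60-word contexts at a 30-word interval and evaluates Equation (1) over binary occurrence vectors. This is a minimal illustration under the stated assumptions (content words already POS-filtered, occurrence vectors already built); the function names are ours, not the authors'.

```python
import math

def sliding_contexts(content_words, size=60, step=30):
    """Section 3.1: overlapping contexts of 60 content words, 30-word interval."""
    last_start = max(len(content_words) - size, 0)
    return [content_words[s:s + size] for s in range(0, last_start + 1, step)]

def cosine_sim(title_occ, word_occ):
    """Equation (1): cosine similarity of two binary document-occurrence vectors."""
    dot = sum(t * w for t, w in zip(title_occ, word_occ))
    norm_t = math.sqrt(sum(t * t for t in title_occ))
    norm_w = math.sqrt(sum(w * w for w in word_occ))
    return dot / (norm_t * norm_w) if norm_t and norm_w else 0.0

# Toy example: the title word 'automobile' and the candidate keyword 'car'
# over five collected documents (1 = the word occurs in that document).
print(cosine_sim([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))  # about 0.816
```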
Since the keywords for text categorization must have the power to discriminate categories as well as similarity with the title words, we assign a word to the keyword list of the category with the maximum similarity score and recalculate the score of the word in that category using the following formula:

Score(W, c_max) = sim(T_max, W) + ( sim(T_max, W) − sim(T_secondmax, W) )    (2)

where T_max is the title word with the maximum similarity score with a word W, c_max is the category of the title word T_max, and T_secondmax is the title word with the second-highest similarity score with the word W. This formula means that a word ranked high in a category has a high similarity score with the title word of the category and a large similarity-score difference from the other title words.

We sort the words assigned to each category according to the calculated score in descending order. We then choose the top m words as keywords of the category. Table 1 shows the list of keywords (top 5) for each category in the WebKB data set.

Table 1. The list of keywords in the WebKB data set
  Category   Title word   Keywords
  course     course       assignments, hours, instructor, class, fall
  faculty    professor    associate, ph.d, fax, interests, publications
  project    project      system, systems, research, software, information
  student    student      graduate, computer, science, page, university

3.2.2 Extracting and Verifying Centroid-Contexts

We choose contexts with a keyword or a title word of a category as centroid-contexts. Among centroid-contexts, some contexts may not have good features of a category even though they include keywords of the category. To rank the importance of centroid-contexts, we compute the importance score of each centroid-context. First of all, the weight W_ij of word w_i in the j-th category is calculated using the Term Frequency (TF) within the category and the Inverse Category Frequency (ICF) (Cho and Kim, 1997) as follows:

W_ij = TF_ij × ICF_i = TF_ij × ( log(M) − log(CF_i) )    (3)

where CF_i is the number of categories that contain w_i and M is the total number of categories. Using the word weights W_ij calculated by formula 3, the score of a centroid-context S_k in the j-th category c_j is computed as follows:

Score(S_k, c_j) = ( W_1j + W_2j + ... + W_Nj ) / N    (4)

where N is the number of words in the centroid-context. As a result, we obtain a set of words in first-order co-occurrence from the centroid-contexts of each category.
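A small sketch of the keyword and centroid-context scores (Equations 2-4) follows; the inputs (per-title-word similarities, TF and CF counts) are assumed to have been computed beforehand, and the names are illustrative.

```python
import math

def keyword_score(sims_by_title_word):
    """Equation (2): return (winning category's title word, Score) for a word,
    given its cosine similarity to every title word."""
    ranked = sorted(sims_by_title_word.items(), key=lambda kv: kv[1], reverse=True)
    (t_max, s_max), (_t2, s_second) = ranked[0], ranked[1]
    return t_max, s_max + (s_max - s_second)

def tf_icf(tf_ij, cf_i, m_categories):
    """Equation (3): W_ij = TF_ij * (log M - log CF_i)."""
    return tf_ij * (math.log(m_categories) - math.log(cf_i))

def centroid_context_score(word_weights):
    """Equation (4): average word weight within one centroid-context."""
    return sum(word_weights) / len(word_weights)

# Toy usage: 'gear' is far more similar to 'automobile' than to other title words.
print(keyword_score({"automobile": 0.42, "professor": 0.05, "student": 0.03}))
# -> ('automobile', 0.79)
```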
3.2.3 Creating Context-Clusters

We gather the second-order co-occurrence information by assigning the remaining contexts to the context-cluster of each category. As the assignment criterion, we calculate the similarity between the remaining contexts and the centroid-contexts of each category. For this we employ the similarity measure technique of Karov and Edelman (1998). In our method, a part of this technique is adapted for our purpose, and the remaining contexts are assigned to each context-cluster by that revised technique.

1) Measurement of word and context similarities

As similar words tend to appear in similar contexts, we can compute the similarity by using contextual information. Words and contexts play complementary roles: contexts are similar to the extent that they contain similar words, and words are similar to the extent that they appear in similar contexts (Karov and Edelman, 1998). This definition is circular. Thus it is applied iteratively using two matrices, WSM and CSM. Each category has a word similarity matrix WSM_n and a context similarity matrix CSM_n.

In each iteration n, we update WSM_n, whose rows and columns are labeled by all content words encountered in the centroid-contexts of each category and the input remaining contexts. In that matrix, the cell (i,j) holds a value between 0 and 1, indicating the extent to which the i-th word is contextually similar to the j-th word. We also keep and update CSM_n, which holds similarities among contexts. The rows of CSM_n correspond to the remaining contexts and the columns to the centroid-contexts. In this paper, the number of input contexts per row and column in CSM is limited to 200, considering execution time and memory allocation, and the number of iterations is set to 3.

To compute the similarities, we initialize WSM_n to the identity matrix. The following steps are iterated until the changes in the similarity values are small enough:

1. Update the context similarity matrix CSM_n, using the word similarity matrix WSM_n.
2. Update the word similarity matrix WSM_n, using the context similarity matrix CSM_n.

2) Affinity formulae

To simplify the symmetric iterative treatment of similarity between words and contexts, we define an auxiliary relation between words and contexts called affinity. The affinity formulae are defined as follows (Karov and Edelman, 1998):

aff_n(W, X) = max_{W_i ∈ X} sim_n(W, W_i)    (5)

aff_n(X, W) = max_{X_j ∋ W} sim_n(X, X_j)    (6)

In the above formulae, n denotes the iteration number, and the similarity values are defined by WSM_n and CSM_n. Every word has some affinity to a context, and the context can be represented by a vector indicating the affinity of each word to it.

3) Similarity formulae

The similarity of W_1 to W_2 is the average affinity of the contexts that include W_1 to W_2, and the similarity of a context X_1 to X_2 is a weighted average of the affinity of the words in X_1 to X_2. The similarity formulae are defined as follows:

sim_{n+1}(X_1, X_2) = Σ_{W ∈ X_1} weight(W, X_1) · aff_n(W, X_2)    (7)

sim_{n+1}(W_1, W_2) = 1, if W_1 = W_2;
otherwise, sim_{n+1}(W_1, W_2) = Σ_{X ∋ W_1} weight(X, W_1) · aff_n(X, W_2)    (8)

The weights in formula 7 are computed to reflect global frequency, log-likelihood factors, and part of speech, as used in (Karov and Edelman, 1998). The weights in formula 8, each of which is the reciprocal of the number of contexts that contain W_1, sum to 1.

4) Assigning remaining contexts to a category

We decide the similarity value of each remaining context for each category using the following method:

sim(X, c_i) = aver_{S_j ∈ CC_{c_i}} [ sim(X, S_j) ]    (9)

In formula 9, i) X is a remaining context, ii) C = {c_1, c_2, ..., c_m} is the category set, and iii) CC_{c_i} = {S_1, ..., S_n} is the centroid-context set of category c_i. Each remaining context is assigned to the category with the maximum similarity value. But there may exist noisy remaining contexts which do not belong to any category. To remove these noisy remaining contexts, we set up a dropping threshold using the normal distribution of similarity values as follows (Ko and Seo, 2000):

max_{c_i ∈ C} { sim(X, c_i) } ≥ µ + θσ    (10)

where i) X is a remaining context, ii) µ is the average of the similarity values sim(X, c_i), c_i ∈ C, iii) σ is the standard deviation of the similarity values, and iv) θ is the numerical value corresponding to the threshold (%) in the normal distribution table. Finally, a remaining context is assigned to the context-cluster of a category only when that category has the maximum similarity and this similarity is above the dropping threshold value. In this paper, we empirically use a 15% threshold value from an experiment using a validation set.
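The iterative update of the two similarity matrices (Equations 5-8) can be sketched as follows. This is a simplified illustration only: it uses uniform weights instead of the frequency, log-likelihood, and part-of-speech weights of Karov and Edelman (1998), and it treats all contexts symmetrically rather than splitting them into remaining and centroid contexts.

```python
def iterate_similarities(contexts, iterations=3):
    """contexts: list of word lists.  Returns (word-similarity, context-similarity)
    dictionaries after the given number of WSM/CSM update rounds."""
    vocab = sorted({w for ctx in contexts for w in ctx})
    # WSM is initialised to the identity matrix, as in the paper.
    wsm = {(a, b): 1.0 if a == b else 0.0 for a in vocab for b in vocab}
    csm = {}
    for _ in range(iterations):
        # Equations (5) and (7): context similarity from word affinities.
        for i, x1 in enumerate(contexts):
            for j, x2 in enumerate(contexts):
                csm[(i, j)] = sum(max(wsm[(w, w2)] for w2 in x2)
                                  for w in x1) / len(x1)
        # Equations (6) and (8): word similarity from context affinities.
        for w1 in vocab:
            holders1 = [i for i, c in enumerate(contexts) if w1 in c]
            for w2 in vocab:
                if w1 == w2:
                    wsm[(w1, w2)] = 1.0
                    continue
                holders2 = [j for j, c in enumerate(contexts) if w2 in c]
                wsm[(w1, w2)] = sum(max(csm[(i, j)] for j in holders2)
                                    for i in holders1) / len(holders1)
    return wsm, csm

wsm, _ = iterate_similarities([["car", "gear", "driver"], ["car", "trunk"],
                               ["professor", "fax"]])
print(round(wsm[("gear", "trunk")], 3))  # related via the shared 'car' contexts
```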
3.3 Learning the Naive Bayes Classifier Using Context-Clusters

In the above section, we obtained labeled training data: the context-clusters. Since the training data are labeled at the context unit, we employ a Naive Bayes classifier, because it can be built by estimating word probabilities in a category rather than in a document. That is, unlike other classifiers, the Naive Bayes classifier does not require labeled data at the document unit. We use the Naive Bayes classifier with minor modifications based on Kullback-Leibler Divergence (Craven et al., 2000). We classify a document d_i according to the following formula:

P(c_j | d_i; θ̂) = P(c_j | θ̂) P(d_i | c_j; θ̂) / P(d_i | θ̂)
             ≈ P(c_j | θ̂) Π_{t=1}^{|V|} P(w_t | c_j; θ̂)^{N(w_t, d_i)}
             ∝ ( log P(c_j | θ̂) ) / n + Σ_{t=1}^{|V|} P(w_t | d_i; θ̂) log [ P(w_t | c_j; θ̂) / P(w_t | d_i; θ̂) ]    (11)

where i) n is the number of words in document d_i, ii) w_t is the t-th word in the vocabulary, and iii) N(w_t, d_i) is the frequency of word w_t in document d_i.

Here, Laplace smoothing is used to estimate the probability of word w_t in class c_j and the probability of class c_j as follows:

P(w_t | c_j; θ̂) = ( 1 + N(w_t, G_{c_j}) ) / ( |V| + Σ_{t=1}^{|V|} N(w_t, G_{c_j}) )    (12)

P(c_j | θ̂) = ( 1 + |G_{c_j}| ) / ( |C| + Σ_{c_i} |G_{c_i}| )    (13)

where N(w_t, G_{c_j}) is the number of times word w_t occurs in the context-cluster G_{c_j} of category c_j.
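The following is a minimal sketch of this Naive Bayes step (Equations 11-13). Context-clusters are assumed to be given as a mapping from category to the list of word tokens it contains; for simplicity the class prior uses total token counts as the cluster size, and the classification score uses the product form of Equation 11, which ranks categories identically to the KL-divergence form. All names are illustrative.

```python
import math
from collections import Counter

def train_nb(context_clusters, vocab):
    """Equations (12)-(13) with Laplace smoothing.
    context_clusters: {category: [word, word, ...]} built in Section 3.2."""
    counts = {c: Counter(words) for c, words in context_clusters.items()}
    totals = {c: sum(cnt.values()) for c, cnt in counts.items()}
    grand_total = sum(totals.values())
    word_prob = {c: {w: (1 + counts[c][w]) / (len(vocab) + totals[c]) for w in vocab}
                 for c in context_clusters}
    class_prob = {c: (1 + totals[c]) / (len(context_clusters) + grand_total)
                  for c in context_clusters}
    return word_prob, class_prob

def classify(doc_words, word_prob, class_prob):
    """Label a document (list of word tokens) with the highest-scoring category;
    words outside the vocabulary are simply skipped."""
    best, best_score = None, float("-inf")
    for c in class_prob:
        score = math.log(class_prob[c]) + sum(
            math.log(word_prob[c][w]) for w in doc_words if w in word_prob[c])
        if score > best_score:
            best, best_score = c, score
    return best

clusters = {"autos": ["car", "gear", "car", "driver"], "faculty": ["professor", "fax"]}
vocab = {"car", "gear", "driver", "professor", "fax"}
wp, cp = train_nb(clusters, vocab)
print(classify(["car", "driver"], wp, cp))  # 'autos'
```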
4 Using a Feature Projection Technique for Handling Noisy Data of Machine-labeled Data

We finally obtained labeled data at the document unit: the machine-labeled data. Now we can learn text classifiers using them. But since the machine-labeled data are created by our method, they generally include far more incorrectly labeled documents than human-labeled data. Thus we employ a feature projection technique in our method. By the properties of the feature projection technique, a classifier (the TCFP classifier) can be robust to noisy data (Ko and Seo, 2004). As seen in our experimental results, TCFP showed the highest performance among conventional classifiers when using machine-labeled data.

The TCFP classifier with robustness to noisy data

Here, we briefly describe the TCFP classifier, which uses the feature projection technique (Ko and Seo, 2002; 2004). In this approach, the classification knowledge is represented as sets of projections of the training data on each feature dimension. The classification of a test document is based on the voting of each feature of that test document. That is, the final prediction score is calculated by accumulating the voting scores of all features.

First of all, we must calculate the voting ratio of each category for all features. Since elements with a high TF-IDF value in the projections of a feature are more useful classification criteria for the feature, we use only elements with TF-IDF values above the average TF-IDF value for voting, and the selected elements participate in proportional voting with importance equal to their TF-IDF values. The voting ratio of each category c_j for a feature t_m is calculated by the following formula:

r(c_j, t_m) = Σ_{l ∈ I_m} y(c_j, t_m(l)) · w(t_m, d) / Σ_{l ∈ I_m} w(t_m, d)    (14)

In formula 14, w(t_m, d) is the weight of term t_m in document d, I_m denotes the set of elements selected for voting, and y(c_j, t_m(l)) ∈ {0, 1} is a function whose output value is 1 if the category of element t_m(l) is equal to c_j, and 0 otherwise.

Next, since each feature votes separately on its feature projections, contextual information is missing. Thus we calculate the co-occurrence frequencies of features in the training data and modify the TF-IDF values of two terms t_i and t_j in a test document by the co-occurrence frequency between them; terms with a high co-occurrence frequency value obtain higher term weights. Finally, the voting score of each category c_j for the m-th feature t_m of a test document d is calculated by the following formula:

vs(c_j, t_m) = tw(t_m, d) · r(c_j, t_m) · log( 1 + χ²(t_m) )    (15)

where tw(t_m, d) denotes the term weight modified by the co-occurrence frequency and χ²(t_m) denotes the calculated χ² statistic of t_m.

Table 2. The top micro-avg F1 scores and precision-recall breakeven points of each method.
              OurMethod   OurMethod   OurMethod   OurMethod   OurMethod   OurMethod
              (basis)     (NB)        (Rocchio)   (kNN)       (SVM)       (TCFP)
  Newsgroups  79.36       83.46       83          79.95       82.49       86.19
  WebKB       73.63       73.22       75.28       68.04       73.74       75.47
  Reuters     88.62       88.23       86.26       85.65       87.41       89.09

The outline of the TCFP classifier is as follows:

1. Input: a test document d = <t_1, t_2, ..., t_n>
2. Main process:
   For each feature t_i: calculate tw(t_i, d)
   For each feature t_i:
     For each category c_j:
       vote[c_j] = vote[c_j] + vs(c_j, t_i)   (by Formula 15)
   prediction = argmax_{c_j} vote[c_j]
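A minimal sketch of the voting scheme in Formulas 14 and 15 and of the outline above follows. It assumes the feature projections have already been reduced to the elements selected for voting (those above the average TF-IDF), that the co-occurrence-modified weights tw(t, d) and the χ² statistics are given, and that all names are illustrative rather than the authors' code.

```python
import math
from collections import defaultdict

def voting_ratio(projection, category):
    """Formula (14): proportional vote of one category within a feature's
    projection, where projection is a list of (category, tfidf_weight) elements."""
    total = sum(w for _, w in projection)
    if total == 0.0:
        return 0.0
    return sum(w for c, w in projection if c == category) / total

def tcfp_predict(doc_weights, projections, chi2, categories):
    """Accumulate vs(c, t) = tw(t, d) * r(c, t) * log(1 + chi2(t))  (Formula 15)
    over all features of the test document and return the top-voted category."""
    vote = defaultdict(float)
    for t, tw in doc_weights.items():           # tw(t, d): modified term weight
        proj = projections.get(t, [])
        boost = math.log(1.0 + chi2.get(t, 0.0))
        for c in categories:
            vote[c] += tw * voting_ratio(proj, c) * boost
    return max(categories, key=lambda c: vote[c])

projections = {"car": [("autos", 2.1), ("autos", 1.8), ("faculty", 0.3)],
               "fax": [("faculty", 1.5)]}
print(tcfp_predict({"car": 0.9, "fax": 0.1}, projections,
                   {"car": 12.0, "fax": 5.0}, ["autos", "faculty"]))  # 'autos'
```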
5 Empirical Evaluation

5.1 Data Sets and Experimental Settings

To test our method, we used three different kinds of data sets: UseNet newsgroups (20 Newsgroups), web pages (WebKB), and newswire articles (Reuters 21578). For fair evaluation on Newsgroups and WebKB, we employed the five-fold cross-validation method.

The Newsgroups data set, collected by Ken Lang, contains about 20,000 articles evenly divided among 20 UseNet discussion groups (McCallum and Nigam, 1998). In this paper, we used only 16 categories after removing 4 categories: three miscellaneous categories (talk.politics.misc, talk.religion.misc, and comp.os.ms-windows.misc) and one category with a duplicate meaning (comp.sys.ibm.pc.hardware). The second data set comes from the WebKB project at CMU (Craven et al., 2000). This data set contains web pages gathered from university computer science departments. The Reuters 21578 Distribution 1.0 data set consists of 12,902 articles and 90 topic categories from the Reuters newswire. Like another study (Nigam, 2001), we used the ten most populous categories to identify the news topic.

About 25% of the documents from the training data of each data set were selected as a validation set. We applied a statistical feature selection method (χ² statistics) at a preprocessing stage for each classifier (Yang and Pedersen, 1997). As performance measures, we followed the standard definitions of recall, precision, and the F1 measure. To average performance across categories, we used the micro-averaging method (Yang et al., 2002). Results on Reuters are reported as precision-recall breakeven points, which is a standard information retrieval measure for binary classification (Joachims, 1998). Title words in our experiments are selected according to the category names of each data set (see Table 1 as an example).

5.2 Experimental Results

5.2.1 Observing the Performance According to the Number of Keywords

First of all, we determine the number of keywords in our method using the validation set. The number of keywords is limited to the top m keywords from the ordered list of each category. Figure 1 displays the performance at different numbers of keywords (from 0 to 20) in each data set.

Figure 1. The comparison of performance according to the number of keywords (micro-avg. F1 against the number of keywords for Newsgroups, WebKB, and Reuters).

We set the number of keywords to 2 in Newsgroups, 5 in WebKB, and 3 in Reuters empirically. Generally, we recommend that the number of keywords be between 2 and 5.

5.2.2 Comparing our Method Using TCFP with those Using other Classifiers

In this section, we show the superiority of TCFP over the other classifiers (SVM, kNN, Naive Bayes (NB), Rocchio) on training data with much noise, such as machine-labeled data. As shown in Table 2, we obtained the best performance using TCFP on all three data sets. Let us define the notation: OurMethod(basis) denotes the Naive Bayes classifier using labeled contexts, and OurMethod(NB) denotes the Naive Bayes classifier using machine-labeled data as training data; the same convention applies to the other classifiers. OurMethod(TCFP) achieved higher scores than OurMethod(basis): by 6.83 in Newsgroups, 1.84 in WebKB, and 0.47 in Reuters.

5.2.3 Comparing with the Supervised Naive Bayes Classifier

For this experiment, we consider two possible cases for the labeling task. The first is to label a part of the collected documents and the second is to label all of them. For the first case, we built up a new training data set; it consists of 500 different documents randomly chosen from the appropriate categories, like the experiment in (Slonim et al., 2002). As a result, we report the performance of two kinds of Naive Bayes classifiers, learned from the 500 training documents and from the whole training documents respectively.

Table 3. The comparison of our method and the supervised NB classifier
              OurMethod(TCFP)   NB(500)   NB(All)
  Newsgroups  86.19             72.68     91.72
  WebKB       75.47             74.1      85.29
  Reuters     89.09             82.1      91.64

In Table 3, the results of our method are higher than those of NB(500) and are comparable to those of NB(All) in all data sets. In particular, the result on Reuters came within 2.55 of that of NB(All), even though NB(All) used the whole labeled training data.

5.2.4 Enhancing our Method by Choosing Keywords by Humans

The main problem of our method is that its performance depends on the quality of the keywords and title words. As we have seen in Table 3, we obtained the worst performance on the WebKB data set. In fact, the title words and keywords of each category in the WebKB data set also have high frequency in other categories. We think these factors contribute to the comparatively poor performance of our method. If keywords as well as title words are supplied by humans, our method may achieve higher performance. However, choosing the proper keywords for each category is a very difficult task. Moreover, keywords from developers who have insufficient knowledge about an application domain do not guarantee high performance.
In order to overcome this problem, we propose a hybrid method for choosing keywords. That is, a developer obtains 10 candidate keywords from our keyword extraction method and then chooses proper keywords from among them. Table 4 shows the results on the three data sets.

Table 4. The comparison of our method and the enhanced method
              OurMethod(TCFP)   Enhancing(TCFP)   Improvement
  Newsgroups  86.19             86.23             +0.04
  WebKB       75.47             77.59             +2.12
  Reuters     89.09             89.52             +0.43

As shown in Table 4, we could achieve a significant improvement especially on the WebKB data set. Thus we find that the new method for choosing keywords is more useful in a domain with keywords confused between categories, such as the WebKB data set.

5.2.5 Comparing with a Clustering Technique

In related work, we presented two approaches to using unlabeled data in text categorization: one approach combines unlabeled data and labeled data, and the other uses a clustering technique for text categorization. Since our method does not use any labeled data, it cannot be fairly compared with the former approach. Therefore, we compare our method with a clustering technique. Slonim et al. (2002) proposed a new clustering algorithm (sIB) for unsupervised document classification and verified the superiority of their algorithm; in their experiments, the sIB algorithm was superior to other clustering algorithms. Using the same experimental settings as in Slonim's experiments, we verify that our method outperforms the sIB algorithm.

In our experiments, we used micro-averaged precision as the performance measure and two revised data sets: revised_NG and revised_Reuters. These data sets were revised in the same way as described in Slonim's paper, as follows. In revised_NG, the categories of Newsgroups were united with respect to 10 meta-categories: five comp categories, three politics categories, two sports categories, three religion categories, and two transportation categories, into five big meta-categories. The revised_Reuters used the 10 most frequent categories in the Reuters 21578 corpus under the ModApte split. As shown in Table 5, our method scores 6.65 points higher on revised_NG and 3.2 points higher on revised_Reuters.

Table 5. The comparison of our method and sIB
                   sIB    OurMethod(TCFP)   Improvement
  revised_NG       79.5   86.15             +6.65
  revised_Reuters  85.8   89                +3.2

6 Conclusions and Future Works

This paper has addressed a new unsupervised or semi-unsupervised text categorization method. Though our method uses only title words and unlabeled data, it shows performance reasonably comparable to that of the supervised Naive Bayes classifier. Moreover, it outperforms a clustering method, sIB. Labeled data are expensive, while unlabeled data are inexpensive and plentiful. Therefore, our method is useful for low-cost text categorization. Furthermore, if a text categorization task requires high accuracy, our method can be used as an assistant tool for easily creating labeled training data.

Since our method depends on title words and keywords, we need additional studies on the characteristics of candidate words for title words and keywords in each data set.

Acknowledgement

This work was supported by grant No. R01-2003000-11588-0 from the Basic Research Program of the KOSEF.

References

K. Bennett and A. Demiriz, 1999, Semi-supervised Support Vector Machines, Advances in Neural Information Processing Systems 11, pp. 368-374.
E. Brill, 1995, Transformation-Based Error-driven Learning and Natural Language Processing: A Case Study in Part of Speech Tagging, Computational Linguistics, Vol. 21, No. 4.

K. Cho and J. Kim, 1997, Automatic Text Categorization on Hierarchical Category Structure by using ICF (Inverse Category Frequency) Weighting, In Proc. of KISS conference, pp. 507-510.

M. Craven, D. DiPasquo, D. Freitag, A. McCallum, T. Mitchell, K. Nigam, and S. Slattery, 2000, Learning to construct knowledge bases from the World Wide Web, Artificial Intelligence, 118(1-2), pp. 69-113.

T. Joachims, 1998, Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In Proc. of ECML, pp. 137-142.

Y. Karov and S. Edelman, 1998, Similarity-based Word Sense Disambiguation, Computational Linguistics, Vol. 24, No. 1, pp. 41-60.

Y. Ko and J. Seo, 2000, Automatic Text Categorization by Unsupervised Learning, In Proc. of COLING'2000, pp. 453-459.

Y. Ko and J. Seo, 2002, Text Categorization using Feature Projections, In Proc. of COLING'2002, pp. 467-473.

Y. Ko and J. Seo, 2004, Using the Feature Projection Technique based on the Normalized Voting Method for Text Classification, Information Processing and Management, Vol. 40, No. 2, pp. 191-208.

D.D. Lewis, R.E. Schapire, J.P. Callan, and R. Papka, 1996, Training Algorithms for Linear Text Classifiers. In Proc. of SIGIR'96, pp. 289-297.

Y. Maarek, D. Berry, and G. Kaiser, 1991, An Information Retrieval Approach for Automatically Constructing Software Libraries, IEEE Transactions on Software Engineering, Vol. 17, No. 8, pp. 800-813.

A. McCallum and K. Nigam, 1998, A Comparison of Event Models for Naive Bayes Text Classification. AAAI '98 workshop on Learning for Text Categorization, pp. 41-48.

K. P. Nigam, A. McCallum, S. Thrun, and T. Mitchell, 1998, Learning to Classify Text from Labeled and Unlabeled Documents, In Proc. of AAAI-98.

K. P. Nigam, 2001, Using Unlabeled Data to Improve Text Classification, The dissertation for the degree of Doctor of Philosophy.

N. Slonim, N. Friedman, and N. Tishby, 2002, Unsupervised Document Classification using Sequential Information Maximization, In Proc. of SIGIR'02, pp. 129-136.

Y. Yang and J. P. Pedersen. 1997, Feature selection in statistical learning of text categorization. In Proc. of ICML'97, pp. 412-420.

Y. Yang, S. Slattery, and R. Ghani. 2002, A study of approaches to hypertext categorization, Journal of Intelligent Information Systems, Vol. 18, No. 2.
The Sentimental Factor: Improving Review Classification via Human-Provided Information Philip Beineke∗and Trevor Hastie Dept. of Statistics Stanford University Stanford, CA 94305 Shivakumar Vaithyanathan IBM Almaden Research Center 650 Harry Rd. San Jose, CA 95120-6099 Abstract Sentiment classification is the task of labeling a review document according to the polarity of its prevailing opinion (favorable or unfavorable). In approaching this problem, a model builder often has three sources of information available: a small collection of labeled documents, a large collection of unlabeled documents, and human understanding of language. Ideally, a learning method will utilize all three sources. To accomplish this goal, we generalize an existing procedure that uses the latter two. We extend this procedure by re-interpreting it as a Naive Bayes model for document sentiment. Viewed as such, it can also be seen to extract a pair of derived features that are linearly combined to predict sentiment. This perspective allows us to improve upon previous methods, primarily through two strategies: incorporating additional derived features into the model and, where possible, using labeled data to estimate their relative influence. 1 Introduction Text documents are available in ever-increasing numbers, making automated techniques for information extraction increasingly useful. Traditionally, most research effort has been directed towards “objective” information, such as classification according to topic; however, interest is growing in producing information about the opinions that a document contains; for instance, Morinaga et al. (2002). In March, 2004, the American Association for Artificial Intelligence held a symposium in this area, entitled “Exploring Affect and Attitude in Text.” One task in opinion extraction is to label a review document d according to its prevailing sentiment s ∈{−1, 1} (unfavorable or favorable). Several previous papers have addressed this problem by building models that rely exclusively upon labeled documents, e.g. Pang et al. (2002), Dave et al. (2003). By learning models from labeled data, one can apply familiar, powerful techniques directly; however, in practice it may be difficult to obtain enough labeled reviews to learn model parameters accurately. A contrasting approach (Turney, 2002) relies only upon documents whose labels are unknown. This makes it possible to use a large underlying corpus – in this case, the entire Internet as seen through the AltaVista search engine. As a result, estimates for model parameters are subject to a relatively small amount of random variation. The corresponding drawback to such an approach is that its predictions are not validated on actual documents. In machine learning, it has often been effective to use labeled and unlabeled examples in tandem, e.g. Nigam et al. (2000). Turney’s model introduces the further consideration of incorporating human-provided knowledge about language. In this paper we build models that utilize all three sources: labeled documents, unlabeled documents, and human-provided information. The basic concept behind Turney’s model is quite simple. The “sentiment orientation” (Hatzivassiloglou and McKeown, 1997) of a pair of words is taken to be known. These words serve as “anchors” for positive and negative sentiment. Words that co-occur more frequently with one anchor than the other are themselves taken to be predictive of sentiment. 
As a result, information about a pair of words is generalized to many words, and then to documents.

In the following section, we relate this model with Naive Bayes classification, showing that Turney's classifier is a "pseudo-supervised" approach: it effectively generates a new corpus of labeled documents, upon which it fits a Naive Bayes classifier. This insight allows the procedure to be represented as a probability model that is linear on the logistic scale, which in turn suggests generalizations that are developed in subsequent sections.

2 A Logistic Model for Sentiment

2.1 Turney's Sentiment Classifier

In Turney's model, the "sentiment orientation" σ of word w is estimated as follows.

σ̂(w) = log [ ( N(w,excellent) / N_excellent ) / ( N(w,poor) / N_poor ) ]    (1)

Here, N_a is the total number of sites on the Internet that contain an occurrence of a, a feature that can be a word type or a phrase. N(w,a) is the number of sites in which features w and a appear "near" each other, i.e. in the same passage of text, within a span of ten words. Both numbers are obtained from the hit count that results from a query of the AltaVista search engine.

The rationale for this estimate is that words that express similar sentiment often co-occur, while words that express conflicting sentiment co-occur more rarely. Thus, a word that co-occurs more frequently with excellent than poor is estimated to have a positive sentiment orientation.

To extrapolate from words to documents, the estimated sentiment ŝ ∈ {−1, 1} of a review document d is the sign of the average sentiment orientation of its constituent features.¹ To represent this estimate formally, we introduce the following notation: W is a "dictionary" of features: (w_1, ..., w_p). Each feature's respective sentiment orientation is represented as an entry in the vector σ̂ of length p:

σ̂_j = σ̂(w_j)    (2)

Given a collection of n review documents, the i-th document d_i is also represented as a vector of length p, with d_ij equal to the number of times that feature w_j occurs in d_i. The length of a document is its total number of features, |d_i| = Σ_{j=1}^{p} d_ij.

Turney's classifier for the i-th document's sentiment s_i can now be written:

ŝ_i = sign( Σ_{j=1}^{p} σ̂_j d_ij / |d_i| )    (3)

Using a carefully chosen collection of features, this classifier produces correct results on 65.8% of a collection of 120 movie reviews, where 60 are labeled positive and 60 negative. Although this is not a particularly encouraging result, movie reviews tend to be a difficult domain. Accuracy on sentiment classification in other domains exceeds 80% (Turney, 2002).

¹ Note that not all words or phrases need to be considered as features. In Turney (2002), features are selected according to part-of-speech labels.
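A minimal sketch of Equations (1)-(3) in Python follows. The hit counts would come from search-engine queries as described above; the numbers in the example are invented purely for illustration, and the helper names are ours.

```python
import math

def sentiment_orientation(n_w_anchor_pos, n_w_anchor_neg, n_anchor_pos, n_anchor_neg):
    """Equation (1): log ratio of co-occurrence rates with the two anchors."""
    return math.log((n_w_anchor_pos / n_anchor_pos) /
                    (n_w_anchor_neg / n_anchor_neg))

def classify_review(feature_counts, sigma_hat):
    """Equation (3): sign of the average sentiment orientation of the
    document's features; feature_counts maps feature -> count d_ij."""
    length = sum(feature_counts.values())
    score = sum(c * sigma_hat.get(w, 0.0) for w, c in feature_counts.items())
    return 1 if score / length > 0 else -1

# Invented hit counts: N(w, excellent), N(w, poor), N_excellent, N_poor.
sigma = {"wonderful": sentiment_orientation(300, 40, 10000, 8000),
         "boring": sentiment_orientation(25, 200, 10000, 8000)}
print(classify_review({"wonderful": 2, "boring": 1}, sigma))  # 1 (favorable)
```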
2.2 Naive Bayes Classification

Bayes' Theorem provides a convenient framework for predicting a binary response s ∈ {−1, 1} from a feature vector x:

Pr(s = 1 | x) = Pr(x | s = 1) π_1 / Σ_{k ∈ {−1,1}} Pr(x | s = k) π_k    (4)

For a labeled sample of data (x_i, s_i), i = 1, ..., n, a class's marginal probability π_k can be estimated trivially as the proportion of training samples belonging to the class. Thus the critical aspect of classification by Bayes' Theorem is to estimate the conditional distribution of x given s. Naive Bayes simplifies this problem by making a "naive" assumption: within a class, the different feature values are taken to be independent of one another.

Pr(x | s) = Π_j Pr(x_j | s)    (5)

As a result, the estimation problem is reduced to univariate distributions.

• Naive Bayes for a Multinomial Distribution

We consider a "bag of words" model for a document that belongs to class k, where features are assumed to result from a sequence of |d_i| independent multinomial draws with outcome probability vector q_k = (q_k1, ..., q_kp). Given a collection of documents with labels, (d_i, s_i), i = 1, ..., n, a natural estimate for q_kj is the fraction of all features in documents of class k that equal w_j:

q̂_kj = Σ_{i: s_i = k} d_ij / Σ_{i: s_i = k} |d_i|    (6)

In the two-class case, the logit transformation provides a revealing representation of the class posterior probabilities of the Naive Bayes model.

ˆlogit(s | d) ≜ log [ P̂r(s = 1 | d) / P̂r(s = −1 | d) ]    (7)
            = log( π̂_1 / π̂_−1 ) + Σ_{j=1}^{p} d_j log( q̂_1j / q̂_−1j )    (8)
            = α̂_0 + Σ_{j=1}^{p} d_j α̂_j    (9)

where

α̂_0 = log( π̂_1 / π̂_−1 )    (10)
α̂_j = log( q̂_1j / q̂_−1j )    (11)

Observe that the estimate for the logit in Equation 9 has a simple structure: it is a linear function of d. Models that take this form are commonplace in classification.

2.3 Turney's Classifier as Naive Bayes

Although Naive Bayes classification requires a labeled corpus of documents, we show in this section that Turney's approach corresponds to a Naive Bayes model. The necessary documents and their corresponding labels are built from the spans of text that surround the anchor words excellent and poor. More formally, a labeled corpus may be produced by the following procedure:

1. For a particular anchor a_k, locate all of the sites on the Internet where it occurs.
2. From all of the pages within a site, gather the features that occur within ten words of an occurrence of a_k, with any particular feature included at most once. This list comprises a new "document," representing that site.²
3. Label this document +1 if a_k = excellent, −1 if a_k = poor.

When a Naive Bayes model is fit to the corpus described above, it results in a vector α̂ of length p, consisting of coefficient estimates for all features. In Propositions 1 and 2 below, we show that Turney's estimates of sentiment orientation σ̂ are closely related to α̂, and that both estimates produce identical classifiers.

Proposition 1

α̂ = C_1 σ̂    (12)

where

C_1 = ( N_exc. / Σ_{i: s_i = 1} |d_i| ) / ( N_poor / Σ_{i: s_i = −1} |d_i| )    (13)

Proof: Because a feature is restricted to at most one occurrence in a document,

Σ_{i: s_i = k} d_ij = N(w, a_k)    (14)

Then from Equations 6 and 11:

α̂_j = log( q̂_1j / q̂_−1j )    (15)
    = log [ ( N(w,exc.) / Σ_{i: s_i = 1} |d_i| ) / ( N(w,poor) / Σ_{i: s_i = −1} |d_i| ) ]    (16)
    = C_1 σ̂_j    (17)    □

Proposition 2

Turney's classifier is identical to a Naive Bayes classifier fit on this corpus, with π_1 = π_−1 = 0.5.

Proof: A Naive Bayes classifier typically assigns an observation to its most probable class. This is equivalent to classifying according to the sign of the estimated logit. So for any document, we must show that both the logit estimate and the average sentiment orientation are identical in sign. When π_1 = 0.5, α_0 = 0. Thus the estimated logit is

ˆlogit(s | d) = Σ_{j=1}^{p} α̂_j d_j    (18)
            = C_1 Σ_{j=1}^{p} σ̂_j d_j    (19)

This is a positive multiple of Turney's classifier (Equation 3), so they clearly match in sign.    □

² If both anchors occur on a site, then there will actually be two documents, one for each sentiment.
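To make the linear structure of Equations (6)-(11) explicit, here is a minimal sketch that fits the multinomial Naive Bayes model on labeled count vectors and evaluates the logit. It applies no smoothing, exactly as in Equation (6), so it assumes every feature occurs at least once in each class; the names are illustrative.

```python
import math
from collections import Counter

def nb_logit_coefficients(docs, labels, features):
    """docs: list of {feature: count}; labels: list of +1/-1.
    Returns (alpha_0, {feature: alpha_j}) as in Equations (10)-(11)."""
    totals, lengths = {1: Counter(), -1: Counter()}, {1: 0, -1: 0}
    for d, s in zip(docs, labels):
        totals[s].update(d)
        lengths[s] += sum(d.values())
    pi = {s: labels.count(s) / len(labels) for s in (1, -1)}
    q = {s: {j: totals[s][j] / lengths[s] for j in features} for s in (1, -1)}
    alpha0 = math.log(pi[1] / pi[-1])                                  # Eq. (10)
    alpha = {j: math.log(q[1][j] / q[-1][j]) for j in features}        # Eq. (11)
    return alpha0, alpha

def estimated_logit(doc, alpha0, alpha):
    """Equation (9): a linear function of the document's feature counts."""
    return alpha0 + sum(doc.get(j, 0) * a_j for j, a_j in alpha.items())

docs = [{"great": 2, "plot": 1, "dull": 1}, {"plot": 2, "dull": 2, "great": 1}]
a0, a = nb_logit_coefficients(docs, [1, -1], ["great", "plot", "dull"])
print(estimated_logit({"great": 1, "plot": 1}, a0, a) > 0)  # True
```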
3 A More Versatile Model

3.1 Desired Extensions

By understanding Turney's model within a Naive Bayes framework, we are able to interpret its output as a probability model for document classes. In the presence of labeled examples, this insight also makes it possible to estimate the intercept term α_0. Further, we are able to view this model as a member of a broad class: linear estimates for the logit. This understanding facilitates further extensions, in particular, utilizing the following:

1. Labeled documents
2. More anchor words

The reason for using labeled documents is straightforward; labels offer validation for any chosen model. Using additional anchors is desirable in part because it is inexpensive to produce lists of words that are believed to reflect positive sentiment, perhaps by reference to a thesaurus. In addition, a single anchor may be at once too general and too specific.

An anchor may be too general in the sense that many common words have multiple meanings, and not all of them reflect a chosen sentiment orientation. For example, poor can refer to an objective economic state that does not necessarily express negative sentiment. As a result, a word such as income appears 4.18 times as frequently with poor as with excellent, even though it does not convey negative sentiment. Similarly, excellent has a technical meaning in antiquity trading, which causes it to appear 3.34 times as frequently with furniture.

An anchor may also be too specific, in the sense that there are a variety of different ways to express sentiment, and a single anchor may not capture them all. So a word like pretentious carries a strong negative sentiment but co-occurs only slightly more frequently (1.23 times) with excellent than poor. Likewise, fascination generally reflects a positive sentiment, yet it appears slightly more frequently (1.06 times) with poor than excellent.

3.2 Other Sources of Unlabeled Data

The use of additional anchors has a drawback in terms of being resource-intensive. A feature set may contain many words and phrases, and each of them requires a separate AltaVista query for every chosen anchor word. In the case of 30,000 features and ten queries per minute, downloads for a single anchor word require over two days of data collection.

An alternative approach is to access a large collection of documents directly. Then all co-occurrences can be counted in a single pass. Although this approach dramatically reduces the amount of data available, it does offer several advantages.

• Increased Query Options: Search engine queries of the form phrase NEAR anchor may not produce all of the desired co-occurrence counts. For instance, one may wish to run queries that use stemmed words, hyphenated words, or punctuation marks. One may also wish to modify the definition of NEAR, or to count individual co-occurrences, rather than counting sites that contain at least one co-occurrence.

• Topic Matching: Across the Internet as a whole, features may not exhibit the same correlation structure as they do within a specific domain. By restricting attention to documents within a domain, one may hope to avoid co-occurrences that are primarily relevant to other subjects.

• Reproducibility: On a fixed corpus, counts of word occurrences produce consistent results. Due to the dynamic nature of the Internet, numbers may fluctuate.

Figure 1: Correlation between Supervised and Unsupervised Coefficient Estimates
  Num. of Labeled Occurrences   Correlation
  1 - 5                         0.022
  6 - 10                        0.082
  11 - 25                       0.113
  26 - 50                       0.183
  51 - 75                       0.283
  76 - 100                      0.316

3.3 Co-Occurrences and Derived Features

The Naive Bayes coefficient estimate α̂_j may itself be interpreted as an intercept term plus a linear combination of features of the form log N(w_j, a_k):

α̂_j = log [ ( N(j,exc.) / Σ_{i: s_i = 1} |d_i| ) / ( N(j,pr.) / Σ_{i: s_i = −1} |d_i| ) ]    (20)
    = log C_1 + log N(j,exc.) − log N(j,pr.)    (21)
We generalize this estimate as follows: for a collection of K different anchor words, we consider a general linear combination of logged co-occurrence counts.

α̂_j = Σ_{k=1}^{K} γ_k log N(w_j, a_k)    (22)

In the special case of a Naive Bayes model, γ_k = 1 when the k-th anchor word a_k conveys positive sentiment, and −1 when it conveys negative sentiment. Replacing the logit estimate in Equation 9 with an estimate of this form, the model becomes:

ˆlogit(s | d) = α̂_0 + Σ_{j=1}^{p} d_j α̂_j    (23)
            = α̂_0 + Σ_{j=1}^{p} Σ_{k=1}^{K} d_j γ_k log N(w_j, a_k)    (24)
            = γ_0 + Σ_{k=1}^{K} γ_k Σ_{j=1}^{p} d_j log N(w_j, a_k)    (25)

This model has only K + 1 parameters: γ_0, γ_1, ..., γ_K. These can be learned straightforwardly from labeled documents by a method such as logistic regression.

Observe that a document receives a score Σ_{j=1}^{p} d_j log N(w_j, a_k) for each anchor word a_k. Effectively, the predictor variables in this model are no longer the counts of the original features d_j. Rather, they are inner products between the entire feature vector d and the logged co-occurrence vector N(w, a_k). In this respect, the vector of logged co-occurrences is used to produce derived features.
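The derived features of Equations (22)-(25) reduce each document to K scores, one per anchor word, after which γ_0, ..., γ_K can be fit by any logistic-regression routine on labeled documents. The sketch below only builds the derived features and evaluates the resulting logit for given γ values; co-occurrence counts are assumed to be available, unseen (word, anchor) pairs default to a count of 1 (so their log is 0), and the names are illustrative.

```python
import math

def derived_features(doc_counts, cooccurrence):
    """One score per anchor a_k:  sum_j d_j * log N(w_j, a_k)."""
    feats = {}
    for anchor, counts_for_anchor in cooccurrence.items():
        feats[anchor] = sum(d_j * math.log(counts_for_anchor.get(w, 1))
                            for w, d_j in doc_counts.items())
    return feats

def anchor_logit(doc_counts, cooccurrence, gamma0, gamma):
    """Equation (25): gamma_0 plus a linear combination of the derived features."""
    feats = derived_features(doc_counts, cooccurrence)
    return gamma0 + sum(gamma[a] * feats[a] for a in gamma)

# Invented co-occurrence counts for two anchors; gamma as in the Naive Bayes
# special case (+1 for the positive anchor, -1 for the negative one).
cooc = {"excellent": {"wonderful": 300, "boring": 25},
        "poor": {"wonderful": 40, "boring": 200}}
doc = {"wonderful": 2, "boring": 1}
print(anchor_logit(doc, cooc, 0.0, {"excellent": 1.0, "poor": -1.0}) > 0)  # True
```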
Figure 2 is a scatterplot of the same coefficient estimates for word types that appear in 51 to 100 documents. The great majority of features do not have large coefficients, but even for the ones that do, there is not a tight correlation. 4.2 Additional Anchors We wish to learn how our model performance depends on the choice and number of anchor words. Selecting from WordNet synonym lists (Fellbaum, 1998), we choose five positive anchor words and five negative (Figure 3). This produces a total of 25 different possible pairs for use in producing coefficient estimates. Figure 4 shows the classification performance of unsupervised procedures using the 1400 labeled Pang documents as test data. Coefficients ˆαj are estimated as described in Equation 22. Several different experimental conditions are applied. The methods labeled ”Count” use the original un-normalized coefficients, while those labeled “Norm.” have been normalized so that the number of co-occurrences with each anchor have identical variance. Results are shown when rare words (with three or fewer occurrences in the labeled corpus) are included and omitted. The methods “pair” and “10” describe whether all ten anchor coefficients are used at once, or just the ones that correspond to a single pair of Method Feat. Misclass. St.Dev Count Pair >3 39.6% 2.9% Norm. Pair >3 38.4% 3.0% Count Pair all 37.4% 3.1% Norm. Pair all 37.3% 3.0% Count 10 > 3 36.4% – Norm. 10 > 3 35.4% – Count 10 all 34.6% – Norm. 10 all 34.1% – Figure 4: Classification Error Rates for Different Unsupervised Approaches anchor words. For anchor pairs, the mean error across all 25 pairs is reported, along with its standard deviation. Patterns are consistent across the different conditions. A relatively large improvement comes from using all ten anchor words. Smaller benefits arise from including rare words and from normalizing model coefficients. Models that use the original pair of anchor words, excellent and poor, perform slightly better than the average pair. Whereas mean performance ranges from 37.3% to 39.6%, misclassification rates for this pair of anchors ranges from 37.4% to 38.1%. 4.3 A Smaller Unlabeled Corpus As described in Section 3.2, there are several reasons to explore the use of a smaller unlabeled corpus, rather than the entire Internet. In our experiments, we use additional movie reviews as our documents. For this domain, Pang makes available 27,886 reviews.4 Because this corpus offers dramatically fewer instances of anchor words, we modify our estimation procedure. Rather than discarding words that rarely co-occur with anchors, we use the same feature set as before and regularize estimates by the same procedure used in the Naive Bayes procedure described earlier. Using all features, and ten anchor words with normalized scores, test error is 35.0%. This suggests that comparable results can be attained while referring to a considerably smaller unlabeled corpus. Rather than requiring several days of downloads, the count of nearby co-occurrences was completed in under ten minutes. Because this procedure enables fast access to counts, we explore the possibility of dramatically enlarging our collection of anchor words. We col4This corpus is freely available on the following website: http://www.cs.cornell.edu/people/pabo/movie-review-data/. 100 200 300 400 500 600 0.30 0.32 0.34 0.36 0.38 0.40 Num. of Labeled Documents Classif. Error Misclassification versus Sample Size Figure 5: Misclassification with Labeled Documents. 
The solid curve represents a latent factor model with estimated coefficients. The dashed curve uses a Naive Bayes classifier. The two horizontal lines represent unsupervised estimates; the upper one is for the original unsupervised classifier, and the lower is for the most successful unsupervised method. lect data for the complete set of WordNet synonyms for the words good, best, bad, boring, and dreadful. This yields a total of 83 anchor words, 35 positive and 48 negative. When all of these anchors are used in conjunction, test error increases to 38.3%. One possible difficulty in using this automated procedure is that some synonyms for a word do not carry the same sentiment orientation. For instance, intense is listed as a synonym for bad, even though its presence in a movie review is a strongly positive indication.5 4.4 Methods with Supervision As demonstrated in Section 3.3, each anchor word ak is associated with a coefficient γk. In unsupervised models, these coefficients are assumed to be known. However, when labeled documents are available, it may be advantageous to estimate them. Figure 5 compares the performance of a model with estimated coefficient vector γ, as opposed to unsupervised models and a traditional supervised approach. When a moderate number of labeled documents are available, it offers a noticeable improvement. The supervised method used for reference in this case is the Naive Bayes model that is described in section 4.1. Naive Bayes classification is of particular interest here because it converges faster to its asymptotic optimum than do discriminative methods (Ng, A. Y. and Jordan, M., 2002). Further, with 5In the labeled Pang corpus, intense appears in 38 positive reviews and only 6 negative ones. a larger number of labeled documents, its performance on this corpus is comparable to that of Support Vector Machines and Maximum Entropy models (Pang et al., 2002). The coefficient vector γ is estimated by regularized logistic regression. This method has been used in other text classification problems, as in Zhang and Yang (2003). In our case, the regularization6 is introduced in order to enforce the beliefs that: γ1 ≈ γ2, if a1, a2 synonyms (30) γ1 ≈ −γ2, if a1, a2 antonyms (31) For further information on regularized model fitting, see for instance, Hastie et al. (2001). 5 Conclusion In business settings, there is growing interest in learning product reputations from the Internet. For such problems, it is often difficult or expensive to obtain labeled data. As a result, a change in modeling strategies is needed, towards approaches that require less supervision. In this paper we provide a framework for allowing human-provided information to be combined with unlabeled documents and labeled documents. We have found that this framework enables improvements over existing techniques, both in terms of the speed of model estimation and in classification accuracy. As a result, we believe that this is a promising new approach to problems of practical importance. References Kushal Dave, Steve Lawrence, and David M. Pennock. 2003. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. C. Fellbaum. 1998. Wordnet an electronic lexical database. T. Hastie, R. Tibshirani, and J. Friedman. 2001. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag. Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1997. Predicting the semantic orientation of adjectives. In Philip R. 
Cohen and Wolfgang Wahlster, editors, Proceedings of the Thirty-Fifth Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics, pages 174–181, Somerset, New Jersey. Association for Computational Linguistics. 6By cross-validation, we choose the regularization term λ = 1.5/sqrt(n), where n is the number of labeled documents. Satoshi Morinaga, Kenji Yamanishi, Kenji Tateishi, and Toshikazu Fukushima. 2002. Mining product reputations on the web. Ng, A. Y. and Jordan, M. 2002. On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. Advances in Neural Information Processing Systems, 14. Kamal Nigam, Andrew K. McCallum, Sebastian Thrun, and Tom M. Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103–134. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP). P.D. Turney and M.L. Littman. 2002. Unsupervised learning of semantic orientation from a hundredbillion-word corpus. Peter Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL’02), pages 417– 424, Philadelphia, Pennsylvania. Association for Computational Linguistics. Janyce Wiebe. 2000. Learning subjective adjectives from corpora. In Proc. 17th National Conference on Artificial Intelligence (AAAI-2000), Austin, Texas. Jian Zhang and Yiming Yang. 2003. ”robustness of regularized linear classification methods in text categorization”. In Proceedings of the 26th Annual International ACM SIGIR Conference (SIGIR 2003).
A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts Bo Pang and Lillian Lee Department of Computer Science Cornell University Ithaca, NY 14853-7501 {pabo,llee}@cs.cornell.edu Abstract Sentiment analysis seeks to identify the viewpoint(s) underlying a text span; an example application is classifying a movie review as “thumbs up” or “thumbs down”. To determine this sentiment polarity, we propose a novel machine-learning method that applies text-categorization techniques to just the subjective portions of the document. Extracting these portions can be implemented using efficient techniques for finding minimum cuts in graphs; this greatly facilitates incorporation of cross-sentence contextual constraints. 1 Introduction The computational treatment of opinion, sentiment, and subjectivity has recently attracted a great deal of attention (see references), in part because of its potential applications. For instance, informationextraction and question-answering systems could flag statements and queries regarding opinions rather than facts (Cardie et al., 2003). Also, it has proven useful for companies, recommender systems, and editorial sites to create summaries of people’s experiences and opinions that consist of subjective expressions extracted from reviews (as is commonly done in movie ads) or even just a review’s polarity — positive (“thumbs up”) or negative (“thumbs down”). Document polarity classification poses a significant challenge to data-driven methods, resisting traditional text-categorization techniques (Pang, Lee, and Vaithyanathan, 2002). Previous approaches focused on selecting indicative lexical features (e.g., the word “good”), classifying a document according to the number of such features that occur anywhere within it. In contrast, we propose the following process: (1) label the sentences in the document as either subjective or objective, discarding the latter; and then (2) apply a standard machine-learning classifier to the resulting extract. This can prevent the polarity classifier from considering irrelevant or even potentially misleading text: for example, although the sentence “The protagonist tries to protect her good name” contains the word “good”, it tells us nothing about the author’s opinion and in fact could well be embedded in a negative movie review. Also, as mentioned above, subjectivity extracts can be provided to users as a summary of the sentiment-oriented content of the document. Our results show that the subjectivity extracts we create accurately represent the sentiment information of the originating documents in a much more compact form: depending on choice of downstream polarity classifier, we can achieve highly statistically significant improvement (from 82.8% to 86.4%) or maintain the same level of performance for the polarity classification task while retaining only 60% of the reviews’ words. Also, we explore extraction methods based on a minimum cut formulation, which provides an efficient, intuitive, and effective means for integrating inter-sentencelevel contextual information with traditional bag-ofwords features. 2 Method 2.1 Architecture One can consider document-level polarity classification to be just a special (more difficult) case of text categorization with sentiment- rather than topic-based categories. 
Hence, standard machinelearning classification techniques, such as support vector machines (SVMs), can be applied to the entire documents themselves, as was done by Pang, Lee, and Vaithyanathan (2002). We refer to such classification techniques as default polarity classifiers. However, as noted above, we may be able to improve polarity classification by removing objective sentences (such as plot summaries in a movie review). We therefore propose, as depicted in Figure 1, to first employ a subjectivity detector that determines whether each sentence is subjective or not: discarding the objective ones creates an extract that should better represent a review’s subjective content to a default polarity classifier. s1 s2 s3 s4 s_n +/− s4 s1 subjectivity detector yes no no yes n−sentence review subjective sentence? m−sentence extract (m<=n) review? positive or negative default classifier polarity subjectivity extraction Figure 1: Polarity classification via subjectivity detection. To our knowledge, previous work has not integrated sentence-level subjectivity detection with document-level sentiment polarity. Yu and Hatzivassiloglou (2003) provide methods for sentencelevel analysis and for determining whether a document is subjective or not, but do not combine these two types of algorithms or consider document polarity classification. The motivation behind the singlesentence selection method of Beineke et al. (2004) is to reveal a document’s sentiment polarity, but they do not evaluate the polarity-classification accuracy that results. 2.2 Context and Subjectivity Detection As with document-level polarity classification, we could perform subjectivity detection on individual sentences by applying a standard classification algorithm on each sentence in isolation. However, modeling proximity relationships between sentences would enable us to leverage coherence: text spans occurring near each other (within discourse boundaries) may share the same subjectivity status, other things being equal (Wiebe, 1994). We would therefore like to supply our algorithms with pair-wise interaction information, e.g., to specify that two particular sentences should ideally receive the same subjectivity label but not state which label this should be. Incorporating such information is somewhat unnatural for classifiers whose input consists simply of individual feature vectors, such as Naive Bayes or SVMs, precisely because such classifiers label each test item in isolation. One could define synthetic features or feature vectors to attempt to overcome this obstacle. However, we propose an alternative that avoids the need for such feature engineering: we use an efficient and intuitive graph-based formulation relying on finding minimum cuts. Our approach is inspired by Blum and Chawla (2001), although they focused on similarity between items (the motivation being to combine labeled and unlabeled data), whereas we are concerned with physical proximity between the items to be classified; indeed, in computer vision, modeling proximity information via graph cuts has led to very effective classification (Boykov, Veksler, and Zabih, 1999). 2.3 Cut-based classification Figure 2 shows a worked example of the concepts in this section. Suppose we have n items x1, . . . 
, xn to divide into two classes C1 and C2, and we have access to two types of information: • Individual scores indj(xi): non-negative estimates of each xi’s preference for being in Cj based on just the features of xi alone; and • Association scores assoc(xi, xk): non-negative estimates of how important it is that xi and xk be in the same class.1 We would like to maximize each item’s “net happiness”: its individual score for the class it is assigned to, minus its individual score for the other class. But, we also want to penalize putting tightlyassociated items into different classes. Thus, after some algebra, we arrive at the following optimization problem: assign the xis to C1 and C2 so as to minimize the partition cost X x∈C1 ind2(x)+ X x∈C2 ind1(x)+ X xi∈C1, xk∈C2 assoc(xi, xk). The problem appears intractable, since there are 2n possible binary partitions of the xi’s. However, suppose we represent the situation in the following manner. Build an undirected graph G with vertices {v1, . . . , vn, s, t}; the last two are, respectively, the source and sink. Add n edges (s, vi), each with weight ind1(xi), and n edges (vi, t), each with weight ind2(xi). Finally, add n 2  edges (vi, vk), each with weight assoc(xi, xk). Then, cuts in G are defined as follows: Definition 1 A cut (S, T) of G is a partition of its nodes into sets S = {s} ∪S′ and T = {t} ∪T ′, where s ̸∈S′, t ̸∈T ′. Its cost cost(S, T) is the sum of the weights of all edges crossing from S to T. A minimum cut of G is one of minimum cost. 1Asymmetry is allowed, but we used symmetric scores. [ ] s t Y M N 2 ind (Y) [.2] 1 ind (Y) [.8] 2 ind (M) [.5] 1 ind (M) [.5] [.1] assoc(Y,N) 2 ind (N) [.9] 1 ind (N) assoc(M,N) assoc(Y,M) [.2] [1.0] [.1] C1 Individual Association Cost penalties penalties {Y,M} .2 + .5 + .1 .1 + .2 1.1 (none) .8 + .5 + .1 0 1.4 {Y,M,N} .2 + .5 + .9 0 1.6 {Y} .2 + .5 + .1 1.0 + .1 1.9 {N} .8 + .5 + .9 .1 + .2 2.5 {M} .8 + .5 + .1 1.0 + .2 2.6 {Y,N} .2 + .5 + .9 1.0 + .2 2.8 {M,N} .8 + .5 + .9 1.0 + .1 3.3 Figure 2: Graph for classifying three items. Brackets enclose example values; here, the individual scores happen to be probabilities. Based on individual scores alone, we would put Y (“yes”) in C1, N (“no”) in C2, and be undecided about M (“maybe”). But the association scores favor cuts that put Y and M in the same class, as shown in the table. Thus, the minimum cut, indicated by the dashed line, places M together with Y in C1. Observe that every cut corresponds to a partition of the items and has cost equal to the partition cost. Thus, our optimization problem reduces to finding minimum cuts. Practical advantages As we have noted, formulating our subjectivity-detection problem in terms of graphs allows us to model item-specific and pairwise information independently. Note that this is a very flexible paradigm. For instance, it is perfectly legitimate to use knowledge-rich algorithms employing deep linguistic knowledge about sentiment indicators to derive the individual scores. And we could also simultaneously use knowledgelean methods to assign the association scores. Interestingly, Yu and Hatzivassiloglou (2003) compared an individual-preference classifier against a relationship-based method, but didn’t combine the two; the ability to coordinate such algorithms is precisely one of the strengths of our approach. 
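The reduction to minimum cuts can be verified mechanically on the worked example of Figure 2. The sketch below uses the networkx library (our choice, not the paper's implementation) and models each undirected association edge as a pair of directed arcs of equal capacity; it recovers the cut of cost 1.1 that places Y and M in C1.

```python
# The three-item example of Figure 2, solved as an s-t minimum cut.
import networkx as nx

ind1 = {"Y": 0.8, "M": 0.5, "N": 0.1}                 # preference for C1
ind2 = {"Y": 0.2, "M": 0.5, "N": 0.9}                 # preference for C2
assoc = {("Y", "M"): 1.0, ("Y", "N"): 0.1, ("M", "N"): 0.2}

G = nx.DiGraph()
for v in ind1:
    G.add_edge("s", v, capacity=ind1[v])              # crosses the cut iff v ends up in C2
    G.add_edge(v, "t", capacity=ind2[v])              # crosses the cut iff v ends up in C1
for (u, v), w in assoc.items():                       # undirected association edges
    G.add_edge(u, v, capacity=w)
    G.add_edge(v, u, capacity=w)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
print(round(cut_value, 2), source_side - {"s"}, sink_side - {"t"})
# -> 1.1 {'Y', 'M'} {'N'}   (the minimum cut of Figure 2)
```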
But a crucial advantage specific to the utilization of a minimum-cut-based approach is that we can use maximum-flow algorithms with polynomial asymptotic running times — and near-linear running times in practice — to exactly compute the minimumcost cut(s), despite the apparent intractability of the optimization problem (Cormen, Leiserson, and Rivest, 1990; Ahuja, Magnanti, and Orlin, 1993).2 In contrast, other graph-partitioning problems that have been previously used to formulate NLP classification problems3 are NP-complete (Hatzivassiloglou and McKeown, 1997; Agrawal et al., 2003; Joachims, 2003). 2Code available at http://www.avglab.com/andrew/soft.html. 3Graph-based approaches to general clustering problems are too numerous to mention here. 3 Evaluation Framework Our experiments involve classifying movie reviews as either positive or negative, an appealing task for several reasons. First, as mentioned in the introduction, providing polarity information about reviews is a useful service: witness the popularity of www.rottentomatoes.com. Second, movie reviews are apparently harder to classify than reviews of other products (Turney, 2002; Dave, Lawrence, and Pennock, 2003). Third, the correct label can be extracted automatically from rating information (e.g., number of stars). Our data4 contains 1000 positive and 1000 negative reviews all written before 2002, with a cap of 20 reviews per author (312 authors total) per category. We refer to this corpus as the polarity dataset. Default polarity classifiers We tested support vector machines (SVMs) and Naive Bayes (NB). Following Pang et al. (2002), we use unigram-presence features: the ith coordinate of a feature vector is 1 if the corresponding unigram occurs in the input text, 0 otherwise. (For SVMs, the feature vectors are length-normalized). Each default documentlevel polarity classifier is trained and tested on the extracts formed by applying one of the sentencelevel subjectivity detectors to reviews in the polarity dataset. Subjectivity dataset To train our detectors, we need a collection of labeled sentences. Riloff and Wiebe (2003) state that “It is [very hard] to obtain collections of individual sentences that can be easily identified as subjective or objective”; the polarity-dataset sentences, for example, have not 4Available at www.cs.cornell.edu/people/pabo/moviereview-data/ (review corpus version 2.0). been so annotated.5 Fortunately, we were able to mine the Web to create a large, automaticallylabeled sentence corpus6. To gather subjective sentences (or phrases), we collected 5000 moviereview snippets (e.g., “bold, imaginative, and impossible to resist”) from www.rottentomatoes.com. To obtain (mostly) objective data, we took 5000 sentences from plot summaries available from the Internet Movie Database (www.imdb.com). We only selected sentences or snippets at least ten words long and drawn from reviews or plot summaries of movies released post-2001, which prevents overlap with the polarity dataset. Subjectivity detectors As noted above, we can use our default polarity classifiers as “basic” sentencelevel subjectivity detectors (after retraining on the subjectivity dataset) to produce extracts of the original reviews. We also create a family of cut-based subjectivity detectors; these take as input the set of sentences appearing in a single document and determine the subjectivity status of all the sentences simultaneously using per-item and pairwise relationship information. 
Specifically, for a given document, we use the construction in Section 2.2 to build a graph wherein the source s and sink t correspond to the class of subjective and objective sentences, respectively, and each internal node vi corresponds to the document’s ith sentence si. We can set the individual scores ind1(si) to PrNB sub (si) and ind2(si) to 1 −PrNB sub (si), as shown in Figure 3, where PrNB sub (s) denotes Naive Bayes’ estimate of the probability that sentence s is subjective; or, we can use the weights produced by the SVM classifier instead.7 If we set all the association scores to zero, then the minimum-cut classification of the sentences is the same as that of the basic subjectivity detector. Alternatively, we incorporate the degree of proximity between pairs of sentences, controlled by three parameters. The threshold T specifies the maximum distance two sentences can be separated by and still be considered proximal. The 5We therefore could not directly evaluate sentenceclassification accuracy on the polarity dataset. 6Available at www.cs.cornell.edu/people/pabo/moviereview-data/ , sentence corpus version 1.0. 7We converted SVM output di, which is a signed distance (negative=objective) from the separating hyperplane, to nonnegative numbers by ind1(si) def = ( 1 di > 2; (2 + di)/4 −2 ≤di ≤2; 0 di < −2. and ind2(si) = 1 −ind1(si). Note that scaling is employed only for consistency; the algorithm itself does not require probabilities for individual scores. non-increasing function f(d) specifies how the influence of proximal sentences decays with respect to distance d; in our experiments, we tried f(d) = 1, e1−d, and 1/d2. The constant c controls the relative influence of the association scores: a larger c makes the minimum-cut algorithm more loath to put proximal sentences in different classes. With these in hand8, we set (for j > i) assoc(si, sj) def = n f(j −i) · c if (j −i) ≤T; 0 otherwise. 4 Experimental Results Below, we report average accuracies computed by ten-fold cross-validation over the polarity dataset. Section 4.1 examines our basic subjectivity extraction algorithms, which are based on individualsentence predictions alone. Section 4.2 evaluates the more sophisticated form of subjectivity extraction that incorporates context information via the minimum-cut paradigm. As we will see, the use of subjectivity extracts can in the best case provide satisfying improvement in polarity classification, and otherwise can at least yield polarity-classification accuracies indistinguishable from employing the full review. At the same time, the extracts we create are both smaller on average than the original document and more effective as input to a default polarity classifier than the same-length counterparts produced by standard summarization tactics (e.g., first- or last-N sentences). We therefore conclude that subjectivity extraction produces effective summaries of document sentiment. 4.1 Basic subjectivity extraction As noted in Section 3, both Naive Bayes and SVMs can be trained on our subjectivity dataset and then used as a basic subjectivity detector. The former has somewhat better average ten-fold cross-validation performance on the subjectivity dataset (92% vs. 90%), and so for space reasons, our initial discussions will focus on the results attained via NB subjectivity detection. 
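Putting these pieces together, the sketch below builds the per-document graph of Figure 3 and returns the subjective extract. The sentence-level detector, the parameter values, and the networkx dependency are placeholders; only the ind and assoc definitions (including the SVM distance mapping of footnote 7) follow the text, with f(d) = 1 used as the simplest of the three decay functions tried.

```python
# Sketch of the per-document construction of Figure 3: source = subjective class,
# sink = objective class, individual scores from a sentence-level detector, and
# association scores for proximal sentence pairs.
import networkx as nx

def ind1_from_svm(d):
    """Footnote 7: map a signed SVM distance d_i into a score in [0, 1]."""
    if d > 2:
        return 1.0
    if d < -2:
        return 0.0
    return (2.0 + d) / 4.0

def subjective_extract(sentences, prob_subjective, T=3, c=0.2, f=lambda d: 1.0):
    G = nx.DiGraph()
    for i, s in enumerate(sentences):
        p = prob_subjective(s)                    # ind1 = Pr_sub(s_i), ind2 = 1 - Pr_sub(s_i)
        G.add_edge("src", i, capacity=p)
        G.add_edge(i, "snk", capacity=1.0 - p)
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if j - i <= T:
                w = f(j - i) * c                  # assoc(s_i, s_j)
                G.add_edge(i, j, capacity=w)
                G.add_edge(j, i, capacity=w)
    _, (subj_side, _) = nx.minimum_cut(G, "src", "snk")
    return [s for i, s in enumerate(sentences) if i in subj_side]

# Toy run with a fake detector: only the middle sentence looks subjective.
sents = ["The plot follows a detective.", "A stunning, heartfelt film.", "It opens in Paris."]
print(subjective_extract(sents, lambda s: 0.8 if "stunning" in s else 0.2))
# -> ['A stunning, heartfelt film.']
```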
Employing Naive Bayes as a subjectivity detector (ExtractNB) in conjunction with a Naive Bayes document-level polarity classifier achieves 86.4% accuracy.9 This is a clear improvement over the 82.8% that results when no extraction is applied 8Parameter training is driven by optimizing the performance of the downstream polarity classifier rather than the detector itself because the subjectivity dataset’s sentences come from different reviews, and so are never proximal. 9This result and others are depicted in Figure 5; for now, consider only the y-axis in those plots. ... ... sub sub NB NB s1 s2 s3 s4 s_n  construct graph compute min. cut   extract create s1 s4 m−sentence extract (m<=n)               n−sentence review v1 v2 s v3 edge crossing the cut v2 v3 v1 t s v n t v n proximity link individual subjectivity−probability link Pr 1−Pr (s1) Pr (s1)         Figure 3: Graph-cut-based creation of subjective extracts. (Full review); indeed, the difference is highly statistically significant (p < 0.01, paired t-test). With SVMs as the polarity classifier instead, the Full review performance rises to 87.15%, but comparison via the paired t-test reveals that this is statistically indistinguishable from the 86.4% that is achieved by running the SVM polarity classifier on ExtractNB input. (More improvements to extraction performance are reported later in this section.) These findings indicate10 that the extracts preserve (and, in the NB polarity-classifier case, apparently clarify) the sentiment information in the originating documents, and thus are good summaries from the polarity-classification point of view. Further support comes from a “flipping” experiment: if we give as input to the default polarity classifier an extract consisting of the sentences labeled objective, accuracy drops dramatically to 71% for NB and 67% for SVMs. This confirms our hypothesis that sentences discarded by the subjectivity extraction process are indeed much less indicative of sentiment polarity. Moreover, the subjectivity extracts are much more compact than the original documents (an important feature for a summary to have): they contain on average only about 60% of the source reviews’ words. (This word preservation rate is plotted along the x-axis in the graphs in Figure 5.) This prompts us to study how much reduction of the original documents subjectivity detectors can perform and still accurately represent the texts’ sentiment information. We can create subjectivity extracts of varying lengths by taking just the N most subjective sentences11 from the originating review. As one base10Recall that direct evidence is not available because the polarity dataset’s sentences lack subjectivity labels. 11These are the N sentences assigned the highest probability by the basic NB detector, regardless of whether their probabilline to compare against, we take the canonical summarization standard of extracting the first N sentences — in general settings, authors often begin documents with an overview. We also consider the last N sentences: in many documents, concluding material may be a good summary, and www.rottentomatoes.com tends to select “snippets” from the end of movie reviews (Beineke et al., 2004). Finally, as a sanity check, we include results from the N least subjective sentences according to Naive Bayes. Figure 4 shows the polarity classifier results as N ranges between 1 and 40. 
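The N-most-subjective baseline of footnote 11 amounts to the following; the probability function stands in for the basic Naive Bayes detector.

```python
# Keep the N sentences the basic detector scores as most probably subjective
# (whether or not they exceed 0.5), in document order; reviews with fewer than
# N sentences are returned whole, as described in footnote 11.
def top_n_extract(sentences, prob_subjective, n):
    if len(sentences) <= n:
        return list(sentences)
    ranked = sorted(range(len(sentences)),
                    key=lambda i: prob_subjective(sentences[i]),
                    reverse=True)
    keep = set(ranked[:n])
    return [s for i, s in enumerate(sentences) if i in keep]
```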
Our first observation is that the NB detector provides very good “bang for the buck”: with subjectivity extracts containing as few as 15 sentences, accuracy is quite close to what one gets if the entire review is used. In fact, for the NB polarity classifier, just using the 5 most subjective sentences is almost as informative as the Full review while containing on average only about 22% of the source reviews’ words. Also, it so happens that at N = 30, performance is actually slightly better than (but statistically indistinguishable from) Full review even when the SVM default polarity classifier is used (87.2% vs. 87.15%).12 This suggests potentially effective extraction alternatives other than using a fixed probability threshold (which resulted in the lower accuracy of 86.4% reported above). Furthermore, we see in Figure 4 that the N mostsubjective-sentences method generally outperforms the other baseline summarization methods (which perhaps suggests that sentiment summarization cannot be treated the same as topic-based summarizaities exceed 50% and so would actually be classified as subjective by Naive Bayes. For reviews with fewer than N sentences, the entire review will be returned. 12Note that roughly half of the documents in the polarity dataset contain more than 30 sentences (average=32.3, standard deviation 15). 55 60 65 70 75 80 85 90 1 5 10 15 20 25 30 35 40 Average accuracy N Accuracy for N-sentence abstracts (def = NB) most subjective N sentences last N sentences first N sentences least subjective N sentences Full review 55 60 65 70 75 80 85 90 1 5 10 15 20 25 30 35 40 Average accuracy N Accuracy for N-sentence abstracts (def = SVM) most subjective N sentences last N sentences first N sentences least subjective N sentences Full review Figure 4: Accuracies using N-sentence extracts for NB (left) and SVM (right) default polarity classifiers. 83 83.5 84 84.5 85 85.5 86 86.5 87 0.6 0.7 0.8 0.9 1 1.1 Average accuracy % of words extracted Accuracy for subjective abstracts (def = NB) difference in accuracy ExtractSVM+Prox ExtractNB+Prox ExtractNB ExtractSVM not statistically significant Full Review indicates statistically significant improvement in accuracy 83 83.5 84 84.5 85 85.5 86 86.5 87 0.6 0.7 0.8 0.9 1 1.1 Average accuracy % of words extracted Accuracy for subjective abstracts (def = SVM) difference in accuracy ExtractNB+Prox ExtractSVM+Prox ExtractSVM ExtractNB not statistically significant Full Review improvement in accuracy indicates statistically significant Figure 5: Word preservation rate vs. accuracy, NB (left) and SVMs (right) as default polarity classifiers. Also indicated are results for some statistical significance tests. tion, although this conjecture would need to be verified on other domains and data). It’s also interesting to observe how much better the last N sentences are than the first N sentences; this may reflect a (hardly surprising) tendency for movie-review authors to place plot descriptions at the beginning rather than the end of the text and conclude with overtly opinionated statements. 4.2 Incorporating context information The previous section demonstrated the value of subjectivity detection. We now examine whether context information, particularly regarding sentence proximity, can further improve subjectivity extraction. As discussed in Section 2.2 and 3, contextual constraints are easily incorporated via the minimum-cut formalism but are not natural inputs for standard Naive Bayes and SVMs. 
Figure 5 shows the effect of adding in proximity information. ExtractNB+Prox and ExtractSVM+Prox are the graph-based subjectivity detectors using Naive Bayes and SVMs, respectively, for the individual scores; we depict the best performance achieved by a single setting of the three proximity-related edge-weight parameters over all ten data folds13 (parameter selection was not a focus of the current work). The two comparisons we are most interested in are ExtractNB+Prox versus ExtractNB and ExtractSVM+Prox versus ExtractSVM. We see that the context-aware graph-based subjectivity detectors tend to create extracts that are more informative (statistically significant so (paired t-test) for SVM subjectivity detectors only), although these extracts are longer than their contextblind counterparts. We note that the performance 13Parameters are chosen from T ∈ {1, 2, 3}, f(d) ∈ {1, e1−d, 1/d2}, and c ∈[0, 1] at intervals of 0.1. enhancements cannot be attributed entirely to the mere inclusion of more sentences regardless of whether they are subjective or not — one counterargument is that Full review yielded substantially worse results for the NB default polarity classifier— and at any rate, the graph-derived extracts are still substantially more concise than the full texts. Now, while incorporating a bias for assigning nearby sentences to the same category into NB and SVM subjectivity detectors seems to require some non-obvious feature engineering, we also wish to investigate whether our graph-based paradigm makes better use of contextual constraints that can be (more or less) easily encoded into the input of standard classifiers. For illustrative purposes, we consider paragraph-boundary information, looking only at SVM subjectivity detection for simplicity’s sake. It seems intuitively plausible that paragraph boundaries (an approximation to discourse boundaries) loosen coherence constraints between nearby sentences. To capture this notion for minimum-cutbased classification, we can simply reduce the association scores for all pairs of sentences that occur in different paragraphs by multiplying them by a cross-paragraph-boundary weight w ∈[0, 1]. For standard classifiers, we can employ the trick of having the detector treat paragraphs, rather than sentences, as the basic unit to be labeled. This enables the standard classifier to utilize coherence between sentences in the same paragraph; on the other hand, it also (probably unavoidably) poses a hard constraint that all of a paragraph’s sentences get the same label, which increases noise sensitivity.14 Our experiments reveal the graph-cut formulation to be the better approach: for both default polarity classifiers (NB and SVM), some choice of parameters (including w) for ExtractSVM+Prox yields statistically significant improvement over its paragraphunit non-graph counterpart (NB: 86.4% vs. 85.2%; SVM: 86.15% vs. 85.45%). 5 Conclusions We examined the relation between subjectivity detection and polarity classification, showing that subjectivity detection can compress reviews into much shorter extracts that still retain polarity information at a level comparable to that of the full review. In fact, for the Naive Bayes polarity classifier, the subjectivity extracts are shown to be more effective input than the originating document, which suggests 14For example, in the data we used, boundaries may have been missed due to malformed html. that they are not only shorter, but also “cleaner” representations of the intended polarity. 
We have also shown that employing the minimum-cut framework results in the development of efficient algorithms for sentiment analysis. Utilizing contextual information via this framework can lead to statistically significant improvement in polarity-classification accuracy. Directions for future research include developing parameterselection techniques, incorporating other sources of contextual cues besides sentence proximity, and investigating other means for modeling such information. Acknowledgments We thank Eric Breck, Claire Cardie, Rich Caruana, Yejin Choi, Shimon Edelman, Thorsten Joachims, Jon Kleinberg, Oren Kurland, Art Munson, Vincent Ng, Fernando Pereira, Ves Stoyanov, Ramin Zabih, and the anonymous reviewers for helpful comments. This paper is based upon work supported in part by the National Science Foundation under grants ITR/IM IIS-0081334 and IIS-0329064, a Cornell Graduate Fellowship in Cognitive Studies, and by an Alfred P. Sloan Research Fellowship. Any opinions, findings, and conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of the National Science Foundation or Sloan Foundation. References Agrawal, Rakesh, Sridhar Rajagopalan, Ramakrishnan Srikant, and Yirong Xu. 2003. Mining newsgroups using networks arising from social behavior. In WWW, pages 529–535. Ahuja, Ravindra, Thomas L. Magnanti, and James B. Orlin. 1993. Network Flows: Theory, Algorithms, and Applications. Prentice Hall. Beineke, Philip, Trevor Hastie, Christopher Manning, and Shivakumar Vaithyanathan. 2004. Exploring sentiment summarization. In AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications (AAAI tech report SS-04-07). Blum, Avrim and Shuchi Chawla. 2001. Learning from labeled and unlabeled data using graph mincuts. In Intl. Conf. on Machine Learning (ICML), pages 19–26. Boykov, Yuri, Olga Veksler, and Ramin Zabih. 1999. Fast approximate energy minimization via graph cuts. In Intl. Conf. on Computer Vision (ICCV), pages 377–384. Journal version in IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI) 23(11):1222–1239, 2001. Cardie, Claire, Janyce Wiebe, Theresa Wilson, and Diane Litman. 2003. Combining low-level and summary representations of opinions for multiperspective question answering. In AAAI Spring Symposium on New Directions in Question Answering, pages 20–27. Cormen, Thomas H., Charles E. Leiserson, and Ronald L. Rivest. 1990. Introduction to Algorithms. MIT Press. Das, Sanjiv and Mike Chen. 2001. Yahoo! for Amazon: Extracting market sentiment from stock message boards. In Asia Pacific Finance Association Annual Conf. (APFA). Dave, Kushal, Steve Lawrence, and David M. Pennock. 2003. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. In WWW, pages 519–528. Dini, Luca and Giampaolo Mazzini. 2002. Opinion classification through information extraction. In Intl. Conf. on Data Mining Methods and Databases for Engineering, Finance and Other Fields, pages 299–310. Durbin, Stephen D., J. Neal Richter, and Doug Warner. 2003. A system for affective rating of texts. In KDD Wksp. on Operational Text Classification Systems (OTC-3). Hatzivassiloglou, Vasileios and Kathleen McKeown. 1997. Predicting the semantic orientation of adjectives. In 35th ACL/8th EACL, pages 174–181. Joachims, Thorsten. 2003. Transductive learning via spectral graph partitioning. In Intl. Conf. on Machine Learning (ICML). Liu, Hugo, Henry Lieberman, and Ted Selker. 2003. 
A model of textual affect sensing using real-world knowledge. In Intelligent User Interfaces (IUI), pages 125–132. Montes-y-G´omez, Manuel, Aurelio L´opez-L´opez, and Alexander Gelbukh. 1999. Text mining as a social thermometer. In IJCAI Wksp. on Text Mining, pages 103–107. Morinaga, Satoshi, Kenji Yamanishi, Kenji Tateishi, and Toshikazu Fukushima. 2002. Mining product reputations on the web. In KDD, pages 341– 349. Industry track. Pang, Bo, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In EMNLP, pages 79–86. Qu, Yan, James Shanahan, and Janyce Wiebe, editors. 2004. AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications. AAAI technical report SS-04-07. Riloff, Ellen and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In EMNLP. Riloff, Ellen, Janyce Wiebe, and Theresa Wilson. 2003. Learning subjective nouns using extraction pattern bootstrapping. In Conf. on Natural Language Learning (CoNLL), pages 25–32. Subasic, Pero and Alison Huettner. 2001. Affect analysis of text using fuzzy semantic typing. IEEE Trans. Fuzzy Systems, 9(4):483–496. Tong, Richard M. 2001. An operational system for detecting and tracking opinions in on-line discussion. SIGIR Wksp. on Operational Text Classification. Turney, Peter. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In ACL, pages 417–424. Wiebe, Janyce M. 1994. Tracking point of view in narrative. Computational Linguistics, 20(2):233– 287. Yi, Jeonghee, Tetsuya Nasukawa, Razvan Bunescu, and Wayne Niblack. 2003. Sentiment analyzer: Extracting sentiments about a given topic using natural language processing techniques. In IEEE Intl. Conf. on Data Mining (ICDM). Yu, Hong and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In EMNLP.
Finding Predominant Word Senses in Untagged Text Diana McCarthy & Rob Koeling & Julie Weeds & John Carroll Department of Informatics, University of Sussex Brighton BN1 9QH, UK dianam,robk,juliewe,johnca  @sussex.ac.uk Abstract In word sense disambiguation (WSD), the heuristic of choosing the most common sense is extremely powerful because the distribution of the senses of a word is often skewed. The problem with using the predominant, or first sense heuristic, aside from the fact that it does not take surrounding context into account, is that it assumes some quantity of handtagged data. Whilst there are a few hand-tagged corpora available for some languages, one would expect the frequency distribution of the senses of words, particularly topical words, to depend on the genre and domain of the text under consideration. We present work on the use of a thesaurus acquired from raw textual corpora and the WordNet similarity package to find predominant noun senses automatically. The acquired predominant senses give a precision of 64% on the nouns of the SENSEVAL2 English all-words task. This is a very promising result given that our method does not require any hand-tagged text, such as SemCor. Furthermore, we demonstrate that our method discovers appropriate predominant senses for words from two domainspecific corpora. 1 Introduction The first sense heuristic which is often used as a baseline for supervised WSD systems outperforms many of these systems which take surrounding context into account. This is shown by the results of the English all-words task in SENSEVAL-2 (Cotton et al., 1998) in figure 1 below, where the first sense is that listed in WordNet for the PoS given by the Penn TreeBank (Palmer et al., 2001). The senses in WordNet are ordered according to the frequency data in the manually tagged resource SemCor (Miller et al., 1993). Senses that have not occurred in SemCor are ordered arbitrarily and after those senses of the word that have occurred. The figure distinguishes systems which make use of hand-tagged data (using HTD) such as SemCor, from those that do not (without HTD). The high performance of the first sense baseline is due to the skewed frequency distribution of word senses. Even systems which show superior performance to this heuristic often make use of the heuristic where evidence from the context is not sufficient (Hoste et al., 2001). Whilst a first sense heuristic based on a sense-tagged corpus such as SemCor is clearly useful, there is a strong case for obtaining a first, or predominant, sense from untagged corpus data so that a WSD system can be tuned to the genre or domain at hand. SemCor comprises a relatively small sample of 250,000 words. There are words where the first sense in WordNet is counter-intuitive, because of the size of the corpus, and because where the frequency data does not indicate a first sense, the ordering is arbitrary. For example the first sense of tiger in WordNet is audacious person whereas one might expect that carnivorous animal is a more common usage. There are only a couple of instances of tiger within SemCor. Another example is embryo, which does not occur at all in SemCor and the first sense is listed as rudimentary plant rather than the anticipated fertilised egg meaning. 
We believe that an automatic means of finding a predominant sense would be useful for systems that use it as a means of backing-off (Wilks and Stevenson, 1998; Hoste et al., 2001) and for systems that use it in lexical acquisition (McCarthy, 1997; Merlo and Leybold, 2001; Korhonen, 2002) because of the limited size of hand-tagged resources. More importantly, when working within a specific domain one would wish to tune the first sense heuristic to the domain at hand. The first sense of star in SemCor is celestial body, however, if one were disambiguating popular news celebrity would be preferred. Assuming that one had an accurate WSD system then one could obtain frequency counts for senses and rank them with these counts. However, the most accurate WSD systems are those which require manually sense tagged data in the first place, and their accuracy depends on the quantity of training examples (Yarowsky and Florian, 2002) available. We 0 20 40 60 80 100 0 20 40 60 80 100 recall  precision First Sense "using HTD" "without HTD" "First Sense" Figure 1: The first sense heuristic compared with the SENSEVAL-2 English all-words task results are therefore investigating a method of automatically ranking WordNet senses from raw text. Many researchers are developing thesauruses from automatically parsed data. In these each target word is entered with an ordered list of “nearest neighbours”. The neighbours are words ordered in terms of the “distributional similarity” that they have with the target. Distributional similarity is a measure indicating the degree that two words, a word and its neighbour, occur in similar contexts. From inspection, one can see that the ordered neighbours of such a thesaurus relate to the different senses of the target word. For example, the neighbours of star in a dependency-based thesaurus provided by Lin 1 has the ordered list of neighbours: superstar, player, teammate, actor early in the list, but one can also see words that are related to another sense of star e.g. galaxy, sun, world and planet further down the list. We expect that the quantity and similarity of the neighbours pertaining to different senses will reflect the dominance of the sense to which they pertain. This is because there will be more relational data for the more prevalent senses compared to the less frequent senses. In this paper we describe and evaluate a method for ranking senses of nouns to obtain the predominant sense of a word using the neighbours from automatically acquired thesauruses. The neighbours for a word in a thesaurus are words themselves, rather than senses. In order to associate the neighbours with senses we make use of another notion of similarity, “semantic similarity”, which exists between senses, rather than words. We experiment with several WordNet Similarity measures (Patwardhan and Pedersen, 2003) which aim to capture semantic relatedness within 1Available at http://www.cs.ualberta.ca/˜lindek/demos/depsim.htm the WordNet hierarchy. We use WordNet as our sense inventory for this work. The paper is structured as follows. We discuss our method in the following section. Sections 3 and 4 concern experiments using predominant senses from the BNC evaluated against the data in SemCor and the SENSEVAL-2 English all-words task respectively. In section 5 we present results of the method on two domain specific sections of the Reuters corpus for a sample of words. We describe some related work in section 6 and conclude in section 7. 
2 Method In order to find the predominant sense of a target word we use a thesaurus acquired from automatically parsed text based on the method of Lin (1998). This provides the  nearest neighbours to each target word, along with the distributional similarity score between the target word and its neighbour. We then use the WordNet similarity package (Patwardhan and Pedersen, 2003) to give us a semantic similarity measure (hereafter referred to as the WordNet similarity measure) to weight the contribution that each neighbour makes to the various senses of the target word. To find the first sense of a word (  ) we take each sense in turn and obtain a score reflecting the prevalence which is used for ranking. Let      be the ordered set of the top scoring  neighbours of  from the thesaurus with associated distributional similarity scores  !  !  !  . Let " " ! be the set of senses of  . For each sense of  ( #%$'&(" " ! ) we obtain a ranking score by summing over the )*+ -, ! of each neighbour ( .,/&0  ) multiplied by a weight. This weight is the WordNet similarity score ( 1  ) between the target sense ( #%$ ) and the sense of -, ( %23&4" "* -, ! ) that maximises this score, divided by the sum of all such WordNet similarity scores for " " ! and ., . Thus we rank each sense 1 $ &5" " ! using: 687 "9):);<" =%"?>@=%A 7 "#%$ !  B CED FGIH  , !J 1 #%$* ., ! K LNMPO F LRQ C LNQRL*STU # 1 $ O  -, ! (1) where: 1 #%$V -, !  WYXZ C L\[ F LNQ C LNQRL]S CED U *1 #%$* %2 !V! 2.1 Acquiring the Automatic Thesaurus The thesaurus was acquired using the method described by Lin (1998). For input we used grammatical relation data extracted using an automatic parser (Briscoe and Carroll, 2002). For the experiments in sections 3 and 4 we used the 90 million words of written English from the BNC. For each noun we considered the co-occurring verbs in the direct object and subject relation, the modifying nouns in noun-noun relations and the modifying adjectives in adjective-noun relations. We could easily extend the set of relations in the future. A noun,  , is thus described by a set of co-occurrence triples ^  7  _a` and associated frequencies, where 7 is a grammatical relation and _ is a possible cooccurrence with  in that relation. For every pair of nouns, where each noun had a total frequency in the triple data of 10 or more, we computed their distributional similarity using the measure given by Lin (1998). If b8 ! is the set of co-occurrence types  7 _ ! such that cd*+ 7  _ ! is positive then the similarity between two nouns,  and , can be computed as: )*+ !  K Sfe*g 2 U F h Sf Uji h S C U Ncd 7 _ !)k cl m 7 _ !V! K Sfe*g 2 U F h Sf U cl 7  _ !-k K STe*g 2 U F h S C U cl m 7 _ ! where: cl 7  _ ! onprq 6 _Istvu 7 ! 6 *_Is 7 ! A thesaurus entry of size  for a target noun  is then defined as the  most similar nouns to  . 2.2 The WordNet Similarity Package We use the WordNet Similarity Package 0.05 and WordNet version 1.6. 2 The WordNet Similarity package supports a range of WordNet similarity scores. We experimented using six of these to provide the 1  in equation 1 above and obtained results well over our baseline, but because of space limitations give results for the two which perform the best. We briefly summarise the two measures here; for a more detailed summary see (Patwardhan et al., 2003). The measures provide a similarity score between two WordNet senses ( xw and y ), these being synsets within WordNet. 
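In code, Equation 1 amounts to the following sketch, which is not the authors' implementation: each sense of the target word receives the sum, over the k nearest neighbours, of the neighbour's distributional similarity weighted by its WordNet similarity to that sense, normalised across all senses of the target word; the WordNet similarity is already maximised over the neighbour's own senses. The thesaurus entries and similarity values below are hypothetical placeholders.

```python
# Sketch (not the authors' code) of the prevalence ranking in Equation 1.
# `neighbours` maps each nearest neighbour n_j of the target word to its
# distributional similarity dss(w, n_j); `wn_sim(sense, neighbour)` stands in
# for the WordNet similarity, maximised over the neighbour's senses.
def prevalence_ranking(senses, neighbours, wn_sim):
    scores = {}
    for s in senses:
        total = 0.0
        for n, dss in neighbours.items():
            denom = sum(wn_sim(s_prime, n) for s_prime in senses)
            if denom > 0.0:
                total += dss * wn_sim(s, n) / denom   # dss weighted by normalised WordNet similarity
        scores[s] = total
    return sorted(senses, key=scores.get, reverse=True)  # predominant sense first

# Hypothetical numbers for "star" with two senses and three neighbours.
neighbours = {"actor": 0.25, "galaxy": 0.21, "planet": 0.19}
sim = {("celebrity", "actor"): 0.9, ("celebrity", "galaxy"): 0.1, ("celebrity", "planet"): 0.1,
       ("celestial_body", "actor"): 0.1, ("celestial_body", "galaxy"): 0.8, ("celestial_body", "planet"): 0.9}
print(prevalence_ranking(["celebrity", "celestial_body"], neighbours, lambda s, n: sim[(s, n)]))
# -> ['celestial_body', 'celebrity']
```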
lesk (Banerjee and Pedersen, 2002) This score maximises the number of overlapping words in the gloss, or definition, of the senses. It uses the glosses of semantically related (according to WordNet) senses too. jcn (Jiang and Conrath, 1997) This score uses corpus data to populate classes (synsets) in the WordNet hierarchy with frequency counts. Each 2We use this version of WordNet since it allows us to map information to WordNets of other languages more accurately. We are of course able to apply the method to other versions of WordNet. synset, is incremented with the frequency counts from the corpus of all words belonging to that synset, directly or via the hyponymy relation. The frequency data is used to calculate the “information content” (IC) of a class c-z{ ! }|~;<A% j€ * !V! . Jiang and Conrath specify a distance measure:  ,V‚ C xwr y ! ƒc-z{xw !Ik c„z/y ! |oy J c-z{… ! , where the third class ( … ) is the most informative, or most specific, superordinate synset of the two senses rw and y . This is transformed from a distance measure in the WN-Similarity package by taking the reciprocal: † =% ‡rwx y ! (wˆ  ,]‚ C xwr y ! 3 Experiment with SemCor In order to evaluate our method we use the data in SemCor as a gold-standard. This is not ideal since we expect that the sense frequency distributions within SemCor will differ from those in the BNC, from which we obtain our thesaurus. Nevertheless, since many systems performed well on the English all-words task for SENSEVAL-2 by using the frequency information in SemCor this is a reasonable approach for evaluation. We generated a thesaurus entry for all polysemous nouns which occurred in SemCor with a frequency ` 2, and in the BNC with a frequency ‰ 10 in the grammatical relations listed in section 2.1 above. The jcn measure uses corpus data for the calculation of IC. We experimented with counts obtained from the BNC and the Brown corpus. The variation in counts had negligible affect on the results. 3 The experimental results reported here are obtained using IC counts from the BNC corpus. All the results shown here are those with the size of thesaurus entries (  ) set to 50. 4 We calculate the accuracy of finding the predominant sense, when there is indeed one sense with a higher frequency than the others for this word in SemCor ( 6 >Š ‚<‚ ). We also calculate the WSD accuracy that would be obtained on SemCor, when using our first sense in all contexts ( ‹Œ>  L ‚ ). 3.1 Results The results in table 1 show the accuracy of the ranking with respect to SemCor over the entire set of 2595 polysemous nouns in SemCor with 3Using the default IC counts provided with the package did result in significantly higher results, but these default files are obtained from the sense-tagged data within SemCor itself so we discounted these results. 4We repeated the experiment with the BNC data for jcn using #Ž3VE‘\’E‘“  and ” however, the number of neighbours used gave only minimal changes to the results so we do not report them here. measure 6 >Š‚<‚ % ‹a>  L ‚ % lesk 54 48 jcn 54 46 baseline 32 24 Table 1: SemCor results the jcn and lesk WordNet similarity measures. The random baseline for choosing the predominant sense over all these words ( K  Fr•'– eV—VL ˜ LNQ C LRQRL]Sf U ˜ ) is 32%. Both WordNet similarity measures beat this baseline. The random baseline for ‹a>  L ‚ ( Kš™ F› Q<œž – e ™ –  Q C L ˜ LRQ C LRQNL*S ™ U ˜ ) is 24%. Again, the automatic ranking outperforms this by a large margin. 
The first sense in SemCor provides an upperbound for this task of 67%. Since both measures gave comparable results we restricted our remaining experiments to jcn because this gave good results for finding the predominant sense, and is much more efficient than lesk, given the precompilation of the IC files. 3.2 Discussion From manual analysis, there are cases where the acquired first sense disagrees with SemCor, yet is intuitively plausible. This is to be expected regardless of any inherent shortcomings of the ranking technique since the senses within SemCor will differ compared to those of the BNC. For example, in WordNet the first listed sense of pipe is tobacco pipe, and this is ranked joint first according to the Brown files in SemCor with the second sense tube made of metal or plastic used to carry water, oil or gas etc.... The automatic ranking from the BNC data lists the latter tube sense first. This seems quite reasonable given the nearest neighbours: tube, cable, wire, tank, hole, cylinder, fitting, tap, cistern, plate.... Since SemCor is derived from the Brown corpus, which predates the BNC by up to 30 years 5 and contains a higher proportion of fiction 6, the high ranking for the tobacco pipe sense according to SemCor seems plausible. Another example where the ranking is intuitive, is soil. The first ranked sense according to SemCor is the filth, stain: state of being unclean sense whereas the automatic ranking lists dirt, ground, earth as the first sense, which is the second ranked 5The text in the Brown corpus was produced in 1961, whereas the bulk of the written portion of the BNC contains texts produced between 1975 and 1993. 66 out of the 15 Brown genres are fiction, including one specifically dedicated to detective fiction, whilst only 20% of the BNC text represents imaginative writing, the remaining 80% being classified as informative. sense according to SemCor. This seems intuitive given our expected relative usage of these senses in modern British English. Even given the difference in text type between SemCor and the BNC the results are encouraging, especially given that our ‹a>  L ‚ results are for polysemous nouns. In the English all-words SENSEVAL-2, 25% of the noun data was monosemous. Thus, if we used the sense ranking as a heuristic for an “all nouns” task we would expect to get precision in the region of 60%. We test this below on the SENSEVAL-2 English all-words data. 4 Experiment on SENSEVAL-2 English all Words Data In order to see how well the automatically acquired predominant sense performs on a WSD task from which the WordNet sense ordering has not been taken, we use the SENSEVAL-2 all-words data (Palmer et al., 2001). 7 This is a hand-tagged test suite of 5,000 words of running text from three articles from the Penn Treebank II. We use an allwords task because the predominant senses will reflect the sense distributions of all nouns within the documents, rather than a lexical sample task, where the target words are manually determined and the results will depend on the skew of the words in the sample. We do not assume that the predominant sense is a method of WSD in itself. To disambiguate senses a system should take context into account. However, it is important to know the performance of this heuristic for any systems that use it. We generated a thesaurus entry for all polysemous nouns in WordNet as described in section 2.1 above. 
We obtained the predominant sense for each of these words and used these to label the instances in the noun data within the SENSEVAL-2 English allwords task. We give the results for this WSD task in table 2. We compare results using the first sense listed in SemCor, and the first sense according to the SENSEVAL-2 English all-words test data itself. For the latter, we only take a first-sense where there is more than one occurrence of the noun in the test data and one sense has occurred more times than any of the others. We trivially labelled all monosemous items. Our automatically acquired predominant sense performs nearly as well as the first sense provided by SemCor, which is very encouraging given that 7In order to do this we use the mapping provided at http://www.lsi.upc.es/˜nlp/tools/mapping.html (Daud´e et al., 2000) for obtaining the SENSEVAL-2 data in WordNet 1.6. We discounted the few items for which there was no mapping. This amounted to only 3% of the data. precision recall Automatic 64 63 SemCor 69 68 SENSEVAL-2 92 72 Table 2: Evaluating predominant sense information on SENSEVAL-2 all-words data. our method only uses raw text, with no manual labelling. The performance of the predominant sense provided in the SENSEVAL-2 test data provides an upper bound for this task. The items that were not covered by our method were those with insufficient grammatical relations for the tuples employed. Two such words, today and one, each occurred 5 times in the test data. Extending the grammatical relations used for building the thesaurus should improve the coverage. There were a similar number of words that were not covered by a predominant sense in SemCor. For these one would need to obtain more sense-tagged text in order to use this heuristic. Our automatic ranking gave 67% precision on these items. This demonstrates that our method of providing a first sense from raw text will help when sense-tagged data is not available. 5 Experiments with Domain Specific Corpora A major motivation for our work is to try to capture changes in ranking of senses for documents from different domains. In order to test this we applied our method to two specific sections of the Reuters corpus. We demonstrate that choosing texts from a particular domain has a significant influence on the sense ranking. We chose the domains of SPORTS and FINANCE since there is sufficient material for these domains in this publically available corpus. 5.1 Reuters Corpus The Reuters corpus (Rose et al., 2002) is a collection of about 810,000 Reuters, English Language News stories. Many of the articles are economy related, but several other topics are included too. We selected documents from the SPORTS domain (topic code: GSPO) and a limited number of documents from the FINANCE domain (topic codes: ECAT and MCAT). The SPORTS corpus consists of 35317 documents (about 9.1 million words). The FINANCE corpus consists of 117734 documents (about 32.5 million words). We acquired thesauruses for these corpora using the procedure described in section 2.1. 5.2 Two Experiments There is no existing sense-tagged data for these domains that we could use for evaluation. We therefore decided to select a limited number of words and to evaluate these words qualitatively. The words included in this experiment are not a random sample, since we anticipated different predominant senses in the SPORTS and FINANCE domains for these words. 
Additionally, we evaluated our method quantitatively using the Subject Field Codes (SFC) resource (Magnini and Cavaglià, 2000), which annotates WordNet synsets with domain labels. The SFC contains an economy label and a sports label. For this domain label experiment we selected all the words in WordNet that have at least one synset labelled economy and at least one synset labelled sports. The resulting set consisted of 38 words. We contrast the distribution of domain labels for these words in the two domain-specific corpora. 5.3 Discussion The results for 10 of the words from the qualitative experiment are summarized in Table 3, with the WordNet sense number for each word supplied alongside synonyms or hypernyms from WordNet for readability. The results are promising. Most words show the change in predominant sense (PS) that we anticipated. It is not always intuitively clear which of the senses to expect as predominant sense for either a particular domain or for the BNC, but the first senses of words like division and goal shift towards the more specific senses (league and score respectively). Moreover, the chosen senses of the word tie proved to be a textbook example of the behaviour we expected. The word share is among the words whose predominant sense remained the same for all three corpora. We anticipated that the stock certificate sense would be chosen for the FINANCE domain, but this did not happen. However, that particular sense ended up higher in the ranking for the FINANCE domain. Figure 2 displays the results of the second experiment with the domain specific corpora. This figure shows the domain labels assigned to the predominant senses for the set of 38 words after ranking the words using the SPORTS and the FINANCE corpora. We see that both domains have a similarly high percentage of factotum (domain independent) labels, but as we would expect, the other peaks correspond to the economy label for the FINANCE corpus, and the sports label for the SPORTS corpus.

Table 3: Domain specific results
  Word         PS BNC                      PS FINANCE           PS SPORTS
  pass         1 (accomplishment)          14 (attempt)         15 (throw)
  share        2 (portion, asset)          2                    2
  division     4 (admin. unit)             4                    6 (league)
  head         1 (body part)               4 (leader)           4
  loss         2 (transf. property)        2                    8 (death, departure)
  competition  2 (contest, social event)   3 (rivalry)          2
  match        2 (contest)                 7 (equal, person)    2
  tie          1 (neckwear)                2 (affiliation)      3 (draw)
  strike       1 (work stoppage)           1                    6 (hit, success)
  goal         1 (end, mental object)      1                    2 (score)

Figure 2: Distribution of domain labels of predominant senses for 38 polysemous words ranked using the SPORTS and FINANCE corpus (the y-axis shows the percentage of words; the x-axis lists the domain labels, e.g. law, politics, religion, factotum, commerce, economy, medicine, sports, finance).

6 Related Work Most research in WSD concentrates on using contextual features, typically neighbouring words, to help determine the correct sense of a target word. In contrast, our work is aimed at discovering the predominant senses from raw text because the first sense heuristic is such a useful one, and because hand-tagged data is not always available. A major benefit of our work, rather than reliance on hand-tagged training data such as SemCor, is that this method permits us to produce predominant senses for the domain and text type required.
Buitelaar and Sacaleanu (2001) have previously explored ranking and selection of synsets in GermaNet for specific domains using the words in a given synset, and those related by hyponymy, and a term relevance measure taken from information retrieval. Buitelaar and Sacaleanu have evaluated their method on identifying domain specific concepts using human judgements on 100 items. We have evaluated our method using publically available resources, both for balanced and domain specific text. Magnini and Cavagli`a (2000) have identified WordNet word senses with particular domains, and this has proven useful for high precision WSD (Magnini et al., 2001); indeed in section 5 we used these domain labels for evaluation. Identification of these domain labels for word senses was semiautomatic and required a considerable amount of hand-labelling. Our approach is complementary to this. It only requires raw text from the given domain and because of this it can easily be applied to a new domain, or sense inventory, given sufficient text. Lapata and Brew (2004) have recently also highlighted the importance of a good prior in WSD. They used syntactic evidence to find a prior distribution for verb classes, based on (Levin, 1993), and incorporate this in a WSD system. Lapata and Brew obtain their priors for verb classes directly from subcategorisation evidence in a parsed corpus, whereas we use parsed data to find distributionally similar words (nearest neighbours) to the target word which reflect the different senses of the word and have associated distributional similarity scores which can be used for ranking the senses according to prevalence. There has been some related work on using automatic thesauruses for discovering word senses from corpora Pantel and Lin (2002). In this work the lists of neighbours are themselves clustered to bring out the various senses of the word. They evaluate using the lin measure described above in section 2.2 to determine the precision and recall of these discovered classes with respect to WordNet synsets. This method obtains precision of 61% and recall 51%. If WordNet sense distinctions are not ultimately required then discovering the senses directly from the neighbours list is useful because sense distinctions discovered are relevant to the corpus data and new senses can be found. In contrast, we use the neighbours lists and WordNet similarity measures to impose a prevalence ranking on the WordNet senses. We believe automatic ranking techniques such as ours will be useful for systems that rely on WordNet, for example those that use it for lexical acquisition or WSD. It would be useful however to combine our method of finding predominant senses with one which can automatically find new senses within text and relate these to WordNet synsets, as Ciaramita and Johnson (2003) do with unknown nouns. We have restricted ourselves to nouns in this work, since this PoS is perhaps most affected by domain. We are currently investigating the performance of the first sense heuristic, and this method, for other PoS on SENSEVAL-3 data (McCarthy et al., 2004), although not yet with rankings from domain specific corpora. The lesk measure can be used when ranking adjectives, and adverbs as well as nouns and verbs (which can also be ranked using jcn). Another major advantage that lesk has is that it is applicable to lexical resources which do not have the hierarchical structure that WordNet does, but do have definitions associated with word senses. 
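To make the last point concrete, a definition-overlap score in the spirit of lesk needs nothing more than sense glosses, so it can in principle be computed over any machine readable dictionary. The sketch below is an illustrative simplification, not the lesk implementation used in the experiments reported here; the tokenisation and stoplist are arbitrary choices.

    # Rough sketch of a lesk-style relatedness score that only requires
    # definitions, so it is usable with resources that lack a hierarchy.
    import re

    STOP = {'a', 'an', 'the', 'of', 'or', 'to', 'in', 'and', 'for', 'be', 'used'}

    def content_words(gloss):
        return {w for w in re.findall(r'[a-z]+', gloss.lower()) if w not in STOP}

    def gloss_overlap(gloss1, gloss2):
        # Number of shared content words between two sense definitions.
        return len(content_words(gloss1) & content_words(gloss2))

    # e.g. gloss_overlap("a tube made of metal or plastic used to carry water or gas",
    #                    "a long tube for conveying liquids")
    # counts the shared word "tube".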
7 Conclusions We have devised a method that uses raw corpus data to automatically find a predominant sense for nouns in WordNet. We use an automatically acquired thesaurus and a WordNet Similarity measure. The automatically acquired predominant senses were evaluated against the hand-tagged resources SemCor and the SENSEVAL-2 English all-words task giving us a WSD precision of 64% on an all-nouns task. This is just 5% lower than results using the first sense in the manually labelled SemCor, and we obtain 67% precision on polysemous nouns that are not in SemCor. In many cases the sense ranking provided in SemCor differs to that obtained automatically because we used the BNC to produce our thesaurus. Indeed, the merit of our technique is the very possibility of obtaining predominant senses from the data at hand. We have demonstrated the possibility of finding predominant senses in domain specific corpora on a sample of nouns. In the future, we will perform a large scale evaluation on domain specific corpora. In particular, we will use balanced and domain specific corpora to isolate words having very different neighbours, and therefore rankings, in the different corpora and to detect and target words for which there is a highly skewed sense distribution in these corpora. There is plenty of scope for further work. We want to investigate the effect of frequency and choice of distributional similarity measure (Weeds et al., 2004). Additionally, we need to determine whether senses which do not occur in a wide variety of grammatical contexts fare badly using distributional measures of similarity, and what can be done to combat this problem using relation specific thesauruses. Whilst we have used WordNet as our sense inventory, it would be possible to use this method with another inventory given a measure of semantic relatedness between the neighbours and the senses. The lesk measure for example, can be used with definitions in any standard machine readable dictionary. Acknowledgements We would like to thank Siddharth Patwardhan and Ted Pedersen for making the WN Similarity package publically available. This work was funded by EU-2001-34460 project MEANING: Developing Multilingual Web-scale Language Technologies, UK EPSRC project Robust Accurate Statistical Parsing (RASP) and a UK EPSRC studentship. References Satanjeev Banerjee and Ted Pedersen. 2002. An adapted Lesk algorithm for word sense disambiguation using WordNet. In Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-02), Mexico City. Edward Briscoe and John Carroll. 2002. Robust accurate statistical annotation of general text. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC), pages 1499–1504, Las Palmas, Canary Islands, Spain. Paul Buitelaar and Bogdan Sacaleanu. 2001. Ranking and selecting synsets by domain relevance. In Proceedings of WordNet and Other Lexical Resources: Applications, Extensions and Customizations, NAACL 2001 Workshop, Pittsburgh, PA. Massimiliano Ciaramita and Mark Johnson. 2003. Supersense tagging of unknown nouns in WordNet. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2003). Scott Cotton, Phil Edmonds, Adam Kilgarriff, and Martha Palmer. 1998. SENSEVAL-2. http://www.sle.sharp.co.uk/senseval2/. Jordi Daud´e, Lluis Padr´o, and German Rigau. 2000. Mapping wordnets using structural information. 
In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, Hong Kong. V´eronique Hoste, Anne Kool, and Walter Daelemans. 2001. Classifier optimization and combination in the English all words task. In Proceedings of the SENSEVAL-2 workshop, pages 84–86. Jay Jiang and David Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In International Conference on Research in Computational Linguistics, Taiwan. Anna Korhonen. 2002. Semantically motivated subcategorization acquisition. In Proceedings of the ACL Workshop on Unsupervised Lexical Acquisition, Philadelphia, USA. Mirella Lapata and Chris Brew. 2004. Verb class disambiguation using informative priors. Computational Linguistics, 30(1):45–75. Beth Levin. 1993. English Verb Classes and Alternations: a Preliminary Investigation. University of Chicago Press, Chicago and London. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL 98, Montreal, Canada. Bernardo Magnini and Gabriela Cavagli`a. 2000. Integrating subject field codes into WordNet. In Proceedings of LREC-2000, Athens, Greece. Bernardo Magnini, Carlo Strapparava, Giovanni Pezzuli, and Alfio Gliozzo. 2001. Using domain information for word sense disambiguation. In Proceedings of the SENSEVAL-2 workshop, pages 111–114. Diana McCarthy, Rob Koeling, Julie Weeds, and John Carrolł. 2004. Using automatically acquired predominant senses for word sense disambiguation. In Proceedings of the ACL SENSEVAL-3 workshop. Diana McCarthy. 1997. Word sense disambiguation for acquisition of selectional preferences. In Proceedings of the ACL/EACL 97 Workshop Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications, pages 52–61. Paola Merlo and Matthias Leybold. 2001. Automatic distinction of arguments and modifiers: the case of prepositional phrases. In Proceedings of the Workshop on Computational Language Learning (CoNLL 2001), Toulouse, France. George A. Miller, Claudia Leacock, Randee Tengi, and Ross T Bunker. 1993. A semantic concordance. In Proceedings of the ARPA Workshop on Human Language Technology, pages 303–308. Morgan Kaufman. Martha Palmer, Christiane Fellbaum, Scott Cotton, Lauren Delfs, and Hoa Trang Dang. 2001. English tasks: All-words and verb lexical sample. In Proceedings of the SENSEVAL-2 workshop, pages 21–24. Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 613–619, Edmonton, Canada. Siddharth Patwardhan and Ted Pedersen. 2003. The cpan wordnet::similarity package. http://search.cpan.org/author/SID/WordNetSimilarity-0.03/. Siddharth Patwardhan, Satanjeev Banerjee, and Ted Pedersen. 2003. Using measures of semantic relatedness for word sense disambiguation. In Proceedings of the Fourth International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2003), Mexico City. Tony G. Rose, Mary Stevenson, and Miles Whitehead. 2002. The Reuters Corpus volume 1 from yesterday’s news to tomorrow’s language resources. In Proc. of Third International Conference on Language Resources and Evaluation, Las Palmas de Gran Canaria. Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising measures of lexical distributional similarity. Yorick Wilks and Mark Stevenson. 1998. The grammar of sense: using part-of speech tags as a first step in semantic disambiguation. Natural Language Engineering, 4(2):135–143. 
David Yarowsky and Radu Florian. 2002. Evaluating sense disambiguation performance across diverse parameter spaces. Natural Language Engineering, 8(4):293–310.
Unsupervised Sense Disambiguation Using Bilingual Probabilistic Models Indrajit Bhattacharya Dept. of Computer Science University of Maryland College Park, MD, USA [email protected] Lise Getoor Dept. of Computer Science University of Maryland College Park, MD, USA [email protected] Yoshua Bengio Dept. IRO Universit´e de Montr´eal Montr´eal, Qu´ebec, Canada [email protected] Abstract We describe two probabilistic models for unsupervised word-sense disambiguation using parallel corpora. The first model, which we call the Sense model, builds on the work of Diab and Resnik (2002) that uses both parallel text and a sense inventory for the target language, and recasts their approach in a probabilistic framework. The second model, which we call the Concept model, is a hierarchical model that uses a concept latent variable to relate different language specific sense labels. We show that both models improve performance on the word sense disambiguation task over previous unsupervised approaches, with the Concept model showing the largest improvement. Furthermore, in learning the Concept model, as a by-product, we learn a sense inventory for the parallel language. 1 Introduction Word sense disambiguation (WSD) has been a central question in the computational linguistics community since its inception. WSD is fundamental to natural language understanding and is a useful intermediate step for many other language processing tasks (Ide and Veronis, 1998). Many recent approaches make use of ideas from statistical machine learning; the availability of shared sense definitions (e.g. WordNet (Fellbaum, 1998)) and recent international competitions (Kilgarrif and Rosenzweig, 2000) have enabled researchers to compare their results. Supervised approaches which make use of a small hand-labeled training set (Bruce and Wiebe, 1994; Yarowsky, 1993) typically outperform unsupervised approaches (Agirre et al., 2000; Litkowski, 2000; Lin, 2000; Resnik, 1997; Yarowsky, 1992; Yarowsky, 1995), but tend to be tuned to a specific corpus and are constrained by scarcity of labeled data. In an effort to overcome the difficulty of finding sense-labeled training data, researchers have begun investigating unsupervised approaches to wordsense disambiguation. For example, the use of parallel corpora for sense tagging can help with word sense disambiguation (Brown et al., 1991; Dagan, 1991; Dagan and Itai, 1994; Ide, 2000; Resnik and Yarowsky, 1999). As an illustration of sense disambiguation from translation data, when the English word bank is translated to Spanish as orilla, it is clear that we are referring to the shore sense of bank, rather than the financial institution sense. The main inspiration for our work is Diab and Resnik (2002), who use translations and linguistic knowledge for disambiguation and automatic sense tagging. Bengio and Kermorvant (2003) present a graphical model that is an attempt to formalize probabilistically the main ideas in Diab and Resnik (2002). They assume the same semantic hierarchy (in particular, WordNet) for both the languages and assign English words as well as their translations to WordNet synsets. Here we present two variants of the graphical model in Bengio and Kermorvant (2003), along with a method to discover a cluster structure for the Spanish senses. We also present empirical word sense disambiguation results which demonstrate the gain brought by this probabilistic approach, even while only using the translated word to provide disambiguation information. 
Our first generative model, the Sense Model, groups semantically related words from the two languages into senses, and translations are generated by probabilistically choosing a sense and then words from the sense. We show that this improves on the results of Diab and Resnik (2002). Our next model, which we call the Concept Model, aims to improve on the above sense structure by modeling the senses of the two languages separately and relating senses from both languages through a higher-level, semantically less precise concept. The intuition here is that not all of the senses that are possible for a word will be relevant for a concept. In other words, the distribution over the senses of a word given a concept can be expected to have a lower entropy than the distribution over the senses of the word in the language as a whole. In this paper, we look at translation data as a resource for identification of semantic concepts. Note that actual translated word pairs are not always good matches semantically, because the translation process is not on a word by word basis. This introduces a kind of noise in the translation, and an additional hidden variable to represent the shared meaning helps to take it into account. Improved performance over the Sense Model validates the use of concepts in modeling translations. An interesting by-product of the Concept Model is a semantic structure for the secondary language. This is automatically constructed using background knowledge of the structure for the primary language and the observed translation pairs. In the model, words sharing the same sense are synonyms while senses under the same concept are semantically related in the corpus. An investigation of the model trained over real data reveals that it can indeed group related words together. It may be noted that predicting senses from translations need not necessarily be an end result in itself. As we have already mentioned, lack of labeled data is a severe hindrance for supervised approaches to word sense disambiguation. At the same time, there is an abundance of bilingual documents and many more can potentially be mined from the web. It should be possible using our approach to (noisily) assign sense tags to words in such documents, thus providing huge resources of labeled data for supervised approaches to make use of. For the rest of this paper, for simplicity we will refer to the primary language of the parallel document as English and to the secondary as Spanish. The paper is organized as follows. We begin by formally describing the models in Section 2. We describe our approach for constructing the senses and concepts in Section 3. Our algorithm for learning the model parameters is described in Section 4. We present experimental results in Section 5 and our analysis in Section 6. We conclude in Section 7. 2 Probabilistic Models for Parallel Corpora We motivate the use of a probabilistic model by illustrating that disambiguation using translations is possible even when a word has a unique translation. For example, according to WordNet, the word prevention has two senses in English, which may be abbreviated as hindrance (the act of hindering or obstruction) and control (by prevention, e.g. the control of a disease). It has a single translation in our corpus, that being prevenci´on. The first English sense, hindrance, also has other words like bar that occur in the corpus and all of these other words are observed to be translated in Spanish as the word obstrucci´on. 
In addition, none of these other words translate to prevención. So it is not unreasonable to suppose that the intended sense for prevention when translated as prevención is different from that of bar. Therefore, the intended sense is most likely to be control. At the very heart of the reasoning is probabilistic analysis and independence assumptions. We are assuming that senses and words have certain occurrence probabilities and that the choice of the word can be made independently once the sense has been decided. This is the flavor that we look to add to modeling parallel documents for sense disambiguation. We formally describe the two generative models that use these ideas in Subsections 2.2 and 2.3. Figure 1: Graphical representations of the (a) Sense Model, in which the word nodes W_e and W_s share a single sense node T, and the (b) Concept Model, in which the word nodes W_e and W_s have separate sense nodes T_e and T_s related through a concept node C. 2.1 Notation Throughout, we use uppercase letters to denote random variables and lowercase letters to denote specific instances of the random variables. A translation pair is (W_e, W_s), where the subscripts e and s indicate the primary language (English) and the secondary language (Spanish). W_e ranges over the English vocabulary {w_e1, ..., w_em} and W_s over the Spanish vocabulary {w_s1, ..., w_sn}. We use the shorthand P(w_e) for P(W_e = w_e). 2.2 The Sense Model The Sense Model makes the assumption, inspired by ideas in Diab and Resnik (2002) and Bengio and Kermorvant (2003), that the English word W_e and the Spanish word W_s in a translation pair share the same precise sense. In other words, the set of sense labels for the words in the two languages is the same and may be collapsed into one set of senses that is responsible for both English and Spanish words, and the single latent variable in the model is the sense label T, ranging over a set of senses {t1, ..., tk}, for both words W_e and W_s. We also make the assumption that the words in both languages are conditionally independent given the sense label. The generative parameters θ for the model are the prior probability P(t) of each sense t and the conditional probabilities P(w_e|t) and P(w_s|t) of each word w_e and w_s in the two languages given the sense. The generation of a translation pair by this model may be viewed as a two-step process that first selects a sense according to the priors on the senses and then selects a word from each language using the conditional probabilities for that sense. This may be imagined as a factoring of the joint distribution: P(W_e, W_s, T) = P(T) P(W_e|T) P(W_s|T). Note that in the absence of labeled training data, two of the random variables, W_e and W_s, are observed, while the sense variable T is not. However, we can derive the possible values for our sense labels from WordNet, which gives us the possible senses for each English word W_e. The Sense Model is shown in Figure 1(a). 2.3 The Concept Model The assumption of a one-to-one association between sense labels made in the Sense Model may be too simplistic to hold for arbitrary languages. In particular, it does not take into account that translation is from sentence to sentence (with a shared meaning), while the data we are modeling are aligned single-word translations (w_e, w_s), in which the intended meaning of w_e does not always match perfectly with the intended meaning of w_s. Generally, a set of related senses in one language may be translated by one of a set of related senses in the other. This many-to-many mapping is captured in our alternative model using a second-level hidden variable called a concept. Thus we have three hidden variables in the Concept Model: the English sense T_e, the Spanish sense T_s and the concept C, where T_e ranges over the English senses {t_e1, ..., t_eM}, T_s over the Spanish senses {t_s1, ..., t_sN} and C over the concepts {c1, ..., cL}. We make the assumption that the senses T_e and T_s are independent of each other given the shared concept C. The generative parameters θ in the model are the prior probabilities P(c) over the concepts, the conditional probabilities P(t_e|c) and P(t_s|c) for the English and Spanish senses given the concept, and the conditional probabilities P(w_e|t_e) and P(w_s|t_s) for the words w_e and w_s in each language given their senses. We can now imagine the generative process of a translation pair by the Concept Model as first selecting a concept according to the priors, then a sense for each language given the concept, and finally a word for each sense using the conditional probabilities of the words. As in Bengio and Kermorvant (2003), this generative procedure may be captured by factoring the joint distribution using the conditional independence assumptions as P(W_e, W_s, T_e, T_s, C) = P(C) P(T_e|C) P(W_e|T_e) P(T_s|C) P(W_s|T_s). The Concept Model is shown in Figure 1(b). 3 Constructing the Senses and Concepts Building the structure of the model is crucial for our task. Choosing the dimensionality of the hidden variables by selecting the number of senses and concepts, as well as taking advantage of prior knowledge to impose constraints, are very important aspects of building the structure. If certain words are not possible for a given sense, or certain senses are not possible for a given concept, their corresponding parameters should be 0. For instance, for all words w_e that do not belong to a sense t_e, the corresponding parameter θ_{w_e|t_e} would be permanently set to 0. Only the remaining parameters need to be modeled explicitly. While model selection is an extremely difficult problem in general, an important and interesting option is the use of world knowledge. Semantic hierarchies for some languages have been built. We should be able to make use of these known taxonomies in constructing our model. We make heavy use of the WordNet ontology to assign structure to both our models, as we discuss in the following subsections. There are two major tasks in building the structure: determining the possible sense labels for each word, both English and Spanish, and constructing the concepts, which involves choosing the number of concepts and the probable senses for each concept. 3.1 Building the Sense Model Each word in WordNet can belong to multiple synsets in the hierarchy, which are its possible senses. In both of our models, we directly use the WordNet senses as the English sense labels. All WordNet senses for which a word has been observed in the corpus form our set of English sense labels. The Sense Model holds that the sense labels for the two domains are the same. So we must use the same WordNet labels for the Spanish words as well. We include a Spanish word w_s for a sense t if w_s is the translation of any English word w_e in t. 3.2 Building the Concept Model Unlike the Sense Model, the Concept Model does not constrain the Spanish senses to be the same as the English ones. So the two major tasks in building the Concept Model are constructing the Spanish senses and then clustering the English and Spanish senses to build the concepts. Figure 2: The Sense and Concept models for prevention, bar, prevención and obstrucción (in the Concept Model the English senses te1 and te2 and the Spanish senses ts1 and ts2 are grouped under the concepts c20 and c6118). For each Spanish word w_s, we have its set of English translations {w_e1, ..., w_ep}.
One possibility is to group Spanish words looking at their translations. However, a more robust approach is to consider the relevant English senses for w_s. Each English translation w_e of w_s has its set of English sense labels T(w_e) drawn from WordNet. So the relevant English sense labels for w_s may be defined as the union of these sets over all of its translations. We call this the English sense map, or esm, for w_s. We use the esm's to define the Spanish senses. We may imagine each Spanish word to come from one or more Spanish senses. If each word has a single sense, then we add a Spanish sense t_s for each esm, and all Spanish words that share that esm belong to that sense. Otherwise, the esm's have to be split into frequently occurring subgroups. Frequently co-occurring subsets of esm's can define more refined Spanish senses. We identify these subsets by looking at pairs of esm's and computing their intersections. An intersection is considered to be a Spanish sense if it occurs for a significant number of pairs of esm's. We consider both ways of building Spanish senses. In either case, a constructed Spanish sense t_s comes with its relevant set of English senses, which we denote as esm(t_s). Once we have the Spanish senses, we cluster them to form concepts. We use the esm corresponding to each Spanish sense to define a measure of similarity for a pair of Spanish senses. There are many options to choose from here. We use a simple measure that counts the number of common items in the two esm's. [1] The similarity measure is now used to cluster the Spanish senses. Since this measure is not transitive, it does not directly define equivalence classes over the Spanish senses. Instead, we get a similarity graph where the vertices are the Spanish senses and we add an edge between two senses if their similarity is above a threshold. We now pick each connected component from this graph as a cluster of similar Spanish senses. [1] Another option would be to use a measure of similarity for English senses, proposed in Resnik (1995) for two synsets in a concept hierarchy like WordNet. Our initial results with this measure were not favorable. Now we build the concepts from the Spanish sense clusters. We recall that a concept is defined by a set of English senses and a set of Spanish senses that are related. Each cluster represents a concept. A particular concept is formed by the set of Spanish senses in the cluster and the English senses relevant for them. The relevant English senses for any Spanish sense are given by its esm. Therefore, the union of the esm's of all the Spanish senses in the cluster forms the set of English senses for each concept. 4 Learning the Model Parameters Once the model is built, we use the popular EM algorithm (Dempster et al., 1977) for hidden variables to learn the parameters for both models. The algorithm repeatedly iterates over two steps. The first step maximizes the expected log-likelihood of the joint probability of the observed data with the current parameter settings θ. The next step then re-estimates the values of the parameters of the model. Below we summarize the re-estimation steps for each model, where the sums range over the N observed translation pairs (w_e^i, w_s^i) and p~(.) denotes the posterior computed with the current parameters θ. 4.1 EM for the Sense Model P(T = t) = (1/N) Σ_i p~(T = t | w_e^i, w_s^i, θ); P(W_e = w_e | T = t) = Σ_{i: w_e^i = w_e} p~(T = t | w_e^i, w_s^i, θ) / Σ_i p~(T = t | w_e^i, w_s^i, θ); P(W_s = w_s | T = t) follows similarly. 4.2 EM for the Concept Model P(C = c) = (1/N) Σ_i p~(C = c | w_e^i, w_s^i, θ); P(T_e = t_e | C = c) = Σ_i p~(C = c, T_e = t_e | w_e^i, w_s^i, θ) / Σ_i p~(C = c | w_e^i, w_s^i, θ); P(W_e = w_e | T_e = t_e) = Σ_{i: w_e^i = w_e} p~(T_e = t_e | w_e^i, w_s^i, θ) / Σ_i p~(T_e = t_e | w_e^i, w_s^i, θ); P(T_s = t_s | C = c) and P(W_s = w_s | T_s = t_s) follow similarly. 4.3 Initialization of Model Probabilities Since the EM algorithm performs gradient ascent as it iteratively improves the log-likelihood, it is prone to getting caught in local maxima, and selection of the initial conditions is crucial for the learning procedure. Instead of opting for a uniform or random initialization of the probabilities, we make use of prior knowledge about the English words and senses available from WordNet. WordNet provides occurrence frequencies for each synset in the SemCor corpus that may be normalized to derive probabilities P_WN(t_e) for each English sense t_e. For the Sense Model, these probabilities form the initial priors over the senses, while all English (and Spanish) words belonging to a sense are initially assumed to be equally likely. However, initialization of the Concept Model using the same knowledge is trickier. We would like each English sense t_e to have P_init(t_e) = P_WN(t_e). But the fact that each sense belongs to multiple concepts and the constraint Σ_{t_e in c} P(t_e|c) = 1 make the solution non-trivial. Instead, we settle for a compromise. We set P_init(t_e|c) = P_WN(t_e) and P(c) = Σ_{t_e in c} P_WN(t_e). Subsequent normalization takes care of the sum constraints. For a Spanish sense, we set P(t_s) = Σ_{t_e in esm(t_s)} P_WN(t_e). Once we have the Spanish sense probabilities, we follow the same procedure for setting P(t_s|c) for each concept. All the Spanish and English words for a sense are set to be equally likely, as in the Sense Model. It turned out in our experiments on real data that this initialization makes a significant difference in model performance.
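As a concrete illustration of the re-estimation loop summarized above, the following is a minimal sketch of EM for the Sense Model over a list of observed translation pairs. The data structures and the absence of smoothing are illustrative assumptions, and the WordNet-based constraints and initialization of Section 4.3 are omitted.

    # Minimal sketch of EM for the Sense Model over observed translation pairs.
    # pairs: list of (w_e, w_s); senses: candidate sense labels.
    # p_t, p_we, p_ws hold P(t), P(w_e|t), P(w_s|t) and are assumed pre-initialized.
    from collections import defaultdict

    def em_sense_model(pairs, senses, p_t, p_we, p_ws, iterations=20):
        for _ in range(iterations):
            # E-step: posterior over senses for every observed pair
            posteriors = []
            for we, ws in pairs:
                joint = {t: p_t[t] * p_we[t].get(we, 0.0) * p_ws[t].get(ws, 0.0)
                         for t in senses}
                z = sum(joint.values()) or 1.0
                posteriors.append({t: v / z for t, v in joint.items()})
            # M-step: re-estimate P(t), P(w_e|t), P(w_s|t) from expected counts
            count_t = defaultdict(float)
            count_we = defaultdict(lambda: defaultdict(float))
            count_ws = defaultdict(lambda: defaultdict(float))
            for (we, ws), post in zip(pairs, posteriors):
                for t, q in post.items():
                    count_t[t] += q
                    count_we[t][we] += q
                    count_ws[t][ws] += q
            n = float(len(pairs))
            for t in senses:
                p_t[t] = count_t[t] / n
                total = count_t[t] or 1.0
                p_we[t] = {w: c / total for w, c in count_we[t].items()}
                p_ws[t] = {w: c / total for w, c in count_ws[t].items()}
        return p_t, p_we, p_ws

The Concept Model loop has the same shape, with the E-step posterior computed over (concept, English sense, Spanish sense) triples and separate conditionals re-estimated for each language.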
5 Experimental Evaluation Both the models are generative probabilistic models learned from parallel corpora and are expected to fit the training and subsequent test data. A good fit should be reflected in good prediction accuracy over a test set. The prediction task of interest is the sense of an English word when its translation is provided. We estimate the prediction accuracy and recall of our models on Senseval data. [2] In addition, the Concept Model learns a sense structure for the Spanish language. While it is hard to objectively evaluate the quality of such a structure, we present some interesting concepts that are learned as an indication of the potential of our approach. [2] Accuracy is the ratio of the number of correct predictions and the number of attempted predictions. Recall is the ratio of the number of correct predictions and the size of the test set. 5.1 Evaluation with Senseval Data In our experiments with real data, we make use of the parallel corpora constructed by Diab and Resnik (2002) for evaluation purposes. We chose to work on these corpora in order to permit a direct comparison with their results. The sense-tagged portion of the English corpus is comprised of the English "all-words" section of the SENSEVAL-2 test data. The remainder of this corpus is constructed by adding the Brown Corpus, the SENSEVAL-1 corpus, the SENSEVAL-2 English Lexical Sample test, trial and training corpora and the Wall Street Journal sections 18-24 from the Penn Treebank. This English corpus is translated into Spanish using two commercially available MT systems: Globalink Pro 6.4 and Systran Professional Premium. The GIZA++ implementation of the IBM statistical MT models was used to derive the most-likely word-level alignments, and these define the English/Spanish word co-occurrences. To take into account variability of translation, we combine the translations from the two systems for each English word, following in the footsteps of Diab and Resnik (2002). For our experiments, we focus only on nouns, of which there are 875 occurrences in our tagged data. The sense tags for the English domain are derived from the WordNet 1.7 inventory. After pruning stopwords, we end up with 16,186 English words, 31,862 Spanish words and 2,385,574 instances of 41,850 distinct translation pairs. The English words come from 20,361 WordNet senses.

Table 1: Comparison with Diab's Model
  Model        Accuracy   Recall   Parameters
  Diab           0.618     0.572       -
  Sense M.       0.624     0.616    154,947
  Concept M.     0.672     0.651    120,268

As can be seen from Table 1, both our models clearly outperform Diab (2003), which is an improvement over Diab and Resnik (2002), in both accuracy and recall, while the Concept Model does significantly better than the Sense Model with fewer parameters. The comparison is restricted to the same subset of the test data. For our best results, the Sense Model has 20,361 senses, while the Concept Model has 20,361 English senses, 11,961 Spanish senses and 7,366 concepts. The Concept Model results are for the version that allows multiple senses for a Spanish word. Results for the single-sense model are similar. In Figure 3, we compare the prediction accuracy and recall against those of the 21 SENSEVAL-2 English All Words participants and that of Diab (2003), when restricted to the same set of noun instances from the gold standard. It can be seen that our models outperform all the unsupervised approaches in recall and many supervised ones as well. No unsupervised approach is better in both accuracy and recall. It needs to be kept in mind that we take into account only bilingual data for our predictions, and not monolingual features like the context of the word, as most other WSD approaches do.

Figure 3: Comparison with SENSEVAL-2 systems (accuracy plotted against recall for the supervised and unsupervised participants, Diab, and our Sense and Concept Models).

5.2 Semantic Grouping of Spanish Senses Table 2 shows some interesting examples of different Spanish senses for discovered concepts. [3] The context of most concepts, like the ones shown, can be easily understood. For example, the first concept is about government actions and the second deals with murder and accidental deaths. The penultimate concept is interesting because it deals with different kinds of association and involves three different senses containing the word conexión. The other words in two of these senses suggest that they are about union and relation respectively. The third probably involves the link sense of connection. Conciseness of the concepts depends on the similarity threshold that is selected. Some may bring together loosely-related topics, which can be separated by a higher threshold. [3] Some English words are found to occur in the Spanish senses. This is because the machine translation system used to create the Spanish document left certain words untranslated.

Table 2: Example Spanish senses in a concept. For each concept, each row is a separate sense. Dictionary senses of Spanish words are provided in English within parentheses where necessary.
actos accidente accidentes supremas muertes(deaths) decisión decisiones casualty gobernando gobernante matar(to kill) matanzas(slaughter) muertes-le gubernamentales slaying gobernación gobierno-proporciona derramamiento-de-sangre (spilling-of-blood) prohibir prohibiendo prohibitivo prohibitiva cachiporra(bludgeon) obligar(force) obligando(forcing) gubernamental gobiernos asesinato(murder) asesinatos linterna-eléctrica linterna(lantern) manía craze faros-automóvil(headlight) culto(cult) cultos proto-senility linternas-portuarias(harbor-light) delirio delirium antorcha(torch) antorchas antorchas-pino-nudo rabias(fury) rabia farfulla(do hastily) oportunidad oportunidades diferenciación ocasión ocasiones distinción distinciones riesgo(risk) riesgos peligro(danger) especialización destino sino(fate) maestría(mastery) fortuna suerte(fate) peculiaridades particularidades peculiaridades-inglesas probabilidad probabilidades especialidad especialidades diablo(devil) diablos modelo parangón dickens ideal ideales heller santo(saint) santos san lucifer satan satanás idol idols ídolo deslumbra(dazzle) dios god dioses cromo(chromium) divinidad divinity meteoro meteoros meteor meteoros-blue inmortal(immortal) inmortales meteorito meteoritos teología teolog pedregosos(rocky) deidad deity deidades variación variaciones minutos minuto discordancia desacuerdo(discord) discordancias momento momentos un-momento desviación(deviation) desviaciones desviaciones-normales minutos momentos momento segundos discrepancia discrepancias fugaces(fleeting) variación diferencia instante momento disensión pestañeo(blink) guiña(wink) pestañean adhesión adherencia ataduras(tying) pasillo(corridor) enlace(connection) ataduras aisle atadura ataduras pasarela(footbridge) conexión conexiones hall vestíbulos conexión une(to unite) pasaje(passage) relación conexión callejón(alley) callejas-ciegas(blind alley) callejones-ocultos implicación(complicity) envolvimiento

6 Model Analysis In this section, we back up our experimental results with an in-depth analysis of the performance of our two models. Our Sense Model was motivated by Diab and Resnik (2002) but the flavors of the two are quite different. The most important distinction is that the Sense Model is a probabilistic generative model for parallel corpora, where interaction between different words stemming from the same sense comes into play, even if the words are not related through translations, and this interdependence of the senses through common words plays a role in sense disambiguation. We started off our discussions on semantic ambiguity with the intuition that identification of semantic concepts in the corpus that relate multiple senses should help disambiguate senses. The Sense Model falls short of this target since it only brings together a single sense from each language. We will now revisit the motivating example from Section 2 and see how concepts help in disambiguation by grouping multiple related senses together. For the Sense Model, the conditional probability of prevention given its hindrance sense is smaller than that given its control sense, since prevention is the only word that the control sense can generate. However, this difference is compensated for by the higher prior probability of the hindrance sense, which is strengthened by both of the translation pairs. Since the probability of joint occurrence is given by the product P(t) P(w_e|t) P(w_s|t) for any sense t, the model does not develop a clear preference for either of the two senses. The critical difference in the Concept Model can be appreciated directly from the corresponding joint probability P(c) P(t_e|c) P(w_e|t_e) P(t_s|c) P(w_s|t_s), where c is the relevant concept in the model. The preference for a particular instantiation in the model depends not on the prior P(t_e) over a sense, but on the sense conditional P(t_e|c). In our example, since (bar, obstrucción) can be generated only through concept c20, the conditional of its English sense given c20 is the only English sense conditional boosted by it. (prevention, prevención) is generated through a different concept, c6118, where the higher conditional probability of prevention given its sense gradually strengthens one of the possible instantiations for it, and the other one becomes increasingly unlikely as the iterations progress. The inference is that only one sense of prevention is possible in the context of the parallel corpus. The key factor in this disambiguation was that two senses of prevention separated out in two different concepts. The other significant difference between the models is in the constraints on the parameters and the effect that they have on sense disambiguation. In the Sense Model, Σ_t P(t) = 1, while in the Concept Model, Σ_{t_e in c} P(t_e|c) = 1 separately for each concept c. Now for two relevant senses for an English word, a slight difference in their priors will tend to get ironed out when normalized over the entire set of senses for the corpus. In contrast, if these two senses belong to the same concept in the Concept Model, the difference in the sense conditionals will be highlighted, since the normalization occurs over a very small set of senses: the senses for only that concept, which in the best possible scenario will contain only the two contending senses, as in concept c6118 of our example. As can be seen from Table 1, the Concept Model not only outperforms the Sense Model, it does so with significantly fewer parameters. This may be counter-intuitive, since the Concept Model involves an extra concept variable. However, the dissociation of Spanish and English senses can significantly reduce the parameter space. Imagine two Spanish words that are associated with ten English senses and accordingly each of them has a probability for belonging to each of these ten senses. Aided with a concept variable, it is possible to model the same relationship by creating a separate Spanish sense that contains these two words and relating this Spanish sense with the ten English senses through a concept variable. Thus these words now need to belong to only one sense as opposed to ten. Of course, now there are new transition probabilities for each of the eleven senses from the new concept node. The exact reduction in the parameter space will depend on the frequent subsets discovered for the esm's of the Spanish words. Longer and more frequent subsets will lead to larger reductions.
It must also be borne in mind that this reduction comes with the independence assumptions made in the Concept Model. 7 Conclusions and Future Work We have presented two novel probabilistic models for unsupervised word sense disambiguation using parallel corpora and have shown that both models outperform existing unsupervised approaches. In addition, we have shown that our second model, the Concept model, can be used to learn a sense inventory for the secondary language. An advantage of the probabilistic models is that they can easily incorporate additional information, such as context information. In future work, we plan to investigate the use of additional monolingual context. We would also like to perform additional validation of the learned secondary language sense inventory. 8 Acknowledgments The authors would like to thank Mona Diab and Philip Resnik for many helpful discussions and insightful comments for improving the paper and also for making their data available for our experiments. This study was supported by NSF Grant 0308030. References E. Agirre, J. Atserias, L. Padr, and G. Rigau. 2000. Combining supervised and unsupervised lexical knowledge methods for word sense disambiguation. In Computers and the Humanities, Special Double Issue on SensEval. Eds. Martha Palmer and Adam Kilgarriff. 34:1,2. Yoshua Bengio and Christopher Kermorvant. 2003. Extracting hidden sense probabilities from bitexts. Technical report, TR 1231, Departement d’informatique et recherche operationnelle, Universite de Montreal. Peter F. Brown, Stephen Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1991. Word-sense disambiguation using statistical methods. In Meeting of the Association for Computational Linguistics, pages 264–270. Rebecca Bruce and Janyce Wiebe. 1994. A new approach to sense identification. In ARPA Workshop on Human Language Technology. Ido Dagan and Alon Itai. 1994. Word sense disambiguation using a second language monolingual corpus. Computational Linguistics, 20(4):563– 596. Ido Dagan. 1991. Lexical disambiguation: Sources of information and their statistical realization. In Meeting of the Association for Computational Linguistics, pages 341–342. A.P. Dempster, N.M. Laird, and D.B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B 39:1–38. Mona Diab and Philip Resnik. 2002. An unsupervised method for word sense tagging using parallel corpora. In Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics (ACL-02). Mona Diab. 2003. Word Sense Disambiguation Within a Multilingual Framework. Ph.D. thesis, University of Maryland, College Park. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press. Nancy Ide and Jean Veronis. 1998. Word sense disambiguation: The state of the art. Computational Linguistics, 28(1):1–40. Nancy Ide. 2000. Cross-lingual sense determination: Can it work? In Computers and the Humanities: Special Issue on Senseval, 34:147-152. Adam Kilgarrif and Joseph Rosenzweig. 2000. Framework and results for english senseval. Computers and the Humanities, 34(1):15–48. Dekang Lin. 2000. Word sense disambiguation with a similarity based smoothed library. In Computers and the Humanities: Special Issue on Senseval, 34:147-152. K. C. Litkowski. 2000. Senseval: The cl research experience. In Computers and the Humanities, 34(1-2), pp. 153-8. Philip Resnik and David Yarowsky. 1999. 
Distinguishing systems and distinguishing senses: new evaluation methods for word sense disambiguation. Natural Language Engineering, 5(2). Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 448–453. Philip Resnik. 1997. Selectional preference and sense disambiguation. In Proceedings of ACL Siglex Workshop on Tagging Text with Lexical Semantics, Why, What and How?, Washington, April 4-5. David Yarowsky. 1992. Word-sense disambiguation using statistical models of Roget’s categories trained on large corpora. In Proceedings of COLING-92, pages 454–460, Nantes, France, July. David Yarowsky. 1993. One sense per collocation. In Proceedings, ARPA Human Language Technology Workshop, Princeton. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Meeting of the Association for Computational Linguistics, pages 189–196.
Chinese Verb Sense Discrimination Using an EM Clustering Model with Rich Linguistic Features Jinying Chen, Martha Palmer Department of Computer and Information Science University of Pennsylvania Philadelphia, PA, 19104 {jinying,mpalmer}@linc.cis.upenn.edu Abstract This paper discusses the application of the Expectation-Maximization (EM) clustering algorithm to the task of Chinese verb sense discrimination. The model utilized rich linguistic features that capture predicateargument structure information of the target verbs. A semantic taxonomy for Chinese nouns, which was built semi-automatically based on two electronic Chinese semantic dictionaries, was used to provide semantic features for the model. Purity and normalized mutual information were used to evaluate the clustering performance on 12 Chinese verbs. The experimental results show that the EM clustering model can learn sense or sense group distinctions for most of the verbs successfully. We further enhanced the model with certain fine-grained semantic categories called lexical sets. Our results indicate that these lexical sets improve the model’s performance for the three most challenging verbs chosen from the first set of experiments. 1 Introduction Highly ambiguous words may lead to irrelevant document retrieval and inaccurate lexical choice in machine translation (Palmer et al., 2000), which suggests that word sense disambiguation (WSD) is beneficial and sometimes even necessary in such NLP tasks. This paper addresses WSD in Chinese through developing an Expectation-Maximization (EM) clustering model to learn Chinese verb sense distinctions. The major goal is to do sense discrimination rather than sense labeling, similar to (Schütze, 1998). The basic idea is to divide instances of a word into several clusters that have no sense labels. The instances in the same cluster are regarded as having the same meaning. Word sense discrimination can be applied to document retrieval and similar tasks in information access, and to facilitating the building of large annotated corpora. In addition, since the clustering model can be trained on large unannotated corpora and evaluated on a relatively small sense-tagged corpus, it can be used to find indicative features for sense distinctions through exploring huge amount of available unannotated text data. The EM clustering algorithm (Hofmann and Puzicha, 1998) used here is an unsupervised machine learning algorithm that has been applied in many NLP tasks, such as inducing a semantically labeled lexicon and determining lexical choice in machine translation (Rooth et al., 1998), automatic acquisition of verb semantic classes (Schulte im Walde, 2000) and automatic semantic labeling (Gildea and Jurafsky, 2002). In our task, we equipped the EM clustering model with rich linguistic features that capture the predicate-argument structure information of verbs and restricted the feature set for each verb using knowledge from dictionaries. We also semiautomatically built a semantic taxonomy for Chinese nouns based on two Chinese electronic semantic dictionaries, the Hownet dictionary1 and the Rocling dictionary.2 The 7 top-level categories of this taxonomy were used as semantic features for the model. Since external knowledge is used to obtain the semantic features and guide feature selection, the model is not completely unsupervised from this perspective; however, it does not make use of any annotated training data. Two external quality measures, purity and normalized mutual information (NMI) (Strehl. 
2002), were used to evaluate the model's performance on 12 Chinese verbs. The experimental results show that rich linguistic features and the semantic taxonomy are both very useful in sense discrimination. The model generally performs well in learning sense group distinctions for difficult, highly polysemous verbs and sense distinctions for other verbs. Enhanced by certain fine-grained semantic categories called lexical sets (Hanks, 1996), the model's performance improved in a preliminary experiment for the three most difficult verbs chosen from the first set of experiments. [1] http://www.keenage.com/. [2] A Chinese electronic dictionary licensed from The Association for Computational Linguistics and Chinese Language Processing (ACLCLP), Nankang, Taipei, Taiwan. The paper is organized as follows: we briefly introduce the EM clustering model in Section 2 and describe the features used by the model in Section 3. In Section 4, we introduce a semantic taxonomy for Chinese nouns, which is built semi-automatically for our task but can also be used in other NLP tasks such as co-reference resolution and relation detection in information extraction. We report our experimental results in Section 5 and conclude our discussion in Section 6. 2 EM Clustering Model The basic idea of our EM clustering approach is similar to the probabilistic model of co-occurrence described in detail in (Hofmann and Puzicha, 1998). In our model, we treat a set of features {f_1, f_2, ..., f_m}, which are extracted from the parsed sentences that contain a target verb, as observed variables. These variables are assumed to be independent given a hidden variable c, the sense of the target verb. Therefore the joint probability of the observed variables (features) for each verb instance, i.e., each parsed sentence containing the target verb, is defined in equation (1):

p(f_1, f_2, ..., f_m) = Σ_c p(c) Π_{i=1..m} p(f_i | c)    (1)

The f_i's are discrete-valued features that can take multiple values. A typical feature used in our model is shown in (2):

f_i = 0 iff the target verb has no sentential complement; 1 iff the target verb has a nonfinite sentential complement; 2 iff the target verb has a finite sentential complement.    (2)

At the beginning of training (i.e., clustering), the model's parameters p(c) and p(f_i | c) are randomly initialized. [3] Then, the probability of c conditioned on the observed features is computed in the expectation step (E-step), using equation (3):

p~(c | f_1, f_2, ..., f_m) = p(c) Π_{i=1..m} p(f_i | c) / Σ_c p(c) Π_{i=1..m} p(f_i | c)    (3)

In the maximization step (M-step), p(c) and p(f_i | c) are re-computed by maximizing the log-likelihood of all the observed data, which is calculated by using p~(c | f_1, f_2, ..., f_m) estimated in the E-step. The E-step and M-step are repeated for a fixed number of rounds, which is set to 20 in our experiments, [4] or till the amount of change of p(c) and p(f_i | c) is under the threshold 0.001. When doing classification, for each verb instance, the model calculates the same conditional probability as in equation (3) and assigns the instance to the cluster with the maximal p(c | f_1, f_2, ..., f_m). [3] In our experiments, for verbs with more than 3 senses, syntactic and semantic restrictions derived from dictionary entries are used to constrain the random initialization. [4] In our experiments, we set 20 as the maximal number of rounds after trying different numbers of rounds (20, 40, 60, 80, 100) in a preliminary experiment. 3 Features Used in the Model The EM clustering model uses a set of linguistic features to capture the predicate-argument structure information of the target verbs. These features are usually more indicative of verb sense distinctions than simple features such as words next to the target verb or their POS tags. For example, the Chinese verb "出|chu1" has a sense of produce; the distinction between this sense and the verb's other senses, such as happen and go out, largely depends on the semantic category of the verb's direct object. Typical examples are shown in (1):

(1) a. 他们/their 县/county 出/produce 香蕉/banana "Their county produces bananas."
    b. 他们/their 县/county 出/happen 大/big 事/event 了/ASP "A big event happened in their county."
    c. 他们/their 县/county 出/go out 门/door 就/right away 是/be 山/mountain "In their county, you can see mountains as soon as you step out of the doors."

The verb has the sense produce in (1a) and its object should be something producible, such as 香蕉/banana. While in (1b), with the sense happen, the verb typically takes an event or event-like object, such as 大事/big event, 事故/accident or 问题/problem, etc. In (1c), the verb's object 门/door is closely related to location, consistent with the sense go out. In contrast, simple lexical or POS tag features sometimes fail to capture such information, which can be seen clearly in (2):

(2) a. 去年/last year 出/produce 香蕉/banana 3000 公斤/kilogram "3000 kilograms of bananas were produced last year."
    b. 要/in order to 出/produce 海南/Hainan 最好/best 的/DE 香蕉/banana "In order to produce the best bananas in Hainan, ..."

The verb's object 香蕉/banana, which is next to the verb in (2a), is far away from the verb in (2b). For (2b), a classifier only looking at the adjacent positions of the target verb tends to be misled by the NP right after the verb, i.e., 海南/Hainan, which is a province in China and a typical object of the verb with the sense go out. Five types of features are used in our model:
1. Semantic category of the subject of the target verb
2. Semantic category of the object of the target verb
3. Transitivity of the target verb
4. Whether the target verb takes a sentential complement and which type of sentential complement (finite or nonfinite) it takes
5. Whether the target verb occurs in a verb compound
We obtain the values for the first two types of features (1) and (2) from a semantic taxonomy for Chinese nouns, which we will introduce in detail in the next section. In our implementation, the model uses different features for different verbs. The criteria for feature selection are from the electronic CETA dictionary file [5] and a hard copy English-Chinese dictionary, The Warmth Modern Chinese-English Dictionary. [6] For example, the verb "出|chu1" never takes sentential complements, thus the fourth type of feature is not used for it. It could be supposed that we can still have a uniform model, i.e., a model using the same set of features for all the target verbs, and just let the EM clustering algorithm find useful features for different verbs automatically. The problem here is that unsupervised learning models (i.e., models trained on unlabeled data) are more likely to be affected by noisy data than supervised ones.
In our implementation, the model uses different features for different verbs. The criteria for feature selection come from the electronic CETA dictionary file5 and a hard-copy English-Chinese dictionary, The Warmth Modern Chinese-English Dictionary.6 For example, the verb "出|chu1" never takes sentential complements, so the fourth type of feature is not used for it. It could be supposed that we could still have a uniform model, i.e., a model using the same set of features for all the target verbs, and simply let the EM clustering algorithm find the useful features for each verb automatically. The problem here is that unsupervised learning models (i.e., models trained on unlabeled data) are more likely to be affected by noisy data than supervised ones. Since all the features used in our model are extracted from automatically parsed sentences that inevitably contain preprocessing errors such as segmentation, POS tagging and parsing errors, using verb-specific sets of features can alleviate the problem caused by noisy data to some extent. For example, if the model already knows that a verb like "出|chu1" can never take sentential complements (i.e., it does not use the fourth type of feature for that verb), it will not be misled by erroneous parsing information claiming that the verb takes a sentential complement in certain sentences. Since the corresponding feature is not included, the noisy data is filtered out. In our EM clustering model, all the features selected for a target verb are treated in the same way, as described in Section 2.

5 Licensed from the Department of Defense.
6 The Warmth Modern Chinese-English Dictionary, Wang-Wen Books Ltd, 1997.

4 A Semantic Taxonomy Built Semi-automatically

The examples in (1) have shown that the semantic category of the object of a verb is sometimes crucial in distinguishing certain Chinese verb senses. Our previous work on information extraction in Chinese (Chen et al., 2004) has also shown that semantic features, which are more general than lexical features but still contain rich information about words, can be used to improve a model's ability to handle unknown words, thus alleviating potential sparse data problems.

We have two Chinese electronic semantic dictionaries: the Hownet dictionary, which assigns 26,106 nouns to 346 semantic categories, and the Rocling dictionary, which assigns 4,474 nouns to 110 semantic categories.7 A preliminary experimental result suggests that these semantic categories might be too fine-grained for the EM clustering model (see Section 5.2 for details). An analysis of the sense distinctions of several Chinese verbs also suggests that more general categories on top of the Hownet and Rocling categories could still be informative and, most importantly, could enable the model to generate meaningful clusters more easily. We therefore built a three-level semantic taxonomy based on the two semantic dictionaries, using both automatic methods and manual effort.

7 Hownet assigns multiple entries (which can be different semantic categories) to polysemous words. The Rocling dictionary we used assigns only one entry (i.e., one semantic category) to each noun.

The taxonomy was built in three steps. First, a simple mapping algorithm was used to map the semantic categories defined in Hownet and Rocling into 27 top-level WordNet categories.8 The Hownet and Rocling semantic categories have English glosses. For each category gloss, the algorithm looks through the hypernyms of its first sense in WordNet and chooses the first WordNet top-level category it finds.

8 The 27 categories contain the 25 unique beginners for noun source files in WordNet, as defined in (Fellbaum, 1998), and two higher-level categories, Entity and Abstraction.
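The automatic part of this first step can be sketched roughly as follows, here using NLTK's WordNet interface (the paper used an older WordNet version than the one NLTK ships, so outputs will differ); TOP_LEVEL is a small stand-in for the 27 top-level categories actually used.

```python
# Rough sketch of the step-1 mapping, assuming NLTK and its WordNet data are
# installed (nltk.download('wordnet')). TOP_LEVEL is a placeholder for the 27
# top-level categories (the 25 unique beginners plus Entity and Abstraction);
# only a handful are listed here.
from nltk.corpus import wordnet as wn

TOP_LEVEL = {"entity", "abstraction", "event", "state", "act", "location",
             "person", "animal", "plant", "artifact", "substance", "time"}

def map_gloss_to_top_level(gloss):
    """Follow the hypernym chain of the gloss's first noun sense and return
    the first top-level category encountered, or None if there is none
    (e.g. 'animate', 'LandVehicle'), which then requires hand correction."""
    synsets = wn.synsets(gloss, pos=wn.NOUN)
    if not synsets:
        return None
    current = synsets[0]                   # first sense, as in the paper
    while True:
        head = current.name().split(".")[0]
        if head in TOP_LEVEL:
            return head
        hypernyms = current.hypernyms()
        if not hypernyms:
            return None
        current = hypernyms[0]             # follow the first hypernym path

print(map_gloss_to_top_level("money"))     # e.g. 'abstraction' (version-dependent)
```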
The mapping obtained from step 1 needs further modification, for two reasons. First, the glosses of Hownet or Rocling semantic categories usually have multiple senses in WordNet, and sometimes the first WordNet sense of a category gloss is not its intended meaning in Hownet or Rocling; in this case, the simple algorithm cannot produce the correct mapping. Second, Hownet and Rocling sometimes use adjectives or non-words as category glosses, such as animate and LandVehicle, which have no WordNet nominal hypernyms at all. However, those adjectives or non-words usually have straightforward meanings and can easily be reassigned to an appropriate WordNet category. Although not accurate, the automatic mapping in step 1 provides a basic framework or skeleton for the semantic taxonomy we want to build and makes the subsequent work easier.

In step 2, hand correction, we found that we could make judgments and the necessary adjustments on about 80% of the mappings by looking only at the category glosses used by Hownet or Rocling, such as livestock, money, building and so on. For the other 20%, we could make quick decisions by looking them up in an electronic table we created: for each Hownet or Rocling category, the table lists all the nouns assigned to it by the two dictionaries. We merged two WordNet categories into others and subdivided three categories that seemed more coarse-grained than the others into 2-5 subcategories each. Step 2 took three days, and 35 intermediate-level categories were generated.

In step 3, we manually clustered the 35 intermediate-level categories into 7 top-level semantic categories. Figure 1 shows part of the taxonomy.

Figure 1. Part of the 3-level semantic taxonomy for Chinese nouns (other top-level nodes are Time, Human, Animal and State). The top-level node Entity dominates intermediate-level nodes such as Plant, Artifact, Document, Food and Money, which group Hownet/Rocling categories such as drinks, edible, meals and vegetable. The top-level node Location dominates Location_Part, Location and Group, grouping categories such as institution, army and corporation. The top-level node Event dominates Natural Phenomena, Happening, Activity and Process, grouping categories such as chase, cut, pass, split and cheat, and process, BecomeLess, StateChange and disappear.

The EM clustering model uses the 7 top-level categories to define the first two types of features introduced in Section 3. For example, the value of a feature f_k is 1 if and only if the object NP of the target verb belongs to the semantic category Event, and 0 otherwise.
5 Clustering Experiments

Since we need labeled data to evaluate the clustering performance but have limited sense-tagged corpora, we applied the clustering model to 12 Chinese verbs in our experiments. The verbs are chosen from the 28 annotated verbs in the Penn Chinese Treebank so that each has at least two verb meanings in the corpus and, for each of them, the number of instances of a single verb sense does not exceed 90% of the total number of instances. In our task, we generally do not include senses for other parts of speech of the selected words, such as noun, preposition, conjunction and particle, since the parser we used has a very high accuracy in distinguishing these parts of speech (>98% for most of the words). However, we do include senses for the conjunctional and/or prepositional usage of two words, "到|dao4" and "为|wei4", since our parser cannot reliably distinguish the verb usage from the conjunctional or prepositional usage for these two words.

Five verbs, the first five listed in Table 1, are both highly polysemous and difficult for a supervised word sense classifier (Dang et al., 2002).9 In our experiments, we manually grouped the verb senses for these five verbs. The criteria for the grouping are similar to those in Palmer et al.'s (to appear) work on English verbs, which considers both sense coherence and predicate-argument structure distinctions. Figure 2 gives an example of the definition of sense groups. The manually defined sense groups are used to evaluate the model's performance on the five verbs.

9 In the supervised task, their accuracies are lower than 85%, and four of them are even lower than the baselines.

The model was trained on an unannotated corpus, People's Daily News (PDN), and tested on the manually sense-tagged Chinese Treebank (with some additional sense-tagged PDN data).10 We parsed the training and test data using a Maximum Entropy parser and extracted the features from the parsed data automatically. The number of clusters used by the model is set to the number of defined senses or sense groups of each target verb. For each verb, we ran the EM clustering algorithm ten times. Table 2 shows the average performance and the standard deviation for each verb. Table 1 summarizes the data used in the experiments, where we also give the normalized sense perplexity11 of each verb in the test data.

10 The sense-tagged PDN data we used here are the same as in (Dang et al., 2002).
11 It is calculated as the entropy of the sense distribution of a verb in the test data divided by the largest possible entropy, i.e., log2(the number of senses of the verb in the test data).

5.1 Evaluation Methods

We use two external quality measures, purity and normalized mutual information (NMI) (Strehl, 2002), to evaluate the clustering performance. Assume a verb has l senses and the clustering model assigns its n instances to k clusters; let n_i be the size of the i-th cluster, n^j the number of instances hand-tagged with the j-th sense, and n_i^j the number of instances with the j-th sense in the i-th cluster. Purity is defined in equation (4):

purity = \frac{1}{n} \sum_{i=1}^{k} \max_j n_i^j    (4)

It can be interpreted as classification accuracy when, for each cluster, we treat the majority of instances that share the same sense as correctly classified. The baseline purity is calculated by treating all instances of a target verb as a single cluster. The purity measure is very intuitive. In our case, since the number of clusters is preset to the number of senses, purity for verbs with two senses is equal to the classification accuracy defined in supervised WSD. However, for verbs with more than 2 senses, purity is less informative, in that a clustering model could achieve high purity by making the instances of 2 or 3 dominant senses the majority instances of all the clusters.

Mutual information (MI) is more theoretically well-founded than purity. Treating the verb sense and the cluster as random variables S and C, the MI between them is defined in equation (5):

MI(S, C) = \sum_{j=1}^{l} \sum_{i=1}^{k} p(s_j, c_i) \log \frac{p(s_j, c_i)}{p(s_j) p(c_i)} = \sum_{j=1}^{l} \sum_{i=1}^{k} \frac{n_i^j}{n} \log \frac{n_i^j \cdot n}{n^j \cdot n_i}    (5)

MI(S, C) characterizes the reduction in uncertainty of one random variable S (or C) due to knowing the other variable C (or S). A single cluster containing all instances of a target verb has zero MI; random clustering also has zero MI in the limit. In our experiments, we used the [0,1]-normalized mutual information (NMI) (Strehl, 2002). A shortcoming of this measure, however, is that the best possible clustering (the upper bound) evaluates to less than 1 unless the classes are balanced. Unfortunately, unbalanced sense distributions are the usual case in WSD tasks, which makes NMI by itself hard to interpret. Therefore, in addition to NMI, we also give its upper bound (upper-NMI) and the ratio of NMI to its upper bound (NMI-ratio) for each verb, as shown in columns 6 to 8 of Table 2.
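A compact sketch of equations (4) and (5), given the cluster-by-sense contingency table, is shown below. The normalization used here is only indicative: the exact normalizer of Strehl (2002), and hence the upper bound reported in Table 2, is not reproduced.

```python
# Purity and NMI from a contingency table n[i][j] = number of instances of
# sense j placed in cluster i. The geometric-mean normalizer is an assumption
# of this sketch, not necessarily the one used in the paper.
import math

def purity(table):
    n = sum(sum(row) for row in table)
    return sum(max(row) for row in table) / n

def mutual_information(table):
    n = sum(sum(row) for row in table)
    cluster_tot = [sum(row) for row in table]
    sense_tot = [sum(col) for col in zip(*table)]
    mi = 0.0
    for i, row in enumerate(table):
        for j, nij in enumerate(row):
            if nij:
                mi += (nij / n) * math.log2(nij * n / (cluster_tot[i] * sense_tot[j]))
    return mi

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def nmi(table):
    cluster_tot = [sum(row) for row in table]
    sense_tot = [sum(col) for col in zip(*table)]
    denom = math.sqrt(entropy(cluster_tot) * entropy(sense_tot)) or 1.0
    return mutual_information(table) / denom

# toy example: 2 clusters x 2 senses
table = [[40, 10],
         [5, 45]]
print(round(purity(table), 2), round(nmi(table), 2))   # roughly 0.85 and 0.40
```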
Figure 2. Sense groups for the Chinese verb "到|dao4".
Senses: 1. to go to, leave for; 2. to come; 3. to arrive; 4. to reach a particular stage, condition, or level; 5. marker for completion of activities (after a verb); 6. marker for direction of activities (after a verb); 7. to reach a time point; 8. up to, until (prepositional usage); 9. up to, until, (from ...) to ... (conjunctional usage).
Sense groups: {1, 2}; {4, 7, 8, 9}; {5}; {3}; {6}.

Table 1. A summary of the training and test data used in the experiments.

Verb|Pinyin | Sample senses | #Senses in test data | #Sense groups in test data | Sense perplexity | #Clusters | #Training instances | #Test instances
出|chu1 | go out / produce | 16 | 7 | 0.68 | 8 | 399 | 157
到|dao4 | come / reach | 9 | 5 | 0.72 | 6 | 1838 | 186
见|jian4 | see / show | 8 | 5 | 0.68 | 6 | 117 | 82
想|xiang3 | think / suppose | 6 | 4 | 0.64 | 6 | 94 | 228
要|yao4 | should / intend to | 8 | 4 | 0.65 | 7 | 2781 | 185
表示|biao3shi4 | indicate / express | 2 | - | 0.93 | 2 | 666 | 97
发现|fa1xian4 | discover / realize | 2 | - | 0.76 | 2 | 319 | 27
发展|fa1zhan3 | develop / grow | 3 | - | 0.69 | 3 | 458 | 130
恢复|hui1fu4 | resume / restore | 4 | - | 0.83 | 4 | 107 | 125
说|shuo1 | say / express by written words | 7 | - | 0.40 | 7 | 2692 | 307
投入|tou2ru4 | to input / plunge into | 2 | - | 1.00 | 2 | 136 | 23
为|wei2_4 | to be / in order to | 6 | - | 0.82 | 6 | 547 | 463

Table 2. The performance of the EM clustering model on 12 Chinese verbs, measured by purity and normalized mutual information (NMI).

Verb | Sense perplexity | Baseline purity (%) | Purity (%) | Std. dev. of purity (%) | NMI | Upper-NMI | NMI-ratio (%) | Std. dev. of NMI-ratio (%)
出 | 0.68 | 52.87 | 63.31 | 1.59 | 0.2954 | 0.6831 | 43.24 | 1.76
到 | 0.72 | 40.32 | 90.48 | 1.08 | 0.4802 | 0.7200 | 75.65 | 0.00
见 | 0.68 | 58.54 | 72.20 | 1.61 | 0.1526 | 0.6806 | 22.41 | 0.66
想 | 0.64 | 68.42 | 79.39 | 3.74 | 0.2366 | 0.6354 | 37.24 | 8.22
要 | 0.65 | 69.19 | 69.62 | 0.34 | 0.0108 | 0.6550 | 1.65 | 0.78
表示 | 0.93 | 64.95 | 98.04 | 1.49 | 0.8670 | 0.9345 | 92.77 | 0.00
发现 | 0.76 | 77.78 | 97.04 | 3.87 | 0.7161 | 0.7642 | 93.71 | 13.26
发展 | 0.69 | 53.13 | 90.77 | 0.24 | 0.4482 | 0.6918 | 64.79 | 2.26
恢复 | 0.83 | 45.97 | 65.32 | 0.00 | 0.1288 | 0.8234 | 15.64 | 0.00
说 | 0.40 | 80.13 | 93.00 | 0.58 | 0.3013 | 0.3958 | 76.13 | 4.07
投入 | 1.00 | 52.17 | 95.65 | 0.00 | 0.7827 | 0.9986 | 78.38 | 0.00
为 | 0.82 | 32.61 | 75.12 | 0.43 | 0.4213 | 0.8213 | 51.30 | 2.07
Average | 0.73 | 58.01 | 82.50 | 1.12 | 0.4088 | 0.7336 | 54.41 | 3.31

5.2 Experimental Results

Table 2 summarizes the experimental results for the 12 Chinese verbs. As we can see, the EM clustering model performs well on most of them, except for the verb "要|yao4".12 The NMI-based measure, NMI-ratio, turns out to be more stringent than purity: a high purity does not necessarily mean a high NMI-ratio. Although one would intuitively expect NMI-ratio to be related to sense perplexity and purity, it is hard to formalize the relationships between them from the results. In fact, the NMI-ratio for a particular verb is ultimately determined by its concrete sense distribution in the test data and the model's clustering behavior for that verb. For example, the verbs "出|chu1" and "见|jian4" have the same sense perplexity, and "见|jian4" has a higher purity than "出|chu1" (72.20% vs. 63.31%), but the NMI-ratio for "见|jian4" is much lower than that for "出|chu1" (22.41% vs. 43.24%). An analysis of the classification results for "见|jian4" shows that the clustering model made the instances of the verb's most dominant sense the majority instances of three clusters (out of five clusters in total), which is penalized heavily by the NMI measure.

12 For all the verbs except "要|yao4", the model's purities outperformed the baseline purities significantly (p<0.05, and p<0.001 for 8 of them).

Rich linguistic features turn out to be very effective in learning Chinese verb sense distinctions. Except for two verbs, "发现|fa1xian4" and "表示|biao3shi4", whose sense distinctions can usually be made only by syntactic alternations,13 features such as semantic features, or combinations of semantic features and syntactic alternations, are very beneficial and sometimes even necessary for learning the sense distinctions of the other verbs.

13 For example, the verb "发现|fa1xian4" takes an object in one sense, discover, and a sentential complement in the other sense, realize.
For example, the verb "见|jian4" has one sense, see, in which the verb typically takes a Human subject and a sentential complement, while in another sense, show, the verb typically takes an Entity subject and a State object. An inspection of the classification results shows that the EM clustering model has indeed learned such combinatory patterns from the training data.

The experimental results also indicate that the semantic taxonomy we built is beneficial for the task. For example, the verb "投入|tou2ru4" has two senses, input and plunge into. It typically takes an Event object for the second sense but not for the first one. A single feature obtained from our semantic taxonomy, which tests whether the verb takes an Event object, captures this property neatly (achieving a purity of 95.65% and an NMI-ratio of 78.38% when using 2 clusters). Without the taxonomy, the top-level category Event is split into many fine-grained Hownet or Rocling categories, which makes it very difficult for the EM clustering model to learn the sense distinctions for this verb. In fact, in a preliminary experiment using only the Hownet and Rocling categories, the model had the same purity as the baseline (52.17%) and a low NMI-ratio (4.22%) when using 2 clusters. The purity improved when using more clusters (70.43% with 4 clusters and 76.09% with 6), but it was still much lower than the purity achieved by using the semantic taxonomy, and the NMI-ratio dropped further (1.19% and 1.20% for the two cases).

By looking at the classification results, we identified three major types of errors. First, preprocessing errors create noisy data for the model. Second, certain sense distinctions depend heavily on global contextual information (cross-sentence information) that is not captured by our model. This problem is especially serious for the verb "要|yao4". For example, without global contextual information, the verb can have at least three meanings, want, need or should, in the same clause, as shown in (3):

(3) 他/he 要/want/need/should 马上/at once 读完/finish reading 这本/this 书/book
    "He wants to/needs to/should finish reading this book at once."

Third, a target verb sometimes takes specific types of NP arguments, or co-occurs with specific types of verbs in verb compounds, in certain senses. Such information is crucial for distinguishing these senses from the others, but it is not captured by the general semantic taxonomy used here. We did further experiments to investigate how much improvement the model could gain by capturing such information, as discussed in Section 5.3.

5.3 Experiments with Lexical Sets

As discussed by Patrick Hanks (1996), certain senses of a verb are often distinguished by very narrowly defined semantic classes (called lexical sets) that are specific to the meaning of that verb sense. For example, in our case, the verb "恢复|hui1fu4" has a sense recover in which its direct object should be something that can be recovered naturally.
A typical set of object NPs of the verb for this particular sense is partially listed in (4):

(4) Lexical set for naturally recoverable things:
    {体力/physical strength, 身体/body, 健康/health, 精力/mental energy, 听力/hearing, 知觉/feeling, 记忆力/memory, ...}

Most words in this lexical set belong to the Hownet category attribute and to the top-level category State in our taxonomy. However, even the lower-level category attribute still contains many other words irrelevant to the lexical set, some of which are even typical objects of the verb for two other senses, resume and regain, such as 邦交/diplomatic relations in "恢复/resume 邦交/diplomatic relations" and 名誉/reputation in "恢复/regain 名誉/reputation". Therefore, a lexical set like (4) is necessary for distinguishing the recover sense from the other senses of the verb.

It has been argued that the extensional definition of lexical sets can only be done using corpus evidence and cannot be done fully automatically (Hanks, 1997). In our experiments, we use a bootstrapping approach to obtain five lexical sets semi-automatically for three verbs, "出|chu1", "见|jian4" and "恢复|hui1fu4", which have both low purity and low NMI-ratio in the first set of experiments.14

14 We did not include "要|yao4", since its meaning rarely depends on local predicate-argument structure information.

We first extracted candidates for the lexical sets from the training data. For example, we extracted all the direct objects of the verb "恢复|hui1fu4" and all the verbs that combine with the verb "出|chu1" to form verb compounds from the automatically parsed training data. From the candidates, we manually selected words to form five initial seed sets, each of which contains no more than ten words. A simple algorithm was used to search for all the words that have the same detailed Hownet semantic definitions (semantic category plus certain supplementary information) as the seed words. We did not use Rocling because its semantic definitions are so general that a seed word tends to extend to a huge set of irrelevant words. Highly relevant words were manually selected from all the words found by the searching algorithm and added to the initial seed sets. The enlarged sets were used as lexical sets.

The enhanced model first uses the lexical sets to obtain the semantic category of the NP arguments of the three verbs. Only when this search fails does the model resort to the general semantic taxonomy. The model also uses the lexical sets to determine the types of the compound verbs that contain the target verb "出|chu1" and uses them as new features.

Table 3 shows the model's performance on the three verbs with and without lexical sets. As we can see, using lexical sets improves the model's performance on all of them, especially on the verb "出|chu1". Although the results are still preliminary, they nevertheless give us hints of how much a WSD model for Chinese verbs could gain from lexical sets.

Table 3. Clustering performance with and without lexical sets for three Chinese verbs.

Verb | Purity w/o lexical sets (%) | NMI-ratio w/o lexical sets (%) | Purity with lexical sets (%) | NMI-ratio with lexical sets (%)
出 | 63.61 | 43.24 | 76.50 | 52.81
见 | 72.20 | 22.41 | 77.56 | 34.63
恢复 | 65.32 | 15.64 | 69.03 | 19.71
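The automatic expansion step of this bootstrapping procedure might look roughly like the following; the Hownet-style definition strings and the seed words below are invented placeholders, and the crucial selection of "highly relevant" words remains a manual step, as described above.

```python
# Sketch of the seed-expansion step for building a lexical set. HOWNET_DEF is a
# stand-in for the Hownet lexicon (word -> detailed semantic definition, i.e.
# category plus supplementary information). The entries and seeds shown are
# invented for illustration and are not actual Hownet definitions.
HOWNET_DEF = {
    "体力":   "attribute|strength",
    "精力":   "attribute|strength",
    "健康":   "attribute|circumstances",
    "邦交":   "attribute|relation",      # same coarse category, different detail:
                                          # relevant to resume, not to recover
}

def expand_seed_set(seeds, lexicon):
    """Collect every word whose detailed Hownet definition matches that of
    some seed word; the result is then pruned by hand."""
    seed_defs = {lexicon[w] for w in seeds if w in lexicon}
    return {w for w, d in lexicon.items() if d in seed_defs}

candidates = expand_seed_set({"体力"}, HOWNET_DEF)
print(candidates)   # {'体力', '精力'}: 邦交 is excluded by its detailed definition
```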
6 Conclusion

We have shown that an EM clustering model that uses rich linguistic features and a general semantic taxonomy for Chinese nouns generally performs well in learning sense distinctions for 12 Chinese verbs. In addition, using lexical sets improves the model's performance on three of the most challenging verbs. Future work is to extend our coverage and to apply the semantic taxonomy and the same types of features to supervised WSD in Chinese. Since the experimental results suggest that a general semantic taxonomy and more constrained lexical sets are both beneficial for WSD tasks, we will also develop automatic methods to build large-scale semantic taxonomies and lexical sets for Chinese, methods that reduce human effort as much as possible while still ensuring the high quality of the resulting taxonomies and lexical sets.

7 Acknowledgements

This work has been supported by an ITIC supplement to a National Science Foundation Grant, NSF-ITR-EIA-0205448. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

References

Jinying Chen, Nianwen Xue and Martha Palmer. 2004. Using a Smoothing Maximum Entropy Model for Chinese Nominal Entity Tagging. In Proceedings of the 1st Int. Joint Conference on Natural Language Processing. Hainan Island, China.
Hoa Trang Dang, Ching-yi Chia, Martha Palmer, and Fu-Dong Chiou. 2002. Simple Features for Chinese Word Sense Disambiguation. In Proceedings of COLING-2002, the Nineteenth Int. Conference on Computational Linguistics. Taipei, Aug. 24-Sept. 1.
Christiane Fellbaum. 1998. WordNet - an Electronic Lexical Database. The MIT Press, Cambridge, Massachusetts, London.
Daniel Gildea and Daniel Jurafsky. 2002. Automatic Labeling of Semantic Roles. Computational Linguistics, 28(3):245-288.
Patrick Hanks. 1996. Contextual dependencies and lexical sets. The Int. Journal of Corpus Linguistics, 1:1.
Patrick Hanks. 1997. Lexical sets: relevance and probability. In B. Lewandowska-Tomaszczyk and M. Thelen (eds.), Translation and Meaning, Part 4. School of Translation and Interpreting, Maastricht, The Netherlands.
Thomas Hofmann and Jan Puzicha. 1998. Statistical models for co-occurrence data. MIT Artificial Intelligence Lab., Technical Report AIM-1625.
Adam Kilgarriff and Martha Palmer. 2000. Introduction to the special issue on SENSEVAL. Computers and the Humanities, 34(1-2):15-48.
Martha Palmer, Hoa Trang Dang, and Christiane Fellbaum. To appear. Making fine-grained and coarse-grained sense distinctions, both manually and automatically. Natural Language Engineering.
Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1998. EM-based clustering for NLP applications. AIMS Report 4(3). Institut für Maschinelle Sprachverarbeitung.
Sabine Schulte im Walde. 2000. Clustering verbs semantically according to their alternation behaviour. In Proceedings of the 18th Int. Conference on Computational Linguistics, 747-753.
Hinrich Schütze. 1998. Automatic Word Sense Discrimination. Computational Linguistics, 24(1):97-124.
Alexander Strehl. 2002. Relationship-based Clustering and Cluster Ensembles for High-dimensional Data Mining. Dissertation, The University of Texas at Austin. http://www.lans.ece.utexas.edu/~strehl/diss/.
Relieving The Data Acquisition Bottleneck In Word Sense Disambiguation Mona Diab Linguistics Department Stanford University [email protected] Abstract Supervised learning methods for WSD yield better performance than unsupervised methods. Yet the availability of clean training data for the former is still a severe challenge. In this paper, we present an unsupervised bootstrapping approach for WSD which exploits huge amounts of automatically generated noisy data for training within a supervised learning framework. The method is evaluated using the 29 nouns in the English Lexical Sample task of SENSEVAL2. Our algorithm does as well as supervised algorithms on 31% of this test set, which is an improvement of 11% (absolute) over state-of-the-art bootstrapping WSD algorithms. We identify seven different factors that impact the performance of our system. 1 Introduction Supervised Word Sense Disambiguation (WSD) systems perform better than unsupervised systems. But lack of training data is a severe bottleneck for supervised systems due to the extensive labor and cost involved. Indeed, one of the main goals of the SENSEVAL exercises is to create large amounts of sense-annotated data for supervised systems (Kilgarriff&Rosenzweig, 2000). The problem is even more challenging for languages which possess scarce computer readable knowledge resources. In this paper, we investigate the role of large amounts of noisily sense annotated data obtained using an unsupervised approach in relieving the data acquisition bottleneck for the WSD task. We bootstrap a supervised learning WSD system with an unsupervised seed set. We use the sense annotated data produced by Diab’s unsupervised system SALAAM (Diab&Resnik, 2002; Diab, 2003). SALAAM is a WSD system that exploits parallel corpora for sense disambiguation of words in running text. To date, SALAAM yields the best scores for an unsupervised system on the SENSEVAL2 English All-Words task (Diab, 2003). SALAAM is an appealing approach as it provides automatically sense annotated data in two languages simultaneously, thereby providing a multilingual framework for solving the data acquisition problem. For instance, SALAAM has been used to bootstrap the WSD process for Arabic as illustrated in (Diab, 2004). In a supervised learning setting, WSD is cast as a classification problem, where a predefined set of sense tags constitutes the classes. The ambiguous words in text are assigned one or more of these classes by a machine learning algorithm based on some extracted features. This algorithm learns parameters from explicit associations between the class and the features, or combination of features, that characterize it. Therefore, such systems are very sensitive to the training data, and those data are, generally, assumed to be as clean as possible. In this paper, we question that assumption. Can large amounts of noisily annotated data used in training be useful within such a learning paradigm for WSD? What is the nature of the quality-quantity trade-off in addressing this problem? 2 Related Work To our knowledge, the earliest study of bootstrapping a WSD system with noisy data is by Gale et. al., (Gale et al. , 1992). Their investigation was limited in scale to six data items with two senses each and a bounded number of examples per test item. Two more recent investigations are by Yarowsky, (Yarowsky, 1995), and later, Mihalcea, (Mihalcea, 2002). Each of the studies, in turn, addresses the issue of data quantity while maintaining good quality training examples. 
Both investigations present algorithms for bootstrapping supervised WSD systems using clean data based on a dictionary or an ontological resource. The general idea is to start with a clean initial seed and iteratively increase the seed size to cover more data. Yarowsky starts with a few tagged instances to train a decision list approach. The initial seed is manually tagged with the correct senses based on entries in Roget’s Thesaurus. The approach yields very successful results — 95% — on a handful of data items. Mihalcea, on the other hand, bases the bootstrapping approach on a generation algorithm, GenCor (Mihalcea&Moldovan, 1999). GenCor creates seeds from monosemous words in WordNet, Semcor data, sense tagged examples from the glosses of polysemous words in WordNet, and other hand tagged data if available. This initial seed set is used for querying the Web for more examples and the retrieved contexts are added to the seed corpus. The words in the contexts of the seed words retrieved are then disambiguated. The disambiguated contexts are then used for querying the Web for yet more examples, and so on. It is an iterative algorithm that incrementally generates large amounts of sense tagged data. The words found are restricted to either part of noun compounds or internal arguments of verbs. Mihalcea’s supervised learning system is an instance-based-learning algorithm. In the study, Mihalcea compares results yielded by the supervised learning system trained on the automatically generated data, GenCor, against the same system trained on manually annotated data. She reports successful results on six of the data items tested. 3 Empirical Layout Similar to Mihalcea’s approach, we compare results obtained by a supervised WSD system for English using manually sense annotated training examples against results obtained by the same WSD system trained on SALAAM sense tagged examples. The test data is the same, namely, the SENSEVAL 2 English Lexical Sample test set. The supervised WSD system chosen here is the University of Maryland System for SENSEVAL 2 Tagging (  ) (Cabezas et al. , 2002). 3.1    The learning approach adopted by    is based on Support Vector Machines (SVM).    uses SVM   by Joachims (Joachims, 1998).1 For each target word, where a target word is a test item, a family of classifiers is constructed, one for each of the target word senses. All the positive examples for a sense   are considered the negative examples of   , where ! "$# .(Allwein et al., 2000) In    , each target word is considered an independent classification problem. The features used for    are mainly contextual features with weight values associated with each feature. The features are space delimited units, 1http://www.ai.cs.uni.dortmund.de/svmlight. tokens, extracted from the immediate context of the target word. Three types of features are extracted: % Wide Context Features: All the tokens in the paragraph where the target word occurs. % Narrow Context features: The tokens that collocate in the surrounding context, to the left and right, with the target word within a fixed window size of & . % Grammatical Features: Syntactic tuples such as verb-obj, subj-verb, etc. extracted from the context of the target word using a dependency parser, MINIPAR (Lin, 1998). Each feature extracted is associated with a weight value. The weight calculation is a variant on the Inverse Document Frequency (IDF) measure in Information Retrieval. 
The weighting, in this case, is an Inverse Category Frequency (ICF) measure where each token is weighted by the inverse of its frequency of occurrence in the specified context of the target word. 3.1.1 Manually Annotated Training Data The manually-annotated training data is the SENSEVAL2 Lexical Sample training data for the English task, (SV2LS Train).2 This training data corpus comprises 44856 lines and 917740 tokens. There is a close affinity between the test data and the manually annotated training data. The Pearson ('  correlation between the sense distributions for the test data and the manually annotated training data, per test item, ranges between )+*-,/.10 .3 3.2 SALAAM SALAAM exploits parallel corpora for sense annotation. The key intuition behind SALAAM is that when words in one language, L1, are translated into the same word in a second language, L2, then those L1 words are semantically similar. For example, when the English — L1 — words bank, brokerage, mortgage-lender translate into the French — L2 — word banque in a parallel corpus, where bank is polysemous, SALAAM discovers that the intended sense for bank is the financial institution sense, not the geological formation sense, based on the fact that it is grouped with brokerage and mortgage-lender. SALAAM’s algorithm is as follows: % SALAAM expects a word aligned parallel corpus as input; 2http://www.senseval.org 3The correlation is measured between two frequency distributions. Throughout this paper, we opt for using the parametric Pearson 2 correlation rather than KL distance in order to test statistical significance. % L1 words that translate into the same L2 word are grouped into clusters; % SALAAM identifies the appropriate senses for the words in those clusters based on the words senses’ proximity in WordNet. The word sense proximity is measured in information theoretic terms based on an algorithm by Resnik (Resnik, 1999); % A sense selection criterion is applied to choose the appropriate sense label or set of sense labels for each word in the cluster; % The chosen sense tags for the words in the cluster are propagated back to their respective contexts in the parallel text. Simultaneously, SALAAM projects the propagated sense tags for L1 words onto their L2 corresponding translations. 3.2.1 Automatically Generated SALAAM Training Data Three sets of SALAAM tagged training corpora are created: % SV2LS TR: English SENSEVAL2 Lexical Sample trial and training corpora with no manual annotations. It comprises 61879 lines and 1084064 tokens. % MT: The English Brown Corpus, SENSEVAL1 (trial, training and test corpora), Wall Street Journal corpus, and SENSEVAL 2 All Words corpus. All of which comprise 151762 lines and 37945517 tokens. % HT: UN English corpus which comprises 71672 lines of 1734001 tokens The SALAAM-tagged corpora are rendered in a format similar to that of the manually annotated training data. The automatic sense tagging for MT and SV2LS TR training data is based on using SALAAM with machine translated parallel corpora. The HT training corpus is automatically sense tagged based on using SALAAM with the EnglishSpanish UN naturally occurring parallel corpus. 3.3 Experimental Conditions Experimental conditions are created based on three of SALAAM’s tagging factors, Corpus, Language and Threshold: % Corpus: There are 4 different combinations for the training corpora: MT+SV2LS TR; MT+HT+SV2LS TR; HT+SV2LS TR; or SV2LS TR alone. 
% Language: The context language of the parallel corpus used by SALAAM to obtain the sense tags for the English training corpus. There are three options: French (FR), Spanish (SP), or, Merged languages (ML), where the results are obtained by merging the English output of FR and SP. % Threshold: Sense selection criterion, in SALAAM, is set to either MAX (M) or THRESH (T). These factors result in 39 conditions.4 3.4 Test Data The test data are the 29 noun test items for the SENSEVAL 2 English Lexical Sample task, (SV2LSTest). The data is tagged with the WordNet 1.7pre (Fellbaum, 1998; Cotton et al. , 2001). The average perplexity for the test items is 3.47 (see Section 5.3), the average number of senses is 7.93, and the total number of contexts for all senses of all test items is 1773. 4 Evaluation In this evaluation,     is the    system trained with SALAAM-tagged data and    is the    system trained with manually annotated data. Since we don’t expect     to outperform human tagging, the results yielded by  , are the upper bound for the purposes of this study. It is important to note that   is always trained with SV2LS TR as part of the training set in order to guarantee genre congruence between the training and test sets.The scores are calculated using scorer2.5 The average precision score over all the items for    is 65.3% at 100% Coverage. 4.1 Metrics We report the results using two metrics, the harmonic mean of precision and recall, (  ) score, and the Performance Ratio (PR), which we define as the ratio between two precision scores on the same test data where precision is rendered using scorer2. PR is measured as follows:  "     '       '   (1) 4Originally, there are 48 conditions, 9 of which are excluded due to extreme sparseness in training contexts. 5From http://www.senseval.org, all scorer2 results are reported in fine-grain mode. 4.2 Results Table 1 shows the   scores for the upper bound    .      is the condition in     that yields the highest overall   score over all noun items.    the maximum  score achievable, if we know which condition yields the best performance per test item, therefore it is an oracle condition.6 Since our approach is unsupervised, we also report the results of other unsupervised systems on this test set. Accordingly, the last seven row entries in Table 1 present state-of-the-art SENSEVAL2 unsupervised systems performance on this test set.7 System     65.3    ! " 36.02   $#&%(' 45.1 ITRI 45 UNED-LS-U 40.1 CLRes 29.3 IIT2(R) 24.4 IIT1(R) 23.9 IIT2 23.2 IIT1 22 Table 1:  scores on SV2LS Test for    ,   )* ,    , and state-of-the-art unsupervised systems participating in the SENSEVAL2 English Lexical Sample task. All of the unsupervised methods including     * and     + , are significantly below the supervised method,    .     * is the third in the unsupervised methods. It is worth noting that the average   score across the 39 conditions is & &*.-0/ , and the lowest is &+0 * 01- . The five best conditions for     , that yield the highest average   across all test items, use the HT corpus in the training data, four of which are the result of merged languages in SALAAM indicating that evidence from different languages simultaneously is desirable.     is the maximum potential among all unsupervised approaches if the best of all the conditions are combined. One of our goals is to automatically determine which condition or set of conditions yield the best results for each test item. 
Of central interest in this paper is the performance ratio (PR) for the individual nouns. Table 6The different conditions are considered independent taggers and there is no interaction across target nouns 7http://www.senseval.org 2 illustrates the PR of the different nouns yielded by     * and      sorted in descending order by     + , PR scores. A 0 * ) ) PR indicates an equivalent performance between     and    . The highest PR values are highlighted in bold. Nouns #Ss UMH% UMSb UMSm detention 4 65.6 1.00 1.05 chair 7 83.3 1.02 1.02 bum 4 85 0.14 1.00 dyke 2 89.3 1.00 1.00 fatigue 6 80.5 1.00 1.00 hearth 3 75 1.00 1.00 spade 6 75 1.00 1.00 stress 6 50 0.05 1.00 yew 3 78.6 1.00 1.00 art 17 47.9 0.98 0.98 child 7 58.7 0.93 0.97 material 16 55.9 0.81 0.92 church 6 73.4 0.75 0.77 mouth 10 55.9 0 0.73 authority 9 62 0.60 0.70 post 12 57.6 0.66 0.66 nation 4 78.4 0.34 0.59 feeling 5 56.9 0.33 0.59 restraint 8 60 0.2 0.56 channel 7 62 0.52 0.52 facility 5 54.4 0.32 0.51 circuit 13 62.7 0.44 0.44 nature 7 45.7 0.43 0.43 bar 19 60.9 0.20 0.30 grip 6 58.8 0.27 0.27 sense 8 39.6 0.24 0.24 lady 8 72.7 0.09 0.16 day 16 62.5 0.06 0.08 holiday 6 86.7 0.08 0.08 Table 2: The number of senses per item, in column #Ss,  precision performance per item as indicated in column UMH, PR scores for     )* in column UMSb and     + , in column UMSm on SV2LS Test   + , yields PR scores 2$)+*-,+0 for the top 12 test items listed in Table 2. Our algorithm does as well as supervised algorithm,    , on 41.6% of this test set. In      , 31% of the test items, (9 nouns yield PR scores 2 )+*-,43 ), do as well as  . This is an improvement of 11% absolute over state-of-the-art bootstrapping WSD algorithm yielded by Mihalcea (Mihalcea, 2002). Mihalcea reports high PR scores for six test items only: art, chair, channel, church, detention, nation. It is worth highlighting that her bootstrapping approach is partially supervised since it depends mainly on hand labelled data as a seed for the training data. Interestingly, two nouns, detention and chair, yield better performance than  , as indicated by the PRs 0 * ) and 0 * ) 3 , respectively. This is attributed to the fact that SALAAM produces a lot more correctly annotated training data for these two words than that provided in the manually annotated training data for    . Some nouns yield very poor PR values mainly due to the lack of training contexts, which is the case for mouth in     )* , for example. Or lack of coverage of all the senses in the test data such as for bar and day, or simply errors in the annotation of the SALAAM-tagged training data. If we were to include only nouns that achieve acceptable PR scores of  )+*.- — the first 16 nouns in Table 2 for    , — the overall potential precision of     is significantly increased to 63.8% and the overall precision of    is increased to 68.4%.8 These results support the idea that we could replace hand tagging with SALAAM’s unsupervised tagging if we did so for those items that yield an acceptable PR score. But the question remains: How do we predict which training/test items will yield acceptable PR scores? 5 Factors Affecting Performance Ratio In an attempt to address this question, we analyze several different factors for their impact on the performance of     quanitified as PR. In order to effectively alleviate the sense annotation acquisition bottleneck, it is crucial to predict which items would be reliably annotated automatically using     . 
Accordingly, in the rest of this paper, we explore 7 different factors by examining the yielded PR values in   + , . 5.1 Number of Senses The test items that possess many senses, such as art (17 senses), material (16 senses), mouth (10 senses) and post (12 senses), exhibit PRs of 0.98, 0.92, 0.73 and 0.66, respectively. Overall, the correlation between number of senses per noun and its PR score is an insignificant ' " )+*-&+0 ,  / 0 3  " 3*-,  2 )+* 0  . Though it is a weak negative correlation, it does suggest that when the number of senses increases, PR tends to decrease. 5.2 Number of Training Examples This is a characteristic of the training data. We examine the correlation between the PR and the num8A PR of   is considered acceptable since    achieves an overall   score of  ! in the WSD task. ber of training examples available to   for each noun in the training data. The correlation between the number of training examples and PR is insignificant at ' "" )+* 0 ,  / 0 3  " )+*.&# 2 )+* /  . More interestingly, however, spade, with only 5 training examples, yields a PR score of 0 * ) . This contrasts with nation, which has more than 4200 training examples, but yields a low PR score of )+*$, . Accordingly, the number of training examples alone does not seem to have a direct impact on PR. 5.3 Sense Perplexity This factor is a characteristic of the training data. Perplexity is 3%'& )(+* ,. Entropy is measured as follows: /.  "10 243 /5   6 7 /5   (2) where 5 is a sense for a polysemous noun and . is the set of all its senses. Entropy is a measure of confusability in the senses’ contexts distributions; when the distribution is relatively uniform, entropy is high. A skew in the senses’ contexts distributions indicates low entropy, and accordingly, low perplexity. The lowest possible perplexity is 0 , corresponding to ) entropy. A low sense perplexity is desirable since it facilitates the discrimination of senses by the learner, therefore leading to better classification. In the SALAAMtagged training data, for example, bar has the highest perplexity value of ,*$8 over its 19 senses, while day, with 16 senses, has a much lower perplexity of 0 *-& . Surprisingly, we observe nouns with high perplexity such as bum (sense perplexity value of &* ) & ) achieving PR scores of 0 * ) . While nouns with relatively low perplexity values such as grip (sense perplexity of )+*$& ) yields a low PR score of )+*.34- . Moreover, nouns with the same perplexity and similar number of senses yield very different PR scores. For example, examining holiday and child, both have the same perplexity of 3* 0 /4/ and the number of senses is close, with 6 and 7 senses, respectively, however, the PR scores are very different; holiday yields a PR of )+* )8 , and child achieves a PR of )+*-, . Furthermore, nature and art have the same perplexity of 3*.3 , ; art has 17 senses while nature has 7 senses only, nonetheless, art yields a much higher PR score of ( )+*-,8 ) compared to a PR of )+* /4/ for nature. These observations are further solidified by the insignificant correlation of ' " )+* 013 ,  / 0 3  " )+* /9  2 )+*$ between sense perplexity and PR. At first blush, one is inclined to hypothesize that, the combination of low perplexity associated with a large number of senses — as an indication of high skew in the distribution — is a good indicator of high PR, but reviewing the data, this hypothesis is dispelled by day which has 16 senses and a sense perplexity of 0 *-& , yet yields a low PR score of )+* )8 . 
5.4 Semantic Translation Entropy Semantic translation entropy (STE) (Melamed, 1997) is a special characteristic of the SALAAMtagged training data, since the source of evidence for SALAAM tagging is multilingual translations. STE measures the amount of translational variation for an L1 word in L2, in a parallel corpus. STE is a variant on the entropy measure. STE is expressed as follows:    "" 0  2  (   *   6 7 (    (3) where  is a translation in the set of possible translations  in L2; and  is L1 word. The probability of a translation  is calculated directly from the alignments of the test nouns and their corresponding translations via the maximum likelihood estimate. Variation in translation is beneficial for SALAAM tagging, therefore, high STE is a desirable feature. Correlation between the automatic tagging precision and STE is expected to be high if SALAAM has good quality translations and good quality alignments. However, this correlation is a low ' " )+*-& & . Consequently, we observe a low correlation between STE and PR, ' " )+*.343 ,  / 0 3  " 0 *-&+0 2 )+*.34 . Examining the data, the nouns bum, detention, dyke, stress, and yew exhibit both high STE and high PR; Moreover, there are several nouns that exhibit low STE and low PR. But the intriguing items are those that are inconsistent. For instance, child and holiday: child has an STE of )+* )8 and comprises 7 senses at a low sense perplexity of 0 *., , yet yields a high PR of )+*-, . As mentioned earlier, low STE indicates lack of translational variation. In this specific experimental condition, child is translated as  enfant, enfantile, ni˜no, ni˜no-peque˜no  , which are words that preserve ambiguity in both French and Spanish. On the other hand, holiday has a relatively high STE value of )+*.-4- , yet results in the lowest PR of )+* )8 . Consequently, we conclude that STE alone is not a good direct indicator of PR. 5.5 Perplexity Difference Perplexity difference (PerpDiff) is a measure of the absolute difference in sense perplexity between the test data items and the training data items. For the manually annotated training data items, the overall correlation between the perplexity measures is a significant ' " )+*-,4which contrasts to a low overall correlation of ' " )+* / & between the SALAAMtagged training data items and the test data items. Across the nouns in this study, the correlation between PerpDiff and PR is ' "  )+* / . It is advantageous to be as similar as possible to the training data to guarantee good classification results within a supervised framework, therefore a low PerpDiff is desirable. We observe cases with a low PerpDiff such as holiday (PerpDiff of )+* ) ), yet the PR is a low )+* )8 . On the other hand, items such as art have a relatively high PerpDiff of 3*.-43 , but achieves a high PR of )+*-, . Accordingly, PerpDiff alone is not a good indicator of PR. 5.6 Sense Distributional Correlation Sense Distributional Correlation (SDC) results from comparing the sense distributions of the test data items with those of SALAAM-tagged training data items. It is worth noting that the correlation between the SDC of manually annotated training data and that of the test data ranges from ' " )+*-,  0 * ) . A strong significant correlation of ' " )+*$8 ,  / 0 3  " 8 )  )+* ) ) )0  between SDC and PR exists for SALAAM-tagged training data and the test data. Overall, nouns that yield high PR have high SDC values. However, there are some instances where this strong correlation is not exhibited. 
For example, circuit and post have relatively high SDC values, )+*7 ,0/ and )+*$8, , respectively, in     + , , but they score lower PR values than detention which has a comparatively lower SDC value of )+*7 - . The fact that both circuit and post have many senses, 13 and 12, respectively, while detention has 4 senses only is noteworthy. detention has a higher STE and lower sense perplexity than either of them however. Overall, the data suggests that SDC is a very good direct indicator of PR. 5.7 Sense Context Confusability A situation of sense context confusability (SCC) arises when two senses of a noun are very similar and are highly uniformly represented in the training examples. This is an artifact of the fine granularity of senses in WordNet 1.7pre. Highly similar senses typically lead to similar usages, therefore similar contexts, which in a learning framework detract from the learning algorithm’s discriminatory power. Upon examining the 29 polysemous nouns in the training and test sets, we observe that a significant number of the words have similar senses according to a manual grouping provided by Palmer, in 2002.9 For example, senses 2 and 3 of nature, meaning trait and quality, respectively, are considered similar by the manual grouping. The manual grouping does not provide total coverage of all the noun senses in this test set. For instance, it only considers the homonymic senses 1, 2 and 3 of spade, yet, in the current test set, spade has 6 senses, due to the existence of sub senses. 26 of the 29 test items exhibit multiple groupings based on the manual grouping. Only three nouns, detention, dyke, spade do not have any sense groupings. They all, in turn, achieve high PR scores of 0 * ) . There are several nouns that have relatively high SDC values yet their performance ratios are low such as post, nation, channel and circuit. For instance, nation has a very high SDC value of )+*-,4-43 , a low sense perplexity of 0 *-& — relatively close to the 0 *.- sense perplexity of the test data — a sufficient number of contexts (4350), yet it yields a PR of )+*$, . According to the manual sense grouping, senses 1 and 3 are similar, and indeed, upon inspection of the context distributions, we find the bulk of the senses’ instance examples in the SALAAMtagged training data for the condition that yields this PR in   + , are annotated with either sense 1 or sense 3, thereby creating confusable contexts for the learning algorithm. All the cases of nouns that achieve high PR and possess sense groups do not have any SCC in the training data which strongly suggests that SCC is an important factor to consider when predicting the PR of a system. 5.8 Discussion We conclude from the above exploration that SDC and SCC affect PR scores directly. PerpDiff, STE, and Sense Perplexity, number of senses and number of contexts seem to have no noticeable direct impact on the PR. Based on this observation, we calculate the SDC values for all the training data used in our experimental conditions for the 29 test items. Table 3 illustrates the items with the highest SDC values, in descending order, as yielded from any of the SALAAM conditions. We use an empirical cut-off value of )+*7 for SDC. The SCC values are reported as a boolean Y/N value, where a Y indicates the presence of a sense confusable context. As shown a high SDC can serve as a means of auto9http://www.senseval.org/sense-groups. The manual sense grouping comprises 400 polysemous nouns including the 29 nouns in this evaluation. 
Noun SDC SCC PR dyke 1 N 1.00 bum 1 N 1.00 fatigue 1 N 1.00 hearth 1 N 1.00 yew 1 N 1.00 chair 0.99 N 1.02 child 0.99 N 0.95 detention 0.98 N 1.0 spade 0.97 N 1.00 mouth 0.96 Y 0.73 nation 0.96 N 0.59 material 0.92 N 0.92 post 0.90 Y 0.63 authority 0.86 Y 0.70 art 0.83 N 0.98 church 0.80 N 0.77 circuit 0.79 N 0.44 stress 0.77 N 1.00 Table 3: Highest SDC values for the test items associated with their respective SCC and PR values.11 matically predicting a high PR, but it is not sufficient. If we eliminate the items where an SCC exists, namely, mouth, post, and authority, we are still left with nation and circuit, where both yield very low PR scores. nation has the desirable low PerpDiff of )+*.343 . The sense annotation tagging precision of the 3   in this condition which yields the highest SDC — Spanish UN data with the 3   for training — is a low & )+* / and a low STE value of )+* 013 , . This is due to the fact that both French and Spanish preserve ambiguity in similar ways to English which does not make it a good target word for disambiguation within the SALAAM framework, given these two languages as sources of evidence. Accordingly, in this case, STE coupled with the noisy tagging could have resulted in the low PR. However, for circuit, the STE value for its respective condition is a high )+*.3 ,+0 , but we observe a relatively high PerpDiff of 0 *$& compared to the PerpDiff of ) for the manually annotated data. Therefore, a combination of high SDC and nonexistent SCC can reliably predict good PR. But the other factors still have a role to play in order to achieve accurate prediction. It is worth emphasizing that two of the identified factors are dependent on the test data in this study, SDC and PerpDiff. One solution to this problem is to estimate SDC and PerpDiff using a held out data set that is hand tagged. Such a held out data set would be considerably smaller than the required size of a manually tagged training data for a classical supervised WSD system. Hence, SALAAMtagged training data offers a viable solution to the annotation acquisition bottleneck. 6 Conclusion and Future Directions In this paper, we applied an unsupervised approach within a learning framework     for the sense annotation of large amounts of data. The ultimate goal of     is to alleviate the data labelling bottleneck by means of a trade-off between quality and quantity of the training data.     is competitive with state-of-the-art unsupervised systems evaluated on the same test set from SENSEVAL2. Moreover, it yields superior results to those obtained by the only comparable bootstrapping approach when tested on the same data set. Moreover, we explore, in depth, different factors that directly and indirectly affect the performance of   quantified as a performance ratio, PR. Sense Distribution Correlation (SDC) and Sense Context Confusability (SCC) have the highest direct impact on performance ratio, PR. However, evidence suggests that probably a confluence of all the different factors leads to the best prediction of an acceptable PR value. An investigation into the feasibility of combining these different factors with the different attributes of the experimental conditions for SALAAM to automatically predict when the noisy training data can reliably replace manually annotated data is a matter of future work. 7 Acknowledgements I would like to thank Philip Resnik for his guidance and insights that contributed tremendously to this paper. 
Also I would like to acknowledge Daniel Jurafsky and Kadri Hacioglu for their helpful comments. I would like to thank the three anonymous reviewers for their detailed reviews. This work has been supported, in part, by NSF Award #IIS0325646. References Erin L. Allwein, Robert E. Schapire, and Yoram Singer. 2000. Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113-141. Clara Cabezas, Philip Resnik, and Jessica Stevens. 2002. Supervised Sense Tagging using Support Vector Machines. Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Systems (SENSEVAL-2). Toulouse, France. Scott Cotton, Phil Edmonds, Adam Kilgarriff, and Martha Palmer, ed. 2001. SENSEVAL-2: Second International Workshop on Evaluating Word Sense Disambiguation Systems. ACL SIGLEX, Toulouse, France. Mona Diab. 2004. An Unsupervised Approach for Bootstrapping Arabic Word Sense Tagging. Proceedings of Arabic Based Script Languages, COLING 2004. Geneva, Switzerland. Mona Diab and Philip Resnik. 2002. An Unsupervised Method for Word Sense Tagging Using Parallel Corpora. Proceedings of 40th meeting of ACL. Pennsylvania, USA. Mona Diab. 2003. Word Sense Disambiguation Within a Multilingual Framework. PhD Thesis. University of Maryland College Park, USA. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press. William A. Gale and Kenneth W. Church and David Yarowsky. 1992. Using Bilingual Materials to Develop Word Sense Disambiguation Methods. Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation. Montr´eal, Canada. Thorsten Joachims. 1998. Text Categorization with Support Vector Machines: Learning with Many Relevant Features. Proceedings of the European Conference on Machine Learning. Springer. A. Kilgarriff and J. Rosenzweig. 2000. Framework and Results for English SENSEVAL. Journal of Computers and the Humanities. pages 15—48, 34. Dekang Lin. 1998. Dependency-Based Evaluation of MINIPAR. Proceedings of the Workshop on the Evaluation of Parsing Systems, First International Conference on Language Resources and Evaluation. Granada, Spain. Dan I. Melamed. 1997. Measuring Semantic Entropy. ACL SIGLEX, Washington, DC. Rada Mihalcea and Dan Moldovan. 1999. A method for Word Sense Disambiguation of unrestricted text. Proceedings of the 37th Annual Meeting of ACL. Maryland, USA. Rada Mihalcea. 2002. Bootstrapping Large sense tagged corpora. Proceedings of the 3rd International Conference on Languages Resources and Evaluations (LREC). Las Palmas, Canary Islands, Spain. Philip Resnik. 1999. Semantic Similarity in a Taxonomy: An Information-Based Measure and its Application to Problems of Ambiguity in Natural Language. Journal Artificial Intelligence Research. (11) p. 95130. David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. Proceedings of the 33rd Annual Meeting of ACL. Cambridge, MA.
Analysis of Mixed Natural and Symbolic Language Input in Mathematical Dialogs Magdalena Wolska Ivana Kruijff-Korbayov´a Fachrichtung Computerlinguistik Universit¨at des Saarlandes, Postfach 15 11 50 66041 Saarbr¨ucken, Germany magda,korbay  @coli.uni-sb.de Abstract Discourse in formal domains, such as mathematics, is characterized by a mixture of telegraphic natural language and embedded (semi-)formal symbolic mathematical expressions. We present language phenomena observed in a corpus of dialogs with a simulated tutorial system for proving theorems as evidence for the need for deep syntactic and semantic analysis. We propose an approach to input understanding in this setting. Our goal is a uniform analysis of inputs of different degree of verbalization: ranging from symbolic alone to fully worded mathematical expressions. 1 Introduction Our goal is to develop a language understanding module for a flexible dialog system tutoring mathematical problem solving, in particular, theorem proving (Benzm¨uller et al., 2003a).1 As empirical findings in the area of intelligent tutoring show, flexible natural language dialog supports active learning (Moore, 1993). However, little is known about the use of natural language in dialog setting in formal domains, such as mathematics, due to the lack of empirical data. To fill this gap, we collected a corpus of dialogs with a simulated tutorial dialog system for teaching proofs in naive set theory. An investigation of the corpus reveals various phenomena that present challenges for such input understanding techniques as shallow syntactic analysis combined with keyword spotting, or statistical methods, e.g., Latent Semantic Analysis, which are commonly employed in (tutorial) dialog systems. The prominent characteristics of the language in our corpus include: (i) tight interleaving of natural and symbolic language, (ii) varying degree of natural language verbalization of the formal mathematical 1This work is carried out within the DIALOG project: a collaboration between the Computer Science and Computational Linguistics departments of the Saarland University, within the Collaborative Research Center on Resource-Adaptive Cognitive Processes, SFB 378 (www.coli.uni-sb.de/ sfb378). content, and (iii) informal and/or imprecise reference to mathematical concepts and relations. These phenomena motivate the need for deep syntactic and semantic analysis in order to ensure correct mapping of the surface input to the underlying proof representation. An additional methodological desideratum is to provide a uniform treatment of the different degrees of verbalization of the mathematical content. By designing one grammar which allows a uniform treatment of the linguistic content on a par with the mathematical content, one can aim at achieving a consistent analysis void of example-based heuristics. We present such an approach to analysis here. The paper is organized as follows: In Section 2, we summarize relevant existing approaches to input analysis in (tutorial) dialog systems on the one hand and analysis of mathematical discourse on the other. Their shortcomings with respect to our setting become clear in Section 3 where we show examples of language phenomena from our dialogs. In Section 4, we propose an analysis methodology that allows us to capture any mixture of natural and mathematical language in a uniform way. We show example analyses in Section 5. In Section 6, we conclude and point out future work issues. 
2 Related work Language understanding in dialog systems, be it with text or speech interface, is commonly performed using shallow syntactic analysis combined with keyword spotting. Tutorial systems also successfully employ statistical methods which compare student responses to a model built from preconstructed gold-standard answers (Graesser et al., 2000). This is impossible for our dialogs, due to the presence of symbolic mathematical expressions. Moreover, the shallow techniques also remain oblivious of such aspects of discourse meaning as causal relations, modality, negation, or scope of quantifiers which are of crucial importance in our setting. When precise understanding is needed, tutorial systems either use menu- or template-based input, or use closed-questions to elicit short answers of little syntactic variation (Glass, 2001). However, this conflicts with the preference for flexible dialog in active learning (Moore, 1993). With regard to interpreting mathematical texts, (Zinn, 2003) and (Baur, 1999) present DRT analyses of course-book proofs. However, the language in our dialogs is more informal: natural language and symbolic mathematical expressions are mixed more freely, there is a higher degree and more variety of verbalization, and mathematical objects are not properly introduced. Moreover, both above approaches rely on typesetting and additional information that identifies mathematical symbols, formulae, and proof steps, whereas our input does not contain any such information. Forcing the user to delimit formulae would reduce the flexibility of the system, make the interface harder to use, and might not guarantee a clean separation of the natural language and the non-linguistic content anyway. 3 Linguistic data In this section, we first briefly describe the corpus collection experiment and then present the common language phenomena found in the corpus. 3.1 Corpus collection 24 subjects with varying educational background and little to fair prior mathematical knowledge participated in a Wizard-of-Oz experiment (Benzm¨uller et al., 2003b). In the tutoring session, they were asked to prove 3 theorems2: (i)               ; (ii)   !  "#  ; (iii) $&%('  *)+-, .0/1' 2 . To encourage dialog with the system, the subjects were instructed to enter proof steps, rather than complete proofs at once. Both the subjects and the tutor were free in formulating their turns. Buttons were available in the interface for inserting mathematical symbols, while literals were typed on the keyboard. The dialogs were typed in German. The collected corpus consists of 66 dialog logfiles, containing on average 12 turns. The total number of sentences is 1115, of which 393 are student sentences. The students’ turns consisted on average of 1 sentence, the tutor’s of 2. More details on the corpus itself and annotation efforts that guide the development of the system components can be found in (Wolska et al., 2004). 2 3 stands for set complement and 4 for power set. 3.2 Language phenomena To indicate the overall complexity of input understanding in our setting, we present an overview of common language phenomena in our dialogs.3 In the remainder of this paper, we then concentrate on the issue of interleaved natural language and mathematical expressions, and present an approach to processing this type of input. Interleaved natural language and formulae Mathematical language, often semi-formal, is interleaved with natural language informally verbalizing proof steps. 
In particular, mathematical expressions (or parts thereof) may lie within the scope of quantifiers or negation expressed in natural language: A auch 57698;:1< [ =?>@ ACBD5?EF8HG1< ] A I B ist J von C K (A I B) [... is J of . . . ] (da ja A I B= L ) [(because A I B= L )] B enthaelt kein x J A [B contains no x J A] For parsing, this means that the mathematical content has to be identified before it is interpreted within the utterance. Imprecise or informal naming Domain relations and concepts are described informally using imprecise and/or ambiguous expressions. A enthaelt B [A contains B] A muss in B sein [A must be in B] where contain and be in can express the domain relation of either subset or element; B vollstaendig ausserhalb von A liegen muss, also im Komplement von A [B has to be entirely outside of A, so in the complement of A] dann sind A und B (vollkommen) verschieden, haben keine gemeinsamen Elemente [then A and B are (completely) different, have no common elements] where be outside of and be different are informal descriptions of the empty intersection of sets. To handle imprecision and informality, we constructed an ontological knowledge base containing domain-specific interpretations of the predicates (Horacek and Wolska, 2004). Discourse deixis Anaphoric expressions refer deictically to pieces of discourse: der obere Ausdruck [the above term] der letzte Satz [the last sentence] Folgerung aus dem Obigen [conclusion from the above] aus der regel in der zweiten Zeile [from the rule in the second line] 3As the tutor was also free in wording his turns, we include observations from both student and tutor language behavior. In the presented examples, we reproduce the original spelling. In our domain, this class of referring expressions also includes references to structural parts of terms and formulae such as “the left side” or “the inner parenthesis” which are incomplete specifications: the former refers to a part of an equation, the latter, metonymic, to an expression enclosed in parenthesis. Moreover, these expressions require discourse referents for the sub-parts of mathematical expressions to be available. Generic vs. specific reference Generic and specific references can appear within one utterance: Potenzmenge enthaelt alle Teilmengen, also auch (A I B) [A power set contains all subsets, hence also(A I B)] where “a power set” is a generic reference, whereas “   ” is a specific reference to a subset of a specific instance of a power set introduced earlier. Co-reference4 Co-reference phenomena specific to informal mathematical discourse involve (parts of) mathematical expressions within text. Da, wenn 5 698;:*< sein soll,  Element von 698 : < sein muss. Und wenn : 5 698 < sein soll, muss  auch Element von 698 < sein. [Because if it should be that 5 698 : < ,  must be an element of 698;:*< . And if it should be that :  5 698 < , it must be an element of 698 < as well.] Entities denoted with the same literals may or may not co-refer: DeMorgan-Regel-2 besagt: 698  I : < = 698 < K 698;: < In diesem Fall: z.B. 698H< = dem Begriff 698 K&:  ) 698;:*< = dem Begriff 698K < [DeMorgan-Regel-2 means: 698  I : )  698 < K 698;:*< In this case: e.g. 
698H< = the term 698 K :  < 698;:*< = the term 698K < ] Informal descriptions of proof-step actions Sometimes, “actions” involving terms, formulae or parts thereof are verbalized before the appropriate formal operation is performed: Wende zweimal die DeMorgan-Regel an [I’m applying DeMorgan rule twice] damit kann ich den oberen Ausdruck wie folgt schreiben:. . . [given this I can write the upper term as follows:. . . ] The meaning of the “action verbs” is needed for the interpretation of the intended proof-step. Metonymy Metonymic expressions are used to refer to structural sub-parts of formulae, resulting in predicate structures acceptable informally, yet incompatible in terms of selection restrictions. Dann gilt fuer die linke Seite, wenn ! #"$% &'(% , der Begriff A  B dann ja schon dadrin und ist somit auch Element davon [Then for the left hand side it holds that..., the term A  B is already there, and so an element of it] 4To indicate co-referential entities, we inserted the indices which are not present in the dialog logfiles. where the predicate hold, in this domain, normally takes an argument of sort CONST, TERM or FORMULA, rather than LOCATION; de morgan regel 2 auf beide komplemente angewendet [de morgan rule 2 applied to both complements] where the predicate apply takes two arguments: one of sort RULE and the other of sort TERM or FORMULA, rather than OPERATION ON SETS. In the next section, we present our approach to a uniform analysis of input that consists of a mixture of natural language and mathematical expressions. 4 Uniform input analysis strategy The task of input interpretation is two-fold. Firstly, it is to construct a representation of the utterance’s linguistic meaning. Secondly, it is to identify and separate within the utterance: (i) parts which constitute meta-communication with the tutor, e.g.: Ich habe die Aufgabenstellung nicht verstanden. [I don’t understand what the task is.] (ii) parts which convey domain knowledge that should be verified by a domain reasoner; for example, the entire utterance ) *(! + ist laut deMorgan-1 ) , & )  [. . . is, according to deMorgan-1,. . . ] can be evaluated; on the other hand, the domain reasoner’s knowledge base does not contain appropriate representations to evaluate the correctness of using, e.g., the focusing particle “also”, as in: Wenn A = B, dann ist A auch )  und B ) , . [If A = B, then A is also )  and B ) , .] Our goal is to provide a uniform analysis of inputs of varying degrees of verbalization. This is achieved by the use of one grammar that is capable of analyzing utterances that contain both natural language and mathematical expressions. Syntactic categories corresponding to mathematical expressions are treated in the same way as those of linguistic lexical entries: they are part of the deep analysis, enter into dependency relations and take on semantic roles. The analysis proceeds in 2 stages: 1. After standard pre-processing,5 mathematical expressions are identified, analyzed, categorized, and substituted with default lexicon entries encoded in the grammar (Section 4.1). 5Standard pre-processing includes sentence and word tokenization, (spelling correction and) morphological analysis, part-of-speech tagging. =   A B  C D    A B  C D Figure 1: Tree representation of the formula     7         ) 2. Next, the input is syntactically parsed, and a representation of its linguistic meaning is constructed compositionally along with the parse (Section 4.2). 
The obtained linguistic meaning representation is subsequently merged with discourse context and interpreted by consulting a semantic lexicon of the domain and a domain-specific knowledge base (Section 4.3). If the syntactic parser fails to produce an analysis, a shallow chunk parser and keyword-based rules are used to attempt partial analysis and build a partial representation of the predicate-argument structure. In the next sections, we present the procedure of constructing the linguistic meaning of syntactically well-formed utterances. 4.1 Parsing mathematical expressions The task of the mathematical expression parser is to identify mathematical expressions. The identified mathematical expressions are subsequently verified as to syntactic validity and categorized. Implementation Identification of mathematical expressions within word-tokenized text is performed using simple indicators: single character tokens (with the characters  and standing for power set and set complement respectively), mathematical symbol unicodes, and new-line characters. The tagger converts the infix notation used in the input into an expression tree from which the following information is available: surface sub-structure (e.g., “left side” of an expression, list of sub-expressions, list of bracketed sub-expressions) and expression type based on the top level operator (e.g., CONST, TERM, FORMULA 0 FORMULA (formula missing left argument), etc.). For example, the expression             ) is represented by the formula tree in Fig. 1. The bracket subscripts indicate the operators heading sub-formulae enclosed in parenthesis. Given the expression’s top node operator, =, the expression is of type formula, its “left side” is the expression    F  , the list of bracketed sub-expressions includes: A  B, C  D,    "  , etc. Evaluation We have conducted a preliminary evaluation of the mathematical expression parser. Both the student and tutor turns were included to provide more data for the evaluation. Of the 890 mathematical expressions found in the corpus (432 in the student and 458 in the tutor turns), only 9 were incorrectly recognized. The following classes of errors were detected:6 1. P((A K C) I (B K C)) =PC K (A I B) P((A K C) I (B K C))=PC K (A I B) 2. a. (A 5 U und B 5 U) b. (da ja A I B= L ) ( A 5 U und B 5 U ) (da ja A I B= L ) 3. K((A K B) I (C K D)) = K(A ? B) ? K(C ? D) K((A K B) I (C K D)) = K(A ? B) ? K(C ? D) 4. Gleiches gilt mit D (K(C I D)) K (K(A I B)) Gleiches gilt mit D (K(C I D)) K (K(A I B)) [The same holds with . . . ] The examples in (1) and (2) have to do with parentheses. In (1), the student actually omitted them. The remedy in such cases is to ask the student to correct the input. In (2), on the other hand, no parentheses are missing, but they are ambiguous between mathematical brackets and parenthetical statement markers. The parser mistakenly included one of the parentheses with the mathematical expressions, thereby introducing an error. We could include a list of mathematical operations allowed to be verbalized, in order to include the logical connective in (2a) in the tagged formula. But (2b) shows that this simple solution would not remedy the problem overall, as there is no pattern as to the amount and type of linguistic material accompanying the formulae in parenthesis. We are presently working on ways to identify the two uses of parentheses in a pre-processing step. In (3) the error is caused by a non-standard character, “?”, found in the formula. 
In (4) the student omitted punctuation causing the character “D” to be interpreted as a nonstandard literal for naming an operation on sets. 4.2 Deep analysis The task of the deep parser is to produce a domainindependent linguistic meaning representation of syntactically well-formed sentences and fragments. By linguistic meaning (LM), we understand the dependency-based deep semantics in the sense of the Prague School notion of sentence meaning as employed in the Functional Generative Description 6Incorrect tagging is shown along with the correct result below it, following an arrow. (FGD) (Sgall et al., 1986; Kruijff, 2001). It represents the literal meaning of the utterance rather than a domain-specific interpretation.7 In FGD, the central frame unit of a sentence/clause is the head verb which specifies the tectogrammatical relations (TRs) of its dependents (participants). Further distinction is drawn into inner participants, such as Actor, Patient, Addressee, and free modifications, such as Location, Means, Direction. Using TRs rather than surface grammatical roles provides a generalized view of the correlations between domain-specific content and its linguistic realization. We use a simplified set of TRs based on (Hajiˇcov´a et al., 2000). One reason for simplification is to distinguish which relations are to be understood metaphorically given the domain sub-language. In order to allow for ambiguity in the recognition of TRs, we organize them hierarchically into a taxonomy. The most commonly occurring relations in our context, aside from the inner participant roles of Actor and Patient, are Cause, Condition, and ResultConclusion (which coincide with the rhetorical relations in the argumentative structure of the proof), for example: Da [A )  gilt] CAUSE  , alle x, die in A sind sind nicht in B [As A )  applies, all x that are in A are not in B] Wenn [A )  ] COND  , dann A  B=  [If A ) ! , then A  B=  ] Da  )  gilt, [alle x, die in A sind sind nicht in B] RES  Wenn A ) ! , dann [A  B=  ] RES  Other commonly found TRs include NormCriterion, e.g. [nach deMorgan-Regel-2] NORM  ist ) + & =...) [according to De Morgan rule 2 it holds that ...] ) *(! + ist [laut DeMorgan-1] NORM  ( ) ,  ) ! ) [. . . equals, according to De Morgan rule1, . . . ] We group other relations into sets of HasProperty, GeneralRelation (for adjectival and clausal modification), and Other (a catch-all category), for example: dann muessen alla A und B [in C]  PROP-LOC  enthalten sein [then all A and B have to be contained in C] Alle x, [die in B sind]  GENREL  . . . [All x that are in B...] alle elemente [aus A]  PROP-FROM  sind in )  enthalten [all elements from A are contained in ) ! ] Aus A - U  B folgt [mit A  B=  ]  OTHER  , B - U  A. [From A - U  B follows with A  B=  , that B - U  A] 7LM is conceptually related to logical form, however, differs in coverage: while it does operate on the level of deep semantic roles, such aspects of meaning as the scope of quantifiers or interpretation of plurals, synonymy, or ambiguity are not resolved. where PROP-LOC denotes the HasProperty relation of type Location, GENREL is a general relation as in complementation, and PROP-FROM is a HasProperty relation of type Direction-From or From-Source. More details on the investigation into tectogrammatical relations that build up linguistic meaning of informal mathematical text can be found in (Wolska and Kruijff-Korbayov´a, 2004a). 
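To make the notion of a TR-labelled frame concrete, the following is a small sketch of a head with tectogrammatical dependents, together with a toy taxonomy over the relations named above. The particular parent assignments in the hierarchy, and the placeholder strings standing in for the tagged formulae, are our own assumptions, not the taxonomy or representation actually used by the system.

```python
# Sketch of a linguistic-meaning frame: a head with TR-labelled dependents,
# plus a toy subsumption taxonomy over the TRs mentioned in the text.
# The parent assignments below are assumptions, not the system's taxonomy.
TR_PARENT = {
    "Actor": "InnerParticipant", "Patient": "InnerParticipant",
    "Addressee": "InnerParticipant",
    "Cause": "FreeModification", "Condition": "FreeModification",
    "ResultConclusion": "FreeModification", "NormCriterion": "FreeModification",
    "HasProperty-Location": "HasProperty", "HasProperty-From": "HasProperty",
    "HasProperty": "FreeModification", "GeneralRelation": "FreeModification",
    "Other": "FreeModification",
    "InnerParticipant": "TR", "FreeModification": "TR",
}

def subsumes(general, specific):
    """True if `general` equals `specific` or is one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = TR_PARENT.get(specific)
    return False

# Illustrative frame for a 'Wenn ..., dann ...' utterance; the formula strings
# are placeholders for the tagged mathematical expressions.
frame = {"head": "gelten",
         "dependents": [("Condition", "<formula-1>"),
                        ("ResultConclusion", "<formula-2>")]}

# Allowing ambiguity: an underspecified label can be checked against the taxonomy.
assert subsumes("FreeModification", "HasProperty-Location")
```

Organising the labels this way lets an analysis keep an underspecified relation (e.g., just HasProperty) when the surface cues do not decide between its specialisations.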
Implementation The syntactic analysis is performed using openCCG8, an open source parser for Multi-Modal Combinatory Categorial Grammar (MMCCG). MMCCG is a lexicalist grammar formalism in which application of combinatory rules is controlled though context-sensitive specification of modes on slashes (Baldridge and Kruijff, 2003). The linguistic meaning, built in parallel with the syntax, is represented using Hybrid Logic Dependency Semantics (HLDS), a hybrid logic representation which allows a compositional, unification-based construction of HLDS terms with CCG (Baldridge and Kruijff, 2002). An HLDS term is a relational structure where dependency relations between heads and dependents are encoded as modal relations. The syntactic categories for a lexical entry FORMULA, corresponding to mathematical expressions of type “formula”, are ,  , and . For example, in one of the readings of “B enthaelt " ”, “enthaelt” represents the meaning contain taking dependents in the relations Actor and Patient, shown schematically in Fig. 2. enthalten:contain FORMULA: ACT  FORMULA:  PAT  Figure 2: Tectogrammatical representation of the utterance “B enthaelt   ” [B contains  ]. FORMULA represents the default lexical entry for identified mathematical expressions categorized as “formula” (cf. Section 4.1). The LM is represented by the following HLDS term: @h1(contain   ACT  (f1  FORMULA:B)   PAT  (f2  FORMULA:   ) where h1 is the state where the proposition contain is true, and the nominals f1 and f2 represent dependents of the head contain, which stand in the tectogrammatical relations Actor and Patient, respectively. It is possible to refer to the structural sub-parts of the FORMULA type expressions, as formula subparts are identified by the tagger, and discourse ref8http://openccg.sourceforge.net erents are created for them and stored with the discourse model. We represent the discourse model within the same framework of hybrid modal logic. Nominals of the hybrid logic object language are atomic formulae that constitute a pointing device to a particular place in a model where they are true. The satisfaction operator, @, allows to evaluate a formula at the point in the model given by a nominal (e.g. the formula @  evaluates  at the point i). For discourse modeling, we adopt the hybrid logic formalization of the DRT notions in (Kruijff, 2001; Kruijff and Kruijff-Korbayov´a, 2001). Within this formalism, nominals are interpreted as discourse referents that are bound to propositions through the satisfaction operator. In the example above, f1 and f2 represent discourse referents for FORMULA:B and FORMULA: 1 , respectively. More technical details on the formalism can be found in the aforementioned publications. 4.3 Domain interpretation The linguistic meaning representations obtained from the parser are interpreted with respect to the domain. We are constructing a domain ontology that reflects the domain reasoner’s knowledge base, and is augmented to allow resolution of ambiguities introduced by natural language. For example, the previously mentioned predicate contain represents the semantic relation of Containment which, in the domain of naive set theory, is ambiguous between the domain relations ELEMENT, SUBSET, and PROPER SUBSET. The specializations of the ambiguous semantic relations are encoded in the ontology, while a semantic lexicon provides interpretations of the predicates. 
At the domain interpretation stage, the semantic lexicon is consulted to translate the tectogrammatical frames of the predicates into the semantic relations represented in the domain ontology. More details on the lexical-semantic stage of interpretation can be found in (Wolska and KruijffKorbayov´a, 2004b), and more details on the domain ontology are presented in (Horacek and Wolska, 2004). For example, for the predicate contain, the lexicon contains the following facts: contain( ,   ,     )  (SUBFORMULA   , embedding   ) [’a Patient of type FORMULA is a subformula embedded within a FORMULA in the Actor relation with respect to the head contain’] contain( ,  !#"%$  ,    !#"%$  )  CONTAINMENT(container   , containee   ) [’the Containment relation involves a predicate contain and its Actor and Patient dependents, where the Actor and Patient are the container and containee parameters respectively’] Translation rules that consult the ontology expand the meaning of the predicates to all their alternative domain-specific interpretations preserving argument structure. As it is in the capacity of neither sentence-level nor discourse-level analysis to evaluate the correctness of the alternative interpretations, this task is delegated to the Proof Manager (PM). The task of the PM is to: (A) communicate directly with the theorem prover;9 (B) build and maintain a representation of the proof constructed by the student;10 (C) check type compatibility of proof-relevant entities introduced as new in discourse; (D) check consistency and validity of each of the interpretations constructed by the analysis module, with the proof context; (E) evaluate the proof-relevant part of the utterance with respect to completeness, accuracy, and relevance. 5 Example analysis In this section, we illustrate the mechanics of the approach on the following examples. (1) B enthaelt kein    [B contains no    ] (2) A  B & A  B ' (3) A enthaelt keinesfalls Elemente, die in B sind. [A contains no elements that are also in B] Example (1) shows the tight interaction of natural language and mathematical formulae. The intended reading of the scope of negation is over a part of the formula following it, rather than the whole formula. The analysis proceeds as follows. The formula tagger first identifies the formula ( x  A ) and substitutes it with the generic entry FORMULA represented in the lexicon. If there was no prior discourse entity for “B” to verify its type, the type is ambiguous between CONST, TERM, and FORMULA.11 The sentence is assigned four alternative readings: (i) “CONST contains no FORMULA”, (ii) “TERM contains no FORMULA”, (iii) “FORMULA contains no FORMULA”, (iv) “CONST contains no CONST 0 FORMULA”. The last reading is obtained by partitioning an entity of type FORMULA in meaningful ways, taking into account possible interaction with preceding modifiers. Here, given the quantifier “no”, the expression ( x  A ) has been split into its surface parts 9We are using a version of * MEGA adapted for assertionlevel proving (Vo et al., 2003). 10The discourse content representation is separated from the proof representation, however, the corresponding entities must be co-indexed in both. 11In prior discourse, there may have been an assignment B := + , where + is a formula, in which case, B would be known from discourse context to be of type FORMULA (similarly for term assignment); by CONST we mean a set or element variable such as A, x denoting a set A or an element x respectively. 
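A toy rendering of this lexicon lookup and ontology expansion, applied to the four readings just listed, might look as follows. The entry format, the sort names and the rejection test are our simplification for illustration, not the system's actual encoding.

```python
# Toy version of the semantic-lexicon lookup plus ontology expansion used to
# interpret the readings above; format and names are simplified assumptions.
SEMANTIC_LEXICON = {
    # (predicate, Actor sort, Patient sort) -> underspecified semantic relation
    ("contain", "FORMULA", "FORMULA"): "SUBFORMULA",
    ("contain", "CONST", "CONST"):     "CONTAINMENT",
    ("contain", "CONST", "0FORMULA"):  "CONTAINMENT",   # split reading: x plus '(in A)'
}

ONTOLOGY = {                 # ambiguous relation -> admissible domain relations
    "SUBFORMULA":  ["SUBFORMULA"],
    "CONTAINMENT": ["ELEMENT", "SUBSET", "PROPER_SUBSET"],
}

def interpret(pred, actor_sort, patient_sort):
    """Return all domain readings, or [] if the frame is sortally incompatible."""
    relation = SEMANTIC_LEXICON.get((pred, actor_sort, patient_sort))
    return ONTOLOGY.get(relation, []) if relation else []

for reading in [("CONST", "FORMULA"),     # (i)   rejected
                ("TERM", "FORMULA"),      # (ii)  rejected
                ("FORMULA", "FORMULA"),   # (iii) formula embeds a subformula
                ("CONST", "0FORMULA")]:   # (iv)  set contains an element
    print(reading, "->", interpret("contain", *reading))
```

Run on the four readings, the lookup rejects (i) and (ii) outright and expands (iv) into the three containment variants that are then handed to the Proof Manager for evaluation in the proof context.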
enthalten:contain FORMULA: ACT  no RESTR  FORMULA:  PAT  Figure 3: Tectogrammatical representation of the utterance “B enthaelt kein (   ) ” [B contains no  ]. enthalten:contain CONST: ACT  no RESTR  CONST:  PAT  0 FORMULA:  GENREL  Figure 4: Tectogrammatical representation of the utterance “B enthaelt kein (    ) ” [B contains no (    ) ]. as follows: ( [x][  A] ) .12 [x] has been substituted with a generic lexical entry CONST, and [  A] with a symbolic entry for a formula missing its left argument (cf. Section 4.1). The readings (i) and (ii) are rejected because of sortal incompatibility. The linguistic meanings of readings (iii) and (iv) are presented in Fig. 3 and Fig. 4, respectively. The corresponding HLDS representations are:13 — for “FORMULA contains no FORMULA”: s:(@k1(kein   RESTR  f2   BODY  (e1  enthalten   ACT  (f1  FORMULA)   PAT  f2))  @f2(FORMULA)) [‘formula B embeds no subformula x  A’] — for “CONST contains no CONST 0 FORMULA”: s:(@k1(kein   RESTR  x1   BODY  (e1  enthalten   ACT  (c1  CONST)   PAT  x1))  @x1(CONST   HASPROP  (x2  0 FORMULA))) [‘B contains no x such that x is an element of A’] Next, the semantic lexicon is consulted to translate these readings into their domain interpretations. The relevant lexical semantic entries were presented in Section 4.3. Using the linguistic meaning, the semantic lexicon, and the ontology, we obtain four interpretations paraphrased below: — for “FORMULA contains no FORMULA”: (1.1) ’it is not the case that  PAT  , the formula, x  A, is a subformula of  ACT  , the formula B’; — for “CONST contains no CONST 0 FORMULA”: 12There are other ways of constituent partitioning of the formula at the top level operator to separate the operator and its arguments: [x][ ][A]  and [x ][A]  . Each of the partitions obtains its appropriate type corresponding to a lexical entry available in the grammar (e.g., the [x ] chunk is of type FORMULA 0 for a formula missing its right argument). Not all the readings, however, compose to form a syntactically and semantically valid parse of the given sentence. 13Irrelevant parts of the meaning representation are omitted; glosses of the hybrid formulae are provided. enthalten:contain CONST: ACT  no RESTR  elements PAT  in GENREL   ACT  CONST: LOC  Figure 5: Tectogrammatical representation of the utterance “A enthaelt keinesfalls Elemente, die auch in B sind.” [A contains no elements that are also in B.]. (1.2a) ’it is not the case that  PAT  , the constant x,  ACT  , B, and x  A’, (1.2b) ’it is not the case that  PAT  , the constant x,   ACT  , B, and x  A’, (1.2c) ’it is not the case that  PAT  , the constant x,   ACT  , B, and x  A’. The interpretation (1.1) is verified in the discourse context with information on structural parts of the discourse entity “B” of type formula, while (1.2a-c) are translated into messages to the PM and passed on for evaluation in the proof context. Example (2) contains one mathematical formula. Such utterances are the simplest to analyze: The formulae identified by the mathematical expression tagger are passed directly to the PM. Example (3) shows an utterance with domainrelevant content fully linguistically verbalized. 
The analysis of fully verbalized utterances proceeds similarly to the first example: the mathematical expressions are substituted with the appropriate generic lexical entries (here, “A” and “B” are substituted with their three possible alternative readings: CONST, TERM, and FORMULA, yielding several readings “CONST contains no elements that are also in CONST”, “TERM contains no elements that are also in TERM”, etc.). Next, the sentence is analyzed by the grammar. The semantic roles of Actor and Patient associated with the verb “contain” are taken by “A” and “elements” respectively; quantifier “no” is in the relation Restrictor with “A”; the relative clause is in the GeneralRelation with “elements”, etc. The linguistic meaning of the utterance in example (3) is shown in Fig. 5. Then, the semantic lexicon and the ontology are consulted to translate the linguistic meaning into its domain-specific interpretations, which are in this case very similar to the ones of example (1). 6 Conclusions and Further Work Based on experimentally collected tutorial dialogs on mathematical proofs, we argued for the use of deep syntactic and semantic analysis. We presented an approach that uses multimodal CCG with hybrid logic dependency semantics, treating natural and symbolic language on a par, thus enabling uniform analysis of inputs with varying degree of formal content verbalization. A preliminary evaluation of the mathematical expression parser showed a reasonable result. We are incrementally extending the implementation of the deep analysis components, which will be evaluated as part of the next Wizard-of-Oz experiment. One of the issues to be addressed in this context is the treatment of ill-formed input. On the one hand, the system can initiate a correction subdialog in such cases. On the other hand, it is not desirable to go into syntactic details and distract the student from the main tutoring goal. We therefore need to handle some degree of ill-formed input. Another question is which parts of mathematical expressions should have explicit semantic representation. We feel that this choice should be motivated empirically, by systematic occurrence of natural language references to parts of mathematical expressions (e.g., “the left/right side”, “the parenthesis”, and “the inner parenthesis”) and by the syntactic contexts in which they occur (e.g., the partitioning ( [x][  A] ) seems well motivated in “B contains no x  A”; [x  ] is a constituent in “x  of complement of B.”) We also plan to investigate the interaction of modal verbs with the argumentative structure of the proof. For instance, the necessity modality is compatible with asserting a necessary conclusion or a prerequisite condition (e.g., “A und B muessen disjunkt sein.” [A and B must be disjoint.]). This introduces an ambiguity that needs to be resolved by the domain reasoner. References J. M. Baldridge and G.J. M. Kruijff. 2002. Coupling CCG with hybrid logic dependency semantics. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia PA. pp. 319–326. J. M. Baldridge and G.J. M. Kruijff. 2003. Multi-modal combinatory categorial grammar. In Proc. of the 10th Annual Meeting of the European Chapter of the Association for Computational Linguistics (EACL’03), Budapest, Hungary. pp. 211–218. J. Baur. 1999. Syntax und Semantik mathematischer Texte. Diplomarbeit, Fachrichtung Computerlinguistik, Universit¨at des Saarlandes, Saarbr¨ucken, Germany. C. Benzm¨uller, A. Fiedler, M. Gabsdil, H. Horacek, I. 
KruijffKorbayov´a, M. Pinkal, J. Siekmann, D. Tsovaltzi, B. Q. Vo, and M. Wolska. 2003a. Tutorial dialogs on mathematical proofs. In Proc. of IJCAI’03 Workshop on Knowledge Representation and Automated Reasoning for E-Learning Systems, Acapulco, Mexico. C. Benzm¨uller, A. Fiedler, M. Gabsdil, H. Horacek, I. KruijffKorbayov´a, M. Pinkal, J. Siekmann, D. Tsovaltzi, B. Q. Vo, and M. Wolska. 2003b. A Wizard-of-Oz experiment for tutorial dialogues in mathematics. In Proc. of the AIED’03 Workshop on Advanced Technologies for Mathematics Education, Sydney, Australia. pp. 471–481. M. Glass. 2001. Processing language input in the CIRCSIMTutor intelligent tutoring system. In Proc. of the 10th AIED Conference, San Antonio, TX. pp. 210–221. A. Graesser, P. Wiemer-Hastings, K. Wiemer-Hastings, D. Harter, and N. Person. 2000. Using latent semantic analysis to evaluate the contributions of students in autotutor. Interactive Learning Environments, 8:2. pp. 129–147. E. Hajiˇcov´a, J. Panevov´a, and P. Sgall. 2000. A manual for tectogrammatical tagging of the Prague Dependency Treebank. TR-2000-09, Charles University, Prague, Czech Republic. H. Horacek and M. Wolska. 2004. Interpreting Semi-Formal Utterances in Dialogs about Mathematical Proofs. In Proc. of the 9th International Conference on Application of Natural Language to Information Systems (NLDB’04), Salford, Manchester, Springer. To appear. G.J.M. Kruijff and I. Kruijff-Korbayov´a. 2001. A hybrid logic formalization of information structure sensitive discourse interpretation. In Proc. of the 4th International Conference on Text, Speech and Dialogue (TSD’2001), ˇZelezn´a Ruda, Czech Republic. pp. 31–38. G.J.M. Kruijff. 2001. A Categorial-Modal Logical Architecture of Informativity: Dependency Grammar Logic & Information Structure. Ph.D. Thesis, Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic. J. Moore. 1993. What makes human explanations effective? In Proc. of the 15th Annual Conference of the Cognitive Science Society, Hillsdale, NJ. pp. 131–136. P. Sgall, E. Hajiˇcov´a, and J. Panevov´a. 1986. The meaning of the sentence in its semantic and pragmatic aspects. Reidel Publishing Company, Dordrecht, The Netherlands. Q.B. Vo, C. Benzm¨uller, and S. Autexier. 2003. Assertion Application in Theorem Proving and Proof Planning. In Proc. of the International Joint Conference on Artificial Intelligence (IJCAI). Acapulco, Mexico. M. Wolska and I. Kruijff-Korbayov´a. 2004a. Building a dependency-based grammar for parsing informal mathematical discourse. In Proc. of the 7th International Conference on Text, Speech and Dialogue (TSD’04), Brno, Czech Republic, Springer. To appear. M. Wolska and I. Kruijff-Korbayov´a. 2004b. LexicalSemantic Interpretation of Language Input in Mathematical Dialogs. In Proc. of the ACL Workshop on Text Meaning and Interpretation, Barcelona, Spain. To appear. M. Wolska, B. Q. Vo, D. Tsovaltzi, I. Kruijff-Korbayov´a, E. Karagjosova, H. Horacek, M. Gabsdil, A. Fiedler, C. Benzm¨uller, 2004. An annotated corpus of tutorial dialogs on mathematical theorem proving. In Proc. of 4th International Conference On Language Resources and Evaluation (LREC’04), Lisbon, Portugal. pp. 1007–1010. C. Zinn. 2003. A Computational Framework for Understanding Mathematical Discourse. In Logic Journal of the IGPL, 11:4, pp. 457–484, Oxford University Press.
Enriching the Output of a Parser Using Memory-Based Learning Valentin Jijkoun and Maarten de Rijke Informatics Institute, University of Amsterdam jijkoun, mdr  @science.uva.nl Abstract We describe a method for enriching the output of a parser with information available in a corpus. The method is based on graph rewriting using memorybased learning, applied to dependency structures. This general framework allows us to accurately recover both grammatical and semantic information as well as non-local dependencies. It also facilitates dependency-based evaluation of phrase structure parsers. Our method is largely independent of the choice of parser and corpus, and shows state of the art performance. 1 Introduction We describe a method to automatically enrich the output of parsers with information that is present in existing treebanks but usually not produced by the parsers themselves. Our motivation is two-fold. First and most important, for applications requiring information extraction or semantic interpretation of text, it is desirable to have parsers produce grammatically and semantically rich output. Second, to facilitate dependency-based comparison and evaluation of different parsers, their outputs may need to be transformed into specific rich dependency formalisms. The method allows us to automatically transform the output of a parser into structures as they are annotated in a dependency treebank. For a phrase structure parser, we first convert the produced phrase structures into dependency graphs in a straightforward way, and then apply a sequence of graph transformations: changing dependency labels, adding new nodes, and adding new dependencies. A memory-based learner trained on a dependency corpus is used to detect which modifications should be performed. For a dependency corpus derived from the Penn Treebank and the parsers we considered, these transformations correspond to adding Penn functional tags (e.g., -SBJ, -TMP, -LOC), empty nodes (e.g., NP PRO) and non-local dependencies (controlled traces, WHextraction, etc.). For these specific sub-tasks our method achieves state of the art performance. The evaluation of the transformed output of the parsers of Charniak (2000) and Collins (1999) gives 90% unlabelled and 84% labelled accuracy with respect to dependencies, when measured against a dependency corpus derived from the Penn Treebank. The paper is organized as follows. After providing some background and motivation in Section 2, we give the general overview of our method in Section 3. In Sections 4 through 8, we describe all stages of the transformation process, providing evaluation results and comparing our methods to earlier work. We discuss the results in Section 9. 2 Background and Motivation State of the art statistical parsers, e.g., parsers trained on the Penn Treebank, produce syntactic parse trees with bare phrase labels, such as NP, PP, S, although the training corpora are usually much richer and often contain additional grammatical and semantic information (distinguishing various modifiers, complements, subjects, objects, etc.), including non-local dependencies, i.e., relations between phrases not adjacent in the parse tree. While this information may be explicitly annotated in a treebank, it is rarely used or delivered by parsers.1 The reason is that bringing in more information of this type usually makes the underlying parsing model more complicated: more parameters need to be estimated and independence assumptions may no longer hold. 
Klein and Manning (2003), for example, mention that using functional tags of the Penn Treebank (temporal, location, subject, predicate, etc.) with a simple unlexicalized PCFG generally had a negative effect on the parser's performance. Currently, there are no parsers trained on the Penn Treebank that use the structure of the treebank in full and that are thus capable of producing syntactic structures containing all or nearly all of the information annotated in the corpus.1
1 Some notable exceptions are the CCG parser described in (Hockenmaier, 2003), which incorporates non-local dependencies into the parser's statistical model, and the parser of Collins (1999), which uses WH traces and argument/modifier distinctions.
In recent years there has been a growing interest in getting more information from parsers than just bare phrase trees. Blaheta and Charniak (2000) presented the first method for assigning Penn functional tags to constituents identified by a parser. Pattern-matching approaches were used in (Johnson, 2002) and (Jijkoun, 2003) to recover non-local dependencies in phrase trees. Furthermore, experiments described in (Dienes and Dubey, 2003) show that the latter task can be successfully addressed by shallow preprocessing methods.
3 An Overview of the Method
In this section we give a high-level overview of our method for transforming a parser's output and describe the different steps of the process. In the experiments we used the parsers described in (Charniak, 2000) and (Collins, 1999). For Collins' parser the text was first POS-tagged using Ratnaparkhi's maximum entropy tagger.
The training phase of the method consists in learning which transformations need to be applied to the output of a parser to make it as similar to the treebank data as possible. As a preliminary step (Step 0), we convert the WSJ2 to a dependency corpus without losing the annotated information (functional tags, empty nodes, non-local dependencies). The same conversion is applied to the output of the parsers we consider. The details of the conversion process are described in Section 4 below.
2 Throughout the paper WSJ refers to the Penn Treebank II Wall Street Journal corpus.
The training then proceeds by comparing graphs derived from a parser's output with the graphs from the dependency corpus, detecting various mismatches, such as incorrect arc labels and missing nodes or arcs. Then the following steps are taken to fix the mismatches:
Step 1: changing arc labels
Step 2: adding new nodes
Step 3: adding new arcs
Obviously, other modifications are possible, such as deleting arcs or moving arcs from one node to another. We leave these for future work, though, and focus on the three transformations mentioned above.
The dependency corpus was split into training (WSJ sections 02–21), development (sections 00–01) and test (section 23) corpora. For each of the steps 1, 2 and 3 we proceed as follows:
1. compare the training corpus to the output of the parser on the strings of the corpus, after applying the transformations of the previous steps
2. identify possible beneficial transformations (which arc labels need to be changed or where new nodes or arcs need to be added)
3. train a memory-based classifier to predict possible transformations given their context (i.e., information about the local structure of the dependency graph around possible application sites).
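A skeleton of this train-and-apply loop might look as follows. The graph type and the site-extraction and modification callbacks are placeholders, and the overlap-based learner is only a toy stand-in for the memory-based classifier actually used; none of the names below come from the paper.

```python
# Skeleton of the three-step transformation method (train and apply).
# Dependency graphs and the extractor/decision callbacks are placeholders;
# the overlap-based 1-NN learner stands in for a real memory-based classifier.
class OverlapMBL:
    """Toy memory-based learner: 1-nearest neighbour under feature overlap."""
    def fit(self, X, y):
        self.memory = list(zip(X, y))

    def predict(self, feats):
        best = max(self.memory,
                   key=lambda ex: sum(a == b for a, b in zip(ex[0], feats)))
        return best[1]

class TransformationStep:
    """One learned graph transformation (relabel, add node, or add arc)."""
    def __init__(self, extract_sites, gold_decision, apply_decision):
        self.extract_sites = extract_sites     # graph -> [(site, feature_vector)]
        self.gold_decision = gold_decision     # (parsed, gold, site) -> class label
        self.apply_decision = apply_decision   # (graph, site, label) -> modifies graph
        self.model = OverlapMBL()

    def train(self, parsed_graphs, gold_graphs):
        X, y = [], []
        for parsed, gold in zip(parsed_graphs, gold_graphs):
            for site, feats in self.extract_sites(parsed):
                X.append(feats)
                y.append(self.gold_decision(parsed, gold, site))  # e.g. 'none' or a label
        self.model.fit(X, y)

    def apply(self, graph):
        for site, feats in self.extract_sites(graph):
            decision = self.model.predict(feats)
            if decision != "none":
                self.apply_decision(graph, site, decision)
        return graph

def enrich(parser_output_graph, steps):
    """Application phase: run step 1 (labels), 2 (nodes), 3 (arcs) in sequence."""
    for step in steps:
        parser_output_graph = step.apply(parser_output_graph)
    return parser_output_graph
```

The same skeleton serves all three steps; only the callbacks that pick out application sites, read off the gold decision, and modify the graph differ.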
While the definitions of the context and application site and the graph modifications are different for the three steps, the general structure of the method remains the same at each stage. Sections 6, 7 and 8 describe the steps in detail.
In the application phase of the method, we proceed similarly. First, the output of the parser is converted to dependency graphs, and then the learners trained during the steps 1, 2 and 3 are applied in sequence to perform the graph transformations. Apart from the conversion from phrase structures to dependency graphs and the extraction of some linguistic features for the learning, our method does not use any information about the details of the treebank annotation or the parser's output: it works with arbitrary labelled directed graphs.
4 Step 0: From Constituents to Dependencies
To convert phrase trees to dependency structures, we followed the commonly used scheme (Collins, 1999). The conversion routine,3 described below, is applied both to the original WSJ structures and the output of the parsers, though the former provides more information (e.g., traces) which is used by the conversion routine if available.
3 Our converter is available at http://www.science.uva.nl/~jijkoun/software.
First, for the treebank data, all traces are resolved and corresponding empty nodes are replaced with links to target constituents, so that syntactic trees become directed acyclic graphs. Second, for each constituent we detect its head daughters (more than one in the case of conjunction) and identify lexical heads. Then, for each constituent we output new dependencies between its lexical head and the lexical heads of its non-head daughters. The label of every new dependency is the constituent's phrase label, stripped of all functional tags and coindexing marks, conjoined with the label of the non-head daughter, with its functional tags but without coindexing marks.
[Figure 1: Example of (a) the Penn Treebank WSJ annotation, (b) the output of Charniak's parser, and the results of the conversion to dependency structures of (c) the Penn tree and of (d) the parser's output.]
Figure 1 shows an example of the original Penn annotation (a), the output of Charniak's parser (b) and the results of our conversion of these trees to dependency structures (c and d). The interpretation of the dependency labels is straightforward: e.g., the label S|NP-TMP corresponds to a sentence (S) being modified by a temporal noun phrase (NP-TMP).
The core of the conversion routine is the selection of head daughters of the constituents. Following (Collins, 1999), we used a head table, but extended it with a set of additional rules, based on constituent labels, POS tags or, sometimes, actual words, to account for situations where the head table alone gave unsatisfactory results. The most notable extension is our handling of conjunctions, which are often left relatively flat in WSJ and, as a result, in a parser's output: we used simple pattern-based heuristics to detect conjuncts and mark all conjuncts as heads of a conjunction.
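A much-simplified rendition of this conversion is sketched below. The head rules shown are a tiny invented subset of a real head table, and trace resolution, functional-tag stripping of coindexing marks, and the conjunction heuristics are left out.

```python
# Simplified sketch of Step 0: phrase-structure tree -> labelled dependencies
# using a head table.  The rules below are an illustrative subset only.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Tree:
    label: str                       # phrase label or POS tag
    children: List["Tree"] = field(default_factory=list)
    word: str = None                 # set for leaves only

HEAD_RULES = {                       # phrase -> preferred child labels, in order
    "S": ["VP", "S"],
    "VP": ["VBD", "VBZ", "VB", "VP"],
    "NP": ["NN", "NNS", "NP"],
}

def strip_tags(label: str) -> str:
    return label.split("-")[0].split("=")[0]

def head_child(node: Tree) -> Tree:
    for wanted in HEAD_RULES.get(strip_tags(node.label), []):
        for child in node.children:
            if strip_tags(child.label) == wanted:
                return child
    return node.children[-1]         # fallback: rightmost child

def lexical_head(node: Tree) -> str:
    return node.word if node.word else lexical_head(head_child(node))

def dependencies(node: Tree) -> List[Tuple[str, str, str]]:
    """Return (head_word, dependent_word, label) triples, labels like 'S|NP-SBJ'."""
    deps = []
    if node.word:                    # leaf
        return deps
    head = head_child(node)
    for child in node.children:
        if child is not head:
            label = f"{strip_tags(node.label)}|{child.label}"
            deps.append((lexical_head(head), lexical_head(child), label))
        deps.extend(dependencies(child))
    return deps

sent = Tree("S", [Tree("NP-SBJ", [Tree("NNS", word="directors")]),
                  Tree("VP", [Tree("VBD", word="planned")])])
print(dependencies(sent))            # [('planned', 'directors', 'S|NP-SBJ')]
```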
After the conversion, every resulting dependency structure is modified deterministically:
- auxiliary verbs (be, do, have) become dependents of corresponding main verbs (similar to modal verbs, which are handled by the head table);
- to fix a WSJ inconsistency, we move the -LGS tag (indicating logical subject of passive in a by-phrase) from the PP to its child NP.
5 Dependency-based Evaluation of Parsers
After the original WSJ structures and the parsers' outputs have been converted to dependency structures, we evaluate the performance of the parsers against the dependency corpus. We use the standard precision/recall measures over sets of dependencies (excluding punctuation marks, as usual) and evaluate Collins' and Charniak's parsers on WSJ section 23 in three settings:
- on unlabelled dependencies;
- on labelled dependencies with only bare labels (all functional tags discarded);
- on labelled dependencies with functional tags.
Notice that since neither Collins' nor Charniak's parser outputs WSJ functional labels, all dependencies with functional labels in the gold parse will be judged incorrect in the third setting. The evaluation results are shown in Table 1, in the row "step 0".4
4 For meaningful comparison, the Collins' tags -A and -g are removed in this evaluation.
                                     unlabelled        labelled          with func. tags
Evaluation                Parser     P    R    f       P    R    f       P    R    f
after conversion          Charniak   89.9 83.9 86.8    85.9 80.1 82.9    68.0 63.5 65.7
(step 0, Section 4)       Collins    90.4 83.7 87.0    86.7 80.3 83.4    68.4 63.4 65.8
after relabelling         Charniak   89.9 83.9 86.8    86.3 80.5 83.3    83.8 78.2 80.9
(step 1, Section 6)       Collins    90.4 83.7 87.0    87.0 80.6 83.7    84.6 78.4 81.4
after adding nodes        Charniak   90.1 85.4 87.7    86.5 82.0 84.2    84.1 79.8 81.9
(step 2, Section 7)       Collins    90.6 85.3 87.9    87.2 82.1 84.6    84.9 79.9 82.3
after adding arcs         Charniak   90.0 89.7 89.8    86.5 86.2 86.4    84.2 83.9 84.0
(step 3, Section 8)       Collins    90.4 89.4 89.9    87.1 86.2 86.6    84.9 83.9 84.4
Table 1: Dependency-based evaluation of the parsers after different transformation steps
As explained above, the low numbers for the dependency evaluation with functional tags are expected, because the two parsers were not intended to produce functional labels. Interestingly, the ranking of the two parsers is different for the dependency-based evaluation than for PARSEVAL: Charniak's parser obtains a higher PARSEVAL score than Collins' (89.0% vs. 88.2%), but slightly lower f-score on dependencies without functional tags (82.9% vs. 83.4%).
To summarize the evaluation scores at this stage, both parsers perform with f-score around 87% on unlabelled dependencies. When evaluating on bare dependency labels (i.e., disregarding functional tags) the performance drops to 83%. The new errors that appear when taking labels into account come from different sources: incorrect POS tags (NN vs. VBG), different degrees of flatness of analyses in gold and test parses (JJ vs. ADJP, or CD vs. QP) and inconsistencies in the Penn annotation (VP vs. RRC). Finally, the performance goes down to around 66% when taking into account functional tags, which are not produced by the parsers at all.
6 Step 1: Changing Dependency Labels
Intuitively, it seems that the 66% performance on labels with functional tags is an underestimation, because much of the missing information is easily recoverable. E.g., one can think of simple heuristics to distinguish subject NPs, temporal PPs, etc., thus introducing functional labels and improving the scores.
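A rule-of-thumb labeller of this kind might look like the sketch below; the cue list and the rules are invented purely for illustration.

```python
# A hand-written functional-tag labeller of the kind alluded to above;
# the cue lists and rules are invented for illustration and far from complete.
TEMPORAL_NOUNS = {"month", "year", "yesterday", "week", "friday"}

def add_functional_tag(label, head_word, dep_word, position):
    """Guess a Penn functional tag for a dependency label such as 'S|NP'."""
    parent, child = label.split("|")
    if parent == "S" and child == "NP" and position == "left-of-head":
        return f"{parent}|{child}-SBJ"          # preverbal NP under S: subject?
    if child.startswith("NP") and dep_word.lower() in TEMPORAL_NOUNS:
        return f"{parent}|{child}-TMP"          # temporal NP modifier?
    if parent == "VP" and child == "PP" and head_word.lower() == "put":
        return f"{parent}|{child}-CLR"          # already a verb-specific exception
    return label
```

Even these three rules need word lists and verb-specific exceptions, which illustrates how quickly such an approach becomes ad hoc.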
Developing such heuristics would be a very time consuming and ad hoc process: e.g., Collins' -A and -g tags may give useful clues for this labelling, but they are not available in the output of other parsers. As an alternative to hardcoded heuristics, Blaheta and Charniak (2000) proposed to recover the Penn functional tags automatically. On the Penn Treebank, they trained a statistical model that, given a constituent in a parsed sentence and its context (parent, grandparent, head words thereof etc.), predicted the functional label, possibly empty. The method gave impressive performance, with 98.64% accuracy on all constituents and 87.28% f-score for non-empty functional labels, when applied to constituents correctly identified by Charniak's parser. If we extrapolate these results to labelled PARSEVAL with functional labels, the method would give around 87.8% performance (98.64% of the "usual" 89%) for Charniak's parser.
Adding functional labels can be viewed as a relabelling task: we need to change the labels produced by a parser. We considered this more general task, and used a different approach, taking dependency graphs as input. We first parsed the training part of our dependency treebank (sections 02–21) and identified possible relabellings by comparing dependencies output by a parser to dependencies from the treebank. E.g., for Collins' parser the most frequent relabellings were S|NP to S|NP-SBJ, PP|NP-A to PP|NP, VP|NP-A to VP|NP, S|NP-A to S|NP-SBJ and VP|PP to VP|PP-CLR. In total, around 30% of all the parser's dependencies had different labels in the treebank. We then learned a mapping from the parser's labels to those in the dependency corpus, using TiMBL, a memory-based classifier (Daelemans et al., 2003). The features used for the relabelling were similar to those used by Blaheta and Charniak, but redefined for dependency structures. For each dependency we included:
- the head word and the dependent word, and their POS tags;
- the leftmost dependent of the dependent word and its POS;
- the head of the head word (the grandparent), its POS and the label of the dependency between them;
- the closest left and right siblings of the dependent (i.e., other dependents of the head) and their POS tags;
- the label of the dependency itself, as derived from the parser's output.
When included in feature vectors, all dependency labels were split at '|', e.g., the label S|NP-A resulted in two features: S and NP-A. Testing was done as follows. The test corpus (section 23) was also parsed, and for each dependency a feature vector was formed and given to TiMBL to correct the dependency label. After this transformation the outputs of the parsers were evaluated, as before, on dependencies in the three settings. The results of the evaluation are shown in Table 1 (the row marked "step 1").
Let us take a closer look at the evaluation results. Obviously, relabelling does not change the unlabelled scores. The 1% improvement for evaluation on bare labels suggests that our approach is capable not only of adding functional tags, but can also correct the parser's phrase labels and part-of-speech tags: for Collins' parser the most frequent correct changes not involving functional labels were NP|NN to NP|JJ and NP|JJ to NP|VBN, fixing POS tagging errors. A very substantial increase of the labelled score (from 66% to 81%), which is only 6% lower than the unlabelled score, clearly indicates that, although the parsers do not produce functional labels, this information is to a large extent implicitly present in trees and can be recovered.
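To make the relabelling step concrete, the feature extraction for one dependency and the subsequent relabelling might look roughly as follows. The graph accessors are assumed helpers, and the classifier is any learner exposing a predict method (the actual experiments use TiMBL).

```python
# Sketch of Step 1 (relabelling).  `graph` is assumed to offer the accessors
# used below; the classifier plays the role TiMBL plays in the experiments.
def relabel_features(graph, head, dep):
    label = graph.label(head, dep)                    # e.g. 'S|NP-A'
    grand = graph.head_of(head)
    left = graph.leftmost_dependent(dep)
    lsib, rsib = graph.closest_siblings(dep)
    feats = [graph.word(head), graph.pos(head),
             graph.word(dep), graph.pos(dep),
             graph.word(left) if left else "_", graph.pos(left) if left else "_",
             graph.word(grand) if grand else "_", graph.pos(grand) if grand else "_",
             graph.label(grand, head) if grand else "_",
             graph.word(lsib) if lsib else "_", graph.pos(lsib) if lsib else "_",
             graph.word(rsib) if rsib else "_", graph.pos(rsib) if rsib else "_"]
    feats.extend(label.split("|"))                    # 'S|NP-A' -> ['S', 'NP-A']
    return feats

def relabel_graph(graph, classifier):
    for head, dep in graph.edges():
        new_label = classifier.predict(relabel_features(graph, head, dep))
        graph.set_label(head, dep, new_label)
```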
6.1 Comparison to Earlier Work
One effect of the relabelling procedure described above is the recovery of Penn functional tags. Thus, it is informative to compare our results with those reported in (Blaheta and Charniak, 2000) for this same task. Blaheta and Charniak measured tagging accuracy and precision/recall for functional tag identification only for constituents correctly identified by the parser (i.e., having the correct span and nonterminal label). Since our method uses the dependency formalism, to make a meaningful comparison we need to model the notion of a constituent being correctly found by a parser. For a word w, we say that the constituent corresponding to its maximal projection is correctly identified if there exists h, the head of w, such that for the dependency between h and w the right part of its label (e.g., NP-SBJ for S|NP-SBJ) is a nonterminal (i.e., not a POS tag) and matches the right part of the label in the gold dependency structure, after stripping functional tags. Thus, the constituent's label and headword should be correct, but not necessarily the span. Moreover, 2.5% of all constituents with functional labels (246 out of 9928 in section 23) are not maximal projections. Since our method ignores functional tags of such constituents (these tags disappear after the conversion of phrase structures to dependency graphs), we consider them as errors, i.e., reducing our recall value. Below, the tagging accuracy, precision and recall are evaluated on constituents correctly identified by Charniak's parser for section 23.
Method       Accuracy   P     R     f
Blaheta      98.6       87.2  87.4  87.3
This paper   94.7       90.2  86.9  88.5
The difference in the accuracy is due to two reasons. First, because of the different definition of a correctly identified constituent in the parser's output, we apply our method to a greater portion of all labels produced by the parser (95% vs. 89% reported in (Blaheta and Charniak, 2000)). This might make the task for our system more difficult. And second, whereas 22% of all constituents in section 23 have a functional tag, 36% of the maximal projections have one. Since we apply our method only to labels of maximal projections, this means that our accuracy baseline (i.e., never assign any tag) is lower.
7 Step 2: Adding Missing Nodes
As the row labelled "step 1" in Table 1 indicates, for both parsers the recall is relatively low (6% lower than the precision): while the WSJ trees, and hence the derived dependency structures, contain non-local dependencies and empty nodes, the parsers simply do not provide this information. To make up for this, we considered two further transformations of the output of the parsers: adding new nodes (corresponding to empty nodes in WSJ), and adding new labelled arcs. This section describes the former modification and Section 8 the latter.
As described in Section 4, when converting WSJ trees to dependency structures, traces are resolved, their empty nodes removed and new dependencies introduced. Of the remaining empty nodes (i.e., non-traces), the most frequent in WSJ are: NP PRO, empty units, empty complementizers, empty relative pronouns. To add missing empty nodes to dependency graphs, we compared the output of the parsers on the strings of the training corpus after steps 0 and 1 (conversion to dependencies and relabelling) to the structures in the corpus itself.
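This comparison step might be sketched as follows; the graph accessors used below are assumed helpers rather than the authors' actual code.

```python
# Sketch of how training instances for Step 2 could be collected: a word whose
# gold-standard counterpart has an empty-node dependent missing from the
# parsed graph becomes a positive example.  Accessors are assumed helpers.
def empty_node_instances(parsed, gold):
    """Yield (word, target) pairs; target is 'none' or 'symbol POS label'."""
    for word in parsed.words():
        target = "none"
        for dep in gold.dependents_of(word):
            if gold.pos(dep) == "-NONE-" and not parsed.has_dependent_like(word, dep):
                target = f"{gold.symbol(dep)} -NONE- {gold.label(word, dep)}"
                break
        yield word, target
```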
We trained a classifier which, for every word in the parser’s output, had to decide whether an empty node should be added as a new dependent of the word, and what its symbol (‘*’, ‘*U*’ or ‘0’ in WSJ), POS tag (always -NONE- in WSJ) and the label of the new dependency (e.g., ‘S NP-SBJ’ for NP PRO and ‘VP SBAR’ for empty complementizers) should be. This decision is conditioned on the word itself and its context. The features used were:  the word and its POS tag, whether the word has any subject and object dependents, and whether it is the head of a finite verb group;  the same information for the word’s head (if any) and also the label of the corresponding dependency;  the same information for the rightmost and leftmost dependents of the word (if exist) along with their dependency labels. In total, we extracted 23 symbolic features for every word in the corpus. TiMBL was trained on sections 02–21 and applied to the output of the parsers (after steps 0 and 1) on the test corpus (section 23), producing a list of empty nodes to be inserted in the dependency graphs. After insertion of the empty nodes, the resulting structures were evaluated against section 23 of the gold dependency treebank. The results are shown in Table 1 (the row “step 2”). For both parsers the insertion of empty nodes improves the recall by 1.5%, resulting in a 1% increase of the f-score. 7.1 Comparison to Earlier Work A procedure for empty node recovery was first described in (Johnson, 2002), along with an evaluation criterion: an empty node is correct if its category and position in the sentence are correct. Since our method works with dependency structures, not phrase trees, we adopt a different but comparable criterion: an empty node should be attached as a dependent to the correct word, and with the correct dependency label. Unlike the first metric, our correctness criterion also requires that possible attachment ambiguities are resolved correctly (e.g., as in the number of reports 0 they sent, where the empty relative pronoun may be attached either to number or to reports). For this task, the best published results (using Johnson’s metric) were reported by Dienes and Dubey (2003), who used shallow tagging to insert empty elements. Below we give the comparison to our method. Notice that this evaluation does not include traces (i.e., empty elements with antecedents): recovery of traces is described in Section 8. Type This paper Dienes&Dubey P R f P R f PRO-NP 73.1 63.89 68.1 68.7 70.4 69.5 COMP-SBAR 82.6 83.1 82.8 93.8 78.6 85.5 COMP-WHNP 65.3 40.0 49.6 67.2 38.3 48.8 UNIT 95.4 91.8 93.6 99.1 92.5 95.7 For comparison we use the notation of Dienes and Dubey: PRO-NP for uncontrolled PROs (nodes ‘*’ in the WSJ), COMP-SBAR for empty complementizers (nodes ‘0’ with dependency label VP SBAR), COMP-WHNP for empty relative pronouns (nodes ‘0’ with dependency label X SBAR, where X  VP) and UNIT for empty units (nodes ‘*U*’). It is interesting to see that for empty nodes except for UNIT both methods have their advantages, showing better precision or better recall. Yet shallow tagging clearly performs better for UNIT. 8 Step 3: Adding Missing Dependencies We now get to the third and final step of our transformation method: adding missing arcs to dependency graphs. The parsers we considered do not explicitly provide information about non-local dependencies (control, WH-extraction) present in the treebank. 
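The empty-node insertion step of Section 7 above can be pictured roughly as below. This is a sketch under assumed data structures: the graph accessors (head_of, label, dependents_of, add_empty_dependent), the helper predicates (has_subject, has_object, is_finite_head), the '_' padding value and the combined target encoding are hypothetical, and clf stands for any classifier that predicts a label for a single symbolic feature vector (a stand-in for TiMBL).

def insertion_features(i, words, pos, graph, has_subject, has_object, is_finite_head):
    def info(j):
        if j is None:
            return ["_"] * 5
        return [words[j], pos[j],
                str(has_subject(j, graph)), str(has_object(j, graph)),
                str(is_finite_head(j, graph))]
    head = graph.head_of(i)                      # None if word i has no head
    feats = info(i) + info(head)
    feats.append(graph.label(head, i) if head is not None else "_")
    deps = graph.dependents_of(i)                # assumed sorted by surface position
    feats += info(deps[0] if deps else None) + info(deps[-1] if deps else None)
    return feats

def insert_empty_nodes(graph, clf, words, pos, helpers):
    for i in range(len(words)):
        target = clf.predict(insertion_features(i, words, pos, graph, *helpers))
        if target != "none":
            # e.g. "*|-NONE-|S|NP-SBJ": empty symbol, its POS tag, new dependency label
            symbol, node_pos, dep_label = target.split("|", 2)
            graph.add_empty_dependent(i, symbol, node_pos, dep_label)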
Moreover, newly inserted empty nodes (step 2, Section 7) might also need more links to the rest of a sentence (e.g., the inserted empty complementizers). In this section we describe the insertion of missing dependencies. Johnson (2002) was the first to address recovery of non-local dependencies in a parser’s output. He proposed a pattern-matching algorithm: first, from the training corpus the patterns that license nonlocal dependencies are extracted, and then these patterns are detected in unseen trees, dependencies being added when matches are found. Building on these ideas, Jijkoun (2003) used a machine learning classifier to detect matches. We extended Jijkoun’s approach by providing the classifier with lexical information and using richer patterns with labels containing the Penn functional tags and empty nodes, detected at steps 1 and 2. First, we compared the output of the parsers on the strings of the training corpus after steps 0, 1 and 2 to the dependency structures in the training corpus. For every dependency that is missing in the parser’s output, we find the shortest undirected path in the dependency graph connecting the head and the dependent. These paths, connected sequences of labelled dependencies, define the set of possible patterns. For our experiments we only considered patterns occuring more than 100 times in the training corpus. E.g., for Collins’ parser, 67 different patterns were found. Next, from the parsers’ output on the strings of the training corpus, we extracted all occurrences of the patterns, along with information about the nodes involved. For every node in an occurrence of a pattern we extracted the following features:  the word and its POS tag;  whether the word has subject and object dependents;  whether the word is the head of a finite verb cluster. We then trained TiMBL to predict the label of the missing dependency (or ‘none’), given an occurrence of a pattern and the features of all the nodes involved. We trained a separate classifier for each pattern. For evaluation purposes we extracted all occurrences of the patterns and the features of their nodes from the parsers’ outputs for section 23 after steps 0, 1 and 2 and used TiMBL to predict and insert new dependencies. Then we compared the resulting dependency structures to the gold corpus. The results are shown in Table 1 (the row “step 3”). As expected, adding missing dependencies substantially improves the recall (by 4% for both parsers) and allows both parsers to achieve an 84% f-score on dependencies with functional tags (90% on unlabelled dependencies). The unlabelled f-score 89.9% for Collins’ parser is close to the 90.9% reported in (Collins, 1999) for the evaluation on unlabelled local dependencies only (without empty nodes and traces). Since as many as 5% of all dependencies in WSJ involve traces or empty nodes, the results in Table 1 are encouraging. 8.1 Comparison to Earlier Work Recently, several methods for the recovery of nonlocal dependencies have been described in the literature. Johnson (2002) and Jijkoun (2003) used pattern-matching on local phrase or dependency structures. Dienes and Dubey (2003) used shallow preprocessing to insert empty elements in raw sentences, making the parser itself capable of finding non-local dependencies. Their method achieves a considerable improvement over the results reported in (Johnson, 2002) and gives the best evaluation results published to date. 
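The pattern extraction just described rests on shortest undirected paths through the parser's dependency graph, which a plain breadth-first search provides. The sketch below assumes (head, dependent, label) triples and encodes each step of a path as a direction marker plus the edge label; both choices are our own, not taken from the paper.

from collections import deque

def shortest_labelled_path(edges, source, target):
    # edges: list of (head, dependent, label); a path may traverse an edge in
    # either direction, so the direction is recorded together with the label.
    adj = {}
    for h, d, l in edges:
        adj.setdefault(h, []).append((d, l, ">"))   # head-to-dependent step
        adj.setdefault(d, []).append((h, l, "<"))   # dependent-to-head step
    queue = deque([(source, [])])
    seen = {source}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path                              # e.g. ["<S|NP", ">VP|SBAR"]
        for nxt, label, direction in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [direction + label]))
    return None

Counting the resulting label sequences over the training corpus and keeping only those seen more than 100 times yields the pattern inventory used in step 3.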
To compare our results to Dienes and Dubey's, we carried out the transformation steps 0–3 described above, with a single modification: when adding missing dependencies (step 3), we only considered patterns that introduce nonlocal dependencies (i.e., traces: we kept the information whether a dependency is a trace when converting WSJ to a dependency corpus). As before, a dependency is correctly found if its head, dependent, and label are correct. For traces, this corresponds to the evaluation using the head-based antecedent representation described in (Johnson, 2002), and for empty nodes without antecedents (e.g., NP PRO) this is the measure used in Section 7.1. To make the results comparable to other methods, we strip functional tags from the dependency labels before label comparison. Below are the overall precision, recall, and f-score for our method and the scores reported in (Dienes and Dubey, 2003) for antecedent recovery using Collins' parser.

Method             P      R      f
Dienes and Dubey   81.5   68.7   74.6
This paper         82.8   67.8   74.6

Interestingly, the overall performance of our postprocessing method is very similar to that of the pre- and in-processing methods of Dienes and Dubey (2003). Hence, for most cases, traces and empty nodes can be reliably identified using only local information provided by a parser, using the parser itself as a black box. This is important, since making parsers aware of non-local relations need not improve the overall performance: Dienes and Dubey (2003) report a decrease in PARSEVAL f-score from 88.2% to 86.4% after modifying Collins' parser to resolve traces internally, although this allowed them to achieve high accuracy for traces.
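The dependency-based scores used in this comparison can be computed as in the following sketch, assuming dependencies are (head, dependent, label) triples, labels use the two-part form discussed earlier, and functional tags are attached with '-'. It mirrors the described metric, not the actual evaluation software.

def strip_functional_tags(label):
    # "S|NP-SBJ" -> "S|NP": drop everything after the first '-' in each part
    return "|".join(part.split("-")[0] for part in label.split("|"))

def dependency_prf(gold, system, strip_tags=False):
    norm = (lambda t: (t[0], t[1], strip_functional_tags(t[2]))) if strip_tags else (lambda t: t)
    gold_set = {norm(t) for t in gold}
    sys_set = {norm(t) for t in system}
    correct = len(gold_set & sys_set)
    p = correct / len(sys_set) if sys_set else 0.0
    r = correct / len(gold_set) if gold_set else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f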
Our preliminary experiments with Collins’ parser and the corpus annotated with grammatical relations (Carroll et al., 2003) are promising: the system achieves 76% precision/recall fscore, after the parser’s output is enriched with our method and transformed to grammatical relations using a set of 40 simple rules. This is very close to the performance reported by Carroll et al. (2003) for the parser specifically designed for the extraction of grammatical relations. Despite the high-dimensional feature spaces, the large number of lexical features, and the lack of independence between features, we achieved high accuracy using a memory-based learner. TiMBL performed well on tasks where structured, more complicated and task-specific statistical models have been used previously (Blaheta and Charniak, 2000). For all subtasks we used the same settings for TiMBL: simple feature overlap measure, 5 nearest neighbours with majority voting. During further experiments with our method on different corpora, we found that quite different settings led to a better performance. It is clear that more careful and systematic parameter tuning and the analysis of the contribution of different features have to be addressed. Finally, our method is not restricted to syntactic structures. It has been successfully applied to the identification of semantic relations (Ahn et al., 2004), using FrameNet as the training corpus. For this task, we viewed semantic relations (e.g., Speaker, Topic, Addressee) as dependencies between a predicate and its arguments. Adding such semantic relations to syntactic dependency graphs was simply an additional graph transformation step. 10 Conclusions We presented a method to automatically enrich the output of a parser with information that is not provided by the parser itself, but is available in a treebank. Using the method with two state of the art statistical parsers and the Penn Treebank allowed us to recover functional tags (grammatical and semantic), empty nodes and traces. Thus, we are able to provide virtually all information available in the corpus, without modifying the parser, viewing it, indeed, as a black box. Our method allows us to perform a meaningful dependency-based comparison of phrase structure parsers. The evaluation on a dependency corpus derived from the Penn Treebank showed that, after our post-processing, two state of the art statistical parsers achieve 84% accuracy on a fine-grained set of dependency labels. Finally, our method for enriching the output of a parser is, to a large extent, independent of a specific parser and corpus, and can be used with other syntactic and semantic resources. 11 Acknowledgements We are grateful to David Ahn and Stefan Schlobach and to the anonymous referees for their useful suggestions. This research was supported by grants from the Netherlands Organization for Scientific Research (NWO) under project numbers 22080-001, 365-20-005, 612.069.006, 612.000.106, 612.000.207 and 612.066.302. References David Ahn, Sisay Fissaha, Valentin Jijkoun, and Maarten de Rijke. 2004. The University of Amsterdam at Senseval-3: semantic roles and logic forms. In Proceedings of the ACL-2004 Workshop on Evaluation of Systems for the Semantic Analysis of Text. Don Blaheta and Eugene Charniak. 2000. Assigning function tags to parsed text. In Proceedings of the 1st Meeting of NAACL, pages 234–240. John Carroll, Guido Minnen, and Ted Briscoe. 2003. Parser evaluation using a grammatical relation annotation scheme. 
In Anne Abeill´e, editor, Building and Using Parsed Corpora, pages 299–316. Kluwer. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st Meeting of NAACL, pages 132–139. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch, 2003. TiMBL: Tilburg Memory Based Learner, version 5.0, Reference Guide. ILK Technical Report 03-10. Available from http://ilk.kub.nl/downloads/pub/papers/ilk0310.ps.gz. P´eter Dienes and Amit Dubey. 2003. Antecedent recovery: Experiments with a trace tagger. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 33–40. Julia Hockenmaier. 2003. Parsing with generative models of predicate-argument structure. In Proceedings of the 41st Meeting of ACL, pages 359–366. Valentin Jijkoun. 2003. Finding non-local dependencies: Beyond pattern matching. In Proceedings of the ACL-2003 Student Research Workshop, pages 37–43. Mark Johnson. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of the 40th meeting of ACL, pages 136–143. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Meeting of ACL, pages 423–430.
2004
40
Long-Distance Dependency Resolution in Automatically Acquired Wide-Coverage PCFG-Based LFG Approximations Aoife Cahill, Michael Burke, Ruth O’Donovan, Josef van Genabith, Andy Way National Centre for Language Technology and School of Computing, Dublin City University, Dublin, Ireland {acahill,mburke,rodonovan,josef,away}@computing.dcu.ie Abstract This paper shows how finite approximations of long distance dependency (LDD) resolution can be obtained automatically for wide-coverage, robust, probabilistic Lexical-Functional Grammar (LFG) resources acquired from treebanks. We extract LFG subcategorisation frames and paths linking LDD reentrancies from f-structures generated automatically for the Penn-II treebank trees and use them in an LDD resolution algorithm to parse new text. Unlike (Collins, 1999; Johnson, 2002), in our approach resolution of LDDs is done at f-structure (attribute-value structure representations of basic predicate-argument or dependency structure) without empty productions, traces and coindexation in CFG parse trees. Currently our best automatically induced grammars achieve 80.97% f-score for fstructures parsing section 23 of the WSJ part of the Penn-II treebank and evaluating against the DCU 1051 and 80.24% against the PARC 700 Dependency Bank (King et al., 2003), performing at the same or a slightly better level than state-of-the-art hand-crafted grammars (Kaplan et al., 2004). 1 Introduction The determination of syntactic structure is an important step in natural language processing as syntactic structure strongly determines semantic interpretation in the form of predicate-argument structure, dependency relations or logical form. For a substantial number of linguistic phenomena such as topicalisation, wh-movement in relative clauses and interrogative sentences, however, there is an important difference between the location of the (surface) realisation of linguistic material and the location where this material should be interpreted semantically. Resolution of such long-distance dependencies (LDDs) is therefore crucial in the determination of accurate predicate-argument struc1Manually constructed f-structures for 105 randomly selected trees from Section 23 of the WSJ section of the Penn-II Treebank ture, deep dependency relations and the construction of proper meaning representations such as logical forms (Johnson, 2002). Modern unification/constraint-based grammars such as LFG or HPSG capture deep linguistic information including LDDs, predicate-argument structure, or logical form. Manually scaling rich unification grammars to naturally occurring free text, however, is extremely time-consuming, expensive and requires considerable linguistic and computational expertise. Few hand-crafted, deep unification grammars have in fact achieved the coverage and robustness required to parse a corpus of say the size and complexity of the Penn treebank: (Riezler et al., 2002) show how a deep, carefully hand-crafted LFG is successfully scaled to parse the Penn-II treebank (Marcus et al., 1994) with discriminative (loglinear) parameter estimation techniques. The last 20 years have seen continuously increasing efforts in the construction of parse-annotated corpora. Substantial treebanks2 are now available for many languages (including English, Japanese, Chinese, German, French, Czech, Turkish), others are currently under construction (Arabic, Bulgarian) or near completion (Spanish, Catalan). 
Treebanks have been enormously influential in the development of robust, state-of-the-art parsing technology: grammars (or grammatical information) automatically extracted from treebank resources provide the backbone of many state-of-the-art probabilistic parsing approaches (Charniak, 1996; Collins, 1999; Charniak, 1999; Hockenmaier, 2003; Klein and Manning, 2003). Such approaches are attractive as they achieve robustness, coverage and performance while incurring very low grammar development cost. However, with few notable exceptions (e.g. Collins’ Model 3, (Johnson, 2002), (Hockenmaier, 2003) ), treebank-based probabilistic parsers return fairly simple “surfacey” CFG trees, without deep syntactic or semantic information. The grammars used by such systems are sometimes re2Or dependency banks. ferred to as “half” (or “shallow”) grammars (Johnson, 2002), i.e. they do not resolve LDDs but interpret linguistic material purely locally where it occurs in the tree. Recently (Cahill et al., 2002) showed how wide-coverage, probabilistic unification grammar resources can be acquired automatically from fstructure-annotated treebanks. Many second generation treebanks provide a certain amount of deep syntactic or dependency information (e.g. in the form of Penn-II functional tags and traces) supporting the computation of representations of deep linguistic information. Exploiting this information (Cahill et al., 2002) implement an automatic LFG f-structure annotation algorithm that associates nodes in treebank trees with fstructure annotations in the form of attribute-value structure equations representing abstract predicateargument structure/dependency relations. From the f-structure annotated treebank they automatically extract wide-coverage, robust, PCFG-based LFG approximations that parse new text into trees and f-structure representations. The LFG approximations of (Cahill et al., 2002), however, are only “half” grammars, i.e. like most of their probabilistic CFG cousins (Charniak, 1996; Johnson, 1999; Klein and Manning, 2003) they do not resolve LDDs but interpret linguistic material purely locally where it occurs in the tree. In this paper we show how finite approximations of long distance dependency resolution can be obtained automatically for wide-coverage, robust, probabilistic LFG resources automatically acquired from treebanks. We extract LFG subcategorisation frames and paths linking LDD reentrancies from f-structures generated automatically for the PennII treebank trees and use them in an LDD resolution algorithm to parse new text. Unlike (Collins, 1999; Johnson, 2002), in our approach LDDs are resolved on the level of f-structure representation, rather than in terms of empty productions and coindexation on parse trees. Currently we achieve fstructure/dependency f-scores of 80.24 and 80.97 for parsing section 23 of the WSJ part of the PennII treebank, evaluating against the PARC 700 and DCU 105 respectively. The paper is structured as follows: we give a brief introduction to LFG. We outline the automatic f-structure annotation algorithm, PCFG-based LFG grammar approximations and parsing architectures of (Cahill et al., 2002). We present our subcategorisation frame extraction and introduce the treebankbased acquisition of finite approximations of LFG functional uncertainty equations in terms of LDD paths. We present the f-structure LDD resolution algorithm, provide results and extensive evaluation. We compare our method with previous work. Finally, we conclude. 
2 Lexical Functional Grammar (LFG) Lexical-Functional Grammar (Kaplan and Bresnan, 1982; Dalrymple, 2001) minimally involves two levels of syntactic representation:3 c-structure and f-structure. C(onstituent)-structure represents the grouping of words and phrases into larger constituents and is realised in terms of a CFPSG grammar. F(unctional)-structure represents abstract syntactic functions such as SUBJ(ect), OBJ(ect), OBL(ique), closed and open clausal COMP/XCOMP(lement), ADJ(unct), APP(osition) etc. and is implemented in terms of recursive feature structures (attribute-value matrices). C-structure captures surface grammatical configurations, fstructure encodes abstract syntactic information approximating to predicate-argument/dependency structure or simple logical form (van Genabith and Crouch, 1996). C- and f-structures are related in terms of functional annotations (constraints, attribute-value equations) on c-structure rules (cf. Figure 1). S NP VP U.N. V NP signs treaty " SUBJ  PRED U.N. PRED sign OBJ  PRED treaty # S → NP VP ↑SUBJ=↓ ↑=↓ VP → V NP ↑=↓ ↑OBJ=↓ NP → U.N V → signs ↑PRED=U.N. ↑PRED=sign Figure 1: Simple LFG C- and F-Structure Uparrows point to the f-structure associated with the mother node, downarrows to that of the local node. The equations are collected with arrows instantiated to unique tree node identifiers, and a constraint solver generates an f-structure. 3 Automatic F-Structure Annotation The Penn-II treebank employs CFG trees with additional “functional” node annotations (such as -LOC, -TMP, -SBJ, -LGS, . . . ) as well as traces and coindexation (to indicate LDDs) as basic data structures. The f-structure annotation algorithm of (Cahill et 3LFGs may also involve morphological and semantic levels of representation. al., 2002) exploits configurational, categorial, PennII “functional”, local head and trace information to annotate nodes with LFG feature-structure equations. A slightly adapted version of (Magerman, 1994)’s scheme automatically head-lexicalises the Penn-II trees. This partitions local subtrees of depth one (corresponding to CFG rules) into left and right contexts (relative to head). The annotation algorithm is modular with four components (Figure 2): left-right (L-R) annotation principles (e.g. leftmost NP to right of V head of VP type rule is likely to be an object etc.); coordination annotation principles (separating these out simplifies other components of the algorithm); traces (translates traces and coindexation in trees into corresponding reentrancies in f-structure ( 1 in Figure 3)); catch all and clean-up. Lexical information is provided via macros for POS tag classes. L/R Context ⇒Coordination ⇒Traces ⇒Catch-All Figure 2: Annotation Algorithm The f-structure annotations are passed to a constraint solver to produce f-structures. Annotation is evaluated in terms of coverage and quality, summarised in Table 1. Coverage is near complete with 99.82% of the 48K Penn-II sentences receiving a single, connected f-structure. Annotation quality is measured in terms of precision and recall (P&R) against the DCU 105. The algorithm achieves an F-score of 96.57% for full f-structures and 94.3% for preds-only f-structures.4 S S-TPC- 1 NP U.N. VP V signs NP treaty NP Det the N headline VP V said S T- 1   TOPIC " SUBJ  PRED U.N. 
PRED sign OBJ  PRED treaty # 1 SUBJ h SPEC the PRED headline i PRED say COMP 1   Figure 3: Penn-II style tree with LDD trace and corresponding reentrancy in f-structure 4Full f-structures measure all attribute-value pairs including“minor” features such as person, number etc. The stricter preds-only captures only paths ending in PRED:VALUE. # frags # sent percent 0 85 0.176 1 48337 99.820 2 2 0.004 all preds P 96.52 94.45 R 96.63 94.16 Table 1: F-structure annotation results for DCU 105 4 PCFG-Based LFG Approximations Based on these resources (Cahill et al., 2002) developed two parsing architectures. Both generate PCFG-based approximations of LFG grammars. In the pipeline architecture a standard PCFG is extracted from the “raw” treebank to parse unseen text. The resulting parse-trees are then annotated by the automatic f-structure annotation algorithm and resolved into f-structures. In the integrated architecture the treebank is first annotated with f-structure equations. An annotated PCFG is then extracted where each non-terminal symbol in the grammar has been augmented with LFG f-equations: NP[↑OBJ=↓] → DT[↑SPEC=↓] NN[↑=↓] . Nodes followed by annotations are treated as a monadic category for grammar extraction and parsing. Post-parsing, equations are collected from parse trees and resolved into f-structures. Both architectures parse raw text into “proto” fstructures with LDDs unresolved resulting in incomplete argument structures as in Figure 4. S S NP U.N. VP V signs NP treaty NP Det the N headline VP V said   TOPIC " SUBJ  PRED U.N. PRED sign OBJ  PRED treaty # SUBJ h SPEC the PRED headline i PRED say   Figure 4: Shallow-Parser Output with Unresolved LDD and Incomplete Argument Structure (cf. Figure 3) 5 LDDs and LFG FU-Equations Theoretically, LDDs can span unbounded amounts of intervening linguistic material as in [U.N. signs treaty]1 the paper claimed . . . a source said []1. In LFG, LDDs are resolved at the f-structure level, obviating the need for empty productions and traces in trees (Dalrymple, 2001), using functional uncertainty (FU) equations. FUs are regular expressions specifying paths in f-structure between a source (where linguistic material is encountered) and a target (where linguistic material is interpreted semantically). To account for the fronted sentential constituents in Figures 3 and 4, an FU equation of the form ↑TOPIC = ↑COMP* COMP would be required. The equation states that the value of the TOPIC attribute is token identical with the value of the final COMP argument along a path through the immediately enclosing f-structure along zero or more COMP attributes. This FU equation is annotated to the topicalised sentential constituent in the relevant CFG rules as follows S → S NP VP ↑TOPIC=↓ ↑SUBJ=↓ ↑=↓ ↑TOPIC=↑COMP*COMP and generates the LDD-resolved proper f-structure in Figure 3 for the traceless tree in Figure 4, as required. In addition to FU equations, subcategorisation information is a crucial ingredient in LFG’s account of LDDs. As an example, for a topicalised constituent to be resolved as the argument of a local predicate as specified by the FU equation, the local predicate must (i) subcategorise for the argument in question and (ii) the argument in question must not be already filled. Subcategorisation requirements are provided lexically in terms of semantic forms (subcat lists) and coherence and completeness conditions (all GFs specified must be present, and no others may be present) on f-structure representations. 
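To illustrate how a functional-uncertainty path such as ↑TOPIC = ↑COMP* COMP is interpreted, the sketch below walks an f-structure represented as nested dictionaries, collects the sites where the final grammatical function is still unfilled, and identifies the TOPIC value with the first such site. The representation and function names are ours, acyclic structures are assumed, and the completeness/coherence checks and probabilistic ranking described later in the paper are omitted here.

def candidate_sites(fstruct, path):
    # path e.g. ["COMP*", "COMP"]: zero or more COMPs, then a final unfilled COMP slot
    if len(path) == 1:
        gf = path[0]
        return [(fstruct, gf)] if gf not in fstruct else []
    attr = path[0]
    if attr.endswith("*"):
        base = attr[:-1]
        sites = candidate_sites(fstruct, path[1:])            # zero occurrences of base
        if isinstance(fstruct.get(base), dict):
            sites += candidate_sites(fstruct[base], path)     # one more occurrence of base
        return sites
    if isinstance(fstruct.get(attr), dict):
        return candidate_sites(fstruct[attr], path[1:])
    return []

def resolve_topic(fstruct, path=("COMP*", "COMP")):
    sites = candidate_sites(fstruct, list(path))
    if "TOPIC" in fstruct and sites:
        target, gf = sites[0]          # in the paper, candidates are ranked probabilistically
        target[gf] = fstruct["TOPIC"]  # token identity: both attributes share one structure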
Semantic forms specify which grammatical functions (GFs) a predicate requires locally. For our example in Figures 3 and 4, the relevant lexical entries are: V → said ↑PRED=say⟨↑SUBJ, ↑COMP⟩ V → signs ↑PRED=sign⟨↑SUBJ, ↑OBJ⟩ FU equations and subcategorisation requirements together ensure that LDDs can only be resolved at suitable f-structure locations. 6 Acquiring Lexical and LDD Resources In order to model the LFG account of LDD resolution we require subcat frames (i.e. semantic forms) and LDD resolution paths through f-structure. Traditionally, such resources were handcoded. Here we show how they can be acquired from f-structure annotated treebank resources. LFG distinguishes between governable (arguments) and nongovernable (adjuncts) grammatical functions (GFs). If the automatic f-structure annotation algorithm outlined in Section 3 generates high quality f-structures, reliable semantic forms can be extracted (reverse-engineered): for each f-structure generated, for each level of embedding we determine the local PRED value and collect the governable, i.e. subcategorisable grammatical functions present at that level of embedding. For the proper f-structure in Figure 3 we obtain sign([subj,obj]) and say([subj,comp]). We extract frames from the full WSJ section of the Penn-II Treebank with 48K trees. Unlike many other approaches, our extraction process does not predefine frames, fully reflects LDDs in the source data-structures (cf. Figure 3), discriminates between active and passive frames, computes GF, GF:CFG category pairas well as CFG category-based subcategorisation frames and associates conditional probabilities with frames. Given a lemma l and an argument list s, the probability of s given l is estimated as: P(s|l) := count(l, s) Pn i=1 count(l, si) Table 2 summarises the results. We extract 3586 verb lemmas and 10969 unique verbal semantic form types (lemma followed by non-empty argument list). Including prepositions associated with the subcategorised OBLs and particles, this number goes up to 14348. The number of unique frame types (without lemma) is 38 without specific prepositions and particles, 577 with. F-structure annotations allow us to distinguish passive and active frames. Table 3 shows the most frequent semantic forms for accept. Passive frames are marked p. We carried out a comprehensive evaluation of the automatically acquired verbal semantic forms against the COMLEX Resource (Macleod et al., 1994) for the 2992 active verb lemmas that both resources have in common. We report on the evaluation of GF-based frames for the full frames with complete prepositional and particle infomation. We use relative conditional probability thresholds (1% and 5%) to filter the selection of semantic forms (Table 4). (O’Donovan et al., 2004) provide a more detailed description of the extraction and evaluation of semantic forms. Without Prep/Part With Prep/Part Lemmas 3586 3586 Sem. Forms 10969 14348 Frame Types 38 577 Active Frame Types 38 548 Passive Frame Types 21 177 Table 2: Verb Results Semantic Form Occurrences Prob. accept([obj,subj]) 122 0.813 accept([subj],p) 9 0.060 accept([comp,subj]) 5 0.033 accept([subj,obl:as],p) 3 0.020 accept([obj,subj,obl:as]) 3 0.020 accept([obj,subj,obl:from]) 3 0.020 accept([subj]) 2 0.013 accept([obj,subj,obl:at]) 1 0.007 accept([obj,subj,obl:for]) 1 0.007 accept([obj,subj,xcomp]) 1 0.007 Table 3: Semantic forms for the verb accept. Threshold 1% Threshold 5% P R F-Score P R F-Score Exp. 
73.7% 22.1% 34.0% 78.0% 18.3% 29.6% Table 4: COMLEX Comparison We further acquire finite approximations of FUequations. by extracting paths between co-indexed material occurring in the automatically generated fstructures from sections 02-21 of the Penn-II treebank. We extract 26 unique TOPIC, 60 TOPIC-REL and 13 FOCUS path types (with a total of 14,911 token occurrences), each with an associated probability. We distinguish between two types of TOPICREL paths, those that occur in wh-less constructions, and all other types (c.f Table 5). Given a path p and an LDD type t (either TOPIC, TOPIC-REL or FOCUS), the probability of p given t is estimated as: P(p|t) := count(t, p) Pn i=1 count(t, pi) In order to get a first measure of how well the approximation models the data, we compute the path types in section 23 not covered by those extracted from 02-21: 23/(02-21). There are 3 such path types (Table 6), each occuring exactly once. Given that the total number of path tokens in section 23 is 949, the finite approximation extracted from 02-23 covers 99.69% of all LDD paths in section 23. 7 Resolving LDDs in F-Structure Given a set of semantic forms s with probabilities P(s|l) (where l is a lemma), a set of paths p with P(p|t) (where t is either TOPIC, TOPIC-REL or FOCUS) and an f-structure f, the core of the algorithm to resolve LDDs recursively traverses f to: find TOPIC|TOPIC-REL|FOCUS:g pair; retrieve TOPIC|TOPIC-REL|FOCUS paths; for each path p with GF1 : . . . : GFn : GF, traverse f along GF1 : . . . : GFn to sub-f-structure h; retrieve local PRED:l; add GF:g to h iff ∗GF is not present at h wh-less TOPIC-REL # wh-less TOPIC-REL # subj 5692 adjunct 1314 xcomp:adjunct 610 obj 364 xcomp:obj 291 xcomp:xcomp:adjunct 96 comp:subj 76 xcomp:subj 67 Table 5: Most frequent wh-less TOPIC-REL paths 02–21 23 23 /(02–21) TOPIC 26 7 2 FOCUS 13 4 0 TOPIC-REL 60 22 1 Table 6: Number of path types extracted ∗h together with GF is locally complete and coherent with respect to a semantic form s for l rank resolution by P(s|l) × P(p|t) The algorithm supports multiple, interacting TOPIC, TOPIC-REL and FOCUS LDDs. We use P(s|l) × P(p|t) to rank a solution, depending on how likely the PRED takes semantic frame s, and how likely the TOPIC, FOCUS or TOPIC-REL is resolved using path p. The algorithm also supports resolution of LDDs where no overt linguistic material introduces a source TOPIC-REL function (e.g. in reduced relative clause constructions). We distinguish between passive and active constructions, using the relevant semantic frame type when resolving LDDs. 8 Experiments and Evaluation We ran experiments with grammars in both the pipeline and the integrated parsing architectures. The first grammar is a basic PCFG, while A-PCFG includes the f-structure annotations. We apply a parent transformation to each grammar (Johnson, 1999) to give P-PCFG and PA-PCFG. We train on sections 02-21 (grammar, lexical extraction and LDD paths) of the Penn-II Treebank and test on section 23. The only pre-processing of the trees that we do is to remove empty nodes, and remove all PennII functional tags in the integrated model. We evaluate the parse trees using evalb. Following (Riezler et al., 2002), we convert f-structures into dependency triple format. Using their software we evaluate the f-structure parser output against: 1. The DCU 105 (Cahill et al., 2002) 2. 
The full 2,416 f-structures automatically generated by the f-structure annotation algorithm for the original Penn-II trees, in a CCG-style (Hockenmaier, 2003) evaluation experiment Pipeline Integrated PCFG P-PCFG A-PCFG PA-PCFG 2416 Section 23 trees # Parses 2416 2416 2416 2414 Lab. F-Score 75.83 80.80 79.17 81.32 Unlab. F-Score 78.28 82.70 81.49 83.28 DCU 105 F-Strs All GFs F-Score (before LDD resolution) 79.82 79.24 81.12 81.20 All GFs F-Score (after LDD resolution) 83.79 84.59 86.30 87.04 Preds only F-Score (before LDD resolution) 70.00 71.57 73.45 74.61 Preds only F-Score (after LDD resolution) 73.78 77.43 78.76 80.97 2416 F-Strs All GFs F-Score (before LDD resolution) 81.98 81.49 83.32 82.78 All GFs F-Score (after LDD resolution) 84.16 84.37 86.45 86.00 Preds only F-Score (before LDD resolution) 72.00 73.23 75.22 75.10 Preds only F-Score (after LDD resolution) 74.07 76.12 78.36 78.40 PARC 700 Dependency Bank Subset of GFs following (Kaplan et al., 2004) 77.86 80.24 77.68 78.60 Table 7: Parser Evaluation 3. A subset of 560 dependency structures of the PARC 700 Dependency Bank following (Kaplan et al., 2004) The results are given in Table 7. The parenttransformed grammars perform best in both architectures. In all cases, there is a marked improvement (2.07-6.36%) in the f-structures after LDD resolution. We achieve between 73.78% and 80.97% preds-only and 83.79% to 87.04% all GFs f-score, depending on gold-standard. We achieve between 77.68% and 80.24% against the PARC 700 following the experiments in (Kaplan et al., 2004). For details on how we map the f-structures produced by our parsers to a format similar to that of the PARC 700 Dependency Bank, see (Burke et al., 2004). Table 8 shows the evaluation result broken down by individual GF (preds-only) for the integrated model PA-PCFG against the DCU 105. In order to measure how many of the LDD reentrancies in the gold-standard f-structures are captured correctly by our parsers, we developed evaluation software for f-structure LDD reentrancies (similar to Johnson’s (2002) evaluation to capture traces and their antecedents in trees). Table 9 shows the results with the integrated model achieving more than 76% correct LDD reentrancies. 9 Related Work (Collins, 1999)’s Model 3 is limited to wh-traces in relative clauses (it doesn’t treat topicalisation, focus etc.). Johnson’s (2002) work is closest to ours in spirit. Like our approach he provides a finite approximation of LDDs. Unlike our approach, however, he works with tree fragments in a postprocessing approach to add empty nodes and their DEP. PRECISION RECALL F-SCORE adjunct 717/903 = 79 717/947 = 76 78 app 14/15 = 93 14/19 = 74 82 comp 35/43 = 81 35/65 = 54 65 coord 109/143 = 76 109/161 = 68 72 det 253/264 = 96 253/269 = 94 95 focus 1/1 = 100 1/1 = 100 100 obj 387/445 = 87 387/461 = 84 85 obj2 0/1 = 0 0/2 = 0 0 obl 27/56 = 48 27/61 = 44 46 obl2 1/3 = 33 1/2 = 50 40 obl ag 5/11 = 45 5/12 = 42 43 poss 69/73 = 95 69/81 = 85 90 quant 40/55 = 73 40/52 = 77 75 relmod 26/38 = 68 26/50 = 52 59 subj 330/361 = 91 330/414 = 80 85 topic 12/12 = 100 12/13 = 92 96 topicrel 35/42 = 83 35/52 = 67 74 xcomp 139/160 = 87 139/146 = 95 91 OVERALL 83.78 78.35 80.97 Table 8: Preds-only results of PA-PCFG against the DCU 105 antecedents to parse trees, while we present an approach to LDD resolution on the level of f-structure. 
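Returning to the resolution algorithm of Section 7, its core step, checking each candidate path against the local predicate's subcategorisation requirements and ranking the survivors by P(s|l) × P(p|t), might look roughly as follows over nested-dictionary f-structures. The set of governable functions, the resource formats (frame_probs mapping a lemma to frame probabilities, path_probs mapping an LDD type to path probabilities) and all names here are assumptions; the real algorithm additionally handles traceless TOPIC-REL, passive frames and multiple interacting LDDs.

GOVERNABLE = {"SUBJ", "OBJ", "OBJ2", "OBL", "COMP", "XCOMP"}

def follow(fstruct, path):
    for gf in path:
        fstruct = fstruct.get(gf)
        if not isinstance(fstruct, dict):
            return None
    return fstruct

def complete_and_coherent(h, extra_gf, frame):
    present = {gf for gf in h if gf in GOVERNABLE} | {extra_gf}
    return present == set(frame)

def resolve_ldd(fstruct, ldd_type, frame_probs, path_probs):
    g = fstruct.get(ldd_type)                     # e.g. the TOPIC f-structure
    if g is None:
        return
    candidates = []
    for path, p_path in path_probs.get(ldd_type, {}).items():
        *prefix, gf = path                        # e.g. ("COMP", "COMP"): prefix ["COMP"], gf "COMP"
        h = follow(fstruct, prefix)
        if h is None or gf in h:                  # landing site must exist and be unfilled
            continue
        for frame, p_frame in frame_probs.get(h.get("PRED"), {}).items():
            if complete_and_coherent(h, gf, frame):
                candidates.append((p_frame * p_path, h, gf))
    if candidates:
        _, h, gf = max(candidates, key=lambda c: c[0])
        h[gf] = g                                 # establish the reentrancy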
It seems that the f-structure-based approach is more abstract (99 LDD path types against approximately 9,000 tree-fragment types in (Johnson, 2002)) and fine-grained in its use of lexical information (subcat frames). In contrast to Johnson’s approach, our LDD resolution algorithm is not biased. It computes all possible complete resolutions and orderranks them using LDD path and subcat frame probabilities. It is difficult to provide a satisfactory comparison between the two methods, but we have carried out an experiment that compares them at the f-structure level. We take the output of Charniak’s Pipeline Integrated PCFG P-PCFG A-PCFG PA-PCFG TOPIC Precision (11/14) (12/13) (12/13) (12/12) Recall (11/13) (12/13) (12/13) (12/13) F-Score 0.81 0.92 0.92 0.96 FOCUS Precision (0/1) (0/1) (0/1) (0/1) Recall (0/1) (0/1) (0/1) (0/1) F-Score 0 0 0 0 TOPIC-REL Precision (20/34) (27/36) (34/42) (34/42) Recall (20/52) (27/52) (34/52) (34/52) F-Score 0.47 0.613 0.72 0.72 OVERALL 0.54 0.67 0.75 0.76 Table 9: LDD Evaluation on the DCU 105 Charniak -LDD res. +LDD res. (Johnson, 2002) All GFs 80.86 86.65 85.16 Preds Only 74.63 80.97 79.75 Table 10: Comparison at f-structure level of LDD resolution to (Johnson, 2002) on the DCU 105 parser (Charniak, 1999) and, using the pipeline f-structure annotation model, evaluate against the DCU 105, both before and after LDD resolution. Using the software described in (Johnson, 2002) we add empty nodes to the output of Charniak’s parser, pass these trees to our automatic annotation algorithm and evaluate against the DCU 105. The results are given in Table 10. Our method of resolving LDDs at f-structure level results in a preds-only f-score of 80.97%. Using (Johnson, 2002)’s method of adding empty nodes to the parse-trees results in an f-score of 79.75%. (Hockenmaier, 2003) provides CCG-based models of LDDs. Some of these involve extensive cleanup of the underlying Penn-II treebank resource prior to grammar extraction. In contrast, in our approach we leave the treebank as is and only add (but never correct) annotations. Earlier HPSG work (Tateisi et al., 1998) is based on independently constructed hand-crafted XTAG resources. In contrast, we acquire our resources from treebanks and achieve substantially wider coverage. Our approach provides wide-coverage, robust, and – with the addition of LDD resolution – “deep” or “full”, PCFG-based LFG approximations. Crucially, we do not claim to provide fully adequate statistical models. It is well known (Abney, 1997) that PCFG-type approximations to unification grammars can yield inconsistent probability models due to loss of probability mass: the parser successfully returns the highest ranked parse tree but the constraint solver cannot resolve the f-equations (generated in the pipeline or “hidden” in the integrated model) and the probability mass associated with that tree is lost. This case, however, is surprisingly rare for our grammars: only 0.0018% (85 out of 48424) of the original Penn-II trees (without FRAGs) fail to produce an f-structure due to inconsistent annotations (Table 1), and for parsing section 23 with the integrated model (A-PCFG), only 9 sentences do not receive a parse because no f-structure can be generated for the highest ranked tree (0.4%). Parsing with the pipeline model, all sentences receive one complete f-structure. Research on adequate probability models for unification grammars is important. (Miyao et al., 2003) present a Penn-II treebank based HPSG with log-linear probability models. 
They achieve coverage of 50.2% on section 23, as against 99% in our approach. (Riezler et al., 2002; Kaplan et al., 2004) describe how a carefully hand-crafted LFG is scaled to the full Penn-II treebank with log-linear based probability models. They achieve 79% coverage (full parse) and 21% fragement/skimmed parses. By the same measure, full parse coverage is around 99% for our automatically acquired PCFG-based LFG approximations. Against the PARC 700, the hand-crafted LFG grammar reported in (Kaplan et al., 2004) achieves an fscore of 79.6%. For the same experiment, our best automatically-induced grammar achieves an f-score of 80.24%. 10 Conclusions We presented and extensively evaluated a finite approximation of LDD resolution in automatically constructed, wide-coverage, robust, PCFGbased LFG approximations, effectively turning the “half”(or “shallow”)-grammars presented in (Cahill et al., 2002) into “full” or “deep” grammars. In our approach, LDDs are resolved in f-structure, not trees. The method achieves a preds-only f-score of 80.97% for f-structures with the PA-PCFG in the integrated architecture against the DCU 105 and 78.4% against the 2,416 automatically generated f-structures for the original Penn-II treebank trees. Evaluating against the PARC 700 Dependency Bank, the P-PCFG achieves an f-score of 80.24%, an overall improvement of approximately 0.6% on the result reported for the best hand-crafted grammars in (Kaplan et al., 2004). Acknowledgements This research was funded by Enterprise Ireland Basic Research Grant SC/2001/186 and IRCSET. References S. Abney. 1997. Stochastic attribute-value grammars. Computational Linguistics, 23(4):597– 618. M. Burke, A. Cahill, R. O’Donovan, J. van Genabith, and A. Way 2004. The Evaluation of an Automatic Annotation Algorithm against the PARC 700 Dependency Bank. In Proceedings of the Ninth International Conference on LFG, Christchurch, New Zealand (to appear). A. Cahill, M. McCarthy, J. van Genabith, and A. Way. 2002. Parsing with PCFGs and Automatic F-Structure Annotation. In Miriam Butt and Tracy Holloway King, editors, Proceedings of the Seventh International Conference on LFG, pages 76–95. CSLI Publications, Stanford, CA. E. Charniak. 1996. Tree-Bank Grammars. In AAAI/IAAI, Vol. 2, pages 1031–1036. E. Charniak. 1999. A Maximum-Entropy-Inspired Parser. Technical Report CS-99-12, Brown University, Providence, RI. M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA. M. Dalrymple. 2001. Lexical-Functional Grammar. San Diego, CA; London Academic Press. J. Hockenmaier. 2003. Parsing with Generative models of Predicate-Argument Structure. In Proceedings of the 41st Annual Conference of the Association for Computational Linguistics, pages 359–366, Sapporo, Japan. M. Johnson. 1999. PCFG models of linguistic tree representations. Computational Linguistics, 24(4):613–632. M. Johnson. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 136–143, Philadelphia, PA. R. Kaplan and J. Bresnan. 1982. Lexical Functional Grammar, a Formal System for Grammatical Representation. In The Mental Representation of Grammatical Relations, pages 173–281. MIT Press, Cambridge, MA. R. Kaplan, S. Riezler, T. H. King, J. T. Maxwell, A. Vasserman, and R. Crouch. 2004. Speed and accuracy in shallow and deep stochastic parsing. 
In Proceedings of the Human Language Technology Conference and the 4th Annual Meeting of the North American Chapter of the Association for Computational Linguistics, pages 97– 104, Boston, MA. T.H. King, R. Crouch, S. Riezler, M. Dalrymple, and R. Kaplan. 2003. The PARC700 dependency bank. In Proceedings of the EACL03: 4th International Workshop on Linguistically Interpreted Corpora (LINC-03), pages 1–8, Budapest. D. Klein and C. Manning. 2003. Accurate Unlexicalized Parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL’02), pages 423–430, Sapporo, Japan. C. Macleod, A. Meyers, and R. Grishman. 1994. The COMLEX Syntax Project: The First Year. In Proceedings of the ARPA Workshop on Human Language Technology, pages 669-703, Princeton, NJ. D. Magerman. 1994. Natural Language Parsing as Statistical Pattern Recognition. PhD thesis, Stanford University, CA. M. Marcus, G. Kim, M.A. Marcinkiewicz, R. MacIntyre, A. Bies, M. Ferguson, K. Katz, and B. Schasberger. 1994. The Penn Treebank: Annotating Predicate Argument Structure. In Proceedings of the ARPA Workshop on Human Language Technology, pages 110–115, Princeton, NJ. Y. Miyao, T. Ninomiya, and J. Tsujii. 2003. Probabilistic modeling of argument structures including non-local dependencies. In Proceedings of the Conference on Recent Advances in Natural Language Processing (RANLP), pages 285–291, Borovets, Bulgaria. R. O’Donovan, M. Burke, A. Cahill, J. van Genabith, and A. Way. 2004. Large-Scale Induction and Evaluation of Lexical Resources from the Penn-II Treebank. In Proceedings of the 42nd Annual Conference of the Association for Computational Linguistics (ACL-04), Barcelona. S. Riezler, T.H. King, R. Kaplan, R. Crouch, J. T. Maxwell III, and M. Johnson. 2002. Parsing the Wall Street Journal using a LexicalFunctional Grammar and Discriminative Estimation Techniques. In Proceedings of the 40th Annual Conference of the Association for Computational Linguistics (ACL-02), pages 271–278, Philadelphia, PA. Y. Tateisi, K. Torisawa, Y. Miyao, and J. Tsujii. 1998. Translating the XTAG English Grammar to HPSG. In 4th International Workshop on Tree Adjoining Grammars and Related Frameworks, Philadelphia, PA, pages 172–175. J. van Genabith and R. Crouch. 1996. Direct and Underspecified Interpretations of LFG fStructures. In Proceedings of the 16th International Conference on Computational Linguistics (COLING), pages 262–267, Copenhagen.
2004
41
Deep dependencies from context-free statistical parsers: correcting the surface dependency approximation Roger Levy Department of Linguistics Stanford University [email protected] Christopher D. Manning Departments of Computer Science and Linguistics Stanford University [email protected] Abstract We present a linguistically-motivated algorithm for reconstructing nonlocal dependency in broad-coverage context-free parse trees derived from treebanks. We use an algorithm based on loglinear classifiers to augment and reshape context-free trees so as to reintroduce underlying nonlocal dependencies lost in the context-free approximation. We find that our algorithm compares favorably with prior work on English using an existing evaluation metric, and also introduce and argue for a new dependency-based evaluation metric. By this new evaluation metric our algorithm achieves 60% error reduction on gold-standard input trees and 5% error reduction on state-ofthe-art machine-parsed input trees, when compared with the best previous work. We also present the first results on nonlocal dependency reconstruction for a language other than English, comparing performance on English and German. Our new evaluation metric quantitatively corroborates the intuition that in a language with freer word order, the surface dependencies in context-free parse trees are a poorer approximation to underlying dependency structure. 1 Introduction While parsers are been used for other purposes, the primary motivation for syntactic parsing is as an aid to semantic interpretation, in pursuit of broader goals of natural language understanding. Proponents of traditional ‘deep’ or ‘precise’ approaches to syntax, such as GB, CCG, HPSG, LFG, or TAG, have argued that sophisticated grammatical formalisms are essential to resolving various hidden relationships such as the source phrase of moved whphrases in questions and relativizations, or the controller of clauses without an overt subject. Knowledge of these hidden relationships is in turn essential to semantic interpretation of the kind practiced in the semantic parsing (Gildea and Jurafsky, 2002) and QA (Pasca and Harabagiu, 2001) literatures. However, work in statistical parsing has for the most part put these needs aside, being content to recover surface context-free (CF) phrase structure trees. This perhaps reflects the fact that context-free phrase structure grammar (CFG) is in some sense at the the heart of the majority of both formal and computational syntactic research. Although, upon introducing it, Chomsky (1956) rejected CFG as an adequate framework for natural language description, the majority of work in the last half century has used context-free structural descriptions and related methodologies in one form or another as an important component of syntactic analysis. CFGs seem adequate to weakly generate almost all common natural language structures, and also facilitate a transparent predicate-argument and/or semantic interpretation for the more basic ones (Gazdar et al., 1985). Nevertheless, despite their success in providing surface phrase structure analyses, if statistical parsers and the representations they produce do not provide a useful stepping stone to recovering the hidden relationships, they will ultimately come to be seen as a dead end, and work will necessarily return to using richer formalisms. 
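The "natural isomorphism" from context-free trees to local dependency trees mentioned above can be sketched as follows, with trees as (category, children) pairs whose preterminals dominate a single word string, and head_index standing in for some head-finding procedure (e.g. Collins-style head rules). The choice to label each dependency with the mother category is ours, purely for illustration.

def lexical_head(tree, head_index):
    # A preterminal's single child is the word string itself.
    if isinstance(tree, str):
        return tree
    category, children = tree
    return lexical_head(children[head_index(category, children)], head_index)

def local_dependencies(tree, head_index, deps=None):
    deps = [] if deps is None else deps
    if isinstance(tree, str):
        return deps
    category, children = tree
    h = head_index(category, children)
    head_word = lexical_head(children[h], head_index)
    for i, child in enumerate(children):
        if i != h:
            # each non-head sister depends on the head of the local tree
            deps.append((head_word, lexical_head(child, head_index), category))
        local_dependencies(child, head_index, deps)
    return deps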
In this paper we attempt to establish to what degree current statistical parsers are a useful step in analysis by examining the performance of further statistical classifiers on non-local dependency recovery from CF parse trees. The natural isomorphism from CF trees to dependency trees induces only local dependencies, derived from the headsister relation in a CF local tree. However, if the output of a context-free parser can be algorithmically augmented to accurately identify and incorporate nonlocal dependencies, then we can say that the context-free parsing model is a safe approximation to the true task of dependency reconstruction. We investigate the safeness of this approximation, devising an algorithm to reconstruct non-local dependencies from context-free parse trees using loglinear classifiers, tested on treebanks of not only English but also German, a language with much freer word order and correspondingly more discontinuity than English. This algorithm can be used as an intermediate step between the surface output trees of modern statistical parsers and semantic interpretation systems for a variety of tasks.1 1Many linguistic and technical intricacies are involved in the interpretation and use of non-local annotation structure found in treebanks. A more complete exposition of the work presented here can be found in Levy (2004). S NP-3 NNP Farmers VP VBD was ADJP JJ quick S *ICH*-2 NP NN yesterday S-2 NP *-3 VP TO to VP VB point PRT RP out NP NP DT the NN problems SBAR WHNP-1 0 S NP PRP it VP VBZ sees NP *T*-1 . . Figure 1: Example of empty and nonlocal annotations from the Penn Treebank of English, including null complementizers (0), relativization (*T*-1), rightextraposition (*ICH*-2), and syntactic control (*-3). 1.1 Previous Work Previous work on nonlocal dependency has focused entirely on English, despite the disparity in type and frequency of various non-local dependency constructions for varying languages (Kruijff, 2002). Collins (1999)’s Model 3 investigated GPSG-style trace threading for resolving nonlocal relative pronoun dependencies. Johnson (2002) was the first post-processing approach to non-local dependency recovery, using a simple pattern-matching algorithm on context-free trees. Dienes and Dubey (2003a,b) and Dienes (2003) approached the problem by preidentifying empty categories using an HMM on unparsed strings and threaded the identified empties into the category structure of a context-free parser, finding that this method compared favorably with both Collins’ and Johnson’s. Traditional LFG parsing, in both non-stochastic (Kaplan and Maxwell, 1993) and stochastic (Riezler et al., 2002; Kaplan et al., 2004) incarnations, also divides the labor of local and nonlocal dependency identification into two phases, starting with context-free parses and continuing by augmentation with functional information. 2 Datasets The datasets used for this study consist of the Wall Street Journal section of the Penn Treebank of English (WSJ) and the context-free version of the NEGRA (version 2) corpus of German (Skut et al., 1997b). Full-size experiments on WSJ described in Section 4 used the standard sections 2-21 for training, 24 for development, and trees whose yield is under 100 words from section 23 for testing. 
Experiments described in Section 4.3 used the same development and test sets but files 200-959 of WSJ as a smaller training set; for NEGRA we followed Dubey and Keller (2003) in using the first 18,602 sentences for training, the last 1,000 for development, and the previous 1,000 for testing. Consistent with prior work and with common practice in statistical parsing, we stripped categories of all functional tags prior to training and testing (though in several cases this seems to have been a limiting move; see Section 5).

Nonlocal dependency annotation in Penn Treebanks can be divided into three major types: unindexed empty elements, dislocations, and control. The first type consists primarily of null complementizers, as exemplified in Figure 1 by the null relative pronoun 0 (cf. aspects that it sees); these elements do not participate in (though they may mediate) nonlocal dependency. The second type consists of a dislocated element coindexed with an origin site of semantic interpretation, as in the association in Figure 1 of WHNP-1 with the direct object position of sees (a relativization), and the association of S-2 with the ADJP quick (a right dislocation). This type encompasses the classic cases of nonlocal dependency: topicalization, relativization, wh- movement, and right dislocation, as well as expletives and other instances of non-canonical argument positioning. The third type involves control loci in syntactic argument positions, sometimes coindexed with overt controllers, as in the association of the NP Farmers with the empty subject position of the S-2 node. (An example of a control locus with no controller would be [S NP-* [VP Eating ice cream ]] is fun.) Controllers are to be interpreted as syntactic (and possibly semantic) arguments both in their overt position and in the position of loci they control. This type encompasses raising, control, passivization, and unexpressed subjects of to- infinitive and gerund verbs, among other constructions.2

NEGRA's original annotation is as dependency trees with phrasal nodes, crossing branches, and no empty elements. However, the distribution includes a context-free version produced algorithmically by recursively remapping discontinuous parts of nodes upward into higher phrases and marking their sites of origin.3 The resulting "traces" correspond roughly to a subclass of the second class of Penn Treebank empties discussed above, and include wh- movement, topicalization, right extrapositions from NP, expletives, and scrambling of subjects after other complements. The positioning of NEGRA's "traces" inside the mother node is completely algorithmic; a dislocated constituent C has its trace at the edge of the original mother closest to C's overt position. Given a context-free NEGRA tree shorn of its trace/antecedent notation, however, it is far from trivial to determine which nodes are dislocated, and where they come from. Figure 2 shows an annotated sentence from the NEGRA corpus with discontinuities due to right extraposition (*T1*) and topicalization (*T2*), before and after transformation into context-free form with traces.

2 Four of the annotation errors in WSJ lead to uninterpretable dislocation and sharing patterns, including failure to annotate dislocations corresponding to marked origin sites, and mislabelings of control loci as origin sites of dislocation that lead to cyclic dislocations (which are explicitly prohibited in WSJ annotation guidelines). We corrected these errors manually before model testing and training.

3 For a detailed description of the algorithm for creating the context-free version of NEGRA, see Skut et al. (1997a).

[Figure 2: Nonlocal dependencies via right-extraposition (*T1*) and topicalization (*T2*) in the NEGRA corpus of German, before (top) and after (bottom) transformation to context-free form. Dashed lines show where nodes go as a result of remapping into context-free form. Gloss of the example sentence: "The RMV will not begin to be formed for a long time."]

3 Algorithm

Corresponding to the three types of empty-element annotation found in the Penn Treebank, our algorithm divides the process of CF tree enhancement into three phases. Each phase involves the identification of a certain subset of tree nodes to be operated on, followed by the application of the appropriate operation to the node. Operations may involve the insertion of a category at some position among a node's daughters; the marking of certain nodes as dislocated; or the relocation of dislocated nodes to other positions within the tree.

The content and ordering of phases is consistent with the syntactic theory upon which treebank annotation is based. For example, WSJ annotates relative clauses lacking overt relative pronouns, such as the SBAR in Figure 1, with a trace in the relativization site whose antecedent is an empty relative pronoun. This requires that empty relative pronoun insertion precede dislocated element identification. Likewise, dislocated elements can serve as controllers of control loci, based on their originating site, so it is sensible to return dislocated nodes to their originating sites before identifying control loci and their controllers. For WSJ, the three phases are (a schematic code sketch of this pipeline is given below):

1. (a) Determine nodes at which to insert null COMPlementizers4 (IDENTNULL)
   (b) For each COMP insertion node, determine position of each insertion and insert COMP (INSERTNULL)
2. (a) Classify each tree node as +/- DISLOCATED (IDENTMOVED)
   (b) For each DISLOCATED node, choose an ORIGIN node (RELOCMOVED)
   (c) For each pair ⟨DISLOCATED, origin⟩, choose a position of insertion and insert dislocated (INSERTRELOC)
3. (a) Classify each node as +/- control LOCUS (IDENTLOCUS)
   (b) For each LOCUS, determine position of insertion and insert LOCUS (INSERTLOCUS)
   (c) For each LOCUS, determine CONTROLLER (if any) (FINDCONTROLLER)

Note in particular that phase 2 involves the classification of overt tree nodes as dislocated, followed by the identification of an origin site (annotated in the treebank as an empty node) for each dislocated element; whereas phase 3 involves the identification of (empty) control loci first, and of controllers later. This approach contrasts with Johnson (2002), who treats empty/antecedent identification as a joint task, and with Dienes and Dubey (2003a,b), who always identify empties first and determine antecedents later.
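As a rough illustration, the following sketch shows how the phase structure above might be organized in code. It is only a schematic outline under assumed interfaces: the Node representation, the classifier callables, and the competition-based selection helper (used for relational steps such as (2b) and (3c)) are hypothetical stand-ins rather than the implementation used in this paper, and the insertion and relocation sub-steps are omitted.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Node:
    """Hypothetical phrase-structure node: a label plus daughter nodes."""
    label: str
    children: List["Node"] = field(default_factory=list)
    dislocated: bool = False       # set in phase 2(a), IDENTMOVED
    control_locus: bool = False    # set in phase 3(a), IDENTLOCUS


def all_nodes(tree: Node):
    """Pre-order traversal over every node of the tree."""
    yield tree
    for child in tree.children:
        yield from all_nodes(child)


def pick_associate(target: Node, candidates: List[Node],
                   score: Callable[[Node, Node], float]) -> Optional[Node]:
    """Competition-based choice: every candidate is scored for positive
    association with the target node and the single best one is kept."""
    return max(candidates, key=lambda c: score(target, c), default=None)


def enhance_tree(tree: Node,
                 ident_null: Callable[[Node], bool],
                 ident_moved: Callable[[Node], bool],
                 ident_locus: Callable[[Node], bool],
                 origin_score: Callable[[Node, Node], float]) -> Node:
    """Apply the three phases in order; insertion positions are not modeled."""
    # Phase 1: insert a null complementizer daughter under the selected nodes.
    for node in list(all_nodes(tree)):
        if ident_null(node):
            node.children.insert(0, Node("0"))
    # Phase 2: mark dislocated nodes, then pick an origin for each one.
    overt = list(all_nodes(tree))
    for node in overt:
        node.dislocated = ident_moved(node)
    origins = {id(n): pick_associate(n, overt, origin_score)
               for n in overt if n.dislocated}
    # (origins would feed the INSERTRELOC step, which is omitted here.)
    # Phase 3: mark control loci; controller selection would reuse pick_associate.
    for node in all_nodes(tree):
        node.control_locus = ident_locus(node)
    return tree
```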
Our motivation is that it should generally be easier to determine whether an overt element is dislocated than whether a given position is the origin of some yet unknown dislocated element (particularly in the absence of a sophisticated model of argument expression); but control loci are highly predictable from local context, such as the subjectless non-finite S in Figure 1's S-2 node.5 Indeed this difference seems to be implicit in the nonlocal feature templates used by Dienes and Dubey (2003a,b) in their empty element tagger, in particular lookback for wh- words preceding a candidate verb.

As described in Section 2, NEGRA's nonlocal annotation schema is much simpler, involving no uncoindexed empties or control loci. Correspondingly, our NEGRA algorithm includes only phase 2 of the WSJ algorithm, step (c) of which is trivial for NEGRA due to the deterministic positioning of trace insertion in the treebank.

4 The WSJ contains a number of SBARs headed by empty complementizers with trace S's. These SBARs are introduced in our algorithm as projections of identified empty complementizers as daughters of non-SBAR categories.

5 Additionally, whereas dislocated nodes are always overt, control loci may be controlled by other (null) control loci, meaning that identifying controllers before control loci would still entail looking for nulls.

In each case we use a loglinear model for node classification, with a combination of quadratic regularization and thresholding by individual feature count to prevent overfitting. In the second and third parts of phases 2 and 3, when determining an originating site or controller for a given node N, or an insertion position for a node N′ in N, we use a competition-based setting, using a binary classification (yes/no for association with N) on each node in the tree, and during testing choosing the node with the highest score for positive association with N.6 All other phases of classification involve independent decisions at each node. In phase 3, we include a special zero node to indicate a control locus with no antecedent.

6 The choice of a unique origin site makes our algorithm unable to deal with right-node raising or parasitic gaps. Cases of right-node raising could be automatically transformed into single-origin dislocations by making use of a theory of coordination such as Maxwell and Manning (1996), while parasitic gaps could be handled with the introduction of a secondary classifier. Both phenomena are low-frequency, however, and we ignore them here.

3.1 Feature templates

Each subphase of our dependency reconstruction algorithm involves the training of a separate model and the development of a separate feature set. We found that it was important to include both a variety of general feature templates and a number of manually designed, specialized features to resolve specific problems observed for individual classifiers. We developed all feature templates exclusively on the training and development sets specified in Section 2. Table 1 shows which general feature templates we used in each classifier.

Feature type: IdentNull / InsertNull / IdentMoved / RelocMoved / InsertReloc / IdentLocus / InsertLocus / FindController
TAG ✓ ✓
HD ✓
CAT×MCAT ⊗ ✓
CAT×MCAT×GCAT ✓ ✓ ✓
CAT×HD×MCAT×MHD ⊗
CAT×TAG×MCAT×MTAG ⊗
CAT×TAG ✓ ✓
CAT×HD ⊗
(FIRST/LAST)CAT ✓ ✓
(L/RSIS)CAT ✓ ✓
DPOS×CAT ✓
PATH ✓ ✓
CAT×RCAT ✓
TAG×RCAT ✓
CAT×TAG×RCAT ✓
CAT×RCAT×DPOS ✓
HD×RHD ⊗
CAT×HD×RHD ✓
CAT×DCAT ✓ ✓ ✓ ✓
MHD×HD ⊗
# Special 9 0 11 0 0 12 0 3
Table 1: Shared feature templates. See text for template descriptions. # Special is the number of special templates used for the classifier. ⊗ denotes that all subsets of the template conjunction were included.

The features are coded as follows. The prefixes {∅,M,G,D,R} indicate that the feature value is calculated with respect to the node in question, its mother, grandmother, daughter, or relative node respectively.7 {CAT,POS,TAG,WORD} stand for syntactic category, position (of daughter) in mother, head tag, and head word respectively. For example, when determining whether an infinitival VP is extraposed, such as S-2 in Figure 1, the plausibility of the VP head being a deep dependent of the head verb is captured with the MHD×HD template. (FIRST/LAST)CAT and (L/RSIS)CAT are templates used for choosing the position at which to insert relocated nodes, respectively recording whether a node of a given category is the first/last daughter, and the syntactic category of a node's left/right sisters. PATH is the syntactic path between relative and base node, defined as the list of the syntactic categories on the (inclusive) node path linking the relative node to the node in question, paired with whether the step on the path was upward or downward. For example, in Figure 2 the syntactic path from VP-1 to PP is [↑-VP,↑-S,↓-VP,↓-PP]. This is a crucial feature for the relativized classifiers RELOCMOVED and FINDCONTROLLER; in an abstract sense it mediates the gap-threading information incorporated into GPSG-style (Gazdar et al., 1985) parsers, and in concrete terms it closely matches the information derived from Johnson (2002)'s connected local tree set patterns. Gildea and Jurafsky (2002) is to our knowledge the first use of such a feature for classification tasks on syntactic trees; they found it important for the related task of semantic role identification.

7 The relative node is DISLOCATED in RELOCMOVED and LOCUS in FINDCONTROLLER.

We expressed specialized hand-coded feature templates as tree-matching patterns that capture a fragment of the content of the pattern in the feature value. Representative examples appear in Figure 3. The italicized node is the node for which a given feature is recorded; underscores indicate variables that can match any category; and the angle-bracketed parts of the tree fragment, together with an index for the pattern, determine the feature value.8

[Figure 3: Different classifiers' specialized tree-matching fragments and their purposes. IDENTMOVED: S over NP⟨it/there⟩, VP and S/SBAR, for expletive dislocation; IDENTLOCUS: S over VP with ⟨⟩, VP-internal context to determine null subjecthood; INSERTNULLS: S over VP, possible null complementizer (records syntactic path from every S in sentence).]

4 Evaluation

4.1 Comparison with previous work

Our algorithm's performance can be compared with the work of Johnson (2002) and Dienes and Dubey (2003a) on WSJ. Valid comparisons exist for the insertion of uncoindexed empty nodes (COMP and ARB-SUBJ), identification of control and raising loci (CONTROLLOCUS), and pairings of dislocated and controller/raised nodes with their origins (DISLOC, CONTROLLER).

          Gold trees        Parser output
          Jn      Pres      Jn      DD       Pres
NP-*      62.4    75.3      55.6    (69.5)   61.1
WH-t      85.1    67.6      80.0    (82.0)   63.3
0         89.3    99.6      77.1    (48.8)   87.0
SBAR      74.8    74.7      71.0     73.8    71.0
S-t       90      93.3      87       84.5    83.6
Table 2: Comparison with previous work using Johnson's PARSEVAL metric. Jn is Johnson (2002); DD is Dienes and Dubey (2003b); Pres is the present work.
In Table 2 we present comparative results, using the PARSEVAL-based evaluation metric introduced by Johnson (2002) – a correct empty category inference requires the string position of the empty category, combined with the left and right boundaries plus syntactic category of the antecedent, if any, for purposes of comparison.9,10 Note that this evaluation metric does not require correct attachment of the empty category into the parse tree. In Figure 1, for example, WHNP-1 could be erroneously remapped to the right edge of any S or VP node in the sentence without resulting in error according to this metric. We therefore abandon this metric in further evaluations as it is not clear whether it adequately approximates performance in predicate-argument structure recovery.11

8 A complete description of feature templates can be found at http://nlp.stanford.edu/˜rog/acl2004/templates/index.html

9 For purposes of comparability with Johnson (2002) we used Charniak's 2000 parser as P.

10 Our algorithm was evaluated on a more stringent standard for NP-* than in previous work: control loci-related mappings were done after dislocated nodes were actually relocated by the algorithm, so an incorrect dislocation remapping can render incorrect the indices of a correct NP-* labeled bracketing. Additionally, our algorithm does not distinguish the syntactic category of null insertions, whereas previous work has; as a result, the null complementizer class 0 and the WH-t dislocation class are aggregates of classes used in previous work.

11 Collins (1999) reports 93.8%/90.1% precision/recall in his Model 3 for accurate identification of relativization site in non-infinitival relative clauses. This figure is difficult to compare directly with other figures in this section; a tree search indicates that non-infinitival subjects make up at most 85.4% of the WHNP dislocations in WSJ.

4.2 Composition with a context-free parser

If we think of a statistical parser as a function from strings to CF trees, and the nonlocal dependency recovery algorithm A presented in this paper as a function from trees to trees, we can naturally compose our algorithm with a parser P to form a function A◦P from strings to trees whose dependency interpretation is, hopefully, an improvement over the trees from P.

To test this idea quantitatively we evaluate performance with respect to recovery of typed dependency relations between words. A dependency relation, commonly employed for evaluation in the statistical parsing literature, is defined at a node N of a lexicalized parse tree as a pair ⟨wi, wj⟩ where wi is the lexical head of N and wj is the lexical head of some non-head daughter of N. Dependency relations may further be typed according to information at or near the relevant tree node; Collins (1999), for example, reports dependency scores typed on the syntactic categories of the mother, head daughter, and dependent daughter, plus on whether the dependent precedes or follows the head. We present here dependency evaluations where the gold-standard dependency set is defined by the remapped tree, typed by syntactic category of the mother node.12 In Figure 1, for example, to would be an ADJP dependent of quick rather than a VP dependent of was; and Farmers would be an S dependent both of to in to point out . . . and of was. We use the head-finding rules of Collins (1999) to lexicalize trees, and assume that null complementizers do not participate in dependency relations.

12 Unfortunately, 46 WSJ dislocation annotations in this testset involve dislocated nodes dominating their origin sites. It is not entirely clear how to interpret the intended semantics of these examples, so we ignore them in evaluation.
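To make the dependency extraction just described concrete, the following is a small illustrative sketch. The tree representation and head-marking convention are assumptions made for the example (in the paper itself heads come from the Collins (1999) head rules), and the exclusion of null complementizers is not modeled.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set, Tuple


@dataclass
class LexNode:
    """Hypothetical lexicalized tree node: leaves carry a word, internal
    nodes record which daughter is the head daughter."""
    category: str
    word: Optional[str] = None
    children: List["LexNode"] = field(default_factory=list)
    head_index: int = 0

    @property
    def head_word(self) -> str:
        if self.word is not None:
            return self.word
        return self.children[self.head_index].head_word


def typed_dependencies(node: LexNode) -> Set[Tuple[str, str, str]]:
    """One (mother category, head word, dependent head word) triple per
    non-head daughter of every internal node."""
    deps: Set[Tuple[str, str, str]] = set()
    if node.word is not None:
        return deps
    for i, child in enumerate(node.children):
        if i != node.head_index:
            deps.add((node.category, node.head_word, child.head_word))
        deps |= typed_dependencies(child)
    return deps


# Toy usage on a fragment like "was quick": the ADJP dependent of "was".
vp = LexNode("VP",
             children=[LexNode("VBD", "was"),
                       LexNode("ADJP", children=[LexNode("JJ", "quick")])],
             head_index=0)
print(typed_dependencies(vp))   # {("VP", "was", "quick")}
```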
To further compare the results of our algorithm with previous work, we obtained the output trees produced by Johnson (2002) and Dienes (2003) and evaluated them on typed dependency performance. Table 3 shows the results of this evaluation. For comparison, we include shallow dependency accuracy for Charniak's parser under PCF.

          PCF    P      A◦P    J◦P    D      G      A◦G    J◦G
Overall   91.2   87.6   90.5   90.0   88.3   95.7   99.4   98.5
NP        91.6   89.9   91.4   91.2   89.4   97.9   99.8   99.6
S         93.3   83.4   91.2   89.9   89.2   89.0   98.0   96.0
VP        91.2   87.3   90.2   89.6   88.0   95.2   99.0   97.7
ADJP      73.1   72.8   72.9   72.8   72.5   99.7   99.6   98.8
SBAR      94.4   66.7   89.3   84.9   85.0   72.6   99.4   94.1
ADVP      70.1   69.7   69.5   69.7   67.7   99.4   99.4   99.7
Table 3: Typed dependency F1 performance when composed with statistical parser. PCF is parser output evaluated by context-free (shallow) dependencies; all others are evaluated on deep dependencies. P is parser, G is string-to-context-free-gold-tree mapping, A is present remapping algorithm, J is Johnson 2002, D is the COMBINED model of Dienes 2003.

4.3 Cross-linguistic comparison

In order to compare the results of nonlocal dependency reconstruction between languages, we must identify equivalence classes of nonlocal dependency annotation between treebanks. NEGRA's nonlocal dependency annotation is quite different from WSJ, as described in Section 2, ignoring controlled and arbitrary unexpressed subjects. The natural basis of comparison is therefore the set of all nonlocal NEGRA annotations against all WSJ dislocations, excluding relativizations (defined simply as dislocated wh- constituents under SBAR).13

13 The interpretation of comparative results must be modulated by the fact that more total time was spent on feature engineering for WSJ than for NEGRA, and the first author, who engineered the NEGRA feature set, is not a native speaker of German.

            Performance on gold trees                    Performance on parsed trees
            ID               Rel    Combo                ID               Combo
            P     R     F1   Acc    P     R     F1       P     R     F1   P     R     F1
WSJ(full)   92.0  82.9  87.2 95.0   89.6  80.1  84.6     34.5  47.6  40.0 17.8  24.3  20.5
WSJ(sm)     92.3  79.5  85.5 93.3   90.4  77.2  83.2     38.0  47.3  42.1 19.7  24.3  21.7
NEGRA       73.9  64.6  69.0 85.1   63.3  55.4  59.1     48.3  39.7  43.6 20.9  17.2  18.9
Table 4: Cross-linguistic comparison of dislocated node identification and remapping. ID is correct identification of nodes as +/– dislocated; Rel is relocation of node to correct mother given gold-standard data on which nodes are dislocated (only applicable for gold trees); Combo is both correct identification and remapping.

Table 4 shows the performance comparison between WSJ and NEGRA of IDENTDISLOC and RELOCMOVED, on sentences of 40 tokens or less. For this evaluation metric we use syntactic category and left & right edges of (1) dislocated nodes (ID); and (2) originating mother node to which dislocated node is mapped (Rel). Combo requires both (1) and (2) to be correct. NEGRA is smaller than WSJ (∼350,000 words vs. 1 million), so for fair comparison we tested WSJ using the smaller training set described in Section 2, comparable in size to NEGRA's. Since the positioning of traces within NEGRA nodes is trivial, we evaluate remapping and combination performances requiring only proper selection of the originating mother node; thus we carry the algorithm out on both treebanks through step (2b).
This is adequate for purposes of our typed dependency evaluation in Section 4.2, since typed dependencies do not depend on positional information. State-of-the-art statistical parsing is far better on WSJ (Charniak, 2000) than on NEGRA (Dubey and Keller, 2003), so for comparison of parser-composed dependency performance we used vanilla PCFG models for both WSJ and NEGRA trained on comparably-sized datasets; in addition to making similar types of independence assumptions, these models performed relatively comparably on labeled bracketing measures for our development sets (73.2% performance for WSJ versus 70.9% for NEGRA). Table 5 compares the testset performance of algorithms on the two treebanks on the typed dependency measure introduced in Section 4.2.14

14 Many head-dependent relations in NEGRA are explicitly marked, but for those that are not we used a Collins (1999)-style head-finding algorithm independently developed for German PCFG parsing.

            PCF    P      A◦P    G      A◦G
WSJ(full)   76.3   75.4   75.7   98.7   99.7
WSJ(sm)     76.3   75.4   75.7   98.7   99.6
NEGRA       62.0   59.3   61.0   90.9   93.6
Table 5: Typed dependency F1 performance when composed with statistical parser. Remapped dependencies involve only non-relativization dislocations and exclude control loci.

5 Discussion

The WSJ results shown in Tables 2 and 3 suggest that discriminative models incorporating both nonlocal and local lexical and syntactic information can achieve good results on the task of non-local dependency identification. On the PARSEVAL metric, our algorithm performed particularly well on null complementizer and control locus insertion, and on S node relocation. In particular, Johnson noted that the proper insertion of control loci was a difficult issue involving lexical as well as structural sensitivity. We found the loglinear paradigm a good one in which to model this feature combination; when run in isolation on gold-standard development trees, our model reached 96.4% F1 on control locus insertion, reducing error over the Johnson model's 89.3% by nearly two-thirds.

The performance of our algorithm is also evident in the substantial contribution to typed dependency accuracy seen in Table 3. For gold-standard input trees, our algorithm reduces error by over 80% from the surface-dependency baseline, and over 60% compared with Johnson's results. For parsed input trees, our algorithm reduces dependency error by 23% over the baseline, and by 5% compared with Johnson's results. Note that the dependency figures of Dienes lag behind even the parsed results for Johnson's model; this may well be due to the fact that Dienes built his model as an extension of Collins (1999), which lags behind Charniak (2000) by about 1.3-1.5%.

Manual investigation of errors on English gold-standard data revealed two major issues that suggest further potential for improvement in performance without further increase in algorithmic complexity or training set size. First, we noted that annotation inconsistency accounted for a large number of errors, particularly false positives. VPs from which an S has been extracted ([S Shut up,] he [VP said t]) are inconsistently given an empty SBAR daughter, suggesting that the cross-model low-70's performance on null SBAR insertion (see Table 2) may be a ceiling. Control loci were often under-annotated; the first five development-set false positive control loci we checked were all due to annotation error. And why-WHADVPs under SBAR, which are always dislocations, were not so annotated 20% of the time.
Second, both control locus insertion and dislocated NP remapping must be sensitive to the presence of argument NPs under classified nodes. But temporal NPs, indistinguishable by gross category, also appear under such nodes, creating a major confound. We used customized features to compensate to some extent, but temporal annotation already exists in WSJ and could be used. We note that Klein and Manning (2003) independently found retention of temporal NP marking useful for PCFG parsing.

As can be seen in Table 3, the absolute improvement in dependency recovery is smaller for both our and Johnson's postprocessing algorithms when applied to parsed input trees than when applied to gold-standard input trees. It seems that this degradation is not primarily due to noise in parse tree outputs reducing recall of nonlocal dependency identification: precision/recall splits were largely the same between gold and parsed data, and manual inspection revealed that incorrect nonlocal dependency choices often arose from syntactically reasonable yet incorrect input from the parser. For example, the gold-standard parse right-wing whites . . . will [VP step up [NP their threats [S [VP * to take matters into their own hands ]]]] has an unindexed control locus because Treebank annotation specifies that infinitival VPs inside NPs are not assigned controllers. Charniak's parser, however, attaches the infinitival VP into the higher step up . . . VP. Infinitival VPs inside VPs generally do receive controllers for their null subjects, and our algorithm accordingly yet mistakenly assigns right-wing-whites as the antecedent.

The English/German comparison shown in Tables 4 and 5 is suggestive, but caution is necessary in its interpretation due to the fact that differences in both language structure and treebank annotation may be involved. Results in the G column of Table 5, showing the accuracy of the context-free dependency approximation from gold-standard parse trees, quantitatively corroborate the intuition that nonlocal dependency is more prominent in German than in English.

Manual investigation of errors made on German gold-standard data revealed two major sources of error beyond sparsity. The first was a widespread ambiguity of S and VP nodes within S and VP nodes; many true dislocations of all sorts are expressed at the S and VP levels in CFG parse trees, such as VP-1 of Figure 2, but many adverbial and subordinate phrases of S or VP category are genuine dependents of the main clausal verb. We were able to find a number of features to distinguish some cases, such as the presence of certain unambiguous relative-clause-introducing complementizers beginning an S node, but much ambiguity remained. The second was the ambiguity that some matrix S-initial NPs are actually dependents of the VP head (in these cases, NEGRA annotates the finite verb as the head of S and the non-finite verb as the head of VP). This is not necessarily a genuine discontinuity per se, but rather corresponds to identification of the subject NP in a clause. Obviously, having access to reliable case marking would improve performance in this area; such information is in fact included in NEGRA's morphological annotation, another argument for the utility of involving enhanced annotation in CF parsing. As can be seen in the right half of Table 4, performance falls off considerably on vanilla PCFG-parsed data.
This fall-off seems more dramatic than that seen in Sections 4.1 and 4.2, no doubt partly due to the poorer performance of the vanilla PCFG, but likely also because only non-relativization dislocations are considered in Section 4.3. These dislocations often require non-local information (such as identity of surface lexical governor) for identification and are thus especially susceptible to degradation in parsed data. Nevertheless, seemingly dismal performance here still provided a strong boost to typed dependency evaluation of parsed data, as seen in A◦P of Table 5. We suspect this indicates that dislocated terminals are being usefully identified and mapped back to their proper governors, even if the syntactic projections of these terminals and governors are not being correctly identified by the parser.

6 Further Work

Against the background of CFG as the standard approximation of dependency structure for broad-coverage parsing, there are essentially three options for the recovery of nonlocal dependency. The first option is to postprocess CF parse trees, which we have closely investigated in this paper. The second is to incorporate nonlocal dependency information into the category structure of CF trees. This was the approach taken by Dienes and Dubey (2003a,b) and Dienes (2003); it is also practiced in recent work on broad-coverage CCG parsing (Hockenmaier, 2003). The third would be to incorporate nonlocal dependency information into the edge structure of parse trees, allowing discontinuous constituency to be explicitly represented in the parse chart. This approach was tentatively investigated by Plaehn (2000). As the syntactic diversity of languages for which treebanks are available grows, it will become increasingly important to compare these three approaches.

7 Acknowledgements

This work has benefited from feedback from Dan Jurafsky and three anonymous reviewers, and from presentation at the Institute of Cognitive Science, University of Colorado at Boulder. The authors are also grateful to Dan Klein and Jenny Finkel for use of maximum-entropy software they wrote. This work was supported in part by the Advanced Research and Development Activity (ARDA)'s Advanced Question Answering for Intelligence (AQUAINT) Program.

References

Charniak, E. (2000). A Maximum-Entropy-inspired parser. In Proceedings of NAACL.
Chomsky, N. (1956). Three models for the description of language. IRE Transactions on Information Theory, 2(3):113–124.
Collins, M. (1999). Head-Driven Statistical Models for Natural Language Parsing. PhD thesis, University of Pennsylvania.
Dienes, P. (2003). Statistical Parsing with Non-local Dependencies. PhD thesis, Saarland University.
Dienes, P. and Dubey, A. (2003a). Antecedent recovery: Experiments with a trace tagger. In Proceedings of EMNLP.
Dienes, P. and Dubey, A. (2003b). Deep processing by combining shallow methods. In Proceedings of ACL.
Dubey, A. and Keller, F. (2003). Parsing German with sister-head dependencies. In Proceedings of ACL.
Gazdar, G., Klein, E., Pullum, G., and Sag, I. (1985). Generalized Phrase Structure Grammar. Harvard.
Gildea, D. and Jurafsky, D. (2002). Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288.
Hockenmaier, J. (2003). Data and models for Statistical Parsing with Combinatory Categorial Grammar. PhD thesis, University of Edinburgh.
Johnson, M. (2002). A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of ACL, volume 40.
Kaplan, R., Riezler, S., King, T. H., Maxwell, J. T., Vasserman, A., and Crouch, R. (2004). Speed and accuracy in shallow and deep stochastic parsing. In Proceedings of NAACL.
Kaplan, R. M. and Maxwell, J. T. (1993). The interface between phrasal and functional constraints. Computational Linguistics, 19(4):571–590.
Klein, D. and Manning, C. D. (2003). Accurate unlexicalized parsing. In Proceedings of ACL.
Kruijff, G.-J. (2002). Learning linearization rules from treebanks. Invited talk at the Formal Grammar'02/COLOGNET-ELSNET Symposium.
Levy, R. (2004). Probabilistic Models of Syntactic Discontinuity. PhD thesis, Stanford University. In progress.
Maxwell, J. T. and Manning, C. D. (1996). A theory of non-constituent coordination based on finite-state rules. In Butt, M. and King, T. H., editors, Proceedings of LFG.
Pasca, M. and Harabagiu, S. M. (2001). High performance question/answering. In Proceedings of SIGIR.
Plaehn, O. (2000). Computing the most probable parse for a discontinuous phrase structure grammar. In Proceedings of IWPT, Trento, Italy.
Riezler, S., King, T. H., Kaplan, R. M., Crouch, R. S., Maxwell, J. T., and Johnson, M. (2002). Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques. In Proceedings of ACL, pages 271–278.
Skut, W., Brants, T., Krenn, B., and Uszkoreit, H. (1997a). Annotating unrestricted German text. In Fachtagung der Sektion Computerlinguistik der Deutschen Gesellschaft für Sprachwissenschaft, Heidelberg, Germany.
Skut, W., Krenn, B., Brants, T., and Uszkoreit, H. (1997b). An annotation scheme for free word order languages. In Proceedings of ANLP.
A Study on Convolution Kernels for Shallow Semantic Parsing
Alessandro Moschitti
University of Texas at Dallas, Human Language Technology Research Institute, Richardson, TX 75083-0688, USA
[email protected]

Abstract

In this paper we have designed and experimented with novel convolution kernels for automatic classification of predicate arguments. Their main property is the ability to process structured representations. Support Vector Machines (SVMs), using a combination of such kernels and the flat feature kernel, classify PropBank predicate arguments with accuracy higher than the current argument classification state-of-the-art. Additionally, experiments on FrameNet data have shown that SVMs are appealing for the classification of semantic roles even if the proposed kernels do not produce any improvement.

1 Introduction

Several linguistic theories, e.g. (Jackendoff, 1990), claim that semantic information in natural language texts is connected to syntactic structures. Hence, to deal with natural language semantics, the learning algorithm should be able to represent and process structured data. The classical solution adopted for such tasks is to convert syntax structures into flat feature representations which are suitable for a given learning model. The main drawback is that structures may not be properly represented by flat features.

In particular, these problems affect the processing of predicate argument structures annotated in PropBank (Kingsbury and Palmer, 2002) or FrameNet (Fillmore, 1982). Figure 1 shows an example of a predicate annotation in PropBank for the sentence: "Paul gives a lecture in Rome". A predicate may be a verb or a noun or an adjective and most of the time Arg 0 is the logical subject, Arg 1 is the logical object and ArgM may indicate locations, as in our example.

FrameNet also describes predicate/argument structures but for this purpose it uses richer semantic structures called frames. The latter are schematic representations of situations involving various participants, properties and roles in which a word may be typically used. Frame elements or semantic roles are arguments of predicates called target words. In FrameNet, the argument names are local to a particular frame.

[Figure 1: A predicate argument structure in a parse-tree representation: the parse of "Paul gives a lecture in Rome" with Arg. 0 (Paul), Arg. 1 (a lecture) and Arg. M (in Rome) marked for the predicate gives.]

Several machine learning approaches for argument identification and classification have been developed (Gildea and Jurafsky, 2002; Gildea and Palmer, 2002; Surdeanu et al., 2003; Hacioglu et al., 2003). Their common characteristic is the adoption of feature spaces that model predicate-argument structures in a flat representation. On the contrary, convolution kernels aim to capture structural information in terms of sub-structures, providing a viable alternative to flat features.

In this paper, we select portions of syntactic trees, which include predicate/argument salient sub-structures, to define convolution kernels for the task of predicate argument classification. In particular, our kernels aim to (a) represent the relation between predicate and one of its arguments and (b) to capture the overall argument structure of the target predicate. Additionally, we define novel kernels as combinations of the above two with the polynomial kernel of standard flat features. Experiments on Support Vector Machines using the above kernels show an improvement of the state-of-the-art for PropBank argument classification.
On the contrary, FrameNet semantic parsing seems not to take advantage of the structural information provided by our kernels. The remainder of this paper is organized as follows: Section 2 defines the Predicate Argument Extraction problem and the standard solution to solve it. In Section 3 we present our kernels whereas in Section 4 we show comparative results among SVMs using standard features and the proposed kernels. Finally, Section 5 summarizes the conclusions.

2 Predicate Argument Extraction: a standard approach

Given a sentence in natural language and the target predicates, all arguments have to be recognized. This problem can be divided into two subtasks: (a) the detection of the argument boundaries, i.e. of all the words that compose an argument, and (b) the classification of the argument type, e.g. Arg0 or ArgM in PropBank or Agent and Goal in FrameNet.

The standard approach to learn both detection and classification of predicate arguments is summarized by the following steps:

1. Given a sentence from the training-set, generate a full syntactic parse-tree;
2. let P and A be the set of predicates and the set of parse-tree nodes (i.e. the potential arguments), respectively;
3. for each pair <p, a> ∈ P × A:
   • extract the feature representation set, F_{p,a};
   • if the subtree rooted in a covers exactly the words of one argument of p, put F_{p,a} in T+ (positive examples), otherwise put it in T− (negative examples).

For example, in Figure 1, for each combination of the predicate give with the nodes N, S, VP, V, NP, PP, D or IN the instances F_{"give",a} are generated. In case the node a exactly covers Paul, a lecture or in Rome, it will be a positive instance, otherwise it will be a negative one, e.g. F_{"give","IN"}.

To learn the argument classifiers the T+ set can be re-organized as positive T+_{arg_i} and negative T−_{arg_i} examples for each argument i. In this way, an individual ONE-vs-ALL classifier for each argument i can be trained. We adopted this solution as it is simple and effective (Hacioglu et al., 2003). In the classification phase, given a sentence of the test-set, all its F_{p,a} are generated and classified by each individual classifier. As a final decision, we select the argument associated with the maximum value among the scores provided by the SVMs, i.e. argmax_{i∈S} C_i, where S is the target set of arguments (a short sketch of this decision rule is given after Table 1 below).

- Phrase Type: This feature indicates the syntactic type of the phrase labeled as a predicate argument, e.g. NP for Arg1.
- Parse Tree Path: This feature contains the path in the parse tree between the predicate and the argument phrase, expressed as a sequence of nonterminal labels linked by direction (up or down) symbols, e.g. V ↑VP ↓NP for Arg1.
- Position: Indicates if the constituent, i.e. the potential argument, appears before or after the predicate in the sentence, e.g. after for Arg1 and before for Arg0.
- Voice: This feature distinguishes between active or passive voice for the predicate phrase, e.g. active for every argument.
- Head Word: This feature contains the headword of the evaluated phrase. Case and morphological information are preserved, e.g. lecture for Arg1.
- Governing Category: indicates if an NP is dominated by a sentence phrase or by a verb phrase, e.g. the NP associated with Arg1 is dominated by a VP.
- Predicate Word: This feature consists of two components: (1) the word itself, e.g. gives for all arguments; and (2) the lemma which represents the verb normalized to lower case and infinitive form, e.g. give for all arguments.
Table 1: Standard features extracted from the parse-tree in Figure 1.
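The following is a minimal sketch of the ONE-vs-ALL decision rule described above. The classifier interface is a hypothetical stand-in: any callable returning an SVM score for a feature representation would fit, and the dummy scorers in the usage example are purely illustrative.

```python
from typing import Callable, Dict, FrozenSet


def classify_argument(features: FrozenSet[str],
                      classifiers: Dict[str, Callable[[FrozenSet[str]], float]]) -> str:
    """Return argmax_i C_i(features) over the per-argument binary classifiers."""
    return max(classifiers, key=lambda label: classifiers[label](features))


# Toy usage with dummy scoring functions standing in for trained SVMs:
dummy_svms = {
    "Arg0": lambda f: 0.3,
    "Arg1": lambda f: 1.2,
    "ArgM": lambda f: -0.5,
}
print(classify_argument(frozenset({"phrase_type=NP", "position=after"}), dummy_svms))
# prints "Arg1", the label whose classifier gives the highest score
```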
2.1 Standard feature space

The discovery of relevant features is, as usual, a complex task; nevertheless, there is a common consensus on the basic features that should be adopted. These standard features, firstly proposed in (Gildea and Jurafsky, 2002), refer to flat information derived from parse trees, i.e. Phrase Type, Predicate Word, Head Word, Governing Category, Position and Voice. Table 1 presents the standard features and exemplifies how they are extracted from the parse tree in Figure 1.

For example, the Parse Tree Path feature represents the path in the parse-tree between a predicate node and one of its argument nodes. It is expressed as a sequence of nonterminal labels linked by direction symbols (up or down), e.g. in Figure 1, V↑VP↓NP is the path between the predicate to give and the argument 1, a lecture. Two pairs <p1, a1> and <p2, a2> have two different Path features even if the paths differ by only one node in the parse-tree. This prevents the learning algorithm from generalizing well on unseen data. In order to address this problem, the next section describes a novel kernel space for predicate argument classification.

[Figure 2: Structured features for Arg0, Arg1 and ArgM: the parse trees of "Paul delivers a talk in formal style" with the circled sub-structures F_{deliver,Arg0} (a), F_{deliver,Arg1} (b) and F_{deliver,ArgM} (c).]

2.2 Support Vector Machine approach

Given a vector space in ℜ^n and a set of positive and negative points, SVMs classify vectors according to a separating hyperplane, H(⃗x) = ⃗w × ⃗x + b = 0, where ⃗w ∈ ℜ^n and b ∈ ℜ are learned by applying the Structural Risk Minimization principle (Vapnik, 1995).

To apply the SVM algorithm to Predicate Argument Classification, we need a function φ : F → ℜ^n to map our feature space F = {f_1, .., f_{|F|}} and our predicate/argument pair representation, F_{p,a} = F_z, into ℜ^n, such that:

F_z → φ(F_z) = (φ_1(F_z), .., φ_n(F_z))

From the kernel theory we have that:

$$H(\vec{x}) = \Big(\sum_{i=1..l} \alpha_i \vec{x}_i\Big) \cdot \vec{x} + b = \sum_{i=1..l} \alpha_i\, \vec{x}_i \cdot \vec{x} + b = \sum_{i=1..l} \alpha_i\, \phi(F_i) \cdot \phi(F_z) + b,$$

where F_i ∀i ∈ {1, .., l} are the training instances and the product K(F_i, F_z) = <φ(F_i) · φ(F_z)> is the kernel function associated with the mapping φ.

The simplest mapping that we can apply is φ(F_z) = ⃗z = (z_1, ..., z_n) where z_i = 1 if f_i ∈ F_z, otherwise z_i = 0, i.e. the characteristic vector of the set F_z with respect to F. If we choose the scalar product as kernel function we obtain the linear kernel K_L(F_x, F_z) = ⃗x · ⃗z. Another function, which represents the current state-of-the-art in predicate argument classification, is the polynomial kernel K_p(F_x, F_z) = (c + ⃗x · ⃗z)^d, where c is a constant and d is the degree of the polynomial.

3 Convolution Kernels for Semantic Parsing

We propose two different convolution kernels associated with two different predicate argument sub-structures: the first includes the target predicate with one of its arguments. We will show that it contains almost all the standard feature information. The second relates to the sub-categorization frame of verbs. In this case, the kernel function aims to cluster together verbal predicates which have the same syntactic realizations. This provides the classification algorithm with important clues about the possible set of arguments suited for the target syntactic structure.
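Before moving to the structural kernels, the flat-feature baseline of Section 2.2 can be made concrete with a small sketch: a feature set plays the role of the binary characteristic vector, and the linear and polynomial kernels are computed over it. The feature names and the values of c and d below are illustrative only.

```python
from typing import FrozenSet


def linear_kernel(fx: FrozenSet[str], fz: FrozenSet[str]) -> float:
    """Scalar product of two binary characteristic vectors = number of shared features."""
    return float(len(fx & fz))


def polynomial_kernel(fx: FrozenSet[str], fz: FrozenSet[str],
                      c: float = 1.0, d: int = 3) -> float:
    """(c + x.z)^d over binary feature vectors, without building the vectors."""
    return (c + linear_kernel(fx, fz)) ** d


# Toy usage with illustrative standard features for two candidate arguments:
f1 = frozenset({"phrase_type=NP", "path=V^VP_NP", "position=after", "voice=active"})
f2 = frozenset({"phrase_type=NP", "path=V^VP_PP_NP", "position=after", "voice=active"})
print(polynomial_kernel(f1, f2))   # 3 shared features -> (1 + 3)^3 = 64.0
```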
3.1 Predicate/Argument Feature (PAF)

We consider the predicate argument structures annotated in PropBank or FrameNet as our semantic space. The smallest sub-structure which includes one predicate with only one of its arguments defines our structural feature. For example, Figure 2 illustrates the parse-tree of the sentence "Paul delivers a talk in formal style". The circled substructures in (a), (b) and (c) are our semantic objects associated with the three arguments of the verb to deliver, i.e. <deliver, Arg0>, <deliver, Arg1> and <deliver, ArgM>. Note that each predicate/argument pair is associated with only one structure, i.e. F_{p,a} contains only one of the circled sub-trees. Other important properties are the following:

(1) The overall semantic feature space F contains sub-structures composed of syntactic information embodied by parse-tree dependencies and semantic information under the form of predicate/argument annotation.

(2) This solution is efficient as we have to classify as many nodes as the number of predicate arguments.

(3) A constituent cannot be part of two different arguments of the target predicate, i.e. there is no overlapping between the words of two arguments. Thus, two semantic structures F_{p1,a1} and F_{p2,a2}1, associated with two different arguments, cannot be included one in the other. This property is important because a convolution kernel would not be effective in distinguishing between an object and its sub-parts.

1 F_{p,a} was defined as the set of features of the object <p, a>. Since in our representations we have only one element in F_{p,a}, with an abuse of notation we use it to indicate the objects themselves.

3.2 Sub-Categorization Feature (SCF)

The above object space aims to capture all the information between a predicate and one of its arguments. Its main drawback is that important structural information related to inter-argument dependencies is neglected. In order to solve this problem we define the Sub-Categorization Feature (SCF). This is the sub-parse tree which includes the sub-categorization frame of the target verbal predicate. For example, Figure 3 shows the parse tree of the sentence "He flushed the pan and buckled his belt". The solid line describes the SCF of the predicate flush, i.e. F_flush, whereas the dashed line tailors the SCF of the predicate buckle, i.e. F_buckle. Note that SCFs are features for predicates (i.e. they describe predicates), whereas PAF characterizes predicate/argument pairs.

[Figure 3: Sub-Categorization Features for two predicate argument structures: the parse tree of "He flushed the pan and buckled his belt", with F_flush and F_buckle outlined (Arg0 of flush and buckle: He; Arg1 of flush: the pan; Arg1 of buckle: his belt).]

Once semantic representations are defined, we need to design a kernel function to estimate the similarity between our objects. As suggested in Section 2, we can map them into vectors in ℜ^n and evaluate implicitly the scalar product among them.

3.3 Predicate/Argument structure Kernel (PAK)

Given the semantic objects defined in the previous section, we design a convolution kernel in a way similar to the parse-tree kernel proposed in (Collins and Duffy, 2002). We divide our mapping φ in two steps: (1) from the semantic structure space F (i.e. PAF or SCF objects) to the set of all their possible sub-structures
F′ = {f′_1, .., f′_{|F′|}}, and (2) from F′ to ℜ^{|F′|}. An example of features in F′ is given in Figure 4, where the whole set of fragments, F′_{deliver,Arg1}, of the argument structure F_{deliver,Arg1} is shown (see also Figure 2).

[Figure 4: All 17 valid fragments of the semantic structure associated with Arg 1 of Figure 2.]

It is worth noting that the allowed sub-trees contain the entire (not partial) production rules. For instance, the sub-tree [NP [D a]] is excluded from the set in Figure 4 since only a part of the production NP → D N is used in its generation. However, this constraint does not apply to the production VP → V NP PP along with the fragment [VP [V NP]], as the subtree [VP [PP [...]]] is not considered part of the semantic structure. Thus, in step 1, an argument structure F_{p,a} is mapped into a fragment set F′_{p,a}. In step 2, the latter is mapped into ⃗x = (x_1, .., x_{|F′|}) ∈ ℜ^{|F′|}, where x_i is equal to the number of times that f′_i occurs in F′_{p,a}.2

2 A fragment can appear several times in a parse-tree, thus each fragment occurrence is considered as a different element in F′_{p,a}.

In order to evaluate K(φ(F_x), φ(F_z)) without evaluating the feature vectors ⃗x and ⃗z, we define the indicator function I_i(n) = 1 if the sub-structure i is rooted at node n and 0 otherwise. It follows that φ_i(F_x) = Σ_{n∈N_x} I_i(n), where N_x is the set of the F_x's nodes. Therefore, the kernel can be written as:

$$K(\phi(F_x), \phi(F_z)) = \sum_{i=1}^{|F'|} \Big(\sum_{n_x \in N_x} I_i(n_x)\Big)\Big(\sum_{n_z \in N_z} I_i(n_z)\Big) = \sum_{n_x \in N_x} \sum_{n_z \in N_z} \sum_i I_i(n_x) I_i(n_z),$$

where N_x and N_z are the nodes in F_x and F_z, respectively. In (Collins and Duffy, 2002), it has been shown that Σ_i I_i(n_x) I_i(n_z) = ∆(n_x, n_z) can be computed in O(|N_x| × |N_z|) by the following recursive relation:

(1) if the productions at n_x and n_z are different then ∆(n_x, n_z) = 0;
(2) if the productions at n_x and n_z are the same, and n_x and n_z are pre-terminals then ∆(n_x, n_z) = 1;
(3) if the productions at n_x and n_z are the same, and n_x and n_z are not pre-terminals then
$$\Delta(n_x, n_z) = \prod_{j=1}^{nc(n_x)} \big(1 + \Delta(ch(n_x, j), ch(n_z, j))\big),$$
where nc(n_x) is the number of children of n_x and ch(n, j) is the j-th child of the node n. Note that as the productions are the same, ch(n_x, j) = ch(n_z, j).

This kind of kernel has the drawback of assigning more weight to larger structures while the argument type does not strictly depend on the size of the argument (Moschitti and Bejan, 2004). To overcome this problem we can scale the relative importance of the tree fragments using a parameter λ for the cases (2) and (3), i.e. ∆(n_x, n_z) = λ and
$$\Delta(n_x, n_z) = \lambda \prod_{j=1}^{nc(n_x)} \big(1 + \Delta(ch(n_x, j), ch(n_z, j))\big),$$
respectively.

It is worth noting that even if the above equations define a kernel function similar to the one proposed in (Collins and Duffy, 2002), the sub-structures on which it operates are different from the parse-tree kernel. For example, Figure 4 shows that structures such as [VP [V] [NP]], [VP [V delivers] [NP]] and [VP [V] [NP [DT] [N]]] are valid features, but these fragments (and many others) are not generated by a complete production, i.e. VP → V NP PP. As a consequence they would not be included in the parse-tree kernel of the sentence.
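A small sketch of the ∆ recursion with the λ decay may help make the computation above concrete. The tree representation is a hypothetical stand-in (a node's production is taken to be its label plus the sequence of its children's labels), the value of λ is illustrative, and no dynamic programming over node pairs is attempted.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class TreeNode:
    label: str
    children: List["TreeNode"] = field(default_factory=list)

    def production(self) -> Tuple[str, Tuple[str, ...]]:
        return self.label, tuple(c.label for c in self.children)

    def is_preterminal(self) -> bool:
        return len(self.children) == 1 and not self.children[0].children


def internal_nodes(t: TreeNode):
    """Yield every node with children (terminal words are not fragment roots)."""
    if t.children:
        yield t
        for c in t.children:
            yield from internal_nodes(c)


def delta(nx: TreeNode, nz: TreeNode, lam: float = 0.4) -> float:
    """Lambda-weighted count of common fragments rooted at nx and nz."""
    if nx.production() != nz.production():        # case (1)
        return 0.0
    if nx.is_preterminal():                       # case (2), scaled by lambda
        return lam
    result = lam                                  # case (3), scaled by lambda
    for cx, cz in zip(nx.children, nz.children):
        result *= 1.0 + delta(cx, cz, lam)
    return result


def pak(tx: TreeNode, tz: TreeNode, lam: float = 0.4) -> float:
    """K(phi(Fx), phi(Fz)) as the sum of delta over all pairs of internal nodes."""
    return sum(delta(nx, nz, lam)
               for nx in internal_nodes(tx) for nz in internal_nodes(tz))


# Toy usage: two small PAF-like structures sharing only the [V delivers] fragment.
v = lambda w: TreeNode("V", [TreeNode(w)])
np_ = TreeNode("NP", [TreeNode("D", [TreeNode("a")]), TreeNode("N", [TreeNode("talk")])])
vp1 = TreeNode("VP", [v("delivers"), np_])
vp2 = TreeNode("VP", [v("delivers")])
print(pak(vp1, vp2))   # 0.4: only the matching [V delivers] pre-terminal contributes
```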
3.4 Comparison with Standard Features

In this section we compare standard features with the kernel-based representation in order to derive useful indications for their use.

First, PAK estimates a similarity between two argument structures (i.e., PAF or SCF) by counting the number of sub-structures that are in common. As an example, the similarity between the two structures in Figure 2, F_{"delivers",Arg0} and F_{"delivers",Arg1}, is equal to 1 since they have in common only the [V delivers] substructure. Such a low value depends on the fact that different arguments tend to appear in different structures. On the contrary, if two structures differ only in a few nodes (especially terminals or near-terminal nodes) the similarity remains quite high. For example, if we change the tense of the verb to deliver (Figure 2) into delivered, the [VP [V delivers] [NP]] subtree will be transformed into [VP [VBD delivered] [NP]], where the NP is unchanged. Thus, the similarity with the previous structure will be quite high as: (1) the NP with all its sub-parts will be matched and (2) the small difference will not highly affect the kernel norm and consequently the final score. The above property also holds for the SCF structures. For example, in Figure 3, K_{PAK}(φ(F_flush), φ(F_buckle)) is quite high as the two verbs have the same syntactic realization of their arguments. In general, flat features do not possess this conservative property. For example, the Parse Tree Path is very sensitive to small changes of parse-trees, e.g. two predicates, expressed in different tenses, generate two different Path features.

Second, some information contained in the standard features is embedded in PAF: Phrase Type, Predicate Word and Head Word explicitly appear as structure fragments. For example, Figure 4 shows fragments like [NP [DT] [N]] or [NP [DT a] [N talk]] which explicitly encode the Phrase Type feature NP for the Arg 1 in Figure 2.b. The Predicate Word is represented by the fragment [V delivers] and the Head Word is encoded in [N talk]. The same is not true for SCF since it does not contain information about a specific argument. SCF, in fact, aims to characterize the predicate with respect to the overall argument structure rather than a specific pair <p, a>.

Third, the Governing Category, Position and Voice features are not explicitly contained in either PAF or SCF. Nevertheless, SCF may allow the learning algorithm to detect the active/passive form of verbs.

Finally, it follows from the above observations that the PAF representation may be used with PAK to classify arguments. On the contrary, SCF lacks important information, thus, alone, it may be used only to classify verbs into syntactic categories. This suggests that SCF should be used in conjunction with standard features to boost their classification performance.

4 The Experiments

The aim of our experiments is twofold: On the one hand, we study if the PAF representation produces an accuracy higher than standard features. On the other hand, we study if SCF can be used to classify verbs according to their syntactic realization. Both the above aims can be carried out by combining PAF and SCF with the standard features. For this purpose we adopted two ways to combine kernels:3 (1) K = K_1 · K_2 and (2) K = γK_1 + K_2. The resulting set of kernels used in the experiments is the following (a short sketch of how such combinations can be computed is given at the end of Section 4.2):

• K_{pd} is the polynomial kernel with degree d over the standard features.
• K_{PAF} is obtained by using the PAK function over the PAF structures.
• $K_{PAF+P} = \gamma \frac{K_{PAF}}{|K_{PAF}|} + \frac{K_{pd}}{|K_{pd}|}$, i.e. the sum of the normalized4 PAF-based kernel and the normalized polynomial kernel.
• $K_{PAF \cdot P} = \frac{K_{PAF} \cdot K_{pd}}{|K_{PAF}| \cdot |K_{pd}|}$, i.e. the normalized product of the PAF-based kernel and the polynomial kernel.
• $K_{SCF+P} = \gamma \frac{K_{SCF}}{|K_{SCF}|} + \frac{K_{pd}}{|K_{pd}|}$, i.e. the sum of the normalized SCF-based kernel and the normalized polynomial kernel.
• $K_{SCF \cdot P} = \frac{K_{SCF} \cdot K_{pd}}{|K_{SCF}| \cdot |K_{pd}|}$, i.e. the normalized product of the SCF-based kernel and the polynomial kernel.

3 It can be proven that the resulting kernels still satisfy Mercer's conditions (Cristianini and Shawe-Taylor, 2000).

4 To normalize a kernel K(⃗x, ⃗z) we can divide it by $\sqrt{K(\vec{x}, \vec{x}) \cdot K(\vec{z}, \vec{z})}$.

4.1 Corpora set-up

The above kernels were experimented with over two corpora: PropBank (www.cis.upenn.edu/∼ace) along with Penn TreeBank 2 (Marcus et al., 1993)5 and FrameNet. PropBank contains about 53,700 sentences and a fixed split between training and testing which has been used in other research, e.g. (Gildea and Palmer, 2002; Surdeanu et al., 2003; Hacioglu et al., 2003). In this split, sections from 02 to 21 are used for training, section 23 for testing and sections 1 and 22 as development set. We considered all PropBank arguments6 from Arg0 to Arg9, ArgA and ArgM for a total of 122,774 and 7,359 arguments in training and testing respectively. It is worth noting that in the experiments we used the gold standard parsing from Penn TreeBank, thus our kernel structures are derived with high precision.

For the FrameNet corpus (www.icsi.berkeley.edu/∼framenet) we extracted all 24,558 sentences from the 40 frames of the Senseval 3 task (www.senseval.org) for the Automatic Labeling of Semantic Roles. We considered 18 of the most frequent roles and we mapped together those having the same name. Only verbs are selected to be predicates in our evaluations. Moreover, as a fixed split between training and testing does not exist, we randomly selected 30% of the sentences for testing and 70% for training. Additionally, 30% of the training set was used as a validation-set. The sentences were processed using Collins' parser (Collins, 1997) to generate parse-trees automatically.

5 We point out that we removed from Penn TreeBank the function tags like SBJ and TMP as parsers usually are not able to provide this information.

6 We noted that only Arg0 to Arg4 and ArgM contain enough training/testing data to affect the overall performance.

4.2 Classification set-up

The classifier evaluations were carried out using the SVM-light software (Joachims, 1999), available at svmlight.joachims.org, with the default polynomial kernel for standard feature evaluations. To process PAF and SCF, we implemented our own kernels and we used them inside SVM-light. The classification performances were evaluated using the f1 measure7 for single arguments and the accuracy for the final multi-class classifier. This latter choice allows us to compare the results with previous work in the literature, e.g. (Gildea and Jurafsky, 2002; Surdeanu et al., 2003; Hacioglu et al., 2003).

For the evaluation of SVMs, we used the default regularization parameter (e.g., C = 1 for normalized kernels) and we tried a few cost-factor values (i.e., j ∈ {0.1, 1, 2, 3, 4, 5}) to adjust the rate between Precision and Recall. We chose parameters by evaluating SVM using the K_{p3} kernel over the validation-set. Both the λ (see Section 3.3) and γ parameters were evaluated in a similar way by maximizing the performance of SVM using $K_{PAF}$ and $\gamma \frac{K_{SCF}}{|K_{SCF}|} + \frac{K_{pd}}{|K_{pd}|}$ respectively. These parameters were also adopted for all the other kernels.

7 f1 assigns equal importance to Precision P and Recall R, i.e. $f_1 = \frac{2 P \cdot R}{P + R}$.
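The kernel combinations listed at the start of this section can be sketched as follows. This is only an illustrative outline under assumed interfaces: k1 and k2 stand for any two base kernel callables (e.g. a structural kernel and the polynomial kernel over flat features) evaluated on whole classification instances that carry both representations, and the normalization follows footnote 4.

```python
import math
from typing import Callable, TypeVar

T = TypeVar("T")
Kernel = Callable[[T, T], float]


def normalize(k: Kernel) -> Kernel:
    """K'(x, z) = K(x, z) / sqrt(K(x, x) * K(z, z))."""
    def k_norm(x: T, z: T) -> float:
        return k(x, z) / math.sqrt(k(x, x) * k(z, z))
    return k_norm


def kernel_sum(k1: Kernel, k2: Kernel, gamma: float = 1.0) -> Kernel:
    """gamma * K1/|K1| + K2/|K2|, as in K_PAF+P and K_SCF+P."""
    n1, n2 = normalize(k1), normalize(k2)
    return lambda x, z: gamma * n1(x, z) + n2(x, z)


def kernel_product(k1: Kernel, k2: Kernel) -> Kernel:
    """(K1 * K2) / (|K1| * |K2|), as in K_PAF.P and K_SCF.P."""
    n1, n2 = normalize(k1), normalize(k2)
    return lambda x, z: n1(x, z) * n2(x, z)
```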
4.3 Kernel evaluations

To study the impact of our structural kernels we firstly derived the maximal accuracy reachable with standard features along with polynomial kernels. The multi-class accuracies, for PropBank and FrameNet using K_{pd} with d = 1, .., 5, are shown in Figure 5. We note that (a) the highest performance is reached for d = 3, (b) for PropBank our maximal accuracy (90.5%) is substantially equal to the SVM performance (88%) obtained in (Hacioglu et al., 2003) with degree 2, and (c) the accuracy on FrameNet (85.2%) is higher than the best result obtained in the literature, i.e. 82.0% in (Gildea and Palmer, 2002). This different outcome is due to a different task (we classify different roles) and a different classification algorithm. Moreover, we did not use the Frame information, which is very important.8

[Figure 5: Multi-classifier accuracy (from 0.82 to 0.91) according to different degrees d (1 to 5) of the polynomial kernel, for FrameNet and PropBank.]

8 Preliminary experiments indicate that SVMs can reach 90% by using the frame feature.

It is worth noting that the difference between the linear and the polynomial kernel is about 3-4 percentage points for both PropBank and FrameNet. This remarkable difference can be easily explained by considering the meaning of standard features. For example, let us restrict the classification function C_{Arg0} to the two features Voice and Position. Without loss of generality we can assume: (a) Voice = 1 if active and 0 if passive, and (b) Position = 1 when the argument is after the predicate and 0 otherwise. To simplify the example, we also assume that if an argument precedes the target predicate it is a subject, otherwise it is an object.9 It follows that a constituent is Arg0, i.e. C_{Arg0} = 1, if only one feature at a time is 1, otherwise it is not an Arg0, i.e. C_{Arg0} = 0. In other words, C_{Arg0} = Position XOR Voice, which is the classical example of a non-linearly separable function that becomes separable in a superlinear space (Cristianini and Shawe-Taylor, 2000).

9 Indeed, this is true in most of the cases.

After it was established that the best kernel for standard features is K_{p3}, we carried out all the other experiments using it in the kernel combinations. Tables 2 and 3 show the single-class (f1 measure) as well as multi-class classifier (accuracy) performance for PropBank and FrameNet respectively. Each column of the two tables refers to a different kernel defined in the previous section. The overall meaning is discussed in the following points:

First, PAF alone has good performance, since in the PropBank evaluation it outperforms the linear kernel (K_{p1}), 88.7% vs. 86.7%, whereas in FrameNet it shows a similar performance, 79.5% vs. 82.1% (compare the tables with Figure 5). This suggests that PAF generates the same information as the standard features in a linear space. However, when a degree greater than 1 is used for standard features, PAF is outperformed.10

10 Unfortunately, the use of a polynomial kernel on top of the tree fragments to generate the XOR functions seems not to be successful.

Args     P3     PAF    PAF+P  PAF·P  SCF+P  SCF·P
Arg0     90.8   88.3   90.6   90.5   94.6   94.7
Arg1     91.1   87.4   89.9   91.2   92.9   94.1
Arg2     80.0   68.5   77.5   74.7   77.4   82.0
Arg3     57.9   56.5   55.6   49.7   56.2   56.4
Arg4     70.5   68.7   71.2   62.7   69.6   71.1
ArgM     95.4   94.1   96.2   96.2   96.1   96.3
Acc.     90.5   88.7   90.2   90.4   92.4   93.2
Table 2: Evaluation of Kernels on PropBank.

Roles    P3     PAF    PAF+P  PAF·P  SCF+P  SCF·P
agent    92.0   88.5   91.7   91.3   93.1   93.9
cause    59.7   16.1   41.6   27.7   42.6   57.3
degree   74.9   68.6   71.4   57.8   68.5   60.9
depict.  52.6   29.7   51.0   28.6   46.8   37.6
durat.   45.8   52.1   40.9   29.0   31.8   41.8
goal     85.9   78.6   85.3   82.8   84.0   85.3
instr.   67.9   46.8   62.8   55.8   59.6   64.1
mann.    81.0   81.9   81.2   78.6   77.8   77.8
Acc.     85.2   79.5   84.6   81.6   83.8   84.2
Table 3: Evaluation of Kernels on FrameNet semantic roles (18 roles).

Second, SCF improves the polynomial kernel (d = 3), i.e. the current state-of-the-art, by about 3 percentage points on PropBank (column SCF·P). This suggests that (a) PAK can measure the similarity between two SCF structures and (b) the sub-categorization information provides effective clues about the expected argument type. The interesting consequence is that SCF together with PAK seems suitable to automatically cluster different verbs that have the same syntactic realization. We note also that to fully exploit the SCF information it is necessary to use a kernel product (K_1 · K_2) combination rather than the sum (K_1 + K_2), e.g. column SCF+P.

Finally, the FrameNet results are completely different. No kernel combination with either PAF or SCF produces an improvement. On the contrary, the performance decreases, suggesting that the classifier is confused by this syntactic information. The main reason for the different outcomes is that PropBank arguments are different from semantic roles as they are an intermediate level between syntax and semantics, i.e. they are nearer to grammatical functions. In fact, in PropBank, arguments are annotated consistently with syntactic alternations (see the Annotation guidelines for PropBank at www.cis.upenn.edu/∼ace). On the contrary, FrameNet roles represent the final semantic product and they are assigned according to semantic considerations rather than syntactic aspects. For example, the Cause and Agent semantic roles have identical syntactic realizations. This prevents SCF from distinguishing between them. Another minor reason may be the use of automatic parse-trees to extract PAF and SCF, even if preliminary experiments on automatic semantic shallow parsing of PropBank have shown no important differences versus semantic parsing which adopts Gold Standard parse-trees.

5 Conclusions

In this paper, we have experimented with SVMs using the two novel convolution kernels PAF and SCF, which are designed for the semantic structures derived from the PropBank and FrameNet corpora. Moreover, we have combined them with the polynomial kernel of standard features. The results have shown that:

First, SVMs using the above kernels are appealing for semantically parsing both corpora.

Second, PAF and SCF can be used to improve automatic classification of PropBank arguments as they provide clues about the predicate argument structure of the target verb. For example, SCF improves (a) the classification state-of-the-art (i.e. the polynomial kernel) by about 3 percentage points and (b) the best result in the literature by about 5 percentage points.

Third, additional work is needed to design kernels suitable for learning the deep semantics contained in FrameNet, as it seems not to be sensitive to either the PAF or the SCF information.

Finally, an analysis of SVMs using polynomial kernels over standard features has explained why they largely outperform linear classifiers based on standard features.

In the future we plan to design other structures and combine them with SCF, PAF and standard features. In this vision the learning will be carried out on a set of structural features instead of a set of flat features.
Other studies may relate to the use of SCF to generate verb clusters. Acknowledgments This research has been sponsored by the ARDA AQUAINT program. In addition, I would like to thank Professor Sanda Harabagiu for her advice, Adrian Cosmin Bejan for implementing the feature extractor and Paul Mor˘arescu for processing the FrameNet data. Many thanks to the anonymous reviewers for their invaluable suggestions. References Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In proceeding of ACL-02. Michael Collins. 1997. Three generative, lexicalized models for statistical parsing. In proceedings of the ACL-97, pages 16–23, Somerset, New Jersey. Nello Cristianini and John Shawe-Taylor. 2000. An introduction to Support Vector Machines. Cambridge University Press. Charles J. Fillmore. 1982. Frame semantics. In Linguistics in the Morning Calm, pages 111–137. Daniel Gildea and Daniel Jurasfky. 2002. Automatic labeling of semantic roles. Computational Linguistic. Daniel Gildea and Martha Palmer. 2002. The necessity of parsing for predicate argument recognition. In proceedings of ACL-02, Philadelphia, PA. R. Jackendoff. 1990. Semantic Structures, Current Studies in Linguistics series. Cambridge, Massachusetts: The MIT Press. T. Joachims. 1999. Making large-scale SVM learning practical. In Advances in Kernel Methods Support Vector Learning. Paul Kingsbury and Martha Palmer. 2002. From treebank to propbank. In proceedings of LREC02, Las Palmas, Spain. M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics. Alessandro Moschitti and Cosmin Adrian Bejan. 2004. A semantic kernel for predicate argument classification. In proceedings of CoNLL-04, Boston, USA. Kadri Hacioglu, Sameer Pradhan, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2003. Shallow Semantic Parsing Using Support Vector Machines. TR-CSLR-2003-03, University of Colorado. Mihai Surdeanu, Sanda M. Harabagiu, John Williams, and John Aarseth. 2003. Using predicate-argument structures for information extraction. In proceedings of ACL-03, Sapporo, Japan. V. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag New York, Inc.
2004
43
Combining Acoustic and Pragmatic Features to Predict Recognition Performance in Spoken Dialogue Systems Malte Gabsdil Department of Computational Linguistics Saarland University Germany [email protected] Oliver Lemon School of Informatics Edinburgh University Scotland [email protected] Abstract We use machine learners trained on a combination of acoustic confidence and pragmatic plausibility features computed from dialogue context to predict the accuracy of incoming n-best recognition hypotheses to a spoken dialogue system. Our best results show a 25% weighted f-score improvement over a baseline system that implements a “grammar-switching” approach to context-sensitive speech recognition. 1 Introduction A crucial problem in the design of spoken dialogue systems is to decide for incoming recognition hypotheses whether a system should accept (consider correctly recognized), reject (assume misrecognition), or ignore (classify as noise or speech not directed to the system) them. In addition, a more sophisticated dialogue system might decide whether to clarify or confirm certain hypotheses. Obviously, incorrect decisions at this point can have serious negative effects on system usability and user satisfaction. On the one hand, accepting misrecognized hypotheses leads to misunderstandings and unintended system behaviors which are usually difficult to recover from. On the other hand, users might get frustrated with a system that behaves too cautiously and rejects or ignores too many utterances. Thus an important feature in dialogue system engineering is the tradeoff between avoiding task failure (due to misrecognitions) and promoting overall dialogue efficiency, flow, and naturalness. In this paper, we investigate the use of machine learners trained on a combination of acoustic confidence and pragmatic plausibility features (i.e. computed from dialogue context) to predict the quality of incoming n-best recognition hypotheses to a spoken dialogue system. These predictions are then used to select a “best” hypothesis and to decide on appropriate system reactions. We evaluate this approach in comparison with a baseline system that combines fixed recognition confidence rejection thresholds with dialogue-state dependent recognition grammars (Lemon, 2004). The paper is organized as follows. After a short relation to previous work, Section 3 introduces the WITAS multimodal dialogue system, which we use to collect data (Section 4) and to derive baseline results (Section 5). Section 6 describes our learning experiments for classifying and selecting from nbest recognition hypotheses and Section 7 reports our results. 2 Relation to Previous Work (Litman et al., 2000) use acoustic-prosodic information extracted from speech waveforms, together with information derived from their speech recognizer, to automatically predict misrecognized turns in a corpus of train-timetable information dialogues. In our experiments, we also use recognizer confidence scores and a limited number of acousticprosodic features (e.g. amplitude in the speech signal) for hypothesis classification. (Walker et al., 2000) use a combination of features from the speech recognizer, natural language understanding, and dialogue manager/discourse history to classify hypotheses as correct, partially correct, or misrecognized. Our work is related to these experiments in that we also combine confidence scores and higherlevel features for classification. 
However, both (Litman et al., 2000) and (Walker et al., 2000) consider only single-best recognition results and thus use their classifiers as “filters” to decide whether the best recognition hypothesis for a user utterance is correct or not. We go a step further in that we classify n-best hypotheses and then select among the alternatives. We also explore the use of more dialogue and task-oriented features (e.g. the dialogue move type of a recognition hypothesis) for classification. The main difference between our approach and work on hypothesis reordering (e.g. (Chotimongkol and Rudnicky, 2001)) is that we make a decision regarding whether a dialogue system should accept, clarify, reject, or ignore a user utterance. Furthermore, our approach is more generally applicable than preceding research, since we frame our methodology in the Information State Update (ISU) approach to dialogue management (Traum et al., 1999) and therefore expect it to be applicable to a range of related multimodal dialogue systems. 3 The WITAS Dialogue System The WITAS dialogue system (Lemon et al., 2002) is a multimodal command and control dialogue system that allows a human operator to interact with a simulated “unmanned aerial vehicle” (UAV): a small robotic helicopter. The human operator is provided with a GUI – an interactive (i.e. mouse clickable) map – and specifies mission goals using natural language commands spoken into a headset, or by using combinations of GUI actions and spoken commands. The simulated UAV can carry out different activities such as flying to locations, following vehicles, and delivering objects. The dialogue system uses the Nuance 8.0 speech recognizer with language models compiled from a grammar (written using the Gemini system (Dowding et al., 1993)), which is also used for parsing and generation. 3.1 WITAS Information States The WITAS dialogue system is part of a larger family of systems that implement the Information State Update (ISU) approach to dialogue management (Traum et al., 1999). The ISU approach has been used to formalize different theories of dialogue and forms the basis of several dialogue system implementations in domains such as route planning, home automation, and tutorial dialogue. The ISU approach is a particularly useful testbed for our technique because it collects information relevant to dialogue context in a central data structure from which it can be easily extracted. (Lemon et al., 2002) describe in detail the components of Information States (IS) and the update procedures for processing user input and generating system responses. Here, we briefly introduce parts of the IS which are needed to understand the system’s basic workings, and from which we will extract dialogue-level and task-level information for our learning experiments: • Dialogue Move Tree (DMT): a tree-structure, in which each subtree of the root node represents a “thread” in the conversation, and where each node in a subtree represents an utterance made either by the system or the user. 1 • Active Node List (ANL): a list that records all “active” nodes in the DMT; active nodes indi1A tree is used in order to overcome the limitations of stackbased processing, see (Lemon and Gruenstein, 2004). cate conversational contributions that are still in some sense open, and to which new utterances can attach. • Activity Tree (AT): a tree-structure representing the current, past, and planned activities that the back-end system (in this case a UAV) performs. 
• Salience List (SL): a list of NPs introduced in the current dialogue ordered by recency. • Modality Buffer (MB): a temporary store that registers click events on the GUI. The DMT and AT are the core components of Information States. The SL and MB are subsidiary data-structures needed for interpreting and generating anaphoric expressions and definite NPs. Finally, the ANL plays a crucial role in integrating new user utterances into the DMT. 4 Data Collection For our experiments, we use data collected in a small user study with the grammar-switching version of the WITAS dialogue system (Lemon, 2004). In this study, six subjects from Edinburgh University (4 male, 2 female) had to solve five simple tasks with the system, resulting in 30 complete dialogues. The subjects’ utterances were recorded as 8kHz 16bit waveform files and all aspects of the Information State transitions during the interactions were logged as html files. Altogether, 303 utterances were recorded in the user study (≈10 user utterances/dialogue). 4.1 Labeling We transcribed all user utterances and parsed the transcriptions offline using WITAS’ natural language understanding component in order to get a gold-standard labeling of the data. Each utterance was labeled as either in-grammar or out-ofgrammar (oog), depending on whether its transcription could be parsed or not, or as crosstalk: a special marker that indicated that the input was not directed to the system (e.g. noise, laughter, self-talk, the system accidentally recording itself). For all in-grammar utterances we stored their interpretations (quasi-logical forms) as computed by WITAS’ parser. Since the parser uses a domain-specific semantic grammar designed for this particular application, each in-grammar utterance had an interpretation that is “correct” with respect to the WITAS application. 4.2 Simplifying Assumptions The evaluations in the following sections make two simplifying assumptions. First, we consider a user utterance correctly recognized only if the logical form of the transcription is the same as the logical form of the recognition hypothesis. This assumption can be too strong because the system might react appropriately even if the logical forms are not literally the same. Second, if a transcribed utterance is out-of-grammar, we assume that the system cannot react appropriately. Again, this assumption might be too strong because the recognizer can accidentally map an utterance to a logical form that is equivalent to the one intended by the user. 5 The Baseline System The baseline for our experiments is the behavior of the WITAS dialogue system that was used to collect the experimental data (using dialogue context as a predictor of language models for speech recognition, see below). We chose this baseline because it has been shown to perform significantly better than an earlier version of the system that always used the same (i.e. full) grammar for recognition (Lemon, 2004). We evaluate the performance of the baseline by analyzing the dialogue logs from the user study. With this information, it is possible to decide how the system reacted to each user utterance. We distinguish between the following three cases: 1. accept: the system accepted the recognition hypothesis of a user utterance as correct. 2. reject: the system rejected the recognition hypothesis of a user utterance given a fixed confidence rejection threshold. 3. ignore: the system did not react to a user utterance at all. 
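A minimal sketch of how these logged reactions can be scored against the gold-standard labels of Section 4.1 is given below; the label strings and the simplification that any accepted in-grammar utterance counts as correct (ignoring the logical-form check discussed above) are assumptions made for illustration only.

```python
def desired_behavior(gold_label):
    """Map a gold-standard utterance label to the reaction the system should show."""
    return {"in-grammar": "accept", "oog": "reject", "crosstalk": "ignore"}[gold_label]

def baseline_accuracy(logged_turns):
    """logged_turns: list of (gold_label, system_reaction) pairs read off the dialogue logs."""
    correct = sum(1 for gold, reaction in logged_turns
                  if reaction == desired_behavior(gold))
    return correct / len(logged_turns)
```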
These three classes map naturally to the gold-standard labels of the transcribed user utterances: the system should accept in-grammar utterances, reject out-of-grammar input, and ignore crosstalk.
5.1 Context-sensitive Speech Recognition
In the WITAS dialogue system, the "grammar-switching" approach to context-sensitive speech recognition (Lemon, 2004) is implemented using the ANL. At any point in the dialogue, there is a "most active node" at the top of the ANL. The dialogue move type of this node defines the name of a language model that is used for recognizing the next user utterance. For instance, if the most active node is a system yes-no-question, then the appropriate language model is defined by a small context-free grammar covering phrases such as "yes", "that's right", "okay", "negative", "maybe", and so on.
The WITAS dialogue system with context-sensitive speech recognition showed significantly better recognition rates than a previous version of the system that used the full grammar for recognition at all times ((Lemon, 2004) reports an 11.5% reduction in overall utterance recognition error rate). Note, however, that an inherent danger of grammar-switching is that the system may have wrong expectations and thus might activate a language model which is not appropriate for the user's next utterance, leading to misrecognitions or incorrect rejections.
5.2 Results
Table 1 summarizes the evaluation of the baseline system.
                   System behavior
User utterance     accept   reject   ignore
in-grammar         154/22   8        4
out-of-grammar     45       43       4
crosstalk          12       9        2
Accuracy: 65.68%   Weighted f-score: 61.81%
Table 1: WITAS dialogue system baseline results
Table 1 should be read as follows: looking at the first row, in 154 cases the system understood and accepted the correct logical form of an in-grammar utterance by the user. In 22 cases, the system accepted a logical form that differed from the one for the transcribed utterance [2]. In 8 cases, the system rejected an in-grammar utterance, and in 4 cases it did not react to an in-grammar utterance at all. The second row of Table 1 shows that the system accepted 45, rejected 43, and ignored 4 user utterances whose transcriptions were out-of-grammar and could not be parsed. Finally, the third row of the table shows that the baseline system accepted 12 utterances that were not addressed to it, rejected 9, and ignored 2.
Footnote 2: For the computation of accuracy and weighted f-scores, these were counted as wrongly accepted out-of-grammar utterances.
Table 1 shows that a major problem with the baseline system is that it accepts too many user utterances. In particular, the baseline system accepts the wrong interpretation for 22 in-grammar utterances, 45 utterances which it should have rejected as out-of-grammar, and 12 utterances which it should have ignored. All of these cases will generally lead to unintended actions by the system.
6 Classifying and Selecting N-best Recognition Hypotheses
We aim to improve over the baseline results by considering the n-best recognition hypotheses for each user utterance. Our methodology consists of two steps: i) we automatically classify the n-best recognition hypotheses for an utterance as either correctly or incorrectly recognized, and ii) we use a simple selection procedure to choose the "best" hypothesis based on this classification.
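The accuracy and weighted f-score figures used to evaluate both the baseline (Table 1 above) and the learners below can be computed from a confusion matrix; here is a minimal sketch, assuming a class-frequency-weighted average of per-class f1 scores (the paper does not spell out its exact weighting, so treat this as an approximation):

```python
import numpy as np

def accuracy_and_weighted_f1(conf):
    """conf[i, j] = number of utterances with gold class i and predicted class j."""
    conf = np.asarray(conf, dtype=float)
    accuracy = np.trace(conf) / conf.sum()
    weights = conf.sum(axis=1) / conf.sum()        # gold-class frequencies
    f1_per_class = []
    for c in range(conf.shape[0]):
        tp = conf[c, c]
        precision = tp / conf[:, c].sum() if conf[:, c].sum() else 0.0
        recall = tp / conf[c, :].sum() if conf[c, :].sum() else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_per_class.append(f1)
    return accuracy, float(np.dot(weights, f1_per_class))

# e.g. a 3x3 matrix with rows = gold labels and columns = accept/reject/ignore decisions
```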
In order to get multiple recognition hypotheses for all utterances in the experimental data, we re-ran the speech recognizer with the full recognition grammar and 10best output and processed the results offline with WITAS’ parser, obtaining a logical form for each recognition hypothesis (every hypothesis has a logical form since language models are compiled from the parsing grammar). 6.1 Hypothesis Labeling We labeled all hypotheses with one of the following four classes, based on the manual transcriptions of the experimental data: in-grammar, oog (WER ≤ 50), oog (WER > 50), or crosstalk. The in-grammar and crosstalk classes correspond to those described for the baseline. However, we decided to divide up the out-of-grammar class into the two classes oog (WER ≤50) and oog (WER > 50) to get a more finegrained classification. In order to assign hypotheses to the two oog classes, we compute the word error rate (WER) between recognition hypotheses and the transcription of corresponding user utterances. If the WER is ≤50%, we label the hypothesis as oog (WER ≤50), otherwise as oog (WER > 50). We also annotate all misrecognized hypotheses of in-grammar utterances with their respective WER scores. The motivation behind splitting the out-ofgrammar class into two subclasses and for annotating misrecognized in-grammar hypotheses with their WER scores is that we want to distinguish between different “degrees” of misrecognition that can be used by the dialogue system to decide whether it should initiate clarification instead of rejection.3 We use a threshold (50%) on a hypothesis’ WER as an indicator for whether hypotheses should be 3The WITAS dialogue system currently does not support this type of clarification dialogue; the WER annotations are therefore only of theoretical interest. However, an extended system could easily use this information to decide when clarification should be initiated. clarified or rejected. This is adopted from (Gabsdil, 2003), based on the fact that WER correlates with concept accuracy (CA, (Boros et al., 1996)). The WER threshold can be set differently according to the needs of an application. However, one would ideally set a threshold directly on CA scores for this labeling, but these are currently not available for our data. We also introduce the distinction between out-ofgrammar (WER ≤50) and out-of-grammar (WER > 50) in the gold standard for the classification of (whole) user utterances. We split the out-ofgrammar class into two sub-classes depending on whether the 10-best recognition results include at least one hypothesis with a WER ≤50 compared to the corresponding transcription. Thus, if there is a recognition hypothesis which is close to the transcription, an utterance is labeled as oog (WER ≤ 50). In order to relate these classes to different system behaviors, we define that utterances labeled as oog (WER ≤50) should be clarified and utterances labeled as oog (WER > 50) should be rejected by the system. The same is done for all in-grammar utterances for which only misrecognized hypotheses are available. 6.2 Classification: Feature Groups We represent recognition hypotheses as 20dimensional feature vectors for automatic classification. The feature vectors combine recognizer confidence scores, low-level acoustic information, information from WITAS system Information States, and domain knowledge about the different tasks in the scenario. The following list gives an overview of all features (described in more detail below). 1. 
Recognition (6): nbestRank, hypothesisLength, confidence, confidenceZScore, confidence-StandardDeviation, minWordConfidence 2. Utterance (3): minAmp, meanAmp, RMS-amp 3. Dialogue (9): currentDM, currentCommand, mostActiveNode, DMBigramFrequency, qaMatch, aqMatch, #unresolvedNPs, #unresolvedPronouns, #uniqueIndefinites 4. Task (2): taskConflict, #taskConstraintConflict All features are extracted automatically from the output of the speech recognizer, utterance waveforms, IS logs, and a small library of plan operators describing the actions the UAV can perform. The recognition (REC) feature group includes the position of a hypothesis in the n-best list (nbestRank), its length in words (hypothesisLength), and five features representing the recognizer’s confidence assessment. Similar features have been used in the literature (e.g. (Litman et al., 2000)). The minWordConfidence and standard deviation/zScore features are computed from individual word confidences in the recognition output. We expect them to help the machine learners decide between the different WER classes (e.g. a high overall confidence score can sometimes be misleading). The utterance (UTT) feature group reflects information about the amplitude in the speech signal (all features are extracted with the UNIX sox utility). The motivation for including the amplitude features is that they might be useful for detecting crosstalk utterances which are not directly spoken into the headset microphone (e.g. the system accidentally recognizing itself). The dialogue features (DIAL) represent information derived from Information States and can be coarsely divided into two sub-groups. The first group includes features representing general coherence constraints on the dialogue: the dialogue move types of the current utterance (currentDM) and of the most active node in the ANL (mostActiveNode), the command type of the current utterance (currentCommand, if it is a command, null otherwise), statistics on which move types typically follow each other (DMBigramFrequency), and two features (qaMatch and aqMatch) that explicitly encode whether the current and the previous utterance form a valid question answer pair (e.g. yn-question followed by yn-answer). The second group includes features that indicate how many definite NPs and pronouns cannot be resolved in the current Information State (#unresolvedNP, #unresolvedPronouns, e.g. “the car” if no car was mentioned before) and a feature indicating the number of indefinite NPs that can be uniquely resolved in the Information State (#uniqueIndefinites, e.g. “a tower” where there is only one tower in the domain). We include these features because (short) determiners are often confused by speech recognizers. In the WITAS scenario, a misrecognized determiner/demonstrative pronoun can lead to confusing system behavior (e.g. a wrongly recognized “there” will cause the system to ask “Where is that?”). Finally, the task features (TASK) reflect conflicting instructions in the domain. The feature taskConflict indicates a conflict if the current dialogue move type is a command and that command already appears as an active task in the AT. #taskConstraintConflict counts the number of conflicts that arise between the currently active tasks in the AT and the hypothesis. For example, if the UAV is already flying somewhere the preconditions of the action operator for take off (altitude = 0) conflict with those for fly (altitude ̸= 0), so that “take off” would be an unlikely command in this context. 
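A minimal sketch of how such a 20-dimensional vector could be assembled from the four groups is shown below; the dictionary keys simply mirror the feature names in the text and are not the system's actual data structures.

```python
def make_feature_vector(rec, utt, dial, task):
    """Concatenate the four feature groups into one 20-dimensional vector.

    rec, utt, dial and task are assumed to be dicts holding the values listed in
    the text (6 recognition, 3 utterance, 9 dialogue, 2 task features).
    """
    rec_keys = ["nbestRank", "hypothesisLength", "confidence",
                "confidenceZScore", "confidenceStandardDeviation",
                "minWordConfidence"]
    utt_keys = ["minAmp", "meanAmp", "RMSamp"]
    dial_keys = ["currentDM", "currentCommand", "mostActiveNode",
                 "DMBigramFrequency", "qaMatch", "aqMatch",
                 "numUnresolvedNPs", "numUnresolvedPronouns",
                 "numUniqueIndefinites"]
    task_keys = ["taskConflict", "numTaskConstraintConflicts"]
    vector = ([rec[k] for k in rec_keys] + [utt[k] for k in utt_keys]
              + [dial[k] for k in dial_keys] + [task[k] for k in task_keys])
    assert len(vector) == 20
    return vector
```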
6.3 Learners and Selection Procedure We use the memory based learner TiMBL (Daelemans et al., 2002) and the rule induction learner RIPPER (Cohen, 1995) to predict the class of each of the 10-best recognition hypotheses for a given utterance. We chose these two learners because they implement different learning strategies, are well established, fast, freely available, and easy to use. In a second step, we decide which (if any) of the classified hypotheses we actually want to pick as the best result and how the user utterance should be classified as a whole. This task is decided by the following selection procedure (see Figure 1) which implements a preference ordering accept > clarify > reject > ignore.4 1. Scan the list of classified n-best recognition hypotheses top-down. Return the first result that is classified as accept and classify the utterance as accept. 2. If 1. fails, scan the list of classified n-best recognition hypotheses top-down. Return the first result that is classified as clarify and classify the utterance as clarify. 3. If 2. fails, count the number of rejects and ignores in the classified recognition hypotheses. If the number of rejects is larger or equal than the number of ignores classify the utterance as reject. 4. Else classify the utterance as ignore. Figure 1: Selection procedure This procedure is applied to choose from the classified n-best hypotheses for an utterance, independent of the particular machine learner, in all of the following experiments. Since we have a limited amount experimental data in this study (10 hypotheses for each of the 303 user utterances), we use a “leave-one-out” crossvalidation setup for classification. This means that we classify the 10-best hypotheses for a particular utterance based on the 10-best hypotheses of all 302 other utterances and repeat this 303 times. 4Note that in a dialogue application one would not always need to classify all n-best hypotheses in order to select a result but could stop as soon as a hypothesis is classified as correct, which can save processing time. 7 Results and Evaluation The middle part of Table 2 shows the classification results for TiMBL and RIPPER when run with default parameter settings (the other results are included for comparison). The individual rows show the performance when different combinations of feature groups are used for training. The results for the three-way classification are included for comparison with the baseline system and are obtained by combining the two classes clarify and reject. Note that we do not evaluate the performance of the learners for classifying the individual recognition hypotheses but the classification of (whole) user utterances (i.e. including the selection procedure to choose from the classified hypotheses). The results show that both learners profit from the addition of more features concerning dialogue context and task context for classifying user speech input appropriately. The only exception from this trend is a slight performance decrease when task features are added in the four-way classification for RIPPER. Note that both learners already outperform the baseline results even when only recognition features are considered. The most striking result is the performance gain for TiMBL (almost 10%) when we include the dialogue features. As soon as dialogue features are included, TiMBL also performs slightly better than RIPPER. 
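Returning to the selection procedure of Figure 1, it translates almost line by line into code; the following is a minimal sketch, assuming the per-hypothesis labels have already been predicted and are ordered from best to worst recognition rank.

```python
def select(labels):
    """Apply the accept > clarify > reject > ignore preference to an n-best list
    of hypothesis labels (ordered from best to worst recognition rank).

    Returns (utterance_label, index of the chosen hypothesis or None).
    """
    for rank, label in enumerate(labels):          # step 1: first accept wins
        if label == "accept":
            return "accept", rank
    for rank, label in enumerate(labels):          # step 2: first clarify wins
        if label == "clarify":
            return "clarify", rank
    # steps 3 and 4: compare the number of rejects and ignores
    if labels.count("reject") >= labels.count("ignore"):
        return "reject", None
    return "ignore", None

# e.g. select(["reject", "clarify", "accept"]) -> ("accept", 2)
```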
Note that the introduction of (limited) task features, in addition to the DIAL and UTT features, did not have a dramatic impact in this study. One aim for future work is to define and analyze the influence of further task-related features for classification.
7.1 Optimizing TiMBL Parameters
In all of the above experiments we ran the machine learners with their default parameter settings. However, recent research (Daelemans and Hoste, 2002; Marsi et al., 2003) has shown that machine learners often profit from parameter optimization (i.e. finding the best performing parameters on some development data). We therefore selected 40 possible parameter combinations for TiMBL (varying the number of nearest neighbors, feature weighting, and class voting weights) and nested a parameter optimization step into the "leave-one-out" evaluation paradigm (cf. Figure 2) [5]. Note that our optimization method is not as sophisticated as the "Iterative Deepening" approach described by (Marsi et al., 2003), but is similar in the sense that it computes a best-performing parameter setting for each data fold.
Footnote 5: We only optimized parameters for TiMBL because it performed better with default settings than RIPPER and because the findings in (Daelemans and Hoste, 2002) indicate that TiMBL profits more from parameter optimization.
1. Set aside the recognition hypotheses for one of the user utterances.
2. Randomly split the remaining data into an 80% training and 20% test set.
3. Run TiMBL with all possible parameter settings on the generated training and test sets and store the best performing settings.
4. Classify the left-out hypotheses with the recorded parameter settings.
5. Iterate.
Figure 2: Parameter optimization
Table 3 shows the classification results when we run TiMBL with optimized parameter settings and using all feature groups for training.
                            System Behavior
User Utterance              accept  clarify  reject  ignore
in-grammar                  159/2   11       16      0
out-of-grammar (WER ≤ 50)   0       25       5       0
out-of-grammar (WER > 50)   6       6        50      0
crosstalk                   2       5        0       16
Acc/wf-score (3 classes): 86.14/86.39%
Acc/wf-score (4 classes): 82.51/83.29%
Table 3: TiMBL classification results with optimized parameters
Table 3 shows a remarkable 9% improvement for the 3-way and 4-way classification in both accuracy and weighted f-score, compared to using TiMBL with default parameter settings. In terms of WER, the baseline system (cf. Table 1) accepted 233 user utterances with a WER of 21.51%; in contrast, TiMBL with optimized parameters (Ti OP) only accepted 169 user utterances, with a WER of 4.05%. This low WER reflects the fact that if the machine learning system accepts a user utterance, it is almost certainly the correct one. Note that although the machine learning system in total accepted far fewer utterances (169 vs. 233), it accepted more correct utterances than the baseline (159 vs. 154).
7.2 Evaluation
The baseline accuracy for the 3-class problem is 65.68% (61.81% weighted f-score). Our best results, obtained by using TiMBL with parameter optimization, show a 25% weighted f-score improvement over the baseline system.
                            TiMBL                              RIPPER
System or features used     Acc/wf-score     Acc/wf-score      Acc/wf-score     Acc/wf-score
for classification          (3 classes)      (4 classes)       (3 classes)      (4 classes)
Baseline                    65.68/61.81%
REC                         67.66/67.51%     63.04/63.03%      69.31/69.03%     66.67/65.14%
REC+UTT                     68.98/68.32%     64.03/63.08%      72.61/72.33%     70.30/68.61%
REC+UTT+DIAL                77.56/77.59%     72.94/73.70%      74.92/75.34%     71.29/71.62%
REC+UTT+DIAL+TASK           77.89/77.91%     73.27/74.12%      75.25/75.61%     70.63/71.54%
TiMBL (optimized params.)   86.14/86.39%     82.51/83.29%
Oracle                      94.06/94.17%     94.06/94.18%
Table 2: Classification Results
We can compare these results to a hypothetical "oracle" system in order to obtain an upper bound on classification performance. This is an imaginary system which performs perfectly on the experimental data given the 10-best recognition output. The oracle results reveal that for 18 of the in-grammar utterances the 10-best recognition hypotheses do not include the correct logical form at all and therefore have to be classified as clarify or reject (i.e. it is not possible to achieve 100% accuracy on the experimental data). Table 2 shows that our best results are only 8%/12% (absolute) away from the optimal performance.
7.2.1 Costs and χ2 Levels of Significance
We use the χ2 test of independence to statistically compare the different classification results. However, since χ2 only tells us whether two classifications are different from each other, we introduce a simple cost measure (Table 4) for the 3-way classification problem to complement the χ2 results [6].
Footnote 6: We only evaluate the 3-way classification problem because there are no baseline results for the 4-way classification available.
                 System behavior
User utterance   accept  reject  ignore
in-grammar       0       2       2
out-of-grammar   4       2       2
crosstalk        4       2       0
Table 4: Cost measure
Table 4 captures the intuition that the correct behavior of a dialogue system is to accept correctly recognized utterances and ignore crosstalk (cost 0). The worst a system can do is to accept misrecognized utterances or utterances that were not addressed to the system. The remaining classes are assigned a value in between these two extremes. Note that the cost assignment is not validated against user judgments. We only use the costs to interpret the χ2 levels of significance (i.e. as an indicator to compare the relative quality of different systems).
Table 5 shows the differences in cost and the χ2 levels of significance when we compare the classification results. Here, Ti OP stands for TiMBL with optimized parameters, and the stars indicate the level of statistical significance as computed by the χ2 statistics (∗∗∗ indicates significance at p = .001, ∗∗ at p = .01, and ∗ at p = .05) [7].
Footnote 7: Following (Hinton, 1995), we leave out categories with expected frequencies < 5 in the χ2 computation and reduce the degrees of freedom accordingly.
          Baseline    RIPPER     TiMBL      Ti OP
Oracle    −232∗∗∗     −116∗∗∗    −100∗∗∗    −56
Ti OP     −176∗∗∗     −60∗       −44
TiMBL     −132∗∗∗     −16
RIPPER    −116∗∗∗
Table 5: Cost comparisons and χ2 levels of significance for 3-way classification
The cost measure shows the strict ordering: Oracle < Ti OP < TiMBL < RIPPER < Baseline. Note however that according to the χ2 test there is no significant difference between the oracle system and TiMBL with optimized parameters. Table 5 also shows that all of our experiments significantly outperform the baseline system.
8 Conclusion
We used a combination of acoustic confidence and pragmatic plausibility features (i.e. computed from dialogue context) to predict the quality of incoming recognition hypotheses to a multi-modal dialogue system. We classified hypotheses as accept, (clarify), reject, or ignore: functional categories that can be used by a dialogue manager to decide appropriate system reactions. The approach is novel in combining machine learning with n-best processing for spoken dialogue systems using the Information State Update approach.
Our best results, obtained using TiMBL with optimized parameters, show a 25% weighted f-score improvement over a baseline system that uses a “grammar-switching” approach to context-sensitive speech recognition, and are only 8% away from the optimal performance that can be achieved on the data. Clearly, this improvement would result in better dialogue system performance overall. Parameter optimization improved the classification results by 9% compared to using the learner with default settings, which shows the importance of such tuning. Future work points in two directions: first, integrating our methodology into working ISU-based dialogue systems and determining whether or not they improve in terms of standard dialogue evaluation metrics (e.g. task completion). The ISU approach is a particularly useful testbed for our methodology because it collects information pertaining to dialogue context in a central data structure from which it can be easily extracted. This avenue will be further explored in the TALK project8. Second, it will be interesting to investigate the impact of different dialogue and task features for classification and to introduce a distinction between “generic” features that are domain independent and “application-specific” features which reflect properties of individual systems and application scenarios. Acknowledgments We thank Nuance Communications Inc. for the use of their speech recognition and synthesis software and Alexander Koller and Dan Shapiro for reading draft versions of this paper. Oliver Lemon was partially supported by Scottish Enterprise under the Edinburgh-Stanford Link programme. References M. Boros, W. Eckert, F. Gallwitz, G. G¨orz, G. Hanrieder, and H. Niemann. 1996. Towards Understanding Spontaneous Speech: Word Accuracy vs. Concept Accuracy. In Proc. ICSLP-96. Ananlada Chotimongkol and Alexander I. Rudnicky. 2001. N-best Speech Hypotheses Reordering Using Linear Regression. In Proceedings of EuroSpeech 2001, pages 1829–1832. William W. Cohen. 1995. Fast Effective Rule Induction. In Proceedings of the 12th International Conference on Machine Learning. 8EC FP6 IST-507802, http://www.talk-project.org Walter Daelemans and V´eronique Hoste. 2002. Evaluation of Machine Learning Methods for Natural Language Processing Tasks. In Proceedings of LREC-02. Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2002. TIMBL: Tilburg Memory Based Learner, version 4.2, Reference Guide. In ILK Technical Report 02-01. John Dowding, Jean Mark Gawron, Doug Appelt, John Bear, Lynn Cherny, Robert Moore, and Douglas Moran. 1993. GEMINI: a natural language system for spoken-language understanding. In Proceedings of ACL-93. Malte Gabsdil. 2003. Classifying Recognition Results for Spoken Dialogue Systems. In Proceedings of the Student Research Workshop at ACL03. Perry R. Hinton. 1995. Statistics Explained – A Guide For Social Science Students. Routledge. Oliver Lemon and Alexander Gruenstein. 2004. Multithreaded context for robust conversational interfaces: context-sensitive speech recognition and interpretation of corrective fragments. ACM Transactions on Computer-Human Interaction. (to appear). Oliver Lemon, Alexander Gruenstein, and Stanley Peters. 2002. Collaborative activities and multitasking in dialogue systems. Traitement Automatique des Langues, 43(2):131–154. Oliver Lemon. 2004. Context-sensitive speech recognition in ISU dialogue systems: results for the grammar switching approach. 
In Proceedings of the 8th Workshop on the Semantics and Pragmatics of Dialogue, CATALOG’04. Diane J. Litman, Julia Hirschberg, and Marc Swerts. 2000. Predicting Automatic Speech Recognition Performance Using Prosodic Cues. In Proceedings of NAACL-00. Erwin Marsi, Martin Reynaert, Antal van den Bosch, Walter Daelemans, and V´eronique Hoste. 2003. Learning to predict pitch accents and prosodic boundaries in Dutch. In Proceedings of ACL-03. David Traum, Johan Bos, Robin Cooper, Staffan Larsson, Ian Lewin, Colin Matheson, and Massimo Poesio. 1999. A Model of Dialogue Moves and Information State Revision. Technical Report D2.1, Trindi Project. Marilyn Walker, Jerry Wright, and Irene Langkilde. 2000. Using Natural Language Processing and Discourse Features to Identify Understanding Errors in a Spoken Dialogue System. In Proceedings of ICML-2000.
2004
44
Predicting Student Emotions in Computer-Human Tutoring Dialogues Diane J. Litman University of Pittsburgh Department of Computer Science Learning Research and Development Center Pittsburgh PA, 15260, USA [email protected] Kate Forbes-Riley University of Pittsburgh Learning Research and Development Center Pittsburgh PA, 15260, USA [email protected] Abstract We examine the utility of speech and lexical features for predicting student emotions in computerhuman spoken tutoring dialogues. We first annotate student turns for negative, neutral, positive and mixed emotions. We then extract acoustic-prosodic features from the speech signal, and lexical items from the transcribed or recognized speech. We compare the results of machine learning experiments using these features alone or in combination to predict various categorizations of the annotated student emotions. Our best results yield a 19-36% relative improvement in error reduction over a baseline. Finally, we compare our results with emotion prediction in human-human tutoring dialogues. 1 Introduction This paper explores the feasibility of automatically predicting student emotional states in a corpus of computer-human spoken tutoring dialogues. Intelligent tutoring dialogue systems have become more prevalent in recent years (Aleven and Rose, 2003), as one method of improving the performance gap between computer and human tutors; recent experiments with such systems (e.g., (Graesser et al., 2002)) are starting to yield promising empirical results. Another method for closing this performance gap has been to incorporate affective reasoning into computer tutoring systems, independently of whether or not the tutor is dialogue-based (Conati et al., 2003; Kort et al., 2001; Bhatt et al., 2004). For example, (Aist et al., 2002) have shown that adding human-provided emotional scaffolding to an automated reading tutor increases student persistence. Our long-term goal is to merge these lines of dialogue and affective tutoring research, by enhancing our intelligent tutoring spoken dialogue system to automatically predict and adapt to student emotions, and to investigate whether this improves learning and other measures of performance. Previous spoken dialogue research has shown that predictive models of emotion distinctions (e.g., emotional vs. non-emotional, negative vs. nonnegative) can be developed using features typically available to a spoken dialogue system in real-time (e.g, acoustic-prosodic, lexical, dialogue, and/or contextual) (Batliner et al., 2000; Lee et al., 2001; Lee et al., 2002; Ang et al., 2002; Batliner et al., 2003; Shafran et al., 2003). In prior work we built on and generalized such research, by defining a three-way distinction between negative, neutral, and positive student emotional states that could be reliably annotated and accurately predicted in human-human spoken tutoring dialogues (ForbesRiley and Litman, 2004; Litman and Forbes-Riley, 2004). Like the non-tutoring studies, our results showed that combining feature types yielded the highest predictive accuracy. In this paper we investigate the application of our approach to a comparable corpus of computerhuman tutoring dialogues, which displays many different characteristics, such as shorter utterances, little student initiative, and non-overlapping speech. 
We investigate whether we can annotate and predict student emotions as accurately and whether the relative utility of speech and lexical features as predictors is the same, especially when the output of the speech recognizer is used (rather than a human transcription of the student speech). Our best models for predicting three different types of emotion classifications achieve accuracies of 66-73%, representing relative improvements of 19-36% over majority class baseline errors. Our computer-human results also show interesting differences compared with comparable analyses of human-human data. Our results provide an empirical basis for enhancing our spoken dialogue tutoring system to automatically predict and adapt to a student model that includes emotional states. 2 Computer-Human Dialogue Data Our data consists of student dialogues with ITSPOKE (Intelligent Tutoring SPOKEn dialogue system) (Litman and Silliman, 2004), a spoken dialogue tutor built on top of the Why2-Atlas conceptual physics text-based tutoring system (VanLehn et al., 2002). In ITSPOKE, a student first types an essay answering a qualitative physics problem. ITSPOKE then analyzes the essay and engages the student in spoken dialogue to correct misconceptions and to elicit complete explanations. First, the Why2-Atlasback-end parses the student essay into propositional representations, in order to find useful dialogue topics. It uses 3 different approaches (symbolic, statistical and hybrid) competitively to create a representation for each sentence, then resolves temporal and nominal anaphora and constructs proofs using abductive reasoning (Jordan et al., 2004). During the dialogue, student speech is digitized from microphone input and sent to the Sphinx2 recognizer, whose stochastic language models have a vocabulary of 1240 words and are trained with 7720 student utterances from evaluations of Why2-Atlas and from pilot studies of ITSPOKE. Sphinx2’s best “transcription”(recognition output) is then sent to the Why2-Atlas back-end for syntactic, semantic and dialogue analysis. Finally, the text response produced by Why2-Atlas is sent to the Cepstral text-to-speech system and played to the student. After the dialogue, the student revises the essay, thereby ending the tutoring or causing another round of tutoring/essay revision. Our corpus of dialogues with ITSPOKE was collected from November 2003 - April 2004, as part of an evaluation comparing ITSPOKE, Why2-Atlas, and human tutoring (Litman et al., 2004). Subjects are University of Pittsburgh students who have never taken college physics, and who are native English speakers. Subjects first read a small document of background physics material, then work through 5 problems (dialogues) with ITSPOKE. The corpus contains 100 dialogues (physics problems) from 20 subjects, with a total of 2445 student turns and 398 unique words. 15 dialogues have been annotated for emotion as described in Section 3. On average, our dialogues last 19.4 minutes and contain 25 student turns. While ITSPOKE’s word error rate on this corpus is 31.2%, semantic accuracy is more useful for dialogue evaluation as it does not penalize for unimportant word errors. Semantic analysis based on speech recognition is the same as based on perfect transcription 92.4% of the time. An emotionannotated corpus example is shown in Figure 1. 
3 Annotating Student Turns In our data, student “emotions”1 can only be identified indirectly: via what is said and/or how it is 1We use the term “emotion” loosely to cover both affects and attitudes that can impact student learning. ...dialogue excerpt at 18.3 min. into session... ITSPOKE  : What is the magnitude of the acceleration of the packet in the horizontal direction? STUDENT  : i don’t know (NEGATIVE) ITSPOKE  : Ok, let’s consider how forces in the vertical direction affect the motion of a body in a horizontal direction in a different scenario. Say an apple falls from a tree. What force(s) are acting on the apple as it falls? STUDENT  : gravity (NEGATIVE) ITSPOKE  : Yeah. Are there any other forces acting on the apple as it falls? STUDENT  : no (NEGATIVE) ITSPOKE  : Fine. The only force acting on the apple as it falls to the earth is the earth’s gravitational force. In which direction does gravitational force act? STUDENT  : down (POSITIVE) ITSPOKE  : Yep. Therefore, what is the magnitude of this gravitational force in the horizontal direction? STUDENT  : in the direction of the airplane (NEUTRAL) Figure 1: Annotated Spoken Dialogue Excerpt said. In (Litman and Forbes-Riley, 2004), we discuss a scheme for manually annotating student turns in a human-human tutoring dialogue corpus for intuitively perceived emotions.2 These emotions are viewed along a linear scale, shown and defined as follows: negative  neutral  positive. Negative: a student turn that expresses emotions such as confused, bored, irritated. Evidence of a negative emotion can come from many knowledge sources such as lexical items (e.g., “I don’t know” in student  in Figure 1), and/or acoustic-prosodic features (e.g., prior-turn pausing in student  ). Positive: a student turn expressing emotions such as confident, enthusiastic. An example is student  , which displays louder speech and faster tempo. Neutral: a student turn not expressing a negative or positive emotion. An example is student  , where evidence comes from moderate loudness, pitch and tempo. We also distinguish Mixed: a student turn expressing both positive and negative emotions. To avoid influencing the annotator’s intuitive understanding of emotion expression, and because particular emotional cues are not used consistently 2Weak and strong expressions of emotions are annotated. or unambiguously across speakers, our annotation manual does not associate particular cues with particular emotion labels. Instead, it contains examples of labeled dialogue excerpts (as in Figure 1, except on human-human data) with links to corresponding audio files. The cues mentioned in the discussion of Figure 1 above were elicited during post-annotation discussion of the emotions, and are presented here for expository use only. (Litman and Forbes-Riley, 2004) further details our annotation scheme and discusses how it builds on related work. To analyze the reliability of the scheme on our new computer-human data, we selected 15 transcribed dialogues from the corpus described in Section 2, yielding a dataset of 333 student turns, where approximately 30 turns came from each of 10 subjects. The 333 turns were separately annotated by two annotators following the emotion annotation scheme described above. We focus here on three analyses of this data, itemized below. 
While the first analysis provides the most fine-grained distinctions for triggering system adaptation, the second and third (simplified) analyses correspond to those used in (Lee et al., 2001) and (Batliner et al., 2000), respectively. These represent alternative potentially useful triggering mechanisms, and are worth exploring as they might be easier to annotate and/or predict.  Negative, Neutral, Positive (NPN): mixeds are conflated with neutrals.  Negative, Non-Negative (NnN): positives, mixeds, neutrals are conflated as nonnegatives.  Emotional, Non-Emotional (EnE): negatives, positives, mixeds are conflated as Emotional; neutrals are Non-Emotional. Tables 1-3 provide a confusion matrix for each analysis summarizing inter-annotator agreement. The rows correspond to the labels assigned by annotator 1, and the columns correspond to the labels assigned by annotator 2. For example, the annotators agreed on 89 negatives in Table 1. In the NnN analysis, the two annotators agreed on the annotations of 259/333 turns achieving 77.8% agreement, with Kappa = 0.5. In the EnE analysis, the two annotators agreed on the annotations of 220/333 turns achieving 66.1% agreement, with Kappa = 0.3. In the NPN analysis, the two annotators agreed on the annotations of 202/333 turns achieving 60.7% agreement, with Kappa = 0.4. This inter-annotator agreement is on par with that of prior studies of emotion annotation in naturally occurring computer-human dialogues (e.g., agreement of 71% and Kappa of 0.47 in (Ang et al., 2002), Kappa of 0.45 and 0.48 in (Narayanan, 2002), and Kappa ranging between 0.32 and 0.42 in (Shafran et al., 2003)). A number of researchers have accommodated for this low agreement by exploring ways of achieving consensus between disagreed annotations, to yield 100% agreement (e.g (Ang et al., 2002; Devillers et al., 2003)). As in (Ang et al., 2002), we will experiment below with predicting emotions using both our agreed data and consensuslabeled data. negative non-negative negative 89 36 non-negative 38 170 Table 1: NnN Analysis Confusion Matrix emotional non-emotional emotional 129 43 non-emotional 70 91 Table 2: EnE Analysis Confusion Matrix negative neutral positive negative 89 30 6 neutral 32 94 38 positive 6 19 19 Table 3: NPN Analysis Confusion Matrix 4 Extracting Features from Turns For each of the 333 student turns described above, we next extracted the set of features itemized in Figure 2, for use in the machine learning experiments described in Section 5. Motivated by previous studies of emotion prediction in spontaneous dialogues (Ang et al., 2002; Lee et al., 2001; Batliner et al., 2003), our acousticprosodic features represent knowledge of pitch, energy, duration, tempo and pausing. We further restrict our features to those that can be computed automatically and in real-time, since our goal is to use such features to trigger online adaptation in ITSPOKE based on predicted student emotions. F0 and RMS values, representing measures of pitch and loudness, respectively, are computed using Entropic Research Laboratory’s pitch tracker, get f0, with no post-correction. Amount of Silence is approximated as the proportion of zero f0 frames for the turn. 
Turn Duration and Prior Pause Duration are computed Acoustic-Prosodic Features  4 fundamental frequency (f0): max, min, mean, standard deviation  4 energy (RMS): max, min, mean, standard deviation  4 temporal: amount of silence in turn, turn duration, duration of pause prior to turn, speaking rate Lexical Features  human-transcribed lexical items in the turn  ITSPOKE-recognized lexical items in the turn Identifier Features: subject, gender, problem Figure 2: Features Per Student Turn automatically via the start and end turn boundaries in ITSPOKE logs. Speaking Rate is automatically calculated as #syllables per second in the turn. While acoustic-prosodic features address how something is said, lexical features representing what is said have also been shown to be useful for predicting emotion in spontaneous dialogues (Lee et al., 2002; Ang et al., 2002; Batliner et al., 2003; Devillers et al., 2003; Shafran et al., 2003). Our first set of lexical features represents the human transcription of each student turn as a word occurrence vector (indicating the lexical items that are present in the turn). This feature represents the “ideal” performance of ITSPOKE with respect to speech recognition. The second set represents ITSPOKE’s actual best speech recognition hypothesisof what is said in each student turn, again as a word occurrence vector. Finally, we recorded for each turn the 3 “identifier” features shown last in Figure 2. Prior studies (Oudeyer, 2002; Lee et al., 2002) have shown that “subject” and “gender” can play an important role in emotion recognition. “Subject” and “problem” are particularly important in our tutoring domain because students will use our system repeatedly, and problems are repeated across students. 5 Predicting Student Emotions 5.1 Feature Sets and Method We next created the 10 feature sets in Figure 3, to study the effects that various feature combinations had on predicting emotion. We compare an acoustic-prosodic feature set (“sp”), a humantranscribed lexical items feature set (“lex”) and an ITSPOKE-recognized lexical items feature set (“asr”). We further compare feature sets combining acoustic-prosodic and either transcribed or recognized lexical items (“sp+lex”, “sp+asr”). Finally, we compare each of these 5 feature sets with an identical set supplemented with our 3 identifier features (“+id”). sp: 12 acoustic-prosodic features lex: human-transcribed lexical items asr: ITSPOKE recognized lexical items sp+lex: combined sp and lex features sp+asr: combined sp and asr features +id: each above set + 3 identifier features Figure 3: Feature Sets for Machine Learning We use the Weka machine learning software (Witten and Frank, 1999) to automatically learn our emotion prediction models. In our humanhuman dialogue studies (Litman and Forbes, 2003), the use of boosted decision trees yielded the most robust performance across feature sets so we will continue their use here. 5.2 Predicting Agreed Turns As in (Shafran et al., 2003; Lee et al., 2001), our first study looks at the clearer cases of emotional turns, i.e. only those student turns where the two annotators agreed on an emotion label. Tables 4-6 show, for each emotion classification, the mean accuracy (%correct) and standard error (SE) for our 10 feature sets (Figure 3), computed across 10 runs of 10-fold cross-validation.3 For comparison, the accuracy of a standard baseline algorithm (MAJ), which always predicts the majority class, is shown in each caption. 
5.2 Predicting Agreed Turns

As in (Shafran et al., 2003; Lee et al., 2001), our first study looks at the clearer cases of emotional turns, i.e., only those student turns where the two annotators agreed on an emotion label. Tables 4-6 show, for each emotion classification, the mean accuracy (%correct) and standard error (SE) for our 10 feature sets (Figure 3), computed across 10 runs of 10-fold cross-validation.3 For comparison, the accuracy of a standard baseline algorithm (MAJ), which always predicts the majority class, is shown in each caption.

Footnote 3: For each cross-validation, the training and test data are drawn from utterances produced by the same set of speakers. A separate experiment showed that testing on one speaker and training on the others, averaged across all speakers, does not significantly change the results.

For example, Table 4's caption shows that for NnN, always predicting the majority class of non-negative yields an accuracy of 65.65%. In each table, the accuracies are labeled for how they compare statistically to the relevant baseline accuracy (worse, same, or better), as automatically computed in Weka using a two-tailed t-test (p < .05). First note that almost every feature set significantly outperforms the majority class baseline, across all emotion classifications; the only exceptions are the speech-only feature sets without identifier features ("sp-id") in the NnN and EnE tables, which perform the same as the baseline. These results suggest that without any subject or task specific information, acoustic-prosodic features alone are not useful predictors for our two binary classification tasks, at least in our computer-human dialogue corpus. As will be discussed in Section 6, however, "sp-id" feature sets are useful predictors in human-human tutoring dialogues.

  Feat. Set    -id     SE     +id     SE
  sp          64.10   0.80   70.66   0.76
  lex         68.20   0.41   72.74   0.58
  asr         72.30   0.58   70.51   0.59
  sp+lex      71.78   0.77   72.43   0.87
  sp+asr      69.90   0.57   71.44   0.68

Table 4: %Correct, NnN Agreed, MAJ (non-negative) = 65.65%

  Feat. Set    -id     SE     +id     SE
  sp          59.18   0.75   70.68   0.89
  lex         63.18   0.82   75.64   0.37
  asr         66.36   0.54   72.91   0.35
  sp+lex      63.86   0.97   69.59   0.48
  sp+asr      65.14   0.82   69.64   0.57

Table 5: %Correct, EnE Agreed, MAJ (emotional) = 58.64%

  Feat. Set    -id     SE     +id     SE
  sp          55.49   1.01   62.03   0.91
  lex         52.66   0.62   67.84   0.66
  asr         57.95   0.67   65.70   0.50
  sp+lex      62.08   0.56   63.52   0.48
  sp+asr      61.22   1.20   62.23   0.86

Table 6: %Correct, NPN Agreed, MAJ (neutral) = 46.52%

Further note that adding identifier features to the "-id" feature sets almost always improves performance, although this difference is not always significant4; across tables the "+id" feature sets outperform their "-id" counterparts across all feature sets and emotion classifications except one (NnN "asr"). Surprisingly, while (Lee et al., 2002) found it useful to develop separate gender-based emotion prediction models, in our experiment, gender is the only identifier that does not appear in any learned model. Also note that with the addition of identifier features, the speech-only feature sets (sp+id) now do outperform the majority class baselines for all three emotion classifications.

Footnote 4: For any feature set, the mean +/- 2*SE gives the 95% confidence interval. If the confidence intervals for two feature sets are non-overlapping, then their mean accuracies are significantly different with 95% confidence.

With respect to the relative utility of lexical versus acoustic-prosodic features, without identifier features, using only lexical features ("lex" or "asr") almost always produces statistically better performance than using only speech features ("sp"); the only exception is NPN "lex", which performs statistically the same as NPN "sp". This is consistent with others' findings, e.g., (Lee et al., 2002; Shafran et al., 2003). When identifier features are added to both, the lexical sets don't always significantly outperform the speech set; only in NPN and EnE "lex+id" is this the case.
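The non-overlap rule of footnote 4 can be applied mechanically to any pair of cells in Tables 4-6. The small sketch below (the helper function is illustrative only) checks two pairs from Table 4: for "sp", 64.10 +/- 2(0.80) and 70.66 +/- 2(0.76) give the disjoint intervals [62.50, 65.70] and [69.14, 72.18], so adding identifier features is a significant improvement, while for "sp+lex" the corresponding intervals overlap, an example of a difference that is not significant.

    # Footnote 4: two means differ significantly if their 95% confidence
    # intervals (mean +/- 2*SE) do not overlap. Values taken from Table 4.
    def significantly_different(mean1, se1, mean2, se2):
        lo1, hi1 = mean1 - 2 * se1, mean1 + 2 * se1
        lo2, hi2 = mean2 - 2 * se2, mean2 + 2 * se2
        return hi1 < lo2 or hi2 < lo1        # True when the intervals are disjoint

    print(significantly_different(64.10, 0.80, 70.66, 0.76))  # sp:     -id vs +id -> True
    print(significantly_different(71.78, 0.77, 72.43, 0.87))  # sp+lex: -id vs +id -> False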
For NnN, just as using "sp+id" rather than "sp-id" improved performance when compared to the majority baseline, the addition of the identifier features also improves the utility of the speech features when compared to the lexical features. Interestingly, although we hypothesized that the "lex" feature sets would present an upper bound on the performance of the "asr" sets, because the human transcription is more accurate than the speech recognizer, we see that this is not consistently the case. In fact, in the "-id" sets, "asr" always significantly outperforms "lex". A comparison of the decision trees produced in either case, however, does not reveal why this is the case; words chosen as predictors are not very intuitive in either case (e.g., for NnN, an example path through the learned "lex" decision tree says predict negative if the utterance contains the word "will" but does not contain the word "decrease"). Understanding this result is an area for future research. Within the "+id" sets, we see that "lex" and "asr" perform the same in the NnN and NPN classifications; in EnE, "lex+id" significantly outperforms "asr+id". The utility of the "lex" features compared to "asr" also increases when combined with the "sp" features (with and without identifiers), for both NnN and NPN. Moreover, based on results in (Lee et al., 2002; Ang et al., 2002; Forbes-Riley and Litman, 2004), we hypothesized that combining speech and lexical features would result in better performance than either feature set alone. We instead found that the relative performance of these sets depends both on the emotion classification being predicted and the presence or absence of "id" features. Although, consistently with prior research, we find that the combined feature sets usually outperform the speech-only feature sets, the combined feature sets frequently perform worse than the lexical-only feature sets. However, we will see in Section 6 that combining knowledge sources does improve prediction performance in human-human dialogues.

Finally, the bolded accuracies in each table summarize the best-performing feature sets with and without identifiers, with respect both to the %Corr figures shown in the tables and to the relative improvement in error reduction over the baseline (MAJ) error5, after excluding all the feature sets containing "lex" features. In this way we give a better estimate of the best performance our system could accomplish, given the features it can currently access from among those discussed. These best-performing feature sets yield relative improvements over their majority baseline errors ranging from 19-36%. Moreover, although the NPN classification yields the lowest raw accuracies, it yields the highest relative improvement over its baseline.

Footnote 5: Relative improvement over the baseline (MAJ) error for feature set x = (error(MAJ) - error(x)) / error(MAJ), where error(x) is 100 minus the %Corr(x) value shown in Tables 4-6.
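Using the formula in footnote 5, the end-points of the 19-36% range quoted above can be re-derived from Tables 4 and 6 (the particular cells used below are a choice made for illustration). For NnN agreed data, "asr" without identifiers at 72.30% against the 65.65% baseline gives (34.35 - 27.70) / 34.35, about 0.19; for NPN, "asr+id" at 65.70% against the 46.52% baseline gives (53.48 - 34.30) / 53.48, about 0.36.

    # Relative improvement in error reduction (footnote 5), recomputed from the tables.
    def relative_improvement(maj_accuracy, accuracy):
        baseline_error = 100.0 - maj_accuracy
        return (baseline_error - (100.0 - accuracy)) / baseline_error

    print(relative_improvement(65.65, 72.30))  # NnN agreed, "asr" (-id)  -> ~0.19
    print(relative_improvement(46.52, 65.70))  # NPN agreed, "asr" (+id)  -> ~0.36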
5.3 Predicting Consensus Turns

Following (Ang et al., 2002; Devillers et al., 2003), we also explored consensus labeling, both with the goal of increasing our usable data set for prediction, and to include the more difficult annotation cases. For our consensus labeling, the original annotators revisited each originally disagreed case and, through discussion, sought a consensus label. Due to consensus labeling, agreement rose across all three emotion classifications to 100%. Tables 7-9 show, for each emotion classification, the mean accuracy (%correct) and standard error (SE) for our 10 feature sets.

  Feat. Set    -id     SE     +id     SE
  sp          59.10   0.57   64.20   0.52
  lex         63.70   0.47   68.64   0.41
  asr         66.26   0.71   68.13   0.56
  sp+lex      64.69   0.61   65.40   0.63
  sp+asr      65.99   0.51   67.55   0.48

Table 7: %Corr., NnN Consensus, MAJ = 62.47%

  Feat. Set    -id     SE     +id     SE
  sp          56.13   0.94   59.30   0.48
  lex         52.07   0.34   65.37   0.47
  asr         53.78   0.66   64.13   0.51
  sp+lex      60.96   0.76   63.01   0.62
  sp+asr      57.84   0.73   60.89   0.38

Table 8: %Corr., EnE Consensus, MAJ = 55.86%

  Feat. Set    -id     SE     +id     SE
  sp          48.97   0.66   51.90   0.40
  lex         47.86   0.54   57.28   0.44
  asr         51.09   0.66   53.41   0.66
  sp+lex      53.41   0.62   54.20   0.86
  sp+asr      52.50   0.42   53.84   0.42

Table 9: %Corr., NPN Consensus, MAJ = 48.35%

A comparison with Tables 4-6 shows that overall, using consensus-labeled data decreased the performance across all feature sets and emotion classifications. This was also found in (Ang et al., 2002). Moreover, it is no longer the case that every feature set performs as well as or better than its baseline6; within the "-id" sets, NnN "sp" and EnE "lex" perform significantly worse than their baselines. However, again we see that the "+id" sets do consistently better than the "-id" sets and moreover always outperform the baselines. We also see again that using only lexical features almost always yields better performance than using only speech features. In addition, we again see that the "lex" feature sets perform comparably to the "asr" feature sets, rather than outperforming them as we first hypothesized. And finally, we see again that while in most cases combining speech and lexical features yields better performance than using only speech features, the combined feature sets in most cases perform the same or worse than the lexical feature sets.

Footnote 6: The majority class for EnE Consensus is non-emotional; all others are unchanged.

As above, the bolded accuracies summarize the best-performing feature sets from each emotion classification, after excluding all the feature sets containing "lex", to give a better estimate of actual system performance. The best-performing feature sets in the consensus data yield an 11%-19% relative improvement in error reduction compared to the majority class prediction, which is a lower error reduction than seen for agreed data. Moreover, the NPN classification yields the lowest accuracies and the lowest improvements over its baseline.

6 Comparison with Human Tutoring

While building ITSPOKE, we collected a corresponding corpus of spoken human tutoring dialogues, using the same experimental methodology as for our computer tutoring corpus (e.g., same subject pool, physics problems, web and audio interface, etc.); the only difference between the two corpora is whether the tutor is human or computer. As discussed in (Forbes-Riley and Litman, 2004), two annotators had previously labeled 453 turns in this corpus with the emotion annotation scheme discussed in Section 3, and performed a preliminary set of machine learning experiments (different from those reported above). Here, we perform the experiments from Section 5.2 on this annotated human tutoring data, as a step towards understanding the differences between annotating and predicting emotion in human versus computer tutoring dialogues.
With respect to inter-annotator agreement, in the NnN analysis the two annotators had 88.96% agreement (Kappa = 0.74). In the EnE analysis, the annotators had 77.26% agreement (Kappa = 0.55). In the NPN analysis, the annotators had 75.06% agreement (Kappa = 0.60). A comparison with the results in Section 3 shows that all of these figures are higher than their computer tutoring counterparts. With respect to predictive accuracy, Table 10 shows our results for the agreed data.

                 NnN                        EnE                        NPN
  FS        -id    SE    +id    SE     -id    SE    +id    SE     -id    SE    +id    SE
  sp       77.46  0.42  77.56  0.30   84.71  0.39  84.66  0.40   73.09  0.68  74.18  0.40
  lex      80.74  0.42  80.60  0.34   88.86  0.26  86.23  0.34   78.56  0.45  77.18  0.43
  sp+lex   81.37  0.33  80.79  0.41   87.74  0.36  88.31  0.29   79.06  0.38  78.03  0.33

Table 10: Human-Human %Correct, NnN MAJ = 72.21%; EnE MAJ = 50.86%; NPN MAJ = 53.24%

A comparison with Tables 4-6 shows that overall, the human-human data yields increased performance across all feature sets and emotion classifications, although it should be noted that the human-human corpus is over 100 turns larger than the computer-human corpus. Every feature set performs significantly better than its baseline. However, unlike the computer-human data, we don't see the "+id" sets performing better than the "-id" sets; rather, both sets perform about the same. We do see again the "lex" sets yielding better performance than the "sp" sets. However, we now see that in 5 out of 6 cases, combining speech and lexical features yields better performance than using either "sp" or "lex" alone. Finally, these feature sets yield a relative error reduction of 42.45%-77.33% compared to the majority class predictions, which is far better than in our computer tutoring experiments. Moreover, the EnE classification yields the highest raw accuracies and relative improvements over baseline error. We hypothesize that such differences arise in part due to differences between the two corpora: 1) student turns with the computer tutor are much shorter than with the human tutor (and thus contain less emotional content, making both annotation and prediction more difficult), 2) students respond to the computer tutor differently and perhaps more idiosyncratically than to the human tutor, and 3) the computer tutor is less "flexible" than the human tutor (allowing little student initiative, questions, groundings, contextual references, etc.), which also affects student emotional response and its expression.

7 Conclusions and Current Directions

Our results show that acoustic-prosodic and lexical features can be used to automatically predict student emotion in computer-human tutoring dialogues. We examined emotion prediction using a classification scheme developed for our prior human-human tutoring studies (negative/positive/neutral), as well as using two simpler schemes proposed by other dialogue researchers (negative/non-negative, emotional/non-emotional). We used machine learning to examine the impact of different feature sets on prediction accuracy. Across schemes, our feature sets outperform a majority baseline, and lexical features outperform acoustic-prosodic features. While adding identifier features typically also improves performance, combining lexical and speech features does not.
Our analyses also suggest that prediction in consensus-labeled turns is harder than in agreed turns, and that prediction in our computer-human corpus is harder and based on somewhat different features than in our human-human corpus. Our continuing work extends this methodology with the goal of enhancing ITSPOKE to predict and adapt to student emotions. We continue to manually annotate ITSPOKE data, and are exploring partial automation via semi-supervised machine learning (Maeireizo-Tokeshi et al., 2004). Further manual annotation might also improve reliability, as understanding systematic disagreements can lead to revisions of the coding manual. We are also expanding our feature set to include features suggested in prior dialogue research, tutoring-dependent features (e.g., pedagogical goal), and other features available in our logs (e.g., semantic analysis). Finally, we will explore how the recognized emotions can be used to improve system performance. First, we will label human tutor adaptations to emotional student turns in our human tutoring corpus; this labeling will be used to formulate adaptive strategies for ITSPOKE, and to determine which of our three prediction tasks best triggers adaptation.

Acknowledgments

This research is supported by NSF Grants 9720359 & 0328431. Thanks to the Why2-Atlas team and S. Silliman for system design and data collection.

References

G. Aist, B. Kort, R. Reilly, J. Mostow, and R. Picard. 2002. Experimentally augmenting an intelligent tutoring system with human-supplied capabilities: Adding Human-Provided Emotional Scaffolding to an Automated Reading Tutor that Listens. In Proc. Intelligent Tutoring Systems.

V. Aleven and C. P. Rose, editors. 2003. Proc. AI in Education Workshop on Tutorial Dialogue Systems: With a View toward the Classroom.

J. Ang, R. Dhillon, A. Krupski, E. Shriberg, and A. Stolcke. 2002. Prosody-based automatic detection of annoyance and frustration in human-computer dialog. In Proc. International Conf. on Spoken Language Processing (ICSLP).

A. Batliner, K. Fischer, R. Huber, J. Spilker, and E. Nöth. 2000. Desperately seeking emotions: Actors, wizards, and human beings. In Proc. ISCA Workshop on Speech and Emotion.

A. Batliner, K. Fischer, R. Huber, J. Spilker, and E. Nöth. 2003. How to find trouble in communication. Speech Communication, 40:117–143.

K. Bhatt, M. Evens, and S. Argamon. 2004. Hedged responses and expressions of affect in human/human and human/computer tutorial interactions. In Proc. Cognitive Science.

C. Conati, R. Chabbal, and H. Maclaren. 2003. A study on using biometric sensors for monitoring user emotions in educational games. In Proc. User Modeling Workshop on Assessing and Adapting to User Attitudes and Effect: Why, When, and How?

L. Devillers, L. Lamel, and I. Vasilescu. 2003. Emotion detection in task-oriented spoken dialogs. In Proc. IEEE International Conference on Multimedia & Expo (ICME).

K. Forbes-Riley and D. Litman. 2004. Predicting emotion in spoken dialogue from multiple knowledge sources. In Proc. Human Language Technology Conf. of the North American Chap. of the Assoc. for Computational Linguistics (HLT/NAACL).

A. Graesser, K. VanLehn, C. Rose, P. Jordan, and D. Harter. 2002. Intelligent tutoring systems with conversational dialogue. AI Magazine.

P. W. Jordan, M. Makatchev, and K. VanLehn. 2004. Combining competing language understanding approaches in an intelligent tutoring system. In Proc. Intelligent Tutoring Systems.
B. Kort, R. Reilly, and R. W. Picard. 2001. An affective model of interplay between emotions and learning: Reengineering educational pedagogy building a learning companion. In International Conf. on Advanced Learning Technologies.

C.M. Lee, S. Narayanan, and R. Pieraccini. 2001. Recognition of negative emotions from the speech signal. In Proc. IEEE Automatic Speech Recognition and Understanding Workshop.

C.M. Lee, S. Narayanan, and R. Pieraccini. 2002. Combining acoustic and language information for emotion recognition. In International Conf. on Spoken Language Processing (ICSLP).

D. Litman and K. Forbes-Riley. 2004. Annotating student emotional states in spoken tutoring dialogues. In Proc. 5th SIGdial Workshop on Discourse and Dialogue.

D. Litman and K. Forbes. 2003. Recognizing emotion from student speech in tutoring dialogues. In Proc. IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).

D. Litman and S. Silliman. 2004. ITSPOKE: An intelligent tutoring spoken dialogue system. In Companion Proc. of the Human Language Technology Conf. of the North American Chap. of the Assoc. for Computational Linguistics (HLT/NAACL).

D. J. Litman, C. P. Rose, K. Forbes-Riley, K. VanLehn, D. Bhembe, and S. Silliman. 2004. Spoken versus typed human and computer dialogue tutoring. In Proc. Intelligent Tutoring Systems.

B. Maeireizo-Tokeshi, D. Litman, and R. Hwa. 2004. Co-training for predicting emotions with spoken dialogue data. In Companion Proc. Assoc. for Computational Linguistics (ACL).

S. Narayanan. 2002. Towards modeling user behavior in human-machine interaction: Effect of errors and emotions. In Proc. ISLE Workshop on Dialogue Tagging for Multi-modal Human Computer Interaction.

P-Y. Oudeyer. 2002. The production and recognition of emotions in speech: Features and Algorithms. International Journal of Human Computer Studies, 59(1-2):157–183.

I. Shafran, M. Riley, and M. Mohri. 2003. Voice signatures. In Proc. IEEE Automatic Speech Recognition and Understanding Workshop.

K. VanLehn, P. W. Jordan, C. P. Rosé, D. Bhembe, M. Böttner, A. Gaydos, M. Makatchev, U. Pappuswamy, M. Ringenberg, A. Roque, S. Siler, R. Srivastava, and R. Wilson. 2002. The architecture of Why2-Atlas: A coach for qualitative physics essay writing. In Proc. Intelligent Tutoring Systems.

I. H. Witten and E. Frank. 1999. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations.
˜ƒ§ñî Ïj°›„dƒ‰Rzx—X¢ÍŒgª•€\‘ ƒ‰*˜a‚—tŒ5†5˜¦‡Y…t‰¨—“–ƒ—G‰†5¬Œ5—t.ŒJª5€ ¢ ‘ ƒ‰_Ö §?°)„5–„“Œ5†5–ƒ—G‚a”˜¿‚„d¦ŒJª5€\‘ ƒ‰¿…t‡ ¤ ‰—t€¦€ —G‚¢ –ƒ—t›‰”—G‚Ž…}Œ5˜/Ï   ˜ Ö ‚„5—G‚Š„5—¶t/‰”—tŽÒƒ”†˜”€ —tŒO¢ ‚–P‰…}Ž”˜ƒ™䇫‚„d/…}ªd‚a’•ªd‚ЇY…t‰Ÿ‚„5—G‚Ÿ˜a”Œg‚a”Œ5–Rb„5—t˜ €…t‰Š˜a”€ —tŒg‚– ‰…}”˜«‚„5—tŒˆ‚„5¦Œgª•€\‘ ƒ‰\¬˜a‚a”†=Œ ‚„dc’•„5‰—t˜a í ˜ªd’•’-…t‰‚a”† ‘gžt§ñî\‚„d…}˜a›˜a”€¦—tŒ7‚–L‰…}Ž”˜ —G‰Ÿ—t†X©ª5Œ5–R‚˜ƒ™¡È‰ƒ’ …}˜Ž‚…}Œ5˜\–ƒ—tŒ ‘ Š—G‚a‚—t– „d”†¾‚a… ‚„dtƒ‰‘ °›Ž‚„ —)–R…}Œd‹V†d”Œ5–R…t‡% _äØåaæGè3:c…t‰ Gió 'D™M ’ …}˜a‚¢ѐtƒ‰‘•—t-ÈÈi˜†dƒ”€ ”†¦‚a… ‘ ›—G‰ ¤ ª5€”Œg‚˜…t‡‚„d tƒ‰‘=—G‰ —G‚a‚—t–„d”†“—t˜( RäØåÂætè3:t§°)„5Ž\—t¬ËÈiȘ)‚„5—G‚ —G‰ †dƒ”€”†¡—t†X©ª5Œ5–R‚˜M–ƒ—tŒ/‘  —G‚a‚—t–„d”†=—t˜ _äØåaæGè3: …t‰KGió 'D™'š.„d » ¼Ü˜”Œ5˜aPŒJª5€,‘-ƒ‰‡j…t‰ ‚„dPtƒ‰‘ ˜c’•‰Œg‚a”†œŒŸ’•—G‰”Œg‚„d”˜a”˜)€ €”†5¬—G‚a”Žž¦‡j…}Ž…D°›Œ ¤ ‚„dŸ’5‰”†•–ƒ—G‚at™ J…}€ƒ‚€ ”˜ € …t‰Ÿ‚„5—tŒ …}Œd » ¼ tƒ‰‘ ˜a”Œ•˜ac˜ ¤ Žt”Œ ‡Y…t‰—Á’•‰”†5–ƒ—G‚at™#Ùd…t‰Œ•˜a‚—tŒ5–Rt§ ‡j…t‰?‚„d’•‰”†5–ƒ—G‚a.äØå+Gó F+ Rò Gè3:góF Žæ t䨿tè ¤ Žt”Œ¿‡j…t‰ ‚„d,tƒ‰‘ í ‚a‰ —”t”îŸŒP‚„d ˜”Œ7‚a”Œ5–R í O„d,‚a‰—”t””† ‚a… J’•—tŒ§ñî ‚„d ‡j…}Ž…¶°)Œ ¤ » ¼á˜a”Œ•˜a”˜M…t‡ í ‚a‰—”t”¬î —G‰ ˜a‚a”† Ïj‚a‰—”t”ÍÕ,‚a‰ —”t”ŽàŸ‚a‰—¶t”¬ø}Ö ™Mš›„5˜›€ ”—tŒ5˜ ‚„5—G‚#—t » ¼³˜a”Œ•˜a”˜?…t‡ í ‚a‰ —”t”îÁŒ\‚„5—G‚˝¬˜a‚?—G‰L–R…G¢ —tŽ”˜–R”†¬Œ7‚a…«‚„dL’5‰”†5–ƒ—G‚a«äØå Gó  F+ Rò.Gè:góIF Žæ tä æGè ÏÑÐM…}€ƒÒt§#ÓGÔtÔdÕ¶Ö ™¦Ê)”Œ5–Rt§?—t#‚„d”˜ » ¼À˜a”Œ5˜a”˜«…t‡ ‚„dtƒ‰‘ „5—¶t‚„5˜—t€ L€ ”—tŒ5Œ ¤ ™#š.„d » ¼Œd…}ª5Œ ˜a”Œ5˜”˜›‡j…}Ž…D°‚„d Œd…}ª5Œ¡Œµ’V—G‰”Œ7‚„5”˜a”˜ƒ™›ÎŒ¡˜a…}€ Œ5˜‚—tŒ5–R”˜ƒ§t‚„dL—t ¤ …t‰Ž‚„•€€ —¶ž\’5‰…¶O†dL€…t‰‚„5—tŒ …}Œd¦˜”Œ5˜a ‡j…t‰\—PŒ5…}ª5Œ™¦šË…µ‚„d¦–R…}Œg‚a‰—G‰žb‚a…/tƒ‰‘ ˜a”Œ5˜”˜ƒ§«‚„5˜œ¬˜a‚œ˜µ—±‰—tŒ5·t”†ë˜‚Ÿ¬Œ¯°›„5– „²‚„d ‹•‰˜‚ Œd…}ª•Œ ˜”Œ5˜aœ˜a‚a”† ‘Jž“‚„dœ—t ¤ …t‰ Ž‚„5€¥˜ ‚„d ’5‰ƒ‡jƒ‰‰”†¡…}Œdt™÷”˜†5”˜Á‚„5˜Á…}ªd‚a’•ª5‚ƒ§‚„d ˜ažO˜a‚a”€ ˜‚˜µ—t¿‚„5®˜ªd’ ƒ‰’•‰”†5–ƒ—G‚a”˜œ…t‡‚„5“’5‰”†•–ƒ—G‚a”˜ ˜a”Ž”–R‚a”†¯‡j…t‰Ÿ‚„dbtƒ‰‘•˜ŸŒ‚„d=˜a”Œg‚a”Œ5–Rt™áš.„Jª5˜ƒ§ æ çdóƒèF+GèFóRè•äÍóƒåÑçVå ƒó  JF  jè6ØätäÍóF tèFaóƒè•äÍóRåÍçVå ”óIF çVåaæ ”ó ”ä  F+/ætå@:0Gè6 ¶ó> F+ jè6Øä}äÍóIF RæGê¦óƒäjò0Yè3:  F  : tóIF å  ƒóF äÑæ F ƒæGê¦óƒäjò jè3:  F+ êE'}óIFaæGåJF ”åaó täÍóIF Rætê ó”äjò jè3:t§ —tŒ5† ƒò.Gå :góIF  CC  F+õäØå tè <”óRåJFÂæV< F çOæ ƒó  æGè  JF âäØå Gè6 <”óRåJFÂæ5<JF RæGê¦óƒäjò0Yè3:  F+  F ä æGè  3$Ñ8 | 3•9  3 { 3•8-4g~ ! 
Related Work

This work has benefited from several sources. In linguistics, the work reported in (Pritchett, 1992; Grimshaw, 1990; Pinker, 1989) has been influential. In computational linguistics, the VerbNet project (Dang et al., 1998) and the FrameNet project (Fillmore et al., 2003) bear relation to this work. Major differences are that our project aims at defining predicates for every English verb in a systematic manner, linking the selectional restrictions for their semantic roles to a well-established and widely used ontology for nouns, namely WordNet, and placing them in a hierarchy of predicates where inferences and semantic roles can be inherited. Moreover, the definition of our predicates can be tested by running the semantic interpretation algorithm on any corpus or on sentences typed by the user at the console.

Conclusion

An approach to building verb predicates has been presented. The construction of the predicates is essentially linked to an algorithm for determining the semantic roles of the predicates. The definitions of the predicates are done semi-automatically and are being refined and tested with the help of a semantic interpretation algorithm that uses the definitions for determining verb meaning and semantic roles for sentences selected from a corpus, or entered at the terminal. The algorithm and the predicates are being used to automatically construct a corpus of semantically annotated sentences.

References

Hoa Trang Dang, Karin Kipper, Martha Palmer, and Joseph Rosenzweig. 1998. Investigating regular sense extensions based on intersective Levin classes. In COLING-ACL '98, pages 293–299, Montreal, Quebec.

C. Fellbaum. 1998. A semantic network of English verbs. In C. Fellbaum, editor, WordNet: An Electronic Lexical Database. MIT Press, Cambridge, Mass.

C. J. Fillmore, C. R. Johnson, and M. R. L. Petruck. 2003. Background to FrameNet. International Journal of Lexicography, 16:235–250.

F. Gomez. 2001. An algorithm for aspects of semantic interpretation using an enhanced WordNet. In Proceedings of NAACL-2001, pages 87–94.

F. Gomez. 2004. Grounding the ontology on the semantic interpretation algorithm. In Proceedings of the Second International WordNet Conference, pages 124–129, Masaryk University, Brno.

J. Grimshaw. 1990. Argument Structure. MIT Press, Cambridge, Mass.

George Miller. 1998. Nouns in WordNet. In C. Fellbaum, editor, WordNet: An Electronic Lexical Database. MIT Press, Cambridge, Mass.

S. Pinker. 1989. Learnability and Cognition. MIT Press, Cambridge, Mass.

B. L. Pritchett. 1992. Grammatical Competence and Parsing Performance. The University of Chicago Press, Chicago, Illinois.

Y. A. Wilks. 1975. Preference semantics. In E. L. Keenan, editor, Formal Semantics of Natural Language. Cambridge University Press, Cambridge, UK.
Large-Scale Induction and Evaluation of Lexical Resources from the Penn-II Treebank Ruth O’Donovan, Michael Burke, Aoife Cahill, Josef van Genabith, Andy Way National Centre for Language Technology and School of Computing Dublin City University Glasnevin Dublin 9 Ireland {rodonovan,mburke,acahill,josef,away}@computing.dcu.ie Abstract In this paper we present a methodology for extracting subcategorisation frames based on an automatic LFG f-structure annotation algorithm for the Penn-II Treebank. We extract abstract syntactic function-based subcategorisation frames (LFG semantic forms), traditional CFG categorybased subcategorisation frames as well as mixed function/category-based frames, with or without preposition information for obliques and particle information for particle verbs. Our approach does not predefine frames, associates probabilities with frames conditional on the lemma, distinguishes between active and passive frames, and fully reflects the effects of long-distance dependencies in the source data structures. We extract 3586 verb lemmas, 14348 semantic form types (an average of 4 per lemma) with 577 frame types. We present a large-scale evaluation of the complete set of forms extracted against the full COMLEX resource. 1 Introduction Lexical resources are crucial in the construction of wide-coverage computational systems based on modern syntactic theories (e.g. LFG, HPSG, CCG, LTAG etc.). However, as manual construction of such lexical resources is time-consuming, errorprone, expensive and rarely ever complete, it is often the case that limitations of NLP systems based on lexicalised approaches are due to bottlenecks in the lexicon component. Given this, research on automating lexical acquisition for lexically-based NLP systems is a particularly important issue. In this paper we present an approach to automating subcategorisation frame acquisition for LFG (Kaplan and Bresnan, 1982) i.e. grammatical function-based systems. LFG has two levels of structural representation: c(onstituent)structure, and f(unctional)-structure. LFG differentiates between governable (argument) and nongovernable (adjunct) grammatical functions. Subcategorisation requirements are enforced through semantic forms specifying the governable grammatical functions required by a particular predicate (e.g. FOCUS⟨(↑ SUBJ)(↑ OBLon)⟩). Our approach is based on earlier work on LFG semantic form extraction (van Genabith et al., 1999) and recent progress in automatically annotating the Penn-II treebank with LFG f-structures (Cahill et al., 2004b). Depending on the quality of the f-structures, reliable LFG semantic forms can then be generated quite simply by recursively reading off the subcategorisable grammatical functions for each local pred value at each level of embedding in the f-structures. The work reported in (van Genabith et al., 1999) was small scale (100 trees), proof of concept and required considerable manual annotation work. In this paper we show how the extraction process can be scaled to the complete Wall Street Journal (WSJ) section of the Penn-II treebank, with about 1 million words in 50,000 sentences, based on the automatic LFG f-structure annotation algorithm described in (Cahill et al., 2004b). In addition to extracting grammatical function-based subcategorisation frames, we also include the syntactic categories of the predicate and its subcategorised arguments, as well as additional details such as the prepositions required by obliques, and particles accompanying particle verbs. 
Our method does not predefine the frames to be extracted. In contrast to many other approaches, it discriminates between active and passive frames, properly reflects long distance dependencies and assigns conditional probabilities to the semantic forms associated with each predicate. Section 2 reviews related work in the area of automatic subcategorisation frame extraction. Our methodology and its implementation are presented in Section 3. Section 4 presents the results of our lexical extraction. In Section 5 we evaluate the complete extracted lexicon against the COMLEX resource (MacLeod et al., 1994). To our knowledge, this is the largest evaluation of subcategorisation frames for English. In Section 6, we conclude and give suggestions for future work. 2 Related Work Creating a (subcategorisation) lexicon by hand is time-consuming, error-prone, requires considerable linguistic expertise and is rarely, if ever, complete. In addition, a system incorporating a manually constructed lexicon cannot easily be adapted to specific domains. Accordingly, many researchers have attempted to construct lexicons automatically, especially for English. (Brent, 1993) relies on local morphosyntactic cues (such as the -ing suffix, except where such a word follows a determiner or a preposition other than to) in the untagged Brown Corpus as probabilistic indicators of six different predefined subcategorisation frames. The frames do not include details of specific prepositions. (Manning, 1993) observes that Brent’s recognition technique is a “rather simplistic and inadequate approach to verb detection, with a very high error rate”. Manning feeds the output from a stochastic tagger into a finite state parser, and applies statistical filtering to the parsing results. He predefines 19 different subcategorisation frames, including details of prepositions. Applying this technique to approx. 4 million words of New York Times newswire, Manning acquires 4900 subcategorisation frames for 3104 verbs, an average of 1.6 per verb. (Ushioda et al., 1993) run a finite state NP parser on a POS-tagged corpus to calculate the relative frequency of just six subcategorisation verb classes. In addition, all prepositional phrases are treated as adjuncts. For 1565 tokens of 33 selected verbs, they report an accuracy rate of 83%. (Briscoe and Carroll, 1997) observe that in the work of (Brent, 1993), (Manning, 1993) and (Ushioda et al., 1993), “the maximum number of distinct subcategorization classes recognized is sixteen, and only Ushioda et al. attempt to derive relative subcategorization frequency for individual predicates”. In contrast, the system of (Briscoe and Carroll, 1997) distinguishes 163 verbal subcategorisation classes by means of a statistical shallow parser, a classifier of subcategorisation classes, and a priori estimates of the probability that any verb will be a member of those classes. More recent work by Korhonen (2002) on the filtering phase of this approach has improved results. Korhonen experiments with the use of linguistic verb classes for obtaining more accurate back-off estimates for use in hypothesis selection. Using this extended approach, the average results for 45 semantically classified test verbs evaluated against hand judgements are precision 87.1% and recall 71.2%. By comparison, the average results for 30 verbs not classified semantically are precision 78.2% and recall 58.7%. 
Carroll and Rooth (1998) use a hand-written head-lexicalised context-free grammar and a text corpus to compute the probability of particular subcategorisation scenarios. The extracted frames do not contain details of prepositions. More recently, a number of researchers have applied similar techniques to derive resources for other languages, especially German. One of these, (Schulte im Walde, 2002), induces a computational subcategorisation lexicon for over 14,000 German verbs. Using sentences of limited length, she extracts 38 distinct frame types, which contain maximally three arguments each. The frames may optionally contain details of particular prepositional use. Her evaluation on over 3000 frequently occurring verbs against the German dictionary Duden Das Stilw¨orterbuch is similar in scale to ours and is discussed further in Section 5. There has also been some work on extracting subcategorisation details from the Penn Treebank. (Kinyon and Prolo, 2002) introduce a tool which uses fine-grained rules to identify the arguments, including optional arguments, of each verb occurrence in the Penn Treebank, along with their syntactic functions. They manually examined the 150+ possible sequences of tags, both functional and categorial, in Penn-II and determined whether the sequence in question denoted a modifier, argument or optional argument. Arguments were then mapped to traditional syntactic functions. As they do not include an evaluation, currently it is impossible to say how effective this technique is. (Xia et al., 2000) and (Chen and Vijay-Shanker, 2000) extract lexicalised TAGs from the Penn Treebank. Both techniques implement variations on the approaches of (Magerman, 1994) and (Collins, 1997) for the purpose of differentiating between complement and adjunct. In the case of (Xia et al., 2000), invalid elementary trees produced as a result of annotation errors in the treebank are filtered out using linguistic heuristics. (Hockenmaier et al., 2002) outline a method for the automatic extraction of a large syntactic CCG lexicon from Penn-II. For each tree, the algorithm annotates the nodes with CCG categories in a topdown recursive manner. In order to examine the coverage of the extracted lexicon in a manner similar to (Xia et al., 2000), (Hockenmaier et al., 2002) compared the reference lexicon acquired from Sections 02-21 with a test lexicon extracted from Section 23 of the WSJ. It was found that the reference CCG lexicon contained 95.09% of the entries in the test lexicon, while 94.03% of the entries in the test TAG lexicon also occurred in the reference lexicon. Both approaches involve extensive correction and clean-up of the treebank prior to lexical extraction. 3 Our Methodology The first step in the application of our methodology is the production of a treebank annotated with LFG f-structure information. F-structures are feature structures which represent abstract syntactic information, approximating to basic predicate-argumentmodifier structures. We utilise the automatic annotation algorithm of (Cahill et al., 2004b) to derive a version of Penn-II where each node in each tree is annotated with an LFG functional annotation (i.e. an attribute value structure equation). Trees are traversed top-down, and annotation is driven by categorial, basic configurational, trace and Penn-II functional tag information in local subtrees of mostly depth one (i.e. CFG rules). 
The annotation procedure is dependent on locating the head daughter, for which the scheme of (Magerman, 1994) with some changes and amendments is used. The head is annotated with the LFG equation ↑=↓. Linguistic generalisations are provided over the left (the prefix) and the right (suffix) context of the head for each syntactic category occurring as the mother node of such heads. To give a simple example, the rightmost NP to the left of a VP head under an S is likely to be its subject (↑SUBJ =↓), while the leftmost NP to the right of the V head of a VP is most probably its object (↑OBJ =↓). (Cahill et al., 2004b) provide four sets of annotation principles, one for non-coordinate configurations, one for coordinate configurations, one for traces (long distance dependencies) and a final 'catch all and clean up' phase. Distinguishing between argument and adjunct is an inherent step in the automatic assignment of functional annotations. The satisfactory treatment of long distance dependencies by the annotation algorithm is imperative for the extraction of accurate semantic forms. The Penn Treebank employs a rich arsenal of traces and empty productions (nodes which do not realise any lexical material) to co-index displaced material with the position where it should be interpreted semantically. The algorithm of (Cahill et al., 2004b) translates the traces into corresponding re-entrancies in the f-structure representation (Figure 1). Passive movement is also captured and expressed at f-structure level using a passive:+ annotation. Once a treebank tree is annotated with feature structure equations by the annotation algorithm, the equations are collected and passed to a constraint solver which produces the f-structures.

[Figure 1: Penn-II style tree with long distance dependency trace and corresponding reentrancy in f-structure, for the topicalised sentence "U.N. signs treaty, the headline said": the fronted clause S-TPC-1 is co-indexed with the trace under the VP of said, and at f-structure the TOPIC value is re-entrant with the COMP of say.]

In order to ensure the quality of the semantic forms extracted by our method, we must first ensure the quality of the f-structure annotations. (Cahill et al., 2004b) measure annotation quality in terms of precision and recall against manually constructed, gold-standard f-structures for 105 randomly selected trees from section 23 of the WSJ section of Penn-II. The algorithm currently achieves an F-score of 96.3% for complete f-structures and 93.6% for preds-only f-structures.1 Our semantic form extraction methodology is based on the procedure of (van Genabith et al., 1999): For each f-structure generated, for each level of embedding we determine the local PRED value and collect the subcategorisable grammatical functions present at that level of embedding. Consider the f-structure in Figure 1. From this we recursively extract the following nonempty semantic forms: say([subj,comp]), sign([subj,obj]). In effect, in both (van Genabith et al., 1999) and our approach semantic forms are reverse engineered from automatically generated f-structures for treebank trees. We extract the following subcategorisable syntactic functions: SUBJ, OBJ, OBJ2, OBLprep, OBL2prep, COMP, XCOMP and PART. Adjuncts (e.g. ADJ, APP etc) are not included in the semantic forms. PART is not a syntactic function in the strict sense but we capture the relevant co-occurrence patterns of verbs and particles in the semantic forms.
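To make the recursive read-off concrete, the following is a minimal sketch (not the authors' implementation) of how semantic forms of this kind can be collected from an f-structure represented as a nested dictionary. The attribute names follow the paper; the function name, the data layout and the simplified handling of obliques, particles and re-entrancies are illustrative assumptions only.

```python
# Sketch of the recursive semantic-form extraction described above.
SUBCAT = ["subj", "obj", "obj2", "obl", "obl2", "comp", "xcomp", "part"]

def extract_semantic_forms(fstr, forms=None):
    """Collect pred([gfs...]) entries at every level of embedding."""
    if forms is None:
        forms = []
    if isinstance(fstr, dict):
        if "pred" in fstr:
            gfs = []
            for gf in SUBCAT:
                if gf in fstr:
                    if gf.startswith("obl"):        # record the governed preposition
                        gfs.append(f"{gf}:{fstr[gf].get('pform', '?')}")
                    elif gf == "part":              # record the actual particle
                        gfs.append(f"part:{fstr[gf]}")
                    else:
                        gfs.append(gf)
            if gfs:                                 # only non-empty semantic forms
                entry = f"{fstr['pred']}([{','.join(gfs)}]"
                entry += ",p)" if fstr.get("passive") else ")"
                forms.append(entry)
        for value in fstr.values():                 # recurse into embedded f-structures
            extract_semantic_forms(value, forms)
    return forms

# F-structure for "U.N. signs treaty, the headline said" (cf. Figure 1).
fs = {"pred": "say",
      "subj": {"spec": "the", "pred": "headline"},
      "comp": {"pred": "sign",
               "subj": {"pred": "U.N."},
               "obj": {"pred": "treaty"}}}
print(extract_semantic_forms(fs))   # ['say([subj,comp])', 'sign([subj,obj])']
```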
Just as OBL includes the prepositional head of the PP, PART includes the actual particle which occurs e.g. add([subj,obj,part:up]). In the work presented here we substantially extend the approach of (van Genabith et al., 1999) as regards coverage, granularity and evaluation: First, we scale the approach of (van Genabith et al., 1999) which was proof of concept on 100 trees to the full WSJ section of the Penn-II Treebank. Second, our approach fully reflects long distance dependencies, indicated in terms of traces in the Penn-II Treebank and corresponding re-entrancies at f-structure. Third, in addition to abstract syntactic function-based subcategorisation frames we compute frames for syntactic function-CFG category pairs, both for the verbal heads and their arguments and also generate pure CFG-based subcat frames. Fourth, our method differentiates between frames captured for active or passive constructions. Fifth, our method associates conditional probabilities with frames. In contrast to much of the work reviewed in the previous section, our system is able to produce surface syntactic as well as abstract functional subcategorisation details. To incorporate CFG details into the extracted semantic forms, we add an extra feature to the generated f-structures, the value of which is the syntactic category of the pred at each level of embedding. Exploiting this information, the extracted semantic form for the verb sign looks as follows: sign(v,[subj(np),obj(np)]). We have also extended the algorithm to deal with passive voice and its effect on subcategorisation behaviour. Consider Figure 2: not taking voice into account, the algorithm extracts an intransitive frame outlaw([subj]) for the transitive outlaw. To correct this, the extraction algorithm uses the feature value pair passive:+, which appears in the f-structure at the level of embedding of the verb in question, to mark that predicate as occurring in the passive: outlaw([subj],p). In order to estimate the likelihood of the cooccurrence of a predicate with a particular argument list, we compute conditional probabilities for subcategorisation frames based on the number of token occurrences in the corpus. Given a lemma l and an argument list s, the probability of s given l is estimated as:

P(s|l) := count(l, s) / \sum_{i=1}^{n} count(l, s_i)

We use thresholding to filter possible error judgements by our system. Table 1 shows the attested semantic forms for the verb accept with their associated conditional probabilities. Note that were the distinction between active and passive not taken into account, the intransitive occurrence of accept would have been assigned an unmerited probability.

1 Preds-only measures only paths ending in PRED:VALUE so features such as number, person etc are not included.
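Under the same assumptions as the sketch above, the relative-frequency estimate and the relative threshold can be written down directly. The counting scheme, function names and the toy observation list are illustrative and are not the authors' implementation; the 1% cut-off mirrors the thresholding discussed in the text.

```python
from collections import Counter, defaultdict

def frame_probabilities(observations):
    """observations: iterable of (lemma, frame) tokens, e.g. ('accept', '[subj,obj]').
    Returns P(frame | lemma) estimated by relative frequency."""
    counts = defaultdict(Counter)
    for lemma, frame in observations:
        counts[lemma][frame] += 1
    probs = {}
    for lemma, frames in counts.items():
        total = sum(frames.values())                 # sum_i count(l, s_i)
        probs[lemma] = {f: c / total for f, c in frames.items()}
    return probs

def apply_threshold(probs, threshold=0.01):
    """Discard frames whose conditional probability is <= the relative threshold."""
    return {lemma: {f: p for f, p in frames.items() if p > threshold}
            for lemma, frames in probs.items()}

# Toy token counts for three of the frames attested for "accept".
tokens = ([("accept", "[subj,obj]")] * 122
          + [("accept", "[subj],p")] * 9
          + [("accept", "[subj]")] * 2)
print(apply_threshold(frame_probabilities(tokens)))
# On this toy subset: [subj,obj] ~0.917, [subj],p ~0.068, [subj] ~0.015 -- all above the 1% cut-off.
```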
[Figure 2: Automatically generated f-structure for the string wsj 0003 23 "By 1997, almost all remaining uses of cancer-causing asbestos will be outlawed."]

Table 1: Semantic Forms for the verb accept marked with p for passive use
Semantic Form               | Frequency | Probability
accept([subj,obj])          | 122       | 0.813
accept([subj],p)            | 9         | 0.060
accept([subj,comp])         | 5         | 0.033
accept([subj,obl:as],p)     | 3         | 0.020
accept([subj,obj,obl:as])   | 3         | 0.020
accept([subj,obj,obl:from]) | 3         | 0.020
accept([subj])              | 2         | 0.013
accept([subj,obj,obl:at])   | 1         | 0.007
accept([subj,obj,obl:for])  | 1         | 0.007
accept([subj,obj,xcomp])    | 1         | 0.007

4 Results

We extract non-empty semantic forms2 for 3586 verb lemmas and 10969 unique verbal semantic form types (lemma followed by non-empty argument list). Including prepositions associated with the OBLs and particles, this number rises to 14348, an average of 4.0 per lemma (Table 2). The number of unique frame types (without lemma) is 38 without specific prepositions and particles, 577 with (Table 3). F-structure annotations allow us to distinguish passive and active frames.

2 Frames with at least one subcategorised grammatical function.

Table 2: Number of Semantic Form Types
                | Without Prep/Part | With Prep/Part
Sem. Form Types | 10969             | 14348
Active          | 8516              | 11367
Passive         | 2453              | 2981

Table 3: Number of Distinct Frames for Verbs (not including syntactic category for grammatical function)
                   | Without Prep/Part | With Prep/Part
# Frame Types      | 38                | 577
# Singletons       | 1                 | 243
# Twice Occurring  | 1                 | 84
# Occurring max. 5 | 7                 | 415
# Occurring > 5    | 31                | 162

5 COMLEX Evaluation

We evaluated our induced (verbal) semantic forms against COMLEX (MacLeod et al., 1994). COMLEX defines 138 distinct verb frame types without the inclusion of specific prepositions or particles. The following is a sample entry for the verb reimburse:

(VERB :ORTH "reimburse" :SUBC ((NP-NP) (NP-PP :PVAL ("for")) (NP)))

Each verb has a :SUBC feature, specifying its subcategorisation behaviour. For example, reimburse can occur with two noun phrases (NP-NP), a noun phrase and a prepositional phrase headed by "for" (NP-PP :PVAL ("for")) or a single noun phrase (NP). Note that the details of the subject noun phrase are not included in COMLEX frames. Each of the complement types which make up the value of the :SUBC feature is associated with a formal frame definition which looks as follows:

(vp-frame np-np :cs ((np 2)(np 3)) :gs (:subject 1 :obj 2 :obj2 3) :ex "she asked him his name")

The value of the :cs feature is the constituent structure of the subcategorisation frame, which lists the syntactic CF-PSG constituents in sequence. The value of the :gs feature is the grammatical structure which indicates the functional role played by each of the CF-PSG constituents. The elements of the constituent structure are indexed, and referenced in the :gs field. This mapping between constituent structure and functional structure makes the information contained in COMLEX suitable as an evaluation standard for the LFG semantic forms which we induce.

5.1 COMLEX-LFG Mapping

We devised a common format for our induced semantic forms and those contained in COMLEX. This is summarised in Table 4. COMLEX does not distinguish between obliques and objects so we converted Obji to OBLi as required. In addition, COMLEX does not explicitly differentiate between COMPs and XCOMPs, but does encode control information for any Comps which occur, thus allowing us to deduce the distinction automatically.
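Before turning to the comparison itself, the following is a small sketch of how such an evaluation can be scored once both resources are in the common format. The mapping dictionary, the per-verb set comparison and the toy frames for reimburse are assumptions made for illustration; they are not taken from the paper's own evaluation code.

```python
# Map both resources into merged labels (cf. Table 4) and score induced vs. gold frames.
COMLEX_TO_MERGED = {"subject": "SUBJ", "object": "OBJ", "obj2": "OBJ2",
                    "obj3": "OBL", "obj4": "OBL2", "part": "PART"}  # Comp -> COMP/XCOMP needs control info

def normalise(frame, mapping=None):
    """A frame is a collection of function labels; rename them into the merged format."""
    mapping = mapping or {}
    return frozenset(mapping.get(f.lower(), f.upper()) for f in frame)

def precision_recall_f(induced, gold):
    """induced, gold: dicts mapping a verb to a set of (normalised) frames."""
    tp = sum(len(induced[v] & gold[v]) for v in induced if v in gold)
    n_induced = sum(len(f) for f in induced.values())
    n_gold = sum(len(f) for f in gold.values())
    p, r = tp / n_induced, tp / n_gold
    return p, r, 2 * p * r / (p + r)

induced = {"reimburse": {normalise(["subj", "obj", "obj2"]),
                         normalise(["subj", "obj"])}}
gold = {"reimburse": {normalise(["subject", "object", "obj2"], COMLEX_TO_MERGED),
                      normalise(["subject", "object", "obj3"], COMLEX_TO_MERGED),
                      normalise(["subject", "object"], COMLEX_TO_MERGED)}}
print(precision_recall_f(induced, gold))   # (1.0, 0.666..., 0.8)
```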
The manually constructed COMLEX entries provided us with a gold standard against which we evaluated the automatically induced frames for the 2992 (active) verbs that both resources have in common.

Table 4: COMLEX and LFG Syntactic Functions
LFG   | COMLEX  | Merged
SUBJ  | Subject | SUBJ
OBJ   | Object  | OBJ
OBJ2  | Obj2    | OBJ2
OBL   | Obj3    | OBL
OBL2  | Obj4    | OBL2
COMP  | Comp    | COMP
XCOMP | Comp    | XCOMP
PART  | Part    | PART

We use the computed conditional probabilities to set a threshold to filter the selection of semantic forms. As some verbs occur less frequently than others we felt it was important to use a relative rather than absolute threshold. For a threshold of 1%, we disregard any frames with a conditional probability of less than or equal to 0.01. We carried out the evaluation in a similar way to (Schulte im Walde, 2002). The scale of our evaluation is comparable to hers. This allows us to make tentative comparisons between our respective results. The figures shown in Table 5 are the results of three different kinds of evaluation with the threshold set to 1% and 5%. The effect of the threshold increase is obvious in that Precision goes up for each of the experiments while Recall goes down.

Table 5: COMLEX Comparison
        | Threshold 1%            | Threshold 5%
        | P      R      F-Score   | P      R      F-Score
Exp. 1  | 79.0%  59.6%  68.0%     | 83.5%  54.7%  66.1%
Exp. 2  | 77.1%  50.4%  61.0%     | 81.4%  44.8%  57.8%
Exp. 2a | 76.4%  44.5%  56.3%     | 80.9%  39.0%  52.6%
Exp. 3  | 73.7%  22.1%  34.0%     | 78.0%  18.3%  29.6%
Exp. 3a | 73.3%  19.9%  31.3%     | 77.6%  16.2%  26.8%

For Exp 1, we excluded prepositional phrases entirely from the comparison, i.e. assumed that PPs were adjunct material (e.g. [subj,obl:for] becomes [subj]). Our results are better for Precision than for Recall compared to Schulte im Walde (op cit.), who reports Precision of 74.53%, Recall of 69.74% and an F-score of 72.05%. Exp 2 includes prepositional phrases but not parameterised for particular prepositions (e.g. [subj,obl:for] becomes [subj,obl]). While our figures for Recall are again lower, our results for Precision are considerably higher than those of Schulte im Walde (op cit.) who recorded Precision of 60.76%, Recall of 63.91% and an F-score of 62.30%. For Exp. 3, we used semantic forms which contained details of specific prepositions for any subcategorised prepositional phrase. Our Precision figures are again high (in comparison to 65.52% as recorded by (Schulte im Walde, 2002)). However, our Recall is very low (compared to the 50.83% that Schulte im Walde (op cit.) reports). Consequently our F-score is also low (Schulte im Walde (op cit.) records an F-score of 57.24%). Experiments 2a and 3a are similar to Experiments 2 and 3 respectively except they include the specific particle associated with each PART.

5.1.1 Directional Prepositions

There are a number of possible reasons for our low recall scores for Experiment 3 in Table 5. It is a well-documented fact (Briscoe and Carroll, 1997) that subcategorisation frames (and their frequencies) vary across domains. We have extracted frames from one domain (the WSJ) whereas COMLEX was built using examples from the San Jose Mercury News, the Brown Corpus, several literary works from the Library of America, scientific abstracts from the U.S. Department of Energy, and the WSJ. For this reason, it is likely to contain a greater variety of subcategorisation frames than our induced lexicon. It is also possible that due to human error COMLEX contains subcategorisation frames, the validity of which may be in doubt.
This is due to the fact that the aim of the COMLEX project was to construct as complete a set of subcategorisation frames as possible, even for infrequent verbs. Lexicographers were allowed to extrapolate from the citations found, a procedure which is bound to be less certain than the assignment of frames based entirely on existing examples. Our recall figure was particularly low in the case of evaluation using details of prepositions (Experiment 3). This can be accounted for by the fact that COMLEX errs on the side of overgeneration when it comes to preposition assignment. This is particularly true of directional prepositions, a list of 31 of which has been prepared and is assigned in its entirety by default to any verb which can potentially appear with any directional preposition. In a subsequent experiment, we incorporate this list of directional prepositions by default into our semantic form induction process in the same way as the creators of COMLEX have done. Table 6 shows the results of this experiment. As expected there is a significant imPrecision Recall F-Score Experiment 3 81.7% 40.8% 54.4% Experiment 3a 83.1% 35.4% 49.7% Table 6: COMLEX Comparison using p-dir(Threshold of 1%) Passive Precision Recall F-Score Experiment 2 80.2% 54.7% 65.1% Experiment 2a 79.7% 46.2% 58.5% Experiment 3 72.6% 33.4% 45.8% Experiment 3a 72.3% 29.3% 41.7% Table 7: Passive evaluation (Threshold of 1%) provement in the recall figure, being almost double the figures reported in Table 5 for Experiments 3 and 3a. 5.1.2 Passive Evaluation Table 7 presents the results of our evaluation of the passive semantic forms we extract. It was carried out for 1422 verbs which occur with passive frames and are shared by the induced lexicon and COMLEX. As COMLEX does not provide explicit passive entries, we applied Lexical Redundancy Rules (Kaplan and Bresnan, 1982) to automatically convert the active COMLEX frames to their passive counterparts. For example, the COMLEX entry see([subj,obj]) is converted to see([subj]). The resulting precision is very high, a slight increase on that for the active frames. The recall score drops for passive frames (from 54.7% to 29.3%) in a similar way to that for active frames when prepositional details are included. 5.2 Lexical Accession Rates As well as evaluating the quality of our extracted semantic forms, we also examine the rate at which they are induced. (Charniak, 1996) and (Krotov et al., 1998) observed that treebank grammars (CFGs extracted from treebanks) are very large and grow with the size of the treebank. We were interested in discovering whether the acquisition of lexical material on the same data displays a similar propensity. Figure 3 displays the accession rates for the semantic forms induced by our method for sections 0–24 of the WSJ section of the Penn-II treebank. When we do not distinguish semantic forms by category, all semantic forms together with those for verbs display smaller accession rates than for the PCFG. We also examined the coverage of our system in a similar way to (Hockenmaier et al., 2002). We extracted a verb-only reference lexicon from Sections 02-21 of the WSJ and subsequently compared this to a test lexicon constructed in the same way from 0 5000 10000 15000 20000 25000 0 5 10 15 20 25 No. 
of SFs/Rules WSJ Section All SF Frames All Verbs All SF Frames, no category All Verbs, no category PCFG Figure 3: Accession Rates for Semantic Forms and CFG Rules Entries also in reference lexicon: 89.89% Entries not in reference lexicon: 10.11% Known words: 7.85% - Known words, known frames: 7.85% - Known words, unknown frames: Unknown words: 2.32% - Unknown words, known frames: 2.32% - Unknown words, unknown frames: Table 8: Coverage of induced lexicon on unseen data (Verbs Only) Section 23. Table 8 shows the results of this experiment. 89.89% of the entries in the test lexicon appeared in the reference lexicon. 6 Conclusions We have presented an algorithm and its implementation for the extraction of semantic forms or subcategorisation frames from the Penn-II Treebank, automatically annotated with LFG f-structures. We have substantially extended an earlier approach by (van Genabith et al., 1999). The original approach was small-scale and ‘proof of concept’. We have scaled our approach to the entire WSJ Sections of PennII (50,000 trees). Our approach does not predefine the subcategorisation frames we extract as many other approaches do. We extract abstract syntactic function-based subcategorisation frames (LFG semantic forms), traditional CFG category-based frames as well as mixed function-category based frames. Unlike many other approaches to subcategorisation frame extraction, our system properly reflects the effects of long distance dependencies and distinguishes between active and passive frames. Finally our system associates conditional probabilities with the frames we extract. We carried out an extensive evaluation of the complete induced lexicon (not just a sample) against the full COMLEX resource. To our knowledge, this is the most extensive qualitative evaluation of subcategorisation extraction in English. The only evaluation of a similar scale is that carried out by (Schulte im Walde, 2002) for German. Our results compare well with hers. We believe our semantic forms are fine-grained and by choosing to evaluate against COMLEX we set our sights high: COMLEX is considerably more detailed than the OALD or LDOCE used for other evaluations. Currently work is under way to extend the coverage of our acquired lexicons by applying our methodology to the Penn-III treebank, a more balanced corpus resource with a number of text genres (in addition to the WSJ sections). It is important to realise that the induction of lexical resources is part of a larger project on the acquisition of wide-coverage, robust, probabilistic, deep unification grammar resources from treebanks. We are already using the extracted semantic forms in parsing new text with robust, wide-coverage PCFG-based LFG grammar approximations automatically acquired from the f-structure annotated Penn-II treebank (Cahill et al., 2004a). We hope to be able to apply our lexical acquisition methodology beyond existing parse-annotated corpora (Penn-II and PennIII): new text is parsed by our PCFG-based LFG approximations into f-structures from which we can then extract further semantic forms. The work reported here is part of the core component for bootstrapping this approach. As the extraction algorithm we presented derives semantic forms at f-structure level, it is easily applied to other, even typologically different, languages. 
We have successfully ported our automatic annotation algorithm to the TIGER Treebank, despite German being a less configurational language than English, and extracted wide-coverage, probabilistic LFG grammar approximations and lexical resources for German (Cahill et al., 2003). Currently, we are migrating the technique to Spanish, which has freer word order than English and less morphological marking than German. Preliminary results have been very encouraging. 7 Acknowledgements The research reported here is supported by Enterprise Ireland Basic Research Grant SC/2001/186 and an IRCSET PhD fellowship award. References M. Brent. 1993. From Grammar to Lexicon: Unsupervised Learning of Lexical Syntax. Computational Linguistics, 19(2):203–222. E. Briscoe and J. Carroll. 1997. Automatic Extraction of Subcategorization from Corpora. In Proceedings of the 5th ACL Conference on Applied Natural Language Processing, pages 356–363, Washington, DC. A. Cahill, M. Forst, M. McCarthy, R. O’Donovan, C. Rohrer, J. van Genabith, and A. Way. 2003. Treebank-Based Multilingual UnificationGrammar Development. In Proceedings of the Workshop on Ideas and Strategies for Multilingual Grammar Development at the 15th ESSLLI, pages 17–24, Vienna, Austria. A. Cahill, M. Burke, R. O’Donovan, J. van Genabith, and A. Way. 2004a. Long-Distance Dependency Resolution in Automatically Acquired Wide-Coverage PCFG-Based LFG Approximations. In Proceedings of the 42nd Annual Conference of the Association for Computational Linguistics (ACL-04), Barcelona, Spain. A. Cahill, M. McCarthy, M. Burke, R. O’Donovan, J. van Genabith, and A. Way. 2004b. Evaluating Automatic F-Structure Annotation for the PennII Treebank. Journal of Research on Language and Computation. G. Carroll and M. Rooth. 1998. Valence Induction with a Head-Lexicalised PCFG. In Proceedings of the 3rd Conference on Empirical Methods in Natural Language Processing, pages 36– 45, Granada, Spain. E. Charniak. 1996. Tree-bank Grammars. In AAAI96: Proceedings of the Thirteenth National Conference on Artificial Intelligence, MIT Press, pages 1031–1036, Cambridge, MA. J. Chen and K. Vijay-Shanker. 2000. Automated Extraction of TAGs from the Penn Treebank. In Proceedings of the 38th Annual Meeting of the Association of Computational Linguistics, pages 65–76, Hong Kong. M. Collins. 1997. Three generative lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 16–23. J. Hockenmaier, G. Bierner, and J. Baldridge. 2002. Extending the Coverage of a CCG System. Journal of Language and Computation, (2). R. Kaplan and J. Bresnan. 1982. Lexical Functional Grammar: A Formal System for Grammatical Representation. In Joan Bresnan, editor, The Mental Representation of Grammatical Relations, pages 206–250. MIT Press, Cambridge, MA, Mannheim, 8th Edition. A. Kinyon and C. Prolo. 2002. Identifying Verb Arguments and their Syntactic Function in the Penn Treebank. In Proceedings of the 3rd LREC Conference, pages 1982–1987, Las Palmas, Spain. A. Korhonen. 2002. Subcategorization Acquisition. PhD thesis published as Techical Report UCAMCL-TR-530, Computer Laboratory, University of Cambridge, UK. A. Krotov, M. Hepple, R. Gaizauskas, and Y. Wilks. 1998. Compacting the Penn Treebank Grammar. In Proceedings of COLING-ACL’98, pages 669– 703, Montreal, Canada. C. MacLeod, R. Grishman, and A. Meyers. 1994. The Comlex Syntax Project: The First Year. 
In Proceedings of the ARPA Workshop on Human Language Technology, pages 669–703, Princeton, NJ. D. Magerman. 1994. Natural Language Parsing as Statistical Pattern Recognition. PhD Thesis, Stanford University, CA. C. Manning. 1993. Automatic Acquisition of a Large Subcategorisation Dictionary from Corpora. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 235–242, Columbus, OH. S. Schulte im Walde. 2002. Evaluating Verb Subcategorisation Frames learned by a German Statistical Grammar against Manual Definitions in the Duden Dictionary. In Proceedings of the 10th EURALEX International Congress, pages 187– 197, Copenhagen, Denmark. A. Ushioda, D. Evans, T. Gibson, and A. Waibel. 1993. The Automatic Acquisition of Frequencies of Verb Subcategorization Frames from Tagged Corpora. In SIGLEX ACL Workshop on the Acquisition of Lexical Knowledge from Text, pages 95–106, Columbus, OH. J. van Genabith, A. Way, and L. Sadler. 1999. Datadriven Compilation of LFG Semantic Forms. In EACL-99 Workshop on Linguistically Interpreted Corpora, pages 69–76, Bergen, Norway. F. Xia, M. Palmer, and A. Joshi. 2000. A Uniform Method of Grammar Extraction and its Applications. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2000), pages 53–62, Hong Kong.
Inducing Frame Semantic Verb Classes from WordNet and LDOCE Rebecca Green, Bonnie J. Dorr, and Philip Resnik *†‡ *† *† Institute for Advanced Computer Studies * Department of Computer Science † College of Information Studies ‡ University of Maryland College Park, MD 20742 USA {rgreen, bonnie, resnik}@umiacs.umd.edu Abstract This paper presents SemFrame, a system that induces frame semantic verb classes from WordNet and LDOCE. Semantic frames are thought to have significant potential in resolving the paraphrase problem challenging many languagebased applications. When compared to the handcrafted FrameNet, SemFrame achieves its best recall-precision balance with 83.2% recall (based on SemFrame's coverage of FrameNet frames) and 73.8% precision (based on SemFrame verbs’ semantic relatedness to frame-evoking verbs). The next best performing semantic verb classes achieve 56.9% recall and 55.0% precision. 1 Introduction Semantic content can almost always be expressed in a variety of ways. Lexical synonymy (She esteemed him highly vs. She respected him greatly), syntactic variation (John paid the bill vs. The bill was paid by John), overlapping meanings (Anna turned at Elm vs. Anna rounded the corner at Elm), and other phenomena interact to produce a broad range of choices for most language generation tasks (Hirst, 2003; Rinaldi et al., 2003; Kozlowski et al., 2003). At the same time, natural language understanding must recognize what remains constant across paraphrases. The paraphrase phenomenon affects many computational linguistic applications, including information retrieval, information extraction, question-answering, and machine translation. For example, documents that express the same content using different linguistic means should typically be retrieved for the same queries. Information sought to answer a question needs to be recognized no matter how it is expressed. Semantic frames (Fillmore, 1982; Fillmore and Atkins, 1992) address the paraphrase problem through their slot-and-filler templates, representing frequently occurring, structured experiences. Semantic frame types of an intermediate granularity have the potential to fulfill an interlingua role within a solution to the paraphrase problem. Until now, semantic frames have been generated by hand (as in Fillmore and Atkins, 1992), based on native speaker intuition; the FrameNet project (http://www.icsi.berkeley.edu/ ~framenet; Johnson et al., 2002) now couples this generation with empirical validation. Only recently has this project begun to achieve relative breadth in its inventory of semantic frames. To have a comprehensive inventory of semantic frames, however, we need the capacity to generate semantic frames semi-automatically (the need for manual post-editing is assumed). To address these challenges, we have developed SemFrame, a system that induces semantic frames automatically. Overall, the system performs two primary functions: (1) identification of sets of verb senses that evoke a common semantic frame (in the sense that lexical units call forth corresponding conceptual structures); and (2) identification of the conceptual structure of semantic frames. This paper explores the first task of identifying frame semantic verb classes. These classes have several types of uses. First, they are the basis for identifying the internal structure of the frame proper, as set forth in Green and Dorr, 2004. Second, they may be used to extend FrameNet. 
Third, they support applications needing access to sets of semantically related words, for example, text segmentation and word sense disambiguation, as explored to a limited degree in Green, 2004. Section 2 presents related research efforts on developing semantic verb classes. Section 3 summarizes the features of WordNet (http://www.cogsci.princeton.edu/~wn) and LDOCE (Procter, 1978) that support the automatic induction of semantic verb classes, definitions and example sentences often mention while Section 4 sets forth the approach taken by their participants using semantic-type-like nouns, SemFrame to accomplish this task. Section 5 thus mapping easily to the corresponding frame presents a brief synopsis of SemFrame’s results, element. Corpus data, however, are more likely while Section 6 presents an evaluation of to include instantiated participants, which may SemFrame’s ability to identify semantic verb not generalize to the frame element. Second, classes of a FrameNet-like nature. Section 7 lexical resources provide a consistent amount of summarizes our work and motivates directions for data for word senses, while the amount of data in further development of SemFrame. a corpus for word senses is likely to vary widely. 2 Previous Work The EAGLES (1998) report on semantic encoding differentiates between two approaches to the development of semantic verb classes: those based on syntactic behavior and those based on semantic criteria. Levin (1993) groups verbs based on an analysis of their syntactic properties, especially their ability to be expressed in diathesis alternations; her approach reflects the assumption that the syntactic behavior of a verb is determined in large part by its meaning. Verb classes at the bottom of Levin’s shallow network group together (quasi-) synonyms, hierarchically related verbs, and antonyms, alongside verbs with looser semantic relationships. The verb categories based on Pantel and Lin (2002) and Lin and Pantel (2001) are induced automatically from a large corpus, using an unsupervised clustering algorithm, based on syntactic dependency features. The resulting clusters contain synonyms, hierarchically related verbs, and antonyms, as well as verbs more loosely related from the perspective of paraphrase. The handcrafted WordNet (Fellbaum, 1998a) uses the hyperonymy/hyponymy relationship to structure the English verb lexicon into a semantic network. Each collection of a top-level node supplemented by its descendants may be seen as a semantic verb class. In all fairness, resolution of the paraphrase problem is not the explicit goal of most efforts to build semantic verb classes. However, they can process some paraphrases through lexical synonymy, hierarchically related terms, and antonymy. 3 Resources Used in SemFrame We adopt an approach that relies heavily on pre-existing lexical resources. Such resources have several advantages over corpus data in identifying semantic frames. First, both Third, lexical resources provide their data in a more systematic fashion than do corpora. Most centrally, the syntactic arguments of the verbs used in a definition often correspond to the semantic arguments of the verb being defined. For example, Table 1 gives the definitions of several verb senses in LDOCE that evoke the COMMERCIAL TRANSACTION frame, which includes as its semantic arguments a Buyer, a Seller, some Merchandise, and Money. 
Words corresponding to the Money (money, value), the Merchandise (property, goods), and the Buyer (buyer, buyers) are present in, and to some extent shared across, the definitions; however, no words corresponding to the Seller are present.

Table 1. LDOCE Definitions for Verbs Evoking the COMMERCIAL TRANSACTION Frame
  buy 1        to obtain (something) by giving money (or something else of value)
  buy 2        to obtain in exchange for something, often something of great value
  buy 3        to be exchangeable for
  purchase 1   to gain (something) at the cost of effort, suffering, or loss of something of value
  sell 1       to give up (property or goods) to another for money or other value
  sell 2       to offer (goods) for sale
  sell 3       to be bought; get a buyer or buyers; gain a sale

Of available machine-readable dictionaries, LDOCE appears especially useful for this research. It uses a restricted vocabulary of about 2000 words in its definitions and example sentences, thus increasing the likelihood that words with closely related meanings will use the same words in their definitions and support the pattern of discovery envisioned. LDOCE's subject field codes also accomplish some of the same type of grouping as semantic frames.

WordNet is a machine-readable lexico-semantic database whose primary organizational structure is the synset—a set of synonymous word senses. A limited number of relationship types (e.g., antonymy, hyponymy, meronymy, troponymy, entailment) also relate synsets within a part of speech. (Version 1.7.1 was used.) Fellbaum (1998b) suggests that relationships in WordNet "reflect some of the structure of frame semantics" (p. 5). Through the relational structure of WordNet, buy, purchase, sell, and pay are related together: buy and purchase comprise one synset; they entail paying and are opposed to sell. The relationship of buy, purchase, sell, and pay to other COMMERCIAL TRANSACTION verbs—for example, cost, price, and the demand-payment sense of charge—is not made explicit in WordNet, however. Further, as Roger Chaffin has noted, the specialized vocabulary of, for example, tennis (e.g. racket, court, lob) is not co-located, but is dispersed across different branches of the noun network (Miller, 1998, p. 34).

4 SemFrame Approach

SemFrame gathers evidence about frame semantic relatedness between verb senses by analyzing LDOCE and WordNet data from a variety of perspectives. The overall approach used is shown in Figure 1.

[Figure 1. Approach for Building Frame Semantic Verb Classes: extract verb sense pairs from LDOCE; extract verb sense pairs from WordNet; map WordNet synsets to LDOCE senses; merge pairs, filtering out those not meeting threshold criteria; build fully-connected verb groups; cluster related verb groups; output verb sense framesets.]

The first stage of processing extracts pairs of LDOCE and WordNet verb senses that potentially evoke the same frame. By exploiting many different clues to semantic relatedness, we overgenerate these pairs, favoring recall; subsequent stages improve the precision of the resulting data. Figures 2 and 3 give details of the algorithms for extracting verb pairs based on different types of evidence. These include: clustering LDOCE verb senses/WordNet synsets on the basis of words in their definitions and example sentences (fig. 2); relating LDOCE verb senses defined in terms of the same verb (fig. 3a); relating LDOCE verb senses that share a common stem (fig. 3b); extracting explicit sense-linking relationships in LDOCE (fig. 3c); relating verb senses that share general or specific subject field codes in LDOCE (fig. 3d); and extracting (direct or extended) semantic relationships in WordNet (fig. 3e).

In the second stage, mapping between WordNet verb synsets and LDOCE verb senses relies on finding matches between the data available for the verb senses in each resource (e.g., other words in the synset; words in definitions and example sentences; words closely related to these words; and stems of these words). The similarity measure used is the average of the proportion of words on each side of the comparison that are matched in the other. This mapping is used both to relate LDOCE verb senses that map to the same WordNet synset (fig. 3f) and to translate previously paired WordNet verb synsets into LDOCE verb sense pairs.

In the third stage, the resulting verb sense pairs are merged into a single data set, retaining only those pairs whose cumulative support exceeds thresholds for either the number of supporting data sources or strength of support, thus achieving higher precision in the merged data set than in the input data sets. Then, the graph formed by the verb sense pairs in the merged data set is analyzed to find the fully connected components.

Finally, these groups of verb senses become input to a clustering operation (Voorhees, 1986). Those groups whose similarity (due to overlap in membership) exceed a threshold are merged together, thus reducing the number of verb sense groups. The verb senses within each resulting group are hypothesized to evoke the same semantic frame and constitute a frameset.

Figure 2. Algorithm for Generating Clustering-based Verb Pairs
Input. SW, a set of stop words; M, a set of (word, stem) pairs; F, a set of (word, frequency) pairs; DE, a set of (verb_sense_id_d, def+ex_d) pairs, where def+ex_d = the set of words in the definitions and example sentences of verb_sense_id_d
Step 1. forall d in DE, append verb_sense_id_d to def+ex_d and remove from def+ex_d any word w in SW
Step 2. forall d in DE, forall m in M: if word_m exists in def+ex_d, substitute stem_m for word_m
Step 3. forall f in F: if frequency_f > 1, wgt_word_f ← 1; else if frequency_f == 1, wgt_word_f ← .01
Step 4. O ← Voorhees' average link clustering algorithm applied to DE, with initial weights forall t in def+ex_d set to wgt_t
Step 5. forall o in O, return all combinations of two members from o

5 Results

We explored a range of thresholds in the final stage of the algorithm.¹ In general, the lower the threshold, the looser the verb grouping. The number of verb senses retained (out of 12,663 non-phrasal verb senses in LDOCE) and the verb sense groups produced by using these thresholds are recorded in Table 2.

Table 2. Results of Frame Clustering Process
  Threshold   Num verb senses   Num groups
  0.5         6461              1338
  1.0         6414              1759
  1.5         5607              1421
  2.0         5604              1563

6 Evaluation

One of our goals is to produce sets of verb senses capable of extending FrameNet's coverage while requiring reasonably little post-editing. This goal has two subgoals: identifying new frames and identifying additional lexical units that evoke previously recognized frames. We use the handcrafted FrameNet, which is of reliably high precision, as a gold standard for the initial evaluation² of SemFrame's ability to achieve these subgoals.
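To make the merge-and-group stages concrete, here is a small illustrative sketch (not SemFrame's actual code: the pair format, threshold values, and function names are my own assumptions, and plain connected components stand in for the paper's "fully connected" verb groups, which may be stricter, clique-like sets):

```python
from collections import defaultdict

def merge_pairs(evidence_sets, min_sources=2, min_strength=1.5):
    """Merge (sense_a, sense_b, strength) pairs from several evidence sources,
    keeping pairs supported by enough sources or by enough cumulative strength."""
    support = defaultdict(lambda: [0, 0.0])          # key -> [num sources, total strength]
    for pairs in evidence_sets:                      # one list per evidence type (figs. 2, 3a-3f)
        for a, b, strength in pairs:
            key = tuple(sorted((a, b)))
            support[key][0] += 1
            support[key][1] += strength
    return [key for key, (sources, total) in support.items()
            if sources >= min_sources or total >= min_strength]

def verb_groups(pairs):
    """Group verb senses linked (directly or transitively) by the retained pairs."""
    graph = defaultdict(set)
    for a, b in pairs:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:                                 # simple depth-first traversal
            node = stack.pop()
            if node not in component:
                component.add(node)
                stack.extend(graph[node] - component)
        seen |= component
        groups.append(component)
    return groups
```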
For the first, we evaluate SemFrame’s ability to generate frames that correspond to FrameNet’s frames, reasoning that the system must be able to identify a large proportion of known frames if the quality of its output is good enough to identify new frames. (At this stage we do not measure the quality of new frames.) For the second subgoal we can be more concrete: For frames identified by both systems, we measure the degree to which the verbs identified by SemFrame can be shown to evoke those frames, even if FrameNet has not identified them as frame-evoking verbs. FrameNet includes hierarchically organized frames of varying levels of generality: Some semantic areas are covered by a general frame, some by a combination of specific frames, and some by a mix of general and specific frames. Because of this variation we determined the degree to which SemFrame and FrameNet overlap by automatically finding and comparing corresponding frames instead of fully equivalent frames. Frames correspond if the semantic scope of one frame is included within the semantic For the clustering algorithm used, the clustering FrameNet's frames are more syntactically than 1 threshold range is open-ended. The values semantically motivated (e.g., EXPERIENCER-OBJECT, investigated in the evaluation are fairly low. EXPERIENCER-SUBJECT). Certain constraints imposed by FrameNet's 2 development strategy restrict its use as a full-fledged gold standard for evaluating semantic frame induction. (1) As of summer 2003, only 382 frames had been identified within the FrameNet project. (2) Low recall affects not only the set of semantic frames identified by FrameNet, but also the sets of frame-evoking units listed for each frame. No verbs are listed for 38.5% of FrameNet's frames, while another 13.1% of them list only 1 or 2 verbs. The comparison here is limited to the 197 FrameNet frames for which at least one verb is listed with a counterpart in LDOCE. (3) Some of a. Relates LDOCE verb senses that are defined in terms of the same verb Input. D, a set of (verb_sense_id, def_verb) pairs, where def_verb = the verb in terms of which d verb_sense_id is defined d Step 1. forall v that exist as def_verb in D, form DV  D, by extracting all (verb_sense_id, def_verb) v pairs where v = def_verb Step 2. remove all DV for which | DV | > 40 v v Step 3. forall v that exist as def_verb in D, return all combinations of two members from DVv b. Relates LDOCE verb senses that share a common stem Input. D, a set of (verb_sense_id, verb_stem) pairs, where verb_stem = the stem for the verb on which d verb_sense_id is based d Step 1. forall m that exist as verb_stem in D, form DV  D, by extracting all (verb_sense_id, m verb_stem) pairs where m = verb_stem Step 2. forall m that exist as verb_stem in D, return all combinations of two members from DVv c. Extracts explicit sense-linking relationships in LDOCE Input. D, a set of (verb_sense_id, def) pairs, where def = the definition for verb_sense_id d d Step 1. forall d  D, if def contains compare or opposite note, extract related_verb from note; generate d (verb_sense_id , related_verb ) pair d d Step 2. forall d  D, if def defines verb_sense_id in terms of a related standalone verb (in BLOCK d d CAPS), extract related_verb from definition; generate (verb_sense_id , related_verb ) pair d d Step 3. 
forall (verb_sense_id , related_verb ) pairs, if there is only one sense of related_verb , choose it d d d and return (verb_sense_id , related_verb_sense_id ), else apply generalized mapping d d algorithm to return (verb_sense_id , related_verb_sense_id ) pairs where overlap occurs in d d the glosses of verb_sense_id and related_verb_sense_id d d d. Relates verb senses that share general or specific subject field codes in LDOCE Input. D, a set of (verb_sense_id, subject_code) pairs, where subject_code = any 2- or 4-character d subject field code assigned to verb_sense_id Step 1. forall c that exist as subject_code in D, form DV  D, by extracting all (verb_sense_id, c subject_code) pairs where c = subject_code Step 2. forall c that exist as subject_code in D, return all combinations of two members from DVv e. Extracts (direct or extended) semantic relationships in WordNet Input. WordNet data file for verb synsets Step 1. forall synset lines in input file return (synset, related_synset) pairs for all synsets directly related through hyponymy, antonymy, entailment, or cause_to relationships in WordNet (for extended relationship pairs, also return (synset, related_synset) pairs for all synsets within hyponymy tree, i.e., no matter how many levels removed) f. Relates LDOCE verb senses that map to the same WordNet synset Input. mapping of LDOCE verb senses to WordNet synsets Step 1. forall lines in input file return all combinations of two LDOCE verb senses mapped to the same WordNetłsynset Figure 3. Algorithms for Generating Non-clustering-based Verb Pairs scope of the other frame or if the semantic scopes SemFrame’s verb classes list specific LDOCE of the two frames have significant overlap. Since verb senses. In extending FrameNet, verbs from FrameNet lists evoking words, without SemFrame would be word-sense-disambiguated specification of word sense, the comparison was in the same way that FrameNet verbs currently done on the word level rather than on the word are, through the correspondence of lexeme and sense level, as if LDOCE verb senses were not frame. specified in SemFrame. However, it is clearly Incompleteness in the listing of evoking verbs specific word senses that evoke frames, and in FrameNet and SemFrame precludes a straightforward detection of correspondences between incrust, and ornament. Two of the verbs—adorn their frames. Instead, correspondence between and decorate—are shared. In addition, the frame FrameNet and SemFrame frames is established names are semantically related through a using either of two somewhat indirect approaches. WordNet synset consisting of decorate, adorn In the first approach, a SemFrame frame is (which CatVar relates to ADORNING), grace, deemed to correspond to a FrameNet frame if the ornament (which CatVar relates to two frames meet both a minimal-overlap ORNAMENTATION), embellish, and beautify. The criterion (i.e., there is some, perhaps small, two frames are therefore designated as overlap between the FrameNet and SemFrame corresponding frames by meeting both the framesets) and a frame-name-relatedness minimal-overlap and the frame-name relatedness criterion. The minimal-overlap criterion is met if criteria. 
either of two conditions is met: (1) If the In the second approach, a SemFrame frame is FrameNet frame lists four or fewer verbs (true of deemed to correspond to a FrameNet frame if the over one-third of the FrameNet frames that list two frames meet either of two relatively stringent associated verbs), minimal overlap occurs when verb overlap criteria, the majority-match criterion any one verb associated with the FrameNet frame or the majority-related criterion, in which case matches a verb associated with a SemFrame examination of frame names is unnecessary. frame. (2) If the FrameNet frame lists five or The majority-match criterion is met if the set more verbs, minimal overlap occurs when two or of verbs shared by FrameNet and SemFrame more verbs in the FrameNet frame are matched by framesets account for half or more of the verbs in verbs in the SemFrame frame. either frameset. For example, the APPLY_HEAT The looseness of the minimal overlap frame in FrameNet includes 22 verbs: bake, criterion is tightened by also requiring that the blanch, boil, braise, broil, brown, char, coddle, names of the FrameNet and SemFrame frames be cook, fry, grill, microwave, parboil, poach, roast, closely related. Establishing this frame-name saute, scald, simmer, steam, steep, stew, and relatedness involves identifying individual toast, while the BOILING frame in SemFrame components of each frame name and augmenting includes 7 verbs: boil, coddle, jug, parboil, 3 this set with morphological variants from CatVar poach, seethe, and simmer. Five of these (Habash and Dorr 2003). The resulting set for verbs—boil, coddle, parboil, poach, and each FrameNet and SemFrame frame name is simmer—are shared across the two frames and then searched in both the noun and verb WordNet constitute over half of the SemFrame frameset. networks to find all the synsets that might Therefore the two frames are deemed to correspond to the frame name. To these sets are correspond by meeting the majority-match also added all synsets directly related to the criterion. synsets corresponding to the frame names. If the The majority-related criterion is met if half or resulting set of synsets gathered for a FrameNet more of the verbs from the SemFrame frame are frame name intersects with the set of synsets semantically related to verbs from the FrameNet gathered for a SemFrame frame name, the two frame (that is, if the precision of the SemFrame frame names are deemed to be semantically verb set is at least 0.5). To evaluate this criterion, related. each FrameNet and SemFrame verb is associated For example, the FrameNet ADORNING frame with the WordNet verb synsets it occurs in, contains 17 verbs: adorn, blanket, cloak, coat, augmented by the synsets to which the initial sets cover, deck, decorate, dot, encircle, envelop, of synsets are directly related. If the sets of festoon, fill, film, line, pave, stud, and wreathe. synsets corresponding to two verbs share one or The SemFrame ORNAMENTATION frame contains more synsets, the two verbs are deemed to be 12 verbs: adorn, caparison, decorate, embellish, semantically related. This process is extended embroider, garland, garnish, gild, grace, hang, one further level, such that a SemFrame verb found by this process to be semantically related to a SemFrame verb, whose semantic relationship to a FrameNet verb has already been established, will also be designated a frame-evoking verb. 
If half or more of the verbs listed for a SemFrame frame are established as evoking the same frame as the list of WordNet verbs, then the FrameNet All SemFrame frame names are nouns. (See 3 Green and Dorr, 2004 for an explanation of their selection.) FrameNet frame names (e.g., ABUNDANCE, A C T IV IT Y _ S T A R T , C AU S E_ TO _ B E_ WET, INCHOATIVE_ATTACHING), however, exhibit considerable variation. and SemFrame frames are hypothesized to bound on the task, i.e., 100% recall and 100% correspond through the majority-related criterion. precision. The Lin & Pantel results are here a For example, the FrameNet ABUNDANCE lower bound for automatically induced semantic frame includes 4 verbs: crawl, swarm, teem, and verb classes and probably reflect the limitations of throng. The SemFrame FLOW frame likewise using only corpus data. Among efforts to develop includes 4 verbs: pour, teem, stream, and semantic verb classes, SemFrame’s results pullulate. Only one verb—teem—is shared, so correspond more closely to semantic frames than the majority-match criterion is not met, nor is the do others. related-frame-name criterion met, as the frame names are not semantically related. The majorityrelated criterion, however, is met through a WordNet verb synset that includes pour, swarm, stream, teem, and pullulate. Of the 197 FrameNet frames that include at least one LDOCE verb, 175 were found to have a corresponding SemFrame frame. But this 88.8% recall level should be balanced against the precision ratio of SemFrame verb framesets. After all, we could get 100% recall by listing all verbs in every SemFrame frame. The majority-related function computes the precision ratio of the SemFrame frame for each pair of FrameNet and SemFrame frames being compared. By modifying the minimum precision threshold, the balance between recall and precision, as measured using F-score, can be investigated. The best balance for the SemFrame version is based on a clustering threshold of 2.0 and a minimum precision threshold of 0.4, which yields a recall of 83.2% and overall precision of 73.8%. To interpret these results meaningfully, one would like to know if SemFrame achieves more FrameNet-like results than do other available verb category data, more specifically the 258 verb classes from Levin, the 357 semantic verb classes of WordNet 1.7.1, or the 272 verb clusters of Lin and Pantel, as described in Section 2. For purposes of comparison with FrameNet, Levin’s verb class names have been hand-edited to isolate the word that best captures the semantic sense of the class; the name of a WordNet-based frame is taken from the words for the root-level synset; and the name of each Lin and Pantel cluster is taken to be the first verb in the cluster.4 Evaluation results for the best balance between recall and precision (i.e., the maximum F-score) of the four comparisons are summarized in Table 3. FrameNet itself constitutes the upper Semantic verb Precision Recall Precision classes threshold at max Fscore SemFrame 0.40 0.832 0.738 Levin 0.20 0.569 0.550 WordNet 0.15 0.528 0.466 Lin & Pantel 0.15 0.472 0.407 Table 3. Best Recall-Precision Balance When Compared with FrameNet 7 Conclusions and Future Work We have demonstrated that sets of verbs evoking a common semantic frame can be induced from existing lexical tools. 
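As a concrete illustration of the two verb-overlap criteria used in the evaluation above (my own paraphrase: the function names and data layout are assumptions, and the frame-name-relatedness check and the WordNet/CatVar expansion behind the majority-related criterion are omitted):

```python
def minimal_overlap(framenet_verbs, semframe_verbs):
    """One shared verb suffices if the FrameNet frame lists four or fewer verbs;
    otherwise at least two shared verbs are required."""
    fn = set(framenet_verbs)
    shared = fn & set(semframe_verbs)
    return len(shared) >= (1 if len(fn) <= 4 else 2)

def majority_match(framenet_verbs, semframe_verbs):
    """The shared verbs account for half or more of the verbs in either frameset."""
    fn, sf = set(framenet_verbs), set(semframe_verbs)
    shared = fn & sf
    return len(shared) >= len(fn) / 2 or len(shared) >= len(sf) / 2

apply_heat = {"bake", "blanch", "boil", "braise", "broil", "brown", "char", "coddle",
              "cook", "fry", "grill", "microwave", "parboil", "poach", "roast", "saute",
              "scald", "simmer", "steam", "steep", "stew", "toast"}
boiling = {"boil", "coddle", "jug", "parboil", "poach", "seethe", "simmer"}
print(majority_match(apply_heat, boiling))   # True: the 5 shared verbs are over half of BOILING's 7
```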
In a head-to-head comparison with frames in FrameNet, the frame semantic verb classes developed by the SemFrame approach achieve a recall of 83.2% and the verbs listed for frames achieve a precision of 73.8%; these results far outpace those of other semantic verb classes. On a practical level, a large number of frame semantic verb classes have been identified. Associated with clustering threshold 1.5 are 1421 verb classes, averaging 14.1 WordNet verb synsets. Associated with clustering threshold 2.0 are 1563 verb classes, averaging 6.6 WordNet verb synsets. Despite these promising results, we are limited by the scope of our input data set. While LDOCE and WordNet data are generally of high quality, the relative sparseness of these resources has an adverse impact on recall. In addition, the mapping technique used for picking out corresponding word senses in WordNet and LDOCE is shallow, thus constraining the recall and precision of SemFrame outputs. Finally, the multi-step process of merging smaller verb groups into verb groups that are intended to correspond to frames sometimes fails to achieve an appropriate degree of correspondence (all the verb classes discovered are not distinct). Lin and Pantel have taken a similar approach, 4 “naming” their verb clusters by the first three verbs listed for a cluster, i.e., the three most similar verbs. In our future work, we will experiment with the more recent release of WordNet (2.0). This version provides derivational morphology links between nouns and verbs, which will promote far greater precision in the linking of verb senses based on morphology than was possible in our initial implementation. Another significant addition to WordNet 2.0 is the inclusion of category domains, which co-locate words pertaining to a subject and perform the same function as LDOCE's subject field codes. Finally, data sparseness issues may be addressed by supplementing the use of the lexical resources used here with access to, for example, the British National Corpus, with its broad coverage and carefully-checked parse trees. Acknowledgments This research has been supported in part by a National Science Foundation Graduate Research Fellowship NSF ITR grant #IIS-0326553, and NSF CISE Research Infrastructure Award EIA0130422. References Boguraev, Bran and Ted Briscoe. 1989. Introduction. In B. Boguraev and T. Briscoe (Eds.), Computational Lexicography for Natural Language Processing, 140. London: Longman. EAGLES Lexicon Interest Group. 1998. EAGLES Preliminary Recommendations on Semantic Encoding: Interim Report, <http:// www.ilc.cnr.it/EAGLES96/rep2/ rep2.html>. Fellbaum, Christiane (Ed.). 1998a. WordNet: An Electronic Lexical Database. Cambridge, MA: The MIT Press. Fellbaum, Christiane. 1998b. Introduction. In C. Fellbaum, 1998a, 1-17. Fillmore, Charles J. 1982. Frame semantics. In Linguistics in the Morning Calm, 111-137. Seoul: Hanshin. Fillmore, Charles J. and B. T. S. Atkins. 1992. Towards a frame-based lexicon: The semantics of RISK and its neighbors. In A. Lehrer and E. F. Kittay (Eds.), Frames, Fields, and Contrasts, 75102. Hillsdale, NJ: Erlbaum. Green, Rebecca. 2004. Inducing Semantic Frames from Lexical Resources. Ph.D. dissertation, University of Maryland. Green, Rebecca and Bonnie J. Dorr. 2004. Inducing A Semantic Frame Lexicon from WordNet Data. In Proceedings of the 2nd Workshop on Text Meaning and Interpretation (ACL 2004). Habash, Nizar and Bonnie Dorr. 2003. A categorial variation database for English. 
In Proceedings of North American Association for Computational Linguistics, 96-102. Hirst, Graeme. 2003. Paraphrasing paraphrased. Keynote address for The Second International Workshop on Paraphrasing: Paraphrase Acquisition and Applications, ACL 2003, <http://nlp.nagaokaut.ac.jp/IWP2003/pdf/ Hirst-slides.pdf>. Johnson, Christopher R., Charles J. Fillmore, Miriam R. L. Petruck, Collin F. Baker, Michael Ellsworth, Josef Ruppenhofer, and Esther J. Wood. 2002. FrameNet: Theory and P r a c t i c e , v e r s i o n 1 . 0 , < h t t p : / / w w w . i c s i . b e r k e l e y . e d u / ~framenet/book/book.html>. Kozlowski, Raymond, Kathleen F. McCoy, and K. Vijay-Shanker. 2003. Generation of single-sentence paraphrases from predicate/argument structure using lexico-grammatical resources. In The Second International Workshop on Paraphrasing: Paraphrase Acquisition and Applications (IWP2003), ACL 2003, 1-8. Levin, Beth. 1993. English Verb Classes and Alternations: A Preliminary Investigation. Chicago: University of Chicago Press. Lin, Dekang and Patrick Pantel. 2001. Induction of semantic classes from natural language text. In Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 317-322. Litkowski, Ken. 2004. Senseval-3 task: Word-sense disambiguation of WordNet glosses, <http://www.clres.com/SensWNDisamb.html>. Miller, George A. 1998. Nouns in WordNet. In C. Fellbaum, 1998a, 23-67. Pantel, Patrick and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 613619. Procter, Paul (Ed.). 1978. Longman Dictionary of Contemporary English. Longman Group Ltd., Essex, UK. Rinaldi, Fabio, James Dowdall, Kaarel Kaljurand, Michael Hess, and Diego Mollá. 2003. Exploiting paraphrases in a question answering system. In The Second International Workshop on Paraphrasing: Paraphrase Acquisition and Applications (IWP2003), ACL 2003, 25-32. Voorhees, Ellen. 1986. Implementing agglomerative hierarchic clustering algorithms for use in document retrieval. Information Processing & Management 22/6: 465-476.
Paragraph-, word-, and coherence-based approaches to sentence ranking: A comparison of algorithm and human performance Florian WOLF Massachusetts Institute of Technology MIT NE20-448, 3 Cambridge Center Cambridge, MA 02139, USA [email protected] Edward GIBSON Massachusetts Institute of Technology MIT NE20-459, 3 Cambridge Center Cambridge, MA 02139, USA [email protected] Abstract Sentence ranking is a crucial part of generating text summaries. We compared human sentence rankings obtained in a psycholinguistic experiment to three different approaches to sentence ranking: A simple paragraph-based approach intended as a baseline, two word-based approaches, and two coherence-based approaches. In the paragraph-based approach, sentences in the beginning of paragraphs received higher importance ratings than other sentences. The word-based approaches determined sentence rankings based on relative word frequencies (Luhn (1958); Salton & Buckley (1988)). Coherence-based approaches determined sentence rankings based on some property of the coherence structure of a text (Marcu (2000); Page et al. (1998)). Our results suggest poor performance for the simple paragraph-based approach, whereas wordbased approaches perform remarkably well. The best performance was achieved by a coherence-based approach where coherence structures are represented in a non-tree structure. Most approaches also outperformed the commercially available MSWord summarizer. 1 Introduction Automatic generation of text summaries is a natural language engineering application that has received considerable interest, particularly due to the ever-increasing volume of text information available through the internet. The task of a human generating a summary generally involves three subtasks (Brandow et al. (1995); Mitra et al. (1997)): (1) understanding a text; (2) ranking text pieces (sentences, paragraphs, phrases, etc.) for importance; (3) generating a new text (the summary). Like most approaches to summarization, we are concerned with the second subtask (e.g. Carlson et al. (2001); Goldstein et al. (1999); Gong & Liu (2001); Jing et al. (1998); Luhn (1958); Mitra et al. (1997); Sparck-Jones & Sakai (2001); Zechner (1996)). Furthermore, we are concerned with obtaining generic rather than query-relevant importance rankings (cf. Goldstein et al. (1999), Radev et al. (2002) for that distinction). We evaluated different approaches to sentence ranking against human sentence rankings. To obtain human sentence rankings, we asked people to read 15 texts from the Wall Street Journal on a wide variety of topics (e.g. economics, foreign and domestic affairs, political commentaries). For each of the sentences in the text, they provided a ranking of how important that sentence is with respect to the content of the text, on an integer scale from 1 (not important) to 7 (very important). The approaches we evaluated are a simple paragraph-based approach that serves as a baseline, two word-based algorithms, and two coherencebased approaches1. We furthermore evaluated the MSWord summarizer. 2 Approaches to sentence ranking 2.1 Paragraph-based approach Sentences at the beginning of a paragraph are usually more important than sentences that are further down in a paragraph, due in part to the way people are instructed to write. Therefore, probably the simplest approach conceivable to sentence ranking is to choose the first sentences of each 1 We did not use any machine learning techniques to boost performance of the algorithms we tested. 
Therefore performance of the algorithms tested here will almost certainly be below the level of performance that could be reached if we had augmented the algorithms with such techniques (e.g. Carlson et al. (2001)). However, we think that a comparison between 'bare-bones' algorithms is viable because it allows us to see how performance differs due to different basic approaches to sentence ranking, and not due to potentially different effects of different machine learning algorithms on different basic approaches to sentence ranking. In future research we plan to address the impact of machine learning on the algorithms tested here.

paragraph as important, and the other sentences as not important. We included this approach merely as a simple baseline.

2.2 Word-based approaches

Word-based approaches to summarization are based on the idea that discourse segments are important if they contain "important" words. Different approaches have different definitions of what an important word is. For example, Luhn (1958), in a classic approach to summarization, argues that sentences are more important if they contain many significant words. Significant words are words that are not in some predefined stoplist of words with high overall corpus frequency.² Once significant words are marked in a text, clusters of significant words are formed. A cluster has to start and end with a significant word, and fewer than n insignificant words must separate any two significant words (we chose n = 3, cf. Luhn (1958)). Then, the weight of each cluster is calculated by dividing the square of the number of significant words in the cluster by the total number of words in the cluster. Sentences can contain multiple clusters. In order to compute the weight of a sentence, the weights of all clusters in that sentence are added. The higher the weight of a sentence, the higher is its ranking.

A more recent and frequently used word-based method used for text piece ranking is tf.idf (e.g. Manning & Schuetze (2000); Salton & Buckley (1988); Sparck-Jones & Sakai (2001); Zechner (1996)). The tf.idf measure relates the frequency of words in a text piece, in the text, and in a collection of texts respectively. The intuition behind tf.idf is to give more weight to sentences that contain terms with high frequency in a document but low frequency in a reference corpus. Figure 1 shows a formula for calculating tf.idf, where ds_ij is the tf.idf weight of sentence i in document j, ns_i is the number of words in sentence i, k is the kth word in sentence i, tf_jk is the frequency of word k in document j, n_d is the number of documents in the reference corpus, and df_k is the number of documents in the reference corpus in which word k appears.

$$ ds_{ij} = \sum_{k=1}^{ns_i} tf_{jk} \cdot \log\frac{n_d}{df_k} $$

Figure 1. Formula for calculating tf.idf (Salton & Buckley (1988)).

² Instead of stoplists, tf.idf values have also been used to determine significant words (e.g. Buyukkokten et al. (2001)).

We compared both Luhn (1958)'s measure and tf.idf scores to human rankings of sentence importance. We will show that both methods performed remarkably well, although one coherence-based method performed better.

2.3 Coherence-based approaches

The sentence ranking methods introduced in the two previous sections are solely based on layout or on properties of word distributions in sentences, texts, and document collections. Other approaches to sentence ranking are based on the informational structure of texts.
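Before moving on to the coherence-based approaches, here is a rough sketch of the two word-based scorers just described (sentences are lists of tokens; the choice of significant-word set and the skipping of words absent from the reference counts are simplifications of mine):

```python
import math

def luhn_weight(sentence, significant, max_gap=3):
    """Luhn (1958): a cluster starts and ends with a significant word, with fewer than
    max_gap insignificant words between consecutive significant words; its weight is
    (significant words in cluster)^2 / (words in cluster), summed over the sentence."""
    positions = [i for i, w in enumerate(sentence) if w in significant]
    if not positions:
        return 0.0
    clusters, start, prev = [], positions[0], positions[0]
    for pos in positions[1:]:
        if pos - prev - 1 < max_gap:          # still inside the current cluster
            prev = pos
        else:
            clusters.append((start, prev))
            start = prev = pos
    clusters.append((start, prev))

    def cluster_weight(s, e):
        n_sig = sum(1 for i in range(s, e + 1) if sentence[i] in significant)
        return n_sig ** 2 / (e - s + 1)

    return sum(cluster_weight(s, e) for s, e in clusters)

def tfidf_weight(sentence, doc_tf, n_docs, df):
    """ds_ij = sum over words k in the sentence of tf_jk * log(n_d / df_k); words missing
    from the reference document-frequency table are skipped here to avoid division by zero."""
    return sum(doc_tf.get(w, 0) * math.log(n_docs / df[w]) for w in sentence if df.get(w))
```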
With informational structure, we mean the set of informational relations that hold between sentences in a text. This set can be represented in a graph, where the nodes represent sentences, and labeled directed arcs represent informational relations that hold between the sentences (cf. Hobbs (1985)). Often, informational structures of texts have been represented as trees (e.g. Carlson et al. (2001), Corston-Oliver (1998), Mann & Thompson (1988), Ono et al. (1994)). We will present one coherence-based approach that assumes trees as a data structure for representing discourse structure, and one approach that assumes less constrained graphs. As we will show, the approach based on less constrained graphs performs better than the tree-based approach when compared to human sentence rankings. 3 Coherence-based summarization revisited This section will discuss in more detail the data structures we used to represent discourse structure, as well as the algorithms used to calculate sentence importance, based on discourse structures. 3.1 Representing coherence structures 3.1.1 Discourse segments Discourse segments can be defined as nonoverlapping spans of prosodic units (Hirschberg & Nakatani (1996)), intentional units (Grosz & Sidner (1986)), phrasal units (Lascarides & Asher (1993)), or sentences (Hobbs (1985)). We adopted a sentence unit-based definition of discourse segments for the coherence-based approach that assumes non-tree graphs. For the coherence-based approach that assumes trees, we used Marcu (2000)’s more fine-grained definition of discourse segments because we used the discourse trees from Carlson et al. (2002)’s database of coherenceannotated texts. 3.1.2 Kinds of coherence relations We assume a set of coherence relations that is similar to that of Hobbs (1985). Below are examples of each coherence relation. (1) Cause-Effect [There was bad weather at the airport]a [and so our flight got delayed.]b (2) Violated Expectation [The weather was nice]a [but our flight got delayed.]b (3) Condition [If the new software works,]a [everyone will be happy.]b (4) Similarity [There is a train on Platform A.]a [There is another train on Platform B.]b (5) Contrast [John supported Bush]a [but Susan opposed him.]b (6) Elaboration [A probe to Mars was launched this week.]a [The European-built ‘Mars Express’ is scheduled to reach Mars by late December.]b (7) Attribution [John said that]a [the weather would be nice tomorrow.]b (8) Temporal Sequence [Before he went to bed,]a [John took a shower.]b Cause-effect, violated expectation, condition, elaboration, temporal sequence, and attribution are asymmetrical or directed relations, whereas similarity, contrast, and temporal sequence are symmetrical or undirected relations (Mann & Thompson, 1988; Marcu, 2000). In the non-treebased approach, the directions of asymmetrical or directed relations are as follows: cause Æ effect for cause-effect; cause Æ absent effect for violated expectation; condition Æ consequence for condition; elaborating Æ elaborated for elaboration, and source Æ attributed for attribution. In the tree-based approach, the asymmetrical or directed relations are between a more important discourse segment, or a Nucleus, and a less important discourse segment, or a Satellite (Marcu (2000)). The Nucleus is the equivalent of the arc destination, and the Satellite is the equivalent of the arc origin in the non-treebased approach. The symmetrical or undirected relations are between two discourse elements of equal importance, or two Nuclei. 
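One simple way to encode these relations and their directionality is sketched below (this is my own illustrative encoding, not the authors' representation):

```python
from dataclasses import dataclass

# Directed relations run from arc origin to arc destination as described above
# (cause -> effect, condition -> consequence, elaborating -> elaborated, source -> attributed).
DIRECTED = {"cause-effect", "violated-expectation", "condition", "elaboration", "attribution"}
# Similarity and contrast are undirected; temporal sequence appears in both groups in the
# text, but no direction is given for it, so it is treated as undirected here.
UNDIRECTED = {"similarity", "contrast", "temporal-sequence"}

@dataclass(frozen=True)
class CoherenceArc:
    origin: str       # segment the arc starts from (the Satellite, in the tree-based view)
    destination: str  # segment the arc points to (the Nucleus, in the tree-based view)
    relation: str

# Example (6): segment (6b), the 'Mars Express' clause, elaborates on segment (6a).
arc = CoherenceArc(origin="6b", destination="6a", relation="elaboration")
```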
Below we will explain how the difference between Satellites and Nuclei is considered in tree-based sentence rankings.

3.1.3 Data structures for representing discourse coherence

As mentioned above, we used two alternative representations for discourse structure, tree- and non-tree based. In order to illustrate both data structures, consider (9) as an example:

(9) Example text
0. Susan wanted to buy some tomatoes.
1. She also tried to find some basil.
2. The basil would probably be quite expensive at this time of the year.

Figure 2 shows one possible tree representation of the coherence structure of (9).³ Sim represents a similarity relation, and elab an elaboration relation. Furthermore, nodes with a "Nuc" subscript are Nuclei, and nodes with a "Sat" subscript are Satellites.

Figure 2. Coherence tree for (9).

³ Another possible tree structure might be ( elab ( par ( 0 1 ) 2 ) ).

Figure 3 shows a non-tree representation of the coherence structure of (9). Here, the heads of the arrows represent the directionality of a relation.

Figure 3. Non-tree coherence graph for (9).

3.2 Coherence-based sentence ranking

This section explains the algorithms for the tree- and the non-tree-based sentence ranking approach.

3.2.1 Tree-based approach

We used Marcu (2000)'s algorithm to determine sentence rankings based on tree discourse structures. In this algorithm, sentence salience is determined based on the tree level of a discourse segment in the coherence tree. Figure 4 shows Marcu (2000)'s algorithm, where r(s,D,d) is the rank of a sentence s in a discourse tree D with depth d. Every node in a discourse tree D has a promotion set promotion(D), which is the union of all Nucleus children of that node. Associated with every node in a discourse tree D is also a set of parenthetical nodes parentheticals(D) (for example, in "Mars – half the size of Earth – is red", "half the size of earth" would be a parenthetical node in a discourse tree). Both promotion(D) and parentheticals(D) can be empty sets. Furthermore, each node has a left subtree, lc(D), and a right subtree, rc(D). Both lc(D) and rc(D) can also be empty.

$$
r(s, D, d) =
\begin{cases}
0 & \text{if } D \text{ is NIL} \\
d & \text{if } s \in promotion(D) \\
d - 1 & \text{if } s \in parentheticals(D) \\
\max\bigl(r(s, lc(D), d-1),\; r(s, rc(D), d-1)\bigr) & \text{otherwise}
\end{cases}
$$

Figure 4. Formula for calculating coherence-tree-based sentence rank (Marcu (2000)).

The discourse segments in Carlson et al. (2002)'s database are often sub-sentential. Therefore, we had to calculate sentence rankings from the rankings of the discourse segments that form the sentence under consideration. We did this by calculating the average ranking, the minimal ranking, and the maximal ranking of all discourse segments in a sentence. Our results showed that choosing the minimal ranking performed best, followed by the average ranking, followed by the maximal ranking (cf. Section 4.4).

3.2.2 Non-tree-based approach

We used two different methods to determine sentence rankings for the non-tree coherence graphs.⁴ Both methods implement the intuition that sentences are more important if other sentences relate to them (Sparck-Jones (1993)). The first method consists of simply determining the in-degree of each node in the graph. A node represents a sentence, and the in-degree of a node represents the number of sentences that relate to that sentence. The second method uses Page et al. (1998)'s PageRank algorithm, which is used, for example, in the Google™ search engine.
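Before turning to PageRank, the Figure 4 recursion can be written down directly; the TreeNode scaffolding below is mine, but its fields mirror promotion(D), parentheticals(D), lc(D), and rc(D):

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class TreeNode:
    promotion: Set[str] = field(default_factory=set)       # union of the Nucleus children
    parentheticals: Set[str] = field(default_factory=set)
    left: Optional["TreeNode"] = None                       # lc(D)
    right: Optional["TreeNode"] = None                      # rc(D)

def marcu_rank(segment, tree, depth):
    """r(s, D, d) from Figure 4."""
    if tree is None:
        return 0
    if segment in tree.promotion:
        return depth
    if segment in tree.parentheticals:
        return depth - 1
    return max(marcu_rank(segment, tree.left, depth - 1),
               marcu_rank(segment, tree.right, depth - 1))

def sentence_rank(segments_of_sentence, tree, depth, combine=min):
    """Sentences may span several discourse segments; segment ranks are combined by
    min, average, or max (the paper reports that min worked best)."""
    return combine(marcu_rank(seg, tree, depth) for seg in segments_of_sentence)
```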
Unlike just determining the in-degree of a node, PageRank takes into account the importance of sentences that relate to a sentence. PageRank thus is a recursive algorithm that implements the idea that the more important sentences relate to a sentence, the more important that sentence becomes. Figure 5 shows how PageRank is calculated. PR_n is the PageRank of the current sentence, PR_{n-1} is the PageRank of the sentence that relates to sentence n, o_{n-1} is the out-degree of sentence n-1, and α is a damping parameter that is set to a value between 0 and 1. We report results for α set to 0.85 because this is a value often used in applications of PageRank (e.g. Ding et al. (2002); Page et al. (1998)). We also calculated PageRanks for α set to values between 0.05 and 0.95, in increments of 0.05; changing α did not affect performance.

⁴ Neither of these methods could be implemented for coherence trees since Marcu (2000)'s tree-based algorithm assumes binary branching trees. Thus, the in-degree for all non-terminal nodes is always 2.

$$ PR_n = (1 - \alpha) + \alpha \, \frac{PR_{n-1}}{o_{n-1}} $$

Figure 5. Formula for calculating PageRank (Page et al. (1998)).

4 Experiments

In order to test algorithm performance, we compared algorithm sentence rankings to human sentence rankings. This section describes the experiments we conducted. In Experiment 1, the texts were presented with paragraph breaks; in Experiment 2, the texts were presented without paragraph breaks. This was done to control for the effect of paragraph information on human sentence rankings.

4.1 Materials for the coherence-based approaches

In order to test the tree-based approach, we took coherence trees for 15 texts from a database of 385 texts from the Wall Street Journal that were annotated for coherence (Carlson et al. (2002)). The database was independently annotated by six annotators. Inter-annotator agreement was determined for six pairs of two annotators each, resulting in kappa values (Carletta (1996)) ranging from 0.62 to 0.82 for the whole database (Carlson et al. (2003)). No kappa values for just the 15 texts we used were available. For the non-tree based approach, we used coherence graphs from a database of 135 texts from the Wall Street Journal and the AP Newswire, annotated for coherence. Each text was independently annotated by two annotators. For the 15 texts we used, kappa was 0.78, for the whole database, kappa was 0.84.

4.2 Experiment 1: With paragraph information

15 participants from the MIT community were paid for their participation. All were native speakers of English and were naïve as to the purpose of the study (i.e. none of the subjects was familiar with theories of coherence in natural language, for example). Participants were asked to read 15 texts from the Wall Street Journal, and, for each sentence in each text, to provide a ranking of how important that sentence is with respect to the content of the text, on an integer scale from 1 to 7 (1 = not important; 7 = very important).

[Figure 6. Human ranking results for one text (wsj_1306): importance ranking (1–7) plotted by sentence number (1–19) for the NoParagraph and WithParagraph conditions.]

The texts were selected so that there was a coherence tree annotation available in Carlson et al. (2002)'s database. Text lengths for the 15 texts we selected ranged from 130 to 901 words (5 to 47 sentences); average text length was 442 words (20 sentences), median was 368 words (16 sentences). Additionally, texts were selected so that they were about as diverse topics as possible.
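Returning briefly to the ranking algorithms, the two non-tree scorers of Section 3.2.2 can be sketched as follows (how undirected relations are counted is not spelled out in the text, so here they simply contribute an arc in each direction, and the Figure 5 update is applied by summing over all incoming arcs, the usual generalization):

```python
from collections import defaultdict

def indegree_scores(arcs, n_sentences):
    """Score each sentence by the number of sentences that relate to it."""
    scores = [0] * n_sentences
    for origin, destination in arcs:
        scores[destination] += 1
    return scores

def pagerank_scores(arcs, n_sentences, alpha=0.85, iterations=50):
    """Iterative PageRank over the coherence graph (cf. Figure 5):
    PR(n) = (1 - alpha) + alpha * sum of PR(m) / outdegree(m) over arcs m -> n."""
    out_degree = defaultdict(int)
    incoming = defaultdict(list)
    for origin, destination in arcs:
        out_degree[origin] += 1
        incoming[destination].append(origin)
    pr = [1.0] * n_sentences
    for _ in range(iterations):
        pr = [(1 - alpha) + alpha * sum(pr[m] / out_degree[m] for m in incoming[n])
              for n in range(n_sentences)]
    return pr

# Example (9): an undirected sim arc between 0 and 1, and a directed elab arc 2 -> 1.
arcs = [(0, 1), (1, 0), (2, 1)]
print(indegree_scores(arcs, 3))   # [1, 2, 0]
print(pagerank_scores(arcs, 3))   # sentence 1 ends up with the highest score
```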
The experiment was conducted in front of personal computers. Texts were presented in a web browser as one webpage per text; for some texts, participants had to scroll to see the whole text. Each sentence was presented on a new line. Paragraph breaks were indicated by empty lines; this was pointed out to the participants during the instructions for the experiment.

4.3 Experiment 2: Without paragraph information

The method was the same as in Experiment 1, except that texts in Experiment 2 did not include paragraph information. Each sentence was presented on a new line. None of the 15 participants who participated in Experiment 2 had participated in Experiment 1.

4.4 Results of the experiments

Human sentence rankings did not differ significantly between Experiment 1 and Experiment 2 for any of the 15 texts (all Fs < 1). This suggests that paragraph information does not have a big effect on human sentence rankings, at least not for the 15 texts that we examined. Figure 6 shows the results from both experiments for one text.

We compared human sentence rankings to different algorithmic approaches. The paragraph-based rankings do not provide scaled importance rankings but only "important" vs. "not important". Therefore, in order to compare human rankings to the paragraph-based baseline approach, we calculated point biserial correlations (cf. Bortz (1999)). We obtained significant correlations between paragraph-based rankings and human rankings only for one of the 15 texts.

All other algorithms provided scaled importance rankings. Many evaluations of scalable sentence ranking algorithms are based on precision/recall/F-scores (e.g. Carlson et al. (2001); Ono et al. (1994)). However, Jing et al. (1998) argue that such measures are inadequate because they only distinguish between hits and misses or false alarms, but do not account for a degree of agreement. For example, imagine a situation where the human ranking for a given sentence is "7" ("very important") on an integer scale ranging from 1 to 7, and Algorithm A gives the same sentence a ranking of "7" on the same scale, Algorithm B gives a ranking of "6", and Algorithm C gives a ranking of "2". Intuitively, Algorithm B, although it does not reach perfect performance, still performs better than Algorithm C. Precision/recall/F-scores do not account for that difference and would rate Algorithm A as "hit" but Algorithm B as well as Algorithm C as "miss". In order to collect performance measures that are more adequate to the evaluation of scaled importance rankings, we computed Spearman's rank correlation coefficients. The rank correlation coefficients were corrected for tied ranks because in our rankings it was possible for more than one sentence to have the same importance rank, i.e. to have tied ranks (Horn (1942); Bortz (1999)).

In addition to evaluating word-based and coherence-based algorithms, we evaluated one commercially available summarizer, the MSWord summarizer, against human sentence rankings. Our reason for including an evaluation of the MSWord summarizer was to have a more useful baseline for scalable sentence rankings than the paragraph-based approach provides.

[Figure 7. Average rank correlations of algorithm and human sentence rankings: mean rank correlation coefficient for MSWord, Luhn, tf.idf, MarcuAvg, MarcuMin, MarcuMax, in-degree, and PageRank, in the NoParagraph and WithParagraph conditions.]

Figure 7 shows average rank correlations (ρ_avg) of each algorithm and human sentence ranking for the 15 texts.
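(As a small aside on the evaluation measure: a convenient stand-in for the tie-corrected rank correlation is scipy's spearmanr, which assigns average ranks to ties; it may not be numerically identical to the Horn (1942) correction used by the authors.)

```python
from scipy.stats import spearmanr   # assigns average ranks to tied values

def rank_agreement(human_rankings, algorithm_rankings):
    """Spearman rank correlation between human and algorithm importance scores
    for the sentences of one text."""
    rho, _p_value = spearmanr(human_rankings, algorithm_rankings)
    return rho

# Toy example: a five-sentence text with tied human ratings.
print(rank_agreement([7, 5, 5, 2, 1], [0.9, 0.4, 0.6, 0.2, 0.1]))
```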
MarcuAvg refers to the version of Marcu (2000)'s algorithm where we calculated sentence rankings as the average of the rankings of all discourse segments that constitute that sentence; for MarcuMin, sentence rankings were the minimum of the rankings of all discourse segments in that sentence; for MarcuMax we selected the maximum of the rankings of all discourse segments in that sentence.

Figure 7 shows that the MSWord summarizer performed numerically worse than most other algorithms, except MarcuMin. Figure 7 also shows that PageRank performed numerically better than all other algorithms. Performance was significantly better than most other algorithms (MSWord, NoParagraph: F(1,28) = 21.405, p = 0.0001; MSWord, WithParagraph: F(1,28) = 26.071, p = 0.0001; Luhn, WithParagraph: F(1,28) = 5.495, p = 0.026; MarcuAvg, NoParagraph: F(1,28) = 9.186, p = 0.005; MarcuAvg, WithParagraph: F(1,28) = 9.097, p = 0.005; MarcuMin, NoParagraph: F(1,28) = 4.753, p = 0.038; MarcuMax, NoParagraph: F(1,28) = 24.633, p = 0.0001; MarcuMax, WithParagraph: F(1,28) = 31.430, p = 0.0001). Exceptions are Luhn, NoParagraph (F(1,28) = 1.859, p = 0.184); tf.idf, NoParagraph (F(1,28) = 2.307, p = 0.14); MarcuMin, WithParagraph (F(1,28) = 2.555, p = 0.121). The difference between PageRank and tf.idf, WithParagraph was marginally significant (F(1,28) = 3.113, p = 0.089).

As mentioned above, human sentence rankings did not differ significantly between Experiment 1 and Experiment 2 for any of the 15 texts (all Fs < 1). Therefore, in order to lend more power to our statistical tests, we collapsed the data for each text for the WithParagraph and the NoParagraph condition, and treated them as one experiment. Figure 8 shows that when the data from Experiments 1 and 2 are collapsed, PageRank performed significantly better than all other algorithms except in-degree (two-tailed t-test results: MSWord: F(1,58) = 48.717, p = 0.0001; Luhn: F(1,58) = 6.368, p = 0.014; tf.idf: F(1,58) = 5.522, p = 0.022; MarcuAvg: F(1,58) = 18.922, p = 0.0001; MarcuMin: F(1,58) = 7.362, p = 0.009; MarcuMax: F(1,58) = 56.989, p = 0.0001; in-degree: F(1,58) < 1).

[Figure 8. Average rank correlations of algorithm and human sentence rankings with collapsed data: mean rank correlation coefficient for MSWord, Luhn, tf.idf, MarcuAvg, MarcuMin, MarcuMax, in-degree, and PageRank.]

5 Conclusion

The goal of this paper was to evaluate the results of three different kinds of sentence ranking algorithms and one commercially available summarizer. In order to evaluate the algorithms, we compared their sentence rankings to human sentence rankings of fifteen texts of varying length from the Wall Street Journal. Our results indicated that a simple paragraph-based algorithm that was intended as a baseline performed very poorly, and that word-based and some coherence-based algorithms showed the best performance. The only commercially available summarizer that we tested, the MSWord summarizer, showed worse performance than most other algorithms. Furthermore, we found that a coherence-based algorithm that uses PageRank and takes non-tree coherence graphs as input performed better than most versions of a coherence-based algorithm that operates on coherence trees. When data from Experiments 1 and 2 were collapsed, the PageRank algorithm performed significantly better than all other algorithms, except the coherence-based algorithm that uses in-degrees of nodes in non-tree coherence graphs.

References

Jürgen Bortz. 1999. Statistik für Sozialwissenschaftler.
Berlin: Springer Verlag. Ronald Brandow, Karl Mitze, & Lisa F Rau. 1995. Automatic condensation of electronic publications by sentence selection. Information Processing and Management, 31(5), 675-685. Orkut Buyukkokten, Hector Garcia-Molina, & Andreas Paepcke. 2001. Seeing the whole in parts: Text summarization for web browsing on handheld devices. Paper presented at the 10th International WWW Conference, Hong Kong, China. Jean Carletta. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2), 249254. Lynn Carlson, John M Conroy, Daniel Marcu, Dianne P O'Leary, Mary E Okurowski, Anthony Taylor, et al. 2001. An empirical study on the relation between abstracts, extracts, and the discourse structure of texts. Paper presented at the DUC-2001, New Orleans, LA, USA. Lynn Carlson, Daniel Marcu, & Mary E Okurowski. 2002. RST Discourse Treebank. Philadelphia, PA: Linguistic Data Consortium. Lynn Carlson, Daniel Marcu, & Mary E Okurowski. 2003. Building a discoursetagged corpus in the framework of rhetorical structure theory. In J. van Kuppevelt & R. Smith (Eds.), Current directions in discourse and dialogue. New York: Kluwer Academic Publishers. Simon Corston-Oliver. 1998. Computing representations of the structure of written discourse. Redmont, WA. Chris Ding, Xiaofeng He, Perry Husbands, Hongyuan Zha, & Horst Simon. 2002. PageRank, HITS, and a unified framework for link analysis. (No. 49372). Berkeley, CA, USA. Jade Goldstein, Mark Kantrowitz, Vibhu O Mittal, & Jamie O Carbonell. 1999. Summarizing text documents: Sentence selection and evaluation metrics. Paper presented at the SIGIR-99, Melbourne, Australia. Yihong Gong, & Xin Liu. 2001. Generic text summarization using relevance measure and latent semantic analysis. Paper presented at the Annual ACM Conference on Research and Development in Information Retrieval, New Orleans, LA, USA. Barbara J Grosz, & Candace L Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3), 175-204. Julia Hirschberg, & Christine H Nakatani. 1996. A prosodic analysis of discourse segments in direction-giving monologues. Paper presented at the 34th Annual Meeting of the Association for Computational Linguistics, Santa Cruz, CA. Jerry R Hobbs. 1985. On the coherence and structure of discourse. Stanford, CA. D Horn. 1942. A correction for the effect of tied ranks on the value of the rank difference correlation coefficient. Journal of Educational Psychology, 33, 686-690. Hongyan Jing, Kathleen R McKeown, Regina Barzilay, & Michael Elhadad. 1998. Summarization evaluation methods: Experiments and analysis. Paper presented at the AAAI-98 Spring Symposium on Intelligent Text Summarization, Stanford, CA, USA. Alex Lascarides, & Nicholas Asher. 1993. Temporal interpretation, discourse relations and common sense entailment. Linguistics and Philosophy, 16(5), 437493. Hans Peter Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2), 159-165. William C Mann, & Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3), 243-281. Christopher D Manning, & Hinrich Schuetze. 2000. Foundations of statistical natural language processing. Cambridge, MA, USA: MIT Press. Daniel Marcu. 2000. The theory and practice of discourse parsing and summarization. Cambridge, MA: MIT Press. Mandar Mitra, Amit Singhal, & Chris Buckley. 1997. Automatic text summarization by paragraph extraction. 
Paper presented at the ACL/EACL-97 Workshop on Intelligent Scalable Text Summarization, Madrid, Spain. Kenji Ono, Kazuo Sumita, & Seiji Miike. 1994. Abstract generation based on rhetorical structure extraction. Paper presented at the COLING-94, Kyoto, Japan. Lawrence Page, Sergey Brin, Rajeev Motwani, & Terry Winograd. 1998. The PageRank citation ranking: Bringing order to the web. Stanford, CA. Dragomir R Radev, Eduard Hovy, & Kathleen R McKeown. 2002. Introduction to the special issue on summarization. Computational Linguistics, 28(4), 399408. Gerard Salton, & Christopher Buckley. 1988. Term-weighting approaches in automatic text retrieval. Information Processing and Management, 24(5), 513-523. Karen Sparck-Jones. 1993. What might be in a summary? In G. Knorz, J. Krause & C. Womser-Hacker (Eds.), Information retrieval 93: Von der Modellierung zur Anwendung (pp. 9-26). Konstanz: Universitaetsverlag. Karen Sparck-Jones, & Tetsuya Sakai. 2001, September 2001. Generic summaries for indexing in IR. Paper presented at the ACM SIGIR-2001, New Orleans, LA, USA. Klaus Zechner. 1996. Fast generation of abstracts from general domain text corpora by extracting relevant sentences. Paper presented at the COLING-96, Copenhagen, Denmark.
A TAG-based noisy channel model of speech repairs Mark Johnson Brown University Providence, RI 02912 [email protected] Eugene Charniak Brown University Providence, RI 02912 [email protected] Abstract This paper describes a noisy channel model of speech repairs, which can identify and correct repairs in speech transcripts. A syntactic parser is used as the source model, and a novel type of TAG-based transducer is the channel model. The use of TAG is motivated by the intuition that the reparandum is a “rough copy” of the repair. The model is trained and tested on the Switchboard disfluency-annotated corpus. 1 Introduction Most spontaneous speech contains disfluencies such as partial words, filled pauses (e.g., “uh”, “um”, “huh”), explicit editing terms (e.g., “I mean”), parenthetical asides and repairs. Of these repairs pose particularly difficult problems for parsing and related NLP tasks. This paper presents an explicit generative model of speech repairs and shows how it can eliminate this kind of disfluency. While speech repairs have been studied by psycholinguists for some time, as far as we know this is the first time a probabilistic model of speech repairs based on a model of syntactic structure has been described in the literature. Probabilistic models have the advantage over other kinds of models that they can in principle be integrated with other probabilistic models to produce a combined model that uses all available evidence to select the globally optimal analysis. Shriberg and Stolcke (1998) studied the location and distribution of repairs in the Switchboard corpus, but did not propose an actual model of repairs. Heeman and Allen (1999) describe a noisy channel model of speech repairs, but leave “extending the model to incorporate higher level syntactic . . . processing” to future work. The previous work most closely related to the current work is Charniak and Johnson (2001), who used a boosted decision stub classifier to classify words as edited or not on a word by word basis, but do not identify or assign a probability to a repair as a whole. There are two innovations in this paper. First, we demonstrate that using a syntactic parser-based language model Charniak (2001) instead of bi/trigram language models significantly improves the accuracy of repair detection and correction. Second, we show how Tree Adjoining Grammars (TAGs) can be used to provide a precise formal description and probabilistic model of the crossed dependencies occurring in speech repairs. The rest of this paper is structured as follows. The next section describes the noisy channel model of speech repairs and the section after that explains how it can be applied to detect and repair speech repairs. Section 4 evaluates this model on the Penn 3 disfluency-tagged Switchboard corpus, and section 5 concludes and discusses future work. 2 A noisy channel model of repairs We follow Shriberg (1994) and most other work on speech repairs by dividing a repair into three parts: the reparandum (the material repaired), the interregnum that is typically either empty or consists of a filler, and the repair. Figure 1 shows these three parts for a typical repair. Most current probabilistic language models are based on HMMs or PCFGs, which induce linear or tree-structured dependencies between words. The relationship between reparandum and repair seems to be quite different: the repair is a “rough copy” of the reparandum, often incorporating the same or very similar words in roughly the same word order. 
That is, they seem to involve “crossed” dependencies between the reparandum and the repair, shown in Figure 1. Languages with an unbounded number of crossed dependencies cannot be described by a context-free or finitestate grammar, and crossed dependencies like these have been used to argue natural languages . . . a flight to Boston, | {z } Reparandum uh, I mean, | {z } Interregnum to Denver | {z } Repair on Friday . . . Figure 1: The structure of a typical repair, with crossing dependencies between reparandum and repair. I mean uh a flight to Boston to Denver on Friday Figure 2: The “helical” dependency structure induced by the generative model of speech repairs for the repair depicted in Figure 1. are not context-free Shieber (1985). Mildly context-sensitive grammars, such as Tree Adjoining Grammars (TAGs) and Combinatory Categorial Grammars, can describe such crossing dependencies, and that is why TAGs are used here. Figure 2 shows the combined model’s dependency structure for the repair of Figure 1. Interestingly, if we trace the temporal word string through this dependency structure, aligning words next to the words they are dependent on, we obtain a “helical” type of structure familiar from genome models, and in fact TAGs are being used to model genomes for very similar reasons. The noisy channel model described here involves two components. A language model defines a probability distribution P(X) over the source sentences X, which do not contain repairs. The channel model defines a conditional probability distribution P(Y |X) of surface sentences Y , which may contain repairs, given source sentences. In the work reported here, X is a word string and Y is a speech transcription not containing punctuation or partial words. We use two language models here: a bigram language model, which is used in the search process, and a syntactic parser-based language model Charniak (2001), which is used to rescore a set of the most likely analysis obtained using the bigram model. Because the language model is responsible for generating the well-formed sentence X, it is reasonable to expect that a language model that can model more global properties of sentences will lead to better performance, and the results presented here show that this is the case. The channel model is a stochastic TAG-based transducer; it is responsible for generating the repairs in the transcript Y , and it uses the ability of TAGs to straightforwardly model crossed dependencies. 2.1 Informal description Given an observed sentence Y we wish to find the most likely source sentence bX, where: bX = argmax X P(X|Y ) = argmax X P(Y |X)P(Y ). This is the same general setup that is used in statistical speech recognition and machine translation, and in these applications syntaxbased language models P(Y ) yield state-of-theart performance, so we use one such model here. The channel model P(Y |X) generates sentences Y given a source X. A repair can potentially begin before any word of X. When a repair has begun, the channel model incrementally processes the succeeding words from the start of the repair. Before each succeeding word either the repair can end or else a sequence of words can be inserted in the reparandum. At the end of each repair, a (possibly null) interregnum is appended to the reparandum. The intuition motivating the channel model design is that the words inserted into the reparandum are very closely related those in the repair. 
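The search described above can be pictured, very schematically, as follows (this is not the authors' TAG dynamic program — just the shape of the noisy-channel objective with a brute-force candidate generator; channel_logprob and language_logprob are placeholders for the TAG channel model and the bigram or parser-based language model):

```python
def candidate_sources(words, max_reparandum=4):
    """Candidate source strings X are obtained by deleting a contiguous span of words
    from the observed string Y (a simplification: the bound on the span length and the
    handling of the interregnum are glossed over here)."""
    yield tuple(words)                                      # the no-repair analysis
    for start in range(len(words)):
        for end in range(start + 1, min(len(words), start + max_reparandum) + 1):
            yield tuple(words[:start]) + tuple(words[end:])

def best_source(words, channel_logprob, language_logprob):
    """argmax over X of log P(Y | X) + log P(X)."""
    return max(candidate_sources(words),
               key=lambda x: channel_logprob(tuple(words), x) + language_logprob(x))
```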
Indeed, in our training data over 60% of the words in the reparandum are exact copies of words in the repair; this similarity is strong evidence of a repair. The channel model is designed so that exact copy reparandum words will have high probability. We assume that X is a substring of Y , i.e., that the source sentence can be obtained by deleting words from Y , so for a fixed observed sentence there are only a finite number of possible source sentences. However, the number of source sentences grows exponentially with the length of Y , so exhaustive search is probably infeasible. TAGs provide a systematic way of formalizing the channel model, and their polynomialtime dynamic programming parsing algorithms can be used to search for likely repairs, at least when used with simple language models like a bigram language model. In this paper we first identify the 20 most likely analysis of each sentence using the TAG channel model together with a bigram language model. Then each of these analysis is rescored using the TAG channel model and a syntactic parser based language model. The TAG channel model’s analysis do not reflect the syntactic structure of the sentence being analyzed; instead they encode the crossed dependencies of the speech repairs. If we want to use TAG dynamic programming algorithms to efficiently search for repairs, it is necessary that the intersection (in language terms) of the TAG channel model and the language model itself be describable by a TAG. One way to guarantee this is to use a finite state language model; this motivates our use of a bigram language model. On the other hand, it seems desirable to use a language model that is sensitive to more global properties of the sentence, and we do this by reranking the initial analysis, replacing the bigram language model with a syntactic parser based model. We do not need to intersect this parser based language model with our TAG channel model since we evaluate each analysis separately. 2.2 The TAG channel model The TAG channel model defines a stochastic mapping of source sentences X into observed sentences Y . There are several ways to define transducers using TAGs such as Shieber and Schabes (1990), but the following simple method, inspired by finite-state transducers, suffices for the application here. The TAG defines a language whose vocabulary is the set of pairs (Σ∪{∅})×(Σ∪{∅}), where Σ is the vocabulary of the observed sentences Y . A string Z in this language can be interpreted as a pair of strings (Y, X), where Y is the concatenation of the projection of the first components of Z and X is the concatenation of the projection of the second components. For example, the string Z = a:a flight:flight to:∅Boston:∅ uh:∅I:∅mean:∅to:to Denver:Denver on:on Friday:Friday corresponds to the observed string Y = a flight to Boston uh I mean to Denver on Friday and the source string X = a flight to Denver on Friday. Figure 3 shows the TAG rules used to generate this example. The nonterminals in this grammar are of the form Nwx, Rwy:wx and I, where wx is a word appearing in the source string and wy is a word appearing in the observed string. Informally, the Nwx nonterminals indicate that the preceding word wx was analyzed as not being part of a repair, while the Rwy:wx that the preceding words wy and wx were part of a repair. The nonterminal I generates words in the interregnum of a repair. Encoding the preceding words in the TAGs nonterminals permits the channel model to be sensitive to lexical properties of the preceding words. 
The start symbol is N$, where ‘$’ is a distinguished symbol used to indicate the beginning and end of sentences. 2.3 Estimating the repair channel model from data The model is trained from the disfluency and POS tagged Switchboard corpus on the LDC Penn tree bank III CD-ROM (specifically, the files under dysfl/dps/swbd). This version of the corpus annotates the beginning and ending positions of repairs as well as fillers, editing terms, asides, etc., which might serve as the interregnum in a repair. The corpus also includes punctuation and partial words, which are ignored in both training and evaluation here since we felt that in realistic applications these would not be available in speech recognizer output. The transcript of the example of Figure 1 would look something like the following: a/DT flight/NN [to/IN Boston/NNP + {F uh/UH} {E I/PRP mean/VBP} to/IN Denver/NNP] on/IN Friday/NNP In this transcription the reparandum is the string from the opening bracket “[” to the interruption point “+”; the interregnum is the sequence of braced strings following the interruption point, and the repair is the string that begins at the end of the interregnum and ends at the closing bracket “]”. The interregnum consists of the braced expressions immediately following the interruption point.

Figure 3: The TAG rules used to generate the example shown in Figure 1 and their respective weights, and the corresponding derivation and derived trees.

We used the disfluency tagged version of the corpus for training rather than the parsed version because the parsed version does not mark the interregnum, but we need this information for training our repair channel model. Testing was performed using data from the parsed version since this data is cleaner, and it enables a direct comparison with earlier work. We followed Charniak and Johnson (2001) and split the corpus into main training data, heldout training data and test data as follows: main training consisted of all sw[23]*.dps files, heldout training consisted of all sw4[5-9]*.dps files and test consisted of all sw4[0-1]*.mrg files. We now describe how the weights on the TAG productions described in subsection 2.2 are estimated from this training data. In order to estimate these weights we need to know the TAG derivation of each sentence in the training data. In order to uniquely determine this we need not just the locations of each reparandum, interregnum and repair (which are annotated in the corpus) but also the crossing dependencies between the reparandum and repair words, as indicated in Figure 1.
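As a rough illustration (not the scripts actually used for training), the bracketed annotation just described could be read off with a few regular expressions; the flag letters checked below and the restriction to a single, non-nested repair per line are simplifying assumptions of this sketch.

```python
import re

def parse_repair(transcript):
    """Extract (reparandum, interregnum, repair) word lists from a line
    annotated as '... [ reparandum + {interregnum} repair ] ...'.
    Only the first, non-nested repair is handled in this sketch."""
    match = re.search(r"\[(.*?)\+(.*?)\]", transcript)
    if match is None:
        return [], [], []
    strip_pos = lambda toks: [t.split("/")[0] for t in toks]
    reparandum = strip_pos(match.group(1).split())
    after = match.group(2)
    interregnum = []
    for group in re.findall(r"\{([^}]*)\}", after):
        toks = group.split()
        if toks and toks[0] in {"F", "E", "D", "A"}:  # assumed flag letters
            toks = toks[1:]
        interregnum.extend(strip_pos(toks))
    repair = strip_pos(re.sub(r"\{[^}]*\}", " ", after).split())
    return reparandum, interregnum, repair

example = ("a/DT flight/NN [to/IN Boston/NNP + {F uh/UH} "
           "{E I/PRP mean/VBP} to/IN Denver/NNP] on/IN Friday/NNP")
print(parse_repair(example))
# (['to', 'Boston'], ['uh', 'I', 'mean'], ['to', 'Denver'])
```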
We obtain these by aligning the reparandum and repair strings of each repair using a minimum-edit distance string aligner with the following alignment costs: aligning identical words costs 0, aligning words with the same POS tag costs 2, an insertion or a deletion costs 4, aligning words with POS tags that begin with the same letter costs 5, and an arbitrary substitution costs 7. These costs were chosen so that a substitution will be selected over an insertion followed by a deletion, and the lower cost for substitutions involving POS tags beginning with the same letter is a rough and easy way of establishing a preference for aligning words whose POS tags come from the same broad class, e.g., it results in aligning singular and plural nouns, present and past participles, etc. While we did not evaluate the quality of the alignments since they are not in themselves the object of this exercise, they seem to be fairly good. From our training data we estimate a number of conditional probability distributions. These estimated probability distributions are the linear interpolation of the corresponding empirical distributions from the main sub-corpus using various subsets of conditioning variables (e.g., bigram models are mixed with unigram models, etc.) using Chen’s bucketing scheme Chen and Goodman (1998). As is commonly done in language modelling, the interpolation coefficients are determined by maximizing the likelihood of the held out data counts using EM. Special care was taken to ensure that all distributions over words ranged over (and assigned non-zero probability to) every word that occurred in the training corpora; this turns out to be important as the size of the training data for the different distributions varies greatly. The first distribution is defined over the words in source sentences (i.e., that do not contain reparandums or interregnums). Pn(repair|W) is the probability of a repair beginning after a word W in the source sentence X; it is estimated from the training sentences with reparandums and interregnums removed. Here and in what follows, W ranges over Σ ∪ {$}, where ‘$’ is a distinguished beginning-ofsentence marker. For example, Pn(repair|flight) is the probability of a repair beginning after the word flight. Note that repairs are relatively rare; in our training data Pn(repair) ≈0.02, which is a fairly strong bias against repairs. The other distributions are defined over aligned reparandum/repair strings, and are estimated from the aligned repairs extracted from the training data. In training we ignored all overlapping repairs (i.e., cases where the reparandum of one repair is the repair of another). (Naturally, in testing we have no such freedom.) We analyze each repair as consisting of n aligned word pairs (we describe the interregnum model later). Mi is the ith reparandum word and Ri is the corresponding repair word, so both of these range over Σ ∪{∅}. We define M0 and R0 to be source sentence word that preceded the repair (which is ‘$’ if the repair begins at the beginning of a sentence). We define M ′ i and R′ i to be the last non-∅ reparandum and repair words respectively, i.e., M′ i = Mi if Mi ̸= ∅and M ′ i = M′ i−1 otherwise. Finally, Ti, i = 1 . . . n + 1, which indicates the type of repair that occurs at position i, ranges over {copy, subst, ins, del, nonrep}, where Tn+1 = nonrep (indicating that the repair has ended), and for i = 1 . . . n, Ti = copy if Mi = Ri, Ti = ins if Ri = ∅, Ti = del if Mi = ∅ and Ti = subst otherwise. 
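One way the minimum-edit distance aligner described above could be realised is sketched below; the cost constants are the ones given in the text, while the dynamic program itself (and the omitted backpointer recovery) is a standard textbook choice assumed for this illustration.

```python
def align_cost(m_word, m_pos, r_word, r_pos):
    """Substitution cost for aligning a reparandum word to a repair word,
    following the costs given in the text (0/2/5/7); insertions and
    deletions cost 4 and are handled in the DP below."""
    if m_word == r_word:
        return 0
    if m_pos == r_pos:
        return 2
    if m_pos[:1] == r_pos[:1]:
        return 5
    return 7

def align(reparandum, repair):
    """Minimum-edit-distance alignment of two lists of (word, POS) pairs.
    Returns the total cost; backpointers for recovering the alignment
    itself are omitted to keep the sketch short."""
    INS_DEL = 4
    n, m = len(reparandum), len(repair)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * INS_DEL
    for j in range(1, m + 1):
        d[0][j] = j * INS_DEL
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = d[i - 1][j - 1] + align_cost(*reparandum[i - 1], *repair[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + INS_DEL, d[i][j - 1] + INS_DEL)
    return d[n][m]
```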
The distributions we estimate from the aligned repair data are the following. Pr(Ti | M′_{i−1}, R′_{i−1}) is the probability of seeing repair type Ti following the reparandum word M′_{i−1} and repair word R′_{i−1}; e.g., Pr(nonrep | Boston, Denver) is the probability of the repair ending when Boston is the last reparandum word and Denver is the last repair word. Pr(Mi | Ti = ins, M′_{i−1}, R′_i) is the probability that Mi is the word that is inserted into the reparandum (i.e., Ri = ∅), given that some word is inserted and that the preceding reparandum and repair words are M′_{i−1} and R′_i. For example Pr(tomorrow | ins, Boston, Denver) is the probability that the word tomorrow is inserted into the reparandum after the words Boston and Denver, given that some word is inserted. Pr(Mi | Ti = subst, M′_{i−1}, R′_i) is the probability that Mi is the word that is substituted in the reparandum for R′_i, given that some word is substituted. For example, Pr(Boston | subst, to, Denver) is the probability that Boston is substituted for Denver, given that some word is substituted. Finally, we also estimated a probability distribution Pi(W) over interregnum strings as follows. Our training corpus annotates what we call interregnum expressions, such as uh and I mean. We estimated a simple unigram distribution over all of the interregnum expressions observed in our training corpus, and also extracted the empirical distribution of the number of interregnum expressions in each repair. Interregnums are generated as follows. First, the number k of interregnum expressions is chosen using the empirical distribution. Then k interregnum expressions are independently generated from the unigram distribution of interregnum expressions, and appended to yield the interregnum string W. The weighted TAG that constitutes the channel model is straightforward to define using these conditional probability distributions. Note that the language model generates the source string X. Thus the weights of the TAG rules condition on the words in X, but do not generate them. There are three different schemata defining the initial trees of the TAG. These correspond to analyzing a source word as not beginning a repair (e.g., α1 and α3 in Figure 3), analyzing a source word as beginning a repair (e.g., α2), and generating an interregnum (e.g., α5). Auxiliary trees generate the paired reparandum/repair words of a repair. There are five different schemata defining the auxiliary trees corresponding to the five different values that Ti can take. Note that the nonterminal Rm,r expanded by the auxiliary trees is annotated with the last reparandum and repair words M′_{i−1} and R′_{i−1} respectively, which makes it possible to condition the rule’s weight on these words. Auxiliary trees of the form (β1) generate reparandum words that are copies of the corresponding repair words; the weight on such trees is Pr(copy | M′_{i−1}, R′_{i−1}). Trees of the form (β2) substitute a reparandum word for a repair word; their weight is Pr(subst | M′_{i−1}, R′_{i−1}) Pr(Mi | subst, M′_{i−1}, R′_i). Auxiliary trees of the form (β3) end a repair; they are weighted Pr(nonrep | M′_{i−1}, R′_{i−1}). Auxiliary trees of the form (β4) permit the repair word R′_{i−1} to be deleted in the reparandum; the weight of such a tree is Pr(del | M′_{i−1}, R′_{i−1}). Finally, auxiliary trees of the form (β5) generate a reparandum word Mi that is inserted; the weight of such a tree is Pr(ins | M′_{i−1}, R′_{i−1}) Pr(Mi | ins, M′_{i−1}, R′_{i−1}).
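Assuming the estimated distributions above are available as callables, the weight attached to an auxiliary tree could be assembled roughly as in the following sketch; the argument layout and function names are assumptions of this illustration, not the paper's code.

```python
def aux_tree_weight(t_i, m_i, m_prev, r_prev, r_i,
                    P_type, P_ins_word, P_subst_word):
    """Weight of the auxiliary tree attaching at R_{m_prev, r_prev},
    following the schemata (beta1)-(beta5) described above.  P_type,
    P_ins_word and P_subst_word are assumed to be smoothed conditional
    distributions estimated from the aligned training repairs."""
    p = P_type(t_i, m_prev, r_prev)           # Pr(T_i | M'_{i-1}, R'_{i-1})
    if t_i == "ins":
        p *= P_ins_word(m_i, m_prev, r_prev)  # Pr(M_i | ins, M'_{i-1}, R'_{i-1})
    elif t_i == "subst":
        p *= P_subst_word(m_i, m_prev, r_i)   # Pr(M_i | subst, M'_{i-1}, R'_i)
    # copy, del and nonrep trees carry only the type probability
    return p
```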
3 Detecting and repairing speech repairs The TAG just described is not probabilistic; informally, it does not include the probability costs for generating the source words. However, it is easy to modify the TAG so it does include a bigram model that does generate the source words, since each nonterminal encodes the preceding source word. That is, we multiply the weights of each TAG production given earlier that introduces a source word Ri by Pn(Ri|Ri−1). The resulting stochastic TAG is in fact exactly the intersection of the channel model TAG with a bigram language model. The standard n5 bottom-up dynamic programming parsing algorithm can be used with this stochastic TAG. Each different parse of the observed string Y with this grammar corresponds to a way of analyzing Y in terms of a hypothetical underlying sentence X and a number of different repairs. In our experiments below we extract the 20 most likely parses for each sentence. Since the weighted grammar just given does not generate the source string X, the score of the parse using the weighted TAG is P(Y |X). This score multiplied by the probability P(X) of the source string using the syntactic parser based language model, is our best estimate of the probability of an analysis. However, there is one additional complication that makes a marked improvement to the model’s performance. Recall that we use the standard bottom-up dynamic programming TAG parsing algorithm to search for candidate parses. This algorithm has n5 running time, where n is the length of the string. Even though our sentences are often long, it is extremely unlikely that any repair will be longer than, say, 12 words. So to increase processing speed we only compute analyses for strings of length 12 or less. For every such substring that can be analyzed as a repair we calculate the repair odds, i.e., the probability of generating this substring as a repair divided by the probability of generating this substring via the non-repair rules, or equivalently, the odds that this substring constitutes a repair. The substrings with high repair odds are likely to be repairs. This more local approach has a number of advantages over computing a global analysis. First, as just noted it is much more efficient to compute these partial analyses rather than to compute global analyses of the entire sentence. Second, there are rare cases in which the same substring functions as both repair and reparandum (i.e., the repair string is itself repaired again). A single global analysis would not be able to capture this (since the TAG channel model does not permit the same substring to be both a reparandum and a repair), but we combine these overlapping repair substring analyses in a post-processing operation to yield an analysis of the whole sentence. (We do insist that the reparandum and interregnum of a repair do not overlap with those of any other repairs in the same analysis). 4 Evaluation This section describes how we evaluate our noisy model. As mentioned earlier, following Charniak and Johnson (2001) our test data consisted of all Penn III Switchboard tree-bank sw4[01]*.mrg files. However, our test data differs from theirs in that in this test we deleted all partial words and punctuation from the data, as this results in a more realistic test situation. 
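The repair-odds filter described in section 3 can be sketched as follows; `score_repair` and `score_fluent` stand in for the dynamic-programming log scores of a substring under the repair and non-repair rules, the threshold is an assumption of this illustration, and the post-processing step that combines overlapping repair analyses is not shown.

```python
MAX_REPAIR_LEN = 12  # repairs longer than this are assumed not to occur

def repair_log_odds(log_p_repair, log_p_fluent):
    """Log odds that a substring is a repair, given log probabilities of
    generating it via the repair rules and via the non-repair rules."""
    return log_p_repair - log_p_fluent

def candidate_repairs(words, score_repair, score_fluent, threshold=0.0):
    """Scan all substrings of up to MAX_REPAIR_LEN words and keep those
    whose repair odds exceed `threshold`; `score_repair(i, j)` and
    `score_fluent(i, j)` are placeholders for the scores of words[i:j]
    under the two sets of rules."""
    found = []
    n = len(words)
    for i in range(n):
        for j in range(i + 1, min(i + MAX_REPAIR_LEN, n) + 1):
            odds = repair_log_odds(score_repair(i, j), score_fluent(i, j))
            if odds > threshold:
                found.append((i, j, odds))
    return sorted(found, key=lambda x: -x[2])
```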
Since the immediate goal of this work is to produce a program that identifies the words of a sentence that belong to the reparandum of a repair construction (to a first approximation these words can be ignored in later processing), our evaluation focuses on the model’s performance in recovering the words in a reparandum. That is, the model is used to classify each word in the sentence as belonging to a reparandum or not, and all other additional structure produced by the model is ignored. We measure model performance using the standard precision p, recall r and f-score f measures. If nc is the number of reparandum words the model correctly classified, nt is the number of true reparandum words given by the manual annotations and nm is the number of words the model predicts to be reparandum words, then the precision is nc/nm, recall is nc/nt, and f is 2pr/(p + r). For comparison we include the results of running the word-by-word classifier described in Charniak and Johnson (2001), but where partial words and punctuation have been removed from the training and test data. We also provide results for our noisy channel model using a bigram language model and a second trigram model where the twenty most likely analyses are rescored. Finally we show the results using the parser language model.

              CJ01′    Bigram   Trigram   Parser
Precision     0.951    0.776    0.774     0.820
Recall        0.631    0.736    0.763     0.778
F-score       0.759    0.756    0.768     0.797

The noisy channel model using a bigram language model does a slightly worse job at identifying reparandum and interregnum words than the classifier proposed in Charniak and Johnson (2001). Replacing the bigram language model with a trigram model helps slightly, and the parser-based language model results in a significant performance improvement over all of the others. 5 Conclusion and further work This paper has proposed a novel noisy channel model of speech repairs and has used it to identify reparandum words. One of the advantages of probabilistic models is that they can be integrated with other probabilistic models in a principled way, and it would be interesting to investigate how to integrate this kind of model of speech repairs with probabilistic speech recognizers. There are other kinds of joint models of reparandum and repair that may produce a better reparandum detection system. We have experimented with versions of the models described above based on POS bi-tag dependencies rather than word bigram dependencies, but with results very close to those presented here. Still, more sophisticated models may yield better performance. It would also be interesting to combine this probabilistic model of speech repairs with the word classifier approach of Charniak and Johnson (2001). That approach may do so well because many speech repairs are very short, involving only one or two words Shriberg and Stolcke (1998), so the reparandum, interregnum and repair are all contained in the surrounding word window used as features by the classifier. On the other hand, the probabilistic model of repairs explored here seems to be most successful in identifying long repairs in which the reparandum and repair are similar enough to be unlikely to have been generated independently. Since the two approaches seem to have different strengths, a combined model may outperform both of them. References Eugene Charniak and Mark Johnson. 2001. Edit detection and parsing for transcribed speech. In Proceedings of the 2nd Meeting of the North American Chapter of the Association for Computational Linguistics, pages 118–126.
The Association for Computational Linguistics. Eugene Charniak. 2001. Immediate-head parsing for language models. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics. The Association for Computational Linguistics. Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR10-98, Center for Research in Computing Technology, Harvard University. Peter A. Heeman and James F. Allen. 1999. Speech repairs, intonational phrases, and discourse markers: Modeling speaker’s utterances in spoken dialogue. Computational Linguistics, 25(4):527–571. Stuart M. Shieber and Yves Schabes. 1990. Synchronous tree-adjoining grammars. In Proceedings of the 13th International Conference on Computational Linguistics (COLING 1990), pages 253–258. Stuart M. Shieber. 1985. Evidence against the Context-Freeness of natural language. Linguistics and Philosophy, 8(3):333–344. Elizabeth Shriberg and Andreas Stolcke. 1998. How far do speakers back up in repairs? a quantitative model. In Proceedings of the International Conference on Spoken Language Processing, volume 5, pages 2183–2186, Sydney, Australia. Elizabeth Shriberg. 1994. Preliminaries to a Theory of Speech Disfluencies. Ph.D. thesis, University of California, Berkeley.
Evaluating Centering-based metrics of coherence for text structuring using a reliably annotated corpus Nikiforos Karamanis,♣Massimo Poesio,♦Chris Mellish,♠and Jon Oberlander♣ ♣School of Informatics, University of Edinburgh, UK, {nikiforo,jon}@ed.ac.uk ♦Dept. of Computer Science, University of Essex, UK, poesio at essex dot ac dot uk ♠Dept. of Computing Science, University of Aberdeen, UK, [email protected] Abstract We use a reliably annotated corpus to compare metrics of coherence based on Centering Theory with respect to their potential usefulness for text structuring in natural language generation. Previous corpus-based evaluations of the coherence of text according to Centering did not compare the coherence of the chosen text structure with that of the possible alternatives. A corpusbased methodology is presented which distinguishes between Centering-based metrics taking these alternatives into account, and represents therefore a more appropriate way to evaluate Centering from a text structuring perspective. 1 Motivation Our research area is descriptive text generation (O’Donnell et al., 2001; Isard et al., 2003), i.e. the generation of descriptions of objects, typically museum artefacts, depicted in a picture. Text (1), from the gnome corpus (Poesio et al., 2004), is an example of short human-authored text from this genre: (1) (a) 144 is a torc. (b) Its present arrangement, twisted into three rings, may be a modern alteration; (c) it should probably be a single ring, worn around the neck. (d) The terminals are in the form of goats’ heads. According to Centering Theory (Grosz et al., 1995; Walker et al., 1998a), an important factor for the felicity of (1) is its entity coherence: the way centers (discourse entities), such as the referent of the NPs “144” in clause (a) and “its” in clause (b), are introduced and discussed in subsequent clauses. It is often claimed in current work on in natural language generation that the constraints on felicitous text proposed by the theory are useful to guide text structuring, in combination with other factors (see (Karamanis, 2003) for an overview). However, how successful Centering’s constraints are on their own in generating a felicitous text structure is an open question, already raised by the seminal papers of the theory (Brennan et al., 1987; Grosz et al., 1995). In this work, we explored this question by developing an approach to text structuring purely based on Centering, in which the role of other factors is deliberately ignored. In accordance with recent work in the emerging field of text-to-text generation (Barzilay et al., 2002; Lapata, 2003), we assume that the input to text structuring is a set of clauses. The output of text structuring is merely an ordering of these clauses, rather than the tree-like structure of database facts often used in traditional deep generation (Reiter and Dale, 2000). Our approach is further characterized by two key insights. The first distinguishing feature is that we assume a search-based approach to text structuring (Mellish et al., 1998; Kibble and Power, 2000; Karamanis and Manurung, 2002) in which many candidate orderings of clauses are evaluated according to scores assigned by a given metric, and the best-scoring ordering among the candidate solutions is chosen. The second novel aspect is that our approach is based on the position that the most straightforward way of using Centering for text structuring is by defining a Centering-based metric of coherence Karamanis (2003). 
Together, these two assumptions lead to a view of text planning in which the constraints of Centering act not as filters, but as ranking factors, and the text planner may be forced to choose a sub-optimal solution. However, Karamanis (2003) pointed out that many metrics of coherence can be derived from the claims of Centering, all of which could be used for the type of text structuring assumed in this paper. Hence, a general methodology for identifying which of these metrics represent the most promising candidates for text structuring is required, so that at least some of them can be compared empirically. This is the second research question that this paper addresses, building upon previous work on corpus-based evaluations of Centering, and particularly the methods used by Poesio et al. (2004). We use the gnome corpus (Poesio et al., 2004) as the domain of our experiments because it is reliably annotated with features relevant to Centering and contains the genre that we are mainly interested in. To sum up, in this paper we try to identify the most promising Centering-based metric for text structuring, and to evaluate how useful this metric is for that purpose, using corpusbased methods instead of generally more expensive psycholinguistic techniques. The paper is structured as follows. After discussing how the gnome corpus has been used in previous work to evaluate the coherence of a text according to Centering we discuss why such evaluations are not sufficient for text structuring. We continue by showing how Centering can be used to define different metrics of coherence which might be useful to drive a text planner. We then outline a corpus-based methodology to choose among these metrics, estimating how well they are expected to do when used by a text planner. We conclude by discussing our experiments in which this methodology is applied using a subset of the gnome corpus. 2 Evaluating the coherence of a corpus text according to Centering In this section we briefly introduce Centering, as well as the methodology developed in Poesio et al. (2004) to evaluate the coherence of a text according to Centering. 2.1 Computing CF lists, CPs and CBs According to Grosz et al. (1995), each “utterance” in a discourse is assigned a list of forward looking centers (CF list) each of which is “realised” by at least one NP in the utterance. The members of the CF list are “ranked” in order of prominence, the first element being the preferred center CP. In this paper, we used what we considered to be the most common definitions of the central notions of Centering (its ‘parameters’). Poesio et al. (2004) point out that there are many definitions of parameters such as “utterance”, “ranking” or “realisation”, and that the setting of these parameters greatly affects the predictions of the theory;1 however, they found violations of the Centering constraints with any way of setting the parameters (for instance, at least 25% of utterances have no CB under any such setting), so that the questions addressed by our work arise for all other settings as well. Following most mainstream work on Centering for English, we assume that an “utterance” corresponds to what is annotated as a finite unit in the gnome corpus.2 The spans of text with the indexes (a) to (d) in example (1) are examples. 
This definition of utterance is not optimal from the point of view of minimizing Centering violations (Poesio et al., 2004), but in this way most utterances are the realization of a single proposition; i.e., the impact of aggregation is greatly reduced. Similarly, we use grammatical function (gf) combined with linear order within the unit (what Poesio et al. (2004) call gftherelin) for CF ranking. In this configuration, the CP is the referent of the first NP within the unit that is annotated as a subject for its gf.3 Example (2) shows the relevant annotation features of unit u210 which corresponds to utterance (a) in example (1). According to gftherelin, the CP of (a) is the referent of ne410 “144”. (2) <unit finite=’finite-yes’ id=’u210’> <ne id="ne410" gf="subj">144</ne> is <ne id="ne411" gf="predicate"> a torc</ne> </unit>. The ranking of the CFs other than the CP is defined according to the following preference on their gf (Brennan et al., 1987): obj>iobj>other. CFs with the same gf are ranked according to the linear order of the corresponding NPs in the utterance. The second column of Table 1 shows how the utterances in example (1) are automatically translated by the scripts developed by Poesio et al. (2004) into a sequence of CF lists, each decomposed into the CP and the CFs other than the CP, according to the chosen setting of the Centering parameters. Note that the CP of (a) is the center de374 and that the same center is used as the referent of the other NPs which are annotated as coreferring with ne410.

U    CF list: {CP, other CFs}    CB      Transition   cheapness: CBn=CPn−1
(a)  {de374, de375}              n.a.    n.a.         n.a.
(b)  {de376, de374, de377}       de374   retain       +
(c)  {de374, de379}              de374   continue     ∗
(d)  {de380, de381, de382}               nocb         +

Table 1: CP, CFs other than CP, CB, nocb or standard (see Table 2) transition and violations of cheapness (denoted with an asterisk) for each utterance (U) in example (1)

                           coherence: CBn=CBn−1   coherence∗: CBn≠CBn−1 or nocb in CFn−1
salience: CBn=CPn          continue               smooth-shift
salience∗: CBn≠CPn         retain                 rough-shift

Table 2: coherence, salience and the table of standard transitions

Given two subsequent utterances Un−1 and Un, with CF lists CFn−1 and CFn respectively, the backward looking center of Un, CBn, is defined as the highest ranked element of CFn−1 which also appears in CFn (Centering’s Constraint 3). For instance, the CB of (b) is de374. The third column of Table 1 shows the CB for each utterance in (1).4 2.2 Computing transitions As the fourth column of Table 1 shows, each utterance, with the exception of (a), is also marked with a transition from the previous one. When CFn and CFn−1 do not have any centers in common, we compute the nocb transition (Kibble and Power, 2000) (Poesio et al’s null transition) for Un (e.g., utterance (d) in Table 1).5 1For example, one could equate “utterance” with sentence (Strube and Hahn, 1999; Miltsakaki, 2002), use indirect realisation for the computation of the CF list (Grosz et al., 1995), rank the CFs according to their information status (Strube and Hahn, 1999), etc. 2Our definition includes titles which are not always finite units, but excludes finite relative clauses, the second element of coordinated VPs and clause complements which are often taken as not having their own CF lists in the literature. 3Or as a post-copular subject in a there-clause. 4In accordance with Centering, no CB is computed for (a), the first utterance in the sequence.
5In this study we do not take indirect realisation into account, i.e., we ignore the bridging reference (annotated in the corpus) between the referent of “it” de374 in (c) and the referent of “the terminals” de380 in (d), by virtue of which de374 might be thought as being a member of the CF list of (d). Poesio et al. (2004) showed that hypothesizing indirect realization eliminates many violations of entity continuity, the part of Constraint 1 that rules out nocb transitions. However, in this work we are treating CF lists as an abstract representation Following again the terminology in Kibble and Power (2000), we call the requirement that CBn be the same as CBn−1 the principle of coherence and the requirement that CBn be the same as CPn the principle of salience. Each of these principles can be satisfied or violated while their various combinations give rise to the standard transitions of Centering shown in Table 2; Poesio et al’s scripts compute these violations.6 We also make note of the preference between these transitions, known as Centering’s Rule 2 (Brennan et al., 1987): continue is preferred to retain, which is preferred to smoothshift, which is preferred to rough-shift. Finally, the scripts determine whether CBn is the same as CPn−1, known as the principle of cheapness (Strube and Hahn, 1999). The last column of Table 1 shows the violations of cheapness (denoted with an asterisk) in (1).7 2.3 Evaluating the coherence of a text and text structuring The statistics about transitions computed as just discussed can be used to determine the degree to which a text conforms with, or violates, Centering’s principles. Poesio et al. (2004) found that nocbs account for more than 50% of the atomic facts the algorithm has to structure, i.e., we are assuming that CFs are arguments of such facts; including indirectly realized entities in CF lists would violate this assumption. 6If the second utterance in a sequence U2 has a CB, then it is taken to be either a continue or a retain, although U1 is not classified as a nocb. 7As for the other two principles, no violation of cheapness is computed for (a) or when Un is marked as a nocb. of the transitions in the gnome corpus in configurations such as the one used in this paper. More generally, a significant percentage of nocbs (at least 20%) and other “dispreferred” transitions was found with all parameter configurations tested by Poesio et al. (2004) and indeed by all previous corpus-based evaluations of Centering such as Passoneau (1998), Di Eugenio (1998), Strube and Hahn (1999) among others. These results led Poesio et al. (2004) to the conclusion that the entity coherence as formalized in Centering should be supplemented with an account of other coherence inducing factors to explain what makes texts coherent. These studies, however, do not investigate the question that is most important from the text structuring perspective adopted in this paper: whether there would be alternative ways of structuring the text that would result in fewer violations of Centering’s constraints (Kibble, 2001). Consider the nocb utterance (d) in (1). Simply observing that this transition is ‘dispreferred’ ignores the fact that every other ordering of utterances (b) to (d) would result in more nocbs than those found in (1). Even a textstructuring algorithm functioning solely on the basis of the Centering constraints might therefore still choose the particular order in (1). 
In other words, a metric of text coherence purely based on Centering principles–trying to minimize the number of nocbs–may be sufficient to explain why this order of clauses was chosen, at least in this particular genre, without need to involve more complex explanations. In the rest of the paper, we consider several such metrics, and use the texts in the gnome corpus to choose among them. We return to the issue of coherence (i.e., whether additional coherenceinducing factors need to be stipulated in addition to those assumed in Centering) in the Discussion. 3 Centering-based metrics of coherence As said previously, we assume a text structuring system taking as input a set of utterances represented in terms of their CF lists. The system orders these utterances by applying a bias in favour of the best scoring ordering among the candidate solutions for the preferred output.8 In this section, we discuss how the Centering 8Additional assumptions for choosing between the orderings that are assigned the best score are presented in the next section. concepts just described can be used to define metrics of coherence which might be useful for text structuring. The simplest way to define a metric of coherence using notions from Centering is to classify each ordering of propositions according to the number of nocbs it contains, and pick the ordering with the fewest nocbs. We call this metric M.NOCB, following (Karamanis and Manurung, 2002). Because of its simplicity, M.NOCB serves as the baseline metric in our experiments. We consider three more metrics. M.CHEAP is biased in favour of the ordering with the fewest violations of cheapness. M.KP sums up the nocbs and the violations of cheapness, coherence and salience, preferring the ordering with the lowest total cost (Kibble and Power, 2000). Finally, M.BFP employs the preferences between standard transitions as expressed by Rule 2. More specifically, M.BFP selects the ordering with the highest number of continues. If there exist several orderings which have the most continues, the one which has the most retains is favoured. The number of smooth-shifts is used only to distinguish between the orderings that score best for continues as well as for retains, etc. In the next section, we present a general methodology to compare these metrics, using the actual ordering of clauses in real texts of a corpus to identify the metric whose behavior mimics more closely the way these actual orderings were chosen. This methodology was implemented in a program called the System for Evaluating Entity Coherence (seec). 4 Exploring the space of possible orderings In section 2, we discussed how an ordering of utterances in a text like (1) can be translated into a sequence of CF lists, which is the representation that the Centering-based metrics operate on. We use the term Basis for Comparison (BfC) to indicate this sequence of CF lists. In this section, we discuss how the BfC is used in our search-oriented evaluation methodology to calculate a performance measure for each metric and compare them with each other. In the next section, we will see how our corpus was used to identify the most promising Centering-based metric for a text classifier. 4.1 Computing the classification rate The performance measure we employ is called the classification rate of a metric M on a certain BfC B. The classification rate estimates the ability of M to produce B as the output of text structuring according to a specific generation scenario. 
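As a rough illustration of the metrics just defined (and not of the actual seec implementation), M.NOCB and M.KP could be computed from ranked CF lists (CP first) as follows; the handling of the first transition and the counting of violations on nocb transitions are simplifying assumptions of this sketch.

```python
def compute_cb(prev_cf, cf):
    """CB of the current utterance: the highest-ranked element of the
    previous CF list that is realised again in the current CF list."""
    for centre in prev_cf:              # prev_cf is ranked, CP first
        if centre in cf:
            return centre
    return None                         # nocb transition

def m_nocb(ordering):
    """M.NOCB: the number of nocb transitions in an ordering of CF lists."""
    return sum(1 for prev, cur in zip(ordering, ordering[1:])
               if compute_cb(prev, cur) is None)

def m_kp(ordering):
    """M.KP: nocbs plus violations of cheapness, coherence and salience,
    summed over all transitions (lower is better)."""
    cost, prev_cb = 0, None
    for prev, cur in zip(ordering, ordering[1:]):
        cb = compute_cb(prev, cur)
        cost += int(cb is None)         # nocb
        cost += int(cb != prev[0])      # cheapness violated: CBn != CPn-1
        cost += int(cb != prev_cb)      # coherence violated: CBn != CBn-1
        cost += int(cb != cur[0])       # salience violated:  CBn != CPn
        prev_cb = cb
    return cost
```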
The first step of seec is to search through the space of possible orderings defined by the permutations of the CF lists that B consists of, and to divide the explored search space into sets of orderings that score better, equal, or worse than B according to M. Then, the classification rate is defined according to the following generation scenario. We assume that an ordering has higher chances of being selected as the output of text structuring the better it scores for M. This in turn means that the fewer the members of the set of better scoring orderings, the better the chances of B to be the chosen output. Moreover, we assume that additional factors play a role in the selection of one of the orderings that score the same for M. On average, B is expected to sit in the middle of the set of equally scoring orderings with respect to these additional factors. Hence, half of the orderings with the same score will have better chances than B to be selected by M. The classification rate υ of a metric M on B expresses the expected percentage of orderings with a higher probability of being generated than B according to the scores assigned by M and the additional biases assumed by the generation scenario as follows: (3) Classification rate: υ(M, B) = Better(M) + Equal(M)/2 Better(M) stands for the percentage of orderings that score better than B according to M, whilst Equal(M) is the percentage of orderings that score equal to B according to M. If υ(Mx, B) is the classification rate of Mx on B, and υ(My, B) is the classification rate of My on B, My is a more suitable candidate than Mx for generating B if υ(My, B) is smaller than υ(Mx, B). 4.2 Generalising across many BfCs In order for the experimental results to be reliable and generalisable, Mx and My should be compared on more than one BfC from a corpus C. In our standard analysis, the BfCs B1, ..., Bm from C are treated as the random factor in a repeated measures design since each BfC contributes a score for each metric. Then, the classification rates for Mx and My on the BfCs are compared with each other and significance is tested using the Sign Test. After calculating the number of BfCs that return a lower classification rate for Mx than for My and vice versa, the Sign Test reports whether the difference in the number of BfCs is significant, that is, whether there are significantly more BfCs with a lower classification rate for Mx than the BfCs with a lower classification rate for My (or vice versa).9 Finally, we summarise the performance of M on m BfCs from C in terms of the average classification rate Y: (4) Average classification rate: Y(M, C) = (υ(M, B1) + ... + υ(M, Bm)) / m 5 Using the gnome corpus for a search-based comparison of metrics We will now discuss how the methodology discussed above was used to compare the Centering-based metrics discussed in Section 3, using the original ordering of texts in the gnome corpus to compute the average classification rate of each metric. The gnome corpus contains texts from different genres, not all of which are of interest to us. In order to restrict the scope of the experiment to the text-type most relevant to our study, we selected 20 “museum labels”, i.e., short texts that describe a concrete artefact, which served as the input to seec together with the metrics in section 3.10 5.1 Permutation and search strategy In specifying the performance of the metrics we made use of a simple permutation heuristic exploiting a piece of domain-specific communication knowledge (Kittredge et al., 1991).
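A sketch of how the classification rate in (3) could be estimated by brute force is given below; keeping the first CF list in first position anticipates the heuristic described in section 5.1, and the cost-style metric interface (lower scores are better) is an assumption of this illustration.

```python
from itertools import permutations

def classification_rate(metric, bfc):
    """Classification rate v(M, B): the percentage of orderings that score
    better than the original BfC plus half the percentage that score the
    same.  Exhaustive enumeration is only feasible for short BfCs; the
    random sampling used for longer ones is not shown."""
    original_score = metric(bfc)
    better = equal = total = 0
    first, rest = bfc[0], bfc[1:]
    for perm in permutations(rest):     # first CF list stays in place
        score = metric([first] + list(perm))
        total += 1
        if score < original_score:      # lower cost = better ordering
            better += 1
        elif score == original_score:
            equal += 1
    return 100.0 * (better + equal / 2.0) / total
```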
Like Dimitromanolaki and Androutsopoulos (2003), we noticed that utterances like (a) in example (1) should always appear at the beginning of a felicitous museum label. Hence, we restricted the orderings considered by the seec to those in which the first CF list of B, CF1, appears in first position.11 For very short texts like (1), which give rise to a small BfC, the search space of possible orderings can be enumerated exhaustively. However, when B consists of many more CF lists, it is impractical to explore the search space in this way. Elsewhere we show that even in these cases it is possible to estimate υ(M, B) reliably for the whole population of orderings using a large random sample. In the experiments reported here, we had to resort to random sampling only once, for a BfC with 16 CF lists. 5.2 Comparing M.NOCB with other metrics The experimental results of the comparisons of the metrics from section 3, computed using the methodology in section 4, are reported in Table 3. In this table, the baseline metric M.NOCB is compared with each of M.CHEAP, M.KP and M.BFP.

                        M.NOCB
Pair                    lower   greater   ties    p       Winner
M.NOCB vs M.CHEAP       18      2         0       0.000   M.NOCB
M.NOCB vs M.KP          16      2         2       0.001   M.NOCB
M.NOCB vs M.BFP         12      3         5       0.018   M.NOCB
N = 20

Table 3: Comparing M.NOCB with M.CHEAP, M.KP and M.BFP in gnome

The first column of the Table identifies the comparison in question, e.g. M.NOCB versus M.CHEAP. The exact number of BfCs for which the classification rate of M.NOCB is lower than its competitor for each comparison is reported in the next column of the Table. For example, M.NOCB has a lower classification rate than M.CHEAP for 18 (out of 20) BfCs from the gnome corpus. M.CHEAP only achieves a lower classification rate for 2 BfCs, and there are no ties, i.e. cases where the classification rate of the two metrics is the same. The p value returned by the Sign Test for the difference in the number of BfCs, rounded to the third decimal place, is reported in the fifth column of the Table. The last column of Table 3 shows M.NOCB as the “winner” of the comparison with M.CHEAP since it has a lower classification rate than its competitor for significantly more BfCs in the corpus.12 9The Sign Test was chosen over its parametric alternatives to test significance because it does not carry specific assumptions about population distributions and variance. It is also more appropriate for small samples like the one used in this study. 10Note that example (1) is characteristic of the genre, not the length, of the texts in our subcorpus. The number of CF lists that the BfCs consist of ranges from 4 to 16 (average cardinality: 8.35 CF lists). 11Thus, we assume that when the set of CF lists serves as the input to text structuring, CF1 will be identified as the initial CF list of the ordering to be generated using annotation features such as the unit type which distinguishes (a) from the other utterances in (1).
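For reference, the Sign Test p values in Table 3 can be reproduced with a one-sided binomial computation like the sketch below; treating the test as one-sided and discarding ties are assumptions of this illustration rather than details stated in the text.

```python
from math import comb

def sign_test_p(lower, greater):
    """One-sided sign test for the paired comparisons described above:
    `lower` is the number of BfCs favouring one metric, `greater` the
    number favouring the other; ties are discarded."""
    n = lower + greater
    k = min(lower, greater)
    # probability of an outcome at least this lopsided under a fair coin
    return sum(comb(n, i) for i in range(k + 1)) / 2 ** n

for lower, greater in [(18, 2), (16, 2), (12, 3)]:   # rows of Table 3
    print(round(sign_test_p(lower, greater), 3))      # 0.0, 0.001, 0.018
```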
This in turn indicates that simply avoiding nocb transitions is more relevant to text structuring than the combinations of the other Centering notions that the more complicated metrics make use of. (However, these notions might still be appropriate for other tasks, such as anaphora resolution.) 6 Discussion: the performance of M.NOCB We already saw that Poesio et al. (2004) found that the majority of the recorded transitions in the configuration of Centering used in this study are nocbs. However, we also explained in section 2.3 that what really matters when trying to determine whether a text might have been generated only paying attention to Centering constraints is the extent to which it would be possible to ‘improve’ upon the ordering chosen in that text, given the information that the text structuring algorithm had to convey. The average classification rate of M.NOCB is an esti12No winner is reported for a comparison when the p value returned by the Sign Test is not significant (ns), i.e. greater than 0.05. Note also that despite conducting more than one pairwise comparison simultaneously we refrain from further adjusting the overall threshold of significance (e.g. according to the Bonferroni method, typically used for multiple planned comparisons that employ parametric statistics) since it is assumed that choosing a conservative statistic such as the Sign Test already provides substantial protection against the possibility of a type I error. Pair M.NOCB p Winner lower greater ties M.NOCB vs M.CHEAP 110 12 0 0.000 M.NOCB M.NOCB vs M.KP 103 16 3 0.000 M.NOCB M.NOCB vs M.BFP 41 31 49 0.121 ns N 122 Table 4: Comparing M.NOCB with M.CHEAP, M.KP and M.BFP using the novel methodology in MPIRO mate of exactly this variable, indicating whether M.NOCB is likely to arrive at the BfC during text structuring. The average classification rate Y for M.NOCB on the subcorpus of gnome studied here, for the parameter configuration of Centering we have assumed, is 19.95%. This means that on average the BfC is close to the top 20% of alternative orderings when these orderings are ranked according to their probability of being selected as the output of the algorithm. On the one hand, this result shows that although the ordering of CF lists in the BfC might not completely minimise the number of observed nocb transitions, the BfC tends to be in greater agreement with the preference to avoid nocbs than most of the alternative orderings. In this sense, it appears that the BfC optimises with respect to the number of potential nocbs to a certain extent. On the other hand, this result indicates that there are quite a few orderings which would appear more likely to be selected than the BfC. We believe this finding can be interpreted in two ways. One possibility is that M.NOCB needs to be supplemented by other features in order to explain why the original text was structured this way. This is the conclusion arrived at by Poesio et al. (2004) and those text structuring practitioners who use notions derived from Centering in combination with other coherence constraints in the definitions of their metrics. There is also a second possibility, however: we might want to reconsider the assumption that human text planners are trying to ensure that each utterance in a text is locally coherent. They might do all of their planning just on the basis of Centering constraints, at least in this genre –perhaps because of resource limitations– and simply accept a certain degree of incoherence. 
Further research on this issue will require psycholinguistic methods; our analysis nevertheless sheds more light on two previously unaddressed questions in the corpus-based evaluation of Centering – a) which of the Centering notions are most relevant to the text structuring task, and b) to which extent Centering on its own can be useful for this purpose. 7 Further results In related work, we applied the methodology discussed here to a larger set of existing data (122 BfCs) derived from the MPIRO system and ordered by a domain expert (Dimitromanolaki and Androutsopoulos, 2003). As Table 4 shows, the results from MPIRO verify the ones reported here, especially with respect to M.KP and M.CHEAP which are overwhelmingly beaten by the baseline in the new domain as well. Also note that since M.BFP fails to overtake M.NOCB in MPIRO, the baseline can be considered the most promising solution among the ones investigated in both domains by applying Occam’s logical principle. We also tried to account for some additional constraints on coherence, namely local rhetorical relations, based on some of the assumptions in Knott et al. (2001), and what Karamanis (2003) calls the “PageFocus” which corresponds to the main entity described in a text, in our example de374. These results, reported in (Karamanis, 2003), indicate that these constraints conflict with Centering as formulated in this paper, by increasing - instead of reducing - the classification rate of the metrics. Hence, it remains unclear to us how to improve upon M.NOCB. In our future work, we would like to experiment with more metrics. Moreover, although we consider the parameter configuration of Centering used here a plausible choice, we intend to apply our methodology to study different instantiations of the Centering parameters, e.g. by investigating whether “indirect realisation” reduces the classification rate for M.NOCB compared to “direct realisation”, etc. Acknowledgements Special thanks to James Soutter for writing the program which translates the output produced by gnome’s scripts into a format appropriate for seec. The first author was able to engage in this research thanks to a scholarship from the Greek State Scholarships Foundation (IKY). References Regina Barzilay, Noemie Elhadad, and Kathleen McKeown. 2002. Inferring strategies for sentence ordering in multidocument news summarization. Journal of Artificial Intelligence Research, 17:35–55. Susan E. Brennan, Marilyn A. Friedman [Walker], and Carl J. Pollard. 1987. A centering approach to pronouns. In Proceedings of ACL 1987, pages 155–162, Stanford, California. Barbara Di Eugenio. 1998. Centering in Italian. In Walker et al. (Walker et al., 1998b), pages 115–137. Aggeliki Dimitromanolaki and Ion Androutsopoulos. 2003. Learning to order facts for discourse planning in natural language generation. In Proceedings of the 9th European Workshop on Natural Language Generation, Budapest, Hungary. Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–225. Amy Isard, Jon Oberlander, Ion Androutsopoulos, and Colin Matheson. 2003. Speaking the users’ languages. IEEE Intelligent Systems Magazine, 18(1):40–45. Nikiforos Karamanis and Hisar Maruli Manurung. 2002. Stochastic text structuring using the principle of continuity. In Proceedings of INLG 2002, pages 81–88, Harriman, NY, USA, July. Nikiforos Karamanis. 2003. Entity Coherence for Descriptive Text Structuring. Ph.D. 
thesis, Division of Informatics, University of Edinburgh. Rodger Kibble and Richard Power. 2000. An integrated framework for text planning and pronominalisation. In Proceedings of INLG 2000, pages 77–84, Israel. Rodger Kibble. 2001. A reformulation of Rule 2 of Centering Theory. Computational Linguistics, 27(4):579–587. Richard Kittredge, Tanya Korelsky, and Owen Rambow. 1991. On the need for domain communication knowledge. Computational Intelligence, 7:305–314. Alistair Knott, Jon Oberlander, Mick O’Donnell, and Chris Mellish. 2001. Beyond elaboration: The interaction of relations and focus in coherent text. In T. Sanders, J. Schilperoord, and W. Spooren, editors, Text Representation: Linguistic and Psycholinguistic Aspects, chapter 7, pages 181–196. John Benjamins. Mirella Lapata. 2003. Probabilistic text structuring: Experiments with sentence ordering. In Proceedings of ACL 2003, Saporo, Japan, July. Chris Mellish, Alistair Knott, Jon Oberlander, and Mick O’Donnell. 1998. Experiments using stochastic search for text planning. In Proceedings of the 9th International Workshop on NLG, pages 98–107, Niagara-on-theLake, Ontario, Canada. Eleni Miltsakaki. 2002. Towards an aposynthesis of topic continuity and intrasentential anaphora. Computational Linguistics, 28(3):319–355. Mick O’Donnell, Chris Mellish, Jon Oberlander, and Alistair Knott. 2001. ILEX: An architecture for a dynamic hypertext generation system. Natural Language Engineering, 7(3):225–250. Rebecca J. Passoneau. 1998. Interaction of discourse structure with explicitness of discourse anaphoric phrases. In Walker et al. (Walker et al., 1998b), pages 327–358. Massimo Poesio, Rosemary Stevenson, Barbara Di Eugenio, and Janet Hitzeman. 2004. Centering: a parametric theory and its instantiations. Computational Linguistics, 30(3). Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Cambridge. Michael Strube and Udo Hahn. 1999. Functional centering: Grounding referential coherence in information structure. Computational Linguistics, 25(3):309–344. Marilyn A. Walker, Aravind K. Joshi, and Ellen F. Prince. 1998a. Centering in naturally occuring discourse: An overview. In Walker et al. (Walker et al., 1998b), pages 1–30. Marilyn A. Walker, Aravind K. Joshi, and Ellen F. Prince, editors. 1998b. Centering Theory in Discourse. Clarendon Press, Oxford.
Computing Locally Coherent Discourses Ernst Althaus LORIA Universit´e Henri Poincar´e Vandœuvre-l`es-Nancy, France [email protected] Nikiforos Karamanis School of Informatics University of Edinburgh Edinburgh, UK [email protected] Alexander Koller Dept. of Computational Linguistics Saarland University Saarbr¨ucken, Germany [email protected] Abstract We present the first algorithm that computes optimal orderings of sentences into a locally coherent discourse. The algorithm runs very efficiently on a variety of coherence measures from the literature. We also show that the discourse ordering problem is NP-complete and cannot be approximated. 1 Introduction One central problem in discourse generation and summarisation is to structure the discourse in a way that maximises coherence. Coherence is the property of a good human-authored text that makes it easier to read and understand than a randomlyordered collection of sentences. Several papers in the recent literature (Mellish et al., 1998; Barzilay et al., 2002; Karamanis and Manurung, 2002; Lapata, 2003; Karamanis et al., 2004) have focused on defining local coherence, which evaluates the quality of sentence-to-sentence transitions. This is in contrast to theories of global coherence, which can consider relations between larger chunks of the discourse and e.g. structures them into a tree (Mann and Thompson, 1988; Marcu, 1997; Webber et al., 1999). Measures of local coherence specify which ordering of the sentences makes for the most coherent discourse, and can be based e.g. on Centering Theory (Walker et al., 1998; Brennan et al., 1987; Kibble and Power, 2000; Karamanis and Manurung, 2002) or on statistical models (Lapata, 2003). But while formal models of local coherence have made substantial progress over the past few years, the question of how to efficiently compute an ordering of the sentences in a discourse that maximises local coherence is still largely unsolved. The fundamental problem is that any of the factorial number of permutations of the sentences could be the optimal discourse, which makes for a formidable search space for nontrivial discourses. Mellish et al. (1998) and Karamanis and Manurung (2002) present algorithms based on genetic programming, and Lapata (2003) uses a graph-based heuristic algorithm, but none of them can give any guarantees about the quality of the computed ordering. This paper presents the first algorithm that computes optimal locally coherent discourses, and establishes the complexity of the discourse ordering problem. We first prove that the discourse ordering problem for local coherence measures is equivalent to the Travelling Salesman Problem (TSP). This means that discourse ordering is NP-complete, i.e. there are probably no polynomial algorithms for it. Worse, our result implies that the problem is not even approximable; any polynomial algorithm will compute arbitrarily bad solutions on unfortunate inputs. Note that all approximation algorithms for the TSP assume that the underlying cost function is a metric, which is not the case for the coherence measures we consider. Despite this negative result, we show that by applying modern algorithms for TSP, the discourse ordering problem can be solved efficiently enough for practical applications. We define a branch-and-cut algorithm based on linear programming, and evaluate it on discourse ordering problems based on the GNOME corpus (Karamanis, 2003) and the BLLIP corpus (Lapata, 2003). 
If the local coherence measure depends only on the adjacent pairs of sentences in the discourse, we can order discourses of up to 50 sentences in under a second. If it is allowed to depend on the left-hand context of the sentence pair, computation is often still efficient, but can become expensive. The structure of the paper is as follows. We will first formally define the discourse ordering problem and relate our definition to the literature on local coherence measures in Section 2. Then we will prove the equivalence of discourse ordering and TSP (Section 3), and present algorithms for solving it in Section 4. Section 5 evaluates our algorithms on examples from the literature. We compare our approach to various others in Section 6, and then conclude in Section 7. 2 The Discourse Ordering Problem We will first give a formal definition of the problem of computing locally coherent discourses, and demonstrate how some local coherence measures from the literature fit into this framework. 2.1 Definitions We assume that a discourse is made up of discourse units (depending on the underlying theory, these could be utterances, sentences, clauses, etc.), which must be ordered to achieve maximum local coherence. We call the problem of computing the optimal ordering the discourse ordering problem. We formalise the problem by assigning a cost to each unit-to-unit transition, and a cost for the discourse to start with a certain unit. Transition costs may depend on the local context, i.e. a fixed number of discourse units to the left may influence the cost of a transition. The optimal ordering is the one which minimises the sum of the costs. Definition 1. A d-place transition cost function for a set U of discourse units is a function cT : U^d → R. Intuitively, cT(ud | u1, ..., ud−1) is the cost of the transition (ud−1, ud) given that the immediately preceding units were u1, ..., ud−2. A d-place initial cost function for U is a function cI : U^d → R. Intuitively, cI(u1, ..., ud) is the cost for the fact that the discourse starts with the sequence u1, ..., ud. The d-place discourse ordering problem is defined as follows: Given a set U = {u1, ..., un}, a d-place transition cost function cT and a (d−1)-place initial cost function cI, compute a permutation π of {1, ..., n} such that

cI(uπ(1), ..., uπ(d−1)) + Σ_{i=1}^{n−d+1} cT(uπ(i+d−1) | uπ(i), ..., uπ(i+d−2))

is minimal. The notation for the cost functions is suggestive: The transition cost function has the character of a conditional probability, which specifies that the cost of continuing the discourse with the unit ud depends on the local context u1, ..., ud−1. This local context is not available for the first d−1 units of the discourse, which is why their costs are summarily covered by the initial function.
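To make Definition 1 concrete, the following sketch evaluates the cost of a single candidate ordering. It is our own illustrative code, not taken from the paper; the function and argument names (ordering_cost, c_init, c_trans) are ours, with c_init and c_trans standing in for cI and cT.

```python
def ordering_cost(order, c_init, c_trans, d):
    """Cost of one candidate ordering under Definition 1.

    order   : sequence of discourse units u_pi(1), ..., u_pi(n)
    c_init  : (d-1)-place initial cost function cI
    c_trans : d-place transition cost function cT, called as
              c_trans(u1, ..., ud) for the transition (u_{d-1}, u_d)
    """
    n = len(order)
    cost = c_init(*order[:d - 1])          # cost of starting with the first d-1 units
    for i in range(n - d + 1):             # one term per length-d window
        cost += c_trans(*order[i:i + d])
    return cost

# Toy usage with d = 2 and arbitrary costs, purely for illustration.
units = ["u1", "u2", "u3"]
total = ordering_cost(units,
                      c_init=lambda u: 0.0,
                      c_trans=lambda a, b: 0.0 if (a, b) == ("u1", "u2") else 1.0,
                      d=2)
```

For d = 2 this reduces to an initial cost for the first unit plus one transition cost per adjacent pair; an exact solver has to minimise this quantity over all n! orderings, which is what the rest of the paper addresses.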
2.2 Centering-Based Cost Functions One popular class of coherence measures is based on Centering Theory (CT, (Walker et al., 1998)). We will briefly sketch its basic notions and then show how some CT-based coherence measures can be cast into our framework. The standard formulation of CT e.g. in (Walker et al., 1998), calls the discourse units utterances, and assigns to each utterance ui in the discourse a list Cf(ui) of forward-looking centres. The members of Cf(ui) correspond to the referents of the NPs in ui and are ranked in order of prominence, the first element being the preferred centre Cp(ui). The backward-looking centre Cb(ui) of ui is defined as the highest ranked element of Cf(ui) which also appears in Cf(ui−1), and serves as the link between the two subsequent utterances ui−1 and ui. Each utterance has at most one Cb. If ui and ui−1 have no forward-looking centres in common, or if ui is the first utterance in the discourse, then ui does not have a Cb at all. Based on these concepts, CT classifies the transitions between subsequent utterances into different types. Table 1 shows the most common classification into the four types CONTINUE, RETAIN, SMOOTH-SHIFT, and ROUGH-SHIFT, which are predicted to be less and less coherent in this order (Brennan et al., 1987). Kibble and Power (2000) define three further classes of transitions: COHERENCE and SALIENCE, which are both defined in Table 1 as well, and NOCB, the class of transitions for which Cb(ui) is undefined. Finally, a transition is considered to satisfy the CHEAPNESS constraint (Strube and Hahn, 1999) if Cb(ui) = Cp(ui−1).

                               COHERENCE: Cb(ui) = Cb(ui−1)   COHERENCE*: Cb(ui) ≠ Cb(ui−1)
SALIENCE:  Cb(ui) = Cp(ui)     CONTINUE                       SMOOTH-SHIFT
SALIENCE*: Cb(ui) ≠ Cp(ui)     RETAIN                         ROUGH-SHIFT

Table 1: COHERENCE, SALIENCE and the table of standard transitions

Table 2 summarises some cost functions from the literature, in the reconstruction of Karamanis et al. (2004). Each line shows the name of the coherence measure, the arity d from Definition 1, and the initial and transition cost functions. To fit the definitions in one line, we use terms of the form f_k, which abbreviate applications of f to the last k arguments of the cost functions, i.e. f(ud−k+1, ..., ud).

           d   initial cost cI(u1, ..., ud−1)         transition cost cT(ud | u1, ..., ud−1)
M.NOCB     2   0                                      nocb_2
M.KP       3   nocb_2 + nocheap_2 + nosal_2           nocb_2 + nocheap_2 + nosal_2 + nocoh_3
M.BFP      3   (1 − nosal_2, nosal_2, 0, 0)           (cont_3, ret_3, ss_3, rs_3)
M.LAPATA   2   −log P(u1)                             −log P(u2 | u1)

Table 2: Some cost functions from the literature.

The most basic coherence measure, M.NOCB (Karamanis and Manurung, 2002), simply assigns to each NOCB transition the cost 1 and to every other transition the cost 0. The definition of cT(u2|u1), which decodes to nocb(u1, u2), only looks at the two units in the transition, and no further context. The initial costs for this coherence measure are always zero. The measure M.KP (Kibble and Power, 2000) sums the value of nocb and the values of three functions which evaluate to 0 if the transition is cheap, salient, or coherent, and 1 otherwise. This is an instance of the 3-place discourse ordering problem because COHERENCE depends on Cb(ui−1), which itself depends on Cf(ui−2); hence nocoh must take three arguments. Finally, the measure M.BFP (Brennan et al., 1987) uses a lexicographic ordering on 4-tuples which indicate whether the transition is a CONTINUE, RETAIN, SMOOTH-SHIFT, or ROUGH-SHIFT. cT and all four functions it is computed from take three arguments because the classification depends on COHERENCE. As the first transition in the discourse is coherent by default (it has no Cb), we can compute cI by distinguishing RETAIN and CONTINUE via SALIENCE. The tuple-valued cost functions can be converted to real-valued functions by choosing a sufficiently large number M and using the value M^3 · cont + M^2 · ret + M · ss + rs.
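As an illustration of how these notions can be operationalised, the sketch below (our own code, not the authors' implementation) computes the backward-looking centre, the Table 1 classification and the M.NOCB transition cost. It assumes each utterance is represented simply as its ranked Cf list of referents, with Cp as the first element.

```python
def cb(cf_curr, cf_prev):
    """Backward-looking centre Cb: the highest-ranked element of Cf(u_i)
    that also appears in Cf(u_{i-1}); None if there is no such element
    or u_i is the first utterance."""
    if cf_prev is None:
        return None
    prev = set(cf_prev)
    for referent in cf_curr:          # cf_curr is ranked, Cp first
        if referent in prev:
            return referent
    return None

def classify(cf_prev2, cf_prev, cf_curr):
    """Classify the transition into the current utterance (Table 1, plus NOCB)."""
    cb_curr = cb(cf_curr, cf_prev)
    if cb_curr is None:
        return "NOCB"
    cb_prev = cb(cf_prev, cf_prev2)
    # An undefined Cb(u_{i-1}) counts as coherent, as for the first transition.
    coherent = cb_prev is None or cb_curr == cb_prev      # COHERENCE
    salient = cb_curr == cf_curr[0]                       # SALIENCE: Cb(u_i) = Cp(u_i)
    if coherent:
        return "CONTINUE" if salient else "RETAIN"
    return "SMOOTH-SHIFT" if salient else "ROUGH-SHIFT"

def m_nocb(cf_prev, cf_curr):
    """M.NOCB transition cost: 1 for a NOCB transition, 0 otherwise."""
    return 1 if cb(cf_curr, cf_prev) is None else 0
```

M.KP and M.BFP can be built on top of the same helpers by adding the CHEAPNESS test and the lexicographic weighting described above.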
2.3 Probability-Based Cost Functions A fundamentally different approach to measure discourse coherence was proposed by Lapata (2003). It uses a statistical bigram model that assigns each pair ui, uk of utterances a probability P(uk|ui) of appearing in subsequent positions, and each utterance a probability P(ui) of appearing in the initial position of the discourse. The probabilities are estimated on the grounds of syntactic features of the discourse units. The probability of the entire discourse u1 ... un is the product P(u1) · P(u2|u1) · ... · P(un|un−1). We can transform Lapata's model straightforwardly into our cost function framework, as shown under M.LAPATA in Table 2. The discourse that minimizes the sum of the negative logarithms will also maximise the product of the probabilities. We have d = 2 because it is a bigram model in which the transition probability does not depend on the previous discourse units. 3 Equivalence of Discourse Ordering and TSP Now we show that discourse ordering and the travelling salesman problem are equivalent. In order to do this, we first redefine discourse ordering as a graph problem. d-place discourse ordering problem (dPDOP): Given a directed graph G = (V, E), a node s ∈ V and a function c : V^d → R, compute a simple directed path P = (s = v0, v1, ..., vn) from s through all vertices in V which minimises Σ_{i=0}^{n−d+1} c(vi, vi+1, ..., vi+d−1). We write instances of dPDOP as (V, E, s, c). The nodes v1, ..., vn correspond to the discourse units. The cost function c encodes both the initial and the transition cost functions from Section 2 by returning the initial cost if its first argument is the (new) start node s. Now let's define the version of the travelling salesman problem we will use below. Generalised asymmetric TSP (GATSP): Given a directed graph G = (V, E), edge weights c : E → R, and a partition (V1, ..., Vk) of the nodes V, compute the shortest directed cycle that visits exactly one node of each Vi. We call such a cycle a tour and write instances of GATSP as ((V1, ..., Vk), E, c). The usual definition of the TSP, in which every node must be visited exactly once, is the special case of GATSP where each Vi contains exactly one node. We call this case asymmetric travelling salesman problem, ATSP.

[Figure 1: Reduction of ATSP to 2PDOP (left: an ATSP instance; right: the corresponding 2PDOP instance)]

We will show that ATSP can be reduced to 2PDOP, and that any dPDOP can be reduced to GATSP. 3.1 Reduction of ATSP to 2PDOP First, we introduce the reduction of ATSP to 2PDOP, which establishes NP-completeness of dPDOP for all d > 1. The reduction is approximation preserving, i.e. if we can find a solution of 2PDOP that is worse than the optimum only by a factor of ϵ (an ϵ-approximation), it translates to a solution of ATSP that is also an ϵ-approximation. Since it is known that there can be no polynomial algorithms that compute ϵ-approximations for general ATSP, for any ϵ (Cormen et al., 1990), this means that dPDOP cannot be approximated either (unless P=NP): Any polynomial algorithm for dPDOP will compute arbitrarily bad solutions on certain inputs. The reduction works as follows. Let G = ((V1, ..., Vk), E, c) be an instance of ATSP, and V = V1 ∪ ... ∪ Vk. We choose an arbitrary node v ∈ V and split it into two nodes vs and vt. We assign all edges with source node v to vs and all edges with target node v to vt (compare Figure 1). Finally we make vs the source node of our 2PDOP instance G′. For every tour in G, we have a path in G′ starting at vs visiting all other nodes (and ending in vt) with the same cost by replacing the edge (v, u) out of v by (vs, u) and the edge (w, v) into v by (w, vt). Conversely, for every path starting at vs visiting all nodes, we have an ATSP tour of the same cost, since all such paths will end in vt (as vt has no outgoing edges). An example is shown in Fig. 1. The ATSP instance on the left has the tour (1, 3, 2, 1), indicated by the solid edges. The node 1 is split into the two nodes 1s and 1t, and the tour translates to the path (1s, 3, 2, 1t) in the 2PDOP instance.
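The node-splitting construction is easy to state as code. The sketch below is ours, not the paper's (the dictionary-based edge-cost representation is an assumption); it mirrors the construction just described.

```python
def atsp_to_2pdop(nodes, cost, v):
    """Split node v of an ATSP instance into v_s (keeps v's outgoing edges)
    and v_t (keeps v's incoming edges), yielding a 2PDOP instance.

    nodes : iterable of node names
    cost  : dict mapping (u, w) -> edge cost
    v     : the node to split
    Returns (new_nodes, new_cost, source).
    """
    vs, vt = (v, "s"), (v, "t")
    new_nodes = [u for u in nodes if u != v] + [vs, vt]
    new_cost = {}
    for (u, w), c in cost.items():
        if u == v and w == v:
            continue                     # a self-loop cannot appear in a tour
        u2 = vs if u == v else u         # edges out of v now leave v_s
        w2 = vt if w == v else w         # edges into v now enter v_t
        new_cost[(u2, w2)] = c
    return new_nodes, new_cost, vs

# A tour (v, u1, ..., uk, v) of the ATSP instance corresponds to the path
# (v_s, u1, ..., uk, v_t) of the same cost in the resulting 2PDOP instance.
```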
3.2 Reduction of dPDOP to GATSP Conversely, we can encode an instance G = (V, E, s, c) of dPDOP as an instance G′ = ((V′_u)_{u∈V}, E′, c′) of GATSP, in such a way that the optimal solutions correspond. The cost of traversing an edge in dPDOP depends on the previous d−1 nodes; we compress these costs into ordinary costs of single edges in the reduction to GATSP. The GATSP instance has a node [u1, ..., ud−1] for every (d−1)-tuple of nodes of V. It has an edge from [u1, ..., ud−1] to [u2, ..., ud−1, ud] iff there is an edge from ud−1 to ud in G, and it has an edge from each node into [s, ..., s]. The idea is to encode a path P = (s = u0, u1, ..., un) in G as a tour TP in G′ that successively visits the nodes [ui−d+1, ..., ui], i = 0, ..., n, where we assume that uj = s for all j ≤ 0 (compare Figure 2). The cost of TP can be made equal to the cost of P by making the cost of the edge from [u1, ..., ud−1] to [u2, ..., ud] equal to c(u1, ..., ud). (We set c′(e) to 0 for all edges e between nodes with first component s and for the edges e with target node [s, ..., s].) Finally, we define V′_u to be the set of all nodes in G′ with last component u. It is not hard to see that for any simple path of length n in G, we find a tour TP in G′ with the same cost. Conversely, we can find for every tour in G′ a simple path of length n in G with the same cost.

[Figure 2: Reduction of dPDOP to GATSP (left: a 3PDOP instance; right: the corresponding GATSP instance). Edges to the source node [s, s] are not drawn.]

Note that the encoding G′ will contain many unnecessary nodes and edges. For instance, all nodes that have no incoming edges can never be used in a tour, and can be deleted. We can safely delete such unnecessary nodes in a post-processing step. An example is shown in Fig. 2. The 3PDOP instance on the left has a path (s, 3, 1, 2), which translates to the path ([s, s], [s, 3], [3, 1], [1, 2]) in the GATSP instance shown on the right. This path can be completed by a tour by adding the edge ([1, 2], [s, s]), of cost 0. The tour indeed visits each V′_u (i.e., each column) exactly once. Nodes with last component s which are not [s, s] are unreachable and are not shown. For the special case of d = 2, the GATSP is simply an ordinary ATSP. The graphs of both problems look identical in this case, except that the GATSP instance has edges of cost 0 from any node to the source [s].
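The tuple-based encoding can also be written down directly. The following sketch is ours (representing G as a node list, an edge set and a Python cost function is an assumption, not the paper's data structure); it builds the GATSP node set, edge costs and partition as described above, leaving the removal of unreachable nodes to a post-processing step.

```python
from itertools import product

def dpdop_to_gatsp(nodes, edges, s, c, d):
    """Encode a dPDOP instance (V, E, s, c) as a GATSP instance.

    nodes : the vertices V (s included)
    edges : set of directed edges (u, w) of G
    c     : d-place cost function, called as c(u1, ..., ud)
    Returns (gatsp_nodes, gatsp_edges, partition): gatsp_edges maps
    (node, node) -> cost; partition maps u -> set of GATSP nodes
    whose last component is u.
    """
    start = (s,) * (d - 1)
    gatsp_nodes = [tuple(t) for t in product(nodes, repeat=d - 1)]
    gatsp_edges = {}
    for t in gatsp_nodes:
        for ud in nodes:
            if (t[-1], ud) in edges:
                succ = t[1:] + (ud,)
                # cost 0 between nodes whose first component is still s,
                # otherwise the compressed d-place cost c(u1, ..., ud)
                weight = 0 if (t[0] == s and succ[0] == s) else c(*t, ud)
                gatsp_edges[(t, succ)] = weight
        if t != start:
            gatsp_edges[(t, start)] = 0    # edge of cost 0 back into [s, ..., s]
    partition = {u: {t for t in gatsp_nodes if t[-1] == u} for u in nodes}
    return gatsp_nodes, gatsp_edges, partition
```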
4 Computing Optimal Orderings The equivalence of dPDOP and GATSP implies that we can now bring algorithms from the vast literature on TSP to bear on the discourse ordering problem. One straightforward method is to reduce the GATSP further to ATSP (Noon and Bean, 1993); for the case d = 2, nothing has to be done. Then one can solve the reduced ATSP instance; see (Fischetti et al., 2001; Fischetti et al., 2002) for a recent survey of exact methods. We choose the alternative of developing a new algorithm for solving GATSP directly, which uses standard techniques from combinatorial optimisation, gives us a better handle on optimising the algorithm for our problem instances, and runs more efficiently in practice. Our algorithm translates the GATSP instance into an integer linear program (ILP) and uses the branch-and-cut method (Nemhauser and Wolsey, 1988) to solve it. Integer linear programs consist of a set of linear equations and inequalities, and are solved by integer variable assignments which maximise or minimise a goal function while satisfying the other conditions. Let G = (V, E) be a directed graph and S ⊆ V. We define δ+(S) = {(u, v) ∈ E | u ∈ S and v ∉ S} and δ−(S) = {(u, v) ∈ E | u ∉ S and v ∈ S}, i.e. δ+(S) and δ−(S) are the sets of all outgoing and incoming edges of S, respectively. We assume that the graph G has no edges within one partition Vu, since such edges cannot be used by any solution. With this assumption, GATSP can be phrased as an ILP as follows (this formulation is similar to the one proposed by Laporte et al. (1987)):

min  Σ_{e∈E} c_e x_e
s.t. Σ_{e∈δ+(v)} x_e = Σ_{e∈δ−(v)} x_e        ∀ v ∈ V              (1)
     Σ_{e∈δ−(Vi)} x_e = 1                      1 ≤ i ≤ n            (2)
     Σ_{e∈δ+(∪_{i∈I} Vi)} x_e ≥ 1              I ⊂ {1, ..., n}      (3)
     x_e ∈ {0, 1}

We have a binary variable xe for each edge e of the graph. The intention is that xe has value 1 if e is used in the tour, and 0 otherwise. Thus the cost of the tour can be written as Σ_{e∈E} c_e x_e. The three conditions enforce the variable assignment to encode a valid GATSP tour. (1) ensures that all integer solutions encode a set of cycles. (2) guarantees that every partition Vi is visited by exactly one cycle. The inequalities (3) say that every subset of the partitions has an outgoing edge; this makes sure a solution encodes one cycle, rather than a set of multiple cycles. To solve such an ILP using the branch-and-cut method, we drop the integrality constraints (i.e. we replace xe ∈ {0, 1} by 0 ≤ xe ≤ 1) and solve the corresponding linear programming (LP) relaxation. If the solution of the LP is integral, we found the optimal solution. Otherwise we pick a variable with a fractional value and split the problem into two subproblems by setting the variable to 0 and 1, respectively. We solve the subproblems recursively and disregard a subproblem if its LP bound is worse than the best known solution. Since our ILP contains an exponential number of inequalities of type (3), solving the complete LPs directly would be too expensive. Instead, we start with a small subset of these inequalities, and test (efficiently) whether a solution of the smaller LP violates an inequality which is not in the current LP. If so, we add the inequality to the LP, resolve it, and iterate. Otherwise we found the solution of the LP with the exponential number of inequalities. The inequalities we add by need are called cutting planes; algorithms that find violated cutting planes are called separation algorithms. To keep the size of the branch-and-cut tree small, our algorithm employs some heuristics to find further upper bounds. In addition, we improve the lower bound from the LP relaxations by adding further inequalities to the LP that are valid for all integral solutions, but can be violated for optimal solutions of the LP. One major challenge here was to find separation algorithms for these inequalities. We cannot go into these details for lack of space, but will discuss them in a separate paper.
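For intuition about what "finding a violated cutting plane" means, the separation step for the connectivity inequalities (3) can be illustrated with a brute-force check over all proper subsets I. The code and its names are ours; it is exponential in the number of partitions, so it only clarifies the idea and is no substitute for the efficient separation routines used in the actual branch-and-cut implementation.

```python
from itertools import combinations

def violated_cut(partitions, x, eps=1e-6):
    """Find a subset I of partition indices whose outgoing-edge mass in a
    fractional LP solution violates inequality (3), or None if (3) holds.

    partitions : list of sets of GATSP nodes, V_1, ..., V_n
    x          : dict mapping edge (u, w) -> fractional value x_e
    """
    n = len(partitions)
    for k in range(1, n):                       # proper, non-empty subsets only
        for I in combinations(range(n), k):
            inside = set().union(*(partitions[i] for i in I))
            outflow = sum(val for (u, w), val in x.items()
                          if u in inside and w not in inside)
            if outflow < 1 - eps:               # inequality (3) violated
                return set(I)
    return None
```

In a real cutting-plane loop, a violated I returned by such a routine is turned into a new inequality, added to the LP, and the relaxation is re-solved.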
5 Evaluation We implemented the algorithm and ran it on some examples to evaluate its practical efficiency. The runtimes are shown in Tables 3 and 4 for an implementation using a branch-and-cut ILP solver which is free for all academic purposes (ILP-FS) and a commercial branch-and-cut ILP solver (ILP-CS). Our implementations are based on LEDA 4.4.1 (www.algorithmic-solutions.com) for the data structures and the graph algorithms and on SCIL 0.8 (www.mpi-sb.mpg.de/SCIL) for implementing the ILP-based branch-and-cut algorithm. SCIL can be used with different branch-and-cut core codes. We used CPLEX 9.0 (www.ilog.com) as commercial core and SCIP 0.68 (www.zib.de/Optimization/Software/SCIP/) based on SOPLEX 1.2.2a (www.zib.de/Optimization/Software/Soplex/) as the free implementation. Note that all our implementations are still preliminary. The software is publicly available (www.mpi-sb.mpg.de/~althaus/PDOP.html). We evaluate the implementations on three classes of inputs. First, we use two discourses from the GNOME corpus, taken from (Karamanis, 2003), together with the centering-based cost functions from Section 2: coffers1, containing 10 discourse units, and cabinet1, containing 15 discourse units. Second, we use twelve discourses from the BLLIP corpus taken from (Lapata, 2003), together with M.LAPATA. These discourses are 4 to 13 discourse units long; the table only shows the instance with the highest running time. Finally, we generate random instances of 2PDOP of size 20–100, and of 3PDOP of size 10, 15, and 20. A random instance is the complete graph, where c(u1, ..., ud) is chosen uniformly at random from {0, ..., 999}. The results for the 2-place instances are shown in Table 3, and the results for the 3-place instances are shown in Table 4. The numbers are runtimes in seconds on a Pentium 4 (Xeon) processor with 3.06 GHz. Note that a hypothetical baseline implementation which naively generates and evaluates all permutations would run over 77 years for a discourse of length 20, even on a highly optimistic platform that evaluates one billion permutations per second.

Instance           Size   ILP-FS   ILP-CS
lapata-10           13     0.05     0.05
coffers1 M.NOCB     10     0.04     0.02
cabinet1 M.NOCB     15     0.07     0.01
random (avg)        20     0.09     0.07
random (avg)        40     0.28     0.17
random (avg)        60     1.39     0.40
random (avg)       100     6.17     1.97

Table 3: Some runtimes for d = 2 (in seconds).

Instance           Size   ILP-FS   ILP-CS
coffers1 M.KP       10     0.05     0.05
coffers1 M.BFP      10     0.08     0.06
cabinet1 M.KP       15     0.40     1.12
cabinet1 M.BFP      15     0.39     0.28
random (avg)        10     1.00     0.42
random (avg)        15     35.1     5.79
random (avg)        20              115.8

Table 4: Some runtimes for d = 3 (in seconds).

For d = 2, all real-life instances and all random instances of size up to 50 can be solved in less than one second, with either implementation. The problem becomes more challenging for d = 3. Here the algorithm quickly establishes good LP bounds for the real-life instances, and thus the branch-and-cut trees remain small. The LP bounds for the random instances are worse, in particular when the number of units gets larger. In this case, the further optimisations in the commercial software make a big difference in the size of the branch-and-cut tree and thus in the solution time. An example output for cabinet1 with M.NOCB is shown in Fig. 3; we have modified referring expressions to make the text more readable, and have marked discourse unit boundaries with "/" and expressions that establish local coherence with square brackets. This is one of many possible optimal solutions, which have cost 2 because of the two NOCB transitions at the very start of the discourse. Details on the comparison of different centering-based coherence measures are discussed by Karamanis et al. (2004).

Both cabinets probably entered England in the early nineteenth century / after the French Revolution caused the dispersal of so many French collections. / The pair to [this monumental cabinet] still exists in Scotland. / The fleurs-de-lis on the top two drawers indicate that [the cabinet] was made for the French King Louis XIV. / [It] may have served as a royal gift, / as [it] does not appear in inventories of [his] possessions. / Another medallion inside shows [him] a few years later. / The bronze medallion above [the central door] was cast from a medal struck in 1661 which shows [the king] at the age of twenty-one. / A panel of marquetry showing the cockerel of [France] standing triumphant over both the eagle of the Holy Roman Empire and the lion of Spain and the Spanish Netherlands decorates [the central door]. / In [the Dutch Wars] of 1672 - 1678, [France] fought simultaneously against the Dutch, Spanish, and Imperial armies, defeating them all. / [The cabinet] celebrates the Treaty of Nijmegen, which concluded [the war]. / The Sun King's portrait appears twice on [this work]. / Two large figures from Greek mythology, Hercules and Hippolyta, Queen of the Amazons, representatives of strength and bravery in war appear to support [the cabinet]. / The decoration on [the cabinet] refers to [Louis XIV's] military victories. / On the drawer above the door, gilt-bronze military trophies flank a medallion portrait of [the king].

Figure 3: An example output based on M.NOCB.
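For reference, the random test instances used above are straightforward to reproduce. The sketch below is our own code (written in the cI/cT form of Definition 1 rather than as a complete graph with a start node, and with arbitrary names); it also includes the naive all-permutations baseline mentioned in the text, which is only usable for very small n.

```python
import random
from itertools import permutations, product

def random_instance(n, d, seed=0):
    """Random d-place instance: transition costs for every d-tuple and
    initial costs for every (d-1)-tuple, drawn from {0, ..., 999}."""
    rng = random.Random(seed)
    units = list(range(n))
    trans = {t: rng.randint(0, 999) for t in product(units, repeat=d)}
    init = {t: rng.randint(0, 999) for t in product(units, repeat=d - 1)}
    return units, init, trans

def brute_force(units, init, trans, d):
    """Naive baseline: evaluate all n! orderings and keep the cheapest."""
    best_cost, best_order = None, None
    for order in permutations(units):
        cost = init[order[:d - 1]]
        cost += sum(trans[order[i:i + d]] for i in range(len(order) - d + 1))
        if best_cost is None or cost < best_cost:
            best_cost, best_order = cost, order
    return best_order, best_cost
```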
6 Comparison to Other Approaches There are two approaches in the literature that are similar enough to ours that a closer comparison is in order. The first is a family of algorithms for discourse ordering based on genetic programming (Mellish et al., 1998; Karamanis and Manurung, 2002). This is a very flexible and powerful approach, which can be applied to measures of local coherence that do not seem to fit in our framework trivially. For example, the measure from (Mellish et al., 1998) looks at the entire discourse up to the current transition for some of their cost factors. However, our algorithm is several orders of magnitude faster where a direct comparison is possible (Manurung, p.c.), and it is guaranteed to find an optimal ordering. The nonapproximability result for TSP means that a genetic (or any other) algorithm which is restricted to polynomial runtime could theoretically deliver arbitrarily bad solutions. Second, the discourse ordering problem we have discussed in this paper looks very similar to the Majority Ordering problem that arises in the context of multi-document summarisation (Barzilay et al., 2002). The difference between the two problems is that Barzilay et al. minimise the sum of all costs Cij for any pair i, j of discourse units with i < j, whereas we only sum over the Cij for i = j − 1. This makes their problem amenable to the approximation algorithm by Cohen et al. (1999), which allows them to compute a solution that is at least half as good as the optimum, in polynomial time; i.e. this problem is strictly easier than TSP or discourse ordering. However, a Majority Ordering algorithm is not guaranteed to compute good solutions to the discourse ordering problem, as Lapata (2003) assumes. 7 Conclusion We have shown that the problem of ordering clauses into a discourse that maximises local coherence is equivalent to the travelling salesman problem: Even the two-place discourse ordering problem can encode ATSP.
This means that the problem is NPcomplete and doesn’t even admit polynomial approximation algorithms (unless P=NP). On the other hand, we have shown how to encode the discourse ordering problems of arbitrary arity d into GATSP. We have demonstrated that modern branch-and-cut algorithms for GATSP can easily solve practical discourse ordering problems if d = 2, and are still usable for many instances with d = 3. As far as we are aware, this is the first algorithm for discourse ordering that can make any guarantees about the solution it computes. Our efficient implementation can benefit generation and summarisation research in at least two respects. First, we show that computing locally coherent orderings of clauses is feasible in practice, as such coherence measures will probably be applied on sentences within the same paragraph, i.e. on problem instances of limited size. Second, our system should be a useful experimentation tool in developing new measures of local coherence. We have focused on local coherence in this paper, but it seems clear that notions of global coherence, which go beyond the level of sentence-to-sentence transitions, capture important aspects of coherence that a purely local model cannot. However, our algorithm can still be useful as a subroutine in a more complex system that deals with global coherence (Marcu, 1997; Mellish et al., 1998). Whether our methods can be directly applied to the tree structures that come up in theories of global coherence is an interesting question for future research. Acknowledgments. We would like to thank Mirella Lapata for providing the experimental data and Andrea Lodi for providing an efficiency baseline by running his ATSP solver on our inputs. We are grateful to Malte Gabsdil, Ruli Manurung, Chris Mellish, Kristina Striegnitz, and our reviewers for helpful comments and discussions. References R. Barzilay, N. Elhadad, and K. R. McKeown. 2002. Inferring strategies for sentence ordering in multidocument news summarization. Journal of Artificial Intelligence Research, 17:35–55. S. Brennan, M. Walker Friedman, and C. Pollard. 1987. A centering approach to pronouns. In Proc. 25th ACL, pages 155–162, Stanford. W. Cohen, R. Schapire, and Y. Singer. 1999. Learning to order things. Journal of Artificial Intelligence Research, 10:243–270. T. H. Cormen, C. E. Leiserson, and R. L. Rivest. 1990. Introduction to Algorithms. MIT Press, Cambridge. M. Fischetti, A. Lodi, and P. Toth. 2001. Solving real-world ATSP instances by branch-andcut. Combinatorial Optimization. M. Fischetti, A. Lodi, and P. Toth. 2002. Exact methods for the asymmmetric traveling salesman problem. In G. Gutin and A. Punnen, editors, The Traveling Salesman Problem and its Variations. Kluwer. N. Karamanis and H. M. Manurung. 2002. Stochastic text structuring using the principle of continuity. In Proceedings of INLG-02, pages 81–88, New York. N. Karamanis, M. Poesio, C. Mellish, and J. Oberlander. 2004. Evaluating centering-based metrics of coherence for text structuring using a reliably annotated corpus. In Proceedings of the 42nd ACL, Barcelona. N. Karamanis. 2003. Entity Coherence for Descriptive Text Structuring. Ph.D. thesis, Division of Informatics, University of Edinburgh. R. Kibble and R. Power. 2000. An integrated framework for text planning and pronominalisation. In Proc. INLG 2000, pages 77–84, Mitzpe Ramon. M. Lapata. 2003. Probabilistic text structuring: Experiments with sentence ordering. In Proc. 41st ACL, pages 545–552, Sapporo, Japan. G. Laporte, H. Mercure, and Y. 
Nobert. 1987. Generalized travelling salesman problem through n sets of nodes: the asymmetrical case. Discrete Applied Mathematics, 18:185–197. W. Mann and S. Thompson. 1988. Rhetorical structure theory: A theory of text organization. Text, 8(3):243–281. D. Marcu. 1997. From local to global coherence: A bottom-up approach to text planning. In Proceedings of the 14th AAAI, pages 629–635. C. Mellish, A. Knott, J. Oberlander, and M. O’Donnell. 1998. Experiments using stochastic search for text planning. In Proc. 9th INLG, pages 98–107, Niagara-on-the-Lake. G.L. Nemhauser and L.A. Wolsey. 1988. Integer and Combinatorial Optimization. John Wiley & Sons. C.E. Noon and J.C. Bean. 1993. An efficient transformation of the generalized traveling salesman problem. Information Systems and Operational Research, 31(1). M. Strube and U. Hahn. 1999. Functional centering: Grounding referential coherence in information structure. Computational Linguistics, 25(3). M. Walker, A. Joshi, and E. Prince. 1998. Centering in naturally occuring discourse: An overview. In M. Walker, A. Joshi, and E. Prince, editors, Centering Theory in Discourse, pages 1–30. Clarendon Press, Oxford. B. Webber, A. Knott, M. Stone, and A. Joshi. 1999. What are little trees made of: A structural and presuppositional account using Lexicalized TAG. In Proc. 36th ACL, pages 151–156, College Park.
Generating Referring Expressions in Open Domains Advaith Siddharthan (Computer Science Department, Columbia University, [email protected]) and Ann Copestake (Computer Laboratory, University of Cambridge, [email protected]) Abstract We present an algorithm for generating referring expressions in open domains. Existing algorithms work at the semantic level and assume the availability of a classification for attributes, which is only feasible for restricted domains. Our alternative works at the realisation level, relies on WordNet synonym and antonym sets, and gives equivalent results on the examples cited in the literature and improved results for examples that prior approaches cannot handle. We believe that ours is also the first algorithm that allows for the incremental incorporation of relations. We present a novel corpus evaluation using referring expressions from the Penn Wall Street Journal Treebank. 1 Introduction Referring expression generation has historically been treated as a part of the wider issue of generating text from an underlying semantic representation. The task has therefore traditionally been approached at the semantic level. Entities in the real world are logically represented; for example (ignoring quantifiers), a big brown dog might be represented as big1(x) ∧ brown1(x) ∧ dog1(x), where the predicates big1, brown1 and dog1 represent different attributes of the variable (entity) x. The task of referring expression generation has traditionally been framed as the identification of the shortest logical description for the referent entity that differentiates it from all other entities in the discourse domain. For example, if there were a small brown dog (small1(x) ∧ brown1(x) ∧ dog1(x)) in context, the minimal description for the big brown dog would be big1(x) ∧ dog1(x).¹ This semantic framework makes it difficult to apply existing referring expression generation algorithms to the many regeneration tasks that are important today; for example, summarisation, open-ended question answering and text simplification. Unlike in traditional generation, the starting point in these tasks is unrestricted text, rather than a semantic representation of a small domain. [Footnote 1: The predicate dog1 is selected because it has a distinguished status, referred to as type in Reiter and Dale (1992). One such predicate has to be present in the description.] It is difficult to extract the required semantics from unrestricted text (this task would require sense disambiguation, among other issues) and even harder to construct a classification for the extracted predicates in the manner that existing approaches require (cf., §2). In this paper, we present an algorithm for generating referring expressions in open domains. We discuss the literature and detail the problems in applying existing approaches to reference generation to open domains in §2. We then present our approach in §3, contrasting it with existing approaches. We extend our approach to handle relations in §3.3 and present a novel corpus-based evaluation on the Penn WSJ Treebank in §4. 2 Overview of Prior Approaches The incremental algorithm (Reiter and Dale, 1992) is the most widely discussed attribute selection algorithm. It takes as input the intended referent and a contrast set of distractors (other entities that could be confused with the intended referent). Entities are represented as attribute value matrices (AVMs).
The algorithm also takes as input a *preferred-attributes* list that contains, in order of preference, the attributes that human writers use to reference objects. For example, the preference might be {colour, size, shape...}. The algorithm then repeatedly selects attributes from *preferredattributes* that rule out at least one entity in the contrast set until all distractors have been ruled out. It is instructive to look at how the incremental algorithm works. Consider an example where a large brown dog needs to be referred to. The contrast set contains a large black dog. These are represented by the AVMs shown below.   type dog size large colour brown     type dog size large colour black   Assuming that the *preferred-attributes* list is [size, colour, ...], the algorithm would first compare the values of the size attribute (both large), disregard that attribute as not being discriminating, compare the values of the colour attribute and return the brown dog. Subsequent work on referring expression generation has expanded the logical framework to allow reference by negation (the dog that is not black) and references to multiple entities (the brown or black dogs) (van Deemter, 2002), explored different search algorithms for finding the minimal description (e.g., Horacek (2003)) and offered different representation frameworks like graph theory (Krahmer et al., 2003) as alternatives to AVMs. However, all these approaches are based on very similar formalisations of the problem, and all make the following assumptions: 1. A semantic representation exists. 2. A classification scheme for attributes exists. 3. The linguistic realisations are unambiguous. 4. Attributes cannot be reference modifying. All these assumptions are violated when we move from generation in a very restricted domain to regeneration in an open domain. In regeneration tasks such as summarisation, open-ended question answering and text simplification, AVMs for entities are typically constructed from noun phrases, with the head noun as the type and pre-modifiers as attributes. Converting words into semantic labels would involve sense disambiguation, adding to the cost and complexity of the analysis module. Also, attribute classification is a hard problem and there is no existing classification scheme that can be used for open domains like newswire; for example, WordNet (Miller et al., 1993) organises adjectives as concepts that are related by the non-hierarchical relations of synonymy and antonymy (unlike nouns that are related through hierarchical links such as hyponymy, hypernymy and metonymy). In addition, selecting attributes at the semantic level is risky because their linguistic realisation might be ambiguous and many common adjectives are polysemous (cf., example 1 in §3.1). Reference modification, which has not been considered in the referring expression generation literature, raises further issues; for example, referring to an alleged murderer as the murderer is potentially libellous. In addition to the above, there is the issue of overlap between values of attributes. The case of subsumption (for example, that the colour red subsumes crimson and the type dog subsumes chihuahua) has received formal treatment in the literature; Dale and Reiter (1995) provide a find-bestvalue function that evaluates tree-like hierarchies of values. As mentioned earlier, such hierarchical knowledge bases do not exist for open domains. 
Further, a treatment of subsumption is insufficient, and degrees of intersection between attribute values also require consideration. van Deemter (2000) discusses the generation of vague descriptions when entities have gradable attributes like size; for example, in a domain with four mice sized 2, 5, 7 and 10cm, it is possible to refer to the large mouse (the mouse sized 10cm) or the two small mice (the mice sized 2 and 5cm). However, when applying referring expression generation to regeneration tasks where the representation of entities is derived from text rather than a knowledge base, we have to consider the case where the grading of attributes is not explicit. For example, we might need to compare the attribute dark with black, light or white. In contrast to previous approaches, our algorithm works at the level of words, not semantic labels, and measures the relatedness of adjectives (lexicalised attributes) using the lexical knowledge base WordNet rather than a semantic classification. Our approach also addresses the issue of comparing intersective attributes that are not explicitly graded, by making novel use of the synonymy and antonymy links in WordNet. Further, it treats discriminating power as only one criterion for selecting attributes and allows for the easy incorporation of other considerations such as reference modification (§5). 3 The Lexicalised Approach 3.1 Quantifying Discriminating Power We define the following three quotients. Similarity Quotient (SQ) We define similarity as transitive synonymy. The idea is that if X is a synonym of Y and Y is a synonym of Z, then X is likely to be similar to Z. The degree of similarity between two adjectives depends on how many steps must be made through WordNet synonymy lists to get from one to the other. Suppose we need to find a referring expression for e0. For each adjective aj describing e0, we calculate a similarity quotient SQj by initialising it to 0, forming a set of WordNet synonyms S1 of aj, forming a synonymy set S2 containing all the WordNet synonyms of all the adjectives in S1 and forming S3 from S2 similarly. Now for each adjective describing any distractor, we increment SQj by 4 if it is present in S1, by 2 if it is present in S2, and by 1 if it is present in S3. SQj now measures how similar aj is to other adjectives describing distractors. Contrastive Quotient (CQ) Similarly, we define contrastive in terms of antonymy relationships. We form the set C1 of strict WordNet antonyms of aj. The set C2 consists of strict WordNet antonyms of members of S1 and WordNet synonyms of members of C1. C3 is similarly constructed from S2 and C2. We now initialise CQj to zero and for each adjective describing each distractor, we add w ∈ {4, 2, 1} to CQj, depending on whether it is a member of C1, C2 or C3. CQj now measures how contrasting aj is to other adjectives describing distractors. Discriminating Quotient (DQ) An attribute that has a high value of SQ has bad discriminating power. An attribute that has a high value of CQ has good discriminating power. We can now define the Discriminating Quotient (DQ) as DQ = CQ − SQ. We now have an order (decreasing DQs) in which to incorporate attributes. This constitutes our *preferred* list.
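A minimal sketch of these three quotients is given below, assuming NLTK's WordNet interface as the lexical resource; the paper does not commit to a particular toolkit, and details such as sense filtering and multi-word lemmas are glossed over here. The function names are ours.

```python
from nltk.corpus import wordnet as wn

def adj_synonyms(words):
    """All WordNet synonyms (lemma names) of any adjective in words."""
    out = set()
    for w in words:
        for syn in wn.synsets(w, pos=wn.ADJ):
            out.update(l.name() for l in syn.lemmas())
    return out

def adj_antonyms(words):
    """All strict WordNet antonyms of any adjective in words."""
    out = set()
    for w in words:
        for syn in wn.synsets(w, pos=wn.ADJ):
            for lemma in syn.lemmas():
                out.update(a.name() for a in lemma.antonyms())
    return out

def quotients(attribute, distractor_attributes):
    """SQ, CQ and DQ for one adjective of the target entity.

    distractor_attributes: list of adjective lists, one per distractor."""
    s1 = adj_synonyms([attribute])
    s2 = adj_synonyms(s1)
    s3 = adj_synonyms(s2)
    c1 = adj_antonyms([attribute])
    c2 = adj_antonyms(s1) | adj_synonyms(c1)
    c3 = adj_antonyms(s2) | adj_synonyms(c2)
    sq = cq = 0
    for attrs in distractor_attributes:
        for a in attrs:
            sq += 4 if a in s1 else 2 if a in s2 else 1 if a in s3 else 0
            cq += 4 if a in c1 else 2 if a in c2 else 1 if a in c3 else 0
    return sq, cq, cq - sq
```

For instance, quotients('current', [['young', 'past']]) scores the adjective current against a single distractor described as young and past, mirroring Example 1 below; the exact numbers depend on the WordNet version used.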
We illustrate the benefits of our approach with two examples. Example 1: The Importance of Lexicalisation Previous referring expression generation algorithms ignore the issue of realising the logical description for the referent. The semantic labels are chosen such that they have a direct correspondence with their linguistic realisation and the realisation is thus considered trivial. Ambiguity and syntactically optional arguments are ignored. To illustrate one problem this causes, consider the two entities below:

e1 = [type: president, age: old, tenure: current]
e2 = [type: president, age: young, tenure: past]

If we followed the strict typing system used by previous algorithms, with *preferred*={age, tenure}, to refer to e1 we would compare the age attributes and rule out e2 and generate the old president. This expression is ambiguous since old can also mean previous. Models that select attributes at the semantic level will run into trouble when their linguistic realisations are ambiguous. In contrast, our algorithm, given flattened attribute lists:

e1 = [head: president, attrib: [old, current]]
e2 = [head: president, attrib: [young, past]]

successfully picks the current president as current has a higher DQ (2) than old (0):

attribute   distractor          CQ   SQ   DQ
old         e2 {young, past}     4    4    0
current     e2 {young, past}     2    0    2

In this example, old is a WordNet antonym of young and a WordNet synonym of past. Current is a WordNet synonym of present, which is a WordNet antonym of past. Note that WordNet synonym and antonym links capture the implicit gradation in the lexicalised values of the age and tenure attributes. Example 2: Naive Incrementality To illustrate another problem with the original incremental algorithm, consider three dogs: e1 (a big black dog), e2 (a small black dog) and e3 (a tiny white dog). Consider using the original incremental algorithm to refer to e1 with *preferred*={colour, size}. The colour attribute black rules out e3. We then have to select the size attribute big as well to rule out e2, thus generating the sub-optimal expression the big black dog. Here, the use of a predetermined *preferred* list fails to capture what is obvious from the context: that e1 stands out not because it is black, but because it is big. In our approach, for each of e1's attributes, we calculate DQ with respect to e2 and e3:

attribute   distractor           CQ   SQ   DQ
big         e2 {small, black}     4    0    4
big         e3 {tiny, white}      2    0    2
black       e2 {small, black}     1    4   -3
black       e3 {tiny, white}      2    1    1

Overall, big has a higher discriminating power (6) than black (-2) and rules out both e2 and e3. We therefore generate the big dog. Our incremental approach thus manages to select the attribute that stands out in context. This is because we construct the *preferred* list after observing the context. We discuss this issue further in the next section. Note again that WordNet antonym and synonym links capture the gradation in the lexicalised size and colour attributes. However, this only works where the gradation is along one axis; in particular, this approach will not work for colours in general, and cannot be used to deduce the relative similarity between yellow and orange as compared to, say, yellow and blue. 3.2 Justifying our Algorithm The psycholinguistic justification for the incremental algorithm (IA) hinges on two premises: 1. Humans build referring expressions incrementally. 2. There is a preferred order in which humans select attributes (e.g., colour>shape>size...). Our algorithm is also incremental. However, it departs significantly from premise 2. We assume that speakers pick out attributes that are distinctive in context (cf., example 2, previous section).
Averaged over contexts, some attributes have more discriminating power than others (largely because of the way we visualise entities) and premise 2 is an approximation to our approach. We now quantify the extra effort we are making to identify attributes that "stand out" in a given context. Let N be the maximum number of entities in the contrast set and n be the maximum number of attributes per entity. The table below compares the computational complexity of an optimal algorithm (such as Reiter (1990)), our algorithm and the IA.

Incremental Algo   Our Algorithm   Optimal Algo
O(nN)              O(n^2 N)        O(n 2^N)

Both the IA and our algorithm are linear in the number of entities N. This is because neither algorithm allows backtracking; an attribute, once selected, cannot be discarded. In contrast, an optimal search requires O(2^N) comparisons. As our algorithm compares each attribute of the discourse referent to every attribute of every distractor, it is quadratic in n. The IA compares each attribute of the discourse referent to only one attribute per distractor and is linear in n. Note, however, that values for n of over 4 are rare. 3.3 Relations Semantically, attributes describe an entity (e.g., the small grey dog) and relations relate an entity to other entities (e.g., the dog in the bin). Relations are troublesome because in relating an entity eo to e1, we need to recursively generate a referring expression for e1. The IA does not consider relations and the referring expression is constructed out of attributes alone. The Dale and Haddock (1991) algorithm allows for relational descriptions but involves exponential global search, or a greedy search approximation. To incorporate relational descriptions in the incremental framework would require a classification system which somehow takes into account the relations themselves and the secondary entities e1 etc. This again suggests that the existing algorithms force the incrementality at the wrong stage in the generation process. Our approach computes the order in which attributes are incorporated after observing the context, by quantifying their utility through the quotient DQ. This makes it easy for us to extend our algorithm to handle relations, because we can compute DQ for relations in much the same way as we did for attributes. We illustrate this for prepositions. 3.4 Calculating DQ for Relations Suppose the referent entity eref contains a relation [prepo eo] that we need to calculate the three quotients for (cf., figure 1 for representation of relations in AVMs). We consider each entity ei in the contrast set for eref in turn. If ei does not have a prepo relation then the relation is useful and we increment CQ by 4. If ei has a prepo relation then two cases arise. If the object of ei's prepo relation is eo then we increment SQ by 4. If it is not eo, the relation is useful and we increment CQ by 4. This is an efficient non-recursive way of computing the quotients CQ and SQ for relations. We now discuss how to calculate DQ. For attributes, we defined DQ = CQ − SQ. However, as the linguistic realisation of a relation is a phrase and not a word, we would like to normalise the discriminating power of a relation with the length of its linguistic realisation. Calculating the length involves recursively generating referring expressions for the object of the preposition, an expensive task that we want to avoid unless we are actually using that relation in the final referring expression. We therefore initially approximate the length as follows. The realisation of a relation [prepo eo] consists of prepo, a determiner and the referring expression for eo. If none of eref's distractors have a prepo relation then we only require the head noun of eo in the referring expression and length = 3. In this case, the relation is sufficient to identify both entities; for example, even if there were multiple bins in figure 1, as long as only one dog is in a bin, the reference the dog in the bin succeeds in uniquely referencing both the dog and the bin. If n distractors of eref contain a prepo relation with a non-eo object that is a distractor for eo, we set length = 3 + n. This is an estimate for the word length of the realised relation that assumes one extra attribute for distinguishing eo from each distractor. Normalisation by estimated length is vital; if eo requires a long description, the relation's DQ should be small so that shorter possibilities are considered first in the incremental process. The formula for DQ for relations is therefore DQ = (CQ − SQ)/length. This approach can also be extended to allow for relations such as comparatives which have syntactically optional arguments (e.g., the earlier flight vs the flight earlier than UA941) which are not allowed for by approaches which ignore realisation.
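Continuing the sketch started after Section 3.1, the relation case can be written as follows. This is our own illustrative code; representing an entity as a dictionary with a 'relations' map from preposition to object, and treating every non-eo object as a potential distractor for eo, are simplifications not prescribed by the paper.

```python
def relation_dq(prep, obj, distractors):
    """CQ, SQ and length-normalised DQ for a relation [prep obj] of the
    target entity, following the increments described in Section 3.4.

    distractors: list of entities; each entity is a dict whose
    'relations' field maps a preposition to an object entity."""
    cq = sq = 0
    n_competing = 0      # distractors with a prep relation to some other object
    for e in distractors:
        other = e.get("relations", {}).get(prep)
        if other is None:
            cq += 4                  # distractor has no such relation
        elif other is obj:           # same relation to the same object
            sq += 4
        else:                        # same relation, different object
            cq += 4
            n_competing += 1
    # Length estimate: preposition + determiner + head noun of obj, plus one
    # extra attribute for each competing object that obj must be told apart from.
    length = 3 + n_competing
    return cq, sq, (cq - sq) / length
```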
3.5 The Lexicalised Context-Sensitive IA Our lexicalised context-sensitive incremental algorithm (below) generates a referring expression for Entity. As it recurses, it keeps track of entities it has used up in order to avoid entering loops like the dog in the bin containing the dog in the bin... To generate a referring expression for an entity, the algorithm calculates the DQs for all its attributes and approximates the DQs for all its relations (2). It then forms the *preferred* list (3) and constructs the referring expression by adding elements of *preferred* till the contrast set is empty (4). This is straightforward for attributes (5). For relations (6), it needs to recursively generate the prepositional phrase first. It checks that it hasn't entered a loop (6a), generates a new contrast set for the object of the relation (6(a)i), recursively generates a referring expression for the object of the preposition (6(a)ii), recalculates DQ (6(a)iii) and either incorporates the relation in the referring expression or shifts the relation down the *preferred* list (6(a)iv). This step ensures that an initial mis-estimation in the word length of a relation doesn't force its inclusion at the expense of shorter possibilities. If, after incorporating all attributes and relations, the contrast set is still nonempty, the algorithm returns the best expression it can find (7).

set generate-ref-exp(Entity, ContrastSet, UsedEntities)
1. IF ContrastSet = [] THEN RETURN {Entity.head}
2. Calculate CQ, SQ and DQ for each attribute and relation of Entity (as in Sec 3.1 and 3.4)
3. Let *preferred* be the list of attributes/relations sorted in decreasing order of DQs.
   FOR each element (Mod) of *preferred* DO steps 4, 5 and 6
4. IF ContrastSet = [] THEN RETURN RefExp ∪ {Entity.head}
5. IF Mod is an Attribute THEN
   (a) LET RefExp = {Mod} ∪ RefExp
   (b) Remove from ContrastSet any entities Mod rules out
6. IF Mod is a Relation [prepi ei] THEN
   (a) IF ei ∈ UsedEntities THEN
         i. Set DQ = −∞
        ii. Move Mod to the end of *preferred*
       ELSE
         i. LET ContrastSet2 be the set of non-ei entities that are the objects of prepi relations in members of ContrastSet
        ii. LET RE = generate-ref-exp(ei, ContrastSet2, {ei} ∪ UsedEntities)
       iii. recalculate DQ using length = 2 + length(RE)
        iv. IF position in *preferred* is lowered THEN re-sort *preferred*
            ELSE
            (α) SET RefExp = RefExp ∪ {[prepi | determiner | RE]}
            (β) Remove from ContrastSet any entities that Mod rules out
7. RETURN RefExp ∪ {Entity.head}

An Example Trace: We now trace the algorithm above as it generates a referring expression for d1 in figure 1.

d1 = [head: dog, attrib: [small, grey], in: b1, near: d2]
d2 = [head: dog, attrib: [small, grey], outside: b1, near: d1]
b1 = [head: bin, attrib: [large, steel], containing: d1, near: d2]

Figure 1: AVMs for two dogs and a bin

call generate-ref-exp(d1, [d2], [])
• step 1: ContrastSet is not empty
• step 2: DQ_small = −4, DQ_grey = −4, DQ_[in b1] = 4/3, DQ_[near d2] = 4/4
• step 3: *preferred* = [[in b1], [near d2], small, grey]
• Iteration 1 — mod = [in b1]
  – step 6(a)i: ContrastSet2 = []
  – step 6(a)ii: call generate-ref-exp(b1, [], [d1])
      ∗ step 1: ContrastSet = [], return {bin}
  – step 6(a)iii: DQ_[in b1] = 4/3
  – step 6(a)ivα: RefExp = {[in, the, {bin}]}
  – step 6(a)ivβ: ContrastSet = []
• Iteration 2 — mod = [near d2]
  – step 4: ContrastSet = [], return {[in the {bin}], dog}

The algorithm presented above is designed to return the shortest referring expression that uniquely identifies an entity. If the scene in figure 1 were cluttered with bins, the algorithm would still refer to d1 as the dog in the bin as there is only one dog that is in a bin. The user gets no help in locating the bin. If helping the user locate entities is important to the discourse plan, we need to change step 6(a)(ELSE)i so that the contrast set includes all bins in context, not just bins that are objects of in relations of distractors of d1. 3.6 Compound Nominals Our analysis so far has assumed that attributes are adjectives. However, many nominals introduced through relations can also be introduced in compound nominals, for example:
1. a church in Paris ↔ a Paris church
2. a novel by Archer ↔ an Archer novel
3. a company from London ↔ a London company
This is an important issue for regeneration applications, where the AVMs for entities are constructed from text rather than a semantic knowledge base (which could be constructed such that such cases are stored in relational form, though possibly with an underspecified relation). We need to augment our algorithm so that it can compare AVMs like [head: church, in: [head: Paris]] and [head: church, attrib: [Paris]]. Formally, the algorithm for calculating SQ and CQ for a nominal attribute anom of entity eo is:

FOR each distractor ei of eo DO
1. IF anom is similar to any nominal attribute of ei THEN SQ = SQ + 4
2. IF anom is similar to the head noun of the object of any relation of ei THEN
   (a) SQ = SQ + 4
   (b) flatten that relation for ei, i.e., add the attributes of the object of the relation to the attribute list for ei

In step 2, we compare a nominal attribute anom of eo to the head noun of the object of a relation of ei. If they are similar, it is likely that any attributes of that object might help distinguish eo from ei. We then add those attributes to the attribute list of ei. Now, if SQ is non-zero, the nominal attribute anom has bad discriminating power and we set DQ = −SQ. If SQ = 0, then anom has good discriminating power and we set DQ = 4. We also extend the algorithm for calculating DQ for a relation [prepj ej] of eo as follows:

1. IF any distractor ei has a nominal attribute anom THEN
   (a) IF anom is similar to the head of ej THEN
       i. Add all attributes of eo to the attribute list and calculate their DQs
2.
calculate DQ for the relation as in section 3.4 We can demonstrate how this approach works using entities extracted from the following sentence (from the Wall Street Journal): Also contributing to the firmness in copper, the analyst noted, was a report by Chicago purchasing agents, which precedes the full purchasing agents report that is due out today and gives an indication of what the full report might hold. Consider generating a referring expression for eo when the distractor is e1: eo =   head report by   head agents attrib [Chicago, purchasing]     e1 =  head report attributes [full, purchasing, agents]  The distractor the full purchasing agents report contains the nominal attribute agents. To compare report by Chicago purchasing agents with full purchasing agents report, our algorithm flattens the former to Chicago purchasing agents report. Our algorithm now gives: DQagents = −4, DQpurchasing = −4, DQChicago = 4, DQby Chicago purchasing agents = 4/4 We thus generate the referring expression the Chicago report. This approach takes advantage of the flexibility of the relationships that can hold between nouns in a compound: although examples can be devised where removing a nominal causes ungrammaticality, it works well enough empirically. To generate a referring expression for e1 (full purchasing agents report) when the distractor is eo(report by Chicago purchasing agents), our algorithm again flattens eo to obtain: DQagents = −4, DQpurchasing = −4 DQfull = 4 The generated referring expression is the full report. This is identical to the referring expression used in the original text. 4 Evaluation As our algorithm works in open domains, we were able to perform a corpus-based evaluation using the Penn WSJ Treebank (Marcus et al., 1993). Our evaluation aimed to reproduce existing referring expressions (NPs with a definite determiner) in the Penn Treebank by providing our algorithm as input: 1. The first mention NP for that reference. 2. The contrast set of distractor NPs For each referring expression (NP with a definite determiner) in the Penn Treebank, we automatically identified its first mention and all its distractors in a four sentence window, as described in §4.1. We then used our program to generate a referring expression for the first mention NP, giving it a contrastset containing the distractor NPs. Our evaluation compared this generated description with the original WSJ reference that we had started out with. Our algorithm was developed using toy examples and counter-examples constructed by hand, and the Penn Treebank was unseen data for this evaluation. 4.1 Identifying Antecedents and Distractors For every definite noun phrase NPo in the Penn Treebank, we shortlisted all the noun phrases NPi in a discourse window of four sentences (the two preceding sentences, current sentence and the following sentence) that had a head noun identical to or a WordNet synonym of the head noun of NPo. We compared the set of attributes and relations for each shortlisted NPi that preceded NPo in the discourse window with that of NPo. If the attributes and relations set of NPi was a superset of that of NPo, we assumed that NPo referred to NPi and added NPi to an antecedent set. We added all other NPi to the contrast set of distractors. Similarly, we excluded any noun phrase NPi that appeared in the discourse after NPo whose attributes and relations set was a subset of NPo’s and added the remaining NPi to the contrast set. 
We then selected the longest noun phrase in the antecedent set to be the antecedent that we would try and generate a referring expression from. The table below gives some examples of distractors that our program found using WordNet synonyms to compare head nouns: Entity Distractors first half-free Soviet vote fair elections in the GDR military construction bill fiscal measure steep fall in currency drop in market stock permanent insurance death benefit coverage 4.2 Results There were 146 instances of definite descriptions in the WSJ where the following conditions (that ensure that the referring expression generation task is nontrivial) were satisfied: 1. The definite NP (referring expression) contained at least one attribute or relation. 2. An antecedent was found for the definite NP. 3. There was at least one distractor NP in the discourse window. In 81.5% of these cases, our program returned a referring expression that was identical to the one used in the WSJ. This is a surprisingly high accuracy, considering that there is a fair amount of variability in the way human writers use referring expressions. For comparison, the baseline of reproducing the antecedent NP performed at 48%2. Some errors were due to non-recognition of multiword expessions in the antecedent (for example, our program generated care product from personal care product). In many of the remaining error cases, it was difficult to decide whether what our program generated was acceptable or wrong. For example, the WSJ contained the referring expression the one-day limit, where the automatically detected antecedent was the maximum one-day limit for the 2We are only evaluating content selection (the nouns and pre- and post-modifiers) and ignore determiner choice. S&P 500 stock-index futures contract and the automatically detected contrast set was: {the five-point opening limit for the contract, the 12-point limit, the 30-point limit, the intermediate limit of 20 points} Our program generated the maximum limit, where the WSJ writer preferred the one-day limit. 5 Further Issues 5.1 Reference Modifying Attributes The analysis thus far has assumed that all attributes modify the referent rather than the reference to the referent. However, for example, if e1 is an alleged murderer, the attribute alleged modifies the reference murderer rather than the referent e1 and referring to e1 as the murderer would be factually incorrect. Logically e1 could be represented as (alleged1(murderer1))(x), rather than alleged1(x) ∧murderer1(x). This is no longer first-order, and presents new difficulties for the traditional formalisation of the reference generation problem. One (inelegant) solution would be to introduce a new predicate allegedMurderer1(x). A working approach in our framework would be to add a large positive weight to the DQs of reference modifying attributes, thus forcing them to be selected in the referring expression. 5.2 Discourse Context and Salience The incremental algorithm assumes the availability of a contrast set and does not provide an algorithm for constructing and updating it. The contrast set, in general, needs to take context into account. Krahmer and Theune (2002) propose an extension to the IA which treats the context set as a combination of a discourse domain and a salience function. The black dog would then refer to the most salient entity in the discourse domain that is both black and a dog. Incorporating salience into our algorithm is straightforward. 
As described earlier, we compute the quotients SQ and CQ for each attribute or relation by adding an amount w ∈{4, 2, 1} to the relevant quotient based on a comparison with the attributes and relations of each distractor. We can incorporate salience by weighting w with the salience of the distractor whose attribute or relation we are considering. This will result in attributes and relations with high discriminating power with regard to more salient distractors getting selected first in the incremental process. 5.3 Discourse Plans In many situations, attributes and relations serve different discourse functions. For example, attributes might be used to help the hearer identify an entity while relations might serve to help locate the entity. This needs to be taken into account when generating a referring expression. If we were generating instructions for using a machine, we might want to include both attributes and relations; so to instruct the user to switch on the power, we might say switch on the red button on the top-left corner. This would help the user locate the switch (on the top-left corner) and identify it (red). If we were helping a chef find the salt in a kitchen, we might want to use only relations because the chef knows what salt looks like. The salt behind the corn flakes on the shelf above the fridge is in this context preferable to the white powder. If the discourse plan that controls generation requires our algorithm to preferentially select relations or attributes, it can add a positive amount α to their DQs. Then, the resultant formula is DQ = (CQ −SQ)/length + α, where length = 1 for attributes and by default α = 0 for both relations and attributes. 6 Conclusions and Future Work We have described an algorithm for generating referring expressions that can be used in any domain. Our algorithm selects attributes and relations that are distinctive in context. It does not rely on the availability of an adjective classification scheme and uses WordNet antonym and synonym lists instead. It is also, as far as we know, the first algorithm that allows for the incremental incorporation of relations and the first that handles nominals. In a novel evaluation, our algorithm successfully generates identical referring expressions to those in the Penn WSJ Treebank in over 80% of cases. In future work, we plan to use this algorithm as part of a system for generation from a database of user opinions on products which has been automatically extracted from newsgroups and similar text. This is midway between regeneration and the classical task of generating from a knowledge base because, while the database itself provides structure, many of the field values are strings corresponding to phrases used in the original text. Thus, our lexicalised approach is directly applicable to this task. 7 Acknowledgements Thanks are due to Kees van Deemter and three anonymous ACL reviewers for useful feedback on prior versions of this paper. This document was generated partly in the context of the Deep Thought project, funded under the Thematic Programme User-friendly Information Society of the 5th Framework Programme of the European Community (Contract N IST-2001-37836) References Robert Dale and Nicholas Haddock. 1991. Generating referring expressions involving relations. In Proceedings of the 5th Conference of the European Chapter of the Association for Computational Linguistics (EACL’91), pages 161–166, Berlin, Germany. Robert Dale and Ehud Reiter. 1995. 
Computational interpretations of the Gricean maxims in the generation of referring expressions. Cognitive Science, 19:233–263. Helmut Horacek. 2003. A best-first search algorithm for generating referring expressions. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL’03), pages 103–106, Budapest, Hungary. Emiel Krahmer and Mari¨et Theune. 2002. Efficient context-sensitive generation of referring expressions. In Kees van Deemter and Rodger Kibble, editors, Information Sharing: Givenness and Newness in Language Processing, pages 223– 264. CSLI Publications, Stanford,California. Emiel Krahmer, Sebastiaan van Erk, and Andr´e Verleg. 2003. Graph-based generation of referring expressions. Computational Linguistics, 29(1):53–72. Mitchell Marcus, Beatrice Santorini, and Mary Marcinkiewicz. 1993. Building a large natural language corpus of English: The Penn Treebank. Computational Linguistics, 19:313–330. George A. Miller, Richard Beckwith, Christiane D. Fellbaum, Derek Gross, and Katherine Miller. 1993. Five Papers on WordNet. Technical report, Princeton University, Princeton, N.J. Ehud Reiter. 1990. The computational complexity of avoiding conversational implicatures. In Proceedings of the 28th Annual Meeting of Association for Computational Linguistics (ACL’90), pages 97–104, Pittsburgh, Pennsylvania. Ehud Reiter and Robert Dale. 1992. A fast algorithm for the generation of referring expressions. In Proceedings of the 14th International Conference on Computational Linguistics (COLING’92), pages 232–238, Nantes, France. Kees van Deemter. 2000. Generating vague descriptions. In Proceedings of the 1st International Conference on Natural Language Generation (INLG’00), pages 179–185, Mitzpe Ramon, Israel. Kees van Deemter. 2002. Generating referring expressions: Boolean extensions of the incremental algorithm. Computational Linguistics, 28(1):37– 52.
Discovering Relations among Named Entities from Large Corpora Takaaki Hasegawa Cyberspace Laboratories Nippon Telegraph and Telephone Corporation 1-1 Hikarinooka, Yokosuka, Kanagawa 239-0847, Japan [email protected] Satoshi Sekine and Ralph Grishman Dept. of Computer Science New York University 715 Broadway, 7th floor, New York, NY 10003, U.S.A.  sekine,grishman  @cs.nyu.edu Abstract Discovering the significant relations embedded in documents would be very useful not only for information retrieval but also for question answering and summarization. Prior methods for relation discovery, however, needed large annotated corpora which cost a great deal of time and effort. We propose an unsupervised method for relation discovery from large corpora. The key idea is clustering pairs of named entities according to the similarity of context words intervening between the named entities. Our experiments using one year of newspapers reveals not only that the relations among named entities could be detected with high recall and precision, but also that appropriate labels could be automatically provided for the relations. 1 Introduction Although Internet search engines enable us to access a great deal of information, they cannot easily give us answers to complicated queries, such as “a list of recent mergers and acquisitions of companies” or “current leaders of nations from all over the world”. In order to find answers to these types of queries, we have to analyze relevant documents to collect the necessary information. If many relations such as “Company A merged with Company B” embedded in those documents could be gathered and structured automatically, it would be very useful not only for information retrieval but also for question answering and summarization. Information Extraction provides methods for extracting information such as particular events and relations between entities from text. However, it is domain dependent and it could not give answers to those types of queries from Web documents which include widely various domains. Our goal is automatically discovering useful relations among arbitrary entities embedded in large  This work is supported by Nippon Telegraph and Telephone (NTT) Corporation’s one-year visiting program at New York University. text corpora. We defined a relation broadly as an affiliation, role, location, part-whole, social relationship and so on between a pair of entities. For example, if the sentence, “George Bush was inaugurated as the president of the United States.” exists in documents, the relation, “George Bush”(PERSON) is the “President of” the “United States” (GPE1), should be extracted. In this paper, we propose an unsupervised method of discovering relations among various entities from large text corpora. Our method does not need the richly annotated corpora required for supervised learning — corpora which take great time and effort to prepare. It also does not need any instances of relations as initial seeds for weakly supervised learning. This is an advantage of our approach, since we cannot know in advance all the relations embedded in text. Instead, we only need a named entity (NE) tagger to focus on the named entities which should be the arguments of relations. Recently developed named entity taggers work quite well and are able to extract named entities from text at a practically useful level. The rest of this paper is organized as follows. We discuss prior work and their limitations in section 2. We propose a new method of relation discovery in section 3. 
Then we describe experiments and evaluations in section 4 and 5, and discuss the approach in section 6. Finally, we conclude with future work. 2 Prior Work The concept of relation extraction was introduced as part of the Template Element Task, one of the information extraction tasks in the Sixth Message Understanding Conference (MUC-6) (Defense Advanced Research Projects Agency, 1995). MUC-7 added a Template Relation Task, with three relations. Following MUC, the Automatic Content Extraction (ACE) meetings (National Institute of Standards and Technology, 2000) are pursuing informa1GPE is an acronym introduced by the ACE program to represent a Geo-Political Entity — an entity with land and a government. tion extraction. In the ACE Program2, Relation Detection and Characterization (RDC) was introduced as a task in 2002. Most of approaches to the ACE RDC task involved supervised learning such as kernel methods (Zelenko et al., 2002) and need richly annotated corpora which are tagged with relation instances. The biggest problem with this approach is that it takes a great deal of time and effort to prepare annotated corpora large enough to apply supervised learning. In addition, the varieties of relations were limited to those defined by the ACE RDC task. In order to discover knowledge from diverse corpora, a broader range of relations would be necessary. Some previous work adopted a weakly supervised learning approach. This approach has the advantage of not needing large tagged corpora. Brin proposed the bootstrapping method for relation discovery (Brin, 1998). Brin’s method acquired patterns and examples by bootstrapping from a small initial set of seeds for a particular relation. Brin used a few samples of book titles and authors, collected common patterns from context including the samples and finally found new examples of book title and authors whose context matched the common patterns. Agichtein improved Brin’s method by adopting the constraint of using a named entity tagger (Agichtein and Gravano, 2000). Ravichandran also explored a similar method for question answering (Ravichandran and Hovy, 2002). These approaches, however, need a small set of initial seeds. It is also unclear how initial seeds should be selected and how many seeds are required. Also their methods were only tried on functional relations, and this was an important constraint on their bootstrapping. The variety of expressions conveying the same relation can be considered an example of paraphrases, and so some of the prior work on paraphrase acquisition is pertinent to relation discovery. Lin proposed another weakly supervised approach for discovering paraphrase (Lin and Pantel, 2001). Firstly Lin focused on verb phrases and their fillers as subject or object. Lin’s idea was that two verb phrases which have similar fillers might be regarded as paraphrases. This approach, however, also needs a sample verb phrase as an initial seed in order to find similar verb phrases. 3 Relation Discovery 3.1 Overview We propose a new approach to relation discovery from large text corpora. Our approach is based on 2A research and evaluation program in information extraction organized by the U.S. Government. context based clustering of pairs of entities. We assume that pairs of entities occurring in similar context can be clustered and that each pair in a cluster is an instance of the same relation. Relations between entities are discovered through this clustering process. 
In cases where the contexts linking a pair of entities express multiple relations, we expect that the pair of entities either would not be clustered at all, or would be placed in a cluster corresponding to its most frequently expressed relation, because its contexts would not be sufficiently similar to contexts for less frequent relations. We assume that useful relations will be frequently mentioned in large corpora. Conversely, relations mentioned once or twice are not likely to be important. Our basic idea is as follows: 1. tagging named entities in text corpora 2. getting co-occurrence pairs of named entities and their context 3. measuring context similarities among pairs of named entities 4. making clusters of pairs of named entities 5. labeling each cluster of pairs of named entities We show an example in Figure 1. First, we find the pair of ORGANIZATIONs (ORG) A and B, and the pair of ORGANIZATIONs (ORG) C and D, after we run the named entity tagger on our newspaper corpus. We collect all instances of the pair A and B occurring within a certain distance of one another. Then, we accumulate the context words intervening between A and B, such as “be offer to buy”, “be negotiate to acquire”.3 In same way, we also accumulate context words intervening between C and D. If the set of contexts of A and B and those of C and D are similar, these two pairs are placed into the same cluster. A – B and C – D would be in the same relation, in this case, merger and acquisition (M&A). That is, we could discover the relation between these ORGANIZATIONs. 3.2 Named entity tagging Our proposed method is fully unsupervised. We do not need richly annotated corpora or any initial manually selected seeds. Instead of them, we use a named entity (NE) tagger. Recently developed named entity taggers work quite well and extract named entities from text at a practically usable 3We collect the base forms of words which are stemmed by a POS tagger (Sekine, 2001). But verb past participles are distinguished from other verb forms in order to distinguish the passive voice from the active voice.                                                            ! "#  $    % $"&'$(&" " )  (  )     )  $     Figure 1: Overview of our basic idea level. In addition, the set of types of named entities has been extended by several research groups. For example, Sekine proposed 150 types of named entities (Sekine et al., 2002). Extending the range of NE types would lead to more effective relation discovery. If the type ORGANIZATION could be divided into subtypes, COMPANY, MILITARY, GOVERNMENT and so on, the discovery procedure could detect more specific relations such as those between COMPANY and COMPANY. We use an extended named entity tagger (Sekine, 2001) in order to detect useful relations between extended named entities. 3.3 NE pairs and context We define the co-occurrence of NE pairs as follows: two named entities are considered to co-occur if they appear within the same sentence and are separated by at most N intervening words. We collect the intervening words between two named entities for each co-occurrence. These words, which are stemmed, could be regarded as the context of the pair of named entities. Different orders of occurrence of the named entities are also considered as different contexts. For example,  and   are collected as different contexts, where  and  represent named entities. Less frequent pairs of NEs should be eliminated because they might be less reliable in learning relations. 
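As a concrete illustration of this pair-and-context collection step, a minimal sketch follows. The tagged_sentences format (token list plus entity spans), the stem helper and the numeric defaults are assumptions made for the example; the actual settings appear in the experiments section.

```python
from collections import defaultdict

def stem(word):
    # placeholder for a real stemmer / POS-based base-form lookup
    return word.lower()

def collect_ne_pair_contexts(tagged_sentences, max_intervening=5, min_freq=30):
    """Collect ordered co-occurring NE pairs and the stemmed words intervening
    between them, keeping only pairs that co-occur at least min_freq times."""
    contexts = defaultdict(list)   # (NE_a, NE_b) -> list of context word lists
    for tokens, entities in tagged_sentences:
        # entities is assumed to be a list of (surface, ne_type, start, end)
        for a in entities:
            for b in entities:
                if a is b:
                    continue
                # a must precede b; (A, B) and (B, A) are kept as separate keys
                if a[3] <= b[2] and (b[2] - a[3]) <= max_intervening:
                    between = [stem(w) for w in tokens[a[3]:b[2]]]
                    contexts[((a[0], a[1]), (b[0], b[1]))].append(between)
    return {pair: ctxs for pair, ctxs in contexts.items()
            if len(ctxs) >= min_freq}
```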
So we have set a frequency threshold to remove those pairs.

3.4 Context similarity among NE pairs

We adopt a vector space model and cosine similarity in order to calculate the similarities between the sets of contexts of NE pairs. We only compare NE pairs which have the same NE types, e.g., one PERSON – GPE pair and another PERSON – GPE pair. We define a domain as a pair of named entity types, e.g., the PERSON-GPE domain. For example, we have to detect relations between PERSON and GPE in the PERSON-GPE domain. Before making context vectors, we eliminate stop words, words in parallel expressions, and expressions peculiar to particular source documents (examples of these are given below), because these expressions would introduce noise in calculating similarities.

A context vector for each NE pair consists of the bag of words formed from all intervening words from all co-occurrences of the two named entities. Each word of a context vector is weighted by tf*idf, the product of term frequency and inverse document frequency. Term frequency is the number of occurrences of a word in the collected context words. The order of co-occurrence of the named entities is also considered: if a word occurred f_AB times in contexts of the form "A ... B" and f_BA times in contexts of the form "B ... A", the term frequency tf of the word is defined as tf = f_AB - f_BA, where A and B are named entities. We think that this term frequency of a word in different orders would be effective to detect the direction of a relation if the arguments of a relation have the same NE types. Document frequency is the number of documents which include the word.

If the norm |x| of a context vector x is extremely small due to a lack of content words, the cosine similarity between that vector and others might be unreliable. So, we also define a norm threshold in advance to eliminate short context vectors. The cosine similarity cos(x, y) between context vectors x and y is calculated by the following formula:

cos(x, y) = (x · y) / (|x| |y|)

Cosine similarity varies from -1 to 1. A cosine similarity of 1 would mean these NE pairs have exactly the same context words with the NEs appearing predominantly in the same order, and a cosine similarity of -1 would mean these NE pairs have exactly the same context words with the NEs appearing predominantly in reverse order.

3.5 Clustering NE pairs

After we calculate the similarity among context vectors of NE pairs, we make clusters of NE pairs based on the similarity. We do not know in advance how many clusters we should make, so we adopt hierarchical clustering. Many clustering methods have been proposed for hierarchical clustering, but we adopt complete linkage because it is conservative in making clusters. In complete linkage, the distance between clusters is taken to be the distance between their furthest members.

3.6 Labeling clusters

If most of the NE pairs in the same cluster have words in common, those common words characterize the cluster. In other words, we can regard the common words as the characterization of a particular relation. We simply count the frequency of the common words over all combinations of the NE pairs in the same cluster. The frequencies are normalized by the number of combinations. The frequent common words in a cluster become the label of the cluster, i.e. the label of the relation, provided the cluster consists of NE pairs in the same relation.

4 Experiments

We experimented with one year of The New York Times (1995) as our corpus to verify our proposed method.
We determined three parameters for thresholds and identified the patterns for parallel expressions and expressions peculiar to The New York Times as ignorable context. We set the maximum context word length to 5 words and set the frequency threshold of co-occurring NE pairs to 30 empirically. We also used the patterns “,.*,”, “and” and “or” for parallel expressions, and the pattern “) --” (used in datelines at the beginning of articles) as peculiar to The New York Times. In our experiment, the norm threshold was set to 10. We also used stop words when context vectors were made. The stop words include symbols, words which occurred under 3 times as infrequent words, and those which occurred over 100,000 times as highly frequent words.

We applied our proposed method to The New York Times 1995, identified the NE pairs satisfying our criteria, and extracted the NE pairs along with their intervening words as our data set. In order to evaluate the relations detected automatically, we analyzed the data set manually and identified the relations for two different domains. One was the PERSON-GPE (PER-GPE) domain. We obtained 177 distinct NE pairs and classified them into 38 classes (relations) manually. The other was the COMPANY-COMPANY (COM-COM) domain. We got 65 distinct NE pairs and classified them into 10 classes manually. However, the types of both arguments of a relation are the same in the COM-COM domain, so the COM-COM domain includes symmetrical relations as well as asymmetrical relations. For the latter, we have to distinguish the different orders of arguments. We show the types of classes and the number of NE pairs in each class in Table 1. The errors in NE tagging were eliminated to evaluate our method correctly.

Table 1: Manually classified relations which are extracted from newspapers (relation: number of NE pairs)
PER-GPE: President 28, Senator 21, Governor 17, Prime Minister 16, Player 12, Living 9, Coach 8, Republican 8, Secretary 7, Mayor 5, Enemy 5, Working 4, others (2 and 3 pairs) 20, others (only 1 pair) 17
COM-COM: M&A 35, Rival 8, Parent 8, Alliance 6, Joint Venture 2, Trading 2, others (only 1 pair) 4

5 Evaluation

We evaluated separately the placement of the NE pairs into clusters and the assignment of labels to these clusters. In the first step, we evaluated clusters consisting of two or more pairs. For each cluster, we determined the relation (R) of the cluster as the most frequently represented relation; we call this the major relation of the cluster. NE pairs with relation R in a cluster whose major relation was R were counted as correct; the correct pair count, N_correct, is defined as the total number of correct pairs in all clusters. Other NE pairs in the cluster were counted as incorrect; the incorrect pair count, N_incorrect, is defined as the total number of incorrect pairs in all clusters. We evaluated clusters based on Recall, Precision and F-measure. We defined these measures as follows.

Recall (R): How many correct pairs are detected out of all the key pairs? The key pair count, N_key, is defined as the total number of pairs manually classified into classes of two or more pairs. Recall is defined as follows:

R = N_correct / N_key

Precision (P): How many correct pairs are detected among the pairs clustered automatically? Precision is defined as follows:

P = N_correct / (N_correct + N_incorrect)

F-measure (F): F-measure is defined as a combination of recall and precision according to the following formula:

F = 2PR / (P + R)

These values vary depending on the threshold of cosine similarity.
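The sketch below shows how these counts and scores could be computed from a set of clusters and the manual classification. The clusters and gold data structures are assumed representations, and the treatment of the key pair count (all pairs whose manual class contains two or more pairs) is one reading of the definition above.

```python
from collections import Counter

def cluster_scores(clusters, gold):
    """Compute recall, precision and F-measure as defined above. clusters is a
    list of lists of NE pairs; gold maps each manually classified NE pair to
    its relation. Only clusters (and manual classes) of two or more pairs count."""
    n_correct = n_incorrect = 0
    for cluster in clusters:
        if len(cluster) < 2:
            continue
        labels = Counter(gold[pair] for pair in cluster)
        _, major_count = labels.most_common(1)[0]   # size of the major relation
        n_correct += major_count
        n_incorrect += len(cluster) - major_count
    class_sizes = Counter(gold.values())
    n_key = sum(1 for pair in gold if class_sizes[gold[pair]] >= 2)
    recall = n_correct / n_key
    precision = n_correct / (n_correct + n_incorrect)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure
```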
As the threshold is decreased, the clusters gradually merge, finally forming one big cluster. We show the results of complete linkage clustering for the PERSON-GPE (PER-GPE) domain in Figure 2 and for the COMPANY-COMPANY (COM-COM) domain in Figure 3. With these metrics, precision fell as the threshold of cosine similarity was lowered. Recall increased until the threshold was almost 0, at which point it fell because the total number of correct pairs in the remaining few big clusters decreased. The best F-measure was 82 in the PER-GPE domain and 77 in the COM-COM domain. In both domains, the best F-measure was found near 0 cosine similarity. Generally, it is difficult to determine the threshold of similarity in advance. Since the best threshold of cosine similarity was almost the same in the two domains, we fixed the cosine threshold at a single value just above zero for both domains for simplicity. We also investigated each cluster with the threshold of cosine similarity just above 0. We got 34 PER-GPE clusters and 15 COM-COM clusters. We show the F-measure, recall and precision at this cosine threshold in both domains in Table 2. We got 80 F-measure in the PER-GPE domain and 75 F-measure in the COM-COM domain. These values were very close to the best F-measure.

[Figure 2: F-measure, recall and precision by varying the threshold of cosine similarity in complete linkage clustering for the PERSON-GPE domain]

[Figure 3: F-measure, recall and precision by varying the threshold of cosine similarity in complete linkage clustering for the COMPANY-COMPANY domain]

Table 2: F-measure, recall and precision with the threshold of cosine similarity just above 0
Domain | Precision | Recall | F-measure
PER-GPE | 79 | 83 | 80
COM-COM | 76 | 74 | 75

Then, we evaluated the labeling of clusters of NE pairs. We show the larger clusters for each domain, along with the ratio of the number of pairs bearing the major relation to the total number of pairs in each cluster, on the left in Table 3. (As noted above, the major relation is the most frequently represented relation in the cluster.) We also show the most frequent common words and their relative frequency in each cluster on the right in Table 3. If two NE pairs in a cluster share a particular context word, we consider these pairs to be linked (with respect to this word). The relative frequency for a word is the number of such links, relative to the maximal possible number of links (n(n-1)/2 for a cluster of n pairs). If the relative frequency is 1, the word is shared by all NE pairs.

Table 3: Major relations in clusters and the most frequent common words in each cluster
Major relation | Ratio | Common words (relative frequency)
President | 17/23 | President (1.0), president (0.415), ...
Senator | 19/21 | Sen. (1.0), Republican (0.214), Democrat (0.133), republican (0.133), ...
Prime Minister | 15/16 | Minister (1.0), minister (0.875), Prime (0.875), prime (0.758), ...
Governor | 15/16 | Gov. (1.0), governor (0.458), Governor (0.3), ...
Secretary | 6/7 | Secretary (1.0), secretary (0.143), ...
Republican | 5/6 | Rep. (1.0), Republican (0.667), ...
Coach | 5/5 | coach (1.0), ...
M&A | 10/11 | buy (1.0), bid (0.382), offer (0.273), purchase (0.273), ...
M&A | 9/9 | acquire (1.0), acquisition (0.583), buy (0.583), agree (0.417), ...
Parent | 7/7 | parent (1.0), unit (0.476), own (0.143), ...
Alliance | 3/4 | join (1.0)
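A small sketch of the labeling step behind Table 3 is given below: for each cluster we count, over all combinations of its NE pairs, how often a context word is shared, and normalize by the number of combinations. The context_words mapping (NE pair to its set of stemmed context words) is an assumed representation.

```python
from itertools import combinations
from collections import Counter

def label_cluster(cluster, context_words):
    """Rank candidate labels for one cluster by the fraction of NE-pair
    combinations that share each context word; context_words[pair] is the set
    of stemmed words collected for that NE pair."""
    max_links = len(cluster) * (len(cluster) - 1) / 2
    link_counts = Counter()
    for pair_a, pair_b in combinations(cluster, 2):
        for word in context_words[pair_a] & context_words[pair_b]:
            link_counts[word] += 1
    # a relative frequency of 1.0 means the word is shared by every NE pair
    return [(word, count / max_links)
            for word, count in link_counts.most_common()]
```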
Although we obtained some meaningful relations in small clusters, we have omitted the small clusters because the common words in such small clusters might be unreliable. We found that all large clusters had appropriate relations and that the common words which occurred frequently in those clusters accurately represented the relations. In other words, the frequent common words could be regarded as suitable labels for the relations. 6 Discussion The results of our experiments revealed good performance. The performance was a little higher in the PER-GPE domain than in the COM-COM domain, perhaps because there were more NE pairs with high cosine similarity in the PER-GPE domain than in the COM-COM domain. However, the graphs in both domains were similar, in particular when the cosine similarity was under 0.2. We would like to discuss the differences between the two domains and the following aspects of our unsupervised method for discovering the relations:  properties of relations  appropriate context word length  selecting best clustering method  covering less frequent pairs We address each of these points in turn. 6.1 Properties of relations We found that the COM-COM domain was more difficult to judge than the PER-GPE domain due to the similarities of relations. For example, the pair of companies in M&A relation might also subsequently appear in the parent relation. Asymmetric properties caused additional difficulties in the COM-COM domain, because most relations have directions. We have to recognize the direction of relations,   vs.    , to distinguish, for example, “A is parent company of B” and “B is parent company of A”. In determining the similarities between the NE pairs A and B and the NE pairs C and D, we must calculate both the similarity    with  and the similarity  with  . Sometimes the wrong correspondence ends up being favored. This kind of error was observed in 2 out of the 15 clusters, due to the fact that words happened to be shared by NE pairs aligned in the wrong direction more than in right direction. 6.2 Context word length The main reason for undetected or mis-clustered NE pairs in both domains is the absence of common words in the pairs’ context which explicitly represent the particular relations. Mis-clustered NE pairs were clustered based on another common word which occurred by accident. If the maximum context length were longer than the limit of 5 words which we set in the experiments, we could detect additional common words, but the noise would also increase. In our experiments, we used only the words between the two NEs. Although the outer context words (preceding the first NE or following the second NE) may be helpful, extending the context in this way will have to be carefully evaluated. It is future work to determine the best context word length. 6.3 Clustering method We tried single linkage and average linkage as well as complete linkage for making clusters. Complete linkage was the best clustering method because it yielded the highest F-measure. Furthermore, for the other two clustering methods, the threshold of cosine similarity producing the best F-measure was different in the two domains. In contrast, for complete linkage the optimal threshold was almost the same in the two domains. The best threshold of cosine similarity in complete linkage was determined to be just above 0; when this threshold reaches 0, the F-measure drops suddenly because the pairs need not share any words. 
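For concreteness, the thresholded complete-linkage procedure discussed here can be sketched as follows. This is a naive quadratic implementation meant only to make the merging criterion explicit; vectors is assumed to map each NE pair to a sparse word-weight dictionary.

```python
def complete_linkage_clusters(vectors, threshold):
    """Agglomerative clustering with complete linkage: two clusters may merge
    only if every cross-cluster pair of context vectors has cosine similarity
    above threshold (e.g. a value just above 0)."""
    def cosine(x, y):
        if not x or not y:
            return 0.0
        dot = sum(weight * y[word] for word, weight in x.items() if word in y)
        norm = lambda v: sum(w * w for w in v.values()) ** 0.5
        return dot / (norm(x) * norm(y))

    clusters = [[pair] for pair in vectors]          # start from singletons
    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # complete linkage: similarity of the least similar members
                link = min(cosine(vectors[a], vectors[b])
                           for a in clusters[i] for b in clusters[j])
                if link > threshold and (best is None or link > best[0]):
                    best = (link, i, j)
        if best is None:
            return clusters
        _, i, j = best
        clusters[i].extend(clusters.pop(j))
```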
A threshold just above 0 means that each combination of NE pairs in the same cluster shares at least one word in common — and most of these common words were pertinent to the relations. We consider that this is relevant to context word length. We used a relatively small maximum context word length – 5 words – making it less likely that noise words appear in common across different relations. The combination of complete linkage and small context word length proved useful for relation discovery. 6.4 Less frequent pairs As we set the frequency threshold of NE cooccurrence to 30, we will miss the less frequent NE pairs. Some of those pairs might be in valuable relations. For the less frequent NE pairs, since the context varieties would be small and the norms of context vectors would be too short, it is difficult to reliably classify the relation based on those pairs. One way of addressing this defect would be through bootstrapping. The problem of bootstrapping is how to select initial seeds; we could resolve this problem with our proposed method. NE pairs which have many context words in common in each cluster could be promising seeds. Once these seeds have been established, additional, lower-frequency NE pairs could be added to these clusters based on more relaxed keyword-overlap criteria. 7 Conclusion We proposed an unsupervised method for relation discovery from large corpora. The key idea was clustering of pairs of named entities according to the similarity of the context words intervening between the named entities. The experiments using one year’s newspapers revealed not only that the relations among named entities could be detected with high recall and precision, but also that appropriate labels could be automatically provided to the relations. In the future, we are planning to discover less frequent pairs of named entities by combining our method with bootstrapping as well as to improve our method by tuning parameters. 8 Acknowledgments This research was supported in part by the Defense Advanced Research Projects Agency as part of the Translingual Information Detection, Extraction and Summarization (TIDES) program, under Grant N66001-001-1-8917 from the Space and Naval Warfare Systems Center, San Diego, and by the National Science Foundation under Grant ITS00325657. This paper does not necessarily reflect the position of the U.S. Government. We would like to thank Dr. Yoshihiko Hayashi at Nippon Telegraph and Telephone Corporation, currently at Osaka University, who gave one of us (T.H.) an opportunity to conduct this research. References Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proc. of the 5th ACM International Conference on Digital Libraries (ACM DL’00), pages 85–94. Sergey Brin. 1998. Extracting patterns and relations from world wide web. In Proc. of WebDB Workshop at 6th International Conference on Extending Database Technology (WebDB’98), pages 172–183. Defense Advanced Research Projects Agency. 1995. Proceedings of the Sixth Message Understanding Conference (MUC-6). Morgan Kaufmann Publishers, Inc. Dekang Lin and Patrick Pantel. 2001. Dirt - discovery of inference rules from text. In Proc. of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD2001), pages 323–328. National Institute of Standards and Technology. 2000. Automatic Content Extraction. http://www.nist.gov/speech/tests/ace/index.htm. Deepak Ravichandran and Eduard Hovy. 2002. 
Learning surface text patterns for a question answering system. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-2002), pages 41–47. Satoshi Sekine, Kiyoshi Sudo, and Chikashi Nobata. 2002. Extended named entity hierarchy. In Proc. of the Third International Conference on Language Resources and Evaluation (LREC2002), pages 1818–1824. Satoshi Sekine. 2001. OAK System (English Sentence Analyzer). http://nlp.cs.nyu.edu/oak/. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2002. Kernel methods for relation extraction. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2002), pages 71–78.
Dependency Tree Kernels for Relation Extraction Aron Culotta University of Massachusetts Amherst, MA 01002 USA [email protected] Jeffrey Sorensen IBM T.J. Watson Research Center Yorktown Heights, NY 10598 USA [email protected] Abstract We extend previous work on tree kernels to estimate the similarity between the dependency trees of sentences. Using this kernel within a Support Vector Machine, we detect and classify relations between entities in the Automatic Content Extraction (ACE) corpus of news articles. We examine the utility of different features such as Wordnet hypernyms, parts of speech, and entity types, and find that the dependency tree kernel achieves a 20% F1 improvement over a “bag-of-words” kernel. 1 Introduction The ability to detect complex patterns in data is limited by the complexity of the data’s representation. In the case of text, a more structured data source (e.g. a relational database) allows richer queries than does an unstructured data source (e.g. a collection of news articles). For example, current web search engines would not perform well on the query, “list all California-based CEOs who have social ties with a United States Senator.” Only a structured representation of the data can effectively provide such a list. The goal of Information Extraction (IE) is to discover relevant segments of information in a data stream that will be useful for structuring the data. In the case of text, this usually amounts to finding mentions of interesting entities and the relations that join them, transforming a large corpus of unstructured text into a relational database with entries such as those in Table 1. IE is commonly viewed as a three stage process: first, an entity tagger detects all mentions of interest; second, coreference resolution resolves disparate mentions of the same entity; third, a relation extractor finds relations between these entities. Entity tagging has been thoroughly addressed by many statistical machine learning techniques, obtaining greater than 90% F1 on many datasets (Tjong Kim Sang and De Meulder, 2003). Coreference resolution is an active area of research not investigated here (PaEntity Type Location Apple Organization Cupertino, CA Microsoft Organization Redmond, WA Table 1: An example of extracted fields sula et al., 2002; McCallum and Wellner, 2003). We describe a relation extraction technique based on kernel methods. Kernel methods are nonparametric density estimation techniques that compute a kernel function between data instances, where a kernel function can be thought of as a similarity measure. Given a set of labeled instances, kernel methods determine the label of a novel instance by comparing it to the labeled training instances using this kernel function. Nearest neighbor classification and support-vector machines (SVMs) are two popular examples of kernel methods (Fukunaga, 1990; Cortes and Vapnik, 1995). An advantage of kernel methods is that they can search a feature space much larger than could be represented by a feature extraction-based approach. This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two instances, as described in the Section 3. Working in such a large feature space can lead to over-fitting in many machine learning algorithms. To address this problem, we apply SVMs to the task of relation extraction. SVMs find a boundary between instances of different classes such that the distance between the boundary and the nearest instances is maximized. 
This characteristic, in addition to empirical validation, indicates that SVMs are particularly robust to over-fitting. Here we are interested in detecting and classifying instances of relations, where a relation is some meaningful connection between two entities (Table 2). We represent each relation instance as an augmented dependency tree. A dependency tree represents the grammatical dependencies in a sentence; we augment this tree with features for each node AT NEAR PART ROLE SOCIAL Based-In Relative-location Part-of Affiliate, Founder Associate, Grandparent Located Subsidiary Citizen-of, Management Parent, Sibling Residence Other Client, Member Spouse, Other-professional Owner, Other, Staff Other-relative, Other-personal Table 2: Relation types and subtypes. (e.g. part of speech) We choose this representation because we hypothesize that instances containing similar relations will share similar substructures in their dependency trees. The task of the kernel function is to find these similarities. We define a tree kernel over dependency trees and incorporate this kernel within an SVM to extract relations from newswire documents. The tree kernel approach consistently outperforms the bag-ofwords kernel, suggesting that this highly-structured representation of sentences is more informative for detecting and distinguishing relations. 2 Related Work Kernel methods (Vapnik, 1998; Cristianini and Shawe-Taylor, 2000) have become increasingly popular because of their ability to map arbitrary objects to a Euclidian feature space. Haussler (1999) describes a framework for calculating kernels over discrete structures such as strings and trees. String kernels for text classification are explored in Lodhi et al. (2000), and tree kernel variants are described in (Zelenko et al., 2003; Collins and Duffy, 2002; Cumby and Roth, 2003). Our algorithm is similar to that described by Zelenko et al. (2003). Our contributions are a richer sentence representation, a more general framework to allow feature weighting, as well as the use of composite kernels to reduce kernel sparsity. Brin (1998) and Agichtein and Gravano (2000) apply pattern matching and wrapper techniques for relation extraction, but these approaches do not scale well to fastly evolving corpora. Miller et al. (2000) propose an integrated statistical parsing technique that augments parse trees with semantic labels denoting entity and relation types. Whereas Miller et al. (2000) use a generative model to produce parse information as well as relation information, we hypothesize that a technique discriminatively trained to classify relations will achieve better performance. Also, Roth and Yih (2002) learn a Bayesian network to tag entities and their relations simultaneously. We experiment with a more challenging set of relation types and a larger corpus. 3 Kernel Methods In traditional machine learning, we are provided a set of training instances S = {x1 . . . xN}, where each instance xi is represented by some ddimensional feature vector. Much time is spent on the task of feature engineering – searching for the optimal feature set either manually by consulting domain experts or automatically through feature induction and selection (Scott and Matwin, 1999). For example, in entity detection the original instance representation is generally a word vector corresponding to a sentence. Feature extraction and induction may result in features such as part-ofspeech, word n-grams, character n-grams, capitalization, and conjunctions of these features. 
In the case of more structured objects, such as parse trees, features may include some description of the object’s structure, such as “has an NP-VP subtree.” Kernel methods can be particularly effective at reducing the feature engineering burden for structured objects. By calculating the similarity between two objects, kernel methods can employ dynamic programming solutions to efficiently enumerate over substructures that would be too costly to explicitly include as features. Formally, a kernel function K is a mapping K : X × X →[0, ∞] from instance space X to a similarity score K(x, y) = P i φi(x)φi(y) = φ(x) · φ(y). Here, φi(x) is some feature function over the instance x. The kernel function must be symmetric [K(x, y) = K(y, x)] and positivesemidefinite. By positive-semidefinite, we require that the if x1, . . . , xn ∈X, then the n × n matrix G defined by Gij = K(xi, xj) is positive semidefinite. It has been shown that any function that takes the dot product of feature vectors is a kernel function (Haussler, 1999). A simple kernel function takes the dot product of the vector representation of instances being compared. For example, in document classification, each document can be represented by a binary vector, where each element corresponds to the presence or absence of a particular word in that document. Here, φi(x) = 1 if word i occurs in document x. Thus, the kernel function K(x, y) returns the number of words in common between x and y. We refer to this kernel as the “bag-of-words” kernel, since it ignores word order. When instances are more structured, as in the case of dependency trees, more complex kernels become necessary. Haussler (1999) describes convolution kernels, which find the similarity between two structures by summing the similarity of their substructures. As an example, consider a kernel over strings. To determine the similarity between two strings, string kernels (Lodhi et al., 2000) count the number of common subsequences in the two strings, and weight these matches by their length. Thus, φi(x) is the number of times string x contains the subsequence referenced by i. These matches can be found efficiently through a dynamic program, allowing string kernels to examine long-range features that would be computationally infeasible in a feature-based method. Given a training set S = {x1 . . . xN}, kernel methods compute the Gram matrix G such that Gij = K(xi, xj). Given G, the classifier finds a hyperplane which separates instances of different classes. To classify an unseen instance x, the classifier first projects x into the feature space defined by the kernel function. Classification then consists of determining on which side of the separating hyperplane x lies. A support vector machine (SVM) is a type of classifier that formulates the task of finding the separating hyperplane as the solution to a quadratic programming problem (Cristianini and Shawe-Taylor, 2000). Support vector machines attempt to find a hyperplane that not only separates the classes but also maximizes the margin between them. The hope is that this will lead to better generalization performance on unseen instances. 4 Augmented Dependency Trees Our task is to detect and classify relations between entities in text. We assume that entity tagging has been performed; so to generate potential relation instances, we iterate over all pairs of entities occurring in the same sentence. For each entity pair, we create an augmented dependency tree (described below) representing this instance. 
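A minimal sketch of this candidate-generation step is given below; the sentence.entities and sentence.dependency_tree attributes are assumed names for the output of the entity tagger and the parser.

```python
from itertools import combinations

def candidate_relation_instances(sentences):
    """Enumerate every pair of entity mentions within a sentence as a
    potential relation instance."""
    for sentence in sentences:
        for e1, e2 in combinations(sentence.entities, 2):
            # each instance is later represented by the smallest subtree of
            # the dependency tree that covers both mentions
            yield sentence.dependency_tree, e1, e2
```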
Given a labeled training set of potential relations, we define a tree kernel over dependency trees which we then use in an SVM to classify test instances. A dependency tree is a representation that denotes grammatical relations between words in a sentence (Figure 1). A set of rules maps a parse tree to a dependency tree. For example, subjects are dependent on their verbs and adjectives are dependent Troops Tikrit advanced near t t t t 0 1 2 3 Figure 1: A dependency tree for the sentence Troops advanced near Tikrit. Feature Example word troops, Tikrit part-of-speech (24 values) NN, NNP general-pos (5 values) noun, verb, adj chunk-tag NP, VP, ADJP entity-type person, geo-political-entity entity-level name, nominal, pronoun Wordnet hypernyms social group, city relation-argument ARG A, ARG B Table 3: List of features assigned to each node in the dependency tree. on the nouns they modify. Note that for the purposes of this paper, we do not consider the link labels (e.g. “object”, “subject”); instead we use only the dependency structure. To generate the parse tree of each sentence, we use MXPOST, a maximum entropy statistical parser1; we then convert this parse tree to a dependency tree. Note that the left-to-right ordering of the sentence is maintained in the dependency tree only among siblings (i.e. the dependency tree does not specify an order to traverse the tree to recover the original sentence). For each pair of entities in a sentence, we find the smallest common subtree in the dependency tree that includes both entities. We choose to use this subtree instead of the entire tree to reduce noise and emphasize the local characteristics of relations. We then augment each node of the tree with a feature vector (Table 3). The relation-argument feature specifies whether an entity is the first or second argument in a relation. This is required to learn asymmetric relations (e.g. X OWNS Y). Formally, a relation instance is a dependency tree 1http://www.cis.upenn.edu/˜adwait/statnlp.html T with nodes {t0 . . . tn}. The features of node ti are given by φ(ti) = {v1 . . . vd}. We refer to the jth child of node ti as ti[j], and we denote the set of all children of node ti as ti[c]. We reference a subset j of children of ti by ti[j] ⊆ti[c]. Finally, we refer to the parent of node ti as ti.p. From the example in Figure 1, t0[1] = t2, t0[{0, 1}] = {t1, t2}, and t1.p = t0. 5 Tree kernels for dependency trees We now define a kernel function for dependency trees. The tree kernel is a function K(T1, T2) that returns a normalized, symmetric similarity score in the range (0, 1) for two trees T1 and T2. We define a slightly more general version of the kernel described by Zelenko et al. (2003). We first define two functions over the features of tree nodes: a matching function m(ti, tj) ∈{0, 1} and a similarity function s(ti, tj) ∈(0, ∞]. Let the feature vector φ(ti) = {v1 . . . vd} consist of two possibly overlapping subsets φm(ti) ⊆φ(ti) and φs(ti) ⊆φ(ti). We use φm(ti) in the matching function and φs(ti) in the similarity function. We define m(ti, tj) = ( 1 if φm(ti) = φm(tj) 0 otherwise and s(ti, tj) = X vq∈φs(ti) X vr∈φs(tj) C(vq, vr) where C(vq, vr) is some compatibility function between two feature values. For example, in the simplest case where C(vq, vr) = ( 1 if vq = vr 0 otherwise s(ti, tj) returns the number of feature values in common between feature vectors φs(ti) and φs(tj). 
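The two node-level functions can be made concrete with the following sketch. The Node class and the split of the features of Table 3 into match_feats and sim_feats are illustrative assumptions; the compatibility function shown is the simplest identity case from the definition above.

```python
class Node:
    """Assumed node representation: feature values split into the subset used
    for matching and the subset used for similarity."""
    def __init__(self, match_feats, sim_feats, children=None):
        self.match_feats = match_feats   # e.g. general-pos, entity-type, relation-argument
        self.sim_feats = sim_feats       # the remaining features
        self.children = children or []

def m(t_i, t_j):
    """Matching function: 1 if the matching-feature values are identical."""
    return 1 if t_i.match_feats == t_j.match_feats else 0

def compatibility(v_q, v_r):
    """Simplest compatibility function C: exact identity of feature values."""
    return 1.0 if v_q == v_r else 0.0

def s(t_i, t_j):
    """Similarity function: sum of compatibilities over all feature-value pairs."""
    return sum(compatibility(v_q, v_r)
               for v_q in t_i.sim_feats.values()
               for v_r in t_j.sim_feats.values())
```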
We can think of the distinction between functions m(ti, tj) and s(ti, tj) as a way to discretize the similarity between two nodes. If φm(ti) ̸= φm(tj), then we declare the two nodes completely dissimilar. However, if φm(ti) = φm(tj), then we proceed to compute the similarity s(ti, tj). Thus, restricting nodes by m(ti, tj) is a way to prune the search space of matching subtrees, as shown below. For two dependency trees T1, T2, with root nodes r1 and r2, we define the tree kernel K(T1, T2) as follows: K(T1, T2) =      0 if m(r1, r2) = 0 s(r1, r2)+ Kc(r1[c], r2[c]) otherwise where Kc is a kernel function over children. Let a and b be sequences of indices such that a is a sequence a1 ≤a2 ≤. . . ≤an, and likewise for b. Let d(a) = an −a1 + 1 and l(a) be the length of a. Then we have Kc(ti[c], tj[c]) = X a,b,l(a)=l(b) λd(a)λd(b)K(ti[a], tj[b]) The constant 0 < λ < 1 is a decay factor that penalizes matching subsequences that are spread out within the child sequences. See Zelenko et al. (2003) for a proof that K is kernel function. Intuitively, whenever we find a pair of matching nodes, we search for all matching subsequences of the children of each node. A matching subsequence of children is a sequence of children a and b such that m(ai, bi) = 1 (∀i < n). For each matching pair of nodes (ai, bi) in a matching subsequence, we accumulate the result of the similarity function s(ai, bj) and then recursively search for matching subsequences of their children ai[c], bj[c]. We implement two types of tree kernels. A contiguous kernel only matches children subsequences that are uninterrupted by non-matching nodes. Therefore, d(a) = l(a). A sparse tree kernel, by contrast, allows non-matching nodes within matching subsequences. Figure 2 shows two relation instances, where each node contains the original text plus the features used for the matching function, φm(ti) = {generalpos, entity-type, relation-argument}. (“NA” denotes the feature is not present for this node.) The contiguous kernel matches the following substructures: {t0[0], u0[0]}, {t0[2], u0[1]}, {t3[0], u2[0]}. Because the sparse kernel allows non-contiguous matching sequences, it matches an additional substructure {t0[0, ∗, 2], u0[0, ∗, 1]}, where (∗) indicates an arbitrary number of non-matching nodes. Zelenko et al. (2003) have shown the contiguous kernel to be computable in O(mn) and the sparse kernel in O(mn3), where m and n are the number of children in trees T1 and T2 respectively. 6 Experiments We extract relations from the Automatic Content Extraction (ACE) corpus provided by the National Institute for Standards and Technology (NIST). The person noun NA NA verb ARG_B geo−political 1 0 troops advanced noun Tikrit ARG_A person noun forces NA NA verb moved NA NA prep toward ARG_B t t t t t 1 0 2 3 4 geo−political noun Baghdad quickly adverb NA NA ARG_A near prep NA NA 2 3 u u u u Figure 2: Two instances of the NEAR relation. data consists of about 800 annotated text documents gathered from various newspapers and broadcasts. Five entities have been annotated (PERSON, ORGANIZATION, GEO-POLITICAL ENTITY, LOCATION, FACILITY), along with 24 types of relations (Table 2). As noted from the distribution of relationship types in the training data (Figure 3), data imbalance and sparsity are potential problems. In addition to the contiguous and sparse tree kernels, we also implement a bag-of-words kernel, which treats the tree as a vector of features over nodes, disregarding any structural information. 
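To make the kernels being compared concrete, the following sketch implements the contiguous tree kernel defined in section 5, building on the Node, m and s sketches above. It enumerates matching contiguous child subsequences directly rather than using the O(mn) dynamic program, and it omits the final normalization, so it is an illustration of the definition rather than an efficient implementation.

```python
LAMBDA = 0.5   # decay factor, 0 < lambda < 1 (placeholder value)

def tree_kernel(t1, t2):
    """Contiguous dependency-tree kernel K(T1, T2): zero unless the roots
    match, otherwise root similarity plus the children kernel."""
    if m(t1, t2) == 0:
        return 0.0
    return s(t1, t2) + children_kernel(t1.children, t2.children)

def children_kernel(c1, c2):
    """Sum over equal-length contiguous child subsequences whose nodes match
    pairwise, each damped by lambda**length from both sides."""
    total = 0.0
    for length in range(1, min(len(c1), len(c2)) + 1):
        for i in range(len(c1) - length + 1):
            for j in range(len(c2) - length + 1):
                a, b = c1[i:i + length], c2[j:j + length]
                if all(m(x, y) == 1 for x, y in zip(a, b)):
                    total += (LAMBDA ** length) ** 2 * \
                             sum(tree_kernel(x, y) for x, y in zip(a, b))
    return total
```

In practice the raw score would typically be normalized, for example by dividing K(T1, T2) by the geometric mean of K(T1, T1) and K(T2, T2), to obtain a value in (0, 1).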
We also create composite kernels by combining the sparse and contiguous kernels with the bagof-words kernel. Joachims et al. (2001) have shown that given two kernels K1, K2, the composite kernel K12(xi, xj) = K1(xi, xj)+K2(xi, xj) is also a kernel. We find that this composite kernel improves performance when the Gram matrix G is sparse (i.e. our instances are far apart in the kernel space). The features used to represent each node are shown in Table 3. After initial experimentation, the set of features we use in the matching function is φm(ti) = {general-pos, entity-type, relationargument}, and the similarity function examines the Figure 3: Distribution over relation types in training data. remaining features. In our experiments we tested the following five kernels: K0 = sparse kernel K1 = contiguous kernel K2 = bag-of-words kernel K3 = K0 + K2 K4 = K1 + K2 We also experimented with the function C(vq, vr), the compatibility function between two feature values. For example, we can increase the importance of two nodes having the same Wordnet hypernym2. If vq, vr are hypernym features, then we can define C(vq, vr) = ( α if vq = vr 0 otherwise When α > 1, we increase the similarity of nodes that share a hypernym. We tested a number of weighting schemes, but did not obtain a set of weights that produced consistent significant improvements. See Section 8 for alternate approaches to setting C. 2http://www.cogsci.princeton.edu/˜wn/ Avg. Prec. Avg. Rec. Avg. F1 K1 69.6 25.3 36.8 K2 47.0 10.0 14.2 K3 68.9 24.3 35.5 K4 70.3 26.3 38.0 Table 4: Kernel performance comparison. Table 4 shows the results of each kernel within an SVM. (We augment the LibSVM3 implementation to include our dependency tree kernel.) Note that, although training was done over all 24 relation subtypes, we evaluate only over the 5 high-level relation types. Thus, classifying a RESIDENCE relation as a LOCATED relation is deemed correct4. Note also that K0 is not included in Table 4 because of burdensome computational time. Table 4 shows that precision is adequate, but recall is low. This is a result of the aforementioned class imbalance – very few of the training examples are relations, so the classifier is less likely to identify a testing instances as a relation. Because we treat every pair of mentions in a sentence as a possible relation, our training set contains fewer than 15% positive relation instances. To remedy this, we retrain each SVMs for a binary classification task. Here, we detect, but do not classify, relations. This allows us to combine all positive relation instances into one class, which provides us more training samples to estimate the class boundary. We then threshold our output to achieve an optimal operating point. As seen in Table 5, this method of relation detection outperforms that of the multi-class classifier. We then use these binary classifiers in a cascading scheme as follows: First, we use the binary SVM to detect possible relations. Then, we use the SVM trained only on positive relation instances to classify each predicted relation. These results are shown in Table 6. The first result of interest is that the sparse tree kernel, K0, does not perform as well as the contiguous tree kernel, K1. Suspecting that noise was introduced by the non-matching nodes allowed in the sparse tree kernel, we performed the experiment with different values for the decay factor λ = {.9, .5, .1}, but obtained no improvement. 
The second result of interest is that all tree kernels outperform the bag-of-words kernel, K2, most noticeably in recall performance, implying that the structural information the tree kernel provides is extremely useful for relation detection.

       | Prec. | Rec. | F1
K0     | –     | –    | –
K0 (B) | 83.4  | 45.5 | 58.8
K1     | 91.4  | 37.1 | 52.8
K1 (B) | 84.7  | 49.3 | 62.3
K2     | 92.7  | 10.6 | 19.0
K2 (B) | 72.5  | 40.2 | 51.7
K3     | 91.3  | 35.1 | 50.8
K3 (B) | 80.1  | 49.9 | 61.5
K4     | 91.8  | 37.5 | 53.3
K4 (B) | 81.2  | 51.8 | 63.2
Table 5: Relation detection performance. (B) denotes binary classification.

D  | C  | Avg. Prec. | Avg. Rec. | Avg. F1
K0 | K0 | 66.0       | 29.0      | 40.1
K1 | K1 | 66.6       | 32.4      | 43.5
K2 | K2 | 62.5       | 27.7      | 38.1
K3 | K3 | 67.5       | 34.3      | 45.3
K4 | K4 | 67.1       | 35.0      | 45.8
K1 | K4 | 67.4       | 33.9      | 45.0
K4 | K1 | 65.3       | 32.5      | 43.3
Table 6: Results on the cascading classification. D and C denote the kernel used for relation detection and classification, respectively.

Note that the average results reported here are representative of the performance per relation, except for the NEAR relation, which had slightly lower results overall due to its infrequency in training.

7 Conclusions

We have shown that using a dependency tree kernel for relation extraction provides a vast improvement over a bag-of-words kernel. While the dependency tree kernel appears to perform well at the task of classifying relations, recall is still relatively low. Detecting relations is a difficult task for a kernel method because the set of all non-relation instances is extremely heterogeneous, and is therefore difficult to characterize with a similarity metric. An improved system might use a different method to detect candidate relations and then use this kernel method to classify the relations.

8 Future Work

The most immediate extension is to automatically learn the feature compatibility function C(vq, vr). A first approach might use tf-idf to weight each feature. Another approach might be to calculate the information gain for each feature and use that as its weight. A more complex system might learn a weight for each pair of features; however this seems computationally infeasible for large numbers of features. One could also perform latent semantic indexing to collapse feature values into similar “categories” — for example, the words “football” and “baseball” might fall into the same category. Here, C(vq, vr) might return α1 if vq = vr, and α2 if vq and vr are in the same category, where α1 > α2 > 0. Any method that provides a “soft” match between feature values will sharpen the granularity of the kernel and enhance its modeling power.

Further investigation is also needed to understand why the sparse kernel performs worse than the contiguous kernel. These results contradict those given in Zelenko et al. (2003), where the sparse kernel achieves 2-3% better F1 performance than the contiguous kernel. It is worthwhile to characterize relation types that are better captured by the sparse kernel, and to determine when using the sparse kernel is worth the increased computational burden.

References

Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the Fifth ACM International Conference on Digital Libraries.

Sergey Brin. 1998. Extracting patterns and relations from the world wide web. In WebDB Workshop at 6th International Conference on Extending Database Technology, EDBT’98.

M. Collins and N. Duffy. 2002. Convolution kernels for natural language. In T. G.
Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA. MIT Press. Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine Learning, 20(3):273–297. N. Cristianini and J. Shawe-Taylor. 2000. An introduction to support vector machines. Cambridge University Press. Chad M. Cumby and Dan Roth. 2003. On kernel methods for relational learning. In Tom Fawcett and Nina Mishra, editors, Machine Learning, Proceedings of the Twentieth International Conference (ICML 2003), August 21-24, 2003, Washington, DC, USA. AAAI Press. K. Fukunaga. 1990. Introduction to Statistical Pattern Recognition. Academic Press, second edition. D. Haussler. 1999. Convolution kernels on discrete structures. Technical Report UCS-CRL-9910, University of California, Santa Cruz. Thorsten Joachims, Nello Cristianini, and John Shawe-Taylor. 2001. Composite kernels for hypertext categorisation. In Carla Brodley and Andrea Danyluk, editors, Proceedings of ICML01, 18th International Conference on Machine Learning, pages 250–257, Williams College, US. Morgan Kaufmann Publishers, San Francisco, US. Huma Lodhi, John Shawe-Taylor, Nello Cristianini, and Christopher J. C. H. Watkins. 2000. Text classification using string kernels. In NIPS, pages 563–569. A. McCallum and B. Wellner. 2003. Toward conditional models of identity uncertainty with application to proper noun coreference. In IJCAI Workshop on Information Integration on the Web. S. Miller, H. Fox, L. Ramshaw, and R. Weischedel. 2000. A novel use of statistical parsing to extract information from text. In 6th Applied Natural Language Processing Conference. H. Pasula, B. Marthi, B. Milch, S. Russell, and I. Shpitser. 2002. Identity uncertainty and citation matching. Dan Roth and Wen-tau Yih. 2002. Probabilistic reasoning for entity and relation recognition. In 19th International Conference on Computational Linguistics. Sam Scott and Stan Matwin. 1999. Feature engineering for text classification. In Proceedings of ICML-99, 16th International Conference on Machine Learning. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Walter Daelemans and Miles Osborne, editors, Proceedings of CoNLL-2003, pages 142– 147. Edmonton, Canada. Vladimir Vapnik. 1998. Statistical Learning Theory. Whiley, Chichester, GB. D. Zelenko, C. Aone, and A. Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research, pages 1083– 1106.
Classifying Semantic Relations in Bioscience Texts Barbara Rosario SIMS UC Berkeley Berkeley, CA 94720 [email protected] Marti A. Hearst SIMS UC Berkeley Berkeley, CA 94720 [email protected] Abstract A crucial step toward the goal of automatic extraction of propositional information from natural language text is the identification of semantic relations between constituents in sentences. We examine the problem of distinguishing among seven relation types that can occur between the entities “treatment” and “disease” in bioscience text, and the problem of identifying such entities. We compare five generative graphical models and a neural network, using lexical, syntactic, and semantic features, finding that the latter help achieve high classification accuracy. 1 Introduction The biosciences literature is rich, complex and continually growing. The National Library of Medicine’s MEDLINE database1 contains bibliographic citations and abstracts from more than 4,600 biomedical journals, and an estimated half a million new articles are added every year. Much of the important, late-breaking bioscience information is found only in textual form, and so methods are needed to automatically extract semantic entities and the relations between them from this text. For example, in the following sentences, hepatitis and its variants, which are DISEASES, are found in different semantic relationships with various TREATMENTs: 1http://www.nlm.nih.gov/pubs/factsheets/medline.html (1) Effect of interferon on hepatitis B (2) A two-dose combined hepatitis A and B vaccine would facilitate immunization programs (3) These results suggest that con A-induced hepatitis was ameliorated by pretreatment with TJ-135. In (1) there is an unspecified effect of the treatment interferon on hepatitis B. In (2) the vaccine prevents hepatitis A and B while in (3) hepatitis is cured by the treatment TJ-135. We refer to this problem as Relation Classification. A related task is Role Extraction (also called, in the literature, “information extraction” or “named entity recognition”), defined as: given a sentence such as “The fluoroquinolones for urinary tract infections: a review”, extract all and only the strings of text that correspond to the roles TREATMENT (fluoroquinolones) and DISEASE (urinary tract infections). To make inferences about the facts in the text we need a system that accomplishes both these tasks: the extraction of the semantic roles and the recognition of the relationship that holds between them. In this paper we compare five generative graphical models and a discriminative model (a multilayer neural network) on these tasks. Recognizing subtle differences among relations is a difficult task; nevertheless the results achieved by our models are quite promising: when the roles are not given, the neural network achieves 79.6% accuracy and the best graphical model achieves 74.9%. When the roles are given, the neural net reaches 96.9% accuracy while the best graphical model gets 91.6% accuracy. 
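As a concrete picture of the two tasks, the snippet below encodes the fluoroquinolones example as token-level role labels plus a sentence-level relation slot. The tokenization and label names are illustrative choices for this sketch, not the corpus's actual annotation format.

```python
# Illustrative encoding of the role extraction / relation classification tasks
# described above; tokenization and label names are assumptions of this sketch.
tokens = "The fluoroquinolones for urinary tract infections : a review".split()
roles = ["NULL", "TREATMENT", "NULL", "DISEASE", "DISEASE", "DISEASE",
         "NULL", "NULL", "NULL"]
assert len(tokens) == len(roles)

example = {
    "tokens": tokens,
    "roles": roles,     # role extraction: one label per token
    "relation": None,   # relation classification: one label per sentence
}                       # (the gold relation type for this sentence is not asserted here)
```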
Part of the reason for the success of the algorithms is the use of a large domain-specific lexical hierarchy for generalization across classes of nouns. In the remainder of this paper we discuss related work, describe the annotated dataset, describe the models, present and discuss the results of running the models on the relation classification and entity extraction tasks and analyze the relative importance of the features used.

Relationship | Definition | Sentences (train, test) | Example
Cure | TREAT cures DIS | 810 (648, 162) | Intravenous immune globulin for recurrent spontaneous abortion
Only DIS | TREAT not mentioned | 616 (492, 124) | Social ties and susceptibility to the common cold
Only TREAT | DIS not mentioned | 166 (132, 34) | Flucticasone propionate is safe in recommended doses
Prevent | TREAT prevents the DIS | 63 (50, 13) | Statins for prevention of stroke
Vague | Very unclear relationship | 36 (28, 8) | Phenylbutazone and leukemia
Side Effect | DIS is a result of a TREAT | 29 (24, 5) | Malignant mesodermal mixed tumor of the uterus following irradiation
NO Cure | TREAT does not cure DIS | 4 (3, 1) | Evidence for double resistance to permethrin and malathion in head lice
Total relevant |  | 1724 (1377, 347) |
Irrelevant | TREAT and DIS not present | 1771 (1416, 355) | Patients were followed up for 6 months
Total |  | 3495 (2793, 702) |
Table 1: Candidate semantic relationships between treatments and diseases. In parentheses are shown the numbers of sentences used for training and testing, respectively.

2 Related work

While there is much work on role extraction, very little work has been done for relationship recognition. Moreover, many papers that claim to be doing relationship recognition in reality address the task of role extraction: (usually two) entities are extracted and the relationship is implied by the co-occurrence of these entities or by the presence of some linguistic expression. These linguistic patterns could in principle distinguish between different relations, but instead are usually used to identify examples of one relation. In the related work for statistical models there has been, to the best of our knowledge, no attempt to distinguish between different relations that can occur between the same semantic entities.

In Agichtein and Gravano (2000) the goal is to extract pairs such as (Microsoft, Redmond), where Redmond is the location of the organization Microsoft. Their technique generates and evaluates lexical patterns that are indicative of the relation. Only the relation location of is tackled and the entities are assumed given. In Zelenko et al. (2002), the task is to extract the relationships person-affiliation and organization-location. The classification (done with Support Vector Machine and Voted Perceptron algorithms) is between positive and negative sentences, where the positive sentences contain the two entities.

In the bioscience NLP literature there are also efforts to extract entities and relations. In Ray and Craven (2001), Hidden Markov Models are applied to MEDLINE text to extract the entities PROTEINS and LOCATIONS in the relationship subcellular-location and the entities GENE and DISORDER in the relationship disorder-association. The authors acknowledge that the task of extracting relations is different from the task of extracting entities. Nevertheless, they consider positive examples to be all the sentences that simply contain the entities, rather than analyzing which relations hold between these entities. In Craven (1999), the problem tackled is relationship extraction from MEDLINE for the relation subcellular-location.
The authors treat it as a text classification problem and propose and compare two classifiers: a Naive Bayes classifier and a relational learning algorithm. This is a two-way classification, and again there is no mention of whether the co-occurrence of the entities actually represents the target relation. Pustejovsky et al. (2002) use a rule-based system to extract entities in the inhibit-relation. Their experiments use sentences that contain verbal and nominal forms of the stem inhibit. Thus the actual task performed is the extraction of entities that are connected by some form of the stem inhibit, which by requiring occurrence of this word explicitly, is not the same as finding all sentences that talk about inhibiting actions. Similarly, Rindflesch et al. (1999) identify noun phrases surrounding forms of the stem bind which signify entities that can enter into molecular binding relationships. In Srinivasan and Rindflesch (2002) MeSH term co-occurrences within MEDLINE articles are used to attempt to infer relationships between different concepts, including diseases and drugs.

In the bioscience domain the work on relation classification is primarily done through hand-built rules. Feldman et al. (2002) use hand-built rules that make use of syntactic and lexical features and semantic constraints to find relations between genes, proteins, drugs and diseases. The GENIES system (Friedman et al., 2001) uses a hand-built semantic grammar along with hand-derived syntactic and semantic constraints, and recognizes a wide range of relationships between biological molecules.

3 Data and Features

For our experiments, the text was obtained from MEDLINE 2001². An annotator with biology expertise considered the titles and abstracts separately and labeled the sentences (both roles and relations) based solely on the content of the individual sentences. Seven possible types of relationships between TREATMENT and DISEASE were identified. Table 1 shows, for each relation, its definition, one example sentence and the number of sentences found containing it.

2 We used the first 100 titles and the first 40 abstracts from each of the 59 files medline01n*.xml in Medline 2001; the labeled data is available at biotext.berkeley.edu

We used a large domain-specific lexical hierarchy (MeSH, Medical Subject Headings³) to map words into semantic categories. There are about 19,000 unique terms in MeSH and 15 main sub-hierarchies, each corresponding to a major branch of medical ontology; e.g., tree A corresponds to Anatomy, tree C to Disease, and so on. As an example, the word migraine maps to the term C10.228, that is, C (a disease), C10 (Nervous System Diseases), C10.228 (Central Nervous System Diseases). When there are multiple MeSH terms for one word, we simply choose the first one. These semantic features are shown to be very useful for our tasks (see Section 4.3). Rosario et al. (2002) demonstrate the usefulness of MeSH for the classification of the semantic relationships between nouns in noun compounds.

3 http://www.nlm.nih.gov/mesh/meshhome.html

The results reported in this paper were obtained with the following features: the word itself, its part of speech from the Brill tagger (Brill, 1995), the phrase constituent the word belongs to, obtained by flattening the output of a parser (Collins, 1996), and the word's MeSH ID (if available).
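The MeSH lookup just described (take the first MeSH term for a word, then truncate its tree number to a coarse prefix such as C10.228) is simple to sketch. The tiny vocabulary below is a hypothetical stand-in for the real MeSH files, and its tree numbers are illustrative, not verified MeSH codes.

```python
# Illustrative sketch of the MeSH-based semantic feature described above.
# MESH_TERMS is a toy stand-in for the ~19,000-term MeSH vocabulary; a real
# system would load the mapping from the MeSH distribution files.
MESH_TERMS = {
    "migraine": ["C10.228.140.546", "C10.228"],  # hypothetical tree numbers
    "interferon": ["D12.776.467"],               # hypothetical tree number
}


def mesh_feature(word, level=2):
    """Map a word to a truncated MeSH tree number (e.g. 'C10.228').

    Following the paper: if a word has several MeSH terms, simply take the
    first one; then keep only the first `level` dot-separated components.
    Returns None when the word is not in the (toy) MeSH vocabulary.
    """
    terms = MESH_TERMS.get(word.lower())
    if not terms:
        return None
    return ".".join(terms[0].split(".")[:level])


print(mesh_feature("migraine"))   # -> 'C10.228'
print(mesh_feature("treatment"))  # -> None (not in the toy vocabulary)
```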
In addition, we identified the sub-hierarchies of MeSH that tend to correspond to treatments and diseases, and convert these into a tri-valued attribute indicating one of: disease, treatment or neither. Finally, we included orthographic features such as ‘is the word a number’, ‘only part of the word is a number’, ‘first letter is capitalized’, ‘all letters are capitalized’. In Section 4.3 we analyze the impact of these features. 4 Models and Results This section describes the models and their performance on both entity extraction and relation classification. Generative models learn the prior probability of the class and the probability of the features given the class; they are the natural choice in cases with hidden variables (partially observed or missing data). Since labeled data is expensive to collect, these models may be useful when no labels are available. However, in this paper we test the generative models on fully observed data and show that, although not as accurate as the discriminative model, their performance is promising enough to encourage their use for the case of partially observed data. Discriminative models learn the probability of the class given the features. When we have fully observed data and we just need to learn the mapping from features to classes (classification), a discriminative approach may be more appropriate, as shown in Ng and Jordan (2002), but has other shortcomings as discussed below. For the evaluation of the role extraction task, we calculate the usual metrics of precision, recall and F-measure. Precision is a measure of how many of the roles extracted by the system are correct and recall is the measure of how many of the true roles were extracted by the system. The F-measure is a weighted combination of precision and recall4. Our role evaluation is very strict: every token is assessed and we do not assign partial credit for constituents for which only some of the words are correctly labeled. We report results for two cases: (i) considering only the relevant sentences and (ii) including also irrelevant sentences. For the relation classification task, we report results in terms of classification accuracy, choosing one out of seven choices for (i) and one out of eight choices for (ii). (Most papers report the results for only the relevant sentences, while some papers assign credit to their algorithms if their system extracts only one instance of a given relation from the collection. By contrast, in our experiments we expect the system to extract all instances of every relation type.) For both tasks, 75% of the data were used for training and the rest for testing. 4.1 Generative Models In Figure 1 we show two static and three dynamic models. The nodes labeled “Role” represent the entities (in this case the choices are DISEASE, TREATMENT and NULL) and the node labeled “Relation” represents the relationship present in the sentence. We assume here that there is a single relation for each sentence between the entities5. The children of the role nodes are the words and their features, thus there are as many role states as there are words in the sentence; for the static models, this is depicted by the box (or “plate”) which is the standard graphical model notation for replication. For each state, the features  are those mentioned in Section 3. The simpler static models S1 and S2 do not assume an ordering in the role sequence. 
The dynamic models were inspired by prior work on HMM-like graphical models for role extraction (Bikel et al., 1999; Freitag and McCallum, 2000; Ray and Craven, 2001). These models consist of a Markov sequence of states (usually corresponding to semantic roles) where each state generates one or multiple observations. Model D1 in Figure 1 is typical of these models, but we have augmented it with the Relation node. The task is to recover the sequence of Role states, given the observed features. These models assume that there is an ordering in the semantic roles that can be captured with the Markov assumption and that the role generates the observations (the words, for example). All our models make the additional assumption that there is a relation that generates the role sequence; thus, these models have the appealing property that they can simultaneously perform role extraction and relationship recognition, given the sequence of observations.

4 In this paper, precision and recall are given equal weight, that is, F-measure = (2 × Precision × Recall)/(Precision + Recall).
5 We found 75 sentences which contain more than one relationship, often with multiple entities or the same entities taking part in several interconnected relationships; we did not include these in the study.

[Figure 1: Models for role and relation extraction. Two static models (S1, S2) and three dynamic models (D1, D2, D3); in each, every Role node is connected to its feature nodes and to the Relation node.]

Sentences     | Static S1 | Static S2 | Dynamic D1 | Dynamic D2 | Dynamic D3
No Smoothing
Only rel.     | 0.67 | 0.68 | 0.71 | 0.52 | 0.55
Rel. + irrel. | 0.61 | 0.62 | 0.66 | 0.35 | 0.37
Absolute discounting
Only rel.     | 0.67 | 0.68 | 0.72 | 0.73 | 0.73
Rel. + irrel. | 0.60 | 0.62 | 0.67 | 0.71 | 0.69
Table 2: F-measures for the models of Figure 1 for role extraction.

In S1 and D1 the observations are independent from the relation (given the roles). In S2 and D2, the observations are dependent on both the relation and the role (or in other words, the relation generates not only the sequence of roles but also the observations). D2 encodes the fact that even when the roles are given, the observations depend on the relation. For example, sentences containing the word prevent are more likely to represent a “prevent” kind of relationship. Finally, in D3 only one observation per state is dependent on both the relation and the role, the motivation being that some observations (such as the words) depend on the relation while others might not (like for example, the parts of speech). In the experiments reported here, the observations which have edges from both the role and the relation nodes are the words. (We ran an experiment in which this observation node was the MeSH term, obtaining similar results.)

Model D1 defines the following joint probability distribution over relations, roles, words and word features, assuming the leftmost Role node is Role1, and n is the number of words in the sentence:

$$
\begin{split}
P(\mathit{Rel}, \mathit{Role}_1, \ldots, \mathit{Role}_n, f^1_1, \ldots, f^T_1, \ldots, f^1_n, \ldots, f^T_n) = {}&
P(\mathit{Rel})\, P(\mathit{Role}_1 \mid \mathit{Rel}) \prod_{k=1}^{T} P(f^k_1 \mid \mathit{Role}_1) \\
&\times \prod_{i=2}^{n} \Big[ P(\mathit{Role}_i \mid \mathit{Role}_{i-1}, \mathit{Rel}) \prod_{k=1}^{T} P(f^k_i \mid \mathit{Role}_i) \Big]
\qquad (1)
\end{split}
$$

Model D1 is similar to the model in Thompson et al. (2003) for the extraction of roles, using a different domain. Structurally, the differences are (i) Thompson et al.
(2003) has only one observation node per role and (ii) it has an additional node “on top”, with an edge to the relation node, to represent a predicator “trigger word” which is always observed; the predicator words are taken from a fixed list and one must be present in order for a sentence to be analyzed.

The joint probability distributions for D2 and D3 are similar to Equation (1), where we substitute the term ∏_{k=1}^{T} P(f^k_i | Role_i) with ∏_{k=1}^{T} P(f^k_i | Role_i, Rel) for D2, and with P(f^1_i | Role_i, Rel) ∏_{k=2}^{T} P(f^k_i | Role_i) for D3. The parameters P(f^k_i | Role_i) and P(f^k_1 | Role_1) of Equation (1) are constrained to be equal (i.e., the observation parameters are tied across positions). The parameters were estimated using maximum likelihood on the training set; we also implemented a simple absolute discounting smoothing method (Zhai and Lafferty, 2001) that improves the results for both tasks.

Table 2 shows the results (F-measures) for the problem of finding the most likely sequence of roles given the features observed. In this case, the relation is hidden and we marginalize over it6. We experimented with different values for the smoothing factor ranging from a minimum of 0.0000005 to a maximum of 10; the results shown fix the smoothing factor at its minimum value. We found that for the dynamic models, for a wide range of smoothing factors, we achieved almost identical results; nevertheless, in future work, we plan to implement cross-validation to find the optimal smoothing factor. By contrast, the static models were more sensitive to the value of the smoothing factor.

6 To perform inference for the dynamic model, we used the junction tree algorithm. We used Kevin Murphy’s BNT package, found at http://www.ai.mit.edu/~murphyk/Bayes/bnintro.html.

Using maximum likelihood with no smoothing, model D1 performs better than D2 and D3. This was expected, since the parameters for models D2 and D3 are more sparse than D1. However, when smoothing is applied, the three dynamic models achieve similar results. Although the additional edges in models D2 and D3 did not help much for the task of role extraction, they did help for relation classification, discussed next. Model D2 achieves the best F-measures: 0.73 for “only relevant” and 0.71 for “rel. + irrel.”. It is difficult to compare results with the related work since the data, the semantic roles and the evaluation are different; in Ray and Craven (2001) however, the role extraction task is quite similar to ours and the text is also from MEDLINE. They report approximately an F-measure of 32% for the extraction of the entities PROTEINS and LOCATIONS, and an F-measure of 50% for GENE and DISORDER.

The second target task is to find the most likely relation, i.e., to classify a sentence into one of the possible relations. Two types of experiments were conducted. In the first, the true roles are hidden and we classify the relations given only the observable features, marginalizing over the hidden roles. In the second, the roles are given and only the relations need to be inferred. Table 3 reports the results for both conditions, both with absolute discounting smoothing and without. Again model D1 outperforms the other dynamic models when no smoothing is applied; with smoothing and when the true roles are hidden, D2 achieves the best classification accuracies. When the roles are given D1 is the best model; D1 does well in the cases when both roles are not present.
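To make the factorization in Equation (1) concrete, the sketch below scores one (relation, role sequence, features) configuration under model D1. The tiny probability tables are illustrative placeholders rather than learned values; a real model would estimate them by (smoothed) maximum likelihood as described above, and inference would marginalize or maximize over the hidden variables rather than score a single configuration.

```python
# Minimal sketch of the D1 factorization in Equation (1): the joint probability
# of a relation, a role sequence, and the per-token feature observations.
# The tiny parameter tables below are illustrative placeholders, not learned values.
import math
from collections import defaultdict

P_REL = {"CURE": 0.4, "PREVENT": 0.1, "IRRELEVANT": 0.5}          # P(Rel)
P_ROLE1 = {("NULL", "CURE"): 0.6, ("TREATMENT", "CURE"): 0.3,
           ("DISEASE", "CURE"): 0.1}                              # P(Role_1 | Rel)
P_TRANS = defaultdict(lambda: 0.1)   # P(Role_i | Role_{i-1}, Rel), flat placeholder
P_EMIT = defaultdict(lambda: 0.01)   # P(feature value | Role), flat placeholder
P_EMIT[("hepatitis", "DISEASE")] = 0.2


def log_joint_d1(rel, roles, features):
    """log P(rel, roles, features) under D1, per Equation (1).

    `features[i]` lists the observed feature values f^1_i ... f^T_i of token i.
    Emission parameters are shared (tied) across positions, as in the paper.
    """
    lp = math.log(P_REL[rel]) + math.log(P_ROLE1[(roles[0], rel)])
    lp += sum(math.log(P_EMIT[(f, roles[0])]) for f in features[0])
    for i in range(1, len(roles)):
        lp += math.log(P_TRANS[(roles[i], roles[i - 1], rel)])
        lp += sum(math.log(P_EMIT[(f, roles[i])]) for f in features[i])
    return lp


roles = ["NULL", "DISEASE"]
feats = [["effect"], ["hepatitis"]]   # one feature type (the word) for brevity
print(log_joint_d1("CURE", roles, feats))
```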
By contrast, D2 does better than D1 when the presence of specific words strongly determines the outcome (e.g., the presence “prevention” or “prevent” helps identify the Prevent relation). The percentage improvements of D2 and D3 versus D1 are, respectively, 10% and 6.5% for relation classification and 1.4% for role extraction (in the “only relevant”, “only features” case). This suggests that there is a dependency between the observations and the relation that is captured by the additional edges in D2 and D3, but that this dependency is more helpful in relation classification than in role extraction. For relation classification the static models perform worse than for role extraction; the decreases in performance from D1 to S1 and from D2 to S2 are, respectively (in the “only relevant”, “only features” case), 7.4% and 7.3% for role extraction and 27.1% and 44% for relation classification. This suggests the importance of modeling the sequence of roles for relation classification. To provide an idea of where the errors occur, Table 4 shows the confusion matrix for model D2 for the most realistic and difficult case of “rel + irrel.”, “only features”. This indicates that the algorithm performs poorly primarily for the cases for which there is little training data, with the exception of the ONLY DISEASE case, which is often mistaken for CURE. 4.2 Neural Network To compare the results of the generative models of the previous section with a discriminative method, we use a neural network, using the Matlab package to train a feed-forward network with conjugate gradient descent. The features are the same as those used for the models in Section 4.1, but are represented with indicator variables. That is, for each feature we calculated the number of possible values [ and then represented an observation of the feature as a sequence of [ binary values in which one value is set to \ and the remaining [^]_\ values are set to ` . The input layer of the NN is the concatenation of this representation for all features. The network has one hidden layer, with a hyperbolic tangent function. The output layer uses a logistic sigmoid function. The number of units of the output layer is fixed to be the number of relations (seven or eight) for the relation classification task and the number of roles (three) for the role extraction task. The network was trained for several choices of numbers of hidden units; we chose the bestperforming networks based on training set error. We then tested these networks on held-out testing data. The results for the neural network are reported in Table 3 in the column labeled NN. These results are quite strong, achieving 79.6% accuracy in the relation classification task when the entities are hidden and 96.9% when the entities are given, outperforming the graphical models. Two possible reasons for this are: as already mentioned, the discriminative approach may be the most appropriate for fully labeled data; or the graphical models we proposed may not be the right ones, i.e., the independence assumptions they make may misrepresent underlying dependencies. It must be pointed out that the neural network Sentences Input B Static Dynamic NN S1 S2 D1 D2 D3 No Smoothing Only rel. only feat. 46.7 51.9 50.4 65.4 58.2 61.4 79.8 roles given 51.3 52.9 66.6 43.8 49.3 92.5 Rel. + irrel. only feat. 50.6 51.2 50.2 68.9 58.7 61.4 79.6 roles given 55.7 54.4 82.3 55.2 58.8 96.6 Absolute discounting Only rel. only feat. 46.7 51.9 50.4 66.0 72.6 70.3 roles given 51.9 53.6 83.0 76.6 76.6 Rel. + irrel. 
only feat. 50.6 51.1 50.2 68.9 74.9 74.6 roles given 56.1 54.8 91.6 82.0 82.3 Table 3: Accuracies of relationship classification for the models in Figure 1 and for the neural network (NN). For absolute discounting, the smoothing factor was fixed at the minimum value. B is the baseline of always choosing the most frequent relation. The best results are indicated in boldface. is much slower than the graphical models, and requires a great deal of memory; we were not able to run the neural network package on our machines for the role extraction task, when the feature vectors are very large. The graphical models can perform both tasks simultaneously; the percentage decrease in relation classification of model D2 with respect to the NN is of 8.9% for “only relevant” and 5.8% for “relevant + irrelevant”. 4.3 Features In order to analyze the relative importance of the different features, we performed both tasks using the dynamic model D1 of Figure 1, leaving out single features and sets of features (grouping all of the features related to the MeSH hierarchy, meaning both the classification of words into MeSH IDs and the domain knowledge as defined in Section 3). The results reported here were found with maximum likelihood (no smoothing) and are for the “relevant only” case; results for “relevant + irrelevant” were similar. For the role extraction task, the most important feature was the word: not using it, the GM achieved only 0.65 F-measure (a decrease of 9.7% from 0.72 F-measure using all the features). Leaving out the features related to MeSH the Fmeasure obtained was 0.69% (a 4.1% decrease) and the next most important feature was the partof-speech (0.70 F-measure not using this feature). For all the other features, the F-measure ranged between 0.71 and 0.73. For the task of relation classification, the MeSH-based features seem to be the most important. Leaving out the word again lead to the biggest decrease in the classification accuracy for a single feature but not so dramatically as in the role extraction task (62.2% accuracy, for a decrease of 4% from the original value), but leaving out all the MeSH features caused the accuracy to decrease the most (a decrease of 13.2% for 56.2% accuracy). For both tasks, the impact of the domain knowledge alone was negligible. As described in Section 3, words can be mapped to different levels of the MeSH hierarchy. Currently, we use the “second” level, so that, for example, surgery is mapped to G02.403 (when the whole MeSH ID is G02.403.810.762). This is somewhat arbitrary (and mainly chosen with the sparsity issue in mind), but in light of the importance of the MeSH features it may be worthwhile investigating the issue of finding the optimal level of description. (This can be seen as another form of smoothing.) 5 Conclusions We have addressed the problem of distinguishing between several different relations that can hold between two semantic entities, a difficult and important task in natural language understanding. We have presented five graphical models and a neural network for the tasks of semantic relation classification and role extraction from bioscience text. The methods proposed yield quite promising results. We also discussed the strengths and weaknesses of the discriminative and generative Prediction Num. Sent. Relation Truth Vague OD NC Cure Prev. OT SE Irr. 
(Train, Test) accuracy Vague 0 3 0 4 0 0 0 1 28, 8 0 Only DIS (OD) 2 69 0 27 1 1 0 24 492, 124 55.6 No Cure (NC) 0 0 0 1 0 0 0 0 3, 1 0 Cure 2 5 0 150 1 1 0 3 648, 162 92.6 Prevent 0 1 0 2 5 0 0 5 50, 13 38.5 Only TREAT (OT) 0 0 0 16 0 6 1 11 132, 34 17.6 Side effect (SE) 0 0 0 3 1 0 0 1 24, 5 20 Irrelevant 1 32 1 16 2 7 0 296 1416, 355 83.4 Table 4: Confusion matrix for the dynamic model D2 for “rel + irrel.”, “only features”. In column “Num. Sent.” the numbers of sentences used for training and testing and in the last column the classification accuracies for each relation. The total accuracy for this case is 74.9%. approaches and the use of a lexical hierarchy. Because there is no existing gold-standard for this problem, we have developed the relation definitions of Table 1; this however may not be an exhaustive list. In the future we plan to assess additional relation types. It is unclear at this time if this approach will work on other types of text; the technical nature of bioscience text may lend itself well to this type of analysis. Acknowledgements We thank Kaichi Sung for her work on the relation labeling and Chris Manning for helpful suggestions. This research was supported by a grant from the ARDA AQUAINT program, NSF DBI-0317510, and a gift from Genentech. References E. Agichtein and L. Gravano. 2000. Snowball: Extracting relations from large plain-text collections. Proceedings of DL ’00. D. Bikel, R. Schwartz, and R. Weischedel. 1999. An algorithm that learns what’s in a name. Machine Learning, 34(1-3):211–231. E. Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543–565. M. Collins. 1996. A new statistical parser based on bigram lexical dependencies. Proc. of ACL ’96. M. Craven. 1999. Learning to extract relations from Medline. AAAI-99 Workshop on Machine Learning for Information Extraction. R. Feldman, Y. Regev, M. Finkelstein-Landau, E. Hurvitz, and B. Kogan. 2002. Mining biomedical literature using information extraction. Current Drug Discovery, Oct. D. Freitag and A. McCallum. 2000. Information extraction with HMM structures learned by stochastic optimization. AAAI/IAAI, pages 584–589. C. Friedman, P. Kra, H. Yu, M. Krauthammer, and A. Rzhetzky. 2001. Genies: a natural-language processing system for the extraction of molecular pathways from journal articles. Bioinformatics, 17(1). A. Ng and M. Jordan. 2002. On discriminative vs. generative classifiers: A comparison of logistic regression and Naive Bayes. NIPS 14. J. Pustejovsky, J. Castano, and J. Zhang. 2002. Robust relational parsing over biomedical literature: Extracting inhibit relations. PSB 2002. S. Ray and M. Craven. 2001. Representing sentence structure in Hidden Markov Models for information extraction. Proceedings of IJCAI-2001. T. Rindflesch, L. Hunter, and L. Aronson. 1999. Mining molecular binding terminology from biomedical text. Proceedings of the AMIA Symposium. B. Rosario, M. Hearst, and C. Fillmore. 2002. The descent of hierarchy, and selection in relational semantics. Proceedings of ACL-02. P. Srinivasan and T. Rindflesch. 2002. Exploring text mining from Medline. Proceedings of the AMIA Symposium. C. Thompson, R. Levy, and C. Manning. 2003. A generative model for semantic role labeling. Proceedings of EMCL ’03. D. Zelenko, C. Aone, and A. Richardella. 2002. Kernel methods for relation extraction. Proceedings of EMNLP 2002. C. Zhai and J. Lafferty. 2001. 
A study of smoothing methods for language models applied to ad hoc information retrieval. In Proceedings of SIGIR ’01.
[The remaining pages contain a further ACL 2004 paper, "Collective Information Extraction with Relational Markov Networks" (Razvan Bunescu and Raymond Mooney, Department of Computer Sciences, University of Texas at Austin), whose text did not survive extraction; only fragments are recoverable. From those fragments: the paper casts information extraction as collective classification with Relational Markov Networks, a generalization of conditional random fields, so that dependencies between candidate extractions in the same document can be modeled; candidate entities are base noun phrases together with their contiguous noun-headed subsequences, described by word, POS-tag, short-type, prefix, and suffix feature templates; local clique templates link each entity's label to its observed features, while global clique templates (overlap, repeat, and acronym templates) link the labels of related candidate extractions; the resulting Markov random field is converted to a factor graph for inference. Experiments on tagging human protein names in MEDLINE abstracts are reported.]
œh#"%$'& 3 (( ‚ \#]U^A`[bdc ^A` egf ]*)sf ½ÑR«n¥« O©¯FKJ¡O FKJ[$³-¡£Ÿ.º«XþW¢R«ºå¨¦Ÿ.¡£¢A²  ĨF ¡ªžx¢ F» * Ò ¥ ÄR e«XÌý¤cž¦¥ ®X«9¨F¶¶¥žx¨¦®Ñ¡£Ÿ¬«X­R®n¹£ÄºR«º¿.Ÿ¡£¢®X«™ ÑR« ¢Mā§ Ò «n¥Až¦¤a¶žxŸŸ¡ Ò ¹ª«™¹£¨ Ò «¹®Xžx¢<þWÁxÄR¥¨F ¡ªžx¢ŸA¡£Ÿ¬«X­MÌ ¶žx¢R«¢j ¡£¨¦¹’¡£¢Ó ÑR«å¢jā§ Ò «n¥ž¦¤®n¨¦¢º¡£º¨F e«A«¢œ ¡£ ¡ª«Ÿn» õ.ÑR«Ÿā§Ìý¶¥žMºā®X À¨¦¹ªÁ¦ž¦¥¡£ с§ ¯NMaŸ®с¡¸Ÿ®с¨¦¢Á«n ‹¨¦¹ò»ª¿ Y³a¡£Ÿl¨P§©«ŸŸ¨FÁ¦«XÌý¶¨¦ŸŸ¡£¢RÁ9¨¦¹ªÁ¦ž¦¥¡ª с§  с¨F Ð®n¨¦¢ Ò «-āŸ«ºa¤^ž¦¥ ®Xžx§©¶ÄR ¡£¢RÁ ÑR«’§©¨F¥Áx¡£¢¨¦¹Mº¡£Ÿ e¥¡ Ò ÄR ¡£žx¢ ži¾¦«n¥µ ÑR«å¹£¨ Ò «¹-¾F¨F¥¡£¨ Ò ¹ª«Ÿ[¡¸¢ ¤±¨¦®X ež¦¥©Á¦¥¨F¶WсŸl½¡ª Ñ<Ì žxÄR ®X´M®n¹ª«Ÿ¿&¨¦¢º·½¡ª Ñ0¨¬§©¡£¢ž¦¥®с¨¦¢Á¦«¬¯±¥«n¶W¹£¨¦®n¡£¢RÁ  ÑR«ùŸā§ ž¦¶«n¥¨F ež¦¥.āŸe«º¬¤^ž¦¥'§©¨F¥Áx¡£¢¨¦¹¸¡ªó¨F ¡ªžx¢Â½¡ª Ñ ¨·§µ¨Y­fž¦¶&«n¥ ¨F ež¦¥n³¡ª l®n¨¦¢ ¨¦¹¸Ÿež Ò «µÄŸe«º$¤cž¦¥ºR«n¥¡£¾œÌ ¡£¢RÁРс«%§ÐžxŸe .¶¥ž Ò ¨ Ò ¹ª«%¹£¨ Ò «¹¨¦ŸŸ¡£Áx¢§Ð«¢j n»C°ñ¢AžxÄR¥ ®n¨¦Ÿe«¦¿¡£¢µž¦¥ºR«n¥ ežÁ¦«n ’¨¦¢¬¨¦®X´<®n¹£¡£®Á¦¥¨F¶Ñ ¿M½-«½žxā¹£º с¨i¾¦«0 ež$āŸe«å¹ªž<®n¨¦¹' e«§Ð¶W¹£¨F e«ŸÐžx¢¹ª´¦»ä¼ži½«n¾¦«n¥¿'¡ª  с¨¦Ÿ Ò «n«¢ž Ò Ÿe«n¥¾¦«ºÐ с¨F ‹ ÑR«.¨¦¹ªÁ¦ž¦¥ ¡ª с§ ž¦¤c e«¢µ®Xžx¢<Ì ¾¦«n¥Á¦«Ÿ5¡£¢ùÁ¦«¢«n¥¨¦¹x¤±¨¦®X ež¦¥5Á¦¥¨F¶сŸn¿F¨¦¢ºa½ÑR«¢%¡ª •®Xžx¢<Ì ¾¦«n¥Á¦«Ÿn¿x¡£ •Áx¡ª¾¦«Ÿ ¨Á¦žjž<º¨F¶¶¥ži­<¡£§µ¨F ¡ªžx¢% ežp ÑR«®Xž¦¥eÌ ¥«®X  §©¨F¥Áx¡£¢¨¦¹¸Ÿn»•õ.с«C¨¦¹ªÁ¦ž¦¥¡ª Ñ§ ½ž¦¥ïMŸ Ò ´%¨¦¹£ e«n¥¡£¢RÁ  ÑR« Ò «¹£¡ª«n¤¨F ‹«¨¦® ѵ¹£¨ Ò «¹M¢Rž<ºR« Ò ´[¥«n¶«¨F e«º¹ª´a¶W¨¦ŸŸñÌ ¡£¢RÁp§Ð«ŸŸ¨FÁ¦«Ÿ Ò «n Ù½«n«¢ ÑR«'¢RžMº«¨¦¢ºÐ¨¦¹¸¹œ¶ž¦ e«¢j ¡£¨¦¹ ¢Rž<ºR«ŸÀ®Xžx¢¢R«®X e«ºl ež¡ª ¯NM%Ÿ®Ñ¡£Ÿ®Ñ¨¦¢RÁ.«n 5¨¦¹ò»ª¿Y³» * Ÿù§©¨¦¢œ´™ž¦¤ ÑR«Ð¹£¨ Ò «¹ ¢žMºR«Ÿù¨F¥«¡¸¢º¡ª¥«®X ¹£´0®Xžx¢<Ì ¢R«®X e«ºP ÑR¥žxÄRÁxÑ·¶ž¦ e«¢œ ¡£¨¦¹À¢RžMº«Ÿ¡£¢Ÿ ¨¦¢œ ¡£¨F e«º Ò ´ Áx¹ªž Ò ¨¦¹À e«§Ð¶¹£¨F e«Ÿ¿ ÑR«¡ª¥ Ò «¹£¡ª«n¤À¾F¨¦¹£ÄR«Ÿ½¡£¹£¹¶¥ž¦¶¨YÌ Áx¨F e«A¡£¢f ÑR«µÁ¦¥¨F¶ÑÓ¨¦¢º$§lÄR ⍦¹£¹£´™¡£¢RÃWÄR«¢®X«Ð«¨¦® Ñ ž¦ ÑR«n¥¿<¹ª«¨¦º¡¸¢RÁa¡£¢Ð с««¢ºµ ežl¨[®Xžx¹£¹ª«®X ¡ª¾¦«¹£¨ Ò «¹£¡£¢RÁ ºR«®n¡£Ÿ¡ªžx¢ » õ.с«l ¡¸§Ð«Ð®Xžx§Ð¶¹ª«X­R¡ª Ù´Pž¦¤-®Xžx§Ð¶WÄR ¡£¢RÁå§Ð«ŸŸ¨FÁ¦«Ÿ ¤^¥žx§ ¨å¶ž¦ e«¢j ¡£¨¦¹ ¢Rž<ºR«Ð ež·¨0¹¸¨ Ò «¹‹¢žMºR«Ð¡£Ÿ%«X­M¶žFÌ ¢R«¢j ¡£¨¦¹<¡£¢l ÑR«'¢jħ Ò «n¥5ž¦¤W¹£¨ Ò «¹<¢Rž<ºR«Ÿ5¨F e ¨¦®ÑR«ºµ ež  ÑR«©¶ž¦ e«¢j ¡£¨¦¹ò»åÏ<¡£¢®X«Ð с¡¸Ÿ0öñ¤^¨¦¢RÌÙ¡£¢÷0®n¨¦¢ Ò «Â¹£¨F¥Á¦« ¤^ž¦¥ âÿ¶ž¦ e«¢œ ¡£¨¦¹ ¢žMºR«Ÿ¿j с¡¸ŸŸ e«n¶å¥«  Ä¡ª¥«ºÂž¦¶ ¡ËÌ §©¡ªó¨F ¡£žx¢ »[㞦¥ ⢁¨F e«¹ª´¦¿•ºÄR« ež0 ÑR«ÐŸe¶«®n¡£¨¦¹ ¤^ž¦¥§ ž¦¤5 ÑR«+ â ¶ž¦ e«¢œ ¡¸¨¦¹ò¿¨¦¢ºå с«[¢Rž¦¥§µ¨¦¹£¡ªó¨F ¡ªžx¢ Ò «XÌ ¤^ž¦¥«p«¨¦®Ñ0§Ð«ŸŸ¨FÁ¦«XÌý¶¨¦ŸŸ¡¸¢RÁПe e«n¶¿<½-«p½«n¥«ù¨ Ò ¹ª« ež ºR«n¾¦«¹ªž¦¶Î¨¬¹£¡£¢R«¨F¥Ìý ¡£§Ð«a¨¦¹ªÁ¦ž¦¥ ¡ª с§ ¤cž¦¥p с¡£ŸŸe¶«®n¡£¨¦¹ ®n¨¦Ÿe«¦»u8«n ¨¦¡¸¹£Ÿ.¨F¥«%žx§µ¡ª e e«º0ºÄR«ù ežÂ¹£¡£§©¡ª e«ºAŸe¶W¨¦®X«¦» B8&›œ=-üý= AŒ š¦B= š¦üý8"!ÙDåüý= 9 8EMš¦Œ ›  ›œ8L% 5 D ぞx¹£¹£ži½¡£¢Áa¨%§©¨Y­R¡£§lā§?¹£¡ªï¦«¹¸¡£ÑRžMžMºl«Ÿe ¡£§µ¨F ¡ªžx¢ ¿x½« Ÿс¨¦¹¸¹&ÄŸe«ù ÑR«ù¹ªž¦ÁFÌÙ¹£¡£¢«¨F¥'¥«n¶¥«Ÿe«¢j ¨F ¡ªžx¢åž¦¤ ¶ž¦ e«¢<Ì  ¡£¨¦¹£Ÿ } p ]U~` e n" ~`[b n f h   3]_~` e n" ~` b n f ]3f ½ÑR«n¥« >  ¡£Ÿ'¨Ð¾¦«®X ež¦¥ž¦¤ Ò ¡£¢¨F¥´©¤^«¨F ÄR¥«Ÿ¿žx¢R«%¤^ž¦¥ «¨¦®љ®Xžx¢<þÁx⥍F ¡ªžx¢Až¦¤5¾Y¨¦¹¸ÄR«Ÿ'¤^ž¦¥l  ¨¦¢º¡  » •«n  Ò «© ÑR«Â®Xžx¢®n¨F e«¢¨F e«ºä¾¦«®X ež¦¥Ðž¦¤.¨¦¹£¹‹¶žFÌ  e«¢j ¡£¨¦¹R¶¨F¥¨¦§©«n e«n¥ŸF» p¢R«.¨F¶¶¥žx¨¦®ÑРežpþ&¢º¡£¢RÁ  ÑR«.§©¨Y­R¡£§lā§ÌÙ¹£¡£ï¦«¹£¡£ÑRžMžMº[Ÿežx¹£ÄR ¡£žx¢[¤^ž¦¥ ¡¸Ÿ5 ež%āŸe« ¨PÁ¦¥¨¦º¡ª«¢j ñÌ Ò ¨¦Ÿe«ºä§Ð«n ÑRž<º ¿‹½Ñ¡£®Ñ ¥«  ā¡ª¥«Ÿ[®Xžx§Ì ¶ÄR ¡¸¢RÁ$ ÑR«™Á¦¥¨¦º¡£«¢œ ¬ž¦¤% ÑR«Î¹ªž¦ÁFÌÙ¹£¡£ï¦«¹£¡£ÑRžMžMºî½¡ª Ñ ¥«Ÿe¶«®X [ ež·¶ž¦ e«¢œ ¡¸¨¦¹C¶¨F¥ ¨¦§Ð«n e«n¥ŸF»å°ç [®n¨¦¢ Ò « ŸÑRžJ½¢Î с¨F a с¡£ŸpÁ¦¥¨¦º¡£«¢œ %¡£Ÿ%«  ⍦¹ ½¡ª Ñ9 ÑR«©º¡£¤¸Ì ¤^«n¥«¢®X« Ò «n Ù½«n«¢Î ÑR«Ð«§Ð¶W¡ª¥¡£®n¨¦¹ ®Xžxā¢j Ÿùž¦¤Ш¦¢º  ÑR«¡ª¥Ð«X­<¶«®X ¨F ¡ªžx¢îā¢ºR«n¥ ÑR«0®nÄR¥¥«¢j ©Ÿe«n Ðž¦¤¶¨YÌ ¥¨¦§Ð«n e«n¥ Ÿ·»îõс¡£Ÿl«X­M¶«®X ¨F 
¡ªžx¢æ¡£Ÿ«X­<¶&«¢Ÿ¡ª¾¦«A ež ®Xžx§Ð¶ā e«¦¿xŸ¡£¢®X«’¡ª À¥«  ā¡£¥«Ÿ•Ÿā§©§©¡£¢Á.ži¾¦«n¥ ¨¦¹£¹j¶žxŸñÌ Ÿ¡ Ò ¹ª«Î®Xžx¢<þÁxÄR¥¨F ¡£žx¢Ÿåž¦¤Ð®n¨¦¢º¡£º¨F e«f«¢œ ¡£ Ù´ ¹£¨ Ò «¹£Ÿ ¤^¥žx§ ¨æÁx¡£¾¦«¢,ºRžM®nħЫ¢œ n»3õÀžÿ®n¡ª¥®nħa¾¦«¢j 9 с¡£Ÿ ®Xžx§Ð¶¹£«X­<¡ª ç´¦¿J½«CāŸ«ážx¹£¹£¡¸¢Ÿ ¾¦ž¦ e«ºa¶«n¥®X«n¶ e¥žx¢%¨F¶RÌ ¶¥žx¨¦® Ñf¯Ùážx¹£¹¸¡£¢Ÿn¿ j³¿½с¡¸®ÑA¨F¶¶¥ži­<¡¸§©¨F e«Ÿ’ ÑR« ¤±Ä¹£¹&«X­M¶«®X ¨F ¡ªžx¢Pž¦¤ù½¡£ Ñå ÑR« a®Xžx⢜ Ÿ¤cž¦¥ ÑR« §ÐžxŸe ù¹£¡£ï¦«¹ª´å¹£¨ Ò «¹£¡£¢RÁµÄ¢ºR«n¥ ÑR«l®n⥥«¢j ¶W¨F¥¨¦§Ð«XÌ  e«n¥Ÿn¿!·»’°ñ¢·¨¦¹£¹ žx⥫X­<¶&«n¥ ¡£§Ð«¢j Ÿn¿ ÑR«[¶&«n¥ ®X«n¶ e¥žx¢ ½’¨¦Ÿ¥ ā¢0¤^ž¦¥ 6µ«n¶ž<®сŸn¿&½¡ª љ¨¬¹ª«¨F¥¢¡£¢Á©¥¨F e«Ÿe«n  ¨F <»MF» " ?# %.BR›jüýŠæB=•š¦8"!ù79BDY@*!òšxD éΫôѨ¾¦«  e«Ÿe e«º  ÑR« ❷ð ¨F¶¶¥žx¨¦®Ñ žx¢  Ù½ž º¨F ¨¦Ÿe«n Ÿf с¨F fс¨¾¦« Ò «n«¢Íс¨¦¢ºRÌý ¨FÁ¦Á¦«º,¤cž¦¥ÎÑMÄ<Ì §©¨¦¢µ¶¥ž¦ e«¡£¢©¢¨¦§Ð«Ÿn» õ.ÑR«’þW¥Ÿe Cº¨F ¨¦Ÿ«n ¡£Ÿ$C¨F¶&«X­  ½с¡¸®Ñ ®Xžx¢Ÿ¡£Ÿe ŸPž¦¤.P«º¹£¡£¢R«Ó¨ Ò Ÿe e¥¨¦®X Ÿ» p¤  ÑR«Ÿe«¦¿B ) lѨ¾¦« Ò «n«¢¬¥¨¦¢ºRžx§©¹£´©Ÿe«¹ª«®X e«º Ò ´©¶žxŸñÌ ¡£¢RÁp¨  ā«n¥´[®Xžx¢j ¨¦¡£¢¡£¢Á ÑR«ù¯òP«ŸÑ³ e«n¥§µŸ &×ñÛ¦ÇÙÆnÖcÉ ÅnÖ^ÉWÕFÖ^É  ¿ Ö^ÉÇÙÆX×eÜxØiDZֱÛFÉ ¿¨¦¢º Þ©ÛFÝ£ÆØ Ô ÝªÜF×  ežA·«º¹£¡£¢R«¦¿ ½с¡¸¹ª«å ÑR«·¥«Ÿe µž¦¤ 6$ с¨¾¦« Ò «n«¢î«X­< e¥¨¦®X e«º¥¨¦¢<Ì ºRžx§©¹£´Î¤c¥žx§ ÑR« C ²Cð° * ®Xž¦¥¶⟬¯Ùážx¹£¹¸¡ª«n¥%«n Ð¨¦¹ò»ª¿ j³»’°ç ®Xžx¢œ ¨¦¡£¢Ÿ¨© ež¦ ¨¦¹Àž¦¤ $&$©¶¥ž¦ e«¡£¢å¥«n¤^«n¥eÌ «¢®X«Ÿn»-õ.ÑR«pŸe«®Xžx¢ºåº¨F ¨¦Ÿe«n .¡¸Ÿ * ¡£§Ð«º  ½с¡¸®Ѭс¨¦Ÿ Ò «n«¢¶¥«n¾<¡ªžxāŸ¹£´%āŸe«º¤^ž¦¥‹ e¥¨¦¡£¢¡£¢Á ÑR«’¶¥ž¦ e«¡£¢Ð¡£¢<Ì  e«n¥¨¦®X ¡ªžx¢A«X­< e¥¨¦®X ¡ªžx¢·Ÿe´MŸ e«§©Ÿ’¡£¢9¯'Ä¢R«Ÿ®nĬ«n .¨¦¹ò»ª¿ )<³»å°„ [®Xžx¢Ÿ¡¸Ÿe Ÿùž¦¤ 60P«º¹£¡¸¢R«©¨ Ò Ÿe e¥¨¦®X Ÿ¿ ž¦¤ ½с¡¸®Ñ Ú¨F¥«Îï<¢Rži½¢ ežîº«Ÿ®X¥¡ Ò «P¡¸¢œ e«n¥¨¦®X ¡£žx¢Ÿ Ò «n Ù½«n«¢·ÑMā§©¨¦¢å¶¥ž¦ e«¡£¢Ÿn¿<½Ñ¡£¹ª«p ÑR«ùž¦ ÑR«n¥  6кRž ¢Rž¦ •¥«n¤c«n¥  ežp¨¦¢œ´%¡£¢j e«n¥¨¦®X ¡ªžx¢» õ.с«n¥«-¨F¥«#) )p¶¥žFÌ  e«¡£¢æ¥«n¤^«n¥«¢®X«ŸA¡£¢î Ñ¡£ŸÂº¨F ¨¦Ÿe«n n»ôéΫήXžx§Ð¶W¨F¥«º  ÑR«.¶«n¥¤^ž¦¥§©¨¦¢®X«.ž¦¤ ÑR¥«n«Ÿe´<Ÿe e«§©Ÿ ;-C % '&)( ¡£Ÿ  ÑR«0❷ð ¨F¶¶¥žx¨¦®ÑæÄŸ¡£¢Áf¹ªžM®n¨¦¹. e«§Ð¶¹£¨F e«Ÿ¬¨¦¢º  ÑR«ÎžJ¾¦«n¥¹£¨F¶  e«§Ð¶W¹£¨F e«¦¿ RP;-C*% +&)( ¡£ŸA ÑR«9¤±Ä¹£¹ âPð ¨F¶¶¥žx¨¦® Ñ ¿ÂÄŸ¡£¢RÁ Ò ž¦ Ñ?¹ªž<®n¨¦¹©¨¦¢ºøÁx¹ªž Ò ¨¦¹  e«§Ð¶¹¸¨F e«Ÿn¿¨¦¢º-,+.ù¿<½с¡£® ÑAāŸe«Ÿ'¨µáâã ¤^ž¦¥.¹£¨YÌ Ò «¹£¡£¢RÁ ež¦ï¦«¢fŸe«  ÄR«¢®X«Ÿn»léΫµÄŸ«ºP с«Âáâã¡£§Ì ¶¹ª«§©«¢œ ¨F ¡ªžx¢f¤c¥žx§3¯òP®iá'¨¦¹£¹£Ä§å¿ j³½¡£ ÑΠÑR« Ÿe«n 'ž¦¤5 ¨FÁxŸ¨¦¢ºA¤c«¨F ÄR¥«Ÿ.āŸe«º Ò ´Â с«%P¨Y­R¡£§lā§Ì ²-¢j e¥ž¦¶M´ì ¨FÁ¦Á¦«n¥$º«Ÿ®X¥¡ Ò «º ¡£¢¯'Ä¢R«Ÿ®nÄ «n f¨¦¹ò»ª¿ )<³» * ¹£¹aP«º¹£¡¸¢R« ¨ Ò Ÿ e¥¨¦®X Ÿ9½-«n¥«  ež¦ï¦«¢¡£ón«º ¨¦¢ºÐ с«¢ ! 
%Ï[ ¨FÁ¦Á¦«º¬ÄŸ¡¸¢RÁ'-¥ ¡£¹£¹ Ÿ5 ¨FÁ¦Á¦«n¥p¯'¥¡£¹£¹ò¿  6j³»²-¨¦®Ñ·«X­M e¥ ¨¦®X e«º0¶¥ž¦ e«¡¸¢å¢¨¦§Ð«a¡£¢A с«ù e«Ÿe  º¨F ¨¬½’¨¦Ÿ®Xžx§©¶¨F¥«º™ ežA ÑR«lÑMā§©¨¦¢RÌý ¨FÁ¦Á¦«ºÎº¨F ¨<¿ ½¡ª ÑA с«%¶žxŸ¡ª ¡ªžx¢Ÿ’ ¨F嶺¢™¡£¢j ež©¨¦®n®Xžxā¢j n»õ'½ž©«X­MÌ  e¥¨¦®X ¡ªžx¢Ÿ'¨F¥«%®Xžx¢Ÿ¡£º«n¥«ºA¨l§µ¨F ®Ñ0¡ª¤À ÑR«n´Â®Xžx¢Ÿ¡£Ÿe  ž¦¤  ÑR«%Ÿ¨¦§Ð«%® с¨F¥¨¦®X e«n¥Ÿ«  ÄR«¢®X«%¡£¢¬ ÑR«%Ÿ¨¦§Ð«ù¶žFÌ Ÿ¡ª ¡£žx¢l¡¸¢ ÑR«' e«X­M n»‹â«Ÿā¹ª Ÿ‹¨F¥«.ŸÑRži½¢©¡£¢õ•¨ Ò ¹ª«Ÿ $ ¨¦¢º )µ½с¡£® ÑAÁx¡ª¾¦«a¨¾¦«n¥ ¨FÁ¦«[¶¥«®n¡£Ÿ¡ªžx¢ ¿<¥«®n¨¦¹¸¹ò¿¨¦¢º ã Ì٧Ы¨¦ŸÄR¥«aāŸ¡£¢Á  YÌý¤^žx¹£º0®X¥žxŸŸ.¾F¨¦¹£¡£º¨F ¡£žx¢ » P«n ÑRž<º !-¥«®n¡£Ÿ¡ªžx¢ â«®n¨¦¹£¹ ã Ì٧Ы¨¦ŸÄR¥« Rõ’Ìý❷ð  <»  6$M»    F»  ) C <õ'ÌýâPð M» &  6M»   j»   á’âã M» )a6 6 M»  )  )R»   õ ¨ Ò ¹ª« $M C² ­< e¥¨¦®X ¡ªžx¢ !5«n¥¤^ž¦¥§©¨¦¢®X«%žx¢/$C¨F¶«X­&» P«n ÑRž<º !-¥«®n¡£Ÿ¡ªžx¢ â«®n¨¦¹£¹ ã Ì٧Ы¨¦ŸÄR¥« Rõ’Ìý❷ð  F» $$ M»   M»  C <õ'ÌýâPð M»  <»M)  F» $ á’âã  6M» $ 36M»  <» $ õ ¨ Ò ¹ª«)J ²C­M e¥ ¨¦®X ¡ªžx¢ !5«n¥¤^ž¦¥§©¨¦¢®X«ažx¢ * ¡¸§Ð«º » õ.с«Ÿe«  ¨ Ò ¹ª«ŸøŸÑRži½  с¨F n¿ì¡£¢è e«n¥ §©Ÿ?ž¦¤ã Ì §Ð«¨¦Ÿā¥«¦¿‹ ÑR«ÂÄŸe«Âž¦¤'Áx¹£ž Ò ¨¦¹C e«§©¶¹£¨F e«Ÿl¤cž¦¥§Ðž<º<Ì 0214365879:9:9; <2=?>@<@; <2ACBCDEGFIH?ACBKJMLONCPQA@>SRO<BCJTLONRODGUHVRB W 1436587XYROJZ; >@<@; EMROAS[TU<@; A]\TE^BKFIN_N`GASaGBKbT=?Ncd\GUCRUB «¹£¡£¢Á9¡¸¢<ÃWÄR«¢®X«Ÿ Ò «n ç½-«n«¢ë¶&žxŸŸ¡ Ò ¹ª«Â«¢œ ¡£ ¡ª«ŸÐ¤^¥žx§  ÑR«ÎŸ¨¦§Ð«9ºRž<®nā§Ð«¢j åŸ¡ªÁx¢¡Ëþ&®n¨¦¢œ ¹ª´Ú¡£§©¶¥ži¾¦«Ÿ¬«X­MÌ  e¥¨¦®X ¡ªžx¢ô¶«n¥¤^ž¦¥§©¨¦¢®X« ži¾¦«n¥9 ÑR« ¹£žM®n¨¦¹l¨F¶¶¥žx¨¦® Ñ ¯ò¨©žx¢«XÌý ¨¦¡£¹ª«º0¶W¨¦¡ª¥«ºå ñÌý e«Ÿe ¤^ž¦¥Ÿe ¨F ¡¸Ÿe ¡£®n¨¦¹ÀŸ¡ªÁx¢¡ËþÌ ®n¨¦¢®X«¥«Ÿā¹£ Ÿ ¡£¢µ¨¾F¨¦¹£ÄR«¹£«ŸŸ  с¨¦¢  JMžx¢ Ò ž¦ Ñ º¨F ¨¦Ÿe«n Ÿn³»õÑR«n¥«0¡£Ÿ©¨¦¹¸Ÿež$¨fŸ§µ¨¦¹£¹.¡£§Ð¶¥ži¾¦«§Ð«¢j  ži¾¦«n¥[á’â㠟n¿½¡ª ÑP ÑR«l¥«Ÿā¹ª Ÿ Ò «¡£¢RÁŸ ¨F ¡£Ÿe ¡£®n¨¦¹£¹£´ Ÿ¡ªÁx¢¡ËþW®n¨¦¢j åžx¢¹ª´¤cž¦¥P ÑR«$C¨F¶«X­ìº¨F ¨¦Ÿe«n n¿a®Xž¦¥¥«XÌ Ÿe¶žx¢º¡¸¢RÁµ ež0¨f¾F¨¦¹£ÄR«ž¦¤# JMM»ÐéΫ©Ñj´M¶&ž¦ с«Ÿ¡ªón«  с¨F .¤±ÄR¥ с«n¥¡£§Ð¶¥ži¾¦«§Ð«¢j Ÿ. ež© с« Rõ’ÌýâPð?¨F¶RÌ ¶¥žx¨¦® Ѭ½-žx⹏º©¶āŸÑРÑR« C <õ'ÌýâPðì¶«n¥¤^ž¦¥§©¨¦¢®X« «n¾¦«¢с¡ªÁxÑR«n¥i»Àõ.ÑR«- ¨FÁ¦Áx¡£¢RÁŸ®ÑR«§Ð«’āŸe«º Ò ´[á’â㋟n¿ ¡£¢P½с¡£® Ñ0«¨¦® Ñ9 ež¦ï¦«¢Î¡£Ÿ¨¦ŸŸ¡ªÁx¢R«º™¨¬ ¨FÁR¿¡£Ÿ«ŸŸe«¢<Ì  ¡£¨¦¹£¹£´Aº¡ËÊ&«n¥«¢œ .¤^¥žx§  ÑR«lâPð?¨F¶¶¥žx¨¦® Ñ ¿W½ÑR«n¥« ®n¨¦¢º¡¸º¨F e«[«X­< e¥¨¦®X ¡ªžx¢Ÿ¨F¥«[«¡ª с«n¥¥«)Ge«®X e«º9ž¦¥ù¨¦®Ì ®X«n¶ e«º »‹°ñ¢Â ÑR« ¨FÁ¦Áx¡£¢Ál¨F¶¶¥žx¨¦®ÑAāŸe«º Ò ´Âá’â㋟n¿ «X­< e¥¨¦®X e«ºë«¢j ¡ª ¡ª«Ÿµ¨F¥«·¨¾F¨¦¡£¹£¨ Ò ¹ª«åžx¢¹ª´Ó¨F¤c e«n¥Â ¨FÁFÌ Áx¡£¢RÁA¡£Ÿp®Xžx§Ð¶¹£«n e«¦¿  ÑR«n¥« Ò ´P§©¨Fï<¡£¢RÁA¡ª pº¡ ®nā¹£ . ež ¨¦®n®Xžxā¢j ‹¤cž¦¥‹¡£¢RÃWÄR«¢®X«Ÿ Ò «n ç½-«n«¢ с«§ ºÄR¥¡£¢RÁ ¨FÁFÌ Áx¡£¢RÁR» ㋡ªÁxÄR¥«Ÿ 6 ¨¦¢º øŸÑRži½  ÑR«ô¶¥«®n¡£Ÿ¡£žx¢<Ìý¥«®n¨¦¹£¹ ®nÄR¥¾¦«Ÿ·¤cž¦¥P ÑR«f ç½-žº¨F ¨¦Ÿe«n Ÿn» õ.ÑR«Ÿ«Î½-«n¥« ž Ò Ì  ¨¦¡£¢R«º Ò ´ÿ¾F¨F¥´<¡£¢RÁî¨ë ÑR¥«ŸÑRžx¹¸ºÿžx¢ì с«f«X­M e¥ ¨¦®Ì  ¡ªžx¢©®Xžx¢RþWºR«¢®X«¦¿M½с¡£® ÑС£Ÿ‹ ÑR«¶&žxŸ e«n¥¡ªž¦¥ ¶¥ž Ò ¨ Ò ¡£¹ËÌ ¡ª ç´0 с¨F p¡ª Ÿ¹¸¨ Ò «¹•¡£Ÿ l¨¦Ÿp®Xžx§Ð¶ÄR e«º Ò ´0 ÑR«Ÿā§Ì ¶¥ž<ºÄ®X '¨¦¹ªÁ¦ž¦¥ ¡ª с§å» 50 60 70 80 90 100 0 20 40 60 80 100 Precision (%) Recall (%) GLT-RMN LT-RMN ㋡ªÁxÄR¥«*6M !C¥«®n¡£Ÿ¡ªžx¢Aâ«®n¨¦¹£¹ á’ÄR¥¾¦«Ÿ’žx¢ $C¨F¶&«X­» 50 60 70 80 90 100 0 20 40 60 80 100 Precision (%) Recall (%) GLT-RMN LT-RMN ã5¡£ÁxÄR¥«M !-¥«®n¡£Ÿ¡ªžx¢Aâ«®n¨¦¹£¹•á’⥾¦«Ÿ'žx¢ * ¡£§Ð«º » éΫ.¨¦¹£Ÿž«X­<¶¹ªž¦¥«ºāŸ¡£¢RÁ¨Áx¹ªž Ò ¨¦¹œ e«§Ð¶W¹£¨F e«’ с¨F  ®n¨F¶ ÄR¥«º  с«æ e«¢º«¢®X´?¤^ž¦¥ä®n¨¦¢º¡£º¨F e««¢œ ¡£ ¡ª«Ÿ ½ÑRžxŸ«¶WÑR¥¨¦Ÿe«Ÿ‹¨F¥«'®XžMž¦¥º¡£¢¨F e«º ež%с¨¾¦«. 
с«.Ÿ¨¦§Ð« ¹£¨ Ò «¹ò»$õс¡£Ÿ[ e«®с¢¡  ÄR«åº¡¸º ¢Rž¦ ©¡¸§Ð¶¥žJ¾¦«¬¶«n¥¤^ž¦¥eÌ §©¨¦¢®X«%Ÿ¡£¢®X«pºR«n e«®X ¡£¢RÁнс«n ÑR«n¥’ Ù½žÐð !Ÿ'¨F¥«%®XžFÌ ž¦¥º¡¸¢¨F e«ºå¡£Ÿ’º¡ ®nĹª n¿M¨¦¢ºA ÑR«%§Ð«n ÑRž<ºŸ’½«ù e¥¡ª«º ¡£¢j e¥ž<ºÄ®X«ºA ežMžµ§©¨¦¢j´Â¤±¨¦¹£Ÿe«%®XžMž¦¥º¡£¢¨F ¡ªžx¢Ÿn» °ñ¢$ž¦¥ ºR«n¥ ežP«n¾F¨¦¹£Ä¨F e«A ÑR«A¨F¶¶¹£¡¸®n¨ Ò ¡£¹£¡ª ç´Pž¦¤žxÄR¥ §Ð«n ÑRž<º? ežôž¦ ÑR«n¥$ ç´j¶«Ÿ$ž¦¤0¢¨F¥¥¨F ¡ª¾¦«¦¿µ½-«¨¦¹£Ÿež  e¥¡ª«º¡ª äžx¢  ÑR«ôáž¦ð   $,²-¢RÁx¹£¡¸ŸÑ ®Xž¦¥¶⟠¯òõ0Gežx¢RÁ Ma¡£§ Ï<¨¦¢RÁ[¨¦¢º 8p«·«⹏ºR«n¥¿ $j³‹½с¡¸®Ñ ®Xžx¢j ¨¦¡£¢Ÿ[¤cžxÄR¥[ Ù´M¶«Ÿ%ž¦¤.¢¨¦§Ð«º$«¢œ ¡£ ¡ª«Ÿ9 l¶&«n¥ Ÿežx¢Ÿ ¯!C²-â%³¿¹ªž<®n¨F ¡ªžx¢Ÿa¯E %᳿Rž¦¥Áx¨¦¢¡ªó¨F ¡ªžx¢Ÿa¯ â C ³¿ ¨¦¢ºµž¦ ÑR«n¥¯òP°ρá³»<ᒞx¢Ÿe«  ÄR«¢œ ¹£´l с«¢jħ Ò «n¥Cž¦¤ ¹£¨ Ò «¹ ¾Y¨¦¹¸ÄR«Ÿ%¡£¢®X¥«¨¦Ÿe«º9¤c¥žx§  Ù½ž0 ežåþ¾¦«0¯±½¡ª Ñf¨ ¹£¨ Ò «¹ËÌý¾F¨¦¹£ÄR«'ž¦¤Ba ežl¡£¢º¡£®n¨F e«¢Ržx¢«.ž¦¤  ÑR«¤^žxÄR¥®n¨F ñÌ «nÁ¦ž¦¥¡ª«Ÿn³»0ぞ¦¥ ÑR«µÁx¹£ž Ò ¨¦¹C¨F¶¶¥žx¨¦®Ñ ½-«AāŸe«º$ ÑR« Ÿ¨¦§Ð«AžJ¾¦«n¥¹£¨F¶Ú e«§Ð¶¹¸¨F e«0¨¦¢ºä¨™§©žMº¡ªþ«ºf¾¦«n¥Ÿ¡ªžx¢ ž¦¤‹ ÑR«[¥«n¶«¨F . e«§Ð¶¹£¨F e«l¡£¢0½с¡£®Ñ0 ÑR« â ¶ž¦ e«¢<Ì  ¡£¨¦¹½¨¦Ÿ¥«n¶¹¸¨¦®X«ºµ½¡£ ѵ¨aº¡ËÊ&«n¥«¢œ - Ù´M¶&«ž¦¤ ¶ž¦ e«¢<Ì  ¡£¨¦¹l¯ÙÏM² C³[ с¨F A¨¦¹£¹ªžJ½Ÿ¬¨F ¬§ÐžxŸ Âžx¢R«™ž¦¤ù ÑR«9¡£¢<Ì ®n¹£Äº¡£¢RÁµ«¢j ¡ª ¡ª«Ÿ% ež0с¨¾¦«µ¨å¢žx¢<Ìýón«n¥ž·¹£¨ Ò «¹ËÌý¾F¨¦¹£ÄR«¦» õ.ÑR«·ÏM²#ÿ¾Y¨F¥ ¡£¨ Ò ¹ª«9¯±¥«n¶¹£¨¦®n¡¸¢RÁP с« pâ ¾Y¨F¥ ¡£¨ Ò ¹ª«F³ ¡£Ÿµ¤cž¦¥®X«ºë ež с¨i¾¦«™¹¸¨ Ò «¹ªÌý¾Y¨¦¹£Ä« $¡ª¤p¨¦¹£¹.¡£¢®n¹£Äº¡£¢Á «¢j ¡ª ¡ª«Ÿlс¨¾¦«å¹£¨ Ò «¹ËÌý¾F¨¦¹£ÄR«6<¿‹ž¦ с«n¥½¡£Ÿe«Â¡ª [Ÿe«¹£«®X Ÿ  ÑR«%žx¢«a¹£¨ Ò «¹ËÌý¾F¨¦¹£ÄR«p с¨F ¡¸Ÿ.¢Rž¦ <»õ.ÑR«%¥«Ÿā¹ª ¡£¢RÁ ¥«n¶«¨F ¬ e«§Ð¶¹£¨F e«¦¿ Ò «Ÿ¡£ºR«ŸµÑ¨¦¢º¹£¡£¢ÁΫX­<¨¦®X A¥«n¶«XÌ  ¡ª ¡ªžx¢Ÿn¿ ¡¸Ÿ%¨¦¹£Ÿež™¨ Ò ¹ª«© ež9®n¨F¶ ÄR¥«¬®Xž¦¥¥«¹£¨F ¡£žx¢Ÿ Ò «XÌ  Ù½«n«¢Ó«¢œ ¡£ Ù´f ç´j¶«Ÿn¿5½ÑR«¢$žx¢R«µ«¢j ¡ª ç´Î¥«n¶«n ¡ª ¡ªžx¢ ¡£Ÿ¡£¢®n¹£Äº«ºå¡£¢·¨¦¢Rž¦ с«n¥«¢j ¡ª Ù´å½¡ª Ñ·¨µ¶&ž¦ e«¢j ¡£¨¦¹£¹£´ º¡ËÊ«n¥«¢j µ ç´j¶«¦» ぞ¦¥A«X­R¨¦§Ð¶¹ª«¦¿¡ª µ¡£Ÿ¬®Xžx§©§Ðžx¢¡£¢  с¡£Ÿ'®Xž¦¥¶āŸ' ežµÑ¨¾¦«a®Xžxā¢j e¥´A¢¨¦§Ð«Ÿ¥«n¶«¨F e«º0¡£¢<Ì Ÿ¡£º«ž¦¥Áx¨¦¢¡ªó¨F ¡ªžx¢A¢¨¦§Ð«Ÿ'¡£¢Â ÑR«ùŸ¨¦§Ð«ùºRž<®nā§Ð«¢j n¿ ¨¦Ÿµ¡¸ŸÎö œ¨F¶¨¦¢÷$¡£¢,ö '’¨¦¢Rïڞ¦¤ j¨F¶¨¦¢÷<¿ž¦¥ ö j¨F¶¨¦¢ * ¹£Ä§©¡£¢¡£Ä§?㫺R«n¥¨F ¡ªžx¢÷<» õ.с«æžJ¾¦«n¥¨¦¹£¹A¥«ŸÄ¹ª Ÿ ¨F¥« ŸÑRži½¢ ¡¸¢ õ ¨ Ò ¹ª« 6M¿ ½¡ª Ñ  ÑR«fÁx¹ªž Ò ¨¦¹[¨F¶¶¥žx¨¦® Ñ «X­Rс¡ Ò ¡£ ¡£¢RÁÚ¡£§Ð¶¥ži¾¦«XÌ §Ð«¢j µžJ¾¦«n¥Â ÑR«P¹ªžM®n¨¦¹¨F¶¶¥žx¨¦® Ñ ¿'¨¦¹ Ò «¡ª ©¹ª«ŸŸ©¶¥žFÌ ¢Ržx⢮X«º с¨¦¢ÿ¡£¢ ÑR« Ò ¡£žx§Ð«º¡£®n¨¦¹pºRžx§©¨¦¡¸¢ »øðž º¡£®X ¡£žx¢¨F¥¡ª«ŸÀ½-«n¥«āŸe«ºl¡£¢% с«Ÿe«C«X­<¶«n¥¡£§Ð«¢j Ÿn¿¦¨¦¢º ¢Ržp®nāŸe ežx§ ¤^«¨F ÄR¥«'Ÿe«¹ª«®X ¡ªžx¢[½’¨¦Ÿ5¶&«n¥¤cž¦¥§©«º% ÑR« ¤^«¨F ÄR¥«© e«§Ð¶W¹£¨F e«Ÿù½«n¥«© ÑR«©Ÿ¨¦§©«µ¨¦Ÿù ÑRžxŸe«ÂāŸe«º ¡£¢A ÑR« Ò ¡ªžx§Ð«º¡¸®n¨¦¹W«X­< e¥¨¦®X ¡ªžx¢» P«n ÑRž<º !-¥«®n¡£Ÿ¡ªžx¢ â«®n¨¦¹£¹ ã Ì٧Ы¨¦ŸÄR¥« Rõ’Ìý❷ð M» 6 M» $ <»M C <õ'ÌýâPð $M»    F» )) M» $ á’âã  F» 6 <»M <»  õ•¨ Ò ¹ª«*6M C²C­M e¥¨¦®X ¡£žx¢,!5«n¥¤cž¦¥ §©¨¦¢®X«%žx¢Pᒞ¦ð  C»  7™B!ý8&šFBWŽ Œ ›< õ.ÑR«n¥«с¨¾¦« Ò «n«¢µŸžx§Ð«¶¥«n¾M¡ªžxÄŸC¨F e e«§Ð¶ Ÿ ežlāŸe« Áx¹ªž Ò ¨¦¹C¡£¢R¤^ž¦¥§©¨F ¡ªžx¢9¤^¥žx§ ¥«n¶«n ¡ª ¡£žx¢Ÿn¿¨¦®X¥žx¢j´M§µŸn¿ ¨¦¢ºå¨ ÒÒ ¥«n¾M¡¸¨F ¡ªžx¢Ÿ’ºÄR¥¡¸¢RÁ[«X­< e¥¨¦®X ¡ªžx¢» °„¢Î¯Ùá'с¡ª«Ä ¨¦¢ºAðÁR¿ $j³¿¨ÐŸe«n .ž¦¤ Áx¹ªž Ò ¨¦¹¤^«¨F ÄR¥«Ÿ.¨F¥«[āŸe«º  ežä¡£§Ð¶¥žJ¾¦«P¨ P¨Y­R¡£§lā§Ìý²-¢j e¥ž¦¶M´î ¨FÁ¦Á¦«n¥ %ÑRžJ½'Ì «n¾¦«n¥¿x с«Ÿe«’¤c«¨F ÄR¥«Ÿ ºRžp¢Rž¦ 5¤^⹏¹ª´%®n¨F¶ ÄR¥«’ ÑR«'§lÄ<Ì  ⍦¹¡£¢<ÃWÄR«¢®X« Ò «n ç½-«n«¢ ÑR«™¹£¨ Ò «¹£ŸÐž¦¤%¨¦®X¥žx¢j´<§©Ÿ ¨¦¢º™ ÑR«¡ª¥ù¹ªžx¢Á¬¤^ž¦¥§©Ÿ¿&ž¦¥ Ò «n ç½-«n«¢9«¢j ¡ª Ù´™¥«n¶«n ¡ËÌ  ¡ªžx¢Ÿ»°ñ¢·¶¨F¥ ¡£®nā¹£¨F¥¿W ÑR«n´0žx¢¹£´0¨¦¹£¹ªži½Í«¨F¥¹¸¡ª«n¥«X­MÌ  e¥¨¦®X ¡ªžx¢Ÿ¡¸¢P¨AºRž<®nā§Ð«¢j  ežå¡£¢RÃWÄR«¢®X«l¹£¨F e«n¥pžx¢R«Ÿ ¨¦¢ºl¢Rž¦  ¾M¡£®X«XÌý¾¦«n¥ Ÿ¨<»‹õ.ÑR«C❷ðæ¨F¶¶¥žx¨¦® 
ÑlѨ¦¢º¹ª«Ÿ  ÑR«Ÿe« ¨¦¢ºÿ¶ž¦ e«¢œ ¡¸¨¦¹£¹ª´ëž¦ с«n¥P§lÄR ⍦¹%¡£¢RÃWÄR«¢®X«Ÿ Ò «n Ù½«n«¢©«¢j ¡ª ¡ª«Ÿ ¡¸¢l¨a§Ðž¦¥«.®Xžx§Ð¶¹£«n e«¦¿œ¶¥ž Ò ¨ Ò ¡£¹£¡£ŸeÌ  ¡£®n¨¦¹£¹£´ÂŸežxā¢ºå§©¨¦¢¢R«n¥» úˆ Œ=E!ý@-DYüýŒ=D·8=Ž 9‹@ š¦@ ›jB  Œ ›< éΫ с¨¾¦«u¶¥«Ÿe«¢œ e«º+¨¦¢+¨F¶¶¥žx¨¦®Ñ" ež ®Xžx¹£¹ª«®Ì  ¡ª¾¦«¡¸¢R¤cž¦¥ §©¨F ¡ªžx¢Ð«X­< e¥¨¦®X ¡ªžx¢µ с¨F -āŸ«Ÿ ⫹¸¨F ¡ªžx¢¨¦¹ ™¨F¥咽i¾äð«n Ù½ž¦¥ïMŸ© ežÎ¥«¨¦Ÿežx¢î¨ Ò žxÄR l ÑR«0§lÄR ⍦¹ ¡£¢<Ã&ÄR«¢®X«Ÿ Ò «n ç½-«n«¢·§[ā¹£ ¡ª¶¹ª««X­M e¥¨¦®X ¡£žx¢Ÿn» * ¢R«n½  Ù´M¶«Âž¦¤®n¹£¡  ÄR«µ e«§©¶¹£¨F e« 9 с«A¹£ž¦Áx¡£®n¨¦¹ pâ4 e«§Ì ¶¹£¨F e« ½’¨¦Ÿ•¡£¢j e¥ž<ºÄ®X«º ¿Y¨¦¹£¹ªžJ½¡£¢RÁ.¨¾Y¨F¥¡¸¨ Ò ¹ª« ¢Mā§Ì Ò «n¥ž¦¤<¥«¹ª«n¾F¨¦¢œ •«¢œ ¡£ ¡ª«Ÿ  ež Ò «CÄŸe«º Ò ´ž¦ с«n¥•®n¹£¡  ÄR«  e«§Ð¶¹¸¨F e«Ÿn»‹Ï<ž¦¤c  ®Xž¦¥¥«¹£¨F ¡ªžx¢Ÿ Ò «n Ù½«n«¢l¥«n¶«n ¡ª ¡ªžx¢Ÿ ¨¦¢º9¨¦®X¥žx¢j´M§µŸù¨¦¢º9 ÑR«¡ª¥p¹ªžx¢RÁA¤^ž¦¥§u¡£¢™ ÑR«ÐŸ¨¦§Ð« ºRž<®nā§Ð«¢j %с¨¾¦« Ò «n«¢$®n¨F¶ ÄR¥«º Ò ´™Áx¹ªž Ò ¨¦¹‹®n¹£¡  ÄR«  e«§Ð¶¹¸¨F e«Ÿn¿µ¨¦¹£¹ªžJ½¡£¢RÁÿ¤^ž¦¥$¹ªž<®n¨¦¹Ð«X­< e¥¨¦®X ¡ªžx¢ ºR«®n¡ËÌ Ÿ¡ªžx¢Ÿ' ež©¶¥ž¦¶W¨FÁx¨F e«[¨¦¢º0§lÄR ⍦¹¸¹ª´¬¡¸¢<ÃWÄR«¢®X«p«¨¦® Ñ ž¦ ÑR«n¥» â«nÁx¨F¥º¡£¢RÁ¤^ÄR ā¥« ½ž¦¥ïW¿¦¨¥¡£®с«n¥•Ÿe«n  ž¦¤R¤c«¨F ā¥«Ÿ ¤^ž¦¥À ÑR«¹ªžM®n¨¦¹œ e«§Ð¶¹£¨F e«Ÿ•½žxā¹£ºa¹¸¡ªï¦«¹ª´p¡£§Ð¶¥žJ¾¦«C¶«n¥eÌ ¤^ž¦¥§©¨¦¢®X«¦» á’⥥«¢j ¹ª´¦¿ <õ'ÌýâPðG Ÿ ¨¦®n®nÄR¥¨¦®X´,¡£Ÿ Ÿe ¡£¹¸¹Ÿ¡£Áx¢¡ËþW®n¨¦¢j ¹ª´ë¹ª«ŸŸÂ Ñ¨¦¢ìáâ㠟n¿½с¡£® ѹ£¡£§Ì ¡ª Ÿ ÑR«p¶«n¥¤^ž¦¥§©¨¦¢®X«ž¦¤À ÑR«¤±Ä¹£¹&Ÿe´MŸ e«§å» * ¢ž¦ ÑR«n¥ ¹£¡£§µ¡ª ¨F ¡ªžx¢A¡£Ÿ’ ÑR«%¨F¶¶¥ž­R¡£§©¨F e«%¡£¢¤c«n¥«¢®X«pÄŸe«º Ò ´ Ò ž¦ ÑÍâPð §Ð«n ÑRž<ºŸn» õ.ÑR«î¢Mā§ Ò «n¥Îž¦¤¬¤±¨¦®X ež¦¥ Á¦¥¨F¶ÑŸÀ¤^ž¦¥•½с¡£®Ña ÑR«-Ÿā§Ìý¶¥ž<ºÄ®X ¨¦¹£Á¦ž¦¥¡ª с§ º¡£º ¢Rž¦ ·®Xžx¢j¾¦«n¥Á¦«Ó½¨¦ŸP¢Ržx¢<ÌÙ¢R«nÁx¹¸¡ªÁx¡ Ò ¹ª«¦¿¨¦¢ºÿžxÄR¥™¨F¶RÌ ¶¥žx¨¦® ÑAŸe ež¦¶¶«ºA¨F¤c e«n¥'¨[þ­µ¢Mā§ Ò «n¥ž¦¤À¡ª e«n¥¨F ¡£žx¢Ÿn» '«Ÿ¡£ºR«Ÿ«X­M¶W¹ªž¦¥¡£¢RÁ©¡¸§Ð¶¥žJ¾¦«§Ð«¢œ Ÿ ežÂ¹£žjž¦¶M´ Ò «¹¸¡ª«n¤ ¶¥ž¦¶W¨FÁx¨F ¡ªžx¢  с¨F ™¡¸¢®X¥«¨¦Ÿe«ä®Xžx§Ð¶ÄR ¨F ¡£žx¢¨¦¹®XžxŸe  ¯$ «º¡£º¡£¨l«n ¨¦¹ò»ª¿ M³¿W½-«a¡¸¢œ e«¢ºA ežµ«X­<¨¦§©¡¸¢R«%¨¦¹ËÌ  e«n¥¢¨F ¡£¾¦«a¨F¶¶¥ži­<¡¸§©¨F e«XÌÙ¡£¢R¤^«n¥«¢®X«%§Ð«n сžMºŸ» úú — E<‹=CŒ ; !ýBWŽ ABŠB= šxD õ.с¡¸Ÿ0½ž¦¥ï ½’¨¦ŸP¶W¨F¥ ¡£¨¦¹£¹ª´ÿŸⶁ¶ž¦¥ e«º Ò ´ÿÁ¦¥¨¦¢j Ÿ °°ÏjÌ  $©¨¦¢ºå°°ÏjÌ $ 6  ©¤^¥žx§4 с«ùðùÏM㒻 79BK=ñBR›œB=E<BD  LO= >!LO=?H?H;  _; ZLUK`G<QXYNLOF UKRO=?N`Tc b^UK<2A]\ASL2LONL2cd\MLO=AS` H?A]UKLO`T= ` UK`^\ `^UKROEMLUH H U` E^U A JMLON_>@A@<2<2=?`M7 >@U<2A <QROE^\a*=?` J^UCL2R2c NXYc <2J A@AS>D RU   =?`M;! #" %$#&" ')(*$%&,+ .$%/0 %$#12/4365.7#8.97 8:<;. = _; 34U >2U`? EG`TA@<2>@E@33 EG=VX UK`BA6A 33 N DG=VRCG;D UCROA3  \M9 UKL\FE; E UKLO>@NKR2ROA3 34U]aMFIN`^\GCG;HE N_N `GAaI3J LOET`KDEGF UKL34UKc F U`T=L3KUK`^\BM ENO UDBO N`M;5 P P 8M;Q N FIJGUKLUKRO=A!AS[MJ ASL2c =?FIA@`_RO< N ` H?A]UCLO`G=?`=?`TXYNLOF UCRO= N`IAS[R2LU>SRONKLO< XYNL!JTLONKROA@=?`G< U`G\'RODGA@=VL =?`_ROASLU>SRO=?N`G<@;SRT412$%" 'VUW/4/4TX$%&Y [Z!TX\..]0&," ' ^ ]_ %$ `a12$#" '@UW& LT2'%' $ T_&1WTF$%&cbdTfe$#12$%&,TF&gR.hF" ]W$ji2" %$%& "&,ekUW&l_ ]Wh" %$#&Fman %]o" 1 %$#&BlW]pqbdTfe$%10"'.rs12.tT_& %/O; NIUKJGJ A]UCL]; E UKL2a  H U=?`GAuQ UH?=wv)3 A]\M=VRONL];x2 M;ayu"WT_]W/,lW]p K [ZT ^z^^ U_+ { | | |~} ]4/pZ!0~&FbX" 1WZI$%&T€(Tf"]W&$%& l_]aUW&2l_]0F" %$# & man %]o" 1 %$#&3@4LOH U`G\TN3‚ 5 ;ƒzz„a…8LOAS<2<@; Q H U=VLOAJQ UCL\T=?A ;H  †;  FIJG=VLO=?>]UH FIASRODTN\T<4= ` =?`TXYNLOF UCRO= N` AS[R2LU>SRO=?N`Z; ^ UzbX" "i2$‡&T_3@ˆ7#8I97 = ;.† ; ‰ UK= 5 A@N `~Q DT= ASE U`G\ ‰ 9 A@At NESŠƒM;H5PP:;HŠUKFIA]\AS`Tc RO=VRaLOA@>@N`G=VRO=?N `9 =VRODU F UC[M= F EGF A@`_R2LON J_a UKJGJMLONU>DZ; „`‹yV]p10T4Tfe$%& /dflc [ZTgRT_ŒT_& [Z&l_T_]fT_&,10TS&ŽB" %!+ ]p" '(@" & !" 
Tu(@T4" ]W&$%& -t <Žƒ(‘(+L’.“ “”_•<3J^U A@<V= P<;=:.3  \MFIN `_RON `@3Q U`^U\GU; Š ;aQ NH?H =?ASL3 ‰ ;… UKL4N3VŠ ;Vƒ UKRU.3VM; UCROA@=?<2=#3zQ4;aŠ4N bGUKRU!3 6; 6DRU.3 6;,–A2N=?FI=j>SE‘3 ‰ ;.„dF U=#3.D ;.„bTEG<2DT=L3MUK`^\JCG; <2ETc P2=?= ;H M;z DGAhA  Šƒ„pJMLONCPQA@>SR]7kQ NLOJGET<Qc b^U<2A@\cN`TNC9 HVc A]\A UK>—_EG=?<2=VRO=?N ` U`^\ =?`MX NLOF UCRO=?N ` AS[R2LU>SRO=?N `X LON F  A@`TN FIALOAS<2A]UKLO>D J^UKJ ASLO<@;B„d`SŽk$%&, [Z˜&2l2T_]fT_&10TflF [ZT mu.]oWTf"&™šZ!"W LT_]Sflg [Z!T ^ /4/012$#" %$# &›l_]œz, #"+ %$#&" 'I(*$%& .$%/0 %$#12/  m ^ š()+ | | •<3 J^U A@<5 †.W;.5† 5.3! ASL4A@`Z; E =?>DGUA@HuQ N H?H?= `T<@;t5PP5; 36UK`N=?`*UKH NLO=VRODTFI< XYNL `GUFIA]\c A@`_RO=VRa AS[R2LU>SRO=?N`Z7 N_N<QRO=?` U`G\ RODGAu NROA]\J ALO>@A@JTR2LON`Z; „`›yV]p10T4Tfe$%& /plJ [Z!T ^ &&"'bdT4T2 %$%& flt [ZT ^ /4/W12$%+ " %$#&slW]F! #" %$#&" '6(*$%& .$%/0 %$#12/  ^ š()+o“ ’2•<3GJ^U A@< 8 ˆ <;I8=.3@… DT= H U\TA@H?JGDT= U!3…6 ; E =?>DGUA@Ha„;‘CNL\TU`@3 A]\M=?RONKL];t M;h(@T4" ]W&$%& $%&ŸžV]p"0Z.$#14" ' bXeIT2' /O;xE„p Y… LOA@<2<236Q UKFbTLO= \! A3,E  ; ‚ ;83 ;)D<2>DT=?<2>DGU`!3V ;‚TLOASaI3 U`G\ ‰ ; c¡ ;85 N_A@H?=jASL];5 P P!; ‚^U>RONLLUKJGDG< UK`^\ RODGA<2ETF cdJMLON\TET>SR UHNLO=VRODGF ;aUfmmam ¢ ]p" &/0" 1 %$#&/s&UW&l_ ]Wh" %$#& ¢ ZTf]0£ 38I†.7¤5 97 8  ˆ<;! I; CNDG` 5 Uv AL2RaI3¥4`G\MLOAS9¦E ><Q UH?H?EGF 3 U`G\§‚GASLO`^UK`^\MN … ASLOA@=VLUM;¨5PP. ;GQ N `G\T=VRO=?N `^UKHILU`^\MN Fª©^A@H \M<@7… LON bGUKc bG=?H?=?<QRO= > FIN\MA@H?< XYNL <2A2FIA@`_RO= `*U`G\H Ub ASH =?`*<2A—_EGA@`T>@A \GUCRUM;)„d`Jyu]o10T0Tfe$‡& /fl {« [ZhUW& LT_]W&" %$#&" 'š&2l2T_]fT_&1WT &˜bX"1WZ.$%&T(‘Tf"]0&$%& -‹ U šb ()+¤’.““ { •3 J^U A@<5 ˆ5;.5 ˆ.3 O =?H?H = UKFI<ƒQ NH?H A_ A3‘E  ; 4`G\MLOAS9 D UK>DT=VROA@< E ><Q UH?H?EGF ; 5 PP5; E UKH?H AR]7  F UK>DT= `TA H?A]UKLO`T=?` XYNL H UK`E^U A RON_N HN=?R]; D_R2ROJZ7 B BCF UH?H?ASR]; >@<@; EGF U<2<@; A]\MEZ; CEG\TA]Uk… A@UKLOH ;u ˆˆM;ayV]p ¬4" ¬_$#' $%/0 %$#1€­€Tf"/0&$%& $%&hUW&, LT2'%' $ T_&, R£/0 LT_h/W®GŽHT2 %¯]f/°fl~yu'w"./4$L¬2'TXUW&l2T_]pT_&,10TS;±E NL4UK` D UEMX F U`T`‘3–TUK`~E UCROA@N3 QV ;  LO=?A@Hz– ;a–M>D9 UKL2R4> U`^\±E UKL2RO= ; ‰ A]UKLO<QR];5PP :M; <2=?F c JGH?A UKH NLO=VRODTF X NL = \TAS`RO=VX aT=?` UKbGbTLOA_M= UCRO= N` \MA_©^`T=?RO=?N`G< =?` bG=?NFIA]\T=?>]UKH ROAS[R];z„`gyu]o10T0Tfe$‡& /Jflt [ZT « [Zcy€" 12$ `a1 R£z!/4$%.²&S³$#14z, %$‡& 3 J^U A@<F8 .0;I8 =5.3 5Z=?DEGA3 ‰ „03CU`E^UCL2a_; ‚GA@=ƒ–MDGU UK`^\±‚TASLO`^UK`^\TNS… ASLOA@=VLUM;~5 PP :M;X–D^UKH H?N]9 J^UCLO<2= ` 9 =VROD >SN `^\M=VRO= N`^UKH^LU`^\MN F´©^A@H \M<@;)„`yV]p10T4T4e $%& /ƒflVµk!+ F"&´(@"& " T ¢ Tf1WZ.& 'w £q" &,eK [ZT˜bdT4T2 %$%& fl± [ZT ŽH ]_ [Z ^ tT_]0$#14"& ^ /4/W12$%" %$#&¶l_ ]! 
#" %$#&" ')(*$%&,+ .$%/0 %$#12/43^J^U A@<B2: 8;28!3  \TFIN `_RON`‘3Q U`GU \GU;  A@`CPOUKFI= `œ UK<fN UCL3V…8=?ASROASLF4bGb ASA@H#3 UK`^\°·;)DNH?H AL];J5 P P 5_; ·=?<2>SLO=?FI=?`^UKRO=AJMLON bGUbG=?H?=?<QRO=?> FIN\MA@H?< X NKL LOA@H UCRO= N`^UKH \GUCRUM;~„d`Yyu]o10T0Tfe$‡& /Xfl {« [Zq&l2T_]pT_&,10Tg&¸š&,10T_]W+ #"$%& %£$%& ^ ]_ %$ `a12$#" '@UW& LT2'%' $ T_&1WT  ¸ ^ U_+¡“ ’•3 J^U A@<z8 ˆ ; 8 5.3  \TFIN`_RON `‘3Q UK`^U \TUM;  LO=jN¹‚ ;Ÿ ZPQN `ºD=?F –TUK` UK`^\»‚ =?A@`¼·A¥E ASEGH \TASL]; 5 P P:M;h„`_R2LON\TET>SRO=?N ` RON*RODGAQ NŠ65 5 cp5PP :<2DGUKLOA]\ RU<fN 7 5 UK` EGU ASc = `G\TA@J AS`^\TAS`R`^UKFIA]\ A@`_RO=?Rda LOA@>SN `T=VRO= N`Z;¶„` yu]o10T0Tfe$‡& /Fpl<Žz(š(+L’.“ “ ”3 JGU A@<F28 5;,_8I†_;  \TFIN`Tc RON `@3Q UK`^U \TUM; CN`^UKRODGU`½– ;ƒM A]\M= \T= U!3FO/= H?H?= UF¾ 6;s‚TLOA@A@F UK`‘3U`^\qM U=VL O A@=?<2<@;B5 PP PM;sA6AS`GASLUKH =>@A@\ b A@H?= AX:JTLONJ^U  UKRO=?N ` ;B„` ^ e+ Œ<"&,10T_/B$%&ŽHT_.]p" ',UW&l_]0F" %$#&yu]o10T_/0/4$%& R£/0 LT_h/ { ’ 3 J^U A@<=ˆ <;I= I36·A@`.ASL3Qu ;
Error Mining for Wide-Coverage Grammar Engineering Gertjan van Noord Alfa-informatica University of Groningen POBox 716 9700 AS Groningen The Netherlands [email protected] Abstract Parsing systems which rely on hand-coded linguistic descriptions can only perform adequately in as far as these descriptions are correct and complete. The paper describes an error mining technique to discover problems in hand-coded linguistic descriptions for parsing such as grammars and lexicons. By analysing parse results for very large unannotated corpora, the technique discovers missing, incorrect or incomplete linguistic descriptions. The technique uses the frequency of n-grams of words for arbitrary values of n. It is shown how a new combination of suffix arrays and perfect hash finite automata allows an efficient implementation. 1 Introduction As we all know, hand-crafted linguistic descriptions such as wide-coverage grammars and large scale dictionaries contain mistakes, and are incomplete. In the context of parsing, people often construct sets of example sentences that the system should be able to parse correctly. If a sentence cannot be parsed, it is a clear sign that something is wrong. This technique only works in as far as the problems that might occur have been anticipated. More recently, tree-banks have become available, and we can apply the parser to the sentences of the tree-bank and compare the resulting parse trees with the gold standard. Such techniques are limited, however, because treebanks are relatively small. This is a serious problem, because the distribution of words is Zipfian (there are very many words that occur very infrequently), and the same appears to hold for syntactic constructions. In this paper, an error mining technique is described which is very effective at automatically discovering systematic mistakes in a parser by using very large (but unannotated) corpora. The idea is very simple. We run the parser on a large set of sentences, and then analyze those sentences the parser cannot parse successfully. Depending on the nature of the parser, we define the notion ‘successful parse’ in different ways. In the experiments described here, we use the Alpino wide-coverage parser for Dutch (Bouma et al., 2001; van der Beek et al., 2002b). This parser is based on a large constructionalist HPSG for Dutch as well as a very large electronic dictionary (partly derived from CELEX, Parole, and CGN). The parser is robust in the sense that it essentially always produces a parse. If a full parse is not possible for a given sentence, then the parser returns a (minimal) number of parsed nonoverlapping sentence parts. In the context of the present paper, a parse is called successful only if the parser finds an analysis spanning the full sentence. The basic idea is to compare the frequency of words and word sequences in sentences that cannot be parsed successfully with the frequency of the same words and word sequences in unproblematic sentences. As we illustrate in section 3, this technique obtains very good results if it is applied to large sets of sentences. To compute the frequency of word sequences of arbitrary length for very large corpora, we use a new combination of suffix arrays and perfect hash finite automata. This implementation is described in section 4. The error mining technique is able to discover systematic problems which lead to parsing failure. This includes missing, incomplete and incorrect lexical entries and grammar rules. 
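The procedure sketched in this introduction can be summarized in a few lines of Python. This is only an illustrative sketch of ours: parses_full_span is a hypothetical stand-in for a robust parser such as Alpino, returning True exactly when an analysis spanning the full sentence is found.

def split_corpus(sentences, parses_full_span):
    # partition the corpus into successfully parsed and problematic sentences;
    # these two sets are the only input the error mining technique needs
    ok, failed = [], []
    for words in sentences:
        (ok if parses_full_span(words) else failed).append(words)
    return ok, failed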
Problems which cause the parser to assign complete but incorrect parses cannot be discovered. Therefore, tree-banks and hand-crafted sets of example sentences remain important to discover problems of the latter type.

2 A parsability metric for word sequences

The error mining technique assumes we have available a large corpus of sentences. Each sentence is a sequence of words (of course, words might include tokens such as punctuation marks, etc.). We run the parser on all sentences, and we note for which sentences the parser is successful. We define the parsability of a word R(w) as the ratio of the number of times the word occurs in a sentence with a successful parse (C(w|OK)) and the total number of sentences that this word occurs in (C(w)):

R(w) = C(w|OK) / C(w)

Thus, if a word only occurs in sentences that cannot be parsed successfully, the parsability of that word is 0. On the other hand, if a word only occurs in sentences with a successful parse, its parsability is 1. If we have no reason to believe that a word is particularly easy or difficult, then we expect its parsability to be equal to the coverage of the parser (the proportion of sentences with a successful parse). If its parsability is (much) lower, then this indicates that something is wrong. For the experiments described below, the coverage of the parser lies between 91% and 95%. Yet, for many words we found parsability values that were much lower than that, including quite a number of words with parsability 0. Below we show some typical examples, and discuss the types of problem that are discovered in this way.

If a word has a parsability of 0, but its frequency is very low (say 1 or 2), then this might easily be due to chance. We therefore use a frequency cut-off (e.g. 5), and we ignore words which occur less often in sentences without a successful parse.

In many cases, the parsability of a word depends on its context. For instance, the Dutch word via is a preposition. Its parsability in a certain experiment was more than 90%. Yet, the parser was unable to parse sentences with the phrase via via, which is an adverbial expression meaning via some complicated route. For this reason, we generalize the parsability of a word to word sequences in a straightforward way. We write C(wi . . . wj) for the number of sentences in which the sequence wi . . . wj occurs. Furthermore, C(wi . . . wj|OK) is the number of sentences with a successful parse which contain the sequence wi . . . wj. The parsability of a sequence is defined as:

R(wi . . . wj) = C(wi . . . wj|OK) / C(wi . . . wj)

If a word sequence wi . . . wj has a low parsability, then this might be because it is part of a difficult phrase. It might also be that part of the sequence is the culprit. In order that we focus on the relevant sequence, we consider a longer sequence wh . . . wi . . . wj . . . wk only if its parsability is lower than the parsability of each of its substrings:

R(wh . . . wi . . . wj . . . wk) < R(wi . . . wj)

This is computed efficiently by considering the parsability of sequences in order of length (shorter sequences before longer ones). We construct a parsability table, which is a list of n-grams sorted with respect to parsability.
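As an illustration of the metric just defined, the following sketch (ours, not the actual implementation described later in the paper) computes parsability scores for n-grams of a fixed length from the two sets of sentences, applying the frequency cut-off on problematic sentences:

from collections import Counter

def ngrams(words, n):
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def parsability_scores(ok_sentences, failed_sentences, n, cutoff=5):
    # count the number of sentences an n-gram occurs in, as in C(w) and C(w|OK)
    c_ok, c_fail = Counter(), Counter()
    for sent in ok_sentences:
        c_ok.update(set(ngrams(sent, n)))
    for sent in failed_sentences:
        c_fail.update(set(ngrams(sent, n)))
    scores = {}
    for gram, bad in c_fail.items():
        if bad >= cutoff:                       # frequency cut-off on problematic parses
            good = c_ok.get(gram, 0)
            scores[gram] = good / (good + bad)  # R = C(w|OK) / C(w)
    return scores

The substring condition (report a longer n-gram only if its parsability is lower than that of all of its substrings) can then be applied by processing the resulting dictionaries in order of increasing n.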
An n-gram is included in the parsability table, provided:

• its frequency in problematic parses is larger than the frequency cut-off
• its parsability is lower than the parsability of all of its sub-strings

The claim in this paper is that a parsability table provides a wealth of information about systematic problems in the grammar and lexicon, which is otherwise hard to obtain.

3 Experiments and results

3.1 First experiment

Data. For our experiments, we used the Twente Nieuws Corpus, version pre-release 0.1 (http://wwwhome.cs.utwente.nl/~druid/TwNC/TwNC-main.html). This corpus contains among others a large collection of news articles from various Dutch newspapers in the period 1994-2001. In addition, we used all news articles from the Volkskrant 1997 (available on CD-ROM). In order that this material can be parsed relatively quickly, we discarded all sentences of more than 20 words. Furthermore, a time-out per sentence of twenty CPU-seconds was enforced. The Alpino parser normally exploits a part-of-speech tag filter for efficient parsing (Prins and van Noord, 2003) which was switched off, to ensure that the results were not influenced by mistakes due to this filter. In table 1 we list some basic quantitative facts about this material.

newspaper         sents    coverage %
NRC 1994          582K     91.2
NRC 1995          588K     91.5
Volkskrant 1997   596K     91.6
AD 2000           631K     91.5
PAROOL 2001       529K     91.3
total             2,927K   91.4

Table 1: Overview of corpus material; first experiment (Autumn 2002).

We exploited a cluster of Linux PCs for parsing. If only a single PC had been available, it would have taken in the order of 100 CPU days to construct the material described in table 1. These experiments were performed in the autumn of 2002, with the Alpino parser available then. Below, we report on more recent experiments with the latest version of the Alpino parser, which has been improved quite a lot on the basis of the results of the experiments described here.

Results. For the data described above, we computed the parsability table, using a frequency cut-off of 5. In figure 1 the frequencies of parsability scores in the parsability table are presented.

Figure 1: Histogram of the frequencies of parsability scores occurring in the parsability table. Frequency cut-off=5; first experiment (Autumn 2002).

From the figure, it is immediately clear that the relatively high number of word sequences with a parsability of (almost) zero cannot be due to chance. Indeed, the parsability table starts with word sequences which constitute systematic problems for the parser. In quite a lot of cases, these word sequences originate from particular types of newspaper text with idiosyncratic syntax, such as announcements of new books, movies, events, television programs etc.; as well as checkers, bridge and chess diagrams. Another category consists of (parts of) English, French and German phrases. We also find frequent spelling mistakes such as de de where only a single de (the definite article) is expected, and heben for hebben (to have), indentiek for identiek (identical), koninging for koningin (queen), etc. Other examples include wordt ik (becomes I), vindt ik (finds I), vind hij (find he) etc. We now describe a number of categories of examples which have been used to improve the parser.

Tokenization. A number of n-grams with low parsability scores point towards systematic mistakes during tokenization. Here are a number of examples (the @ symbol indicates a sentence boundary):
R     C     n-gram
0.00  1884  @ .              (@ .)
0.00  385   @ !              (@ !)
0.00  22    's advocaat      ('s lawyer)
0.11  8     H. 's            (H. 's)
0.00  98    @ , roept        (@ , yells)
0.00  20    @ , schreeuwt    (@ , screams)
0.00  469   @ , vraagt       (@ , asks)

The first and second n-gram indicate sentences which start with a full stop or an exclamation mark, due to a mistake in the tokenizer. The third and fourth n-grams indicate a problem the tokenizer had with a sequence of a single capital letter with a dot, followed by the genitive marker. The grammar assumes that the genitive marking is attached to the proper name. Such phrases occur frequently in reports on criminals, which are indicated in newspapers only with their initials. Another systematic mistake is reflected by the last n-grams. In reported speech such as

(1) Je bent gek!, roept Franca.
    you are crazy!, yells Franca.
    'Franca yells: You are crazy!'

the tokenizer mistakenly introduced a sentence boundary between the exclamation mark and the comma. On the basis of examples such as these, the tokenizer has been improved.

Mistakes in the lexicon. Another reason an n-gram receives a low parsability score is a mistake in the lexicon. The following table lists two typical examples:

R     C   n-gram
0.27  18  de kaft            (the cover)
0.30  7   heeft opgetreden   (has performed)

In Dutch, there is a distinction between neuter and non-neuter common nouns. The definite article de combines with non-neuter nouns, whereas neuter nouns select het. The common noun kaft, for example, combines with the definite article de. However, according to the dictionary, it is a neuter common noun (and thus would be expected to combine only with the definite article het). Many similar errors were discovered. Another syntactic distinction that is listed in the dictionary is the distinction between verbs which take the auxiliary hebben (to have) to construct a perfect tense clause vs. those that take the auxiliary zijn (to be). Some verbs allow both possibilities. The last example illustrates an error in the dictionary with respect to this syntactic feature.

Incomplete lexical descriptions. The majority of problems that the parsability scores indicate reflect incomplete lexical entries. A number of examples is provided in the following table:

R     C   n-gram
0.00  11  begunstigden     (favoured (N/V))
0.23  10  zich eraan dat   (self there-on that)
0.08  12  aan te klikken   (on to click)
0.08  12  doodzonde dat    (mortal sin that)
0.15  11  zwarts           (black's)
0.00  16  dupe van         (victim of)
0.00  13  het Turks .      (the Turkish)

The word begunstigden is ambiguous between on the one hand the past tense of the verb begunstigen (to favour) and on the other hand the plural nominalization begunstigden (beneficiaries). The dictionary contained only the first reading. The sequence zich eraan dat illustrates a missing valency frame for verbs such as ergeren (to irritate). In Dutch, verbs which take a prepositional complement sometimes also allow the object of the prepositional complement to be realized by a subordinate (finite or infinite) clause. In that case, the prepositional complement is R-pronominalized. Examples:

(2) a. Hij ergert zich aan zijn aanwezigheid
       he is-irritated self on his presence
       'He is irritated by his presence'
    b. Hij ergert zich er niet aan dat ...
       he is-irritated self there not on that ...
       'He is not irritated by the fact that ...'

The sequence aan te klikken is an example of a verb-particle combination which is not licensed in the dictionary. This is a relatively new verb which is used for click in the context of buttons and hyperlinks.
The sequence doodzonde dat illustrates a syntactic construction where a copula combines with a predicative complement and a sentential subject, if that predicative complement is of the appropriate type. This type is specified in the dictionary, but was missing in the case of doodzonde. Example:

(3) Het is doodzonde dat hij slaapt
    it is mortal-sin that he sleeps
    'That he is sleeping is a pity'

The word zwarts should have been analyzed as a genitive noun, as in (typically sentences about chess or checkers):

(4) Hij keek naar zwarts toren
    he looked at black's rook

whereas the dictionary only assigned the inflected adjectival reading. The sequence dupe van illustrates an example of an R-pronominalization of a PP modifier. This is generally not possible, except for a (quite large) number of contexts which are determined by the verb and the object:

(5) a. Hij is de dupe van jouw vergissing
       he is the victim of your mistake
       'He has to suffer for your mistake'
    b. Hij is daar nu de dupe van
       he is there now the victim of
       'He has to suffer for it'

The word Turks can be both an adjective (Turkish) or a noun (the Turkish language). The dictionary contained only the first reading. Very many other examples of incomplete lexical entries were found.

Frozen expressions with idiosyncratic syntax. Dutch has many frozen expressions and idioms with archaic inflection and/or word order which break the parser. Examples include:

R     C   n-gram
0.00  13  dan schaadt het   (then harms it)
0.00  13  @ God zij         (@ God be[I])
0.22  25  God zij           (God be[I])
0.00  19  Het zij zo        (It be[I] so)
0.45  12  goeden huize      (good house[I])
0.09  11  berge             (mountain[I])
0.00  10  hele gedwaald     (whole[I] dwelled)
0.00  14  te weeg

The sequence dan schaadt het is part of the idiom Baat het niet, dan schaadt het niet (meaning: it might be unsure whether something is helpful, but in any case it won't do any harm). The sequence God zij is part of a number of archaic formulas such as God zij dank (Thank God). In such examples, the form zij is the (archaic) subjunctive form of the Dutch verb zijn (to be). The sequence Het zij zo is another fixed formula (English: So be it), containing the same subjunctive. The phrase van goeden huize (of good family) is a frozen expression with archaic inflection. The word berge exhibits archaic inflection on the word berg (mountain), which only occurs in the idiomatic expression de haren rijzen mij te berge (my hair rises to the mountain), which expresses a great deal of surprise. The n-gram hele gedwaald only occurs in the idiom Beter ten halve gekeerd dan ten hele gedwaald: it is better to turn halfway than to go all the way in the wrong direction. Many other (parts of) idiomatic expressions were found in the parsability table. The sequence te weeg only occurs as part of the phrasal verb te weeg brengen (to cause).

Incomplete grammatical descriptions. Although the technique strictly operates at the level of words and word sequences, it is capable of indicating grammatical constructions that are not treated, or not properly treated, in the grammar.

R     C   n-gram
0.06  34  Wij Nederlanders  (We Dutch)
0.08  23  Geeft niet        (Matters not)
0.00  15  de alles          (the everything)
0.10  17  Het laten         (The letting)
0.00  10  tenzij .          (unless .)
The sequence Wij Nederlanders constitutes an example of a pronoun modified by means of an apposition (not allowed in the grammar), as in

(6) Wij Nederlanders eten vaak aardappels
    we Dutch eat often potatoes
    'We, the Dutch, often eat potatoes'

The sequence Geeft niet illustrates the syntactic phenomenon of topic-drop (not treated in the grammar): verb initial sentences in which the topic (typically the subject) is not spelled out. The sequence de alles occurs with present participles (used as prenominal modifiers) such as overheersende as in de alles overheersende paniek (literally: the all dominating panic, i.e., the panic that dominated everything). The grammar did not allow prenominal modifiers to select an NP complement. The sequence Het laten often occurs in nominalizations with multiple verbs. These were not treated in the grammar. Example:

(7) Het laten zien van problemen
    the letting see of problems
    'Showing problems'

The word sequence tenzij . is due to sentences in which a subordinate coordinator occurs without a complement clause:

(8) Gij zult niet doden, tenzij.
    thou shalt not kill, unless.

A large number of n-grams also indicate elliptical structures, not treated in that version of the grammar. Another fairly large source of errors are irregular named entities (Gil y Gil, Osama bin Laden . . . ).

newspaper        # sentences   coverage %
NRC 1994         552,833       95.0
Volkskrant 1997  569,314       95.2
AD 2000          662,380       95.7
Trouw 1999       406,339       95.5
Volkskrant 2001  782,645       95.1

Table 2: Overview of corpus material used for the experiments; second experiment (January 2004).

3.2 Later experiment

Many of the errors and omissions that were found on the basis of the parsability table have been corrected. As can be seen in table 2, the coverage obtained by the improved parser increased substantially. In this experiment, we also measured the coverage on additional sets of sentences (all sentences from the Trouw 1999 and Volkskrant 2001 newspaper, available in the TwNC corpus). The results show that coverage is similar on these unseen test sets.

Obviously, coverage only indicates how often the parser found a full parse, but it does not indicate whether that parse actually was the correct parse. For this reason, we also closely monitored the performance of the parser on the Alpino tree-bank (van der Beek et al., 2002a), available from http://www.let.rug.nl/~vannoord/trees/, both in terms of parsing accuracy and in terms of average number of parses per sentence. The average number of parses increased, which is to be expected if the grammar and lexicon are extended. Accuracy has been steadily increasing on the Alpino tree-bank. Accuracy is defined as the proportion of correct named dependency relations of the first parse returned by Alpino. Alpino employs a maximum entropy disambiguation component; the first parse is the most promising parse according to this statistical model. The maximum entropy disambiguation component of Alpino assigns a score S(x) to each parse x:

S(x) = Σ_i θ_i f_i(x)    (1)

where f_i(x) is the frequency of a particular feature i in parse x and θ_i is the corresponding weight of that feature. The probability of a parse x for sentence w is then defined as follows, where Y(w) are all the parses of w:

p(x|w) = exp(S(x)) / Σ_{y∈Y(w)} exp(S(y))    (2)

The disambiguation component is described in detail in Malouf and van Noord (2004).

Figure 2: Development of accuracy of the Alpino parser on the Alpino tree-bank.

Figure 2 displays the accuracy from May 2003 to May 2004.
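The log-linear model in equations (1) and (2) can be made concrete with a short sketch. The feature representation and function names below are our own illustration, not the Alpino implementation:

import math

def parse_probabilities(candidate_features, weights):
    # candidate_features: one dict per candidate parse, mapping feature -> frequency f_i(x)
    # weights: dict mapping feature -> theta_i
    scores = [sum(weights.get(f, 0.0) * count for f, count in feats.items())
              for feats in candidate_features]       # S(x) = sum_i theta_i * f_i(x)
    m = max(scores)                                   # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]                      # p(x|w), as in equation (2)

Selecting the candidate with the highest score then yields the first parse referred to in the text.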
During this period many of the problems described earlier were solved, but other parts of the system were improved too (in particular, the disambiguation component was improved considerably). The point of the graph is that apparently the increase in coverage has not been obtained at the cost of decreasing accuracy.

4 A note on the implementation

The most demanding part of the implementation consists of the computation of the frequency of n-grams. If the corpus is large, or n increases, simple techniques break down. For example, an approach in which a hash data-structure is used to maintain the counts of each n-gram, and which increments the counts of each n-gram that is encountered, requires excessive amounts of memory for large n and/or for large corpora. On the other hand, if a more compact data-structure is used, speed becomes an issue. Church (1995) shows that suffix arrays can be used for efficiently computing the frequency of n-grams, in particular for larger n. If the corpus size increases, the memory required for the suffix array may become problematic. We propose a new combination of suffix arrays with perfect hash finite automata, which reduces typical memory requirements by a factor of five, in combination with a modest increase in processing efficiency.

4.1 Suffix arrays

Suffix arrays (Manber and Myers, 1990; Yamamoto and Church, 2001) are a simple, but useful data-structure for various text-processing tasks. A corpus is a sequence of characters. A suffix array s is an array consisting of all suffixes of the corpus, sorted alphabetically. For example, if the corpus is the string abba, the suffix array is ⟨a, abba, ba, bba⟩. Rather than writing out each suffix, we use integers i to refer to the suffix starting at position i in the corpus. Thus, in this case the suffix array consists of the integers ⟨3, 0, 2, 1⟩.

It is straightforward to compute the suffix array. For a corpus of k + 1 characters, we initialize the suffix array by the integers 0 . . . k. The suffix array is sorted, using a specialized comparison routine which takes integers i and j, and alphabetically compares the strings starting at i and j in the corpus.4

Once we have the suffix array, it is simple to compute the frequency of n-grams. Suppose we are interested in the frequency of all n-grams for n = 10. We simply iterate over the elements of the suffix array: for each element, we print the first ten words of the corresponding suffix. This gives us all occurrences of all 10-grams in the corpus, sorted alphabetically. We now count each 10-gram, e.g. by piping the result to the Unix uniq -c command.

4.2 Perfect hash finite automata

Suffix arrays can be used more efficiently to compute frequencies of n-grams for larger n, with the help of an additional data-structure, known as the perfect hash finite automaton (Lucchiesi and Kowaltowski, 1993; Roche, 1995; Revuz, 1991). The perfect hash automaton for an alphabetically sorted finite set of words w0 . . . wn is a weighted minimal deterministic finite automaton which maps wi → i for each 0 ≤ i ≤ n. We call i the word code of wi. An example is given in figure 3. Note that perfect hash automata implement an order preserving, minimal perfect hash function. The function is minimal, in the sense that n keys are mapped into the range 0 . . . n − 1, and the function is order preserving, in the sense that the alphabetic order of words is reflected in the numeric order of word codes.

4.3 Suffix arrays with words

In the approach of Church (1995), the corpus is a sequence of characters (represented by integers reflecting the alphabetic order). A more space-efficient approach takes the corpus as a sequence of words, represented by word codes reflecting the alphabetic order. To compute frequencies of n-grams for larger n, we first compute the perfect hash finite automaton for all words which occur in the corpus,5 and map
4.3 Suffix arrays with words In the approach of Church (1995), the corpus is a sequence of characters (represented by integers reflecting the alphabetic order). A more spaceefficient approach takes the corpus as a sequence of words, represented by word codes reflecting the alphabetic order. To compute frequencies of n-grams for larger n, we first compute the perfect hash finite automaton for all words which occur in the corpus,5 and map 4The suffix sort algorithm of Peter M. McIlroy and M. Douglas McIlroy is used, available as http://www.cs. dartmouth.edu/˜doug/ssort.c; This algorithm is robust against long repeated substrings in the corpus. 5We use an implementation by Jan Daciuk freely available from http://www.eti.pg.gda.pl/˜jandac/ fsa.html. d::1 c r::5 s::7 e::1 r g::1 c k o u::2 c s::1 l o t t k c c o Figure 3: Example of a perfect hash finite automaton for the words clock, dock, dog, duck, dust, rock, rocker, stock. Summing the weights along an accepting path in the automaton yields the rank of the word in alphabetic ordering. the corpus to a sequence of integers, by mapping each word to its word code. Suffix array construction then proceeds on the basis of word codes, rather than character codes. This approach has several advantages. The representation of both the corpus and the suffix array is more compact. If the average word length is k, then the corresponding arrays are k times smaller (but we need some additional space for the perfect hash automaton). In Dutch, the average word length k is about 5, and we obtained space savings in that order. If the suffix array is shorter, sorting should be faster too (but we need some additional time to compute the perfect hash automaton). In our experience, sorting is about twice as fast for word codes. 4.4 Computing parsability table To compute parsability scores, we assume there are two corpora cm and ca, where the first is a subcorpus of the second. cm contains all sentences for which parsing was not successful. ca contains all sentences overall. For both corpora, we compute the frequency of all n-grams for all n; n-grams with a frequency below a specified frequency cutoff are ignored. Note that we need not impose an a priori maximum value for n; since there is a frequency cut-off, for some n there simply aren’t any sequences which occur more frequently than this cut-off. The two n-gram frequency files are organized in such a way that shorter n-grams precede longer n-grams. The two frequency files are then combined as follows. Since the frequency file corresponding to cm is (much) smaller than the file corresponding to ca, we read the first file into memory (into a hash data structure). We then iteratively read an n-gram frequency from the second file, and compute the parsability of that n-gram. In doing so, we keep track of the parsability scores assigned to previous (hence shorter) n-grams, in order to ensure that larger n-grams are only reported in case the parsability scores decrease. The final step consists in sorting all remaining n-grams with respect to their parsability. To give an idea of the practicality of the approach, consider the following data for one of the experiments described above. For a corpus of 2,927,016 sentences (38,846,604 words, 209Mb), it takes about 150 seconds to construct the perfect hash automaton (mostly sorting). The automaton is about 5Mb in size, to represent 677,488 distinct words. To compute the suffix array and frequencies of all n-grams (cut-off=5), about 15 minutes of CPU-time are required. 
Maximum runtime memory requirements are about 400Mb. The result contains frequencies for 1,641,608 distinct n-grams. Constructing the parsability scores on the basis of the n-gram files only takes 10 seconds CPU-time, resulting in parsability scores for 64,998 n-grams (since there are much fewer n-grams which actually occur in problematic sentences). The experiment was performed on an Intel Pentium III, 1266MHz machine running Linux. The software is freely available from http://www.let.rug.nl/~vannoord/software.html.

5 Discussion

An error mining technique has been presented which is very helpful in identifying problems in hand-coded grammars and lexicons for parsing. An important ingredient of the technique consists of the computation of the frequency of n-grams of words for arbitrary values of n. It was shown how a new combination of suffix arrays and perfect hash finite automata allows an efficient implementation.

A number of potential improvements can be envisioned. In the definition of R(w), the absolute frequency of w is ignored. Yet, if w is very frequent, R(w) is more reliable than if w is not frequent. Therefore, as an alternative, we also experimented with a set-up in which an exact binomial test is applied to compute a confidence interval for R(w). Results can then be ordered with respect to the maximum of these confidence intervals. This procedure seemed to improve results somewhat, but is computationally much more expensive. For the first experiment described above, this alternative set-up results in a parsability table of 42K word tuples, whereas the original method produces a table of 65K word tuples.

R     C   n-gram
0.00  8   Beter ten
0.20  12  ten halve
0.15  11  halve gekeerd
0.00  8   gekeerd dan
0.09  10  dan ten hele
0.69  15  dan ten
0.17  10  ten hele
0.00  10  hele gedwaald
0.00  8   gedwaald .
0.20  10  gedwaald

Table 3: Multiple n-grams indicating the same error.

The parsability table only contains longer n-grams if these have a lower parsability than the corresponding shorter n-grams. Although this heuristic appears to be useful, it is still possible that a single problem is reflected multiple times in the parsability table. For longer problematic sequences, the parsability table typically contains partially overlapping parts of that sequence. This phenomenon is illustrated in table 3 for the idiom Beter ten halve gekeerd dan ten hele gedwaald discussed earlier. This suggests that it would be useful to consider other heuristics to eliminate such redundancy, perhaps by considering statistical feature selection methods.

The definition used in this paper to identify a successful parse is a rather crude one. Given that grammars of the type assumed here typically assign very many analyses to a given sentence, it is often the case that a specific problem in the grammar or lexicon rules out the intended parse for a given sentence, but alternative (wrong) parses are still possible. What appears to be required is a (statistical) model which is capable of judging the plausibility of a parse. We investigated whether the maximum entropy score S(x) (equation 1) can be used to indicate parse plausibility. In this set-up, we considered a parse successful only if S(x) of the best parse is above a certain threshold. However, the resulting parsability table did not appear to indicate problematic word sequences, but rather word sequences typically found in elliptical sentences were returned.
Apparently, the grammatical rules used for ellipsis are heavily punished by the maximum entropy model in order that these rules are used only if other rules are not applicable.

Acknowledgments

This research was supported by the PIONIER project Algorithms for Linguistic Processing funded by NWO.

References

Gosse Bouma, Gertjan van Noord, and Robert Malouf. 2001. Wide coverage computational analysis of Dutch. In W. Daelemans, K. Sima'an, J. Veenstra, and J. Zavrel, editors, Computational Linguistics in the Netherlands 2000.

Kenneth Ward Church. 1995. Ngrams. ACL 1995, MIT Cambridge MA, June 16. ACL Tutorial.

Claudio Lucchiesi and Tomasz Kowaltowski. 1993. Applications of finite automata representing large vocabularies. Software Practice and Experience, 23(1):15–30, Jan.

Robert Malouf and Gertjan van Noord. 2004. Wide coverage parsing with stochastic attribute value grammars. In Beyond Shallow Analyses: Formalisms and Statistical Modeling for Deep Analysis, Sanya City, Hainan, China. IJCNLP-04 Workshop.

Udi Manber and Gene Myers. 1990. Suffix arrays: A new method for on-line string searching. In Proceedings of the First Annual ACM-SIAM Symposium on Discrete Algorithms, pages 319–327. http://manber.com/publications.html.

Robbert Prins and Gertjan van Noord. 2003. Reinforcing parser preferences through tagging. Traitement Automatique des Langues, 44(3):121–139. In press.

Dominique Revuz. 1991. Dictionnaires et lexiques: méthodes et algorithmes. Ph.D. thesis, Institut Blaise Pascal, Paris, France. LITP 91.44.

Emmanuel Roche. 1995. Finite-state tools for language processing. ACL 1995, MIT Cambridge MA, June 16. ACL Tutorial.

Leonoor van der Beek, Gosse Bouma, Robert Malouf, and Gertjan van Noord. 2002a. The Alpino dependency treebank. In Mariët Theune, Anton Nijholt, and Hendri Hondorp, editors, Computational Linguistics in the Netherlands 2001. Selected Papers from the Twelfth CLIN Meeting, pages 8–22. Rodopi.

Leonoor van der Beek, Gosse Bouma, and Gertjan van Noord. 2002b. Een brede computationele grammatica voor het Nederlands. Nederlandse Taalkunde, 7(4):353–374. In Dutch.

Mikio Yamamoto and Kenneth W. Church. 2001. Using suffix arrays to compute term frequency and document frequency for all substrings in a corpus. Computational Linguistics, 27(1):1–30.
Alternative Approaches for Generating Bodies of Grammar Rules Gabriel Infante-Lopez and Maarten de Rijke Informatics Institute, University of Amsterdam {infante,mdr}@science.uva.nl Abstract We compare two approaches for describing and generating bodies of rules used for natural language parsing. In today’s parsers rule bodies do not exist a priori but are generated on the fly, usually with methods based on n-grams, which are one particular way of inducing probabilistic regular languages. We compare two approaches for inducing such languages. One is based on n-grams, the other on minimization of the Kullback-Leibler divergence. The inferred regular languages are used for generating bodies of rules inside a parsing procedure. We compare the two approaches along two dimensions: the quality of the probabilistic regular language they produce, and the performance of the parser they were used to build. The second approach outperforms the first one along both dimensions. 1 Introduction N-grams have had a big impact on the state of the art in natural language parsing. They are central to many parsing models (Charniak, 1997; Collins, 1997, 2000; Eisner, 1996), and despite their simplicity n-gram models have been very successful. Modeling with n-grams is an induction task (Gold, 1967). Given a sample set of strings, the task is to guess the grammar that produced that sample. Usually, the grammar is not be chosen from an arbitrary set of possible grammars, but from a given class. Hence, grammar induction consists of two parts: choosing the class of languages amongst which to search and designing the procedure for performing the search. By using n-grams for grammar induction one addresses the two parts in one go. In particular, the use of n-grams implies that the solution will be searched for in the class of probabilistic regular languages, since n-grams induce probabilistic automata and, consequently, probabilistic regular languages. However, the class of probabilistic regular languages induced using n-grams is a proper subclass of the class of all probabilistic regular languages; n-grams are incapable of capturing long-distance relations between words. At the technical level the restricted nature of n-grams is witnessed by the special structure of the automata induced from them, as we will see in Section 4.2. N-grams are not the only way to induce regular languages, and not the most powerful way to do so. There is a variety of general methods capable of inducing all regular languages (Denis, 2001; Carrasco and Oncina, 1994; Thollard et al., 2000). What is their relevance for natural language parsing? Recall that regular languages are used for describing the bodies of rules in a grammar. Consequently, the quality and expressive power of the resulting grammar is tied to the quality and expressive power of the regular languages used to describe them. And the quality and expressive power of the latter, in turn, are influenced directly by the method used to induce them. These observations give rise to a natural question: can we gain anything in parsing from using general methods for inducing regular languages instead of methods based on n-grams? Specifically, can we describe the bodies of grammatical rules more accurately and more concisely by using general methods for inducing regular languages? 
In the context of natural language parsing we present an empirical comparison between algorithms for inducing regular languages using ngrams on the one hand, and more general algorithms for learning the general class of regular language on the other hand. We proceed as follows. We generate our training data from the Wall Street Journal Section of the Penn Tree Bank (PTB), by transforming it to projective dependency structures, following (Collins, 1996), and extracting rules from the result. These rules are used as training material for the rule induction algorithms we consider. The automata produced this way are then used to build grammars which, in turn, are used for parsing. We are interested in two different aspects of the use of probabilistic regular languages for natural language parsing: the quality of the induced automata and the performance of the resulting parsers. For evaluation purposes, we use two different metrics: perplexity for the first aspect and percentage of correct attachments for the second. The main results of the paper are that, measured in terms of perplexity, the automata induced by algorithms other than n-grams describe the rule bodies better than automata induced using n-gram-based algorithms, and that, moreover, the gain in automata quality is reflected by an improvement in parsing performance. We also find that the parsing performance of both methods (n-grams vs. general automata) can be substantially improved by splitting the training material into POS categories. As a side product, we find empirical evidence to suggest that the effectiveness of rule lexicalization techniques (Collins, 1997; Sima’an, 2000) and parent annotation techniques (Klein and Manning, 2003) is due to the fact that both lead to a reduction in perplexity in the automata induced from training corpora. Section 2 surveys our experiments, and later sections provide details of the various aspects. Section 3 offers details on our grammatical framework, PCW-grammars, on transforming automata to PCW-grammars, and on parsing with PCWgrammars. Section 4 explains the starting point of this process: learning automata, and Section 5 reports on parsing experiments. We discuss related work in Section 6 and conclude in Section 7. 2 Overview We want to build grammars using different algorithms for inducing their rules. Our main question is aimed at understanding how different algorithms for inducing regular languages impact the parsing performance with those grammars. A second issue that we want to explore is how the grammars perform when the quality of the training material is improved, that is, when the training material is separated into part of speech (POS) categories before the regular language learning algorithms are run. We first transform the PTB into projective dependencies structures following (Collins, 1996). From the resulting tree bank we delete all lexical information except POS tags. Every POS in a tree belonging to the tree-bank has associated to it two different, possibly empty, sequences of right and left dependents, respectively. We extract all these sequences for all trees, producing two different sets containing right and left sequences of dependents respectively. These two sets form the training material used for building four different grammars. The four grammars differ along two dimensions: the number of automata used for building them and the algorithm used for inducing the automata. 
As to the latter dimension, in Section 4 we use two algorithms: the Minimum Discriminative Information (MDI) algorithm, and a bigram-based algorithm. As to the former dimension, two of the grammars are built using only two different automata, each of which is built using the two sample set generated from the PTB. The other two grammars were built using two automata per POS, exploiting a split of the training samples into multiple samples, two samples per POS, to be precise, each containing only those samples where the POS appeared as the head. The grammars built from the induced automata are so-called PCW-grammars (see Section 3), a formalism based on probabilistic context free grammars (PCFGs); as we will see in Section 3, inferring them from automata is almost immediate. 3 Grammatical Framework We briefly detail the grammars we work with (PCW-grammars), how automata give rise to these grammars, and how we parse using them. 3.1 PCW-Grammars We need a grammatical framework that models rule bodies as instances of a regular language and that allows us to transform automata to grammars as directly as possible. We decided to embed them in the general grammatical framework of CW-grammars (Infante-Lopez and de Rijke, 2003): based on PCFGs, they have a clear and wellunderstood mathematical background and we do not need to implement ad-hoc parsing algorithms. A probabilistic constrained W-grammar (PCWgrammar) consists of two different sets of PCF-like rules called pseudo-rules and meta-rules respectively and three pairwise disjoint sets of symbols: variables, non-terminals and terminals. Pseudorules and meta-rules provide mechanisms for building ‘real’ rewrite rules. We use α w =⇒β to indicate that α should be rewritten as β. In the case of PCWgrammars, rewrite rules are built by first selecting a pseudo-rule, and then using meta-rules for instantiating all the variables in the body of the pseudo-rule. To illustrate these concepts, we provide an example. Let W = (V, NT, T, S, m −→, s−→) be a CWgrammar such that the set of variable, non-terminals meta-rules pseudo-rules Adj m −→0.5 AdjAdj S s−→1 Adj Noun Adj m −→0.5 Adj Adj s−→0.1 big Noun s−→1 ball ... and terminals are defined as follows: V = {Adj }, NT = {S, Adj, Noun}, T = {ball, big, fat, red, green, . . .}. As usual, the numbers attached to the arrows indicate the probabilities of the rules. The rules defined by W have the following shape: S w =⇒Adj ∗Noun. Suppose now that we want to build the rule S w =⇒Adj Adj Noun. We take the pseudo-rule S s−→1 Adj Noun and instantiate the variable Adj with Adj Adj to get the desired rule. The probability for it is 1 × 0.5 × 0.5, that is, the probability of the derivation for Adj Adj times the probability of the pseudo-rule used. Trees for this particular grammar are flat, with a main node S and all the adjectives in it as daughters. An example derivation is given in Figure 1(a). 3.2 From Automata to Grammars Now that we have introduced PCW-grammars, we describe how we build them from the automata that we are going to induce in Section 4. Since we will induce two families of automata (“ManyAutomata” where we use two automata per POS, and “One-Automaton” where we use only two automata to fit every POS), we need to describe two automata-to-grammar transformations. Let’s start with the case where we build two automata per POS. Let w be a POS in the PTB; let Aw L and Aw R be the two automata associated to it. 
Let Gw L and Gw R be the PCFGs equivalent to Aw L and Aw R, respectively, following (Abney et al., 1999), and let Sw L and Sw R be the starting symbols of Gw L and Gw R, respectively. We build our final grammar G with starting symbol S, by defining its meta-rules as the disjoint union of all rules in Gw L and Gw R (for all POS w), its set of pseudo-rules as the union of the sets {W s−→1 Sw L wSw R and S s−→1 Sw LwSw R}, where W is a unique new variable symbol associated to w. When we use two automata for all parts of speech, the grammar is defined as follows. Let AL and AR be the two automata learned. Let GL and GR be the PCFGs equivalent to AL and AR, and let SL and SR be the starting symbols of GL and GR, respectively. Fix a POS w in the PTB. Since the automata are deterministic, there exist states Sw L and Sw R that are reachable from SL and SR, respectively, by following the arc labeled with w. Define a grammar as in the previous case. Its starting symbol is S, its set of meta-rules is the disjoint union of all rules in Gw L and Gw R (for all POS w), its set of pseudorules is {W s−→1 Sw LwSw R, S s−→1 Sw L wSw R : w is a POS in the PTB and W is a unique new variable symbol associated to w}. 3.3 Parsing PCW-Grammars Parsing PCW-grammars requires two steps: a generation-rule step followed by a tree-building step. We now explain how these two steps can be carried out in one go. Parsing with PCW-grammars can be viewed as parsing with PCF grammars. The main difference is that in PCW-parsing derivations for variables remain hidden in the final tree. To clarify this, consider the trees depicted in Figure 1; the tree in part (a) is the CW-tree corresponding to the word red big green ball, and the tree in part (b) is the same tree but now the instantiations of the metarules that were used have been made visible. S Adj red Adj big Adj green Noun ball S Adj 1 Adj 1 Adj 1 Adj red Adj big Adj green Noun ball (a) (b) Figure 1: (a) A tree generated by W. (b) The same tree with meta-rule derivations made visible. To adapt a PCFG to parse CW-grammars, we need to define a PCF grammar for a given PCWgrammar by adding the two sets of rules while making sure that all meta-rules have been marked somehow. In Figure 1(b) the head symbols of meta-rules have been marked with the superscript 1. After parsing the sentence with the PCF parser, all marked rules should be collapsed as shown in part (a). 4 Building Automata The four grammars we intend to induce are completely defined once the underlying automata have been built. We now explain how we build those automata from the training material. We start by detailing how the material is generated. 4.1 Building the Sample Sets We transform the PTB, sections 2–22, to dependency structures, as suggested by (Collins, 1999). All sentences containing CC tags are filtered out, following (Eisner, 1996). We also eliminate all word information, leaving only POS tags. For each resulting dependency tree we extract a sample set of right and left sequences of dependents as shown in Figure 2. From the tree we generate a sample set with all right sequences of dependents {ϵ, ϵ, ϵ}, and another with all left sequences {ϵ, ϵ, red big green}. The sample set used for automata induction is the union of all individual tree sample sets. 4.2 Learning Probabilistic Automata Probabilistic deterministic finite state automata (PDFA) inference is the problem of inducing a stochastic regular grammar from a sample set of strings belonging to an unknown regular language. 
[Figure 2: (a), (b) Dependency representations of Figure 1. (c) Sample instances extracted from this tree: for the two jj heads both the left and the right sequences of dependents are ϵ; for the nn head the left sequence is "red big green" and the right sequence is ϵ.]

The most direct approach for solving the task is by using n-grams. The n-gram induction algorithm adds a state to the resulting automaton for each sequence of symbols of length n it has seen in the training material; it also adds an arc between states aβ and βb labeled b, if the sequence aβb appears in the training set. The probability assigned to the arc (aβ, βb) is proportional to the number of times the sequence aβb appears in the training set. For the remainder, we take n-grams to be bigrams.

There are other approaches to inducing regular grammars besides ones based on n-grams. The first algorithm to learn PDFAs was ALERGIA (Carrasco and Oncina, 1994); it learns cyclic automata with the so-called state-merging method. The Minimum Discrimination Information (MDI) algorithm (Thollard et al., 2000) improves over ALERGIA and uses Kullback-Leibler divergence for deciding when to merge states. We opted for the MDI algorithm as an alternative to n-gram based induction algorithms, mainly because its working principles are radically different from those of the n-gram-based algorithm. The MDI algorithm first builds an automaton that only accepts the strings in the sample set by merging common prefixes, thus producing a tree-shaped automaton in which each transition has a probability proportional to the number of times it is used while generating the positive sample. The MDI algorithm then traverses the lattice of all possible partitions for this general automaton, attempting to merge states that satisfy a trade-off that can be specified by the user.

Specifically, assume that A1 is a temporary solution of the algorithm and that A2 is a tentative new solution derived from A1. ∆(A1, A2) = D(A0||A2) − D(A0||A1) denotes the divergence increment while going from A1 to A2, where D(A0||Ai) is the Kullback-Leibler divergence or relative entropy between the two distributions generated by the corresponding automata (Cover and Thomas, 1991). The new solution A2 is compatible with the training data if the divergence increment relative to the size reduction, that is, the reduction of the number of states, is small enough. Formally, let alpha denote a compatibility threshold; then the compatibility is satisfied if

    ∆(A1, A2) / (|A1| − |A2|) < alpha.

For this learning algorithm, alpha is the unique parameter; we tuned it to get better quality automata.

4.3 Optimizing Automata

We use three measures to evaluate the quality of a probabilistic automaton (and set the value of alpha optimally). The first, called test sample perplexity (PP), is based on the per-symbol log-likelihood of strings x belonging to a test sample according to the distribution defined by the automaton. Formally,

    LL = −(1/|S|) Σ_{x∈S} log2 P(x),

where P(x) is the probability assigned to the string x by the automaton. The perplexity PP is defined as PP = 2^LL. The minimal perplexity PP = 1 is reached when the next symbol is always predicted with probability 1 from the current state, while PP = |Σ| corresponds to uniformly guessing from an alphabet of size |Σ|.

The second measure we used to evaluate the quality of an automaton is the number of missed samples (MS). A missed sample is a string in the test sample that the automaton failed to accept. One such instance suffices to have PP undefined (LL infinite).
Since an undefined value of PP only witnesses the presence of at least one MS we decided to count the number of MS separately, and compute PP without taking MS into account. This choice leads to a more accurate value of PP, while, moreover, the value of MS provides us with information about the generalization capacity of automata: the lower the value of MS, the larger the generalization capacities of the automaton. The usual way to circumvent undefined perplexity is to smooth the resulting automaton with unigrams, thus increasing the generalization capacity of the automaton, which is usually paid for with an increase in perplexity. We decided not to use any smoothing techniques as we want to compare bigram-based automata with MDI-based automata in the cleanest possible way. The PP and MS measures are relative to a test sample; we transformed section 00 of the PTB to obtain one.1 1If smoothing techniques are used for optimizing automata based on n-grams, they should also be used for optimizing MDI-based automata. A fair experiment for comparing the two automata-learning algorithms using smoothing techniques would consist of first building two pairs of automata. The first pair would consist of the unigram-based automaton together The third measure we used to evaluate the quality of automata concerns the size of the automata. We compute NumEdges and NumStates (the number of edges and the number of states of the automaton). We used PP, US, NumEdges, and NumStates to compare automata. We say that one automaton is of a better quality than another if the values of the 4 indicators are lower for the first than for the second. Our aim is to find a value of alpha that produces an automaton of better quality than the bigram-based counterpart. By exhaustive search, using all training data, we determined the optimal value of alpha. We selected the value of alpha for which the MDI-based automaton outperforms the bigram-based one.2 We exemplify our procedure by considering automata for the “One-Automaton” setting (where we used the same automata for all parts of speech). In Figure 3 we plot all values of PP and MS computed for different values of alpha, for each training set (i.e., left and right). From the plots we can identify values of alpha that produce automata having better values of PP and MS than the bigram-based ones. All such alphas are the ones inside the marked areas; automata induced using those alphas possess a lower value of PP as well as a smaller number of MS, as required. Based on these explorations MDI Bigrams Right Left Right Left NumEdges 268 328 20519 16473 NumStates 12 15 844 755 Table 1: Automata sizes for the “One-Automaton” case, with alpha = 0.0001. we selected alpha = 0.0001 for building the automata used for grammar induction in the “OneAutomaton” case. Besides having lower values of PP and MS, the resulting automata are smaller than the bigram based automata (Table 1). MDI compresses information better; the values in the tables with an MDI-based automaton outperforming the unigrambased one. The second one, a bigram-based automata together with an MDI-based automata outperforming the bigram-based one. Second, the two n-gram based automata smoothed into a single automaton have to be compared against the two MDIbased automata smoothed into a single automaton. It would be hard to determine whether the differences between the final automata are due to smoothing procedure or to the algorithms used for creating the initial automata. 
By leaving smoothing out of the picture, we obtain a clearer understanding of the differences between the two automata induction algorithms. 2An equivalent value of alpha can be obtained independently of the performance of the bigram-based automata by defining a measure that combines PP and MS. This measure should reach its maximum when PP and MS reach their minimums. suggest that MDI finds more regularities in the sample set than the bigram-based algorithm. To determine optimal values for the “ManyAutomata” case (where we learned two automata for each POS) we used the same procedure as for the “One-Automaton” case, but now for every individual POS. Because of space constraints we are not able to reproduce analogues of Figure 3 and Table 1 for all parts of speech. Figure 4 contains representative plots; the remaining plots are available online at http://www.science. uva.nl/˜infante/POS. Besides allowing us to find the optimal alphas, the plots provide us with a great deal of information. For instance, there are two remarkable things in the plots for VBP (Figure 4, second row). First, it is one of the few examples where the bigrambased algorithm performs better than the MDI algorithm. Second, the values of PP in this plot are relatively high and unstable compared to other POS plots. Lower perplexity usually implies better quality automata, and as we will see in the next section, better automata produce better parsers. How can we obtain lower PP values for the VBP automata? The class of words tagged with VBP harbors many different behaviors, which is not surprising, given that verbs can differ widely in terms of, e.g., their subcategorization frames. One way to decrease the PP values is to split the class of words tagged with VBP into multiple, more homogeneous classes. Note from Figures 3 and 4 that splitting the original sample sets into POS-dependent sets produces a huge decrease on PP. One attempt to implement this idea is lexicalization: increasing the information in the POS tag by adding the lemma to it (Collins, 1997; Sima’an, 2000). Lexicalization splits the class of verbs into a family of singletons producing more homogeneous classes, as desired. A different approach (Klein and Manning, 2003) consists in adding head information to dependents; words tagged with VBP are then split into classes according to the words that dominate them in the training corpus. Some POS present very high perplexities, but tags such as DT present a PP close to 1 (and 0 MS) for all values of alpha. Hence, there is no need to introduce further distinctions in DT, doing so will not increase the quality of the automata but will increase their number; splitting techniques are bound to add noise to the resulting grammars. The plots also indicate that the bigram-based algorithm captures them as well as the MDI algorithm. In Figure 4, third row, we see that the MDI-based automata and the bigram-based automata achieve the same value of PP (close to 5) for NN, but 0 5 10 15 20 25 5e-05 0.0001 0.00015 0.0002 0.00025 0.0003 0.00035 0.0004 Alpha Unique Automaton - Left Side MDI Perplex. (PP) Bigram Perplex. (PP) MDI Missed Samples (MS) Bigram Missed Samples (MS) 0 5 10 15 20 25 30 5e-05 0.0001 0.00015 0.0002 0.00025 0.0003 0.00035 0.0004 Alpha Unique Automaton - Right Side MDI Perplex. (PP) Bigram Perplex. (PP) MDI Missed Samples (MS) Bigram Missed Samples (MS) Figure 3: Values of PP and MS for automata used in building One-Automaton grammars. (X-axis): alpha. (Y-axis): missed samples (MS) and perplexity (PP). 
The two constant lines represent the values of PP and MS for the bigram-based automata. 3 4 5 6 7 8 9 0.0e+00 2.0e-05 4.0e-05 6.0e-05 8.0e-05 1.0e-04 1.2e-04 1.4e-04 1.6e-04 1.8e-04 2.0e-04 Alpha VBP - LeftSide MDI Perplex. (PP) Bigram Perplex. (PP) MDI Missed Samples (MS) Bigram Missed Samples (MS) 3 4 5 6 7 8 9 0.0e+00 2.0e-05 4.0e-05 6.0e-05 8.0e-05 1.0e-04 1.2e-04 1.4e-04 1.6e-04 1.8e-04 2.0e-04 Alpha VBP - LeftSide MDI Perplex. (PP) Bigram Perplex. (PP) MDI Missed Samples (MS) Bigram Missed Samples (MS) 0 5 10 15 20 25 30 0.0e+00 2.0e-05 4.0e-05 6.0e-05 8.0e-05 1.0e-04 1.2e-04 1.4e-04 1.6e-04 1.8e-04 2.0e-04 Alpha NN - LeftSide MDI Perplex. (PP) Bigram Perplex. (PP) MDI Missed Samples (MS) Bigram Missed Samples (MS) 0 5 10 15 20 25 30 0.0e+00 2.0e-05 4.0e-05 6.0e-05 8.0e-05 1.0e-04 1.2e-04 1.4e-04 1.6e-04 1.8e-04 2.0e-04 Alpha NN - RightSide MDI Perplex. (PP) Bigram Perplex. (PP) MDI Missed Samples (MS) Bigram Missed Samples (MS) Figure 4: Values of PP and MS for automata for ad-hoc automata the MDI misses fewer examples for alphas bigger than 1.4e −04. As pointed out, we built the One-Automaton-MDI using alpha = 0.0001 and even though the method allows us to fine-tune each alpha in the Many-Automata-MDI grammar, we used a fixed alpha = 0.0002 for all parts of speech, which, for most parts of speech, produces better automata than bigrams. Table 2 lists the sizes of the automata. The differences between MDI-based and bigram-based automata are not as dramatic as in the “One-Automaton” case (Table 1), but the former again have consistently lower NumEdges and NumStates values, for all parts of speech, even where bigram-based automata have a lower perplexity. MDI Bigrams POS Right Left Right Left DT NumEdges 21 14 35 39 NumStates 4 3 25 17 VBP NumEdges 300 204 2596 1311 NumStates 50 45 250 149 NN NumEdges 104 111 3827 4709 NumStates 6 4 284 326 Table 2: Automata sizes for the three parts of speech in the “Many-Automata” case, with alpha = 0.0002 for parts of speech. 5 Parsing the PTB We have observed remarkable differences in quality between MDI-based and bigram-based automata. Next, we present the parsing scores, and discuss the meaning of the measures observed for automata in the context of the grammars they produce. The measure that translates directly from automata to grammars is automaton size. Since each automaton is transformed into a PCFG, the number of rules in the resulting grammar is proportional to the number of arcs in the automaton, and the number of nonterminals is proportional to the number of states. From Table 3 we see that MDI compresses information better: the sizes of the grammars produced by the MDI-based automata are an order of magnitude smaller that those produced using bigram-based automata. Moreover, the “One-Automaton” versions substantially reduce the size of the resulting grammars; this is obviously due to the fact that all POS share the same underlying automaton so that information does not need to be duplicated across parts of speech. To understand the meaning of PP and One Automaton Many Automata MDI Bigram MDI Bigram 702 38670 5316 68394 Table 3: Number of rules in the grammars built. MS in the context of grammars it helps to think of PCW-parsing as a two-phase procedure. The first phase consists of creating the rules that will be used in the second phase. And the second phase consists in using the rules created in the first phase as a PCFG and parsing the sentence using a PCF parser. 
Since regular expressions are used to build rules, the values of PP and MS quantify the quality of the set of rules built for the second phase: MS gives us a measure of the number rule bodies that should be created but that will not be created, and, hence, it gives us a measure of the number of “correct” trees that will not be produced. PP tells us how uncertain the first phase is about producing rules. Finally, we report on the parsing accuracy. We use two measures, the first one (%Words) was proposed by Lin (1995) and was the one reported in (Eisner, 1996). Lin’s measure computes the fraction of words that have been attached to the right word. The second one (%POS) marks as correct a word attachment if, and only if, the POS tag of the head is the same as that of the right head, i.e., the word was attached to the correct word-class, even though the word is not the correct one in the sentence. Clearly, the second measure is always higher than the first one. The two measures try to capture the performance of the PCW-parser in the two phases described above: (%POS) tries to capture the performance in the first phase, and (%Words) in the second phase. The measures reported in Table 4 are the mean values of (%POS) and (%Words) computed over all sentences in section 23 having length at most 20. We parsed only those sentences because the resulting grammars for bigrams are too big: parsing all sentences without any serious pruning techniques was simply not feasible. From Table 4 MDI Bigrams %Words %POS %Words %POS One-Aut. 0.69 0.73 0.59 0.63 Many-Aut. 0.85 0.88 0.73 0.76 Table 4: Parsing results for the PTB we see that the grammars induced with MDI outperform the grammars created with bigrams. Moreover, the grammar using different automata per POS outperforms the ones built using only a single automaton per side (left or right). The results suggest that an increase in quality of the automata has a direct impact on the parsing performance. 6 Related Work and Discussion Modeling rule bodies is a key component of parsers. N-grams have been used extensively for this purpose (Collins 1996, 1997; Eisner, 1996). In these formalisms the generative process is not considered in terms of probabilistic regular languages. Considering them as such (like we do) has two advantages. First, a vast area of research for inducing regular languages (Carrasco and Oncina, 1994; Thollard et al., 2000; Dupont and Chase, 1998) comes in sight. Second, the parsing device itself can be viewed under a unifying grammatical paradigm like PCW-grammars (Chastellier and Colmerauer, 1969; Infante-Lopez and de Rijke, 2003). As PCWgrammars are PCFGs plus post tree transformations, properties of PCFGs hold for them too (Booth and Thompson, 1973). In our comparison we optimized the value of alpha, but we did not optimize the n-grams, as doing so would mean two different things. First, smoothing techniques would have to be used to combine different order n-grams. To be fair, we would also have to smooth different MDI-based automata, which would leave us in the same point. Second, the degree of the n-gram. We opted for n = 2 as it seems the right balance of informativeness and generalization. N-grams are used to model sequences of arguments, and these hardly ever have length > 3, making higher degrees useless. To make a fair comparison for the Many-Automata grammars we did not tune the MDI-based automata individually, but we picked a unique alpha. 
MDI presents a way to compact rule information on the PTB; of course, other approaches exists. In particular, Krotov et al. (1998) try to induce a CW-grammar from the PTB with the underlying assumption that some derivations that were supposed to be hidden were left visible. The attempt to use algorithms other than n-grams-based for inducing of regular languages in the context of grammar induction is not new; for example, Kruijff (2003) uses profile hidden models in an attempt to quantify free order variations across languages; we are not aware of evaluations of his grammars as parsing devices. 7 Conclusions and Future Work Our experiments support two kinds of conclusions. First, modeling rules with algorithms other than n-grams not only produces smaller grammars but also better performing ones. Second, the procedure used for optimizing alpha reveals that some POS behave almost deterministically for selecting their arguments, while others do not. These findings suggests that splitting classes that behave nondeterministically into homogeneous ones could improve the quality of the inferred automata. We saw that lexicalization and head-annotation seem to attack this problem. Obvious questions for future work arise: Are these two techniques the best way to split non-homogeneous classes into homogeneous ones? Is there an optimal splitting? Acknowledgments We thank our referees for valuable comments. Both authors were supported by the Netherlands Organization for Scientific Research (NWO) under project number 220-80-001. De Rijke was also supported by grants from NWO, under project numbers 36520-005, 612.069.006, 612.000.106, 612.000.207, and 612.066.302. References S. Abney, D. McAllester, and F. Pereira. 1999. Relating probabilistic grammars and automata. In Proc. 37th Annual Meeting of the ACL, pages 542–549. T. Booth and R. Thompson. 1973. Applying probability measures to abstract languages. IEEE Transaction on Computers, C-33(5):442–450. R. Carrasco and J. Oncina. 1994. Learning stochastic regular grammars by means of state merging method. In Proc. ICGI-94, Springer, pages 139–150. E. Charniak. 1997. Statistical parsing with a contextfree grammar and word statistics. In Proc. 14th Nat. Conf. on Artificial Intelligence, pages 598–603. G. Chastellier and A. Colmerauer. 1969. W-grammar. In Proc. 1969 24th National Conf., pages 511–518. M. Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proc. 34th Annual Meeting of the ACL, pages 184–191. M. Collins. 1997. Three generative, lexicalized models for statistical parsing. In Proc. 35th Annual Meeting of the ACL and 8th Conf. of the EACL, pages 16–23. M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, PA. M. Collins. 2000. Discriminative reranking for natural language parsing. In Proc. ICML-2000, Stanford, Ca. T. Cover and J. Thomas. 1991. Elements of Information Theory. Jonh Wiley and Sons, New York. F. Denis. 2001. Learning regular languages from simple positive examples. Machine Learning, 44(1/2):37–66. P. Dupont and L. Chase. 1998. Using symbol clustering to improve probabilistic automaton inference. In Proc. ICGI-98, pages 232–243. J. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proc. COLING96, pages 340–245, Copenhagen, Denmark. J. Eisner. 2000. Bilexical grammars and their cubic-time parsing algorithms. In Advances in Probabilistic and Other Parsing Technologies, pages 29–62. Kluwer. E. M. Gold. 1967. 
Language identification in the limit. Information and Control, 10:447–474. G. Infante-Lopez and M. de Rijke. 2003. Natural language parsing with W-grammars. In Proc. CLIN 2003. D. Klein and C. Manning. 2003. Accurate unlexicalized parsing. In Proc. 41st Annual Meeting of the ACL. A. Krotov, M. Hepple, R.J. Gaizauskas, and Y. Wilks. 1998. Compacting the Penn Treebank grammar. In Proc. COLING-ACL, pages 699–703. G. Kruijff. 2003. 3-phase grammar learning. In Proc. Workshop on Ideas and Strategies for Multilingual Grammar Development. D. Lin. 1995. A dependency-based method for evaluating broad-coverage parsers. In Proc. IJCAI-95. K. Sima’an. 2000. Tree-gram Parsing: Lexical Dependencies and Structual Relations. In Proc. 38th Annual Meeting of the ACL, pages 53–60, Hong Kong, China. F. Thollard, P. Dupont, and C. de la Higuera. 2000. Probabilistic DFA inference using kullback-leibler divergence and minimality. In Proc. ICML 2000.
Adaptive Chinese Word Segmentation 1 Jianfeng Gao*, Andi Wu*, Mu Li*, Chang-Ning Huang*, Hongqiao Li**, Xinsong Xia$, Haowei Qin& * Microsoft Research. {jfgao, andiwu, muli, cnhuang}@microsoft.com ** Beijing Institute of Technology, Beijing. [email protected] $ Peking University, Beijing. [email protected] & Shanghai Jiaotong university, Shanghai. [email protected] 1 This work was done while Hongqiao Li, Xinsong Xia and Haowei Qin were visiting Microsoft Research (MSR) Asia. We thank Xiaodan Zhu for his early contribution, and the three reviewers, one of whom alerted us the related work of (Uchimoto et al., 2001). Abstract This paper presents a Chinese word segmentation system which can adapt to different domains and standards. We first present a statistical framework where domain-specific words are identified in a unified approach to word segmentation based on linear models. We explore several features and describe how to create training data by sampling. We then describe a transformation-based learning method used to adapt our system to different word segmentation standards. Evaluation of the proposed system on five test sets with different standards shows that the system achieves state- of-the-art performance on all of them. 1 Introduction Chinese word segmentation has been a long- standing research topic in Chinese language processing. Recent development in this field shows that, in addition to ambiguity resolution and unknown word detection, the usefulness of a Chinese word segmenter also depends crucially on its ability to adapt to different domains of texts and different segmentation standards. The need of adaptation involves two research issues that we will address in this paper. The first is new word detection. Different domains/applications may have different vocabularies which contain new words/terms that are not available in a general dictionary. In this paper, new words refer to OOV words other than named entities, factoids and morphologically derived words. These words are mostly domain specific terms (e.g. 蜂窝式 ‘cellular’) and time-sensitive political, social or cultural terms (e.g. 三通‘Three Links’, 非典 ‘SARS’). The second issue concerns the customizable display of word segmentation. Different Chinese NLP-enabled applications may have different requirements that call for different granularities of word segmentation. For example, speech recognition systems prefer “longer words” to achieve higher accuracy whereas information retrieval systems prefer “shorter words” to obtain higher recall rates, etc. (Wu, 2003). Given a word segmentation specification (or standard) and/or some application data used as training data, a segmenter with customizable display should be able to provide alternative segmentation units according to the specification which is either pre-defined or implied in the data. In this paper, we first present a statistical framework for Chinese word segmentation, where various problems of word segmentation are solved simultaneously in a unified approach. Our approach is based on linear models where component models are inspired by the source-channel models of Chinese sentence generation. We then describe in detail how the new word identification (NWI) problem is handled in this framework. We explore several features and describe how to create training data by sampling. We evaluate the performance of our segmentation system using an annotated test set, where new words are simulated by sampling. 
We then describe a transformation-based learning (TBL, Brill, 1995) method that is used to adapt our system to different word segmentation standards. We compare the adaptive system to other state-of-the-art systems using four test sets in SIGHAN's First International Chinese Word Segmentation Bakeoff, each of which is constructed according to a different segmentation standard. The performance of our system is comparable to the best systems reported on all four test sets. This demonstrates the possibility of having a single adaptive Chinese word segmenter that is capable of supporting multiple user applications.

Word Class               Model                             Feature Functions, f(S,W)
Context Model            Word-class-based trigram, P(W)    -log(P(W))
Lexical Word (LW)        ---                               1 if S forms a word lexicon entry, 0 otherwise
Morphological Word (MW)  ---                               1 if S forms a morph-lexicon entry, 0 otherwise
Named Entity (NE)        Character/word bigram, P(S|NE)    -log(P(S|NE))
Factoid (FT)             ---                               1 if S can be parsed using a factoid grammar, 0 otherwise
New Word (NW)            ---                               Score of SVM classifier
Figure 1: Context model, word classes, class models, and feature functions.

(Footnote 2) In our system, we define four types of named entity: person name (PN), location name (LN), organization (ON) and transliteration name (TN); ten types of factoid: date, time (TIME), percentage, money, number (NUM), measure, e-mail, phone number, and WWW; and five types of morphologically derived words (MDW): affixation, reduplication, merging, head particle and split.

2 Chinese Word Segmentation with Linear Models

Let S be a Chinese sentence, which is a character string. For all possible word segmentations W, we will choose the most likely one W* which achieves the highest conditional probability P(W|S): W* = argmax_W P(W|S). According to Bayes' decision rule and dropping the constant denominator, we can equivalently perform the following maximization:

    W* = argmax_W P(W) P(S|W).    (1)

Equation (1) represents a source-channel approach to Chinese word segmentation. This approach models the generation process of a Chinese sentence: first, the speaker selects a sequence of concepts W to output, according to the probability distribution P(W); then he attempts to express each concept by choosing a sequence of characters, according to the probability distribution P(S|W). We define a word class as a group of words that are supposed to be generated according to the same distribution (or in the same manner). For instance, all Chinese person names form a word class. We then have multiple channel models, each for one word class. Since a channel model estimates the likelihood that a character string is generated given a word class, it is also referred to as a class model. Similarly, the source model is referred to as the context model because it indicates the likelihood that a word class occurs in a context. We have only one context model, which is a word-class-based trigram model. Figure 1 shows the word classes and class models that we used in our system. We notice that different class models are constructed in different ways (e.g. named entity models are n-gram models trained on corpora whereas factoid models use derivation rules and have binary values). The dynamic value ranges of different class models can be so different that it is improper to combine all models through simple multiplication as in Equation (1). In this study we use linear models.
The method is derived from linear discriminant functions widely used for pattern classification (Duda et al., 2001), and has recently been introduced into NLP tasks by Collins and Duffy (2001). It is also related to log-linear models for machine translation (Och, 2003). In this framework, we have a set of M+1 feature functions fi(S,W), i = 0, …, M. They are derived from the context model (i.e. f0(W)) and the M class models, each for one word class, as shown in Figure 1: for probabilistic models such as the context model or the person name model, the feature functions are defined as the negative logarithm of the corresponding probabilistic models. For each feature function, there is a model parameter λi. The best word segmentation W* is determined by the decision rule

    W* = argmax_W Score(S, W, λ0^M) = argmax_W Σ_{i=0}^{M} λi fi(S, W).    (2)

Below we describe how to optimize the λs. Our method is a discriminative approach inspired by the Minimum Error Rate Training method proposed in Och (2003). Assume that we can measure the number of segmentation errors in W by comparing it with a reference segmentation R using a function Er(R,W). The training criterion is to minimize the count of errors over the training data:

    λ̂1^M = argmin_{λ1^M} Σ_{(S,R)} Er(R, W),    (3)

where W is detected by Equation (2). However, we cannot apply standard gradient descent to optimize model parameters according to Equation (3), because the gradient cannot be computed explicitly (i.e., Er is not differentiable), and there are many local minima in the error surface. We then use a variation called stochastic gradient descent (or unthresholded perceptron, Mitchell, 1997).

    Initialization: λ0 = α; λi = 1, i = 1, …, M.
    For t = 1 … T, j = 1 … N:
        Wj = argmax_W Σ_i λi fi(Sj, W)
        For i = 1 … M:
            λi = λi + η (Score(λ, Sj, Wj) − Score(λ, Sj, Rj)) (fi(Rj) − fi(Wj))
    where λ = {λ0, λ1, …, λM} and η = 0.001.
    Figure 2: The training algorithm for model parameters.

As shown in Figure 2, the algorithm takes T passes over the training set (i.e. N sentences). All parameters are initially set to 1, except for the context model parameter λ0, which is set to a constant α during training and is estimated separately on held-out data. Class model parameters are updated in a simple additive fashion. Notice that Score(λ,S,W) is not less than Score(λ,S,R). Intuitively, the update rule increases the parameter values for word classes whose models were "underestimated" (i.e. the expected feature value f(W) is less than the observed feature value f(R)), and decreases the parameter values for word classes whose models were "overestimated" (i.e. f(W) is larger than f(R)). Although the method cannot guarantee a globally optimal solution, it is chosen for our modeling because of its efficiency and the best results achieved in our experiments.

Given the linear models, the procedure of word segmentation in our system is as follows: First, all word candidates (lexical words and OOV words of certain types) are generated, each with its word class tag and class model score. Second, Viterbi search is used to select the best W according to Equation (2). Since the resulting W* is a sequence of segmented words that are either lexical words or OOV words of certain types (e.g. person names, morphological words, new words), we then have a system that can perform word segmentation and OOV word detection simultaneously in a unified approach. Most previous works treat OOV word detection as a separate step after word segmentation.
Compared to these approaches, our method avoids the error propagation problem and can incorporate a variety of knowledge to achieve a globally optimal solution. The superiority of the unified approach has been demonstrated empirically in Gao et al. (2003), and will also be discussed in Section 5.

3 New Word Identification

New words in this section refer to OOV words that are neither recognized as named entities or factoids nor derived by morphological rules. These words are mostly domain-specific and/or time-sensitive. The identification of such new words has not been studied extensively before. It is an important issue that has a substantial impact on the performance of word segmentation. For example, approximately 30% of OOV words in SIGHAN's PK corpus (see Table 1) are new words of this type. There has been previous work on detecting Chinese new words from a large corpus in an off-line manner and updating the dictionary before word segmentation. However, our approach is able to detect new words on-line, i.e. to spot new words in a sentence on the fly during the process of word segmentation, where widely used statistical features such as mutual information or term frequency are not available. For brevity of discussion, we will focus on the identification of 2-character new words, denoted as NW_11. Other types of new words such as NW_21 (a 2-character word followed by a character) and NW_12 can be detected similarly (e.g. by viewing the 2-character word as an inseparable unit, like a character). Below, we describe the class model and context model for NWI, and the creation of training data by sampling.

3.1 Class Model

We use a classifier (an SVM in our experiments) to estimate the likelihood that two adjacent characters form a new word. Of the large number of features we experimented with, three linguistically motivated features were chosen due to their effectiveness and availability for on-line detection. They are Independent Word Probability (IWP), Anti-Word Pair (AWP), and Word Formation Analogy (WFA). Below we describe each feature in turn. In Section 3.2, we describe the way the training data (a new word list) for the classifier is created by sampling.

IWP is a real-valued feature. Most Chinese characters can be used either as independent words or as component parts of multi-character words, or both. The IWP of a single character is the likelihood for this character to appear as an independent word in texts (Wu and Jiang, 2000):

    IWP(x) = C(x, W) / C(x),    (4)

where C(x, W) is the number of occurrences of the character x as an independent word in the training data, and C(x) is the total number of occurrences of x in the training data. We assume that the IWP of a character string is the product of the IWPs of the component characters. Intuitively, the lower the IWP value, the more likely the character string forms a new word. In our implementation, the training data is word-segmented.

AWP is a binary feature derived from IWP. For example, the value of AWP of an NW_11 candidate ab is defined as: AWP(ab) = 1 if IWP(a) > θ or IWP(b) > θ, 0 otherwise, where θ ∈ [0, 1] is a pre-set threshold. Intuitively, if one of the component characters is very likely to be an independent word, it is unlikely to be able to form a word with any other characters. While IWP considers all component characters in a new word candidate, AWP only considers the one with the maximal IWP value.

WFA is a binary feature.
Given a character pair (x, y), a character (or a multi-character string) z is called a common stem of (x, y) if at least one of the following two conditions holds: (1) the character strings xz and yz are lexical words (i.e. x and y as prefixes); or (2) the character strings zx and zy are lexical words (i.e. x and y as suffixes). We then collect a list of such character pairs, called affix pairs, whose number of common stems is larger than a pre-set threshold. The value of WFA for a given NW_11 candidate ab is defined as: WFA(ab) = 1 if there exists an affix pair (a, x) (or (b, x)) and the string xb (or ax) is a lexical word, 0 otherwise. For example, given an NW_11 candidate 下岗 (xia4-gang3, 'out of work'), we have WFA(下岗) = 1 because (上, 下) is an affix pair (they have 32 common stems such as 任, 游, 台, 车, 面, 午, 班) and 上岗 (shang4-gang3, 'take over a shift') is a lexical word.

3.2 Context Model

The motivation for using a context model for NWI is two-fold. The first reason is to capture useful contextual information. For example, new words are more likely to be nouns than pronouns, and POS tagging is context-sensitive. The second reason is more important. As described in Section 2, with a context model, NWI can be performed simultaneously with the other word segmentation tasks (e.g. word break, named entity recognition and morphological analysis) in a unified approach. However, it is difficult to develop a training corpus where new words are annotated, because "we usually do not know what we don't know". Our solution is Monte Carlo simulation. We sample a set of new words from our dictionary according to the distribution P(NW|w) – the probability that any lexical word w would be a new word. We then generate a new-word-annotated corpus from a word-segmented text corpus.

Now we describe the way P(NW|w) is estimated. It is reasonable to assume that new words are those words whose probability of appearing in a new document is lower than that of general lexical words. Let Pi(k) be the probability that word wi occurs k times in a document. In our experiments, we assume that P(NW|wi) can be approximated by the probability of wi occurring fewer than K times in a new document:

    P(NW|wi) ≈ Σ_{k=0}^{K−1} Pi(k),    (5)

where the constant K depends on the size of the document: the larger the document, the larger the value. Pi(k) can be estimated using several term distribution models (see Chapter 15.3 in Manning and Schütze, 1999). Following the empirical study in (Gao and Lee, 2000), we use the K-Mixture model (Katz, 1996), which estimates Pi(k) as

    Pi(k) = (1 − α) δ_{k,0} + (α / (β + 1)) (β / (β + 1))^k,    (6)

where δ_{k,0} = 1 if k = 0, and 0 otherwise. α and β are parameters that can be fit using the observed mean λ and the observed inverse document frequency IDF as follows:

    λ = cf / N,   IDF = log2(N / df),   β = λ × 2^IDF − 1 = (cf − df) / df,   α = λ / β,

where cf is the total number of occurrences of word wi in the training data, df is the number of documents in the training data that wi occurs in, and N is the total number of documents. In our implementation, the training data contain approximately 40 thousand documents that have been balanced among domain, style and time.
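To make the estimation in Equations (5) and (6) concrete, here is a minimal Python sketch (ours, not part of the described system) that fits α and β from cf, df and N and computes the approximation of P(NW|w). The function names and the particular choice of K are illustrative assumptions only.

```python
import math

def k_mixture_params(cf, df, N):
    """Fit the K-Mixture parameters from corpus statistics (the fits below Eq. 6).
    cf: total occurrences of the word, df: number of documents containing it,
    N: total number of documents. Assumes cf > df, so that beta > 0."""
    lam = cf / N                       # observed mean
    idf = math.log2(N / df)            # observed inverse document frequency
    beta = lam * 2 ** idf - 1          # equals (cf - df) / df
    alpha = lam / beta
    return alpha, beta

def p_k(k, alpha, beta):
    """K-Mixture probability that the word occurs k times in a document (Eq. 6)."""
    p = (alpha / (beta + 1)) * (beta / (beta + 1)) ** k
    if k == 0:
        p += 1 - alpha                 # the (1 - alpha) * delta_{k,0} term
    return p

def p_new_word(cf, df, N, K=3):
    """Approximate P(NW|w) as the probability of fewer than K occurrences (Eq. 5).
    K is document-size dependent; K = 3 is an arbitrary illustrative choice."""
    alpha, beta = k_mixture_params(cf, df, N)
    return sum(p_k(k, alpha, beta) for k in range(K))

# A word seen 50 times in 40 of 40,000 documents: beta = 0.25, P(NW|w) close to 1.
print(round(p_new_word(cf=50, df=40, N=40000), 5))
```

In this toy case the word is expected to be absent from, or rare in, most new documents, so it receives a high probability of being sampled as a simulated new word.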
4 Adaptation to Different Standards

The word segmentation standard (or standard for brevity) varies from system to system because there is no commonly accepted definition of Chinese words and different applications may have different requirements that call for different granularities of word segmentation. It is ideal to develop a single word segmentation system that is able to adapt to different standards.

[Figure 3: Word internal structure and class-type transformation templates. The figure shows the internal structures of Affixation (Prefix, Stem, Suffix), PersonName (FamilyName, GivenName) and Date (non-terminals Year, Mon, Day with terminals Pre_Y, Dig_Y, Pre_M, Dig_M, Pre_D, Dig_D), together with example templates such as "Condition: 'Affixation'; Actions: Insert a boundary between 'Prefix' and 'Stem' …", "Condition: 'Date'; Actions: Insert a boundary between 'Year' and 'Mon' …", and "Condition: 'PersonName'; Actions: Insert a boundary between 'FamilyName' and 'GivenName' …".]

We consider the following standard adaptation paradigm. Suppose we have a 'general' standard pre-defined by ourselves. We have also created a large amount of training data which are segmented according to this general standard. We then develop a generic word segmenter, i.e. the system described in Sections 2 and 3. Whenever we deploy the segmenter for any application, we need to customize the output of the segmenter according to an application-specific standard, which is not always explicitly defined. However, it is often implicitly defined in a given amount of application data (called adaptation data) from which the specific standard can be partially learned. In our system, the standard adaptation is conducted by a postprocessor which performs an ordered list of transformations on the output of the generic segmenter – removing extraneous word boundaries, and inserting new boundaries – to obtain a word segmentation that meets a different standard. The method we use is transformation-based learning (Brill, 1995), which requires an initial segmentation, a goal segmentation into which we wish to transform the initial segmentation and a space of allowable transformations (i.e. transformation templates). Under the abovementioned adaptation paradigm, the initial segmentation is the output of the generic segmenter. The goal segmentation is adaptation data. The transformation templates can make reference to words (i.e. lexicalized templates) as well as some pre-defined types (i.e. class-type based templates), as described below.

We notice that most variability in word segmentation across different standards comes from those words that are not typically stored in the dictionary. Those words are dynamic in nature and are usually formed through productive morphological processes. In this study, we focus on three categories: morphologically derived words (MDW), named entities (NE) and factoids. For each word class that belongs to these categories (see footnote 2), we define an internal structure similar to (Wu, 2003). The structure is a tree with 'word class' as the root, and 'component types' as the other nodes. There are 30 component types. As shown in Figure 3, the word class Affixation has three component types: Prefix, Stem and Suffix. Similarly, PersonName has two component types and Date has nine – 3 as non-terminals and 6 as terminals. These internal structures are assigned to words by the generic segmenter at run time. The transformation templates for words of the above three categories are of the form (a small illustrative sketch of applying such a template is given after this list):

Condition: word class
Actions:
• Insert – place a new boundary between two component types.
• Delete – remove an existing boundary between two component types.
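The sketch below illustrates, in Python, how a class-type template of the kind listed above could be applied to a word produced by the generic segmenter. The flat (component type, text) representation of the internal structure and the template encoding are our own simplifications for exposition; they are not the paper's actual data structures.

```python
# A word as produced by the generic segmenter: its word class plus the
# component types of its internal structure, flattened left to right.
TEMPLATES = [
    {"condition": "PersonName", "action": "Insert", "between": ("FamilyName", "GivenName")},
    {"condition": "Date",       "action": "Insert", "between": ("Year", "Mon")},
]

def apply_class_type_templates(word_class, parts, templates):
    """parts: list of (component_type, text) pairs for one word.
    Returns the word's surface segments after applying matching Insert templates."""
    # Boundary positions (between part i and i+1) where a split is required.
    cut_after = set()
    for t in templates:
        if t["condition"] != word_class or t["action"] != "Insert":
            continue
        left, right = t["between"]
        for i in range(len(parts) - 1):
            if parts[i][0] == left and parts[i + 1][0] == right:
                cut_after.add(i)
    # Rebuild the surface string, splitting only at the recorded boundaries.
    segments, current = [], ""
    for i, (_, text) in enumerate(parts):
        current += text
        if i in cut_after:
            segments.append(current)
            current = ""
    segments.append(current)
    return segments

# Example: a person name whose internal structure is FamilyName + GivenName.
print(apply_class_type_templates("PersonName",
                                 [("FamilyName", "张"), ("GivenName", "三")],
                                 TEMPLATES))   # ['张', '三']
```

A Delete action would work symmetrically, merging the two segments on either side of a boundary between the named component types.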
Since the application of the transformations derived from the above templates is conditioned on word class and makes reference to component types, we call these templates class-type transformation templates. Some examples are shown in Figure 3. In addition, we also use lexicalized transformation templates of the form:

• Insert – place a new boundary between two lemmas.
• Delete – remove an existing boundary between two lemmas.

Here, lemmas refer to those basic lexical words that cannot be formed by any productive morphological process. They are mostly single characters, bi-character words, and 4-character idioms.

In short, our adaptive Chinese word segmenter consists of two components: (1) a generic segmenter that is capable of adapting to the vocabularies of different domains, and (2) a set of output adaptors, learned from application data, for adapting to different "application-specific" standards.

5 Evaluation

We evaluated the proposed adaptive word segmentation system (henceforth AWS) using five different standards. The training and test corpora of these standards are detailed in Table 1, where MSR is defined by ourselves, and the other four are standards used in SIGHAN's First International Chinese Word Segmentation Bakeoff (Bakeoff test sets for brevity; see Sproat and Emperson (2003) for details).

Corpus                    Abbrev.  # Tr. Word  # Te. Word
'General' standard        MSR      20M         226K
Beijing University        PK       1.1M        17K
U. Penn Chinese Treebank  CTB      250K        40K
Hong Kong City U.         HK       240K        35K
Academia Sinica           AS       5.8M        12K
Table 1: Standards and corpora.

MSR is used as the general standard in our experiments, on the basis of which the generic segmenter has been developed. The training and test corpora were annotated manually, where there is only one allowable word segmentation for each sentence. The training corpus contains approximately 35 million Chinese characters from various domains of text such as newspapers, novels, and magazines. 90% of the training corpus is used for context model training, and 10% is held-out data for model parameter training as shown in Figure 2. The NE class models, as shown in Figure 1, were trained on the corresponding NE lists that were collected separately. The test set contains a total of 225,734 tokens, including 205,162 lexicon/morph-lexicon words, 3,703 PNs, 5,287 LNs, 3,822 ONs, and 4,152 factoids. In Section 5.1, we will describe some simulated test sets that are derived from the MSR test set by sampling NWs from a 98,686-entry dictionary. The four Bakeoff standards are used as 'specific' standards into which we wish to adapt the general standard. We notice in Table 1 that the sizes of the adaptation data sets (i.e. the training corpora of the four Bakeoff standards) are much smaller than that of the MSR training set. The experimental setting thus turns out to be a good simulation of the adaptation paradigm described in Section 4.

The performance of word segmentation is measured through test precision (P), test recall (R), F score (defined as 2PR/(P+R)), the OOV rate for the test corpus (on the Bakeoff corpora, OOV is defined as the set of words in the test corpus not occurring in the training corpus), the recall on OOV words (Roov), and the recall on in-vocabulary words (Riv). We also tested the statistical significance of the results, using the criterion proposed by Sproat and Emperson (2003); all results reported in this section are significantly different from each other.
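The evaluation measures defined above are straightforward to reproduce. The following Python sketch (our own, not from the paper) computes P, R, F, the OOV rate, Roov and Riv by aligning gold and predicted words through their character spans; exact span match is used as the word-level correctness criterion.

```python
def word_spans(words):
    """Map each word of a segmented sentence to its character span (start, end)."""
    spans, start = [], 0
    for w in words:
        spans.append(((start, start + len(w)), w))
        start += len(w)
    return spans

def evaluate(gold_sents, pred_sents, training_vocab):
    """Compute P, R, F, the OOV rate, and the recalls on OOV and in-vocabulary words.
    OOV is defined relative to the training vocabulary, as on the Bakeoff corpora."""
    n_gold = n_pred = n_correct = 0
    oov_total = oov_hit = iv_total = iv_hit = 0
    for gold, pred in zip(gold_sents, pred_sents):
        g, p = word_spans(gold), word_spans(pred)
        pred_spans = {span for span, _ in p}
        n_gold += len(g)
        n_pred += len(p)
        for span, w in g:
            hit = span in pred_spans          # exact character-span match
            n_correct += hit
            if w in training_vocab:
                iv_total += 1
                iv_hit += hit
            else:
                oov_total += 1
                oov_hit += hit
    P, R = n_correct / n_pred, n_correct / n_gold
    return {"P": P, "R": R, "F": 2 * P * R / (P + R),
            "OOV rate": oov_total / n_gold,
            "Roov": oov_hit / oov_total if oov_total else 0.0,
            "Riv": iv_hit / iv_total if iv_total else 0.0}

# Toy example: the OOV word is split by the segmenter, the other words are correct.
gold = [["我们", "非典", "时期"]]
pred = [["我们", "非", "典", "时期"]]
print(evaluate(gold, pred, training_vocab={"我们", "时期"}))
```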
5.1 NWI Results This section discusses two factors that we believe have the most impact on the performance of NWI. First, we compare methods where we use the NWI component (i.e. an SVM classifier) as a post- processor versus as a feature function in the linear models of Equation (2). Second, we compare different sampling methods of creating simulated training data for context model. Which sampling method is best depends on the nature of P(NW|w). As described in Section 3.2, P(NW|w) is unknown and has to be approximated by Pi(k) in our study, so it is expected that the closer P(NW|w) and Pi(k) are, the better the resulting context model. We compare three estimates of Pi(k) in Equation (5) using term models based on Uniform, Possion, and K- Mixture distributions, respectively. Table 2 shows the results of the generic segmenter on three test sets that are derived from the MSR test set using the above three different sampling methods, respectively. For all three distributions, unified approaches (i.e. using NWI component as a feature function) outperform consecutive approaches (i.e. using NWI component as a post- processor). This demonstrates empirically the benefits of using context model for NWI and the unified approach to Chinese word segmentation, as described in 3.2. We also perform NWI on Bakeoff AWS w/o NW AWS w/ NW (post-processor) AWS w/ NW (unified approach) word segmentation word segmentation NW word segmentation NW # of NW P% R% P% R% P% R% P% R% P% R% Uniform 5,682 92.6 94.5 94.7 95.2 64.1 66.8 95.1 95.5 68.1 78.4 Poisson 3,862 93.4 95.6 94.5 95.9 61.4 45.6 95.0 95.7 57.2 60.6 K-Mixture 2,915 94.7 96.4 95.1 96.2 44.1 41.5 95.6 96.2 46.2 60.4 Table 2: NWI results on MSR test set, NWI as post-processor versus unified approach PK CTB P R F OOV Roov Riv P R F OOV Roov Riv 1. AWS w/o adaptation .824 .854 .839 .069 .320 .861 .799 .818 .809 .181 .624 .861 2. AWS .952 .959 .955 .069 .781 .972 .895 .914 .904 .181 .746 .950 3. AWS w/o NWI .949 .963 .956 .069 .741 .980 .875 .910 .892 .181 .690 .959 4. FMM w/ adaptation .913 .946 .929 .069 .524 .977 .805 .874 .838 .181 .521 .952 5. Rank 1 in Bakeoff .956 .963 .959 .069 .799 .975 .907 .916 .912 .181 .766 .949 6. Rank 2 in Bakeoff .943 .963 .953 .069 .743 .980 .891 .911 .901 .181 .736 .949 Table 3: Comparison scores for PK open and CTB open. HK AS P R F OOV Roov Riv P R F OOV Roov Riv 1. AWS w/o adaptation .819 .822 .820 .071 .593 .840 .832 .838 .835 .021 .405 .847 2. AWS .948 .960 .954 .071 .746 .977 .955 .961 .958 .021 .584 .969 3. AWS w/o NWI .937 .958 .947 .071 .694 .978 .958 .943 .951 .021 .436 .969 4. FMM w/ adaptation .818 .823 .821 .071 .591 .841 .930 .947 .939 .021 .160 .964 5. Rank 1 in Bakeoff .954 .958 .956 .071 .788 .971 .894 .915 .904 .021 .426 .926 6. Rank 2 in Bakeoff .863 .909 .886 .071 .579 .935 .853 .892 .872 .021 .236 .906 Table 4: Comparison scores for HK open and AS open. test sets. As shown in Tables 3 and 4 (Rows 2 and 3), the use of NW functions (via the unified approach) substantially improves the word segmentation performance. We find in our experiments that NWs sampled by Possion and K-Mixture are mostly specific and time-sensitive terms, in agreement with our intuition, while NWs sampled by Uniform include more common words and lemmas that are easier to detect. Consequently, by Uniform sampling, the P/R of NWI is the highest but the P/R of the overall word segmentation is the lowest, as shown in Table 2. 
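As a rough illustration of the term models being compared, the sketch below gives the Poisson form and the K-mixture form of Pi(k). The K-mixture parameterization follows Katz (1996) as presented in Manning and Schütze (1999); how the paper actually estimates and applies these distributions when sampling simulated new words is not shown here, so the estimation from collection frequency (cf), document frequency (df) and number of documents should be read as an assumption on our part.

```python
import math

def poisson(k, lam):
    """P(k occurrences of a term in a document) under a Poisson model with rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def k_mixture(k, cf, df, n_docs):
    """Katz's K-mixture (as given in Manning and Schuetze, 1999):
    P(k) = (1 - alpha) * [k == 0] + alpha/(beta + 1) * (beta/(beta + 1))**k,
    with lam = cf/n_docs, beta = (cf - df)/df and alpha = lam/beta."""
    lam = cf / n_docs
    if cf == df:   # degenerate limit: the term never repeats within a document
        return (1 - lam) if k == 0 else (lam if k == 1 else 0.0)
    beta = (cf - df) / df
    alpha = lam / beta
    p = (alpha / (beta + 1)) * (beta / (beta + 1)) ** k
    return p + (1 - alpha) if k == 0 else p
```

Under either model, Pi(k) serves as the approximation to P(NW|w) that drives the sampling of simulated new words, as described in Section 3.2.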
Notice that the three sampling methods are not comparable in terms of P/R of NWI in Table 2 because of different sampling result in different sets of new words in the test set. We then perform NWI on Bakeoff test sets where the sets of new words are less dependent on specific sampling methods. The results however do not give a clear indication which sampling method is the best because the test sets are too small to show the difference. We then leave it to future work a thorough empirical comparison among different sampling methods. 5.2 Standard Adaptation Results The results of standard adaptation on four Bakeoff test sets are shown in Tables 3 and 4. A set of transformations for each standard is learnt using TBL from the corresponding Bakeoff training set. For each test set, we report results using our system with and without standard adaptation (Rows 1 and 2). It turns out that performance improves dramatically across the board in all four test sets. For comparison, we also include in each table the results of using the forward maximum matching (FMM) greedy segmenter as a generic segmenter (Row 4), and the top 2 scores (sorted by F) that are reported in SIGHAN’s First International Chinese Word Segmentation Bakeoff (Rows 5 and 6). We can see that with adaptation, our generic segmenter can achieve state-of-the-art performance on different standards, showing its superiority over other systems. For example, there is no single segmenter in SIGHAN’s Bakeoff, which achieved top-2 ranks in all four test sets (Sproat and Emperson, 2003). We notice in Table 3 and 4 that the quality of adaptation seems to depend largely upon the size of adaptation data: we outperformed the best bakeoff systems in the AS set because the size of the adaptation data is big while we are worse in the CTB set because of the small size of the adaptation data. To verify our speculation, we evaluated the adaptation results using subsets of the AS training set of different sizes, and observed the same trend. However, even with a much smaller adaptation data set (e.g. 250K), we still outperform the best bakeoff results. 6 Related Work Many methods of Chinese word segmentation have been proposed (See Wu and Tseng, 1993; Sproat and Shih, 2001 for reviews). However, it is difficult to compare systems due to the fact that there is no widely accepted standard. There has been less work on dealing with NWI and standard adaptation. All feature functions in Figure 1, except the NW function, are derived from models presented in (Gao et al., 2003). The linear models are similar to what was presented in Collins and Duffy (2001). An alternative to linear models is the log-linear models suggested by Och (2003). See Collins (2002) for a comparison of these approaches. The features for NWI were studied in Wu & Jiang (2000) and Li et al. (2004). The use of sampling was proposed in Della Pietra et al. (1997) and Rosenfeld et al. (2001). There is also a related work on this line in Japanese (Uchimoto et al., 2001). A detailed discussion on differences among the four Bakeoff standards is presented in Wu (2003), which also proposes an adaptive system where the display of the output can be customized by users. The method described in Section 4 can be viewed as an improved version in that the transformations are learnt automatically from adaptation data. The use of TBL for Chinese word segmentation was first suggested in Palmer (1997). 
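Since the adaptation transformations of Section 4 are learned with TBL, it may help to recall the shape of the greedy learning loop (Brill, 1995). The sketch below is a generic rendering with invented function names and scoring, not the configuration actually used in our system.

```python
def learn_transformations(initial, goal, candidate_rules, apply_rule, errors, max_rules=100):
    """initial, goal: segmentations of the adaptation data; errors(a, b) counts word-boundary
    disagreements between two segmentations; apply_rule(seg, rule) returns a new segmentation."""
    current, learned = initial, []
    for _ in range(max_rules):
        base = errors(current, goal)
        best_rule, best_gain = None, 0
        for rule in candidate_rules:
            gain = base - errors(apply_rule(current, rule), goal)
            if gain > best_gain:
                best_rule, best_gain = rule, gain
        if best_rule is None:            # no remaining rule reduces the error
            break
        current = apply_rule(current, best_rule)
        learned.append(best_rule)        # the result is an ordered list of transformations
    return learned
```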
7 Conclusion This paper presents a statistical approach to adaptive Chinese word segmentation based on linear models and TBL. The system has two components: A generic segmenter that can adapt to the vocabularies of different domains, and a set of output adaptors, learned from application data, for adapting to different “application-specific” standards. We evaluate our system on five test sets, each corresponding to a different standard. We achieve state-of-the-art performance on all test sets. References Brill, Eric. 1995. Transformation-based error-driven learning and natural language processing: a case study in Part-of-Speech tagging. In: Computational Linguistics, 21(4). Collins, Michael and Nigel Duffy. 2001. Convolution kernels for natural language. In: Advances in Neural Information Processing Systems (NLPS 14). Collins, Michael. 2002. Parameter estimation for statistical parsing models: theory and practice of distribution-free methods. To appear. Della Pietra, S., Della Pietra, V., and Lafferty, J. 1997. Inducing features of random fields. In: IEEE Transactions on Pattern Analysis and Machine Intelligence, 19, 380-393. Duda, Richard O, Hart, Peter E. and Stork, David G. 2001. Pattern classification. John Wiley & Sons, Inc. Gao, Jianfeng and Kai-Fu Lee. 2000. Distribution based pruning of backoff language models. In: ACL2000. Gao, Jianfeng, Mu Li and Chang-Ning Huang. 2003. Improved source-channel model for Chinese word segmentation. In: ACL2003. Katz, S. M. 1996. Distribution of content words and phrases in text and language modeling, In: Natural Language Engineering, 1996(2): 15-59 Li, Hongqiao, Chang-Ning Huang, Jianfeng Gao and Xiaozhong Fan. 2004. The use of SVM for Chinese new word identification. In: IJCNLP2004. Manning, C. D. and H. Schütze, 1999. Foundations of Statistical Natural Language Processing. The MIT Press. Mitchell, Tom M. 1997. Machine learning. The McGraw-Hill Companies, Inc. Och, Franz. 2003. Minimum error rate training in statistical machine translation. In: ACL2003. Palmer, D. 1997. A trainable rule-based algorithm for word segmentation. In: ACL '97. Rosenfeld, R., S. F. Chen and X. Zhu. 2001. Whole sentence exponential language models: a vehicle for linguistic statistical integration. In: Computer Speech and Language, 15 (1). Sproat, Richard and Chilin Shih. 2002. Corpus-based methods in Chinese morphology and phonology. In: COLING 2002. Sproat, Richard and Tom Emerson. 2003. The first international Chinese word segmentation bakeoff. In: SIGHAN 2003. Uchimoto, K., S. Sekine and H. Isahara. 2001. The unknown word problem: a morphological analysis of Japanese using maximum entropy aided by a dictionary. In: EMNLP2001. Wu, Andi and Zixin Jiang. 2000. Statistically-enhanced new word identification in a rule-based Chinese system. In: Proc of the 2rd ACL Chinese Processing Workshop. Wu, Andi. 2003. Customizable segmentation of morphologically derived words in Chinese. In: International Journal of Computational Linguistics and Chinese Language Processing, 8(1): 1-27. Wu, Zimin and Gwyneth Tseng. 1993. Chinese text segmentation for text retrieval achievements and problems. In: JASIS, 44(9): 532-542.
Attention Shifting for Parsing Speech ∗ Keith Hall Department of Computer Science Brown University Providence, RI 02912 [email protected] Mark Johnson Department of Cognitive and Linguistic Science Brown University Providence, RI 02912 Mark [email protected] Abstract We present a technique that improves the efficiency of word-lattice parsing as used in speech recognition language modeling. Our technique applies a probabilistic parser iteratively where on each iteration it focuses on a different subset of the wordlattice. The parser’s attention is shifted towards word-lattice subsets for which there are few or no syntactic analyses posited. This attention-shifting technique provides a six-times increase in speed (measured as the number of parser analyses evaluated) while performing equivalently when used as the first-stage of a multi-stage parsing-based language model. 1 Introduction Success in language modeling has been dominated by the linear n-gram for the past few decades. A number of syntactic language models have proven to be competitive with the n-gram and better than the most popular n-gram, the trigram (Roark, 2001; Xu et al., 2002; Charniak, 2001; Hall and Johnson, 2003). Language modeling for speech could well be the first real problem for which syntactic techniques are useful. John ate the pizza on a plate with a fork . NP:plate NP:fork PP:with PP:on IN IN VB NP VP:ate Figure 1: An incomplete parse tree with head-word annotations. One reason that we expect syntactic models to perform well is that they are capable of modeling long-distance dependencies that simple n-gram ∗This research was supported in part by NSF grants 9870676 and 0085940. models cannot. For example, the model presented by Chelba and Jelinek (Chelba and Jelinek, 1998; Xu et al., 2002) uses syntactic structure to identify lexical items in the left-context which are then modeled as an n-gram process. The model presented by Charniak (Charniak, 2001) identifies both syntactic structural and lexical dependencies that aid in language modeling. While there are n-gram models that attempt to extend the left-context window through the use of caching and skip models (Goodman, 2001), we believe that linguistically motivated models, such as these lexical-syntactic models, are more robust. Figure 1 presents a simple example to illustrate the nature of long-distance dependencies. Using a syntactic model such as the the Structured Language Model (Chelba and Jelinek, 1998), we predict the word fork given the context {ate, with} where a trigram model uses the context {with, a}. Consider the problem of disambiguating between . . . plate with a fork and . . . plate with effort. The syntactic model captures the semantic relationship between the words ate and fork. The syntactic structure allows us to find lexical contexts for which there is some semantic relationship (e.g., predicateargument). Unfortunately, syntactic language modeling techniques have proven to be extremely expensive in terms of computational effort. Many employ the use of string parsers; in order to utilize such techniques for language modeling one must preselect a set of strings from the word-lattice and parse each of them separately, an inherently inefficient procedure. Of the techniques that can process word-lattices directly, it takes significant computation to achieve the same levels of accuracy as the n–best reranking method. 
This computational cost is the result of increasing the search space evaluated with the syntactic model (parser); the larger space resulting from combining the search for syntactic structure with the search for paths in the word-lattice. In this paper we propose a variation of a probabilistic word-lattice parsing technique that increases 0 1 yesterday/0 2 and/4.004 3 in/14.73 4 tuesday/0 14 tuesday/0 5 to/0.000 6 two/8.769 7 it/51.59 to/0 8 outlaw/83.57 9 outline/2.573 10 outlined/12.58 outlines/10.71 outline/0 outlined/8.027 outlines/7.140 13 to/0 in/0 of/115.4 a/71.30 the/115.3 11 strategy/0 strategy/0 outline/0 12/0 </s>/0 Figure 2: A partial word-lattice from the NIST HUB-1 dataset. efficiency while incurring no loss of language modeling performance (measured as Word Error Rate – WER). In (Hall and Johnson, 2003) we presented a modular lattice parsing process that operates in two stages. The first stage is a PCFG word-lattice parser that generates a set of candidate parses over strings in a word-lattice, while the second stage rescores these candidate edges using a lexicalized syntactic language model (Charniak, 2001). Under this paradigm, the first stage is not only responsible for selecting candidate parses, but also for selecting paths in the word-lattice. Due to computational and memory requirements of the lexicalized model, the second stage parser is capable of rescoring only a small subset of all parser analyses. For this reason, the PCFG prunes the set of parser analyses, thereby indirectly pruning paths in the word lattice. We propose adding a meta-process to the firststage that effectively shifts the selection of wordlattice paths to the second stage (where lexical information is available). We achieve this by ensuring that for each path in the word-lattice the first-stage parser posits at least one parse. 2 Parsing speech word-lattices P(A, W) = P(A|W)P(W) (1) The noisy channel model for speech is presented in Equation 1, where A represents the acoustic data extracted from a speech signal, and W represents a word string. The acoustic model P(A|W) assigns probability mass to the acoustic data given a word string and the language model P(W) defines a distribution over word strings. Typically the acoustic model is broken into a series of distributions conditioned on individual words (though these are based on false independence assumptions). P(A|w1 . . . wi . . . wn) = n  i=1 P(A|wi) (2) The result of the acoustic modeling process is a set of string hypotheses; each word of each hypothesis is assigned a probability by the acoustic model. Word-lattices are a compact representation of output of the acoustic recognizer; an example is presented in Figure 2. The word-lattice is a weighted directed acyclic graph where a path in the graph corresponds to a string predicted by the acoustic recognizer. The (sum) product of the (log) weights on the graph (the acoustic probabilities) is the probability of the acoustic data given the string. Typically we want to know the most likely string given the acoustic data. arg max P(W|A) (3) = arg max P(A, W) = arg max P(A|W)P(W) In Equation 3 we use Bayes’ rule to find the optimal string given P(A|W), the acoustic model, and P(W), the language model. Although the language model can be used to rescore1 the word-lattice, it is typically used to select a single hypothesis. We focus our attention in this paper to syntactic language modeling techniques that perform complete parsing, meaning that parse trees are built upon the strings in the word-lattice. 
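A minimal, self-contained sketch of selecting arg max P(A|W)P(W) over such a lattice is given below. The lattice encoding, the language-model interface and the scale factor are our own assumptions, and keeping a single best hypothesis per lattice node is a Viterbi simplification that is only exact when the language model ignores the word history.

```python
import math
from collections import defaultdict

def best_path(arcs, start, end, lm_logprob, lm_scale=1.0):
    """arcs: (from_node, to_node, word, acoustic_logprob) tuples with topologically numbered
    nodes. With an n-gram or syntactic LM one would keep hypotheses per (node, history) state;
    one hypothesis per node suffices only for history-independent models."""
    outgoing = defaultdict(list)
    for src, dst, word, ac_lp in arcs:
        outgoing[src].append((dst, word, ac_lp))
    best = {start: (0.0, [])}
    for node in sorted(outgoing):
        if node not in best:
            continue
        score, words = best[node]
        for dst, word, ac_lp in outgoing[node]:
            s = score + ac_lp + lm_scale * lm_logprob(words, word)
            if dst not in best or s > best[dst][0]:
                best[dst] = (s, words + [word])
    return best.get(end, (float("-inf"), []))[1]

# Toy two-words-per-slot lattice (words and log scores invented):
arcs = [(0, 1, "the", -0.1), (0, 1, "duh", -1.4),
        (1, 2, "man", -0.1), (1, 2, "mans", -1.4), (2, 3, "is", -0.1)]
print(best_path(arcs, 0, 3, lambda hist, w: math.log(0.25)))   # ['the', 'man', 'is']
```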
2.1 n–best list reranking Much effort has been put forth in developing efficient probabilistic models for parsing strings (Caraballo and Charniak, 1998; Goldwater et al., 1998; Blaheta and Charniak, 1999; Charniak, 2000; Charniak, 2001); an obvious solution to parsing wordlattices is to use n–best list reranking. The n–best list reranking procedure, depicted in Figure 3, utilizes an external language model that selects a set of strings from the word-lattice. These strings are analyzed by the parser which computes a language model probability. This probability is combined 1To rescore a word-lattice, each arch is assigned a new score (probability) defined by a new model (in combination with the acoustic model). w1, ..., wi, ..., wn1 ... Language Model w1, ..., wi, ..., wn2 w1, ..., wi, ..., wn3 w1, ..., wi, ..., wn4 w1, ..., wi, ..., wnm o1, ..., oi, ..., on 8 2 3 5 1 6 4 7 10 9 the/0 man/0 is/0 duh/1.385 man/0 is/0 surely/0 early/0 mans/1.385 man's/1.385 surly/0 surly/0.692 early/0 early/0 n-best list extractor Figure 3: n–best list reranking with the acoustic model probability to reranked the strings according to the joint probability P(A, W). There are two significant disadvantages to this approach. First, we are limited by the performance of the language model used to select the n–best lists. Usually, the trigram model is used to select n paths through the lattice generating at most n unique strings. The maximum performance that can be achieved is limited by the performance of this extractor model. Second, of the strings that are analyzed by the parser, many will share common substrings. Much of the work performed by the parser is duplicated for these substrings. This second point is the primary motivation behind parsing word-lattices (Hall and Johnson, 2003). 2.2 Multi-stage parsing Π PCFG Parser π′ ⊂Π Lexicalized Parser Figure 4: Coarse-to-fine lattice parsing. In Figure 4 we present the general overview of a multi-stage parsing technique (Goodman, 1997; Charniak, 2000; Charniak, 2001). This process 1. Parse word-lattice with PCFG parser 2. Overparse, generating additional candidates 3. Compute inside-outside probabilities 4. Prune candidates with probability threshold Table 1: First stage word-lattice parser is know as coarse-to-fine modeling, where coarse models are more efficient but less accurate than fine models, which are robust but computationally expensive. In this particular parsing model a PCFG best-first parser (Bobrow, 1990; Caraballo and Charniak, 1998) is used to search the unconstrained space of parses Π over a string. This first stage performs overparsing which effectively allows it to generate a set of high probability candidate parses π′. These parses are then rescored using a lexicalized syntactic model (Charniak, 2001). Although the coarse-to-fine model may include any number of intermediary stages, in this paper we consider this two-stage model. There is no guarantee that parses favored by the second stage will be generated by the first stage. In other words, because the first stage model prunes the space of parses from which the second stage rescores, the first stage model may remove solutions that the second stage would have assigned a high probability. In (Hall and Johnson, 2003), we extended the multi-stage parsing model to work on word-lattices. The first-stage parser, Table 1, is responsible for positing a set of candidate parses over the wordlattice. 
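Before turning to the behaviour of the first-stage parser, the n-best reranking baseline of Section 2.1 can be summarized in a few lines. The scaling factor and the toy stand-in for the syntactic language model are illustrative assumptions only.

```python
import math

def rerank(nbest, lm_logprob, lm_scale=1.0):
    """nbest: list of (word_list, acoustic_logprob) pairs extracted from the word-lattice;
    returns the string maximizing acoustic score plus scaled language-model score."""
    return max(nbest, key=lambda h: h[1] + lm_scale * lm_logprob(h[0]))[0]

# Toy stand-in for the syntactic language model (a uniform unigram over a 10k vocabulary):
def toy_lm(words, vocab_size=10_000):
    return -len(words) * math.log(vocab_size)

hyps = [(["yesterday", "and", "tuesday"], -4.0), (["yesterday", "in", "tuesday"], -4.7)]
print(rerank(hyps, toy_lm))   # both hypotheses have equal LM score, so the acoustic score decides
```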
Were we to run the parser to completion it would generate all parses for all strings described by the word-lattice. As with string parsing, we stop the first stage parser early, generating a subset of all parses. Only the strings covered by complete parses are passed on to the second stage parser. This indirectly prunes the word-lattice of all word-arcs that were not covered by complete parses in the first stage. We use a first stage PCFG parser that performs a best-first search over the space of parses, which means that it depends on a heuristic “figure-ofmerit” (FOM) (Caraballo and Charniak, 1998). A good FOM attempts to model the true probability of a chart edge2 P(N i j,k). Generally, this probability is impossible to compute during the parsing process as it requires knowing both the inside and outside probabilities (Charniak, 1993; Manning and Sch¨utze, 1999). The FOM we describe is an approximation to the edge probability and is computed using an estimate of the inside probability times an approximation to the outside probability 3. The inside probability β(Ni j,k) can be computed incrementally during bottom-up parsing. The normalized acoustic probabilities from the acoustic recognizer are included in this calculation. ˆα(N i j,k) (4) =  i,l,q,r fwd(T q i,j)p(N i|T q)p(Tr|N i)bkwd(T r k,l) The outside probability is approximated with a bitag model and the standard tag/category boundary model (Caraballo and Charniak, 1998; Hall and Johnson, 2003). Equation 4 presents the approximation to the outside probability. Part-of-speech tags T q and T r are the candidate tags to the left and right of the constituent Ni j,k. The fwd() and bkwd() functions are the HMM forward and backward probabilities calculated over a lattice containing the part-of-speech tag, the word, and the acoustic scores from the word-lattice to the left and right of the constituent, respectively. p(Ni|T q) and p(Tr|N i) are the boundary statistics which are estimated from training data (details of this model can be found in (Hall and Johnson, 2003)). FOM(N i j,k) = ˆα(N i j,k)β(N i j,k)ηC(j, k) (5) The best-first search employed by the first stage parser uses the FOM defined in Equation 5, where η is a normalization factor based on path length C(j, k). The normalization factor prevents small constituents from consistently being assigned a 2A chart edge Ni j,k indicates a grammar category Ni can be constructed from nodes j to k. 3An alternative to the inside and outside probabilities are the Viterbi inside and outside probabilities (Goldwater et al., 1998; Hall and Johnson, 2003). higher probability than larger constituents (Goldwater et al., 1998). Although this heuristic works well for directing the parser towards likely parses over a string, it is not an ideal model for pruning the word-lattice. First, the outside approximation of this FOM is based on a linear part-of-speech tag model (the bitag). Such a simple syntactic model is unlikely to provide realistic information when choosing a word-lattice path to consider. Second, the model is prone to favoring subsets of the word-lattice causing it to posit additional parse trees for the favored sublattice rather than exploring the remainder of the word-lattice. This second point is the primary motivation for the attention shifting technique presented in the next section. 3 Attention shifting4 We explore a modification to the multi-stage parsing algorithm that ensures the first stage parser posits at least one parse for each path in the word-lattice. 
The idea behind this is to intermittently shift the attention of the parser to unexplored parts of the word lattice. Identify Used Edges Clear Agenda/ Add Edges for Unused Words Is Agenda Empty? no Continue Multi-stage Parsing yes PCFG Word-lattice Parser Figure 5: Attention shifting parser. Figure 5 depicts the attention shifting first stage parsing procedure. A used edge is a parse edge that has non-zero outside probability. By definition of 4The notion of attention shifting is motivated by the work on parser FOM compensation presented in (Blaheta and Charniak, 1999). the outside probability, used edges are constituents that are part of a complete parse; a parse is complete if there is a root category label (e.g., S for sentence) that spans the entire word-lattice. In order to identify used edges, we compute the outside probabilities for each parse edge (efficiently computing the outside probability of an edge requires that the inside probabilities have already been computed). In the third step of this algorithm we clear the agenda, removing all partial analyses evaluated by the parser. This forces the parser to abandon analyses of parts of the word-lattice for which complete parses exist. Following this, the agenda is populated with edges corresponding to the unused words, priming the parser to consider these words. To ensure the parser builds upon at least one of these unused edges, we further modify the parsing algorithm: • Only unused edges are added to the agenda. • When building parses from the bottom up, a parse is considered complete if it connects to a used edge. These modifications ensure that the parser focuses on edges built upon the unused words. The second modification ensures the parser is able to determine when it has connected an unused word with a previously completed parse. The application of these constraints directs the attention of the parser towards new edges that contribute to parse analyses covering unused words. We are guaranteed that each iteration of the attention shifting algorithm adds a parse for at least one unused word, meaning that it will take at most |A| iterations to cover the entire lattice, where A is the set of word-lattice arcs. This guarantee is trivially provided through the constraints just described. The attention-shifting parser continues until there are no unused words remaining and each parsing iteration runs until it has found a complete parse using at least one of the unused words. As with multi-stage parsing, an adjustable parameter determines how much overparsing to perform on the initial parse. In the attention shifting algorithm an additional parameter specifies the amount of overparsing for each iteration after the first. The new parameter allows for independent control of the attention shifting iterations. After the attention shifting parser populates a parse chart with parses covering all paths in the lattice, the multi-stage parsing algorithm performs additional pruning based on the probability of the parse edges (the product of the inside and outside probabilities). This is necessary in order to constrain the size of the hypothesis set passed on to the second stage parsing model. The Charniak lexicalized syntactic language model effectively splits the number of parse states (an edges in a PCFG parser) by the number of unique contexts in which the state is found. 
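The procedure of Figure 5 is essentially a meta-loop around the first-stage parser. The deliberately simplified, runnable sketch below keeps only the bookkeeping over used and unused word-lattice arcs, replacing the actual PCFG parsing pass with a stand-in function; in the real system a word counts as used when it is covered by an edge with non-zero outside probability, and the agenda constraints described above guarantee that each pass covers at least one new arc.

```python
def attention_shifting(arcs, parsing_pass):
    """arcs: the set of word-lattice arcs. parsing_pass(seed_arcs) stands in for one run of the
    first-stage parser seeded with the given arcs; it returns the set of arcs covered by
    complete parses (in the real parser: arcs under edges with non-zero outside probability)."""
    covered = set(parsing_pass(arcs))   # initial parse plus overparsing over the whole lattice
    iterations = 1
    while covered < set(arcs):
        unused = set(arcs) - covered
        # clear the agenda and seed it with edges over the unused words only; the constraints
        # in the text ensure at least one unused arc gets covered per pass
        covered |= set(parsing_pass(unused))
        iterations += 1
    return covered, iterations

# Toy stand-in that "covers" at most three of its seed arcs per pass:
arcs = {"yesterday", "and", "in", "tuesday", "to", "outline", "strategy"}
toy_pass = lambda seed: set(sorted(seed)[:3])
print(attention_shifting(arcs, toy_pass))   # all seven arcs covered after three passes
```

After this loop, pruning and the hand-off to the lexicalized second-stage model proceed as before, with each surviving parse state split by the context in which it occurs.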
These contexts include syntactic structure such as parent and grandparent category labels as well as lexical items such as the head of the parent or the head of a sibling constituent (Charniak, 2001). State splitting on this level causes the memory requirement of the lexicalized parser to grow rapidly. Ideally, we would pass all edges on to the second stage, but due to memory limitations, pruning is necessary. It is likely that edges recently discovered by the attention shifting procedure are pruned. However, the true PCFG probability model is used to prune these edges rather than the approximation used in the FOM. We believe that by considering parses which have a relatively high probability according to the combined PCFG and acoustic models that we will include most of the analyses for which the lexicalized parser assigns a high probability. 4 Experiments The purpose of attention shifting is to reduce the amount of work exerted by the first stage PCFG parser while maintaining the same quality of language modeling (in the multi-stage system). We have performed a set of experiments on the NIST ’93 HUB–1 word-lattices. The HUB–1 is a collection of 213 word-lattices resulting from an acoustic recognizer’s analysis of speech utterances. Professional readers reading Wall Street Journal articles generated the utterances. The first stage parser is a best-first PCFG parser trained on sections 2 through 22, and 24 of the Penn WSJ treebank (Marcus et al., 1993). Prior to training, the treebank is transformed into speech-like text, removing punctuation and expanding numerals, etc.5 Overparsing is performed using an edge pop6 multiplicative factor. The parser records the number of edge pops required to reach the first complete parse. The parser continues to parse a until multiple of the number of edge pops required for the first parse are popped off the agenda. The second stage parser used is a modified version of the Charniak language modeling parser described in (Charniak, 2001). We trained this parser 5Brian Roark of AT&T provided a tool to perform the speech normalization. 6An edge pop is the process of the parser removing an edge from the agenda and placing it in the parse chart. on the BLLIP99 corpus (Charniak et al., 1999); a corpus of 30million words automatically parsed using the Charniak parser (Charniak, 2000). In order to compare the work done by the n–best reranking technique to the word-lattice parser, we generated a set of n–best lattices. 50–best lists were extracted using the Chelba A* decoder7. A 50– best lattice is a sublattice of the acoustic lattice that generates only the strings found in the 50–best list. Additionally, we provide the results for parsing the full acoustic lattices (although these work measurements should not be compared to those of n–best reranking). We report the amount of work, shown as the cumulative # edge pops, the oracle WER for the word-lattices after first stage pruning, and the WER of the complete multi-stage parser. In all of the word-lattice parsing experiments, we pruned the set of posited hypothesis so that no more than 30,000 local-trees are generated8. We chose this threshold due to the memory requirements of the second stage parser. Performing pruning at the end of the first stage prevents the attention shifting parser from reaching the minimum oracle WER (most notable in the full acoustic word-lattice experiments). 
While the attention-shifting algorithm ensures all word-lattice arcs are included in complete parses, forward-backward pruning, as used here, will eliminate some of these parses, indirectly eliminating some of the word-lattice arcs. To illustrate the need for pruning, we computed the number of states used by the Charniak lexicalized syntactic language model for 30,000 local trees. An average of 215 lexicalized states were generated for each of the 30,000 local trees. This means that the lexicalized language model, on average, computes probabilities for over 6.5 million states when provided with 30,000 local trees. Model # edge pops O-WER WER n–best (Charniak) 2.5 million 7.75 11.8 100x LatParse 3.4 million 8.18 12.0 10x AttShift 564,895 7.78 11.9 Table 2: Results for n–best lists and n–best lattices. Table 2 shows the results for n–best list reranking and word-lattice parsing of n–best lattices. We recreated the results of the Charniak language model parser used for reranking in order to measure the amount of work required. We ran the first stage parser with 4-times overparsing for each string in 7The n–best lists were provided by Brian Roark (Roark, 2001) 8A local-tree is an explicit expansion of an edge and its children. An example local tree is NP3,8 →DT3,4 NN4,8. the n–best list. The LatParse result represents running the word-lattice parser on the n–best lattices performing 100–times overparsing in the first stage. The AttShift model is the attention shifting parser described in this paper. We used 10–times overparsing for both the initial parse and each of the attention shifting iterations. When run on the n–best lattice, this model achieves a comparable WER, while reducing the amount of parser work sixfold (as compared to the regular word-lattice parser). Model # edge pops O-WER WER acoustic lats N/A 3.26 N/A 100x LatParse 3.4 million 5.45 13.1 10x AttShift 1.6 million 4.17 13.1 Table 3: Results for acoustic lattices. In Table 3 we present the results of the wordlattice parser and the attention shifting parser when run on full acoustic lattices. While the oracle WER is reduced, we are considering almost half as many edges as the standard word-lattice parser. The increased size of the acoustic lattices suggests that it may not be computationally efficient to consider the entire lattice and that an additional pruning phase is necessary. The most significant constraint of this multi-stage lattice parsing technique is that the second stage process has a large memory requirement. While the attention shifting technique does allow the parser to propose constituents for every path in the lattice, we prune some of these constituents prior to performing analysis by the second stage parser. Currently, pruning is accomplished using the PCFG model. One solution is to incorporate an intermediate pruning stage (e.g., lexicalized PCFG) between the PCFG parser and the full lexicalized model. Doing so will relax the requirement for aggressive PCFG pruning and allows for a lexicalized model to influence the selection of word-lattice paths. 5 Conclusion We presented a parsing technique that shifts the attention of a word-lattice parser in order to ensure syntactic analyses for all lattice paths. Attention shifting can be thought of as a meta-process around the first stage of a multi-stage word-lattice parser. We show that this technique reduces the amount of work exerted by the first stage PCFG parser while maintaining comparable language modeling performance. 
Attention shifting is a simple technique that attempts to make word-lattice parsing more efficient. As suggested by the results for the acoustic lattice experiments, this technique alone is not sufficient. Solutions to improve these results include modifying the first-stage grammar by annotating the category labels with local syntactic features as suggested in (Johnson, 1998) and (Klein and Manning, 2003) as well as incorporating some level of lexicalization. Improving the quality of the parses selected by the first stage should reduce the need for generating such a large number of candidates prior to pruning, improving efficiency as well as overall accuracy. We believe that attention shifting, or some variety of this technique, will be an integral part of efficient solutions for word-lattice parsing. References Don Blaheta and Eugene Charniak. 1999. Automatic compensation for parser figure-of-merit flaws. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics, pages 513–518. Robert J. Bobrow. 1990. Statistical agenda parsing. In DARPA Speech and Language Workshop, pages 222–224. Sharon Caraballo and Eugene Charniak. 1998. New figures of merit for best-first probabilistic chart parsing. Computational Linguistics, 24(2):275–298, June. Eugene Charniak, Don Blaheta, Niyu Ge, Keith Hall, John Hale, and Mark Johnson. 1999. BLLIP 1987–89 wsj corpus release 1. LDC corpus LDC2000T43. Eugene Charniak. 1993. Statistical Language Learning. MIT Press. Eugene Charniak. 2000. A maximum-entropyinspired parser. In Proceedings of the 2000 Conference of the North American Chapter of the Association for Computational Linguistics., ACL, New Brunswick, NJ. Eugene Charniak. 2001. Immediate-head parsing for language models. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics. Ciprian Chelba and Frederick Jelinek. 1998. A study on richer syntactic dependencies for structured language modeling. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, pages 225–231. Sharon Goldwater, Eugene Charniak, and Mark Johnson. 1998. Best-first edge-based chart parsing. In 6th Annual Workshop for Very Large Corpora, pages 127–133. Joshua Goodman. 1997. Global thresholding and multiple-pass parsing. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, pages 11–25. Joshua Goodman. 2001. A bit of progress in language modeling, extendend version. In Microsoft Research Technical Report MSR-TR-2001-72. Keith Hall and Mark Johnson. 2003. Language modeling using efficient best-first bottom-up parsing. In Proceedings of IEEE Automated Speech Recognition and Understanding Workshop. Mark Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24:617–636. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Meeting of the Association for Computational Linguistics (ACL-03). Christopher D. Manning and Hinrich Sch¨utze. 1999. Foundations of statistical natural language processing. MIT Press. Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19:313–330. Brian Roark. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(3):249–276. Peng Xu, Ciprian Chelba, and Frederick Jelinek. 2002. 
A study on richer syntactic dependencies for structured language modeling. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 191– 198.
Experiments in Parallel-Text Based Grammar Induction Jonas Kuhn Department of Linguistics The University of Texas at Austin Austin, TX 78712 [email protected] Abstract This paper discusses the use of statistical word alignment over multiple parallel texts for the identification of string spans that cannot be constituents in one of the languages. This information is exploited in monolingual PCFG grammar induction for that language, within an augmented version of the inside-outside algorithm. Besides the aligned corpus, no other resources are required. We discuss an implemented system and present experimental results with an evaluation against the Penn Treebank. 1 Introduction There have been a number of recent studies exploiting parallel corpora in bootstrapping of monolingual analysis tools. In the “information projection” approach (e.g., (Yarowsky and Ngai, 2001)), statistical word alignment is applied to a parallel corpus of English and some other language for which no tagger/morphological analyzer/chunker etc. (henceforth simply: analysis tool) exists. A high-quality analysis tool is applied to the English text, and the statistical word alignment is used to project a (noisy) target annotation to the version of the text. Robust learning techniques are then applied to bootstrap an analysis tool for , using the annotations projected with high confidence as the initial training data. (Confidence of both the English analysis tool and the statistical word alignment is taken into account.) The results that have been achieved by this method are very encouraging. Will the information projection approach also work for less shallow analysis tools, in particular full syntactic parsers? An obvious issue is that one does not expect the phrase structure representation of English (as produced by state-of-the-art treebank parsers) to carry over to less configurational languages. Therefore, (Hwa et al., 2002) extract a more language-independent dependency structure from the English parse as the basis for projection to Chinese. From the resulting (noisy) dependency treebank, a dependency parser is trained using the techniques of (Collins, 1999). (Hwa et al., 2002) report that the noise in the projected treebank is still a major challenge, suggesting that a future research focus should be on the filtering of (parts of) unreliable trees and statistical word alignment models sensitive to the syntactic projection framework. Our hypothesis is that the quality of the resulting parser/grammar for language can be significantly improved if the training method for the parser is changed to accomodate for training data which are in part unreliable. The experiments we report in this paper focus on a specific part of the problem: we replace standard treebank training with an Expectation-Maximization (EM) algorithm for PCFGs, augmented by weighting factors for the reliability of training data, following the approach of (Nigam et al., 2000), who apply it for EM training of a text classifier. The factors are only sensitive to the constituent/distituent (C/D) status of each span of the string in (cp. (Klein and Manning, 2002)). The C/D status is derived from an aligned parallel corpus in a way discussed in section 2. We use the Europarl corpus (Koehn, 2002), and the statistical word alignment was performed with the GIZA++ toolkit (Al-Onaizan et al., 1999; Och and Ney, 2003).1 For the current experiments we assume no preexisting parser for any of the languages, contrary to the information projection scenario. 
While better absolute results could be expected using one or more parsers for the languages involved, we think that it is important to isolate the usefulness of exploiting just crosslinguistic word order divergences in order to obtain partial prior knowledge about the constituent structure of a language, which is then exploited in an EM learning approach (section 3). Not using a parser for some languages also makes it possible to compare various language pairs at the same level, and specifically, we can experiment with grammar induction for English exploiting various 1The software is available at http://www.isi.edu/˜och/GIZA++.html At that  moment  the  voting  will  commence  .  Le vote  aura  lieu  à  ce  moment  -la  .  Figure 1: Alignment example other languages. Indeed the focus of our initial experiments has been on English (section 4), which facilitates evaluation against a treebank (section 5). 2 Cross-language order divergences The English-French example in figure 1 gives a simple illustration of the partial information about constituency that a word-aligned parallel corpus may provide. The en bloc reversal of subsequences of words provides strong evidence that, for instance, [ moment the voting ] or [ aura lieu à ce ] do not form constituents. At first sight it appears as if there is also clear evidence for [ at that moment ] forming a constituent, since it fully covers a substring that appears in a different position in French. Similarly for [ Le vote aura lieu ]. However, from the distribution of contiguous substrings alone we cannot distinguish between two the types of situations sketched in (1) and (2): (1)               (2)               A string that is contiguous under projection, like    (1) may be a true constituent, but it may also be a non-constituent part of a larger constituent as in  in (2). Word blocks. Let us define the notion of a word block (as opposed to a phrase or constituent) induced by a word alignment to capture the relevant property of contiguousness under translation.2 The alignments induced by GIZA++ (following the IBM models) are asymmetrical in that several words from   may be aligned with one word in  , but not vice versa. So we can view a word alignment as a function  that maps each word in an  -sentence to a (possibly empty) subset of words from its translation in  . For example, in figure 1,  voting  ={vote  }, and  that  = {ce  -la  . Note that !#"%$&(')!#"+*!,.- for "/$10 ,2"+* . The  -images of a sentence need not exhaust the words of the translation in   ; however it is common to assume a special empty word NULL in each  -sentence, for which by definition ! NULL  is the set of   -words not contained in any  -image of the overt words. We now define an  -induced block (or  -block for short) as a substring  435353  $ of a sentence in  , such that the union over all  -images ( 6 8797 $ !  $: ) forms a contiguous substring in   , modulo the words from  NULL  . For example,      in (1) (or (2)) is not an  -block since the union over its  -images is ;<  <   <  which do not form a contiguous string in   . The sequences     or       are  -induced blocks. 
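The contiguity test behind f-induced blocks can be written down directly. In the sketch below, the alignment encoding (a map from source word indices to sets of target indices) and the index assignments for the Figure 1 example are our own assumptions.

```python
def is_f_block(span, f, null_aligned=frozenset()):
    """span: source word indices; f: dict mapping each source index to a set of target indices
    (empty for zero-fertility words); null_aligned: target indices aligned to NULL, which are
    ignored when testing contiguity."""
    image = set().union(*(f[i] for i in span))
    if not image:
        return False
    gap = set(range(min(image), max(image) + 1)) - image
    return gap <= set(null_aligned)   # contiguous modulo NULL-aligned target words

# Figure 1, roughly (French indices: le=0 vote=1 aura=2 lieu=3 a=4 ce=5 moment=6 -la=7):
f = {0: {4}, 1: {5, 7}, 2: {6}, 3: {0}, 4: {1}, 5: {2}, 6: {3}}
print(is_f_block([0, 1, 2], f))   # 'at that moment'    -> image {4,5,6,7}: a block
print(is_f_block([2, 3, 4], f))   # 'moment the voting' -> image {0,1,6}: not a block
```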
Let us define a maximal  -block as an  -block  $ 35353  * , such that adding  $#= at the beginning or  *> at the end is either (i) impossible (because it would lead to a non-block, or  $?= or  *> do not exist as we are at the beginning or end of the string), or (ii) it would introduce a new crossing alignment 2The block notion we are defining in this section is indirectly related to the concept of a “phrase” in recent work in Statistical Machine Translation. (Koehn et al., 2003) show that exploiting all contiguous word blocks in phrase-based alignment is better than focusing on syntactic constituents only. In our context, we are interested in inducing syntactic constituents based on alignment information; given the observations from Statistical MT, it does not come as a surprise that there is no direct link from blocks to constituents. Our work can be seen as an attempt to zero in on the distinction between the concepts; we find that it is most useful to keep track of the boundaries between blocks. (Wu, 1997) also includes a brief discussion of crossing constraints that can be derived from phrase structure correspondences. to the block.3 String     in (1) is not a maximal  -block, because       is an  -block; but       is maximal since   is the final word of the sentence and         is a non-block. We can now make the initial observation precise that (1) and (2) have the same block structure, but the constituent structures are different (and this is not due to an incorrect alignment).    is a maximal block in both cases, but while it is a constituent in (1), it isn’t in (2). We may call maximal blocks that contain only non-maximal blocks as substrings first-order maximal  -blocks. A maximal block that contains other maximal blocks as substrings is a higher-order maximal  -block. In (1) and (2), the complete string          is a higher-order maximal block. Note that a higher-order maximal block may contain substrings which are non-blocks. Higher-order maximal blocks may still be nonconstituents as the following simple English-French example shows: (3) He gave Mary a book Il a donné un livre à Mary The three first-order maximal blocks in English are [He gave], [Mary], and [a book]. [Mary a book] is a higher-order maximal block, since its “projection” to French is contiguous, but it is not a constituent. (Note that the VP constituent gave Mary a book on the other hand is not a maximal block here.) Block boundaries. Let us call the string position between two maximal blocks an  -block boundary.4 In (1)/(2), the position between   and   is a block boundary. We can now formulate the (4) Distituent hypothesis If a substring of a sentence in language  crosses a first-order  -block boundary (zone5), then it can only be a constituent of  if it contains at least one of the two maximal  -blocks separated by that boundary in full. This hypothesis makes it precise under which conditions we assume to have reliable negative evidence against a constituent. Even examples of complicated structural divergence from the classical MT 3I.e., an element of   (or   ) continues the  string at the other end. 4We will come back to the situation where a block boundary may not be unique below. 5This will be explained below. literature tend not to pose counterexamples to the hypothesis, since it is so conservative. Projecting phrasal constituents from one language to another is problematic in cases of divergence, but projecting information about distituents is generally safe. Mild divergences are best. 
As should be clear, the  -block-based approach relies on the occurrence of reorderings of constituents in translation. If two languages have the exact same structure (and no paraphrases whatsoever are used in translation), the approach does not gain any information from a parallel text. However, this situation does not occur realistically. If on the other hand, massive reordering occurs without preserving any contiguous subblocks, the approach cannot gain information either. The ideal situation is in the middleground, with a number of mid-sized blocks in most sentences. The table in figure 2 shows the distribution of sentences with   -block boundaries based on the alignment of English and 7 other languages, for a sample of c. 3,000 sentences from the Europarl corpus. We can see that the occurrence of boundaries is in a range that should make it indeed useful.6  :  de el es fi fr it sv 1 82.3% 76.7% 80.9% 70.2% 83.3% 82.9% 67.4% 2 73.5% 64.2% 74.0% 55.7% 76.0% 74.6% 58.0% 3 57.7% 50.4% 57.5% 39.3% 60.5% 60.7% 38.4% 4 47.9% 40.1% 50.9% 29.7% 53.3% 52.1% 31.3% 5 38.0% 30.6% 42.5% 21.5% 45.9% 42.0% 23.0% 6 28.7% 23.2% 33.4% 15.2% 36.1% 33.4% 15.2% 7 22.6% 17.9% 28.0% 10.2% 30.2% 26.6% 11.0% 8 17.0% 13.6% 22.4% 7.6% 24.4% 21.8% 8.0% 9 12.3% 10.3% 17.4% 5.4% 19.7% 17.3% 5.6% 10 9.5% 7.8% 13.7% 3.4% 16.3% 13.1% 4.1% de: German; el: Greek; es: Spanish; fi: Finnish; fr: French; it: Italian; sv: Swedish. Figure 2: Proportion of sentences with   -block boundaries for / : English Zero fertility words. So far we have not addressed the effect of finding zero fertility words, i.e., words  $ from  with !  $: , - . Statistical word alignment makes frequent use of this mechanism. An actual example from our alignment is shown in figure 3. The English word has is treated as a zero fertility word. While we can tell from the block structure that there is a maximal block boundary somewhere between Baringdorf and the, it is 6The average sentence length for the English sentence is 26.5 words. (Not too suprisingly, Swedish gives rise to the fewest divergences against English. Note also that the Romance languages shown here behave very similarly.) Mr. Graefe zu Baringdorf has the floor to explain this request . La parole est à M. Graefe zu Baringdorf pour motiver la demande . Figure 3: Alignment example with zero-fertility word in English unclear on which side has should be located.7 The definitions of the various types of word blocks cover zero fertility words in principle, but they are somewhat awkward in that the same word may belong to two maximal  -blocks, on its left and on its right. It is not clear where the exact block boundary is located. So we redefine the notion of  block boundaries. We call the (possibly empty) substring between the rightmost non-zero-fertility word of one maximal  -block and the leftmost non-zerofertility word of its right neighbor block the  -block boundary zone. The distituent hypothesis is sensitive to crossing a boundary zone, i.e., if a constituent-candidate ends somewhere in the middle of a non-empty boundary zone, this does not count as a crossing. This reflects the intuition of uncertainty and keeps the exclusion of clear distituents intact. 3 EM grammar induction with weighting factors The distituent identification scheme introduced in the previous section can be used to hypothesize a fairly reliable exclusion of constituency for many spans of strings from a parallel corpus. Besides a statistical word alignment, no further resources are required. 
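A sketch of the resulting distituent test is given below; the span and block encodings are our own, and boundary zones are taken to be the gaps between consecutive maximal blocks, so that a candidate ending inside a non-empty zone does not count as crossing it.

```python
def is_excluded_distituent(span, blocks):
    """span and blocks are (start, end) pairs over source positions, end exclusive; blocks are
    the maximal f-blocks in sentence order. Returns True if the span crosses a first-order
    block boundary (zone) without containing either neighbouring maximal block in full."""
    s, e = span
    for (l_start, l_end), (r_start, r_end) in zip(blocks, blocks[1:]):
        # the boundary zone lies between l_end and r_start; a span whose end merely falls
        # inside the zone (s >= l_end or e <= r_start) does not count as crossing it
        if not (s < l_end and e > r_start):
            continue
        contains_left = s <= l_start and e >= l_end
        contains_right = s <= r_start and e >= r_end
        if not (contains_left or contains_right):
            return True
    return False

blocks = [(0, 3), (3, 6)]                      # two maximal blocks, boundary after word 3
print(is_excluded_distituent((2, 4), blocks))  # True: crosses, contains neither block in full
print(is_excluded_distituent((0, 4), blocks))  # False: contains the left block in full
```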
In order to make use of this scattered (non-) constituency information, a semi-supervised approach is needed that can fill in the (potentially large) areas for which no prior information is available. For the present experiments we decided to choose a conceptually simple such approach, with which we can build on substantial existing work in grammar induction: we construe the learning problem as PCFG induction, using the inside-outside algorithm, with the addition of weighting factors based on the (non)constituency information. This use of weighting factors in EM learning follows the approach discussed in (Nigam et al., 2000). Since we are mainly interested in comparative experiments at this stage, the conceptual simplicity, and the availability of efficient implemented open7Since zero-fertility words are often function words, there is probably a rightward-tendency that one might be able to exploit; however in the present study we didn’t want to build such high-level linguistic assumptions into the system. source systems of a PCFG induction approach outweighs the disadvantage of potentially poorer overall performance than one might expect from some other approaches. The PCFG topology we use is a binary, entirely unrestricted X-bar-style grammar based on the Penn Treebank POS-tagset (expanded as in the TreeTagger by (Schmid, 1994)). All possible combinations of projections of POS-categories X and Y are included following the schemata in (5). This gives rise to 13,110 rules. (5) a. XP X b. XP XP YP c. XP YP XP d. XP YP X e. XP X YP We tagged the English version of our training section of the Europarl corpus with the TreeTagger and used the strings of POS-tags as the training corpus for the inside-outside algorithm; however, it is straightforward to apply our approach to a language for which no taggers are available if an unsupervised word clustering technique is applied first. We based our EM training algorithm on Mark Johnson’s implementation of the inside-outside algorithm.8 The initial parameters on the PCFG rules are set to be uniform. In the iterative induction process of parameter reestimation, the current rule parameters are used to compute the expectations of how often each rule occurred in the parses of the training corpus, and these expectations are used to adjust the rule parameters, so that the likelihood of the training data is increased. When the probablity of a given rule drops below a certain threshold, the rule is excluded from the grammar. The iteration is continued until the increase in likelihood of the training corpus is very small. Weight factors. The inside-outside algorithm is a dynamic programming algorithm that uses a chart in order to compute the rule expectations for each sentence. We use the information obtained from the parallel corpus as discussed in section 2 as prior information (in a Bayesian framework) to adjust the 8http://cog.brown.edu/˜mj/ you can table questions under rule 28 , and you no longer have the floor . vous pouvez poser les questions au moyen de l’ article 28 du réglement . je ne vous donne pas la parole . Figure 4: Alignment example with higher-fertility words in English expectations that the inside-outside algorithm determines based on its current rule parameters. Note that the this prior information is information about string spans of (non-)constituents – it does not tell us anything about the categories of the potential constituents affected. It is combined with the PCFG expectations as the chart is constructed. 
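To see what the weighting does, the toy sketch below replaces the chart-based inside-outside computation with brute-force enumeration of all binary trees over a short string: each tree is weighted by the product of the C/D factors of its spans, and the posterior mass of each span is accumulated. The uniform tree score and the example factors are illustrative only; the actual system multiplies the factors into the chart-based expected rule counts.

```python
def binary_trees(i, j):
    """All binary bracketings of positions i..j (end exclusive), each as a tuple of spans."""
    if j - i == 1:
        return [((i, j),)]
    result = []
    for k in range(i + 1, j):
        for left in binary_trees(i, k):
            for right in binary_trees(k, j):
                result.append(((i, j),) + left + right)
    return result

def span_posteriors(n, weight):
    """Posterior probability of each span being a constituent when every tree's (here uniform)
    score is multiplied by the product of the weight factors of its spans."""
    posterior, total = {}, 0.0
    for tree in binary_trees(0, n):
        w = 1.0
        for span in tree:
            w *= weight(span)
        total += w
        for span in set(tree):
            posterior[span] = posterior.get(span, 0.0) + w
    return {s: (v / total if total else 0.0) for s, v in posterior.items()}

# Marking span (1, 3) as a distituent (factor 0) forces all mass onto the left-branching tree:
factor = lambda span: 0.0 if span == (1, 3) else 1.0
print(span_posteriors(3, factor))
```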
For each span in the chart, we get a weight factor that is multiplied with the parameter-based expectations.9 4 Experiments We applied GIZA++ (Al-Onaizan et al., 1999; Och and Ney, 2003) to word-align parts of the Europarl corpus (Koehn, 2002) for English and all other 10 languages. For the experiments we report in this paper, we only used the 1999 debates, with the language pairs of English combined with Finnish, French, German, Greek, Italian, Spanish, and Swedish. For computing the weight factors we used a twostep process implemented in Perl, which first determines the maximal  -block boundaries (by detecting discontinuities in the sequence of the  projected words). Words with fertility  whose  correspondents were non-adjacent (modulo NULLprojections) were treated like zero fertility words, i.e., we viewed them as unreliable indicators of block status (compare figure 4). (7) shows the internal representation of the block structure for (6) (compare figure 3). L and R are used for the beginning and end of blocks, when the adjacent boundary zone is empty; l and r are used next to non-empty boundary zones. Words that have correspondents in 9In the simplest model, we use the factor 0 for spans satisfying the distituent condition underlying hypothesis (4), and factor 1 for all other spans; in other words, parses involving a distituent are cancelled out. We also experimented with various levels of weight factors: for instance, distituents were assigned factor 0.01, likely distituents factor 0.1, neutral spans 1, and likely constituents factor 2. Likely constituents are defined as spans for which one end is adjacent to an empty block boundary zone (i.e., there is no zero fertility word in the block boundary zone which could be the actual boundary of constituents in which the block is involved). Most variations in the weighting scheme did not have a significant effect, but they caused differences in coverage because rules with a probability below a certain threshold were dropped in training. Below, we report the results of the 0.01–0.1–1–2 scheme, which had a reasonably high coverage on the test data. the normal sequence are encoded as *, zero fertility words as -; A and B are used for the first block in a sentence instead of L and R, unless it arises from “relocation”, which increases likelihood for constituent status (likewise for the last block: Y and Z). Since we are interested only in first-order blocks here, the compact string-based representation is sufficient. (6) la parole est à m. graefe zu baringdorf pour motiver la demande NULL ({ 3 4 11 }) mr ({ 5 }) graefe ({ 6 }) zu ({ 7 }) baringdorf ({ 8 }) has ({ }) the ({ 1 }) floor ({ 2 }) to ({ 9 }) explain ({ 10 }) this ({ }) request ({ 12 }) (7) [L**r-lRY*-*Z] The second step for computing the weight factors creates a chart of all string spans over the given sentence and marks for each span whether it is a distituent, possible constituent or likely distituent, based on the location of boundary symbols. (For instance zu Baringdorf has the is marked as a distituent; the floor and has the floor are marked as likely constituents.) The tests are implemented as simple regular expressions. The chart of weight factors is represented as an array which is stored in the training corpus file along with the sentences. We combine the weight factors from various languages, since each of them may contribute distinct (non)constituent information. 
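The outcome of this step can be pictured as a chart-filling routine like the following; the 0.01–0.1–1–2 factors follow the scheme described above, while the toy span classifier (which approximates the boundary-zone tests over the block encoding) is our own simplification.

```python
def weight_chart(n, classify):
    """Fill a chart of weight factors for all spans of an n-word sentence. classify(i, j)
    returns 'distituent', 'likely_distituent', 'neutral' or 'likely_constituent'."""
    factor = {"distituent": 0.01, "likely_distituent": 0.1,
              "neutral": 1.0, "likely_constituent": 2.0}
    return {(i, j): factor[classify(i, j)]
            for i in range(n) for j in range(i + 1, n + 1)}

# Toy classifier for a 6-word sentence with maximal blocks (0, 3) and (3, 6) and an empty
# boundary zone between them; the real tests run over the L/R/l/r block-encoding string.
blocks = [(0, 3), (3, 6)]
def toy_classify(i, j):
    zones = list(zip(blocks, blocks[1:]))
    for (a, b), (c, d) in zones:
        if i < b and j > c and not (i <= a and j >= b) and not (i <= c and j >= d):
            return "distituent"
    if any(j == b or i == c for (a, b), (c, d) in zones):
        return "likely_constituent"   # one end sits at an empty block boundary zone
    return "neutral"

chart = weight_chart(6, toy_classify)
print(chart[(2, 4)], chart[(0, 3)], chart[(0, 2)])   # 0.01 2.0 1.0
```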
The inside-outside algorithm reads in the weight factor array and uses it in the computation of expected rule counts. We used the probability of the statistical word alignment as a confidence measure to filter out unreliable training sentences. Due to the conservative nature of the information we extract from the alignment, the results indicate however that filtering is not necessary. 5 Evaluation For evaluation, we ran the PCFG resulting from training with the Viterbi algorithm10 on parts of the Wall Street Journal (WSJ) section of the Penn Treebank and compared the tree structure for the most 10We used the LoPar parser (Schmid, 2000) for this. System Unlab. Prec. Unlab. Recall F -Score Crossing Brack. Left-branching 30.4 35.8 32.9 3.06 Right-branching 36.2 42.6 39.2 2.48 Standard PCFG induction 42.4 64.9 51.3 2.2 PCFG trained with C/D weight 47.8 72.1 57.5 1.7 factors from Europarl corpus Upper limit 66.08 100.0 79.6 0.0 Figure 5: Scores for test sentences from WSJ section 23, up to length 10. probable parse for the test sentences against the gold standard treebank annotation. (Note that one does not necessarily expect that an induced grammar will match a treebank annotation, but it may at least serve as a basis for comparison.) The evaluation criteria we apply are unlabeled bracketing precision and recall (and crossing brackets). We follow an evaluation criterion that (Klein and Manning, 2002, footnote 3) discuss for the evaluation of a not fully supervised grammar induction approach based on a binary grammar topology: bracket multiplicity (i.e., non-branching projections) is collapsed into a single set of brackets (since what is relevant is the constituent structure that was induced).11 For comparison, we provide baseline results that a uniform left-branching structure and a uniform right-branching structure (which encodes some nontrivial information about English syntax) would give rise to. As an upper boundary for the performance a binary grammar can achieve on the WSJ, we present the scores for a minimal binarized extension of the gold-standard annotation. The results we can report at this point are based on a comparatively small training set.12 So, it may be too early for conclusive results. (An issue that arises with the small training set is that smoothing techniques would be required to avoid overtraining, but these tend to dominate the test application, so the effect of the parallel-corpus based information cannot be seen so clearly.) But we think that the results are rather encouraging. As the table in figure 5 shows, the PCFG we induced based on the parallel-text derived weight factors reaches 57.5 as the F -score of unlabeled precision and recall on sentences up to length 10.13 We 11Note that we removed null elements from the WSJ, but we left punctuation in place. We used the EVALB program for obtaining the measures, however we preprocessed the bracketings to reflect the criteria we discuss here. 12This is not due to scalability issues of the system; we expect to be able to run experiments on rather large training sets. Since no manual annotation is required, the available resources are practically indefinite. 13For sentences up to length 30, the F  -score drops to 28.7 show the scores for an experiment without smoothing, trained on c. 3,000 sentences. Since no smoothing was applied, the resulting coverage (with lowprobability rules removed) on the test set is about 80%. 
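To make the evaluation criteria concrete, the sketch below computes unlabeled bracketing precision, recall and F-score over span sets. It is a hypothetical helper, not the EVALB program used in the experiments; details such as punctuation handling, the exclusion of trivial spans and the treatment of full-sentence brackets are glossed over. Representing a tree by its set of spans collapses non-branching projections automatically, as required above.

```python
# Minimal unlabeled bracketing evaluation over span sets (illustrative only).
# Trees are nested Python lists whose leaves are word strings.

def spans(tree, start=0):
    """Return (end position, set of (start, end) spans) for a nested-list tree."""
    if isinstance(tree, str):
        return start + 1, set()
    end, result = start, set()
    for child in tree:
        end, child_spans = spans(child, end)
        result |= child_spans
    result.add((start, end))
    return end, result

def unlabeled_prf(gold_tree, test_tree):
    _, gold = spans(gold_tree)
    _, test = spans(test_tree)
    hits = len(gold & test)
    p, r = hits / len(test), hits / len(gold)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = [["the", "old", "man"], [["sat"], ["down"]]]
test = [["the", ["old", "man"]], ["sat", "down"]]
print(unlabeled_prf(gold, test))    # -> (0.75, 0.6, 0.666...)
```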
It took 74 iterations of the inside-outside algorithm to train the weight-factor-trained grammar; the final version has 1005 rules. For comparison we induced another PCFG based on the same X-bar topology without using the weight factor mechanism. This grammar ended up with 1145 rules after 115 iterations. The F -score is only 51.3 (while the coverage is the same as for the weight-factor-trained grammar). Figure 6 shows the complete set of (singular) “NP rules” emerging from the weight-factor-trained grammar, which are remarkably well-behaved, in particular when we compare them to the corresponding rules from the PCFG induced in the standard way (figure 7). (XP categories are written as POS-TAG  -P, X head categories are written as POS-TAG  -0 – so the most probable NP productions in figure 6 are NP  N PP, NP  N, NP  ADJP N, NP  NP PP, NP  N PropNP.) Of course we are comparing an unsupervised technique with a mildly supervised technique; but the results indicate that the relatively subtle information discussed in section 2 seems to be indeed very useful. 6 Discussion This paper presented a novel approach of using parallel corpora as the only resource in the creation of a monolingual analysis tools. We believe that in order to induce high-quality tools based on statistical word alignment, the training approach for the target language tool has to be able to exploit islands of reliable information in a stream of potentially rather noisy data. We experimented with an initial idea to address this task, which is conceptually simple and can be implemented building on existing technology: using the notion of word blocks projected (as compared to 23.5 for the standard PCFG). 0.300467 NN-P --> NN-0 IN-P 0.25727 NN-P --> NN-0 0.222335 NN-P --> JJ-P NN-0 0.0612312 NN-P --> NN-P IN-P 0.0462079 NN-P --> NN-0 NP-P 0.0216048 NN-P --> NN-0 ,-P 0.0173518 NN-P --> NN-P NN-0 0.0114746 NN-P --> NN-0 NNS-P 0.00975112 NN-P --> NN-0 MD-P 0.00719605 NN-P --> NN-0 VBZ-P 0.00556762 NN-P --> NN-0 NN-P 0.00511326 NN-P --> NN-0 VVD-P 0.00438077 NN-P --> NN-P VBD-P 0.00423814 NN-P --> NN-P ,-P 0.00409675 NN-P --> NN-0 CD-P 0.00286634 NN-P --> NN-0 VHZ-P 0.00258022 NN-P --> VVG-P NN-0 0.0018237 NN-P --> NN-0 TO-P 0.00162601 NN-P --> NN-P VVN-P 0.00157752 NN-P --> NN-P VB-P 0.00125101 NN-P --> NN-0 VVN-P 0.00106749 NN-P --> NN-P VBZ-P 0.00105866 NN-P --> NN-0 VBD-P 0.000975359 NN-P --> VVN-P NN-0 0.000957702 NN-P --> NN-0 SENT-P 0.000931056 NN-P --> NN-0 CC-P 0.000902116 NN-P --> NN-P SENT-P 0.000717542 NN-P --> NN-0 VBP-P 0.000620843 NN-P --> RB-P NN-0 0.00059608 NN-P --> NN-0 WP-P 0.000550255 NN-P --> NN-0 PDT-P 0.000539155 NN-P --> NN-P CC-P 0.000341498 NN-P --> WP$-P NN-0 0.000330967 NN-P --> WRB-P NN-0 0.000186441 NN-P --> ,-P NN-0 0.000135449 NN-P --> CD-P NN-0 7.16819e-05 NN-P --> NN-0 POS-P Figure 6: Full set of rules based on the NN tag in the C/D-trained PCFG by word alignment as an indication for (mainly) impossible string spans. Applying this information in order to impose weighting factors on the EM algorithm for PCFG induction gives us a first, simple instance of the “island-exploiting” system we think is needed. More sophisticated models may make use some of the experience gathered in these experiments. The conservative way in which cross-linguistic relations between phrase structure is exploited has the advantage that we don’t have to make unwarranted assumptions about direct correspondences among the majority of constituent spans, or even direct correspondences of phrasal categories. 
The technique is particularly well-suited for the exploitation of parallel corpora involving multiple lan0.429157 NN-P --> DT-P NN-0 0.0816385 NN-P --> IN-P NN-0 0.0630426 NN-P --> NN-0 0.0489261 NN-P --> PP$-P NN-0 0.0487434 NN-P --> JJ-P NN-0 0.0451819 NN-P --> NN-P ,-P 0.0389741 NN-P --> NN-P VBZ-P 0.0330732 NN-P --> NN-P NN-0 0.0215872 NN-P --> NN-P MD-P 0.0201612 NN-P --> NN-P TO-P 0.0199536 NN-P --> CC-P NN-0 0.015509 NN-P --> NN-P VVZ-P 0.0112734 NN-P --> NN-P RB-P 0.00977683 NN-P --> NP-P NN-0 0.00943218 NN-P --> CD-P NN-0 0.00922132 NN-P --> NN-P WDT-P 0.00896826 NN-P --> POS-P NN-0 0.00749452 NN-P --> NN-P VHZ-P 0.00621328 NN-P --> NN-0 ,-P 0.00520734 NN-P --> NN-P VBD-P 0.004674 NN-P --> JJR-P NN-0 0.00407644 NN-P --> NN-P VVD-P 0.00394681 NN-P --> NN-P VVN-P 0.00354741 NN-P --> NN-0 MD-P 0.00335451 NN-P --> NN-0 NN-P 0.0030748 NN-P --> EX-P NN-0 0.0026483 NN-P --> WRB-P NN-0 0.00262025 NN-P --> NN-0 TO-P [...] 0.000403279 NN-P --> NN-0 VBP-P 0.000378414 NN-P --> NN-0 PDT-P 0.000318026 NN-P --> NN-0 VHZ-P 2.27821e-05 NN-P --> NN-P PP-P Figure 7: Standard induced PCFG: Excerpt of rules based on the NN tag guages like the Europarl corpus. Note that nothing in our methodology made any language particular assumptions; future research has to show whether there are language pairs that are particularly effective, but in general the technique should be applicable for whatever parallel corpus is at hand. A number of studies are related to the work we presented, most specifically work on parallel-text based “information projection” for parsing (Hwa et al., 2002), but also grammar induction work based on constituent/distituent information (Klein and Manning, 2002) and (language-internal) alignmentbased learning (van Zaanen, 2000). However to our knowledge the specific way of bringing these aspects together is new. References Yaser Al-Onaizan, Jan Curin, Michael Jahr, Kevin Knight, John Lafferty, Dan Melamed, FranzJosef Och, David Purdy, Noah A. Smith, and David Yarowsky. 1999. Statistical machine translation. Final report, JHU Workshop. Michael Collins. 1999. A statistical parser for Czech. In Proceedings of ACL. Rebecca Hwa, Philip Resnik, and Amy Weinberg. 2002. Breaking the resource bottleneck for multilingual parsing. In Proceedings of LREC. Dan Klein and Christopher Manning. 2002. A generative constituent-context model for improved grammar induction. In Proceedings of ACL. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the Human Language Technology Conference 2003 (HLT-NAACL 2003), Edmonton, Canada. Philipp Koehn. 2002. Europarl: A multilingual corpus for evaluation of machine translation. Ms., University of Southern California. Kamal Nigam, Andrew Kachites McCallum, Sebastian Thrun, and Tom M. Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103–134. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In International Conference on New Methods in Language Processing, Manchester, UK. Helmut Schmid. 2000. Lopar: Design and implementation. Arbeitspapiere des Sonderforschungsbereiches 340, No. 149, IMS Stuttgart. Menno van Zaanen. 2000. ABL: Alignment-based learning. In COLING 2000 - Proceedings of the 18th International Conference on Computational Linguistics, pages 961–967. Dekai Wu. 1997. 
Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. David Yarowsky and Grace Ngai. 2001. Inducing multilingual POS taggers and NP bracketers via robust projection across aligned corpora. In Proceedings of NAACL.
Corpus-Based Induction of Syntactic Structure: Models of Dependency and Constituency Dan Klein Computer Science Department Stanford University Stanford, CA 94305-9040 [email protected] Christopher D. Manning Computer Science Department Stanford University Stanford, CA 94305-9040 [email protected] Abstract We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional regularities that are salient in the data. 1 Introduction The task of statistically inducing hierarchical syntactic structure over unannotated sentences of natural language has received a great deal of attention (Carroll and Charniak, 1992; Pereira and Schabes, 1992; Brill, 1993; Stolcke and Omohundro, 1994). Researchers have explored this problem for a variety of reasons: to argue empirically against the poverty of the stimulus (Clark, 2001), to use induction systems as a first stage in constructing large treebanks (van Zaanen, 2000), to build better language models (Baker, 1979; Chen, 1995), and to examine cognitive issues in language learning (Solan et al., 2003). An important distinction should be drawn between work primarily interested in the weak generative capacity of models, where modeling hierarchical structure is only useful insofar as it leads to improved models over observed structures (Baker, 1979; Chen, 1995), and work interested in the strong generative capacity of models, where the unobserved structure itself is evaluated (van Zaanen, 2000; Clark, 2001; Klein and Manning, 2002). This paper falls into the latter category; we will be inducing models of linguistic constituency and dependency with the goal of recovering linguistically plausible structures. We make no claims as to the cognitive plausibility of the induction mechanisms we present here; however, the ability of these systems to recover substantial linguistic patterns from surface yields alone does speak to the strength of support for these patterns in the data, and hence undermines arguments based on “the poverty of the stimulus” (Chomsky, 1965). 2 Unsupervised Dependency Parsing Most recent progress in unsupervised parsing has come from tree or phrase-structure grammar based models (Clark, 2001; Klein and Manning, 2002), but there are compelling reasons to reconsider unsupervised dependency parsing. First, most state-ofthe-art supervised parsers make use of specific lexical information in addition to word-class level information – perhaps lexical information could be a useful source of information for unsupervised methods. Second, a central motivation for using tree structures in computational linguistics is to enable the extraction of dependencies – function-argument and modification structures – and it might be more advantageous to induce such structures directly. Third, as we show below, for languages such as Chinese, which have few function words, and for which the definition of lexical categories is much less clear, dependency structures may be easier to detect. 
2.1 Representation and Evaluation An example dependency representation of a short sentence is shown in figure 1(a), where, following the traditional dependency grammar notation, the regent or head of a dependency is marked with the tail of the dependency arrow, and the dependent is marked with the arrowhead (Mel′ˇcuk, 1988). It will be important in what follows to see that such a representation is isomorphic (in terms of strong generative capacity) to a restricted form of phrase structure grammar, where the set of terminals and nonterminals is identical, and every rule is of the form X →X Y or X →Y X (Miller, 1999), giving the isomorphic representation of figure 1(a) shown in figure 1(b).1 Depending on the model, part-of1Strictly, such phrase structure trees are isomorphic not to flat dependency structures, but to specific derivations of those NN Factory NNS payrolls VBD fell IN in NN September ROOT VBD NNS NN Factory NNS payrolls VBD VBD fell IN IN in NN September S NP NN Factory NNS payrolls VP VBD fell PP IN in NN September (a) Classical Dependency Structure (b) Dependency Structure as CF Tree (c) CFG Structure Figure 1: Three kinds of parse structures. speech categories may be included in the dependency representation, as shown here, or dependencies may be directly between words. Below, we will assume an additonal reserved nonterminal ROOT, whose sole dependent is the head of the sentence. This simplifies the notation, math, and the evaluation metric. A dependency analysis will always consist of exactly as many dependencies as there are words in the sentence. For example, in the dependency structure of figure 1(b), the dependencies are {(ROOT, fell), (fell, payrolls), (fell, in), (in, September), (payrolls, Factory)}. The quality of a hypothesized dependency structure can hence be evaluated by accuracy as compared to a gold-standard dependency structure, by reporting the percentage of dependencies shared between the two analyses. In the next section, we discuss several models of dependency structure, and throughout this paper we report the accuracy of various methods at recovering gold-standard dependency parses from various corpora, detailed here. WSJ is the entire Penn English Treebank WSJ portion. WSJ10 is the subset of sentences which contained 10 words or less after the removal of punctuation. CTB10 is the sentences of the same length from the Penn Chinese treebank (v3). NEGRA10 is the same, for the German NEGRA corpus, based on the supplied conversion of the NEGRA corpus into Penn treebank format. In most of the present experiments, the provided partsof-speech were used as the input alphabet, though we also present limited experimentation with synthetic parts-of-speech. It is important to note that the Penn treebanks do not include dependency annotations; however, the automatic dependency rules from (Collins, 1999) are sufficiently accurate to be a good benchmark for unsupervised systems for the time being (though see below for specific issues). Similar head-finding rules were used for Chinese experiments. The NEGRA corpus, however, does supply hand-annotated dependency structures. structures which specify orders of attachment among multiple dependents which share a common head. • • • • • ROOT Figure 2: Dependency graph with skeleton chosen, but words not populated. Where possible, we report an accuracy figure for both directed and undirected dependencies. 
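As an illustration of this metric, the hypothetical helper below computes directed and undirected accuracy from head-index arrays (entry i gives the position of word i+1's head, with 0 standing for ROOT); it is a sketch for exposition, not the evaluation code behind the reported figures.

```python
# Directed and undirected dependency accuracy (illustrative sketch).

def dependency_accuracy(gold_heads, test_heads):
    assert len(gold_heads) == len(test_heads)
    n = len(gold_heads)
    directed = sum(g == t for g, t in zip(gold_heads, test_heads)) / n
    gold_links = {frozenset((h, d + 1)) for d, h in enumerate(gold_heads)}
    test_links = {frozenset((h, d + 1)) for d, h in enumerate(test_heads)}
    undirected = len(gold_links & test_links) / n
    return directed, undirected

# "Factory payrolls fell in September" (figure 1):
# gold heads: Factory->payrolls, payrolls->fell, fell->ROOT, in->fell, September->in
gold = [2, 3, 0, 3, 4]
test = [2, 3, 0, 5, 3]   # a hypothetical parse that reverses the in/September link
print(dependency_accuracy(gold, test))   # -> (0.6, 0.8)
```

In the toy example, the reversed in/September attachment is penalized by the directed metric but not by the undirected one, which is exactly the kind of effect discussed next.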
Reporting undirected numbers has two advantages: first, it facilitates comparison with earlier work, and, more importantly, it allows one to partially obscure the effects of alternate analyses, such as the systematic choice between a modal and a main verb for the head of a sentence (in either case, the two verbs would be linked, but the direction would vary). 2.2 Dependency Models The dependency induction task has received relatively little attention; the best known work is Carroll and Charniak (1992), Yuret (1998), and Paskin (2002). All systems that we are aware of operate under the assumption that the probability of a dependency structure is the product of the scores of the dependencies (attachments) in that structure. Dependencies are seen as ordered (head, dependent) pairs of words, but the score of a dependency can optionally condition on other characteristics of the structure, most often the direction of the dependency (whether the arrow points left or right). Some notation before we present specific models: a dependency d is a pair ⟨h, a⟩of a head and argument, which are words in a sentence s, in a corpus S. For uniformity of notation with section 4, words in s are specified as size-one spans of s: for example the first word would be 0s1. A dependency structure D over a sentence is a set of dependencies (arcs) which form a planar, acyclic graph rooted at the special symbol ROOT, and in which each word in s appears as an argument exactly once. For a dependency structure D, there is an associated graph G which represents the number of words and arrows between them, without specifying the words themselves (see figure 2). A graph G and sentence s together thus determine a dependency structure. The Model Dir. Undir. English (WSJ) Paskin 01 39.7 RANDOM 41.7 Charniak and Carroll 92-inspired 44.7 ADJACENT 53.2 DMV 54.4 English (WSJ10) RANDOM 30.1 45.6 ADJACENT 33.6 56.7 DMV 43.2 63.7 German (NEGRA10) RANDOM 21.8 41.5 ADJACENT 32.6 51.2 DMV 36.3 55.8 Chinese (CTB10) RANDOM 35.9 47.3 ADJACENT 30.2 47.3 DMV 42.5 54.2 Figure 3: Parsing performance (directed and undirected dependency accuracy) of various dependency models on various treebanks, along with baselines. dependency structure is the object generated by all of the models that follow; the steps in the derivations vary from model to model. Existing generative dependency models intended for unsupervised learning have chosen to first generate a word-free graph G, then populate the sentence s conditioned on G. For instance, the model of Paskin (2002), which is broadly similar to the semiprobabilistic model in Yuret (1998), first chooses a graph G uniformly at random (such as figure 2), then fills in the words, starting with a fixed root symbol (assumed to be at the rightmost end), and working down G until an entire dependency structure D is filled in (figure 1a). The corresponding probabilistic model is P(D) = P(s, G) = P(G)P(s|G) = P(G) Y (i, j,dir)∈G P(i−1si| j−1s j, dir) . In Paskin (2002), the distribution P(G) is fixed to be uniform, so the only model parameters are the conditional multinomial distributions P(a|h, dir) that encode which head words take which other words as arguments. The parameters for left and right arguments of a single head are completely independent, while the parameters for first and subsequent arguments in the same direction are identified. In those experiments, the model above was trained on over 30M words of raw newswire, using EM in an entirely unsupervised fashion, and at great computational cost. 
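For concreteness, the scoring side of such a model can be sketched as follows. This is a hypothetical rendering in the spirit of the model just described (uniform P(G) times a product of P(argument | head, direction) terms); the parameter table `p_choose` is invented for illustration, and the EM training loop is not shown.

```python
import math

# Log-probability of a dependency structure under a word-pair model
# (illustrative sketch; parameters are made up, not estimated).

def log_prob(words, deps, p_choose, log_p_graph=0.0):
    """deps is a list of (head_index, arg_index) pairs over `words`;
    a head index of -1 stands for ROOT."""
    total = log_p_graph                     # log P(G); uniform in the model above
    for h, a in deps:
        head = "ROOT" if h < 0 else words[h]
        direction = "right" if a > h else "left"
        prob = p_choose.get((head, direction), {}).get(words[a], 1e-12)
        total += math.log(prob)
    return total

words = ["factory", "payrolls", "fell", "in", "september"]
deps = [(-1, 2), (2, 1), (2, 3), (3, 4), (1, 0)]
p_choose = {
    ("ROOT", "right"): {"fell": 0.2},
    ("fell", "left"): {"payrolls": 0.1},
    ("fell", "right"): {"in": 0.3},
    ("in", "right"): {"september": 0.4},
    ("payrolls", "left"): {"factory": 0.5},
}
print(log_prob(words, deps, p_choose))
```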
However, as shown in figure 3, the resulting parser predicted dependencies at below chance level (measured by choosing a random dependency structure). This below-random performance seems to be because the model links word pairs which have high mutual information (such as occurrences of congress and bill) regardless of whether they are plausibly syntactically related. In practice, high mutual information between words is often stronger between two topically similar nouns than between, say, a preposition and its object. One might hope that the problem with this model is that the actual lexical items are too semantically charged to represent workable units of syntactic structure. If one were to apply the Paskin (2002) model to dependency structures parameterized simply on the word-classes, the result would be isomorphic to the “dependency PCFG” models described in Carroll and Charniak (1992). In these models, Carroll and Charniak considered PCFGs with precisely the productions (discussed above) that make them isomorphic to dependency grammars, with the terminal alphabet being simply partsof-speech. Here, the rule probabilities are equivalent to P(Y|X, right) and P(Y|X, left) respectively.2 The actual experiments in Carroll and Charniak (1992) do not report accuracies that we can compare to, but they suggest that the learned grammars were of extremely poor quality. With hindsight, however, the main issue in their experiments appears to be not their model, but that they randomly initialized the production (attachment) probabilities. As a result, their learned grammars were of very poor quality and had high variance. However, one nice property of their structural constraint, which all dependency models share, is that the symbols in the grammar are not symmetric. Even with a grammar in which the productions are initially uniform, a symbol X can only possibly have non-zero posterior likelihood over spans which contain a matching terminal X. Therefore, one can start with uniform rewrites and let the interaction between the data and the model structure break the initial symmetry. If one recasts their experiments in this way, they achieve an accuracy of 44.7% on the Penn treebank, which is higher than choosing a random dependency structure, but lower than simply linking all adjacent words into a left-headed (and right-branching) structure (53.2%). A huge limitation of both of the above models is that they are incapable of encoding even first-order valence facts. For example, the latter model learns that nouns to the left of the verb (usually subjects) 2There is another, subtle distinction: in the Paskin work, a canonical ordering of multiple attachments was fixed, while in the Carroll and Charniak work all attachment orders are considered, giving a numerical bias towards structures where heads take more than one argument. i h j ⌈a⌉ k h i ⌈a⌉ j h⌉ k h⌉ i h j h⌉ STOP i h⌉ j ⌈h⌉ STOP (a) (b) (c) (d) Figure 4: Dependency configurations in a lexicalized tree: (a) right attachment, (b) left attachment, (c) right stop, (d) left stop. h and a are head and argument words, respectively, while i, j, and k are positions between words. attach to the verb. But then, given a NOUN NOUN VERB sequence, both nouns will attach to the verb – there is no way that the model can learn that verbs have exactly one subject. We now turn to an improved dependency model that addresses this problem. 
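Before doing so, it is worth pinning down the adjacent-word baseline used for comparison here: every word takes its left neighbour as head, and the first word attaches to ROOT. A minimal sketch (hypothetical helper name):

```python
def adjacent_heads(sentence_length):
    """Left-headed adjacent-link baseline: 1-indexed word i gets head i-1,
    and word 1 gets ROOT (position 0)."""
    return list(range(sentence_length))

print(adjacent_heads(5))   # -> [0, 1, 2, 3, 4]
```

Its output can be scored directly with the dependency-accuracy sketch given earlier.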
3 An Improved Dependency Model The dependency models discussed above are distinct from dependency models used inside highperformance supervised probabilistic parsers in several ways. First, in supervised models, a head outward process is modeled (Eisner, 1996; Collins, 1999). In such processes, heads generate a sequence of arguments outward to the left or right, conditioning on not only the identity of the head and direction of the attachment, but also on some notion of distance or valence. Moreover, in a head-outward model, it is natural to model stop steps, where the final argument on each side of a head is always the special symbol STOP. Models like Paskin (2002) avoid modeling STOP by generating the graph skeleton G first, uniformly at random, then populating the words of s conditioned on G. Previous work (Collins, 1999) has stressed the importance of including termination probabilities, which allows the graph structure to be generated jointly with the terminal words, precisely because it does allow the modeling of required dependents. We propose a simple head-outward dependency model over word classes which includes a model of valence, which we call DMV (for dependency model with valence). We begin at the ROOT. In the standard way, each head generates a series of nonSTOP arguments to one side, then a STOP argument to that side, then non-STOP arguments to the other side, then a second STOP. For example, in the dependency structure in figure 1, we first generate a single child of ROOT, here fell. Then we recurse to the subtree under fell. This subtree begins with generating the right argument in. We then recurse to the subtree under in (generating September to the right, a right STOP, and a left STOP). Since there are no more right arguments after in, its right STOP is generated, and the process moves on to the left arguments of fell. In this process, there are two kinds of derivation events, whose local probability factors constitute the model’s parameters. First, there is the decision at any point whether to terminate (generate STOP) or not: PSTOP(STOP|h, dir, adj). This is a binary decision conditioned on three things: the head h, the direction (generating to the left or right of the head), and the adjacency (whether or not an argument has been generated yet in the current direction, a binary variable). The stopping decision is estimated directly, with no smoothing. If a stop is generated, no more arguments are generated for the current head to the current side. If the current head’s argument generation does not stop, another argument is chosen using: PCHOOSE(a|h, dir). Here, the argument is picked conditionally on the identity of the head (which, recall, is a word class) and the direction. This term, also, is not smoothed in any way. Adjacency has no effect on the identity of the argument, only on the likelihood of termination. After an argument is generated, its subtree in the dependency structure is recursively generated. Formally, for a dependency structure D, let each word h have left dependents depsD(h,l) and right dependents depsD(h,r). 
The following recursion defines the probability of the fragment D(h) of the dependency tree rooted at h: P(D(h)) = Y dir∈{l,r} Y a∈depsD(h,dir) PSTOP(¬STOP|h, dir, adj) PCHOOSE(a|h, dir)P(D(a)) PSTOP(STOP|h, dir, adj) One can view a structure generated by this derivational process as a “lexicalized” tree composed of the local binary and unary context-free configurations shown in figure 4.3 Each configuration equivalently represents either a head-outward derivation step or a context-free rewrite rule. There are four such configurations. Figure 4(a) shows a head h 3It is lexicalized in the sense that the labels in the tree are derived from terminal symbols, but in our experiments the terminals were word classes, not individual lexical items. taking a right argument a. The tree headed by h contains h itself, possibly some right arguments of h, but no left arguments of h (they attach after all the right arguments). The tree headed by a contains a itself, along with all of its left and right children. Figure 4(b) shows a head h taking a left argument a – the tree headed by h must have already generated its right stop to do so. Figure 4(c) and figure 4(d) show the sealing operations, where STOP derivation steps are generated. The left and right marks on node labels represent left and right STOPs that have been generated.4 The basic inside-outside algorithm (Baker, 1979) can be used for re-estimation. For each sentence s ∈S, it gives us cs(x : i, j), the expected fraction of parses of s with a node labeled x extending from position i to position j. The model can be re-estimated from these counts. For example, to re-estimate an entry of PSTOP(STOP|h, left, non-adj) according to a current model 2, we calculate two quantities.5 The first is the (expected) number of trees headed by h⌉whose rightmost edge i is strictly left of h. The second is the number of trees headed by ⌈h⌉with rightmost edge i strictly left of h. The ratio is the MLE of that local probability factor: PSTOP(STOP|h, left, non-adj) = P s∈S P i<loc(h) P k c(h⌉: i, k) P s∈S P i<loc(h) P k c(⌈h⌉: i, k) This can be intuitively thought of as the relative number of times a tree headed by h had already taken at least one argument to the left, had an opportunity to take another, but didn’t.6 Initialization is important to the success of any local search procedure. We chose to initialize EM not with an initial model, but with an initial guess at posterior distributions over dependency structures (completions). For the first-round, we constructed a somewhat ad-hoc “harmonic” completion where all non-ROOT words took the same number of arguments, and each took other words as arguments in inverse proportion to (a constant plus) the distance between them. The ROOT always had a single 4Note that the asymmetry of the attachment rules enforces the right-before-left attachment convention. This is harmless and arbitrary as far as dependency evaluations go, but imposes an x-bar-like structure on the constituency assertions made by this model. This bias/constraint is dealt with in section 5. 5To simplify notation, we assume each word h occurs at most one time in a given sentence, between indexes loc(h) and loc(h) + 1). 6As a final note, in addition to enforcing the right-argumentfirst convention, we constrained ROOT to have at most a single dependent, by a similar device. argument and took each word with equal probability. 
This structure had two advantages: first, when testing multiple models, it is easier to start them all off in a common way by beginning with an M-step, and, second, it allowed us to point the model in the vague general direction of what linguistic dependency structures should look like. On the WSJ10 corpus, the DMV model recovers a substantial fraction of the broad dependency trends: 43.2% of guessed directed dependencies were correct (63.7% ignoring direction). To our knowledge, this is the first published result to break the adjacent-word heuristic (at 33.6% for this corpus). Verbs are the sentence heads, prepositions take following noun phrases as arguments, adverbs attach to verbs, and so on. The most common source of discrepancy between the test dependencies and the model’s guesses is a result of the model systematically choosing determiners as the heads of noun phrases, while the test trees have the rightmost noun as the head. The model’s choice is supported by a good deal of linguistic research (Abney, 1987), and is sufficiently systematic that we also report the scores where the NP headship rule is changed to percolate determiners when present. On this adjusted metric, the score jumps hugely to 55.7% directed (and 67.9% undirected). This model also works on German and Chinese at above-baseline levels (55.8% and 54.2% undirected, respectively), with no modifications whatsoever. In German, the largest source of errors is also the systematic postulation of determiner-headed nounphrases. In Chinese, the primary mismatch is that subjects are considered to be the heads of sentences rather than verbs. This dependency induction model is reasonably successful. However, our intuition is still that the model can be improved by paying more attention to syntactic constituency. To this end, after briefly recapping the model of Klein and Manning (2002), we present a combined model that exploits dependencies and constituencies. As we will see, this combined model finds correct dependencies more successfully than the model above, and finds constituents more successfully than the model of Klein and Manning (2002). 4 Distributional Constituency Induction In linear distributional clustering, items (e.g., words or word sequences) are represented by characteristic distributions over their linear contexts (e.g., multinomial models over the preceding and following words, see figure 5). These context distributions are then clustered in some way, often using standard Span Label Constituent Context ⟨0,5⟩ S NN NNS VBD IN NN ⋄– ⋄ ⟨0,2⟩ NP NN NNS ⋄– VBD ⟨2,5⟩ VP VBD IN NN NNS – ⋄ ⟨3,5⟩ PP IN NN VBD – ⋄ ⟨0,1⟩ NN NN ⋄– NNS ⟨1,2⟩ NNS NNS NN – VBD ⟨2,3⟩ VBD VBD NNS – IN ⟨3,4⟩ IN IN VBD – NN ⟨4,5⟩ NN NNS IN – ⋄ (a) (b) Figure 5: The CCM model’s generative process for the sentence in figure 1. (a) A binary tree-equivalent bracketing is chosen at random. (b) Each span generates its yield and context (empty spans not shown here). Derivations which are not coherent are given mass zero. data clustering methods. In the most common case, the items are words, and one uses distributions over adjacent words to induce word classes. Previous work has shown that even this quite simple representation allows the induction of quite high quality word classes, largely corresponding to traditional parts of speech (Finch, 1993; Sch¨utze, 1995; Clark, 2000). A typical pattern would be that stocks and treasuries both frequently occur before the words fell and rose, and might therefore be put into the same class. 
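The context representation itself is easy to picture; the hypothetical sketch below builds such left/right-neighbour signatures for word types. In the constituency case the same idea is applied to word-class sequences (spans) and their linear contexts rather than to single words, and the clustering step that would follow is omitted here.

```python
from collections import Counter, defaultdict

# Build linear-context signatures: counts of left and right neighbours
# for each word type (illustrative sketch only).

def context_signatures(sentences, boundary="<S>"):
    sigs = defaultdict(Counter)
    for sent in sentences:
        padded = [boundary] + sent + [boundary]
        for i in range(1, len(padded) - 1):
            sigs[padded[i]][("L", padded[i - 1])] += 1
            sigs[padded[i]][("R", padded[i + 1])] += 1
    return sigs

corpus = [["stocks", "fell", "sharply"],
          ["treasuries", "fell", "yesterday"],
          ["stocks", "rose", "again"]]
sigs = context_signatures(corpus)
print(sigs["stocks"])       # both types have 'fell' as a frequent right neighbour,
print(sigs["treasuries"])   # so a clusterer can group them into one class
```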
Clark (2001) and Klein and Manning (2002) show that this approach can be successfully used for discovering syntactic constituents as well. However, as one might expect, it is easier to cluster word sequences (or word class sequences) than to tell how to put them together into trees. In particular, if one is given all contiguous subsequences (subspans) from a corpus of sentences, most natural clusters will not represent valid constituents (to the extent that constituency of a non-situated sequence is even a well-formed notion). For example, it is easy enough to discover that DET N and DET ADJ N are similar and that V PREP DET and V PREP DET ADJ are similar, but it is much less clear how to discover that the former pair are generally constituents while the latter pair are generally not. In Klein and Manning (2002), we proposed a constituent-context model (CCM) which solves this problem by building constituency decisions directly into the distributional model, by earmarking a single cluster d for non-constituents. During the calculation of cluster assignments, only a non-crossing subset of the observed word sequences can be assigned to other, constituent clusters. This integrated approach is empirically successful. The CCM works as follows. Sentences are given as sequences s of word classes (parts-of-speech or otherwise). One imagines each sentence as a list of the O(n2) index pairs ⟨i, j⟩, each followed by the corresponding subspan is j and linear context i−1si ∼js j+1 (see figure 5). The model generates all constituent-context pairs, span by span. The first stage is to choose a bracketing B for the sentence, which is a maximal non-crossing subset of the spans (equivalent to a binary tree). In the basic model, P(B) is uniform over binary trees. Then, for each ⟨i, j⟩, the subspan and context pair (is j, i−1si ∼ js j+1) is generated via a classconditional independence model: P(s, B) = P(B) Y ⟨i, j⟩ P(is j|bij)P(i−1si ∼js j+1|bij) That is, all spans guess their sequences and contexts given only a constituency decision b.7 This is a model P(s, B) over hidden bracketings and observed sentences, and it is estimated via EM to maximize the sentence likelihoods P(s) over the training corpus. Figure 6 shows the accuracy of the CCM model not only on English but for the Chinese and German corpora discussed above.8 Results are reported at convergence; for the English case, F1 is monotonic during training, while for the others, there is an earlier peak. Also shown is an upper bound (the target trees are not all binary and so any all-binary system will overpropose constituents). Klein and Manning (2002) gives comparative numbers showing that the basic CCM outperforms other recent systems on the ATIS corpus (which many other constituency induction systems have reported on). While absolute numbers are hard to compare across corpora, all the systems compared to in Klein and Manning (2002) parsed below a right-branching baseline, while the CCM is substantially above it. 5 A Combined Model The two models described above have some common ground. Both can be seen as models over lexicalized trees composed of the configurations in figure 4. For the DMV, it is already a model over these structures. At the “attachment” rewrite for the CCM 7As is typical of distributional clustering, positions in the corpus can get generated multiple times. Since derivations need not be consistent, the entire model is mass deficient when viewed as a model over sentences. 
8In Klein and Manning (2002), we reported results using unlabeled bracketing statistics which gave no credit for brackets which spanned the entire sentence (raising the scores) but macro-averaged over sentences (lowering the scores). The numbers here hew more closely to the standard methods used for evaluating supervised parsers, by being micro-averaged and including full-span brackets. However, the scores are, overall, approximately the same. in (a/b), we assign the quantity: P(isk|true)P(i−1si ∼ksk+1|true) P(isk|false)P(i−1si ∼ksk+1|false) which is the odds ratio of generating the subsequence and context for span ⟨i, k⟩as a constituent as opposed to a non-constituent. If we multiply all trees’ attachment scores by Y ⟨i, j⟩P(is j|false)P(i−1si ∼js j+1|false) the denominators of the odds ratios cancel, and we are left with each tree being assigned the probability it would have received under the CCM.9 In this way, both models can be seen as generating either constituency or dependency structures. Of course, the CCM will generate fairly random dependency structures (constrained only by bracketings). Getting constituency structures from the DMV is also problematic, because the choice of which side to first attach arguments on has ramifications on constituency – it forces x-bar-like structures – even though it is an arbitrary convention as far as dependency evaluations are concerned. For example, if we attach right arguments first, then a verb with a left subject and a right object will attach the object first, giving traditional VPs, while the other attachment order gives subject-verb groups. To avoid this bias, we alter the DMV in the following ways. When using the dependency model alone, we allow each word to have even probability for either generation order (but in each actual head derivation, only one order occurs). When using the models together, better performance was obtained by releasing the one-side-attaching-first requirement entirely. In figure 6, we give the behavior of the CCM constituency model and the DMV dependency model on both constituency and dependency induction. Unsurprisingly, their strengths are complementary. The CCM is better at recovering constituency, and the dependency model is better at recovering dependency structures. It is reasonable to hope that a combination model might exhibit the best of both. In the supervised parsing domain, for example, scoring a lexicalized tree with the product of a simple lexical dependency model and a PCFG model can outperform each factor on its respective metric (Klein and Manning, 2003). 9This scoring function as described is not a generative model over lexicalized trees, because it has no generation step at which nodes’ lexical heads are chosen. This can be corrected by multiplying in a “head choice” factor of 1/(k −j) at each final “sealing” configuration (d). In practice, this correction factor was harmful for the model combination, since it duplicated a strength of the dependency model, badly. Model UP UR UF1 Dir Undir English (WSJ10 – 7422 Sentences) LBRANCH/RHEAD 25.6 32.6 28.7 33.6 56.7 RANDOM 31.0 39.4 34.7 30.1 45.6 RBRANCH/LHEAD 55.1 70.0 61.7 24.0 55.9 DMV 46.6 59.2 52.1 43.2 62.7 CCM 64.2 81.6 71.9 23.8 43.3 DMV+CCM (POS) 69.3 88.0 77.6 47.5 64.5 DMV+CCM (DISTR.) 
65.2 82.8 72.9 42.3 60.4 UBOUND 78.8 100.0 88.1 100.0 100.0 German (NEGRA10 – 2175 Sentences) LBRANCH/RHEAD 27.4 48.8 35.1 32.6 51.2 RANDOM 27.9 49.6 35.7 21.8 41.5 RBRANCH/LHEAD 33.8 60.1 43.3 21.0 49.9 DMV 38.4 69.5 49.5 40.0 57.8 CCM 48.1 85.5 61.6 25.5 44.9 DMV+CCM 49.6 89.7 63.9 50.6 64.7 UBOUND 56.3 100.0 72.1 100.0 100.0 Chinese (CTB10 – 2437 Sentences) LBRANCH/RHEAD 26.3 48.8 34.2 30.2 43.9 RANDOM 27.3 50.7 35.5 35.9 47.3 RBRANCH/LHEAD 29.0 53.9 37.8 14.2 41.5 DMV 35.9 66.7 46.7 42.5 54.2 CCM 34.6 64.3 45.0 23.8 40.5 DMV+CCM 33.3 62.0 43.3 55.2 60.3 UBOUND 53.9 100.0 70.1 100.0 100.0 Figure 6: Parsing performance of the combined model on various treebanks, along with baselines. In the combined model, we score each tree with the product of the probabilities from the individual models above. We use the inside-outside algorithm to sum over all lexicalized trees, similar to the situation in section 3. The tree configurations are shown in figure 4. For each configuration, the relevant scores from each model are multiplied together. For example, consider figure 4(a). From the CCM we must generate isk as a constituent and its corresponding context. From the dependency model, we pay the cost of h taking a as a right argument (PCHOOSE), as well as the cost of choosing not to stop (PSTOP). We then running the inside-outside algorithm over this product model. For the results, we can extract the sufficient statistics needed to reestimate both individual models.10 The models in combination were intitialized in the same way as when they were run individually. Sufficient statistics were separately taken off these individual completions. From then on, the resulting models were used together during re-estimation. Figure 6 summarizes the results. The combined model beats the CCM on English F1: 77.6 vs. 71.9. The figure also shows the combination model’s score when using word classes which were induced entirely automatically, using the simplest distributional clustering method of Sch¨utze (1995). These classes show some degradation, e.g. 72.9 F1, but it 10The product, like the CCM itself, is mass-deficient. is worth noting that these totally unsupervised numbers are better than the performance of the CCM model of Klein and Manning (2002) running off of Penn treebank word classes. Again, if we modify the gold standard so as to make determiners the head of NPs, then this model with distributional tags scores 50.6% on directed and 64.8% on undirected dependency accuracy. On the German data, the combination again outperforms each factor alone, though while the combination was most helpful at boosting constituency quality for English, for German it provided a larger boost to the dependency structures. Finally, on the Chinese data, the combination did substantially boost dependency accuracy over either single factor, but actually suffered a small drop in constituency.11 Overall, the combination is able to combine the individual factors in an effective way. 6 Conclusion We have presented a successful new dependencybased model for the unsupervised induction of syntactic structure, which picks up the key ideas that have made dependency models successful in supervised statistical parsing work. We proceeded to show that it works cross-linguistically. We then demonstrated how this model could be combined with the previous best constituent-induction model to produce a combination which, in general, substantially outperforms either individual model, on either metric. 
A key reason that these models are capable of recovering structure more accurately than previous work is that they minimize the amount of hidden structure that must be induced. In particular, neither model attempts to learn intermediate, recursive categories with no direct connection to surface statistics. Our results here are just on the ungrounded induction of syntactic structure. Nonetheless, we see the investigation of what patterns can be recovered from corpora as important, both from a computational perspective and from a philosophical one. It demonstrates that the broad constituent and dependency structure of a language can be recovered quite successfully (individually or, more effectively, jointly) from a very modest amount of training data. 7 Acknowledgements This work was supported by a Microsoft Graduate Research Fellowship to the first author and by 11This seems to be partially due to the large number of unanalyzed fragments in the Chinese gold standard, which leave a very large fraction of the posited bracketings completely unjudged. the Advanced Research and Development Activity (ARDA)’s Advanced Question Answering for Intelligence (AQUAINT) Program. This work also benefited from an enormous amount of useful feedback, from many audiences and individuals. References Stephen P. Abney. 1987. The English Noun Phrase in its Sentential Aspect. Ph.D. thesis, MIT. James K. Baker. 1979. Trainable grammars for speech recognition. In D. H. Klatt and J. J. Wolf, editors, Speech Communication Papers for the 97th Meeting of the Acoustical Society of America, pages 547–550. Eric Brill. 1993. Automatic grammar induction and parsing free text: A transformation-based approach. In ACL 31, pages 259–265. Glenn Carroll and Eugene Charniak. 1992. Two experiments on learning probabilistic dependency grammars from corpora. In Carl Weir, Stephen Abney, Ralph Grishman, and Ralph Weischedel, editors, Working Notes of the Workshop Statistically-Based NLP Techniques, pages 1–13. AAAI Press, Menlo Park, CA. Stanley F. Chen. 1995. Bayesian grammar induction for language modeling. In ACL 33, pages 228–235. Noam Chomsky. 1965. Aspects of the Theory of Syntax. MIT Press, Cambridge, MA. Alexander Clark. 2000. Inducing syntactic categories by context distribution clustering. In The Fourth Conference on Natural Language Learning. Alexander Clark. 2001. Unsupervised induction of stochastic contextfree grammars using distributional clustering. In The Fifth Conference on Natural Language Learning. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Jason Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In COLING 16, pages 340–345. Steven Paul Finch. 1993. Finding Structure in Language. Ph.D. thesis, University of Edinburgh. Dan Klein and Christopher D. Manning. 2002. A generative constituent-context model for improved grammar induction. In ACL 40, pages 128–135. Dan Klein and Christopher D. Manning. 2003. Fast exact inference with a factored model for natural language parsing. In Suzanna Becker, Sebastian Thrun, and Klaus Obermayer, editors, Advances in Neural Information Processing Systems 15, Cambridge, MA. MIT Press. Igor Aleksandrovich Mel′ˇcuk. 1988. Dependency Syntax: theory and practice. State University of New York Press, Albany, NY. Philip H. Miller. 1999. Strong Generative Capacity. CSLI Publications, Stanford, CA. Mark A. Paskin. 2002. Grammatical bigrams. In T. G. Dietterich, S. Becker, and Z. 
Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA. MIT Press. Fernando Pereira and Yves Schabes. 1992. Inside-outside reestimation from partially bracketed corpora. In ACL 30, pages 128–135. Hinrich Sch¨utze. 1995. Distributional part-of-speech tagging. In EACL 7, pages 141–148. Zach Solan, Eytan Ruppin, David Horn, and Shimon Edelman. 2003. Automatic acquisition and efficient representation of syntactic structures. In Suzanna Becker, Sebastian Thrun, and Klaus Obermayer, editors, Advances in Neural Information Processing Systems 15, Cambridge, MA. MIT Press. Andreas Stolcke and Stephen M. Omohundro. 1994. Inducing probabilistic grammars by Bayesian model merging. In Grammatical Inference and Applications: Proceedings of the Second International Colloquium on Grammatical Inference. Springer Verlag. Menno van Zaanen. 2000. ABL: Alignment-based learning. In COLING 18, pages 961–967. Deniz Yuret. 1998. Discovery of Linguistic Relations Using Lexical Attraction. Ph.D. thesis, MIT.
Annealing Techniques for Unsupervised Statistical Language Learning Noah A. Smith and Jason Eisner Department of Computer Science / Center for Language and Speech Processing Johns Hopkins University, Baltimore, MD 21218 USA {nasmith,jason}@cs.jhu.edu Abstract Exploiting unannotated natural language data is hard largely because unsupervised parameter estimation is hard. We describe deterministic annealing (Rose et al., 1990) as an appealing alternative to the ExpectationMaximization algorithm (Dempster et al., 1977). Seeking to avoid search error, DA begins by globally maximizing an easy concave function and maintains a local maximum as it gradually morphs the function into the desired non-concave likelihood function. Applying DA to parsing and tagging models is shown to be straightforward; significant improvements over EM are shown on a part-of-speech tagging task. We describe a variant, skewed DA, which can incorporate a good initializer when it is available, and show significant improvements over EM on a grammar induction task. 1 Introduction Unlabeled data remains a tantalizing potential resource for NLP researchers. Some tasks can thrive on a nearly pure diet of unlabeled data (Yarowsky, 1995; Collins and Singer, 1999; Cucerzan and Yarowsky, 2003). But for other tasks, such as machine translation (Brown et al., 1990), the chief merit of unlabeled data is simply that nothing else is available; unsupervised parameter estimation is notorious for achieving mediocre results. The standard starting point is the ExpectationMaximization (EM) algorithm (Dempster et al., 1977). EM iteratively adjusts a model’s parameters from an initial guess until it converges to a local maximum. Unfortunately, likelihood functions in practice are riddled with suboptimal local maxima (e.g., Charniak, 1993, ch. 7). Moreover, maximizing likelihood is not equivalent to maximizing task-defined accuracy (e.g., Merialdo, 1994). Here we focus on the search error problem. Assume that one has a model for which improving likelihood really will improve accuracy (e.g., at predicting hidden part-of-speech (POS) tags or parse trees). Hence, we seek methods that tend to locate mountaintops rather than hilltops of the likelihood function. Alternatively, we might want methods that find hilltops with other desirable properties.1 1Wang et al. (2003) suggest that one should seek a highIn §2 we review deterministic annealing (DA) and show how it generalizes the EM algorithm. §3 shows how DA can be used for parameter estimation for models of language structure that use dynamic programming to compute posteriors over hidden structure, such as hidden Markov models (HMMs) and stochastic context-free grammars (SCFGs). In §4 we apply DA to the problem of learning a trigram POS tagger without labeled data. We then describe how one of the received strengths of DA— its robustness to the initializing model parameters— can be a shortcoming in situations where the initial parameters carry a helpful bias. We present a solution to this problem in the form of a new algorithm, skewed deterministic annealing (SDA; §5). Finally we apply SDA to a grammar induction model and demonstrate significantly improved performance over EM (§6). §7 highlights future directions for this work. 2 Deterministic annealing Suppose our data consist of a pairs of random variables X and Y , where the value of X is observed and Y is hidden. For example, X might range over sentences in English and Y over POS tag sequences. 
We use X and Y to denote the sets of possible values of X and Y , respectively. We seek to build a model that assigns probabilities to each (x, y) ∈X×Y. Let ⃗x = {x1, x2, ..., xn} be a corpus of unlabeled examples. Assume the class of models is fixed (for example, we might consider only firstorder HMMs with s states, corresponding notionally to POS tags). Then the task is to find good parameters ⃗θ ∈RN for the model. The criterion most commonly used in building such models from unlabeled data is maximum likelihood (ML); we seek the parameters ⃗θ∗: argmax ⃗θ Pr(⃗x | ⃗θ) = argmax ⃗θ n Y i=1 X y∈Y Pr(xi, y | ⃗θ) (1) entropy hilltop. They argue that to account for partiallyobserved (unlabeled) data, one should choose the distribution with the highest Shannon entropy, subject to certain data-driven constraints. They show that this desirable distribution is one of the local maxima of likelihood. Whether high-entropy local maxima really predict test data better is an empirical question. Input: ⃗x, ⃗θ(0) Output: ⃗θ∗ i ←0 do: (E) ˜p(⃗y) ← Pr(⃗x,⃗y|⃗θ(i)) P ⃗y′∈Yn Pr(⃗x,⃗y′|⃗θ(i)), ∀⃗y (M) ⃗θ(i+1) ←argmax⃗θ E˜p(⃗Y ) h log Pr(⃗x, ⃗Y | ⃗θ) i i ←i + 1 until ⃗θ(i) ≈⃗θ(i−1) ⃗θ∗←⃗θ(i) Fig. 1: The EM algorithm. Each parameter θj corresponds to the conditional probability of a single model event, e.g., a state transition in an HMM or a rewrite in a PCFG. Many NLP models make it easy to maximize the likelihood of supervised training data: simply count the model events in the observed (xi, yi) pairs, and set the conditional probabilities θi to be proportional to the counts. In our unsupervised setting, the yi are unknown, but solving (1) is almost as easy provided that we can obtain the posterior distribution of Y given each xi (that is, Pr(y | xi) for each y ∈Y and each xi). The only difference is that we must now count the model events fractionally, using the expected number of occurrences of each (xi, y) pair. This intuition leads to the EM algorithm in Fig. 1. It is guaranteed that Pr(⃗x | ⃗θ(i+1)) ≥Pr(⃗x | ⃗θ(i)). For language-structure models like HMMs and SCFGs, efficient dynamic programming algorithms (forward-backward, inside-outside) are available to compute the distribution ˜p at the E step of Fig. 1 and use it at the M step. These algorithms run in polynomial time and space by structure-sharing the possible y (tag sequences or parse trees) for each xi, of which there may be exponentially many in the length of xi. Even so, the majority of time spent by EM for such models is on the E steps. In this paper, we can fairly compare the runtime of EM and other training procedures by counting the number of E steps they take on a given training set and model. 2.1 Generalizing EM Figure 2 shows the deterministic annealing (DA) algorithm derived from the framework of Rose et al. (1990). It is quite similar to EM.2 However, DA adds an outer loop that iteratively increases a value β, and computation of the posterior in the E step is modified to involve this β. 2Other expositions of DA abound; we have couched ours in data-modeling language. Readers interested in the Lagrangianbased derivations and analogies to statistical physics (including phase transitions and the role of β as the inverse of temperature in free-energy minimization) are referred to Rose (1998) for a thorough discussion. 
Input: ⃗x, ⃗θ(0), βmax >βmin >0, α>1 Output: ⃗θ∗ i ←0; β ←βmin while β ≤βmax: do: (E) ˜p(⃗y) ← Pr(⃗x,⃗y|⃗θ(i)) β P ⃗y′∈Yn Pr(⃗x,⃗y′|⃗θ(i)) β , ∀⃗y (M) ⃗θ(i+1) ←argmax⃗θ E˜p(⃗Y ) h log Pr(⃗x, ⃗Y | ⃗θ) i i ←i + 1 until ⃗θ(i) ≈⃗θ(i−1) β ←α · β end while ⃗θ∗←⃗θ(i) Fig. 2: The DA algorithm: a generalization of EM. When β = 1, DA’s inner loop will behave exactly like EM, computing ˜p at the E step by the same formula that EM uses. When β ≈0, ˜p will be close to a uniform distribution over the hidden variable ⃗y, since each numerator Pr(⃗x, ⃗y | ⃗θ)β ≈1. At such β-values, DA effectively ignores the current parameters θ when choosing the posterior ˜p and the new parameters. Finally, as β →+∞, ˜p tends to place nearly all of the probability mass on the single most likely ⃗y. This winner-take-all situation is equivalent to the “Viterbi” variant of the EM algorithm. 2.2 Gradated difficulty In both the EM and DA algorithms, the E step selects a posterior ˜p over the hidden variable ⃗Y and the M step selects parameters ⃗θ. Neal and Hinton (1998) show how the EM algorithm can be viewed as optimizing a single objective function over both ⃗θ and ˜p. DA can also be seen this way; DA’s objective function at a given β is F  ⃗θ, ˜p, β  = 1 β H(˜p) + E˜p(⃗Y ) h log Pr(⃗x, ⃗Y | ⃗θ) i (2) The EM version simply sets β = 1. A complete derivation is not difficult but is too lengthy to give here; it is a straightforward extension of that given by Neal and Hinton for EM. It is clear that the value of β allows us to manipulate the relative importance of the two terms when maximizing F. When β is close to 0, only the H term matters. The H term is the Shannon entropy of the posterior distribution ˜p, which is known to be concave in ˜p. Maximizing it is simple: set all x to be equiprobable (the uniform distribution). Therefore a sufficiently small β drives up the importance of H relative to the other term, and the entire problem becomes concave with a single global maximum to which we expect to converge. In gradually increasing β from near 0 to 1, we start out by solving an easy concave maximization problem and use the result to initialize the next maximization problem, which is slightly more difficult (i.e., less concave). This continues, with the solution to each problem in the series being used to initialize the subsequent problem. When β reaches 1, DA behaves just like EM. Since the objective function is continuous in β where β > 0, we can visualize DA as gradually morphing the easy concave objective function into the one we really care about (likelihood); we hope to “ride the maximum” as β moves toward 1. DA guarantees iterative improvement of the objective function (see Ueda and Nakano (1998) for proofs). But it does not guarantee convergence to a global maximum, or even to a better local maximum than EM will find, even with extremely slow β-raising. A new mountain on the surface of the objective function could arise at any stage that is preferable to the one that we will ultimately find. To run DA, we must choose a few control parameters. In this paper we set βmax = 1 so that DA will approach EM and finish at a local maximum of likelihood. βmin and the β-increase factor α can be set high for speed, but at a risk of introducing local maxima too quickly for DA to work as intended. (Note that a “fast” schedule that tries only a few β values is not as fast as one might expect, since it will generally take longer to converge at each β value.) 
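Putting the pieces together, the outer structure of DA can be written down compactly. The sketch below is a schematic, hypothetical rendering of Figure 2, not the authors' implementation: `e_step(theta, beta)` is assumed to return posteriors computed from complete-data likelihoods raised to the power β, `m_step` re-estimates parameters from those posteriors, and for brevity convergence is checked on the data log-likelihood rather than on the change in parameters.

```python
def deterministic_annealing(theta, e_step, m_step, log_like,
                            beta_min=0.1, beta_max=1.0, alpha=1.5, tol=1e-4):
    """Schematic DA driver.  e_step, m_step and log_like are model-specific
    callables supplied by the caller (assumed interfaces, not a real API)."""
    beta = beta_min
    while True:
        prev = float("-inf")
        while True:                           # inner loop: EM-style updates at this beta
            posteriors = e_step(theta, beta)  # ~ Pr(x, y | theta)**beta, renormalized
            theta = m_step(posteriors)
            cur = log_like(theta)
            if cur - prev < tol:
                break
            prev = cur
        if beta >= beta_max:                  # last rounds run at beta_max (= plain EM if 1.0)
            return theta
        beta = min(beta * alpha, beta_max)    # raise beta gradually
```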
To conclude the theoretical discussion of DA, we review its desirable properties. DA is robust to initial parameters, since when β is close to 0 the objective hardly depends on ⃗θ. DA gradually increases the difficulty of search, which may lead to the avoidance of some local optima. By modifying the annealing schedule, we can change the runtime of the DA algorithm. DA is almost exactly like EM in implementation, requiring only a slight modification to the E step (see §3) and an additional outer loop. 2.3 Prior work DA was originally described as an algorithm for clustering data in RN (Rose et al., 1990). Its predecessor, simulated annealing, modifies the objective function during search by applying random perturbations of gradually decreasing size (Kirkpatrick et al., 1983). Deterministic annealing moves the randomness “inside” the objective function by taking expectations. DA has since been applied to many problems (Rose, 1998); we describe two key applications in language and speech processing. Pereira, Tishby, and Lee (1993) used DA for soft hierarchical clustering of English nouns, based on the verbs that select them as direct objects. In their case, when β is close to 0, each noun is fuzzily placed in each cluster so that Pr(cluster | noun) is nearly uniform. On the M step, this results in clusters that are almost exactly identical; there is one effective cluster. As β is increased, it becomes increasingly attractive for the cluster centroids to move apart, or “split” into two groups (two effective clusters), and eventually they do so. Continuing to increase β yields a hierarchical clustering through repeated splits. Pereira et al. describe the tradeoff given through β as a control on the locality of influence of each noun on the cluster centroids, so that as β is raised, each noun exerts less influence on more distant centroids and more on the nearest centroids. DA has also been applied in speech recognition. Rao and Rose (2001) used DA for supervised discriminative training of HMMs. Their goal was to optimize not likelihood but classification error rate, a difficult objective function that is piecewiseconstant (hence not differentiable everywhere) and riddled with shallow local minima. Rao and Rose applied DA,3 moving from training a nearly uniform classifier with a concave cost surface (β ≈0) toward the desired deterministic classifier (β → +∞). They reported substantial gains in spoken letter recognition accuracy over both a ML-trained classifier and a localized error-rate optimizer. Brown et al. (1990) gradually increased learning difficulty using a series of increasingly complex models for machine translation. Their training algorithm began by running an EM approximation on the simplest model, then used the result to initialize the next, more complex model (which had greater predictive power and many more parameters), and so on. Whereas DA provides gradated difficulty in parameter search, their learning method involves gradated difficulty among classes of models. The two are orthogonal and could be used together. 3 DA with dynamic programming We turn now to the practical use of deterministic annealing in NLP. Readers familiar with the EM algorithm will note that, for typical stochastic models of language structure (e.g., HMMs and SCFGs), the bulk of the computational effort is required by the E step, which is accomplished by a two-pass dynamic programming (DP) algorithm (like the forward-backward algorithm). 
The M step for these models normalizes the posterior expected counts from the E step to get probabilities.4

Running DA for such models is quite simple and requires no modifications to the usual DP algorithms. The only change to make is in the values of the parameters passed to the DP algorithm: simply replace each θj by θj^β. For a given x, the forward pass of the DP computes (in a dense representation) Pr(y | x, ⃗θ) for all y. Each Pr(y | x, ⃗θ) is a product of some of the θj (each θj is multiplied in once for each time its corresponding model event is present in (x, y)). Raising the θj to a power will also raise their product to that power, so the forward pass will compute Pr(y | x, ⃗θ)^β when given ⃗θ^β as parameter values. The backward pass normalizes to the sum; in this case it is the sum of the Pr(y | x, ⃗θ)^β, and we have the E step described in Figure 2. We therefore expect an EM iteration of DA to take the same amount of time as a normal EM iteration.5

4 Part-of-speech tagging

We turn now to the task of inducing a trigram POS tagging model (second-order HMM) from an unlabeled corpus. This experiment is inspired by the experiments in Merialdo (1994). As in that work, complete knowledge of the tagging dictionary is assumed. The task is to find the trigram transition probabilities Pr(tag_i | tag_{i−1}, tag_{i−2}) and emission probabilities Pr(word_i | tag_i).

Merialdo's key result:6 If some labeled data were used to initialize the parameters (by taking the ML estimate), then it was not helpful to improve the model's likelihood through EM iterations, because this almost always hurt the accuracy of the model's Viterbi tagging on a held-out test set. If only a small amount of labeled data was used (200 sentences), then some accuracy improvement was possible using EM, but only for a few iterations. When no labeled data were used, EM was able to improve the accuracy of the tagger, and this improvement continued in the long term.

Our replication of Merialdo's experiment used the Wall Street Journal portion of the Penn Treebank corpus, reserving a randomly selected 2,000 sentences (48,526 words) for testing. The remaining 47,208 sentences (1,125,240 words) were used in training, without any tags. The tagging dictionary was constructed using the entire corpus (as done by Merialdo). To initialize, the conditional transition and emission distributions in the HMM were set to uniform with slight perturbation. Every distribution was smoothed using add-0.1 smoothing (at every M step).

Fig. 3: Learning curves for EM and DA (% correct ambiguous test tags plotted against EM iterations). Steps in DA's curve correspond to β-changes. The shape of the DA curve is partly a function of the annealing schedule, which only gradually (and in steps) allows the parameters to move away from the uniform distribution.

3 With an M step modified for their objective function: it improved expected accuracy under ˜p, not expected log-likelihood.
4 That is, assuming the usual generative parameterization of such models; if we generalize to Markov random fields (also known as log-linear or maximum entropy models) the M step, while still concave, might entail an auxiliary optimization routine such as iterative scaling or a gradient-based method.
5 With one caveat: less pruning may be appropriate because probability mass is spread more uniformly over different reconstructions of the hidden data. This paper uses no pruning.
6 Similar results were found by Elworthy (1994).
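Returning briefly to §3, the parameter-exponentiation trick can be written as a thin wrapper around whatever DP routine the model already uses. In the sketch below, forward_backward stands in for that routine (forward-backward, inside-outside, etc.); its exact signature and return values are our assumption, not something prescribed by the paper.

def annealed_e_step(x, theta, beta, forward_backward):
    """E step of DA for a dynamic-programming model (Fig. 2).

    theta maps each model event to its probability theta_j.  Raising every
    theta_j to the power beta makes the unchanged DP routine compute
    quantities proportional to Pr(y | x, theta) ** beta, which, once
    normalized, is exactly the DA posterior.
    """
    annealed = {event: p ** beta for event, p in theta.items()}
    # forward_backward(x, params) -> (expected_counts, log_normalizer);
    # this signature is assumed for illustration only.
    return forward_backward(x, annealed)

An ordinary EM iteration is recovered simply by calling it with beta = 1.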
The criterion for convergence is that the relative increase in the objective function between two iterations fall below 10^-9.

4.1 Experiment

In the DA condition, we set βmin = 0.0001, βmax = 1, and α = 1.2. Results for the completely unsupervised condition (no labeled data) are shown in Figure 3 and Table 1. Accuracy was nearly monotonic: the final model is approximately the most accurate. DA happily obtained a 10% reduction in tag error rate on training data, and an 11% reduction on test data. On the other hand, it did not manage to improve likelihood over EM. So was the accuracy gain mere luck? Perhaps not. DA may be more resistant to overfitting, because it may favor models whose posteriors ˜p have high entropy. At least in this experiment, its initial bias toward such models carried over to the final learned model.7 In other words, the higher-entropy local maximum found by DA, in this case, explained the observed data almost as well without overcommitting to particular tag sequences.

The maximum entropy and latent maximum entropy principles (Wang et al., 2003, discussed in footnote 1) are best justified as ways to avoid overfitting. For a supervised tagger, the maximum entropy principle prefers a conditional model Pr(⃗y | ⃗x) that is maximally unsure about what tag sequence ⃗y to apply to the training word sequence ⃗x (but expects the same feature counts as the true ⃗y). Such a model is hoped to generalize better to unsupervised data. We can make the same argument. But in our case, the split between supervised/unsupervised data is not the split between training/test data. Our supervised data are, roughly, the fragments of the training corpus that are unambiguously tagged thanks to the tag dictionary.8 The EM model may overfit some parameters to these fragments. The higher-entropy DA model may be less likely to overfit, allowing it to do better on the unsupervised data—i.e., the rest of the training corpus and the entire test corpus.

Table 1: EM vs. DA on unsupervised trigram POS tagging, using a tag dictionary.

Model | E steps | final training cross-entropy (bits/word) | final test cross-entropy (bits/word) | % correct training tags (all / ambiguous) | % correct test tags (all / ambiguous)
EM    | 279     | 9.136 | 9.321 | 82.04 / 66.61 | 82.08 / 66.63
DA    | 1200    | 9.138 | 9.325 | 83.85 / 70.02 | 84.00 / 70.25

Each of the accuracy results is significant when accuracy is compared at either the word level or the sentence level. (Significance at p < 10^-6 under a binomial sign test in each case. E.g., on the test set, the DA model correctly tagged 1,652 words that EM's model missed while EM correctly tagged 726 words that DA missed. Similarly, the DA model had higher accuracy on 850 sentences, while EM had higher accuracy on only 287. These differences are extremely unlikely to occur due to chance.) The differences in cross-entropy, compared by sentence, were significant in the training set but not the test set (p < 0.01 under a binomial sign test). Recall that lower cross-entropy means higher likelihood.

7 We computed the entropy over possible tags for each word in the test corpus, given the sentence the word occurs in. On average, the DA model had 0.082 bits per tag, while EM had only 0.057 bits per tag, a statistically significant difference (p < 10^-6) under a binomial sign test on word tokens.
8 Without the tag dictionary, our learners would treat the tag names as interchangeable and could not reasonably be evaluated on gold-standard accuracy.
We conclude that DA has settled on a local maximum of the likelihood function that (unsurprisingly) corresponds well with the entropy criterion and, perhaps as a result, does better on accuracy.

4.2 Significance

Seeking to determine how well this result generalized, we randomly split the corpus into ten equally-sized, nonoverlapping parts. EM and DA were run on each portion;9 the results were inconclusive. DA achieved better test accuracy than EM on three of ten trials, better training likelihood on five trials, and better test likelihood on all ten trials.10 Certainly decreasing the amount of data by an order of magnitude results in increased variance of the performance of any algorithm—so ten small corpora were not enough to determine whether to expect an improvement from DA more often than not.

9 The smoothing parameters were scaled down so as to be proportional to the corpus size.
10 It is also worth noting that runtimes were longer with the 10%-sized corpora than with the full corpus (EM took 1.5 times as many E steps; DA, 1.3 times). Perhaps the algorithms traveled farther to find a local maximum. We know of no study of the effect of unlabeled training set size on the likelihood surface, but suggest two issues for future exploration. Larger datasets contain more idiosyncrasies but provide a stronger overall signal. Hence, we might expect them to yield a bumpier likelihood surface whose local maxima are more numerous but also differ more noticeably in height. Both these tendencies of larger datasets would in theory increase DA's advantage over EM.

4.3 Mixing labeled and unlabeled data (I)

In the other conditions described by Merialdo, varying amounts of labeled data (ranging from 100 sentences to nearly half of the corpus) were used to initialize the parameters ⃗θ, which were then trained using EM on the remaining unlabeled data. Only in the case where 100 labeled examples were used, and only for a few iterations, did EM improve the accuracy of this model. We replicated these experiments and compared EM with DA; DA damaged the models even more than EM. This is unsurprising; as noted before, DA effectively ignores the initial parameters ⃗θ^(0). Therefore, even if initializing with a model trained on small amounts of labeled data had helped EM, DA would have missed out on this benefit. In the next section we address this issue.

5 Skewed deterministic annealing

The EM algorithm is quite sensitive to the initial parameters ⃗θ^(0). We touted DA's insensitivity to those parameters as an advantage, but in scenarios where well-chosen initial parameters can be provided (as in §4.3), we wish for DA to be able to exploit them. In particular, there are at least two cases where "good" initializers might be known. One is the case explored by Merialdo, where some labeled data were available to build an initial model. The other is a situation where a good distribution is known over the labels y; we will see an example of this in §6. We wish to find a way to incorporate an initializer into DA and still reap the benefit of gradated difficulty. To see how this will come about, consider again the E step for DA, which computes, for all y:

\tilde{p}(y) \leftarrow \frac{\Pr(x, y \mid \vec{\theta})^{\beta}}{Z'(\vec{\theta}, \beta)} = \frac{\Pr(x, y \mid \vec{\theta})^{\beta} \, u(y)^{1-\beta}}{Z(\vec{\theta}, \beta)}

where u is the uniform distribution over Y, and Z′(⃗θ, β) and Z(⃗θ, β) = Z′(⃗θ, β) · u(y)^{1−β} are normalizing terms. (Note that Z(⃗θ, β) does not depend on y because u(y) is constant with respect to y.)
Of course, when β is close to 0, DA chooses the uniform posterior because it has the highest entropy. Seen this way, DA is interpolating in the log domain between two posteriors: the one given by ⃗θ and the uniform one u; the interpolation coefficient is β. To generalize DA, we will replace the uniform u with another posterior, the "skew" posterior ´p, which is an input to the algorithm. This posterior might be specified directly, as it will be in §6, or it might be computed using an M step from some good initial ⃗θ^(0). The skewed DA (SDA) E step is given by:

\tilde{p}(y) \leftarrow \frac{1}{Z(\vec{\theta}, \beta)} \Pr(x, y \mid \vec{\theta})^{\beta} \, \acute{p}(y)^{1-\beta}    (3)

When β is close to 0, the E step will choose ˜p to be very close to ´p. With small β, SDA is a "cautious" EM variant that is wary of moving too far from the initializing posterior ´p (or, equivalently, the initial parameters ⃗θ^(0)). As β approaches 1, the effect of ´p will diminish, and when β = 1, the algorithm becomes identical to EM. The overall objective (matching (2) except for the final term) is:

F'(\vec{\theta}, \tilde{p}, \beta) = \frac{1}{\beta} H(\tilde{p}) + E_{\tilde{p}(\vec{Y})}\left[\log \Pr(\vec{x}, \vec{Y} \mid \vec{\theta})\right] + \frac{1-\beta}{\beta} E_{\tilde{p}(\vec{Y})}\left[\log \acute{p}(\vec{Y})\right]

Mixing labeled and unlabeled data (II)

Returning to Merialdo's mixed conditions (§4.3), we found that SDA repaired the damage done by DA but did not offer any benefit over EM. Its behavior in the 100-labeled-sentence condition was similar to EM's, with a slightly but not significantly higher peak in training set accuracy. In the other conditions, SDA behaved like EM, with steady degradation of accuracy as training proceeded. It ultimately damaged performance only as much as EM did or did slightly better than EM (but still hurt). This is unsurprising: Merialdo's result demonstrated that ML and maximizing accuracy are generally not the same; the EM algorithm consistently degraded the accuracy of his supervised models. SDA is simply another search algorithm with the same criterion as EM. SDA did do what it was expected to do—it used the initializer, repairing DA damage.

6 Grammar induction

We turn next to the problem of statistical grammar induction: inducing parse trees over unlabeled text. An excellent recent result is by Klein and Manning (2002). The constituent-context model (CCM) they present is a generative, deficient channel model of POS tag strings given binary tree bracketings. We first review the model and describe a small modification that reduces the deficiency, then compare both models under EM and DA.

6.1 Constituent-context model

Let (x, y) be a (tag sequence, binary tree) pair. x_i^j denotes the subsequence of x from the ith to the jth word. Let y_{i,j} be 1 if the yield from i to j is a constituent in the tree y and 0 if it is not. The CCM gives to a pair (x, y) the following probability:

\Pr(x, y) = \Pr(y) \cdot \prod_{1 \le i \le j \le |x|} \left[ \psi\left(x_i^j \mid y_{i,j}\right) \cdot \chi\left(x_{i-1}, x_{j+1} \mid y_{i,j}\right) \right]

where ψ is a conditional distribution over possible tag-sequence yields (given whether the yield is a constituent or not) and χ is a conditional distribution over possible contexts of one tag on either side of the yield (given whether the yield is a constituent or not). There are therefore four distributions to be estimated; Pr(y) is taken to be uniform. The model is initialized using expected counts of the constituent and context features given that all the trees are generated according to a random-split model.11 The CCM generates each tag not once but O(n^2) times, once by every constituent or non-constituent span that dominates it.
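To make the formula above concrete, the sketch below scores one (tag sequence, bracketing) pair under model 1, given ψ and χ probability tables. The table keys, the boundary symbol, and the omission of the constant uniform Pr(y) factor are our own choices; model 2, introduced next, would only add the span length j − i + 1 to the ψ key.

import math

def ccm_log_prob(tags, constituent_spans, psi, chi, boundary="#"):
    """Log Pr(x, y) for the CCM (model 1), up to the uniform Pr(y) term.

    tags: list of POS tags x_1..x_n.
    constituent_spans: set of (i, j) pairs (1-based, inclusive) that are
        constituents in the bracketing y; every other span has y_ij = 0.
    psi[(yield, is_constituent)] and chi[(left, right, is_constituent)]
        are probability tables (hypothetical here; in practice they are the
        parameters re-estimated at each M step).
    """
    n = len(tags)
    logp = 0.0
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            is_const = (i, j) in constituent_spans
            span_yield = tuple(tags[i - 1:j])
            left = tags[i - 2] if i > 1 else boundary
            right = tags[j] if j < n else boundary
            logp += math.log(psi[(span_yield, is_const)])
            logp += math.log(chi[(left, right, is_const)])
    return logp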
We suggest the following modification to alleviate some of the deficiency:

\Pr(x, y) = \Pr(y) \cdot \prod_{1 \le i \le j \le |x|} \left[ \psi\left(x_i^j \mid y_{i,j},\, j - i + 1\right) \cdot \chi\left(x_{i-1}, x_{j+1} \mid y_{i,j}\right) \right]

The change is to condition the yield feature ψ on the length of the yield. This decreases deficiency by disallowing, for example, a constituent over a four-tag yield to generate a seven-tag sequence. It also decreases inter-parameter dependence by breaking the constituent (and non-constituent) distributions into a separate bin for each possible constituent length. We will refer to Klein and Manning's CCM and our version as models 1 and 2, respectively.

6.2 Experiment

We ran experiments using both CCM models on the tag sequences of length ten or less in the Wall Street Journal Penn Treebank corpus, after extracting punctuation. This corpus consists of 7,519 sentences (52,837 tag tokens, 38 types). We report PARSEVAL scores averaged by constituent (rather than by sentence), and do not give the learner credit for getting full sentences or single tags as constituents.12 Because the E step for this model is computationally intensive, we set the DA parameters at βmin = 0.01, α = 1.5 so that fewer E steps would be necessary.13 The convergence criterion was relative improvement < 10^-9 in the objective.

The results are shown in Table 2. The first point to notice is that a uniform initializer is a bad idea, as Klein and Manning predicted. All conditions but one find better structure when initialized with Klein and Manning's random-split model. (The exception is SDA on model 1; possibly the high deficiency of model 1 interacts poorly with SDA's search in some way.) Next we note that with the random-split initializer, our model 2 is a bit better than model 1 on PARSEVAL measures and converges more quickly. Every instance of DA or SDA achieved higher log-likelihood than the corresponding EM condition.

Table 2: The two CCM models, trained with two unsupervised algorithms, each with two initializers.

Model               | E steps | cross-entropy (bits/tag) | UR    | UP    | F     | CB
CCM 1, EM (uniform) | 146     | 103.1654                 | 61.20 | 45.62 | 52.27 | 1.69
CCM 1, DA           | 403     | 103.1542                 | 55.13 | 41.10 | 47.09 | 1.91
CCM 1, EM (split)   | 124     | 103.1951                 | 78.14 | 58.24 | 66.74 | 0.98
CCM 1, SDA (split)  | 339     | 103.1651                 | 62.71 | 46.75 | 53.57 | 1.62
CCM 2, EM (uniform) | 26      | 84.8106                  | 57.60 | 42.94 | 49.20 | 1.86
CCM 2, DA           | 331     | 84.7899                  | 40.81 | 30.42 | 34.86 | 2.66
CCM 2, EM (split)   | 44      | 84.8049                  | 78.56 | 58.56 | 67.10 | 0.98
CCM 2, SDA (split)  | 290     | 84.7940                  | 79.64 | 59.37 | 68.03 | 0.93

Note that DA is equivalent to SDA initialized with a uniform distribution. The third line corresponds to the setup reported by Klein and Manning (2002). UR is unlabeled recall, UP is unlabeled precision, F is their harmonic mean, and CB is the average number of crossing brackets per sentence. All evaluation is on the same data used for unsupervised learning (i.e., there is no training/test split). The high cross-entropy values arise from the deficiency of models 1 and 2, and are not comparable across models.

11 We refer readers to Klein and Manning (2002) or Cover and Thomas (1991, p. 72) for details; computing expected counts for a sentence is a closed-form operation. Klein and Manning's argument for this initialization step is that it is less biased toward balanced trees than the uniform model used during learning; we also found that it works far better in practice.
12 This is why the CCM 1 performance reported here differs from Klein and Manning's; our implementation of the EM condition gave virtually identical results under either evaluation scheme (D. Klein, personal communication).
13 A pilot study got very similar results for βmin = 10^-6.
This is what we hoped to gain from annealing: better local maxima. In the case of model 2 with the random-split initializer, SDA significantly outperformed EM (comparing both matches and crossing brackets per sentence under a binomial sign test, p < 10^-6); we see a > 5% reduction in average crossing brackets per sentence. Thus, our strategy of using DA but modifying it to accept an initializer worked as desired in this case, yielding our best overall performance. The systematic results we describe next suggest that these patterns persist across different training sets in this domain.

6.3 Significance

The difficulty we experienced in finding generalization to small datasets, discussed in §4.2, was apparent here as well. For 10-way and 3-way random, nonoverlapping splits of the dataset, we did not have consistent results in favor of either EM or SDA. Interestingly, we found that training model 2 (using EM or SDA) on 10% of the corpus resulted on average in models that performed nearly as well on their respective training sets as the full-corpus condition did on its training set; see Table 3. In addition, SDA sometimes performed as well as EM under model 1. For a random two-way split, EM and SDA converged to almost identical solutions on one of the sub-corpora, and SDA outperformed EM significantly on the other (on model 2).

In order to get multiple points of comparison of EM and SDA on this task with a larger amount of data, we jack-knifed the WSJ-10 corpus by splitting it randomly into ten equally-sized nonoverlapping parts and then training models on the corpus with each of the ten sub-corpora excluded.14 These trials are not independent of each other; any two of the sub-corpora have 8/9 of their training data in common. Aggregate results are shown in Table 3. Using model 2, SDA always outperformed EM, and in 8 of 10 cases the difference was significant when comparing matching constituents per sentence (7 of 10 when comparing crossing constituents).15 The variance of SDA was far less than that of EM; SDA not only always performed better with model 2, but its performance was more consistent over the trials.

Table 3: The mean µ and standard deviation σ of F-measure performance for 10 trials using 10% of the corpus and 10 jackknifed trials using 90% of the corpus.

Model      | 10% corpus: µF | 10% corpus: σF | 90% corpus: µF | 90% corpus: σF
CCM 1, EM  | 65.00 | 1.091 | 66.12 | 0.6643
CCM 1, SDA | 63.00 | 4.689 | 53.53 | 0.2135
CCM 2, EM  | 66.74 | 1.402 | 67.24 | 0.7077
CCM 2, SDA | 66.77 | 1.034 | 68.07 | 0.1193

We conclude this experimental discussion by cautioning that both CCM models are highly deficient models, and it is unknown how well they generalize to corpora of longer sentences, other languages, or corpora of words (rather than POS tags).

7 Future work

There are a number of interesting directions for future work. Noting the simplicity of the DA algorithm, we hope that current devotees of EM will run comparisons of their models with DA (or SDA). Not only might this improve performance of existing systems, it will contribute to the general understanding of the likelihood surface for a variety of problems (e.g., this paper has raised the question of how factors like dataset size and model deficiency affect the likelihood surface).

14 Note that this is not a cross-validation experiment; results are reported on the unlabeled training set, and the excluded sub-corpus remains unused.
15 Binomial sign test, with significance defined as p < 0.05, though all significant results had p < 0.001.
DA provides a very natural way to gradually introduce complexity to clustering models (Rose et al., 1990; Pereira et al., 1993). This comes about by manipulating the β parameter; as it rises, the number of effective clusters is allowed to increase. An open question is whether the analogues of “clusters” in tagging and parsing models—tag symbols and grammatical categories, respectively—might be treated in a similar manner under DA. For instance, we might begin with the CCM, the original formulation of which posits only one distinction about constituency (whether a span is a constituent or not) and gradually allow splits in constituent-label space, resulting in multiple grammatical categories that, we hope, arise naturally from the data. In this paper, we used βmax = 1. It would be interesting to explore the effect on accuracy of “quenching,” a phase at the end of optimization that rapidly raises β from 1 to the winner-take-all (Viterbi) variant at β = +∞. Finally, certain practical speedups may be possible. For instance, increasing βmin and α, as noted in §2.2, will vary the number of E steps required for convergence. We suggested that the change might result in slower or faster convergence; optimizing the schedule using an online algorithm (or determining precisely how these parameters affect the schedule in practice) may prove beneficial. Another possibility is to relax the convergence criterion for earlier β values, requiring fewer E steps before increasing β, or even raising β slightly after every E step (collapsing the outer and inner loops). 8 Conclusion We have reviewed the DA algorithm, describing it as a generalization of EM with certain desirable properties, most notably the gradual increase of difficulty of learning and the ease of implementation for NLP models. We have shown how DA can be used to improve the accuracy of a trigram POS tagger learned from an unlabeled corpus. We described a potential shortcoming of DA for NLP applications—its failure to exploit good initializers—and then described a novel algorithm, skewed DA, that solves this problem. Finally, we reported significant improvements to a state-of-the-art grammar induction model using SDA and a slight modification to the parameterization of that model. These results support the case that annealing techniques in some cases offer performance gains over the standard EM approach to learning from unlabeled corpora, particularly with large corpora. Acknowledgements This work was supported by a fellowship to the first author from the Fannie and John Hertz Foundation, and by an NSF ITR grant to the second author. The views expressed are not necessarily endorsed by the sponsors. The authors thank Shankar Kumar, Charles Schafer, David Smith, and Roy Tromble for helpful comments and discussions; three ACL reviewers for advice that improved the paper; Eric Goldlust for keeping the Dyna compiler (Eisner et al., 2004) up to date with the demands made by this work; and Dan Klein for sharing details of his CCM implementation. References P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79–85. E. Charniak. 1993. Statistical Language Learning. MIT Press. M. Collins and Y. Singer. 1999. Unsupervised models for named-entity classification. In Proc. of EMNLP. T. M. Cover and J. A. Thomas. 1991. Elements of Information Theory. John Wiley and Sons. S. Cucerzan and D. Yarowsky. 2003. 
Minimally supervised induction of grammatical gender. In Proc. of HLT/NAACL. A. Dempster, N. Laird, and D. Rubin. 1977. Maximum likelihood estimation from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1–38. J. Eisner, E. Goldlust, and N. A. Smith. 2004. Dyna: A declarative language for implementing dynamic programs. In Proc. of ACL (companion volume). D. Elworthy. 1994. Does Baum-Welch re-estimation help taggers? In Proc. of ANLP. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. 1983. Optimization by simulated annealing. Science, 220:671–680. D. Klein and C. D. Manning. 2002. A generative constituentcontext model for grammar induction. In Proc. of ACL. B. Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20(2):155–72. R. Neal and G. Hinton. 1998. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer. F. C. N. Pereira, N. Tishby, and L. Lee. 1993. Distributional clustering of English words. In Proc. of ACL. A. Rao and K. Rose. 2001. Deterministically annealed design of Hidden Markov Model speech recognizers. IEEE Transactions on Speech and Audio Processing, 9(2):111–126. K. Rose, E. Gurewitz, and G. C. Fox. 1990. Statistical mechanics and phase transitions in clustering. Physical Review Letters, 65(8):945–948. K. Rose. 1998. Deterministic annealing for clustering, compression, classification, regression, and related optimization problems. Proc. of the IEEE, 86(11):2210–2239. N. Ueda and R. Nakano. 1998. Deterministic annealing EM algorithm. Neural Networks, 11(2):271–282. S. Wang, D. Schuurmans, and Y. Zhao. 2003. The latent maximum entropy principle. In review. D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proc. of ACL.
Multi-Engine Machine Translation with Voted Language Model Tadashi Nomoto National Institute of Japanese Literature 1-16-10 Yutaka Shinagawa Tokyo 142-8585 Japan [email protected] Abstract The paper describes a particular approach to multiengine machine translation (MEMT), where we make use of voted language models to selectively combine translation outputs from multiple off-theshelf MT systems. Experiments are done using large corpora from three distinct domains. The study found that the use of voted language models leads to an improved performance of MEMT systems. 1 Introduction As the Internet grows, an increasing number of commercial MT systems are getting on line ready to serve anyone anywhere on the earth. An interesting question we might ponder is whether it is not possible to aggregate the vast number of MT systems available on the Internet into one super MT which surpasses in performance any of those MTs that comprise the system. And this is what we will be concerned with in the paper, with somewhat watered-down settings. People in the speech community pursued the idea of combining off-the-shelf ASRs (automatic speech recognizers) into a super ASR for some time, and found that the idea works (Fiscus, 1997; Schwenk and Gauvain, 2000; Utsuro et al., 2003). In IR (information retrieval), we find some efforts going (under the name of distributed IR or meta-search) to selectively fuse outputs from multiple search engines on the Internet (Callan et al., 2003). So it would be curious to see whether we could do the same with MTs. Now back in machine translation, we do find some work addressing such concern: Frederking and Nirenburg (1994) develop a multi-engine MT or MEMT architecture which operates by combining outputs from three different engines based on the knowledge it has about inner workings of each of the component engines. Brown and Frederking (1995) is a continuation of Frederking and Nirenburg (1994) with an addition of a ngrambased mechanism for a candidate selection. Nomoto (2003), however, explores a different line of research whose goal is to combine black box MTs using statistical confidence models. Similar efforts are also found in Akiba et al. (2002). The present paper builds on the prior work by Nomoto (2003). We start by reviewing his approach, and go on to demonstrate that it could be improved by capitalizing on dependence of the MEMT model there on language model. Throughout the paper, we refer to commercial black box MT systems as OTS (off-the-shelf) systems, or more simply, OTSs. 2 Confidence Models We take it here that the business of MEMT is about choosing among translation outputs from multiple MT systems, whether black box or not, for each input text. Therefore the question we want to address is, how do we go about choosing among MT outputs so that we end up with a best one? What we propose to do is to use some confidence models for translations generated by OTSs, and let them decide which one we should pick. We essentially work along the lines of Nomoto (2003). We review below some of the models proposed there, together with some motivation behind them. Confidence models he proposes come in two varieties: Fluency based model (FLM) and Alignment based model (ALM), which is actually an extension of FLM. Now suppose we have an English sentence e and its Japanese translation j generated by some OTS. (One note here: throughout the paper we work on English to Japanese translation.) 
FLM dictates that the quality of j as a translation of e be determined by:

FLM(e, j) = \log P_l(j)    (1)

P_l(j) is the probability of j under a particular language model (LM) l.1 What FLM says is that the quality of a translation essentially depends on its log likelihood (or fluency) and has nothing to do with what it is a translation of.

ALM extends FLM to include some information on fidelity. That is, it pays some attention to how faithful a translation is to its source text. ALM does this by using alignment models from the statistical machine translation literature (Brown et al., 1993). Here is what ALM looks like:

ALM(e, j) = \log \left( P_l(j) \, Q(e \mid j) \right)

Q(e | j) is the probability estimated using IBM Model 1. ALM takes into account the fluency of a translation output (given by P_l(j)) and the degree of association between e and j (given by Q(e | j)), which are in fact the two features generally agreed in the MT literature to be most relevant for assessing the quality of translations (White, 2001).

One problem with FLM and ALM is that they fail to take into account the reliability of an OTS system. As Nomoto (2003) argues, it is reasonable to believe that some MT systems could inherently be more prone to error, and the outputs they produce tend to be of lower quality than those from other systems, no matter what the outputs' fluency or translation probability may be. ALM and FLM work solely on statistical information that can be gathered from source and target sentences, dismissing any operational bias that an OTS might have on a particular task.

Nomoto (2003) responds to the problem by introducing a particular regression model known as Support Vector regression (SVR), which enables him to exploit bias in the performance of OTSs. What SVR is intended to do is to modify the confidence scores FLM and ALM produce for MT outputs in such a way that they more accurately reflect their independent evaluation involving human translations or judgments. SVR is a multi-dimensional regressor, and works pretty much like its enormously popular counterpart, Support Vector classification, except that we are going to work with real numbers for target values and construct the margin using Vapnik's ε-insensitive loss function (Schölkopf et al., 1998). SVR looks something like this:

h(⃗x) = ⃗w · ⃗x + b

with input data ⃗x = (x1, . . . , xm) and the corresponding weights ⃗w = (w1, . . . , wm); '⃗x · ⃗w' denotes the inner product of ⃗x and ⃗w. ⃗x could be a set of features associated with e and j. The parameters ⃗w and b are determined by SVR.

It is straightforward to extend ALM and FLM with SVR, which merely consists of plugging in either model as an input variable in the regressor. This would give us the following two SVR models with m = 1:

Regressive FLM (rFLM):  h(FLM(e, j)) = w1 · FLM(e, j) + b
Regressive ALM (rALM):  h(ALM(e, j)) = w1 · ALM(e, j) + b

Notice that h(·) here is supposed to relate FLM or ALM to some independent evaluation metric such as BLEU (Papineni et al., 2002), not the log likelihood of a translation.

With the confidence models in place, define a MEMT model Ψ by:

Ψ(e, J, l) = \arg\max_{j \in J} \theta(e, j \mid l)

Here e represents a source sentence, J a set of translations for e generated by OTSs, and θ denotes some confidence model under an LM l. Throughout the rest of the paper, we let FLMψ and ALMψ denote MEMT systems based on FLM and ALM, respectively, and similarly for others.

1 Note that P_l(j) = P(l) \prod_{i=1}^{m} P(w_i \mid w_{i-2}, w_{i-1}, l), where j = w_1 · · · w_m. Assume a uniform prior for l.
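A minimal sketch of how these confidence models drive selection. The language-model and IBM Model 1 scorers are passed in as callables (placeholders of our own), and the regression step appears only as the one-feature linear form h(s) = w1 · s + b given above; in the actual systems the weights come from trained support vector regressors.

def flm(j_tokens, lm_logprob):
    """FLM(e, j) = log P_l(j); fidelity is ignored."""
    return lm_logprob(j_tokens)

def alm(e_tokens, j_tokens, lm_logprob, ibm1_logprob):
    """ALM(e, j) = log [ P_l(j) * Q(e | j) ], with Q from IBM Model 1."""
    return lm_logprob(j_tokens) + ibm1_logprob(e_tokens, j_tokens)

def select_translation(e_tokens, candidates, score, w1=1.0, b=0.0):
    """Pick argmax_j h(score(e, j)) over the candidate translations J.

    With w1 = 1 and b = 0 this is plain FLM/ALM selection; a fitted
    (w1, b) gives the regressive variants rFLM / rALM.
    """
    def h(s):
        return w1 * s + b
    return max(candidates, key=lambda j: h(score(e_tokens, j)))

In practice lm_logprob would wrap a trigram LM and ibm1_logprob an IBM Model 1 table; both names are assumptions made for illustration.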
3 Notes on Evaluation

We assume here that the MEMT works on a sentence-by-sentence basis. That is, it takes as input a source sentence, gets it translated by several OTSs, and picks the best among the translations it gets. Now a problem with using BLEU in this setup is that translations often end up with a score of zero, because the model translations they refer to do not contain n-grams of a particular length.2 This would make comparison and selection among candidate translations impossible. One way out of this, Nomoto (2003) suggests, is to back off to a somewhat imprecise yet robust metric for evaluating translations, which he calls m-precision.3 The idea of m-precision helps define what an optimal MEMT should look like. Imagine a system which operates by choosing, among candidates, a translation that gives the best m-precision. We would reasonably expect such a system to outperform any of its component OTSs. Indeed Nomoto (2003) demonstrates empirically that this is the case. Moreover, since rFLMψ and rALMψ work on a sentence, not on a block of them, what h(·) relates to is not BLEU, but m-precision.

Hogan and Frederking (1998) introduce a new kind of yardstick for measuring the effectiveness of MEMT systems. The rationale for this is that it is often the case that the efficacy of MEMT systems does not translate into the performance of the outputs that they generate. We recall that with BLEU, one measures the performance of translations, not how often a given MEMT system picks the best translation among candidates. The problem is, even if a MEMT is right about its choices more often than the best component engine, BLEU may not show it. This happens because a best translation may not always get a high score in BLEU. Indeed, differences in BLEU among candidate translations could be very small. Now what Hogan and Frederking (1998) suggest is the following:

d(\psi_m) = \frac{\sum_{i}^{N} \delta\left(\psi_m(e),\, \max\{\sigma_{e1} \cdots \sigma_{eM}\}\right)}{N}

where δ(i, j) is the Kronecker delta function, which gives 1 if i = j and 0 otherwise. Here ψm represents some MEMT system, and ψm(e) denotes a particular translation ψm chooses for sentence e, i.e., ψm(e) = Ψ(e, J, l). σ_{e1} . . . σ_{eM} ∈ J denotes a set of candidate translations. max here gives a translation with the highest score in m-precision. N is the number of source sentences. δ(·) says that you get 1 if the particular translation the MEMT chooses for a given sentence happens to rank highest among the candidates. d(ψm) gives the average ratio of the times ψm hits a right translation. Let us call d(ψm) HF accuracy (HFA) for the rest of the paper.

2 In their validity study of BLEU, Reeder and White (2003) find that its correlation with human judgments increases with the corpus size, and warn that to get a reliable score for BLEU, one should run it on a corpus of at least 4,000 words. Also, Tate et al. (2003) report some correlation between BLEU and task-based judgments.
3 For a reference translation r and a machine-generated translation t, m-precision is defined as:

m\text{-precision} = \sum_{i}^{N} \frac{\sum_{v \in S_t^i} C(v, r)}{\sum_{v \in S_t^i} C(v, t)}

which is nothing more than Papineni et al. (2002)'s modified n-gram precision applied to a pair of a single reference and the associated translation. S_t^i here denotes the set of i-grams in t, and v an i-gram. C(v, t) indicates the count of v in t. Nomoto (2003) finds that m-precision strongly correlates with BLEU, which justifies the use of m-precision as a replacement for BLEU at the sentence level.
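To make the two measures concrete, here is a small sketch of sentence-level m-precision and of HF accuracy. The clipping of n-gram counts follows the standard reading of "modified n-gram precision" (Papineni et al., 2002), and the choice of n-gram orders 1–3, mirroring the trigram-limited BLEU used later, is our assumption.

from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[k:k + n]) for k in range(len(tokens) - n + 1)]

def m_precision(translation, reference, max_n=3):
    """Sentence-level m-precision: modified n-gram precision against a
    single reference, summed over n-gram orders 1..max_n."""
    total = 0.0
    for n in range(1, max_n + 1):
        t_counts = Counter(ngrams(translation, n))
        r_counts = Counter(ngrams(reference, n))
        denom = sum(t_counts.values())
        if denom == 0:
            continue
        num = sum(min(c, r_counts[v]) for v, c in t_counts.items())
        total += num / denom
    return total

def hf_accuracy(choices, candidate_sets, references):
    """HFA: fraction of sentences where the chosen translation attains the
    best m-precision among the candidates."""
    hits = 0
    for chosen, cands, ref in zip(choices, candidate_sets, references):
        best = max(m_precision(c, ref) for c in cands)
        hits += int(m_precision(chosen, ref) == best)
    return hits / len(choices)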
4 LM perplexity and MEMT performance

Now the question we are interested in asking is whether the choice of LM really matters. That is, does a particular choice of LM give a better-performing FLMψ or ALMψ than something else, and if it does, do we have a systematic way of choosing one LM over another?

Let us start with the first question. As a way of shedding some light on the issue, we ran FLMψ and ALMψ using a variety of LMs, derived from various domains with varying amounts of training data. We worked with 24 LMs from various genres, with vocabulary sizes ranging from somewhere near 10K to 20K words (see below and also Appendix A for details on the training sets). The LMs here are trigram based and created using an open source speech recognition tool called JULIUS.4 The training data for the LMs are collected from five corpora, which we refer to as CPC, EJP, PAT, LIT, and NIKMAI for the sake of convenience. CPC is a huge set of semi-automatically aligned pairs of English and Japanese texts from a Japanese newspaper which contains as many as 150,000 sentences (Utiyama and Isahara, 2002), EJP represents a relatively small parallel corpus of English/Japanese phrases (totaling 15,187) for letter writing in business (Takubo and Hashimoto, 1999), PAT is a bilingual corpus of 336,971 abstracts from Japanese patents filed in 1995, with associated translations in English (a.k.a. NTCIR-3 PATENT).5 LIT contains 100 Japanese literary works from the early 20th century, and NIKMAI 1,536,191 sentences compiled from several Japanese newspaper sources. Both LIT and NIKMAI are monolingual.

Fig. 1 gives a plot of HF accuracy by perplexity for FLMψ's on test sets pulled out of PAT, EJP and CPC.6 Each dot there represents an FLMψ with a particular LM plugged into it. The HFA of each FLMψ in Fig. 1 represents a 10-fold cross-validated HFA score, namely an HFA averaged over evenly-split 10 blocks of a test set. The perplexity is that of P_l(j) averaged over blocks, with a particular LM plugged in for l (see Equation 1). We can see there an apparent tendency for an LM with lower perplexity to give rise to an FLMψ with higher HFA, indicating that the choice of LM does indeed influence the performance of FLMψ.

Figure 1: HF accuracy-by-perplexity plots for FLMψ with four OTSs, Ai, Lo, At, Ib, on PAT (left), CPC (center) and EJP (right). Each panel plots HF accuracy against LM perplexity; dots represent FLMψ's with various LMs.

4 http://julius.sourceforge.jp
5 A bibliographic note. NTCIR-3 PATENT: NII Test Collection for Information Retrieval Systems, distributed through the National Institute of Informatics (www.nii.ac.jp).
6 The test sets from EJP and CPC each contain 7,500 bilingual sentences; that from PAT contains 4,600 bilingual abstracts (approximately 9,200 sentences). None of them overlaps with the remaining part of the corresponding data set. The relevant LMs are built on Japanese data drawn from the data sets. We took care not to train LMs on test sets. (See Section 6 for further details.)
This is somewhat surprising, given that the perplexity of a machine-generated translation should be independent of how similar it is to a model translation, which is what dictates the HFA.7

7 Recall that the HFA does not represent a confidence score such as the one given by FLM (Equation 1), but the average ratio of the times that an MEMT based on FLM picks a translation with the best m-precision.

Now let us turn to the question of whether there is any systematic way of choosing an LM so that it gives rise to an FLMψ with high HFA. Since we are working with multiple OTS systems here, we get multiple outputs for a source text. Our idea is to let them vote for an LM to plug into FLMψ or, for that matter, any other form of MEMT discussed earlier. Note that we could take an alternate approach of letting a model (or human) translation (associated with a source text) pick an LM on its own. An obvious problem with this approach, however, is that a mandatory reference to model translations would compromise the robustness of the approach. We would want the LM to work for MEMT regardless of whether model translations are available. So our concern here is more with choosing an LM in the absence of model translations, to which we will return below.

5 Voting Language Model

We consider here a simple voting scheme à la ROVER (Fiscus, 1997; Schwenk and Gauvain, 2000; Utsuro et al., 2003), which works by picking up an LM voted for by the majority. More specifically, for each output translation for a given input, we first pick up the LM which gives it the smallest perplexity, and out of those LMs, the one picked by the majority of translations will be plugged into the MEMT. We call the selection scheme voting-by-majority or simply V-by-M. The V-by-M scheme is motivated by the results in Fig. 1, where perplexity is found to be a reasonably good predictor of HFA.

Formally, we could put the V-by-M scheme as follows. For each of the translation outputs j_1^e . . . j_n^e associated with a given input sentence e, we want to find some LM M_i from a set L of LMs such that:

M_i = \arg\min_{m \in L} PP(j_i^e \mid m)

where PP(j | m) is the perplexity of j under m. Now assume M_1 . . . M_n are such LMs for j_1^e . . . j_n^e. Then we pick up the M with the largest frequency and plug it into a confidence model θ such as FLM.8 Suppose, for instance, that Ma, Mb, Ma and Mc are the lowest-perplexity LMs found for translations j_1^e, j_2^e, j_3^e and j_4^e, respectively. Then we choose Ma as the LM most voted for, because it gets two votes from j_1^e and j_3^e, meaning that Ma is nominated as the LM with lowest perplexity by j_1^e and j_3^e, while Mb and Mc each collect only one vote. In case of ties, we randomly choose one of the LMs with the largest count of votes.

Table 1: A MEMT algorithm implementing V-by-M. S represents a set of OTS systems, L a set of language models. θ is some confidence model such as (r)FLM or (r)ALM. V-by-M chooses a most-voted-for LM among those in L, given the set J of translations for e.

MEMT(e, S, L)
begin
    J = {j | j is a translation of e generated by s ∈ S}
    l = V-by-M(J, L)
    j_k = arg max_{j ∈ J} θ(e, j | l)
    return j_k
end

6 Experiment Setup and Procedure

Let us describe the setup of the experiments we have conducted. The goal here is to learn how the V-by-M affects the overall MEMT performance. For test sets, we carry over those from the perplexity experiments (see Footnote 6, Section 4), which are derived from CPC, EJP, and PAT. (Call them tCPC, tEJP, and tPAT hereafter.)
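Before describing the experimental procedure, here is a small sketch of the selection loop in Table 1. The perplexity, confidence, and translate callables are placeholders of our own, and ties are broken randomly as described in §5.

import random
from collections import Counter

def v_by_m(translations, lms, perplexity, rng=random):
    """Pick the LM nominated (lowest perplexity) by the most translations."""
    votes = Counter(min(lms, key=lambda m: perplexity(j, m))
                    for j in translations)
    top = max(votes.values())
    return rng.choice([m for m, c in votes.items() if c == top])

def memt(e, systems, lms, confidence, perplexity, translate):
    """MEMT(e, S, L) from Table 1: translate, vote for an LM, then pick the
    candidate with the highest confidence under that LM."""
    candidates = [translate(s, e) for s in systems]   # J
    lm = v_by_m(candidates, lms, perplexity)          # voted-for LM
    return max(candidates, key=lambda j: confidence(e, j, lm))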
In the experiments, we begin by splitting a test set into equal-sized blocks, each containing 500 sentences for tEJP and tCPC, and 100 abstracts (approximately 200 sentences) for tPAT.9 We had a total of 15 blocks for tCPC and tEJP, and 46 blocks for tPAT. We leave one for evaluation and use the rest for training alignment models, i.e., Q(e | j), SV regressors, and some inside-data LMs. (Again we took care not to inadvertently train LMs on test sets.) We send a test block to the OTSs Ai, Lo, At, and Ib for translation and combine their outputs using the V-by-M scheme, which may or may not be coupled with regression SVMs. Recall that the MEMT operates on a sentence-by-sentence basis. So what happens here is that for each of the sentences in a block, the MEMT works the four MT systems to get translations and picks the one that produces the best score under θ. We evaluate the MEMT performance by running HFA and BLEU on the MEMT-selected translations block by block,10 and give the average performance over the blocks. Table 1 provides algorithmic details on how the MEMT actually operates.

8 It is worth noting that the voted language model readily lends itself to a mixture model: P(j) = \sum_{m \in M} \lambda_m P(j \mid m), where λm = 1 if m is most voted for and 0 otherwise.
9 tCPC had an average of 15,478 words per block, whereas tEJP had about 11,964 words on average in each block. With tPAT, however, the average per-block word length grew to 16,150.
10 We evaluate performance by block because of reports in the MT literature that warn that BLEU behaves erratically on a small set of sentences (Reeder and White, 2003). See also Section 3 and Footnote 2 for the relevant discussion.

Table 2: HF accuracy of MEMT models with V-by-M.

Model | tCPC   | tEJP   | tPAT   | avg.
rFLMψ | 0.4230 | 0.4510 | 0.8066 | 0.5602
rALMψ | 0.4194 | 0.4346 | 0.8093 | 0.5544
FLMψ  | 0.4277 | 0.4452 | 0.7342 | 0.5357
ALMψ  | 0.4453 | 0.4485 | 0.7702 | 0.5547

Table 3: HF accuracy of MEMT models with randomly chosen LMs. Note how FLMψ and ALMψ drop in performance.

Model | tCPC   | tEJP   | tPAT   | avg.
rFLMψ | 0.4207 | 0.4186 | 0.8011 | 0.5468
rALMψ | 0.4194 | 0.4321 | 0.8095 | 0.5537
FLMψ  | 0.4126 | 0.3520 | 0.6350 | 0.4665
ALMψ  | 0.4362 | 0.3597 | 0.6878 | 0.4946

7 Results and Discussion

Now let us see what we found from the experiments. We ran the MEMT on a test set with (r)FLM or (r)ALM embedded in it. Recall that our goal here is to find how the V-by-M affects the performance of MEMT on tCPC, tEJP, and tPAT.

First, we look at whether the V-by-M affects the HFA of the MEMT in any way, and if it does, then by how much. Table 2 and Table 3 give summaries of results on HFA versus V-by-M. Table 2 shows how things are with V-by-M on, and Table 3 shows what happens to HFA when we turn off V-by-M, that is, when we randomly choose an LM from the same set that the V-by-M chooses from. The results indicate a clear drop in performance of FLMψ and ALMψ when one chooses an LM randomly.11 Curiously, however, rFLMψ and rALMψ are affected less. They remain roughly at the same level of HFA over Table 2 and Table 3.
What this means is that there is some discrepancy in the effectiveness of V-by-M between the fluency-based and regression-based models. We have no explanation for the cause of the discrepancy at this time, though we may suspect that in learning, as long as there is some pattern to exploit in m-precision and the probability estimates of test sentences, how accurate those estimates are may not matter much.

Table 4 and Table 5 give results in BLEU.12 The results tend to replicate what we found with HFA. rFLMψ and rALMψ keep the edge over FLMψ and ALMψ whether or not V-by-M is brought into action. The differences in performance between rFLMψ and rALMψ with or without the V-by-M scheme are rather negligible. However, if we turn to FLMψ and ALMψ, the effects of the V-by-M are clearly visible. FLMψ scores 0.2107 when coupled with the V-by-M. However, when disengaged, the score slips to 0.1946. The same holds for ALMψ.

Table 4: Performance in BLEU of MEMT models with V-by-M.

Model | tCPC   | tEJP   | tPAT   | avg.
rFLMψ | 0.1743 | 0.2861 | 0.1954 | 0.2186
rALMψ | 0.1735 | 0.2869 | 0.1954 | 0.2186
FLMψ  | 0.1736 | 0.2677 | 0.1907 | 0.2107
ALMψ  | 0.1763 | 0.2622 | 0.1934 | 0.2106

Table 5: Performance in BLEU of MEMT models with randomly chosen LMs.

Model | tCPC   | tEJP   | tPAT   | avg.
rFLMψ | 0.1738 | 0.2717 | 0.1950 | 0.2135
rALMψ | 0.1735 | 0.2863 | 0.1954 | 0.2184
FLMψ  | 0.1710 | 0.2301 | 0.1827 | 0.1946
ALMψ  | 0.1745 | 0.2286 | 0.1871 | 0.1967

Leaving the issue of MEMT models momentarily, let us see how the OTS systems Ai, Lo, At, and Ib are doing on tCPC, tEJP, and tPAT. Note that the whole business of MEMT would collapse if it slips behind any of the OTS systems that compose it. Table 6 and Table 7 show the performance of the four OTS systems plus OPM, by HFA and by BLEU. OPM here denotes an oracle MEMT which operates by choosing in hindsight a translation that gives the best score in m-precision among those produced by the OTSs. It serves as a practical upper bound for MEMT, while the OTSs serve as baselines.

First, let us look at Table 6 and compare it to Table 2. The good news is that most of the OTS systems do not even come close to the MEMT models. At, the best-performing OTS system, gets 0.4643 on average, which is about 20% less than the score of rFLMψ. Turning to BLEU, we find again in Table 7 that the best-performing system among the OTSs, i.e., Ai, is outperformed by FLMψ, ALMψ and all their varieties (Table 4). Also something of note here is that on tPAT, (r)FLMψ and (r)ALMψ in Table 4, which operate by the V-by-M scheme, score somewhere from 0.1907 to 0.1954 in BLEU, coming close to OPM, which scores 0.1995 on tPAT (Table 7). It is interesting to note, incidentally, that there is some discrepancy between BLEU and HFA in the performance of the OTSs: the top-performing OTS in Table 6, namely At, achieves an average HFA of 0.4643, but scores only 0.1738 for BLEU (Table 7), which is worse than what Ai gets.

Table 6: HF accuracy of OTS systems.

Model | tCPC   | tEJP   | tPAT   | avg.
Ai    | 0.2363 | 0.4319 | 0.0921 | 0.2534
Lo    | 0.1718 | 0.2124 | 0.0504 | 0.1449
At    | 0.4211 | 0.1681 | 0.8037 | 0.4643
Ib    | 0.1707 | 0.1876 | 0.0537 | 0.1373
OPM   | 1.0000 | 1.0000 | 1.0000 | 1.0000

Table 7: Performance of OTS systems in BLEU.

Model | tCPC   | tEJP   | tPAT   | avg.
Ai    | 0.1495 | 0.2874 | 0.1385 | 0.1918
Lo    | 0.1440 | 0.1711 | 0.1402 | 0.1518
At    | 0.1738 | 0.1518 | 0.1959 | 0.1738
Ib    | 0.1385 | 0.1589 | 0.1409 | 0.1461
OPM   | 0.2111 | 0.3308 | 0.1995 | 0.2471

11 Another interesting question to ask at this point is how one huge LM trained across domains compares to the V-by-M here. By definition of perplexity, the increase in the size of the training data leads to an increase in the perplexity of the LM. So if the general observations in Fig. 1 hold, we would expect the "one-huge-LM" approach to perform poorly compared to the V-by-M, which is indeed demonstrated by the following results. HFLMψ below denotes an FLMψ based on a composite LM trained over CPC, LIT, PAT, NIKMAI, and EJP. The testing procedure is the same as that described in Sec. 6.

Model        | tCPC   | tEJP   | tPAT   | avg.
HFLMψ (HFA)  | 0.4182 | 0.4081 | 0.6927 | 0.5063
HFLMψ (BLEU) | 0.1710 | 0.2619 | 0.1874 | 0.2067

12 The measurements in BLEU here take into account up to trigrams.
Apparently, high HFA does not always mean a high BLEU score. Why? The reason is that a best MT output need not mark a high BLEU score. Notice that 'best' here means the best among the translations by the OTSs. It could happen that a poor translation still gets chosen as best, because the other translations are far worse.

To return to the discussion of (r)FLMψ and (r)ALMψ, an obvious fact about their behavior is that the regressor-based systems rFLMψ and rALMψ, whether V-by-M enabled or not, surpass their less sophisticated counterparts in performance (see Tables 2 and 4, and also Tables 3 and 5). Regression allows the MEMT models to correct themselves for some domain-specific bias of the OTS systems. But the downside of using regression to capitalize on their bias is that you may need to be careful about the data you train a regressor on. Here is what we mean. We ran experiments using SVM regressors trained on a set of data randomly sampled from tCPC, tEJP, and tPAT. (In contrast, rFLMψ and rALMψ in the earlier experiments had a regressor trained separately on each data set.) They all operated in the V-by-M mode. The results are shown in Table 8 and Table 9. What we find there is that with regressors trained on perturbed data, both rFLMψ and rALMψ do not perform as well as before; in fact they even fall behind FLMψ and ALMψ in HFA, and their performance in BLEU turns out to be just about as good as that of FLMψ and ALMψ. So regression may backfire when trained on the wrong data.

Table 8: HF accuracy of MEMTs with a perturbed SV regressor in the V-by-M scheme.

Model | tCPC   | tEJP   | tPAT   | avg.
rFLMψ | 0.4230 | 0.4353 | 0.6712 | 0.5098
rALMψ | 0.4195 | 0.4302 | 0.5582 | 0.4693
FLMψ  | 0.4277 | 0.4452 | 0.7342 | 0.5357
ALMψ  | 0.4453 | 0.4485 | 0.7702 | 0.5547

Table 9: Performance in BLEU of MEMTs with a perturbed SV regressor in the V-by-M scheme.

Model | tCPC   | tEJP   | tPAT   | avg.
rFLMψ | 0.1743 | 0.2823 | 0.1835 | 0.2134
rALMψ | 0.1736 | 0.2843 | 0.1696 | 0.2092
FLMψ  | 0.1736 | 0.2677 | 0.1907 | 0.2107
ALMψ  | 0.1763 | 0.2622 | 0.1934 | 0.2106

8 Conclusion

Let us summarize what we have done and learned from the work. We started with the finding that the choice of language model could affect the performance of the MEMT models of which it is part. The V-by-M was introduced as a way of responding to the problem of how to choose among LMs so that we get the best MEMT. We have shown that the V-by-M scheme is indeed up to the task, predicting the right LM most of the time. Also worth mentioning is that the MEMT models here, when coupled with V-by-M, are all found to surpass the component OTS systems by a respectable margin (cf. Tables 4 and 7 for BLEU, 2 and 6 for HFA).

Regressive MEMTs such as rFLMψ and rALMψ are found to be not affected as much by the choice of LM as their non-regressive counterparts. We suspect this happens because they have access to extra information on the quality of translation derived from human judgments or translations, which may cloud the effects of LMs on them. But we also pointed out that regressive models work well only when they are trained on the right data; if you train them across different sources of varying genres, they could fail.

An interesting question that remains to be addressed is how we might deal with translations from a novel domain. One possible approach would be to use a dynamic language model which adapts itself to a new domain by re-training itself on data sampled from the Web (Berger and Miller, 1998).

References

Yasuhiro Akiba, Taro Watanabe, and Eiichiro Sumita. 2002. Using language and translation models to select the best among outputs from multiple MT systems.
In Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002), Taipei. Adam Berger and Robert Miller. 1998. Just-intime language modelling. In Proceedings of ICASSP98. Ralf Brown and Robert Frederking. 1995. Applying statistical English language modelling to symbolic machine translation. In Proceedings of the Sixth International Conference on Theoretical and Methodological Issues in Machine Translation (TMI’95), pages 221–239, Leuven, Belgium, July. Peter F. Brown, Stephen A. Della Pietra, Vincent J.Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311, June. Jamie Callan, Fabio Crestani, Henrik Nottelmann, Pietro Pala, and Xia Mang Shou. 2003. Resource selection and data fusion in multimedia distributed digital libaries. In Proceedings of the 26th Annual International ACM/SIGIR Conference on Research and Development in Information Retrieval. ACM. Jonathan G. Fiscus. 1997. A post-processing system to yield reduced word error rates: Recogniser output voting error reduction (ROVER). In Proc. IEEE ASRU Workshop, pages 347–352, Santa Barbara. Rober Frederking and Sergei Nirenburg. 1994. Three heads are better than one. In Proceedings of the Fourth Conference on Applied Natural Language Processing, Stuttgart. Christopher Hogan and Robert E. Frederking. 1998. An evaluation of the multi-engine MT architecture. In Proceedings of the Third Conference of the Association for Machine Translation in the Americas (AMTA ’98), pages 113–123, Berlin, October. Springer-Verlag. Lecture Notes in Artificial Intelligence 1529. Tadashi Nomoto. 2003. Predictive models of performance in multi-engine machine translation. In Proceedings of Machine Translation Summit IX, New Orleans, September. IAMT. Kishore Papineni, Salim Roukos, Todd Ward, and Wei ing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, July. Florence Reeder and John White. 2003. Granularity in MT evaluation. In MT Summit Workshop on Machine Translation Evaluation: Towards Systematizing MT Evaluation, pages 37–42, New Orleans. AMTA. Bernhard Sch¨olkopf, Chirstpher J. C. Burges, and Alexander J. Smola, editors. 1998. Advances in Kernel Methods: Support Vector Learning. The MIT Press. Holger Schwenk and Jean-Luc Gauvain. 2000. Combining multiple speech recognizers using voting and language model information. In Proceedings of the IEEE International Conference on Speech and Language Proceesing (ICSLP), volume 2, pages 915–918, Beijin, October. IEEE. Kohei Takubo and Mitsunori Hashimoto. 1999. A Dictionary of English Business Letter Expressions. Published in CDROM. Nihon Keizai Shinbun Sha. Calandra Tate, Sooyon Lee, and Clare R. Voss. 2003. Task-based MT evaluation: Tackling software, experimental design, & statistical models. In MT Summit Workshop on Machine Translation Evaluation: Towards Systematizing MT Evaluation, pages 43–50. AMTA. Masao Utiyama and Hitoshi Isahara. 2002. Alignment of japanese-english news articles and sentences. In IPSJ Proceedings 2002-NL-151, pages 15–22. In Japanese. Takehito Utsuro, Yasuhiro Kodama, Tomohiro Watanabe, Hiromitsu Nishizaki, and Seiichi Nakagawa. 2003. Confidence of agreement among multiple LVCSR models and model combination by svm. In Proceedings of the 28th IEEE InternaTable 10: Language models in MEMT Models Train Size Voc. 
Genre paj98j102t 1,020K 20K PAT paj96j5t 50K 20K PAT paj96j3t 30K 20K PAT paj98j5t 50K 20K PAT paj96j102t 1,020K 20K PAT paj98j3t 30K 20K PAT paj98j1t 10K 14K PAT paj1t 10K 14K PAT paj98j5k 5K 10K PAT paj5k 5K 10K PAT lit8t 80K 20K LIT lit5t 50K 20K LIT lit3t 30K 20K LIT lit5k 5K 13K LIT lit1t 10K 13K LIT nikmai154t 1,540K 20K NWS nikmai5t 50K 20K NWS crl14t 40K 20K NWS crl5t 50K 20K NWS nikmai3t 30K 20K NWS nikmai1t 10K 17K NWS nikmai5k 5K 12K NWS crl3t 30K 20K NWS ejp8k 8K 8K BIZ tional Conference on Acoustics, Speech and Signal Processing, pages 16–19. IEEE, April. John White. 2001. Predicting intelligibility from fidelity in MT evaluation. In Proceedings of the workshop ”MT Evaluation: Who did What to Whom”, pages 35–37. Appendix A Language Models Table 10 lists language models used in the voting based MEMTs discussed in the paper. They are more or less arbitrarily built from parts of the copora CPC, EJP, NIKMAI, EJP, and LIT. ‘Train size’ indicates the number of sentences, given in kilo, in a corpus on which a particular model is trained. Under ‘Voc(abulary)’ is listed the number of type words for each LM (also given in kilo). Notice the difference in the way the train set and vocabulary are measured. ‘Genre’ indicates the genre of a trainig data used for a given LM: PAT stands for patents (from PAT), LIT literary texts (from LIT), NWS news articles (from CPC and NIKMAI), and BIZ business related texts (from EJP).
Aligning words using matrix factorisation Cyril Goutte, Kenji Yamada and Eric Gaussier Xerox Research Centre Europe 6, chemin de Maupertuis F-38240 Meylan, France Cyril.Goutte,Kenji.Yamada,[email protected] Abstract Aligning words from sentences which are mutual translations is an important problem in different settings, such as bilingual terminology extraction, Machine Translation, or projection of linguistic features. Here, we view word alignment as matrix factorisation. In order to produce proper alignments, we show that factors must satisfy a number of constraints such as orthogonality. We then propose an algorithm for orthogonal non-negative matrix factorisation, based on a probabilistic model of the alignment data, and apply it to word alignment. This is illustrated on a French-English alignment task from the Hansard. 1 Introduction Aligning words from mutually translated sentences in two different languages is an important and difficult problem. It is important because a wordaligned corpus is typically used as a first step in order to identify phrases or templates in phrase-based Machine Translation (Och et al., 1999), (Tillmann and Xia, 2003), (Koehn et al., 2003, sec. 3), or for projecting linguistic annotation across languages (Yarowsky et al., 2001). Obtaining a word-aligned corpus usually involves training a word-based translation models (Brown et al., 1993) in each directions and combining the resulting alignments. Besides processing time, important issues are completeness and propriety of the resulting alignment, and the ability to reliably identify general Nto-M alignments. In the following section, we introduce the problem of aligning words from a corpus that is already aligned at the sentence level. We show how this problem may be phrased in terms of matrix factorisation. We then identify a number of constraints on word alignment, show that these constraints entail that word alignment is equivalent to orthogonal non-negative matrix factorisation, and we give a novel algorithm that solves this problem. This is illustrated using data from the shared tasks of the 2003 HLT-NAACL Workshop on Building le droit de permis ne augmente pas the licence fee does not increase Figure 1: 1-1, M-1, 1-N and M-N alignments. and Using Parallel Texts (Mihalcea and Pedersen, 2003). 2 Word alignments We address the following problem: Given a source sentence f = f1 . . . fi . . . fI and a target sentence e = e1 . . . ej . . . eJ, we wish to find words fi and ej on either side which are aligned, ie in mutual correspondence. Note that words may be aligned without being directly “dictionary translations”. In order to have proper alignments, we want to enforce the following constraints: Coverage: Every word on either side must be aligned to at least one word on the other side (Possibly taking “null” words into account). Transitive closure: If fi is aligned to ej and eℓ, any fk aligned to eℓmust also de aligned to ej. Under these constraints, there are only 4 types of alignments: 1-1, 1-N, M-1 and M-N (fig. 1). Although the first three are particular cases where N=1 and/or M=1, the distinction is relevant, because most word-based translation models (eg IBM models (Brown et al., 1993)) can typically not accommodate general M-N alignments. We formalise this using the notion of cepts: a cept is a central pivot through which a subset of ewords is aligned to a subset of f-words. General M-N alignments then correspond to M-1-N alignments from e-words, to a cept, to f-words (fig. 2). 
Cepts naturally guarantee transitive closure as long as each word is connected to a single cept. In addition, coverage is ensured by imposing that each le droit de permis ne augmente pas the licence fee does not increase (1) (2) (3)(4) Figure 2: Same as figure 1, using cepts (1)-(4). English words cepts Mots francais Mots francais cepts English words Figure 3: Matrix factorisation of the example from fig. 1, 2. Black squares represent alignments. word is connected to a cept. A unique constraint therefore guarantees proper alignments: Propriety: Each word is associated to exactly one cept, and each cept is associated to at least one word on each side. Note that our use of cepts differs slightly from that of (Brown et al., 1993, sec.3), inasmuch cepts may not overlap, according to our definition. The motivation for our work is that better word alignments will lead to better translation models. For example, we may extract better chunks for phrase-based translation models. In addition, proper alignments ensure that cept-based phrases will cover the entire source and target sentences. 3 Matrix factorisation Alignments between source and target words may be represented by a I × J alignment matrix A = [aij], such that aij > 0 if fi is aligned with ej and aij = 0 otherwise. Similarly, given K cepts, words to cepts alignments may be represented by a I × K matrix F and a J × K matrix E, with positive elements indicating alignments. It is easy to see that matrix A = F × E⊤then represents the resulting word-to-word alignment (fig. 3). Let us now assume that we start from a I ×J matrix M = [mij], which we call the translation matrix, such that mij ≥0 measures the strength of the association between fi and ej (large values mean close association). This may be estimated using a translation table, a count (eg from a N-best list), etc. Finding a suitable alignment matrix A corresponds to finding factors F and E such that: M ≈F × S × E⊤ (1) where without loss of generality, we introduce a diagonal K × K scaling matrix S which may give different weights to the different cepts. As factors F and E contain only positive elements, this is an instance of non-negative matrix factorisation, aka NMF (Lee and Seung, 1999). Because NMF decomposes a matrix into additive, positive components, it naturally yields a sparse representation. In addition, the propriety constraint imposes that words are aligned to exactly one cept, ie each row of E and F has exactly one non-zero component, a property we call extreme sparsity. With the notation F = [Fik], this means that: ∀i, ∀k ̸= l, Fik.Fil = 0 As line i contains a single non-zero element, either Fik or Fil must be 0. An immediate consequence is that P i Fik.Fil = 0: columns of F are orthogonal, that is F is an orthogonal matrix (and similarly, E is orthogonal). Finding the best alignment starting from M therefore reduces to performing a decomposition into orthogonal non-negative factors. 4 An algorithm for Orthogonal Non-negative Matrix Factorisation Standard NMF algorithms (Lee and Seung, 2001) do not impose orthogonality between factors. We propose to perform the Orthogonal Non-negative Matrix Factorisation (ONMF) in two stages: We first factorise M using Probabilistic Latent Semantic Analysis, aka PLSA (Hofmann, 1999), then we orthogonalise factors using a Maximum A Posteriori (MAP) assignment of words to cepts. 
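Before turning to the estimation procedure, the factorisation view of section 3 can be made concrete with a short sketch. This is not from the paper; the sentence lengths and word-to-cept assignments below are invented for illustration. It builds F and E from per-word cept assignments and checks that the propriety constraint makes the factors orthogonal, with A = F E^T giving the word-to-word alignment.

```python
import numpy as np

# hypothetical word-to-cept assignments for one sentence pair:
# 4 source words and 3 target words shared among K = 3 cepts
f_cept = [0, 0, 1, 2]      # source word i -> cept index
e_cept = [0, 1, 2]         # target word j -> cept index
K = 3

F = np.zeros((len(f_cept), K))
F[np.arange(len(f_cept)), f_cept] = 1   # exactly one non-zero per row
E = np.zeros((len(e_cept), K))
E[np.arange(len(e_cept)), e_cept] = 1

A = F @ E.T                              # word-to-word alignment matrix
# extreme sparsity => columns of F are mutually orthogonal
assert np.allclose(F.T @ F, np.diag(F.sum(axis=0)))
# propriety: every word has one cept, every cept covers both sides
assert (F.sum(axis=1) == 1).all() and (E.sum(axis=1) == 1).all()
assert (F.sum(axis=0) >= 1).all() and (E.sum(axis=0) >= 1).all()
print(A)   # cept 0 yields a 2-1 alignment, cepts 1 and 2 are 1-1
```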
PLSA models a joint probability P(f, e) as a mixture of conditionally independent multinomial distributions: P(f, e) = X c P(c)P(f|c)P(e|c) (2) With F = [P(f|c)], E = [P(e|c)] and S = diag(P(c)), this is exactly eq. 1. Note also that despite the additional matrix S, if we set E = [P(e, c)], then P(f|e) would factor as F × E⊤, the original NMF formulation). All factors in eq. 2 are (conditional) probabilities, and therefore positive, so PLSA also implements NMF. The parameters are learned from a matrix M = [mij] of (fi, ej) counts, by maximising the likelihood using the iterative re-estimation formula of the Expectation-Maximisation algorithm (Dempster et al., 1977), cf. fig. 4. Convergence is guaranteed, leading to a non-negative factorisation of M. The second step of our algorithm is to orthogonalise E-step: P(c|fi, ej) = P(c)P(fi|c)P(ej|c) P cP(c)P(fi|c)P(ej|c) (3) M-step: P(c) = 1 N X ij mijP(c|fi, ej) (4) M-step: P(fi|c) ∝ X j mijP(c|fi, ej) (5) M-step: P(ej|c) ∝ X i mijP(c|fi, ej) (6) Figure 4: The EM algorithm iterates these E and M-steps until convergence. the resulting factors. Each source word fi is assigned the most probable cept, ie cept c for which P(c|fi) ∝P(c)P(fi|c) is maximal. Factor F is therefore set to: Fik ∝  1 if k = argmaxc P(c|fi) 0 otherwise (7) where proportionality ensures that column of F sum to 1, so that F stays a conditional probability matrix. We proceed similarly for target words ej to orthogonalise E. Thanks to the MAP assignment, each line of F and E contains exactly one non-zero element. We saw earlier that this is equivalent to having orthogonal factors. The result is therefore an orthogonal, non-negative factorisation of the original translation matrix M. 4.1 Number of cepts In general, the number of cepts is unknown and must be estimated. This corresponds to choosing the number of components in PLSA, a classical model selection problem. The likelihood may not be used as it always increases as components are added. A standard approach to optimise the complexity of a mixture model is to maximise the likelihood, penalised by a term that increases with model complexity, such as AIC (Akaike, 1974) or BIC (Schwartz, 1978). BIC asymptotically chooses the correct model size (for complete models), while AIC always overestimates the number of components, but usually yields good predictive performance. As the largest possible number of cepts is min(I, J), and the smallest is 1 (all fi aligned to all ej), we estimate the optimal number of cepts by maximising AIC or BIC between these two extremes. 4.2 Dealing with null alignments Alignment to a “null” word may be a feature of the underlying statistical model (eg IBM models), or it may be introduced to accommodate words which have a low association measure with all other words. Using PLSA, we can deal with null alignments in a principled way by introducing a null word on each side (f0 and e0), and two null cepts (“f-null” and “e-null”) with a 1-1 alignment to the corresponding null word, to ensure that null alignments will only be 1-N or M-1. This constraint is easily implemented using proper initial conditions in EM. Denoting the null cepts as cf∅and ce∅, 1-1 alignments between null cepts and the corresponding null words impose the conditions: 1. P(f0|cf∅) = 1 and ∀i ̸= 0, P(fi|cf∅) = 0; 2. P(e0|ce∅) = 1 and ∀j ̸= 0, P(ej|ce∅) = 0. Stepping through the E-step and M-step equations (3–6), we see that these conditions are preserved by each EM iteration. 
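The two-stage procedure can be sketched in a few lines of Python/numpy. This is an illustrative re-implementation of eqs. (2)-(7), not the authors' code: the dense loop over all (i, j, c) triples, the random initialisation and the fixed number of iterations are simplifying assumptions, and the AIC/BIC choice of K (sec. 4.1), the null cepts and the noise component are omitted.

```python
import numpy as np

def plsa_em(M, K, n_iter=50, seed=0):
    """EM for PLSA (eqs. 3-6) on a count matrix M of shape (I, J).
    Returns P(c), P(f|c) of shape (I, K) and P(e|c) of shape (J, K)."""
    rng = np.random.default_rng(seed)
    I, J = M.shape
    Pc = np.full(K, 1.0 / K)
    Pf_c = rng.random((I, K)); Pf_c /= Pf_c.sum(axis=0, keepdims=True)
    Pe_c = rng.random((J, K)); Pe_c /= Pe_c.sum(axis=0, keepdims=True)
    N = M.sum()
    for _ in range(n_iter):
        # E-step (eq. 3): responsibilities P(c | f_i, e_j), shape (I, J, K)
        joint = Pc[None, None, :] * Pf_c[:, None, :] * Pe_c[None, :, :]
        post = joint / np.maximum(joint.sum(axis=2, keepdims=True), 1e-300)
        # M-step (eqs. 4-6): expected counts weighted by m_ij
        w = M[:, :, None] * post
        Pc = w.sum(axis=(0, 1)) / N
        Pf_c = w.sum(axis=1) / np.maximum(w.sum(axis=(0, 1)), 1e-300)
        Pe_c = w.sum(axis=0) / np.maximum(w.sum(axis=(0, 1)), 1e-300)
    return Pc, Pf_c, Pe_c

def map_assign(Pc, P_w_c):
    """MAP orthogonalisation (eq. 7): keep only the most probable cept per
    word (the paper renormalises columns; the non-zero pattern, i.e. the
    alignment, is unchanged)."""
    F = np.zeros_like(P_w_c)
    F[np.arange(P_w_c.shape[0]), np.argmax(Pc * P_w_c, axis=1)] = 1.0
    return F

# toy translation matrix (e.g. N-best alignment counts); K = 2 is assumed
# here, whereas the paper picks K by maximising AIC or BIC over 1..min(I, J)
M = np.array([[10.0, 9.0, 0.0],
              [ 0.0, 1.0, 8.0],
              [ 0.0, 0.0, 7.0]])
Pc, Pf_c, Pe_c = plsa_em(M, K=2)
F, E = map_assign(Pc, Pf_c), map_assign(Pc, Pe_c)
print(F @ E.T)   # orthogonal, non-negative factorisation of the alignment
```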
In order to deal with null alignments, the model is therefore augmented with two null cepts, for which the probabilities are initialised according to the above conditions. As these are preserved through EM, we maintain proper 1-N and M1 alignments to the null words. The main difference between null cepts and the other cepts is that we relax the propriety constraint and do not force null cepts to be aligned to at least one word on either side. This is because in many cases, all words from a sentence can be aligned to non-null words, and do not require any null alignments. 4.3 Modelling noise Most elements of M usually have a non-zero association measure. This means that for proper alignments, which give zero probability to alignments outside identified blocks, actual observations have exactly 0 probability, ie the log-likelihood of parameters corresponding to proper alignments is undefined. We therefore refine the model, adding a noise component indexed by c = 0: P(f, e) = X c>0 P(c)P(f|c)P(e|c) +P(c = 0)P(f, e|c = 0) The simplest choice for the noise component is a uniform distribution, P(f, e|c = 0) ∝1. E-step and M-steps in eqs. (3–6) are unchanged for c > 0, and the E-step equation for c = 0 is easily adapted: P(c=0|f, e) ∝P(c=0)P(f, e|c=0). 5 Example We first illustrate the factorisation process on a simple example. We use the data provided for the French-English shared task of the 2003 HLTNAACL Workshop on Building and Using Parallel Texts (Mihalcea and Pedersen, 2003). The data is from the Canadian Hansard, and reference alignments were originally produced by Franz Och and Hermann Ney (Och and Ney, 2000). Using the entire corpus (20 million words), we trained English→French and French→English IBM4 models using GIZA++. For all sentences from the trial and test set (37 and 447 sentences), we generated up to 100 best alignments for each sentence and in each direction. For each pair of source and target words (fi, ej), the association measure mij is simply the number of times these words were aligned together in the two N-best lists, leading to a count between 0 (never aligned) and 200 (always aligned). We focus on sentence 1023, from the trial set. Figure 5 shows the reference alignments together with the generated counts. There is a background “noise” count of 3 to 5 (small dots) and the largest counts are around 145-150. The N-best counts seem to give a good idea of the alignments, although clearly there is no chance that our factorisation algorithm will recover the alignment of the two instances of ’de’ to ’need’, as there is no evidence for it in the data. The ambiguity that the factorisation will have to address, and that is not easily resolved using, eg, thresholding, is whether ’ont’ should be aligned to ’They’ or to ’need’. The N-best count matrix serves as the translation matrix. We estimate PLSA parameters for K = 1 . . . 6, and find out that AIC and BIC reach their maximum for K = 6. We therefore select 6 cepts for this sentence, and produce the alignment matrices shown on figure 6. Note that the order of the cepts is arbitrary (here the first cept correspond ’et’ — ’and’), except for the null cepts which are fixed. There is a fixed 1-1 correspondence between these null cepts and the corresponding null words on each side, and only the French words ’de’ are mapped to a null cept. Finally, the estimated noise level is P(c = 0) = 0.00053. The ambiguity associated with aligning ’ont’ has been resolved through cepts 4 and 5. 
In our resulting model, P(c=4|’ont’) ≈ 0.40 while P(c=5|’ont’) ≈ 0.54: the MAP assignment forces ’ont’ to be aligned to cept 5 only, and therefore to ’need’. Note that although the count for (need, ont) is slightly larger than the count for (they, ont) (cf. fig. 5), this is not a trivial result. The model was able to resolve the fact that they and need had to be aligned to 2 different cepts, rather than eg a larger cept corresponding to a 2-4 alignment, and to produce proper alignments through the use of these cepts.

6 Experiments

In order to perform a more systematic evaluation of the use of matrix factorisation for aligning words, we tested this technique on the full trial and test data from the 2003 HLT-NAACL Workshop. Note that the reference data has both “Sure” and “Probable” alignments, with about 77% of all alignments in the latter category. On the other hand, our system proposes only one type of alignment. The evaluation is done using the performance measures described in (Mihalcea and Pedersen, 2003): precision, recall and F-score on the probable and sure alignments, as well as the Alignment Error Rate (AER), which in our case is a weighted average of the recall on the sure alignments and the precision on the probable. Given an alignment A and gold standards G_S and G_P (for sure and probable alignments, respectively):

  P_T = |A ∩ G_T| / |A|                                        (8)
  R_T = |A ∩ G_T| / |G_T|                                      (9)
  F_T = 2 P_T R_T / (P_T + R_T) = 2 |A ∩ G_T| / (|G_T| + |A|)  (10)

where T is either S or P, and:

  AER = 1 − (|G_S| R_S + |A| P_P) / (|G_S| + |A|)              (11)

Using these measures, we first evaluate the performance on the trial set (37 sentences): as we produce only one type of alignment and evaluate against “Sure” and “Probable”, we observe, as expected, that the recall is very good on sure alignments, but precision relatively poor, with the reverse situation on the probable alignments (table 1). This is because we generate an intermediate number of alignments. There are 338 sure and 1446 probable alignments (for 721 French and 661 English words) in the reference trial data, and we produce 707 (AIC) or 766 (BIC) alignments with ONMF. Most of them are at least probably correct, as attested by P_P, but only about half of them are in the “Sure” subset, yielding a low value of P_S. Similarly, because “Probable” alignments were generated as the union of alignments produced by two annotators, they sometimes lead to very large M-N alignments, which produce on average 2.5 to 2.7 alignments per word. By contrast ONMF produces less than 1.2 alignments per word, hence the low value of R_P. As the AER is a weighted average of R_S and P_P, the resulting AERs are relatively low for our method.

Figure 5: Left: reference alignments, large squares are sure, medium squares are probable; Right: accumulated counts from IBM4 N-best lists, bigger squares are larger counts.

Figure 6: Resulting word-to-cept and word-to-word alignments for sentence 1023 (f-to-cept alignment × e-to-cept alignment = resulting alignment, over cepts 1-6 plus the two null cepts). 
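For reference, the evaluation measures in eqs. (8)-(11) above are straightforward to compute once alignments are represented as sets of (i, j) links. The helper below is a small illustrative sketch, not part of the original evaluation scripts, and the tiny example data are invented.

```python
def alignment_metrics(A, G_S, G_P):
    """Precision/recall/F on sure (G_S) and probable (G_P) links plus AER,
    following eqs. (8)-(11); A, G_S, G_P are sets of (i, j) pairs."""
    def prf(G):
        hits = len(A & G)
        return hits / len(A), hits / len(G), 2 * hits / (len(G) + len(A))
    P_S, R_S, F_S = prf(G_S)
    P_P, R_P, F_P = prf(G_P)
    aer = 1.0 - (len(A & G_S) + len(A & G_P)) / (len(G_S) + len(A))
    return {"PS": P_S, "RS": R_S, "FS": F_S,
            "PP": P_P, "RP": R_P, "FP": F_P, "AER": aer}

# tiny example: two proposed links, one sure and two probable reference links
print(alignment_metrics(A={(0, 0), (1, 2)},
                        G_S={(0, 0)},
                        G_P={(0, 0), (1, 2)}))
```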
Method PS RS FS PP RP FP AER ONMF + AIC 45.26% 94.67% 61.24% 86.56% 34.30% 49.14% 10.81% ONMF + BIC 42.69% 96.75% 59.24% 83.42% 35.82% 50.12% 12.50% Table 1: Performance on the 37 trial sentences for orthogonal non-negative matrix factorisation (ONMF) using the AIC and BIC criterion for choosing the number of cepts, discounting null alignments. We also compared the performance on the 447 test sentences to 1/ the intersection of the alignments produced by the top IBM4 alignments in either directions, and 2/ the best systems from (Mihalcea and Pedersen, 2003). On limited resources, Ralign.EF.1 (Simard and Langlais, 2003) produced the best F-score, as well as the best AER when NULL alignments were taken into account, while XRCE.Nolem.EF.3 (Dejean et al., 2003) produced the best AER when NULL alignments were discounted. Tables 2 and 3 show that ONMF improves on several of these results. In particular, we get better recall and F-score on the probable alignments (and even a better precision than Ralign.EF.1 in table 2). On the other hand, the performance, and in particular the precision, on sure alignments is dismal. We attribute this at least partly to a key difference between our model and the reference data: Method PS RS FS PP RP FP AER ONMF + AIC 49.86% 95.12% 65.42% 84.63% 37.39% 51.87% 11.76% ONMF + BIC 46.50% 96.01% 62.65% 80.92% 38.69% 52.35% 14.16% IBM4 intersection 71.46% 90.04% 79.68% 97.66% 28.44% 44.12% 5.71% HLT-03 best F 72.54% 80.61% 76.36% 77.56% 38.19% 51.18% 18.50% HLT-03 best AER 55.43% 93.81% 69.68% 90.09% 35.30% 50.72% 8.53% Table 2: Performance on the 447 English-French test sentences, discounting NULL alignments, for orthogonal non-negative matrix factorisation (ONMF) using the AIC and BIC criterion for choosing the number of cepts. HLT-03 best F is Ralign.EF.1 and best AER is XRCE.Nolem.EF.3 (Mihalcea and Pedersen, 2003). our model enforces coverage and makes sure that all words are aligned, while the “Sure” reference alignments have no such constraints and actually have a very bad coverage. Indeed, less than half the words in the test set have a “Sure” alignment, which means that a method which ensures that all words are aligned will at best have a sub 50% precision. In addition, many reference “Probable” alignments are not proper alignments in the sense defined above. Note that the IBM4 intersection has a bias similar to the sure reference alignments, and performs very well in FS, PP and especially in AER, even though it produces very incomplete alignments. This points to a particular problem with the AER in the context of our study. In fact, a system that outputs exactly the set of sure alignments achieves a perfect AER of 0, even though it aligns only about 23% of words, clearly an unacceptable drawback in many applications. We think that this issue may be addressed in two different ways. One time-consuming possibility would be to post-edit the reference alignment to ensure coverage and proper alignments. Another possibility would be to use the probabilistic model to mimic the reference data and generate both “Sure” and “Probable” alignments using eg thresholds on the estimated alignment probabilities. This approach may lead to better performance according to our metrics, but it is not obvious that the produced alignments will be more reasonable or even useful in a practical application. We also tested our approach on the RomanianEnglish task of the same workshop, cf. table 4. 
Table 3: Performance on the 447 English-French test sentences, taking NULL alignments into account, for orthogonal non-negative matrix factorisation (ONMF) using the AIC and BIC criterion for choosing the number of cepts. HLT-03 best is Ralign.EF.1 (Mihalcea and Pedersen, 2003).
  Method             PS       RS       FS       PP       RP       FP       AER
  ONMF + AIC         42.88%   95.12%   59.11%   75.17%   37.20%   49.77%   18.63%
  ONMF + BIC         40.17%   96.01%   56.65%   72.20%   38.49%   50.21%   20.78%
  IBM4 intersection  56.39%   90.04%   69.35%   81.14%   28.90%   42.62%   15.43%
  HLT-03 best        72.54%   80.61%   76.36%   77.56%   36.79%   49.91%   18.50%

Table 4: Performance on the 248 Romanian-English test sentences (only sure alignments), for orthogonal non-negative matrix factorisation (ONMF) using the AIC and BIC criterion for choosing the number of cepts. HLT-03 best is XRCE.Nolem (Mihalcea and Pedersen, 2003).
                 no NULL alignments                  with NULL alignments
  Method         PS       RS       FS       AER      PS       RS       FS       AER
  ONMF + AIC     70.34%   65.54%   67.85%   32.15%   62.65%   62.10%   62.38%   37.62%
  ONMF + BIC     55.88%   67.70%   61.23%   38.77%   51.78%   64.07%   57.27%   42.73%
  HLT-03 best    82.65%   62.44%   71.14%   28.86%   82.65%   54.11%   65.40%   34.60%

The ’HLT-03 best’ is our earlier work (Dejean et al., 2003), simply based on IBM4 alignment using an additional lexicon extracted from the corpus. Slightly better results have been published since (Barbu, 2004), using additional linguistic processing, but those were not presented at the workshop. Note that the reference alignments for Romanian-English contain only “Sure” alignments, and therefore we only report the performance on those. In addition, AER = 1 − FS in this setting. Table 4 shows that the matrix factorisation approach does not offer any quantitative improvement over these results. A gain of up to 10 points in recall does not offset a large decrease in precision. As a consequence, the AER for ONMF+AIC is about 10% higher than in our earlier work. This seems mainly due to the fact that the ’HLT-03 best’ produces alignments for only about 80% of the words, while our technique ensures coverage and therefore aligns all words. These results suggest that the remaining 20% are particularly problematic. These quantitative results are disappointing given the sophistication of the method. It should be noted, however, that ONMF provides the qualitative advantage of producing proper alignments, and in particular ensures coverage. This may be useful in some contexts, eg training a phrase-based translation system.

7 Discussion

7.1 Model selection and stability

Like all mixture models, PLSA is subject to local minima. Although using a few random restarts seems to yield good performance, the results on difficult-to-align sentences may still be sensitive to initial conditions. A standard technique to stabilise the EM solution is to use deterministic annealing or tempered EM (Rose et al., 1990). As a side effect, deterministic annealing actually makes model selection easier. At low temperature, all components are identical, and they differentiate as the temperature increases, until the final temperature, where we recover the standard EM algorithm. By keeping track of the component differentiations, we may consider multiple effective numbers of components in one pass, therefore alleviating the need for costly multiple EM runs with different cept numbers and multiple restarts.

7.2 Other association measures

ONMF is only a tool to factor the original translation matrix M, containing measures of association between fi and ej. The quality of the resulting alignment greatly depends on the way M is filled. 
In our experiments we used counts from Nbest alignments obtained from IBM model 4. This is mainly used as a proof of concept: other strategies, such as weighting the alignments according to their probability or rank in the N-best list would be natural extensions. In addition, we are currently investigating the use of translation and distortion tables obtained from IBM model 2 to estimate M at a lower cost. Ultimately, it would be interesting to obtain association measures mij in a fully nonparametric way, using corpus statistics rather than translation models, which themselves perform some kind of alignment. We have investigated the use of co-occurrence counts or mutual information between words, but this has so far not proved successful, mostly because common words, such as function words, tend to dominate these measures. 7.3 M-1-0 alignments In our model, cepts ensure that resulting alignments are proper. There is however one situation in which improper alignments may be produced: If the MAP assigns f-words but no e-words to a cept (because e-words have more probable cepts), we may produce “orphan” cepts, which are aligned to words only on one side. One way to deal with this situation is simply to remove cepts which display this behaviour. Orphaned words may then be re-assigned to the remaining cepts, either directly or after retraining PLSA on the remaining cepts (this is guaranteed to converge as there is an obvious solution for K = 1). 7.4 Independence between sentences One natural comment on our factorisation scheme is that cepts should not be independent between sentences. However it is easy to show that the factorisation is optimally done on a sentence per sentence basis. Indeed, what we factorise is the association measures mij. For a sentence-aligned corpus, the association measure between source and target words from two different sentence pairs should be exactly 0 because words should not be aligned across sentences. Therefore, the larger translation matrix (calculated on the entire corpus) is block diagonal, with non-zero association measures only in blocks corresponding to aligned sentence. As blocks on the diagonal are mutually orthogonal, the optimal global orthogonal factorisation is identical to the block-based (ie sentence-based) factorisation. Any corpus-induced dependency between alignments from different sentences must therefore be built in the association measure mij, and cannot be handled by the factorisation method. Note that this is the case in our experiments, as model 4 alignments rely on parameters obtained on the entire corpus. 8 Conclusion In this paper, we view word alignment as 1/ estimating the association between source and target words, and 2/ factorising the resulting association measure into orthogonal, non-negative factors. For solving the latter problem, we propose an algorithm for ONMF, which guarantees both proper alignments and good coverage. Experiments carried out on the Hansard give encouraging results, in the sense that we improve in several ways over state-of-the-art results, despite a clear bias in the reference alignments. Further investigations are required to apply this technique on different association measures, and to measure the influence that ONMF may have, eg on a phrase-based Machine Translation system. Acknowledgements We acknowledge the Machine Learning group at XRCE for discussions related to the topic of word alignment. We would like to thank the three anonymous reviewers for their comments. References H. Akaike. 1974. 
A new look at the statistical model identification. IEEE Tr. Automatic Control, 19(6):716–723. A.-M. Barbu. 2004. Simple linguistic methods for improving a word alignment algorithm. In Le poids des mots — Proc. JADT04, pages 88–98. P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19:263–312. H. Dejean, E. Gaussier, C. Goutte, and K. Yamada. 2003. Reducing parameter space for word alignment. In HLT-NAACL 2003 Workshop: Building and Using Parallel Texts, pages 23–26. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statistical Society, Series B, 39(1):1–38. T. Hofmann. 1999. Probabilistic latent semantic analysis. In Uncertainty in Artificial Intelligence, pages 289–296. P. Koehn, F. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proc. HLT-NAACL 2003. D. D. Lee and H. S. Seung. 1999. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788–791. D. D. Lee and H. S. Seung. 2001. Algorithms for non-negative matrix factorization. In NIPS*13, pages 556–562. R. Mihalcea and T. Pedersen. 2003. An evaluation exercise for word alignment. In HLT-NAACL 2003 Workshop: Building and Using Parallel Texts, pages 1–10. F. Och and H. Ney. 2000. A comparison of alignment models for statistical machine translation. In Proc. COLING’00, pages 1086–1090. F. Och, C. Tillmann, and H. Ney. 1999. Improved alignment models for statistical machine translation. In Proc. EMNLP, pages 20–28. K. Rose, E. Gurewitz, and G. Fox. 1990. A deterministic annealing approach to clustering. Pattern Recognition Letters, 11(11):589–594. G. Schwartz. 1978. Estimating the dimension of a model. The Annals of Statistics, 6(2):461–464. M. Simard and P. Langlais. 2003. Statistical translation alignment with compositionality constraints. In HLT-NAACL 2003 Workshop: Building and Using Parallel Texts, pages 19–22. C. Tillmann and F. Xia. 2003. A phrase-based unigram model for statistical machine translation. In Proc. HLT-NAACL 2003. D. Yarowsky, G. Ngai, and R. Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proc. HLT 2001.
FSA: An Efficient and Flexible C++ Toolkit for Finite State Automata Using On-Demand Computation Stephan Kanthak and Hermann Ney Lehrstuhl f¨ur Informatik VI, Computer Science Department RWTH Aachen – University of Technology 52056 Aachen, Germany {kanthak,ney}@informatik.rwth-aachen.de Abstract In this paper we present the RWTH FSA toolkit – an efficient implementation of algorithms for creating and manipulating weighted finite-state automata. The toolkit has been designed using the principle of on-demand computation and offers a large range of widely used algorithms. To prove the superior efficiency of the toolkit, we compare the implementation to that of other publically available toolkits. We also show that on-demand computations help to reduce memory requirements significantly without any loss in speed. To increase its flexibility, the RWTH FSA toolkit supports high-level interfaces to the programming language Python as well as a command-line tool for interactive manipulation of FSAs. Furthermore, we show how to utilize the toolkit to rapidly build a fast and accurate statistical machine translation system. Future extensibility of the toolkit is ensured as it will be publically available as open source software. 1 Introduction Finite-state automata (FSA) methods proved to elegantly solve many difficult problems in the field of natural language processing. Among the most recent ones are full and lazy compilation of the search network for speech recognition (Mohri et al., 2000a), integrated speech translation (Vidal, 1997; Bangalore and Riccardi, 2000), speech summarization (Hori et al., 2003), language modelling (Allauzen et al., 2003) and parameter estimation through EM (Eisner, 2001) to mention only a few. From this list of different applications it is clear that there is a high demand for generic tools to create and manipulate FSAs. In the past, a number of toolkits have been published, all with different design principles. Here, we give a short overview of toolkits that offer an almost complete set of algorithms: • The FSM LibraryTM from AT&T (Mohri et al., 2000b) is judged the most efficient implementation, offers various semirings, ondemand computation and many algorithms, but is available only in binary form with a proprietary, non commercial license. • FSA6.1 from (van Noord, 2000) is implemented in Prolog. It is licensed under the terms of the (GPL, 1991). • The WFST toolkit from (Adant, 2000) is built on top of the Automaton Standard Template Library (LeMaout, 1998) and uses C++ template mechanisms for efficiency and flexibility, but lacks on-demand computation. Also licensed under the terms of the (GPL, 1991). This paper describes a highly efficient new implementation of a finite-state automata toolkit that uses on-demand computation. Currently, it is being used at the Lehrstuhl f¨ur Informatik VI, RWTH Aachen in different speech recognition and translation research applications. The toolkit will be available under an open source license (GPL, 1991) and can be obtained from our website http://www-i6.informatik.rwth-aachen.de. The remaining part of the paper is organized as follows: Section 2 will give a short introduction to the theory of finite-state automata to recall part of the terminology and notation. We will also give a short explanation of composition which we use as an exemplary object of study in the following sections. In Section 2.3 we will discuss the locality of algorithms defined on finite-state automata. 
This forms the basis for implementations using on-demand computations. Then the RWTH FSA toolkit implementation is detailed in Section 3. In Section 4.1 we will compare the efficiency of different toolkits. As a showcase for the flexibility we show how to use the toolkit to build a statistical machine translation system in Section 4.2. We conclude the paper with a short summary in Section 5 and discuss some possible future extensions in Section 6. 2 Finite-State Automata 2.1 Weighted Finite-State Transducer The basic theory of weighted finite-state automata has been reviewed in numerous papers (Mohri, 1997; Allauzen et al., 2003). We will introduce the notation briefly. A semiring (K, ⊕, ⊗, 0, 1) is a structure with a set K and two binary operations ⊕and ⊗such that (K, ⊕, 0) is a commutative monoid, (K, ⊗, 1) is a monoid and ⊗distributes over ⊕and 0 ⊗ x = x ⊗ 0 = 0 for any x ∈K. We will also associate the term weights with the elements of a semiring. Semirings that are frequently used in speech recognition are the positive real semiring (IR ∪{−∞, +∞}, ⊕log, +, +∞, 0) with a ⊕log b = −log(e−a + e−b) and the tropical semiring (IR∪{−∞, +∞}, min, +, +∞, 0) representing the well-known sum and maximum weighted path criteria. A weighted finite-state transducer (Q, Σ ∪ {ϵ}, Ω∪{ϵ}, K, E, i, F, λ, ρ) is a structure with a set Q of states1, an alphabet Σ of input symbols, an alphabet Ωof output symbols, a weight semiring K (we assume it k-closed here for some algorithms as described in (Mohri and Riley, 2001)), a set E ⊆Q × (Σ ∪{ϵ}) × (Ω∪{ϵ}) × K × Q of arcs, a single initial state i with weight λ and a set of final states F weighted by the function ρ : F →K. To simplify the notation we will also denote with QT and ET the set of states and arcs of a transducer T. A weighted finite-state acceptor is simply a weighted finite-state transducer without the output alphabet. 2.2 Composition As we will refer to this example throughout the paper we shortly review the composition algorithm here. Let T1 : Σ∗×Ω∗→K and T2 : Ω∗×Γ∗→K be two transducers defined over the same semiring K. Their composition T1 ◦T2 realizes the function T : Σ∗×Γ∗→K and the theory has been described in detail in (Pereira and Riley, 1996). For simplification purposes, let us assume that the input automata are ϵ-free and S = (Q1 ×Q2, ←, → , empty) is a stack of state tuples of T1 and T2 with push, pop and empty test operations. A non lazy version of composition is shown in Figure 1. Composition of automata containing ϵ labels is more complex and can be solved by using an intermediate filter transducer that also has been described in (Pereira and Riley, 1996). 1we do not restrict this to be a finite set as most algorithms of the lazy implementation presented in this paper also support a virtually infinite set T = T1 ◦T2 : i = (i1, i2) S ←(i1, i2) while not S empty (s1, s2) ←S QT = QT ∪(s1, s2) foreach (s1, i1, o1, w1, t1) ∈ET1 foreach (s2, i2, o2, w2, t2) ∈ET2 with o1 = i2 ET = ET ∪((s1, s2), i1, o2, w1 ⊗w2, (t1, t2)) if (t1, t2) ̸∈QT then S ←(t1, t2) Figure 1: Simplified version of composition (assumes ϵ-free input transducers). What we can see from the pseudo-code above is that composition uses tuples of states of the two input transducers to describe states of the target transducer. Other operations defined on weighted finitestate automata use different abstract states. For example transducer determinization (Mohri, 1997) uses a set of pairs of states and weights. 
However, it is more convenient to use integers as state indices for an implementation. Therefore algorithms usually maintain a mapping from abstract states to integer state indices. This mapping has linear memory requirements of O(|QT |) which is quite attractive, but that depends on the structure of the abstract states. Especially in case of determinization where the size of an abstract state may vary, the complexity is no longer linear in general. 2.3 Local Algorithms Mohri and colleagues pointed out (Mohri et al., 2000b) that a special class of transducer algorithms can be computed on demand. We will give a more detailed analysis here. We focus on algorithms that produce a single transducer and refer to them as algorithmic transducers. Definition: Let θ be the input configuration of an algorithm A(θ) that outputs a single finite-state transducer T. Additionally, let M : S →QT be a one-to-one mapping from the set of abstract state descriptions S that A generates onto the set of states of T. We call A local iff for all states s ∈QT A can generate a state s of T and all outgoing arcs (s, i, o, w, s′) ∈ET , depending only on its abstract state M−1(s) and the input configuration θ. With the preceding definition it is quite easy to prove the following lemma: Lemma: An algorithm A that has the local property can be built on demand starting with the initial state iTA of its associated algorithmic transducer TA. Proof: For the proof it is sufficient to show that we can generate and therefore reach all states of TA. Let S be a stack of states of TA that we still have to process. Due to the one-to-one mapping M we can map each state of TA back to an abstract state of A. By definition the abstract state is sufficient to generate the complete state and its outgoing arcs. We then push those target states of all outgoing arcs onto the stack S that have not yet been processed. As TA is finite the traversal ends after all states of TA as been processed exactly once. 2 Algorithmic transducers that can be computed on-demand are also called lazy or virtual transducers. Note, that due to the local property the set of states does not necessarily be finite anymore. 3 The Toolkit The current implementation is the second version of this toolkit. For the first version – which was called FSM – we opted for using C++ templates to gain efficiency, but algorithms were not lazy. It turned out that the implementation was fast, but many operations wasted a lot of memory as their resulting transducer had been fully expanded in memory. However, we plan to also make this initial version publically available. The design principles of the second version of the toolkit, which we will call FSA, are: • decoupling of data structures and algorithms, • on-demand computation for increased memory efficiency, • low computational costs, • an abstract interface to alphabets to support lazy mappings from strings to indices for arc labels, • an abstract interface to semirings (should be kclosed for at least some algorithms), • implementation in C++, as it is fast, ubiquitous and well-known by many other researchers, • easy to use interfaces. 3.1 The C++ Library Implementation We use the lemma from Section 2.3 to specify an interface for lazy algorithmic transducers directly. The code written in pseudo-C++ is given in Figure 2. Note that all lazy algorithmic transducers are derived from the class Automaton. The lazy interface also has disadvantages. 
The virtual access to the data structure might slow computations down, and obtaining global information about the automaton becomes more complicated. For example the size of an automaton can only be class Automaton { public: struct Arc { StateId target(); Weight weight(); LabelId input(); LabelId output(); }; struct State { StateId id(); Weight weight(); ConstArcIterator arcsBegin(); ConstArcIterator arcsEnd(); }; virtual R<Alphabet> inputAlphabet(); virtual R<Alphabet> outputAlphabet(); virtual StateId initialState(); virtual R<State> getState(StateId); }; Figure 2: Pseudo-C++ code fragment for the abstract datatype of transducers. Note that R<T> refers to a smart pointer of T. computed by traversing it. Therefore central algorithms of the RWTH FSA toolkit are the depthfirst search (DFS) and the computation of strongly connected components (SCC). Efficient versions of these algorithms are described in (Mehlhorn, 1984) and (Cormen et al., 1990). It is very costly to store arbitrary types as arc labels within the arcs itself. Therefore the RWTH FSA toolkit offers alphabets that define mappings between strings and label indices. Alphabets are implemented using the abstract interface shown in Figure 4. With alphabets arcs only need to store the abstract label indices. The interface for alphabets is defined using a single constant: for each label index an alphabet reports it must ensure to always deliver the same symbol on request through getSymbol(). class Alphabet { public: virtual LabelId begin(); virtual LabelId end(); virtual LabelId next(LabelId); virtual string getSymbol(LabelId); }; Figure 4: Pseudo-C++ code fragment for the abstract datatype of alphabets. 3.2 Algorithms The current implementation of the toolkit offers a wide range of well-known algorithms defined on weighted finite-state transducers: • basic operations sort (by input labels, output labels or by tocompose(T1, T2) = simple-compose( cache(sort-output(map-output(T1, AT2,I))), cache(sort-input(T2))) Figure 3: Optimized composition where AT2,I denotes the input alphabet of T2. Six algorithmic transducers are used to gain maximum efficiency. Mapping of arc labels is necessary as symbol indices may differ between alphabets. tal arc), map-input and -output labels symbolically (as the user expects that two alphabets match symbolically, but their mapping to label indices may differ), cache (helps to reduce computations with lazy implementations), topologically-sort states • rational operations project-input, project-output, transpose (also known as reversal: calculates an equivalent automaton with the adjacency matrix being transposed), union, concat, invert • classical graph operations depth-first search (DFS), single-source shortest path (SSSP), connect (only keep accessible and coaccessible state), strongly connected components (SCCs) • operations on relations of sets compose (filtered), intersect, complement • equivalence transformations determinize, minimize, remove-epsilons • search algorithms best, n-best • weight/probability-based algorithms prune (based on forward/backward state potentials), posterior, push (push weights toward initial/final states), failure (given an acceptor/transducer defined over the tropical semiring converts ϵ-transitions to failure transitions) • diagnostic operations count (counts states, final states, different arc types, SCCs, alphabet sizes, . . .) 
• input/output operations supported input and/or output formats are: AT&T (currently, ASCII only), binary (fast, uses fixed byte-order), XML (slower, any encoding, fully portable), memory-mapped (also on-demand), dot (AT&T graphviz) We will discuss some details and refer to the publication of the algorithms briefly. Most of the basic operations have a straigthforward implementation. As arc labels are integers in the implementation and their meaning is bound to an appropriate symbolic alphabet, there is the need for symbolic mapping between different alphabets. Therefore the toolkit provides the lazy map-input and map-output transducers, which map the input and output arc indices of an automaton to be compatible with the indices of another given alphabet. The implementations of all classical graph algorithms are based on the descriptions of (Mehlhorn, 1984) and (Cormen et al., 1990) and (Mohri and Riley, 2001) for SSSP. The general graph algorithms DFS and SCC are helpful in the realisation of many other operations, examples are: transpose, connect and count. However, counting the number of states of an automaton or the number of symbols of an alphabet is not well-defined in case of an infinite set of states or symbols. SSSP and transpose are the only two algorithms without a lazy implementation. The result of SSSP is a list of state potentials (see also (Mohri and Riley, 2001)). And a lazy implementation for transpose would be possible if the data structures provide lists of both successor and predecessor arcs at each state. This needs either more memory or more computations and increases the size of the abstract interface for the lazy algorithms, so as a compromise we omitted this. The implementations of compose (Pereira and Riley, 1996), determinize (Mohri, 1997), minimize (Mohri, 1997) and remove-epsilons (Mohri, 2001) use more refined methods to gain efficiency. All use at least the lazy cache transducer as they refer to states of the input transducer(s) more than once. With respect to the number of lazy transducers involved in computing the result, compose has the most complicated implementation. Given the implementations for the algorithmic transducers cache, map-output, sort-input, sort-output and simple-compose that assumes arc labels to be compatible and sorted in order to perform matching as fast as possible, the final implementation of compose in the RWTH FSA toolkit is given in figure 3. So, the current implementation of compose uses 6 algorithmic transducers in addition to the two input automata. Determinize additionally uses lazy cache and sort-input transducers. The search algorithms best and n-best are based on (Mohri and Riley, 2002), push is based on (Mohri and Riley, 2001) and failure mainly uses ideas from (Allauzen et al., 2003). The algorithms posterior and prune compute arc posterior probabilities and prune arcs with respect to them. We believe they are standard algorithms defined on probabilistic networks and they were simply ported to the framework of weighted finite-state automata. Finally, the RWTH FSA toolkit can be loosely interfaced to the AT&T FSM LibraryTM through its ASCII-based input/output format. In addition, a new XML-based file format primarly designed as being human readable and a fast binary file format are also supported. All file formats support optional on-the-fly compression using gzip. 
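To illustrate the on-demand principle behind these algorithmic transducers, independently of the toolkit's actual C++ classes, the following Python sketch mirrors the ε-free composition of Figure 1 as a lazy automaton: a state of T1 ◦ T2 is a pair (s1, s2) whose outgoing arcs are computed only when first requested and then cached, exactly as the local property of Section 2.3 allows. The data structures (dicts of arc tuples, tropical weights) are invented for the example and are not the toolkit's interface.

```python
from collections import defaultdict

class LazyCompose:
    """On-demand, epsilon-free composition over the tropical semiring.
    An input automaton is a dict: state -> list of (in, out, weight, target);
    this is an illustrative stand-in for the toolkit's Automaton interface."""

    def __init__(self, arcs1, init1, finals1, arcs2, init2, finals2):
        self.arcs1, self.arcs2 = arcs1, arcs2
        self.finals1, self.finals2 = finals1, finals2
        self.initial = (init1, init2)
        self._cache = {}               # plays the role of the cache transducer

    def is_final(self, state):
        s1, s2 = state
        return s1 in self.finals1 and s2 in self.finals2

    def arcs(self, state):
        if state in self._cache:       # each state is expanded at most once
            return self._cache[state]
        s1, s2 = state
        by_input = defaultdict(list)   # index arcs of T2 by input label
        for i2, o2, w2, t2 in self.arcs2.get(s2, []):
            by_input[i2].append((o2, w2, t2))
        arcs = [(i1, o2, w1 + w2, (t1, t2))        # tropical: weights add
                for i1, o1, w1, t1 in self.arcs1.get(s1, [])
                for o2, w2, t2 in by_input[o1]]    # match o1 == i2
        self._cache[state] = arcs
        return arcs

# T1 maps a:b/1, T2 maps b:c/2; only the states actually reached get built
T1 = {0: [("a", "b", 1.0, 1)]}
T2 = {0: [("b", "c", 2.0, 1)]}
comp = LazyCompose(T1, 0, {1}, T2, 0, {1})
print(comp.arcs(comp.initial))         # [('a', 'c', 3.0, (1, 1))]
```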
3.3 High-Level Interfaces In addition to the C++ library level interface the toolkit also offers two high-level interfaces: a Python interface, and an interactive command-line interface. The Python interface has been built using the SWIG interface generator (Beazley et al., 1996) and enables rapid development of larger applications without lengthy compilation of C++ code. The command-line interface comes handy for quickly applying various combinations of algorithms to transducers without writing any line of code at all. As the Python interface is mainly identical to the C++ interface we will only give a short impression of how to use the command-line interface. The command-line interface is a single executable and uses a stack-based execution model (postfix notation) for the application of operations. This is different from the pipe model that AT&T command-line tools use. The disadvantage of using pipes is that automata must be serialized and get fully expanded by the next executable in chain. However, an advantage of multiple executables is that memory does not get fragmented through the interaction of different algorithms. With the command-line interface, operations are applied to the topmost transducers of the stack and the results are pushed back onto the stack again. For example, > fsa A B compose determinize draw reads A and B from files, calculates the determinized composition and writes the resulting automaton to the terminal in dot format (which may be piped to dot directly). As you can see from the examples some operations like write or draw take additional arguments that must follow the name of the operation. Although this does not follow the strict postfix design, we found it more convenient as these parameters are not automata. 4 Experimental Results 4.1 Comparison of Toolkits A crucial aspect of an FSA toolkit is its computational and memory efficiency. In this section we will compare the efficiency of four different implementations of weighted-finite state toolkits, namely: • RWTH FSA, • RWTH FSM (predecessor of RWTH FSA), • AT&T FSM LibraryTM 4.0 (Mohri et al., 2000b), • WFST (Adant, 2000). We opted to not evaluate the FSA6.1 from (van Noord, 2000) as we found that it is not easy to install and it seemed to be significantly slower than any of the other implementations. RWTH FSA and the AT&T FSM LibraryTM use on-demand computations whereas FSM and WFST do not. As the algorithmic code between RWTH FSA and its predecessor RWTH FSM has not changed much except for the interface of lazy transducers, we can also compare lazy versus non lazy implementation. Nevertheless, this direct comparison is also possible with RWTH FSA as it provides a static storage class transducer and a traversing deep copy operation. Table 1 summarizes the tasks used for the evaluation of efficiency together with the sizes of the resulting transducers. The exact meaning of the different transducers is out of scope of this comparison. We simply focus on measuring the efficiency of the algorithms. Experiment 1 is the full expansion of the static part of a speech recognition search network. Experiment 2 deals with a translation problem and splits words of a “bilanguage” into single words. The meaning of the transducers used for Experiment 2 will be described in detail in Section 4.2. Experiment 3 is similar to Experiment 1 except for that the grammar transducer is exchanged with a translation transducer and the result represents the static network for a speech-to-text translation system. 
Table 1: Tasks used for measuring the efficiency of the toolkits. Sizes are given for the resulting transducers (VM = Verbmobil).
  Experiment            states       arcs
  1  VM, HCL ◦ G        12,203,420   37,174,684
  2  VM, C1 ◦ A ◦ C2    341,614      832,225
  3  Eutrans, HCL ◦ T   1,201,718    3,572,601

All experiments were performed on a PC with a 1.2GHz AMD Athlon processor and 2 GB of memory using Linux as operating system. Table 2 summarizes the peak memory usage of the different toolkit implementations for the given tasks and Table 3 shows the CPU usage accordingly. As can be seen from Tables 2 and 3, for all given tasks the RWTH FSA toolkit uses less memory and computational power than any of the other toolkits. However, it is unclear to the authors why the AT&T LibraryTM is a factor of 1800 slower for experiment 2. The numbers also do not change much after additionally connecting the composition result (as in RWTH FSA compose does not connect the result by default): memory usage rises to 62 MB and execution time increases to 9.7 seconds. However, a detailed analysis for the RWTH FSA toolkit has shown that the composition task of experiment 2 makes intense use of the lazy cache transducer due to the loop character of the two transducers C1 and C2. It can also be seen from the two tables that the lazy implementation RWTH FSA uses significantly less memory than the non lazy implementation RWTH FSM and less than half of the CPU time. One explanation for this is the poor memory management of RWTH FSM, as all intermediate results need to be fully expanded in memory. In contrast, due to its lazy transducer interface, RWTH FSA may allocate memory for a state only once and reuse it for all subsequent calls to the getState() method.

Table 2: Comparison of peak memory usage in MB (∗aborted due to exceeded memory limits).
  Exp.   FSA   FSM    AT&T   WFST
  1      360   1700   1500   > 1850∗
  2      59    310    69     > 1850∗
  3      48    230    176    550

Table 3: Comparison of CPU time in seconds including I/O using a 1.2GHz AMD Athlon processor (∗exceeded memory limits: given time indicates point of abortion).
  Exp.   FSA   FSM   AT&T    WFST
  1      105   203   515     > 40∗
  2      6.5   182   11760   > 64∗
  3      6.6   21    28      3840

4.2 Statistical Machine Translation

Statistical machine translation may be viewed as a weighted language transduction problem (Vidal, 1997). Therefore it is fairly easy to build a machine translation system with the use of weighted finite-state transducers. Let f_1^J and e_1^I be two sentences from a source and target language respectively. Also assume that we have word level alignments A of all sentences from a bilingual training corpus. We denote with e_{p_1}^{p_J} the segmentation of a target sentence e_1^I into phrases such that f_1^J and e_{p_1}^{p_J} can be aligned monotonically. This segmentation can be directly calculated from the alignments A. Then we can formulate the problem of finding the best translation ê_1^I of a source sentence as follows:

  ê_1^I = argmax_{e_1^I} Pr(f_1^J, e_1^I)
        ≈ argmax_{A, e_{p_1}^{p_J}} Pr(f_1^J, e_{p_1}^{p_J})
        = argmax_{A, e_{p_1}^{p_J}} ∏_{j=1..J} Pr(f_j, e_{p_j} | f_1^{j-1}, e_{p_1}^{p_{j-1}})
        ≈ argmax_{A, e_{p_1}^{p_J}} ∏_{j=1..J} Pr(f_j, e_{p_j} | f_{j-n}^{j-1}, e_{p_{j-n}}^{p_{j-1}})

The last line suggests solving the translation problem by estimating a language model on a bilanguage (see also (Bangalore and Riccardi, 2000; Casacuberta et al., 2001)). An example of sentences from this bilanguage is given in Figure 5 for the translation task Verbmobil (German → English). For technical reasons, ϵ-labels are represented by a $ symbol. 
Note, that due to the fixed segmentation given by the alignments, phrases in the target language are moved to the last source word of an alignment block. So, given an appropriate alignment which can be obtained by means of the pubically available GIZA++ toolkit (Och and Ney, 2000), the approach is very easy in practice: 1. Transform the training corpus with a given alignment into the corresponding bilingual corpus 2. Train a language model on the bilingual corpus 3. Build an acceptor A from the language model The symbols of the resulting acceptor are still a mixture of words from the source language and phrases from the target language. So, we additionally use two simple transducers to split these bilingual words (C1 maps source words fj to bilingual words that start with fj and C2 maps bilingual words with the target sequence epj to the sequences of target words the phrase was made of): 4. Split the bilingual phrases of A into single words: T = C1 ◦A ◦C2 Then the translation problem from above can be rewritten using finite-state terminology: dann|$ melde|$ ich|I_am_calling mich|$ noch|$ einmal|once_more .|. 11U|eleven Uhr|o’clock ist|is hervorragend|excellent .|. ich|I bin|have da|$ relativ|quite_a_lot_of frei|free_days_then .|. Figure 5: Example corpus for the bilanguage (Verbmobil, German →English). Table 4: Translation results for different tasks compared to similar systems using the alignment template (AT) approach (Tests were performed on a 1.2GHz AMD Athlon). Task System Translation WER PER 100-BLEU Memory Time/Sentence [%] [%] [MB] [ms] Eutrans FSA Spanish →English 8.12 7.64 10.7 6-8 20 AT 8.25 FUB FSA Italian →English 27.0 21.5 37.7 3-5 22 AT 23.7 18.1 36.0 Verbmobil FSA German →English 48.3 41.6 69.8 65-90 460 AT 40.5 30.1 62.2 PF-Star FSA Italian →English 39.8 34.1 58.4 12-15 35 AT 36.8 29.1 54.3 e′ = project-output(best(f ◦T)) Translation results using this approach are summarized in Table 4 and are being compared with results obtained using the alignment template approach (Och and Ney, 2000). Results for both approaches were obtaining using the same training corpus alignments. Detailed task descriptions for Eutrans/FUB and Verbmobil can be found in (Casacuberta et al., 2001) and (Zens et al., 2002) respectively. We use the usual definitions for word error rate (WER), position independent word error rate (PER) and BLEU statistics here. For the simpler tasks Eutrans, FUB and PF-Star, the WER, PER and the inverted BLEU statistics are close for both approaches. On the German-toEnglish Verbmobil task the FSA approach suffers from long distance reorderings (captured through the fixed training corpus segmentation), which is not very surprising. Although we do not have comparable numbers of the memory usage and the translation times for the alignment template approach, resource usage of the finite-state approach is quite remarkable as we only use generic methods from the RWTH FSA toolkit and full search (i.e. we do not prune the search space). However, informal tests have shown that the finite-state approach uses much less memory and computations than the current implementation of the alignment template approach. Two additional advantages of finite-state methods for translation in general are: the input to the search algorithm may also be a word lattice and it is easy to combine speech recognition with translation in order to do speech-to-speech translation. 
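To see how little machinery the resulting system needs, the decoding formula above can be spelled out with the generic stack-based interface of Section 3.3. The sketch below assumes that best and project-output are available as command-line operations under exactly these names, which we have not verified; the composition follows e' = project-output(best(f o C1 o A o C2)), and in practice T = C1 o A o C2 would be composed once offline and reused for every input sentence:

  > fsa f.fsa C1.fsa A.fsa C2.fsa compose compose compose best project-output draw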
5 Summary In this paper we have given a characterization of algorithms that produce a single finite-state automaton and bear an on-demand implementation. For this purpose we formally introduced the local property of such an algorithm. We have described the efficient implementation of a finite-state toolkit that uses the principle of lazy algorithmic transducers for almost all algorithms. Among several publically available toolkits, the RWTH FSA toolkit presented here turned out to be the most efficient one, as several tests showed. Additionally, with lazy algorithmic transducers we have reduced the memory requirements and even increased the speed significantly compared to a non lazy implementation. We have also shown that a finite-state automata toolkit supports rapid solutions to problems from the field of natural language processing such as statistical machine translation. Despite the genericity of the methods, statistical machine translation can be done very efficiently. 6 Shortcomings and Future Extensions There is still room to improve the RWTH FSA toolkit. For example, the current implementation of determinization is not as general as described in (Allauzen and Mohri, 2003). In case of ambiguous input the algorithm still produces an infinite transducer. At the moment this can be solved in many cases by adding disambiguation symbols to the input transducer manually. As the implementation model is based on virtual C++ methods for all types of objects in use (semirings, alphabets, transducers and algorithmic transducers) it should also be fairly easy to add support for dynamically loadable objects to the toolkit. Other semirings like the expectation semiring described in (Eisner, 2001) are supported but not yet implemented. 7 Acknowledgment The authors would like to thank Andre Altmann for his help with the translation experiments. References Alfred V. Aho and Jeffrey D. Ullman, 1972, The Theory of Parsing, Translation and Compiling, volume 1, Prentice-Hall, Englewood Cliffs, NJ, 1972. Arnaud Adant, 2000, WFST: A Finite-State Template Library in C++, http://membres.lycos.fr/adant/tfe/. Cyril Allauzen, Mehryar Mohri, and Brian Roark, 2003, Generalized Algorithms for Constructing Statistical Language Models, In Proc. of the 41st Meeting of the Association for Computational Linguistics, Sapporo, Japan, July 2003. Cyril Allauzen and Mehryar Mohri, 2003, Generalized Optimization Algorithm for Speech Recognition Transducers, In Proc. of the IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pp. , Hong Kong, China, April 2003. Srinivas Bangalore and Giuseppe Riccardi, 2000, Stochastic Finite-State models for Spoken Language Machine Translation, In Proc. of the Workshop on Embedded Machine Translation Systems, pp. 52–59, 2000. David Beazley, William Fulton, Matthias K¨oppe, Lyle Johnson, Richard Palmer, 1996, SWIG - Simplified Wrapper and Interface Generator, Electronic Document, http://www.swig.org, February 1996. F. Casacuberta, D. Llorens, C. Martinez, S. Molau, F. Nevado, H. Ney, M. Pasto, D. Pico, A. Sanchis, E. Vidal and J.M. Vilar, 2001, Speech-to-Speech Translation based on Finite-State Transducer, In Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 613-616, Salt Lake City, Utah, May 2001. Thomas H. Cormen, Charles E. Leiserson and Ronald L. Rivest, 1990, Introductions to Algorithms, The MIT Press, Cambridge, MA, 1990. Jason Eisner, 2001, Expectation Semirings: Flexible EM for Finite-State Transducers, In Proc. 
of the ESSLLI Workshop on Finite-State Methods in NLP (FSMNLP), Helsinki, August 2001. Free Software Foundation, 1991, GNU General Public License, Version 2, Electronic Document, http://www.gnu.org/copyleft/gpl.html, June 1991. Takaaki Hori, Chiori Hori and Yasuhiro Minami, 2003, Speech Summarization using Weighted Finite-State Transducers, In Proc. of the European Conf. on Speech Communication and Technology, Geneva, Switzerland, September 2003. Vincent Le Maout, 1998, ASTL: Automaton Standard Template Library, http://www-igm.univmlv.fr/˜lemaout/. Kurt Mehlhorn, 1984, Data Structures and Efficient Algorithms, Chapter 4, Springer Verlag, EATCS Monographs, 1984, also available from http://www.mpisb.mpg.de/ ˜mehlhorn/DatAlgbooks.html. Mehryar Mohri, 1997, Finite-State Transducers in Language and Speech Processing, Computational Linguistics, 23:2, 1997. Mehryar Mohri, Fernando C.N. Pereira, and Michael Riley, 2000, Weighted Finite-State Transducers in Speech Recognition, In Proc. of the ISCA Tutorial and Research Workshop, Automatic Speech Recognition: Challenges for the new Millenium (ASR2000), Paris, France, September 2000. Mehryar Mohri, Fernando C.N. Pereira, and Michael Riley, 2000, The Design Principles of a Weighted FiniteState Transducer Library, Theoretical Computer Science, 231:17-32, January 2000. Mehryar Mohri and Michael Riley, 2000, A Weight Pushing Algorithm for Large Vocabulary Speech Recognition, In Proc. of the European Conf. on Speech Communication and Technology, pp. 1603– 1606, ˚Aalborg, Denmark, September 2001. Mehryar Mohri, 2001, Generic Epsilon-Removal Algorithm for Weighted Automata, In Sheng Yu and Andrei Paun, editor, 5th Int. Conf., CIAA 2000, London Ontario, Canada. volume 2088 of Lecture Notes in Computer Science, pages 230-242. Springer-Verlag, Berlin-NY, 2001. Mehryar Mohri and Michael Riley, 2002, An Efficient Algorithm for the N-Best-Strings Problem, In Proc. of the Int. Conf. on Spoken Language Processing, pp. 1313–1316, Denver, Colorado, September 2002. Franz J. Och and Hermann Ney, 2000, Improved Statistical Alignment Models, In Proc. of the 38th Annual Meeting of the Association for Computational Linguistics, pp. 440-447, Hongkong, China, October 2000. Fernando C.N. Pereira and Michael Riley, 1996, Speech Recognition by Composition of Weighted Finite Automata, Available from http://xxx.lanl.gov/cmplg/9603001, Computation and Language, 1996. Gertjan van Noord, 2000, FSA6 Reference Manual, http://odur.let.rug.nl/˜vannoord/Fsa/. Enrique Vidal, 1997, Finite-State Speech-to-Speech Translation, In Proc. of the IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 111–114, Munich, Germany, 1997. Richard Zens, Franz J. Och and H. Ney, 2002, PhraseBased Statistical Machine Translation, In: M. Jarke, J. Koehler, G. Lakemeyer (Eds.) : KI - 2002: Advances in artificial intelligence. 25. Annual German Conference on AI, KI 2002, Vol. LNAI 2479, pp. 1832, Springer Verlag, September 2002.
Improving IBM Word-Alignment Model 1 Robert C. MOORE Microsoft Research One Microsoft Way Redmond, WA 90052 USA [email protected] Abstract We investigate a number of simple methods for improving the word-alignment accuracy of IBM Model 1. We demonstrate reduction in alignment error rate of approximately 30% resulting from (1) giving extra weight to the probability of alignment to the null word, (2) smoothing probability estimates for rare words, and (3) using a simple heuristic estimation method to initialize, or replace, EM training of model parameters. 1 Introduction IBM Model 1 (Brown et al., 1993a) is a wordalignment model that is widely used in working with parallel bilingual corpora. It was originally developed to provide reasonable initial parameter estimates for more complex word-alignment models, but it has subsequently found a host of additional uses. Among the applications of Model 1 are segmenting long sentences into subsentental units for improved word alignment (Nevado et al., 2003), extracting parallel sentences from comparable corpora (Munteanu et al., 2004), bilingual sentence alignment (Moore, 2002), aligning syntactictree fragments (Ding et al., 2003), and estimating phrase translation probabilities (Venugopal et al., 2003). Furthermore, at the 2003 Johns Hopkins summer workshop on statistical machine translation, a large number of features were tested to discover which ones could improve a state-of-the-art translation system, and the only feature that produced a “truly significant improvement” was the Model 1 score (Och et al., 2004). Despite the fact that IBM Model 1 is so widely used, essentially no attention seems to have been paid to whether it is possible to improve on the standard Expectation-Maximization (EM) procedure for estimating its parameters. This may be due in part to the fact that Brown et al. (1993a) proved that the log-likelihood objective function for Model 1 is a strictly concave function of the model parameters, so that it has a unique local maximum. This, in turn, means that EM training will converge to that maximum from any starting point in which none of the initial parameter values is zero. If one equates optimum parameter estimation with finding the global maximum for the likelihood of the training data, then this result would seem to show no improvement is possible. However, in virtually every application of statistical techniques in natural-language processing, maximizing the likelihood of the training data causes overfitting, resulting in lower task performance than some other estimates for the model parameters. This is implicitly recognized in the widespread adoption of early stopping in estimating the parameters of Model 1. Brown et al. (1993a) stopped after only one iteration of EM in using Model 1 to initialize their Model 2, and Och and Ney (2003) stop after five iterations in using Model 1 to initialize the HMM word-alignment model. Both of these are far short of convergence to the maximum likelihood estimates for the model parameters. We have identified at least two ways in which the standard EM training method for Model 1 leads to suboptimal performance in terms of wordalignment accuracy. In this paper we show that by addressing these issues, substantial improvements in word-alignment accuracy can be achieved. 
2 Definition of Model 1 Model 1 is a probabilistic generative model within a framework that assumes a source sentence S of length l translates as a target sentence T, according to the following stochastic process: • A length m for sentence T is generated. • For each target sentence position j ∈ {1, . . . , m}: – A generating word si in S (including a null word s0) is selected, and – The target word tj at position j is generated depending on si. Model 1 is defined as a particularly simple instance of this framework, by assuming all possible lengths for T (less than some arbitrary upper bound) have a uniform probability ϵ, all possible choices of source sentence generating words are equally likely, and the translation probability tr(tj|si) of the generated target language word depends only on the generating source language word—which Brown et al. (1993a) show yields the following equation: p(T|S) = ϵ (l + 1)m m  j=1 l  i=0 tr(tj|si) (1) Equation 1 gives the Model 1 estimate for the probability of a target sentence, given a source sentence. We may also be interested in the question of what is the most likely alignment of a source sentence and a target sentence, given an instance of Model 1; where, by an alignment, we mean a specification of which source words generated which target words according to the generative model. Since Model 1, like many other word-alignment models, requires each target word to be generated by exactly one source word (including the null word), an alignment a can be represented by a vector a1, . . . , am, where each aj is the sentence position of the source word generating tj according to the alignment. It is easy to show that for Model 1, the most likely alignment ˆa of S and T is given by this equation: ˆa = argmaxa m  j=1 tr(tj|saj) (2) Since in applying Model 1, there are no dependencies between any of the ajs, we can find the most likely aligment simply by choosing, for each j, the value for aj that leads to the highest value for tr(tj|saj). The parameters of Model 1 for a given pair of languages are normally estimated using EM, taking as training data a corpus of paired sentences of the two languages, such that each pair consists of sentence in one language and a possible translation in the other language. The training is normally initialized by setting all translation probability distributions to the uniform distribution over the target language vocabulary. 3 Problems with Model 1 Model 1 clearly has many shortcomings as a model of translation. Some of these are structural limitations, and cannot be remedied without making the model significantly more complicated. Some of the major structural limitations include: • (Many-to-one) Each word in the target sentence can be generated by at most one word in the source sentence. Situations in which a phrase in the source sentence translates as a single word in the target sentence are not wellmodeled. • (Distortion) The position of any word in the target sentence is independent of the position of the corresponding word in the source sentence, or the positions of any other source language words or their translations. The tendency for a contiguous phrase in one language to be translated as a contiguous phrase in another language is not modeled at all. • (Fertility) Whether a particular source word is selected to generate the target word for a given position is independent of which or how many other target words the same source word is selected to generate. 
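Before discussing these limitations further, it may help to make equations 1 and 2 concrete. The following self-contained sketch (ours, not the paper's) scores a sentence pair under Model 1 and extracts its most likely alignment from a translation table tr, represented here as a plain Python dictionary; index 0 plays the role of the null word.

  def model1_prob(src, tgt, tr, epsilon=1.0):
      """Equation 1: p(T|S) = eps / (l+1)^m * prod_j sum_i tr(t_j | s_i)."""
      src = ["<NULL>"] + list(src)                 # s_0 is the null word
      l, m = len(src) - 1, len(tgt)
      prob = epsilon / (l + 1) ** m
      for t in tgt:
          prob *= sum(tr.get((t, s), 0.0) for s in src)
      return prob

  def viterbi_alignment(src, tgt, tr):
      """Equation 2: each a_j is chosen independently as the best source position."""
      src = ["<NULL>"] + list(src)
      return [max(range(len(src)), key=lambda i: tr.get((t, src[i]), 0.0))
              for t in tgt]

  # toy translation table; 0 in the output denotes alignment to the null word
  tr = {("la", "the"): 0.6, ("la", "<NULL>"): 0.1, ("maison", "house"): 0.8}
  print(viterbi_alignment(["the", "house"], ["la", "maison"]))   # -> [1, 2]

We now return to the structural limitations just listed.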
These limitations of Model 1 are all well known, they have been addressed in other word-alignment models, and we will not discuss them further here. Our concern in this paper is with two other problems with Model 1 that are not deeply structural, and can be addressed merely by changing how the parameters of Model 1 are estimated. The first of these nonstructural problems with Model 1, as standardly trained, is that rare words in the source language tend to act as “garbage collectors” (Brown et al., 1993b; Och and Ney, 2004), aligning to too many words in the target language. This problem is not unique to Model 1, but anecdotal examination of Model 1 alignments suggests that it may be worse for Model 1, perhaps because Model 1 lacks the fertility and distortion parameters that may tend to mitigate the problem in more complex models. The cause of the problem can be easily understood if we consider a situation in which the source sentence contains a rare word that only occurs once in our training data, plus a frequent word that has an infrequent translation in the target sentence. Suppose the frequent source word has the translation present in the target sentence only 10% of the time in our training data, and thus has an estimated translation probability of around 0.1 for this target word. Since the rare source word has no other occurrences in the data, EM training is free to assign whatever probability distribution is required to maximize the joint probability of this sentence pair. Even if the rare word also needs to be used to generate its actual translation in the sentence pair, a relatively high joint probability will be obtained by giving the rare word a probability of 0.5 of generating its true translation and 0.5 of spuriously generating the translation of the frequent source word. The probability of this incorrect alignment will be higher than that obtained by assigning a probability of 1.0 to the rare word generating its true translation, and generating the true translation of the frequent source word with a probability of 0.1. The usual fix for over-fitting problems of this type in statistical NLP is to smooth the probability estimates involved in some way. The second nonstructural problem with Model 1 is that it seems to align too few target words to the null source word. Anecdotal examination of Model 1 alignments of English source sentences with French target sentences reveals that null word alignments rarely occur in the highest probability alignment, despite the fact that French sentences often contain function words that do not correspond directly to anything in their English translation. For example, English phrases of the form ⟨noun1⟩⟨noun2⟩are often expressed in French by a phrase of the form ⟨noun2⟩de ⟨noun1⟩, which may also be expressed in English (but less often) by a phrase of the form ⟨noun2⟩of ⟨noun1⟩. The structure of Model 1 again suggests why we should not be surprised by this problem. As normally defined, Model 1 hypothesizes only one null word per sentence. A target sentence may contain many words that ideally should be aligned to null, plus some other instances of the same word that should be aligned to an actual source language word. For example, we may have an English/French sentence pair that contains two instances of of in the English sentence, and five instances of de in the French sentence. 
Even if the null word and of have the same initial probabilty of generating de, in iterating EM, this sentence is going to push the model towards estimating a higher probabilty that of generates de and a lower estimate that the null word generates de. This happens because there are are two instances of of in the source sentence and only one hypothetical null word, and Model 1 gives equal weight to each occurrence of each source word. In effect, of gets two votes, but the null word gets only one. We seem to need more instances of the null word for Model 1 to assign reasonable probabilities to target words aligning to the null word. 4 Smoothing Translation Counts We address the nonstructural problems of Model 1 discussed above by three methods. First, to address the problem of rare words aligning to too many words, at each interation of EM we smooth all the translation probability estimates by adding virtual counts according to a uniform probability distribution over all target words. This prevents the model from becoming too confident about the translation probabilities for rare source words on the basis of very little evidence. To estimate the smoothed probabilties we use the following formula: tr(t|s) = C(t, s) + n C(s) + n · |V | (3) where C(t, s) is the expected count of s generating t, C(s) is the corresponding marginal count for s, |V | is the hypothesized size of the target vocabulary V , and n is the added count for each target word in V . |V | and n are both free parameters in this equation. We could take |V | simply to be the total number of distinct words observed in the target language training, but we know that the target language will have many words that we have never observed. We arbitrarily chose |V | to be 100,000, which is somewhat more than the total number of distinct words in our target language training data. The value of n is empirically optimized on annotated development test data. This sort of “add-n” smoothing has a poor reputation in statistical NLP, because it has repeatedly been shown to perform badly compared to other methods of smoothing higher-order n-gram models for statistical language modeling (e.g., Chen and Goodman, 1996). In those studies, however, add-n smoothing was used to smooth bigram or trigram models. Add-n smoothing is a way of smoothing with a uniform distribution, so it is not surprising that it performs poorly in language modeling when it is compared to smoothing with higher order models; e.g, smoothing trigrams with bigrams or smoothing bigrams with unigrams. In situations where smoothing with a uniform distribution is appropriate, it is not clear that add-n is a bad way to do it. Furthermore, we would argue that the word translation probabilities of Model 1 are a case where there is no clearly better alternative to a uniform distribution as the smoothing distribution. It should certainly be better than smoothing with a unigram distribution, since we especially want to benefit from smoothing the translation probabilities for the rarest words, and smoothing with a unigram distribution would assume that rare words are more likely to translate to frequent words than to other rare words, which seems counterintuitive. 5 Adding Null Words to the Source Sentence We address the lack of sufficient alignments of target words to the null source word by adding extra null words to each source sentence. 
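Before developing that idea, it is worth noting that the smoothed re-estimation of equation 3 is itself only a few lines of code. The sketch below is ours, not the paper's: it keeps the expected counts from the E-step in a dictionary keyed by (target, source) pairs, and leaves unseen pairs at their implicit value n / (C(s) + n * |V|).

  from collections import defaultdict

  def smoothed_m_step(expected_counts, n=0.01, vocab_size=100000):
      """Equation 3: tr(t|s) = (C(t,s) + n) / (C(s) + n * |V|)."""
      marginal = defaultdict(float)
      for (t, s), c in expected_counts.items():
          marginal[s] += c
      return {(t, s): (c + n) / (marginal[s] + n * vocab_size)
              for (t, s), c in expected_counts.items()}

With the smoothing in place, we return to the question of how many null words to add.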
Mathematically, there is no reason we have to add an integral number of null words, so in fact we let the number of null words in a sentence be any positive number. One can make arguments in favor of adding the same number of null words to every sentence, or in favor of letting the number of null words be proportional to the length of the sentence. We have chosen to add a fixed number of null words to each source sentence regardless of length, and will leave for another time the question of whether this works better or worse than adding a number of null words proportional to the sentence length. Conceptually, adding extra null words to source sentences is a slight modification to the structure of Model 1, but in fact, we can implement it without any additional model parameters by the simple expedient of multiplying all the translation probabilities for the null word by the number of null words per sentence. This multiplication is performed during every iteration of EM, as the translation probabilities for the null word are re-estimated from the corresponding expected counts. This makes these probabilities look like they are not normalized, but Model 1 can be applied in such a way that the translation probabilities for the null word are only ever used when multiplied by the number of null words in the sentence, so we are simply using the null word translation parameters to keep track of this product pre-computed. In training a version of Model 1 with only one null word per sentence, the parameters have their normal interpretation, since we are multiplying the standard probability estimates by 1. 6 Initializing Model 1 with Heuristic Parameter Estimates Normally, the translation probabilities of Model 1 are initialized to a uniform distribution over the target language vocabulary to start iterating EM. The unspoken justification for this is that EM training of Model 1 will always converge to the same set of parameter values from any set of initial values, so the intial values should not matter. But this is only the case if we want to obtain the parameter values at convergence, and we have strong reasons to believe that these values do not produce the most accurate sentence alignments. Even though EM will head towards those values from any initial position in the parameter space, there may be some starting points we can systematically find that will take us closer to the optimal parameter values for alignment accuracy along the way. To test whether a better set of initial parameter estimates can improve Model 1 alignment accuracy, we use a heuristic model based on the loglikelihood-ratio (LLR) statistic recommended by Dunning (1993). We chose this statistic because it has previously been found to be effective for automatically constructing translation lexicons (e.g., Melamed, 2000; Moore, 2001). In our application, the statistic can be defined by the following formula:  t?∈{t,¬t}  s?∈{s,¬s} C(t?, s?) log p(t?|s?) p(t?) (4) In this formula t and s mean that the corresponding words occur in the respective target and source sentences of an aligned sentence pair, ¬t and ¬s mean that the corresponding words do not occur in the respective sentences, t? and s? are variables ranging over these values, and C(t?, s?) is the observed joint count for the values of t? and s?. All the probabilities in the formula refer to maximum likelihood estimates.1 These LLR scores can range in value from 0 to N ·log(2), where N is the number of sentence pairs in the training data. 
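A direct transcription of this statistic into code may be helpful. The sketch below (ours) computes the LLR score from the four sentence-level co-occurrence counts, using maximum likelihood estimates for all probabilities as in the formula:

  import math

  def llr(n_ts, n_t, n_s, N):
      """LLR of equation 4. n_ts: pairs where both words occur; n_t, n_s:
      pairs where the target / source word occurs; N: total sentence pairs."""
      # joint count C(t?,s?), marginal C(s?), marginal C(t?) for the four cells
      cells = [(n_ts,                 n_s,     n_t),        # (t, s)
               (n_t - n_ts,           N - n_s, n_t),        # (t, not s)
               (n_s - n_ts,           n_s,     N - n_t),    # (not t, s)
               (N - n_t - n_s + n_ts, N - n_s, N - n_t)]    # (not t, not s)
      score = 0.0
      for c_joint, c_s, c_t in cells:
          if c_joint > 0 and c_s > 0 and c_t > 0:
              score += c_joint * math.log((c_joint / c_s) / (c_t / N))
      return score

  # a pair that always co-occurs in a 1000-pair corpus scores high,
  # while a pair at its independence rate scores (near) zero
  print(round(llr(20, 20, 20, 1000), 1), round(llr(1, 100, 10, 1000), 2))  # -> 98.0 0.0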
The LLR score for a pair of words is high if the words have either a strong positive association or a strong negative association. Since we expect translation pairs to be positively associated, we discard any negatively associated word pairs by requiring that p(t, s) > p(t) · p(s). To use LLR scores to obtain initial estimates for the translation probabilities of Model 1, we have to somehow transform them into numbers that range from 0 to 1, and sum to no more than 1 for all the target words associated with each source word. We know that words with high LLR scores tend to be translations, so we want high LLR scores to correspond to high probabilities, and low LLR scores to correspond to low probabilities. The simplest approach would be to divide each LLR score by the sum of the scores for the source word of the pair, which would produce a normalized conditional probability distribution for each source word. Doing this, however, would discard one of the major advantages of using LLR scores as a measure of word association. All the LLR scores for rare words tend to be small; thus we do not put too much confidence in any of the hypothesized word associations for such words. This is exactly the property needed to prevent rare source words from becoming garbage collectors. To maintain this property, for each source word we compute the sum of the 1This is not the form in which the LLR statistic is usually presented, but it can easily be shown by basic algebra to be equivalent to −λ in Dunning’s paper. See Moore (2004) for details. LLR scores over all target words, but we then divide every LLR score by the single largest of these sums. Thus the source word with the highest LLR score sum receives a conditional probability distribution over target words summing to 1, but the corresponding distribution for every other source word sums to less than 1, reserving some probability mass for target words not seen with that word, with more probability mass being reserved the rarer the word. There is no guarantee, of course, that this is the optimal way of discounting the probabilities assigned to less frequent words. To allow a wider range of possibilities, we add one more parameter to the model by raising each LLR score to an empirically optimized exponent before summing the resulting scores and scaling them from 0 to 1 as described above. Choosing an exponent less than 1.0 decreases the degree to which low scores are discounted, and choosing an exponent greater than 1.0 increases degree of discounting. We still have to define an initialization of the translation probabilities for the null word. We cannot make use of LLR scores because the null word occurs in every source sentence, and any word occuring in every source sentence will have an LLR score of 0 with every target word, since p(t|s) = p(t) in that case. We could leave the distribution for the null word as the uniform distribution, but we know that a high proportion of the words that should align to the null word are frequently occuring function words. Hence we initialize the distribution for the null word to be the unigram distribution of target words, so that frequent function words will receive a higher probability of aligning to the null word than rare words, which tend to be content words that do have a translation. Finally, we also effectively add extra null words to every sentence in this heuristic model, by multiplying the null word probabilities by a constant, as described in Section 5. 
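The rescaling just described is easy to implement. In the sketch below (ours), llr_scores holds the positively associated (target, source) pairs and their LLR values; the null-word distribution would be handled separately, as explained above.

  from collections import defaultdict

  def heuristic_init(llr_scores, exponent=1.0):
      """Initial tr(t|s): raise LLR scores to `exponent`, then divide every
      score by the single largest per-source-word sum, so that only the
      best-attested source word receives a distribution summing to one."""
      powered = {(t, s): v ** exponent for (t, s), v in llr_scores.items()}
      sums = defaultdict(float)
      for (t, s), v in powered.items():
          sums[s] += v
      z = max(sums.values())
      return {(t, s): v / z for (t, s), v in powered.items()}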
7 Training and Evaluation We trained and evaluated our various modifications to Model 1 on data from the bilingual word alignment workshop held at HLT-NAACL 2003 (Mihalcea and Pedersen, 2003). We used a subset of the Canadian Hansards bilingual corpus supplied for the workshop, comprising 500,000 English-French sentences pairs, including 37 sentence pairs designated as “trial” data, and 447 sentence pairs designated as test data. The trial and test data had been manually aligned at the word level, noting particular pairs of words either as “sure” or “possible” alignments, as described by Och and Ney (2003). To limit the number of translation probabilities that we had to store, we first computed LLR association scores for all bilingual word pairs with a positive association (p(t, s) > p(t)·p(s)), and discarded from further consideration those with an LLR score of less that 0.9, which was chosen to be just low enough to retain all the “sure” word alignments in the trial data. This resulted in 13,285,942 possible word-to-word translation pairs (plus 66,406 possible null-word-to-word pairs). For most models, the word translation parameters are set automatically by EM. We trained each variation of each model for 20 iterations, which was enough in almost all cases to discern a clear minimum error on the 37 sentence pairs of trial data, and we chose as the preferred iteration the one with the lowest alignment error rate on the trial data. The other parameters of the various versions of Model 1 described in Sections 4–6 were optimized with respect to alignment error rate on the trial data using simple hill climbing. All the results we report for the 447 sentence pairs of test data use the parameter values set to their optimal values for the trial data. We report results for four principal versions of Model 1, trained using English as the source language and French as the target language: • The standard model is initialized using uniform distributions, and trained without smoothing using EM, for a number of iterations optimized on the trial data. • The smoothed model is like the standard model, but with optimized values of the nullword weight and add-n parameter. • The heuristic model simply uses the initial heuristic estimates of the translation parameter values, with an optimized LLR exponent and null-word weight, but no EM re-estimation. • The combined model initializes the translation parameter values with the heuristic estimates, using the LLR exponent and null-word weight from the optimal heuristic model, and applies EM using optimized values of the null-word weight and add-n parameters. The null-word weight used during EM is optimized separately from the null-word weight used in the initial heuristic parameter estimates. We also performed ablation experiments in which we ommitted each applicable modification in turn from each principal version of Model 1, to observe the effect on alignment error. 
All non-EM-trained parameters were re-optimized on the trial data for each version of Model 1 tested, with the exception Model Trial Test Test Test LLR Init EM Add EM (Ablation) AER AER Recall Precision Exp NW NW n Iter Standard 0.311 0.298 0.810 0.646 NA NA 1.0 0.0000 17 Smoothed 0.261 0.271 0.646 0.798 NA NA 10.0 0.0100 15 (EM NW) 0.285 0.273 0.833 0.671 NA NA 1.0 0.0100 20 (Add n) 0.302 0.300 0.638 0.751 NA NA 13.0 0.0000 14 Heuristic 0.234 0.255 0.655 0.844 1.3 2.4 NA NA NA (LLR Exp) 0.257 0.259 0.655 0.844 1.0 2.4 NA NA NA (Init NW) 0.300 0.308 0.740 0.657 1.5 1.0 NA NA NA Combined 0.203 0.215 0.724 0.839 1.3 2.4 7.0 0.005 1 (LLR Exp) 0.258 0.272 0.636 0.809 1.0 2.4 10.0 0.0035 3 (Init NW) 0.197 0.209 0.722 0.854 1.5 1.0 10.0 0.0005 1 (EM NW) 0.281 0.267 0.833 0.680 1.3 2.4 1.0 0.0080 8 (Add n) 0.208 0.221 0.724 0.826 1.3 2.4 8.0 0.0000 1 Table 1: Evaluation Results. that the value of the LLR exponent and initial nullword weight in the combined model were carried over from the heuristic model. 8 Results We report the performance of our different versions of Model 1 in terms of precision, recall, and alignment error rate (AER) as defined by Och and Ney (2003). These three performance statistics are defined as recall = |A ∩S| |S| (5) precision = |A ∩P| |A| (6) AER = 1 −|A ∩S| + |A ∩P| |A| + |S| (7) where S denotes the annotated set of sure alignments, P denotes the annotated set of possible alignments, and A denotes the set of alignments produced by the model under test.2 We take AER, which is derived from F-measure, as our primary evaluation metric. The results of our evaluation are presented in Table 1. The columns of the table present (in order) a description of the model being tested, the AER on the trial data, the AER on the test data, test data recall, and test data precision, followed by the optimal values on the trial data for the LLR exponent, the initial (heuristic model) null-word weight, the nullword weight used in EM re-estimation, the add-n parameter value used in EM re-estimation, and the number of iterations of EM. “NA” means a parameter is not applicable in a particular model. 2As is customary, alignments to the null word are not explicitly counted. Results for the four principal versions of Model 1 are presented in bold. For each principal version, results of the corresponding ablation experiments are presented in standard type, giving the name of each omitted modification in parentheses.3 Probably the most striking result is that the heuristic model substantially reduces the AER compared to the standard or smoothed model, even without EM re-estimation. The combined model produces an additional substantial reduction in alignment error, using a single iteration of EM. The ablation experiments show how important the different modifications are to the various models. It is interesting to note that the importance of a given modification varies from model to model. For example, the re-estimation null-word weight makes essentially no contribution to the smoothed model. It can be tuned to reduce the error on the trial data, but the improvement does not carry over to the test data. The smoothed model with only the nullword weight and no add-n smoothing has essentially the same error as the standard model; and the smoothed model with add-n smoothing alone has essentially the same error as the smoothed model with both the null-word weight and add-n smoothing. On the other hand, the re-estimation null-word weight is crucial to the combined model. 
With it, the combined model has substantially lower error than the heuristic model without re-estimation; without it, for any number of EM iterations, the combined model has higher error than the heuristic model. A similar analysis shows that add-n smoothing is much less important in the combined model than 3Modificiations are “omitted” by setting the corresponding parameter to a value that is equivalent to removing the modification from the model. the smoothed model. The probable explanation for this is that add-n smoothing is designed to address over-fitting from many iterations of EM. While the smoothed model does require many EM iterations to reach its minimum AER, the combined model, with or without add-n smoothing, is at its minimum AER with only one EM iteration. Finally, we note that, while the initial null-word weight is crucial to the heuristic model without reestimation, the combined model actually performs better without it. Presumably, the re-estimation null-word weight makes the inital null-word weight redundant. In fact, the combined model without the initial null word-weight has the lowest AER on both the trial and test data of any variation tested (note AERs in italics in Figure 1). The relative reduction in AER for this model is 29.9% compared to the standard model. We tested the significance of the differences in alignment error between each pair of our principal versions of Model 1 by looking at the AER for each sentence pair in the test set using a 2-tailed paired t test. The differences between all these models were significant at a level of 10−7 or better, except for the difference between the standard model and the smoothed model, which was “significant” at the 0.61 level—that is, not at all significant. The reason for this is probably the very different balance between precision and recall with the standard and smoothed models, which indicates that the models make quite different sorts of errors, making statistical significance hard to establish. This conjecture is supported by considering the smoothed model omitting the re-estimation null-word weight, which has substantially the same AER as the full smoothed model, but with a precision/recall balance much closer to the standard model. The 2-tailed paired t test comparing this model to the standard model showed significance at a level of better than 10−10. We also compared the combined model with and without the initial null-word weight, and found that the improvement without the weight was significant at the 0.008 level. 9 Conclusions We have demonstrated that it is possible to improve the performance of Model 1 in terms of alignment error by about 30%, simply by changing the way its parameters are estimated. Almost half this improvement is obtained with a simple heuristic model that does not require EM re-estimation. It is interesting to contrast our heuristic model with the heuristic models used by Och and Ney (2003) as baselines in their comparative study of alignment models. The major difference between our model and theirs is that they base theirs on the Dice coefficient, which is computed by the formula4 2 · C(t, s) C(t) + C(s) (8) while we use the log-likelihood-ratio statistic defined in Section 6. Och and Ney find that the standard version of Model 1 produces more accurate alignments after only one iteration of EM than either of the heuristic models they consider, while we find that our heuristic model outperforms the standard version of Model 1, even with an optimal number of iterations of EM. 
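For completeness, the evaluation metrics of equations 5-7 can be computed directly from the annotated sets. In this sketch (ours), alignments are represented as sets of (source position, target position) links, with null-word links already removed as is customary, and the sure set S is assumed to be a subset of the possible set P, as in Och and Ney (2003).

  def alignment_scores(A, S, P):
      """Recall, precision and AER of equations 5-7.
      A: predicted links, S: sure links, P: possible links (S subset of P)."""
      recall = len(A & S) / len(S)
      precision = len(A & P) / len(A)
      aer = 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))
      return recall, precision, aer

  # toy example: two sure links, one possible-only link, two predicted links
  S = {(1, 1), (2, 2)}
  P = S | {(3, 2)}
  A = {(1, 1), (3, 2)}
  print(alignment_scores(A, S, P))   # -> (0.5, 1.0, 0.25)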
While the Dice coefficient is simple and intuitive—the value is 0 for words never found together, and 1 for words always found together—it lacks the important property of the LLR statistic that scores for rare words are discounted; thus it does not address the over-fitting problem for rare words. The list of applications of IBM word-alignment Model 1 given in Section 1 should be sufficient to convince anyone of the relevance of improving the model. However, it is not clear that AER as defined by Och and Ney (2003) is always the appropriate way to evaluate the quality of the model, since the Viterbi word alignment that AER is based on is seldom used in applications of Model 1.5 Moreover, it is notable that while the versions of Model 1 having the lowest AER have dramatically higher precision than the standard version, they also have quite a bit lower recall. If AER does not reflect the optimal balance between precision and recall for a particular application, then optimizing AER may not produce the best task-based performance for that application. Thus the next step in this research must be to test whether the improvements in AER we have demonstrated for Model 1 lead to improvements on task-based performance measures. References Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993a. 4Och and Ney give a different formula in their paper, in which the addition in the denominator is replaced by a multiplication. According to Och (personal communication), however, this is merely a typographical error in the publication, and the results reported are for the standard definition of the Dice coefficient. 5A possible exception is suggested by the results of Koehn et al. (2003), which show that phrase translations extracted from Model 1 alignments can perform almost as well in a phrase-based statistical translation system as those extracted from more sophisticated alignment models, provided enough training data is used. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263–311. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, Meredith J. Goldsmith, Jan Hajic, Robert L. Mercer, and Surya Mohanty. 1993b. But dictionaries are data too. In Proceedings of the ARPA Workshop on Human Language Technology, pp. 202–205, Plainsboro, New Jersey, USA. Stanley F. Chen and Joshua Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pp. 310–318, Santa Cruz, California, USA. Yuan Ding, Daniel Gildea, and Martha Palmer. 2003. An algorithm for word-level alignment of parallel dependency trees. In Proceedings of the Ninth Machine Translation Summit, pp. 95–101, New Orleans, Louisiana, USA. Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61–74. Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2003), pp. 127–133, Edmonton, Alberta, Canada. I. Dan Melamed. 2000. Models of Translational Equivalence. Computational Linguistics, 26(2):221–249. Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. 
In Proceedings of the HLT-NAACL 2003 Workshop, Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, pp. 1–6, Edmonton, Alberta, Canada. Robert C. Moore. 2001. Towards a simple and accurate statistical approach to learning translation relationships among words. In Proceedings of the Workshop Data-driven Machine Translation at the 39th Annual Meeting of the Association for Computational Linguistics, pp. 79–86, Toulouse, France. Robert C. Moore. 2002. Fast and accurate sentence alignment of bilingual corpora. In S. Richardson (ed.), Machine Translation: From Research to Real Users (Proceedings, 5th Conference of the Association for Machine Translation in the Americas, Tiburon, California), pp. 135–244, Springer-Verlag, Heidelberg, Germany. Robert C. Moore. 2004. On log-likelihood-ratios and the significance of rare events. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, Barcelona, Spain. Dragos S. Munteanu, Alexander Fraser, and Daniel Marcu. 2004. Improved machine translation performance via parallel sentence extraction from comparable corpora. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2004), pp. 265–272, Boston, Massachusetts, USA. Francisco Nevado, Francisco Casacuberta, and Enrique Vidal. 2003. Parallel corpora segmentation using anchor words. In Proceedings of the 7th International EAMT workshop on MT and other language technology tools, Improving MT through other language technology tools, Resources and tools for building MT, pp. 33–40, Budapest, Hungary. Franz Joseph Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Franz Josef Och et al. 2004. A smorgasbord of features for statistical machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2004), pp. 161–168, Boston, Massachusetts, USA. Ashish Venugopal, Stephan Vogel, and Alex Waibel. 2003. Effective phrase translation extraction from alignment models. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pp. 319–326, Sapporo, Japan.
A Geometric View on Bilingual Lexicon Extraction from Comparable Corpora E. Gaussier†, J.-M. Renders†, I. Matveeva∗, C. Goutte†, H. D´ejean† †Xerox Research Centre Europe 6, Chemin de Maupertuis — 38320 Meylan, France [email protected] ∗Dept of Computer Science, University of Chicago 1100 E. 58th St. Chicago, IL 60637 USA [email protected] Abstract We present a geometric view on bilingual lexicon extraction from comparable corpora, which allows to re-interpret the methods proposed so far and identify unresolved problems. This motivates three new methods that aim at solving these problems. Empirical evaluation shows the strengths and weaknesses of these methods, as well as a significant gain in the accuracy of extracted lexicons. 1 Introduction Comparable corpora contain texts written in different languages that, roughly speaking, ”talk about the same thing”. In comparison to parallel corpora, ie corpora which are mutual translations, comparable corpora have not received much attention from the research community, and very few methods have been proposed to extract bilingual lexicons from such corpora. However, except for those found in translation services or in a few international organisations, which, by essence, produce parallel documentations, most existing multilingual corpora are not parallel, but comparable. This concern is reflected in major evaluation conferences on crosslanguage information retrieval (CLIR), e.g. CLEF1, which only use comparable corpora for their multilingual tracks. We adopt here a geometric view on bilingual lexicon extraction from comparable corpora which allows one to re-interpret the methods proposed thus far and formulate new ones inspired by latent semantic analysis (LSA), which was developed within the information retrieval (IR) community to treat synonymous and polysemous terms (Deerwester et al., 1990). We will explain in this paper the motivations behind the use of such methods for bilingual lexicon extraction from comparable corpora, and show how to apply them. Section 2 is devoted to the presentation of the standard approach, ie the approach adopted by most researchers so far, its geometric interpretation, and the unresolved synonymy 1http://clef.iei.pi.cnr.it:2002/ and polysemy problems. Sections 3 to 4 then describe three new methods aiming at addressing the issues raised by synonymy and polysemy: in section 3 we introduce an extension of the standard approach, and show in appendix A how this approach relates to the probabilistic method proposed in (Dejean et al., 2002); in section 4, we present a bilingual extension to LSA, namely canonical correlation analysis and its kernel version; lastly, in section 5, we formulate the problem in terms of probabilistic LSA and review different associated similarities. Section 6 is then devoted to a large-scale evaluation of the different methods proposed. Open issues are then discussed in section 7. 2 Standard approach Bilingual lexicon extraction from comparable corpora has been studied by a number of researchers, (Rapp, 1995; Peters and Picchi, 1995; Tanaka and Iwasaki, 1996; Shahzad et al., 1999; Fung, 2000, among others). Their work relies on the assumption that if two words are mutual translations, then their more frequent collocates (taken here in a very broad sense) are likely to be mutual translations as well. 
Based on this assumption, the standard approach builds context vectors for each source and target word, translates the target context vectors using a general bilingual dictionary, and compares the translation with the source context vector: 1. For each source word v (resp. target word w), build a context vector −→v (resp. −→ w ) consisting in the measure of association of each word e (resp. f) in the context of v (resp. w), a(v, e). 2. Translate the context vectors with a general bilingual dictionary D, accumulating the contributions from words that yield identical translations. 3. Compute the similarity between source word v and target word w using a similarity measures, such as the Dice or Jaccard coefficients, or the cosine measure. As the dot-product plays a central role in all these measures, we consider, without loss of generality, the similarity given by the dot-product between −→v and the translation of −→ w : ⟨−→v , −−−→ tr(w)⟩ = X e a(v, e) X f,(e,f)inD a(w, f) = X (e,f)∈D a(v, e) a(w, f) (1) Because of the translation step, only the pairs (e, f) that are present in the dictionary contribute to the dot-product. Note that this approach requires some general bilingual dictionary as initial seed. One way to circumvent this requirement consists in automatically building a seed lexicon based on spelling and cognates clues (Koehn and Knight, 2002). Another approach directly tackles the problem from scratch by searching for a translation mapping which optimally preserves the intralingual association measure between words (Diab and Finch, 2000): the underlying assumption is that pairs of words which are highly associated in one language should have translations that are highly associated in the other language. In this latter case, the association measure is defined as the Spearman rank order correlation between their context vectors restricted to “peripheral tokens” (highly frequent words). The search method is based on a gradient descent algorithm, by iteratively changing the mapping of a single word until (locally) minimizing the sum of squared differences between the association measure of all pairs of words in one language and the association measure of the pairs of translated words obtained by the current mapping. 2.1 Geometric presentation We denote by si, 1 ≤i ≤p and tj, 1 ≤j ≤q the source and target words in the bilingual dictionary D. D is a set of n translation pairs (si, tj), and may be represented as a p × q matrix M, such that Mij = 1 iff (si, tj) ∈D (and 0 otherwise).2 Assuming there are m distinct source words e1, · · · , em and r distinct target words f1, · · · , fr in the corpus, figure 1 illustrates the geometric view of the standard method. The association measure a(v, e) may be viewed as the coordinates of the m-dimensional context vector −→v in the vector space formed by the orthogonal basis (e1, · · · , em). The dot-product in (1) only involves source dictionary entries. The corresponding dimensions are selected by an orthogonal 2The extension to weighted dictionary entries Mij ∈[0, 1] is straightforward but not considered here for clarity. projection on the sub-space formed by (s1, · · · , sp), using a p × m projection matrix Ps. Note that (s1, · · · , sp), being a sub-family of (e1, · · · , em), is an orthogonal basis of the new sub-space. Similarly, −→ w is projected on the dictionary entries (t1, · · · , tq) using a q × r orthogonal projection matrix Pt. 
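Before rewriting the similarity in matrix form, it may help to see the three steps above as code. The sketch below (ours) stores context vectors as dictionaries from words to association scores a(.,.) and the bilingual dictionary D as a set of (source entry, target entry) pairs, and implements the dot-product of equation 1 directly:

  def standard_similarity(v_ctx, w_ctx, D):
      """Equation 1: sum over dictionary pairs (e, f) of a(v, e) * a(w, f)."""
      return sum(v_ctx.get(e, 0.0) * w_ctx.get(f, 0.0) for (e, f) in D)

  # toy example: only the shared dictionary pair contributes
  D = {("bank", "banque"), ("money", "argent")}
  v_ctx = {"bank": 2.0, "river": 1.5}        # context vector of a source word v
  w_ctx = {"banque": 1.0, "argent": 0.5}     # context vector of a target word w
  print(standard_similarity(v_ctx, w_ctx, D))   # -> 2.0 * 1.0 = 2.0

The matrix notation introduced above makes the same computation more compact.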
As M encodes the relationship between the source and target entries of the dictionary, equation 1 may be rewritten as: S(v, w) = ⟨−→v , −−−→ tr(w)⟩= (Ps−→v )⊤M (Pt−→ w ) (2) where ⊤denotes transpose. In addition, notice that M can be rewritten as S⊤T, with S an n × p and T an n × q matrix encoding the relations between words and pairs in the bilingual dictionary (e.g. Ski is 1 iff si is in the kth translation pair). Hence: S(v, w)=−→v ⊤P⊤ s S⊤TPt−→ w =⟨SPs−→v , TPt−→ w ⟩(3) which shows that the standard approach amounts to performing a dot-product in the vector space formed by the n pairs ((s1, tl), · · · , (sp, tk)), which are assumed to be orthogonal, and correspond to translation pairs. 2.2 Problems with the standard approach There are two main potential problems associated with the use of a bilingual dictionary. Coverage. This is a problem if too few corpus words are covered by the dictionary. However, if the context is large enough, some context words are bound to belong to the general language, so a general bilingual dictionary should be suitable. We thus expect the standard approach to cope well with the coverage problem, at least for frequent words. For rarer words, we can bootstrap the bilingual dictionary by iteratively augmenting it with the most probable translations found in the corpus. Polysemy/synonymy. Because all entries on either side of the bilingual dictionary are treated as orthogonal dimensions in the standard methods, problems may arise when several entries have the same meaning (synonymy), or when an entry has several meanings (polysemy), especially when only one meaning is represented in the corpus. Ideally, the similarities wrt synonyms should not be independent, but the standard method fails to account for that. The axes corresponding to synonyms si and sj are orthogonal, so that projections of a context vector on si and sj will in general be uncorrelated. Therefore, a context vector that is similar to si may not necessarily be similar to sj. A similar situation arises for polysemous entries. Suppose the word bank appears as both financial institution (French: banque) and ground near a river Ps e 2 e m v e 1 s 1 s p v’ (s ,t ) t t f f f (s ,t ) 1 1 (s ,t ) 2 1 r w w’ 1 p Pt S T p k 1 i v" w" Figure 1: Geometric view of the standard approach (French: berge), but only the pair (banque, bank) is in the bilingual dictionary. The standard method will deem similar river, which co-occurs with bank, and argent (money), which co-occurs with banque. In both situations, however, the context vectors of the dictionary entries provide some additional information: for synonyms si and sj, it is likely that −→ si and −→ sj are similar; for polysemy, if the context vectors −−−−→ banque and −−→ bank have few translations pairs in common, it is likely that banque and bank are used with somewhat different meanings. The following methods try to leverage this additional information. 3 Extension of the standard approach The fact that synonyms may be captured through similarity of context vectors3 leads us to question the projection that is made in the standard method, and to replace it with a mapping into the sub-space formed by the context vectors of the dictionary entries, that is, instead of projecting −→v on the subspace formed by (s1, · · · , sp), we now map it onto the sub-space generated by (−→ s1, · · · , −→ sp). With this mapping, we try to find a vector space in which synonymous dictionary entries are close to each other, while polysemous ones still select different neighbors. 
This time, if −→v is close to −→ si and −→ sj , si and sj being synonyms, the translations of both si and sj will be used to find those words w close to v. Figure 2 illustrates this process. By denoting Qs, respectively Qt, such a mapping in the source (resp. target) side, and using the same translation mapping (S, T) as above, the similarity between source and target words becomes: S(v, w)=⟨SQs−→v , TQt−→ w ⟩=−→v ⊤Q⊤ s S⊤TQt−→ w (4) A natural choice for Qs (and similarly for Qt) is the following m × p matrix: Qs = R⊤ s =    a(s1, e1) · · · a(sp, e1) ... ... ... a(s1, em) · · · a(sp, em)    3This assumption has been experimentally validated in several studies, e.g. (Grefenstette, 1994; Lewis et al., 1967). but other choices, such as a pseudo-inverse of Rs, are possible. Note however that computing the pseudo-inverse of Rs is a complex operation, while the above projection is straightforward (the columns of Q correspond to the context vectors of the dictionary words). In appendix A we show how this method generalizes over the probabilistic approach presented in (Dejean et al., 2002). The above method bears similarities with the one described in (Besanc¸on et al., 1999), where a matrix similar to Qs is used to build a new term-document matrix. However, the motivations behind their work and ours differ, as do the derivations and the general framework, which justifies e.g. the choice of the pseudo-inverse of Rs in our case. 4 Canonical correlation analysis The data we have at our disposal can naturally be represented as an n × (m + r) matrix in which the rows correspond to translation pairs, and the columns to source and target vocabularies: C = e1 · · · em f1 · · · fr · · · · · · · · · · · · · · · · · · (s(1), t(1)) ... ... ... ... ... ... ... · · · · · · · · · · · · · · · · · · (s(n), t(n)) where (s(k), t(k)) is just a renumbering of the translation pairs (si, tj). Matrix C shows that each translation pair supports two views, provided by the context vectors in the source and target languages. Each view is connected to the other by the translation pair it represents. The statistical technique of canonical correlation analysis (CCA) can be used to identify directions in the source view (first m columns of C) and target view (last r columns of C) that are maximally correlated, ie “behave in the same way” wrt the translation pairs. We are thus looking for directions in the source and target vector spaces (defined by the orthogonal bases (e1, · · · , em) and (f1, · · · , fr)) such that the projections of the translation pairs on these directions are maximally correlated. Intuitively, those directions define latent semantic axes s e e e v f f f (s ,t ) 1 2 1 r w 1 t S T em e1 e2 m 1 2 s s s s (s ,t ) 1 (s ,t ) p 1 k i f fr 2 f t t t t 1 2 w" v" 1 2 p k q i v w Q Q Figure 2: Geometric view of the extended approach that capture the implicit relations between translation pairs, and induce a natural mapping across languages. Denoting by ξs and ξt the directions in the source and target spaces, respectively, this may be formulated as: ρ = max ξs,ξt P i⟨ξs, −→s (i)⟩⟨ξt, −→t (i)⟩ qP i⟨ξs, −→s (i)⟩P j⟨ξt, −→t (j)⟩ As in principal component analysis, once the first two directions (ξ1 s, ξ1 t ) have been identified, the process can be repeated in the sub-space orthogonal to the one formed by the already identified directions. However, a general solution based on a set of eigenvalues can be proposed. Following e.g. 
(Bach and Jordan, 2001), the above problem can be reformulated as the following generalized eigenvalue problem: B ξ = ρD ξ (5) where, denoting again Rs and Rt the first m and last r (respectively) columns of C, we define: B =  0 RtR⊤ t RsR⊤ s RsR⊤ s RtR⊤ t 0  , D =  (RsR⊤ s )2 0 0 (RtR⊤ t )2  , ξ =  ξs ξt  The standard approach to solve eq. 5 is to perform an incomplete Cholesky decomposition of a regularized form of D (Bach and Jordan, 2001). This yields pairs of source and target directions (ξ1 s, ξ1 t ), · · · , (ξl s, ξl t) that define a new sub-space in which to project words from each language. This sub-space plays the same role as the sub-space defined by translation pairs in the standard method, although with CCA, it is derived from the corpus via the context vectors of the translation pairs. Once projected, words from different languages can be compared through their dot-product or cosine. Denoting Ξs = h ξ1 s, . . . ξl s i⊤, and Ξt = h ξ1 t , . . . ξl t i⊤, the similarity becomes (figure 3): S(v, w) = ⟨Ξs−→v , Ξt−→ w ⟩= −→v ⊤Ξ⊤ s Ξt−→ w (6) The number l of vectors retained in each language directly defines the dimensions of the final subspace used for comparing words across languages. CCA and its kernelised version were used in (Vinokourov et al., 2002) as a way to build a crosslingual information retrieval system from parallel corpora. We show here that it can be used to infer language-independent semantic representations from comparable corpora, which induce a similarity between words in the source and target languages. 5 Multilingual probabilistic latent semantic analysis The matrix C described above encodes in each row k the context vectors of the source (first m columns) and target (last r columns) of each translation pair. Ideally, we would like to cluster this matrix such that translation pairs with synonymous words appear in the same cluster, while translation pairs with polysemous words appear in different clusters (soft clustering). Furthermore, because of the symmetry between the roles played by translation pairs and vocabulary words (synonymous and polysemous vocabulary words should also behave as described above), we want the clustering to behave symmetrically with respect to translation pairs and vocabulary words. One well-motivated method that fulfills all the above criteria is Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999). Assuming that C encodes the co-occurrences between vocabulary words w and translation pairs d, PLSA models the probability of co-occurrence w and d via latent classes α: P(w, d) = X α P(α) P(w|α) P(d|α) (7) where, for a given class, words and translation pairs are assumed to be independently generated from class-conditional probabilities P(w|α) and P(d|α). Note here that the latter distribution is languageindependent, and that the same latent classes are used for the two languages. The parameters of the model are obtained by maximizing the likelihood of the observed data (matrix C) through ExpectationMaximisation algorithm (Dempster et al., 1977). In e e e v f f f 2 1 r w 1 e e1 e2 m 1 2 f fr 2 f v" v w (CCA) w" (CCA) m (ξ1 s, ξ1 t ) ξ1 s ξi s ξl s ξ2 s (ξl s, ξl t) (ξ2 s, ξ2 t ) ξ1 t ξl t ξs ξt ξ2 t ξi t Figure 3: Geometric view of the Canonical Correlation Analysis approach addition, in order to reduce the sensitivity to initial conditions, we use a deterministic annealing scheme (Ueda and Nakano, 1995). The update formulas for the EM algorithm are given in appendix B. 
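As a rough illustration of how such directions can be computed, the sketch below (Python with NumPy/SciPy; the regularisation constant and the number of directions are our choices) solves a regularised CCA problem on the two views of C. Note that it uses the usual covariance-based formulation rather than the Gram-matrix form of equation (5), and a plain generalized eigendecomposition rather than the incomplete Cholesky decomposition mentioned in the text; it is meant only to convey the idea behind equations (5) and (6), not to reproduce the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

# Hedged sketch of CCA on the two views of C.  Rs: (n, m) source view,
# Rt: (n, r) target view; rows are translation pairs.
def cca_directions(Rs, Rt, n_dirs=100, reg=0.1):
    m, r = Rs.shape[1], Rt.shape[1]
    Css = Rs.T @ Rs + reg * np.eye(m)       # regularised within-view covariances
    Ctt = Rt.T @ Rt + reg * np.eye(r)
    Cst = Rs.T @ Rt                         # cross-view covariance
    B = np.zeros((m + r, m + r))
    B[:m, m:], B[m:, :m] = Cst, Cst.T
    D = np.zeros((m + r, m + r))
    D[:m, :m], D[m:, m:] = Css, Ctt
    vals, vecs = eigh(B, D)                 # generalized symmetric eigenproblem
    order = np.argsort(-vals)[:n_dirs]      # directions with largest correlation
    return vecs[:m, order].T, vecs[m:, order].T   # Xi_s: (l, m), Xi_t: (l, r)

def cca_similarity(v, w, Xi_s, Xi_t):
    # Equation (6): compare words through their projections on the l directions.
    return float((Xi_s @ v) @ (Xi_t @ w))
```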
This model can identify relevant bilingual latent classes, but does not directly define a similarity between words across languages. That may be done by using Fisher kernels as described below. Associated similarities: Fisher kernels Fisher kernels (Jaakkola and Haussler, 1999) derive a similarity measure from a probabilistic model. They are useful whenever a direct similarity between observed feature is hard to define or insufficient. Denoting ℓ(w) = lnP(w|θ) the loglikelihood for example w, the Fisher kernel is: K(w1, w2) = ∇ℓ(w1)⊤IF−1∇ℓ(w2) (8) The Fisher information matrix IF = E  ∇ℓ(x)∇ℓ(x)⊤ keeps the kernel independent of reparameterisation. With a suitable parameterisation, we assume IF ≈1. For PLSA (Hofmann, 2000), the Fisher kernel between two words w1 and w2 becomes: K(w1, w2) = X α P(α|w1)P(α|w2) P(α) (9) + X d bP(d|w1) bP(d|w2) X α P(α|d,w1)P(α|d,w2) P(d|α) where d ranges over the translation pairs. The Fisher kernel performs a dot-product in a vector space defined by the parameters of the model. With only one class, the expression of the Fisher kernel (9) reduces to: K(w1, w2) = 1 + X d bP(d|w1) bP(d|w2) P(d) Apart from the additional intercept (’1’), this is exactly the similarity provided by the standard method, with associations given by scaled empirical frequencies a(w, d) = bP(d|w)/ p P(d). Accordingly, we expect that the standard method and the Fisher kernel with one class should have similar behaviors. In addition to the above kernel, we consider two additional versions, obtained:through normalisation (NFK) and exponentiation (EFK): NFK(w1, w2) = K(w1, w2) p K(w1)K(w2) (10) EFK(w1, w2) = e−1 2 (K(w1)+K(w2)−2K(w1,w2)) where K(w) stands for K(w, w). 6 Experiments and results We conducted experiments on an English-French corpus derived from the data used in the multilingual track of CLEF2003, corresponding to the newswire of months May 1994 and December 1994 of the Los Angeles Times (1994, English) and Le Monde (1994, French). As our bilingual dictionary, we used the ELRA multilingual dictionary,4 which contains ca. 13,500 entries with at least one match in our corpus. In addition, the following linguistic preprocessing steps were performed on both the corpus and the dictionary: tokenisation, lemmatisation and POS-tagging. Only lexical words (nouns, verbs, adverbs, adjectives) were indexed and only single word entries in the dicitonary were retained. Infrequent words (occurring less than 5 times) were discarded when building the indexing terms and the dictionary entries. After these steps our corpus contains 34,966 distinct English words, and 21,140 distinct French words, leading to ca. 25,000 English and 13,000 French words not present in the dictionary. To evaluate the performance of our extraction methods, we randomly split the dictionaries into a training set with 12,255 entries, and a test set with 1,245 entries. The split is designed in such a way that all pairs corresponding to the same source word are in the same set (training or test). All methods use the training set as the sole available resource and predict the most likely translations of the terms in the source language (English) belonging to the 4Available through www.elra.info test set. The context vectors were defined by computing the mutual information association measure between terms occurring in the same context window of size 5 (ie. by considering a neighborhood of +/- 2 words around the current word), and summing it over all contexts of the corpora. 
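As a concrete reading of the PLSA Fisher kernel of equation (9) above and its normalised version (10), the following NumPy sketch computes both under the IF ≈ 1 assumption. Array names and shapes are ours, the loop over translation pairs is kept naive for clarity, and the code is only an illustration of the formulas, not the authors' implementation.

```python
import numpy as np

# P_a: (k,) class priors P(alpha); P_w_a: (V, k) P(w|alpha);
# P_d_a: (D, k) P(d|alpha); Phat: (V, D) empirical P^(d|w).
def posterior_a_given_w(w, P_a, P_w_a):
    p = P_a * P_w_a[w]                        # P(alpha) P(w|alpha)
    return p / p.sum()

def posterior_a_given_dw(w, d, P_a, P_w_a, P_d_a):
    p = P_a * P_w_a[w] * P_d_a[d]
    return p / p.sum()

def fisher_kernel(w1, w2, P_a, P_w_a, P_d_a, Phat):
    # First term of eq. (9).
    k = np.sum(posterior_a_given_w(w1, P_a, P_w_a) *
               posterior_a_given_w(w2, P_a, P_w_a) / P_a)
    # Second term: only translation pairs seen with both words contribute.
    for d in range(P_d_a.shape[0]):
        if Phat[w1, d] == 0.0 or Phat[w2, d] == 0.0:
            continue
        k += Phat[w1, d] * Phat[w2, d] * np.sum(
            posterior_a_given_dw(w1, d, P_a, P_w_a, P_d_a) *
            posterior_a_given_dw(w2, d, P_a, P_w_a, P_d_a) / P_d_a[d])
    return k

def nfk(w1, w2, *params):
    # Normalised Fisher kernel of eq. (10).
    return fisher_kernel(w1, w2, *params) / np.sqrt(
        fisher_kernel(w1, w1, *params) * fisher_kernel(w2, w2, *params))
```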
Different association measures and context sizes were assessed and the above settings turned out to give the best performance even if the optimum is relatively flat. For memory space and computational efficiency reasons, context vectors were pruned so that, for each term, the remaining components represented at least 90 percent of the total mutual information. After pruning, the context vectors were normalised so that their Euclidean norm is equal to 1. The PLSA-based methods used the raw co-occurrence counts as association measure, to be consistent with the underlying generative model. In addition, for the extended method, we retained only the N (N = 200 is the value which yielded the best results in our experiments) dictionary entries closest to source and target words when doing the projection with Q. As discussed below, this allows us to get rid of spurious relationships. The upper part of table 1 summarizes the results we obtained, measured in terms of F-1 score for different lengths of the candidate list, from 20 to 500. For each length, precision is based on the number of lists that contain an actual translation of the source word, whereas recall is based on the number of translations provided in the reference set and found in the list. Note that our results differ from the ones previously published, which can be explained by the fact that first our corpus is relatively small compared to others, second that our evaluation relies on a large number of candidates, which can occur as few as 5 times in the corpus, whereas previous evaluations were based on few, high frequent terms, and third that we do not use the same bilingual dictionary, the coverage of which being an important factor in the quality of the results obtained. Long candidate lists are justified by CLIR considerations, where longer lists might be preferred over shorter ones for query expansion purposes. For PLSA, the normalised Fisher kernels provided the best results, and increasing the number of latent classes did not lead in our case to improved results. We thus display here the results obtained with the normalised version of the Fisher kernel, using only one component. For CCA, we empirically optimised the number of dimensions to be used, and display the results obtained with the optimal value (l = 300). As one can note, the extended approach yields the best results in terms of F1-score. However, its performance for the first 20 candidates are below the standard approach and comparable to the PLSAbased method. Indeed, the standard approach leads to higher precision at the top of the list, but lower recall overall. This suggests that we could gain in performance by re-ranking the candidates of the extended approach with the standard and PLSA methods. The lower part of table 1 shows that this is indeed the case. The average precision goes up from 0.4 to 0.44 through this combination, and the F1-score is significantly improved for all the length ranges we considered (bold line in table 1). 7 Discussion Extended method As one could expect, the extended approach improves the recall of our bilingual lexicon extraction system. Contrary to the standard approach, in the extended approach, all the dictionary words, present or not in the context vector of a given word, can be used to translate it. This leads to a noise problem since spurious relations are bound to be detected. 
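The context-vector construction and pruning just described can be sketched as follows in plain Python. The corpus format, the mutual-information estimate and the pruning details are simplified and should be read as an illustration of the recipe rather than the exact settings used in the experiments.

```python
import math
from collections import Counter

# Hedged sketch: mutual-information context vectors over a +/-2 word window,
# pruned to 90% of the total MI mass and then L2-normalised.
def build_context_vectors(sentences, window=2, prune_mass=0.9):
    term, pair, total = Counter(), Counter(), 0
    for sent in sentences:
        for i, w in enumerate(sent):
            term[w] += 1
            total += 1
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    pair[(w, sent[j])] += 1
    vectors = {}
    for (w, c), n_wc in pair.items():
        pmi = math.log(n_wc * total / (term[w] * term[c]))
        if pmi > 0:
            vectors.setdefault(w, {})[c] = pmi
    for w, vec in vectors.items():
        items = sorted(vec.items(), key=lambda kv: -kv[1])
        target, cum, kept = prune_mass * sum(vec.values()), 0.0, {}
        for c, s in items:                 # keep components covering 90% of MI
            kept[c] = s
            cum += s
            if cum >= target:
                break
        norm = math.sqrt(sum(s * s for s in kept.values()))
        vectors[w] = {c: s / norm for c, s in kept.items()}
    return vectors
```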
The restriction we impose on the translation pairs to be used (N nearest neighbors) directly aims at selecting only the translation pairs which are in true relation with the word to be translated.

Multilingual PLSA

Even though theoretically well-founded, PLSA does not lead to improved performance. When used alone, it performs slightly below the standard method, for different numbers of components, and performs similarly to the standard method when used in combination with the extended method. We believe the use of mere co-occurrence counts gives a disadvantage to PLSA over other methods, which can rely on more sophisticated measures. Furthermore, the complexity of the final vector space (several millions of dimensions) in which the comparison is done entails a longer processing time, which renders this method less attractive than the standard or extended ones.

Canonical correlation analysis

The results we obtain with CCA and its kernel version are disappointing. As already noted, CCA does not directly solve the problems we mentioned, and our results show that CCA does not provide a good alternative to the standard method. Here again, we may suffer from a noise problem, since each canonical direction is defined by a linear combination that can involve many different vocabulary words.

Overall, starting with an average precision of 0.35 as provided by the standard approach, we were able to increase it to 0.44 with the methods we consider. Furthermore, we have shown here that such an improvement could be achieved with relatively simple methods.

Method               20    60   100   160   200   260   300   400   500   Avg. Prec.
standard           0.14  0.20  0.24  0.29  0.30  0.33  0.35  0.38  0.40   0.35
Ext (N=500)        0.11  0.21  0.27  0.32  0.34  0.38  0.41  0.45  0.50   0.40
CCA (l=300)        0.04  0.10  0.14  0.20  0.22  0.26  0.29  0.35  0.41   0.25
NFK (k=1)          0.10  0.15  0.20  0.23  0.26  0.27  0.28  0.32  0.34   0.30
Ext + standard     0.16  0.26  0.32  0.37  0.40  0.44  0.45  0.47  0.50   0.44
Ext + NFK (k=1)    0.13  0.23  0.28  0.33  0.38  0.42  0.44  0.48  0.50   0.42
Ext + NFK (k=4)    0.13  0.22  0.26  0.33  0.37  0.40  0.42  0.47  0.50   0.41
Ext + NFK (k=16)   0.12  0.20  0.25  0.32  0.36  0.40  0.42  0.47  0.50   0.40

Table 1: Results of the different methods; F-1 score at different numbers of candidate translations. Ext refers to the extended approach, whereas NFK stands for normalised Fisher kernel.

Nevertheless, there are still a number of issues that need to be addressed. The most important one concerns the combination of the different methods, which could be optimised on a validation set. Such a combination could involve Fisher kernels with different latent classes in a first step, and a final combination of the different methods. However, the results we obtained so far suggest that the rank of the candidates is an important feature. It is thus not guaranteed that we can gain over the combination we used here.

8 Conclusion

We have shown in this paper how the problem of bilingual lexicon extraction from comparable corpora could be interpreted in geometric terms, and how this view led to the formulation of new solutions. We have evaluated the methods we propose on a comparable corpus extracted from the CLEF collection, and shown the strengths and weaknesses of each method. Our final results show that the combination of relatively simple methods helps improve the average precision of bilingual lexicon extraction methods from comparable corpora by 10 points. We hope this work will help pave the way towards a new generation of cross-lingual information retrieval systems.

Acknowledgements

We thank J.-C. Chappelier and M.
Rajman who pointed to us the similarity between our extended method and the model DSIR (distributional semantics information retrieval), and provided us with useful comments on a first draft of this paper. We also want to thank three anonymous reviewers for useful comments on a first version of this paper. References F. R. Bach and M. I. Jordan. 2001. Kernel independent component analysis. Journal of Machine Learning Research. R. Besanc¸on, M. Rajman, and J.-C. Chappelier. 1999. Textual similarities based on a distributional approach. In Proceedings of the Tenth International Workshop on Database and Expert Systems Applications (DEX’99), Florence, Italy. S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407. H. Dejean, E. Gaussier, and F. Sadat. 2002. An approach based on multilingual thesauri and model combination for bilingual lexicon extraction. In International Conference on Computational Linguistics, COLING’02. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. Mona Diab and Steve Finch. 2000. A statistical word-level translation model for comparable corpora. In Proceeding of the Conference on Content-Based Multimedia Information Access (RIAO). Pascale Fung. 2000. A statistical view on bilingual lexicon extraction - from parallel corpora to nonparallel corpora. In J. V´eronis, editor, Parallel Text Processing. Kluwer Academic Publishers. G. Grefenstette. 1994. Explorations in Automatic Thesaurus Construction. Kluwer Academic Publishers. Thomas Hofmann. 1999. Probabilistic latent semantic analysis. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 289–296. Morgan Kaufmann. Thomas Hofmann. 2000. Learning the similarity of documents: An information-geometric approach to document retrieval and categorization. In Advances in Neural Information Processing Systems 12, page 914. MIT Press. Tommi S. Jaakkola and David Haussler. 1999. Exploiting generative models in discriminative classifiers. In Advances in Neural Information Processing Systems 11, pages 487–493. Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In ACL 2002 Workshop on Unsupervised Lexical Acquisition. P.A.W. Lewis, P.B. Baxendale, and J.L. Bennet. 1967. Statistical discrimination of the synonym/antonym relationship between words. Journal of the ACM. C. Peters and E. Picchi. 1995. Capturing the comparable: A system for querying comparable text corpora. In JADT’95 - 3rd International Conference on Statistical Analysis of Textual Data, pages 255–262. R. Rapp. 1995. Identifying word translations in nonparallel texts. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. I. Shahzad, K. Ohtake, S. Masuyama, and K. Yamamoto. 1999. Identifying translations of compound nouns using non-aligned corpora. In Proceedings of the Workshop MAL’99, pages 108– 113. K. Tanaka and Hideya Iwasaki. 1996. Extraction of lexical translations from non-aligned corpora. In International Conference on Computational Linguistics, COLING’96. Naonori Ueda and Ryohei Nakano. 1995. Deterministic annealing variant of the EM algorithm. In Advances in Neural Information Processing Systems 7, pages 545–552. A. Vinokourov, J. Shawe-Taylor, and N. Cristianini. 2002. 
Finding language-independent semantic representation of text using kernel canonical correlation analysis. In Advances in Neural Information Processing Systems 12.

Appendix A: probabilistic interpretation of the extension of the standard approach

As in section 3, SQs\vec{v} is an n-dimensional vector, defined over ((s1, tl), · · · , (sp, tk)). The coordinate of SQs\vec{v} on the axis corresponding to the translation pair (si, tj) is \langle \vec{s_i}, \vec{v} \rangle (the one for TQt\vec{w} on the same axis being \langle \vec{t_j}, \vec{w} \rangle). Thus, equation 4 can be rewritten as:

S(v, w) = \sum_{(s_i, t_j)} \langle \vec{s_i}, \vec{v} \rangle \, \langle \vec{t_j}, \vec{w} \rangle

which we can normalise in order to get a probability distribution, leading to:

S(v, w) = \sum_{(s_i, t_j)} P(v) P(s_i|v) P(w|t_j) P(t_j)

By imposing P(tj) to be uniform, and by denoting C a translation pair, one arrives at:

S(v, w) \propto \sum_{C} P(v) P(C|v) P(w|C)

with the interpretation that only the source, resp. target, word in C is relevant for P(C|v), resp. P(w|C). Now, if we are looking for those w's closest to a given v, we rely on:

S(w|v) \propto \sum_{C} P(C|v) P(w|C)

which is the probabilistic model adopted in (Dejean et al., 2002). This latter model is thus a special case of the extension we propose.

Appendix B: update formulas for PLSA

The deterministic annealing EM algorithm for PLSA (Hofmann, 1999) leads to the following equations for iteration t and temperature β:

P(\alpha|w, d) = \frac{P(\alpha)^\beta P(w|\alpha)^\beta P(d|\alpha)^\beta}{\sum_{\alpha'} P(\alpha')^\beta P(w|\alpha')^\beta P(d|\alpha')^\beta}

P^{(t+1)}(\alpha) = \frac{1}{\sum_{(w,d)} n(w, d)} \sum_{(w,d)} n(w, d) P(\alpha|w, d)

P^{(t+1)}(w|\alpha) = \frac{\sum_{d} n(w, d) P(\alpha|w, d)}{\sum_{(w,d)} n(w, d) P(\alpha|w, d)}

P^{(t+1)}(d|\alpha) = \frac{\sum_{w} n(w, d) P(\alpha|w, d)}{\sum_{(w,d)} n(w, d) P(\alpha|w, d)}

where n(w, d) is the number of co-occurrences between w and d. Parameters are obtained by iterating these update equations for each β, 0 < β ≤ 1.
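For completeness, one tempered EM iteration corresponding to the update formulas above can be written as a dense NumPy sketch. Array shapes, the smoothing constant and the annealing schedule comment are ours; a real implementation would likely use sparse counts.

```python
import numpy as np

# One tempered EM iteration for PLSA (Appendix B).
# n_wd: (V, D) counts n(w, d); P_a: (k,); P_w_a: (V, k); P_d_a: (D, k).
def plsa_em_step(n_wd, P_a, P_w_a, P_d_a, beta=1.0):
    # E-step: P(alpha|w,d) proportional to (P(alpha) P(w|alpha) P(d|alpha))^beta
    post = (P_a[None, None, :] * P_w_a[:, None, :] * P_d_a[None, :, :]) ** beta
    post /= post.sum(axis=2, keepdims=True) + 1e-12          # (V, D, k)
    # M-step
    weighted = n_wd[:, :, None] * post                        # n(w,d) P(alpha|w,d)
    class_mass = weighted.sum(axis=(0, 1)) + 1e-12
    P_a_new = weighted.sum(axis=(0, 1)) / n_wd.sum()
    P_w_a_new = weighted.sum(axis=1) / class_mass             # columns sum to 1
    P_d_a_new = weighted.sum(axis=0) / class_mass
    return P_a_new, P_w_a_new, P_d_a_new

# Annealing: run the step to convergence for increasing beta in (0, 1].
```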
2004
67
Creating Multilingual Translation Lexicons with Regional Variations Using Web Corpora Pu-Jen Cheng*, Yi-Cheng Pan*, Wen-Hsiang Lu+, and Lee-Feng Chien* * Institute of Information Science, Academia Sinica, Taiwan + Dept. of Computer Science and Information Engineering, National Cheng Kung Univ., Taiwan Dept. of Information Management, National Taiwan University, Taiwan {pjcheng, thomas02, whlu, lfchien}@iis.sinica.edu.tw Abstract The purpose of this paper is to automatically create multilingual translation lexicons with regional variations. We propose a transitive translation approach to determine translation variations across languages that have insufficient corpora for translation via the mining of bilingual search-result pages and clues of geographic information obtained from Web search engines. The experimental results have shown the feasibility of the proposed approach in efficiently generating translation equivalents of various terms not covered by general translation dictionaries. It also revealed that the created translation lexicons can reflect different cultural aspects across regions such as Taiwan, Hong Kong and mainland China. 1 Introduction Compilation of translation lexicons is a crucial process for machine translation (MT) (Brown et al., 1990) and cross-language information retrieval (CLIR) systems (Nie et al., 1999). A lot of effort has been spent on constructing translation lexicons from domain-specific corpora in an automatic way (Melamed, 2000; Smadja et al., 1996; Kupiec, 1993). However, such methods encounter two fundamental problems: translation of regional variations and the lack of up-to-date and high-lexical-coverage corpus source, which are worthy of further investigation. The first problem is resulted from the fact that the translations of a term may have variations in different dialectal regions. Translation lexicons constructed with conventional methods may not adapt to regional usages. For example, a Chinese-English lexicon constructed using a Hong Kong corpus cannot be directly adapted to the use in mainland China and Taiwan. An obvious example is that the word “taxi” is normally translated into “的士” (Chinese transliteration of taxi) in Hong Kong, which is completely different from the translated Chinese words of “出租车” (rental cars) in mainland China and “計 程車” (cars with meters) in Taiwan. Besides, transliterations of a term are often pronounced differently across regions. For example, the company name “Sony” is transliterated into “新力” (xinli) in Taiwan and “索尼” (suoni) in mainland China. Such terms, in today’s increasingly internationalized world, are appearing more and more often. It is believed that their translations should reflect the cultural aspects across different dialectal regions. Translations without consideration of the regional usages will lead to many serious misunderstandings, especially if the context to the original terms is not available. Halpern (2000) discussed the importance of translating simplified and traditional Chinese lexemes that are semantically, not orthographically, equivalent in various regions. However, previous work on constructing translation lexicons for use in different regions was limited. That might be resulted from the other problem that most of the conventional approaches are based heavily on domain-specific corpora. Such corpora may be insufficient, or unavailable, for certain domains. The Web is becoming the largest data repository in the world. 
A number of studies have been reported on experiments in the use of the Web to complement insufficient corpora. Most of them (Kilgarriff et al., 2003) tried to automatically collect parallel texts of different language versions (e.g. English and Chinese), instead of different regional versions (e.g. Chinese in Hong Kong and Taiwan), from the Web. These methods are feasible but only certain pairs of languages and subject domains can extract sufficient parallel texts as corpora. Different from the previous work, Lu et al. (2002) utilized Web anchor texts as a comparable bilingual corpus source to extract translations for out-of-vocabulary terms (OOV), the terms not covered by general translation dictionaries. This approach is applicable to the compilation of translation lexicons in diverse domains but requires powerful crawlers and high network bandwidth to gather Web data. It is fortunate that the Web contains rich pages in a mixture of two or more languages for some language pairs such as Asian languages and English. Many of them contain bilingual translations of terms, including OOV terms, e.g. companies’, personal and technical names. In addition, geographic information about Web pages also provides useful clues to the regions where translations appear. We are, therefore, interested in realizing whether these nice characteristics make it possible to automatically construct multilingual translation lexicons with regional variations. Real search engines, such as Google (http://www.google.com) and AltaVista (http://www. altavista.com), allow us to search English terms only for pages in a certain language, e.g. Chinese or Japanese. This motivates us to investigate how to construct translation lexicons from bilingual searchresult pages (as the corpus), which are normally returned in a long ordered list of snippets of summaries (including titles and page descriptions) to help users locate interesting pages. The purpose of this paper is trying to propose a systematic approach to create multilingual translation lexicons with regional variations through mining of bilingual search-result pages. The bilingual pages retrieved by a term in one language are adopted as the corpus for extracting its translations in another language. Three major problems are found and have to be dealt with, including: (1) extracting translations for unknown terms – how to extract translations with correct lexical boundaries from noisy bilingual search-result pages, and how to estimate term similarity for determining correct translations from the extracted candidates; (2) finding translations with regional variations – how to find regional translation variations that seldom cooccur in the same Web pages, and how to identify the corresponding languages of the retrieved searchresult pages once if the location clues (e.g. URLs) in them might not imply the language they are written in; and (3) translation with limited corpora – how to translate terms with insufficient search-result pages for particular pairs of languages such as Chinese and Japanese, and simplified Chinese and traditional Chinese. The goal of this paper is to deal with the three problems. Given a term in one language, all possible translations will be extracted from the obtained bilingual search-result pages based on their similarity to the term. For those language pairs with unavailable corpora, a transitive translation model is proposed, by which the source term is translated into the target language through an intermediate language. 
The transitive translation model is further enhanced by a competitive linking algorithm. The algorithm can effectively alleviate the problem of error propagation in the process of translation, where translation errors may occur due to incorrect identification of the ambiguous terms in the intermediate language. In addition, because the search-result pages might contain snippets that do not be really written in the target language, a filtering process is further performed to eliminate the translation variations not of interest. Several experiments have been conducted to examine the performance of the proposed approach. The experimental results have shown that the approach can generate effective translation equivalents of various terms – especially for OOV terms such as proper nouns and technical names, which can be used to enrich general translation dictionaries. The results also revealed that the created translation lexicons can reflect different cultural aspects across regions such as Taiwan, Hong Kong and mainland China. In the rest of this paper, we review related work in translation extraction in Section 2. We present the transitive model and describe the direct translation process in Sections 3 and 4, respectively. The conducted experiments and their results are described in Section 5. Finally, in Section 6, some concluding remarks are given. 2 Related Work In this section, we review some research in generating translation equivalents for automatic construction of translational lexicons. Transitive translation: Several transitive translation techniques have been developed to deal with the unreliable direct translation problem. Borin (2000) used various sources to improve the alignment of word translation and proposed the pivot alignment, which combined direct translation and indirect translation via a third language. Gollins et al. (2001) proposed a feasible method that translated terms in parallel across multiple intermediate languages to eliminate errors. In addition, Simard (2000) exploited the transitive properties of translations to improve the quality of multilingual text alignment. Corpus-based translation: To automatically construct translation lexicons, conventional research in MT has generally used statistical techniques to extract translations from domain-specific sentencealigned parallel bilingual corpora. Kupiec (1993) attempted to find noun phrase correspondences in parallel corpora using part-of-speech tagging and noun phrase recognition methods. Smadja et al. (1996) proposed a statistical association measure of the Dice coefficient to deal with the problem of collocation translation. Melamed (2000) proposed statistical translation models to improve the techniques of word alignment by taking advantage of preexisting knowledge, which was more effective than a knowledge-free model. Although high accuracy of translation extraction can be easily achieved by these techniques, sufficiently large parallel corpora for (a) Taiwan (Traditional Chinese) (b) Mainland China (Simplified Chinese) (c) Hong Kong (Traditional Chinese) Figure 1: Examples of the search-result pages in different Chinese regions that were obtained via the English query term “George Bush” from Google. various subject domains and language pairs are not always available. Some attention has been devoted to automatic extraction of term translations from comparable or even unrelated texts. Such methods encounter more difficulties due to the lack of parallel correlations aligned between documents or sentence pairs. 
Rapp (1999) utilized non-parallel corpora based on the assumption that the contexts of a term should be similar to the contexts of its translation in any language pairs. Fung et al. (1998) also proposed a similar approach that used a vector-space model and took a bilingual lexicon (called seed words) as a feature set to estimate the similarity between a word and its translation candidates. Web-based translation: Collecting parallel texts of different language versions from the Web has recently received much attention (Kilgarriff et al., 2003). Nie et al. (1999) tried to automatically discover parallel Web documents. They assumed a Web page’s parents might contain the links to different versions of it and Web pages with the same content might have similar structures and lengths. Resnik (1999) addressed the issue of language identification for finding Web pages in the languages of interest. Yang et al. (2003) presented an alignment method to identify one-to-one Chinese and English title pairs based on dynamic programming. These methods often require powerful crawlers to gather sufficient Web data, as well as more network bandwidth and storage. On the other hand, Cao et al. (2002) used the Web to examine if the arbitrary combination of translations of a noun phrase was statistically important. 3 Construction of Translation Lexicons To construct translation lexicons with regional variations, we propose a transitive translation model Strans(s,t) to estimate the degree of possibility of the translation of a term s in one (source) language ls into a term t in another (target) language lt. Given the term s in ls, we first extract a set of terms C={tj}, where tj in lt acts as a translation candidate of s, from a corpus. In this case, the corpus consists of a set of search-result pages retrieved from search engines using term s as a query. Based on our previous work (Cheng et al., 2004), we can efficiently extract term tj by calculating the association measurement of every character or word n-gram in the corpus and applying the local maxima algorithm. The association measurement is determined by the degree of cohesion holding the words together within a word ngram, and enhanced by examining if a word n-gram has complete lexical boundaries. Next, we rank the extracted candidates C as a list T in a decreasing order by the model Strans(s,t) as the result. 3.1 Bilingual Search-Result Pages The Web contains rich texts in a mixture of multiple languages and in different regions. For example, Chinese pages on the Web may be written in traditional or simplified Chinese as a principle language and in English as an auxiliary language. According to our observations, translated terms frequently occur together with a term in mixed-language texts. For example, Figure 1 illustrates the search-result pages of the English term “George Bush,” which was submitted to Google for searching Chinese pages in different regions. In Figure 1 (a) it contains the translations “喬治布希” (George Bush) and “布 希” (Bush) obtained from the pages in Taiwan. In Figures 1 (b) and (c) the term “George Bush” is translated into “布什”(busir) or “布甚”(buson) in mainland China and “布殊”(busu) in Hong Kong. This characteristic of bilingual search-result pages is also useful for other language pairs such as other Asian languages mixed with English. 
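A toy version of the candidate-extraction step described at the beginning of this section (scoring character n-grams in the retrieved snippets and keeping local maxima) might look like the following Python sketch. The cohesion measure and the boundary test here are deliberately simplistic stand-ins for the association and lexical-boundary checks of Cheng et al. (2004), and the CJK-only regular expression is just one possible target-language filter.

```python
import re
from collections import Counter

# Toy sketch: extract translation candidates C = {tj} from bilingual snippets.
def extract_candidates(snippets, max_len=6, top=20):
    text = "".join(re.findall(r"[\u4e00-\u9fff]+", " ".join(snippets)))
    count = Counter(text[i:i + n] for n in range(1, max_len + 1)
                    for i in range(len(text) - n + 1))
    def cohesion(g):
        # Frequency of g relative to its most frequent binary split.
        best_split = max(min(count[g[:i]], count[g[i:]]) for i in range(1, len(g)))
        return count[g] / best_split if best_split else 0.0
    score = {g: cohesion(g) for g in count if len(g) > 1}
    # Local maxima: keep n-grams scoring no worse than the (n-1)-grams they contain.
    local_max = [g for g in score
                 if all(score.get(sub, 0.0) <= score[g]
                        for sub in (g[:-1], g[1:]) if len(sub) > 1)]
    return sorted(local_max, key=lambda g: -score[g])[:top]
```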
For each term to be translated in one (source) language, we first submit it to a search engine for locating the bilingual Web documents containing the term and written in another (target) language from a specified region. The returned search-result pages containing snippets (illustrated in Figure 1), instead of the documents themselves, are collected as a corpus from which translation candidates are extracted and correct translations are then selected. Compared with parallel corpora and anchor texts, bilingual search-result pages are easier to collect and can promptly reflect the dynamic content of the Web. In addition, geographic information about Web pages such as URLs also provides useful clues to the regions where translations appear. 3.2 The Transitive Translation Model Transitive translation is particularly necessary for the translation of terms with regional variations because the variations seldom co-occur in the same bilingual pages. To estimate the possibility of being the translation t ∈T of term s, the transitive translation model first performs so-called direct translation, which attempts to learn translational equivalents directly from the corpus. The direct translation method is simple, but strongly affected by the quality of the adopted corpus. (Detailed description of the direct translation method will be given in Section 4.) If the term s and its translation t appear infrequently, the statistical information obtained from the corpus might not be reliable. For example, a term in simplified Chinese, e.g. 互联网 (Internet) does not usually co-occur together with its variation in traditional Chinese, e.g. 網際網路 (Internet). To deal with this problem, our idea is that the term s can be first translated into an intermediate translation m, which might co-occur with s, via a third (or intermediate) language lm. The correct translation t can then be extracted if it can be found as a translation of m. The transitive translation model, therefore, combines the processes of both direct translation and indirect translation, and is defined as:    × × = > = ∑ ∀ otherwise ), ( ) , ( ) , ( ) , ( ) , ( if ), , ( ) , ( m t m S m s S t s S t s S t s S t s S direct direct indirect direct direct m trans ϖ θ where m is one of the top k most probable intermediate translations of s in language lm, and ϖ is the confidence value of m’s accuracy, which can be estimated based on m’s probability of occurring in the corpus, and θ is a predefined threshold value. 3.3 The Competitive Linking Algorithm One major challenge of the transitive translation model is the propagation of translation errors. That is, incorrect m will significantly reduce the accuracy of the translation of s into t. A typical case is the indirect association problem (Melamed, 2000), as shown in Figure 2 in which we want to translate the term s1 (s=s1). Assume that t1 is s1’s corresponding translation, but appears infrequently with s1. An indirect association error might arise when t2, the translation of s1’s highly relevant term s2, co-occurs often with s1. This problem is very important for the situation in which translation is a many-to-many mapping. To reduce such errors and enhance the reliability of the estimation, a competitive linking algorithm, which is extended from Melamed’s work (Melamed, 2000), is developed to determine the most probable translations. Figure 2: An illustration of a bipartite graph. The idea of the algorithm is described below. 
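Before turning to the algorithm, the transitive score defined above can be summarised in a few lines of Python. Here `s_direct`, `top_k_intermediate` and `confidence` are placeholders for the direct similarity, the top-k intermediate translations of s, and their confidence values ϖ, and `theta` is the threshold θ; this is only a restatement of the formula, not the authors' code.

```python
# Hedged sketch of the transitive translation score S_trans(s, t).
def s_trans(s, t, s_direct, top_k_intermediate, confidence, theta=0.3, k=5):
    direct = s_direct(s, t)
    if direct > theta:                 # direct evidence is reliable enough
        return direct
    # Otherwise back off through intermediate translations m in language lm.
    return sum(s_direct(s, m) * s_direct(m, t) * confidence(m)
               for m in top_k_intermediate(s, k))
```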
For each translated term tj∈T in lt, we translate it back into original language ls and then model the translation mappings as a bipartite graph, as shown in Figure 2, where the vertices on one side correspond to the terms {si} or {tj} in one language. An edge eij indicates the corresponding two terms si and tj might be the translations of each other, and is weighted by the sum of Sdirect(si,tj) and Sdirect(tj,si,). Based on the weighted values, we can examine if each translated term tj∈T in lt can be correctly translated into the original term s1. If term tj has any translations better than term s1 in ls, term tj might be a so-called indirect association error and should be eliminated from T. In the above example, if the weight of e22 is larger than that of e12, the term “Technology” will be not considered as the translation of “網際網路” (Internet). Finally, for all translated terms {tj}⊆T that are not eliminated, we re-rank them by the weights of the edges {eij} and the top k ones are then taken as the translations. More detailed description of the algorithm could be referred to Lu et al. (2004). 4 Direct Translation In this section, we will describe the details of the direct translation process, i.e. the way to compute Sdirect(s,t). Three methods will be presented to estimate the similarity between a source term and each of its translation candidates. Moreover, because the searchresult pages of the term might contain snippets that do not actually be written in the target language, we will introduce a filtering method to eliminate the translation variations not of interest. 4.1 Translation Extraction The Chi-square Method: A number of statistical measures have been proposed for estimating term association based on co-occurrence analysis, including mutual information, DICE coefficient, chi-square test, and log-likelihood ratio (Rapp, 1999). Chisquare test (χ2) is adopted in our study because the required parameters for it can be obtained by submitInternet Technology 網際網路 (Internet) 技術 (Technology) 瀏覽器 (Browser) 電腦 (Computer) 資訊 (Information) t1 t2 s2 eij s3 s4 s5 s1 ting Boolean queries to search engines and utilizing the returned page counts (number of pages). Given a term s and a translation candidate t, suppose the total number of Web pages is N; the number of pages containing both s and t, n(s,t), is a; the number of pages containing s but not t, n(s,¬t), is b; the number of pages containing t but not s, n(¬s,t), is c; and the number of pages containing neither s nor t, n(¬s, ¬t), is d. (Although d is not provided by search engines, it can be computed by d=N-a-b-c.) Assume s and t are independent. Then, the expected frequency of (s,t), E(s,t), is (a+c)(a+b)/N; the expected frequency of (s,¬t), E(s,¬t), is (b+d)(a+b)/N; the expected frequency of (¬s,t), E(¬s,t), is (a+c)(c+d)/N; and the expected frequency of (¬s,¬t), E(¬s,¬t), is (b+d)(c+d)/N. Hence, the conventional chi-square test can be computed as: .) ( ) ( ) ( ) ( ) ( ) , ( )] , ( ) , ( [ ) , ( 2 } , { }, , { 2 2 d c d b c a b a c b d a N Y X E Y X E Y X n t s S t t Y s s X direct + × + × + × + × − × × = − = ∑ ¬ ∈ ∀ ¬ ∈ ∀ χ Although the chi-square method is simple to compute, it is more applicable to high-frequency terms than low-frequency terms since the former are more likely to appear with their candidates. Moreover, certain candidates that frequently co-occur with term s may not imply that they are appropriate translations. Thus, another method is presented. 
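The chi-square score above depends only on four page counts, so it can be computed directly from search-engine hit counts; a minimal sketch (function and argument names are illustrative) follows.

```python
# One way to compute the direct similarity: chi-square from Web page counts.
# n_st: pages with both s and t; n_s, n_t: pages with s (resp. t); N: total pages.
def chi_square_similarity(n_st, n_s, n_t, N):
    a = n_st
    b = n_s - n_st          # s but not t
    c = n_t - n_st          # t but not s
    d = N - a - b - c       # neither s nor t
    denom = (a + b) * (a + c) * (b + d) * (c + d)
    return N * (a * d - b * c) ** 2 / denom if denom else 0.0
```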
The Context-Vector Method: The basic idea of this method is that the term s’s translation equivalents may share common contextual terms with s in the search-result pages, similar to Rapp (1999). For both s and its candidates C, we take their contextual terms constituting the search-result pages as their features. The similarity between s and each candidate in C will be computed based on their feature vectors in the vector-space model. Herein, we adopt the conventional tf-idf weighting scheme to estimate the significance of features and define it as: ) log( ) , ( max ) , ( n N p t f p t f w j j i ti × = , where f(ti,p) is the frequency of term ti in search-result page p, N is the total number of Web pages, and n is the number of the pages containing ti. Finally, the similarity between term s and its translation candidate t can be estimated with the cosine measure, i.e. CV direct S (s,t)=cos(cvs, cvt), where cvs and cvt are the context vectors of s and t, respectively. In the context-vector method, a low-frequency term still has a chance of extracting correct translations, if it shares common contexts with its translations in the search-result pages. Although the method provides an effective way to overcome the chi-square method’s problem, its performance depends heavily on the quality of the retrieved search-result pages, such as the sizes and amounts of snippets. Also, feature selection needs to be carefully handled in some cases. The Combined Method: The context-vector and chisquare methods are basically complementary. Intuitively, a more complete solution is to integrate the two methods. Considering the various ranges of similarity values between the two methods, we compute the similarity between term s and its translation candidate t by the weighted sum of 1/Rχ2(s,t) and 1/RCV(s,t). Rχ2(s,t) (or RCV(s,t)) represents the similarity ranking of each translation candidate t with respect to s and is assigned to be from 1 to k (number of output) in decreasing order of similarity measure SX2 direct(s,t) (or SCVdirect(s,t)). That is, if the similarity rankings of t are high in both of the context-vector and chi-square methods, it will be also ranked high in the combined method. 4.2 Translation Filtering The direct translation process assumes that the retrieved search-result pages of a term exactly contain snippets from a certain region (e.g. Hong Kong) and written in the target language (e.g. traditional Chinese). However, the assumption might not be reliable because the location (e.g. URL) of a Web page may not imply that it is written by the principle language used in that region. Also, we cannot identify the language of a snippet simply using its character encoding scheme, because different regions may use the same character encoding schemes (e.g. Taiwan and Hong Kong mainly use the same traditional Chinese encoding scheme). From previous work (Tsou et al., 2004) we know that word entropies significantly reflect language differences in Hong Kong, Taiwan and China. Herein, we propose another method for dealing with the above problem. Since our goal is trying to eliminate the translation candidates {tj} that are not from the snippets in language lt, for each candidate tj we merge all of the snippets that contain tj into a document and then identify the corresponding language of tj based on the document. 
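For reference, the two scores of Section 4.1 just described, the tf-idf-weighted context-vector comparison and the rank-based combination, can be sketched as follows. Context vectors are represented as dictionaries from contextual terms to weights, and the combination weights default to an unweighted sum; both choices are ours.

```python
import math

# Hedged sketch of the context-vector and combined methods.
def tf_idf_weight(tf, max_tf, N, n):
    # w_ti = (f(ti, p) / max_j f(tj, p)) * log(N / n)
    return (tf / max_tf) * math.log(N / n) if max_tf and n else 0.0

def cosine(cv_s, cv_t):
    num = sum(cv_s[x] * cv_t.get(x, 0.0) for x in cv_s)
    den = (math.sqrt(sum(v * v for v in cv_s.values())) *
           math.sqrt(sum(v * v for v in cv_t.values())))
    return num / den if den else 0.0

def combined_score(rank_chi2, rank_cv, w1=1.0, w2=1.0):
    # Weighted sum of reciprocal ranks under the two methods (rank 1 = best).
    return w1 / rank_chi2 + w2 / rank_cv
```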
We train a uni-gram language model for each language of concern and perform language identification based on a discrimination function, which locates maximum character or word entropy and is defined as:       = ∑ ∈ ∈ ) | ( ln ) | ( max arg ) ( ) ( l w p l w p t lang tj N w L l j , where N(tj) is the collection of the snippets containing tj and L is a set of languages to be identified. The candidate tj will be eliminated if ≠ ) ( jt lang lt. To examine the feasibility of the proposed method in identifying Chinese in Taiwan, mainland China and Hong Kong, we conducted a preliminary experiment. To avoid the data sparseness of using a tri-gram language model, we simply use the above unigram model to perform language identification. Even so, the experimental result has shown that very high identification accuracy can be achieved. Some Web portals contain different versions for specific regions such as Yahoo! Taiwan (http://tw.yahoo. com) and Yahoo! Hong Kong (http://hk.yahoo.com). This allows us to collect regional training data for constructing language models. In the task of translating English terms into traditional Chinese in Taiwan, the extracted candidates for “laser” contained “雷 射” (translation of laser mainly used in Taiwan) and “激光” (translation of laser mainly used in mainland China). Based on the merged snippets, we found that “激光” had higher entropy value for the language model of mainland China while “雷射” had higher entropy value for the language models of Taiwan and Hong Kong. 5 Performance Evaluation We conducted extensive experiments to examine the performance of the proposed approach. We obtained the search-result pages of a term by submitting it to the real-world search engines, including Google and Openfind (http://www.openfind.com.tw). Only the first 100 snippets received were used as the corpus. Performance Metric: The average top-n inclusion rate was adopted as a metric on the extraction of translation equivalents. For a set of terms to be translated, its top-n inclusion rate was defined as the percentage of the terms whose translations could be found in the first n extracted translations. The experiments were categorized into direct translation and transitive translation. 5.1 Direct Translation Data set: We collected English terms from two realworld Chinese search engine logs in Taiwan, i.e. Dreamer (http://www.dreamer.com.tw) and GAIS (http://gais.cs.ccu.edu.tw). These English terms were potential ones in the Chinese logs that needed correct translations. The Dreamer log contained 228,566 unique query terms from a period of over 3 months in 1998, while the GAIS log contained 114,182 unique query terms from a period of two weeks in 1999. The collection contained a set of 430 frequent English terms, which were obtained from the 1,230 English terms out of the most popular 9,709 ones (with frequencies above 10 in both logs). About 36% (156/430) of the collection could be found in the LDC (Linguistic Data Consortium, http://www.ldc.upenn. edu/Projects/Chinese) English-to-Chinese lexicon with 120K entries, while about 64% (274/430) were not covered by the lexicon. English-to-Chinese Translation: In this experiment, we tried to directly translate the collected 430 English terms into traditional Chinese. Table 1 shows the results in terms of the top 1-5 inclusion rates for the translation of the collected English terms. “χ2”, “CV”, and “χ2+CV” represent the methods based on the chisquare, context-vector, and chi-square plus contextvector methods, respectively. 
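Returning to the filtering step of Section 4.2, a simplified version of the language-identification function can be sketched as below. This variant scores the merged snippets by unigram log-likelihood rather than the exact entropy-based discrimination function given above, and the model format is an assumption of ours.

```python
import math

# Hedged sketch of candidate filtering by language identification.
# unigram_models: {language: {character: probability}}.
def identify_language(merged_text, unigram_models, floor=1e-8):
    def score(model):
        return sum(math.log(model.get(ch, floor)) for ch in merged_text)
    return max(unigram_models, key=lambda lang: score(unigram_models[lang]))

def keep_candidate(snippets_with_t, unigram_models, target_lang):
    # Merge all snippets containing the candidate, identify the language,
    # and keep the candidate only if it matches the target language lt.
    return identify_language("".join(snippets_with_t), unigram_models) == target_lang
```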
Although either the chi-square or context-vector method was effective, the method based on both of them (χ2+CV) achieved the best performance in maximizing the inclusion rates in every case because they looked complementary. The proposed approach was found to be effective in finding translations of proper names, e.g. personal names “Jordan” ( 喬丹, 喬登), “Keanu Reeves” (基努李維, 基諾李維), companies’ names “TOYOTA” (豐田), “EPSON” (愛普生), and technical terms “EDI” (電子資料交換), “Ethernet” (乙 太網路), etc. English-to-Chinese Translation for Mainland China, Taiwan and Hong Kong: Chinese can be classified into simplified Chinese (SC) and traditional Chinese (TC) based on its writing form or character encoding scheme. SC is mainly used in mainland China while TC is mainly used in Taiwan and Hong Kong (HK). In this experiment, we further investigated the effectiveness of the proposed approach in English-to-Chinese translation for the three different regions. The collected 430 English terms were classified into five types: people, organization, place, computer and network, and others. Tables 2 and 3 show the statistical results and some examples, respectively. In Table 3, the number stands for a translated term’s ranking. The underlined terms were correct translations and the others were relevant translations. These translations might benefit the CLIR tasks, whose performance could be referred to our earlier work which emphasized on translating unknown queries (Cheng et al., 2004). The results in Table 2 show that the translations for mainland China and HK were not reliable enough in the top-1, compared with the translations for Taiwan. One possible reason was that the test terms were collected from Taiwan’s search engine logs. Most of them were popular in Taiwan but not in the others. Only 100 snippets retrieved might not balance or be sufficient for translation extraction. However, the inclusion rates for the three regions were close in the top-5. Observing the five types, we could find that type place containing the names of well-known countries and cities achieved the best performance in maximizing the inclusion rates in every case and almost had no regional variations (9%, 1/11) except Table 4: Inclusion rates of transitive translations of proper names and technical terms Type Source Language Target Language Intermediate Language Top-1 Top-3 Top5 Chinese English None 70.0% 84.0% 86.0% English Japanese None 32.0% 56.0% 64.0% English Korean None 34.0% 58.0% 68.0% Chinese Japanese English 26.0% 40.0% 48.0% Scientist Name Chinese Korean English 30.0% 42.0% 50.0% Chinese English None 50.0% 74.0% 74.0% English Japanese None 38.0% 48.0% 62.0% English Korean None 30.0% 50.0% 58.0% Chinese Japanese English 32.0% 44.0% 50.0% Disease Name Chinese Korean English 24.0% 38.0% 44.0% that the city “Sydney” was translated into 悉尼 (Sydney) in SC for mainland China and HK and 雪梨 (Sydney) in TC for Taiwan. Type computer and network containing technical terms had the most regional variations (41%, 47/115) and type people had 36% (5/14). In general, the translations in the two types were adapted to the use in different regions. On the other hand, 10% (15/147) and 8% (12/143) of the translations in types organization and others, respectively, had regional variations, because most of the terms in type others were general terms such as “bank” and “movies” and in type organization many local companies in Taiwan had no translation variations in mainland China and HK. 
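The top-n inclusion rates reported throughout this section correspond to a very simple computation, sketched below; `extracted` maps each test term to its ranked candidate list and `references` maps it to its set of reference translations (both names are illustrative).

```python
# Top-n inclusion rate: fraction of test terms whose reference translation
# appears among the first n extracted candidates.
def top_n_inclusion_rate(extracted, references, n):
    hits = sum(1 for term, cands in extracted.items()
               if any(t in references.get(term, set()) for t in cands[:n]))
    return hits / len(extracted) if extracted else 0.0
```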
Moreover, many translations in the types of people, organization, and computer and network were quite different in Taiwan and mainland China such as the personal name “Bred Pitt” was translated into “毕彼特” in SC and “布萊德彼特” in TC, the company name “Ericsson” into “爱立信” in SC and “易 利信” in TC, and the computer-related term “EDI” into “電子數據聯通” in SC and “電子資料交換” in TC. In general, the translations in HK had a higher chance to cover both of the translations in mainland China and Taiwan. 5.2 Multilingual & Transitive Translation Table 1: Inclusion rates for Web query terms using various similarity measurements Dic OOV All Method Top-1 Top-3 Top-5 Top-1 Top-3 Top-5 Top-1 Top-3 Top-5 χ2 42.1% 57.9% 62.1% 40.2% 53.8% 56.2% 41.4% 56.3% 59.8% CV 51.7% 59.8% 62.5% 45.0% 55.6% 57.4% 49.1% 58.1% 60.5% χ2+ CV 52.5% 60.4% 63.1% 46.1% 56.2% 58.0% 50.7% 58.8% 61.4% Table 2: Inclusion rates for different types of Web query terms Extracted Translations Taiwan (Big5) Mainland China (GB) Hong Kong (Big5) Type Top-1 Top-3 Top-5 Top-1 Top-3 Top-5 Top-1 Top-3 Top-5 People (14) 57.1% 64.3% 64.3% 35.7% 57.1% 64.3% 21.4% 57.1% 57.1% Organization (147) 44.9% 55.1% 56.5% 47.6% 58.5% 62.6% 37.4% 46.3% 53.1% Place (11) 90.9% 90.9% 90.9% 63.6% 100.0% 100.0% 81.8% 81.8% 81.8% Computer & Network (115) 55.8% 59.3% 63.7% 32.7% 59.3% 64.6% 42.5% 65.5% 68.1% Others (143) 49.0% 58.7% 62.2% 30.8% 49.7% 58.7% 28.7% 50.3% 60.8% Total (430) 50.7% 58.8% 61.4% 38.1% 56.7% 62.8% 36.5% 54.0% 60.5% Table 3: Examples of extracted correct/relevant translations of English terms in three Chinese regions Extracted Correct or Relevant Target Translations English Terms Taiwan (Traditional Chinese) Mainland China (Simplified Chinese) Hong Kong (Traditional Chinese) Police 警察 (1) 警察隊 (2) 警察局 (4) 警察 (1) 警务 (2) 公安 (4) 警務處 (1) 警察 (3) 警司 (5) Taxi 計程車 (1) 交通 (3) 出租车 (1) 的士 (4) 的士 (1) 的士司機 (2) 收費表 (15) Laser 雷射 (1) 雷射光源 (3) 測距槍(4) 激光 (1) 中国 (2) 激光器 (3) 雷射 (4) 激光 (1) 雷射 (2) 激光的 (3) 鐳射 (4) Hacker 駭客 (1) 網路 (2) 軟體 (7) 黑客 (1) 网络安全 (5) 防火墙 (6) 駭客 (1) 黑客 (2) 互聯網 (9) Database 資料庫 (1) 中文資料庫 (3) 数据库 (1) 数据库维护 (9) 資料庫 (1) 數據庫 (3) 資料 (5) Information 資訊 (1) 新聞 (3) 資訊網 (4) 信息 (1) 信息网 (3) 资讯 (7) 資料 (1) 資訊 (6) Internet café 網路咖啡 (3) 網路 (4) 網咖 (5) 网络咖啡 (1) 网络咖啡屋 (2) 网吧 (6) 網吧 (1) 香港 (3) 網站 (4) Search Engine 搜尋器 (2) 搜尋引擎 (5) 搜索引擎工厂 (1) 搜索引擎 (3) 搜索器 (1) 搜尋器 (8) Digital Camera 相機 (1) 數位相機 (2) 数码相机 (1) 数码影像 (6) 像素 (1) 數碼相機 (2) 相機 (3) Data set: Since technical terms had the most region variations among the five types as mentioned in the previous subsection, we collected two other data sets for examining the performance of the proposed approach in multilingual and transitive translation. The data sets contained 50 scientists’ names and 50 disease names in English, which were randomly selected from 256 scientists (Science/People) and 664 diseases (Health/Diseases) in the Yahoo! Directory (http://www.yahoo.com), respectively. English-to-Japanese/Korean Translation: In this experiment, the collected scientists’ and disease names in English were translated into Japanese and Korean to examine if the proposed approach could be applicable to other Asian languages. As the result in Table 4 shows, for the English-to-Japanese translation, the top-1, top-3, and top-5 inclusion rates were 35%, 52%, and 63%, respectively; for the English-to-Korean translation, the top-1, top-3, and top5 inclusion rates were 32%, 54%, and 63%, respectively, on average. 
Chinese-to-Japanese/Korean Translation via English: To further investigate if the proposed transitive approach can be applicable to other language pairs that are not frequently mixed in documents such as Chinese and Japanese (or Korean), we did transitive translation via English. In this experiment, we first manually translated the collected data sets in English into traditional Chinese and then did the Chinese-to-Japanese/Korean translation via the third language English. The results in Table 4 show that the propagation of translation errors reduced the translation accuracy. For example, the inclusion rates of the Chinese-toJapanese translation were lower than those of the English-to-Japanese translation since only 70%-86% inclusion rates were reached in the Chinese-toEnglish translation in the top 1-5. Although transitive translation might produce more noisy translations, it still produced acceptable translation candidates for human verification. In Table 4, 45%50% of the extracted top 5 Japanese or Korean terms might have correct translations. 6 Conclusion It is important that the translation of a term can be automatically adapted to its usage in different dialectal regions. We have proposed a Web-based translation approach that takes into account limited bilingual search-result pages from real search engines as comparable corpora. The experimental results have shown the feasibility of the automatic approach in generation of effective translation equivalents of various terms and construction of multilingual translation lexicons that reflect regional translation variations. References L. Borin. 2000. You’ll take the high road and I’ll take the low road: using a third language to improve bilingual word alignment. In Proc. of COLING-2000, pp. 97-103. P. F. Brown, J. Cocke, S. A. D. Pietra, V. J. D. Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79-85. Y.-B. Cao and H. Li. 2002. Base noun phrase translation using Web data the EM algorithm. In Proc. of COLING-2002, pp. 127-133. P.-J. Cheng, J.-W. Teng, R.-C. Chen, J.-H. Wang, W.-H. Lu, and L.-F. Chien. 2004. Translating unknown queries with Web corpora for cross-language information retrieval. In Proc. of ACM SIGIR-2004. P. Fung and L. Y. Yee. 1998. An IR approach for translating new words from nonparallel, comparable texts. In Proc. of ACL-98, pp. 414-420. T. Gollins and M. Sanderson. 2001. Improving cross language information with triangulated translation. In Proc. of ACM SIGIR-2001, pp. 90-95. J. Halpern. 2000. Lexicon-based orthographic disambiguation in CJK intelligent information retrieval. In Proc. of Workshop on Asian Language Resources and International Standardization. A. Kilgarriff and G. Grefenstette. 2003. Introduction to the special issue on the web as corpus. Computational Linguistics 29(3): 333-348. J. M. Kupiec. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. In Proc. of ACL93, pp. 17-22. W.-H. Lu, L.-F. Chien, and H.-J. Lee. 2004. Anchor text mining for translation of web queries: a transitive translation Approach. ACM TOIS 22(2): 242-269. W.-H. Lu, L.-F. Chien, and H.-J. Lee. 2002. Translation of Web queries using anchor text mining. ACM TALIP: 159-172. I. D. Melamed. 2000. Models of translational equivalence among words. Computational Linguistics, 26(2): 221249. J.-Y. Nie, P. Isabelle, M. Simard, and R. Durand. 1999. 
Probabilistic Parsing Strategies Mark-Jan Nederhof Faculty of Arts University of Groningen P.O. Box 716 NL-9700 AS Groningen The Netherlands [email protected] Giorgio Satta Dept. of Information Engineering University of Padua via Gradenigo, 6/A I-35131 Padova Italy [email protected] Abstract We present new results on the relation between context-free parsing strategies and their probabilistic counter-parts. We provide a necessary condition and a sufficient condition for the probabilistic extension of parsing strategies. These results generalize existing results in the literature that were obtained by considering parsing strategies in isolation. 1 Introduction Context-free grammars (CFGs) are standardly used in computational linguistics as formal models of the syntax of natural language, associating sentences with all their possible derivations. Other computational models with the same generative capacity as CFGs are also adopted, as for instance push-down automata (PDAs). One of the advantages of the use of PDAs is that these devices provide an operational specification that determines which steps must be performed when parsing an input string, something that is not offered by CFGs. In other words, PDAs can be associated to parsing strategies for contextfree languages. More precisely, parsing strategies are traditionally specified as constructions that map CFGs to language-equivalent PDAs. Popular examples of parsing strategies are the standard constructions of top-down PDAs (Harrison, 1978), leftcorner PDAs (Rosenkrantz and Lewis II, 1970), shift-reduce PDAs (Aho and Ullman, 1972) and LR PDAs (Sippu and Soisalon-Soininen, 1990). CFGs and PDAs have probabilistic counterparts, called probabilistic CFGs (PCFGs) and probabilistic PDAs (PPDAs). These models are very popular in natural language processing applications, where they are used to define a probability distribution function on the domain of all derivations for sentences in the language of interest. In PCFGs and PPDAs, probabilities are assigned to rules or transitions, respectively. However, these probabilities cannot be chosen entirely arbitrarily. For example, for a given nonterminal A in a PCFG, the sum of the probabilities of all rules rewriting A must be 1. This means that, out of a total of say m rules rewriting A, only m −1 rules represent “free” parameters. Depending on the choice of the parsing strategy, the constructed PDA may allow different probability distributions than the underlying CFG, since the set of free parameters may differ between the CFG and the PDA, both quantitatively and qualitatively. For example, (Sornlertlamvanich et al., 1999) and (Roark and Johnson, 1999) have shown that a probability distribution that can be obtained by training the probabilities of a CFG on the basis of a corpus can be less accurate than the probability distribution obtained by training the probabilities of a PDA constructed by a particular parsing strategy, on the basis of the same corpus. Also the results from (Chitrao and Grishman, 1990), (Charniak and Carroll, 1994) and (Manning and Carpenter, 2000) could be seen in this light. 
The question arises of whether parsing strategies can be extended probabilistically, i.e., whether a given construction of PDAs from CFGs can be "augmented" with a function defining the probabilities for the target PDA, given the probabilities associated with the input CFG, in such a way that the obtained probabilistic distributions on the CFG derivations and the corresponding PDA computations are equivalent. Some first results on this issue have been presented by (Tendeau, 1995), who shows that the already mentioned left-corner parsing strategy can be extended probabilistically, and later by (Abney et al., 1999), who show that the pure top-down parsing strategy and a specific type of shift-reduce parsing strategy can be probabilistically extended.

One might think that any "practical" parsing strategy can be probabilistically extended, but this turns out not to be the case. We briefly discuss here a counter-example, in order to motivate the approach we have taken in this paper. Probabilistic LR parsing has been investigated in the literature (Wright and Wrigley, 1991; Briscoe and Carroll, 1993; Inui et al., 2000) under the assumption that it would allow more fine-grained probability distributions than the underlying PCFGs. However, this is not the case in general. Consider a PCFG with the following rule/probability pairs:

  S → A B, 1      B → b C, 2/3      A → a C, 1/3      B → b D, 1/3
  A → a D, 2/3    C → x c, 1        D → x d, 1

There are two key transitions in the associated LR automaton, which represent shift actions over c and d (we denote LR states by their sets of kernel items and encode these states into stack symbols):

  τ_c : {C → x•c, D → x•d} ↦^c {C → x•c, D → x•d} {C → xc•}
  τ_d : {C → x•c, D → x•d} ↦^d {C → x•c, D → x•d} {D → xd•}

Assume a proper assignment of probabilities to the transitions of the LR automaton, i.e., the sum of transition probabilities for a given LR state is 1. It can be easily seen that we must assign probability 1 to all transitions except τ_c and τ_d, since this is the only pair of distinct transitions that can be applied for one and the same top-of-stack symbol, viz. {C → x•c, D → x•d}. However, in the PCFG model we have

  Pr(axcbxd) / Pr(axdbxc) = [Pr(A → aC) · Pr(B → bD)] / [Pr(A → aD) · Pr(B → bC)] = (1/3 · 1/3) / (2/3 · 2/3) = 1/4,

whereas in the LR PPDA model we have

  Pr(axcbxd) / Pr(axdbxc) = [Pr(τ_c) · Pr(τ_d)] / [Pr(τ_d) · Pr(τ_c)] = 1 ≠ 1/4.

Thus we conclude that there is no proper assignment of probabilities to the transitions of the LR automaton that would result in a distribution on the generated language that is equivalent to the one induced by the source PCFG. Therefore the LR strategy does not allow probabilistic extension.

One may seemingly solve this problem by dropping the constraint of properness, letting each transition that outputs a rule have the same probability as that rule in the PCFG, and letting other transitions have probability 1. However, the properness condition for PDAs has been heavily exploited in parsing applications, in doing incremental left-to-right probability computation for beam search (Roark and Johnson, 1999; Manning and Carpenter, 2000), and more generally in integration with other linear probabilistic models. Furthermore, commonly used training algorithms for PCFGs/PPDAs always produce proper probability assignments, and many desired mathematical properties of these methods are based on such an assumption (Chi and Geman, 1998; Sánchez and Benedí, 1997). We may therefore discard non-proper probability assignments in the current study.
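As a concrete check of the arithmetic in this counter-example, the ratio Pr(axcbxd)/Pr(axdbxc) under the example PCFG can be verified by multiplying rule probabilities along the two left-most derivations. The short Python sketch below is purely illustrative and is not part of the original argument; the rule encoding is an assumption made for this example.

```python
from fractions import Fraction as F

# Rule probabilities of the example PCFG from the text.
p = {
    ("S", "A B"): F(1),
    ("A", "a C"): F(1, 3), ("A", "a D"): F(2, 3),
    ("B", "b C"): F(2, 3), ("B", "b D"): F(1, 3),
    ("C", "x c"): F(1),    ("D", "x d"): F(1),
}

def derivation_prob(rules):
    """Probability of a left-most derivation given as a list of rules."""
    prob = F(1)
    for r in rules:
        prob *= p[r]
    return prob

# axcbxd: S -> A B, A -> a C, C -> x c, B -> b D, D -> x d
pr_axcbxd = derivation_prob([("S", "A B"), ("A", "a C"), ("C", "x c"),
                             ("B", "b D"), ("D", "x d")])
# axdbxc: S -> A B, A -> a D, D -> x d, B -> b C, C -> x c
pr_axdbxc = derivation_prob([("S", "A B"), ("A", "a D"), ("D", "x d"),
                             ("B", "b C"), ("C", "x c")])

print(pr_axcbxd / pr_axdbxc)   # 1/4, whereas any proper LR PPDA yields 1
```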
Non-proper probability assignments are, in any case, outside the reach of the usual training algorithms for PDAs, which always produce proper PDAs; the present study investigates aspects of the potential of such training algorithms for CFGs and PDAs.

What has been lacking in the literature is a theoretical framework to relate the parameter space of a CFG to that of a PDA constructed from the CFG by a particular parsing strategy, in terms of the set of allowable probability distributions over derivations. Note that the number of free parameters alone is not a satisfactory characterization of the parameter space. In fact, if the "nature" of the parameters is ill-chosen, then an increase in the number of parameters may lead to a deterioration of the accuracy of the model, due to sparseness of data.

In this paper we extend previous results, where only a few specific parsing strategies were considered in isolation, and provide a general characterization of parsing strategies that can be probabilistically extended. Our main contribution can be stated as follows.

• We define a theoretical framework to relate the parameter space defined by a CFG and that defined by a PDA constructed from the CFG by a particular parsing strategy.

• We provide a necessary condition and a sufficient condition for the probabilistic extension of parsing strategies.

We use the above findings to establish new results about probabilistic extensions of parsing strategies that are used in standard practice in computational linguistics, as well as to provide simpler proofs of already known results. We introduce our framework in Section 3 and report our main results in Sections 4 and 5. We discuss applications of our results in Section 6.

2 Preliminaries

In this paper we assume some familiarity with definitions of (P)CFGs and (P)PDAs. We refer the reader to standard textbooks and publications such as (Harrison, 1978; Booth and Thompson, 1973; Santos, 1972).

A CFG G is a tuple (Σ, N, S, R), with Σ and N the sets of terminals and nonterminals, respectively, S the start symbol and R the set of rules. In this paper we only consider left-most derivations, represented as strings d ∈ R* and simply called derivations. For α, β ∈ (Σ ∪ N)*, we write α ⇒^d β with the usual meaning. If α = S and β = w ∈ Σ*, we call d a complete derivation of w. We say a CFG is reduced if each rule in R occurs in some complete derivation.

A PCFG is a pair (G, p) consisting of a CFG G and a probability function p from R to real numbers in the interval [0, 1]. A PCFG is proper if Σ_{π=(A→α)∈R} p(π) = 1 for each A ∈ N. The probability of a (left-most) derivation d = π_1 · · · π_m, π_i ∈ R for 1 ≤ i ≤ m, is p(d) = ∏_{i=1}^{m} p(π_i). The probability of a string w ∈ Σ* is p(w) = Σ_{S ⇒^d w} p(d). A PCFG is consistent if Σ_{w∈Σ*} p(w) = 1. A PCFG (G, p) is reduced if G is reduced.

In this paper we will mainly consider push-down transducers rather than push-down automata. Push-down transducers not only compute derivations of the grammar while processing an input string, but they also explicitly produce output strings from which these derivations can be obtained. We use transducers for two reasons. First, constraints on the output strings allow us to restrict our attention to "reasonable" parsing strategies. Those strategies that cannot be formalized within these constraints are unlikely to be of practical interest.
Secondly, mappings from input strings to derivations, such as those realized by push-down transducers, turn out to be a very powerful abstraction and allow direct proofs of several general results.

Contrary to many textbooks, our push-down devices do not possess states next to stack symbols. This is without loss of generality, since states can be encoded into the stack symbols, given the types of transitions that we allow. Thus, a PDT A is a 6-tuple (Σ₁, Σ₂, Q, X_in, X_fin, Δ), with Σ₁ and Σ₂ the input and output alphabets, respectively, Q the set of stack symbols, including the initial and final stack symbols X_in and X_fin, respectively, and Δ the set of transitions. Each transition has one of the following three forms: X ↦ X Y, called a push transition; Y X ↦ Z, called a pop transition; or X ↦^{x,y} Y, called a swap transition. Here X, Y, Z ∈ Q, x ∈ Σ₁ ∪ {ε} is the input read by the transition and y ∈ Σ₂* is the written output. Note that in our notation, stacks grow from left to right, i.e., the top-most stack symbol will be found at the right end.

A configuration of a PDT is a triple (α, w, v), where α ∈ Q* is a stack, w ∈ Σ₁* is the remaining input, and v ∈ Σ₂* is the output generated so far. Computations are represented as strings c ∈ Δ*. For configurations (α, w, v) and (β, w′, v′), we write (α, w, v) ⊢^c (β, w′, v′) with the usual meaning, and write (α, w, v) ⊢* (β, w′, v′) when c is of no importance. If (X_in, w, ε) ⊢^c (X_fin, ε, v), then c is a complete computation of w, and the output string v is denoted out(c). A PDT is reduced if each transition in Δ occurs in some complete computation.

Without loss of generality, we assume that combinations of different types of transitions are not allowed for a given stack symbol. More precisely, for each stack symbol X ≠ X_fin, the PDT can only take transitions of a single type (push, pop or swap). A PDT can easily be brought in this form by introducing for each X three new stack symbols X_push, X_pop and X_swap and new swap transitions X ↦^{ε,ε} X_push, X ↦^{ε,ε} X_pop and X ↦^{ε,ε} X_swap. In each existing transition that operates on top-of-stack X, we then replace X by one of X_push, X_pop or X_swap, depending on the type of that transition. We also assume that X_fin does not occur in the left-hand side of a transition, again without loss of generality.

A PPDT is a pair (A, p) consisting of a PDT A and a probability function p from Δ to real numbers in the interval [0, 1]. A PPDT is proper if

• Σ_{τ=(X ↦ X Y)∈Δ} p(τ) = 1 for each X ∈ Q such that there is at least one transition X ↦ X Y, Y ∈ Q;
• Σ_{τ=(X ↦^{x,y} Y)∈Δ} p(τ) = 1 for each X ∈ Q such that there is at least one transition X ↦^{x,y} Y, x ∈ Σ₁ ∪ {ε}, y ∈ Σ₂*, Y ∈ Q; and
• Σ_{τ=(Y X ↦ Z)∈Δ} p(τ) = 1 for each X, Y ∈ Q such that there is at least one transition Y X ↦ Z, Z ∈ Q.

The probability of a computation c = τ_1 · · · τ_m, τ_i ∈ Δ for 1 ≤ i ≤ m, is p(c) = ∏_{i=1}^{m} p(τ_i). The probability of a string w is p(w) = Σ_{(X_in,w,ε) ⊢^c (X_fin,ε,v)} p(c). A PPDT is consistent if Σ_{w∈Σ₁*} p(w) = 1. A PPDT (A, p) is reduced if A is reduced.

3 Parsing Strategies

The term "parsing strategy" is often used informally to refer to a class of parsing algorithms that behave similarly in some way. In this paper, we assign a formal meaning to this term, relying on the observation by (Lang, 1974) and (Billot and Lang, 1989) that many parsing algorithms for CFGs can be described in two steps. The first is a construction of push-down devices from CFGs, and the second is a method for handling nondeterminism (e.g. backtracking or dynamic programming).
Parsing algorithms that handle nondeterminism in different ways but apply the same construction of push-down devices from CFGs are seen as realizations of the same parsing strategy. Thus, we define a parsing strategy to be a function S that maps a reduced CFG G = (Σ, N, S, R) to a pair S(G) = (A, f) consisting of a reduced PDT A = (Σ, Σ₂, Q, X_in, X_fin, Δ) and a function f that maps a subset of Σ₂* to a subset of R*, with the following properties:

• R ⊆ Σ₂.
• For each string w ∈ Σ* and each complete computation c on w, f(out(c)) = d is a (left-most) derivation of w. Furthermore, each symbol from R occurs as often in out(c) as it occurs in d.
• Conversely, for each string w ∈ Σ* and each derivation d of w, there is precisely one complete computation c on w such that f(out(c)) = d.

If c is a complete computation, we will write f(c) to denote f(out(c)). The conditions above then imply that f is a bijection from complete computations to complete derivations. Note that output strings of (complete) computations may contain symbols that are not in R, and the symbols that are in R may occur in a different order in v than in f(v) = d. The purpose of the symbols in Σ₂ − R is to help this process of reordering of symbols from R in v, as needed for instance in the case of the left-corner parsing strategy (see (Nijholt, 1980, pp. 22–23) for discussion).

A probabilistic parsing strategy is defined to be a function S that maps a reduced, proper and consistent PCFG (G, p_G) to a triple S(G, p_G) = (A, p_A, f), where (A, p_A) is a reduced, proper and consistent PPDT, with the same properties as a (non-probabilistic) parsing strategy, and in addition:

• For each complete derivation d and each complete computation c such that f(c) = d, p_G(d) equals p_A(c).

In other words, a complete computation has the same probability as the complete derivation that it is mapped to by function f. An implication of this property is that for each string w ∈ Σ*, the probabilities assigned to that string by (G, p_G) and (A, p_A) are equal. We say that probabilistic parsing strategy S′ is an extension of parsing strategy S if for each reduced CFG G and probability function p_G we have S(G) = (A, f) if and only if S′(G, p_G) = (A, p_A, f) for some p_A.

4 Correct-Prefix Property

In this section we present a necessary condition for the probabilistic extension of a parsing strategy. For a given PDT, we say a computation c is dead if (X_in, w_1, ε) ⊢^c (α, ε, v_1), for some α ∈ Q*, w_1 ∈ Σ₁* and v_1 ∈ Σ₂*, and there are no w_2 ∈ Σ₁* and v_2 ∈ Σ₂* such that (α, w_2, ε) ⊢* (X_fin, ε, v_2). Informally, a dead computation is a computation that cannot be continued to become a complete computation. We say that a PDT has the correct-prefix property (CPP) if it does not allow any dead computations. We also say that a parsing strategy has the CPP if it maps each reduced CFG to a PDT that has the CPP.

Lemma 1 For each reduced CFG G, there is a probability function p_G such that PCFG (G, p_G) is proper and consistent, and p_G(d) > 0 for all complete derivations d.

Proof. Since G is reduced, there is a finite set D consisting of complete derivations d, such that for each rule π in G there is at least one d ∈ D in which π occurs. Let n_{π,d} be the number of occurrences of rule π in derivation d ∈ D, and let n_π be Σ_{d∈D} n_{π,d}, the total number of occurrences of π in D. Let n_A be the sum of n_π for all rules π with A in the left-hand side. A probability function p_G can be defined through "maximum-likelihood estimation" such that p_G(π) = n_π / n_A for each rule π = A → α.
For all nonterminals A, Σ_{π=A→α} p_G(π) = Σ_{π=A→α} n_π / n_A = n_A / n_A = 1, which means that the PCFG (G, p_G) is proper. Furthermore, it has been shown in (Chi and Geman, 1998; Sánchez and Benedí, 1997) that a PCFG (G, p_G) is consistent if p_G was obtained by maximum-likelihood estimation using a set of derivations. Finally, since n_π > 0 for each π, also p_G(π) > 0 for each π, and p_G(d) > 0 for all complete derivations d.

We say a computation is a shortest dead computation if it is dead and none of its proper prefixes is dead. Note that each dead computation has a unique prefix that is a shortest dead computation. For a PDT A, let T_A be the union of the set of all complete computations and the set of all shortest dead computations.

Lemma 2 For each proper PPDT (A, p_A), Σ_{c∈T_A} p_A(c) ≤ 1.

Proof. The proof is a trivial variant of the proof that for a proper PCFG (G, p_G), the sum of p_G(d) for all derivations d cannot exceed 1, which is shown by (Booth and Thompson, 1973).

From this, the main result of this section follows.

Theorem 3 A parsing strategy that lacks the CPP cannot be extended to become a probabilistic parsing strategy.

Proof. Take a parsing strategy S that does not have the CPP. Then there is a reduced CFG G = (Σ, N, S, R), with S(G) = (A, f) for some A and f, and a shortest dead computation c allowed by A. It follows from Lemma 1 that there is a probability function p_G such that (G, p_G) is a proper and consistent PCFG and p_G(d) > 0 for all complete derivations d. Assume we also have a probability function p_A such that (A, p_A) is a proper and consistent PPDT and p_A(c′) = p_G(f(c′)) for each complete computation c′. Since A is reduced, each transition τ must occur in some complete computation c′. Furthermore, for each complete computation c′ there is a complete derivation d such that f(c′) = d, and p_A(c′) = p_G(d) > 0. Therefore, p_A(τ) > 0 for each transition τ, and p_A(c) > 0, where c is the above-mentioned dead computation. Due to Lemma 2,

  1 ≥ Σ_{c′∈T_A} p_A(c′) ≥ Σ_{w∈Σ*} p_A(w) + p_A(c) > Σ_{w∈Σ*} p_A(w) = Σ_{w∈Σ*} p_G(w).

This is in contradiction with the consistency of (G, p_G). Hence, a probability function p_A with the properties we required above cannot exist, and therefore S cannot be extended to become a probabilistic parsing strategy.

5 Strong Predictiveness

In this section we present our main result, which is a sufficient condition allowing the probabilistic extension of a parsing strategy. We start with a technical result that was proven in (Abney et al., 1999; Chi, 1999; Nederhof and Satta, 2003).

Lemma 4 Given a non-proper PCFG (G, p_G), G = (Σ, N, S, R), there is a probability function p′_G such that PCFG (G, p′_G) is proper and, for every complete derivation d, p′_G(d) = (1/C) · p_G(d), where C = Σ_{S ⇒^{d′} w, w∈Σ*} p_G(d′).

Note that if PCFG (G, p_G) in the above lemma is consistent, then C = 1 and (G, p′_G) and (G, p_G) define the same distribution on derivations. The normalization procedure underlying Lemma 4 makes use of the quantities Σ_{A ⇒^d w, w∈Σ*} p_G(d) for each A ∈ N. These quantities can be computed to any degree of precision, as discussed for instance in (Booth and Thompson, 1973) and (Stolcke, 1995). Thus normalization of a PCFG can be effectively computed.

For a fixed PDT, we define the binary relation ⇝ on stack symbols by: Y ⇝ Y′ if and only if (Y, w, ε) ⊢* (Y′, ε, v) for some w ∈ Σ₁* and v ∈ Σ₂*. In words, some subcomputation of the PDT may start with stack Y and end with stack Y′. Note that all stacks that occur in such a subcomputation must have height of 1 or more.
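Returning to Lemma 4, the normalization procedure can be realized numerically by first approximating, for every nonterminal A, the quantity Z(A) = Σ_{A ⇒^d w, w∈Σ*} p_G(d) by fixed-point iteration, and then rescaling each rule. The Python sketch below is one possible realization under the assumption that rules are encoded as (left-hand side, right-hand side, probability) triples; it is offered as an illustration, not as the exact procedure of the cited works.

```python
from math import prod

def normalize_pcfg(rules, nonterminals, iterations=200):
    """Lemma 4 by numerical fixed-point iteration.

    rules: list of triples (A, rhs, p), with rhs a tuple whose members are
    nonterminals or terminals.  First approximate, for every nonterminal A,
    Z(A) = sum of p(d) over all complete derivations from A; then rescale
    each rule as p'(A -> rhs) = p(A -> rhs) * prod(Z(B) for B in rhs) / Z(A).
    The result is a proper PCFG that assigns each complete derivation d the
    probability p(d) / Z(S), matching the statement of the lemma.
    """
    Z = {A: 0.0 for A in nonterminals}
    for _ in range(iterations):              # monotone fixed-point iteration
        Z = {A: sum(p * prod(Z.get(s, 1.0) for s in rhs)
                    for (B, rhs, p) in rules if B == A)
             for A in nonterminals}
    return [(A, rhs, p * prod(Z.get(s, 1.0) for s in rhs) / Z[A])
            for (A, rhs, p) in rules]
```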
We say that a (P)PDA or a (P)PDT has the strong predictiveness property (SPP) if the existence of three transitions X ↦ X Y, X Y₁ ↦ Z₁ and X Y₂ ↦ Z₂ such that Y ⇝ Y₁ and Y ⇝ Y₂ implies Z₁ = Z₂. Informally, this means that when a subcomputation starts with some stack α and some push transition τ, then solely on the basis of τ we can uniquely determine what stack symbol Z₁ = Z₂ will be on top of the stack in the firstly reached configuration with stack height equal to |α|. Another way of looking at it is that no information may flow from higher stack elements to lower stack elements that was not already predicted before these higher stack elements came into being, hence the term "strong predictiveness". We say that a parsing strategy has the SPP if it maps each reduced CFG to a PDT with the SPP.

Theorem 5 Any parsing strategy that has the CPP and the SPP can be extended to become a probabilistic parsing strategy.

Proof. Consider a parsing strategy S that has the CPP and the SPP, and a proper, consistent and reduced PCFG (G, p_G), G = (Σ, N, S, R). Let S(G) = (A, f), A = (Σ, Σ₂, Q, X_in, X_fin, Δ). We will show that there is a probability function p_A such that (A, p_A) is a proper and consistent PPDT, and p_A(c) = p_G(f(c)) for all complete computations c.

We first construct a PPDT (A, p′_A) as follows. For each swap transition τ = X ↦^{x,y} Y in Δ, let p′_A(τ) = p_G(y) in case y ∈ R, and p′_A(τ) = 1 otherwise. For all remaining transitions τ ∈ Δ, let p′_A(τ) = 1. Note that (A, p′_A) may be non-proper. Still, from the definition of f it follows that, for each complete computation c, we have

  p′_A(c) = p_G(f(c)),    (1)

and so our PPDT is consistent.

We now map (A, p′_A) to a language-equivalent PCFG (G′, p_{G′}), G′ = (Σ, Q, X_in, R′), where R′ contains the following rules with the specified associated probabilities:

• X → Y Z with p_{G′}(X → Y Z) = p′_A(X ↦ X Y), for each X ↦ X Y ∈ Δ, with Z the unique stack symbol such that there is at least one transition X Y′ ↦ Z with Y ⇝ Y′;
• X → x Y with p_{G′}(X → x Y) = p′_A(X ↦^{x,y} Y), for each transition X ↦^{x,y} Y ∈ Δ;
• Y → ε with p_{G′}(Y → ε) = 1, for each stack symbol Y such that there is at least one transition X Y ↦ Z ∈ Δ or such that Y = X_fin.

It is not difficult to see that there exists a bijection f′ from complete computations of A to complete derivations of G′, and that we have

  p_{G′}(f′(c)) = p′_A(c),    (2)

for each complete computation c. Thus (G′, p_{G′}) is consistent. However, note that (G′, p_{G′}) is not proper. By Lemma 4, we can construct a new PCFG (G′, p′_{G′}) that is proper and consistent, and such that p_{G′}(d) = p′_{G′}(d) for each complete derivation d of G′. Thus, for each complete computation c of A, we have

  p′_{G′}(f′(c)) = p_{G′}(f′(c)).    (3)

We now transfer back the probabilities of rules of (G′, p′_{G′}) to the transitions of A. Formally, we define a new probability function p_A such that, for each τ ∈ Δ, p_A(τ) = p′_{G′}(π), where π is the rule in R′ that has been constructed from τ as specified above. It is easy to see that the PPDT (A, p_A) is now proper. Furthermore, for each complete computation c of A we have

  p_A(c) = p′_{G′}(f′(c)),    (4)

and so (A, p_A) is also consistent. By combining equations (1) to (4) we conclude that, for each complete computation c of A,

  p_A(c) = p′_{G′}(f′(c)) = p_{G′}(f′(c)) = p′_A(c) = p_G(f(c)).

Thus our parsing strategy S can be probabilistically extended. Note that the construction in the proof above can be effectively computed (see the discussion following Lemma 4 for effective computation of normalized PCFGs).
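Both the relation ⇝ and the SPP can be decided by dynamic programming over the transitions of a fixed PDT, as the next section notes. A possible Python sketch is given below; the tuple encoding of transitions, ("push", X, Y) for X ↦ X Y, ("pop", X, Y, Z) for X Y ↦ Z (with X below the top symbol Y), and ("swap", X, x, y, Y) for X ↦^{x,y} Y, is an assumption made for illustration only, and no attention is paid to efficiency.

```python
def leadsto(stack_symbols, transitions):
    """Compute the relation Y ~> Y': some subcomputation can start with
    stack Y and end with stack Y' (intermediate stacks of height >= 1)."""
    R = {(Y, Y) for Y in stack_symbols}          # zero-step subcomputations
    changed = True
    while changed:
        changed = False
        found = set()
        for t in transitions:
            if t[0] == "swap":                   # X --x,y--> Y, then Y ~> W
                _, X, _, _, Y = t
                found |= {(X, W) for (A, W) in R if A == Y}
            elif t[0] == "push":                 # X -> X Y1, Y1 ~> Y2,
                _, X, Y1 = t                     # X Y2 -> Z, Z ~> W
                for (A, Y2) in R:
                    if A != Y1:
                        continue
                    for u in transitions:
                        if u[0] == "pop" and u[1] == X and u[2] == Y2:
                            Z = u[3]
                            found |= {(X, W) for (B, W) in R if B == Z}
        if not found <= R:
            R |= found
            changed = True
    return R

def has_spp(stack_symbols, transitions):
    """Decide the SPP: for every push X -> X Y, all pops X Y' -> Z with
    Y ~> Y' must agree on the resulting symbol Z."""
    R = leadsto(stack_symbols, transitions)
    for t in transitions:
        if t[0] != "push":
            continue
        _, X, Y = t
        targets = {u[3] for u in transitions
                   if u[0] == "pop" and u[1] == X and (Y, u[2]) in R}
        if len(targets) > 1:
            return False
    return True
```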
The definition of p′ A in the proof of Theorem 5 relies on the strings output by A. This is the main reason why we needed to consider PDTs rather than PDAs. Now assume an appropriate probability function pA has been computed, such that the source PCFG and (A, pA) define equivalent distributions on derivations/computations. Then the probabilities assigned to strings over the input alphabet are also equal. We may subsequently ignore the output strings if the application at hand merely requires probabilistic recognition rather than probabilistic transduction, or in other words, we may simplify PDTs to PDAs. The proof of Theorem 5 also leads to the observation that parsing strategies with the CPP and the SPP as well as their probabilistic extensions can be described as grammar transformations, as follows. A given (P)CFG is mapped to an equivalent (P)PDT by a (probabilistic) parsing strategy. By ignoring the output components of swap transitions we obtain a (P)PDA, which can be mapped to an equivalent (P)CFG as shown above. This observation gives rise to an extension with probabilities of the work on covers by (Nijholt, 1980; Leermakers, 1989). 6 Applications Many well-known parsing strategies with the CPP also have the SPP. This is for instance the case for top-down parsing and left-corner parsing. As discussed in the introduction, it has already been shown that for any PCFG G, there are equivalent PPDTs implementing these strategies, as reported in (Abney et al., 1999) and (Tendeau, 1995), respectively. Those results more simply follow now from our general characterization. Furthermore, PLR parsing (Soisalon-Soininen and Ukkonen, 1979; Nederhof, 1994) can be expressed in our framework as a parsing strategy with the CPP and the SPP, and thus we obtain as a new result that this strategy allows probabilistic extension. The above strategies are in contrast to the LR parsing strategy, which has the CPP but lacks the SPP, and therefore falls outside our sufficient condition. As we have already seen in the introduction, it turns out that LR parsing cannot be extended to become a probabilistic parsing strategy. Related to LR parsing is ELR parsing (Purdom and Brown, 1981; Nederhof, 1994), which also lacks the SPP. By an argument similar to the one provided for LR, we can show that also ELR parsing cannot be extended to become a probabilistic parsing strategy. (See (Tendeau, 1997) for earlier observations related to this.) These two cases might suggest that the sufficient condition in Theorem 5 is tight in practice. Decidability of the CPP and the SPP obviously depends on how a parsing strategy is specified. As far as we know, in all practical cases of parsing strategies these properties can be easily decided. Also, observe that our results do not depend on the general behaviour of a parsing strategy S, but just on its “point-wise” behaviour on each input CFG. Specifically, if S does not have the CPP and the SPP, but for some fixed CFG G of interest we obtain a PDT A that has the CPP and the SPP, then we can still apply the construction in Theorem 5. In this way, any probability function pG associated with G can be converted into a probability function pA, such that the resulting PCFG and PPDT induce equivalent distributions. We point out that decidability of the CPP and the SPP for a fixed PDT can be efficiently decided using dynamic programming. One more consequence of our results is this. As discussed in the introduction, the properness condition reduces the number of parameters of a PPDT. 
However, our results show that if the PPDT has the CPP and the SPP then the properness assumption is not restrictive, i.e., by lifting properness we do not gain new distributions with respect to those induced by the underlying PCFG. 7 Conclusions We have formalized the notion of CFG parsing strategy as a mapping from CFGs to PDTs, and have investigated the extension to probabilities. We have shown that the question of which parsing strategies can be extended to become probabilistic heavily relies on two properties, the correct-prefix property and the strong predictiveness property. As far as we know, this is the first general characterization that has been provided in the literature for probabilistic extension of CFG parsing strategies. We have also shown that there is at least one strategy of practical interest with the CPP but without the SPP, namely LR parsing, that cannot be extended to become a probabilistic parsing strategy. Acknowledgements The first author is supported by the PIONIER Project Algorithms for Linguistic Processing, funded by NWO (Dutch Organization for Scientific Research). The second author is partially supported by MIUR under project PRIN No. 2003091149 005. References S. Abney, D. McAllester, and F. Pereira. 1999. Relating probabilistic grammars and automata. In 37th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 542–549, Maryland, USA, June. A.V. Aho and J.D. Ullman. 1972. Parsing, volume 1 of The Theory of Parsing, Translation and Compiling. Prentice-Hall. S. Billot and B. Lang. 1989. The structure of shared forests in ambiguous parsing. In 27th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 143–151, Vancouver, British Columbia, Canada, June. T.L. Booth and R.A. Thompson. 1973. Applying probabilistic measures to abstract languages. IEEE Transactions on Computers, C-22(5):442– 450, May. T. Briscoe and J. Carroll. 1993. Generalized probabilistic LR parsing of natural language (corpora) with unification-based grammars. Computational Linguistics, 19(1):25–59. E. Charniak and G. Carroll. 1994. Contextsensitive statistics for improved grammatical language models. In Proceedings Twelfth National Conference on Artificial Intelligence, volume 1, pages 728–733, Seattle, Washington. Z. Chi and S. Geman. 1998. Estimation of probabilistic context-free grammars. Computational Linguistics, 24(2):299–305. Z. Chi. 1999. Statistical properties of probabilistic context-free grammars. Computational Linguistics, 25(1):131–160. M.V. Chitrao and R. Grishman. 1990. Statistical parsing of messages. In Speech and Natural Language, Proceedings, pages 263–266, Hidden Valley, Pennsylvania, June. M.A. Harrison. 1978. Introduction to Formal Language Theory. Addison-Wesley. K. Inui, V. Sornlertlamvanich, H. Tanaka, and T. Tokunaga. 2000. Probabilistic GLR parsing. In H. Bunt and A. Nijholt, editors, Advances in Probabilistic and other Parsing Technologies, chapter 5, pages 85–104. Kluwer Academic Publishers. B. Lang. 1974. Deterministic techniques for efficient non-deterministic parsers. In Automata, Languages and Programming, 2nd Colloquium, volume 14 of Lecture Notes in Computer Science, pages 255–269, Saarbr¨ucken. Springer-Verlag. R. Leermakers. 1989. How to cover a grammar. In 27th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 135–142, Vancouver, British Columbia, Canada, June. C.D. Manning and B. Carpenter. 2000. 
Probabilistic parsing using left corner language models. In H. Bunt and A. Nijholt, editors, Advances in Probabilistic and other Parsing Technologies, chapter 6, pages 105–124. Kluwer Academic Publishers. M.-J. Nederhof and G. Satta. 2003. Probabilistic parsing as intersection. In 8th International Workshop on Parsing Technologies, pages 137– 148, LORIA, Nancy, France, April. M.-J. Nederhof. 1994. An optimal tabular parsing algorithm. In 32nd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 117–124, Las Cruces, New Mexico, USA, June. A. Nijholt. 1980. Context-Free Grammars: Covers, Normal Forms, and Parsing, volume 93 of Lecture Notes in Computer Science. SpringerVerlag. P.W. Purdom, Jr. and C.A. Brown. 1981. Parsing extended LR(k) grammars. Acta Informatica, 15:115–127. B. Roark and M. Johnson. 1999. Efficient probabilistic top-down and left-corner parsing. In 37th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 421–428, Maryland, USA, June. D.J. Rosenkrantz and P.M. Lewis II. 1970. Deterministic left corner parsing. In IEEE Conference Record of the 11th Annual Symposium on Switching and Automata Theory, pages 139–152. J.-A. S´anchez and J.-M. Bened´ı. 1997. Consistency of stochastic context-free grammars from probabilistic estimation based on growth transformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(9):1052–1055, September. E.S. Santos. 1972. Probabilistic grammars and automata. Information and Control, 21:27–47. S. Sippu and E. Soisalon-Soininen. 1990. Parsing Theory, Vol. II: LR(k) and LL(k) Parsing, volume 20 of EATCS Monographs on Theoretical Computer Science. Springer-Verlag. E. Soisalon-Soininen and E. Ukkonen. 1979. A method for transforming grammars into LL(k) form. Acta Informatica, 12:339–369. V. Sornlertlamvanich, K. Inui, H. Tanaka, T. Tokunaga, and T. Takezawa. 1999. Empirical support for new probabilistic generalized LR parsing. Journal of Natural Language Processing, 6(3):3–22. A. Stolcke. 1995. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2):167–201. F. Tendeau. 1995. Stochastic parse-tree recognition by a pushdown automaton. In Fourth International Workshop on Parsing Technologies, pages 234–249, Prague and Karlovy Vary, Czech Republic, September. F. Tendeau. 1997. Analyse syntaxique et s´emantique avec ´evaluation d’attributs dans un demi-anneau. Ph.D. thesis, University of Orl´eans. J.H. Wright and E.N. Wrigley. 1991. GLR parsing with probability. In M. Tomita, editor, Generalized LR Parsing, chapter 8, pages 113–128. Kluwer Academic Publishers.
Discriminative Language Modeling with Conditional Random Fields and the Perceptron Algorithm Brian Roark Murat Saraclar AT&T Labs - Research {roark,murat}@research.att.com Michael Collins Mark Johnson MIT CSAIL Brown University [email protected] Mark [email protected] Abstract This paper describes discriminative language modeling for a large vocabulary speech recognition task. We contrast two parameter estimation methods: the perceptron algorithm, and a method based on conditional random fields (CRFs). The models are encoded as deterministic weighted finite state automata, and are applied by intersecting the automata with word-lattices that are the output from a baseline recognizer. The perceptron algorithm has the benefit of automatically selecting a relatively small feature set in just a couple of passes over the training data. However, using the feature set output from the perceptron algorithm (initialized with their weights), CRF training provides an additional 0.5% reduction in word error rate, for a total 1.8% absolute reduction from the baseline of 39.2%. 1 Introduction A crucial component of any speech recognizer is the language model (LM), which assigns scores or probabilities to candidate output strings in a speech recognizer. The language model is used in combination with an acoustic model, to give an overall score to candidate word sequences that ranks them in order of probability or plausibility. A dominant approach in speech recognition has been to use a “source-channel”, or “noisy-channel” model. In this approach, language modeling is effectively framed as density estimation: the language model’s task is to define a distribution over the source – i.e., the possible strings in the language. Markov (n-gram) models are often used for this task, whose parameters are optimized to maximize the likelihood of a large amount of training text. Recognition performance is a direct measure of the effectiveness of a language model; an indirect measure which is frequently proposed within these approaches is the perplexity of the LM (i.e., the log probability it assigns to some held-out data set). This paper explores alternative methods for language modeling, which complement the source-channel approach through discriminatively trained models. The language models we describe do not attempt to estimate a generative model P(w) over strings. Instead, they are trained on acoustic sequences with their transcriptions, in an attempt to directly optimize error-rate. Our work builds on previous work on language modeling using the perceptron algorithm, described in Roark et al. (2004). In particular, we explore conditional random field methods, as an alternative training method to the perceptron. We describe how these models can be trained over lattices that are the output from a baseline recognizer. We also give a number of experiments comparing the two approaches. The perceptron method gave a 1.3% absolute improvement in recognition error on the Switchboard domain; the CRF methods we describe give a further gain, the final absolute improvement being 1.8%. A central issue we focus on concerns feature selection. The number of distinct n-grams in our training data is close to 45 million, and we show that CRF training converges very slowly even when trained with a subset (of size 12 million) of these features. 
Because of this, we explore methods for picking a small subset of the available features.¹ The perceptron algorithm can be used as one method for feature selection, selecting around 1.5 million features in total. The CRF trained with this feature set, and initialized with parameters from perceptron training, converges much more quickly than other approaches, and also gives the optimal performance on the held-out set. We explore other approaches to feature selection, but find that the perceptron-based approach gives the best results in our experiments.

While we focus on n-gram models, we stress that our methods are applicable to more general language modeling features – for example, syntactic features, as explored in, e.g., Khudanpur and Wu (2000). We intend to explore methods with new features in the future. Experimental results with n-gram models on 1000-best lists show a very small drop in accuracy compared to the use of lattices. This is encouraging, in that it suggests that models with more flexible features than n-gram models, which therefore cannot be efficiently used with lattices, may not be unduly harmed by their restriction to n-best lists.

¹ Note also that in addition to concerns about training time, a language model with fewer features is likely to be considerably more efficient when decoding new utterances.

1.1 Related Work

Large vocabulary ASR has benefitted from discriminative estimation of Hidden Markov Model (HMM) parameters in the form of Maximum Mutual Information Estimation (MMIE) or Conditional Maximum Likelihood Estimation (CMLE). Woodland and Povey (2000) have shown the effectiveness of lattice-based MMIE/CMLE in challenging large scale ASR tasks such as Switchboard. In fact, state-of-the-art acoustic modeling, as seen, for example, at annual Switchboard evaluations, invariably includes some kind of discriminative training.

Discriminative estimation of language models has also been proposed in recent years. Jelinek (1995) suggested an acoustic-sensitive language model whose parameters are estimated by minimizing H(W|A), the expected uncertainty of the spoken text W, given the acoustic sequence A. Stolcke and Weintraub (1998) experimented with various discriminative approaches including MMIE, with mixed results. This work was followed up with some success by Stolcke et al. (2000), where an "anti-LM", estimated from weighted N-best hypotheses of a baseline ASR system, was used with a negative weight in combination with the baseline LM. Chen et al. (2000) presented a method based on changing the trigram counts discriminatively, together with changing the lexicon to add new words. Kuo et al. (2002) used the generalized probabilistic descent algorithm to train relatively small language models which attempt to minimize string error rate on the DARPA Communicator task. Banerjee et al. (2003) used a language model modification algorithm in the context of a reading tutor that listens. Their algorithm first uses a classifier to predict what effect each parameter has on the error rate, and then modifies the parameters to reduce the error rate based on this prediction.

2 Linear Models, the Perceptron Algorithm, and Conditional Random Fields

This section describes a general framework, global linear models, and two parameter estimation methods within the framework, the perceptron algorithm and a method based on conditional random fields.
The linear models we describe are general enough to be applicable to a diverse range of NLP and speech tasks – this section gives a general description of the approach. In the next section of the paper we describe how global linear models can be applied to speech recognition. In particular, we focus on how the decoding and parameter estimation problems can be implemented over lattices using finite-state techniques.

2.1 Global linear models

We follow the framework outlined in Collins (2002; 2004). The task is to learn a mapping from inputs x ∈ X to outputs y ∈ Y. We assume the following components: (1) Training examples (x_i, y_i) for i = 1 . . . N. (2) A function GEN which enumerates a set of candidates GEN(x) for an input x. (3) A representation Φ mapping each (x, y) ∈ X × Y to a feature vector Φ(x, y) ∈ R^d. (4) A parameter vector ᾱ ∈ R^d.

The components GEN, Φ and ᾱ define a mapping from an input x to an output F(x) through

  F(x) = argmax_{y∈GEN(x)} Φ(x, y) · ᾱ    (1)

where Φ(x, y) · ᾱ is the inner product Σ_s α_s Φ_s(x, y). The learning task is to set the parameter values ᾱ using the training examples as evidence. The decoding algorithm is a method for searching for the y that maximizes Eq. 1.

2.2 The Perceptron algorithm

We now turn to methods for training the parameters ᾱ of the model, given a set of training examples (x_1, y_1) . . . (x_N, y_N). This section describes the perceptron algorithm, which was previously applied to language modeling in Roark et al. (2004). The next section describes an alternative method, based on conditional random fields.

  Inputs: Training examples (x_i, y_i)
  Initialization: Set ᾱ = 0
  Algorithm: For t = 1 . . . T, i = 1 . . . N
      Calculate z_i = argmax_{z∈GEN(x_i)} Φ(x_i, z) · ᾱ
      If (z_i ≠ y_i) then ᾱ = ᾱ + Φ(x_i, y_i) − Φ(x_i, z_i)
  Output: Parameters ᾱ

  Figure 1: A variant of the perceptron algorithm.

The perceptron algorithm is shown in Figure 1. At each training example (x_i, y_i), the current best-scoring hypothesis z_i is found, and if it differs from the reference y_i, then the cost of each feature² is increased by the count of that feature in z_i and decreased by the count of that feature in y_i. The features in the model are updated, and the algorithm moves to the next utterance. After each pass over the training data, performance on a held-out data set is evaluated, and the parameterization with the best performance on the held-out set is what is ultimately produced by the algorithm.

Following Collins (2002), we used the averaged parameters from the training algorithm in decoding held-out and test examples in our experiments. Say ᾱ_i^t is the parameter vector after the i'th example is processed on the t'th pass through the data in the algorithm in Figure 1. Then the averaged parameters ᾱ_AVG are defined as ᾱ_AVG = (Σ_{i,t} ᾱ_i^t) / NT. Freund and Schapire (1999) originally proposed the averaged parameter method; it was shown to give substantial improvements in accuracy for tagging tasks in Collins (2002).

2.3 Conditional Random Fields

Conditional Random Fields have been applied to NLP tasks such as parsing (Ratnaparkhi et al., 1994; Johnson et al., 1999), and tagging or segmentation tasks (Lafferty et al., 2001; Sha and Pereira, 2003; McCallum and Li, 2003; Pinto et al., 2003). CRFs use the parameters ᾱ to define a conditional distribution over the members of GEN(x) for a given input x:

  p_ᾱ(y|x) = (1 / Z(x, ᾱ)) exp(Φ(x, y) · ᾱ),

where Z(x, ᾱ) = Σ_{y∈GEN(x)} exp(Φ(x, y) · ᾱ) is a normalization constant that depends on x and ᾱ.
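For intuition, both the mapping F(x) of Eq. 1 and the conditional distribution p_ᾱ(y|x) can be computed directly when GEN(x) is available as an explicit list of candidates with sparse feature vectors. The Python sketch below makes exactly that simplifying assumption (explicit candidate lists, features as dictionaries, hashable candidates such as strings); it is illustrative only, since Section 3 computes these quantities over lattices with finite-state operations instead.

```python
import math

def score(phi, alpha):
    """Inner product Phi(x, y) . alpha for a sparse feature dictionary."""
    return sum(v * alpha.get(f, 0.0) for f, v in phi.items())

def decode(gen_x, alpha):
    """F(x) of Eq. 1: the highest-scoring member of GEN(x).
    gen_x is a list of (y, phi) pairs, with y a hashable candidate
    (e.g. a transcription string) and phi its sparse feature dict."""
    return max(gen_x, key=lambda pair: score(pair[1], alpha))[0]

def conditional(gen_x, alpha):
    """The CRF distribution p_alpha(y | x) over the members of GEN(x)."""
    scores = {y: score(phi, alpha) for y, phi in gen_x}
    m = max(scores.values())               # stabilize the exponentials
    Z = sum(math.exp(s - m) for s in scores.values())
    return {y: math.exp(s - m) / Z for y, s in scores.items()}
```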
Given these definitions, the log-likelihood of the training data under parameters ᾱ is

  LL(ᾱ) = Σ_{i=1}^{N} log p_ᾱ(y_i|x_i) = Σ_{i=1}^{N} [Φ(x_i, y_i) · ᾱ − log Z(x_i, ᾱ)].    (2)

² Note that here lattice weights are interpreted as costs, which changes the sign in the algorithm presented in Figure 1.

Following Johnson et al. (1999) and Lafferty et al. (2001), we use a zero-mean Gaussian prior on the parameters, resulting in the regularized objective function:

  LLR(ᾱ) = Σ_{i=1}^{N} [Φ(x_i, y_i) · ᾱ − log Z(x_i, ᾱ)] − ||ᾱ||² / (2σ²).    (3)

The value σ dictates the relative influence of the log-likelihood term vs. the prior, and is typically estimated using held-out data. The optimal parameters under this criterion are ᾱ* = argmax_ᾱ LLR(ᾱ).

We use a limited memory variable metric method (Benson and Moré, 2002) to optimize LLR. There is a general implementation of this method in the Tao/PETSc software libraries (Balay et al., 2002; Benson et al., 2002). This technique has been shown to be very effective in a variety of NLP tasks (Malouf, 2002; Wallach, 2002). The main interface between the optimizer and the training data is a procedure which takes a parameter vector ᾱ as input, and in turn returns LLR(ᾱ) as well as the gradient of LLR at ᾱ. The derivative of the objective function with respect to a parameter α_s at parameter values ᾱ is

  ∂LLR/∂α_s = Σ_{i=1}^{N} [ Φ_s(x_i, y_i) − Σ_{y∈GEN(x_i)} p_ᾱ(y|x_i) Φ_s(x_i, y) ] − α_s/σ².    (4)

Note that LLR(ᾱ) is a convex function, so that there is a globally optimal solution and the optimization method will find it. The use of the Gaussian prior term ||ᾱ||²/(2σ²) in the objective function has been found to be useful in several NLP settings. It effectively ensures that there is a large penalty for parameter values in the model becoming too large – as such, it tends to control over-training. The choice of LLR as an objective function can be justified as maximum a-posteriori (MAP) training within a Bayesian approach. An alternative justification comes through a connection to support vector machines and other large margin approaches. SVM-based approaches use an optimization criterion that is closely related to LLR – see Collins (2004) for more discussion.

3 Linear models for speech recognition

We now describe how the formalism and algorithms in section 2 can be applied to language modeling for speech recognition.

3.1 The basic approach

As described in the previous section, linear models require definitions of X, Y, x_i, y_i, GEN, Φ and a parameter estimation method. In the language modeling setting we take X to be the set of all possible acoustic inputs; Y is the set of all possible strings, Σ*, for some vocabulary Σ. Each x_i is an utterance (a sequence of acoustic feature vectors), and GEN(x_i) is the set of possible transcriptions under a first pass recognizer. (GEN(x_i) is a huge set, but will be represented compactly using a lattice – we will discuss this in detail shortly.) We take y_i to be the member of GEN(x_i) with lowest error rate with respect to the reference transcription of x_i.

All that remains is to define the feature-vector representation, Φ(x, y). In the general case, each component Φ_i(x, y) could be essentially any function of the acoustic input x and the candidate transcription y. The first feature we define is Φ_0(x, y), the log-probability of y given x under the lattice produced by the baseline recognizer. Thus this feature will include contributions from the acoustic model and the original language model.
The remaining features are restricted to be functions over the transcription y alone, and they track all n-grams up to some length (say n = 3), for example:

  Φ_1(x, y) = Number of times "the the of" is seen in y.

At an abstract level, features of this form are introduced for all n-grams up to length 3 seen in some training data lattice, i.e., n-grams seen in any word sequence within the lattices. In practice, we consider methods that search for sparse parameter vectors ᾱ, thus assigning many n-grams 0 weight. This will lead to more efficient algorithms that avoid dealing explicitly with the entire set of n-grams seen in training data.

3.2 Implementation using WFA

We now give a brief sketch of how weighted finite-state automata (WFA) can be used to implement linear models for speech recognition. There are several papers describing the use of weighted automata and transducers for speech in detail, e.g., Mohri et al. (2002), but for clarity and completeness this section gives a brief description of the operations which we use.

For our purposes, a WFA is a tuple A = (Σ, Q, q_s, F, E, ρ), where Σ is the vocabulary, Q is a (finite) set of states, q_s ∈ Q is a unique start state, F ⊆ Q is a set of final states, E is a (finite) set of transitions, and ρ : F → R is a function from final states to final weights. Each transition e ∈ E is a tuple e = (l[e], p[e], n[e], w[e]), where l[e] ∈ Σ is a label (in our case, words), p[e] ∈ Q is the origin state of e, n[e] ∈ Q is the destination state of e, and w[e] ∈ R is the weight of the transition. A successful path π = e_1 . . . e_j is a sequence of transitions such that p[e_1] = q_s, n[e_j] ∈ F, and, for 1 < k ≤ j, n[e_{k−1}] = p[e_k]. Let Π_A be the set of successful paths π in a WFA A. For any π = e_1 . . . e_j, l[π] = l[e_1] . . . l[e_j].

The weights of the WFA in our case are always in the log semiring, which means that the weight of a path π = e_1 . . . e_j ∈ Π_A is defined as:

  w_A[π] = ( Σ_{k=1}^{j} w[e_k] ) + ρ(n[e_j]).    (5)

By convention, we use negative log probabilities as weights, so lower weights are better. All WFA that we will discuss in this paper are deterministic, i.e. there are no ε-transitions, and for any two transitions e, e′ ∈ E, if p[e] = p[e′], then l[e] ≠ l[e′]. Thus, for any string w = w_1 . . . w_j, there is at most one successful path π ∈ Π_A such that π = e_1 . . . e_j and, for 1 ≤ k ≤ j, l[e_k] = w_k, i.e. l[π] = w. The set of strings w such that there exists a π ∈ Π_A with l[π] = w defines a regular language L_A ⊆ Σ*.

We can now define some operations that will be used in this paper.

• λA. For a set of transitions E and λ ∈ R, define λE = {(l[e], p[e], n[e], λw[e]) : e ∈ E}. Then, for any WFA A = (Σ, Q, q_s, F, E, ρ), define λA for λ ∈ R as follows: λA = (Σ, Q, q_s, F, λE, λρ).
• A ∘ A′. The intersection of two deterministic WFAs A ∘ A′ in the log semiring is a deterministic WFA such that L_{A∘A′} = L_A ∩ L_{A′}. For any π ∈ Π_{A∘A′}, w_{A∘A′}[π] = w_A[π_1] + w_{A′}[π_2], where l[π] = l[π_1] = l[π_2].
• BestPath(A). This operation takes a WFA A, and returns the best scoring path π̂ = argmin_{π∈Π_A} w_A[π].
• MinErr(A, y). Given a WFA A, a string y, and an error function E(y, w), this operation returns π̂ = argmin_{π∈Π_A} E(y, l[π]). This operation will generally be used with y as the reference transcription for a particular training example, and E(y, w) as some measure of the number of errors in w when compared to y. In this case, the MinErr operation returns the path π ∈ Π_A such that l[π] has the smallest number of errors when compared to y.
• Norm(A).
Given a WFA A, this operation yields a WFA A′ such that L_A = L_{A′} and for every π ∈ Π_A there is a π′ ∈ Π_{A′} such that l[π] = l[π′] and

  w_{A′}[π′] = w_A[π] + log( Σ_{π̄∈Π_A} exp(−w_A[π̄]) ).    (6)

Note that

  Σ_{π∈Π_{Norm(A)}} exp(−w_{Norm(A)}[π]) = 1.    (7)

In other words, the weights define a probability distribution over the paths.

• ExpCount(A, w). Given a WFA A and an n-gram w, we define the expected count of w in A as

  ExpCount(A, w) = Σ_{π∈Π_A} exp(−w_{Norm(A)}[π]) C(w, l[π]),

where C(w, l[π]) is defined to be the number of times the n-gram w appears in the string l[π].

Given an acoustic input x, let L_x be a deterministic word-lattice produced by the baseline recognizer. The lattice L_x is an acyclic WFA, representing a weighted set of possible transcriptions of x under the baseline recognizer. The weights represent the combination of acoustic and language model scores in the original recognizer.

The new, discriminative language model constructed during training consists of a deterministic WFA, which we will denote D, together with a single parameter α_0. The parameter α_0 is the weight for the log probability feature Φ_0 given by the baseline recognizer. The WFA D is constructed so that L_D = Σ* and, for all π ∈ Π_D,

  w_D[π] = Σ_{j=1}^{d} Φ_j(x, l[π]) α_j.

Recall that Φ_j(x, w) for j > 0 is the count of the j'th n-gram in w, and α_j is the parameter associated with that n-gram.

[Figure 2: Representation of a trigram model with failure transitions. States correspond to n-gram histories such as w_{i−2}w_{i−1}, w_{i−1}w_i, w_{i−1}, w_i and ε; arcs are labeled either with a word w_i or with the failure symbol φ.]

Then, by definition, α_0 L ∘ D accepts the same set of strings as L, but

  w_{α_0 L ∘ D}[π] = Σ_{j=0}^{d} Φ_j(x, l[π]) α_j,

and

  argmin_{π∈Π_L} Φ(x, l[π]) · ᾱ = BestPath(α_0 L ∘ D).

Thus decoding under our new model involves first producing a lattice L from the baseline recognizer; second, scaling L with α_0 and intersecting it with the discriminative language model D; third, finding the best scoring path in the new WFA.

We now turn to training a model, or more explicitly, deriving a discriminative language model (D, α_0) from a set of training examples. Given a training set (x_i, r_i) for i = 1 . . . N, where x_i is an acoustic sequence and r_i is a reference transcription, we can construct lattices L_i for i = 1 . . . N using the baseline recognizer. We can also derive target transcriptions y_i = MinErr(L_i, r_i). The training algorithm is then a mapping from (L_i, y_i) for i = 1 . . . N to a pair (D, α_0).

Note that the construction of the language model requires two choices. The first concerns the choice of the set of n-gram features Φ_i for i = 1 . . . d implemented by D. The second concerns the choice of parameters α_i for i = 0 . . . d which assign weights to the n-gram features as well as the baseline feature Φ_0. Before describing methods for training a discriminative language model using perceptron and CRF algorithms, we give a little more detail about the structure of D, focusing on how n-gram language models can be implemented with finite-state techniques.

3.3 Representation of n-gram language models

An n-gram model can be efficiently represented in a deterministic WFA, through the use of failure transitions (Allauzen et al., 2003). Every string accepted by such an automaton has a single path through the automaton, and the weight of the string is the sum of the weights of the transitions in that path. In such a representation, every state in the automaton represents an n-gram history h, e.g. w_{i−2}w_{i−1}, and there are transitions leaving the state for every word w_i such that the feature hw_i has a weight.
There is also a failure transition leaving the state, labeled with some reserved symbol φ, which can only be traversed if the next symbol in the input does not match any transition leaving the state. This failure transition points to the backoff state h′, i.e. the n-gram history h minus its initial word. Figure 2 shows how a trigram model can be represented in such an automaton. See Allauzen et al. (2003) for more details. Note that in such a deterministic representation, the entire weight of all features associated with the word wi following history h must be assigned to the transition labeled with wi leaving the state h in the automaton. For example, if h = wi−2wi−1, then the trigram wi−2wi−1wi is a feature, as is the bigram wi−1wi and the unigram wi. In this case, the weight on the transition wi leaving state h must be the sum of the trigram, bigram and unigram feature weights. If only the trigram feature weight were assigned to the transition, neither the unigram nor the bigram feature contribution would be included in the path weight. In order to ensure that the correct weights are assigned to each string, every transition encoding an order k n-gram must carry the sum of the weights for all n-gram features of orders ≤k. To ensure that every string in Σ∗receives the correct weight, for any n-gram hw represented explicitly in the automaton, h′w must also be represented explicitly in the automaton, even if its weight is 0. 3.4 The perceptron algorithm The perceptron algorithm is incremental, meaning that the language model D is built one training example at a time, during several passes over the training set. Initially, we build D to accept all strings in Σ∗with weight 0. For the perceptron experiments, we chose the parameter α0 to be a fixed constant, chosen by optimization on the held-out set. The loop in the algorithm in figure 1 is implemented as: For t = 1 . . . T, i = 1 . . . N: • Calculate zi = argmaxy∈GEN(x) Φ(x, y) · ¯α = BestPath(α0Li ◦D) • If zi ̸= MinErr(Li, ri), then update the feature weights as in figure 1 (modulo the sign, because of the use of costs), and modify D so as to assign the correct weight to all strings. In addition, averaged parameters need to be stored (see section 2.2). These parameters will replace the unaveraged parameters in D once training is completed. Note that the only n-gram features to be included in D at the end of the training process are those that occur in either a best scoring path zi or a minimum error path yi at some point during training. Thus the perceptron algorithm is in effect doing feature selection as a by-product of training. Given N training examples, and T passes over the training set, O(NT) n-grams will have non-zero weight after training. Experiments in Roark et al. (2004) suggest that the perceptron reaches optimal performance after a small number of training iterations, for example T = 1 or T = 2. Thus O(NT) can be very small compared to the full number of n-grams seen in all training lattices. In our experiments, the perceptron method chose around 1.4 million n-grams with non-zero weight. This compares to 43.65 million possible n-grams seen in the training data. This is a key contrast with conditional random fields, which optimize the parameters of a fixed feature set. Feature selection can be critical in our domain, as training and applying a discriminative language model over all n-grams seen in the training data (in either correct or incorrect transcriptions) may be computationally very demanding. 
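For the 1000-best-list condition, the perceptron training loop of Figure 1, specialized as above, can be sketched in a few lines. The Python code below rests on several assumptions not made explicit in the paper: hypotheses arrive as dictionaries carrying their baseline log probability, word sequence, and error count with respect to the reference; α_0 is a fixed constant; weights are treated as scores rather than costs; and parameter averaging is done naively. It is a sketch of the idea, not the lattice-based implementation described in the text.

```python
from collections import defaultdict

def ngram_counts(words, order=3):
    """Sparse n-gram counts (the features Phi_1 ... Phi_d) for a hypothesis."""
    counts = defaultdict(float)
    for n in range(1, order + 1):
        for i in range(len(words) - n + 1):
            counts[tuple(words[i:i + n])] += 1.0
    return counts

def model_score(hyp, alpha, alpha0):
    """alpha0 * Phi_0 (baseline log probability) plus the n-gram score."""
    s = alpha0 * hyp["logprob"]
    for ng, c in hyp["ngrams"].items():
        s += alpha.get(ng, 0.0) * c
    return s

def train_perceptron(nbest_lists, alpha0=1.0, passes=2):
    """Averaged perceptron over n-best lists.  Each hypothesis is a dict
    with keys 'words', 'logprob' and 'errors' (errors w.r.t. the
    reference); the minimum-error hypothesis plays the role of y_i."""
    alpha = defaultdict(float)
    summed = defaultdict(float)
    steps = 0
    for _ in range(passes):
        for nbest in nbest_lists:
            for hyp in nbest:
                if "ngrams" not in hyp:
                    hyp["ngrams"] = ngram_counts(hyp["words"])
            best = max(nbest, key=lambda h: model_score(h, alpha, alpha0))
            oracle = min(nbest, key=lambda h: h["errors"])
            if best["words"] != oracle["words"]:
                for ng, c in oracle["ngrams"].items():
                    alpha[ng] += c              # promote the low-error path
                for ng, c in best["ngrams"].items():
                    alpha[ng] -= c              # demote the current best path
            steps += 1
            for ng, v in alpha.items():
                summed[ng] += v
    return {ng: v / steps for ng, v in summed.items()}
```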
One training scenario that we will consider will be using the output of the perceptron algorithm (the averaged parameters) to provide the feature set and the initial feature weights for use in the CRF algorithm. This leads to a model which is reasonably sparse, but has the benefit of CRF training, which as we will see gives gains in performance. 3.5 Conditional Random Fields The CRF methods that we use assume a fixed definition of the n-gram features Φi for i = 1 . . . d in the model. In the experimental section we will describe a number of ways of defining the feature set. The optimization methods we use begin at some initial setting for ᾱ, and then search for the parameters ᾱ* which maximize LLR(ᾱ) as defined in Eq. 3. The optimization method requires calculation of LLR(ᾱ) and the gradient of LLR(ᾱ) for a series of values for ᾱ. The first step in calculating these quantities is to take the parameter values ᾱ, and to construct an acceptor D which accepts all strings in Σ∗, such that

wD[π] = Σ_{j=1}^{d} Φj(x, l[π]) αj

For each training lattice Li, we then construct a new lattice L′i = Norm(α0Li ◦ D). The lattice L′i represents (in the log domain) the distribution pᾱ(y|xi) over strings y ∈ GEN(xi). The value of log pᾱ(yi|xi) for any i can be computed by simply taking the path weight of π such that l[π] = yi in the new lattice L′i. Hence computation of LLR(ᾱ) in Eq. 3 is straightforward. Calculating the n-gram feature gradients for the CRF optimization is also relatively simple, once L′i has been constructed. From the derivative in Eq. 4, for each i = 1 . . . N, j = 1 . . . d the quantity

Φj(xi, yi) − Σ_{y∈GEN(xi)} pᾱ(y|xi) Φj(xi, y)    (8)

must be computed. The first term is simply the number of times the j’th n-gram feature is seen in yi. The second term is the expected number of times that the j’th n-gram is seen in the acceptor L′i. If the j’th n-gram is w1 . . . wn, then this can be computed as ExpCount(L′i, w1 . . . wn). The GRM library, which was presented in Allauzen et al. (2003), has a direct implementation of the function ExpCount, which simultaneously calculates the expected value of all n-grams of order less than or equal to a given n in a lattice L. The one non-n-gram feature weight that is being estimated is the weight α0 given to the baseline ASR negative log probability. Calculation of the gradient of LLR with respect to this parameter again requires calculation of the term in Eq. 8 for j = 0 and i = 1 . . . N. Computation of Σ_{y∈GEN(xi)} pᾱ(y|xi) Φ0(xi, y) turns out to be not as straightforward as calculating n-gram expectations. To do so, we rely upon the fact that Φ0(xi, y), the negative log probability of the path, decomposes to the sum of negative log probabilities of each transition in the path. We index each transition in the lattice Li, and store its negative log probability under the baseline model. We can then calculate the required gradient from L′i, by calculating the expected value in L′i of each indexed transition in Li. We found that an approximation to the gradient of α0, however, performed nearly identically to this exact gradient, while requiring substantially less computation. Let w_1^n be a string of n words, labeling a path in word-lattice L′i. For brevity, let Pi(w_1^n) = pᾱ(w_1^n|xi) be the conditional probability under the current model, and let Qi(w_1^n) be the probability of w_1^n in the normalized baseline ASR lattice Norm(Li). Let 𝓛i be the set of strings in the language defined by Li. Then we wish to compute Ei for i = 1 . . .
N, where

Ei = Σ_{w_1^n∈𝓛i} Pi(w_1^n) log Qi(w_1^n)
   = Σ_{w_1^n∈𝓛i} Σ_{k=1...n} Pi(w_1^n) log Qi(wk|w_1^{k−1})    (9)

The approximation is to make the following Markov assumption:

Ei ≈ Σ_{w_1^n∈𝓛i} Σ_{k=1...n} Pi(w_1^n) log Qi(wk|w_{k−2}^{k−1})
   = Σ_{xyz∈Si} ExpCount(L′i, xyz) log Qi(z|xy)    (10)

where Si is the set of all trigrams seen in Li. The term log Qi(z|xy) can be calculated once before training for every lattice in the training set; the ExpCount term is calculated as before using the GRM library. We have found this approximation to be effective in practice, and it was used for the trials reported below. When the gradients and conditional likelihoods are collected from all of the utterances in the training set, the contributions from the regularizer are combined to give an overall gradient and objective function value. These values are provided to the parameter estimation routine, which then returns the parameters for use in the next iteration. The accumulation of gradients for the feature set is the most time consuming part of the approach, but this is parallelizable, so that the computation can be divided among many processors. 4 Empirical Results We present empirical results on the Rich Transcription 2002 evaluation test set (rt02), which we used as our development set, as well as on the Rich Transcription 2003 Spring evaluation CTS test set (rt03). The rt02 set consists of 6081 sentences (63804 words) and has three subsets: Switchboard 1, Switchboard 2, Switchboard Cellular. The rt03 set consists of 9050 sentences (76083 words) and has two subsets: Switchboard and Fisher. We used the same training set as that used in Roark et al. (2004). The training set consists of 276726 transcribed utterances (3047805 words), with an additional 20854 utterances (249774 words) as held out data.
Figure 3: Word error rate on the rt02 eval set versus training iterations for CRF trials, contrasted with baseline recognizer performance and perceptron performance. Points are at every 20 iterations. Each point (x,y) is the WER at the iteration with the best objective function value in the interval (x-20,x].
For each utterance, a weighted word-lattice was produced, representing alternative transcriptions, from the ASR system. From each word-lattice, the oracle best path was extracted, which gives the best word-error rate from among all of the hypotheses in the lattice. The oracle word-error rate for the training set lattices was 12.2%. We also performed trials with 1000-best lists for the same training set, rather than lattices. The oracle score for the 1000-best lists was 16.7%. To produce the word-lattices, each training utterance was processed by the baseline ASR system. However, these same utterances are what the acoustic and language models are built from, which leads to better performance on the training utterances than can be expected when the ASR system processes unseen utterances. To somewhat control for this, the training set was partitioned into 28 sets, and baseline Katz backoff trigram models were built for each set by including only transcripts from the other 27 sets. Since language models are generally far more prone to overtrain than standard acoustic models, this goes a long way toward making the training conditions similar to testing conditions.
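The partitioning scheme described in the last paragraph can be sketched as follows; train_katz_trigram and decode_to_lattice are hypothetical placeholders for the baseline language model estimation and the ASR decoder, and the round-robin split by index is purely illustrative.

def build_training_lattices(utterances, transcripts,
                            train_katz_trigram, decode_to_lattice, n_parts=28):
    """Decode each partition of the training data with a language model
    estimated only from the transcripts of the other partitions, so that the
    training lattices better resemble the decoding of unseen speech."""
    parts = [list(range(k, len(utterances), n_parts)) for k in range(n_parts)]
    lattices = [None] * len(utterances)
    for held_out in parts:
        held = set(held_out)
        lm = train_katz_trigram([transcripts[i]
                                 for i in range(len(transcripts))
                                 if i not in held])
        for i in held_out:
            lattices[i] = decode_to_lattice(utterances[i], lm)
    return lattices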
There are three baselines against which we are comparing. The first is the ASR baseline, with no reweighting from a discriminatively trained n-gram model. The other two baselines are with perceptron-trained n-gram model re-weighting, and were reported in Roark et al. (2004). The first of these is for a pruned-lattice trained trigram model, which showed a reduction in word error rate (WER) of 1.3%, from 39.2% to 37.9% on rt02. The second is for a 1000-best list trained trigram model, which performed only marginally worse than the lattice-trained perceptron, at 38.0% on rt02. 4.1 Perceptron feature set We use the perceptron-trained models as the starting point for our CRF algorithm: the feature set given to the CRF algorithm is the feature set selected by the perceptron algorithm; the feature weights are initialized to those of the averaged perceptron. Figure 3 shows the performance of our three baselines versus three trials of the CRF algorithm.
Figure 4: Word error rate on the rt02 eval set versus training iterations for CRF trials, contrasted with baseline recognizer performance and perceptron performance. Points are at every 20 iterations. Each point (x,y) is the WER at the iteration with the best objective function value in the interval (x-20,x].
In the first two trials, the training set consists of the pruned lattices, and the feature set is from the perceptron algorithm trained on pruned lattices. There were 1.4 million features in this feature set. The first trial set the regularizer constant σ = ∞, so that the algorithm was optimizing raw conditional likelihood. The second trial is with the regularizer constant σ = 0.5, which we found empirically to be a good parameterization on the held-out set. As can be seen from these results, regularization is critical. The third trial in this set uses the feature set from the perceptron algorithm trained on 1000-best lists, and uses CRF optimization on these same 1000-best lists. There were 0.9 million features in this feature set. For this trial, we also used σ = 0.5. As with the perceptron baselines, the n-best trial performs nearly identically with the pruned lattices, here also resulting in 37.4% WER. This may be useful for techniques that would be more expensive to extend to lattices versus n-best lists (e.g. models with unbounded dependencies). These trials demonstrate that the CRF algorithm can do a better job of estimating feature weights than the perceptron algorithm for the same feature set. As mentioned in the earlier section, feature selection is a by-product of the perceptron algorithm, but the CRF algorithm is given a set of features. The next two trials looked at selecting feature sets other than those provided by the perceptron algorithm. 4.2 Other feature sets In order for the feature weights to be non-zero in this approach, they must be observed in the training set. The number of unigram, bigram and trigram features with non-zero observations in the training set lattices is 43.65 million, or roughly 30 times the size of the perceptron feature set. Many of these features occur only rarely with very low conditional probabilities, and hence cannot meaningfully impact system performance.
We pruned this feature set to include all unigrams and bigrams, but only those trigrams with an expected count of greater than 0.01 in the training set. That is, to be included, a trigram must occur in a set of paths, the sum of the conditional probabilities of which must be greater than our threshold θ = 0.01.

Trial                                 Iter   rt02   rt03
ASR Baseline                                 39.2   38.2
Perceptron, Lattice                          37.9   36.9
Perceptron, N-best                           38.0   37.2
CRF, Lattice, Percep Feats (1.4M)      769   37.4   36.5
CRF, N-best, Percep Feats (0.9M)       946   37.4   36.6
CRF, Lattice, θ = 0.01 (12M)          2714   37.6   36.5
CRF, Lattice, θ = 0.9 (1.5M)          1679   37.5   36.6
Table 1: Word-error rate results at convergence iteration for various trials, on both Switchboard 2002 test set (rt02), which was used as the dev set, and Switchboard 2003 test set (rt03).

This threshold resulted in a feature set of roughly 12 million features, nearly 10 times the size of the perceptron feature set. For better comparability with that feature set, we set our thresholds higher, so that trigrams were pruned if their expected count fell below θ = 0.9, and bigrams were pruned if their expected count fell below θ = 0.1. We were concerned that this may leave out some of the features on the oracle paths, so we added back in all bigram and trigram features that occurred on oracle paths, giving a feature set of 1.5 million features, roughly the same size as the perceptron feature set. Figure 4 shows the results for three CRF trials versus our ASR baseline and the perceptron algorithm baseline trained on lattices. First, the result using the perceptron feature set provides us with a WER of 37.4%, as previously shown. The WER at convergence for the big feature set (12 million features) is 37.6%; the WER at convergence for the smaller feature set (1.5 million features) is 37.5%. While both of these other feature sets converge to performance close to that using the perceptron features, the number of iterations over the training data that are required to reach that level of performance are many more than for the perceptron-initialized feature set. Table 1 shows the word-error rate at the convergence iteration for the various trials, on both rt02 and rt03. All of the CRF trials are significantly better than the perceptron performance, using the Matched Pair Sentence Segment test for WER included with SCTK (NIST, 2000). On rt02, the N-best and perceptron initialized CRF trials were significantly better than the lattice perceptron at p < 0.001; the other two CRF trials were significantly better than the lattice perceptron at p < 0.01. On rt03, the N-best CRF trial was significantly better than the lattice perceptron at p < 0.002; the other three CRF trials were significantly better than the lattice perceptron at p < 0.001. Finally, we measured the time of a single iteration over the training data on a single machine for the perceptron algorithm, the CRF algorithm using the approximation to the gradient of α0, and the CRF algorithm using an exact gradient of α0. Table 2 shows these times in hours.

Features                          Percep   CRF, approx   CRF, exact
Lattice, Percep Feats (1.4M)        7.10          1.69         3.61
N-best, Percep Feats (0.9M)         3.40          0.96         1.40
Lattice, θ = 0.01 (12M)                           2.24         4.75
Table 2: Time (in hours) for one iteration on a single Intel Xeon 2.4Ghz processor with 4GB RAM.

Because of the frequent update of the weights in the model, the perceptron algorithm is more expensive than the CRF algorithm for a single iteration. Further, the CRF algorithm is parallelizable, so that most of the work of an
iteration can be shared among multiple processors. Our most common training setup for the CRF algorithm was parallelized between 20 processors, using the approximation to the gradient. In that setup, using the 1.4M feature set, one iteration of the perceptron algorithm took the same amount of real time as approximately 80 iterations of CRF. 5 Conclusion We have contrasted two approaches to discriminative language model estimation on a difficult large vocabulary task, showing that they can indeed scale effectively to handle this size of a problem. Both algorithms have their benefits. The perceptron algorithm selects a relatively small subset of the total feature set, and requires just a couple of passes over the training data. The CRF algorithm does a better job of parameter estimation for the same feature set, and is parallelizable, so that each pass over the training set can require just a fraction of the real time of the perceptron algorithm. The best scenario from among those that we investigated was a combination of both approaches, with the output of the perceptron algorithm taken as the starting point for CRF estimation. As a final point, note that the methods we describe do not replace an existing language model, but rather complement it. The existing language model has the benefit that it can be trained on a large amount of text that does not have speech transcriptions. It has the disadvantage of not being a discriminative model. The new language model is trained on the speech transcriptions, meaning that it has less training data, but that it has the advantage of discriminative training – and in particular, the advantage of being able to learn negative evidence in the form of negative weights on n-grams which are rarely or never seen in natural language text (e.g., “the of”), but are produced too frequently by the recognizer. The methods we describe combines the two language models, allowing them to complement each other. References Cyril Allauzen, Mehryar Mohri, and Brian Roark. 2003. Generalized algorithms for constructing language models. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 40–47. Satish Balay, William D. Gropp, Lois Curfman McInnes, and Barry F. Smith. 2002. Petsc users manual. Technical Report ANL-95/11Revision 2.1.2, Argonne National Laboratory. Satanjeev Banerjee, Jack Mostow, Joseph Beck, and Wilson Tam. 2003. Improving language models by learning from speech recognition errors in a reading tutor that listens. In Proceedings of the Second International Conference on Applied Artificial Intelligence, Fort Panhala, Kolhapur, India. Steven J. Benson and Jorge J. Mor´e. 2002. A limited memory variable metric method for bound constrained minimization. Preprint ANL/ACSP909-0901, Argonne National Laboratory. Steven J. Benson, Lois Curfman McInnes, Jorge J. Mor´e, and Jason Sarich. 2002. Tao users manual. Technical Report ANL/MCS-TM242-Revision 1.4, Argonne National Laboratory. Zheng Chen, Kai-Fu Lee, and Ming Jing Li. 2000. Discriminative training on language model. In Proceedings of the Sixth International Conference on Spoken Language Processing (ICSLP), Beijing, China. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1–8. Michael Collins. 2004. 
Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods. In Harry Bunt, John Carroll, and Giorgio Satta, editors, New Developments in Parsing Technology. Kluwer. Yoav Freund and Robert Schapire. 1999. Large margin classification using the perceptron algorithm. Machine Learning, 3(37):277–296. Frederick Jelinek. 1995. Acoustic sensitive language modeling. Technical report, Center for Language and Speech Processing, Johns Hopkins University, Baltimore, MD. Mark Johnson, Stuart Geman, Steven Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic “unification-based” grammars. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 535–541. Sanjeev Khudanpur and Jun Wu. 2000. Maximum entropy techniques for exploiting syntactic, semantic and collocational dependencies in language modeling. Computer Speech and Language, 14(4):355– 372. Hong-Kwang Jeff Kuo, Eric Fosler-Lussier, Hui Jiang, and ChinHui Lee. 2002. Discriminative training of language models for speech recognition. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Orlando, Florida. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML, pages 282–289, Williams College, Williamstown, MA, USA. Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proc. CoNLL, pages 49–55. Andrew McCallum and Wei Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proc. CoNLL. Mehryar Mohri, Fernando C. N. Pereira, and Michael Riley. 2002. Weighted finite-state transducers in speech recognition. Computer Speech and Language, 16(1):69–88. NIST. 2000. Speech recognition scoring toolkit (sctk) version 1.2c. Available at http://www.nist.gov/speech/tools. David Pinto, Andrew McCallum, Xing Wei, and W. Bruce Croft. 2003. Table extraction using conditional random fields. In Proc. ACM SIGIR. Adwait Ratnaparkhi, Salim Roukos, and R. Todd Ward. 1994. A maximum entropy model for parsing. In Proceedings of the International Conference on Spoken Language Processing (ICSLP), pages 803–806. Brian Roark, Murat Saraclar, and Michael Collins. 2004. Corrective language modeling for large vocabulary ASR with the perceptron algorithm. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 749–752. Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In Proc. HLT-NAACL, Edmonton, Canada. A. Stolcke and M. Weintraub. 1998. Discriminitive language modeling. In Proceedings of the 9th Hub-5 Conversational Speech Recognition Workshop. A. Stolcke, H. Bratt, J. Butzberger, H. Franco, V. R. Rao Gadde, M. Plauche, C. Richey, E. Shriberg, K. Sonmez, F. Weng, and J. Zheng. 2000. The SRI March 2000 Hub-5 conversational speech transcription system. In Proceedings of the NIST Speech Transcription Workshop. Hanna Wallach. 2002. Efficient training of conditional random fields. Master’s thesis, University of Edinburgh. P.C. Woodland and D. Povey. 2000. Large scale discriminative training for speech recognition. In Proc. ISCA ITRW ASR2000, pages 7–16.
2004
7
An alternative method of training probabilistic LR parsers Mark-Jan Nederhof Faculty of Arts University of Groningen P.O. Box 716 NL-9700 AS Groningen The Netherlands [email protected] Giorgio Satta Dept. of Information Engineering University of Padua via Gradenigo, 6/A I-35131 Padova Italy [email protected] Abstract We discuss existing approaches to train LR parsers, which have been used for statistical resolution of structural ambiguity. These approaches are nonoptimal, in the sense that a collection of probability distributions cannot be obtained. In particular, some probability distributions expressible in terms of a context-free grammar cannot be expressed in terms of the LR parser constructed from that grammar, under the restrictions of the existing approaches to training of LR parsers. We present an alternative way of training that is provably optimal, and that allows all probability distributions expressible in the context-free grammar to be carried over to the LR parser. We also demonstrate empirically that this kind of training can be effectively applied on a large treebank. 1 Introduction The LR parsing strategy was originally devised for programming languages (Sippu and SoisalonSoininen, 1990), but has been used in a wide range of other areas as well, such as for natural language processing (Lavie and Tomita, 1993; Briscoe and Carroll, 1993; Ruland, 2000). The main difference between the application to programming languages and the application to natural languages is that in the latter case the parsers should be nondeterministic, in order to deal with ambiguous context-free grammars (CFGs). Nondeterminism can be handled in a number of ways, but the most efficient is tabulation, which allows processing in polynomial time. Tabular LR parsing is known from the work by (Tomita, 1986), but can also be achieved by the generic tabulation technique due to (Lang, 1974; Billot and Lang, 1989), which assumes an input pushdown transducer (PDT). In this context, the LR parsing strategy can be seen as a particular mapping from context-free grammars to PDTs. The acronym ‘LR’ stands for ‘Left-to-right processing of the input, producing a Right-most derivation (in reverse)’. When we construct a PDT A from a CFG G by the LR parsing strategy and apply it on an input sentence, then the set of output strings of A represents the set of all right-most derivations that G allows for that sentence. Such an output string enumerates the rules (or labels that identify the rules uniquely) that occur in the corresponding right-most derivation, in reversed order. If LR parsers do not use lookahead to decide between alternative transitions, they are called LR(0) parsers. More generally, if LR parsers look ahead k symbols, they are called LR(k) parsers; some simplified LR parsing models that use lookahead are called SLR(k) and LALR(k) parsing (Sippu and Soisalon-Soininen, 1990). In order to simplify the discussion, we abstain from using lookahead in this article, and ‘LR parsing’ can further be read as ‘LR(0) parsing’. We would like to point out however that our observations carry over to LR parsing with lookahead. The theory of probabilistic pushdown automata (Santos, 1972) can be easily applied to LR parsing. A probability is then assigned to each transition, by a function that we will call the probability function pA, and the probability of an accepting computation of A is the product of the probabilities of the applied transitions. 
As each accepting computation produces a right-most derivation as output string, a probabilistic LR parser defines a probability distribution on the set of parses, and thereby also a probability distribution on the set of sentences generated by grammar G. Disambiguation of an ambiguous sentence can be achieved on the basis of a comparison between the probabilities assigned to the respective parses by the probabilistic LR model. The probability function can be obtained on the basis of a treebank, as proposed by (Briscoe and Carroll, 1993) (see also (Su et al., 1991)). The model by (Briscoe and Carroll, 1993) however incorporated a mistake involving lookahead, which was corrected by (Inui et al., 2000). As we will not discuss lookahead here, this matter does not play a significant role in the current study. Noteworthy is that (Sornlertlamvanich et al., 1999) showed empirically that an LR parser may be more accurate than the original CFG, if both are trained on the basis of the same treebank. In other words, the resulting probability function pA on transitions of the PDT allows better disambiguation than the corresponding function pG on rules of the original grammar. A plausible explanation of this is that stack symbols of an LR parser encode some amount of left context, i.e. information on rules applied earlier, so that the probability function on transitions may encode dependencies between rules that cannot be encoded in terms of the original CFG extended with rule probabilities. The explicit use of left context in probabilistic context-free models was investigated by e.g. (Chitrao and Grishman, 1990; Johnson, 1998), who also demonstrated that this may significantly improve accuracy. Note that the probability distributions of language may be beyond the reach of a given context-free grammar, as pointed out by e.g. (Collins, 2001). Therefore, the use of left context, and the resulting increase in the number of parameters of the model, may narrow the gap between the given grammar and ill-understood mechanisms underlying actual language. One important assumption that is made by (Briscoe and Carroll, 1993) and (Inui et al., 2000) is that trained probabilistic LR parsers should be proper, i.e. if several transitions are applicable for a given stack, then the sum of probabilities assigned to those transitions by probability function pA should be 1. This assumption may be motivated by pragmatic considerations, as such a proper model is easy to train by relative frequency estimation: count the number of times a transition is applied with respect to a treebank, and divide it by the number of times the relevant stack symbol (or pair of stack symbols) occurs at the top of the stack. Let us call the resulting probability function prfe. This function is provably optimal in the sense that the likelihood it assigns to the training corpus is maximal among all probability functions pA that are proper in the above sense. However, properness restricts the space of probability distributions that a PDT allows. This means that a (consistent) probability function pA may exist that is not proper and that assigns a higher likelihood to the training corpus than prfe does. (By ‘consistent’ we mean that the probabilities of all strings that are accepted sum to 1.) 
It may even be the case that a (proper and consistent) probability function pG on the rules of the input grammar G exists that assigns a higher likelihood to the corpus than prfe, and therefore it is not guaranteed that LR parsers allow better probability estimates than the CFGs from which they were constructed, if we constrain probability functions pA to be proper. In this respect, LR parsing differs from at least one other well-known parsing strategy, viz. left-corner parsing. See (Nederhof and Satta, 2004) for a discussion of a property that is shared by left-corner parsing but not by LR parsing, and which explains the above difference. As main contribution of this paper we establish that this restriction on expressible probability distributions can be dispensed with, without losing the ability to perform training by relative frequency estimation. What comes in place of properness is reverse-properness, which can be seen as properness of the reversed pushdown automaton that processes input from right to left instead of from left to right, interpreting the transitions of A backwards. As we will show, reverse-properness does not restrict the space of probability distributions expressible by an LR automaton. More precisely, assume some probability distribution on the set of derivations is specified by a probability function pA on transitions of PDT A that realizes the LR strategy for a given grammar G. Then the same probability distribution can be specified by an alternative such function p′ A that is reverse-proper. In addition, for each probability distribution on derivations expressible by a probability function pG for G, there is a reverse-proper probability function pA for A that expresses the same probability distribution. Thereby we ensure that LR parsers become at least as powerful as the original CFGs in terms of allowable probability distributions. This article is organized as follows. In Section 2 we outline our formalization of LR parsing as a construction of PDTs from CFGs, making some superficial changes with respect to standard formulations. Properness and reverse-properness are discussed in Section 3, where we will show that reverse-properness does not restrict the space of probability distributions. Section 4 reports on experiments, and Section 5 concludes this article. 2 LR parsing As LR parsing has been extensively treated in existing literature, we merely recapitulate the main definitions here. For more explanation, the reader is referred to standard literature such as (Harrison, 1978; Sippu and Soisalon-Soininen, 1990). An LR parser is constructed on the basis of a CFG that is augmented with an additional rule S† →⊢S, where S is the former start symbol, and the new nonterminal S† becomes the start symbol of the augmented grammar. The new terminal ⊢acts as an imaginary start-of-sentence marker. We denote the set of terminals by Σ and the set of nonterminals by N. We assume each rule has a unique label r. As explained before, we construct LR parsers as pushdown transducers. The main stack symbols of these automata are sets of dotted rules, which consist of rules from the augmented grammar with a distinguished position in the right-hand side indicated by a dot ‘•’. The initial stack symbol is pinit = {S† →⊢• S}. We define the closure of a set p of dotted rules as the smallest set closure(p) such that: 1. p ⊆closure(p); and 2. for (B →α • Aβ) ∈closure(p) and A → γ a rule in the grammar, also (A →• γ) ∈ closure(p). 
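A small sketch of this closure computation, together with the goto operation and the construction of the LR state set that are defined in the next few paragraphs. The grammar is represented here as a dict from nonterminal to a list of right-hand sides, and dotted rules as (lhs, rhs, dot) triples; this representation is ours, chosen for illustration, and is not the paper's implementation.

def closure(items, grammar, nonterminals):
    """Closure of a set of dotted rules: whenever the dot stands before a
    nonterminal A, add (A -> . gamma) for every rule A -> gamma, and repeat
    until nothing new can be added."""
    result = set(items)
    agenda = list(items)
    while agenda:
        lhs, rhs, dot = agenda.pop()
        if dot < len(rhs) and rhs[dot] in nonterminals:
            a = rhs[dot]
            for gamma in grammar.get(a, []):
                item = (a, tuple(gamma), 0)
                if item not in result:
                    result.add(item)
                    agenda.append(item)
    return frozenset(result)

def goto(items, symbol, grammar, nonterminals):
    """goto(p, X): advance the dot over X in every item of closure(p) whose
    dot stands directly before X."""
    return frozenset((lhs, rhs, dot + 1)
                     for lhs, rhs, dot in closure(items, grammar, nonterminals)
                     if dot < len(rhs) and rhs[dot] == symbol)

def lr_states(grammar, nonterminals, symbols, start_item):
    """Smallest set of LR states containing p_init = {S† -> ⊢ • S} (the dot
    after ⊢) and closed under non-empty goto images."""
    p_init = frozenset([start_item])
    states, agenda = {p_init}, [p_init]
    while agenda:
        p = agenda.pop()
        for x in symbols:                  # all terminals and nonterminals
            q = goto(p, x, grammar, nonterminals)
            if q and q not in states:
                states.add(q)
                agenda.append(q)
    return states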
We define the operation goto on a set p of dotted rules and a grammar symbol X ∈Σ ∪N as: goto(p, X) = {A →αX • β | (A →α • Xβ) ∈closure(p)} The set of LR states is the smallest set such that: 1. pinit is an LR state; and 2. if p is an LR state and goto(p, X) = q ̸= ∅, for some X ∈Σ ∪N, then q is an LR state. We will assume that PDTs consist of three types of transitions, of the form P a,b 7→P Q (a push transition), of the form P a,b 7→Q (a swap transition), and of the form P Q a,b 7→R (a pop transition). Here P, Q and R are stack symbols, a is one input terminal or is the empty string ε, and b is one output terminal or is the empty string ε. In our notation, stacks grow from left to right, so that P a,b 7→P Q means that Q is pushed on top of P. We do not have internal states next to stack symbols. For the PDT that implements the LR strategy, the stack symbols are the LR states, plus symbols of the form [p; X], where p is an LR state and X is a grammar symbol, and symbols of the form (p, A, m), where p is an LR state, A is the left-hand side of some rule, and m is the length of some prefix of the right-hand side of that rule. More explanation on these additional stack symbols will be given below. The stack symbols and transitions are simultaneously defined in Figure 1. The final stack symbol is pfinal = (pinit, S†, 0). This means that an input a1 · · · an is accepted if and only if it is entirely read by a sequence of transitions that take the stack consisting only of pinit to the stack consisting only of pfinal. The computed output consists of the string of terminals b1 · · · bn′ from the output components of the applied transitions. For the PDTs that we will use, this output string will consist of a sequence of rule labels expressing a right-most derivation of the input. On the basis of the original grammar, the corresponding parse tree can be constructed from such an output string. There are a few superficial differences with LR parsing as it is commonly found in the literature. The most obvious difference is that we divide reductions into ‘binary’ steps. The main reason is that this allows tabular interpretation with a time complexity cubic in the length of the input. Otherwise, the time complexity would be O(nm+1), where m is the length of the longest right-hand side of a rule in the CFG. This observation was made before by (Kipps, 1991), who proposed a solution similar to ours, albeit formulated differently. See also a related formulation of tabular LR parsing by (Nederhof and Satta, 1996). To be more specific, instead of one step of the PDT taking stack: σp0p1 · · · pm immediately to stack: σp0q where (A →X1 · · · Xm •) ∈pm, σ is a string of stack symbols and goto(p0, A) = q, we have a number of smaller steps leading to a series of stacks: σp0p1 · · · pm−1pm σp0p1 · · · pm−1(A, m−1) σp0p1 · · · (A, m−2) ... σp0(A, 0) σp0q There are two additional differences. First, we want to avoid steps of the form: σp0(A, 0) σp0q by transitions p0 (A, 0) ε,ε 7→p0 q, as such transitions complicate the generic definition of ‘properness’ for PDTs, to be discussed in the following section. For this reason, we use stack symbols of the form [p; X] next to p, and split up p0 (A, 0) ε,ε 7→p0 q into pop [p0; X0] (A, 0) ε,ε 7→[p0; A] and push [p0; A] ε,ε 7→ [p0; A] q. This is a harmless modification, which increases the number of steps in any computation by at most a factor 2. Secondly, we use stack symbols of the form (p, A, m) instead of (A, m). 
This concerns the conditions of reverse-properness to be discussed in the • For LR state p and a ∈Σ such that goto(p, a) ̸= ∅: p a,ε 7→[p; a] (1) • For LR state p and (A →•) ∈p, where A →ε has label r: p ε,r 7→[p; A] (2) • For LR state p and (A →α •) ∈p, where |α| = m > 0 and A →α has label r: p ε,r 7→(p, A, m −1) (3) • For LR state p and (A →α • Xβ) ∈p, where |α| = m > 0, such that goto(p, X) = q ̸= ∅: [p; X] (q, A, m) ε,ε 7→(p, A, m −1) (4) • For LR state p and (A →• Xβ) ∈p, such that goto(p, X) = q ̸= ∅: [p; X] (q, A, 0) ε,ε 7→[p; A] (5) • For LR state p and X ∈Σ ∪N such that goto(p, X) = q ̸= ∅: [p; X] ε,ε 7→[p; X] q (6) Figure 1: The transitions of a PDT implementing LR(0) parsing. following section. By this condition, we consider LR parsing as being performed from right to left, so backwards with regard to the normal processing order. If we were to omit the first components p from stack symbols (p, A, m), we may obtain ‘dead ends’ in the computation. We know that such dead ends make a (reverse-)proper PDT inconsistent, as probability mass lost in dead ends causes the sum of probabilities of all computations to be strictly smaller than 1. (See also (Nederhof and Satta, 2004).) It is interesting to note that the addition of the components p to stack symbols (p, A, m) does not increase the number of transitions, and the nature of LR parsing in the normal processing order from left to right is preserved. With all these changes together, reductions are implemented by transitions resulting in the following sequence of stacks: σ′[p0; X0][p1; X1] · · · [pm−1; Xm−1]pm σ′[p0; X0][p1; X1] · · · [pm−1; Xm−1](pm, A, m−1) σ′[p0; X0][p1; X1] · · · (pm−1, A, m−2) ... σ′[p0; X0](p1, A, 0) σ′[p0; A] σ′[p0; A]q Please note that transitions of the form [p; X] (q, A, m) ε,ε 7→ (p, A, m −1) may correspond to several dotted rules (A →α • Xβ) ∈p, with different α of length m and different β. If we were to multiply such transitions for different α and β, the PDT would become prohibitively large. 3 Properness and reverse-properness If a PDT is regarded to process input from left to right, starting with a stack consisting only of pinit, and ending in a stack consisting only of pfinal, then it seems reasonable to cast this process into a probabilistic framework in such a way that the sum of probabilities of all choices that are possible at any given moment is 1. This is similar to how the notion of ‘properness’ is defined for probabilistic contextfree grammars (PCFGs); we say a PCFG is proper if for each nonterminal A, the probabilities of all rules with left-hand side A sum to 1. Properness for PCFGs does not restrict the space of probability distributions on the set of parse trees. In other words, if a probability distribution can be defined by attaching probabilities to rules, then we may reassign the probabilities such that that PCFG becomes proper, while preserving the probability distribution. This even holds if the input grammar is non-tight, meaning that probability mass is lost in ‘infinite derivations’ (S´anchez and Bened´ı, 1997; Chi and Geman, 1998; Chi, 1999; Nederhof and Satta, 2003). Although CFGs and PDTs are weakly equivalent, they behave very differently when they are extended with probabilities. In particular, there seems to be no notion similar to PCFG properness that can be imposed on all types of PDTs without losing generality. Below we will discuss two constraints, which we will call properness and reverseproperness. 
Neither of these is suitable for all types of PDTs, but as we will show, the second is more suitable for probabilistic LR parsing than the first. This is surprising, as only properness has been described in existing literature on probabilistic PDTs (PPDTs). In particular, all existing approaches to probabilistic LR parsing have assumed properness rather than anything related to reverse-properness. For properness we have to assume that for each stack symbol P, we either have one or more transitions of the form P a,b 7→P Q or P a,b 7→Q, or one or more transitions of the form Q P a,b 7→R, but no combination thereof. In the first case, properness demands that the sum of probabilities of all transitions P a,b 7→P Q and P a,b 7→Q is 1, and in the second case properness demands that the sum of probabilities of all transitions Q P a,b 7→R is 1 for each Q. Note that our assumption above is without loss of generality, as we may introduce swap transitions P ε,ε 7→P1 and P ε,ε 7→P2, where P1 and P2 are new stack symbols, and replace transitions P a,b 7→P Q and P a,b 7→Q by P1 a,b 7→P1 Q and P1 a,b 7→Q, and replace transitions Q P a,b 7→R by Q P2 a,b 7→R. The notion of properness underlies the normal training process for PDTs, as follows. We assume a corpus of PDT computations. In these computations, we count the number of occurrences for each transition. For each P we sum the total number of all occurrences of transitions P a,b 7→P Q or P a,b 7→Q. The probability of, say, a transition P a,b 7→P Q is now estimated by dividing the number of occurrences thereof in the corpus by the above total number of occurrences of transitions with P in the lefthand side. Similarly, for each pair (Q, P) we sum the total number of occurrences of all transitions of the form Q P a,b 7→R, and thereby estimate the probability of a particular transition Q P a,b 7→R by relative frequency estimation. The resulting PPDT is proper. It has been shown that imposing properness is without loss of generality in the case of PDTs constructed by a wide range of parsing strategies, among which are top-down parsing and left-corner parsing. This does not hold for PDTs constructed by the LR parsing strategy however, and in fact, properness for such automata may reduce the expressive power in terms of available probability distributions to strictly less than that offered by the original CFG. This was formally proven by (Nederhof and Satta, 2004), after (Ng and Tomita, 1991) and (Wright and Wrigley, 1991) had already suggested that creating a probabilistic LR parser that is equivalent to an input PCFG is difficult in general. The same difficulty for ELR parsing was suggested by (Tendeau, 1997). For this reason, we investigate a practical alternative, viz. reverse-properness. Now we have to assume that for each stack symbol R, we either have one or more transitions of the form P a,b 7→R or Q P a,b 7→R, or one or more transitions of the form P a,b 7→P R, but no combination thereof. In the first case, reverse-properness demands that the sum of probabilities of all transitions P a,b 7→R or Q P a,b 7→R is 1, and in the second case reverse-properness demands that the sum of probabilities of transitions P a,b 7→P R is 1 for each P. Again, our assumption above is without loss of generality. In order to apply relative frequency estimation, we now sum the total number of occurrences of transitions P a,b 7→R or Q P a,b 7→R for each R, and we sum the total number of occurrences of transitions P a,b 7→P R for each pair (P, R). 
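The two regimes thus differ only in how transition counts are grouped before relative frequency estimation. A sketch, with transitions encoded as tuples of our own devising, with input/output labels omitted: ('push', P, Q) for a push transition that puts Q on top of P, ('swap', P, Q) for a swap, and ('pop', Q, P, R) for a pop.

from collections import defaultdict

def relative_frequency(counts, group_key):
    """Divide each transition count by the total count of its normalization
    group; the choice of group_key decides which notion of properness the
    resulting probability function satisfies."""
    totals = defaultdict(float)
    for trans, c in counts.items():
        totals[group_key(trans)] += c
    return {trans: c / totals[group_key(trans)] for trans, c in counts.items()}

def proper_group(trans):
    # properness: group push and swap transitions by the top symbol P,
    # and pop transitions by the pair (Q, P)
    kind = trans[0]
    return (kind, trans[1], trans[2]) if kind == 'pop' else ('top', trans[1])

def reverse_proper_group(trans):
    # reverse-properness: group push transitions by the pair (P, R),
    # and swap and pop transitions together by the resulting symbol R
    kind = trans[0]
    if kind == 'push':
        return ('push', trans[1], trans[2])
    return ('result', trans[-1])

For example, relative_frequency(counts, reverse_proper_group) turns the same table of transition counts into the reverse-proper estimate, which corresponds to the relative frequency estimation just described.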
We now prove that reverse-properness does not restrict the space of probability distributions, by means of the construction of a ‘cover’ grammar from an input CFG, as reported in Figure 2. This cover CFG has almost the same structure as the PDT resulting from Figure 1. Rules and transitions almost stand in a one-to-one relation. The only noteworthy difference is between transitions of type (6) and rules of type (12). The right-hand sides of those rules can be ε because the corresponding transitions are deterministic if seen from right to left. Now it becomes clear why we needed the components p in stack symbols of the form (p, A, m). Without it, one could obtain an LR state q that does not match the underlying [p; X] in a reversed computation. We may assume without loss of generality that rules of type (12) are assigned probability 1, as a probability other than 1 could be moved to corresponding rules of types (10) or (11) where state q was introduced. In the same way, we may assume that transitions of type (6) are assigned probability 1. After making these assumptions, we obtain a bijection between probability functions pA for the PDT and probability functions pG for the cover CFG. As was shown by e.g. (Chi, 1999) and (Nederhof and Satta, 2003), properness for CFGs does not restrict the space of probability distributions, and thereby the same holds for reverse-properness for PDTs that implement the LR parsing strategy. It is now also clear that a reverse-proper LR parser can describe any probability distribution that the original CFG can. The proof is as follows. Given a probability function pG for the input CFG, we define a probability function pA for the LR parser, by letting transitions of types (2) and (3) • For LR state p and a ∈Σ such that goto(p, a) ̸= ∅: [p; a] →p (7) • For LR state p and (A →•) ∈p, where A →ε has label r: [p; A] →p r (8) • For LR state p and (A →α •) ∈p, where |α| = m > 0 and A →α has label r: (p, A, m −1) →p r (9) • For LR state p and (A →α • Xβ) ∈p, where |α| = m > 0, such that goto(p, X) = q ̸= ∅: (p, A, m −1) →[p; X] (q, A, m) (10) • For LR state p and (A →• Xβ) ∈p, such that goto(p, X) = q ̸= ∅: [p; A] →[p; X] (q, A, 0) (11) • For LR state q: q →ε (12) Figure 2: A grammar that describes the set of computations of the LR(0) parser. Start symbol is pfinal = (pinit, S†, 0). Terminals are rule labels. Generated language consists of right-most derivations in reverse. have probability pG(r), and letting all other transitions have probability 1. This gives us the required probability distribution in terms of a PPDT that is not reverse-proper in general. This PPDT can now be recast into reverse-proper form, as proven by the above. 4 Experiments We have implemented both the traditional training method for LR parsing and the novel one, and have compared their performance, with two concrete objectives: 1. We show that the number of free parameters is significantly larger with the new training method. (The number of free parameters is the number of probabilities of transitions that can be freely chosen within the constraints of properness or reverse-properness.) 2. The larger number of free parameters does not make the problem of sparse data any worse, and precision and recall are at least comparable to, if not better than, what we would obtain with the established method. The experiments were performed on the Wall Street Journal (WSJ) corpus, from the Penn Treebank, version II. 
Training was done on sections 02-21, i.e., first a context-free grammar was derived from the ‘stubs’ of the combined trees, taking parts of speech as leaves of the trees, omitting all affixes from the nonterminal names, and removing ε-generating subtrees. Such preprocessing of the WSJ corpus is consistent with earlier attempts to derive CFGs from that corpus, as e.g. by (Johnson, 1998). The obtained CFG has 10,035 rules. The dimensions of the LR parser constructed from this grammar are given in Table 1.

total # transitions    8,340,315
# push transitions       753,224
# swap transitions       589,811
# pop transitions      6,997,280
Table 1: Dimensions of PDT implementing LR strategy for CFG derived from WSJ, sect. 02-21.

The PDT was then trained on the trees from the same sections 02-21, to determine the number of times that transitions are used. At first sight it is not clear how to determine this on the basis of the treebank, as the structure of LR parsers is very different from the structure of the grammars from which they are constructed. The solution is to construct a second PDT from the PDT to be trained, replacing each transition α a,b 7→β with label r by transition α b,r 7→β. By this second PDT we parse the treebank, encoded as a series of right-most derivations in reverse.1 For each input string, there is exactly one parse, of which the output is the list of used transitions. The same method can be used for other parsing strategies as well, such as left-corner parsing, replacing right-most derivations by a suitable alternative representation of parse trees. By the counts of occurrences of transitions, we may then perform maximum likelihood estimation to obtain probabilities for transitions. This can be done under the constraints of properness or of reverse-properness, as explained in the previous section. We have not applied any form of smoothing or back-off, as this could obscure properties inherent in the difference between the two discussed training methods. (Back-off for probabilistic LR parsing has been proposed by (Ruland, 2000).) All transitions that were not seen during training were given probability 0.
1 We have observed an enormous gain in computational efficiency when we also incorporate the ‘shifts’ next to ‘reductions’ in these right-most derivations, as this eliminates a considerable amount of nondeterminism.
The results are outlined in Table 2.

                          proper   rev.-prop.
# free parameters        577,650    6,589,716
# non-zero probabilities 137,134      137,134
labelled precision         0.772        0.777
labelled recall            0.747        0.749
Table 2: The two methods of training, based on properness and reverse-properness.

Note that the number of free parameters in the case of reverse-properness is much larger than in the case of normal properness. Despite this, the number of transitions that actually receive non-zero probabilities is (predictably) identical in both cases, viz. 137,134. However, the potential for fine-grained probability estimates and for smoothing and parameter-tying techniques is clearly greater in the case of reverse-properness. That in both cases the number of non-zero probabilities is lower than the total number of parameters can be explained as follows. First, the treebank contains many rules that occur a small number of times. Secondly, the LR automaton is much larger than the CFG; in general, the size of an LR automaton is bounded by a function that is exponential in the size of the input CFG.
Therefore, if we use the same treebank to estimate the probability function, then many transitions are never visited and obtain a zero probability. We have applied the two trained LR automata on section 22 of the WSJ corpus, measuring labelled precision and recall, as done by e.g. (Johnson, 1998).2 We observe that in the case of reverseproperness, precision and recall are slightly better. 2We excluded all sentences with more than 30 words however, as some required prohibitive amounts of memory. Only one of the remaining 1441 sentences was not accepted by the parser. The most important conclusion that can be drawn from this is that the substantially larger space of obtainable probability distributions offered by the reverse-properness method does not come at the expense of a degradation of accuracy for large grammars such as those derived from the WSJ. For comparison, with a standard PCFG we obtain labelled precision and recall of 0.725 and 0.670, respectively.3 We would like to stress that our experiments did not have as main objective the improvement of state-of-the-art parsers, which can certainly not be done without much additional fine-tuning and the incorporation of some form of lexicalization. Our main objectives concerned the relation between our newly proposed training method for LR parsers and the traditional one. 5 Conclusions We have presented a novel way of assigning probabilities to transitions of an LR automaton. Theoretical analysis and empirical data reveal the following. • The efficiency of LR parsing remains unaffected. Although a right-to-left order of reading input underlies the novel training method, we may continue to apply the parser from left to right, and benefit from the favourable computational properties of LR parsing. • The available space of probability distributions is significantly larger than in the case of the methods published before. In terms of the number of free parameters, the difference that we found empirically exceeds one order of magnitude. By the same criteria, we can now guarantee that LR parsers are at least as powerful as the CFGs from which they are constructed. • Despite the larger number of free parameters, no increase of sparse data problems was observed, and in fact there was a small increase in accuracy. Acknowledgements Helpful comments from John Carroll and anonymous reviewers are gratefully acknowledged. The first author is supported by the PIONIER Project Algorithms for Linguistic Processing, funded by NWO (Dutch Organization for Scientific Research). The second author is partially supported by MIUR under project PRIN No. 2003091149 005. 3In this case, all 1441 sentences were accepted. References S. Billot and B. Lang. 1989. The structure of shared forests in ambiguous parsing. In 27th Annual Meeting of the Association for Computational Linguistics, pages 143–151, Vancouver, British Columbia, Canada, June. T. Briscoe and J. Carroll. 1993. Generalized probabilistic LR parsing of natural language (corpora) with unification-based grammars. Computational Linguistics, 19(1):25–59. Z. Chi and S. Geman. 1998. Estimation of probabilistic context-free grammars. Computational Linguistics, 24(2):299–305. Z. Chi. 1999. Statistical properties of probabilistic context-free grammars. Computational Linguistics, 25(1):131–160. M.V. Chitrao and R. Grishman. 1990. Statistical parsing of messages. In Speech and Natural Language, Proceedings, pages 263–266, Hidden Valley, Pennsylvania, June. M. Collins. 2001. 
Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods. In Proceedings of the Seventh International Workshop on Parsing Technologies, Beijing, China, October. M.A. Harrison. 1978. Introduction to Formal Language Theory. Addison-Wesley. K. Inui, V. Sornlertlamvanich, H. Tanaka, and T. Tokunaga. 2000. Probabilistic GLR parsing. In H. Bunt and A. Nijholt, editors, Advances in Probabilistic and other Parsing Technologies, chapter 5, pages 85–104. Kluwer Academic Publishers. M. Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24(4):613–632. J.R. Kipps. 1991. GLR parsing in time O(n3). In M. Tomita, editor, Generalized LR Parsing, chapter 4, pages 43–59. Kluwer Academic Publishers. B. Lang. 1974. Deterministic techniques for efficient non-deterministic parsers. In Automata, Languages and Programming, 2nd Colloquium, volume 14 of Lecture Notes in Computer Science, pages 255–269, Saarbr¨ucken. Springer-Verlag. A. Lavie and M. Tomita. 1993. GLR∗– an efficient noise-skipping parsing algorithm for context free grammars. In Third International Workshop on Parsing Technologies, pages 123–134, Tilburg (The Netherlands) and Durbuy (Belgium), August. M.-J. Nederhof and G. Satta. 1996. Efficient tabular LR parsing. In 34th Annual Meeting of the Association for Computational Linguistics, pages 239–246, Santa Cruz, California, USA, June. M.-J. Nederhof and G. Satta. 2003. Probabilistic parsing as intersection. In 8th International Workshop on Parsing Technologies, pages 137– 148, LORIA, Nancy, France, April. M.-J. Nederhof and G. Satta. 2004. Probabilistic parsing strategies. In 42nd Annual Meeting of the Association for Computational Linguistics, Barcelona, Spain, July. S.-K. Ng and M. Tomita. 1991. Probabilistic LR parsing for general context-free grammars. In Proc. of the Second International Workshop on Parsing Technologies, pages 154–163, Cancun, Mexico, February. T. Ruland. 2000. A context-sensitive model for probabilistic LR parsing of spoken language with transformation-based postprocessing. In The 18th International Conference on Computational Linguistics, volume 2, pages 677–683, Saarbr¨ucken, Germany, July–August. J.-A. S´anchez and J.-M. Bened´ı. 1997. Consistency of stochastic context-free grammars from probabilistic estimation based on growth transformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(9):1052–1055, September. E.S. Santos. 1972. Probabilistic grammars and automata. Information and Control, 21:27–47. S. Sippu and E. Soisalon-Soininen. 1990. Parsing Theory, Vol. II: LR(k) and LL(k) Parsing, volume 20 of EATCS Monographs on Theoretical Computer Science. Springer-Verlag. V. Sornlertlamvanich, K. Inui, H. Tanaka, T. Tokunaga, and T. Takezawa. 1999. Empirical support for new probabilistic generalized LR parsing. Journal of Natural Language Processing, 6(3):3–22. K.-Y. Su, J.-N. Wang, M.-H. Su, and J.-S. Chang. 1991. GLR parsing with scoring. In M. Tomita, editor, Generalized LR Parsing, chapter 7, pages 93–112. Kluwer Academic Publishers. F. Tendeau. 1997. Analyse syntaxique et s´emantique avec ´evaluation d’attributs dans un demi-anneau. Ph.D. thesis, University of Orl´eans. M. Tomita. 1986. Efficient Parsing for Natural Language. Kluwer Academic Publishers. J.H. Wright and E.N. Wrigley. 1991. GLR parsing with probability. In M. Tomita, editor, Generalized LR Parsing, chapter 8, pages 113–128. Kluwer Academic Publishers.
2004
70
Wrapping of Trees James Rogers Department of Computer Science Earlham College Richmond, IN 47374, USA [email protected] Abstract We explore the descriptive power, in terms of syntactic phenomena, of a formalism that extends TreeAdjoining Grammar (TAG) by adding a fourth level of hierarchical decomposition to the three levels TAG already employs. While extending the descriptive power minimally, the additional level of decomposition allows us to obtain a uniform account of a range of phenomena that has heretofore been difficult to encompass, an account that employs unitary elementary structures and eschews synchronized derivation operations, and which is, in many respects, closer to the spirit of the intuitions underlying TAG-based linguistic theory than previously considered extensions to TAG. 1 Introduction Tree-Adjoining Grammar (TAG) (Joshi and Schabes, 1997; Joshi et al., 1975) is a grammar formalism which comes with a well-developed theory of natural language syntax (Frank, 2002; Frank, 1992; Kroch and Joshi, 1985). There are, however, a number of constructions, many in the core of language, which present difficulties for the linguistic underpinnings of TAG systems, although not necessarily for the implemented systems themselves. Most of these involve the combining of trees in ways that are more complicated than the simple embedding provided by the tree-adjunction operation. The most widely studied way of addressing these constructions within TAG-based linguistic theory (Kroch and Joshi, 1987; Kroch, 1989; Frank, 2002) has been to assume some sort of multi-component adjoining (MCTAG (Weir, 1988)), in which elementary structures are factored into sets of trees that are adjoined simultaneously at multiple points. Depending on the restrictions placed on where this adjoining can occur the effect of such extensions range from no increase in complexity of either the licensed tree sets or the computational complexity of parsing, to substantial increases in both. In this paper we explore these issues within the framework of an extension of TAG that is conservative in the sense that it preserves the unitary nature of the elementary structures and of the adjunction operation and extends the descriptive power minimally. While the paper is organized around particular syntactic phenomena, it is not a study of syntax itself. We make no attempt to provide a comprehensive theory of syntax. In fact, we attempt to simply instantiate the foundations of existing theory (Frank, 2002) in as faithful a way as possible. Our primary focus is the interplay between the linguistic theory and the formal language theory. All of the phenomena we consider can be (and in practice are (Group, 1998)) handled ad hoc with featurestructure based TAG (FTAG, (Vijay-Shanker and Joshi, 1991)). From a practical perspective, the role of the underlying linguistic theory is, at least in part, to insure consistent and comprehensive implementation of ad hoc mechanisms. From a theoretical perspective, the role of the formal language framework is, at least in part, to insure coherent and computationally well-grounded theories. Our overall goal is to find formal systems that are as close as possible to being a direct embodiment of the principles guiding the linguistic theory and which are maximally constrained in their formal and computational complexity. 
2 Hierarchical Decomposition of Strings and Trees

Like many approaches to formalization of natural language syntax, TAG is based on a hierarchical decomposition of strings which is represented by ordered trees. (Figure 1.) These trees are, in essence, graphs representing two relationships—the left-to-right ordering of the structural components of the string and the relationship between a component and its immediate constituents. The distinguishing characteristic of TAG is that it identifies an additional hierarchical decomposition of these trees. This shows up, for instance, when a clause which has the form of a wh-question is embedded as an argument within another clause.

Figure 1: Wh-movement and subj-aux inversion.

Figure 2: Bridge verbs and subj-aux inversion.

In the wh-form (as in the right-hand tree of Figure 1), one of the arguments of the verb is fronted as a wh-word and the inflectional element (does, in this case) precedes the subject. This is generally known in the literature as wh-movement and subj-aux inversion, but TAG does not necessarily assume there is any actual transformational movement involved, only that there is a systematic relationship between the wh-form and the canonical configuration. The t's in the trees mark the position of the corresponding components in the canonical trees.1

1 This systematic relationship between the wh-form and the canonical configuration has been a fundamental component of syntactic theories dating back, at least, to the work of Harris in the '50's.

When such a clause occurs as the argument of a bridge verb (such as think or believe) it is split, with the wh-word appearing to the left of the matrix clause and the rest of the subordinate clause occurring to the right (Figure 2). Standardly, TAG accounts analyze this as insertion of the tree for the matrix clause between the upper and lower portions of the tree for the embedded clause, an operation known as tree-adjunction. In effect, the tree for the embedded clause is wrapped around that of the matrix clause. This process may iterate, with adjunction of arbitrarily many instances of bridge verb trees: Who does Bob believe . . . Carol thinks that Alice likes. One of the key advantages of this approach is that the wh-word is introduced into the derivation within the same elementary structure as the verb it is an argument of. Hence these structures are semantically coherent—they express all and only the structural relationships between the elements of a single functional domain (Frank, 2002). The adjoined structures are similarly coherent and the derivation preserves that coherence at all stages.

Following Rogers (2003) we will represent this by connecting the adjoined tree to the point at which it adjoins via a third, "tree constituency" relation as in the right hand part of Figure 2. This gives us structures that we usually conceptualize as three-dimensional trees, but which can simply be regarded as graphs with three sorts of edges, one for each of the hierarchical relations expressed by the structures.

Figure 3: Raising verbs.
Within this context, tree-adjunction is a process of concatenating these structures, identifying the root of the adjoined structure with the point at which it is adjoined.2 The resulting complex structures are formally equivalent to the derivation trees in standard formalizations of TAG. The derived tree is obtained by concatenating the tree yield of the structure analogously to the way that the string yield of a derivation tree is concatenated to form the derived string of a context-free grammar. Note that in this case it is essential to identify the point in the frontier of each tree component at which the components it dominates will be attached. This point is referred to as the foot of the tree and the path to it from the root is referred to as the (principal) spine of the tree. Here we have marked the spines by doubling the corresponding edges of the graphs.

2 Context-free derivation can be viewed as a similar process of concatenating trees.

Following Rogers (2002), we will treat the subject of the clause as if it were "adjoined" into the rest of the clause at the root of the  . At this point, this is for purely theory-internal reasons—it will allow us to exploit the additional formal power we will shortly bring to bear. It should be noted that it does not represent ordinary adjunction. The subject originates in the same elementary structure as the rest of the clause, it is just a somewhat richer structure than the more standard tree.

3 Raising Verbs and Subj-Aux Inversion

A problem arises, for this account, when the matrix verb is a raising verb, such as seems or appears as in:

Alice seems to like Bob
Who does Alice seem to like

Here the matrix clause and the embedded clause share, in some sense, the same subject argument. (Figure 3.) Raising verbs are distinguished, further, from the control verbs (such as want or promise) in the fact that they may realize their subject as an expletive it: It seems Alice likes Bob. Note, in particular, that in each of these cases the inflection is carried by the matrix clause. In order to maintain semantic coherence, we will assume that the subject originates in the elementary structure of the embedded clause. This, then, interprets the raising verb as taking an   to an  , adjoining at the   between the subject and the inflectional element of the embedded clause (as in the left-hand side of Figure 3). For the declarative form this provides a nesting of the trees similar to that of the bridge verbs; the embedded clause tree is wrapped around that of the matrix clause. For the wh-form, however, the wrapping pattern is more complex. Since who and Alice must originate in the same elementary structure as like, while does must originate in the same elementary structure as seem, the trees evidently must factor and be interleaved as shown in the right-hand side of the figure.

Such a wrapping pattern is not possible in ordinary TAG. The sequences of labels occurring along the spines of TAG tree sets must form context-free languages (Weir, 1988). Hence the "center-embedded" wrapping patterns of the bridge verbs and the declarative form of the raising verbs are possible but the "cross-serial" pattern of the wh-form of the raising verbs is not.

Figure 4: A higher-order account.
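Before turning to the higher-order extension, here is a minimal sketch of the ordinary tree-adjunction operation described above, operating on derived trees. The tree representation, labels, helper names and the toy example are invented purely for illustration (adjoining constraints, multiple adjunction, etc. are ignored); this is not part of the formalism as presented in the paper.

```python
from copy import deepcopy

class Node:
    """An ordered tree node; foot=True marks the foot of an auxiliary tree."""
    def __init__(self, label, children=None, foot=False):
        self.label = label
        self.children = children or []
        self.foot = foot

    def yield_string(self):
        """String yield: concatenate the labels of the leaves, left to right."""
        if not self.children:
            return [self.label]
        return [w for c in self.children for w in c.yield_string()]

def adjoin(tree, address, aux):
    """Adjoin auxiliary tree `aux` at the node reached by `address` (a list of
    child indices).  The subtree rooted there is spliced back in at the foot of
    a copy of `aux`; root and foot of `aux` carry the same label as the site."""
    target = tree
    for i in address[:-1]:
        target = target.children[i]
    site = target.children[address[-1]] if address else tree
    assert aux.label == site.label, "adjunction must preserve the node label"
    aux = deepcopy(aux)
    stack = [aux]                       # find the unique foot node
    while stack:
        n = stack.pop()
        if n.foot:
            n.children = site.children  # identify foot with the displaced node
            n.foot = False
            break
        stack.extend(n.children)
    if address:
        target.children[address[-1]] = aux
        return tree
    return aux

# Wrapping an embedded wh-clause around a bridge-verb auxiliary tree:
embedded = Node("CP", [Node("who"),
                       Node("C'", [Node("IP", [Node("Alice"), Node("likes"), Node("t")])])])
bridge   = Node("C'", [Node("does"), Node("Bob"), Node("believe"), Node("C'", foot=True)])
print(" ".join(adjoin(embedded, [1], bridge).yield_string()))
# -> who does Bob believe Alice likes t
```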
4 Higher-order Decomposition

One approach to obtaining the more complicated wrapping pattern that occurs in the wh-form of the raising verb trees is to move to a formalism in which the spine languages of the derived trees are TALs (the string languages derived by TAGs), which can describe such patterns. One such formalism is the third level of Weir's Control Language Hierarchy (Weir, 1992) which admits sets of derivation trees generated by CFGs which are filtered by a requirement that the sequences of labels on the spines occur in some particular TAL.3 The problem with this approach is that it abandons the notion of semantic coherence of the elementary structures.

3 TAG is equivalent to the second level of this hierarchy, in which the spine languages are Context-Free.

It turns out, however, that one can generate exactly the same tree sets if one moves to a formalism in which another level of hierarchical decomposition is introduced (Rogers, 2003). This now gives structures which employ four hierarchical relations—the fourth representing the constituency relation encoding a hierarchical decomposition of the third-level structures. In this framework, the seem structure can be taken to be inserted between the subject and the rest of the like structure as shown in Figure 4. Again, spines are marked by doubling the edges. The third-order yield of the corresponding derived structure now wraps the third-order like structure around that of the seem structure, with the fragment of like that contains the subject attaching at the third-order "foot" node in the tree-yield of the seem structure (the  ) as shown at the bottom of the figure. The center-embedding wrapping pattern of these third-order spines guarantees that the wrapping pattern of spines of the tree yield will be a TAL, in particular, the "cross-serial" pattern needed by raising of wh-form structures. The fourth-order structure has the added benefit of clearly justifying the status of the like structure as a single elementary structure despite the apparent extraction of the subject along the third relation.

5 Locality Effects

Note that it is the   to   recursion along the third-order spine of the seem structure that actually does the raising of the subject. One of the consequences of this is that that-trace violations, such as

 Who  does Alice  seem that  does like  .

cannot occur. If the complementizer originates in the seem structure, it will occur under the  . If it originates in the like tree it will occur in a similar position between the CP and the  . In either case, the complementizer must precede the raised subject in the derived string.

Figure 5: Expletive it.

If we fill the subject position of the seem structure with expletive it, as in Figure 5, the   position in the yield of the structure is occupied and we no longer have   to   recursion. This motivates analyzing these structures as   to   recursion, similar to bridge verbs, rather than   to  . (Figure 5.) More importantly, the presence of the expletive subject in the seem tree rules out super-raising violations such as

 Alice does it seems  does like Bob.
 Alice does appear it seems  does like Bob.

No matter how the seem structure is interpreted, if it is to raise Alice then the Alice structure will have to settle somewhere in its yield. Without extending the seem structure to include the   position, none of the possible positions will yield the correct string (and all can be ruled out on simple structural grounds).
If the seem structure is extended to include the  , the raising will be ruled out on the assumption that the   structure must attach at  .

6 Subject-Object Asymmetry

Another phenomenon that has proved problematic for standard TAG accounts is extraction from nominals, such as

Who did Alice publish a picture of  .

Here the wh-word is an argument of the prepositional phrase in the object nominal picture of. Apparently, the tree structure involves wrapping of the picture tree around the publish tree. (See Figure 6.) The problem, as normally analyzed (Frank, 2002; Kroch, 1989), is that the publish tree does not have the recursive structure normally assumed for auxiliary trees. We will take a somewhat less strict view and rule out the adjunction of the publish tree simply on the grounds that it would involve attaching a structure rooted in   (or possibly CP) to a DP node. The usual way around this difficulty has been to assume that the who is introduced in the publish tree, corresponding, presumably, to the as yet missing DP. The picture tree is then factored into two components, an isolated DP node which adjoins at the wh-DP, establishing its connection to the argument trace, and the picture DP which combines at the object position of publish. This seems to at least test the spirit of the semantic coherence requirement. If the who is not extraneous in the publish tree then it must be related in some way to the object position. But the identity of who is ultimately not the object of publish (a picture) but rather the object of the embedded preposition (the person the picture is of).

If we analyze this in terms of a fourth hierarchical relation, we can allow the who to originate in the picture structure, which would now be rooted in CP. This could be allowed to attach at the root of the publish structure on the assumption that it is a C-node of some sort, providing the wrapping of its tree-yield around that of the publish. (See Figure 6.) Thus we get an account with intact elementary structures which are unquestionably semantically coherent.

One of the striking characteristics of extraction of this sort is the asymmetry between extraction from the object, which is acceptable, and extraction from the subject, which is not:

 Who did a picture of   illustrate the point.

In the account under consideration, we might contemplate a similar combination of structures, but in this case the picture DP has to somehow migrate up to combine at the subject position. Under our assumption that the subject structure is attached to the illustrate tree via the third relation, this would require the subject structure to, in effect, have two feet, an extension that strictly increases the generative power of the formalism.

Figure 6: Extraction from object nominal.

Figure 7: Extraction from subject nominal.
Alternatively, we might assume that the picture structure attaches in the yield of the illustrate structure or between the main part of the structure and the subject tree, but either of these would fail to promote the who to the root of the yield structure.

7 Processing

As with any computationally oriented formalism, the ability to define the correct set of structures is only one aspect of the problem. Just as important is the question of the complexity of processing language relative to that definition. Fortunately, the languages of the Control Language Hierarchy are well understood and recognition algorithms, based on a CKY-style dynamic programming approach, are known for each level. The time complexity of the algorithm for each level, as a function of the length of the input, is polynomial (Palis and Shende, 1992). In the case of the fourth-order grammars, which correspond to the third level of the CLH, this gives a polynomial, although high-degree, upper bound. While, strictly speaking, this is a feasible time complexity, in practice we expect that approaches with better average-case complexity, such as Earley-style algorithms, will be necessary if these grammars are to be parsed directly.

But, as we noted in the introduction, grammars of this complexity are not necessarily intended to be used as working grammars. Rather they are mechanisms for expressing the linguistic theory serving as the foundation of working grammars of more practical complexity. Since all of our proposed uses of the higher-order relations involve either combining at a root (without properly embedding) or embedding with finitely bounded depth of nesting, the effect of the higher-dimensional combining operations is expressible using a finite set of features. Hence, the sets of derived trees can be generated by adding finitely many features to ordinary TAGs and the theory entailed by our accounts of these phenomena (as expressed in the sets of derived trees) is expressible in FTAG. Thus, a complete theory of syntax incorporating them would be (not necessarily not) compatible with implementation within existing TAG-based systems. A more long-term goal is to implement a compilation mechanism which will translate the linguistic theory, stated in terms of the hierarchical relations, directly into grammars stated in terms of the existing TAG-based systems.

8 Conclusion

In many ways the formalism we have been working with is a minimal extension of ordinary TAGs. Formally, the step from TAG to add the fourth hierarchical relation is directly analogous to the step from CFG to TAG. Moreover, while the graphs describing the derived structures are often rather complicated, conceptually they involve reasoning in terms of only a single additional relation. The benefit of the added complexity is a uniform account of a range of phenomena that has heretofore been difficult to encompass, an account that employs unitary elementary structures and eschews synchronized derivation operations, and which is, in many respects, closer to the spirit of the intuitions underlying TAG-based linguistic theory than previously considered extensions to TAG. While it is impossible to determine how comprehensive the coverage of a more fully developed theory of syntax based on this formalism will be without actually completing such a theory, we believe that the results presented here suggest that the uniformity provided by adding this fourth level of decomposition to our vocabulary is likely to more than compensate for the added complexity of the fourth-level elementary structures.
References Robert Evan Frank. 1992. Syntactic Locality and Tree Adjoining Grammar: Grammatical, Acquisition and Processing Perspectives. Ph.D. dissertation, Univ. of Penn. Robert Frank. 2002. Phrase Structure Composition and Syntactic Dependencies. MIT Press. The XTAG Research Group. 1998. A lexicalized tree adjoining grammar for english. Technical Report IRCS-98-18, Institute for Research in Cognitive Science. Aravind K. Joshi and Yves Schabes. 1997. Treeadjoining grammars. In Handbook of Formal Languages and Automata, volume 3, pages 69– 123. Springer-Verlag. Aravind K. Joshi, Leon Levy, and Masako Takahashi. 1975. Tree adjunct grammars. Journal of the Computer and Systems Sciences, 10:136–163. Anthony Kroch and Aravind K. Joshi. 1985. The linquistic relevance of tree adjoining grammar. Technical Report MS-CS-85-16, Dept. of Computer and Information Sciences. Anthony S. Kroch and Aravind K. Joshi. 1987. Analyzing extraposition in a tree adjoining grammar. In Syntax and Semantics, pages 107–149. Academic Press. Vol. 20. Anthony Kroch. 1989. Asymmetries in long distance extraction in a tree adjoining grammar. In Mark Baltin and Anthony Kroch, editors, Alternative Conceptions of Phrase Structure, pages 66–98. University of Chicago Press. Michael A. Palis and Sunil M. Shende. 1992. Upper bounds on recognition of a hierarchy of noncontext-free languages. Theoretical Computer Science, 98:289–319. James Rogers. 2002. One more perspective on semantic relations in TAG. In Proceedings of the Sixth International Workshop on Tree Adjoining Grammars and Related Frameworks, Venice, IT, May. James Rogers. 2003. Syntactic structures as multidimensional trees. Research on Language and Computation, 1(3–4):265–305. K. Vijay-Shanker and Aravind K. Joshi. 1991. Unification based tree adjoining grammars. In J. Wedekind, editor, Unification-based Grammars. MIT Press, Cambridge, MA. David J. Weir. 1988. Characterizing Mildly Context-Sensitive Grammar Formalisms. Ph.D. thesis, University of Pennsylvania. David J. Weir. 1992. A geometric hierarchy beyond context-free languages. Theoretical Computer Science, 104:235–261.
Splitting Complex Temporal Questions for Question Answering systems

E. Saquete, P. Martínez-Barco, R. Muñoz, J.L. Vicedo
Grupo de investigación del Procesamiento del Lenguaje y Sistemas de Información.
Departamento de Lenguajes y Sistemas Informáticos. Universidad de Alicante. Alicante, Spain
{stela,patricio,rafael,vicedo}@dlsi.ua.es

Abstract

This paper presents a multi-layered Question Answering (Q.A.) architecture suitable for enhancing current Q.A. capabilities with the possibility of processing complex questions. That is, questions whose answer needs to be gathered from pieces of factual information scattered in different documents. Specifically, we have designed a layer oriented to process the different types of temporal questions. Complex temporal questions are first decomposed into simpler ones, according to the temporal relationships expressed in the original question. In the same way, the answers of each simple question are re-composed, fulfilling the temporal restrictions of the original complex question. Using this architecture, a Temporal Q.A. system has been developed. In this paper, we focus on explaining the first part of the process: the decomposition of the complex questions. Furthermore, it has been evaluated with the TERQAS question corpus of 112 temporal questions. For the task of question splitting our system has performed, in terms of precision and recall, 85% and 71%, respectively.

1 Introduction

Question Answering could be defined as the process of computer-answering to precise or arbitrary questions formulated by users. Q.A. systems are especially useful to obtain a specific piece of information without the need of manually going through all the available documentation related to the topic. Research in Question Answering mainly focuses on the treatment of factual questions. These require as an answer very specific items of data, such as dates, names of entities or quantities, e.g., "What is the capital of Brazil?".

Temporal Q.A. is not a trivial task due to the complexity temporal questions may reach. Current operational Q.A. systems can deal with simple factual temporal questions, that is, questions requiring to be answered with a date, e.g. "When did Bob Marley die?", or questions that include simple temporal expressions in their formulation, e.g., "Who won the U.S. Open in 1999?". Processing this sort of question is usually performed by identifying explicit temporal expressions in questions and relevant documents, in order to gather the necessary information to answer the queries. Even so, it seems necessary to emphasize that the system described in (Breck et al., 2000) is the only one also using implicit temporal expression recognition for Q.A. purposes. It does so by applying the temporal tagger developed by Mani and Wilson (2000). However, issues like addressing the temporal properties or the ordering of events in questions remain beyond the scope of current Q.A. systems:

- "Who was spokesman of the Soviet Embassy in Baghdad during the invasion of Kuwait?"
- "Is Bill Clinton currently the President of the United States?"

This work presents a Question Answering system capable of answering complex temporal questions. This approach tries to imitate human behavior when responding to this type of question.

(This paper has been supported by the Spanish government, projects FIT-150500-2002-244, FIT-150500-2002-416, TIC2003-07158-C04-01 and TIC2000-0664-C02-02.)
For example, a human that wants to answer the question: “Who was spokesman of the Soviet Embassy in Baghdad during the invasion of Kuwait?” would follow this process: 1. First, he would decompose this question into two simpler ones: “Who was spokesman of the Soviet Embassy in Baghdad?” and “When did the invasion of Kuwait occur?”. 2. He would look for all the possible answers to the first simple question: “Who was spokesman of the Soviet Embassy in Baghdad?”. 3. After that, he would look for the answer to the second simple question: “When did the invasion of Kuwait occur?” 4. Finally, he would give as a final answer one of the answers to the first question (if there is any), whose associated date stays within the period of dates implied by the answer to the second question. That is, he would obtain the final answer by discarding all answers to the simple questions which do not accomplish the restrictions imposed by the temporal signal provided by the original question (during). Therefore, the treatment of complex question is based on the decomposition of these questions into simpler ones, to be resolved using conventional Question Answering systems. Answers to simple questions are used to build the answer to the original question. This paper has been structured in the following fashion: first of all, section 2 presents our proposal of a taxonomy for temporal questions. Section 3 describes the general architecture of our temporal Q.A. system. Section 4 deepens into the first part of the system: the decomposition unit. Finally, the evaluation of the decomposition unit and some conclusions are shown. 2 Proposal of a Temporal Questions Taxonomy Before explaining how to answer temporal questions, it is necessary to classify them, since the way to solve them will be different in each case. Our classification distinguishes first between simple questions and complex questions. We will consider as simple those questions that can be solved directly by a current General Purpose Question Answering system, since they are formed by a single event. On the other hand, we will consider as complex those questions that are formed by more than one event related by a temporal signal which establishes an order relation between these events. Simple Temporal Questions: Type 1: Single event temporal questions without temporal expression (TE). This kind of questions are formed by a single event and can be directly resolved by a Q.A. System, without pre- or postprocessing them. There are not temporal expressions in the question. Example: “When did Jordan close the port of Aqaba to Kuwait?” Type 2: Single event temporal questions with temporal expression. There is a single event in the question, but there are one or more temporal expressions that need to be recognized, resolved and annotated. Each piece of temporal information could help to search for an answer. Example: “Who won the 1988 New Hampshire republican primary?”. TE: 1988 Complex Temporal Questions: Type 3: Multiple events temporal questions with temporal expression. Questions that contain two or more events, related by a temporal signal. This signal establishes the order between the events in the question. Moreover, there are one or more temporal expressions in the question. These temporal expressions need to be recognized, resolved and annotated, and they introduce temporal constraints to the answers of the question. Example: “What did George Bush do after the U.N. 
Security Council ordered a global embargo on trade with Iraq in August 90?" In this example, the temporal signal is after and the temporal constraint is "between 8/1/1990 and 8/31/1990". This question can be divided into the following ones:

- Q1: What did George Bush do?
- Q2: When the U.N. Security Council ordered a global embargo on trade with Iraq?

Type 4: Multiple events temporal questions without temporal expression. Questions that consist of two or more events, related by a temporal signal. This signal establishes the order between the events in the question. Example: "What happened to world oil prices after the Iraqi annexation of Kuwait?". In this example, the temporal signal is after and the question would be decomposed into:

- Q1: What happened to world oil prices?
- Q2: When did the Iraqi "annexation" of Kuwait occur?

How to process each type will be explained in detail in the following sections.

3 Multi-layered Question-Answering System Architecture

Current Question Answering system architectures do not allow the processing of complex questions. That is, questions whose answer needs to be gathered from pieces of factual information that is scattered in a document or through different documents. In order to be able to process these complex questions, we propose a multi-layered architecture. This architecture increases the functionality of current Question-Answering systems, allowing us to solve any type of temporal questions. Moreover, this system could be easily augmented with new layers to cope with questions that need complex processing and are not temporally oriented. Some examples of complex questions are:

- Temporal questions like "Where did Michael Milken study before going to the University of Pennsylvania?". These questions need to use temporal information and event ordering to obtain the right answer.
- Script questions like "How do I assemble a bicycle?". In these questions, the final answer is a set of ordered answers.
- Template-based questions like "Which are the main biographical data of Nelson Mandela?". This question should be divided into a number of factual questions asking for different aspects of Nelson Mandela's biography. Gathering their respective answers will make it possible to answer the original question.

These three types of question have in common the need for additional processing in order to be solved. Our proposal to deal with them is to superpose an additional processing layer, one for each type, on a current General Purpose Question Answering system, as shown in Figure 1. This layer will perform the following steps:

- Decomposition of the question into simple events to generate simple questions (sub-questions) and the ordering of the sub-questions.
- Sending simple questions to a current General Purpose Question Answering system.
- Receiving the answers to the simple questions from the current General Purpose Question Answering system.
- Filtering and comparison between sub-answers to build the final complex answer.

Figure 1: Multi-layered Architecture of a Q.A.

The main advantages of this multi-layered system are:

- It allows you to use any existing general Q.A. system, with the only effort of adapting the output of the processing layer to the type of input that the Q.A. system uses.
- Due to the fact that the processing of complex questions is performed at an upper layer, it is not necessary to modify the Q.A. system when you want to deal with more complex questions.
- Each additional processing layer is independent from the others and only processes those questions within the type accepted by that layer.

Next, we present a layer oriented to process temporal questions according to the taxonomy shown in section 2.

3.1 Architecture of a Question Answering System applied to Temporality

The main components of the Temporal Question Answering System are (cf. Figure 2), top-down: the Question Decomposition Unit, a General Purpose Q.A. system and the Answer Recomposition Unit.

Figure 2: Temporal Question Answering System

These components work together to obtain a final answer. The Question Decomposition Unit and the Answer Recomposition Unit are the units that make up the Temporal Q.A. layer, which processes the temporal questions before and after using a General Purpose Q.A. system.

- The Question Decomposition Unit is a preprocessing unit which performs three main tasks. First of all, the recognition and resolution of temporal expressions in the question. Secondly, there are different types of questions, according to the taxonomy shown in section 2, and each of them needs to be treated in a different manner. For this reason, type identification must be done. After that, complex questions of types 3 and 4 only are split into simple ones, which are used as the input of a General Purpose Question-Answering system. For example, the question "Where did Bill Clinton study before going to Oxford University?" is divided into two sub-questions related through the temporal signal before:
– Q1: Where did Bill Clinton study?
– Q2: When did Bill Clinton go to Oxford University?
- A General Purpose Question Answering system. The simple factual questions generated are processed by a General Purpose Question Answering system. Any Question Answering system could be used here; in this case, the SEMQA system (Vicedo and Ferrández, 2000) has been used. The only condition is to know the output format of the Q.A. system in order to adapt the layer interface accordingly. For the example above, a current Q.A. system returns the following answers:
– Q1 Answers: Georgetown University (1964-68) // Oxford University (1968-70) // Yale Law School (1970-73)
– Q2 Answer: 1968
- The Answer Recomposition Unit is the last stage in the process. This unit builds the answer to the original question from the answers to the sub-questions and the temporal information extracted from the questions (temporal signals or temporal expressions). As a result, the correct answer to the original question is returned.

Apart from proposing a taxonomy of temporal questions, we have presented a multi-layered Q.A. architecture suitable for enhancing current Q.A. capabilities with the possibility of adding new layers for processing different kinds of complex questions. Moreover, we have proposed a specific layer oriented to process each type of temporal question. The final goal of this paper is to introduce and evaluate the first part of the temporal question processing layer: the Question Decomposition Unit. The next section shows the different parts of the unit together with some examples of their behavior.

4 Question Decomposition Unit

The main task of this unit is the decomposition of the question, which is divided into three main tasks or modules:

- Type Identification (according to the taxonomy proposed in section 2)
- Temporal Expression Recognition and Resolution
- Question Splitter

These modules are fully explained below (a rough sketch of the first two follows this list).
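As a rough illustration of how the first two modules might behave, consider the sketch below. The signal list, the regular expressions and the date handling are simplified stand-ins invented for this example; they are far cruder than the TERSEO-based processing described in the following sections.

```python
import re

SIGNALS = ("after", "before", "during", "while", "since", "from", "when")

def has_temporal_expression(question):
    # Toy detector: explicit years or a couple of implicit expressions.
    return bool(re.search(r"\b(1[89]\d{2}|20\d{2})\b|\bthe (sixties|seventies)\b",
                          question, re.I))

def detect_signal(question):
    # Skip the first character so a leading "When ...?" is not taken as a signal.
    for s in SIGNALS:
        if re.search(rf"\b{s}\b", question[1:], re.I):
            return s
    return None

def question_type(question):
    """Types 1-4 of the taxonomy of section 2: simple/complex x without/with TE."""
    complex_q = detect_signal(question) is not None
    te = has_temporal_expression(question)
    if complex_q:
        return 3 if te else 4
    return 2 if te else 1

print(question_type("Who won the 1988 New Hampshire republican primary?"))  # 2
print(question_type("What happened to world oil prices after the Iraqi "
                    "annexation of Kuwait?"))                               # 4
```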
Once the decomposition of the question has been made, the output of this unit is:  A set of sub-questions, that are the input of the General Purpose Question-Answering system.  Temporal tags, containing concrete dates returned by TERSEO system (Saquete et al., 2003), that are part of the input of the Answer Recomposition Unit and are used by this unit as temporal constraints in order to filter the individual answers.  A set of temporal signals that are part of the input of the Answer Recomposition Unit as well, because this information is necessary in order to compose the final answer. Once the decomposition has been made, the General Purpose Question-Answering system is used to treat with simple questions. The temporal information goes directly to the Answer Recomposition unit. 4.1 Type Identification The Type Identification Unit classifies the question in one of the four types of the taxonomy proposed in section 2. This identification is necessary because each type of question causes a different behavior (scenario) in the system. Type 1 and Type 2 questions are classified as simple, and the answer can be obtained without splitting the original question. However, Type 3 and Type 4 questions need to be split in a set of simple sub-questions. The types of these sub-questions are always Type 1 or Type 2 or a non-temporal question, which are considered simple questions. The question type is established according to the rules in figure 3:                                Figure 3: Decision tree for Type Identification 4.2 Temporal Expression Recognition and Resolution This module uses TERSEO system (Saquete et al., 2003) to recognize, annotate and resolve temporal expressions in the question. The tags this module returns exhibit the following structure: Explicit dates: <DATE_TIME ID="value" TYPE="value" VALDATE1="value"VALTIME1="value" VALDATE2="value" VALTIME2="value"> expression </DATE_TIME> Implicit dates: <DATE_TIME_REF ID="value" TYPE="value" VALDATE1="value"VALTIME1="value" VALDATE2="value" VALTIME2="value"> expression </DATE_TIME_REF> Every expression is identified by a numeric ID. VALDATE# and VALTIME# store the range of dates and times obtained from the system, where VALDATE2 and VALTIME2 are only used to establish ranges. Furthermore, VALTIME1 could be omitted if a single date is specified. VALDATE2, VALTIME1 and VALTIME2 are optional attributes. These temporal tags are the output of this module and they are used in the Answer Recomposition Unit in order to filter the individual answers obtained by the General Purpose Question-Answering system. The tags are working as temporal constraints. Following, a working example is introduced. Given the next question “Which U.S. ship was attacked by Israeli forces during the Six Day war in the sixties?”: 1. Firstly, the unit recognizes the temporal expression in the question, resolves and tags it, resulting in: <DATETIMEREF valdate1="01/01/1960" valdate2="31/12/1969"> in the sixties </DATETIMEREF> 2. The temporal constraint is that the date of the answers should be between the values valdate1 and valdate2. 4.3 Question Splitter This task is only necessary when the type of the question, obtained by the Type Identification Module, is 3 or 4. These questions are considered complex questions and need to be divided into simple ones (Type 1, Type 2). The decomposition of a complex question is based on the identification of temporal signals, which relate simple events in the question and establish an order between the answers of the sub-questions. 
Finally, these signals are the output of this module and are described in the next subsection.

4.3.1 Temporal Signals

Temporal signals denote the relationship between the dates of the related events. Assuming that F1 is the date related to the first event in the question and F2 is the date related to the second event, the signal will establish an order between them. This we have named the ordering key. An example of some ordering keys is introduced in Table 1.

SIGNAL            ORDERING KEY
After             F1 > F2
When              F1 = F2
Before            F1 < F2
During            F2i <= F1 <= F2f
From F2 to F3     F2 <= F1 <= F3
About F2 -- F3    F2 <= F1 <= F3
On / in           F1 = F2
While             F2i <= F1 <= F2f
For               F2i <= F1 <= F2f
At the time of    F1 = F2
Since             F1 > F2

Table 1: Example of signals and ordering keys

4.3.2 Implementation

Each complex question is divided into two parts, based on the temporal signal. The former is a simple question and therefore no transformation is required. However, the latter (the part after the temporal signal) needs transformation into a correct question pattern, always corresponding to a "When" type question. Moreover, three different kinds of question structures have been determined, with a different transformation for each of them. The implementation of this module is shown in Figure 4.

Figure 4: Decision tree for the Question Splitter

The three possible cases are as follows (see the sketch after this list):

- The question that follows the temporal signal does not contain any verb, for example: "What happened to the world oil prices after the Iraqi annexation of Kuwait?" In this case, our system returns the following transformation: "When did the Iraqi annexation of Kuwait occur?" This case is the simplest, since the only transformation needed is adding the words "When did... occur?" to the second sentence.
- The question that follows the temporal signal contains a verb, but this verb is in gerund form, for example: "Where did Bill Clinton study before going to Oxford University?" In this case two steps prior to the transformation are necessary: 1. Extracting the subject of the previous question. 2. Converting the verb of the second sentence to the infinitive. The final question returned by the system is: "When did Bill Clinton go to Oxford University?".
- In the last type of transformation the second sentence in the question contains a tensed verb and its own subject, e.g., "What did George Bush do after the U.N. Security Council ordered a global embargo on trade with Iraq?" In this case, the infinitive and the tense of the sentence are obtained. Hence, the question results in the following form: "When did the U.N. Security Council order a global embargo on trade with Iraq?".
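A minimal sketch of the splitting step and of the ordering keys of Table 1 is given below. It only covers the simplest (verbless) transformation case, the ordering-key table is abbreviated, and all helper names are invented for the illustration rather than taken from the actual system.

```python
ORDERING_KEYS = {                       # from Table 1: how F1 relates to F2
    "after":  lambda f1, f2: f1 > f2,
    "before": lambda f1, f2: f1 < f2,
    "when":   lambda f1, f2: f1 == f2,
    "during": lambda f1, f2: f2[0] <= f1 <= f2[1],   # F2 is a range (F2i, F2f)
}

def split_question(question, signal):
    """Split at the temporal signal; rewrite the second part as a 'When' question.
    Only the verbless case is handled ('... after the Iraqi annexation of Kuwait')."""
    left, right = question.split(f" {signal} ", 1)
    q1 = left.rstrip("?") + "?"
    q2 = "When did " + right.rstrip("?") + " occur?"
    return q1, q2

def satisfies(signal, date1, date2):
    """Check the ordering key between the dates answering Q1 and Q2."""
    return ORDERING_KEYS[signal](date1, date2)

q1, q2 = split_question(
    "What happened to world oil prices after the Iraqi annexation of Kuwait?",
    "after")
# q1 == 'What happened to world oil prices?'
# q2 == 'When did the Iraqi annexation of Kuwait occur?'
```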
4.3.3 Example

The following shows part of the file returned by our Decomposition Unit.

1. Where did Bill Clinton study before going to Oxford University?
Temporal Signal: before
Q1: Where did Bill Clinton study?
Q2: When did Bill Clinton go to Oxford University?

2. What did George Bush do after the U.N. Security Council ordered a global embargo on trade with Iraq in August 90?
Temporal Signal: after
Temporal Expression: in August 90
Q1: What did George Bush do?
Q2: When did the U.N. Security Council order a global embargo on trade with Iraq in August 90?
DateQ2: [01/08/1990--31/08/1990]

3. When did Iraq invade Kuwait?
Temporal Signal:
Temporal Expression:
Q1: When did Iraq invade Kuwait?

4. Who became governor of New Hampshire in 1949?
Temporal Signal:
Temporal Expression: in 1949
Q1: Who became governor of New Hampshire in 1949?
DateQ1: [01/01/1949--31/12/1949]

4.4 Decomposition Unit Evaluation

This section presents an evaluation of the Decomposition Unit for the treatment of complex questions. The evaluation requires a corpus containing both simple and complex questions. Due to the fact that the question corpora used in TREC (TREC) and CLEF (CLEF) do not contain complex questions, the TERQAS question corpus has been chosen (Radev and Sundheim, 2002; Pustejovsky, 2002). It consists of 123 temporal questions.

                                TOTAL  TREATED  SUCCESSES  PRECISION  RECALL  F-MEASURE
TE Recognition and Resolution     62      52        47        90%       75%      86%
Type Identification              112     112       104        92%      100%      93%
Signal Detection                  17      14        14       100%       82%      95%
Question Splitter                 17      14        12        85%       71%      81%
DECOMPOSITION UNIT               112     112        93        83%       83%      83%

Table 2: Evaluation of the system

From these, 11 were discarded because they require treatment beyond the capabilities of the system introduced here. Questions of the type "Who was the second man on the moon" cannot be answered by applying question decomposition; they need special treatment. For the aforementioned question, this would consist of obtaining the names of all the men having been on the moon, ordering the dates and picking the second in the ordered list of names. Therefore, for this evaluation, we have focused on trying to resolve the 112 remaining questions. The evaluation has been made manually by three annotators. Four different aspects of the unit have been considered:

- Recognition and resolution of Temporal Expressions: In this corpus, there were 62 temporal expressions and our system was able to recognize 52, of which 47 were properly resolved by this module.
- Type Identification: There were 112 temporal questions in the corpus. Each of them was processed by the module, resulting in 104 properly identified according to the taxonomy proposed in section 2.
- Signal Detection: In the corpus, there were 17 questions that were considered complex (Type 3 and Type 4). Our system was able to treat and recognize correctly the temporal signal of 14 of these questions.
- Question Splitter: From this set of 17 complex questions, the system was able to process 14 questions and divided 12 of them properly.

The results, in terms of precision and recall, are shown in Table 2. In the evaluation, only 19 questions are wrongly pre-processed. The errors provoking a wrong pre-processing have been analyzed thoroughly:

- There were 8 errors in the identification of the type of the question, due to:
– Not treated TE or wrong TE recognition: 6 questions.
– Wrong Temporal Signal detection: 2 questions.
- There were 5 errors in the Question Splitter module:
– Wrong Temporal Signal detection: 3 questions.
– Syntactic parser problems: 2 questions.
- There were 15 errors not affecting the treatment of the question by the General Purpose Question Answering system. Nevertheless, they do affect the recomposition of the final answer. They are due to:
– Not treated TE or wrong TE recognition: 6 questions.
– Wrong temporal expression resolution: 9 questions.

Some of these questions provoke more than one problem, causing both type identification and splitting to be wrong.

5 Conclusions

This paper presents a new and intuitive method for answering complex temporal questions using an embedded current factual-based Q.A. system.
The method proposed is based on a new procedure for the decomposition of temporal questions, where complex questions are divided into simpler ones by means of the detection of temporal signals. The TERSEO system, a temporal information extraction system applied to event ordering, has been used to detect and resolve temporal expressions in questions and answers.

Moreover, this work proposes a new multi-layered architecture that makes it possible to solve complex questions by enhancing current Q.A. capabilities. The multi-layered approach can be applied to any kind of complex question that allows question decomposition, such as script questions, e.g., "How do I assemble a bicycle?", or template-like questions, e.g., "Which are the main biographical data of Nelson Mandela?".

This paper has specifically focused on a process of decomposition of complex temporal questions and on its evaluation on a temporal question corpus. In the future, our work is directed at fine-tuning this system and increasing its capabilities towards processing questions of higher complexity.

References

E. Breck, J. Burger, L. Ferro, W. Greiff, M. Light, I. Mani, and J. Rennie. 2000. Another sys called quanda. In Ninth Text REtrieval Conference, volume 500-249 of NIST Special Publication, pages 369–378, Gaithersburg, USA, November. National Institute of Standards and Technology.

CLEF. Cross-Language Evaluation Forum. http://clef.iei.pi.cnr.it/.

I. Mani and G. Wilson. 2000. Robust temporal processing of news. In Proceedings of the 38th Meeting of the Association for Computational Linguistics (ACL 2000), Hong Kong, October.

J. Pustejovsky. 2002. TERQAS: Time and event recognition for question answering systems. http://time2002.org/.

D. Radev and B. Sundheim. 2002. Using TimeML in question answering. http://www.cs.brandeis.edu/~jamesp/arda/time/documentation/TimeML-use-in-qa-v1.0.pdf.

E. Saquete, R. Muñoz, and P. Martínez-Barco. 2003. TERSEO: Temporal expression resolution system applied to event ordering. In Proceedings of the 6th International Conference TSD 2003, Text, Speech and Dialogue, pages 220–228, Ceske Budejovice, Czech Republic, September.

TREC. Text REtrieval Conference. http://trec.nist.gov/.

J. L. Vicedo and A. Ferrández. 2000. A semantic approach to question answering systems. In Ninth Text REtrieval Conference, volume 500-249 of NIST Special Publication, pages 13–16, Gaithersburg, USA, November. National Institute of Standards and Technology.
Question Answering using Constraint Satisfaction: QA-by-Dossier-with-Constraints John Prager T.J. Watson Research Ctr. Yorktown Heights N.Y. 10598 [email protected] Jennifer Chu-Carroll T.J. Watson Research Ctr. Yorktown Heights N.Y. 10598 [email protected] Krzysztof Czuba T.J. Watson Research Ctr. Yorktown Heights N.Y. 10598 [email protected] Abstract QA-by-Dossier-with-Constraints is a new approach to Question Answering whereby candidate answers’ confidences are adjusted by asking auxiliary questions whose answers constrain the original answers. These constraints emerge naturally from the domain of interest, and enable application of real-world knowledge to QA. We show that our approach significantly improves system performance (75% relative improvement in F-measure on select question types) and can create a “dossier” of information about the subject matter in the original question. 1 Introduction Traditionally, Question Answering (QA) has drawn on the fields of Information Retrieval, Natural Language Processing (NLP), Ontologies, Data Bases and Logical Inference, although it is at heart a problem of NLP. These fields have been used to supply the technology with which QA components have been built. We present here a new methodology which attempts to use QA holistically, along with constraint satisfaction, to better answer questions, without requiring any advances in the underlying fields. Because NLP is still very much an error-prone process, QA systems make many mistakes; accordingly, a variety of methods have been developed to boost the accuracy of their answers. Such methods include redundancy (getting the same answer from multiple documents, sources, or algorithms), deep parsing of questions and texts (hence improving the accuracy of confidence measures), inferencing (proving the answer from information in texts plus background knowledge) and sanity-checking (verifying that answers are consistent with known facts). To our knowledge, however, no QA system deliberately asks additional questions in order to derive constraints on the answers to the original questions. We have found empirically that when our own QA system’s (Prager et al., 2000; Chu-Carroll et al., 2003) top answer is wrong, the correct answer is often present later in the ranked answer list. In other words, the correct answer is in the passages retrieved by the search engine, but the system was unable to sufficiently promote the correct answer and/or deprecate the incorrect ones. Our new approach of QA-by-Dossier-with-Constraints (QDC) uses the answers to additional questions to provide more information that can be used in ranking candidate answers to the original question. These auxiliary questions are selected such that natural constraints exist among the set of correct answers. After issuing both the original question and auxiliary questions, the system evaluates all possible combinations of the candidate answers and scores them by a simple function of both the answers’ intrinsic confidences, and how well the combination satisfies the aforementioned constraints. Thus we hope to improve the accuracy of an essentially NLP task by making an end-run around some of the more difficult problems in the field. We describe QDC and experiments to evaluate its effectiveness. Our results show that on our test set, substantial improvement is achieved by using constraints, compared with our baseline system, using standard evaluation metrics. 
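As a rough sketch of the combination-and-constraint step just described, the following enumerates all combinations of candidate answers, keeps those satisfying a set of constraints, and ranks them by a simple combination of the first-round confidences. The candidate lists and the life-cycle constraints anticipate the Leonardo da Vinci example worked out in Section 3; the product of confidences used here is only one illustrative scoring choice, not necessarily the combination algorithm of the actual system.

```python
from itertools import product

# (confidence, year) candidates, as in Tables 1 and 2 of Section 3
born     = [(.66, 1452), (.12, 1519), (.04, 1920), (.04, 1987), (.04, 1501)]
died     = [(.99, 1519), (.98, 1989), (.96, 1452), (.60, 1988), (.60, 1990)]
painting = [(.64, 2000), (.43, 1988), (.34, 1911), (.31, 1503), (.30, 1490)]

def consistent(b, d, p):
    """Life-cycle constraints of Section 3.2.3."""
    return d <= b + 100 and p >= b + 7 and p <= d

best = max(
    ((sb * sd * sp, b, d, p)
     for (sb, b), (sd, d), (sp, p) in product(born, died, painting)
     if consistent(b, d, p)),
    default=None)
print(best)  # born 1452, died 1519, painted 1503 -- the dossier reported in the text
```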
2 Related Work Logic and inferencing have been a part of Question-Answering since its earliest days. The first such systems employed natural-language interfaces to expert systems, e.g. SHRDLU (Winograd, 1972), or to databases e.g. LUNAR (Woods, 1973) and LIFER/LADDER (Hendrix et al. 1977). CHAT-80 (Warren & Pereira, 1982) was a DCG-based NLquery system about world geography, entirely in Prolog. In these systems, the NL question is transformed into a semantic form, which is then processed further; the overall architecture and system operation is very different from today’s systems, however, primarily in that there is no text corpus to process. Inferencing is used in at least two of the more visible systems of the present day. The LCC system (Moldovan & Rus, 2001) uses a Logic Prover to establish the connection between a candidate answer passage and the question. Text terms are converted to logical forms, and the question is treated as a goal which is “proven”, with real-world knowledge being provided by Extended WordNet. The IBM system PIQUANT (Chu-Carroll et al., 2003) uses Cyc (Lenat, 1995) in answer verification. Cyc can in some cases confirm or reject candidate answers based on its own store of instance information; in other cases, primarily of a numerical nature, Cyc can confirm whether candidates are within a reasonable range established for their subtype. At a more abstract level, the use of constraints discussed in this paper can be viewed as simply an example of finding support (or lack of it) for candidate answers. Many current systems (see, e.g. (Clarke et al., 2001), (Prager et al., 2004)) employ redundancy as a significant feature of operation: if the same answer appears multiple times in an internal top-n list, whether from multiple sources or multiple algorithms/agents, it is given a confidence boost, which will affect whether and how it gets returned to the end-user. Finally, our approach is somewhat reminiscent of the scripts introduced by Schank (Schank et al., 1975, and see also Lehnert, 1978). In order to generate meaningful auxiliary questions and constraints, we need a model (“script”) of the situation the question is about. Among others, we have identified one such script modeling the human life cycle that seems common to different question types regarding people. 3 Introducing QDC QA-by-Dossier-with-Constraints is an extension of on-going work of ours called QA-by-Dossier (QbD) (Prager et al., 2004). In the latter, definitional questions of the form “Who/What is X” are answered by asking a set of specific factoid questions about properties of X. So if X is a person, for example, these auxiliary questions may be about important dates and events in the person’s life-cycle, as well as his/her achievement. Likewise, question sets can be developed for other entities such as organizations, places and things. QbD employs the notion of follow-on questions. Given an answer to a first-round question, the system can ask more specific questions based on that knowledge. For example, on discovering a person’s profession, it can ask occupation-specific follow-on questions: if it finds that people are musicians, it can ask what they have composed, if it finds they are explorers, then what they have discovered, and so on. QA-by-Dossier-with-Constraints extends this approach by capitalizing on the fact that a set of answers about a subject must be mutually consistent, with respect to constraints such as time and geography. 
The essence of the QDC approach is to initially return, instead of just the best answer to appropriately selected factoid questions, the top n answers (we use n=5), and to choose out of this top set the highest-confidence answer combination that satisfies consistency constraints. We illustrate this idea by way of the example "When did Leonardo da Vinci paint the Mona Lisa?". Table 1 shows our system's top answers to this question, with associated scores in the range 0-1.

       Score   Painting Date
1      .64     2000
2      .43     1988
3      .34     1911
4      .31     1503
5      .30     1490

Table 1. Answers for "When did Leonardo da Vinci paint the Mona Lisa?"

The correct answer is "1503", which is in 4th place, with a low confidence score. Using QA-by-Dossier, we ask two related questions, "When was Leonardo da Vinci born?" and "When did Leonardo da Vinci die?" The answers to these auxiliary questions are shown in Table 2. Given common knowledge about a person's life expectancy and that a painting must be produced while its author is alive, we observe that the best dates proposed in Table 2 consistent with one another are that Leonardo da Vinci was born in 1452, died in 1519, and painted the Mona Lisa in 1503. [The painting date of 1490 also satisfies the constraints, but with a lower confidence.] We will examine the exact constraints used a little later. This example illustrates how the use of auxiliary questions helps constrain answers to the original question, and promotes correct answers with initial low confidence scores. As a side-effect, a short dossier is produced.

       Score   Born     Score   Died
1      .66     1452     .99     1519
2      .12     1519     .98     1989
3      .04     1920     .96     1452
4      .04     1987     .60     1988
5      .04     1501     .60     1990

Table 2. Answers for auxiliary questions "When was Leonardo da Vinci born?" and "When did Leonardo da Vinci die?".

3.1 Reciprocal Questions

QDC also employs the notion of reciprocal questions. These are a type of follow-on question used solely to provide constraints, and do not add to the dossier. The idea is simply to double-check the answer to a question by inverting it, substituting the first-round answer and hoping to get the original subject back. For example, to double-check "Sacramento" as the answer to "What is the capital of California?" we would ask "Of what state is Sacramento the capital?". The reciprocal question would be asked of all of the candidate answers, and the confidences of the answers to the reciprocal questions would contribute to the selection of the optimum answer. We will discuss later how this reciprocation may be done automatically. In a separate study of reciprocal questions (Prager et al., 2004), we demonstrated an increase in precision from .43 to .95, with only a 30% drop in recall.

Although the reciprocal questions seem to be symmetrical and thus redundant, their power stems from the differences in the search for answers inherent in our system. The search is primarily based on the expected answer type (STATE vs. CAPITAL in the above example). This results in different document sets being passed to the answer selection module. Subsequently, the answer selection module works with a different set of syntactic and semantic relationships, and the process of asking a reciprocal question ends up looking more like the process of asking an independent one. The only difference between this and the "regular" QDC case is in the type of constraint applied to resolve the resulting answer set.

3.2 Applying QDC

In order to automatically apply QDC during question answering, several problems need to be addressed.
First, criteria must be developed to determine when this process should be invoked. Second, we must identify the set of question types that would potentially benefit from such an approach, and, for each question type, develop a set of auxiliary questions and appropriate constraints among the answers. Third, for each question type, we must determine how the results of applying constraints should be utilized. 3.2.1 When to apply QDC To address these questions we must distinguish between “planned” and “ad-hoc” uses of QDC. For answering definitional questions (“Who/what is X?”) of the sort used in TREC2003, in which collections of facts can be gathered by QA-by-Dossier, we can assume that QDC is always appropriate. By defining broad enough classes of entities for which these questions might be asked (e.g. people, places, organizations and things, or major subclasses of these), we can for each of these classes manually establish once and for all a set of auxiliary questions for QbD and constraints for QDC. This is the approach we have taken in the experiments reported here. We are currently working on automatically learning effective auxiliary questions for some of these classes. In a more ad-hoc situation, we might imagine that a simple variety of QDC will be invoked using solely reciprocal questions whenever the difference between the scores of the first and second answer is below a certain threshold. 3.2.2 How to apply QDC We will posit three methods of generating auxiliary question sets: o By hand o Through a structured repository, such as a knowledge-base of real-world information o Through statistical techniques tied to a machinelearning algorithm, and a text corpus. We think that all three methods are appropriate, but we initially concentrate on the first for practical reasons. Most TREC-style factoid questions are about people, places, organizations, and things, and we can generate generic auxiliary question sets for each of these classes. Moreover, the purpose of this paper is to explain the QDC methodology and to investigate its value. 3.2.3 Constraint Networks The constraints that apply to a given situation can be naturally represented in a network, and we find it useful for visualization purposes to depict the constraints graphically. In such a graph the entities and values are represented as nodes, and the constraints and questions as edges. It is not clear how possible, or desirable, it is to automatically develop such constraint networks (other than the simple one for reciprocal questions), since so much real-world knowledge seems to be required. To illustrate, let us look at the constraints required for the earlier example. A more complex constraint system is used in our experiments described later. For our Leonardo da Vinci example, the set of constraints applied can be expressed as follows1: Date(Died) <= Date(Born) + 100 Date(Painting) >= Date(Born) + 7 Date(Painting) <= Date(Died) The corresponding graphical representation is in Figure 1. Although the numerical constants in these constraints betray a certain arbitrariness, we found it a useful practice to find a middle ground between absolute minima or maxima that the values can achieve and their likely values. Furthermore, although these constraints are manually derived for our prototype system, they are fairly general for the human life-cycle and can be easily reused for other, similar questions, or for more complex dossiers, as described below. Figure 1. Constraint Network for Leonardo example. 
Dashed lines represent question-answer pairs, solid lines constraints between the answers. We also note that even though a constraint network might have been inspired by and centered around a particular question, once the network is established, any question employed in it could be the end-user question that triggers it. There exists the (general) problem of when more than one set of answers satisfies our constraints. Our approach is to combine the first-round scores of the individual answers to provide a score for the dossier as a whole. There are several ways to do this, and we found experimentally that it does not appear critical exactly how this is done. In the example in the evaluation we mention one particular combination algorithm. 3.2.4 Kinds of constraint network There are an unlimited number of possible constraint networks that can be constructed. We have experimented with the following: Timelines. People and even artifacts have lifecycles. The examples in this paper exploit these. 1 Painting is only an example of an activity in these constraints. Any other achievement that is usually associated with adulthood can be used. Geographic (“Where is X”). Neighboring entities are in the same part of the world. Kinship (“Who is married to X”). Most kinship relationships have named reciprocals e.g. husbandwife, parent-child, and cousin-cousin. Even though these are not in practice one-one relationships, we can take advantage of sufficiency even if necessity is not entailed. Definitional (“What is X?”, “What does XYZ stand for?”) For good definitions, a term and its definition are interchangeable. Part-whole. Sizes of parts are no bigger than sizes of wholes. This fact can be used for populations, areas, etc. 3.2.5 QDC potential We performed a manual examination of the 500 TREC2002 questions2 to see for how many of these questions the QDC framework would apply. Being a manual process, these numbers provide an upper bound on how well we might expect a future automatic process to work. We noted that for 92 questions (18%) a nontrivial constraint network of the above kinds would apply. For a total of 454 questions (91%), a simple reciprocal constraint could be generated. However, for 61 of those, the reciprocal question was sufficiently non-specific that the sought reciprocal answer was unlikely to be found in a reasonably-sized hit-list. For example, the reciprocal question to “How did Mickey Mantle die?” would be “Who died of cancer?” However, we can imagine using other facts in the dossier to craft the question, giving us “What famous baseball player (or Yankees player) died of cancer?”, giving us a much better chance of success. For the simple reciprocation, though, subtracting these doubtful instances leaves 79% of the questions appearing to be good candidates for QDC. 4 Experimental Setup 4.1 Test set generation To evaluate QDC, we had our system develop dossiers of people in the creative arts, unseen in previous TREC questions. However, we wanted to use the personalities in past TREC questions as independent indicators of appropriate subject matter. Therefore we collected all of the “creative” people in the TREC9 question set, and divided them up into classes by profession, so we had, for example, male singers Bob Marley, Ray Charles, Billy Joel and Alice Cooper; poets William Wordsworth and Langston Hughes; painters Picasso, Jackson Pollock 2 This set did not contain definition questions, which, by our inspection, lend themselves readily to reciprocation. 
Birthdate Deathdate Leonardo Painting and Vincent Van Gogh, etc. – twelve such groupings in all. For each set, we entered the individuals in the “Google Sets” interface (http://labs.google.com/sets), which finds “similar” entities to the ones entered. For example, from our set of male singers it found: Elton John, Sting, Garth Brooks, James Taylor, Phil Collins, Melissa Etheridge, Alanis Morissette, Annie Lennox, Jackson Browne, Bryan Adams, Frank Sinatra and Whitney Houston. Altogether, we gathered 276 names of creative individuals this way, after removing duplicates, items that were not names of individuals, and names that did not occur in our test corpus (the AQUAINT corpus). We then used our system manually to help us develop “ground truth” for a randomly selected subset of 109 names. This ground truth served both as training material and as an evaluation key. We split the 109 names randomly into a set of 52 for training and 57 for testing. The training process used a hill-climbing method to find optimal values for three internal rejection thresholds. In developing the ground truth we might have missed some instances of assertions we were looking for, so the reported recall (and hence F-measure) figures should be considered to be upper bounds, but we believe the calculated figures are not far from the truth. 4.2 QDC Operation The system first asked three questions for each subject X: In what year was X born? In what year did X die? What compositions did X have? The third of these triggers our named-entity type COMPOSITION that is used for all kinds of titled works – books, films, poems, music, plays and so on, and also quotations. Our named-entity recognizer has rules to detect works of art by phrases that are in apposition to “the film …” or the “the book …” etc., and also captures any short phrase in quotes beginning with a capital letter. The particular question phrasing we used does not commit us to any specific creative verb. This is of particular importance since it very frequently happens in text that titled works are associated with their creators by means of a possessive or parenthetical construction, rather than subject-verb-object. The top five answers, with confidences, are returned for the born and died questions (subject to also passing a confidence threshold test). The compositions question is treated as a list question, meaning that all answers that pass a certain threshold are returned. For each such returned work Wi, two additional questions are asked: What year did X have Wi? Who had Wi? The top 5 answers to each of these are returned, again as long as they pass a confidence threshold. We added a sixth answer “NIL” to each of the date sets, with a confidence equal to the rejection threshold. (NIL is the code used in TREC ever since TREC10 to indicate the assertion that there is no answer in the corpus.) We used a two stage constraint-satisfaction process: Stage 1: For each work Wi for subject X, we added together its original confidence to the confidence of the answer X in the answer set of the reciprocal question (if it existed – otherwise we added zero). If the total did not exceed a learned threshold (.50) the work was rejected. Stage 2. For each subject, with the remaining candidate works we generated all possible combinations of the date answers. 
We rejected any combination that did not satisfy the following constraints:

DIED >= BORN + 7
DIED <= BORN + 100
WORK >= BORN + 7
WORK <= BORN + 100
WORK <= DIED
DIED <= WORK + 100

The apparent redundancy here is because of the potential NIL answers for some of the date slots. We also rejected combinations of works whose years spanned more than 100 years (in case there were no BORN or DIED dates). In performing these constraint calculations, NIL satisfied every test by fiat. The constraint network we used is depicted in Figure 2.

Figure 2. Constraint Network for evaluation example (nodes: Birthdate of X, Deathdate of X, Work Wi, Date of Wi, Xi = Author of Wi). Dashed lines represent question-answer pairs, solid lines constraints between the answers.

We used as a test corpus the AQUAINT corpus used in TREC-QA since 2002. Since this was not the same corpus from which the test questions were generated (the Web), we acknowledged that there might be some difference in the most common spelling of certain names, but we made no attempt to correct for this. Neither did we attempt to normalize, translate or aggregate names of the titled works that were returned, so that, for example, "Well-Tempered Klavier" and "Well-Tempered Clavier" were treated as different. Since only individuals were used in the question set, we did not have instances of problems we saw in training, such as where an ensemble (such as The Beatles) created a certain piece, which in turn via the reciprocal question was found to have been written by a single person (Paul McCartney). The reverse situation was still possible, but we did not handle it. We foresee a future version of our system having knowledge of ensembles and their composition, thus removing this restriction. In general, a variety of ontological relationships could occur between the original individual and the discovered performer(s) of the work. We generated answer keys by reading the passages that the system had retrieved and from which the answers were generated, to determine "truth". In cases of absent information in these passages, we did our own corpus searches. This of course made the issue of evaluation of recall only relative, since we were not able to guarantee we had found all existing instances. We encountered some grey areas, e.g., if a painting appeared in an exhibition or if a celebrity endorsed a product, then should the exhibition's or product's name be considered an appropriate "work" of the artist? The general perspective adopted was that we were not establishing or validating the nature of the relationship between an individual and a creative work, but rather its existence. We answered "yes" if we subjectively felt the association to be both very strong and with the individual's participation – for example, Pamela Anderson and Playboy. However, books/plays about a person or dates of performances of one's work were considered incorrect. As we shall see, these decisions would not have a big impact on the outcome.

4.3 Effect of Constraints
The answers collected from these two rounds of questions can be regarded as assertions about the subject X. By applying constraints, two possible effects can occur to these assertions:
1. Some works can get thrown out.
2. An asserted date (which was the top candidate from its associated question) can get replaced by a candidate date originally in positions 2-6 (where sixth place is NIL).
Effect #1 is expected to increase precision at the risk of worsening recall; effect #2 can go either way.
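A minimal sketch of this two-stage filtering, under the constraints just listed (hypothetical names; not the actual system code), with NIL satisfying every date test by fiat:

```python
NIL = None  # "no answer in the corpus"; NIL satisfies every date test by fiat

def holds(constraint, *args):
    """Apply a constraint, letting NIL pass by fiat."""
    return any(a is NIL for a in args) or constraint(*args)

def stage1_keep(work_conf, reciprocal_conf, threshold=0.50):
    """Stage 1: keep a work only if its confidence plus the confidence of the
    subject X among the reciprocal question's answers reaches the threshold."""
    return work_conf + reciprocal_conf >= threshold

def stage2_ok(born, died, work_dates):
    """Stage 2: test one combination of BORN / DIED / WORK date answers
    against the constraints listed above."""
    tests = [holds(lambda b, d: d >= b + 7, born, died),
             holds(lambda b, d: d <= b + 100, born, died)]
    for w in work_dates:
        tests += [holds(lambda b, x: x >= b + 7, born, w),
                  holds(lambda b, x: x <= b + 100, born, w),
                  holds(lambda d, x: x <= d, died, w),
                  holds(lambda d, x: d <= x + 100, died, w)]
    known = [w for w in work_dates if w is not NIL]
    span_ok = not known or max(known) - min(known) <= 100
    return all(tests) and span_ok

print(stage2_ok(1452, 1519, [1503, NIL]))  # True
print(stage2_ok(1452, 1519, [1988]))       # False: WORK <= DIED (and WORK <= BORN + 100) fail
```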
We note that NIL, which is only used for dates, can be the correct answer if the desired date assertion is absent from the corpus; NIL is considered a “value” in this evaluation. By inspection, performances and other indirect works (discussed in the previous section) were usually associated with the correct artist, so our decision to remove them from consideration resulted in a decrease in both the numerator and denominator of the precision and recall calculations, resulting in a minimal effect. The results of applying QDC to the 57 test individuals are summarized in Table 3. The baseline assertions for individual X were: o Top-ranking birthdate/NIL o Top-ranking deathdate/NIL o Set of works Wi that passed threshold o Top-ranking date for Wi /NIL The sets of baseline assertions (by individual) are in effect the results of QA-by-Dossier WITHOUT Constraints (QbD). Assertions Micro-Average Macro-Average Total Correct Truth Prec Rec F Prec Rec F Baseline 1671 517 933 .309 .554 .396 .331 .520 .386 QDC 1417 813 933 .573 .871 .691 .603 .865 .690 Table 3. Results of Performance Evaluation. Two calculations of P/R/F are made, depending on whether the averaging is done over the whole set, or first by individual; the results are very similar. The QDC assertions were the same as those for QbD, but reflecting the following effects: o Some {Wi, date} pairs were thrown out (3 out of 14 on average) o Some dates in positions 2-6 moved up (applicable to birth, death and work dates) The results show improvement in both precision and recall, in turn determining a 75-80% relative increase in F-measure. 5 Discussion This exposition of QA-by-Dossier-withConstraints is very short and undoubtedly leaves may questions unanswered. We have not presented a precise method for computing the QDC scores. One way to formalize this process would be to treat it as evidence gathering and interpret the results in a Bayesian-like fashion. The original system confidences would represent prior probabilities reflecting the system’s belief that the answers are correct. As more evidence is found, the confidences would be updated to reflect the changed likelihood that an answer is correct. We do not know a priori how much “slop” should be allowed in enforcing the constraints, since auxiliary questions are as likely to be answered incorrectly as the original ones. A further problem is to determine the best metric for evaluating such approaches, which is a question for QA in general. The task of generating auxiliary questions and constraint sets is a matter of active research. Even for simple questions like the ones considered here, the auxiliary questions and constraints we looked at were different and manually chosen. Hand-crafting a large number of such sets might not be feasible, but it is certainly possible to build a few for common situations, such as a person’s life-cycle. More generally, QDC could be applied to situations in which a certain structure is induced by natural temporal (our Leonardo example) and/or spatial constraints, or by properties of the relation mentioned in the question (evaluation example). Temporal and spatial constraints appear general to all relevant question types, and include relations of precedence, inclusion, etc. For certain relationships, there are naturallyoccurring reciprocals (if X is married to Y, then Y is married to X; if X is a child of Y then Y is a parent of X; compound-term to acronym and vice versa). Transitive relationships (e.g. greater-than, locatedin, etc.) 
offer the immediate possibility of constraints, but this avenue has not yet been explored. 5.1 Automatic Generation of Reciprocal Questions While not done in the work reported here, we are looking at generating reciprocal questions automatically. Consider the following transformations: “What is the capital of California?” -> “Of what state is <candidate> the capital?” “What is Frank Sinatra’s nickname?” -> “Whose (or what person’s) nickname is <candidate>?” “How deep is Crater Lake?” -> “What (or what lake) is <candidate> deep?” “Who won the Oscar for best actor in 1970?” -> “In what year did <candidate> win the Oscar for best actor?” (and/or “What award did <candidate> win in 1970?”) These are precisely the transformations necessary to generate the auxiliary reciprocal questions from the given original questions and candidate answers to them. Such a process requires identifying an entity in the question that belongs to a known class, and substituting the class name for the entity. This entity is made the subject of the question, the previous subject (or trace) being replaced by the candidate answer. We are looking at parse-tree rather than string transformations to achieve this. This work will be reported in a future paper. 5.2 Final Thoughts Despite these open questions, initial trials with QA-by-Dossier-with-Constraints have been very encouraging, whether it is by correctly answering previously missed questions, or by improving confidences of correct answers. An interesting question is when it is appropriate to apply QDC. Clearly, if the base QA system is too poor, then the answers to the auxiliary questions will be useless; if the base system is highly accurate, the increase in accuracy will be negligible. Thus our approach seems most beneficial to middle-performance levels, which, by inspection of TREC results for the last 5 years, is where the leading systems currently lie. We had initially thought that use of constraints would obviate the need for much of the complexity inherent in NLP. As mentioned earlier, with the case of “The Beatles” being the reciprocal answer to the auxiliary composition question to “Who is Paul McCartney?”, we see that structured, ontological information would benefit QDC. Identifying alternate spellings and representations of the same name (e.g. Clavier/Klavier, but also taking care of variations in punctuation and completeness) is also necessary. When we asked “Who is Ian Anderson?”, having in mind the singer-flautist for the Jethro Tull rock band, we found that he is not only that, but also the community investment manager of the English conglomerate Whitbread, the executive director of the U.S. Figure Skating Association, a writer for New Scientist, an Australian medical advisor to the WHO, and the general sales manager of Houseman, a supplier of water treatment systems. Thus the problem of word sense disambiguation has returned in a particularly nasty form. To be fully effective, QDC must be configured not just to find a consistent set of properties, but a number of independent sets that together cover the highest-confidence returned answers3. Altogether, we see that some of the very problems we aimed to skirt are still present and need to be addressed. However, we have shown that even disregarding these issues, QDC was able to provide substantial improvement in accuracy. 6 Summary We have presented a method to improve the accuracy of a QA system by asking auxiliary questions for which natural constraints exist. 
Using these constraints, sets of mutually consistent answers can be generated. We have explored questions in the biographical areas, and identified other areas of applicability. We have found that our methodology exhibits a double advantage: not only can it im 3 Possibly the smallest number of sets that provide such coverage. prove QA accuracy, but it can return a set of mutually-supporting assertions about the topic of the original question. We have identified many open questions and areas of future work, but despite these gaps, we have shown an example scenario where QA-by-Dossier-with-Constraints can improve the Fmeasure by over 75%. 7 Acknowledgements We wish to thank Dave Ferrucci, Elena Filatova and Sasha Blair-Goldensohn for helpful discussions. This work was supported in part by the Advanced Research and Development Activity (ARDA)'s Advanced Question Answering for Intelligence (AQUAINT) Program under contract number MDA904-01-C-0988. References Chu-Carroll, J., J. Prager, C. Welty, K. Czuba and D. Ferrucci. “A Multi-Strategy and Multi-Source Approach to Question Answering”, Proceedings of the 11th TREC, 2003. Clarke, C., Cormack, G., Kisman, D.. and Lynam, T. “Question answering by passage selection (Multitext experiments for TREC-9)” in Proceedings of the 9th TREC, pp. 673-683, 2001. Hendrix, G., E. Sacerdoti, D. Sagalowicz, J. Slocum: Developing a Natural Language Interface to Complex Data. VLDB 1977: 292 Lehnert, W. The Process of Question Answering. A Computer Simulation of Cognition. Lawrence Erlbaum Associates, Publishers, 1978. Lenat, D. 1995. "Cyc: A Large-Scale Investment in Knowledge Infrastructure." Communications of the ACM 38, no. 11. Moldovan, D. and V. Rus, “Logic Form Transformation of WordNet and its Applicability to Question Answering”, Proceedings of the ACL, 2001. Prager, J., E. Brown, A. Coden, and D. Radev. 2000. "Question-Answering by Predictive Annotation”. In Proceedings of SIGIR 2000, pp. 184-191. Prager, J., J. Chu-Carroll and K. Czuba, "A MultiAgent Approach to using Redundancy and Reinforcement in Question Answering" in New Directions in Question-Answering, Maybury, M. (Ed.), to appear in 2004. Schank, R. and R. Abelson. “Scripts, Plans and Knowledge”, Proceedings of IJCAI’75. Voorhees, E. “Overview of the TREC 2002 Question Answering Track”, Proceedings of the 11th TREC, 2003. Warren, D., and F. Pereira "An efficient easily adaptable system for interpreting natural language queries," Computational Linguistics, 8:3-4, 110122, 1982. Winograd, T. Procedures as a representation for data in a computer program for under-standing natural language. Cognitive Psychology, 3(1), 1972. Woods, W. Progress in natural language understanding --- an application in lunar geology. Proceedings of the 1973 National Computer Conference, AFIPS Conference Proceedings, Vol. 42, 441-450, 1973.
Applying Machine Learning to Chinese Temporal Relation Resolution Wenjie Li Department of Computing The Hong Kong Polytechnic University, Hong Kong [email protected] Kam-Fai Wong Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong, Hong Kong [email protected] Guihong Cao Department of Computing The Hong Kong Polytechnic University, Hong Kong [email protected] Chunfa Yuan Department of Computer Science and Technology Tsinghua University, Beijing, China. [email protected] Abstract Temporal relation resolution involves extraction of temporal information explicitly or implicitly embedded in a language. This information is often inferred from a variety of interactive grammatical and lexical cues, especially in Chinese. For this purpose, inter-clause relations (temporal or otherwise) in a multiple-clause sentence play an important role. In this paper, a computational model based on machine learning and heterogeneous collaborative bootstrapping is proposed for analyzing temporal relations in a Chinese multiple-clause sentence. The model makes use of the fact that events are represented in different temporal structures. It takes into account the effects of linguistic features such as tense/aspect, temporal connectives, and discourse structures. A set of experiments has been conducted to investigate how linguistic features could affect temporal relation resolution. 1 Introduction In language studies, temporal information describes changes and time of changes expressed in a language. Such information is critical in many typical natural language processing (NLP) applications, e.g. language generation and machine translation, etc. Modeling temporal aspects of an event in a written text is more complex than capturing time in a physical time-stamped system. Event time may be specified explicitly in a sentence, e.g. “他们在1997 年解 决了该市的交通问题 (They solved the traffic problem of the city in 1997)”; or it may be left implicit, to be recovered by readers from context. For example, one may know that “修成立交桥以后,他们解决了该 市的交通问题 (after the street bridge had been built, they solved the traffic problem of the city)”, yet without knowing the exact time when the street bridge was built. As reported by Partee (Partee, 1984), the expression of relative temporal relations in which precise times are not stated is common in natural language. The objective of relative temporal relation resolution is to determine the type of relative relation embedded in a sentence. In English, temporal expressions have been widely studied. Lascarides and Asher (Lascarides, Asher and Oberlander, 1992) suggested that temporal relations between two events followed from discourse structures. They investigated various contextual effects on five discourse relations (namely narration, elaboration, explanation, background and result) and then corresponded each of them to a kind of temporal relations. Hitzeman et al. (Hitzeman, Moens and Grover, 1995) described a method for analyzing temporal structure of a discourse by taking into account the effects of tense, aspect, temporal adverbials and rhetorical relations (e.g. causation and elaboration) on temporal ordering. They argued that rhetorical relations could be further constrained by event temporal classification. Later, Dorr and Gaasterland (Dorr and Gaasterland, 2002) developed a constraint-based approach to generate sentences, which reflect temporal relations, by making appropriate selections of tense, aspect and connecting words (e.g. 
before, after and when). Their works, however, are theoretical in nature and have not investigated computational aspects. The pioneer work on Chinese temporal relation extraction was first reported by Li and Wong (Li and Wong, 2002). To discover temporal relations embedded in a sentence, they devised a set of simple rules to map the combined effects of temporal indicators, which are gathered from different grammatical categories, to their corresponding relations. However, their work did not focus on relative temporal relations. Given a sentence describing two temporally related events, Li and Wong only took the temporal position words (including before, after and when, which serve as temporal connectives) and the tense/aspect markers of the second event into consideration. The proposed rule-based approach was simple; but it suffered from low coverage and was particularly ineffective when the interaction between the linguistic elements was unclear. This paper studies how linguistic features in Chinese interact to influence relative relation resolution. For this purpose, statistics-based machine learning approaches are applied. The remainder of the paper is structured as follows: Section 2 summarizes the linguistic features, which must be taken into account in temporal relation resolution, and introduces how these features are expressed in Chinese. In Section 3, the proposed machine learning algorithms to identify temporal relations are outlined; furthermore, a heterogeneous collaborative bootstrapping technique for smoothing is presented. Experiments designed for studying the impact of different approaches and linguistic features are described in Section 4. Finally, Section 5 concludes the paper. 2 Modeling Temporal Relations 2.1 Temporal Relation Representations As the importance of temporal information processing has become apparent, a variety of temporal systems have been introduced, attempting to accommodate the characteristics of relative temporal information. Among those who worked on temporal relation representations, many took the work of Reichenbach (Reichenbach, 1947) as a starting point, while some others based their works on Allen’s (Allen, 1981). Reichenbach proposed a point-based temporal theory. This was later enhanced by Bruce who defined seven relative temporal relations (Bruce. 1972). Given two durative events, the interval relations between them were modeled by the order between the greatest lower bounding points and least upper bounding points of the two events. In the other camp, instead of adopting time points, Allen took intervals as temporal primitives and introduced thirteen basic binary relations. In this interval-based theory, points are relegated to a subsidiary status as ‘meeting places’ of intervals. An extension to Allen’s theory, which treated both points and intervals as primitives on an equal footing, was later investigated by Ma and Knight (Ma and Knight, 1994). In natural language, events can either be punctual (e.g. 爆炸 (explore)) or durative (e.g. 盖楼 (built a house)) in nature. Thus Ma and Knight’s model is adopted in our work (see Figure 1). Taking the sentence “修成立交桥以后,他们解决了该市的交通问题 (after the street bridge had been built, they solved the traffic problem of the city)” as an example, the relation held between building the bridge (i.e. an interval) and solving the problem (i.e. a point) is BEFORE. Figure 1. 
Thirteen temporal relations between points and intervals 2.2 Linguistic Features for Determining Relative Relations Relative relations are generally determined by tense/aspect, connecting words (temporal or otherwise) and event classes. Tense/Aspect in English is manifested by verb inflections. But such morphological variations are inapplicable to Chinese verbs; instead, they are conveyed lexically (Li and Wong, 2002). In other words, tense and aspect in Chinese are expressed using a combination of time words, auxiliaries, temporal position words, adverbs and prepositions, and particular verbs. Temporal Connectives in English primarily involve conjunctions, e.g. after, before and when (Dorr and Gaasterland, 2002). They are key components in discourse structures. In Chinese, however, conjunctions, conjunctive adverbs, prepositions and position words are required to represent connectives. A few verbs which express cause and effect also imply a forward movement of event time. The words, which contribute to the tense/aspect and temporal connective expressions, are explicit in a sentence and generally known as Temporal Indicators. Event Class is implicit in a sentence. Events can be classified according to their inherent temporal characteristics, such as the degree of telicity and/or atomicity (Li and Wong, 2002). The four widespread accepted temporal classes1 are state, process, punctual event and developing event. Based on their classes, events interact with the tense/aspect of verbs to define the temporal relations between two events. Temporal indicators and event classes are together referred to as Linguistic Features (see Table 1). For example, linguistic features are underlined in the sentence “(因为)修成立交桥(以后),他们解决了该市 的交通问题after/because the street bridge had been built (i.e. a developing event), they solved the traffic problem of the city (i.e. a punctual event)”. 1 Temporal classification refers to aspectual classification. A punctual event (i.e. represented in time point) A durative event (i.e. represented in time interval) BEFORE/AFTER MEETS/MET-BY OVERLAPS/OVERLAPPED-BY STARTS/STARTED-BY DURING/CONTAINS FINISHES/FINISHED-BY SAME-AS Table 1 shows the mapping between a temporal indicator and its effects. Notice that the mapping is not one-to-one. For example, adverbs affect tense/aspect as well as discourse structure. For another example, tense/aspect can be affected by auxiliary words, trend verbs, etc. This shows that classification of temporal indicators based on partof-speech (POS) information alone cannot determine relative temporal relations. 3 Machine Learning Approaches for Relative Relation Resolution Previous efforts in corpus-based natural language processing have incorporated machine learning methods to coordinate multiple linguistic features for example in accent restoration (Yarowsky, 1994) and event classification (Siegel and McKeown, 1998), etc. Relative relation resolution can be modeled as a relation classification task. We model the thirteen relative temporal relations (see Figure 1) as the classes to be decided by a classifier. The resolution process is to assign an event pair (i.e. the two events under concern)2 to one class according to their linguistic features. For this purpose, we train two classifiers, a Probabilistic Decision Tree Classifier (PDT) and a Naïve Bayesian Classifier (NBC). We then combine the results by the Collaborative Bootstrapping (CB) technique which is used to mediate the sparse data problem arose due to the limited number of training cases. 
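As a structural sketch of this setup (hypothetical names and interface; not the authors' code), the resolution step amounts to a thirteen-way classification of an event pair:

```python
# The thirteen relative temporal relations of Figure 1, used as class labels.
RELATIONS = ["BEFORE", "AFTER", "MEETS", "MET-BY", "OVERLAPS", "OVERLAPPED-BY",
             "STARTS", "STARTED-BY", "DURING", "CONTAINS", "FINISHES",
             "FINISHED-BY", "SAME-AS"]

def resolve(event_pair_features, classifier):
    """Assign one event pair (the classified object) to one relation class.
    `classifier` is assumed to expose a predict_proba-style method returning
    one probability per label, e.g. the probabilistic decision tree or the
    naive Bayesian classifier described below."""
    probs = classifier.predict_proba(event_pair_features)
    best = max(range(len(RELATIONS)), key=lambda i: probs[i])
    return RELATIONS[best], probs[best]
```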
2 It is an object in machine learning algorithms.

3.1 Probabilistic Decision Tree (PDT)
Due to two domain-specific characteristics, we encounter some difficulties in classification. (a) Unknown values are common, for many events are modified by less than three linguistic features. (b) Both training and testing data are noisy. For this reason, it is impossible to obtain a tree which can completely classify all training examples. To overcome this predicament, we aim to obtain more adjusted probability distributions of event pairs over their possible classes. Therefore, a probabilistic decision tree approach is preferred over conventional decision tree approaches (e.g. C4.5, ID3). We adopt a non-incremental supervised learning algorithm in the TDIDT (Top Down Induction of Decision Trees) family. It constructs a tree top-down and the process is guided by distributional information learned from examples (Quinlan, 1993).

3.1.1 Parameter Estimation
Based on probabilities, each object in the PDT approach can belong to a number of classes. These probabilities could be estimated from training cases with Maximum Likelihood Estimation (MLE). Let l be the decision sequence, z the object and c the class. The probability of z belonging to c is:

p(c|z) = ∑_l p(c, l|z) ≈ ∑_l p(c|l) p(l|z)    (1)

Let l = B_1 B_2 ... B_n; by MLE we have:

p(c|l) ≈ p(c|B_n) = f(c, B_n) / f(B_n)    (2)

where f(c, B_n) is the count of the items whose leaf node is B_n and which belong to class c. And

p(l|z) = p(B_1|z) p(B_2|B_1, z) p(B_3|B_1, B_2, z) ... p(B_n|B_1 B_2 ... B_{n-1}, z)    (3)

where

p(B_m|B_1 B_2 ... B_{m-1}, z) = p(B_1 B_2 ... B_{m-1} B_m|z) / p(B_1 B_2 ... B_{m-1}|z)
                              = f(B_1 B_2 ... B_{m-1} B_m|z) / f(B_1 B_2 ... B_{m-1}|z),   (m = 2, 3, ..., n).

An object might traverse more than one decision path if it has unknown attribute values. f(B_1 B_2 ... B_{m-1} B_m|z) is the count of the item z which owns the decision paths from B_1 to B_m.

Table 1. Linguistic features: eleven temporal indicators and one event class
Linguistic Feature | Symbol | POS Tag | Effect | Example
With/Without punctuations | PT | Not Applicable | Not Applicable | Not Applicable
Speech verbs | VS | TI_vs | Tense | 报告, 表示, 称
Trend verbs | TR | TI_tr | Aspect | 起来, 下去
Preposition words | P | TI_p | Discourse Structure/Aspect | 当, 到, 继
Position words | PS | TI_f | Discourse Structure | 底, 后, 开始
Verbs with verb objects | VV | TI_vv | Tense/Aspect | 继续, 进行, 续
Verbs expressing wish/hope | VA | TI_va | Tense | 必须, 会, 可
Verbs related to causality | VC | TI_vc | Discourse Structure | 导致, 致使, 引起
Conjunctive words | C | TI_c | Discourse Structure | 并, 并且, 不过
Auxiliary words | U | TI_u | Aspect | 着, 了, 过
Time words | T | TI_t | Tense | 过去, 今后, 今年
Adverbs | D | TI_d | Tense/Aspect/Discourse Structure | 便, 并, 并未, 不
Event class | EC | E0/E1/E2/E3 | Event Classification | State, Punctual Event, Developing Event, Process

3.1.2 Classification Attributes
Objects are classified into classes based on their attributes. In the context of temporal relation resolution, how to categorize linguistic features into classification attributes is a major design issue. We extract all temporal indicators surrounding an event. Assume m and n are the anterior and posterior window sizes; they represent the numbers of the indicators BEFORE and AFTER the event respectively. Consider the most extreme case, where an event has at most 4 temporal indicators before and 2 after. We set m and n to 4 and 2 initially.
Experiments show that learning performance drops when m>4 and n>2, and that there is only very little difference otherwise (i.e. when m≤4 and n≤2). In addition to the temporal indicators alone, the position of the punctuation mark separating the two clauses describing the events and the classes of the events are also useful classification attributes. We will outline why this is so in Section 4.1. Altogether, the following 15 attributes are used to train the PDT and NBC classifiers:

TI_l4^e1, TI_l3^e1, TI_l2^e1, TI_l1^e1, class(e1), TI_r1^e1, TI_r2^e1,
TI_l4^e2, TI_l3^e2, TI_l2^e2, TI_l1^e2, class(e2), TI_r1^e2, TI_r2^e2, wi/wo punc

where li (i=1,2,3,4) and rj (j=1,2) index the ith indicator before and the jth indicator after the event ek (k=1,2), and "wi/wo punc" indicates whether a punctuation mark separates the two events. Given a sentence, for example, 先/TI_d 有/E0 了/TI_u 马车/n ,/w 才/TI_d 修/E2 了/TI_u 驿道/n 。/w, the attribute vector could be represented as: [0, 0, 0, 先, E0, 了, 0, 1, 0, 0, 0, 才, E2, 了, 0].

3.1.3 Attribute Selection Function
Many similar attribute selection functions have been used to construct decision trees (Marquez, 2000). These include information gain and information gain ratio (Quinlan, 1993), the χ2 test and Symmetrical Tau (Zhou and Dillon, 1991). We adopt the one proposed by Lopez de Mantaras (Mantaras, 1991), for it shows more stable performance than Quinlan's information gain ratio in our experiments. Compared with Quinlan's information gain ratio, Lopez's distance-based measurement is unbiased towards attributes with a large number of values and is capable of generating smaller trees with no loss of accuracy (Marquez, Padro and Rodriguez, 2000). This characteristic makes it an ideal choice for our work, where most attributes have more than 200 values.

3.2 Naïve Bayesian Classifier (NBC)
NBC assumes independence among features. Given the class label c, NBC learns from training data the conditional probability of each attribute Ai (see Section 3.1.2). Classification is then performed by applying Bayes' rule to compute the probability of c given the particular instance of A1, ..., An, and then predicting the class with the highest posterior probability ratio:

c* = argmax_c score(c|A_1, A_2, A_3, ..., A_n)    (4)

score(c|A_1, A_2, A_3, ..., A_n) = p(c|A_1, A_2, A_3, ..., A_n) / p(c̄|A_1, A_2, A_3, ..., A_n)    (5)

where c̄ denotes the complement of class c. Applying Bayes' rule to (5), we have:

score(c|A_1, A_2, A_3, ..., A_n) = [p(A_1, A_2, A_3, ..., A_n|c) p(c)] / [p(A_1, A_2, A_3, ..., A_n|c̄) p(c̄)]
                                 ≈ [∏_{i=1}^{n} p(A_i|c) p(c)] / [∏_{i=1}^{n} p(A_i|c̄) p(c̄)]    (6)

p(A_i|c) and p(A_i|c̄) are estimated by MLE from training data with the Dirichlet Smoothing method:

p(A_i|c) = (c(A_i, c) + u) / (∑_{j=1}^{n} c(A_j, c) + u × n)    (7)

p(A_i|c̄) = (c(A_i, c̄) + u) / (∑_{j=1}^{n} c(A_j, c̄) + u × n)    (8)

3.3 Collaborative Bootstrapping (CB)
PDT and NBC are both supervised learning approaches. Thus, the training processes require many labeled cases. Recent results (Blum and Mitchell, 1998; Collins, 1999) have suggested that unlabeled data could also be used effectively to reduce the amount of labeled data by taking advantage of collaborative bootstrapping (CB) techniques. In previous works, CB trained two homogeneous classifiers based on different independent feature spaces. However, this approach is not applicable to our work since only a few temporal indicators occur in each case. Therefore, we develop an alternative CB algorithm, i.e.
to train two different classifiers based on the same feature space. PDT (a non-linear classifier) and NBC (a linear classifier) are under consideration. This is inspired by Blum and Mitchell's theory that two collaborative classifiers should be conditionally independent so that each classifier can make its own contribution (Blum and Mitchell, 1998). The learning steps are outlined in Figure 2.

Inputs: A collection of labeled cases and unlabeled cases is prepared. The labeled cases are separated into three parts: training cases, test cases and held-out cases.
Loop: While the breaking criterion is not satisfied:
1. Build the PDT and NBC classifiers using the training cases.
2. Use PDT and NBC to classify the unlabeled cases, and exchange the selected cases which have higher Classification Confidence (i.e. the uncertainty is less than a threshold).
3. Evaluate the PDT and NBC classifiers with the held-out cases. If the error rate increases or its reduction is below a threshold, break the loop; else go to step 1.
Output: Use the optimal classifier to label the test cases.
Figure 2. Collaborative bootstrapping algorithm

3.4 Classification Confidence Measurement
Classification confidence is the metric used to measure the correctness of each labeled case automatically (see Step 2 in Figure 2). The desirable metric should satisfy two principles:
• It should be able to measure the uncertainty/certainty of the output of the classifiers; and
• It should be easy to calculate.
We adopt entropy, i.e. an information-theory-based criterion, for this purpose. Let x be the classified object, and C = {c_1, c_2, c_3, ..., c_n} the set of output classes. x is classified as c_i with probability p(c_i|x), i = 1, 2, 3, ..., n. The entropy of the output is then calculated as:

e(C|x) = −∑_{i=1}^{n} p(c_i|x) log p(c_i|x)    (9)

Once p(c_i|x) is known, the entropy can be determined. These probabilities can be easily determined in PDT, as each incoming case is classified into each class with a probability. However, the incoming cases in NBC are grouped into one class which is assigned the highest score. We then have to estimate p(c_i|x) from those scores. Without loss of generality, the probability is estimated as:

p(c_i|x) = score(c_i|x) / ∑_{j=1}^{n} score(c_j|x)    (10)

where score(c_i|x) is the ranking score of x belonging to c_i.

4 Experiment Setup and Evaluation
Several experiments have been designed to evaluate the proposed learning approaches and to reveal the impact of linguistic features on learning performance. 700 sentences are extracted from the financial section of Ta Kong Pao (a local Hong Kong Chinese newspaper). 600 cases are labeled manually and 100 are left unlabeled. Among those labeled, 400 are used as training data, 100 as test data and the rest as held-out data.

4.1 Use of Linguistic Features as Classification Attributes
The impact of a temporal indicator is determined by its position in a sentence. In PDT and NBC, we consider an indicator located in four positions: (1) BEFORE the first event; (2) AFTER the first event and BEFORE the second, modifying the first event; (3) the same as (2) but modifying the second event; and (4) AFTER the second event. Cases (2) and (3) are ambiguous: the positions of the temporal indicators are the same, but it is uncertain whether these indicators modify the first or the second event if there is no punctuation separating their roles.
We introduce two methods, namely NA and SAP to check if the ambiguity affects the two learning approaches. N(atural) O(rder): the temporal indicators between the two events are extracted and compared according to their occurrence in the sentences regardless which event they modify. S(eparate) A(uxiliary) and P(osition) words: we try to resolve the above ambiguity with the grammatical features of the indicators. In this method, we assume that an indicator modifies the first event if it is an auxiliary word (e.g. 了), a trend verb (e.g. 起来) or a position word (e.g. 前); otherwise it modifies the second event. Temporal indicators are either tense/aspect or connectives (see Section 2.2). Intuitively, it seems that classification could be better achieved if connective features are isolated from tense/ aspect features, allowing like to be compared with like. Methods SC1 and SC2 are designed based on this assumption. Table 2 shows the effect the different classification methods. SC1 (Separate Connecting words 1): it separates conjunctions and verbs relating to causality from others. They are assumed to contribute to discourse structure (intra- or inter-sentence structure), and the others contribute to the tense/aspect expressions for each individual event. They are built into 2 separate attributes, one for each event. SC2 (Separate Connecting words 2): it is the same as SC1 except that it combines the connecting word pairs (i.e. as a single pattern) into one attribute. EC (Event Class): it takes event classes into consideration. Accuracy Method PDT NBC NO 82.00% 81.00% SAP 82.20% 81.50% SAP +SC1 80.20% 78.00% SAP +SC2 81.70% 79.20% SAP +EC 85.70% 82.25% Table 2. Effect of encoding linguistic features in the different ways 4.2 Impact of Individual Features From linguistic perspectives, 13 features (see Table 1) are useful for relative relation resolution. To examine the impact of each individual feature, we feed a single linguistic feature to the PDT learning algorithm one at a time and study the accuracy of the resultant classifier. The experimental results are given in Table 3. It shows that event classes have greatest accuracy, followed by conjunctions in the second place, and adverbs in the third. Feature Accuracy Feature Accuracy PT 50.5% VA 56.5% VS 54% C 62% VC 54% U 51.5% TR 50.5% T 57.2% P 52.2 % D 61.7% PS 58.7% EC 68.2% VS 51.2% None 50.5% Table 3. Impact of individual linguistic features 4.3 Discussions Analysis of the results in Tables 2 and 3 reveals some linguistic insights: 1. In a situation where temporal indicators appear between two events and there is no punctuation mark separating them, POS information help reduce the ambiguity. Compared with NO, SAP shows a slight improvement from 82% to 82.2%. But the improvement seems trivial and is not as good as our prediction. This might due to the small percent of such cases in the corpus. 2. Separating conjunctions and verbs relating to causality from others is ineffective. This reveals the complexity of Chinese in connecting expressions. It is because other words (such as adverbs, proposition and position words) also serve such a function. Meanwhile, experiments based on SC1 and SC2 suggest that the connecting expressions generally involve more than one word or phrase. Although the words in a connecting expression are separated in a sentence, the action is indeed interactive. It would be more useful to regard them as one attribute. 3. The effect of event classification is striking. 
Taking this feature into account, the accuracies of both PDT and NB improved significantly. As a matter of fact, different event classes may introduce different relations even if they are constrained by the same temporal indicators. 4.4 Collaborative Bootstrapping Table 4 presents the evaluation results of the four different classification approaches. DM is the default model, which classifies all incoming cases as the most likely class. It is used as evaluation baseline. Compare with DM, PDT and NBC show improvement in accuracy (i.e. above 60% improvement). And CB in turn outperforms PDT and NBC. This proves that using unlabeled data to boost the performance of the two classifiers is effective. Accuracy Approach Close test Open test DM 50.50% 55.00% NBC 82.25% 72.00% PDT 85.70% 74.00% CB 88.70% 78.00% Table 4. Evaluation of NBC, PDT and CB approaches 5 Conclusions Relative temporal relation resolution received growing attentions in recent years. It is important for many natural language processing applications, such as information extraction and machine translation. This topic, however, has not been well studied, especially in Chinese. In this paper, we propose a model for relative temporal relation resolution in Chinese. Our model combines linguistic knowledge and machine learning approaches. Two learning approaches, namely probabilistic decision tree (PDT) and naive Bayesian classifier (NBC) and 13 linguistic features are employed. Due to the limited labeled cases, we also propose a collaborative bootstrapping technique to improve learning performance. The experimental results show that our approaches are encouraging. To our knowledge, this is the first attempt of collaborative bootstrapping, which involves two heterogeneous classifiers, in NLP application. This lays down the main contribution of our research. In this pilot work, temporal indicators are selected based on linguistic knowledge. It is time-consuming and could be error-prone. This suggests two directions for future studies. We will try to automate or at least semi-automate feature selection process. Another future work worth investigating is temporal indicator clustering. There are two methods we could investigate, i.e. clustering the recognized indicators which occur in training corpus according to co-occurrence information or grouping them into two semantic roles, one related to tense/aspect expressions and the other to connecting expressions between two events. Acknowledgements The work presented in this paper is partially supported by Research Grants Council of Hong Kong (RGC reference number PolyU5085/02E) and CUHK Strategic Grant (account number 4410001). References Allen J., 1981. An Interval-based Represent Action of Temporal Knowledge. In Proceedings of 7th International Joint Conference on Artificial Intelligence, pages 221-226. Los Altos, CA. Blum, A. and Mitchell T., 1998. Combining Labeled and Unlabeled Data with Co-Training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, Madison, Wisconsin, pages 92-100 Bruce B., 1972. A Model for Temporal References and its Application in Question-Answering Program. Artificial Intelligence, 3(1):1-25. Collins M. and Singer Y, 1999. Unsupervised Models for Named Entity Classification. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 189-196. University of Maryland. Dorr B. and Gaasterland T., 2002. 
Constraints on the Generation of Tense, Aspect, and Connecting Words from Temporal Expressions. (submitted to JAIR) Hitzeman J., Moens M. and Grover C., 1995. Algorithms for Analyzing the Temporal Structure of Discourse. In Proceedings of the 7th European Meeting of the Association for Computational Linguistics, pages 253-260. Dublin, Ireland. Lascarides A., Asher N. and Oberlander J., 1992. Inferring Discourse Relations in Context. In Proceedings of the 30th Meeting of the Association for Computational Linguistics, pages 1-8, Newark, Del. Li W.J. and Wong K.F., 2002. A Word-based Approach for Modeling and Discovering Temporal Relations Embedded in Chinese Sentences, ACM Transaction on Asian Language Processing, 1(3):173-206. Ma J. and Knight B., 1994. A General Temporal Theory. The Computer Journal, 37(2):114- 123. Màntaras L., 1991. A Distance-based Attribute Selection Measure for Decision Tree Induction. Machine Learning, 6(1): 81–92. Màrquez L., Padró L. and Rodríguez H., 2000. A Machine Learning Approach to POS Tagging. Machine Learning, 39(1):59-91. Kluwer Academic Publishers. Partee, B., 1984. Nominal and Temporal Anaphora. Linguistics and Philosophy, 7(3):287-324. Quinlan J., 1993. C4.5 Programs for Machine Learning. Morgan Kauman Press. Reichenbach H., 1947. Elements of Symbolic Logic. Berkeley CA, University of California Press. Siegel E. and McKeown K., 2000. Learning Methods to Combine Linguistic Indicators: Improving Aspectual Classification and Revealing Linguistic Insights. Computational Linguistics, 26(4): 595627. Wiebe, J.M., O'Hara, T.P., Ohrstrom-Sandgren, T. and McKeever, K.J, 1998. An Empirical Approach to Temporal Reference Resolution. Journal of Artificial Intelligence Research, 9:247-293. Wong F., Li W., Yuan C., etc., 2002. Temporal Representation and Classification in Chinese. International Journal of Computer Processing of Oriental Languages, 15(2):211-230. Yarowsky D., 1994. Decision Lists for Lexical Ambiguity Resolution: Application to the Accent Restoration in Spanish and French. In Proceeding of the 32rd Annual Meeting of ACL, San Francisco, CA. Zhou X., Dillon T., 1991. A Statistical-heuristic Feature Selection Criterion for Decision Tree Induction. IEEE Transaction on Pattern Analysis and Machine Intelligence, 13(8): 834-841.
Multi-Criteria-based Active Learning for Named Entity Recognition Dan Shen†‡1 Jie Zhang†‡ Jian Su† Guodong Zhou† Chew-Lim Tan‡ † Institute for Infocomm Technology 21 Heng Mui Keng Terrace Singapore 119613 ‡ Department of Computer Science National University of Singapore 3 Science Drive 2, Singapore 117543 {shendan,zhangjie,sujian,zhougd}@i2r.a-star.edu.sg {shendan,zhangjie,tancl}@comp.nus.edu.sg 1 Current address of the first author: Universität des Saarlandes, Computational Linguistics Dept., 66041 Saarbrü cken, Germany [email protected] Abstract In this paper, we propose a multi-criteriabased active learning approach and effectively apply it to named entity recognition. Active learning targets to minimize the human annotation efforts by selecting examples for labeling. To maximize the contribution of the selected examples, we consider the multiple criteria: informativeness, representativeness and diversity and propose measures to quantify them. More comprehensively, we incorporate all the criteria using two selection strategies, both of which result in less labeling cost than single-criterion-based method. The results of the named entity recognition in both MUC-6 and GENIA show that the labeling cost can be reduced by at least 80% without degrading the performance. 1 Introduction In the machine learning approaches of natural language processing (NLP), models are generally trained on large annotated corpus. However, annotating such corpus is expensive and timeconsuming, which makes it difficult to adapt an existing model to a new domain. In order to overcome this difficulty, active learning (sample selection) has been studied in more and more NLP applications such as POS tagging (Engelson and Dagan 1999), information extraction (Thompson et al. 1999), text classification (Lewis and Catlett 1994; McCallum and Nigam 1998; Schohn and Cohn 2000; Tong and Koller 2000; Brinker 2003), statistical parsing (Thompson et al. 1999; Tang et al. 2002; Steedman et al. 2003), noun phrase chunking (Ngai and Yarowsky 2000), etc. Active learning is based on the assumption that a small number of annotated examples and a large number of unannotated examples are available. This assumption is valid in most NLP tasks. Different from supervised learning in which the entire corpus are labeled manually, active learning is to select the most useful example for labeling and add the labeled example to training set to retrain model. This procedure is repeated until the model achieves a certain level of performance. Practically, a batch of examples are selected at a time, called batchedbased sample selection (Lewis and Catlett 1994) since it is time consuming to retrain the model if only one new example is added to the training set. Many existing work in the area focus on two approaches: certainty-based methods (Thompson et al. 1999; Tang et al. 2002; Schohn and Cohn 2000; Tong and Koller 2000; Brinker 2003) and committee-based methods (McCallum and Nigam 1998; Engelson and Dagan 1999; Ngai and Yarowsky 2000) to select the most informative examples for which the current model are most uncertain. Being the first piece of work on active learning for name entity recognition (NER) task, we target to minimize the human annotation efforts yet still reaching the same level of performance as a supervised learning approach. 
For this purpose, we make a more comprehensive consideration on the contribution of individual examples, and more importantly maximizing the contribution of a batch based on three criteria: informativeness, representativeness and diversity. First, we propose three scoring functions to quantify the informativeness of an example, which can be used to select the most uncertain examples. Second, the representativeness measure is further proposed to choose the examples representing the majority. Third, we propose two diversity considerations (global and local) to avoid repetition among the examples of a batch. Finally, two combination strategies with the above three criteria are proposed to reach the maximum effectiveness on active learning for NER. We build our NER model using Support Vector Machines (SVM). The experiment shows that our active learning methods achieve a promising result in this NER task. The results in both MUC6 and GENIA show that the amount of the labeled training data can be reduced by at least 80% without degrading the quality of the named entity recognizer. The contributions not only come from the above measures, but also the two sample selection strategies which effectively incorporate informativeness, representativeness and diversity criteria. To our knowledge, it is the first work on considering the three criteria all together for active learning. Furthermore, such measures and strategies can be easily adapted to other active learning tasks as well. 2 Multi-criteria for NER Active Learning Support Vector Machines (SVM) is a powerful machine learning method, which has been applied successfully in NER tasks, such as (Kazama et al. 2002; Lee et al. 2003). In this paper, we apply active learning methods to a simple and effective SVM model to recognize one class of names at a time, such as protein names, person names, etc. In NER, SVM is to classify a word into positive class “1” indicating that the word is a part of an entity, or negative class “-1” indicating that the word is not a part of an entity. Each word in SVM is represented as a high-dimensional feature vector including surface word information, orthographic features, POS feature and semantic trigger features (Shen et al. 2003). The semantic trigger features consist of some special head nouns for an entity class which is supplied by users. Furthermore, a window (size = 7), which represents the local context of the target word w, is also used to classify w. However, for active learning in NER, it is not reasonable to select a single word without context for human to label. Even if we require human to label a single word, he has to make an addition effort to refer to the context of the word. In our active learning process, we select a word sequence which consists of a machine-annotated named entity and its context rather than a single word. Therefore, all of the measures we propose for active learning should be applied to the machineannotated named entities and we have to further study how to extend the measures for words to named entities. Thus, the active learning in SVMbased NER will be more complex than that in simple classification tasks, such as text classification on which most SVM active learning works are conducted (Schohn and Cohn 2000; Tong and Koller 2000; Brinker 2003). In the next part, we will introduce informativeness, representativeness and diversity measures for the SVM-based NER. 
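Before turning to those measures, the word-level input representation described above can be made concrete with a small sketch. This is an illustration only: the feature names below are hypothetical stand-ins for the surface word, orthographic, POS and semantic trigger features actually used, and the window of seven tokens (three on each side of the target) follows the description above.

def window_features(tokens, pos_tags, i, half_window=3):
    # Collect simple features for classifying tokens[i] within a 7-token window
    # (hypothetical feature names; the real system adds trigger and richer orthographic cues).
    feats = {}
    for offset in range(-half_window, half_window + 1):
        j = i + offset
        if 0 <= j < len(tokens):
            feats["word[%+d]=%s" % (offset, tokens[j].lower())] = 1.0
            feats["pos[%+d]=%s" % (offset, pos_tags[j])] = 1.0
    feats["is_capitalized"] = float(tokens[i][:1].isupper())
    feats["has_digit"] = float(any(ch.isdigit() for ch in tokens[i]))
    return feats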
2.1 Informativeness

The basic idea of the informativeness criterion is similar to that of certainty-based sample selection methods, which have been used in many previous works. In our task, we use a distance-based measure to evaluate the informativeness of a word and extend it to the measure of an entity using three scoring functions. We prefer the examples with a high informative degree, for which the current model is most uncertain.

2.1.1 Informativeness Measure for Word

In the simplest linear form, training SVM is to find a hyperplane that can separate the positive and negative examples in the training set with maximum margin. The margin is defined by the distance of the hyperplane to the nearest of the positive and negative examples. The training examples which are closest to the hyperplane are called support vectors. In SVM, only the support vectors are useful for the classification, which is different from statistical models. SVM training is to get these support vectors and their weights from the training set by solving a quadratic programming problem. The support vectors can later be used to classify the test data.

Intuitively, we consider the informativeness of an example as how much effect it can have on the support vectors when it is added to the training set. An example may be informative for the learner if the distance of its feature vector to the hyperplane is less than that of the support vectors to the hyperplane (equal to 1). This intuition is also justified by (Schohn and Cohn 2000; Tong and Koller 2000) based on a version space analysis. They state that labeling an example that lies on or close to the hyperplane is guaranteed to have an effect on the solution. In our task, we use the distance to measure the informativeness of an example. The distance of a word's feature vector to the hyperplane is computed as follows:

Dist(w) = Σ_{i=1}^{N} α_i y_i k(s_i, w) + b

where w is the feature vector of the word; α_i, y_i and s_i correspond to the weight, the class and the feature vector of the i-th support vector respectively; and N is the number of support vectors in the current model. We select the example with minimal Dist, which indicates that it comes closest to the hyperplane in feature space. This example is considered most informative for the current model.

2.1.2 Informativeness Measure for Named Entity

Based on the above informativeness measure for a word, we compute the overall informativeness degree of a named entity NE. In this paper, we propose three scoring functions as follows. Let NE = w_1 ... w_N, in which w_i is the feature vector of the i-th word of NE.

• Info_Avg: The informativeness of NE is scored by the average distance of the words in NE to the hyperplane:

Info(NE) = 1 − (1/N) Σ_{w_i ∈ NE} Dist(w_i)

where w_i is the feature vector of the i-th word in NE.

• Info_Min: The informativeness of NE is scored by the minimal distance of the words in NE:

Info(NE) = 1 − Min_{w_i ∈ NE} {Dist(w_i)}

• Info_S/N: If the distance of a word to the hyperplane is less than a threshold α (= 1 in our task), the word is considered to be at a short distance. We then compute the proportion of words with short distance to the total number of words in the named entity and use this proportion to quantify the informativeness of the named entity:

Info(NE) = NUM_{w_i ∈ NE}(Dist(w_i) < α) / N

In Section 4.3, we will evaluate the effectiveness of these scoring functions.

2.2 Representativeness

In addition to the most informative example, we also prefer the most representative example.
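As a concrete reference for Section 2.1.2, the three entity-level scores can be sketched as follows before the representativeness measure is developed. The sketch assumes the per-word Dist(w) is taken as the absolute SVM decision value (for instance, scikit-learn's decision_function computes the same weighted kernel sum plus bias), and that an entity is represented by the array of its words' distances.

import numpy as np

def word_distances(svm_model, word_vectors):
    # Dist(w) = sum_i alpha_i * y_i * k(s_i, w) + b; the absolute decision value
    # is used as the (unnormalized) distance of each word to the hyperplane.
    return np.abs(svm_model.decision_function(np.asarray(word_vectors)))

def info_avg(dists):
    return 1.0 - dists.mean()                 # Info_Avg: 1 - average word distance

def info_min(dists):
    return 1.0 - dists.min()                  # Info_Min: 1 - minimal word distance

def info_sn(dists, alpha=1.0):
    return float((dists < alpha).sum()) / len(dists)   # Info_S/N: share of short-distance words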
The representativeness of an example can be evaluated based on how many examples there are similar or near to it. So, the examples with high representative degree are less likely to be an outlier. Adding them to the training set will have effect on a large number of unlabeled examples. There are only a few works considering this selection criterion (McCallum and Nigam 1998; Tang et al. 2002) and both of them are specific to their tasks, viz. text classification and statistical parsing. In this section, we compute the similarity between words using a general vector-based measure, extend this measure to named entity level using dynamic time warping algorithm and quantify the representativeness of a named entity by its density. 2.2.1 Similarity Measure between Words In general vector space model, the similarity between two vectors may be measured by computing the cosine value of the angle between them. The smaller the angle is, the more similar between the vectors are. This measure, called cosine-similarity measure, has been widely used in information retrieval tasks (Baeza-Yates and Ribeiro-Neto 1999). In our task, we also use it to quantify the similarity between two words. Particularly, the calculation in SVM need be projected to a higher dimensional space by using a certain kernel function ( , ) i j K w w . Therefore, we adapt the cosine-similarity measure to SVM as follows: ( , ) ( , ) ( , ) ( , ) i j i j i i j j k Sim k k = w w w w w w w w where, wi and wj are the feature vectors of the words i and j. This calculation is also supported by (Brinker 2003)’s work. Furthermore, if we use the linear kernel ( , ) i j i j k = ⋅ w w w w , the measure is the same as the traditional cosine similarity measure cos i j i j θ ⋅ = ⋅ w w w w and may be regarded as a general vector-based similarity measure. 2.2.2 Similarity Meas ure between Named Entities In this part, we compute the similarity between two machine-annotated named entities given the similarities between words. Regarding an entity as a word sequence, this work is analogous to the alignment of two sequences. We employ the dynamic time warping (DTW) algorithm (Rabiner et al. 1978) to find an optimal alignment between the words in the sequences which maximize the accumulated similarity degree between the sequences. Here, we adapt it to our task. A sketch of the modified algorithm is as follows. Let NE1 = w11w12…w1n…w1N, (n = 1,…, N) and NE2 = w21w22…w2m…w2M, (m = 1,…, M) denote two word sequences to be matched. NE1 and NE2 consist of M and N words respectively. NE1(n) = w1n and NE2(m) = w2m. A similarity value Sim(w1n ,w2m) has been known for every pair of words (w1n,w2m) within NE1 and NE2. The goal of DTW is to find a path, m = map(n), which map n onto the corresponding m such that the accumulated similarity Sim* along the path is maximized. 1 2 { ( )} 1 * { ( ( ), ( ( ))} N map n n Sim M a x Sim N E n N E map n = = ∑ A dynamic programming method is used to determine the optimum path map(n). The accumulated similarity SimA to any grid point (n, m) can be recursively calculated as 1 2 ( , ) ( , ) ( 1, ) A n m A q m Sim n m Sim w w M a x S i m n q ≤ = + − Finally, * ( , ) A Sim Sim N M = Certainly, the overall similarity measure Sim* has to be normalized as longer sequences normally give higher similarity value. 
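The kernel cosine similarity and the DTW-style accumulation just described can be sketched as follows. This is a simplified illustration: each entity is assumed to be a list of word feature vectors, k is an arbitrary kernel function, boundary handling of the first row is simplified, and the final division by max(N, M) anticipates the length normalisation given just below.

import numpy as np

def kernel_cosine(wi, wj, k):
    # Cosine of the angle between two word vectors in the space induced by kernel k.
    return k(wi, wj) / np.sqrt(k(wi, wi) * k(wj, wj))

def entity_similarity(ne1, ne2, k):
    # DTW-style alignment: Sim_A(n, m) = Sim(w1_n, w2_m) + max_{q <= m} Sim_A(n-1, q).
    n_len, m_len = len(ne1), len(ne2)
    sim = np.array([[kernel_cosine(w1, w2, k) for w2 in ne2] for w1 in ne1])
    acc = np.zeros_like(sim)
    acc[0] = sim[0]
    for n in range(1, n_len):
        acc[n] = sim[n] + np.maximum.accumulate(acc[n - 1])
    return acc[-1, -1] / max(n_len, m_len)    # Sim* normalised by the longer sequence

# With a linear kernel, kernel_cosine reduces to the ordinary cosine similarity:
linear_kernel = lambda a, b: float(np.dot(a, b))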
So, the similarity between two sequences NE1 and NE2 is calculated as 1 2 * ( , ) ( , ) Sim Sim NE NE Max N M = 2.2.3 Representativeness Measure for Named Entity Given a set of machine-annotated named entities NESet = {NE1, …, NEN}, the representativeness of a named entity NEi in NESet is quantified by its density. The density of NEi is defined as the average similarity between NEi and all the other entities NEj in NESet as follows. ( , ) ( ) 1 i j j i i Sim NE NE Density N E N ≠ = − ∑ If NEi has the largest density among all the entities in NESet, it can be regarded as the centroid of NESet and also the most representative examples in NESet. 2.3 Diversity Diversity criterion is to maximize the training utility of a batch. We prefer the batch in which the examples have high variance to each other. For example, given the batch size 5, we try not to select five repetitious examples at a time. To our knowledge, there is only one work (Brinker 2003) exploring this criterion. In our task, we propose two methods: local and global, to make the examples diverse enough in a batch. 2.3.1 Global Consideration For a global consideration, we cluster all named entities in NESet based on the similarity measure proposed in Section 2.2.2. The named entities in the same cluster may be considered similar to each other, so we will select the named entities from different clusters at one time. We employ a Kmeans clustering algorithm (Jelinek 1997), which is shown in Figure 1. Given: NESet = {NE1, …, NEN} Suppose: The number of clusters is K Initialization: Randomly equally partition {NE1, …, NEN} into K initial clusters Cj (j = 1, …, K). Loop until the number of changes for the centroids of all clusters is less than a threshold • Find the centroid of each cluster Cj (j = 1, …, K). arg ( ( , )) j i j j i NE C NE C NECent max Sim NE NE ∈ ∈ = ∑ • Repartition {NE1, …, NEN} into K clusters. NEi will be assigned to Cluster Cj if ( , ) ( , ), i j i w Sim NE NECent Sim NE NECent w j ≥ ≠ Figure 1: Global Consideration for Diversity: KMeans Clustering algorithm In each round, we need to compute the pairwise similarities within each cluster to get the centroid of the cluster. And then, we need to compute the similarities between each example and all centroids to repartition the examples. So, the algorithm is time-consuming. Based on the assumption that N examples are uniformly distributed between the K clusters, the time complexity of the algorithm is about O(N2/K+NK) (Tang et al. 2002). In one of our experiments, the size of the NESet (N) is around 17000 and K is equal to 50, so the time complexity is about O(106). For efficiency, we may filter the entities in NESet before clustering them, which will be further discussed in Section 3. 2.3.2 Local Consideration When selecting a machine-annotated named entity, we compare it with all previously selected named entities in the current batch. If the similarity between them is above a threshold ß, this example cannot be allowed to add into the batch. The order of selecting examples is based on some measure, such as informativeness measure, representativeness measure or their combination. This local selection method is shown in Figure 2. In this way, we avoid selecting too similar examples (similarity value ≥ ß) in a batch. The threshold ß may be the average similarity between the examples in NESet. Given: NESet = {NE1, …, NEN} BatchSet with the maximal size K. Initialization: BatchSet = empty Loop until BatchSet is full • Select NEi based on some measure from NESet. 
• RepeatFlag = false; • Loop from j = 1 to CurrentSize(BatchSet) If ( , ) i j Sim NE NE β ≥ Then RepeatFlag = true; Stop the Loop; • If RepeatFlag == false Then add NEi into BatchSet • remove NEi from NESet Figure 2: Local Consideration for Diversity This consideration only requires O(NK+K2) computational time. In one of our experiments (N ˜ 17000 and K = 50), the time complexity is about O(105). It is more efficient than clustering algorithm described in Section 2.3.1. 3 Sample Selection strategies In this section, we will study how to combine and strike a proper balance between these criteria, viz. informativeness, representativeness and diversity, to reach the maximum effectiveness on NER active learning. We build two strategies to combine the measures proposed above. These strategies are based on the varying priorities of the criteria and the varying degrees to satisfy the criteria. • Strategy 1: We first consider the informativeness criterion. We choose m examples with the most informativeness score from NESet to an intermediate set called INTERSet. By this preselecting, we make the selection process faster in the later steps since the size of INTERSet is much smaller than that of NESet. Then we cluster the examples in INTERSet and choose the centroid of each cluster into a batch called BatchSet. The centroid of a cluster is the most representative example in that cluster since it has the largest density. Furthermore, the examples in different clusters may be considered diverse to each other. By this means, we consider representativeness and diversity criteria at the same time. This strategy is shown in Figure 3. One limitation of this strategy is that clustering result may not reflect the distribution of whole sample space since we only cluster on INTERSet for efficiency. The other is that since the representativeness of an example is only evaluated on a cluster. If the cluster size is too small, the most representative example in this cluster may not be representative in the whole sample space. Given: NESet = {NE1, …, NEN} BatchSet with the maximal size K. INTERSet with the maximal size M Steps: • BatchSet = ∅ • INTERSet = ∅ • Select M entities with most Info score from NESet to INTERSet. • Cluster the entities in INTERSet into K clusters • Add the centroid entity of each cluster to BatchSet Figure 3: Sample Selection Strategy 1 • Strategy 2: (Figure 4) We combine the informativeness and representativeness criteria using the functio ( ) (1 ) ( ) i i Info NE Density NE λ λ + − , in which the Info and Density value of NEi are normalized first. The individual importance of each criterion in this function is adjusted by the tradeoff parameter λ ( 0 1 λ ≤ ≤ ) (set to 0.6 in our experiment). First, we select a candidate example NEi with the maximum value of this function from NESet. Second, we consider diversity criterion using the local method in Section 3.3.2. We add the candidate example NEi to a batch only if NEi is different enough from any previously selected example in the batch. The threshold ß is set to the average pair-wise similarity of the entities in NESet. Given: NESet = {NE1, …, NEN} BatchSet with the maximal size K. Initialization: BatchSet = ∅ Loop until BatchSet is full • Select NEi which have the maximum value for the combination function between Info score and Density socre from NESet. 
arg ( ( ) (1 ) ( )) i i i i N E NESet NE Max Info NE Density NE λ λ ∈ = + − • RepeatFlag = false; • Loop from j = 1 to CurrentSize(BatchSet) If ( , ) i j Sim NE NE β ≥ Then RepeatFlag = true; Stop the Loop; • If RepeatFlag == false Then add NEi into BatchSet • remove NEi from NESet Figure 4: Sample Selection Strategy 2 4 Experimental Results and Analysis 4.1 Experiment Settings In order to evaluate the effectiveness of our selection strategies, we apply them to recognize protein (PRT) names in biomedical domain using GENIA corpus V1.1 (Ohta et al. 2002) and person (PER), location (LOC), organization (ORG) names in newswire domain using MUC-6 corpus. First, we randomly split the whole corpus into three parts: an initial training set to build an initial model, a test set to evaluate the performance of the model and an unlabeled set to select examples. The size of each data set is shown in Table 1. Then, iteratively, we select a batch of examples following the selection strategies proposed, require human experts to label them and add them into the training set. The batch size K = 50 in GENIA and 10 in MUC-6. Each example is defined as a machine-recognized named entity and its context words (previous 3 words and next 3 words). Domain Class Corpus Initial Training Set Test Set Unlabeled Set Biomedical PRT GENIA1.1 10 sent. (277 words) 900 sent. (26K words) 8004 sent. (223K words) PER 5 sent. (131 words) 7809 sent. (157K words) LOC 5 sent. (130 words) 7809 sent. (157K words) Newswire ORG MUC-6 5 sent. (113 words) 602 sent. (14K words) 7809 sent. (157K words) Table 1: Experiment settings for active learning using GENIA1.1(PRT) and MUC-6(PER,LOC,ORG) The goal of our work is to minimize the human annotation effort to learn a named entity recognizer with the same performance level as supervised learning. The performance of our model is evaluated using “precision/recall/F-measure”. 4.2 Overall Result in GENIA and MUC-6 In this section, we evaluate our selection strategies by comparing them with a random selection method, in which a batch of examples is randomly selected iteratively, on GENIA and MUC-6 corpus. Table 2 shows the amount of training data needed to achieve the performance of supervised learning using various selection methods, viz. Random, Strategy1 and Strategy2. In GENIA, we find: • The model achieves 63.3 F-measure using 223K words in the supervised learning. • The best performer is Strategy2 (31K words), requiring less than 40% of the training data that Random (83K words) does and 14% of the training data that the supervised learning does. • Strategy1 (40K words) performs slightly worse than Strategy2, requiring 9K more words. It is probably because Strategy1 cannot avoid selecting outliers if a cluster is too small. • Random (83K words) requires about 37% of the training data that the supervised learning does. It indicates that only the words in and around a named entity are useful for classification and the words far from the named entity may not be helpful. Class Supervised Random Strategy1 Strategy2 PRT 223K (F=63.3) 83K 40K 31K PER 157K (F=90.4) 11.5K 4.2K 3.5K LOC 157K (F=73.5) 13.6K 3.5K 2.1K ORG 157K (F=86.0) 20.2K 9.5K 7.8K Table 2: Overall Result in GENIA and MUC-6 Furthermore, when we apply our model to newswire domain (MUC-6) to recognize person, location and organization names, Strategy1 and Strategy2 show a more promising result by comparing with the supervised learning and Random, as shown in Table 2. 
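For concreteness, a simplified sketch of the Strategy 2 selection loop (Figure 4) is given below. It assumes that the informativeness and density scores have already been computed and normalised, that sim is the pairwise entity similarity matrix of Section 2.2.2, that λ = 0.6 as in the experiments, and that the diversity threshold β is the average pairwise similarity; the exact tie-breaking and bookkeeping details are omitted.

import numpy as np

def density_scores(sim):
    # Density of each entity: its average similarity to all the others (Section 2.2.3).
    n = sim.shape[0]
    return (sim.sum(axis=1) - sim.diagonal()) / (n - 1)

def select_batch_strategy2(info, density, sim, batch_size, lam=0.6):
    # Rank entities by lam*Info + (1-lam)*Density; add one at a time, skipping any
    # entity whose similarity to an already selected one reaches the threshold beta.
    score = lam * np.asarray(info) + (1.0 - lam) * np.asarray(density)
    beta = sim.mean()
    batch = []
    for i in np.argsort(-score):
        if all(sim[i, j] < beta for j in batch):
            batch.append(int(i))
        if len(batch) == batch_size:
            break
    return batch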
On average, about 95% of the data can be reduced to achieve the same performance with the supervised learning in MUC-6. It is probably because NER in the newswire domain is much simpler than that in the biomedical domain (Shen et al. 2003) and named entities are less and distributed much sparser in the newswire texts than in the biomedical texts. 4.3 Effectiveness of Informativeness-based Selection Method In this section, we investigate the effectiveness of informativeness criterion in NER task. Figure 5 shows a plot of training data size versus F-measure achieved by the informativeness-based measures in Section 3.1.2: Info_Avg, Info_Min and Info_S/N as well as Random. We make the comparisons in GENIA corpus. In Figure 5, the horizontal line is the performance level (63.3 F-measure) achieved by supervised learning (223K words). We find that the three informativeness-based measures perform similarly and each of them outperforms Random. Table 3 highlights the various data sizes to achieve the peak performance using these selection methods. We find that Random (83K words) on average requires over 1.5 times as much as data to achieve the same performance as the informativeness-based selection methods (52K words). 0.5 0.55 0.6 0.65 0 20 40 60 80 K words F Supervised Random Info_Min Info_S/N Info_Avg Figure 5: Active learning curves: effectiveness of the three informativeness-criterion-based selections comparing with the Random selection. Supervised Random Info_Avg Info_Min Info_ S/N 223K 83K 52.0K 51.9K 52.3K Table 3: Training data sizes for various selection methods to achieve the same performance level as the supervised learning 4.4 Effectiveness of Two Sample Selection Strategies In addition to the informativeness criterion, we further incorporate representativeness and diversity criteria into active learning using two strategies described in Section 3. Comparing the two strategies with the best result of the single-criterionbased selection methods Info_Min, we are to justify that representativeness and diversity are also important factors for active learning. Figure 6 shows the learning curves for the various methods: Strategy1, Strategy2 and Info_Min. In the beginning iterations (F-measure < 60), the three methods performed similarly. But with the larger training set, the efficiencies of Stratety1 and Strategy2 begin to be evident. Table 4 highlights the final result of the three methods. In order to reach the performance of supervised learning, Strategy1 (40K words) and Strategyy2 (31K words) require about 80% and 60% of the data that Info_Min (51.9K) does. So we believe the effective combinations of informativeness, representativeness and diversity will help to learn the NER model more quickly and cost less in annotation. 0.5 0.55 0.6 0.65 0 20 40 60 K words F Supervised Info_Min Strategy1 Strategy2 Figure 6: Active learning curves: effectiveness of the two multi-criteria-based selection strategies comparing with the informativeness-criterion-based selection (Info_Min). Info_Min Strategy1 Strategy2 51.9K 40K 31K Table 4: Comparisons of training data sizes for the multicriteria-based selection strategies and the informativenesscriterion-based selection (Info_Min) to achieve the same performance level as the supervised learning. 5 Related Work Since there is no study on active learning for NER task previously, we only introduce general active learning methods here. Many existing active learning methods are to select the most uncertain examples using various measures (Thompson et al. 
1999; Schohn and Cohn 2000; Tong and Koller 2000; Engelson and Dagan 1999; Ngai and Yarowsky 2000). Our informativeness-based measure is similar to these works. However these works just follow a single criterion. (McCallum and Nigam 1998; Tang et al. 2002) are the only two works considering the representativeness criterion in active learning. (Tang et al. 2002) use the density information to weight the selected examples while we use it to select examples. Moreover, the representativeness measure we use is relatively general and easy to adapt to other tasks, in which the example selected is a sequence of words, such as text chunking, POS tagging, etc. On the other hand, (Brinker 2003) first incorporate diversity in active learning for text classification. Their work is similar to our local consideration in Section 2.3.2. However, he didn’t further explore how to avoid selecting outliers to a batch. So far, we haven’t found any previous work integrating the informativeness, representativeness and diversity all together. 6 Conclusion and Future Work In this paper, we study the active learning in a more complex NLP task, named entity recognition. We propose a multi-criteria-based approach to select examples based on their informativeness, representativeness and diversity, which are incorporated all together by two strategies (local and global). Experiments show that, in both MUC6 and GENIA, both of the two strategies combining the three criteria outperform the single criterion (informativeness). The labeling cost can be significantly reduced by at least 80% comparing with the supervised learning. To our best knowledge, this is not only the first work to report the empirical results of active learning for NER, but also the first work to incorporate the three criteria all together for selecting examples. Although the current experiment results are very promising, some parameters in our experiment, such as the batch size K and the λ in the function of strategy 2, are decided by our experience in the domain. In practical application, the optimal value of these parameters should be decided automatically based on the training process. Furthermore, we will study how to overcome the limitation of the strategy 1 discussed in Section 3 by using more effective clustering algorithm. Another interesting work is to study when to stop active learning. References R. Baeza-Yates and B. Ribeiro-Neto. 1999. Modern Information Retrieval. ISBN 0-201-39829-X. K. Brinker. 2003. Incorporating Diversity in Active Learning with Support Vector Machines. In Proceedings of ICML, 2003. S. A. Engelson and I. Dagan. 1999. CommitteeBased Sample Selection for Probabilistic Classifiers. Journal of Artifical Intelligence Research. F. Jelinek. 1997. Statistical Methods for Speech Recognition. MIT Press. J. Kazama, T. Makino, Y. Ohta and J. Tsujii. 2002. Tuning Support Vector Machines for Biomedical Named Entity Recognition. In Proceedings of the ACL2002 Workshop on NLP in Biomedicine. K. J. Lee, Y. S. Hwang and H. C. Rim. 2003. TwoPhase Biomedical NE Recognition based on SVMs. In Proceedings of the ACL2003 Workshop on NLP in Biomedicine. D. D. Lewis and J. Catlett. 1994. Heterogeneous Uncertainty Sampling for Supervised Learning. In Proceedings of ICML, 1994. A. McCallum and K. Nigam. 1998. Employing EM in Pool-Based Active Learning for Text Classification. In Proceedings of ICML, 1998. G. Ngai and D. Yarowsky. 2000. Rule Writing or Annotation: Cost-efficient Resource Usage for Base Noun Phrase Chunking. 
In Proceedings of ACL, 2000. T. Ohta, Y. Tateisi, J. Kim, H. Mima and J. Tsujii. 2002. The GENIA corpus: An annotated research abstract corpus in molecular biology domain. In Proceedings of HLT 2002. L. R. Rabiner, A. E. Rosenberg and S. E. Levinson. 1978. Considerations in Dynamic Time Warping Algorithms for Discrete Word Recognition. In Proceedings of IEEE Transactions on acoustics, speech and signal processing. Vol. ASSP-26, NO.6. D. Schohn and D. Cohn. 2000. Less is More: Active Learning with Support Vector Machines. In Proceedings of the 17th International Conference on Machine Learning. D. Shen, J. Zhang, G. D. Zhou, J. Su and C. L. Tan. 2003. Effective Adaptation of a Hidden Markov Model-based Named Entity Recognizer for Biomedical Domain. In Proceedings of the ACL2003 Workshop on NLP in Biomedicine. M. Steedman, R. Hwa, S. Clark, M. Osborne, A. Sarkar, J. Hockenmaier, P. Ruhlen, S. Baker and J. Crim. 2003. Example Selection for Bootstrapping Statistical Parsers. In Proceedings of HLTNAACL, 2003. M. Tang, X. Luo and S. Roukos. 2002. Active Learning for Statistical Natural Language Parsing. In Proceedings of the ACL 2002. C. A. Thompson, M. E. Califf and R. J. Mooney. 1999. Active Learning for Natural Language Parsing and Information Extraction. In Proceedings of ICML 1999. S. Tong and D. Koller. 2000. Support Vector Machine Active Learning with Applications to Text Classification. Journal of Machine Learning Research. V. Vapnik. 1998. Statistical learning theory. N.Y.:John Wiley.
Weakly Supervised Learning for Cross-document Person Name Disambiguation Supported by Information Extraction Cheng Niu, Wei Li, and Rohini K. Srihari Cymfony Inc. 600 Essjay Road, Williamsville, NY 14221, USA. {cniu, wei, rohini}@cymfony.com Abstract It is fairly common that different people are associated with the same name. In tracking person entities in a large document pool, it is important to determine whether multiple mentions of the same name across documents refer to the same entity or not. Previous approach to this problem involves measuring context similarity only based on co-occurring words. This paper presents a new algorithm using information extraction support in addition to co-occurring words. A learning scheme with minimal supervision is developed within the Bayesian framework. Maximum entropy modeling is then used to represent the probability distribution of context similarities based on heterogeneous features. Statistical annealing is applied to derive the final entity coreference chains by globally fitting the pairwise context similarities. Benchmarking shows that our new approach significantly outperforms the existing algorithm by 25 percentage points in overall F-measure. 1 Introduction Cross document name disambiguation is required for various tasks of knowledge discovery from textual documents, such as entity tracking, link discovery, information fusion and event tracking. This task is part of the co-reference task: if two mentions of the same name refer to same (different) entities, by definition, they should (should not) be co-referenced. As far as names are concerned, co-reference consists of two sub-tasks: (i) name disambiguation to handle the problem of different entities happening to use the same name; (ii) alias association to handle the problem of the same entity using multiple names (aliases). Message Understanding Conference (MUC) community has established within-document coreference standards [MUC-7 1998]. Compared with within-document name disambiguation which can leverage highly reliable discourse heuristics such as one sense per discourse [Gale et al 1992], cross-document name disambiguation is a much harder problem. Among major categories of named entities (NEs, which in this paper refer to entity names, excluding the MUC time and numerical NEs), company and product names are often trademarked or uniquely registered, and hence less subject to name ambiguity. This paper focuses on cross-document disambiguation of person names. Previous research for cross-document name disambiguation applies vector space model (VSM) for context similarity, only using co-occurring words [Bagga & Baldwin 1998]. A pre-defined threshold decides whether two context vectors are different enough to represent two different entities. This approach faces two challenges: i) it is difficult to incorporate natural language processing (NLP) results in the VSM framework; 1 ii) the algorithm focuses on the local pairwise context similarity, and neglects the global correlation in the data: this may cause inconsistent results, and hurts the performance. This paper presents a new algorithm that addresses these problems. A learning scheme with minimal supervision is developed within the Bayesian framework. Maximum entropy modeling is then used to represent the probability distribution of context similarities based on heterogeneous features covering both co-occurring words and natural language information extraction (IE) results. 
Statistical annealing is used to derive the final entity co-reference chains by globally fitting the pairwise context similarities. Both the previous algorithm and our new algorithm are implemented, benchmarked and compared. Significant performance enhancement of up to 25 percentage points in overall F-measure is observed with the new approach. The generality of this algorithm ensures that this approach is also applicable to other categories of NEs.

1 Based on our experiment, only using co-occurring words often cannot fulfill the name disambiguation task. For example, the above algorithm identifies the mentions of Bill Clinton as referring to two different persons: one represents his role as U.S. president, and the other is strongly associated with the scandal, although in both mention clusters Bill Clinton has been mentioned as U.S. president. Proper name disambiguation calls for NLP/IE support which may have extracted the key person's identification information from the textual documents.

The remaining part of the paper is structured as follows. Section 2 presents the algorithm design and task definition. The name disambiguation algorithm is described in Sections 3, 4 and 5, corresponding to the three key aspects of the algorithm, i.e. the minimally supervised learning scheme, maximum entropy modeling and annealing-based optimization. Benchmarks are shown in Section 6, followed by the Conclusion in Section 7.

2 Task Definition and Algorithm Design

Given n name mentions, we first introduce the following symbols. C_i refers to the context of the i-th mention. P_i refers to the entity for the i-th mention. Name_i refers to the name string of the i-th mention. CS_{i,j} refers to the context similarity between the i-th mention and the j-th mention, which is a subset of the predefined context similarity features. f_α refers to the α-th predefined context similarity feature. So CS_{i,j} takes the form of {f_α}.

The name disambiguation task is defined as hard clustering of the multiple mentions of the same name. Its final solution is represented as {K, M}, where K refers to the number of distinct entities, and M represents the many-to-one mapping (from mentions to clusters) such that M(i) = j, i ∈ [1, n], j ∈ [1, K].

One way of combining natural language IE results with traditional co-occurring words is to design a new context representation scheme and then define the context similarity measure based on the new scheme. The challenge to this approach lies in the lack of a proper weighting scheme for these high-dimensional heterogeneous features. In our research, the algorithm directly models the pairwise context similarity. For any given context pair, a set of predefined context similarity features is defined. Then, with n mentions of the same name, n(n−1)/2 context similarities CS_{i,j} (i ∈ [1, n], j ∈ [1, i)) are computed. The name disambiguation task is formulated as searching for {K, M} which maximizes the following conditional probability:

Pr({K, M} | {CS_{i,j}, i ∈ [1, n], j ∈ [1, i)})

Based on Bayes' rule, this is equivalent to maximizing the following joint probability:

Pr({K, M}, {CS_{i,j}, i ∈ [1, n], j ∈ [1, i)})
  = Pr({K, M}) Pr({CS_{i,j}} | {K, M})
  ≈ Pr({K, M}) Π_{i=1}^{n} Π_{j=1}^{i−1} Pr(CS_{i,j} | {K, M})        (1)

Eq. (1) contains a prior probability distribution of name disambiguation Pr({K, M}).
Because there is no prior knowledge available about what solution is preferred, it is reasonable to take an equal distribution as the prior probability distribution. So the name disambiguation is equivalent to searching for { } M K, which maximizes Expression (2). { } ( ) ∏ − = = 1 ,1 ,1 , , Pr i j N i j i M K CS (2) where { } ( ) ( ) ( ) ( ) ( )    ≠ = = = otherwise , Pr j M i M if , Pr , Pr , , , j i j i j i j i j i P P CS P P CS M K CS (3) To learn the conditional probabilities ( ) j i j i P P CS = | Pr , and ( ) j i j i P P CS ≠ | Pr , in Eq. (3), we use a machine learning scheme which only requires minimal supervision. Within this scheme, maximum entropy modeling is used to combine heterogeneous context features. With the learned conditional probabilities in Eq. (3), for a given { } M K, candidate, we can compute the conditional probability of Expression (2). In the final step, optimization is performed to search for { } M K, that maximizes the value of Expression (2). To summarize, there are three key elements in this learning scheme: (i) the use of automatically constructed corpora to estimate conditional probabilities of Eq. (3); (ii) maximum entropy modeling for combining heterogeneous context similarity features; and (iii) statistical annealing for optimization. 3 Learning Using Automatically Constructed Corpora This section presents our machine learning scheme to estimate the conditional probabilities ( ) j i j i P P CS = | Pr , and ( ) j i j i P P CS ≠ | Pr , in Eq. (3). Considering j i CS , is in the form of { } αf , we re-formulate the two conditional probabilities as { } ( ) j i P P f = | Pr α and { } ( ) j i P P f ≠ | Pr α . The learning scheme makes use of automatically constructed large corpora. The rationale is illustrated in the figure below. The symbol + represents a positive instance, namely, a mention pair that refers to the same entity. The symbol – represents a negative instance, i.e. a mention pair that refers to different entities. Corpus I Corpus II +++++---++++++ ---------------------- +-----+++--+++++ --+------------------ ++++++++++--++ --------------+------ +++++++---++++ ----------------------- +++----++++++++ --------+------------- As shown in the figure, two training corpora are automatically constructed. Corpus I contains mention pairs of the same names; these are the most frequently mentioned names in the document pool. It is observed that frequently mentioned person names in the news domain are fairly unambiguous, hence enabling the corpus to contain mainly positive instances.2 Corpus II contains mention pairs of different person names, these pairs overwhelmingly correspond to negative instances (with statistically negligible exceptions). Thus, typical patterns of negative instances can be learned from Corpus II. We use these patterns to filter away the negative instances in Corpus I. The purified Corpus I can then be used to learn patterns for positive instances. The algorithm is formulated as follows. Following the observation that different names usually refer to different entities, it is safe to derive Eq. (4). ( ) ( ) 2 1 2 1 } { Pr } { Pr name name f P P f ≠ = ≠ α α (4) For ( ) 2 1 } { Pr P P f = α , we can derive the following relation (Eq. 5): 2 Based on our data analysis, there is no observable difference in linguistic expressions involving frequently mentioned vs. occasionally occurring person names. 
Therefore, the use of frequently mentioned names in the corpus construction process does not affect the effectiveness of the learned model to be applicable to all the person names in general. ( ) ( ) [ ( )] ( ) [ ( ) ( )] 2 1 2 1 2 1 2 1 2 1 2 1 2 1 Pr 1 * } { Pr Pr * } { Pr } { Pr name name P P P P f name name P P P P f name name f = = − ≠ + = = = = = α α α (5) So ( ) 2 1 } { Pr P P f = α can be determined if ( )) ( ) ( } { Pr 2 1 P name P name f = α , ( )) ( ) ( } { Pr 2 1 P name P name f ≠ α , and ( )) ( ) ( Pr 2 1 2 1 P name P name P P = = are all known. By using Corpus I and Corpus II to estimate the above three probabilities, we achieve Eq. (6.1) and Eq. (6.2) ( ) 2 1 } { Pr P P f = α ( ) ( ) ( ) X X f f − − = 1 * } { Pr } { Pr maxEnt II maxEnt I α α . (6.1) ( ) }) ({ Pr } { Pr maxEnt II 2 1 α α f P P f = ≠ (6.2) where ( )} { Pr maxEnt I αf denotes the maximum entropy model of ( )) ( ) ( } { Pr 2 1 P name P name f = α using Corpus I, ( )} { Pr maxEnt II αf denotes the maximum entropy model of ( )) ( ) ( } { Pr 2 1 P name P name f ≠ α using Corpus II, and X stands for the Maximum Likelihood Estimation (MLE) of ( )) ( ) ( Pr 2 1 2 1 P name P name P P = = using Corpus I. Maximum entropy modeling is used here due to its strength of combining heterogeneous features. It is worth noting that ( )} { Pr maxEnt I αf and ( )} { Pr maxEnt II αf can be automatically computed using Corpus I and Corpus II. Only X requires manual truthing. Because X is context independent, the required truthing is very limited (in our experiment, only 100 truthed mention pairs were used). The details of corpus construction and truthing will be presented in the next section. 4 Maximum Entropy Modeling This section presents the definition of context similarity features } { αf , and how to estimate the maximum entropy model of ( )} { Pr maxEnt I αf and ( )} { Pr maxEnt II αf . First, we describe how Corpus I and Corpus II are constructed. Before the person name disambiguation learning starts, a large pool of textual documents are processed by an IE engine InfoXtract [Srihari et al 2003]. The InfoXtract engine contains a named entity tagger, an aliasing module, a parser and an entity relationship extractor. In our experiments, we used ~350,000 AP and WSJ news articles (a total of ~170 million words) from the TIPSTER collection. All the documents and the IE results are stored into an IE Repository. The top 5,000 most frequently mentioned multi-token person names are retrieved from the repository. For each name, all the contexts are retrieved while the context is defined as containing three categories of features: (i) The surface string sequence centering around a key person name (or its aliases as identified by the aliasing module) within a predefined window size equal to 50 tokens to both sides of the key name. (ii) The automatically tagged entity names co occurring with the key name (or its aliases) within the same predefined window as in (i). (iii) The automatically extracted relationships associated with the key name (or its aliases). The relationships being utilized are listed below: Age, Where-from, Affiliation, Position, Leader-of, Owner-of, Has-Boss, Boss-of, Spouse-of, Has-Parent, Parent-of, HasTeacher, Teacher-of, Sibling-of, Friend-of, Colleague-of, Associated-Entity, Title, Address, Birth-Place, Birth-Time, DeathTime, Education, Degree, Descriptor, Modifier, Phone, Email, Fax. 
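A small numeric sketch of the corpus-level correction in Eq. (6.1) and Eq. (6.2): given the two maximum entropy estimates for a similarity vector and the manually estimated purity X of Corpus I (about 0.95 above), the positive-pair distribution is recovered by subtracting the negative-pair contribution. The probability values in the example are made up purely for illustration.

def corrected_positive_prob(p_corpus1, p_corpus2, x=0.95):
    # Eq. (6.1): Pr({f} | P1 = P2) = (Pr_I({f}) - Pr_II({f}) * (1 - x)) / x,
    # where x = Pr(P1 = P2 | same name) is estimated from ~100 hand-checked pairs.
    return (p_corpus1 - p_corpus2 * (1.0 - x)) / x

def negative_prob(p_corpus2):
    # Eq. (6.2): Pr({f} | P1 != P2) is taken directly from the Corpus II model.
    return p_corpus2

# Illustrative values only: a similarity vector scored 0.40 by the Corpus I model
# and 0.05 by the Corpus II model gets a corrected positive-pair probability of ~0.418.
print(corrected_positive_prob(0.40, 0.05))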
A recent manual benchmarking of the InfoXtract relationship extraction in the news domain is 86% precision and 67% recall (75% F-measure). To construct Corpus I, a person name is randomly selected from the list of the top 5,000 frequently mentioned multi-token names. For each selected name, a pair of contexts are extracted, and inserted into Corpus I. This process repeats until 10,000 pairs of contexts are selected. It is observed that, in the news domain, the top frequently occurring multi-token names are highly unambiguous. For example, Bill Clinton exclusively stands for the previous U.S. president although in real life, although many other people may also share this name. Based on manually checking 100 sample pairs in Corpus I, we have ( ) 95 .0 Pr 2 1 ≈ = = P P X I , which means for the 100 sample pairs mentioning the same person name, only 5 pairs are found to refer to different person entities. Note that the value of X − 1 represents the estimation of the noise in Corpus I, which is used in Eq (6.1) to correct the bias caused by the noise in the corpus. To construct Corpus II, two person names are randomly selected from the same name list. Then a context for each of the two names is extracted, and this context pair is inserted into Corpus II. This process repeats until 10,000 pairs of contexts are selected. Based on the above three categories of context features, four context similarity features are defined: (1) VSM-based context similarity using cooccurring words The surface string sequence centering around the key name is represented as a vector, and the word i in context j is weighted as follows. ) ( log * ) , ( ) , ( i df D j i tf j i weight = (7) where ) , ( j i tf is the frequency of word i in the j-th surface string sequence; D is the number of documents in the pool; and ) (i df is the number of documents containing the word i. Then, the cosine of the angle between the two resulting vectors is used as the context similarity measure. (2) Co-occurring NE Similarity The latent semantic analysis (LSA) [Deerwester et al 1990] is used to compute the co-occurring NE similarities. LSA is a technique to uncover the underlining semantics based on co-occurrence data. The first step of LSA is to construct wordvs.-document co-occurrence table. We use 100,000 documents from the TIPSTER corpus, and select the following types of top n most frequently mentioned words as base words: top 20,000 common nouns top 10,000 verbs top 10,000 adjectives top 2,000 adverbs top 10,000 person names top 15,000 organization names top 6,000 location names top 5,000 product names Then, a word-vs.-document co-occurrence table Matrix is built so that ) ( log * ) , ( i df D j i tf Matrixij = . The second step of LSA is to perform singular value decomposition (SVD) on the co-occurrence matrix. SVD yields the following Matrix decomposition: T D S T Matrix 0 0 0 = (8) where T and D are orthogonal matrices (the row vector is called singular vectors), and S is a diagonal matrix with the diagonal elements (called singular values) sorted decreasingly. The key idea of LSA is to reduce noise or insignificant association patterns by filtering the insignificant components uncovered by SVD. This is done by keeping only top k singular values. In our experiment, k is set to 200, following the practice reported in [Deerwester et al. 1990] and [Landauer & Dumais, 1997]. 
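A sketch of the two ingredients introduced so far, the tf-idf weighted term-by-document matrix of Eq. (7) and its rank-k truncation via SVD (here with numpy); how the reduced matrices are then used for the co-occurring NE similarity is completed in the next paragraph. The matrix layout and variable names are assumptions of this sketch.

import numpy as np

def cosine(u, v):
    # VSM-based context similarity: cosine of the angle between two weighted vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def tfidf_matrix(tf, df, n_docs):
    # Matrix[i, j] = tf(i, j) * log(D / df(i)), with tf a (terms x docs) count matrix.
    idf = np.log(n_docs / np.asarray(df, dtype=float))
    return np.asarray(tf, dtype=float) * idf[:, None]

def truncated_svd(matrix, k=200):
    # Keep only the top-k singular components: Matrix ~= T S D^T (Eq. 8, truncated).
    t0, s0, d0t = np.linalg.svd(matrix, full_matrices=False)
    return t0[:, :k], np.diag(s0[:k]), d0t[:k, :].T   # T, S, D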
This procedure yields the following approximation to the co-occurrence matrix: T TSD Matrix ≈ (9) where S is attained from 0 S by deleting non-top k elements, and T ( D ) is obtained from 0 T ( 0 D ) by deleting the corresponding columns. It is believed that the approximate matrix is more proper to induce underlining semantics than the original one. In the framework of LSA, the cooccurring NE similarities are computed as follows: suppose the first context in the pair contains NEs { } i t0 , and the second context in the pair contains NEs { } it1 . Then the similarity is computed as     = i i i i t i t i t i t i T w T w T w T w S 1 0 1 0 1 0 1 0 where i w0 and i w1 are term weights defined in Eq (7). (3) Relationship Similarity We define four different similarity values based on entity relationship sharing: (i) sharing no common relationships, (ii) relationship conflicts only, (iii) relationship with consistence and conflicts, and (iv) relationship with consistence only. The consistency checking between extracted relationships is supported by the InfoXtract number normalization and time normalization as well as entity aliasing procudures. (4) Detailed Relationship Similarity For each relationship type, four different similarity values are defined based on sharing of that specific relationship i: (i) no sharing of relationship i, (ii) conflicts for relationship i, (iii) consistence and conflicts for relationship i, and (iv) consistence for relationship i. To facilitate the maximum entropy modeling in the later stage, the values of the first and second categories of similarity measures are discretized into integers. The number of integers being used may impact the final performance of the system. If the number is too small, significant information may be lost during the discretization process. On the other hand, if the number is too large, the training data may become too sparse. We trained a conditional maximum entropy model to disambiguate context pairs between Corpus I and Corpus II. The performance of this model is used to select the optimal number of integers. There is no significant performance change when the integer number is within the range of [5,30], with 12 as the optimal number. Now the context similarity for a context pair is a vector of similarity features, e.g. {VSM_Similairty_equal_to_2, NE_Similarity_equal_to_1, Relationship_Conflicts_only, No_Sharing_for_Age, Conflict_for_Affiliation}. Besides the four categories of basic context similarity features defined above, we define induced context similarity features by combining basic context similarity features using the logical AND operator. With induced features, the context similarity vector in the previous example is represented as {VSM_Similairty_equal_to_2, NE_Similarity_equal_to_1, Relationship_Conflicts_only, No_Sharing_for_Age, Conflict_for_Affiliation, [VSM_Similairty_equal_to_2 and NE_Similarity_equal_to_1], [VSM_Similairty=2 and Relationship_Conflicts_only], …… [VSM_Similairty_equal_to_2 and NE_Similarity_equal_to_1 and Relationship_Conflicts_only and No_Sharing_for_Age and Conflict_for_Affiliation] }. The induced features provide direct and finegrained information, but suffer from less sampling space. Combining basic features and induced features under a smoothing scheme, maximum entropy modeling may achieve optimal performance. 
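The induced features can be generated mechanically from the active basic features by AND-ing them together, as sketched below; the feature names are the illustrative ones used in the example, and a real implementation would also cap the combination order to keep the feature space manageable.

from itertools import combinations

def induce_features(basic):
    # Return the basic features plus every AND-combination of two or more of them.
    basic = list(basic)
    induced = list(basic)
    for r in range(2, len(basic) + 1):
        for combo in combinations(basic, r):
            induced.append(" AND ".join(combo))
    return induced

print(induce_features(["VSM_Similarity_equal_to_2",
                       "NE_Similarity_equal_to_1",
                       "Relationship_Conflicts_only"]))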
Now the maximum entropy modeling can be formulated as follows: given a pairwise context similarity vector } { αf the probability of } { αf is given as ( ) { } ∏ ∈ = α α f f f w Z f 1 } { Pr maxEnt (10) where Z is the normalization factor, f w is the weight associated with feature f . The Iterative Scaling algorithm combined with Monte Carlo simulation [Pietra, Pietra & Lafferty 1995] is used to train the weights in this generative model. Unlike the commonly used conditional maximum entropy modeling which approximates the feature configuration space as the training corpus [Ratnaparkhi 1998], Monte Carlo techniques are required in the generative modeling to simulate the possible feature configurations. The exponential prior smoothing scheme [Goodman 2003] is adopted. The same training procedure is performed using Corpus I and Corpus II to estimate ( )} { Pr maxEnt I if and ( )} { Pr maxEnt II if respectively. 5 Annealing-based Optimization With the maximum entropy modeling presented in the last section, for a given name disambiguation candidate solution{ } M K, , we can compute the conditional probability of Expression (2). Statistical annealing [Neal 1993]-based optimization is used to search for { } M K, which maximizes Expression (2). The optimization process consists of two steps. First, a local optimal solution{ }0 ,M K is computed by a greedy algorithm. Then by setting { }0 ,M K as the initial state, statistical annealing is applied to search for the global optimal solution. Given n same name mentions, assuming the input of 2 )1 ( − n n probabilities ( ) j i j i P P CS = , Pr and 2 )1 ( − n n probabilities ( ) j i j i P P CS ≠ , Pr , the greedy algorithm performs as follows: 1. Set the initial state { } M K, as n K = , and [ ] n 1, i , ) ( ∈ = i i M ; 2. Sort ( ) j i j i P P CS = , Pr in decreasing order; 3. Scan the sorted probabilities one by one. If the current probability is ( ) j i j i P P CS = , Pr , ) ( ) ( j M i M ≠ , and there exist no such l and m that ( ) ( ) ( ) ( )j M m M i M l M = = , and ( ) ( ) m l m l j i j i P P CS P P CS ≠ < = , , Pr Pr then update { } M K, by merging cluster ) (i M and ) ( j M . 4. Output { } M K, as a local optimal solution. Using the output { }0 ,M K of the greedy algorithm as the initial state, the statistical annealing is described using the following pseudocode: Set { } { }0 , , M K M K = ; for( 1.01 β* ; β β ; β β final 0 = < = ) { iterate pre-defined number of times { set { } { } M K M K , , 1 = ; update { }1 ,M K by randomly changing the number of clusters K and the content of each cluster. set { } ( ) { } ( ) ∏ ∏ − = = − = = = 1 ,1 ,1 , 1 ,1 ,1 1 , , Pr , Pr i j N i j i i j N i j i M K CS M K CS x if(x>=1) { set { } { }1 , , M K M K = } else { set { } { }1 , , M K M K = with probability β x . } if { } ( ) { } ( ) 1 , Pr , Pr 1 ,1 ,1 0 , 1 ,1 ,1 , > ∏ ∏ − = = − = = i j N i j i i j N i j i M K CS M K CS set { } { } M K M K , , 0 = } } output { }0 ,M K as the optimal state. 6 Benchmarking To evaluate the effectiveness of our new algorithm, we implemented the previous algorithm described in [Bagga & Baldwin 1998] as our baseline. The threshold is selected as 0.19 by optimizing the pairwise disambiguation accuracy using the 80 truthed mention pairs of “John Smith”. To clearly benchmark the performance enhancement from IE support, we also implemented a system using the same weakly supervised learning scheme but only VSM-based similarity as the pairwise context similarity measure. We benchmarked the three systems for comparison. 
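As a concrete reference for the greedy initialisation step of Section 5, a simplified sketch is given below. It assumes that p_same and p_diff are dictionaries mapping each mention pair (i, j), i < j, to the learned probabilities Pr(CS_ij | P_i = P_j) and Pr(CS_ij | P_i ≠ P_j); the annealing refinement that follows the greedy pass is not shown.

def _key(a, b):
    return (min(a, b), max(a, b))

def greedy_clusters(n, p_same, p_diff):
    # Start with one cluster per mention, scan pairs by decreasing Pr(CS_ij | same entity),
    # and merge the two clusters unless some cross-pair (l, m) satisfies
    # Pr(CS_lm | same) < Pr(CS_lm | different).
    member = list(range(n))                      # mention -> cluster id
    clusters = {i: {i} for i in range(n)}        # cluster id -> member mentions
    for (i, j) in sorted(p_same, key=p_same.get, reverse=True):
        ci, cj = member[i], member[j]
        if ci == cj:
            continue
        if all(p_same[_key(l, m)] >= p_diff[_key(l, m)]
               for l in clusters[ci] for m in clusters[cj]):
            for m in clusters[cj]:
                member[m] = ci
            clusters[ci] |= clusters.pop(cj)
    return clusters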
The following three scoring measures are implemented. (1) Precision (P):  = i N P i of cluster output in the mentions of # i of cluster output in the mentions correct of # 1 (2) Recall (R):  = i N P i of cluster key in the mentions of # i of cluster output in the mentions correct of # 1 (3) F-measure (F): R P R P F + = * 2 The name co-reference precision and recall used here is adopted from the B_CUBED scoring scheme used in [Bagga & Baldwin 1998], which is believed to be an appropriate benchmarking standard for this task. Traditional benchmarking requires manually dividing person name mentions into clusters, which is labor intensive and difficult to scale up. In our experiments, an automatic corpus construction scheme is used in order to perform large-scale testing for reliable benchmarks. The intuition is that in the general news domain, some multi-token names associated with mass media celebrities is highly unambiguous. For example, “Bill Gates”, “Bill Clinton”, etc. mentioned in the news almost always refer to unique entities. Therefore, we can retrieve contexts of these unambiguous names, and mix them together. The name disambiguation algorithm should recognize mentions of the same name. The capability of recognizing mentions of an unambiguous name is equivalent to the capability of disambiguating ambiguous names. For the purpose of benchmarking, we automatically construct eight testing datasets (Testing Corpus I), listed in Table 1. Table 1. Constructed Testing Corpus I # of Mentions Name Set 1a Set 1b Mikhail S. Gorbachev 20 50 Dick Cheney 20 10 Dalai Lama 20 10 Bill Clinton 20 10 Set 2a Set 2b Bob Dole 20 50 Hun Sen 20 10 Javier Perez de Cuellar 20 10 Kim Young Sam 20 10 Set 3a Set 3b Jiang Qing 20 10 Ingrid Bergman 20 10 Margaret Thatcher 20 50 Aung San Suu Kyi 20 10 Set 4a Set 4b Bill Gates 20 10 Jiang Zemin 20 10 Boris Yeltsin 20 50 Kim Il Sung 20 10 Table 2. Testing Corpus I Benchmarking P R F P R F Set 1a Set 1b Baseline 0.79 0.37 0.58 0.78 0.34 0.56 VSMOnly 0.86 0.33 0.60 0.78 0.23 0.51 Full 0.98 0.75 0.86 0.90 0.79 0.85 Set 2a Set 2b Baseline 0.82 0.58 0.70 0.94 0.50 0.72 VSMOnly 0.90 0.54 0.72 0.98 0.45 0.71 Full 0.93 0.84 0.88 1.00 0.93 0.96 Set 3a Set 3b Baseline 0.84 0.69 0.77 0.80 0.34 0.57 VSMOnly 0.95 0.72 0.83 0.93 0.29 0.61 Full 0.95 0.86 0.90 0.98 0.57 0.77 Set 4a Set 4b Baseline 0.88 0.74 0.81 0.80 0.49 0.64 VSMOnly 0.93 0.77 0.85 0.88 0.42 0.65 Full 0.95 0.93 0.94 0.98 0.84 0.91 Overall P R F Baseline 0.83 0.51 0.63 VSMOnly 0.90 0.47 0.69 Full 0.96 0.82 0.88 Table 2 shows the benchmarks for each dataset, using the three measures just defined. The new algorithm when only using VSM-based similarity (VSMOnly) outperforms the existing algorithm (Baseline) by 5%. The new algorithm using the full context similarity measures including IE features (Full) significantly outperforms the existing algorithm (Baseline) in every test: the overall Fmeasure jumps from 64% to 88%, with 25 percentage point enhancement. This performance breakthrough is mainly due to the additional support from IE, in addition to the optimization method used in our algorithm. We have also manually truthed an additional testing corpus of two datasets containing mentions associated with the same name (Testing Corpus II). Truthed Dataset 5a contains 25 mentions of Peter Sutherland and Truthed Dataset 5b contains 68 mentions of John Smith. John Smith is a highly ambiguous name. With its 68 mentions, they represent totally 29 different entities. 
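For reference, the scoring just defined can be sketched as follows, under one common reading of the B_CUBED scheme in which both the system output and the key assign each mention a cluster label and the per-mention scores are averaged with equal weights.

def b_cubed(system, key):
    # system[i] / key[i] give the output / key cluster label of mention i.
    n = len(system)
    sys_cl = {m: {i for i in range(n) if system[i] == system[m]} for m in range(n)}
    key_cl = {m: {i for i in range(n) if key[i] == key[m]} for m in range(n)}
    p = sum(len(sys_cl[m] & key_cl[m]) / len(sys_cl[m]) for m in range(n)) / n
    r = sum(len(sys_cl[m] & key_cl[m]) / len(key_cl[m]) for m in range(n)) / n
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# e.g. a system that splits three co-referring mentions into {0, 1} and {2}:
print(b_cubed([0, 0, 1], [0, 0, 0]))   # precision 1.0, recall ~0.56, F ~0.71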
On the other hand, all the mentions of Peter Sutherland are found to refer to the same person. The benchmark using this corpus is shown below. Table 3. Testing Corpus II Benchmarking P R F P R F Set 5a Set 5b Baseline 0.96 0.92 0.94 0.62 0.57 0.60 VSMOnly 0.96 0.92 0.94 0.75 0.51 0.63 Full 1.00 0.92 0.96 0.90 0.81 0.85 Based on these benchmarks, using either manually truthed corpora or automatically constructed corpora, using either ambiguous corpora or unambiguous corpora, our algorithm consistently and significantly outperforms the existing algorithm. In particular, our system achieves a very high precision (0.96 precision). This shows the effective use of IE results which provide much more fine-grained evidence than cooccurring words. It is interesting to note that the recall enhancement is greater than the precision enhancement (0.31 recall enhancement vs. 0.13 precision enhancement). This demonstrates the complementary nature between evidence from the co-occurring words and the evidence carried by IE results. The system recall can be further improved once the recall of the currently precision-oriented IE engine is enhanced over time. 7 Conclusion We have presented a new person name disambiguation algorithm which demonstrates a successful use of natural language IE support in performance enhancement. Our algorithm is benchmarked to outperform the previous algorithm by 25 percentage points in overall F-measure, where the effective use of IE contributes to 20 percentage points. The core of this algorithm is a learning system trained on automatically constructed large corpora, only requiring minimal supervision in estimating a context-independent probability. 8 Acknowledgements This work was partly supported by a grant from the Air Force Research Laboratory’s Information Directorate (AFRL/IF), Rome, NY, under contract F30602-03-C-0170. The authors wish to thank Carrie Pine of AFRL for supporting and reviewing this work. References Bagga, A., and B. Baldwin. 1998. Entity-Based Cross-Document Coreferencing Using the Vector Space Model. In Proceedings of COLING-ACL'98. Deerwester, S., S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman. 1990. Indexing by Latent Semantic Analysis. In Journal of the American Society of Information Science Gale, W., K. Church, and D. Yarowsky. 1992. One Sense Per Discourse. In Proceedings of the 4th DARPA Speech and Natural Language Workshop. Goodman, J. 2003. Exponential Priors for Maximum Entropy Models. Landauer, T. K., & Dumais, S. T. 1997. A solution to Plato's problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104, 211-240, 1997. MUC-7. 1998. Proceedings of the Seventh Message Understanding Conference. Neal, R. M. 1993. Probabilistic Inference Using Markov Chain Monte Carlo Methods. Technical Report, Univ. of Toronto. Pietra, S. D., V. D. Pietra, and J. Lafferty. 1995. Inducing Features Of Random Fields. In IEEE Transactions on Pattern Analysis and Machine Intelligence. Srihari, R. K., W. Li, C. Niu and T. Cornell. InfoXtract: An Information Discovery Engine Supported by New Levels of Information Extraction. In Proceeding of HLT-NAACL 2003 Workshop on Software Engineering and Architecture of Language Technology Systems, Edmonton, Canada.
Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statistics Chin-Yew Lin and Franz Josef Och Information Sciences Institute University of Southern California 4676 Admiralty Way Marina del Rey, CA 90292, USA {cyl,och}@isi.edu Abstract In this paper we describe two new objective automatic evaluation methods for machine translation. The first method is based on longest common subsequence between a candidate translation and a set of reference translations. Longest common subsequence takes into account sentence level structure similarity naturally and identifies longest co-occurring insequence n-grams automatically. The second method relaxes strict n-gram matching to skipbigram matching. Skip-bigram is any pair of words in their sentence order. Skip-bigram cooccurrence statistics measure the overlap of skip-bigrams between a candidate translation and a set of reference translations. The empirical results show that both methods correlate with human judgments very well in both adequacy and fluency. 1 Introduction Using objective functions to automatically evaluate machine translation quality is not new. Su et al. (1992) proposed a method based on measuring edit distance (Levenshtein 1966) between candidate and reference translations. Akiba et al. (2001) extended the idea to accommodate multiple references. Nießen et al. (2000) calculated the lengthnormalized edit distance, called word error rate (WER), between a candidate and multiple reference translations. Leusch et al. (2003) proposed a related measure called position-independent word error rate (PER) that did not consider word position, i.e. using bag-of-words instead. Instead of error measures, we can also use accuracy measures that compute similarity between candidate and reference translations in proportion to the number of common words between them as suggested by Melamed (1995). An n-gram co-occurrence measure, BLEU, proposed by Papineni et al. (2001) that calculates co-occurrence statistics based on n-gram overlaps have shown great potential. A variant of BLEU developed by NIST (2002) has been used in two recent large-scale machine translation evaluations. Recently, Turian et al. (2003) indicated that standard accuracy measures such as recall, precision, and the F-measure can also be used in evaluation of machine translation. However, results based on their method, General Text Matcher (GTM), showed that unigram F-measure correlated best with human judgments while assigning more weight to higher n-gram (n > 1) matches achieved similar performance as Bleu. Since unigram matches do not distinguish words in consecutive positions from words in the wrong order, measures based on position-independent unigram matches are not sensitive to word order and sentence level structure. Therefore, systems optimized for these unigram-based measures might generate adequate but not fluent target language. Since BLEU has been used to report the performance of many machine translation systems and it has been shown to correlate well with human judgments, we will explain BLEU in more detail and point out its limitations in the next section. We then introduce a new evaluation method called ROUGE-L that measures sentence-to-sentence similarity based on the longest common subsequence statistics between a candidate translation and a set of reference translations in Section 3. Section 4 describes another automatic evaluation method called ROUGE-S that computes skipbigram co-occurrence statistics. 
Section 5 presents the evaluation results of ROUGE-L and ROUGE-S and compares them with BLEU, GTM, NIST, PER, and WER in correlation with human judgments in terms of adequacy and fluency. We conclude this paper and discuss extensions of the current work in Section 6.

2 BLEU and N-gram Co-Occurrence

To automatically evaluate machine translations the machine translation community recently adopted an n-gram co-occurrence scoring procedure BLEU (Papineni et al. 2001). In two recent large-scale machine translation evaluations sponsored by NIST, a closely related automatic evaluation method, simply called NIST score, was used. The NIST (NIST 2002) scoring method is based on BLEU.

The main idea of BLEU is to measure the similarity between a candidate translation and a set of reference translations with a numerical metric. They used a weighted average of variable length n-gram matches between system translations and a set of human reference translations and showed that this weighted average metric correlated highly with human assessments.

BLEU measures how well a machine translation overlaps with multiple human translations using n-gram co-occurrence statistics. N-gram precision in BLEU is computed as follows:

p_n = Σ_{C ∈ {Candidates}} Σ_{n-gram ∈ C} Count_clip(n-gram) / Σ_{C ∈ {Candidates}} Σ_{n-gram ∈ C} Count(n-gram)   (1)

where Count_clip(n-gram) is the maximum number of n-grams co-occurring in a candidate translation and a reference translation, and Count(n-gram) is the number of n-grams in the candidate translation. To prevent very short translations that try to maximize their precision scores, BLEU adds a brevity penalty, BP, to the formula:

BP = 1 if |c| > |r|;  BP = e^(1 - |r|/|c|) if |c| ≤ |r|   (2)

where |c| is the length of the candidate translation and |r| is the length of the reference translation. The BLEU formula is then written as follows:

BLEU = BP · exp( Σ_{n=1}^{N} w_n log p_n )   (3)

The weighting factor, w_n, is set at 1/N.

Although BLEU has been shown to correlate well with human assessments, it has a few things that can be improved. First, the subjective application of the brevity penalty can be replaced with a recall related parameter that is sensitive to reference length. Although the brevity penalty will penalize candidate translations with low recall by a factor of e^(1 - |r|/|c|), it would be nice if we could use the traditional recall measure that has been a well known measure in NLP, as suggested by Melamed (2003). Of course we have to make sure the resulting composite function of precision and recall still correlates highly with human judgments.

Second, although BLEU uses high order n-gram (n>1) matches to favor candidate sentences with consecutive word matches and to estimate their fluency, it does not consider sentence level structure. For example, given the following sentences:

S1. police killed the gunman
S2. police kill the gunman1
S3. the gunman kill police

1 This is a real machine translation output.

We only consider BLEU with unigram and bigram, i.e. N=2, for the purpose of explanation and call this BLEU-2. Using S1 as the reference and S2 and S3 as the candidate translations, S2 and S3 would have the same BLEU-2 score, since they both have one bigram and three unigram matches2. However, S2 and S3 have very different meanings.

2 The "kill" in S2 or S3 does not match with "killed" in S1 in strict word-to-word comparison.

Third, BLEU is a geometric mean of unigram to N-gram precisions. Any candidate translation without an N-gram match has a per-sentence BLEU score of zero.
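To make Equations 1-3 concrete, the following is a minimal sketch of corpus-level BLEU with a single reference per candidate; it is an illustration of the formulas above, not the official BLEU implementation (which also handles multiple references and other details). Note that it returns zero as soon as some n-gram order has no match, which is exactly the per-sentence behavior just described.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of a token list, as a multi-set (Counter)."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidates, references, max_n=4):
    """Corpus-level BLEU with uniform weights and one reference per candidate."""
    log_sum = 0.0
    for n in range(1, max_n + 1):
        clipped = total = 0
        for cand, ref in zip(candidates, references):
            cand_counts = ngrams(cand, n)
            ref_counts = ngrams(ref, n)
            # Count_clip: candidate n-gram counts clipped by the reference counts.
            clipped += sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
            total += sum(cand_counts.values())
        if clipped == 0:
            return 0.0          # no n-gram match at this order => BLEU is zero
        log_sum += (1.0 / max_n) * math.log(clipped / total)
    c = sum(len(x) for x in candidates)
    r = sum(len(x) for x in references)
    bp = 1.0 if c > r else math.exp(1.0 - r / c)   # brevity penalty (Equation 2)
    return bp * math.exp(log_sum)

# BLEU-2 on the example: S2 and S3 score the same (0.5) against S1.
s1 = "police killed the gunman".split()
s2 = "police kill the gunman".split()
s3 = "the gunman kill police".split()
print(bleu([s2], [s1], max_n=2), bleu([s3], [s1], max_n=2))
```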
Although BLEU is usually calculated over the whole test corpus, it is still desirable to have a measure that works reliably at sentence level for diagnostic and introspection purposes. To address these issues, we propose three new automatic evaluation measures based on longest common subsequence statistics and skip-bigram co-occurrence statistics in the following sections.

3 Longest Common Subsequence

3.1 ROUGE-L

A sequence Z = [z1, z2, ..., zn] is a subsequence of another sequence X = [x1, x2, ..., xm], if there exists a strictly increasing sequence [i1, i2, ..., ik] of indices of X such that for all j = 1, 2, ..., k, we have x_{ij} = z_j (Cormen et al. 1989). Given two sequences X and Y, the longest common subsequence (LCS) of X and Y is a common subsequence with maximum length. We can find the LCS of two sequences of length m and n using a standard dynamic programming technique in O(mn) time.

LCS has been used to identify cognate candidates during the construction of N-best translation lexicons from parallel text. Melamed (1995) used the ratio (LCSR) between the length of the LCS of two words and the length of the longer word of the two words to measure the cognateness between them. He used it as an approximate string matching algorithm. Saggion et al. (2002) used normalized pairwise LCS (NP-LCS) to compare similarity between two texts in automatic summarization evaluation. NP-LCS can be shown as a special case of Equation (6) with β = 1. However, they did not provide the correlation analysis of NP-LCS with human judgments and its effectiveness as an automatic evaluation measure.

To apply LCS in machine translation evaluation, we view a translation as a sequence of words. The intuition is that the longer the LCS of two translations is, the more similar the two translations are. We propose using an LCS-based F-measure to estimate the similarity between two translations X of length m and Y of length n, assuming X is a reference translation and Y is a candidate translation, as follows:

R_lcs = LCS(X,Y) / m   (4)
P_lcs = LCS(X,Y) / n   (5)
F_lcs = (1 + β^2) R_lcs P_lcs / (R_lcs + β^2 P_lcs)   (6)

where LCS(X,Y) is the length of a longest common subsequence of X and Y, and β = P_lcs/R_lcs when ∂F_lcs/∂R_lcs = ∂F_lcs/∂P_lcs. We call the LCS-based F-measure, i.e. Equation 6, ROUGE-L. Notice that ROUGE-L is 1 when X = Y, since LCS(X,Y) = m or n; while ROUGE-L is zero when LCS(X,Y) = 0, i.e. there is nothing in common between X and Y. F-measure or its equivalents has been shown to have met several theoretical criteria in measuring accuracy involving more than one factor (Van Rijsbergen 1979). The composite factors are LCS-based recall and precision in this case. Melamed et al. (2003) used unigram F-measure to estimate machine translation quality and showed that unigram F-measure was as good as BLEU.

One advantage of using LCS is that it does not require consecutive matches but in-sequence matches that reflect sentence level word order as n-grams. The other advantage is that it automatically includes longest in-sequence common n-grams, therefore no predefined n-gram length is necessary. ROUGE-L as defined in Equation 6 has the property that its value is less than or equal to the minimum of the unigram F-measure of X and Y.
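A minimal sketch of ROUGE-L as defined in Equations 4-6 follows, using the standard O(mn) LCS dynamic program. It is an illustration rather than the released ROUGE package, and β is taken as an explicit parameter (β = 1, as in the examples discussed next).

```python
def lcs_length(x, y):
    """Length of a longest common subsequence, standard O(mn) dynamic programming."""
    m, n = len(x), len(y)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]

def rouge_l(reference, candidate, beta=1.0):
    """LCS-based F-measure of Equations 4-6 (ROUGE-L)."""
    lcs = lcs_length(reference, candidate)
    if lcs == 0:
        return 0.0
    r_lcs = lcs / len(reference)        # Equation 4
    p_lcs = lcs / len(candidate)        # Equation 5
    return (1 + beta ** 2) * r_lcs * p_lcs / (r_lcs + beta ** 2 * p_lcs)  # Equation 6

s1 = "police killed the gunman".split()
print(rouge_l(s1, "police kill the gunman".split()))   # 0.75
print(rouge_l(s1, "the gunman kill police".split()))   # 0.5
```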
Unigram recall reflects the proportion of words in X (the reference translation) that are also present in Y (the candidate translation); while unigram precision is the proportion of words in Y that are also in X. Unigram recall and precision count all co-occurring words regardless of their order; while ROUGE-L counts only in-sequence co-occurrences. By only awarding credit to in-sequence unigram matches, ROUGE-L also captures sentence level structure in a natural way. Consider again the example given in Section 2, copied here for convenience:

S1. police killed the gunman
S2. police kill the gunman
S3. the gunman kill police

As we have shown earlier, BLEU-2 cannot differentiate S2 from S3. However, S2 has a ROUGE-L score of 3/4 = 0.75 and S3 has a ROUGE-L score of 2/4 = 0.5, with β = 1. Therefore S2 is better than S3 according to ROUGE-L. This example also illustrates that ROUGE-L can work reliably at sentence level. However, LCS only counts the main in-sequence words; therefore, other longest common subsequences and shorter sequences are not reflected in the final score. For example, consider the following candidate sentence:

S4. the gunman police killed

Using S1 as its reference, LCS counts either "the gunman" or "police killed", but not both; therefore, S4 has the same ROUGE-L score as S3. BLEU-2 would prefer S4 over S3. In Section 4, we will introduce skip-bigram co-occurrence statistics that do not have this problem while still keeping the advantage of in-sequence (not necessarily consecutive) matching that reflects sentence level word order.

3.2 Multiple References

So far, we have only demonstrated how to compute ROUGE-L using a single reference. When multiple references are used, we take the maximum LCS matches between a candidate translation, c, of n words and a set of u reference translations of m_j words. The LCS-based F-measure can be computed as follows:

R_lcs-multi = max_{j=1..u} ( LCS(r_j, c) / m_j )   (7)
P_lcs-multi = max_{j=1..u} ( LCS(r_j, c) / n )   (8)
F_lcs-multi = (1 + β^2) R_lcs-multi P_lcs-multi / (R_lcs-multi + β^2 P_lcs-multi)   (9)

where β = P_lcs-multi/R_lcs-multi when ∂F_lcs-multi/∂R_lcs-multi = ∂F_lcs-multi/∂P_lcs-multi. This procedure is also applied to the computation of ROUGE-S when multiple references are used. In the next section, we describe how to extend ROUGE-L to assign more credit to longest common subsequences with consecutive words.

3.3 ROUGE-W: Weighted Longest Common Subsequence

LCS has many nice properties, as we have described in the previous sections. Unfortunately, the basic LCS also has the problem that it does not differentiate LCSes of different spatial relations within their embedding sequences. For example, given a reference sequence X and two candidate sequences Y1 and Y2 as follows:

X: [A B C D E F G]
Y1: [A B C D H I K]
Y2: [A H B K C I D]

Y1 and Y2 have the same ROUGE-L score. However, in this case, Y1 should be the better choice than Y2 because Y1 has consecutive matches. To improve the basic LCS method, we can simply remember the length of consecutive matches encountered so far in a regular two dimensional dynamic programming table computing LCS. We call this weighted LCS (WLCS) and use k to indicate the length of the current consecutive matches ending at words x_i and y_j.
Given two sentences X and Y, the WLCS score of X and Y can be computed using the following dynamic programming procedure:

(1) For (i = 0; i <= m; i++)
      c(i,j) = 0 // initialize c-table
      w(i,j) = 0 // initialize w-table
(2) For (i = 1; i <= m; i++)
      For (j = 1; j <= n; j++)
        If x_i = y_j Then
          // the length of consecutive matches at position i-1 and j-1
          k = w(i-1,j-1)
          c(i,j) = c(i-1,j-1) + f(k+1) - f(k)
          // remember the length of consecutive matches at position i, j
          w(i,j) = k+1
        Otherwise
          If c(i-1,j) > c(i,j-1) Then
            c(i,j) = c(i-1,j)
            w(i,j) = 0 // no match at i, j
          Else
            c(i,j) = c(i,j-1)
            w(i,j) = 0 // no match at i, j
(3) WLCS(X,Y) = c(m,n)

where c is the dynamic programming table, c(i,j) stores the WLCS score ending at word x_i of X and y_j of Y, w is the table storing the length of consecutive matches ending at c-table position i and j, and f is a function of consecutive matches at the table position, c(i,j). Notice that by providing a different weighting function f, we can parameterize the WLCS algorithm to assign different credit to consecutive in-sequence matches.

The weighting function f must have the property that f(x+y) > f(x) + f(y) for any positive integers x and y. In other words, consecutive matches are awarded more scores than non-consecutive matches. For example, f(k) = αk − β when k >= 0, and α, β > 0. This function charges a gap penalty of −β for each non-consecutive n-gram sequence. Another possible function family is the polynomial family of the form k^α where α > 1. However, in order to normalize the final ROUGE-W score, we also prefer to have a function that has a closed form inverse function. For example, f(k) = k^2 has a closed form inverse function f^-1(k) = k^(1/2). F-measure based on WLCS can be computed as follows, given two sequences X of length m and Y of length n:

R_wlcs = f^-1( WLCS(X,Y) / f(m) )   (10)
P_wlcs = f^-1( WLCS(X,Y) / f(n) )   (11)
F_wlcs = (1 + β^2) R_wlcs P_wlcs / (R_wlcs + β^2 P_wlcs)   (12)

where f^-1 is the inverse function of f. We call the WLCS-based F-measure, i.e. Equation 12, ROUGE-W. Using Equation 12 and f(k) = k^2 as the weighting function, the ROUGE-W scores for sequences Y1 and Y2 are 0.571 and 0.286 respectively. Therefore, Y1 would be ranked higher than Y2 using WLCS. We use the polynomial function of the form k^α in the ROUGE evaluation package. In the next section, we introduce the skip-bigram co-occurrence statistics.

4 ROUGE-S: Skip-Bigram Co-Occurrence Statistics

Skip-bigram is any pair of words in their sentence order, allowing for arbitrary gaps. Skip-bigram co-occurrence statistics measure the overlap of skip-bigrams between a candidate translation and a set of reference translations. Using the example given in Section 3.1:

S1. police killed the gunman
S2. police kill the gunman
S3. the gunman kill police
S4. the gunman police killed

Each sentence has C(4,2)3 = 6 skip-bigrams. For example, S1 has the following skip-bigrams:

("police killed", "police the", "police gunman", "killed the", "killed gunman", "the gunman")

3 Combination: C(4,2) = 4!/(2!*2!) = 6.

S2 has three skip-bigram matches with S1 ("police the", "police gunman", "the gunman"), S3 has one skip-bigram match with S1 ("the gunman"), and S4 has two skip-bigram matches with S1 ("police killed", "the gunman").
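The skip-bigram counts in this example can be checked with a short sketch. The SKIP2 matching below uses clipped counts, and the optional maximum skip distance d_skip anticipates the constraint discussed in the next paragraphs; this is an illustrative implementation, not the ROUGE package itself.

```python
from itertools import combinations

def skip_bigrams(tokens, d_skip=None):
    """Multi-set of skip-bigrams: ordered word pairs, with an optional limit
    on how many words may separate the two members (d_skip)."""
    pairs = {}
    for i, j in combinations(range(len(tokens)), 2):
        if d_skip is not None and j - i - 1 > d_skip:
            continue
        pair = (tokens[i], tokens[j])
        pairs[pair] = pairs.get(pair, 0) + 1
    return pairs

def skip2(x, y, d_skip=None):
    """Number of skip-bigram matches between two token lists (clipped counts)."""
    sx, sy = skip_bigrams(x, d_skip), skip_bigrams(y, d_skip)
    return sum(min(c, sy.get(pair, 0)) for pair, c in sx.items())

s1 = "police killed the gunman".split()
for s in ["police kill the gunman", "the gunman kill police", "the gunman police killed"]:
    print(s, "->", skip2(s1, s.split()))   # 3, 1, 2 matches, as in the example above
```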
Given translations X of length m and Y of length n, assuming X is a reference translation and Y is a candidate translation, we compute the skip-bigram-based F-measure as follows:

R_skip2 = SKIP2(X,Y) / C(m,2)   (13)
P_skip2 = SKIP2(X,Y) / C(n,2)   (14)
F_skip2 = (1 + β^2) R_skip2 P_skip2 / (R_skip2 + β^2 P_skip2)   (15)

where SKIP2(X,Y) is the number of skip-bigram matches between X and Y, β = P_skip2/R_skip2 when ∂F_skip2/∂R_skip2 = ∂F_skip2/∂P_skip2, and C is the combination function. We call the skip-bigram-based F-measure, i.e. Equation 15, ROUGE-S.

Using Equation 15 with β = 1 and S1 as the reference, S2's ROUGE-S score is 0.5, S3 is 0.167, and S4 is 0.333. Therefore, S2 is better than S3 and S4, and S4 is better than S3. This result is more intuitive than using BLEU-2 and ROUGE-L. One advantage of skip-bigram vs. BLEU is that it does not require consecutive matches but is still sensitive to word order. Comparing skip-bigram with LCS, skip-bigram counts all in-order matching word pairs while LCS only counts one longest common subsequence.

We can limit the maximum skip distance, d_skip, between two in-order words that are allowed to form a skip-bigram. Applying such a constraint, we limit skip-bigram formation to a fixed window size. Therefore, computation time can be reduced and hopefully performance can be as good as the version without such a constraint. For example, if we set d_skip to 0 then ROUGE-S is equivalent to bigram overlap. If we set d_skip to 4 then only word pairs at most 4 words apart can form skip-bigrams. Adjusting Equations 13, 14, and 15 to use a maximum skip distance limit is straightforward: we only count the skip-bigram matches, SKIP2(X,Y), within the maximum skip distance and replace the denominators of Equations 13, C(m,2), and 14, C(n,2), with the actual numbers of within-distance skip-bigrams from the reference and the candidate respectively.

In the next section, we present the evaluations of ROUGE-L and ROUGE-S, and compare their performance with other automatic evaluation measures.

5 Evaluations

One of the goals of developing automatic evaluation measures is to replace labor-intensive human evaluations. Therefore the first criterion to assess the usefulness of an automatic evaluation measure is to show that it correlates highly with human judgments in different evaluation settings. However, high quality large-scale human judgments are hard to come by. Fortunately, we have access to eight MT systems' outputs, their human assessment data, and the reference translations from the 2003 NIST Chinese MT evaluation (NIST 2002a). There were 919 sentence segments in the corpus. We first computed averages of the adequacy and fluency scores of each system assigned by human evaluators. For the input of the automatic evaluation methods, we created three evaluation sets from the MT outputs:

1. Case set: The original system outputs with case information.
2. NoCase set: All words were converted into lower case, i.e. no case information was used. This set was used to examine whether human assessments were affected by case information since not all MT systems generate properly cased output.
3. Stem set: All words were converted into lower case and stemmed using the Porter stemmer (Porter 1980). Since ROUGE computed similarity on the surface word level, the stemmed version allowed ROUGE to perform more lenient matches.

To accommodate multiple references, we use a Jackknifing procedure. Given N references, we compute the best score over N sets of N-1 references.
The final score is the average of the N best scores using N different sets of N-1 references. The Jackknifing procedure is adopted since we often need to compare system and human performance and the reference translations are usually the only human translations available. Using this procedure, we are able to estimate average human performance by averaging the N best scores of one reference vs. the rest N-1 references.

We then computed average BLEU1-12 4, GTM with exponents of 1.0, 2.0, and 3.0, NIST, WER, and PER scores over these three sets. Finally we applied ROUGE-L, ROUGE-W with weighting function k^1.2, and ROUGE-S without a skip distance limit and with skip distance limits of 0, 4, and 9. Correlation analysis based on two different correlation statistics, Pearson's ρ and Spearman's ρ, with respect to adequacy and fluency is shown in Table 1.

4 BLEUN computes BLEU over n-grams up to length N. Only BLEU1, BLEU4, and BLEU12 are shown in Table 1.

The Pearson's correlation coefficient5 measures the strength and direction of a linear relationship between any two variables, i.e. the automatic metric score and the human assigned mean coverage score in our case. It ranges from +1 to -1. A correlation of 1 means that there is a perfect positive linear relationship between the two variables, a correlation of -1 means that there is a perfect negative linear relationship between them, and a correlation of 0 means that there is no linear relationship between them. Since we would like to use the automatic evaluation metric not only in comparing systems but also in in-house system development, a good linear correlation with human judgment would enable us to use automatic scores to predict corresponding human judgment scores. Therefore, Pearson's correlation coefficient is a good measure to look at.

5 For a quick overview of the Pearson's coefficient, see: http://davidmlane.com/hyperstat/A34739.html.

Spearman's correlation coefficient6 is also a measure of correlation between two variables. It is a non-parametric measure and is a special case of the Pearson's correlation coefficient when the values of the data are converted into ranks before computing the coefficient. Spearman's correlation coefficient does not assume that the correlation between the variables is linear. Therefore it is a useful correlation indicator even when a good linear correlation between two variables, for example according to Pearson's correlation coefficient, could not be found. It also suits the NIST MT evaluation scenario where multiple systems are ranked according to some performance metrics.

6 For a quick overview of the Spearman's coefficient, see: http://davidmlane.com/hyperstat/A62436.html.
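Both correlation statistics are straightforward to compute; a minimal sketch follows. The per-system score lists are hypothetical numbers used purely for illustration, and the bootstrap confidence intervals reported in Table 1 are not reproduced here.

```python
def pearson(xs, ys):
    """Pearson's correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def spearman(xs, ys):
    """Spearman's coefficient: Pearson's coefficient computed on ranks
    (average ranks are used for tied values)."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and vs[order[j + 1]] == vs[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0          # average 1-based rank for a tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    return pearson(ranks(xs), ranks(ys))

# Hypothetical per-system scores: automatic metric vs. human adequacy means.
metric = [0.28, 0.31, 0.25, 0.33, 0.30, 0.22, 0.27]
human = [2.9, 3.2, 2.6, 3.4, 3.1, 2.3, 2.8]
print(pearson(metric, human), spearman(metric, human))
```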
Adequacy Method P 95%L 95%U S 95%L 95%U P 95%L 95%U S 95%L 95%U P 95%L 95%U S 95%L 95%U BLEU1 0.86 0.83 0.89 0.80 0.71 0.90 0.87 0.84 0.90 0.76 0.67 0.89 0.91 0.89 0.93 0.85 0.76 0.95 BLEU4 0.77 0.72 0.81 0.77 0.71 0.89 0.79 0.75 0.82 0.67 0.55 0.83 0.82 0.78 0.85 0.76 0.67 0.89 BLEU12 0.66 0.60 0.72 0.53 0.44 0.65 0.72 0.57 0.81 0.65 0.25 0.88 0.72 0.58 0.81 0.66 0.28 0.88 NIST 0.89 0.86 0.92 0.78 0.71 0.89 0.87 0.85 0.90 0.80 0.74 0.92 0.90 0.88 0.93 0.88 0.83 0.97 WER 0.47 0.41 0.53 0.56 0.45 0.74 0.43 0.37 0.49 0.66 0.60 0.82 0.48 0.42 0.54 0.66 0.60 0.81 PER 0.67 0.62 0.72 0.56 0.48 0.75 0.63 0.58 0.68 0.67 0.60 0.83 0.72 0.68 0.76 0.69 0.62 0.86 ROUGE-L 0.87 0.84 0.90 0.84 0.79 0.93 0.89 0.86 0.92 0.84 0.71 0.94 0.92 0.90 0.94 0.87 0.76 0.95 ROUGE-W 0.84 0.81 0.87 0.83 0.74 0.90 0.85 0.82 0.88 0.77 0.67 0.90 0.89 0.86 0.91 0.86 0.76 0.95 ROUGE-S* 0.85 0.81 0.88 0.83 0.76 0.90 0.90 0.88 0.93 0.82 0.70 0.92 0.95 0.93 0.97 0.85 0.76 0.94 ROUGE-S0 0.82 0.78 0.85 0.82 0.71 0.90 0.84 0.81 0.87 0.76 0.67 0.90 0.87 0.84 0.90 0.82 0.68 0.90 ROUGE-S4 0.82 0.78 0.85 0.84 0.79 0.93 0.87 0.85 0.90 0.83 0.71 0.90 0.92 0.90 0.94 0.84 0.74 0.93 ROUGE-S9 0.84 0.80 0.87 0.84 0.79 0.92 0.89 0.86 0.92 0.84 0.76 0.93 0.94 0.92 0.96 0.84 0.76 0.94 GTM10 0.82 0.79 0.85 0.79 0.74 0.83 0.91 0.89 0.94 0.84 0.79 0.93 0.94 0.92 0.96 0.84 0.79 0.92 GTM20 0.77 0.73 0.81 0.76 0.69 0.88 0.79 0.76 0.83 0.70 0.55 0.83 0.83 0.79 0.86 0.80 0.67 0.90 GTM30 0.74 0.70 0.78 0.73 0.60 0.86 0.74 0.70 0.78 0.63 0.52 0.79 0.77 0.73 0.81 0.64 0.52 0.80 Fluency Method P 95%L 95%U S 95%L 95%U P 95%L 95%U S 95%L 95%U P 95%L 95%U S 95%L 95%U BLEU1 0.81 0.75 0.86 0.76 0.62 0.90 0.73 0.67 0.79 0.70 0.62 0.81 0.70 0.63 0.77 0.79 0.67 0.90 BLEU4 0.86 0.81 0.90 0.74 0.62 0.86 0.83 0.78 0.88 0.68 0.60 0.81 0.83 0.78 0.88 0.70 0.62 0.81 BLEU12 0.87 0.76 0.93 0.66 0.33 0.79 0.93 0.81 0.97 0.78 0.44 0.94 0.93 0.84 0.97 0.80 0.49 0.94 NIST 0.81 0.75 0.87 0.74 0.62 0.86 0.70 0.64 0.77 0.68 0.60 0.79 0.68 0.61 0.75 0.77 0.67 0.88 WER 0.69 0.62 0.75 0.68 0.57 0.85 0.59 0.51 0.66 0.70 0.57 0.82 0.60 0.52 0.68 0.69 0.57 0.81 PER 0.79 0.74 0.85 0.67 0.57 0.82 0.68 0.60 0.73 0.69 0.60 0.81 0.70 0.63 0.76 0.65 0.57 0.79 ROUGE-L 0.83 0.77 0.88 0.80 0.67 0.90 0.76 0.69 0.82 0.79 0.64 0.90 0.73 0.66 0.80 0.78 0.67 0.90 ROUGE-W 0.85 0.80 0.90 0.79 0.63 0.90 0.78 0.73 0.84 0.72 0.62 0.83 0.77 0.71 0.83 0.78 0.67 0.90 ROUGE-S* 0.84 0.78 0.89 0.79 0.62 0.90 0.80 0.74 0.86 0.77 0.64 0.90 0.78 0.71 0.84 0.79 0.69 0.90 ROUGE-S0 0.87 0.81 0.91 0.78 0.62 0.90 0.83 0.78 0.88 0.71 0.62 0.82 0.82 0.77 0.88 0.76 0.62 0.90 ROUGE-S4 0.84 0.79 0.89 0.80 0.67 0.90 0.82 0.77 0.87 0.78 0.64 0.90 0.81 0.75 0.86 0.79 0.67 0.90 ROUGE-S9 0.84 0.79 0.89 0.80 0.67 0.90 0.81 0.76 0.87 0.79 0.69 0.90 0.79 0.73 0.85 0.79 0.69 0.90 GTM10 0.73 0.66 0.79 0.76 0.60 0.87 0.71 0.64 0.78 0.80 0.67 0.90 0.66 0.58 0.74 0.80 0.64 0.90 GTM20 0.86 0.81 0.90 0.80 0.67 0.90 0.83 0.77 0.88 0.69 0.62 0.81 0.83 0.77 0.87 0.74 0.62 0.89 GTM30 0.87 0.81 0.91 0.79 0.67 0.90 0.83 0.77 0.87 0.73 0.62 0.83 0.83 0.77 0.88 0.71 0.60 0.83 With Case Information (Case) Lower Case (NoCase) Lower Case & Stemmed (Stem) With Case Information (Case) Lower Case (NoCase) Lower Case & Stemmed (Stem) Table 1. Pearson’s ρ and Spearman’s ρ correlations of automatic evaluation measures vs. adequacy and fluency: BLEU1, 4, and 12 are BLEU with maximum of 1, 4, and 12 grams, NIST is the NIST score, ROUGE-L is LCS-based F-measure (β = 1), ROUGE-W is weighted LCS-based F-measure (β = 1). 
ROUGE-S* is skip-bigram-based co-occurrence statistics with any skip distance limit, ROUGE-SN is skip-bigram-based F-measure (β = 1) with a maximum skip distance of N, PER is position-independent word error rate, and WER is word error rate. GTM 10, 20, and 30 are general text matcher with exponents of 1.0, 2.0, and 3.0. (Note, only BLEU1, 4, and 12 are shown here to preserve space.)

To estimate the significance of these correlation statistics, we applied bootstrap resampling, generating random samples of the 919 different sentence segments. The lower and upper values of the 95% confidence interval are also shown in the table. Dark (green) cells are the best correlation numbers in their categories and light gray cells are statistically equivalent to the best numbers in their categories.

Analyzing all runs according to the adequacy and fluency table, we make the following observations. Applying the stemmer achieves higher correlation with adequacy but keeping case information achieves higher correlation with fluency, except for BLEU7-12 (only BLEU12 is shown). For example, the Pearson's ρ (P) correlation of ROUGE-S* with adequacy increases from 0.85 (Case) to 0.95 (Stem) while its Pearson's ρ correlation with fluency drops from 0.84 (Case) to 0.78 (Stem). We will focus our discussions on the Stem set in adequacy and the Case set in fluency.

The Pearson's ρ correlation values in the Stem set of the Adequacy Table indicate that ROUGE-L and ROUGE-S with a skip distance longer than 0 correlate highly and linearly with adequacy and outperform BLEU and NIST. ROUGE-S* achieves the best correlation with a Pearson's ρ of 0.95. Measures favoring consecutive matches, i.e. BLEU4 and 12, ROUGE-W, GTM20 and 30, ROUGE-S0 (bigram), and WER, have lower Pearson's ρ. Among them, WER (0.48), which tends to penalize small word movements, is the worst performer. One interesting observation is that longer BLEU has lower correlation with adequacy. Spearman's ρ values generally agree with Pearson's ρ but have more equivalents.

The Pearson's ρ correlation values in the Stem set of the Fluency Table indicate that BLEU12 has the highest correlation (0.93) with fluency. However, it is statistically indistinguishable with 95% confidence from all other metrics shown in the Case set of the Fluency Table except for WER and GTM10. GTM10 has good correlation with human judgments in adequacy but not fluency; while GTM20 and GTM30, i.e. GTM with exponents larger than 1.0, have good correlation with human judgment in fluency but not adequacy. ROUGE-L and ROUGE-S*, 4, and 9 are good automatic evaluation metric candidates since they perform as well as BLEU in the fluency correlation analysis and outperform BLEU4 and 12 significantly in adequacy. Among them, ROUGE-L is the best metric in both adequacy and fluency correlation with human judgment according to Spearman's correlation coefficient and is statistically indistinguishable from the best metrics in both adequacy and fluency correlation with human judgment according to Pearson's correlation coefficient.

6 Conclusion

In this paper we presented two new objective automatic evaluation methods for machine translation, ROUGE-L based on longest common subsequence (LCS) statistics between a candidate translation and a set of reference translations.
Longest common subsequence takes into account sentence level structure similarity naturally and identifies longest co-occurring in-sequence ngrams automatically while this is a free parameter in BLEU. To give proper credit to shorter common sequences that are ignored by LCS but still retain the flexibility of non-consecutive matches, we proposed counting skip bigram co-occurrence. The skip-bigram-based ROUGE-S* (without skip distance restriction) had the best Pearson's ρ correlation of 0.95 in adequacy when all words were lower case and stemmed. ROUGE-L, ROUGE-W, ROUGE-S*, ROUGE-S4, and ROUGE-S9 were equal performers to BLEU in measuring fluency. However, they have the advantage that we can apply them on sentence level while longer BLEU such as BLEU12 would not differentiate any sentences with length shorter than 12 words (i.e. no 12-gram matches). We plan to explore their correlation with human judgments on sentence-level in the future. We also confirmed empirically that adequacy and fluency focused on different aspects of machine translations. Adequacy placed more emphasis on terms co-occurred in candidate and reference translations as shown in the higher correlations in Stem set than Case set in Table 1; while the reverse was true in the terms of fluency. The evaluation results of ROUGE-L, ROUGEW, and ROUGE-S in machine translation evaluation are very encouraging. However, these measures in their current forms are still only applying string-to-string matching. We have shown that better correlation with adequacy can be reached by applying stemmer. In the next step, we plan to extend them to accommodate synonyms and paraphrases. For example, we can use an existing thesaurus such as WordNet (Miller 1990) or creating a customized one by applying automated synonym set discovery methods (Pantel and Lin 2002) to identify potential synonyms. Paraphrases can also be automatically acquired using statistical methods as shown by Barzilay and Lee (2003). Once we have acquired synonym and paraphrase data, we then need to design a soft matching function that assigns partial credits to these approximate matches. In this scenario, statistically generated data has the advantage of being able to provide scores reflecting the strength of similarity between synonyms and paraphrased. ROUGE-L, ROUGE-W, and ROUGE-S have also been applied in automatic evaluation of summarization and achieved very promising results (Lin 2004). In Lin and Och (2004), we proposed a framework that automatically evaluated automatic MT evaluation metrics using only manual translations without further human involvement. According to the results reported in that paper, ROUGE-L, ROUGE-W, and ROUGE-S also outperformed BLEU and NIST. References Akiba, Y., K. Imamura, and E. Sumita. 2001. Using Multiple Edit Distances to Automatically Rank Machine Translation Output. In Proceedings of the MT Summit VIII, Santiago de Compostela, Spain. Barzilay, R. and L. Lee. 2003. Learning to Paraphrase: An Unsupervised Approach Using Multiple-Sequence Alignmen. In Proceeding of NAACL-HLT 2003, Edmonton, Canada. Leusch, G., N. Ueffing, and H. Ney. 2003. A Novel String-to-String Distance Measure with Applications to Machine Translation Evaluation. In Proceedings of MT Summit IX, New Orleans, U.S.A. Levenshtein, V. I. 1966. Binary codes capable of correcting deletions, insertions and reversals. Soviet Physics Doklady. Lin, C.Y. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. 
In Proceedings of the Workshop on Text Summarization Branches Out, post-conference workshop of ACL 2004, Barcelona, Spain.
Lin, C.-Y. and F. J. Och. 2004. ORANGE: a Method for Evaluating Automatic Evaluation Metrics for Machine Translation. In Proceedings of the 20th International Conference on Computational Linguistics (COLING 2004), Geneva, Switzerland.
Miller, G. 1990. WordNet: An Online Lexical Database. International Journal of Lexicography, 3(4).
Melamed, I. D. 1995. Automatic Evaluation and Uniform Filter Cascades for Inducing N-best Translation Lexicons. In Proceedings of the 3rd Workshop on Very Large Corpora (WVLC3), Boston, U.S.A.
Melamed, I. D., R. Green, and J. P. Turian. 2003. Precision and Recall of Machine Translation. In Proceedings of NAACL/HLT 2003, Edmonton, Canada.
Nießen, S., F. J. Och, G. Leusch, and H. Ney. 2000. An Evaluation Tool for Machine Translation: Fast Evaluation for MT Research. In Proceedings of the 2nd International Conference on Language Resources and Evaluation, Athens, Greece.
NIST. 2002. Automatic Evaluation of Machine Translation Quality using N-gram Co-Occurrence Statistics. http://www.nist.gov/speech/tests/mt/doc/ngramstudy.pdf
Pantel, P. and Lin, D. 2002. Discovering Word Senses from Text. In Proceedings of SIGKDD-02, Edmonton, Canada.
Papineni, K., S. Roukos, T. Ward, and W.-J. Zhu. 2001. BLEU: a Method for Automatic Evaluation of Machine Translation. IBM Research Report RC22176 (W0109-022).
Porter, M. F. 1980. An Algorithm for Suffix Stripping. Program, 14, pp. 130-137.
Saggion, H., D. Radev, S. Teufel, and W. Lam. 2002. Meta-Evaluation of Summaries in a Cross-Lingual Environment Using Content-Based Metrics. In Proceedings of COLING-2002, Taipei, Taiwan.
Su, K.-Y., M.-W. Wu, and J.-S. Chang. 1992. A New Quantitative Quality Measure for Machine Translation System. In Proceedings of COLING-92, Nantes, France.
Thompson, H. S. 1991. Automatic Evaluation of Translation Quality: Outline of Methodology and Report on Pilot Experiment. In Proceedings of the Evaluator's Forum, ISSCO, Geneva, Switzerland.
Turian, J. P., L. Shen, and I. D. Melamed. 2003. Evaluation of Machine Translation and its Evaluation. In Proceedings of MT Summit IX, New Orleans, U.S.A.
Van Rijsbergen, C. J. 1979. Information Retrieval. Butterworths, London.
A Unified Framework for Automatic Evaluation using N-gram Co-Occurrence Statistics Radu SORICUT Information Sciences Institute University of Southern California 4676 Admiralty Way Marina del Rey, CA 90292, USA [email protected] Eric BRILL Microsoft Research One Microsoft Way Redmond, WA 98052, USA [email protected] Abstract In this paper we propose a unified framework for automatic evaluation of NLP applications using N-gram co-occurrence statistics. The automatic evaluation metrics proposed to date for Machine Translation and Automatic Summarization are particular instances from the family of metrics we propose. We show that different members of the same family of metrics explain best the variations obtained with human evaluations, according to the application being evaluated (Machine Translation, Automatic Summarization, and Automatic Question Answering) and the evaluation guidelines used by humans for evaluating such applications. 1 Introduction With the introduction of the BLEU metric for machine translation evaluation (Papineni et al, 2002), the advantages of doing automatic evaluation for various NLP applications have become increasingly appreciated: they allow for faster implement-evaluate cycles (by by-passing the human evaluation bottleneck), less variation in evaluation performance due to errors in human assessor judgment, and, not least, the possibility of hill-climbing on such metrics in order to improve system performance (Och 2003). Recently, a second proposal for automatic evaluation has come from the Automatic Summarization community (Lin and Hovy, 2003), with an automatic evaluation metric called ROUGE, inspired by BLEU but twisted towards the specifics of the summarization task. An automatic evaluation metric is said to be successful if it is shown to have high agreement with human-performed evaluations. Human evaluations, however, are subject to specific guidelines given to the human assessors when performing the evaluation task; the variation in human judgment is therefore highly influenced by these guidelines. It follows that, in order for an automatic evaluation to agree with a humanperformed evaluation, the evaluation metric used by the automatic method must be able to account, at least to some degree, for the bias induced by the human evaluation guidelines. None of the automatic evaluation methods proposed to date, however, explicitly accounts for the different criteria followed by the human assessors, as they are defined independently of the guidelines used in the human evaluations. In this paper, we propose a framework for automatic evaluation of NLP applications which is able to account for the variation in the human evaluation guidelines. We define a family of metrics based on N-gram co-occurrence statistics, for which the automatic evaluation metrics proposed to date for Machine Translation and Automatic Summarization can be seen as particular instances. We show that different members of the same family of metrics explain best the variations obtained with human evaluations, according to the application being evaluated (Machine Translation, Automatic Summarization, and Question Answering) and the guidelines used by humans when evaluating such applications. 2 An Evaluation Plane for NLP In this section we describe an evaluation plane on which we place various NLP applications evaluated using various guideline packages. 
This evaluation plane is defined by two orthogonal axes (see Figure 1): an Application Axis, on which we order NLP applications according to the faithfulness/compactness ratio that characterizes the application’s input and output; and a Guideline Axis, on which we order various human guideline packages, according to the precision/recall ratio that characterizes the evaluation guidelines. 2.1 An Application Axis for Evaluation When trying to define what translating and summarizing means, one can arguably suggest that a translation is some “as-faithful-as-possible” rendering of some given input, whereas a summary is some “as-compact-as-possible” rendering of some given input. As such, Machine Translation (MT) and Automatic Summarization (AS) are on the extremes of a faithfulness/compactness (f/c) ratio between inputs and outputs. In between these two extremes lie various other NLP applications: a high f/c ratio, although lower than MT’s, characterizes Automatic Paraphrasing (paraphrase: To express, interpret, or translate with latitude); close to the other extreme, a low f/c ratio, although higher than AS’s, characterizes Automatic Summarization with view-points (summarization which needs to focus on a given point of view, extern to the document(s) to be summarized). Another NLP application, Automatic Question Answering (QA), has arguably a close-to-1 f/c ratio: the task is to render an answer about the thing(s) inquired for in a question (the faithfulness side), in a manner that is concise enough to be regarded as a useful answer (the compactness side). 2.2 An Guideline Axis for Evaluation Formal human evaluations make use of various guidelines that specify what particular aspects of the output being evaluated are considered important, for the particular application being evaluated. For example, human evaluations of MT (e.g., TIDES 2002 evaluation, performed by NIST) have traditionally looked at two different aspects of a translation: adequacy (how much of the content of the original sentence is captured by the proposed translation) and fluency (how correct is the proposed translation sentence in the target language). In many instances, evaluation guidelines can be linearly ordered according to the precision/recall (p/r) ratio they specify. For example, evaluation guidelines for adequacy evaluation of MT have a low p/r ratio, because of the high emphasis on recall (i.e., content is rewarded) and low emphasis on precision (i.e., verbosity is not penalized); on the other hand, evaluation guidelines for fluency of MT have a high p/r ratio, because of the low emphasis on recall (i.e., content is not rewarded) and high emphasis on wording (i.e., extraneous words are penalized). Another evaluation we consider in this paper, the DUC 2001 evaluation for Automatic Summarization (also performed by NIST), had specific guidelines for coverage evaluation, which means a low p/r ratio, because of the high emphasis on recall (i.e., content is rewarded). Last but not least, the QA evaluation for correctness we discuss in Section 4 has a close-to-1 p/r ratio for evaluation guidelines (i.e., both correct content and precise answer wording are rewarded). When combined, the application axis and the guideline axis define a plane in which particular evaluations are placed according to their application/guideline coordinates. In Figure 1 we illustrate this evaluation plane, and the evaluation examples mentioned above are placed in this plane according to their coordinates. 
Figure 1: Evaluation plane for NLP applications (Guideline Axis: precision/recall ratio of the evaluation guidelines, low to high; Application Axis: faithfulness/compactness ratio of the application, low to high, from AS to MT; plotted evaluations: DUC-AS 2001 coverage, TIDES-MT 2002 adequacy and fluency, QA 2004 correctness).

3 A Unified Framework for Automatic Evaluation

In this section we propose a family of evaluation metrics based on N-gram co-occurrence statistics. Such a family of evaluation metrics provides flexibility in terms of accommodating both various NLP applications and various values of the precision/recall ratio in the human guideline packages used to evaluate such applications.

3.1 A Precision-focused Family of Metrics

Inspired by the work of Papineni et al. (2002) on BLEU, we define a precision-focused family of metrics, using as parameter a non-negative integer N. Part of the definition includes a list of stop-words (SW) and a function for extracting the stem of a given word (ST). Suppose we have a given NLP application for which we want to evaluate the candidate answer set Candidates for some input sequences, given a reference answer set References. For each individual candidate answer C, we define S(C,n) as the multi-set of n-grams obtained from the candidate answer C after stemming the unigrams using ST and eliminating the unigrams found in SW. We therefore define a precision score:

P(n) = Σ_{C ∈ Candidates} Σ_{ngram ∈ S(C,n)} Count_clip(ngram) / Σ_{C ∈ Candidates} Σ_{ngram ∈ S(C,n)} Count(ngram)

where Count(ngram) is the number of n-gram counts, and Count_clip(ngram) is the maximum number of co-occurrences of ngram in the candidate answer and its reference answer. Because the denominator in the P(n) formula consists of a sum over the proposed candidate answers, this formula is a precision-oriented formula, penalizing verbose candidates. This precision score, however, can be made artificially higher when proposing shorter and shorter candidate answers. This is offset by adding a brevity penalty, BP:

BP = 1 if |c| ≥ B·|r|;  BP = e^(1 − B·|r|/|c|) if |c| < B·|r|

where |c| equals the sum of the lengths of the proposed answers, |r| equals the sum of the lengths of the reference answers, and B is a brevity constant. We now define a precision-focused family of metrics, parameterized by a non-negative integer N, as:

PS(N) = BP · exp( Σ_{n=1}^{N} w_n log(P(n)) )

This family of metrics can be interpreted as a weighted linear average of precision scores for increasingly longer n-grams. As the values of the precision scores decrease roughly exponentially with the increase of N, the logarithm is needed to obtain a linear average. Note that the metrics of this family are well-defined only for N's small enough to yield non-zero P(n) scores. For test corpora of reasonable size, the metrics are usually well-defined for N ≤ 4.

The BLEU metric proposed by Papineni et al. (2002) for automatic evaluation of machine translation is part of the family of metrics PS(N), as the particular metric obtained when N=4, the w_n's are 1/N, the brevity constant B=1, the list of stop-words SW is empty, and the stemming function ST is the identity function.
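A minimal sketch of PS(N) under the definitions above follows. The stop-word list, the stemmer, and the exact placement of the brevity constant B in the penalty are taken as parameters and assumptions here; the code illustrates the family rather than reproducing the authors' implementation.

```python
import math
from collections import Counter

def filtered_ngrams(tokens, n, stopwords=frozenset(), stem=lambda w: w):
    """S(A, n): the multi-set of n-grams of an answer, after stemming the
    unigrams with ST and removing unigrams found in SW."""
    kept = [stem(w) for w in tokens if w not in stopwords]
    return Counter(tuple(kept[i:i + n]) for i in range(len(kept) - n + 1))

def ps(candidates, references, n_max, b=1.0, stopwords=frozenset(), stem=lambda w: w):
    """Precision-focused score PS(N) with weights w_n = 1/N and brevity constant B."""
    log_sum = 0.0
    for n in range(1, n_max + 1):
        clipped = total = 0
        for cand, ref in zip(candidates, references):
            c_counts = filtered_ngrams(cand, n, stopwords, stem)
            r_counts = filtered_ngrams(ref, n, stopwords, stem)
            # clip candidate n-gram counts by what the reference actually contains
            clipped += sum(min(c, r_counts[g]) for g, c in c_counts.items())
            total += sum(c_counts.values())
        if clipped == 0 or total == 0:
            return 0.0                       # PS(N) is zero/undefined at this order
        log_sum += (1.0 / n_max) * math.log(clipped / total)
    c_len = sum(len(c) for c in candidates)
    r_len = sum(len(r) for r in references)
    # brevity penalty as reconstructed above: no penalty once |c| reaches B * |r|
    bp = 1.0 if c_len >= b * r_len else math.exp(1.0 - b * r_len / c_len)
    return bp * math.exp(log_sum)
```

With SW empty, ST the identity, B=1, and N=4, this reduces to a single-reference, corpus-level BLEU, in line with the remark above.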
3.2 A Recall-focused Family of Metrics

As proposed by Lin and Hovy (2003), a precision-focused metric such as BLEU can be twisted such that it yields a recall-focused metric. In a similar manner, we define a recall-focused family of metrics, using as parameter a non-negative integer N, with a list of stop-words (SW) and a function for extracting the stem of a given word (ST) as part of the definition. As before, suppose we have a given NLP application for which we want to evaluate the candidate answer set Candidates for some input sequences, given a reference answer set References. For each individual reference answer R, we define S(R,n) as the multi-set of n-grams obtained from the reference answer R after stemming the unigrams using ST and eliminating the unigrams found in SW. We therefore define a recall score as:

R(n) = Σ_{R ∈ References} Σ_{ngram ∈ S(R,n)} Count_clip(ngram) / Σ_{R ∈ References} Σ_{ngram ∈ S(R,n)} Count(ngram)

where, as before, Count(ngram) is the number of n-gram counts, and Count_clip(ngram) is the maximum number of co-occurrences of ngram in the reference answer and its corresponding candidate answer. Because the denominator in the R(n) formula consists of a sum over the reference answers, this formula is essentially a recall-oriented formula, which penalizes incomplete candidates. This recall score, however, can be made artificially higher when proposing longer and longer candidate answers. This is offset by adding a wordiness penalty, WP:

WP = 1 if |c| ≤ W·|r|;  WP = e^(1 − |c|/(W·|r|)) if |c| > W·|r|

where |c| and |r| are defined as before, and W is a wordiness constant. We now define a recall-focused family of metrics, parameterized by a non-negative integer N, as:

RS(N) = WP · exp( Σ_{n=1}^{N} w_n log(R(n)) )

This family of metrics can be interpreted as a weighted linear average of recall scores for increasingly longer n-grams. For test corpora of reasonable size, the metrics are usually well-defined for N ≤ 4.

The ROUGE metric proposed by Lin and Hovy (2003) for automatic evaluation of machine-produced summaries is part of the family of metrics RS(N), as the particular metric obtained when N=1, the w_n's are 1/N, the wordiness constant W=∞, the list of stop-words SW is their own stop-word list, and the stemming function ST is the one defined by the Porter stemmer (Porter 1980).

3.3 A Unified Framework for Automatic Evaluation

The precision-focused metric family PS(N) and the recall-focused metric family RS(N) defined in the previous sections are unified under the metric family AEv(α,N), defined as:

AEv(α,N) = PS(N) · RS(N) / ( α·RS(N) + (1−α)·PS(N) )

This formula extends the well-known F-measure that combines recall and precision numbers into a single number (van Rijsbergen, 1979), by combining recall and precision metric families into a single metric family. For α=0, AEv(α,N) is the same as the recall-focused family of metrics RS(N); for α=1, AEv(α,N) is the same as the precision-focused family of metrics PS(N). For α in between 0 and 1, AEv(α,N) are metrics that balance recall and precision according to α.

For the rest of the paper, we restrict the parameters of the AEv(α,N) family as follows: α varies continuously in [0,1], N varies discretely in {1,2,3,4}, the linear weights w_n are 1/N, the brevity constant is 1, the wordiness constant is 2, the list of stop-words SW is our own 626 stop-word list, and the stemming function ST is the one defined by the Porter stemmer (Porter 1980).
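Building on the PS(N) sketch above (it reuses filtered_ngrams and ps from that sketch), the recall-focused score and its combination into AEv(α,N) can be sketched as follows. The piecewise form of the wordiness penalty is a reconstruction consistent with the definitions above (in particular, W=∞ disables the penalty), not a verbatim copy of the authors' code.

```python
import math

def rs(candidates, references, n_max, w=2.0, stopwords=frozenset(), stem=lambda x: x):
    """Recall-focused score RS(N) with weights w_n = 1/N and wordiness constant W.
    Reuses filtered_ngrams() from the PS(N) sketch above."""
    log_sum = 0.0
    for n in range(1, n_max + 1):
        clipped = total = 0
        for cand, ref in zip(candidates, references):
            c_counts = filtered_ngrams(cand, n, stopwords, stem)
            r_counts = filtered_ngrams(ref, n, stopwords, stem)
            # clip reference n-gram counts by what the candidate actually contains
            clipped += sum(min(r_cnt, c_counts[g]) for g, r_cnt in r_counts.items())
            total += sum(r_counts.values())
        if clipped == 0 or total == 0:
            return 0.0
        log_sum += (1.0 / n_max) * math.log(clipped / total)
    c_len = sum(len(c) for c in candidates)
    r_len = sum(len(r) for r in references)
    # wordiness penalty (assumed form): no penalty until |c| exceeds W * |r|
    wp = 1.0 if c_len <= w * r_len else math.exp(1.0 - c_len / (w * r_len))
    return wp * math.exp(log_sum)

def aev(alpha, n_max, candidates, references):
    """AEv(alpha, N): the F-measure-style combination of PS(N) and RS(N).
    alpha = 0 gives RS(N) (ROUGE-like); alpha = 1 gives PS(N) (BLEU-like)."""
    p = ps(candidates, references, n_max)   # ps() from the previous sketch
    r = rs(candidates, references, n_max)
    denom = alpha * r + (1.0 - alpha) * p
    return p * r / denom if denom else 0.0
```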
We establish a correspondence between the parameters of the family of metrics AEv(α,N) and the evaluation plane in Figure 1 as follows: α parameterizes the guideline axis (x-axis) of the plane, such that α=0 corresponds to a low precision/recall (p/r) ratio, and α=1 corresponds to a high p/r ratio; N parameterizes the application axis (y-axis) of the plane, such that N=1 corresponds to a low faithfulness/compactness (f/c) ratio (unigram statistics allow for a low representation of faithfulness, but a high representation of compactness), and N=4 corresponds to a high f/c ratio (n-gram statistics up to 4-grams allow for a high representation of faithfulness, but a low representation of compactness). This framework enables us to predict that a human-performed evaluation is best approximated by metrics that have similar f/c ratio as the application being evaluated and similar p/r ratio as the evaluation package used by the human assessors. For example, an application with a high f/c ratio, evaluated using a low p/r ratio evaluation guideline package (an example of this is the adequacy evaluation for MT in TIDES 2002), is best approximated by the automatic evaluation metric defined by a low α and a high N; an application with a close-to-1 f/c ratio, evaluated using an evaluation guideline package characterized by a close-to-1 p/r ratio (such as the correctness evaluation for Question Answering in Section 4.3) is best approximated by an automatic metric defined by a median α and a median N. 4 Evaluating the Evaluation Framework In this section, we present empirical results regarding the ability of our family of metrics to approximate human evaluations of various applications under various evaluation guidelines. We measure the amount of approximation of a human evaluation by an automatic evaluation as the value of the coefficient of determination R2 between the human evaluation scores and the automatic evaluation scores for various systems implementing Machine Translation, Summarization, and Question Answering applications. In this framework, the coefficient of determination R2 is to be interpreted as the percentage from the total variation of the human evaluation (that is, why some system’s output is better than some other system’s output, from the human evaluator’s perspective) that is captured by the automatic evaluation (that is, why some system’s output is better than some other system’s output, from the automatic evaluation perspective). The values of R2 vary between 0 and 1, with a value of 1 indicating that the automatic evaluation explains perfectly the human evaluation variation, and a value of 0 indicating that the automatic evaluation explains nothing from the human evaluation variation. All the results for the values of R2 for the family of metrics AEv(α,N) are reported with α varying from 0 to 1 in 0.1 increments, and N varying from 1 to 4. 4.1 Machine Translation Evaluation The Machine Translation evaluation carried out by NIST in 2002 for DARPA’s TIDES programme involved 7 systems that participated in the Chinese-English track. Each system was evaluated by a human judge, using one reference extracted from a list of 4 available reference translations. Each of the 878 test sentences was evaluated both for adequacy (how much of the content of the original sentence is captured by the proposed translation) and fluency (how correct is the proposed translation sentence in the target language). 
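Before reporting the correlation numbers, the R2 statistic used throughout Section 4 can be sketched as follows; here it is computed as the squared Pearson correlation, which coincides with the regression R2 for a single-predictor linear fit (an assumption about the exact computation), and the per-system scores are hypothetical numbers used only for illustration.

```python
def r_squared(automatic, human):
    """Coefficient of determination between automatic and human scores across
    systems, computed as the squared Pearson correlation."""
    n = len(automatic)
    ma, mh = sum(automatic) / n, sum(human) / n
    cov = sum((a - ma) * (h - mh) for a, h in zip(automatic, human))
    va = sum((a - ma) ** 2 for a in automatic)
    vh = sum((h - mh) ** 2 for h in human)
    return cov * cov / (va * vh)

# Hypothetical scores for 7 systems: AEv metric values vs. human adequacy means.
aev_scores = [0.21, 0.34, 0.28, 0.40, 0.25, 0.31, 0.37]
adequacy = [2.1, 3.0, 2.6, 3.5, 2.4, 2.8, 3.2]
print(r_squared(aev_scores, adequacy))
```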
From the publicly available data for this evaluation (TIDES 2002), we compute the values of R2 for 7 data points (corresponding to the 7 systems participating in the Chinese-English track), using as a reference set one of the 4 sets of reference translations available. In Table 1, we present the values of the coefficient of determination R2 for the family of metrics AEv(α,N), when considering only the fluency scores from the human evaluation. As mentioned in Section 2, the evaluation guidelines for fluency have a high precision/recall ratio, whereas MT is an application with a high faithfulness/compactness ratio. In this case, our evaluation framework predicts that the automatic evaluation metrics that explain most of the variation in the human evaluation must have a high α and a high N. As seen in Table 1, our evaluation framework correctly predicts the automatic evaluation metrics that explain most of the variation in the human evaluation: metrics AEv(1,3), AEv(0.9,3), and AEv(1,4) capture most of the variation: 79.04%, 78.94%, and 78.87%, respectively. Since metric AEv(1,4) is almost the same as the BLEU metric (modulo stemming and stop word elimination for unigrams), our results confirm the current practice in the Machine Translation community, which commonly uses BLEU for automatic evaluation. For comparison purposes, we also computed the value of R2 for fluency using the BLEU score formula given in (Papineni et al., 2002), for the 7 systems using the same one reference, and we obtained a similar value, 78.52%; computing the value of R2 for fluency using the BLEU scores computed with all 4 references available yielded a lower value for R2, 64.96%, although BLEU scores obtained with multiple references are usually considered more reliable. In Table 2, we present the values of the coefficient of determination R2 for the family of metrics AEv(α,N), when considering only the adequacy scores from the human evaluation. As mentioned in Section 2, the evaluation guidelines for adequacy have a low precision/recall ratio, whereas MT is an application with high faithfulness/compactness ratio. In this case, our evaluation framework predicts that the automatic evaluation metrics that explain most of the variation in the human evaluation must have a low α and a high N. As seen in Table 2, our evaluation framework correctly predicts the automatic evaluation metric that explains most of the variation in the human evaluation: metric AEv(0,4) captures most of the variation, 83.04%. For comparison purposes, we also computed the value of R2 for adequacy using the BLEU score formula given in (Papineni et al., 2002), for the 7 systems using the same one reference, and we obtain a similar value, 83.91%; computing the value of R2 for adequacy using the BLEU scores computed with all 4 references available also yielded a lower value for R2, 62.21%. 4.2 Automatic Summarization Evaluation The Automatic Summarization evaluation carried out by NIST for the DUC 2001 conference involved 15 participating systems. We focus here on the multi-document summarization task, in which 4 generic summaries (of 50, 100, 200, and 400 words) were required for a given set of documents on a single subject. For this evaluation 30 test sets were used, and each system was evaluated by a human judge using one reference extracted from a list of 2 reference summaries. One of the evaluations required the assessors to judge the coverage of the summaries. 
The coverage of a summary was measured by comparing a system's units versus the units of a reference summary, and assessing whether each system unit expresses all, most, some, hardly any, or none of the current reference unit. A final evaluation score for coverage was obtained using a coverage score computed as a weighted recall score (see (Lin and Hovy 2003) for more information on the human summary evaluation).

Table 1: R2 values for the family of metrics AEv(α,N), for fluency scores in MT evaluation
N/α:  0      0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1
4:    76.10  76.45  76.78  77.10  77.40  77.69  77.96  78.21  78.45  78.67  78.87
3:    76.11  76.6   77.04  77.44  77.80  78.11  78.38  78.61  78.80  78.94  79.04
2:    73.19  74.21  75.07  75.78  76.32  76.72  76.96  77.06  77.03  76.87  76.58
1:    31.71  38.22  44.82  51.09  56.59  60.99  64.10  65.90  66.50  66.12  64.99

Table 2: R2 values for the family of metrics AEv(α,N), for adequacy scores in MT evaluation
N/α:  0      0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1
4:    83.04  82.58  82.11  81.61  81.10  80.56  80.01  79.44  78.86  78.26  77.64
3:    81.80  81.00  80.16  79.27  78.35  77.39  76.40  75.37  74.31  73.23  72.11
2:    80.84  79.46  77.94  76.28  74.51  72.63  70.67  68.64  66.55  64.42  62.26
1:    62.16  66.26  69.18  70.59  70.35  68.48  65.24  60.98  56.11  50.98  45.88

From the publicly available data for this evaluation (DUC 2001), we compute the values of R2 for the 15 data points available (corresponding to the 15 participating systems). In Tables 3-4 we present the values of the coefficient of determination R2 for the family of metrics AEv(α,N), when considering the coverage scores from the human evaluation, for summaries of 200 and 400 words, respectively (the values of R2 for summaries of 50 and 100 words show similar patterns).

Table 3: R2 for the family of metrics AEv(α,N), for coverage scores in AS evaluation (200 words)
N/α:  0      0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1
4:    67.10  66.51  65.91  65.29  64.65  64.00  63.34  62.67  61.99  61.30  60.61
3:    69.55  68.81  68.04  67.24  66.42  65.57  64.69  63.79  62.88  61.95  61.00
2:    74.43  73.29  72.06  70.74  69.35  67.87  66.33  64.71  63.03  61.30  59.51
1:    90.77  90.77  90.66  90.42  90.03  89.48  88.74  87.77  86.55  85.05  83.21

Table 4: R2 for the family of metrics AEv(α,N), for coverage scores in AS evaluation (400 words)
N/α:  0      0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1
4:    81.24  81.04  80.78  80.47  80.12  79.73  79.30  78.84  78.35  77.84  77.31
3:    84.72  84.33  83.86  83.33  82.73  82.08  81.39  80.65  79.88  79.07  78.24
2:    89.54  88.56  87.47  86.26  84.96  83.59  82.14  80.65  79.10  77.53  75.92
1:    92.28  91.11  89.70  88.07  86.24  84.22  82.05  79.74  77.30  74.77  72.15

As mentioned in Section 2, the evaluation guidelines for coverage have a low precision/recall ratio, whereas AS is an application with a low faithfulness/compactness ratio. In this case, our evaluation framework predicts that the automatic evaluation metrics that explain most of the variation in the human evaluation must have a low α and a low N. As seen in Tables 3-4, our evaluation framework correctly predicts the automatic evaluation metric that explains most of the variation in the human evaluation: metric AEv(0,1) explains 90.77% and 92.28% of the variation in the human evaluation of summaries of length 200 and 400, respectively. Since metric AEv(0,1) is almost the same as the ROUGE metric proposed by Lin and Hovy (2003) (they only differ in the stop-word list they use), our results also confirm the proposal for such metrics to be used for automatic evaluation by the Automatic Summarization community.

4.3 Question Answering Evaluation

One of the most common approaches to automatic question answering (QA) restricts the domain of questions to be handled to so-called factoid questions. Automatic evaluation of factoid QA is often straightforward, as the number of correct answers is most of the time limited, and exhaustive lists of correct answers are available. When removing the factoid constraint, however, the set of possible answers to a (complex, beyond-factoid) question becomes unfeasibly large, and consequently automatic evaluation becomes a challenge. In this section, we focus on an evaluation carried out in order to assess the performance of a QA system for answering questions from the Frequently-Asked-Question (FAQ) domain (Soricut and Brill, 2004). These are generally questions requiring a more elaborate answer than a simple factoid (e.g., questions such as: "How does a film qualify for an Academy Award?").
In order to evaluate such a system a humanperformed evaluation was performed, in which 11 versions of the QA system (various modules were implemented using various algorithms) were separately evaluated. Each version was evaluated by a human evaluator, with no reference answer available. For this evaluation 115 test questions were used, and the human evaluator was asked to assess whether the proposed answer was correct, somehow related, or wrong. A unique ranking number was achieved using a weighted average of the scored answers. (See (Soricut and Brill, 2004) for more details concerning the QA task and the evaluation procedure.) One important aspect in the evaluation procedure was devising criteria for assigning a rating to an answer which was not neither correct nor wrong. One of such cases involved so-called flooded answers: answers which contain the correct information, along with several other unrelated pieces of information. A first evaluation has been carried with a guideline package asking the human assessor to assign the rating correct to flooded answers. In Table 5, we present the values of the coefficient of determination R2 for the family of metrics AEv(α,N) for this first QA evaluation. On the guideline side, the guideline package used in this first QA evaluation has a low precision/recall ratio, because the human judge is asked to evaluate based on the content provided by a given answer (high recall), but is asked to disregard the conciseness (or lack thereof) of the answer (low precision); consequently, systems that focus on 4 67.10 66.51 65.91 65.29 64.65 64.00 63.34 62.67 61.99 61.30 60.61 3 69.55 68.81 68.04 67.24 66.42 65.57 64.69 63.79 62.88 61.95 61.00 2 74.43 73.29 72.06 70.74 69.35 67.87 66.33 64.71 63.03 61.30 59.51 1 90.77 90.77 90.66 90.42 90.03 89.48 88.74 87.77 86.55 85.05 83.21 N/α 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Table 3: R2 for the family of metrics AEv(α,N), for coverage scores in AS evaluation (200 words) 4 81.24 81.04 80.78 80.47 80.12 79.73 79.30 78.84 78.35 77.84 77.31 3 84.72 84.33 83.86 83.33 82.73 82.08 81.39 80.65 79.88 79.07 78.24 2 89.54 88.56 87.47 86.26 84.96 83.59 82.14 80.65 79.10 77.53 75.92 1 92.28 91.11 89.70 88.07 86.24 84.22 82.05 79.74 77.30 74.77 72.15 N/α 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Table 4: R2 for the family of metrics AEv(α,N), for coverage scores in AS evaluation (400 words) giving correct and concise answers are not distinguished from systems that give correct answers, but have no regard for concision. On the application side, as mentioned in Section 2, QA is arguably an application characterized by a closeto-1 faithfulness/compactness ratio. In this case, our evaluation framework predicts that the automatic evaluation metrics that explain most of the variation in the human evaluation must have a low α and a median N. As seen in Table 5, our evaluation framework correctly predicts the automatic evaluation metric that explain most of the variation in the human evaluation: metric AEv(0,2) explains most of the human variation, 91.72%. Note that other members of the AEv(α,N) family do not explain nearly as well the variation in the human evaluation. For example, the ROUGE-like metric AEv(0,1) explains only 61.61% of the human variation, while the BLEUlike metric AEv(1,4) explains a mere 17.7% of the human variation (to use such a metric in order to automatically emulate the human QA evaluation is close to performing an evaluation assigning random ratings to the output answers). 
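In all of these comparisons, R2 is the coefficient of determination of a one-variable linear fit, which for a single predictor equals the squared sample (Pearson) correlation between the automatic scores and the human scores across the participating systems. As a minimal sketch, the value can be computed as follows; the score vectors below are hypothetical and are not taken from the tables.

    # Hypothetical metric and human scores for 7 systems (illustration only).
    my @metric = (0.32, 0.36, 0.30, 0.40, 0.38, 0.34, 0.29);
    my @human  = (0.68, 0.71, 0.72, 0.79, 0.75, 0.70, 0.66);

    sub r_squared {
        my ($x, $y) = @_;
        my $n = scalar @$x;
        my ($sx, $sy, $sxx, $syy, $sxy) = (0, 0, 0, 0, 0);
        for my $i (0 .. $n - 1) {
            $sx  += $x->[$i];
            $sy  += $y->[$i];
            $sxx += $x->[$i] ** 2;
            $syy += $y->[$i] ** 2;
            $sxy += $x->[$i] * $y->[$i];
        }
        my $num = $n * $sxy - $sx * $sy;
        my $den = ($n * $sxx - $sx ** 2) * ($n * $syy - $sy ** 2);
        return $den > 0 ? ($num * $num) / $den : 0;   # squared Pearson correlation
    }

    printf "R2 = %.2f%%\n", 100 * r_squared(\@metric, \@human);

The same computation is repeated for every member of the AEv(α,N) family, and the (α, N) pair with the largest R2 is the one reported as explaining most of the variation in the human judgments.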
In order to further test the prediction power of our evaluation framework, we carried out a second QA evaluation, using a different evaluation guideline package: a flooded answer was rated only somehow-related. In Table 6, we present the values of the coefficient of determination R2 for the family of metrics AEv(α,N) for this second QA evaluation. Instead of performing this second evaluation from scratch, we actually simulated it using the following methodology: 2/3 of the output answers rated correct of the systems ranked 1st, 2nd, 3rd, and 6th by the previous human evaluation have been intentionally over-flooded using two long and out-of-context sentences, while their ratings were changed from correct to somehow-related. Such a change simulated precisely the change in the guideline package, by downgrading flooded answers. This means that, on the guideline side, the guideline package used in this second QA evaluation has a close-to-1 precision/recall ratio, because the human judge evaluates now based both on the content and the conciseness of a given answer. At the same time, the application remains unchanged, which means that on the application side we still have a close-to-1 faithfulness/compactness ratio. In this case, our evaluation framework predicts that the automatic evaluation metrics that explain most of the variation in the human evaluation must have a median α and a median N. As seen in Table 6, our evaluation framework correctly predicts the automatic evaluation metric that explain most of the variation in the human evaluation: metric AEv(0.3,2) explains most of the variation in the human evaluation, 86.26%. Also note that, while the R2 values around AEv(0.3,2) are still reasonable, evaluation metrics that are further and further away from it have increasingly lower R2 values, meaning that they are more and more unreliable for this task. The high correlation of metric AEv(0.3,2) with human judgment, however, suggests that such a metric is a good candidate for performing automatic evaluation of QA systems that go beyond answering factoid questions. 5 Conclusions In this paper, we propose a unified framework for automatic evaluation based on N-gram cooccurrence statistics, for NLP applications for which a correct answer is usually an unfeasibly large set (e.g., Machine Translation, Paraphrasing, Question Answering, Summarization, etc.). 
The success of BLEU in doing automatic evaluation of machine translation output has often led researchers to blindly try to use this metric for evaluation tasks for which it was more or less 4 63.40 57.62 51.86 46.26 40.96 36.02 31.51 27.43 23.78 20.54 17.70 3 81.39 76.38 70.76 64.76 58.61 52.51 46.63 41.09 35.97 31.33 27.15 2 91.72 89.21 85.54 80.78 75.14 68.87 62.25 55.56 49.04 42.88 37.20 1 61.61 58.83 55.25 51.04 46.39 41.55 36.74 32.12 27.85 23.97 20.54 N/α 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Table 5: R2 for the family of metrics AEv(α,N), for correctness scores, first QA evaluation 4 79.94 79.18 75.80 70.63 64.58 58.35 52.39 46.95 42.11 37.87 34.19 3 76.15 80.44 81.19 78.45 73.07 66.27 59.11 52.26 46.08 40.68 36.04 2 67.76 77.48 84.34 86.26 82.75 75.24 65.94 56.65 48.32 41.25 35.42 1 56.55 60.81 59.60 53.56 45.38 37.40 30.68 25.36 21.26 18.12 15.69 N/α 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Table 6: R2 for the family of metrics AEv(α,N), for correctness scores, second QA evaluation appropriate (see, e.g., the paper of Lin and Hovy (2003), in which the authors start with the assumption that BLEU might work for summarization evaluation, and discover after several trials a better candidate). Our unifying framework facilitates the understanding of when various automatic evaluation metrics are able to closely approximate human evaluations for various applications. Given an application app and an evaluation guideline package eval, the faithfulness/compactness ratio of the application and the precision/recall ratio of the evaluation guidelines determine a restricted area in the evaluation plane in Figure 1 which best characterizes the (app, eval) pair. We have empirically demonstrated that the metrics from the AEv(α,N) family that best approximate human judgment are those that have the α and N parameters in the determined restricted area. To our knowledge, this is the first proposal regarding automatic evaluation in which the automatic evaluation metrics are able to account for the variation in human judgment due to specific evaluation guidelines. References DUC. 2001. The Document Understanding Conference. http://duc.nist.gov. C.Y. Lin and E. H. Hovy. 2003. Automatic Evaluation of Summaries Using N-gram CoOccurrence Statistics. In Proceedings of the HLT/NAACL 2003: Main Conference, 150-156. K. Papineni, S. Roukos, T. Ward, and W.J. Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the ACL 2002, 311-318. M. F. Porter. 1980. An algorithm for Suffix Stripping. Program, 14: 130-137. F. J. Och. 2003. Minimum Error Rate Training for Statistical Machine Translation. In Proceedings of the ACL 2003, 160-167. R. Soricut and E. Brill. 2004. Automatic Question Answering: Beyond the Factoid. In Proceedings of the HLT/NAACL 2004: Main Conference, 5764. TIDES. 2002. The Translingual Information Detection, Extraction, and Summarization programme. http://tides.nist.gov. C. J. van Rijsbergen. 1979. Information Retrieval. London: Butterworths. Second Edition.
Extending the BLEU MT Evaluation Method with Frequency Weightings

Bogdan Babych
Centre for Translation Studies
University of Leeds
Leeds, LS2 9JT, UK
[email protected]

Anthony Hartley
Centre for Translation Studies
University of Leeds
Leeds, LS2 9JT, UK
[email protected]

Abstract

We present the results of an experiment on extending the automatic method of Machine Translation evaluation BLEU with statistical weights for lexical items, such as tf.idf scores. We show that this extension gives additional information about evaluated texts; in particular, it allows us to measure translation Adequacy, which, for statistical MT systems, is often overestimated by the baseline BLEU method. The proposed model uses a single human reference translation, which increases the usability of the proposed method for practical purposes. The model suggests a linguistic interpretation which relates frequency weights and human intuition about translation Adequacy and Fluency.

1. Introduction

Automatic methods for evaluating different aspects of MT quality – such as Adequacy, Fluency and Informativeness – provide an alternative to the expensive and time-consuming process of human MT evaluation. They are intended to yield scores that correlate with human judgments of translation quality and enable systems (machine or human) to be ranked on this basis. Several such automatic methods have been proposed in recent years. Some of them use human reference translations, e.g., the BLEU method (Papineni et al., 2002), which is based on comparison of N-gram models in MT output and in a set of human reference translations.

However, a serious problem for the BLEU method is the lack of a model for the relative importance of matched and mismatched items. Words in text usually carry an unequal informational load, and as a result are of differing importance for translation. It is reasonable to expect that the choice of the right translation equivalents for certain key items, such as expressions denoting principal events, event participants and relations in a text, is more important in the eyes of human evaluators than the choice of function words and of a syntactic perspective for sentences. Accurate rendering of these key items by an MT system boosts the quality of translation. Therefore, at least for evaluation of translation Adequacy (Fidelity), the proper choice of translation equivalents for important pieces of information should count more than the choice of words which are used for structural purposes and without a clear translation equivalent in the source text. (The latter may be more important for Fluency evaluation.)

The problem of the different significance of N-gram matches is related to the issue of legitimate variation in human translations, when certain words are less stable than others across independently produced human translations. BLEU accounts for legitimate translation variation by using a set of several human reference translations, which are believed to be representative of several equally acceptable ways of translating any source segment. This is motivated by the need not to penalise deviations from the set of N-grams in a single reference, although the requirement of multiple human references makes automatic evaluation more expensive. However, the "significance" problem is not directly addressed by the BLEU method. On the one hand, the matched items that are present in several human references receive the same weights as items found in just one of the references. 
On the other hand the model of legitimate translation variation cannot fully accommodate the issue of varying degrees of “salience” for matched lexical items, since alternative synonymic translation equivalents may also be highly significant for an adequate translation from the human perspective (Babych and Hartley, 2004). Therefore it is reasonable to suggest that introduction of a model which approximates intuitions about the significance of the matched N-grams will improve the correlation between automatically computed MT evaluation scores and human evaluation scores for translation Adequacy. In this paper we present the result of an experiment on augmenting BLEU N-gram comparison with statistical weight coefficients which capture a word’s salience within a given document: the standard tf.idf measure used in the vector-space model for Information Retrieval (Salton and Leck, 1968) and the S-score proposed for evaluating MT output corpora for the purposes of Information Extraction (Babych et al., 2003). Both scores are computed for each term in each of the 100 human reference translations from French into English available in DARPA-94 MT evaluation corpus (White et al., 1994). The proposed weighted N-gram model for MT evaluation is tested on a set of translations by four different MT systems available in the DARPA corpus, and is compared with the results of the baseline BLEU method with respect to their correlation with human evaluation scores. The scores produced by the N-gram model with tf.idf and S-Score weights are shown to be consistent with baseline BLEU evaluation results for Fluency and outperform the BLEU scores for Adequacy (where the correlation for the S-score weighting is higher). We also show that the weighted model may still be reliably used if there is only one human reference translation for an evaluated text. Besides saving cost, the ability to dependably work with a single human translation has an additional advantage: it is now possible to create Recall-based evaluation measures for MT, which has been problematic for evaluation with multiple reference translations, since only one of the choices from the reference set is used in translation (Papineni et al. 2002:314). Notably, Recall of weighted N-grams is found to be a good estimation of human judgements about translation Adequacy. Using weighted N-grams is essential for predicting Adequacy, since correlation of Recall for non-weighted N-grams is much lower. It is possible that other automatic methods which use human translations as a reference may also benefit from an introduction of an explicit model for term significance, since so far these methods also implicitly assume that all words are equally important in human translation, and use all of them, e.g., for measuring edit distances (Akiba et al, 2001; 2003). The weighted N-gram model has been implemented as an MT evaluation toolkit (which includes a Perl script, example files and documentation). It computes evaluation scores with tf.idf and S-score weights for translation Adequacy and Fluency. The toolkit is available at http://www.comp.leeds.ac.uk/bogdan/evalMT.html 2. Set-up of the experiment The experiment used French–English translations available in the DARPA-94 MT evaluation corpus. 
The corpus contains 100 French news texts (each text is about 350 words long) translated into English by 5 different MT systems: "Systran", "Reverso", "Globalink", "Metal", "Candide" and scored by human evaluators; there are no human scores for "Reverso", which was added to the corpus at a later stage. The corpus also contains 2 independent human translations of each text. Human evaluation scores are available for each of the 400 texts translated by the 4 MT systems for 3 parameters of translation quality: "Adequacy", "Fluency" and "Informativeness". The Adequacy (Fidelity) scores are given on a 5-point scale by comparing MT with a human reference translation. The Adequacy parameter captures how much of the original content of a text is conveyed, regardless of how grammatically imperfect the output might be. The Fluency scores (also given on a 5-point scale) determine the intelligibility of MT without reference to the source text, i.e., how grammatical and stylistically natural the translation appears to be. The Informativeness scores (which we did not use for our experiment) determine whether there is enough information in MT output to enable evaluators to answer multiple-choice questions on its content (White, 2003:237).

In the first stage of the experiment, each of the two sets of human translations was used to compute tf.idf and S-scores for each word in each of the 100 texts. The tf.idf score was calculated as:

    tf.idf(i,j) = (1 + log(tf(i,j))) * log(N / df(i)), if tf(i,j) >= 1;

where:
– tf(i,j) is the number of occurrences of the word wi in the document dj;
– df(i) is the number of documents in the corpus where the word wi occurs;
– N is the total number of documents in the corpus.

The S-score was calculated as:

    S(i,j) = log( ((Pdoc(i,j) – Pcorp-doc(i)) * (N – df(i)) / N) / Pcorp(i) )

where:
– Pdoc(i,j) is the relative frequency of the word in the text ("relative frequency" is the number of tokens of this word-type divided by the total number of tokens);
– Pcorp-doc(i) is the relative frequency of the same word in the rest of the corpus, without this text;
– (N – df(i)) / N is the proportion of texts in the corpus where this word does not occur (number of texts where it is not found, divided by number of texts in the corpus);
– Pcorp(i) is the relative frequency of the word in the whole corpus, including this particular text.

In the second stage we carried out N-gram based MT evaluation, measuring Precision and Recall of N-grams in MT output using a single human reference translation. N-gram counts were adjusted with the tf.idf weights and S-scores for every matched word. The following procedure was used to integrate the S-scores / tf.idf scores for a lexical item into N-gram counts. For every word in a given text which received an S-score and tf.idf score on the basis of the human reference corpus, all counts for the N-grams containing this word are increased by the value of the respective score (not just by 1, as in the baseline BLEU approach). The original matches used for BLEU and the weighted matches are both calculated. The following changes have been made to the Perl script of the BLEU tool: apart from the operator which increases counts for every matched N-gram $ngr by 1, i.e.:

    $ngr .= $words[$i+$j] . " ";
    $$hashNgr{$ngr}++;

the following code was introduced:

    [...]
    $WORD = $words[$i+$j];
    $WEIGHT = 0;
    if (exists $WordWeight{$TxtN}{$WORD}) {
        $WEIGHT = $WordWeight{$TxtN}{$WORD};
    }
    $ngr .= $words[$i+$j] . " ";
    $$hashNgr{$ngr}++;
    $$hashNgrWEIGHTED{$ngr} += $WEIGHT;
    [...]

where the hash data structure

    $WordWeight{$TxtN}{$WORD} = $WEIGHT;

represents the table of tf.idf scores or S-scores for words in every text in the corpus. The weighted N-gram evaluation scores of Precision, Recall and F-measure may be produced for a segment, for a text or for a corpus of translations generated by an MT system.

In the third stage of the experiment the weighted Precision and Recall scores were tested for correlation with human scores for the same texts and compared to the results of similar tests for standard BLEU evaluation. Finally, we addressed the question of whether the proposed MT evaluation method allows us to use a single human reference translation reliably. In order to assess the stability of the weighted evaluation scores with a single reference, two runs of the experiment were carried out. The first run used the "Reference" human translation, while the second run used the "Expert" human translation (each time a single reference translation was used). The scores for both runs were compared using a standard deviation measure.

3. The results of the MT evaluation with frequency weights

With respect to evaluating MT systems, the correlation for the weighted N-gram model was found to be stronger for both Adequacy and Fluency, the improvement being highest for Adequacy. These results are due to the fact that the weighted N-gram model gives much more accurate predictions about the statistical MT system "Candide", whereas the standard BLEU approach tends to over-estimate its performance for translation Adequacy. Table 1 presents the baseline results for non-weighted Precision, Recall and F-score. It shows the following figures:
– Human evaluation scores for Adequacy and Fluency (the mean scores for all texts produced by each MT system);
– BLEU scores produced using 2 human reference translations and the default script settings (N-gram size = 4);
– Precision, Recall and F-score for the weighted N-gram model produced with 1 human reference translation and N-gram size = 4;
– Pearson's correlation coefficient r for Precision, Recall and F-score correlated with human scores for Adequacy and Fluency, reported as r(2) (with 2 degrees of freedom) for the sets which include scores for the 4 MT systems.
The first score in each pair shows the result of the first run of the experiment, which used the "Reference" human translation; the second score shows the result of the second run, with the "Expert" human translation.

    System                      [ade]/[flu]    BLEU [1&2]   Prec. 1/2         Recall 1/2        F-score 1/2
    CANDIDE                     0.677 / 0.455  0.3561       0.4068 / 0.4012   0.3806 / 0.3790   0.3933 / 0.3898
    GLOBALINK                   0.710 / 0.381  0.3199       0.3429 / 0.3414   0.3465 / 0.3484   0.3447 / 0.3449
    MS                          0.718 / 0.382  0.3003       0.3289 / 0.3286   0.3650 / 0.3682   0.3460 / 0.3473
    REVERSO                     NA / NA        0.3823       0.3948 / 0.3923   0.4012 / 0.4025   0.3980 / 0.3973
    SYSTRAN                     0.789 / 0.508  0.4002       0.4029 / 0.3981   0.4129 / 0.4118   0.4078 / 0.4049
    Corr r(2) with [ade] – MT                  0.5918       0.1809 / 0.1871   0.6691 / 0.6988   0.4063 / 0.4270
    Corr r(2) with [flu] – MT                  0.9807       0.9096 / 0.9124   0.9540 / 0.9353   0.9836 / 0.9869
    Table 1. Baseline non-weighted scores.

Table 2 summarises the evaluation scores for BLEU as compared to tf.idf weighted scores, and Table 3 summarises the same scores as compared to S-score weighted evaluation.

    System                      [ade]/[flu]    BLEU [1&2]   Prec. (w) 1/2     Recall (w) 1/2    F-score (w) 1/2
    CANDIDE                     0.677 / 0.455  0.3561       0.5242 / 0.5176   0.3094 / 0.3051   0.3892 / 0.3839
    GLOBALINK                   0.710 / 0.381  0.3199       0.4905 / 0.4890   0.2919 / 0.2911   0.3660 / 0.3650
    MS                          0.718 / 0.382  0.3003       0.4919 / 0.4902   0.3083 / 0.3100   0.3791 / 0.3798
    REVERSO                     NA / NA        0.3823       0.5336 / 0.5342   0.3400 / 0.3413   0.4154 / 0.4165
    SYSTRAN                     0.789 / 0.508  0.4002       0.5442 / 0.5375   0.3521 / 0.3491   0.4276 / 0.4233
    Corr r(2) with [ade] – MT                  0.5918       0.5248 / 0.5561   0.8354 / 0.8667   0.7691 / 0.8119
    Corr r(2) with [flu] – MT                  0.9807       0.9987 / 0.9998   0.8849 / 0.8350   0.9408 / 0.9070
    Table 2. BLEU vs tf.idf weighted scores.

    System                      [ade]/[flu]    BLEU [1&2]   Prec. (w) 1/2     Recall (w) 1/2    F-score (w) 1/2
    CANDIDE                     0.677 / 0.455  0.3561       0.5034 / 0.4982   0.2553 / 0.2554   0.3388 / 0.3377
    GLOBALINK                   0.710 / 0.381  0.3199       0.4677 / 0.4672   0.2464 / 0.2493   0.3228 / 0.3252
    MS                          0.718 / 0.382  0.3003       0.4766 / 0.4793   0.2635 / 0.2679   0.3394 / 0.3437
    REVERSO                     NA / NA        0.3823       0.5204 / 0.5214   0.2930 / 0.2967   0.3749 / 0.3782
    SYSTRAN                     0.789 / 0.508  0.4002       0.5314 / 0.5218   0.3034 / 0.3022   0.3863 / 0.3828
    Corr r(2) with [ade] – MT                  0.5918       0.6055 / 0.6137   0.9069 / 0.9215   0.8574 / 0.8792
    Corr r(2) with [flu] – MT                  0.9807       0.9912 / 0.9769   0.8022 / 0.7499   0.8715 / 0.8247
    Table 3. BLEU vs S-score weights.

It can be seen from the tables that there is a strong positive correlation between the baseline BLEU scores and human scores for Fluency: r(2)=0.9807, p<0.05. However, the correlation with Adequacy is much weaker and is not statistically significant: r(2)=0.5918, p>0.05. The most serious problem for BLEU is predicting scores for the statistical MT system Candide, which was judged to produce relatively fluent, but largely inadequate translation. For other MT systems (developed with the knowledge-based MT architecture) the scores for Adequacy and Fluency are consistent with each other: more fluent translations are also more adequate. BLEU scores go in line with Candide's Fluency scores, but do not account for its Adequacy scores. When Candide is excluded from the evaluation set, the correlation r goes up, but it is still lower than the correlation for Fluency and remains statistically insignificant: r(1)=0.9608, p > 0.05. Therefore, the baseline BLEU approach fails to consistently predict scores for Adequacy.

Correlation figures between non-weighted N-gram counts and human scores are similar to the results for BLEU: the highest and statistically significant correlation is between the F-score and Fluency: r(2)=0.9836, p<0.05, r(2)=0.9869, p<0.01, and there is a somewhat smaller but statistically significant correlation with Precision. This confirms the need to use modified Precision in the BLEU method, which in a certain respect also integrates Recall. The proposed weighted N-gram model outperforms BLEU and non-weighted N-gram evaluation in its ability to predict Adequacy scores: weighted Recall scores have a much stronger correlation with Adequacy (which for MT-only evaluation is still statistically insignificant at the level p<0.05, but comes very close to that point: t=3.729 and t=4.108; the required value for p<0.05 is t=4.303). Correlation figures for S-score-based weights are higher than for tf.idf weights (S-score: r(2)=0.9069, p>0.05; r(2)=0.9215, p>0.05; tf.idf score: r(2)=0.8354, p>0.05; r(2)=0.8667, p>0.05). 
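Before turning to a worked example, the weighted unigram scores can be sketched as follows. This is only an illustration of the weighting idea, assuming clipped unigram matches as in BLEU and a weight of zero for words that received no tf.idf or S-score; the actual toolkit operates on N-grams up to length 4 and its normalisation may differ in detail.

    sub weighted_unigram_pr {
        my ($cand, $ref, $weight) = @_;      # token array refs and a hash ref of word weights
        my (%ref_count, %used);
        $ref_count{$_}++ for @$ref;
        my ($match_w, $cand_w, $ref_w) = (0, 0, 0);
        for my $w (@$cand) {
            my $wt = $weight->{$w} || 0;
            $cand_w += $wt;
            if (($used{$w} || 0) < ($ref_count{$w} || 0)) {   # clipped match
                $used{$w}++;
                $match_w += $wt;
            }
        }
        $ref_w += ($weight->{$_} || 0) for @$ref;
        my $precision = $cand_w > 0 ? $match_w / $cand_w : 0;
        my $recall    = $ref_w  > 0 ? $match_w / $ref_w  : 0;
        return ($precision, $recall);
    }

With a weight of 1 for every word, the function reduces to ordinary unigram Precision and Recall; with tf.idf or S-score weights it yields the weighted scores discussed above.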
The improvement in the accuracy of evaluation for the weighted N-gram model can be illustrated by the following example of translating the French sentence:

ORI-French: Les trente-huit chefs d'entreprise mis en examen dans le dossier ont déjà fait l'objet d'auditions, mais trois d'entre eux ont été confrontés, mercredi, dans la foulée de la confrontation "politique".

English translations of this sentence by the knowledge-based system Systran and the statistical MT system Candide have an equal number of matched unigrams (highlighted in italic), therefore conventional unigram Precision and Recall scores are the same for both systems. However, for each translation two of the matched unigrams are different (underlined) and receive different frequency weights (shown in brackets):

MT "Systran": The thirty-eight heads (tf.idf=4.605; S=4.614) of undertaking put in examination in the file already were the subject of hearings, but three of them were confronted, Wednesday, in the tread of "political" confrontation (tf.idf=5.937; S=3.890).

Human translation "Expert": The thirty-eight heads of companies questioned in the case had already been heard, but three of them were brought together Wednesday following the "political" confrontation.

MT "Candide": The thirty-eight counts of company put into consideration in the case (tf.idf=3.719; S=2.199) already had (tf.idf=0.562; S=0.000) the object of hearings, but three of them were checked, Wednesday, in the path of confrontal "political."

(In the human translation the unigrams matched by the Systran output sentence are in italic, those matched by the Candide sentence are in bold.)

It can be seen from this example that the unigrams matched by Systran have higher term frequency weights (both tf.idf and S-scores):

    heads (tf.idf=4.605; S=4.614)
    confrontation (tf.idf=5.937; S=3.890)

The output sentence of Candide instead matched less salient unigrams:

    case (tf.idf=3.719; S=2.199)
    had (tf.idf=0.562; S=0.000)

Therefore, for the given sentence, weighted unigram Recall (i.e., the ability to avoid under-generation of salient unigrams) is higher for Systran than for Candide (Table 4):

                  Systran   Candide
    R             0.6538    0.6538
    R * tf.idf    0.5332    0.4211
    R * S-score   0.5517    0.3697
    P             0.5484    0.5484
    P * tf.idf    0.7402    0.9277
    P * S-score   0.7166    0.9573
    Table 4. Recall, Precision, and weighted scores

Weighted Recall scores capture the intuition that the translation generated by Systran is more adequate than the one generated by Candide, since it preserves more important pieces of information. On the other hand, weighted Precision scores are higher for Candide. This is due to the fact that Systran over-generates (does not match in the human translation) much more "exotic", uncommon words, which on average have higher cumulative salience scores, e.g., undertaking, examination, confronted, tread, vs. the corresponding words "over-generated" by Candide: company, consideration, checked, path. In some respects, the higher weighted Precision can be interpreted as higher Fluency of Candide's output sentence, which intuitively is perceived as sounding more natural (although not making much sense). At the level of corpus statistics the weighted Recall scores go in line with Adequacy, while the weighted Precision scores (as well as the Precision-based BLEU scores) go in line with Fluency, which confirms this interpretation of the weighted Precision and Recall scores in the example above. On the other hand, Precision-based scores and non-weighted Recall scores fail to capture Adequacy. 
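The term weights used in the example above can be estimated from the human reference corpus along the lines of the formulas in Section 2. The sketch below is an approximation rather than the released toolkit: it assumes whitespace-tokenised reference texts, natural logarithms, a corpus of more than one text, and it floors the S-score at zero whenever the argument of the logarithm is not positive.

    sub term_weights {
        my ($docs, $j) = @_;    # $docs: ref to an array of token-array refs; $j: index of the current text
        my $N = scalar @$docs;
        my (%tf, %df, %corp_count, $corp_total);
        for my $d (0 .. $N - 1) {
            my %seen;
            for my $w (@{ $docs->[$d] }) {
                $tf{$w}++ if $d == $j;
                $corp_count{$w}++;
                $corp_total++;
                $df{$w}++ unless $seen{$w}++;
            }
        }
        my $doc_total = scalar @{ $docs->[$j] };
        my (%tfidf, %sscore);
        for my $w (keys %tf) {
            $tfidf{$w} = (1 + log($tf{$w})) * log($N / $df{$w});
            my $p_doc  = $tf{$w} / $doc_total;
            my $p_rest = ($corp_count{$w} - $tf{$w}) / ($corp_total - $doc_total);
            my $p_corp = $corp_count{$w} / $corp_total;
            my $arg    = ($p_doc - $p_rest) * ($N - $df{$w}) / $N / $p_corp;
            $sscore{$w} = $arg > 0 ? log($arg) : 0;   # floor non-positive arguments (assumption)
        }
        return (\%tfidf, \%sscore);
    }

These per-word weights are then passed to the weighted N-gram counts through the $WordWeight table shown in Section 2.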
The improvement in correlation for weighted Recall scores with Adequacy is achieved by reducing overestimation for the Candide system, moving its scores closer to human judgements about its quality in this respect. However, this is not completely achieved: although in terms of Recall weighted by the S-scores Candide is correctly ranked below MS (and not ahead of it, as with the BLEU scores), it is still slightly ahead of Globalink, contrary to human evaluation results. For both methods – BLEU and the Weighted N-gram evaluation – Adequacy is found to be harder to predict than Fluency. This is due to the fact that there is no good linguistic model of translation adequacy which can be easily formalised. The introduction of S-score weights may be a useful step towards developing such a model, since correlation scores with Adequacy are much better for the Weighted N-gram approach than for BLEU. Also from the linguistic point of view, S-score weights and N-grams may only be reasonably good approximations of Adequacy, which involves a wide range of factors, like syntactic and semantic issues that cannot be captured by Ngram matches and require a thesaurus and other knowledge-based extensions. Accurate formal models of translation variation may also be useful for improving automatic evaluation of Adequacy. The proposed evaluation method also preserves the ability of BLEU to consistently predict scores for Fluency: Precision weighted by tf.idf scores has the strongest positive correlation with this aspect of MT quality, which is slightly better than the values for BLEU; (S-score: r(2)= 0.9912, p<0.01; r(2)= 0.9769, p<0.05; tf.idf score: r(2)= 0.9987, p<0.001; r(2)= 0.9998, p<0.001). The results suggest that weighted Precision gives a good approximation of Fluency. Similar results with non-weighted approach are only achieved if some aspect of Recall is integrated into the evaluation metric (either as modified precision, as in BLEU, or as an aspect of the Fscore). Weighted Recall (especially with Sscores) gives a reasonably good approximation of Adequacy. On the one hand using 1 human reference with uniform results is essential for our methodology, since it means that there is no more “trouble with Recall” (Papineni et al., 2002:314) – a system’s ability to avoid under-generation of N-grams can now be reliably measured. On the other hand, using a single human reference translation instead of multiple translations will certainly increase the usability of N-gram based MT evaluation tools. The fact that non-weighted F-scores also have high correlation with Fluency suggests a new linguistic interpretation of the nature of these two quality criteria: it is intuitively plausible that Fluency subsumes, i.e. presupposes Adequacy (similarly to the way the F-score subsumes Recall, which among all other scores gives the best correlation with Adequacy). The non-weighted Fscore correlates more strongly with Fluency than either of its components: Precision and Recall; similarly Adequacy might make a contribution to Fluency together with some other factors. It is conceivable that people need adequate translations (or at least translations that make sense) in order to be able to make judgments about naturalness, or Fluency. Being able to make some sense out of a text could be the major ground for judging Adequacy: sensible mistranslations in MT are relatively rare events. 
This may be the consequence of a principle similar to the “second law of thermodynamics” applied to text structure, – in practice it is much rarer to some alternative sense to be created (even if the number of possible error types could be significant), than to destroy the existing sense in translation, so the majority of inadequate translations are just nonsense. However, in contrast to human translation, fluent mistranslations in MT are even rarer than disfluent ones, according to the same principle. A real difference in scores is made by segments which make sense and may or may not be fluent, and things which do not make any sense and about which it is hard to tell whether they are fluent. This suggestion may be empirically tested: if Adequacy is a necessary precondition for Fluency, there should be a greater inter-annotator disagreement in Fluency scores on texts or segments which have lower Adequacy scores. This will be a topic of future research. We note that for the DARPA corpus the correlation scores presented are highest if the evaluation unit is an entire corpus of translations produced by an MT system, and for text-level evaluation, correlation is much lower. A similar observation was made in (Papineni et al., 2002: 313). This may be due to the fact that human judges are less consistent, especially for puzzling segments that do not fit the scoring guidelines, like nonsense segments for which it is hard to decide whether they are fluent or even adequate. However, this randomness is leveled out if the evaluation unit increases in size – from the text level to the corpus level. Automatic evaluation methods such as BLEU (Papineni et al., 2002), RED (Akiba et al., 2001), or the weighted N-gram model proposed here may be more consistent in judging quality as compared to human evaluators, but human judgments remain the only criteria for metaevaluating the automatic methods. 4. Stability of weighted evaluation scores In this section we investigate how reliable is the use of a single human reference translation. The stability of the scores is central to the issue of computing Recall and reducing the cost of automatic evaluation. We also would like to compare the stability of our results with the stability of the baseline non-weighted N-gram model using a single reference. In this stage of the experiment we measured the changes that occur for the scores of MT systems if an alternative reference translation is used – both for the baseline N-gram counts and for the weighted N-gram model. Standard deviation was computed for each pair of evaluation scores produced by the two runs of the system with alternative human references. An average of these standard deviations is the measure of stability for a given score. The results of these calculations are presented in Table 5. systems StDevbasln StDevtf.idf StDevS-score P candide 0.004 0.0047 0.0037 globalink 0.0011 0.0011 0.0004 ms 0.0002 0.0012 0.0019 reverso 0.0018 0.0004 0.0007 systran 0.0034 0.0047 0.0068 AVE SDEV 0.0021 0.0024 0.0027 R candide 0.0011 0.003 0.0001 globalink 0.0013 0.0006 0.0021 ms 0.0023 0.0012 0.0031 reverso 0.0009 0.0009 0.0026 systran 0.0008 0.0021 0.0008 AVE SDEV 0.0013 0.0016 0.0017 F candide 0.0025 0.0037 0.0008 globalink 0.0001 0.0007 0.0017 ms 0.0009 0.0005 0.003 reverso 0.0005 0.0008 0.0023 systran 0.0021 0.003 0.0025 AVE SDEV 0.0012 0.0018 0.0021 Table 5. 
Stability of scores Standard deviation for weighted scores is generally slightly higher, but both the baseline and the weighted N-gram approaches give relatively stable results: the average standard deviation was not greater than 0.0027, which means that both will produce reliable figures with just a single human reference translation (although interpretation of the score with a single reference should be different than with multiple references). Somewhat higher standard deviation figures for the weighted N-gram model confirm the suggestion that a word’s importance for translation cannot be straightforwardly derived from the model of the legitimate translation variation implemented in BLEU and needs the salience weights, such as tf.idf or S-scores. 5. Conclusion and future work The results for weighted N-gram models have a significantly higher correlation with human intuitive judgements about translation Adequacy and Fluency than the baseline N-gram evaluation measures which are used in the BLEU MT evaluation toolkit. This shows that they are a promising direction of research. Future work will apply our approach to evaluating MT into languages other than English, extending the experiment to a larger number of MT systems built on different architectures and to larger corpora. However, the results of the experiment may also have implications for MT development: significance weights may be used to rank the relative “importance” of translation equivalents. At present all MT architectures (knowledge-based, example-based, and statistical) treat all translation equivalents equally, so MT systems cannot dynamically prioritise rule applications, and translations of the central concepts in texts are often lost among excessively literal translations of less important concepts and function words. For example, for statistical MT significance weights of lexical items may indicate which words have to be introduced into the target text using the translation model for source and target languages, and which need to be brought there by the language model for the target corpora. Similar ideas may be useful for the Example-based and Rule-based MT architectures. The general idea is that different pieces of information expressed in the source text are not equally important for translation: MT systems that have no means for prioritising this information often introduce excessive information noise into the target text by literally translating structural information, etymology of proper names, collocations that are unacceptable in the target language, etc. This information noise often obscures important translation equivalents and prevents the users from focusing on the relevant bits. MT quality may benefit from filtering out this excessive information as much as from frequently recommended extension of knowledge sources for MT systems. The significance weights may schedule the priority for retrieving translation equivalents and motivate application of compensation strategies in translation, e.g., adding or deleting implicitly inferable information in the target text, using non-literal strategies, such as transposition or modulation (Vinay and Darbelnet, 1995). Such weights may allow MT systems to make an approximate distinction between salient words which require proper translation equivalents and structural material both in the source and in the target texts. Exploring applicability of this idea to various MT architectures is another direction for future research. 
Acknowledgments We are very grateful for the insightful comments of the three anonymous reviewers. References Akiba, Y., K. Imamura and E. Sumita. 2001. Using multiple edit distances to automatically rank machine translation output. In Proc. MT Summit VIII. p. 15– 20. Akiba, Y., E. Sumita, H. Nakaiwa, S. Yamamoto and H.G. Okuno. 2003. Experimental Comparison of MT Evaluation Methods: RED vs. BLEU. In Proc. MT Summit IX, URL: http://www.amtaweb.org/summit/ MTSummit/ FinalPapers/55-Akiba-final.pdf. Babych, B., A. Hartley and E. Atwell. 2003. Statistical Modelling of MT output corpora for Information Extraction. In: Proceedings of the Corpus Linguistics 2003 conference, Lancaster University (UK), 28 - 31 March 2003, pp. 62-70. Babych, B. and A. Hartley. 2004. Modelling legitimate translation variation for automatic evaluation of MT quality. In: Proceedings of LREC 2004 (forthcoming). Papineni, K., S. Roukos, T. Ward, W.-J. Zhu. 2002 BLEU: a method for automatic evaluation of machine translation. Proceedings of the 40th Annual Meeting of the Association for the Computational Linguistics (ACL), Philadelphia, July 2002, pp. 311-318. Salton, G. and M.E. Lesk. 1968. Computer evaluation of indexing and text processing. Journal of the ACM, 15(1) , 8-36. Vinay, J.P. and J.Darbelnet. 1995. Comparative stylistics of French and English : a methodology for translation / translated and edited by Juan C. Sager, M.-J. Hamel. J. Benjamins Pub., Amsterdam, Philadelphia. White, J., T. O’Connell and F. O’Mara. 1994. The ARPA MT evaluation methodologies: evolution, lessons and future approaches. Proceedings of the 1st Conference of the Association for Machine Translation in the Americas. Columbia, MD, October 1994. pp. 193-205. White, J. 2003. How to evaluate machine translation. In: H. Somers. (Ed.) Computers and Translation: a translator’s guide. Ed. J. Benjamins B.V., Amsterdam, Philadelphia, pp. 211-244.
Statistical Modeling for Unit Selection in Speech Synthesis Cyril Allauzen and Mehryar Mohri and Michael Riley∗ AT&T Labs – Research 180 Park Avenue, Florham Park, NJ 07932, USA {allauzen, mohri, riley}@research.att.com Abstract Traditional concatenative speech synthesis systems use a number of heuristics to define the target and concatenation costs, essential for the design of the unit selection component. In contrast to these approaches, we introduce a general statistical modeling framework for unit selection inspired by automatic speech recognition. Given appropriate data, techniques based on that framework can result in a more accurate unit selection, thereby improving the general quality of a speech synthesizer. They can also lead to a more modular and a substantially more efficient system. We present a new unit selection system based on statistical modeling. To overcome the original absence of data, we use an existing high-quality unit selection system to generate a corpus of unit sequences. We show that the concatenation cost can be accurately estimated from this corpus using a statistical n-gram language model over units. We used weighted automata and transducers for the representation of the components of the system and designed a new and more efficient composition algorithm making use of string potentials for their combination. The resulting statistical unit selection is shown to be about 2.6 times faster than the last release of the AT&T Natural Voices Product while preserving the same quality, and offers much flexibility for the use and integration of new and more complex components. 1 Motivation A concatenative speech synthesis system (Hunt and Black, 1996; Beutnagel et al., 1999a) consists of three components. The first component, the textanalysis frontend, takes text as input and outputs a sequence of feature vectors that characterize the acoustic signal to synthesize. The first element of each of these vectors is the predicted phone or halfphone; other elements are features such as the phonetic context, acoustic features (e.g., pitch, duration), or prosodic features. ∗ This author’s new address is: Google, Inc, 1440 Broadway, New York, NY 10018, [email protected]. The second component, unit selection, determines in a set of recorded acoustic units corresponding to phones (Hunt and Black, 1996) or halfphones (Beutnagel et al., 1999a) the sequence of units that is the closest to the sequence of feature vectors predicted by the text analysis frontend. The final component produces an acoustic signal from the unit sequence chosen by unit selection using simple concatenation or other methods such as PSOLA (Moulines and Charpentier, 1990) and HNM (Stylianou et al., 1997). Unit selection is performed by defining two cost functions: the target cost that estimates how the features of a recorded unit match the specified feature vector and the concatenation cost that estimates how well two units will be perceived to match when appended. Unit selection then consists of finding, given a specified sequence of feature vectors, the unit sequence that minimizes the sum of these two costs. The target and concatenation cost functions have traditionally been formed from a variety of heuristic or ad hoc quality measures based on features of the audio and text. In this paper, we follow a different approach: our goal is a system based purely on statistical modeling. The starting point is to assume that we have a training corpus of utterances labeled with the appropriate unit sequences. 
Specifically, for each training utterance, we assume available a sequence of feature vectors f = f1 . . . fn and the corresponding units u = u1 . . . un that should be used to synthesize this utterance. We wish to estimate from this corpus two probability distributions, P(f|u) and P(u). Given these estimates, we can perform unit selection on a novel utterance using: u = argmax u P(u|f) (1) = argmin u (−log P(f|u) −log P(u)) (2) Equation 1 states that the most likely unit sequence is selected given the probabilistic model used. Equation 2 follows from the definition of conditional probability and that P(f) is fixed for a given utterance. The two terms appearing in Equation 2 can be viewed as the statistical counterparts of the target and concatenation costs in traditional unit selection. The statistical framework just outlined is similar to the one used in speech recognition (Jelinek, 1976). We also use several techniques that have been very successfully applied to speech recognition. For instance, in this paper, we show how −log P(u) (the concatenation cost) can be accurately estimated using a statistical n-gram language model over units. Two questions naturally arise. (a) How can we collect a training corpus for building a statistical model? Ideally, the training corpus could be human-labeled, as in speech recognition and other natural language processing tasks. But this seemed impractical given the size of the unit inventory, the number of utterances needed for good statistical estimates, and our limited resources. Instead, we chose to use a training corpus generated by an existing high-quality unit selection system, that of the AT&T Natural Voices Product. Of course, building a statistical model on that output can, at best, only match the quality of the original. But, it can serve as an exploratory trial to measure the quality of our statistical modeling. As we will see, it can also result in a synthesis system that is significantly faster and modular than the original since there are well-established algorithms for representing and optimizing statistical models of the type we will employ. To further simplify the problem, we will use the existing traditional target costs, providing statistical estimates only of the concatenation costs (−log P(u)). (b) What are the benefits of a statistical modeling approach? (1) High-quality cost functions. One issue with traditional unit selection systems is that their cost functions are the result of the following compromise: they need to be complex enough to have a perceptual meaning but simple enough to be computed efficiently. With our statistical modeling approach, the labeling phase could be performed offline by a highly accurate unit selection system, potentially slow and complex, while the run-time statistical system could still be fast. Moreover, if we had audio available for our training corpus, we could exploit that in the initial labeling phase for the design of the unit selection system. (2) Weighted finite-state transducer representation. In addition to the already mentioned synthesis speed and the opportunity of high-quality measures in the initial offline labeling phase, another benefit of this approach is that it leads to a natural representation by weighted transducers, and hence enables us to build a unit selection system using general and flexible representations and methods already in use for speech recognition, e.g., those found in the FSM (Mohri et al., 2000), GRM (Allauzen et al., 2004) and DCD (Allauzen et al., 2003) libraries. 
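As a concrete illustration, −log P(u) can be estimated by training an n-gram language model on the unit sequences of the training corpus. The following sketch uses a bigram model with simple interpolation against the unigram distribution; it is only a stand-in for the Katz backoff models actually used later in the paper, and the unit names are hypothetical.

    my (%uni, %bi, $tokens);
    my $lambda = 0.8;                      # assumed interpolation weight

    sub add_utterance {                    # record the unit sequence of one training utterance
        my @seq = ('<s>', @_, '</s>');
        for my $i (0 .. $#seq) {
            $uni{$seq[$i]}++;
            $tokens++;
            $bi{"$seq[$i-1] $seq[$i]"}++ if $i > 0;
        }
    }

    sub grammar_cost {                     # -log P(u) for a candidate unit sequence
        my @seq = ('<s>', @_, '</s>');
        my $cost = 0;
        for my $i (1 .. $#seq) {
            my $p_bi  = $uni{$seq[$i-1]} ? ($bi{"$seq[$i-1] $seq[$i]"} || 0) / $uni{$seq[$i-1]} : 0;
            my $p_uni = ($uni{$seq[$i]} || 0) / $tokens;
            my $p     = $lambda * $p_bi + (1 - $lambda) * $p_uni;
            $p = 1e-10 if $p <= 0;         # floor for unseen units
            $cost -= log($p);
        }
        return $cost;
    }

    add_utterance(qw(u1 u2 u1 u2));
    add_utterance(qw(u1 u3));
    printf "cost = %.3f\n", grammar_cost(qw(u1 u2));

In the decoder, this cost would be combined with the target cost of each candidate unit before the Viterbi search, as in Equation 2.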
Other unit selection systems based on weighted transducers were also proposed in (Yi et al., 2000; Bulyko and Ostendorf, 2001). (3) Unit selection algorithms and speed-up. We present a new unit selection system based on statistical modeling. We used weighted automata and transducers for the representation of the components of the system and designed a new and efficient composition algorithm making use of string potentials for their combination. The resulting statistical unit selection is shown to be about 2.6 times faster than the last release of the AT&T Natural Voices Product while preserving the same quality, and offers much flexibility for the use and integration of new and more complex components. 2 Unit Selection Methods 2.1 Overview of a Traditional Unit Selection System This section describes in detail the cost functions used in the AT&T Natural Voices Product that we will use as the baseline in our experimental results, see (Beutnagel et al., 1999a) for more details about this system. In this system, unit selection is based on (Hunt and Black, 1996) but using units corresponding to halfphones instead of phones. Let U be the set of recorded units. Two cost functions are defined: the target cost Ct(fi, ui) is used to estimate the mismatch between the features of the feature vector fi and the unit ui; the concatenation cost Cc(ui, uj) is used to estimate the smoothness of the acoustic signal when concatenating the units ui and uj. Given a sequence f = f1 . . . fn of feature vectors, unit selection can then be formulated as the problem of finding the sequence of units u = u1 . . . un that minimizes these two costs: u = argmin u∈Un ( n X i=1 Ct(fi, ui) + n X i=2 Cc(ui−1, ui)) In practice, not all unit sequences of a given length are considered. A preselection method such as the one proposed by (Conkie et al., 2000) is used. The computation of the target cost can be split in two parts: the context cost Cp that is the component of the target cost corresponding to the phonetic context, and the feature cost Cf that corresponds the other components of the target cost: Ct(fi, ui) = Cp(fi, ui) + Cf(fi, ui) (3) For each phonetic context ρ of length 5, a list L(ρ) of the units that are the most frequently used in the phonetic context ρ is computed. For each feature vector fi in f, the candidate units for fi are computed in the following way. Let ρi be the 5-phone context of fi in f. The context costs between fi and all the units in the preselection list of the phonetic context ρi are computed and the M units with the best context cost are selected: Ui = M-best ui∈L(ρi)(Cp(fi, ui)) The feature costs between fi and the units in Ui are then computed and the N units with the best target cost are selected: U′ i = N-best ui∈Ui (Cp(fi, ui) + Cf(fi, ui)) The unit sequence u verifying: u = argmin u∈U′ 1···U′n ( n X i=1 Ct(fi, ui) + n X i=2 Cc(ui−1, ui)) is determined using a classical Viterbi search. Thus, for each position i, the N 2 concatenation costs between the units in U′ i and U′ i+1 need to be computed. The caching method for concatenation costs proposed in (Beutnagel et al., 1999b) can be used to improve the efficiency of the system. 2.2 Statistical Modeling Approach Our statistical modeling approach was described in Section 1. As already mentioned, our general approach would consists of deriving both the target cost −log P(f|u) and the concatenation cost −log P(u) from appropriate training data using general statistical methods. 
To simplify the problem, we will use the existing target cost provided by the traditional unit selection system and concentrate on the problem of estimating the concatenation cost. We used the unit selection system presented in the previous section to generate a large corpus of more than 8M unit sequences, each unit corresponding to a unique recorded halfphone. This corpus was used to build an n-gram statistical language model using Katz backoff smoothing technique (Katz, 1987). This model provides us with a new cost function, the grammar cost Cg, defined by: Cg(uk|u1...uk−1) = −log(P(uk|u1...uk−1)) where P is the probability distribution estimated by our model. We used this new cost function to replace both the concatenation and context costs used in the traditional approach. Unit selection then consists of finding the unit sequence u such that: u = argmin u∈Un n X i=1 (Cf(fi, ui)+Cg(ui|ui−k . . . ui−1)) In this approach, rather than using a preselection method such as that of (Conkie et al., 2000), we are using the statistical language model to restrict the candidate space (see Section 4.2). 3 Representation by Weighted Finite-State Transducers An important advantage of the statistical framework we introduced for unit selection is that the resulting components can be naturally represented by weighted finite-state transducers. This casts unit selection into a familiar schema, that of a Viterbi decoder applied to a weighted transducer. 3.1 Weighted Finite-State Transducers We give a brief introduction to weighted finite-state transducers. We refer the reader to (Mohri, 2004; Mohri et al., 2000) for an extensive presentation of these devices and will use the definitions and notation introduced by these authors. A weighted finite-state transducer T is an 8-tuple T = (Σ, ∆, Q, I, F, E, λ, ρ) where Σ is the finite input alphabet of the transducer, ∆is the finite output alphabet, Q is a finite set of states, I ⊆Q the set of initial states, F ⊆Q the set of final states, E ⊆Q × (Σ ∪{ϵ}) × (∆∪{ϵ}) × R × Q a finite set of transitions, λ : I →R the initial weight function, and ρ : F →R the final weight function mapping F to R. In our statistical framework, the weights can be interpreted as log-likelihoods, thus there are added along a path. Since we use the standard Viterbi approximation, the weight associated by T to a pair of strings (x, y) ∈Σ∗× ∆∗is given by: [[T]](x, y) = min π∈R(I,x,y,F ) λ[p[π]] + w[π] + ρ[n[π]] where R(I, x, y, F) denotes the set of paths from an initial state p ∈I to a final state q ∈F with input label x and output label y, w[π] the weight of the path π, λ[p[π]] the initial weight of the origin state of π, and ρ[n[π]] the final weight of its destination. A Weighted automaton A = (Σ, Q, I, F, E, λ, ρ) is defined in a similar way by simply omitting the output (or input) labels. We denote by Π2(T) the 0 1 a 2 b 3 c 4 d (a) 0 1 a:x 5 a:u 2 b:y 6 b:v 3 c:z 4 d:t 7 c:w 8 a:s (b) 0 1 a:x 2 a:u 3 b:y 4 b:v 5 c:z 6 c:w 7 d:t (c) Figure 1: (a) Weighted automaton T1. (b) Weighted transducer T2. (c) T1 ◦T2, the result of the composition of T1 and T2. weighted automaton obtained from T by removing its input labels. A general composition operation similar to the composition of relations can be defined for weighted finite-state transducers (Eilenberg, 1974; Berstel, 1979; Salomaa and Soittola, 1978; Kuich and Salomaa, 1986). 
The composition of two transducers T1 and T2 is a weighted transducer denoted by T1 ◦ T2 and defined by:

[[T_1 \circ T_2]](x, y) = \min_{z \in \Delta^*} \{ [[T_1]](x, z) + [[T_2]](z, y) \}

There exists a simple algorithm for constructing T = T1 ◦ T2 from T1 and T2 (Pereira and Riley, 1997; Mohri et al., 1996). The states of T are identified as pairs of a state of T1 and a state of T2. A state (q1, q2) in T1 ◦ T2 is an initial (final) state if and only if q1 is an initial (resp. final) state of T1 and q2 is an initial (resp. final) state of T2. The transitions of T are the result of matching a transition of T1 and a transition of T2 as follows: (q1, a, b, w1, q′1) and (q2, b, c, w2, q′2) produce the transition

((q_1, q_2), a, c, w_1 + w_2, (q'_1, q'_2))   (4)

in T. The efficiency of this algorithm was critical to that of our unit selection system. Thus, we designed an improved composition that we will describe later. Figure 1(c) gives the result of the composition of the weighted machines given in Figure 1(a) and (b).

3.2 Language Model Weighted Transducer

The n-gram statistical language model we construct for unit sequences can be represented by a weighted automaton G which assigns to each sequence u its log-likelihood:

[[G]](u) = -\log(P(u))   (5)

according to our probability estimate P. Since a unit sequence u uniquely determines the corresponding halfphone sequence x, the n-gram statistical model equivalently defines a model of the joint distribution P(x, u). G can be augmented to define a weighted transducer ˆG assigning to pairs (x, u) their log-likelihoods. For any halfphone sequence x and unit sequence u, we define ˆG by:

[[\hat{G}]](x, u) = -\log P(u)   (6)

The weighted transducer ˆG can be used to generate all the unit sequences corresponding to a specific halfphone sequence given by a finite automaton p, using composition: p ◦ ˆG. In our case, we also wish to use the language model transducer ˆG to limit the number of candidate unit sequences considered. We will do that by giving a strong precedence to n-grams of units that occurred in the training corpus (see Section 4.2).

Example. Figure 2(a) shows the bigram model G estimated from the following corpus:

<s> u1 u2 u1 u2 </s>
<s> u1 u3 </s>
<s> u1 u3 u1 u2 </s>

where ⟨s⟩ and ⟨/s⟩ are the symbols marking the start and the end of an utterance. When the unit u1 is associated to the halfphone p1 and both units u2 and u3 are associated to the halfphone p2, the corresponding weighted halfphone-to-unit transducer ˆG is the one shown in Figure 2(b).

[Figure 2: (a) n-gram language model G for unit sequences. (b) Corresponding halfphone-to-unit weighted transducer ˆG.]

3.3 Unit Selection with Weighted Finite-State Transducers

From each sequence f = f1 . . . fn of feature vectors specified by the text analysis frontend, we can straightforwardly derive the halfphone sequence to be synthesized and represent it by a finite automaton p, since the first component of each feature vector fi is the corresponding halfphone. Let W be the weighted automaton obtained by composition of p with ˆG and projection on the output:

W = \Pi_2(p \circ \hat{G})   (7)

W represents the set of candidate unit sequences with their respective grammar costs. We can then use a speech recognition decoder to search for the best sequence u, since W can be thought of as the counterpart of a speech recognition transducer, f the equivalent of the acoustic features, and Cf the analogue of the acoustic cost. Our decoder uses a standard beam search of W to determine the best path by computing on-the-fly the feature cost between each unit and its corresponding feature vector.

Composition constitutes the most costly operation in this framework. Section 4 presents several of the techniques that we used to speed up that algorithm in the context of unit selection.

4 Algorithms

4.1 Composition with String Potentials

In general, composition may create non-coaccessible states, i.e., states that do not admit a path to a final state. These states can be removed after composition using a standard connection (or trimming) algorithm that removes unnecessary states. However, our purpose here is to avoid the creation of such states to save computational time. To that end, we introduce the notion of string potential at each state.

Let i[π] (o[π]) be the input (resp. output) label of a path π, and denote by x ∧ y the longest common prefix of two strings x and y. Let q be a state in a weighted transducer. The input (output) string potential of q is defined as the longest common prefix of the input (resp. output) labels of all the paths in T from q to a final state:

p_i(q) = \bigwedge_{\pi \in \Pi(q, F)} i[\pi] \qquad p_o(q) = \bigwedge_{\pi \in \Pi(q, F)} o[\pi]

The string potentials of the states of T can be computed using the generic shortest-distance algorithm of (Mohri, 2002) over the string semiring. They can be used in composition in the following way. We will say that two strings x and y are comparable if x is a prefix of y or y is a prefix of x. Let (q1, q2) be a state in T = T1 ◦ T2. Note that (q1, q2) is a coaccessible state only if the output string potential of q1 in T1 and the input string potential of q2 in T2 are comparable, i.e., po(q1) is a prefix of pi(q2) or pi(q2) is a prefix of po(q1). Hence, composition can be modified to create only those states for which the string potentials are compatible. As an example, state 2 = (1, 5) of the transducer T = T1 ◦ T2 in Figure 1 need not be created since po(1) = bcd and pi(5) = bca are not comparable strings.

The notion of string potentials can be extended to further reduce the number of non-coaccessible states created by composition. The extended input string potential of q in T, denoted by p̄i(q), is the set of strings defined by:

\bar{p}_i(q) = p_i(q) \cdot \zeta_i(q)   (8)

where ζi(q) ⊆ Σ is such that for every σ ∈ ζi(q), there exists a path π from q to a final state such that pi(q)σ is a prefix of the input label of π. The extended output string potential of q, p̄o(q), is defined similarly. A state (q1, q2) in T1 ◦ T2 is coaccessible only if

(\bar{p}_o(q_1) \cdot \Sigma^*) \cap (\bar{p}_i(q_2) \cdot \Sigma^*) \neq \emptyset   (9)

Using string potentials helped us substantially improve the efficiency of composition in unit selection.

4.2 Language Model Transducer – Backoff

As mentioned before, the transducer ˆG represents an n-gram backoff model for the joint probability distribution P(x, u). Thus, backoff transitions are used in a standard fashion when ˆG is viewed as an automaton over paired sequences (x, u).
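To make this "standard fashion" concrete, the sketch below (illustrative only; the state encoding, dictionaries, and numbers are invented) follows backoff transitions, accumulating their weights, until an explicit arc for the requested unit is found — the usual Katz backoff lookup when the model is stored as an automaton whose states encode histories. The following paragraphs describe how this use of backoff transitions has to be adapted inside the composition p ◦ ˆG.

```python
# Hedged sketch (not the paper's implementation) of standard backoff-arc traversal
# in an n-gram model stored as an automaton: to score a unit given a history state,
# follow backoff (epsilon) arcs, adding their weights, until an explicit arc is found.
import math

class BackoffNGram:
    def __init__(self, arcs, backoff):
        # arcs:    dict state -> dict unit -> (-log probability, next_state)
        # backoff: dict state -> (-log alpha, backoff_state)
        self.arcs = arcs
        self.backoff = backoff

    def cost(self, state, unit):
        """Grammar cost C_g(unit | history encoded by state) = -log P."""
        acc = 0.0
        while True:
            if unit in self.arcs.get(state, {}):
                neg_log_p, _ = self.arcs[state][unit]
                return acc + neg_log_p
            if state not in self.backoff:
                return math.inf            # unit absent even from the unigram state
            neg_log_alpha, state = self.backoff[state]
            acc += neg_log_alpha

# Tiny bigram-like example: history state "u1" backs off to the unigram state "".
lm = BackoffNGram(
    arcs={"u1": {"u2": (0.7, "u2")}, "": {"u2": (1.5, "u2"), "u3": (1.9, "u3")}},
    backoff={"u1": (0.4, "")},
)
print(lm.cost("u1", "u2"))   # 0.7  (explicit bigram arc)
print(lm.cost("u1", "u3"))   # 2.3  (0.4 backoff weight + 1.9 unigram cost)
```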
Since we use ˆG as a transducer mapping halfphone sequences to unit sequences to determine the most likely unit sequence u given a halfphone sequence x,¹ we need to clarify the use of the backoff transitions in the composition p ◦ ˆG. Denote by O(V) the set of output labels of a set of transitions V. Then, the correct use derived from the definition of the backoff transitions in the joint model is as follows. At a given state s of ˆG and for a given input halfphone a, the outgoing transitions with input a are the transitions V of s with input label a, and, for each b ∉ O(V), the transition of the first backoff state of s with input label a and output b.

For the purpose of our unit selection system, we had to resort to an approximation. This is because, in general, the backoff use just outlined leads to examining, for a given halfphone, the set of all units possible at each state, which is typically quite large.² Instead, we restricted the inspection of the backoff states in the following way within the composition p ◦ ˆG. A state s1 in p corresponds in the composed transducer p ◦ ˆG to a set of states (s1, s2), s2 ∈ S2, where S2 is a subset of the states of ˆG. When computing the outgoing transitions of the states in (s1, s2) with input label a, the backoff transitions of a state s2 are inspected if and only if none of the states in S2 has an outgoing transition with input label a.

¹ This corresponds to the conditional probability P(u|x) = P(x, u)/P(x).
² Note that more generally the vocabulary size of our statistical language models, about 400,000, is quite large compared to the usual word-based models.

4.3 Language Model Transducer – Shrinking

A classical algorithm for reducing the size of an n-gram language model is shrinking, using the entropy-based method of (Stolcke, 1998) or the weighted difference method (Seymore and Rosenfeld, 1996), both quite similar in practice. In our experiments, we used a modified version of the weighted difference method. Let w be a unit and let h be its conditioning history within the n-gram model. For a given shrink factor γ, the transition corresponding to the n-gram hw is removed from the weighted automaton if:

\log(\tilde{P}(w|h)) - \log(\alpha_h \tilde{P}(w|h')) \leq \frac{\gamma}{c(hw)}   (10)

where h′ is the backoff sequence associated with h. Thus, a higher-order n-gram hw is pruned when it does not provide a probability estimate significantly different from the corresponding lower-order n-gram sequence h′w.

This standard shrinking method needs to be modified to be used in the case of our halfphone-to-unit weighted transducer model with the restriction on the traversal of the backoff transitions described in the previous section. The shrinking method must take into account all the transitions sharing the same input label at the state identified with h and its backoff state h′. Thus, at each state identified with h in ˆG, a transition with input label x is pruned when the following condition holds:

\sum_{w \in X^x_h} \log(\tilde{P}(w|h)) - \sum_{w \in X^x_{h'}} \log(\alpha_h \tilde{P}(w|h')) \leq \frac{\gamma}{c(hw)}

where h′ is the backoff sequence associated with h and X^x_k is the set of output labels of all the outgoing transitions with input label x of the state identified with k.
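A hedged sketch of this modified pruning test follows. It reflects our reading of the criterion above (in particular the γ/c(hw) threshold on the count-weighted difference); the probabilities, counts, and names are illustrative, not taken from the system.

```python
# Hedged sketch of the modified weighted-difference shrinking test described above
# (our own reading of the criterion; all values and names are illustrative).
import math

def prune_transition(p_high, p_low, alpha_h, count, gamma):
    """Decide whether to prune the transitions with a given input label x at state h.

    p_high : dict unit w -> P~(w|h)  for w in X_h^x (outputs of the transitions with input x at h)
    p_low  : dict unit w -> P~(w|h') for w in X_h'^x (same input label at the backoff state h')
    alpha_h: backoff weight of state h
    count  : c(hw), the training count associated with the n-gram being tested
    gamma  : shrink factor
    """
    lhs = sum(math.log(p) for p in p_high.values()) \
        - sum(math.log(alpha_h * p) for p in p_low.values())
    return lhs <= gamma / count

# Toy usage: the higher-order estimate barely differs from the backed-off one,
# so the transition is pruned for a moderate shrink factor.
print(prune_transition({"u1": 0.30}, {"u1": 0.28}, alpha_h=0.9, count=3, gamma=1.0))  # True
```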
Table 1 gives the size of the resulting language model weighted automata. These language models were built using the GRM Library (Allauzen et al., 2004). We evaluated these models by using them to synthesize an AP news article of 1,000 words, corresponding to 8250 units or 6 minutes of synthesized speech. Table 2 gives the unit selection time (in seconds) taken by our new system to synthesize this AP news article. Experiments were run on a 1GHz Pentium III processor with 256KB of cache and 2GB of memory.

Table 1: Size of the stochastic language models for different n-gram order and shrinking factor.

  Model                 No. of states   No. of transitions
  2-gram, unshrunken        293,935          5,003,336
  3-gram, unshrunken      4,709,404         19,027,244
  3-gram, γ = −4          2,967,472         14,223,284
  3-gram, γ = −1          2,060,031         12,133,965
  3-gram, γ = 0           1,681,233         10,217,164
  3-gram, γ = 1           1,370,220          9,146,797
  3-gram, γ = 4             934,914          7,844,250

Table 2: Computation time for each unit selection system when used to synthesize the same AP news article.

  Model                 composition   search   total time
  baseline system            –           –        4.5s
  2-gram, unshrunken       2.9s         1.0s      3.9s
  3-gram, unshrunken       1.2s         0.5s      1.7s
  3-gram, γ = −4           1.3s         0.5s      1.8s
  3-gram, γ = −1           1.5s         0.5s      2.0s
  3-gram, γ = 0            1.7s         0.5s      2.2s
  3-gram, γ = 1            2.1s         0.6s      2.7s
  3-gram, γ = 4            2.7s         0.9s      3.6s

The baseline system mentioned in this table is the AT&T Natural Voices Product, which was also used to generate our training corpus using the concatenation cost caching method from (Beutnagel et al., 1999b). For the new system, both the computation times due to composition and to the search are displayed. Note that the AT&T Natural Voices Product system was highly optimized for speed. In our new systems, the standard research software libraries already mentioned were used. The search was performed using the standard speech recognition Viterbi decoder from the DCD library (Allauzen et al., 2003). With a trigram language model, our new statistical unit selection system was about 2.6 times faster than the baseline system.

A formal test using the standard mean opinion score (MOS) was used to compare the quality of the high-quality AT&T Natural Voices Product synthesizer and that of the synthesizers based on our new unit selection system with shrunken and unshrunken trigram language models. In such tests, several listeners are asked to rank the quality of each utterance from 1 (worst score) to 5 (best). The MOS results of the three systems with 60 utterances tested by 21 listeners are reported in Table 3 with their corresponding standard error.

Table 3: Quality testing results: we report, for each system, the mean and standard error of the raw and the listener-normalized scores.

  Model                 raw score       normalized score
  baseline system       3.54 ± .20        3.09 ± .22
  3-gram, unshrunken    3.45 ± .20        2.98 ± .21
  3-gram, γ = −1        3.40 ± .20        2.93 ± .22

The difference of scores between the three systems is not statistically significant (first column); in particular, the absolute difference between the two best systems is less than .1. Different listeners may rank utterances in different ways. Some may choose the full range of scores (1–5) to rank each utterance, others may select a smaller range near 5, near 3, or some other range. To factor out such possible discrepancies in ranking, we also computed the listener-normalized scores (second column of the table). This was done for each listener by removing the average score over the full set of utterances, dividing it by the standard deviation, and by centering it around 3.
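The listener normalization just described amounts to a per-listener z-score re-centered around 3; a small sketch (ours, not the authors' evaluation scripts) follows.

```python
# Small sketch of the listener normalization used for the second column of Table 3:
# per listener, z-score the raw scores over all utterances and re-center around 3.
# Illustrative code only.
from statistics import mean, stdev

def normalize_listener_scores(scores):
    """scores: raw MOS scores (1-5) given by one listener over all utterances."""
    mu = mean(scores)
    sigma = stdev(scores)
    return [3.0 + (s - mu) / sigma for s in scores]

raw = [4, 4, 3, 5, 4, 2, 4, 3]
print([round(s, 2) for s in normalize_listener_scores(raw)])
```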
The results show that the difference between the normalized scores of the three systems is not significantly different. Thus, the MOS results show that the three systems have the same quality. We also measured the similarity of the two best systems by comparing the number of common units they produce for each utterance. On the AP news article already mentioned, more than 75% of the units were common. 6 Conclusion We introduced a statistical modeling approach to unit selection in speech synthesis. This approach is likely to lead to more accurate unit selection systems based on principled learning algorithms and techniques that radically depart from the heuristic methods used in the traditional systems. Our preliminary experiments using a training corpus generated by the AT&T Natural Voices Product demonstrates that statistical modeling techniques can be used to build a high-quality unit selection system. It also shows other important benefits of this approach: a substantial increase of efficiency and a greater modularity and flexibility. Acknowledgments We thank Mark Beutnagel for helping us clarify some of the details of the unit selection system in the AT&T Natural Voices Product speech synthesizer. Mark also generated the training corpora and set up the listening test used in our experiments. We also acknowledge discussions with Brian Roark about various statistical language modeling topics in the context of unit selection. References Cyril Allauzen, Mehryar Mohri, and Michael Riley. 2003. DCD Library - Decoder Library, software collection for decoding and related functions. In AT&T Labs - Research. http://www.research.att.com/sw/tools/dcd. Cyril Allauzen, Mehryar Mohri, and Brian Roark. 2004. A General Weighted Grammar Library. In Proceedings of the Ninth International Conference on Automata (CIAA 2004), Kingston, Ontario, Canada, July. http://www.research.att.com/sw/tools/grm. Jean Berstel. 1979. Transductions and ContextFree Languages. Teubner Studienbucher: Stuttgart. Mark Beutnagel, Alistair Conkie, Juergen Schroeter, and Yannis Stylianou. 1999a. The AT&T Next-Gen system. In Proceedings of the Joint Meeting of ASA, EAA and DAGA, pages 18–24, Berlin, Germany. Mark Beutnagel, Mehryar Mohri, and Michael Riley. 1999b. Rapid unit selection from a large speech corpus for concatenative speech synthesis. In Proceedings of Eurospeech, volume 2, pages 607–610. Ivan Bulyko and Mari Ostendorf. 2001. Unit selection for speech synthesis using splicing costs with weighted finite-state trasnducers. In Proceedings of Eurospeech, volume 2, pages 987–990. Alistair Conkie, Mark Beutnagel, Ann Syrdal, and Philip Brown. 2000. Preselection of candidate units in a unit selection-based text-to-speech synthesis system. In Proceedings of ICSLP, volume 3, pages 314–317. Samuel Eilenberg. 1974. Automata, Languages and Machines, volume A. Academic Press. Andrew Hunt and Alan Black. 1996. Unit selection in a concatenative speech synthesis system. In Proceedings of ICASSP’96, volume 1, pages 373–376, Atlanta, GA. Frederick Jelinek. 1976. Continuous speech recognition by statistical methods. IEEE Proceedings, 64(4):532–556. Slava M. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recogniser. IEEE Transactions on Acoustic, Speech, and Signal Processing, 35(3):400–401. Werner Kuich and Arto Salomaa. 1986. Semirings, Automata, Languages. Number 5 in EATCS Monographs on Theoretical Computer Science. Springer-Verlag, Berlin, Germany. Mehryar Mohri, Fernando C. N. 
Pereira, and Michael Riley. 1996. Weighted automata in text and speech processing. In Proceedings of the 12th European Conference on Artificial Intelligence (ECAI 1996), Workshop on Extended finite state models of language, Budapest, Hungary. John Wiley and Sons, Chichester. Mehryar Mohri, Fernando C. N. Pereira, and Michael Riley. 2000. The Design Principles of a Weighted Finite-State Transducer Library. Theoretical Computer Science, 231(1):17–32. http://www.research.att.com/sw/tools/fsm. Mehryar Mohri. 2002. Semiring Frameworks and Algorithms for Shortest-Distance Problems. Journal of Automata, Languages and Combinatorics, 7(3):321–350. Mehryar Mohri. 2004. Weighted Finite-State Transducer Algorithms: An Overview. In Carlos Mart´ın-Vide, Victor Mitrana, and Gheorghe Paun, editors, Formal Languages and Applications, volume 148, VIII, 620 p. Springer, Berlin. Eric Moulines and Francis Charpentier. 1990. Pitch-synchronous waveform processing techniques for text-to-speech synthesis using diphones. Speech Communication, 9(5-6):453– 467. Fernando C. N. Pereira and Michael D. Riley. 1997. Speech Recognition by Composition of Weighted Finite Automata. In Finite-State Language Processing, pages 431–453. MIT Press. Arto Salomaa and Matti Soittola. 1978. AutomataTheoretic Aspects of Formal Power Series. Springer-Verlag: New York. Kristie Seymore and Ronald Rosenfeld. 1996. Scalable backoff language models. In Proceedings of ICSLP, volume 1, pages 232–235, Philadelphia, Pennsylvania. Andreas Stolcke. 1998. Entropy-based pruning of backoff language models. In Proc. DARPA Broadcast News Transcription and Understanding Workshop, pages 270–274. Yannis Stylianou, Thierry Dutoit, and Juergen Schroeter. 1997. Diphone conactenation using a harmonic plus noise model of speech. In Proceedings of Eurospeech. Jon Yi, James Glass, and Lee Hetherington. 2000. A flexible scalable finite-state transducer architecture for corpus-based concatenative speech synthesis. In Proceedings of ICSLP, volume 3, pages 322–325.
Learning Word Senses With Feature Selection and Order Identification Capabilities Zheng-Yu Niu, Dong-Hong Ji Institute for Infocomm Research 21 Heng Mui Keng Terrace 119613 Singapore {zniu, dhji}@i2r.a-star.edu.sg Chew-Lim Tan Department of Computer Science National University of Singapore 3 Science Drive 2 117543 Singapore [email protected] Abstract This paper presents an unsupervised word sense learning algorithm, which induces senses of target word by grouping its occurrences into a “natural” number of clusters based on the similarity of their contexts. For removing noisy words in feature set, feature selection is conducted by optimizing a cluster validation criterion subject to some constraint in an unsupervised manner. Gaussian mixture model and Minimum Description Length criterion are used to estimate cluster structure and cluster number. Experimental results show that our algorithm can find important feature subset, estimate model order (cluster number) and achieve better performance than another algorithm which requires cluster number to be provided. 1 Introduction Sense disambiguation is essential for many language applications such as machine translation, information retrieval, and speech processing (Ide and V´eronis, 1998). Almost all of sense disambiguation methods are heavily dependant on manually compiled lexical resources. However these lexical resources often miss domain specific word senses, even many new words are not included inside. Learning word senses from free text will help us dispense of outside knowledge source for defining sense by only discriminating senses of words. Another application of word sense learning is to help enriching or even constructing semantic lexicons (Widdows, 2003). The solution of word sense learning is closely related to the interpretation of word senses. Different interpretations of word senses result in different solutions to word sense learning. One interpretation strategy is to treat a word sense as a set of synonyms like synset in WordNet. The committee based word sense discovery algorithm (Pantel and Lin, 2002) followed this strategy, which treated senses as clusters of words occurring in similar contexts. Their algorithm initially discovered tight clusters called committees by grouping top n words similar with target word using averagelink clustering. Then the target word was assigned to committees if the similarity between them was above a given threshold. Each committee that the target word belonged to was interpreted as one of its senses. There are two difficulties with this committee based sense learning. The first difficulty is about derivation of feature vectors. A feature for target word here consists of a contextual content word and its grammatical relationship with target word. Acquisition of grammatical relationship depends on the output of a syntactic parser. But for some languages, ex. Chinese, the performance of syntactic parsing is still a problem. The second difficulty with this solution is that two parameters are required to be provided, which control the number of committees and the number of senses of target word. Another interpretation strategy is to treat a word sense as a group of similar contexts of target word. The context group discrimination (CGD) algorithm presented in (Sch¨utze, 1998) adopted this strategy. Firstly, their algorithm selected important contextual words using χ2 or local frequency criterion. 
With the χ2 based criterion, those contextual words whose occurrence depended on whether the ambiguous word occurred were chosen as features. When using local frequency criterion, their algorithm selected top n most frequent contextual words as features. Then each context of occurrences of target word was represented by second order cooccurrence based context vector. Singular value decomposition (SVD) was conducted to reduce the dimensionality of context vectors. Then the reduced context vectors were grouped into a pre-defined number of clusters whose centroids corresponded to senses of target word. Some observations can be made about their feature selection and clustering procedure. One observation is that their feature selection uses only first order information although the second order cooccurrence data is available. The other observation is about their clustering procedure. Similar with committee based sense discovery algorithm, their clustering procedure also requires the predefinition of cluster number. Their method can capture both coarse-gained and fine-grained sense distinction as the predefined cluster number varies. But from a point of statistical view, there should exist a partitioning of data at which the most reliable, “natural” sense clusters appear. In this paper, we follow the second order representation method for contexts of target word, since it is supposed to be less sparse and more robust than first order information (Sch¨utze, 1998). We introduce a cluster validation based unsupervised feature wrapper to remove noises in contextual words, which works by measuring the consistency between cluster structures estimated from disjoint data subsets in selected feature space. It is based on the assumption that if selected feature subset is important and complete, cluster structure estimated from data subset in this feature space should be stable and robust against random sampling. After determination of important contextual words, we use a Gaussian mixture model (GMM) based clustering algorithm (Bouman et al., 1998) to estimate cluster structure and cluster number by minimizing Minimum Description Length (MDL) criterion (Rissanen, 1978). We construct several subsets from widely used benchmark corpus as test data. Experimental results show that our algorithm (FSGMM) can find important feature subset, estimate cluster number and achieve better performance compared with CGD algorithm. This paper is organized as follows. In section 2 we will introduce our word sense learning algorithm, which incorporates unsupervised feature selection and model order identification technique. Then we will give out the experimental results of our algorithm and discuss some findings from these results in section 3. Section 4 will be devoted to a brief review of related efforts on word sense discrimination. In section 5 we will conclude our work and suggest some possible improvements. 2 Learning Procedure 2.1 Feature selection Feature selection for word sense learning is to find important contextual words which help to discriminate senses of target word without using class labels in data set. This problem can be generalized as selecting important feature subset in an unsupervised manner. Many unsupervised feature selection algorithms have been presented, which can be categorized as feature filter (Dash et al., 2002; Talavera, 1999) and feature wrapper (Dy and Brodley, 2000; Law et al., 2002; Mitra et al., 2002; Modha and Spangler, 2003). 
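For concreteness, the χ2-based selection of contextual words described above for CGD (and used later in this paper as one of the feature ranking criteria) can be sketched as follows; the contingency counts and the toy corpus statistics are illustrative assumptions, not data from the experiments.

```python
# Hedged sketch of chi-square based contextual word selection: score each candidate
# word by the chi-square statistic of a 2x2 contingency table (word present/absent x
# target word present/absent) and keep the top-ranked words.  Counts are invented.

def chi_square_2x2(a, b, c, d):
    """a: target contexts containing w, b: target contexts without w,
    c: other windows containing w,     d: other windows without w."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

def rank_contextual_words(counts, top_n):
    """counts: dict word -> (a, b, c, d).  Returns the top_n words by chi-square."""
    scored = sorted(counts, key=lambda w: chi_square_2x2(*counts[w]), reverse=True)
    return scored[:top_n]

toy_counts = {
    "money": (40, 460, 100, 9400),
    "rate":  (30, 470, 200, 9300),
    "the":   (480, 20, 9000, 500),   # frequent everywhere: weak association
}
print(rank_contextual_words(toy_counts, 2))   # ['money', 'rate']
```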
In this paper we propose a cluster validation based unsupervised feature subset evaluation method. Cluster validation has been used to solve model order identification problem (Lange et al., 2002; Levine and Domany, 2001). Table 1 gives out our feature subset evaluation algorithm. If some features in feature subset are noises, the estimated cluster structure on data subset in selected feature space is not stable, which is more likely to be the artifact of random splitting. Then the consistency between cluster structures estimated from disjoint data subsets will be lower. Otherwise the estimated cluster structures should be more consistent. Here we assume that splitting does not eliminate some of the underlying modes in data set. For comparison of different clustering structures, predictors are constructed based on these clustering solutions, then we use these predictors to classify the same data subset. The agreement between class memberships computed by different predictors can be used as the measure of consistency between cluster structures. We use the stability measure (Lange et al., 2002) (given in Table 1) to assess the agreement between class memberships. For each occurrence, one strategy is to construct its second order context vector by summing the vectors of contextual words, then let the feature selection procedure start to work on these second order contextual vectors to select features. However, since the sense associated with a word’s occurrence is always determined by very few feature words in its contexts, it is always the case that there exist more noisy words than the real features in the contexts. So, simply summing the contextual word’s vectors together may result in noise-dominated second order context vectors. To deal with this problem, we extend the feature selection procedure further to the construction of second order context vectors: to select better feature words in contexts to construct better second order context vectors enabling better feature selection. Since the sense associated with a word’s occurrence is always determined by some feature words in its contexts, it is reasonable to suppose that the selected features should cover most of occurrences. Formally, let coverage(D, T) be the coverage rate of the feature set T with respect to a set of contexts D, i.e., the ratio of the number of the occurrences with at least one feature in their local contexts against the total number of occurrences, then we assume that coverage(D, T) ≥τ. In practice, we set τ = 0.9. This assumption also helps to avoid the bias toward the selection of fewer features, since with fewer features, there are more occurrences without features in contexts, and their context vectors will be zero valued, which tends to result in more stable cluster structure. Let D be a set of local contexts of occurrences of target word, then D = {di}N i=1, where di represents local context of the i-th occurrence, and N is the total number of this word’s occurrences. W is used to denote bag of words occurring in context set D, then W = {wi}M i=1, where wi denotes a word occurring in D, and M is the total number of different contextual words. Let V denote a M × M second-order cooccurrence symmetric matrix. Suppose that the i-th , 1 ≤i ≤M, row in the second order matrix corresponds to word wi and the j-th , 1 ≤j ≤M, column corresponds to word wj, then the entry specified by i-th row and j-th column records the number of times that word wi occurs close to wj in corpus. 
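A minimal sketch of building such a second-order co-occurrence matrix is shown below (our own illustration; the windowing scheme and the toy corpus are assumptions, not the paper's exact preprocessing).

```python
# Illustrative sketch (not the authors' code) of the second-order co-occurrence
# matrix V: V[w_i][w_j] counts how often word w_i occurs close to w_j
# (within a fixed window) in the corpus.
from collections import defaultdict

def second_order_matrix(sentences, vocabulary, window=5):
    """sentences: list of token lists; vocabulary: list of contextual words W."""
    vocab = set(vocabulary)
    V = defaultdict(lambda: defaultdict(int))
    for tokens in sentences:
        for i, w in enumerate(tokens):
            if w not in vocab:
                continue
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                u = tokens[j]
                if j != i and u in vocab:
                    V[w][u] += 1
    return V

corpus = [["interest", "rate", "rise", "bank"],
          ["bank", "pay", "interest", "rate"]]
V = second_order_matrix(corpus, ["interest", "rate", "bank", "pay"], window=3)
print(V["interest"]["rate"])   # 2
```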
We use v(w_i) to represent the word vector of contextual word w_i, which is the i-th row in matrix V. H_T is a weight matrix of the contextual word subset T, T ⊆ W. Each entry h_{i,j} represents the weight of word w_j in d_i, w_j ∈ T, 1 ≤ i ≤ N. We use a binary term weighting method to derive context vectors: h_{i,j} = 1 if word w_j occurs in d_i, otherwise zero. Let C_T = {c^T_i}^N_{i=1} be a set of context vectors in feature space T, where c^T_i is the context vector of the i-th occurrence. c^T_i is defined as:

c^T_i = \sum_j h_{i,j}\, v(w_j), \quad w_j \in T,\ 1 \leq i \leq N.   (1)

The feature subset selection in word set W can be formulated as:

\hat{T} = \arg\max_T \{ criterion(T, H, V, q) \}, \quad T \subseteq W, \quad \text{subject to } coverage(D, T) \geq \tau,   (2)

where ˆT is the optimal feature subset, criterion is the cluster validation based evaluation function (the function in Table 1), q is the resampling frequency for the estimate of stability, and coverage(D, T) is the proportion of contexts with occurrences of features in T. This constrained optimization results in a solution which maximizes the criterion and meets the given constraint at the same time. In this paper we use sequential greedy forward floating search (Pudil et al., 1994) in a word list sorted by the χ2 or local frequency criterion. We set l = 1, m = 1, where l is the plus step and m is the take-away step.

Table 1: Unsupervised Feature Subset Evaluation Algorithm. Intuitively, for a given feature subset T, we iteratively split the data set into disjoint halves and compute the agreement of the clustering solutions estimated from these sets using the stability measure. The average of the stability over q resamplings is the estimate of the score of T.

  Function criterion(T, H, V, q)
  Input parameters: feature subset T, weight matrix H, second order co-occurrence matrix V, resampling frequency q;
  (1) S_T = 0;
  (2) For i = 1 to q do
      (2.1) Randomly split C_T into disjoint halves, denoted as C_TA and C_TB;
      (2.2) Estimate the GMM parameters and cluster number on C_TA using Cluster; the parameter set is denoted as θ̂_A. The solution θ̂_A can be used to construct a predictor ρ_A;
      (2.3) Estimate the GMM parameters and cluster number on C_TB using Cluster; the parameter set is denoted as θ̂_B. The solution θ̂_B can be used to construct a predictor ρ_B;
      (2.4) Classify C_TB using ρ_A and ρ_B; the class labels assigned by ρ_A and ρ_B are denoted as L_A and L_B;
      (2.5) S_T += max_π (1/|C_TB|) Σ_i 1{π(L_A(c^T_Bi)) = L_B(c^T_Bi)}, where π denotes a possible permutation relating indices between L_A and L_B, and c^T_Bi ∈ C_TB;
  (3) S_T = S_T / q;
  (4) Return S_T;

2.2 Clustering with order identification

After feature selection, we employ a Gaussian mixture modelling algorithm, Cluster (Bouman et al., 1998), to estimate the cluster structure and cluster number. Let Y = {y_n}^N_{n=1} be a set of M dimensional vectors to be modelled by a GMM. Assuming that this model has K subclasses, let π_k denote the prior probability of subclass k, µ_k denote the M dimensional mean vector for subclass k, and R_k denote the M × M dimensional covariance matrix for subclass k, 1 ≤ k ≤ K. The subclass label for vector y_n is represented by x_n. The MDL criterion is used for GMM parameter estimation and order identification, which is given by:

MDL(K, \theta) = -\sum_{n=1}^{N} \log\big( p_{y_n|x_n}(y_n|\Theta) \big) + \frac{1}{2} L \log(NM),   (3)

p_{y_n|x_n}(y_n|\Theta) = \sum_{k=1}^{K} p_{y_n|x_n}(y_n|k, \theta)\, \pi_k,   (4)

L = K\Big(1 + M + \frac{(M+1)M}{2}\Big) - 1.   (5)

The log likelihood measures the goodness of fit of a model to the data sample, while the second term penalizes complex models.
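The following sketch (ours; the paper relies on the Cluster program of Bouman et al. (1998) rather than code like this) evaluates the MDL score of Equations (3)–(5) for an already fitted Gaussian mixture.

```python
# Hedged sketch of the MDL score in equations (3)-(5) for a fitted Gaussian mixture.
import numpy as np

def gaussian_density(y, mu, R):
    M = len(mu)
    diff = y - mu
    expo = -0.5 * diff @ np.linalg.solve(R, diff)
    return (2 * np.pi) ** (-M / 2) * np.linalg.det(R) ** (-0.5) * np.exp(expo)

def mdl_score(Y, pis, mus, covs):
    """Y: (N, M) data; pis, mus, covs: mixture weights, means, covariance matrices."""
    N, M = Y.shape
    K = len(pis)
    log_lik = 0.0
    for y in Y:
        mix = sum(pi * gaussian_density(y, mu, R) for pi, mu, R in zip(pis, mus, covs))
        log_lik += np.log(mix)
    L = K * (1 + M + (M + 1) * M / 2) - 1          # number of free parameters
    return -log_lik + 0.5 * L * np.log(N * M)

# Toy usage with a single-component model on 2-D data.
rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 2))
print(mdl_score(Y, [1.0], [Y.mean(axis=0)], [np.cov(Y.T)]))
```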
This estimator works by attempting to find a model order with minimum code length to describe the data sample Y and parameter set Θ. If the cluster number is fixed, the estimation of the GMM parameters can be solved using the EM algorithm to address this type of incomplete data problem (Dempster et al., 1977). The initialization of the mixture parameter θ^(1) is given by:

\pi^{(1)}_k = \frac{1}{K_o}   (6)

\mu^{(1)}_k = y_n, \quad \text{where } n = \lfloor (k-1)(N-1)/(K_o-1) \rfloor + 1   (7)

R^{(1)}_k = \frac{1}{N} \sum_{n=1}^{N} y_n y_n^t   (8)

K_o is a given initial subclass number. Then the EM algorithm is used to estimate the model parameters by minimizing MDL:

E-step: re-estimate the expectations based on the previous iteration:

p_{x_n|y_n}(k|y_n, \theta^{(i)}) = \frac{ p_{y_n|x_n}(y_n|k, \theta^{(i)})\, \pi_k }{ \sum_{l=1}^{K} p_{y_n|x_n}(y_n|l, \theta^{(i)})\, \pi_l }   (9)

M-step: estimate the model parameter θ^(i) to maximize the log-likelihood in MDL:

\bar{N}_k = \sum_{n=1}^{N} p_{x_n|y_n}(k|y_n, \theta^{(i)})   (10)

\pi_k = \frac{\bar{N}_k}{N}   (11)

\mu_k = \frac{1}{\bar{N}_k} \sum_{n=1}^{N} y_n\, p_{x_n|y_n}(k|y_n, \theta^{(i)})   (12)

R_k = \frac{1}{\bar{N}_k} \sum_{n=1}^{N} (y_n - \mu_k)(y_n - \mu_k)^t\, p_{x_n|y_n}(k|y_n, \theta^{(i)})   (13)

p_{y_n|x_n}(y_n|k, \theta^{(i)}) = \frac{1}{(2\pi)^{M/2}} |R_k|^{-1/2} \exp\{\lambda\}   (14)

\lambda = -\frac{1}{2} (y_n - \mu_k)^t R_k^{-1} (y_n - \mu_k)   (15)

The EM iteration is terminated when the change of MDL(K, θ) is less than ϵ:

\epsilon = \frac{1}{100}\Big(1 + M + \frac{(M+1)M}{2}\Big) \log(NM)   (16)

For inferring the cluster number, the EM algorithm is applied for each value of K, 1 ≤ K ≤ K_o, and the value K̂ which minimizes the value of MDL is chosen as the correct cluster number. To make this process more efficient, two clusters l and m are selected to minimize the change in the MDL criterion when reducing K to K − 1. These two clusters l and m are then merged. The resulting parameter set is chosen as an initial condition for the EM iteration with K − 1 subclasses. This operation avoids a complete minimization with respect to π, µ, and R for each value of K.

3 Experiments and Evaluation

3.1 Test data

We constructed four datasets from a hand-tagged corpus¹ by randomly selecting 500 instances for each ambiguous word – "hard", "interest", "line", and "serve". The details of these datasets are given in Table 2.

Table 2: Four ambiguous words, their senses and the frequency distribution of each sense.

  Word          Sense                                      Percentage
  hard          not easy (difficult)                        82.8%
  (adjective)   not soft (metaphoric)                        9.6%
                not soft (physical)                          7.6%
  interest      money paid for the use of money             52.4%
                a share in a company or business            20.4%
                readiness to give attention                 14%
                advantage, advancement or favor              9.4%
                activity that one gives attention to         3.6%
                causing attention to be given to             0.2%
  line          product                                     56%
  (noun)        telephone connection                        10.6%
                written or spoken text                       9.8%
                cord                                         8.6%
                division                                     8.2%
                formation                                    6.8%
  serve         supply with food                            42.6%
  (verb)        hold an office                              33.6%
                function as something                       16%
                provide a service                            7.8%

Our preprocessing included lower-casing the upper case characters, ignoring all words that contain digits or non alpha-numeric characters, removing words from a stop word list, and filtering out low frequency words which appeared only once in the entire set. We did not use a stemming procedure. The sense tags were removed when they were used by FSGMM and CGD. In the evaluation procedure, these sense tags were used as ground truth classes. A second order co-occurrence matrix for English words was constructed using the English version of Xinhua News (Jan. 1998 – Dec. 1999). The window size for counting second order co-occurrences was 50 words.

3.2 Evaluation method for feature selection

For evaluation of feature selection, we used the mutual information between the feature subset and the class label set to assess the importance of the selected feature subset.
Our assessment measure is defined as: M(T) = 1 |T| X w∈T X l∈L p(w, l)log p(w, l) p(w)p(l), (17) where T is the feature subset to be evaluated, T ⊆ W, L is class label set, p(w, l) is the joint distribution of two variables w and l, p(w) and p(l) are marginal probabilities. p(w, l) is estimated based 1http://www.d.umn.edu/∼tpederse/data.html on contingency table of contextual word set W and class label set L. Intuitively, if M(T1) > M(T2), T1 is more important than T2 since T1 contains more information about L. 3.3 Evaluation method for clustering result When assessing the agreement between clustering result and hand-tagged senses (ground truth classes) in benchmark data, we encountered the difficulty that there was no sense tag for each cluster. In (Lange et al., 2002), they defined a permutation procedure for calculating the agreement between two cluster memberships assigned by different unsupervised learners. In this paper, we applied their method to assign different sense tags to only min(|U|, |C|) clusters by maximizing the accuracy, where |U| is the number of clusters, and |C| is the number of ground truth classes. The underlying assumption here is that each cluster is considered as a class, and for any two clusters, they do not share same class labels. At most |C| clusters are assigned sense tags, since there are only |C| classes in benchmark data. Given the contingency table Q between clusters and ground truth classes, each entry Qi,j gives the number of occurrences which fall into both the ith cluster and the j-th ground truth class. If |U| < |C|, we constructed empty clusters so that |U| = |C|. Let Ωrepresent a one-to-one mapping function from C to U. It means that Ω(j1) ̸= Ω(j2) if j1 ̸= j2 and vice versa, 1 ≤j1, j2 ≤|C|. Then Ω(j) is the index of the cluster associated with the j-th class. Searching a mapping function to maximize the accuracy of U can be formulated as: ˆΩ= arg max Ω |C| X j=1 QΩ(j),j. (18) Then the accuracy of solution U is given by Accuracy(U) = P j QˆΩ(j),j P i,j Qi,j . (19) In fact, P i,j Qi,j is equal to N, the number of occurrences of target word in test set. 3.4 Experiments and results For each dataset, we tested following procedures: CGDterm:We implemented the context group discrimination algorithm. Top max(|W| × 20%, 100) words in contextual word list was selected as features using frequency or χ2 based ranking. Then k-means clustering2 was performed on context vector matrix using normalized Euclidean distance. K-means clustering was repeated 5 times 2We used k-means function in statistics toolbox of Matlab. and the partition with best quality was chosen as final result. The number of clusters used by k-means was set to be identical with the number of ground truth classes. We tested CGDterm using various word vector weighting methods when deriving context vectors, ex. binary, idf, tf · idf. CGDSV D: The context vector matrix was derived using same method in CGDterm. Then kmeans clustering was conducted on latent semantic space transformed from context vector matrix, using normalized Euclidean distance. Specifically, context vectors were reduced to 100 dimensions using SVD. If the dimension of context vector was less than 100, all of latent semantic vectors with non-zero eigenvalue were used for subsequent clustering. We also tested it using different weighting methods, ex. binary, idf, tf · idf. FSGMM: We performed cluster validation based feature selection in feature set used by CGD. 
Then Cluster algorithm was used to group target word’s instances using Euclidean distance measure. τ was set as 0.90 in feature subset search procedure. The random splitting frequency is set as 10 for estimation of the score of feature subset. The initial subclass number was 20 and full covariance matrix was used for parameter estimation of each subclass. For investigating the effect of different context window size on the performance of three procedures, we tested these procedures using various context window sizes: ±1, ±5, ±15, ±25, and all of contextual words. The average length of sentences in 4 datasets is 32 words before preprocessing. Performance on each dataset was assessed by equation 19. The scores of feature subsets selected by FSGMM and CGD are listed in Table 3 and 4. The average accuracy of three procedures with different feature ranking and weighting method is given in Table 5. Each figure is the average over 5 different context window size and 4 datasets. We give out the detailed results of these three procedures in Figure 1. Several results should be noted specifically: From Table 3 and 4, we can find that FSGMM achieved better score on mutual information (MI) measure than CGD over 35 out of total 40 cases. This is the evidence that our feature selection procedure can remove noise and retain important features. As it was shown in Table 5, with both χ2 and freq based feature ranking, FSGMM algorithm performed better than CGDterm and CGDSV D if we used average accuracy to evaluate their performance. Specifically, with χ2 based feature ranking, FSGMM attained 55.4% average accuracy, while the best average accuracy of CGDterm and CGDSV D were 40.9% and 51.3% respectively. With freq based feature ranking, FSGMM achieved 51.2% average accuracy, while the best average accuracy of CGDterm and CGDSV D were 45.1% and 50.2%. The automatically estimated cluster numbers by FSGMM over 4 datasets are given in Table 6. The estimated cluster number was 2 ∼4 for “hard”, 3 ∼6 for “interest”, 3 ∼6 for “line”, and 2 ∼4 for “serve”. It is noted that the estimated cluster number was less than the number of ground truth classes in most cases. There are some reasons for this phenomenon. First, the data is not balanced, which may lead to that some important features cannot be retrieved. For example, the fourth sense of “serve”, and the sixth sense of “line”, their corresponding features are not up to the selection criteria. Second, some senses can not be distinguished using only bag-of-words information, and their difference lies in syntactic information held by features. For example, the third sense and the sixth sense of “interest” may be distinguished by syntactic relation of feature words, while the bag of feature words occurring in their context are similar. Third, some senses are determined by global topics, rather than local contexts. For example, according to global topics, it may be easier to distinguish the first and the second sense of “interest”. Figure 2 shows the average accuracy over three procedures in Figure 1 as a function of context window size for 4 datasets. For “hard”, the performance dropped as window size increased, and the best accuracy(77.0%) was achieved at window size 1. For “interest”, sense discrimination did not benefit from large window size and the best accuracy(40.1%) was achieved at window size 5. For “line”, accuracy dropped when increasing window size and the best accuracy(50.2%) was achieved at window size 1. 
For “serve”, the performance benefited from a large window size and the best accuracy (46.8%) was achieved at window size 15.

In (Leacock et al., 1998), they used a Bayesian approach for sense disambiguation of three ambiguous words, “hard”, “line”, and “serve”, based on cues from topical and local context. They observed that local context was more reliable than topical context as an indicator of senses for this verb and adjective, but slightly less reliable for this noun. Compared with their conclusion, we find that our result is consistent with it for “hard”, but there are some differences for the verb “serve” and the noun “line”. For “serve”, the possible reason is that we do not use the position of local words and part-of-speech information, which may deteriorate the performance when local context (≤ 5 words) is used. For “line”, the reason might come from the feature subset, which is not good enough to provide improvement when the context window size is no less than 5.

Table 3: Mutual information between feature subset and class label with χ2 based feature ranking.

  Word      Cont.   Size of feature   MI × 10^−2   Size of feature    MI × 10^−2
            wind.   subset of CGD                  subset of FSGMM
            size
  hard        1           18             6.4495           14             8.1070
              5          100             0.4018           80             0.4300
             15          100             0.1362           80             0.1416
             25          133             0.0997          102             0.1003
            all          145             0.0937          107             0.0890
  interest    1           64             1.9697           55             2.0639
              5          100             0.3234           89             0.3355
             15          157             0.1558          124             0.1531
             25          190             0.1230          138             0.1267
            all          200             0.1163          140             0.1191
  line        1           39             4.2089           32             4.6456
              5          100             0.4628           84             0.4871
             15          183             0.1488          128             0.1429
             25          263             0.1016          163             0.0962
            all          351             0.0730          192             0.0743
  serve       1           22             6.8169           20             6.7043
              5          100             0.5057           85             0.5227
             15          188             0.2078          164             0.2094
             25          255             0.1503          225             0.1536
            all          320             0.1149          244             0.1260

Table 4: Mutual information between feature subset and class label with freq based feature ranking.

  Word      Cont.   Size of feature   MI × 10^−2   Size of feature    MI × 10^−2
            wind.   subset of CGD                  subset of FSGMM
            size
  hard        1           18             6.4495           14             8.1070
              5          100             0.4194           80             0.4832
             15          100             0.1647           80             0.1774
             25          133             0.1150          102             0.1259
            all          145             0.1064          107             0.1269
  interest    1           64             1.9697           55             2.7051
              5          100             0.6015           89             0.8309
             15          157             0.2526          124             0.3495
             25          190             0.1928          138             0.2982
            all          200             0.1811          140             0.2699
  line        1           39             4.2089           32             4.4606
              5          100             0.6895           84             0.7816
             15          183             0.2301          128             0.2929
             25          263             0.1498          163             0.2181
            all          351             0.1059          192             0.1630
  serve       1           22             6.8169           20             7.0021
              5          100             0.7045           85             0.8422
             15          188             0.2763          164             0.3418
             25          255             0.1901          225             0.2734
            all          320             0.1490          244             0.2309

Table 5: Average accuracy of the three procedures with various settings over the 4 datasets.

  Algorithm   Feature ranking   Feature weighting   Average
              method            method              accuracy
  FSGMM         χ2                binary             0.554
  CGDterm       χ2                binary             0.404
  CGDterm       χ2                idf                0.407
  CGDterm       χ2                tf · idf           0.409
  CGDSVD        χ2                binary             0.513
  CGDSVD        χ2                idf                0.512
  CGDSVD        χ2                tf · idf           0.508
  FSGMM         freq              binary             0.512
  CGDterm       freq              binary             0.451
  CGDterm       freq              idf                0.437
  CGDterm       freq              tf · idf           0.447
  CGDSVD        freq              binary             0.502
  CGDSVD        freq              idf                0.498
  CGDSVD        freq              tf · idf           0.485

Table 6: Automatically determined mixture component number.

  Word      Context window   Model order   Model order
            size             with χ2       with freq
  hard        1                  3              4
              5                  2              2
             15                  2              3
             25                  2              3
            all                  2              3
  interest    1                  5              4
              5                  3              4
             15                  4              6
             25                  4              6
            all                  3              4
  line        1                  5              6
              5                  4              3
             15                  5              4
             25                  5              4
            all                  3              4
  serve       1                  3              3
              5                  3              4
             15                  3              3
             25                  3              3
            all                  2              4

4 Related Work

Besides the two works (Pantel and Lin, 2002; Schütze, 1998), there are other related efforts on word sense discrimination (Dorow and Widdows, 2003; Fukumoto and Suzuki, 1999; Pedersen and Bruce, 1997).

In (Pedersen and Bruce, 1997), they described an experimental comparison of three clustering algorithms for word sense discrimination.
Their feature sets included morphology of target word, part of speech of contextual words, absence or presence of particular contextual words, and collocation of fre0 1 5 15 25 all 0.4 0.5 0.6 0.7 0.8 0.9 Hard dataset Accuracy 0 1 5 15 25 all 0.2 0.3 0.4 0.5 0.6 Accuracy Interest dataset 0 1 5 15 25 all 0.2 0.3 0.4 0.5 0.6 0.7 Line dataset Accuracy 0 1 5 15 25 all 0.3 0.35 0.4 0.45 0.5 0.55 0.6 Serve dataset Accuracy Figure 1: Results for three procedures over 4 datases. The horizontal axis corresponds to the context window size. Solid line represents the result of FSGMM + binary, dashed line denotes the result of CGDSV D + idf, and dotted line is the result of CGDterm + idf. Square marker denotes χ2 based feature ranking, while cross marker denotes freq based feature ranking. 0 1 5 15 25 all 0.3 0.35 0.4 0.45 0.5 0.55 0.6 0.65 0.7 0.75 0.8 Average Accuracy Hard dataset Interest dataset Line dataset Serve dataset Figure 2: Average accuracy over three procedures in Figure 1 as a function of context window size (horizontal axis) for 4 datasets. quent words. Then occurrences of target word were grouped into a pre-defined number of clusters. Similar with many other algorithms, their algorithm also required the cluster number to be provided. In (Fukumoto and Suzuki, 1999), a term weight learning algorithm was proposed for verb sense disambiguation, which can automatically extract nouns co-occurring with verbs and identify the number of senses of an ambiguous verb. The weakness of their method is to assume that nouns co-occurring with verbs are disambiguated in advance and the number of senses of target verb is no less than two. The algorithm in (Dorow and Widdows, 2003) represented target noun word, its neighbors and their relationships using a graph in which each node denoted a noun and two nodes had an edge between them if they co-occurred with more than a given number of times. Then senses of target word were iteratively learned by clustering the local graph of similar words around target word. Their algorithm required a threshold as input, which controlled the number of senses. 5 Conclusion and Future Work Our word sense learning algorithm combined two novel ingredients: feature selection and order identification. Feature selection was formalized as a constrained optimization problem, the output of which was a set of important features to determine word senses. Both cluster structure and cluster number were estimated by minimizing a MDL criterion. Experimental results showed that our algorithm can retrieve important features, estimate cluster number automatically, and achieve better performance in terms of average accuracy than CGD algorithm which required cluster number as input. Our word sense learning algorithm is unsupervised in two folds: no requirement of sense tagged data, and no requirement of predefinition of sense number, which enables the automatic discovery of word senses from free text. In our algorithm, we treat bag of words in local contexts as features. It has been shown that local collocations and morphology of target word play important roles in word sense disambiguation or discrimination (Leacock et al., 1998; Widdows, 2003). It is necessary to incorporate these more structural information to improve the performance of word sense learning. References Bouman, C. A., Shapiro, M., Cook, G. W., Atkins, C. B., & Cheng, H. (1998) Cluster: An Unsupervsied Algorithm for Modeling Gaussian Mixtures. http://dynamo.ecn.purdue.edu/ ∼bouman/software/cluster/. 
Dash, M., Choi, K., Scheuermann, P., & Liu, H. (2002) Feature Selection for Clustering - A Filter Solution. Proc. of IEEE Int. Conf. on Data Mining(pp. 115– 122). Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977) Maximum likelihood from incomplete data using the EM algorithm. Journal of the Royal Statistical Society, 39(B). Dorow, B, & Widdows, D. (2003) Discovering CorpusSpecific Word Senses. Proc. of the 10th Conf. of the European Chapter of the Association for Computational Linguistics, Conference Companion (research notes and demos)(pp.79–82). Dy, J. G., & Brodley, C. E. (2000) Feature Subset Selection and Order Identification for Unsupervised Learning. Proc. of the 17th Int. Conf. on Machine Learning(pp. 247–254). Fukumoto, F., & Suzuki, Y. (1999) Word Sense Disambiguation in Untagged Text Based on Term Weight Learning. Proc. of the 9th Conf. of European Chapter of the Association for Computational Linguistics(pp. 209–216). Ide, N., & V´eronis, J. (1998) Word Sense Disambiguation: The State of the Art. Computational Linguistics, 24:1, 1–41. Lange, T., Braun, M., Roth, V., & Buhmann, J. M. (2002) Stability-Based Model Selection. Advances in Neural Information Processing Systems 15. Law, M. H., Figueiredo, M., & Jain, A. K. (2002) Feature Selection in Mixture-Based Clustering. Advances in Neural Information Processing Systems 15. Leacock, C., Chodorow, M., & Miller A. G. (1998) Using Corpus Statistics and WordNet Relations for Sense Identification. Computational Linguistics, 24:1, 147– 165. Levine, E., & Domany, E. (2001) Resampling Method for Unsupervised Estimation of Cluster Validity. Neural Computation, Vol. 13, 2573–2593. Mitra, P., Murthy, A. C., & Pal, K. S. (2002) Unsupervised Feature Selection Using Feature Similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24:4, 301–312. Modha, D. S., & Spangler, W. S. (2003) Feature Weighting in k-Means Clustering. Machine Learning, 52:3, 217–237. Pantel, P. & Lin, D. K. (2002) Discovering Word Senses from Text. Proc. of ACM SIGKDD Conf. on Knowledge Discovery and Data Mining(pp. 613-619). Pedersen, T., & Bruce, R. (1997) Distinguishing Word Senses in Untagged Text. Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing(pp. 197–207). Pudil, P., Novovicova, J., & Kittler, J. (1994) Floating Search Methods in Feature Selection. Pattern Recognigion Letters, Vol. 15, 1119-1125. Rissanen, J. (1978) Modeling by Shortest Data Description. Automatica, Vol. 14, 465–471. Sch¨utze, H. (1998) Automatic Word Sense Discrimination. Computational Linguistics, 24:1, 97–123. Talavera, L. (1999) Feature Selection as a Preprocessing Step for Hierarchical Clustering. Proc. of the 16th Int. Conf. on Machine Learning(pp. 389–397). Widdows, D. (2003) Unsupervised methods for developing taxonomies by combining syntactic and statistical information. Proc. of the Human Language Technology / Conference of the North American Chapter of the Association for Computational Linguistics(pp. 276–283).
A Kernel PCA Method for Superior Word Sense Disambiguation Dekai WU1 Weifeng SU Marine CARPUAT [email protected] [email protected] [email protected] Human Language Technology Center HKUST Department of Computer Science University of Science and Technology Clear Water Bay, Hong Kong Abstract We introduce a new method for disambiguating word senses that exploits a nonlinear Kernel Principal Component Analysis (KPCA) technique to achieve accuracy superior to the best published individual models. We present empirical results demonstrating significantly better accuracy compared to the state-of-the-art achieved by either na¨ıve Bayes or maximum entropy models, on Senseval-2 data. We also contrast against another type of kernel method, the support vector machine (SVM) model, and show that our KPCA-based model outperforms the SVM-based model. It is hoped that these highly encouraging first results on KPCA for natural language processing tasks will inspire further development of these directions. 1 Introduction Achieving higher precision in supervised word sense disambiguation (WSD) tasks without resorting to ad hoc voting or similar ensemble techniques has become somewhat daunting in recent years, given the challenging benchmarks set by na¨ıve Bayes models (e.g., Mooney (1996), Chodorow et al. (1999), Pedersen (2001), Yarowsky and Florian (2002)) as well as maximum entropy models (e.g., Dang and Palmer (2002), Klein and Manning (2002)). A good foundation for comparative studies has been established by the Senseval data and evaluations; of particular relevance here are the lexical sample tasks from Senseval-1 (Kilgarriff and Rosenzweig, 1999) and Senseval-2 (Kilgarriff, 2001). We therefore chose this problem to introduce an efficient and accurate new word sense disambiguation approach that exploits a nonlinear Kernel PCA technique to make predictions implicitly based on generalizations over feature combinations. The 1The author would like to thank the Hong Kong Research Grants Council (RGC) for supporting this research in part through grants RGC6083/99E, RGC6256/00E, and DAG03/04.EG09. technique is applicable whenever vector representations of a disambiguation task can be generated; thus many properties of our technique can be expected to be highly attractive from the standpoint of natural language processing in general. In the following sections, we first analyze the potential of nonlinear principal components with respect to the task of disambiguating word senses. Based on this, we describe a full model for WSD built on KPCA. We then discuss experimental results confirming that this model outperforms stateof-the-art published models for Senseval-related lexical sample tasks as represented by (1) na¨ıve Bayes models, as well as (2) maximum entropy models. We then consider whether other kernel methods—in particular, the popular SVM model— are equally competitive, and discover experimentally that KPCA achieves higher accuracy than the SVM model. 2 Nonlinear principal components and WSD The Kernel Principal Component Analysis technique, or KPCA, is a nonlinear kernel method for extraction of nonlinear principal components from vector sets in which, conceptually, the ndimensional input vectors are nonlinearly mapped from their original space Rn to a high-dimensional feature space F where linear PCA is performed, yielding a transform by which the input vectors can be mapped nonlinearly to a new set of vectors (Sch¨olkopf et al., 1998). 
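Conceptually, then, KPCA amounts to applying an explicit nonlinear map Φ and running ordinary linear PCA in the resulting feature space. The toy sketch below (not the paper's method; the degree-2 polynomial map is just an illustrative choice) makes this picture concrete, and also suggests why an explicit map quickly becomes impractical for high-dimensional inputs, motivating the kernel formulation developed in the following sections.

```python
# Conceptual sketch only: KPCA pictured as an explicit nonlinear map Phi followed by
# ordinary linear PCA in the feature space F.  With an explicit degree-2 polynomial
# map this is only feasible for very low input dimensions.
import numpy as np

def phi_poly2(x):
    """Explicit degree-2 polynomial feature map for a small input vector x."""
    feats = list(x)
    n = len(x)
    for i in range(n):
        for j in range(i, n):
            feats.append(x[i] * x[j])
    return np.array(feats)

def linear_pca(Z, n_components):
    Zc = Z - Z.mean(axis=0)                      # center in feature space
    cov = Zc.T @ Zc / len(Zc)
    vals, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return Zc @ vecs[:, order]                   # projections on the top components

X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 0], [0, 0, 1]], dtype=float)
Z = np.vstack([phi_poly2(x) for x in X])         # map to F, then do PCA there
print(linear_pca(Z, 2))
```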
A major advantage of KPCA is that, unlike other common analysis techniques, as with other kernel methods it inherently takes combinations of predictive features into account when optimizing dimensionality reduction. For natural language problems in general, of course, it is widely recognized that significant accuracy gains can often be achieved by generalizing over relevant feature combinations (e.g., Kudo and Matsumoto (2003)). Another advantage of KPCA for the WSD task is that the dimensionality of the input data is generally very large, a condition where kernel methods excel.

Table 1: Two of the Senseval-2 sense classes for the target word "art", from WordNet 1.7 (Fellbaum 1998).

  Class   Sense
  1       the creation of beautiful or significant things
  2       a superior skill

Nonlinear principal components (Diamantaras and Kung, 1996) may be defined as follows. Suppose we are given a training set of M pairs (x_t, c_t) where the observed vectors x_t ∈ R^n in an n-dimensional input space X represent the context of the target word being disambiguated, and the correct class c_t represents the sense of the word, for t = 1, .., M. Suppose Φ is a nonlinear mapping from the input space R^n to the feature space F. Without loss of generality we assume the M vectors are centered vectors in the feature space, i.e., \sum_{t=1}^{M} \Phi(x_t) = 0; uncentered vectors can easily be converted to centered vectors (Schölkopf et al., 1998). We wish to diagonalize the covariance matrix in F:

C = \frac{1}{M} \sum_{j=1}^{M} \Phi(x_j)\, \Phi^T(x_j)   (1)

To do this requires solving the equation λv = Cv for eigenvalues λ ≥ 0 and eigenvectors v ∈ F. Because

Cv = \frac{1}{M} \sum_{j=1}^{M} (\Phi(x_j) \cdot v)\, \Phi(x_j)   (2)

we can derive the following two useful results. First,

\lambda\, (\Phi(x_t) \cdot v) = \Phi(x_t) \cdot Cv   (3)

for t = 1, .., M. Second, there exist α_i for i = 1, ..., M such that

v = \sum_{i=1}^{M} \alpha_i\, \Phi(x_i)   (4)

Combining (1), (3), and (4), we obtain

M \lambda \sum_{i=1}^{M} \alpha_i\, (\Phi(x_t) \cdot \Phi(x_i)) = \sum_{i=1}^{M} \alpha_i \Big( \Phi(x_t) \cdot \sum_{j=1}^{M} \Phi(x_j)\, (\Phi(x_j) \cdot \Phi(x_i)) \Big)

for t = 1, .., M. Let K̂ be the M × M matrix such that

\hat{K}_{ij} = \Phi(x_i) \cdot \Phi(x_j)   (5)

and let λ̂_1 ≥ λ̂_2 ≥ . . . ≥ λ̂_M denote the eigenvalues of K̂ and α̂_1, ..., α̂_M denote the corresponding complete set of normalized eigenvectors, such that λ̂_t (α̂_t · α̂_t) = 1 when λ̂_t > 0. Then the lth nonlinear principal component of any test vector x_t is defined as

y^l_t = \sum_{i=1}^{M} \hat{\alpha}^l_i\, (\Phi(x_i) \cdot \Phi(x_t))   (6)

where α̂^l_i is the lth element of α̂^l.

To illustrate the potential of nonlinear principal components for WSD, consider a simplified disambiguation example for the ambiguous target word "art", with the two senses shown in Table 1. Assume a training corpus of the eight sentences as shown in Table 2, adapted from the Senseval-2 English lexical sample corpus. For each sentence, we show the feature set associated with that occurrence of "art" and the correct sense class. These eight occurrences of "art" can be transformed to a binary vector representation containing one dimension for each feature, as shown in Table 3. Extracting nonlinear principal components for the vectors in this simple corpus results in nonlinear generalization, reflecting an implicit consideration of combinations of features. Table 3 shows the first three dimensions of the principal component vectors obtained by transforming each of the eight training vectors x_t into (a) principal component vectors z_t using the linear transform obtained via PCA, and (b) nonlinear principal component vectors y_t using the nonlinear transform obtained via KPCA as described below.
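In contrast to an explicit feature map, Equations (5)–(6) only ever touch dot products Φ(x_i)·Φ(x_j). The sketch below (our own illustration, not the authors' code) computes the eigenvectors α̂ of K̂ with the normalization λ̂(α̂·α̂) = 1 and projects a vector with Equation (6). For simplicity it takes Φ to be the identity map and assumes the vectors are already centered, so the nonlinearity only enters once a kernel function is substituted, as discussed in Section 3.

```python
# Minimal sketch of equations (5)-(6).  Phi is taken to be the identity map here, so
# K^_ij is just a dot product, and the vectors are assumed centered (the paper notes
# that uncentered vectors can be converted following Scholkopf et al., 1998).
import numpy as np

def kpca_train(X):
    """X: (M, n) training matrix.  Returns (alphas, eigenvalues); column l of alphas
    is the eigenvector alpha^_l normalized so that lambda^_l (alpha . alpha) = 1."""
    K = X @ X.T                                   # K^_ij = Phi(x_i) . Phi(x_j)
    vals, vecs = np.linalg.eigh(K)                # ascending order
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    alphas = np.zeros_like(vecs)
    for l, lam in enumerate(vals):
        if lam > 1e-12:
            alphas[:, l] = vecs[:, l] / np.sqrt(lam)   # enforce lam * (alpha . alpha) = 1
    return alphas, vals

def kpca_project(X_train, alphas, x, n_components):
    """y_t^l = sum_i alpha^_i^l (Phi(x_i) . Phi(x_t)), equation (6)."""
    k = X_train @ x
    return alphas[:, :n_components].T @ k

X = np.array([[0, 0, 0, 0, 0],
              [0, 1, 1, 1, 1],
              [1, 0, 0, 1, 0],
              [0, 0, 1, 0, 0]], dtype=float)
X = X - X.mean(axis=0)                            # crude centering of the raw vectors
alphas, vals = kpca_train(X)
print(kpca_project(X, alphas, X[1], 3))           # first three components of x_2
```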
Similarly, for the test vector x9, Table 4 shows the first three dimensions of the principal component vectors obtained by transforming it into (a) a principal component vector z9 using the linear PCA transform obtained from training, and (b) a nonlinear principal component vector y9 using the nonlinear KPCA transform obtained obtained from training. The vector similarities in the KPCA-transformed space can be quite different from those in the PCAtransformed space. This causes the KPCA-based model to be able to make the correct class prediction, whereas the PCA-based model makes the Table 2: A tiny corpus for the target word “art”, adapted from the Senseval-2 English lexical sample corpus (Kilgarriff 2001), together with a tiny example set of features. The training and testing examples can be represented as a set of binary vectors: each row shows the correct class c for an observed vector x of five dimensions. TRAINING design/N media/N the/DT entertainment/N world/N Class x1 He studies art in London. 1 x2 Punch’s weekly guide to the world of the arts, entertainment, media and more. 1 1 1 1 x3 All such studies have influenced every form of art, design, and entertainment in some way. 1 1 1 x4 Among the technical arts cultivated in some continental schools that began to affect England soon after the Norman Conquest were those of measurement and calculation. 1 2 x5 The Art of Love. 1 2 x6 Indeed, the art of doctoring does contribute to better health results and discourages unwarranted malpractice litigation. 1 2 x7 Countless books and classes teach the art of asserting oneself. 1 2 x8 Pop art is an example. 1 TESTING x9 In the world of design arts particularly, this led to appointments made for political rather than academic reasons. 1 1 1 1 wrong class prediction. What permits KPCA to apply stronger generalization biases is its implicit consideration of combinations of feature information in the data distribution from the high-dimensional training vectors. In this simplified illustrative example, there are just five input dimensions; the effect is stronger in more realistic high dimensional vector spaces. Since the KPCA transform is computed from unsupervised training vector data, and extracts generalizations that are subsequently utilized during supervised classification, it is quite possible to combine large amounts of unsupervised data with reasonable smaller amounts of supervised data. It can be instructive to attempt to interpret this example graphically, as follows, even though the interpretation in three dimensions is severely limiting. Figure 1(a) depicts the eight original observed training vectors xt in the first three of the five dimensions; note that among these eight vectors, there happen to be only four unique points when restricting our view to these three dimensions. Ordinary linear PCA can be straightforwardly seen as projecting the original points onto the principal axis, Table 3: The original observed training vectors (showing only the first three dimensions) and their first three principal components as transformed via PCA and KPCA. 
Observed vectors PCA-transformed vectors KPCA-transformed vectors Class t (x1 t, x2 t , x3 t ) (z1 t , z2 t , z3 t ) (y1 t , y2 t , y3 t ) ct 1 (0, 0, 0) (-1.961, 0.2829, 0.2014) (0.2801, -1.005, -0.06861) 1 2 (0, 1, 1) (1.675, -1.132, 0.1049) (1.149, 0.02934, 0.322) 1 3 (1, 0, 0) (-0.367, 1.697, -0.2391) (0.8209, 0.7722, -0.2015) 1 4 (0, 0, 1) (-1.675, -1.132, -0.1049) (-1.774, -0.1216, 0.03258) 2 5 (0, 0, 1) (-1.675, -1.132, -0.1049) (-1.774, -0.1216, 0.03258) 2 6 (0, 0, 1) (-1.675, -1.132, -0.1049) (-1.774, -0.1216, 0.03258) 2 7 (0, 0, 1) (-1.675, -1.132, -0.1049) (-1.774, -0.1216, 0.03258) 2 8 (0, 0, 0) (-1.961, 0.2829, 0.2014) (0.2801, -1.005, -0.06861) 1 Table 4: Testing vector (showing only the first three dimensions) and its first three principal components as transformed via the trained PCA and KPCA parameters. The PCA-based and KPCA-based sense class predictions disagree. Observed vectors PCA-transformed vectors KPCA-transformed vectors Predicted Class Correct Class t (x1 t, x2 t , x3 t ) (z1 t , z2 t , z3 t ) (y1 t , y2 t , y3 t ) ˆct ct 9 (1, 0, 1) (-0.3671, -0.5658, -0.2392) 2 1 9 (1, 0, 1) (4e-06, 8e-07, 1.111e-18) 1 1 as can be seen for the case of the first principal axis in Figure 1(b). Note that in this space, the sense 2 instances are surrounded by sense 1 instances. We can traverse each of the projections onto the principal axis in linear order, simply by visiting each of the first principal components z1 t along the principle axis in order of their values, i.e., such that z1 1 ≤z1 8 ≤z1 4 ≤z1 5 ≤z1 6 ≤z1 7 ≤z1 2 ≤z1 3 ≤z1 9 It is significantly more difficult to visualize the nonlinear principal components case, however. Note that in general, there may not exist any principal axis in X, since an inverse mapping from F may not exist. If we attempt to follow the same procedure to traverse each of the projections onto the first principal axis as in the case of linear PCA, by considering each of the first principal components y1 t in order of their value, i.e., such that y1 4 ≤y1 5 ≤y1 6 ≤y1 7 ≤y1 9 ≤y1 1 ≤y1 8 ≤y1 3 ≤y1 2 then we must arbitrarily select a “quasi-projection” direction for each y1 t since there is no actual principal axis toward which to project. This results in a “quasi-axis” roughly as shown in Figure 1(c) which, though not precisely accurate, provides some idea as to how the nonlinear generalization capability allows the data points to be grouped by principal components reflecting nonlinear patterns in the data distribution, in ways that linear PCA cannot do. Note that in this space, the sense 1 instances are already better separated from sense 2 data points. Moreover, unlike linear PCA, there may be up to M of the “quasi-axes”, which may number far more than five. Such effects can become pronounced in the high dimensional spaces are actually used for real word sense disambiguation tasks. 3 A KPCA-based WSD model To extract nonlinear principal components efficiently, note that in both Equations (5) and (6) the explicit form of Φ (xi) is required only in the form of (Φ (xi)·Φ (xj)), i.e., the dot product of vectors in F. This means that we can calculate the nonlinear principal components by substituting a kernel function k(xi, xj) for (Φ( xi) · Φ(xj )) in Equations (5) and (6) without knowing the mapping Φ explicitly; instead, the mapping Φ is implicitly defined by the kernel function. 
It is always possible to construct a mapping into a space where k acts as a dot product so long as k is a continuous kernel of a positive integral operator (Schölkopf et al., 1998).

Figure 1: Original vectors, PCA projections, and KPCA "quasi-projections" (see text). [The three panels (a)-(c) plot the nine examples in the design/N, media/N, and the/DT dimensions: (a) the original training and test vectors, (b) their projections onto the first principal axis under linear PCA, and (c) the first principal "quasi-axis" under KPCA; the legend marks training examples by sense class and the test example by its predicted sense.]

Table 5: Experimental results showing that the KPCA-based model performs significantly better than naïve Bayes and maximum entropy models. Significance intervals are computed via bootstrap resampling.
  WSD Model           Accuracy  Sig. Int.
  naïve Bayes         63.3%     +/-0.91%
  maximum entropy     63.8%     +/-0.79%
  KPCA-based model    65.8%     +/-0.79%

Thus we train the KPCA model using the following algorithm:

1. Compute an M × M matrix K̂ such that

  K̂_ij = k(x_i, x_j)    (7)

2. Compute the eigenvalues and eigenvectors of matrix K̂ and normalize the eigenvectors. Let λ̂_1 ≥ λ̂_2 ≥ ... ≥ λ̂_M denote the eigenvalues and α̂_1, ..., α̂_M denote the corresponding complete set of normalized eigenvectors.

To obtain the sense predictions for test instances, we need only transform the corresponding vectors using the trained KPCA model and classify the resultant vectors using nearest neighbors. For a given test instance vector x_t, its lth nonlinear principal component is

  y_t^l = Σ_{i=1}^{M} α̂_i^l k(x_i, x_t)    (8)

where α̂_i^l is the ith element of α̂^l.

For our disambiguation experiments we employ a polynomial kernel function of the form k(x_i, x_j) = (x_i · x_j)^d, although other kernel functions such as Gaussians could be used as well. Note that the degenerate case of d = 1 yields the dot product kernel k(x_i, x_j) = (x_i · x_j), which covers linear PCA as a special case; this may explain why KPCA always outperforms PCA.

4 Experiments

4.1 KPCA versus naïve Bayes and maximum entropy models

We established two baseline models to represent the state of the art for individual WSD models: (1) naïve Bayes, and (2) maximum entropy models. The naïve Bayes model was found to be the most accurate classifier in a comparative study using a subset of Senseval-2 English lexical sample data by Yarowsky and Florian (2002). However, the maximum entropy model (Jaynes, 1978) was found to yield higher accuracy than naïve Bayes in a subsequent comparison by Klein and Manning (2002), who used a different subset of either Senseval-1 or Senseval-2 English lexical sample data. To control for data variation, we built and tuned models of both kinds. Note that our objective in these experiments is to understand the performance and characteristics of KPCA relative to other individual methods. It is not our objective here to compare against voting or other ensemble methods which, though known to be useful in practice (e.g., Yarowsky et al. (2001)), would not add to our understanding. To compare as evenly as possible, we employed features approximating those of the "feature-enhanced naïve Bayes model" of Yarowsky and Florian (2002), which included position-sensitive, syntactic, and local collocational features.
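Before continuing with the feature comparison, it may help to see the whole per-target-word model of Section 3 end to end: build the kernel matrix (step 1), eigendecompose and normalize (step 2), project training and test contexts with equation (8), and label each test instance by its nearest neighbor among the projected training instances. A self-contained sketch under the paper's centering assumption; train_X and test_X are binary context-feature matrices as in Table 2, and the helper name is ours.

```python
import numpy as np

def kpca_wsd_predict(train_X, train_y, test_X, degree=2):
    """Train the KPCA model on one target word's training contexts and label
    each test context by its nearest neighbor in the transformed space."""
    k = lambda A, B: (A @ B.T) ** degree            # polynomial kernel
    K = k(train_X, train_X)                         # step 1, Eq. (7)
    lam, alpha = np.linalg.eigh(K)                  # step 2
    order = np.argsort(lam)[::-1]
    lam, alpha = lam[order], alpha[:, order]
    keep = lam > 1e-12
    alpha = alpha[:, keep] / np.sqrt(lam[keep])     # normalized eigenvectors
    Y_train = K @ alpha                             # training vectors via Eq. (8)
    Y_test = k(test_X, train_X) @ alpha             # test vectors via Eq. (8)
    preds = []
    for y in Y_test:
        nearest = int(np.argmin(np.linalg.norm(Y_train - y, axis=1)))
        preds.append(train_y[nearest])
    return preds
```

On the toy corpus of Table 2, train_X would be the eight 5-dimensional binary vectors and test_X the single vector for x9; with realistic feature sets the vectors are far longer, but the code is unchanged.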
The models in the comparative study by Klein and Manning (2002) did not include such features, and so, again for consistency of comparison, we experimentally verified that our maximum entropy model (a) consistently yielded higher scores than when the features were not used, and (b) consistently yielded higher scores than na¨ıve Bayes using the same features, in agreement with Klein and Manning (2002). We also verified the maximum entropy results against several different implementations, using various smoothing criteria, to ensure that the comparison was even. Evaluation was done on the Senseval 2 English lexical sample task. It includes 73 target words, among which nouns, adjectives, adverbs and verbs. For each word, training and test instances tagged with WordNet senses are provided. There are an average of 7.8 senses per target word type. On average 109 training instances per target word are available. Note that we used the set of sense classes from Senseval’s ”fine-grained” rather than ”coarse-grained” classification task. The KPCA-based model achieves the highest accuracy, as shown in Table 5, followed by the maximum entropy model, with na¨ıve Bayes doing the poorest. Bear in mind that all of these models are significantly more accurate than any of the other reported models on Senseval. “Accuracy” here refers to both precision and recall since disambiguation of all target words in the test set is attempted. Results are statistically significant at the 0.10 level, using bootstrap resampling (Efron and Tibshirani, 1993); moreover, we consistently witnessed the same level of accuracy gains from the KPCA-based model over Table 6: Experimental results comparing the KPCA-based model versus the SVM model. WSD Model Accuracy Sig. Int. SVM-based model 65.2% +/-1.00% KPCA-based model 65.8% +/-0.79% many variations of the experiments. 4.2 KPCA versus SVM models Support vector machines (e.g., Vapnik (1995), Joachims (1998)) are a different kind of kernel method that, unlike KPCA methods, have already gained high popularity for NLP applications (e.g., Takamura and Matsumoto (2001), Isozaki and Kazawa (2002), Mayfield et al. (2003)) including the word sense disambiguation task (e.g., Cabezas et al. (2001)). Given that SVM and KPCA are both kernel methods, we are frequently asked whether SVM-based WSD could achieve similar results. To explore this question, we trained and tuned an SVM model, providing the same rich set of features and also varying the feature representations to optimize for SVM biases. As shown in Table 6, the highest-achieving SVM model is also able to obtain higher accuracies than the na¨ıve Bayes and maximum entropy models. However, in all our experiments the KPCA-based model consistently outperforms the SVM model (though the margin falls within the statistical significance interval as computed by bootstrap resampling for this single experiment). The difference in KPCA and SVM performance is not surprising given that, aside from the use of kernels, the two models share little structural resemblance. 4.3 Running times Training and testing times for the various model implementations are given in Table 7, as reported by the Unix time command. Implementations of all models are in C++, but the level of optimization is not controlled. For example, no attempt was made to reduce the training time for na¨ıve Bayes, or to reduce the testing time for the KPCA-based model. 
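As an aside on methodology: the significance intervals in Tables 5 and 6 come from bootstrap resampling over the test instances (Efron and Tibshirani, 1993). The paper does not spell out the exact variant used, so the sketch below assumes a simple percentile bootstrap over per-instance correctness; the function and parameter names are ours.

```python
import numpy as np

def bootstrap_accuracy_interval(gold, pred, n_boot=1000, alpha=0.10, seed=0):
    """Percentile-bootstrap confidence interval for WSD accuracy: resample the
    test instances with replacement and recompute accuracy on each resample."""
    rng = np.random.default_rng(seed)
    correct = (np.asarray(gold) == np.asarray(pred)).astype(float)
    accs = [correct[rng.integers(0, len(correct), len(correct))].mean()
            for _ in range(n_boot)]
    lo, hi = np.percentile(accs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return correct.mean(), (lo, hi)
```

Two systems can be compared the same way by bootstrapping the difference in their per-instance correctness.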
Nevertheless, we can note that in the operating range of the Senseval lexical sample task, the running times of the KPCA-based model are roughly within the same order of magnitude as for na¨ıve Bayes or maximum entropy. On the other hand, training is much faster than the alternative kernel method based on SVMs. However, the KPCAbased model’s times could be expected to suffer in situations where significantly larger amounts of Table 7: Comparison of training and testing times for the different WSD model implementations. WSD Model Training time [CPU sec] Testing time [CPU sec] na¨ıve Bayes 103.41 16.84 maximum entropy 104.62 59.02 SVM-based model 5024.34 16.21 KPCA-based model 216.50 128.51 training data are available. 5 Conclusion This work represents, to the best of our knowledge, the first application of Kernel PCA to a true natural language processing task. We have shown that a KPCA-based model can significantly outperform state-of-the-art results from both na¨ıve Bayes as well as maximum entropy models, for supervised word sense disambiguation. The fact that our KPCA-based model outperforms the SVMbased model indicates that kernel methods other than SVMs deserve more attention. Given the theoretical advantages of KPCA, it is our hope that this work will encourage broader recognition, and further exploration, of the potential of KPCA modeling within NLP research. Given the positive results, we plan next to combine large amounts of unsupervised data with reasonable smaller amounts of supervised data such as the Senseval lexical sample. Earlier we mentioned that one of the promising advantages of KPCA is that it computes the transform purely from unsupervised training vector data. We can thus make use of the vast amounts of cheap unannotated data to augment the model presented in this paper. References Clara Cabezas, Philip Resnik, and Jessica Stevens. Supervised sense tagging using support vector machines. In Proceedings of Senseval-2, Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 59–62, Toulouse, France, July 2001. SIGLEX, Association for Computational Linguistics. Martin Chodorow, Claudia Leacock, and George A. Miller. A topical/local classifier for word sense identification. Computers and the Humanities, 34(1-2):115–120, 1999. Special issue on SENSEVAL. Hoa Trang Dang and Martha Palmer. Combining contextual features for word sense disambiguation. In Proceedings of the SIGLEX/SENSEVAL Workshop on Word Sense Disambiguation: Recent Successes and Future Directions, pages 88– 94, Philadelphia, July 2002. SIGLEX, Association for Computational Linguistics. Konstantinos I. Diamantaras and Sun Yuan Kung. Principal Component Neural Networks. Wiley, New York, 1996. Bradley Efron and Robert J. Tibshirani. An Introduction to the Bootstrap. Chapman and Hall, 1993. Hideki Isozaki and Hideto Kazawa. Efficient support vector classifiers for named entity recognition. In Proceedings of COLING-2002, pages 390–396, Taipei, 2002. E.T. Jaynes. Where do we Stand on Maximum Entropy? MIT Press, Cambridge MA, 1978. Thorsten Joachims. Text categorization with support vector machines: Learning with many relevant features. In Proceedings of ECML-98, 10th European Conference on Machine Learning, pages 137–142, 1998. Adam Kilgarriff and Joseph Rosenzweig. Framework and results for English Senseval. Computers and the Humanities, 34(1):15–48, 1999. Special issue on SENSEVAL. Adam Kilgarriff. English lexical sample task description. 
In Proceedings of Senseval-2, Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 17–20, Toulouse, France, July 2001. SIGLEX, Association for Computational Linguistics. Dan Klein and Christopher D. Manning. Conditional structure versus conditional estimation in NLP models. In Proceedings of EMNLP2002, Conference on Empirical Methods in Natural Language Processing, pages 9–16, Philadelphia, July 2002. SIGDAT, Association for Computational Linguistics. Taku Kudo and Yuji Matsumoto. Fast methods for kernel-based text analysis. In Proceedings of the 41set Annual Meeting of the Asoociation for Computational Linguistics, pages 24–31, 2003. James Mayfield, Paul McNamee, and Christine Piatko. Named entity recognition using hundreds of thousands of features. In Walter Daelemans and Miles Osborne, editors, Proceedings of CoNLL2003, pages 184–187, Edmonton, Canada, 2003. Raymond J. Mooney. Comparative experiments on disambiguating word senses: An illustration of the role of bias in machine learning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Philadelphia, May 1996. SIGDAT, Association for Computational Linguistics. Ted Pedersen. Machine learning with lexical features: The Duluth approach to SENSEVAL-2. In Proceedings of Senseval-2, Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 139–142, Toulouse, France, July 2001. SIGLEX, Association for Computational Linguistics. Bernhard Sch¨olkopf, Alexander Smola, and KlausRober M¨uller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5), 1998. Hiroya Takamura and Yuji Matsumoto. Feature space restructuring for SVMs with application to text categorization. In Proceedings of EMNLP2001, Conference on Empirical Methods in Natural Language Processing, pages 51–57, 2001. Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 1995. David Yarowsky and Radu Florian. Evaluating sense disambiguation across diverse parameter spaces. Natural Language Engineering, 8(4):293–310, 2002. David Yarowsky, Silviu Cucerzan, Radu Florian, Charles Schafer, and Richard Wicentowski. The Johns Hopkins SENSEVAL2 system descriptions. In Proceedings of Senseval-2, Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 163–166, Toulouse, France, July 2001. SIGLEX, Association for Computational Linguistics.
Using linguistic principles to recover empty categories Richard CAMPBELL Microsoft Research One Microsoft Way Redmond, WA 98052 USA [email protected] Abstract This paper describes an algorithm for detecting empty nodes in the Penn Treebank (Marcus et al., 1993), finding their antecedents, and assigning them function tags, without access to lexical information such as valency. Unlike previous approaches to this task, the current method is not corpus-based, but rather makes use of the principles of early Government-Binding theory (Chomsky, 1981), the syntactic theory that underlies the annotation. Using the evaluation metric proposed by Johnson (2002), this approach outperforms previously published approaches on both detection of empty categories and antecedent identification, given either annotated input stripped of empty categories or the output of a parser. Some problems with this evaluation metric are noted and an alternative is proposed along with the results. The paper considers the reasons a principlebased approach to this problem should outperform corpus-based approaches, and speculates on the possibility of a hybrid approach. 1 Introduction Many recent approaches to parsing (e.g. Charniak, 2000) have focused on labeled bracketing of the input string, ignoring aspects of structure that are not reflected in the string, such as phonetically null elements and long-distance dependencies, many of which provide important semantic information such as predicate-argument structure. In the Penn Treebank (Marcus et al., 1993), null elements, or empty categories, are used to indicate non-local dependencies, discontinuous constituents, and certain missing elements. Empty categories are coindexed with their antecedents in the same sentence. In addition, if a node has a particular grammatical function (such as subject) or semantic role (such as location), it has a function tag indicating that role; empty categories may also have function tags. Thus in the sentence below, who is coindexed with the empty category *T* in the embedded S; the function tag SBJ indicates that this empty category is the subject of that S: [WHNP-1 who] NP want [S [NP-SBJ-1*T*] to VP] Empty categories, with coindexation and function tags, allow a transparent reconstruction of predicate-argument structure not available from a simple bracketed string. In addition to bracketing the input string, a fully adequate syntactic analyzer should also locate empty categories in the parse tree, identify their antecedents, if any, and assign them appropriate function tags. State-of-the-art statistical parsers (e.g. Charniak, 2000) typically provide a labeled bracketing only; i.e., a parse tree without empty categories. This paper describes an algorithm for inserting empty categories in such impoverished trees, coindexing them with their antecedents, and assigning them function tags. This is the first approach to include function tag assignment as part of the more general task of empty category recovery. Previous approaches to the problem (Collins, 1997; Johnson, 2002; Dienes and Dubey, 2003a,b; Higgins, 2003) have all been learning-based; the primary difference between the present algorithm and earlier ones is that it is not learned, but explicitly incorporates principles of GovernmentBinding theory (Chomsky, 1981), since that theory underlies the annotation. 
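To make the running example concrete, the sketch below reads a simplified PTB-style bracketing, treats any node whose single child is a token beginning with '*' as an empty category, and pairs it with the overt constituent carrying the same numeric index. It follows the simplified notation of the example above, where the index sits on the empty node's label; in the treebank files the empty leaf actually has a -NONE- preterminal and the index can be attached to the trace token itself, so a real implementation would handle both. The function names and the example tree are ours.

```python
import re

def parse_brackets(s):
    """Parse '(LABEL child child ...)' into (label, [children]); leaves are strings."""
    tokens = re.findall(r"\(|\)|[^\s()]+", s)
    def walk():
        tok = tokens.pop(0)
        if tok != "(":
            return tok
        label, children = tokens.pop(0), []
        while tokens[0] != ")":
            children.append(walk())
        tokens.pop(0)                      # consume ')'
        return (label, children)
    return walk()

def coindexed_pairs(tree):
    """Pair each empty-category node with the overt node bearing the same index."""
    fillers, empties = {}, []
    def visit(node):
        if isinstance(node, str):
            return
        label, children = node
        m = re.search(r"-(\d+)$", label)
        is_empty = (len(children) == 1 and isinstance(children[0], str)
                    and children[0].startswith("*"))
        if m and is_empty:
            empties.append((int(m.group(1)), node))
        elif m:
            fillers[int(m.group(1))] = node
        for c in children:
            visit(c)
    visit(tree)
    return [(node, fillers.get(i)) for i, node in empties]

tree = parse_brackets(
    "(SBAR (WHNP-1 who) (S (NP-SBJ-1 *T*) (VP (VBP want) (S (VP (TO to) (VP (VB go)))))))")
for empty, antecedent in coindexed_pairs(tree):
    print(empty[0], "<-", antecedent[0])   # prints: NP-SBJ-1 <- WHNP-1
```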
The absence of rulebased approaches up until now is not motivated by the failure of such approaches in this domain; on the contrary, no one seems to have tried a rulebased approach to this problem. Instead it appears that there is an understandable predisposition against rule-based approaches, given the fact that data-driven, especially machine-learning, approaches have worked so much better in many other domains.1 Empty categories however seem different, in that, for the most part, their location and existence is determined, not by observable data, but by explicitly constructed linguistic principles, which 1Both Collins (1997: 19) and Higgins (2003: 100) are explicit about this predisposition. were consciously used in the annotation; i.e., unlike overt words and phrases, which correspond to actual strings in the data, empty categories are in the data only because linguists doing the annotation put them there. This paper therefore explores a rule-based approach to empty category recovery, with two purposes in mind: first, to explore the limits of such an approach; and second, to establish a more realistic baseline for future (possibly data-driven or hybrid) approaches. Although it does not seem likely that any application trying to glean semantic information from a parse tree would care about the exact string position of an empty category, the algorithm described here does try to insert empty categories in the correct position in the string. The main reason for this is to facilitate comparison with previous approaches to the problem, which evaluate accuracy by including such information. In Section 5, however, a revised evaluation metric is proposed that does not depend on string position per se. Before proceeding, a note on terminology is in order. I use the term detection (of empty categories) for the insertion of a labeled empty category into the tree (and/or string), and the term resolution for the coindexation of the empty category with its antecedent(s), if any. The term recovery refers to the complete package: detection, resolution, and assignment of function tags to empty categories. 2 Empty nodes in the Penn Treebank The major types of empty category in the Penn Treebank (PTB) are shown in Table 1, along with their distribution in section 24 of the Wall Street Journal portion of the PTB. Empty category type Count Description NP * 1044 NP trace or PRO NP *T* 265 Trace of WHNP *U* 227 Empty unit 0 178 Empty complementizer ADVP *T* 97 Trace of WHADVP S *T* 76 Trace of topicalized quoted S WHNP 0 43 Null WHNP SBAR 41 Trace of topicalized non-quoted S WHADVP 0 25 Null WHADVP others 95 Total: 2091 Table 1: Common empty categories and their distribution in section 24 of the PTB A detailed description of the categories and their uses in the treebank is provided in Chapter 4 of the annotation guidelines (Bies et al., 1995). Following Johnson (2002) and Dienes and Dubey (2003a), the compound empty SBAR consisting of an empty complementizer followed by *T* of category S is treated as a single item for purposes of evaluation. This compound category is labeled SBAR in Table 1. The PTB annotation in general, but especially the annotation of empty categories, follows a modified version of Government-Binding (GB) theory (Chomsky, 1981). In GB, the existence and location of empty categories is determined by the interaction of linguistic principles. 
In addition, the type of a given empty category is determined by its syntactic context, with the result that the various types of empty category are in complementary distribution. For example, the GB categories NPtrace and PRO (which are conflated to a single category in the PTB) occur only in argument positions in which an overt NP could not occur, namely as the object of a passive verb or as the subject of certain kinds of infinitive. 3 Previous work Previous approaches to this task have all been learning-based. Collins’ (1997) Model 3 integrates the detection and resolution of WH-traces in relative clauses into a lexicalized PCFG. Collins’ results are not directly comparable to the works cited below, since he does not provide a separate evaluation of the empty category detection and resolution task. Johnson (2002) proposes a pattern-matching algorithm, in which the minimal connected tree fragments containing an empty node and its antecedent(s) are extracted from the training corpus, and matched at runtime to an input tree. As in the present approach, Johnson inserts empty nodes as a post-process on an existing tree. He proposes an evaluation metric (discussed further below), and presents results for both detection and detection plus resolution, given two different kinds of input: perfect trees (with empty nodes removed) and parser output. Dienes and Dubey (2003a,b), on the other hand, integrate their empty node resolution algorithm into their own PCFG parser. They first locate empty nodes in the string, taking a POS-tagged string as input, and outputting a POS-tagged string with labeled empty nodes inserted. The PCFG parser is then trained, using the enhanced strings as input, without inserting any additional empty nodes. Antecedent resolution is handled by a separate post-process. Using Johnson’s (2002) evaluation metric, Dienes and Dubey present results on the detection task alone (i.e., inserting empty categories into the POS-tagged string), as well as on the combined detection and resolution tasks in combination with their parser.2 Higgins (2003) considers only the detection and resolution of WH-traces, and only evaluates the results given perfect input. Higgins’ method, like Johnson’s (2002) and the present one, involves post-processing of trees. Higgins’ results are not directly comparable to the other works cited, since he assumes all WH-phrases as given, even those that are themselves empty. 4 The recovery algorithm 4.1 The algorithm The proposed algorithm for recovering empty categories is shown in Figure 1; the algorithm walks the tree from top to bottom, at each node X deterministically inserting an empty category of a given type (usually as a daughter of X) if the syntactic context for that type is met by X. It makes four separate passes over the tree, on each pass applying a different set of rules. 
1 for each tree, iterate over nodes from top down 2 for each node X 3 try to insert NP* in X 4 try to insert 0 in X 5 try to insert WHNP 0 or WHADVP 0 in X 6 try to insert *U* in X 7 try to insert a VP ellipsis site in X 8 try to insert S*T* or SBAR in X 9 try to insert trace of topicalized XP in X 10 try to insert trace of extraposition in X 11 for each node X 12 try to insert WH-trace in X 13 for each node X 14 try to insert NP-SBJ * in finite clause X 15 for each node X 16 if X = NP*, try to find antecedent for X Figure 1: Empty category recovery algorithm The rules called by this algorithm that try to insert empty categories of a particular type specify the syntactic context in which that type of empty category can occur, and if the context exists, specify where to insert the empty category. For example, the category NP*, which conflates the GB categories NP-trace and PRO, occurs typically3 2 It is unclear whether Dienes and Dubey’s evaluation of empty category detection is based on actual tags provided by the annotation (perfect input), or on the output of a POS-tagger. 3 NP* is used in roles that go beyond the GB notions of NP-trace and PRO, including e.g. the subject of as the object of a passive verb or as the subject of an infinitive. The rule which tries to insert this category and assign it a function tag is called in line 3 of Figure 1 and given in pseudo-code in Figure 2. Some additional rules are given in the Appendix. if X is a passive VP & X has no complement S if there is a postmodifying dangling PP Y then insert NP* before all postmodifiers of Y else insert NP* before all postmodifiers of X else if X is a non-finite S and X has no subject then insert NP-SBJ* after all premodifiers of X Figure 2: Rule to insert NP* This rule, which accounts for about half the empty category tokens in the PTB, makes no use of lexical information such as valency of the verb, etc. This is potentially a problem, since in GB the infinitives that can have NP-trace or PRO as subjects (raising and control infinitives) are distinguished from those that can have overt NPs or WH-trace as subjects (exceptional-Casemarked, or ECM, infinitives), and the distinction relies on the class of the governing verb. Nevertheless, the rules that insert empty nodes do not have access to a lexicon, and very little lexical information is encoded in the rules: reference is made in the rules to individual function words such as complementizers, auxiliaries, and the infinitival marker to, but never to lexical properties of content words such as valency or the raising/ECM distinction. In fact, the only reference to content words at all is in the rule which tries to insert null WH-phrases, called in line 5 of Figure 1: when this rule has found a relative clause in which it needs to insert a null WH-phrase, it checks if the head of the NP the relative clause modifies is reason(s), way(s), time(s), day(s), or place(s); if it is, then it inserts WHADVP with the appropriate function tag, rather than WHNP. The rule shown in Figure 2 depends for its successful application on the system’s being able to identify passives, non-finite sentences, heads of phrases (to identify pre- and post-modifiers), and functional information such as subject; similar information is accessed by the other rules used in the algorithm. Simple functions to identify passives, etc. are therefore called by the implemented versions of these rules. 
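For concreteness, here is one way the rule of Figure 2 might look in code, assuming a small Node class and deliberately crude stand-ins for the "simple functions" mentioned above (a real implementation would check the auxiliary/participle context and use the treebank's function tags). This is a sketch of the rule's shape, not the author's implementation.

```python
class Node:
    def __init__(self, label, children=None, word=None):
        self.label, self.children, self.word = label, children or [], word

# Crude approximations of the helper predicates the text describes.
def is_passive_vp(X):        # VP containing a past participle (ignores be/get context)
    return X.label == "VP" and any(c.label == "VBN" for c in X.children)

def has_complement_s(X):
    return any(c.label.split("-")[0] in ("S", "SBAR") for c in X.children)

def is_nonfinite_s(X):       # infinitival S: has a VP child that starts with "to"
    return X.label.startswith("S") and any(
        c.label == "VP" and c.children and c.children[0].label == "TO"
        for c in X.children)

def has_subject(X):
    return any("-SBJ" in c.label for c in X.children)

def empty_np(function_tag=None):
    label = "NP-" + function_tag if function_tag else "NP"
    return Node(label, [Node("-NONE-", word="*")])

def try_insert_np_star(X):
    """Simplified version of the Figure 2 rule."""
    if is_passive_vp(X) and not has_complement_s(X):
        # Real rule: before all postmodifiers (inside a dangling PP if there is
        # one); here we simply insert right after the participial head.
        head = next(i for i, c in enumerate(X.children) if c.label == "VBN")
        X.children.insert(head + 1, empty_np())
    elif is_nonfinite_s(X) and not has_subject(X):
        # Real rule: after all premodifiers; here simply at the front of the S.
        X.children.insert(0, empty_np("SBJ"))
```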
Functional information, such as subject, can be gleaned from the function tags in the treebank annotation; the rules make frequent use of a variety of function tags as they occur on various nodes. The output of imperatives; see below. Charniak’s parser (Charniak, 2000), however, does not include function tags, so in order for the algorithm to work properly on parser output (see Section 5), additional functions were written to approximate the required tags. Presumably, the accuracy of the algorithm on parser output would be enhanced by accurate prior assignment of the tags to all relevant nodes, as in Blaheta and Charniak (2000) (see also Section 5). Each empty category insertion rule, in addition to inserting an empty node in the tree, also may assign a function tag to the empty node. This is illustrated in Figure 2, where the final line inserts NP* with the function tag SBJ in the case where it is the subject of an infinitive clause. The rule that inserts WH-trace (called in line 12 in Figure 1) takes a WHXP needing a trace as input, and walks the tree until an appropriate insertion site is found (see Appendix for a fuller description). Since this rule requires a WHXP as input, and that WHXP may itself be an empty category (inserted by an earlier rule), it is handled in a separate pass through the tree. A separate rule inserts NP* as the subject in sentences which have no overt subject, and which have not had a subject inserted by any of the other rules. Most commonly, these are imperative sentences, but calling this rule in a separate pass through the tree, as in Figure 1, ensures that any subject position missed by the other rules is filled. Finally, a separate rule tries to find an antecedent for NP* under certain conditions. The antecedent of NP* may be an empty node inserted by rules in any of the first three passes through the tree, even the subject of an imperative; therefore this rule is applied in a separate pass through the tree. This rule is also fairly simple, assigning the local subject as antecedent for a non-subject NP*, while for an NP* in the subject position of a nonfinite S it searches up the tree, given certain locality conditions, for another NP subject. All the rules that insert empty categories are fairly simple, and derive straighforwardly from standard GB theory and from the annotation guidelines. The most complex rule is the rule that inserts WH-trace when it finds a WHXP daughter of SBAR; most are about as simple as the rule shown in Figure 2, some more so. Representative examples are given in the Appendix. 4.2 Development method After implementing the algorithm, it was run over sections 1, 3, and 11 of the WSJ portion of the PTB, followed by manual inspection of the trees to perform error analysis, with revisions made as necessary to correct errors. Initially sections 22 and 24 were used for development testing. However, it was found that these two sections differ from each other substantially with respect to the annotation of antecedents of NP* (which is described somewhat vaguely in the annotation guidelines), so all of sections 2-21 were used as a development test corpus. Section 23 was used only for the final evaluation, reported in Section 5 below. 5 Evaluation Following Johnson (2002), the system was evaluated on two different kinds of input: first, on perfect input, i.e., PTB annotations stripped of all empty categories and information related to them; and second, on imperfect input, in this case the output of Charniak’s (2000) parser. 
Each is discussed in turn below. 5.1 Perfect input The system was run on PTB trees stripped of all empty categories. To facilitate comparison to previous approaches, we used Johnson’s label and string position evaluation metric, according to which an empty node is identified by its label plus its string position, and evaluated the detection task alone. We then evaluated detection and resolution combined, identifying each empty category as before, plus the label and string position of its antecedent, if any, again following Johnson’s work. The results are shown in Table 2. Precision here and throughout is the percentage of empty nodes proposed by the system that are in the gold standard (section 23 of the PTB), recall is the percentage of empty nodes in the gold standard that are proposed by the system, and F1 is balanced f-measure; i.e., 2PR/(P+R). Task Prec. Rec. F1 Detection only 94.9 91.1 93.0 Detection + resolution 90.1 86.6 88.4 Table 2: Detection and resolution of empty categories given perfect input (label + string position method), expressed as percentage These results compare favorably to previously reported results, exceeding them mainly by achieving higher recall. Johnson (2002) reports 93% precision and 83% recall (F1 = 88%) for the detection task alone, and 80% precision and 70% recall (F1 = 75%) for detection plus resolution. In contrast to Johnson (2002) and the present work, Dienes and Dubey (2003a) take a POS-tagged string, rather than a tree, as input; they report 86.5% precision and 72.9% recall (F1 = 79.1%) on the detection task. For Dienes and Dubey, the further task of finding antecedents for empty categories is integrated with their own PCFG parser, so they report no numbers directly relevant to the task of detection and resolution given perfect input. 5.2 Parser output The system was also run using as input the output of Charniak’s parser (Charniak, 2000). The results, again using the label and string position method, are given in Table 3. Task Prec. Rec. F1 Detection only 85.2 81.7 83.4 Detection + resolution 78.3 75.1 76.7 Table 3: Detection and resolution of empty categories on parser output (label + string position method), expressed as percentage Again the results exceed those previously reported. Johnson (2002) reports 85% precision and 74% recall (F1 = 79%) for detection and 73% precision and 63% recall (F1 = 68%) for detection plus resolution on the output of Charniak’s parser. Dienes and Dubey (2003b) integrate the results of their detection task into their own PCFG parser, and report 81.5% precision and 68.7% recall (F1 = 74.6%) on the combined task of detection and resolution. 5.3 Perfect input with no function tags The lower results on parser output obviously reflect errors introduced by the parser, but may also be due to the parser not outputting function tags on any nodes. As mentioned in Section 4, it is believed that the results of the current method on parser output would improve if that output were reliably assigned function tags, perhaps along the lines of Blaheta and Charniak (2000). Testing this hypothesis directly is beyond the scope of the present work, but a simple experiment can give some idea of the extent to which the current algorithm relies on function tags in the input. The system was run on PTB trees with all nodes stripped of function tags; the results are given in Table 4. Task Prec. Rec. 
F1 Detection only 94.1 89.5 91.7 Detection + resolution 89.5 85.2 87.3 Table 4: Detection and resolution of empty categories on PTB input without function tags (label + string position method), expressed as percentage While not as good as the results on perfect input with function tags, these results are much better than the results on parser output. This suggests that function tag assignment should improve the results shown on parser output, but that the greater part of the difference between the results on perfect input and on parser output is due to errors introduced by the parser. 5.4 Refining the evaluation The results reported in the previous subsections are quite good, and demonstrate that the current approach outperforms previously reported approaches on the detection and resolution of empty categories. In this subsection some refinements to the evaluation method are considered. The label and string position method is useful if one sees the task as inserting empty nodes into a string, and thus is quite useful for evaluating systems that detect empty categories without parse trees, as in Dienes and Dubey (2003a). However, if the task is to insert empty nodes into a tree, then the method leads both to false positives and to false negatives. Suppose for example that the sentence When do you expect to finish? has the bracketing shown below, where ‘1’ and ‘2’ indicate two possible locations in the tree for the trace of the WHADVP: When do you [VP expect to [VP finish 1 ] 2 ] Suppose position 1 is correct; i.e. it represents the position of the trace in the gold standard. Since 1 and 2 correspond to the same string position, if a system inserts the trace in position 2, the string position evaluation method will count it as correct. This is a serious problem with the string-based method of evaluation, if one assumes, as seems reasonable, that the purpose of inserting empty categories into trees is to be able to recover semantic information such as predicate-argument structure and modification relations. In the above example, it is clearly semantically relevant whether the system proposes that when modifies expect instead of finish. Conversely, suppose the sentence Who (besides me) cares? has the bracketing shown: Who [S 1 (besides me) 2 [VP cares]] Again suppose that position 1 represents the placement of the WHNP trace in the gold standard. If a system places the trace in position 2 instead, the string position method will count it as an error, since 1 and 2 have different string positions. However it is not at all clear what it means to say that one of those two positions is correct and the other not, since there is no semantic, grammatical, or textual indicator of its exact position. If the task is to be able to recover semantic information using traces, then it does not matter in this case whether the system inserts the trace to the left or to the right of the parenthetical. Given that both false positives and false negatives are possible, I propose that future evaluations of this task should identify empty categories by their label and by their parent category, instead of, or perhaps in addition to, doing so by label and string position. Since the parent of an empty node is always an overt node4, the parent could be identified by its label and string position (left and right edges). Resolution is evaluated by a natural extension, by identifying the antecedent (which could itself be an empty category) according to its label and its parent’s label and string position. 
This would serve to identify an empty category by its position in the tree, rather than in the string, and would avoid the false positives and false negatives described above. In addition to an evaluation based on tree position rather than string position, I propose to evaluate the entire recovery task, i.e., including function tag assignment, not just detection and resolution. The revised evaluation is still not perfect: when inserting an NP* or NP*T* into a double-object construction, it clearly matters semantically whether it is the first or second object, though both positions have the same parent.5 Ideally, we would evaluate based on a richer set of grammatical relations than are annotated in the PTB, or perhaps based on thematic roles. However, it is difficult to see how to accomplish this without additional annotation. It is probable that constructions of this sort are relatively rare in the PTB in any case, so for now the proposed evaluation method, however imperfect, will suffice. The result of this revised evaluation, given perfect input, is presented in Table 5. The first two rows are comparable to the string-based results in Table 2; the last row, showing the results of the full recovery task (i.e., including antecedents and function tags), is not much lower, suggesting that labeling empty categories with function tags does not pose any serious difficulties. 4 The only exception is the 0 complementizer and S*T* daughters of the SBAR category in Table 1; but since the entire SBAR is treated as a single empty node for evaluation purposes, this does not pose a problem. 5 I am indebted to two ACL reviewers for calling this to my attention. Task Prec. Rec. F1 Detection only 95.6 91.9 93.7 Detection + resolution 90.8 87.3 89.0 Recovery (det.+res.+func. tags) 89.8 86.3 88.0 Table 5: Detection, resolution and recovery of empty categories given perfect input (label + parent method), expressed as percentage Three similar evaluations were also run, using parser output as input to the algorithm; the results are given in Table 6. Task Prec. Rec. F1 Detection only 78.4 75.2 76.7 Detection + resolution 72.3 69.3 70.8 Recovery (det.+res.+func. tags) 69.7 66.8 68.2 Table 6: Detection, resolution and recovery of empty categories on parser output (label + parent method), expressed as percentage The results here are less impressive, no doubt reflecting errors introduced by the parser in the labeling and bracketing of the parent category, a problem which does not affect a string-based evaluation. However it does not seem reasonable to have an effective evaluation of empty node insertion in parser output that does not depend to some extent on the correctness of the parse. The fact that our proposed evaluation metric depends more heavily on the accuracy of the input structure may be an unavoidable consequence of using a tree-based evaluation. 6 Discussion The empty category recovery algorithm reported on here outperforms previously published approaches on the detection and resolution tasks; it also does well on the task of function tag assignment to empty categories, which has not been considered in other work. As suggested in the introduction, the reason a rule-based approach works so well in this domain may be that empty categories are not naturally in the text, but are only inserted by the annotator, who is consciously following explicit linguistic principles, in this case, the principles of early GB theory. 
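Operationally, the proposed metric identifies each empty node by its own label together with its parent's label and string extent, and scores predicted against gold nodes with the usual precision, recall, and F1. A minimal sketch, assuming each empty node has already been extracted into a small record; the field and function names are ours.

```python
from collections import Counter

def prf(gold_items, test_items):
    """Precision, recall, and balanced F-measure over multisets of items."""
    gold, test = Counter(gold_items), Counter(test_items)
    hits = sum((gold & test).values())
    p = hits / max(sum(test.values()), 1)
    r = hits / max(sum(gold.values()), 1)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def label_parent_key(e):
    """Proposed metric: the empty node's label plus its parent's label and
    string extent (left and right edges)."""
    return (e["label"], e["parent_label"], e["parent_left"], e["parent_right"])

def label_position_key(e):
    """Johnson's (2002) metric, for comparison: label plus string position."""
    return (e["label"], e["position"])

# e.g.  prf([label_parent_key(e) for e in gold_empties],
#           [label_parent_key(e) for e in predicted_empties])
# For the resolution task, extend the key with the antecedent identified the
# same way, e.g. label_parent_key(e) + label_parent_key(e["antecedent"]).
```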
As a result, the recovery of empty categories is, for the most part, more amenable to a rule-based approach than to a learning approach. It makes little sense to learn, for example, that NP* occurs as the object of a passive verb or as the subject of certain infinitives in the PTB, if that information is already explicit in the annotation guidelines. This is not to say that learning approaches have nothing to contribute to this task. Information about individual lexical items, such as valency, the raising/ECM distinction, or subject vs. object control, which is presumably most robustly acquired from large amounts of data, would probably help in the task of detecting certain empty categories. Consider for example an input structure V [S to VP]. GB principles, which are enforced in the annotation guidelines, dictate that an empty category must be inserted as the subject of the infinitival S; but exactly which empty category, NP* or NP*T*, depends on properties of the governing verb, including whether it is a raising or control verb, such as seem or try, or an ECM verb, such as believe. In the present algorithm, the rule that inserts NP* applies first, without access to lexical information of any kind, so NP* is inserted, instead of NP*T*, regardless of the value of V. This leads to some errors which might be corrected given learned lexical information. Such errors are fewer than might have been expected, however: the present system achieved 97.7% precision and 97.3% recall (F1 = 97.5%) on the isolated task of detecting NP*, even without lexical knowledge (see Table 7). A combined learning and rule-based algorithm might stand to make a bigger gain in the task of deciding whether NP* in subject position has an antecedent or not, and if it does, whether the antecedent is a subject or not. The annotation guidelines and the theory that underlies it are less explicit on the principles underlying this task than they are on the other subtasks. As a result, the accuracy of the current system drops considerably when this task is taken into account, from 97.5% F1 to 86.9% (see Table 7). Dienes and Dubey (2003a), on the other hand, claim this as one of the strengths of their learning-based system. Empty category type Detection only (F1) Detection + resolution (F1) NP* 97.5 86.9 NP*T* 96.2 96.0 *U* 98.6 - 0 98.5 - ADVP*T* 79.9 79.9 S*T* 92.7 92.7 WHNP 0 92.4 - SBAR 84.4 84.4 WHADVP 0 73.3 - Table 7: F1 for detection and resolution of empty categories by type, using perfect input (label + parent method), expressed as percentage 7 Conclusion In this paper I have presented an algorithm for the recovery of empty categories in PTB-style trees that otherwise lack them. Unlike previous approaches, the current algorithm is rule-based rather than learning-based, which I have argued is appropriate for this task, given the highly theoretical nature of empty categories in the PTB. Moreover, the algorithm has no access to lexical information such as valency or verb class. Using the string-based evaluation metric proposed by Johnson (2002), the current system outperforms previously published algorithms on detection alone, as well as on detection combined with resolution, both on perfect input and in combination with parsing. In addition, we have performed additional evaluation using a tree-based metric, and including an evaluation of function tag assignment as well. 8 Acknowledgements I would like to thank Simon Corston-Oliver, Mark Johnson, and Hisami Suzuki for their helpful input. References Bies, A., M. 
Ferguson, K. Katz and R. MacIntyre. 1995. Bracketing Guidelines for Treebank II style Penn Treebank Project. Linguistic Data Consortium. Blaheta, D. and E. Charniak. 2000. Assigning Function Tags to Parsed Text. In Proceedings of the North American Chapter of the Association for Computational Linguistics, pages 234-240. Charniak, E. 2000. A maximum-entropy-inspired parser. In In Proceedings of the North American Chapter of the Association for Computational Linguistics, pages 132-139. Chomsky, N. 1981. Lectures on Government and Binding. Foris Publications, Dordrecht. Collins, M. 1997. Three Generative, Lexicalised Models for Statistical Parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 16-23. Dienes, P. and A. Dubey. 2003a. Deep Syntactic Processing by Combining Shallow Methods. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 431-438. Dienes, P. and A. Dubey. 2003b. Antecedent Recovery: Experiments with a Trace Tagger. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 33-40. Higgins, D. 2003. A machine-learning approach to the identification of WH gaps. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics, pages 99-102. Johnson, M. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 136-143. Marcus, M., B. Santorini and M.A.Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330. Appendix: Sample rules To insert 0 Comp: if X=SBAR & !Comp(X) & !WHXP daughter(X) & ∃ S daughter Y of X & !(parent(X)=NP & sister(X)=NP) then insert 0 to left of Y To insert WHNP/WHADVP: if X=SBAR & parent(X)=NP & sister(X)=NP & !Comp(X) & !WHXP daughter(X) & ∃ S daughter Y of X if head(parent(X)) in {reason(s) way(s) time(s) day(s) place(s)} then insert WHADVP to left of Y else insert WHNP to left of Y To insert *U*: insert *U* / $ CD+ _ To insert WH-trace: if X=SBAR & ∃ S daughter Y of X & ∃ WHXP daughter W of X then find trace(W) in Y To find trace(W) in X: insert trace: (for W = WHXP, insert XP*T*) if X has conjuncts then find trace(W) in each conjunct of X else if X has a PP daughter Y with no object & W=WHNP then insert *T* to right of P else if X=S and !subject(X) & W=WHNP then insert *T* as last pre-mod of X else if X contains a VP Y then find trace(W) in Y else if X contains ADJP or clausal complement Y & W=WHNP then find trace(W) in Y else if W=WHNP & ∃ infinival rel. clause R, R=sister(W) & X=VP & X has an object NP & subject(R) is an empty node E then insert *T* as last pre-mod of R then delete E else if W=WHNP then insert *T* as first post-mod of X else insert *T* as last post-mod of X assign function tag: if W = WHNP & *T* a pre-mod of S then assign ‘SBJ’ to *T* if W = WHADVP & W is not empty if W = ‘why’ then assign ‘PRP’ to *T* if W = ‘when’ then assign ‘TMP’ to *T* if W = ‘where’ then assign ‘LOC’ to *T* if W = ‘how’ then assign ‘MNR’ to *T* else if W = WHADVP & parent(parent(W)) =NP if head(sister(parent(W))) = ‘reason(s)’ then assign ‘PRP’ to *T* if head(sister(parent(W)))=‘time(s)’ or ‘day(s)’ then assign ‘TMP’ to *T* if head(sister(parent(W))) = ‘place(s)’ then assign ‘LOC’ to *T* if head(sister(parent(W))) = ‘way(s)’ then assign ‘MNR’ to *T*
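The function-tag clauses at the end of the WH-trace rule amount to two small lookup tables: one keyed on an overt WH-adverb, and one keyed on the head noun modified by the relative clause when the WHADVP is empty. A minimal transcription as code, with the table contents taken from the appendix; the names are ours, and the SBJ case is omitted because it is assigned positionally (to a *T* premodifying S) rather than lexically.

```python
WH_ADVERB_TAG = {"why": "PRP", "when": "TMP", "where": "LOC", "how": "MNR"}
HEAD_NOUN_TAG = {"reason": "PRP", "reasons": "PRP", "time": "TMP", "times": "TMP",
                 "day": "TMP", "days": "TMP", "place": "LOC", "places": "LOC",
                 "way": "MNR", "ways": "MNR"}

def trace_function_tag(wh_label, wh_word=None, head_noun=None):
    """Function tag for a WH-trace *T*, following the appendix clauses."""
    if wh_label == "WHADVP":
        if wh_word:                     # overt WH-adverb: why/when/where/how
            return WH_ADVERB_TAG.get(wh_word.lower())
        if head_noun:                   # empty WHADVP in a relative clause
            return HEAD_NOUN_TAG.get(head_noun.lower())
    return None                         # WHNP traces get SBJ positionally, or no tag
```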
Statistical Machine Translation by Parsing I. Dan Melamed Computer Science Department New York University New York, NY, U.S.A. 10003-6806 lastname  @cs.nyu.edu Abstract In an ordinary syntactic parser, the input is a string, and the grammar ranges over strings. This paper explores generalizations of ordinary parsing algorithms that allow the input to consist of string tuples and/or the grammar to range over string tuples. Such algorithms can infer the synchronous structures hidden in parallel texts. It turns out that these generalized parsers can do most of the work required to train and apply a syntax-aware statistical machine translation system. 1 Introduction A parser is an algorithm for inferring the structure of its input, guided by a grammar that dictates what structures are possible or probable. In an ordinary parser, the input is a string, and the grammar ranges over strings. This paper explores generalizations of ordinary parsing algorithms that allow the input to consist of string tuples and/or the grammar to range over string tuples. Such inference algorithms can perform various kinds of analysis on parallel texts, also known as multitexts. Figure 1 shows some of the ways in which ordinary parsing can be generalized. A synchronous parser is an algorithm that can infer the syntactic structure of each component text in a multitext and simultaneously infer the correspondence relation between these structures.1 When a parser’s input can have fewer dimensions than the parser’s grammar, we call it a translator. When a parser’s grammar can have fewer dimensions than the parser’s input, we call it a synchronizer. The corresponding processes are called translation and synchronization. To our knowledge, synchronization has never been explored as a class of algorithms. Neither has the relationship between parsing and word alignment. The relationship between translation and ordinary parsing was noted a long time 1A suitable set of ordinary parsers can also infer the syntactic structure of each component, but cannot infer the correspondence relation between these structures. translation synchronization synchronous parsing 1 parsing 3 2 2 3 1 ... ... ordinary I = dimensionality of input D = dimensionality of grammar synchronization (I >= D) parsing synchronous (D=I) word alignment translation (D >= I) ordinary parsing (D=I=1) generalized parsing (any D; any I) Figure 1: Generalizations of ordinary parsing. ago (Aho & Ullman, 1969), but here we articulate it in more detail: ordinary parsing is a special case of synchronous parsing, which is a special case of translation. This paper offers an informal guided tour of the generalized parsing algorithms in Figure 1. It culminates with a recipe for using these algorithms to train and apply a syntax-aware statistical machine translation (SMT) system. 2 Multitext Grammars and Multitrees The algorithms in this paper can be adapted for any synchronous grammar formalism. The vehicle for the present guided tour shall be multitext grammar (MTG), which is a generalization of context-free grammar to the synchronous case (Melamed, 2003). We shall limit our attention to MTGs in Generalized Chomsky Normal Form (GCNF) (Melamed et al., 2004). This normal form allows simpler algorithm descriptions than the normal forms used by Wu (1997) and Melamed (2003). In GCNF, every production is either a terminal production or a nonterminal production. 
A nonterminal production might look like this:          A  D(2) B  E  (1) There are nonterminals on the left-hand side (LHS) and in parentheses on the right-hand side (RHS). Each row of the production describes rewriting in a different component text of a multitext. In each row, a role template describes the relative order and contiguity of the RHS nonterminals. E.g., in the top row, [1,2] indicates that the first nonterminal (A) precedes the second (B). In the bottom row, [1,2,1] indicates that the first nonterminal both precedes and follows the second, i.e. D is discontinuous. Discontinuous nonterminals are annotated with the number of their contiguous segments, as in    . The  (“join”) operator rearranges the nonterminals in each component according to their role template. The nonterminals on the RHS are written in columns called links. Links express translational equivalence. Some nonterminals might have no translation in some components, indicated by (), as in the 2nd row. Terminal productions have exactly one “active” component, in which there is exactly one terminal on the RHS. The other components are inactive. E.g.,      (2) The semantics of  are the usual semantics of rewriting systems, i.e., that the expression on the LHS can be rewritten as the expression on the RHS. However, all the nonterminals in the same link must be rewritten simultaneously. In this manner, MTGs generate tuples of parse trees that are isomorphic up to reordering of sibling nodes and deletion. Figure 2 shows two representations of a tree that might be generated by an MTG in GCNF for the imperative sentence pair Wash the dishes / Pasudu moy . The tree exhibits both deletion and inversion in translation. We shall refer to such multidimensional trees as multitrees. The different classes of generalized parsing algorithms in this paper differ only in their grammars and in their logics. They are all compatible with the same parsing semirings and search strategies. Therefore, we shall describe these algorithms in terms of their underlying logics and grammars, abstracting away the semirings and search strategies, in order to elucidate how the different classes of algorithms are related to each other. Logical descriptions of inference algorithms involve inference rules:    means that  can be inferred from  and  . An item that appears in an inference rule stands for the proposition that the item is in the parse chart. A production rule that appears in an inference rule stands for the proposition that the production is in the grammar. Such specifications are nondeter           !   Wash !   ! "#%$   ! moy  '&)( *+ &)(  ', !   the !  '& &- ', # . !   dishes !   ! (/0  ! Pasudu  Figure 2: Above: A tree generated by a 2-MTG in English and (transliterated) Russian. Every internal node is annotated with the linear order of its children, in every component where there are two children. Below: A graphical representation of the same tree. Rectangles are 2D constituents. dishes the Wash moy Pasudu S NP N V WASH D DISH PAS MIT V N NP S ministic: they do not indicate the order in which a parser should attempt inferences. A deterministic parsing strategy can always be chosen later, to suit the application. We presume that readers are familiar with declarative descriptions of inference algorithms, as well as with semiring parsing (Goodman, 1999). 3 A Synchronous CKY Parser Figure 3 shows Logic C. Parser C is any parser based on Logic C. 
As in Melamed (2003)’s Parser A, Parser C’s items consist of a -dimensional label vector 21 3 and a -dimensional d-span vector 4 1 3 .2 The items contain d-spans, rather than ordinary spans, because 2Superscripts and subscripts indicate the range of dimensions of a vector. E.g., 5-6 7 is a vector spanning dimensions 1 through 8 . See Melamed (2003) for definitions of cardinality, d-span, and the operators 9 and : . Parser C needs to know all the boundaries of each item, not just the outermost boundaries. Some (but not all) dimensions of an item can be inactive, denoted  , and have an empty d-span (). The input to Parser C is a tuple of parallel texts, with lengths 1  3 . The notation    1 3 indicates that the Goal item must span the input from the left of the first word to the right of the last word in each component      . Thus, the Goal item must be contiguous in all dimensions. Parser C begins with an empty chart. The only inferences that can fire in this state are those with no antecedent items (though they can have antecedent production rules). In Logic C,    is the value that the grammar assigns to the terminal production   . The range of this value depends on the semiring used. A Scan inference can fire for the  th word    in component  for every terminal production in the grammar where    appears in the  th component. Each Scan consequent has exactly one active d-span, and that d-span always has the form     because such items always span one word, so the distance between the item’s boundaries is always one. The Compose inference in Logic C is the same as in Melamed’s Parser A, using slightly different notation: In Logic C, the function       represents the value that the grammar assigns to the nonterminal production      . Parser C can compose two items if their labels appear on the RHS of a production rule in the grammar, and if the contiguity and relative order of their intervals is consistent with the role templates of that production rule. Item Form:  1 3  4 1 3! Goal: #" 1 3   $  1 3%! Inference Rules Scan component d, &   : ')(+*, , 1 /. 1   10 1 3 2  1 /. 1 +    10 1 3 34 4 5 67 7 8  1 /. 1   10 1 3 2  1 /. 1  9    10 1 3 : ; ; < Compose: =?>A@ BDC E @ BGF =#H%@ BDC I @ BGF$JLK NM @ BLC E @ BPOQI @ B  >R@ B  H%@ B ! M @ B C E @ B%S I @ B  Figure 3: Logic C (“C” for CKY) These constraints are enforced by the d-span operators T and U . Parser C is conceptually simpler than the synchronous parsers of Wu (1997), Alshawi et al. (2000), and Melamed (2003), because it uses only one kind of item, and it never composes terminals. The inference rules of Logic C are the multidimensional generalizations of inference rules with the same names in ordinary CKY parsers. For example, given a suitable grammar and the input (imperative) sentence pair Wash the dishes / Pasudu moy, Parser C might make the 9 inferences in Figure 4 to infer the multitree in Figure 2. Note that there is one inference per internal node of the multitree. Goodman (1999) shows how a parsing logic can be combined with various semirings to compute different kinds of information about the input. Depending on the chosen semiring, a parsing logic can compute the single most probable derivation and/or its probability, the V most probable derivations and/or their total probability, all possible derivations and/or their total probability, the number of possible derivations, etc. 
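To convey the flavour of Scan and Compose as chart inferences, here is a deliberately simplified, hypothetical sketch in Python. It uses ordinary contiguous spans instead of d-spans, ignores role templates (adjacent items may compose in either order), and invents its own toy grammar encoding, so it is not Parser C itself, only an illustration of how a CKY-style chart generalizes to a two-component input.

from itertools import product

# Toy grammar, invented for this sketch.  Terminal rules map (component, word)
# to a 2-dimensional label vector; nonterminal rules map an ordered pair of RHS
# label vectors to the LHS label vector.
TERMINALS = {
    (0, "wash"): ("V", "V"), (0, "dishes"): ("N", "N"),
    (1, "pasudu"): ("N", "N"), (1, "moy"): ("V", "V"),
}
NONTERMINALS = {
    (("V", "V"), ("V", "V")): ("V", "V"),   # join the two halves of a terminal link
    (("N", "N"), ("N", "N")): ("N", "N"),
    (("V", "V"), ("N", "N")): ("S", "S"),
    (("N", "N"), ("V", "V")): ("S", "S"),
}

def parse(component0, component1):
    chart = set()
    # Scan: one item per known word; the item is active in one component only
    # (None stands for the other component's missing span).
    for d, words in enumerate((component0, component1)):
        for i, w in enumerate(words):
            labels = TERMINALS.get((d, w.lower()))
            if labels:
                spans = [None, None]
                spans[d] = (i, i + 1)
                chart.add((labels, tuple(spans)))
    # Compose: naive closure.  A real parser would index the chart and would check
    # d-span contiguity and order against the production's role templates.
    changed = True
    while changed:
        changed = False
        for (l1, s1), (l2, s2) in product(list(chart), repeat=2):
            lhs = NONTERMINALS.get((l1, l2))
            if lhs is None:
                continue
            merged, ok = [], True
            for a, b in zip(s1, s2):
                if a is None or b is None:
                    merged.append(a if b is None else b)
                elif a[1] == b[0]:
                    merged.append((a[0], b[1]))   # a immediately precedes b
                elif b[1] == a[0]:
                    merged.append((b[0], a[1]))   # b immediately precedes a
                else:
                    ok = False
            item = (lhs, tuple(merged))
            if ok and item not in chart:
                chart.add(item)
                changed = True
    return chart

english, russian = ["Wash", "dishes"], ["Pasudu", "moy"]
goal = (("S", "S"), ((0, len(english)), (0, len(russian))))
print(goal in parse(english, russian))   # True

The real logic additionally keeps every boundary of each constituent (d-spans) and consults the grammar's role templates, which is what allows it to handle discontinuous constituents and inversions correctly.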
All the parsing semirings catalogued by Goodman apply the same way to synchronous parsing, and to all the other classes of algorithms discussed in this paper. The class of synchronous parsers includes some algorithms for word alignment. A translation lexicon (weighted or not) can be viewed as a degenerate MTG (not in GCNF) where every production has a link of terminals on the RHS. Under such an MTG, the logic of word alignment is the one in Melamed (2003)’s Parser A, but without Compose inferences. The only other difference is that, instead of a single item, the Goal of word alignment is any set of items that covers all dimensions of the input. This logic can be used with the expectation semiring (Eisner, 2002) to find the maximum likelihood estimates of the parameters of a word-to-word translation model. An important application of Parser C is parameter estimation for probabilistic MTGs (PMTGs). Eisner (2002) has claimed that parsing under an expectation semiring is equivalent to the Inside-Outside algorithm for PCFGs. If so, then there is a straightforward generalization for PMTGs. Parameter estimation is beyond the scope of this paper, however. The next section assumes that we have an MTG, probabilistic or not, as required by the semiring. 4 Translation A -MTG can guide a synchronous parser to infer the hidden structure of a -component multitext. Now suppose that we have a -MTG and an input multitext with only W components, WYX . J   . ! C   !   ! C  ! !   J   , # . ! C   !  , # . ! C    ! !  J   , ! C  !  , ! C * ! !   J   ! (/ C ! (    ! (/  C !  !   J   ! "#$ C ! ! #"  ! "#%$ C ! % !  $  , #  ! C    ! !   ! (/ C !  !  JLK  & & C    , #  !  ! (/  & & C    !  !  %  , ! C % ! !   & & C    !  !  JLK 2&)( &)( C  %    , !  & &  & ( & ( C   ! &  !  '  . ! C  ! !   ! "#%$ C ! % !  JLK   C    . !  ! "#$    C  ! % !  (    C  ! % !   &)( &)( C %  !  !  JLK    C  %        & ( & (    C &   ! & * !  Figure 4: Possible sequence of inferences of Parser C on input Wash the dishes / Pasudu moy. When some of the component texts are missing, we can ask the parser to infer a -dimensional multitree that includes the missing components. The resulting multitree will cover the W input components/dimensions among its dimensions. It will also express the  W output components/dimensions, along with their syntactic structures. Item Form:  1 3  4 1 ) ! Goal: " 1 3   $  1 ) ! Inference Rules Scan component  &   W : ')(+*, , 1 /. 1   10 1 ) 2  1 . 1 +     0 1 ) 34 4 5 6 7 7 7 7 7 8  1 . 1    0 1 )  ) 0 1 3 2  1 /. 1  9    10 1 ) : ; ; ; ; ; < Load component  , W X   : ' (R* , , ) 0 1 /. 1   10 1 3 2  ) 0 1 /. 1 *  10 1 3 3 4 4 5 6 7 7 7 7 7 8  1 )  ) 0 1 /. 1   10 1 3 2  1 ) : ; ; ; ; ; < Compose: = >@ BDC E @ + F = H%@ BDC I @ +F J K-, M @ BDC . 1 ) U 4 1 )  ) 0 1 3  >@ B  H%@ B0/ M @ B C E @ + S I @ +  Figure 5: Logic CT (“T” for Translation) Figure 5 shows Logic CT, which is a generalization of Logic C. Translator CT is any parser based on Logic CT. The items of Translator CT have a -dimensional label vector, as usual. However, their d-span vectors are only W -dimensional, because it is not necessary to constrain absolute word positions in the output dimensions. Instead, we need only constrain the cardinality of the output nonterminals, which is accomplished by the role templates  ) 0 1 3 in the & term. Translator CT scans only the input components. Terminal productions with active output components are simply loaded from the grammar, and their LHSs are added to the chart without d-span information. 
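The contrast between Scan and Load can be made concrete with a small, hypothetical helper; the item encoding below (a label vector plus one span per input component) is invented for exposition and is not the paper's data structure.

def scan_or_load(labels, active_dim, n_input_dims, position=None):
    """Build a chart item for a terminal production with one active component.

    labels       -- the D-dimensional label vector of the production's LHS link
    active_dim   -- the production's single active component
    n_input_dims -- components 0 .. n_input_dims-1 are input text; the rest are output
    position     -- word index, needed only when the active component is an input one
    """
    spans = []
    for d in range(n_input_dims):
        if d == active_dim:
            spans.append((position, position + 1))  # Scan: the item spans one input word
        else:
            spans.append(())                        # inactive input component: empty d-span
    # Output components contribute no position information at all: if the
    # production is active in an output component, its LHS is simply loaded
    # into the chart (Load), so nothing is appended for d >= n_input_dims.
    return (tuple(labels), tuple(spans))

# Scanning the word at position 3 of the single input component:
print(scan_or_load(("N", "N"), active_dim=0, n_input_dims=1, position=3))
# Loading a terminal production that is active only in the output component:
print(scan_or_load(("N", "N"), active_dim=1, n_input_dims=1))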
Composition proceeds as before, except that there are no constraints on the role templates in the output dimensions – the role templates in  ) 0 1 3 are free variables. In summary, Logic CT differs from Logic C as follows: 1 Items store no position information (d-spans) for the output components. 1 For the output components, the Scan inferences are replaced by Load inferences, which are not constrained by the input. 1 The Compose inference does not constrain the d-spans of the output components. (Though it still constrains their cardinality.) We have constructed a translator from a synchronous parser merely by relaxing some constraints on the output dimensions. Logic C is just Logic CT for the special case where W. The relationship between the two classes of algorithms is easier to see from their declarative logics than it would be from their procedural pseudocode or equations. Like Parser C, Translator CT can Compose items that have no dimensions in common. If one of the items is active only in the input dimension(s), and the other only in the output dimension(s), then the inference is, de facto, a translation. The possible translations are determined by consulting the grammar. Thus, in addition to its usual function of evaluating syntactic structures, the grammar simultaneously functions as a translation model. Logic CT can be coupled with any parsing semiring. For example, under a boolean semiring, this logic will succeed on an W -dimensional input if and only if it can infer a -dimensional multitree whose root is the goal item. Such a tree would contain a   W  -dimensional translation of the input. Thus, under a boolean semiring, Translator CT can determine whether a translation of the input exists. Under an inside-probability semiring, Translator CT can compute the total probability of all multitrees containing the input and its translations in the AW output components. All these derivation trees, along with their probabilities, can be efficiently represented as a packed parse forest, rooted at the goal item. Unfortunately, finding the most probable output string still requires summing probabilities over an exponential number of trees. This problem was shown to be NP-hard in the one-dimensional case (Sima’an, 1996). We have no reason to believe that it is any easier when  . The Viterbi-derivation semiring would be the most often used with Translator CT in practice. Given a -PMTG, Translator CT can use this semiring to find the single most probable -dimensional multitree that covers the W -dimensional input. The multitree inferred by the translator will have the words of both the input and the output components in its leaves. For example, given a suitable grammar and the input Pasudu moy, Translator CT could infer the multitree in Figure 2. The set of inferences would be exactly the same as those listed in Figure 4, except that the items would have no d-spans in the English component. In practice, we usually want the output as a string tuple, rather than as a multitree. Under the various derivation semirings (Goodman, 1999), Translator CT can store the output role templates  ) 0 1 3 in each internal node of the tree. The intended ordering of the terminals in each output dimension can be assembled from these templates by a linear-time linearization post-process that traverses the finished multitree in postorder. To the best of our knowledge, Logic CT is the first published translation logic to be compatible with all of the semirings catalogued by Goodman (1999), among others. 
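A hypothetical sketch of that linearization step follows. The tree encoding is our own and deliberately simplified: each internal node stores, for one output component, the order in which to visit its children, and discontinuity is ignored.

def linearize(node):
    """node is ('leaf', word_or_None) or ('node', order, children);
    order gives 1-based child indices in output order for this component."""
    if node[0] == 'leaf':
        return [node[1]] if node[1] is not None else []
    _, order, children = node
    words = []
    for child_index in order:
        words.extend(linearize(children[child_index - 1]))
    return words

# A made-up multitree for the Russian component of the running example:
# the verb phrase inverts its children, and "the" has no Russian counterpart.
tree = ('node', (1, 2), [
    ('leaf', None),                           # subject dropped in the imperative
    ('node', (2, 1), [
        ('leaf', 'moy'),
        ('node', (1, 2), [('leaf', None),     # "the" vanishes in translation
                          ('leaf', 'pasudu')]),
    ]),
])
print(' '.join(linearize(tree)))              # pasudu moy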
It is also the first to simultaneously accommodate multiple input components and multiple output components. When a source document is available in multiple languages, a translator can benefit from the disambiguating information in each. Translator CT can take advantage of such information without making the strong independence assumptions of Och & Ney (2001). When output is desired in multiple languages, Translator CT offers all the putative benefits of the interlingual approach to MT, including greater efficiency and greater consistency across output components. Indeed, the language of multitrees can be viewed as an interlingua. 5 Synchronization We have explored inference of W -dimensional multitrees under a -dimensional grammar, where  W . Now we generalize along the other axis of Figure 1(a). Multitext synchronization is most often used to infer W -dimensional multitrees without the benefit of an W -dimensional grammar. One application is inducing a parser in one language from a parser in another (L¨u et al., 2002). The application that is most relevant to this paper is bootstrapping an W -dimensional grammar. In theory, it is possible to induce a PMTG from multitext in an unsupervised manner. A more reliable way is to start from a corpus of multitrees — a multitreebank.3 We are not aware of any multitreebanks at this time. The most straightforward way to create one is to parse some multitext using a synchronous parser, such as Parser C. However, if the goal is to bootstrap an W -PMTG, then there is no W -PMTG that can evaluate the terms in the parser’s logic. Our solution is to orchestrate lower-dimensional knowledge sources to evaluate the terms. Then, we can use the same parsing logic to synchronize multitext into a multitreebank. To illustrate, we describe a relatively simple synchronizer, using the Viterbi-derivation semiring.4 Under this semiring, a synchronizer computes the single most probable multitree for a given multitext. 3In contrast, a parallel treebank might contain no information about translational equivalence. 4The inside-probability semiring would be required for maximum-likelihood synchronization. ya kota kormil I fed the cat Figure 6: Synchronization. Only one synchronous dependency structure (dashed arrows) is compatible with the monolingual structure (solid arrows) and word alignment (shaded cells). If we have no suitable PMTG, then we can use other criteria to search for trees that have high probability. We shall consider the common synchronization scenario where a lexicalized monolingual grammar is available for at least one component.5 Also, given a tokenized set of W -tuples of parallel sentences, it is always possible to estimate a word-to-word translation model   3 1  ) 3 0 1  (e.g., Och & Ney, 2003).6 A word-to-word translation model and a lexicalized monolingual grammar are sufficient to drive a synchronizer. For example, in Figure 6 a monolingual grammar has allowed only one dependency structure on the English side, and a word-to-word translation model has allowed only one word alignment. The syntactic structures of all dimensions of a multitree are isomorphic up to reordering of sibling nodes and deletion. So, given a fixed correspondence between the tree leaves (i.e. 
words) across components, choosing the optimal structure for one component is tantamount to choosing the optimal synchronous structure for all components.7 Ignoring the nonterminal labels, only one dependency structure is compatible with these constraints – the one indicated by dashed arrows. Bootstrapping a PMTG from a lower-dimensional PMTG and a word-to-word translation model is similar in spirit to the way that regular grammars can help to estimate CFGs (Lari & Young, 1990), and the way that simple translation models can help to bootstrap more sophisticated ones (Brown et al., 1993). 5Such a grammar can be induced from a treebank, for example. We are currently aware of treebanks for English, Spanish, German, Chinese, Czech, Arabic, and Korean. 6Although most of the literature discusses word translation models between only two languages, it is possible to combine several 2D models into a higher-dimensional model (Mann & Yarowsky, 2001). 7Except where the unstructured components have words that are linked to nothing. We need only redefine the terms in a way that does not rely on an W -PMTG. Without loss of generality, we shall assume a -PMTG that ranges over the first components, where X W . We shall then refer to the structured components and the W  unstructured components. We begin with A . For the structured components      , we retain the grammarbased definition:                   ,8 where the latter probability can be looked up in our -PMTG. For the unstructured components, there are no useful nonterminal labels. Therefore, we assume that the unstructured components use only one (dummy) nonterminal label , so that R           if  and undefined otherwise for X   W . Our treatment of nonterminal productions begins by applying the chain rule9 &   1 )  1 )   1 )  1 ) 1 )   1 )  1 )     1 )  1 )  1 )  1 )   1 )  1 )  (3)   1 3  1 3  1 3  1 3   1 )  1 )      3 0 1 )  3 0 1 )   1 3  1 3  1 3  1 3  1 )  1 )     3 0 1 )   1 3  1 3  1 )  1 )  1 )  1 )     3 0 1 )   1 3  1 )  1 )  1 )  1 )  1 )  (4) and continues by making independence assumptions. The first assumption is that the structured components of the production’s RHS are conditionally independent of the unstructured components of its LHS:   1 3  1 3  1 3  1 3   1 )  1 )    1 3  1 3  1 3  1 3   1 3  1 3  (5) The above probability can be looked up in the -PMTG. Second, since we have no useful nonterminals in the unstructured components, we let    3 0 1 )  3 0 1 )   1 3  1 3  1 3  1 3  1 )  1 )  (6) if  3 0 1 )  3 0 1 )  3 0 1 ) and  otherwise. Third, we assume that the word-to-word translation probabilities are independent of anything else:   3 0 1 )   1 3  1 3  1 )  1 )  1 )  1 )    3 0 1 )  1 3  (7) 8We have ignored lexical heads so far, but we need them for this synchronizer. 9The procedure is analogous when the heir is the first nonterminal link on the RHS, rather than the second. These probabilities can be obtained from our wordto-word translation model, which would typically be estimated under exactly such an independence assumption. Finally, we assume that the output role templates are independent of each other and uniformly distributed, up to some maximum cardinality . Let    be the number of unique role templates of cardinality or less. Then   3 0 1 )   1 3  1 )  1 )  1 )  1 )  1 )  (8)   3 0 1 )  )   3 0 1       ) . 3 Under Assumptions 5–8,    1 )  1 )   1 )  1 ) 1 )   1 )  1 )   (9)   1 3  1 3  1 3  1 3   1 3  1 3    3 0 1 )  1 3     ) . 3 if  3 0 1 )  3 0 1 ) 3 0 1 ) and 0 otherwise. 
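To make the shape of this computation concrete, here is a hypothetical sketch of scoring one production during synchronization. The table formats, function names, and the toy numbers are all invented, and the exact conditioning events in the text (heirs, lexical heads, dummy nonterminals) are simplified away; this is only an illustration of the three-way factorization into a PMTG term for the structured components, word-to-word translation terms for the unstructured lexical heads, and a uniform term over the unstructured role templates.

import math

def production_log_score(structured_lhs, structured_rhs, structured_head,
                         unstructured_heads, n_unstructured,
                         max_cardinality, pmtg, ttable, num_templates_up_to):
    # Term 1: the structured part of the production, looked up in the PMTG.
    score = math.log(pmtg.get((structured_lhs, structured_rhs), 1e-12))
    # Term 2: each unstructured head word is translated independently
    # from the structured head word (word-to-word translation model).
    for head in unstructured_heads:
        score += math.log(ttable.get((head, structured_head), 1e-12))
    # Term 3: a uniform distribution over role templates of bounded
    # cardinality, one factor per unstructured component.
    score -= n_unstructured * math.log(num_templates_up_to(max_cardinality))
    return score

# Tiny made-up example: one structured (English) component, one unstructured
# (Russian) component, and invented probabilities.
pmtg = {(("S", "wash"), (("V", "wash"), ("NP", "dishes"))): 0.4}
ttable = {("moy", "wash"): 0.3}
tau = lambda m: 2 * m + 1        # stand-in for the number of role templates
print(production_log_score(
    structured_lhs=("S", "wash"),
    structured_rhs=(("V", "wash"), ("NP", "dishes")),
    structured_head="wash",
    unstructured_heads=["moy"],
    n_unstructured=1,
    max_cardinality=2,
    pmtg=pmtg, ttable=ttable, num_templates_up_to=tau))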
We can use these definitions of the grammar terms in the inference rules of Logic C to synchronize multitexts into multitreebanks. More sophisticated synchronization methods are certainly possible. For example, we could project a part-of-speech tagger (Yarowsky & Ngai, 2001) to improve our estimates in Equation 6. Yet, despite their relative simplicity, the above methods for estimating production rule probabilities use all of the available information in a consistent manner, without double-counting. This kind of synchronizer stands in contrast to more ad-hoc approaches (e.g., Matsumoto, 1993; Meyers, 1996; Wu, 1998; Hwa et al., 2002). Some of these previous works fix the word alignments first, and then infer compatible parse structures. Others do the opposite. Information about syntactic structure can be inferred more accurately given information about translational equivalence, and vice versa. Commitment to either kind of information without consideration of the other increases the potential for compounded errors. 6 Multitree-based Statistical MT Multitree-based statistical machine translation (MTSMT) is an architecture for SMT that revolves around multitrees. Figure 7 shows how to build and use a rudimentary MTSMT system, starting from some multitext and one or more monolingual treebanks. The recipe follows: T1. Induce a word-to-word translation model. T2. Induce PCFGs from the relative frequencies of productions in the monolingual treebanks. T3. Synchronize some multitext, e.g. using the approximations in Section 5. T4. Induce an initial PMTG from the relative frequencies of productions in the multitreebank. T5. Re-estimate the PMTG parameters, using a synchronous parser with the expectation semiring. A1. Use the PMTG to infer the most probable multitree covering new input text. A2. Linearize the output dimensions of the multitree. Steps T2, T4 and A2 are trivial. Steps T1, T3, T5, and A1 are instances of the generalized parsers described in this paper. Figure 7 is only an architecture. Computational complexity and generalization error stand in the way of its practical implementation. Nevertheless, it is satisfying to note that all the non-trivial algorithms in Figure 7 are special cases of Translator CT. It is therefore possible to implement an MTSMT system using just one inference algorithm, parameterized by a grammar, a semiring, and a search strategy. An advantage of building an MT system in this manner is that improvements invented for ordinary parsing algorithms can often be applied to all the main components of the system. For example, Melamed (2003) showed how to reduce the computational complexity of a synchronous parser by   3  , just by changing the logic. The same optimization can be applied to the inference algorithms in this paper. With proper software design, such optimizations need never be implemented more than once. For simplicity, the algorithms in this paper are based on CKY logic. However, the architecture in Figure 7 can also be implemented using generalizations of more sophisticated parsing logics, such as those inherent in Earley or Head-Driven parsers. 7 Conclusion This paper has presented generalizations of ordinary parsing that emerge when the grammar and/or the input can be multidimensional. Along the way, it has elucidated the relationships between ordinary parsers and other classes of algorithms, some previously known and some not. 
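Read as a pipeline, the recipe above (steps T1-T5 and A1-A2) might be written out as in the sketch below. Every helper is a stub standing in for one of the generalized parsers or for trivial relative-frequency counting; none of the names corresponds to a real API, and the stubs return empty placeholders only so that the outline runs.

def induce_word_translation_model(multitext):      # T1: e.g. via word alignment
    return {}

def relative_frequencies(treebank):                # T2 and T4: count productions
    return {}

def synchronize(multitext, pcfgs, ttable):         # T3: Parser C's logic with the
    return []                                      #     lower-dimensional knowledge sources

def reestimate(pmtg, multitext):                   # T5: synchronous parsing under
    return pmtg                                    #     the expectation semiring

def translate(pmtg, input_text):                   # A1: Translator CT under the
    return ('node', (1,), [('leaf', None)])        #     Viterbi-derivation semiring

def linearize_output(multitree):                   # A2: read off the output role templates
    return []

def train(multitext, monolingual_treebanks):
    ttable = induce_word_translation_model(multitext)
    pcfgs = [relative_frequencies(tb) for tb in monolingual_treebanks]
    multitreebank = synchronize(multitext, pcfgs, ttable)
    return reestimate(relative_frequencies(multitreebank), multitext)

def apply_system(pmtg, input_text):
    return linearize_output(translate(pmtg, input_text))

pmtg = train(multitext=[], monolingual_treebanks=[[]])
print(apply_system(pmtg, ["Pasudu", "moy"]))       # [] (stubs only)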
It turns out that, given some multitext and a monolingual treebank, a rudimentary multitree-based statistical machine translation system can be built and applied using only generalized parsers and some trivial glue. There are three research benefits of using generalized parsers to build MT systems. First, we can take advantage of past and future research on making parsers more accurate and more efficient. Therefore, second, we can concentrate our efforts on better models, without worrying about MT-specific search algorithms. Third, more generally and most importantly, this approach encourages MT research to be less specialized and more transparently related to the rest of computational linguistics.
Figure 7: Data-flow diagram for a rudimentary MTSMT system based on generalizations of parsing. (Training steps T1-T5 connect the training multitext, monolingual treebank(s), word-to-word translation model, PCFG(s), multitreebank, and PMTG via word alignment, synchronization, relative frequency computation, and synchronous parsing for parameter estimation; application steps A1-A2 map input multitext to output multitext via translation and linearization of the multitree.)
Acknowledgments Thanks to Joseph Turian, Wei Wang, Ben Wellington, and the anonymous reviewers for valuable feedback. This research was supported by an NSF CAREER Award, the DARPA TIDES program, and an equipment gift from Sun Microsystems. References A. Aho & J. Ullman (1969) “Syntax Directed Translations and the Pushdown Assembler,” Journal of Computer and System Sciences 3, 37-56. H. Alshawi, S. Bangalore, & S. Douglas (2000) “Learning Dependency Translation Models as Collections of Finite State Head Transducers,” Computational Linguistics 26(1):45-60. P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, & R. L. Mercer (1993) “The Mathematics of Statistical Machine Translation: Parameter Estimation,” Computational Linguistics 19(2):263–312. J. Goodman (1999) “Semiring Parsing,” Computational Linguistics 25(4):573–605. R. Hwa, P. Resnik, A. Weinberg, & O. Kolak (2002) “Evaluating Translational Correspondence using Annotation Projection,” Proceedings of the ACL. J. Eisner (2002) “Parameter Estimation for Probabilistic Finite-State Transducers,” Proceedings of the ACL. K. Lari & S. Young (1990) “The Estimation of Stochastic Context-Free Grammars using the Inside-Outside Algorithm,” Computer Speech and Language Processing 4:35–56. Y. Lü, S. Li, T. Zhao, & M. Yang (2002) “Learning Chinese Bracketing Knowledge Based on a Bilingual Language Model,” Proceedings of COLING. G. S. Mann & D. Yarowsky (2001) “Multipath Translation Lexicon Induction via Bridge Languages,” Proceedings of HLT/NAACL. Y. Matsumoto (1993) “Structural Matching of Parallel Texts,” Proceedings of the ACL. I. D. Melamed (2003) “Multitext Grammars and Synchronous Parsers,” Proceedings of HLT/NAACL. I. D. Melamed, G. Satta, & B. Wellington (2004) “Generalized Multitext Grammars,” Proceedings of the ACL (this volume). A. Meyers, R. Yangarber, & R. Grishman (1996) “Alignment of Shared Forests for Bilingual Corpora,” Proceedings of COLING. F. Och & H. Ney (2001) “Statistical Multi-Source Translation,” Proceedings of MT Summit VIII. F. Och & H. Ney (2003) “A Systematic Comparison of Various Statistical Alignment Models,” Computational Linguistics 29(1):19-51. K. Sima’an (1996) “Computational Complexity of Probabilistic Disambiguation by means of Tree-Grammars,” Proceedings of COLING. D. Wu (1996) “A polynomial-time algorithm for statistical machine translation,” Proceedings of the ACL. D. Wu (1997) “Stochastic inversion transduction grammars and bilingual parsing of parallel corpora,” Computational Linguistics 23(3):377-404. D. Wu & H. Wong (1998) “Machine translation with a stochastic grammatical channel,” Proceedings of the ACL. K. Yamada & K. Knight (2002) “A Decoder for Syntax-based Statistical MT,” Proceedings of the ACL. D. Yarowsky & G. Ngai (2001) “Inducing Multilingual POS Taggers and NP Bracketers via Robust Projection Across Aligned Corpora,” Proceedings of the NAACL.
Generalized Multitext Grammars I. Dan Melamed Computer Science Department New York University 715 Broadway, 7th Floor New York, NY, 10003, USA lastname  @cs.nyu.edu Giorgio Satta Dept. of Information Eng’g University of Padua via Gradenigo 6/A I-35131 Padova, Italy lastname  @dei.unipd.it Benjamin Wellington Computer Science Department New York University 715 Broadway, 7th Floor New York, NY, 10003, USA lastname  @cs.nyu.edu Abstract Generalized Multitext Grammar (GMTG) is a synchronous grammar formalism that is weakly equivalent to Linear Context-Free Rewriting Systems (LCFRS), but retains much of the notational and intuitive simplicity of Context-Free Grammar (CFG). GMTG allows both synchronous and independent rewriting. Such flexibility facilitates more perspicuous modeling of parallel text than what is possible with other synchronous formalisms. This paper investigates the generative capacity of GMTG, proves that each component grammar of a GMTG retains its generative power, and proposes a generalization of Chomsky Normal Form, which is necessary for synchronous CKY-style parsing. 1 Introduction Synchronous grammars have been proposed for the formal description of parallel texts representing translations of the same document. As shown by Melamed (2003), a plausible model of parallel text must be able to express discontinuous constituents. Since linguistic expressions can vanish in translation, a good model must be able to express independent (in addition to synchronous) rewriting. Inversion Transduction Grammar (ITG) (Wu, 1997) and Syntax-Directed Translation Schema (SDTS) (Aho and Ullman, 1969) lack both of these properties. Synchronous Tree Adjoining Grammar (STAG) (Shieber, 1994) lacks the latter and allows only limited discontinuities in each tree. Generalized Multitext Grammar (GMTG) offers a way to synchronize Mildly Context-Sensitive Grammar (MCSG), while satisfying both of the above criteria. The move to MCSG is motivated by our desire to more perspicuously account for certain syntactic phenomena that cannot be easily captured by context-free grammars, such as clitic climbing, extraposition, and other types of longdistance movement (Becker et al., 1991). On the other hand, MCSG still observes some restrictions that make the set of languages it generates less expensive to analyze than the languages generated by (properly) context-sensitive formalisms. More technically, our proposal starts from Multitext Grammar (MTG), a formalism for synchronizing context-free grammars recently proposed by Melamed (2003). In MTG, synchronous rewriting is implemented by means of an indexing relation that is maintained over occurrences of nonterminals in a sentential form, using essentially the same machinery as SDTS. Unlike SDTS, MTG can extend the dimensionality of the translation relation beyond two, and it can implement independent rewriting by means of partial deletion of syntactic structures. Our proposal generalizes MTG by moving from component grammars that generate contextfree languages to component grammars whose generative power is equivalent to Linear Context-Free Rewriting Systems (LCFRS), a formalism for describing a class of MCSGs. The generalization is achieved by allowing context-free productions to rewrite tuples of strings, rather than single strings. Thus, we retain the intuitive top-down definition of synchronous derivation original in SDTS and MTG but not found in LCFRS, while extending the generative power to linear context-free rewriting languages. 
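As a purely illustrative aside, the sketch below shows the minimum that such a production must be able to express, in an encoding we invented for exposition: a component may be inactive (so the other components rewrite independently), and an active component rewrites a tuple of strings, so a constituent may be discontinuous.

from typing import Optional, Tuple

# One production component is described here only by its LHS nonterminal
# (None if the component is inactive) and the tuple of string segments it
# yields in this toy example.
ComponentRHS = Tuple[Tuple[str, ...], ...]

def describe(lhs: Tuple[Optional[str], ...], rhs: Tuple[ComponentRHS, ...]) -> None:
    for d, (label, segments) in enumerate(zip(lhs, rhs), start=1):
        if label is None:
            print(f"component {d}: inactive (rewrites independently)")
        else:
            yielded = " ... ".join(" ".join(seg) for seg in segments)
            print(f"component {d}: {label} -> {yielded}")

# A made-up 2D production: component 1 rewrites V as a discontinuous
# verb-particle pair ("picked" ... "up"); component 2 is inactive.
describe(
    lhs=("V", None),
    rhs=(
        (("picked",), ("up",)),   # component 1: two segments of one constituent
        (),                       # component 2: inactive
    ),
)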
In this respect, GMTG has also been inspired by the class of Local Unordered Scattered Context Grammars (Rambow and Satta, 1999). A syntactically very different synchronous formalism involving LCFRS has been presented by Bertsch and Nederhof (2001). This paper begins with an informal description of GMTG. It continues with an investigation of this formalism’s generative capacity. Next, we prove that in GMTG each component grammar retains its generative power, a requirement for synchronous formalisms that Rambow and Satta (1996) called the “weak language preservation property.” Lastly, we propose a synchronous generalization of Chomsky Normal Form, which lays the groundwork for synchronous parsing under GMTG using a CKYstyle algorithm (Younger, 1967; Melamed, 2004). 2 Informal Description and Comparisons GMTG is a generalization of MTG, which is itself a generalization of CFG to the synchronous case. Here we present MTG in a new notation that shows the relation to CFG more clearly. For example, the following MTG productions can generate the multitext [(I fed the cat), (ya kota kormil)]:1 (S)  (S)   PN  VP    PN  VP   (1)  PN   PN   I   ya  (2)  VP   VP   V  NP    NP  V   (3)  V   V   fed   kormil  (4)  NP   NP   D  N    N   (5)  D     the    (6)  N   N   cat   kota  (7) Each production in this example has two components, the first modeling English and the second (transliterated) Russian. Nonterminals with the same index must be rewritten together (synchronous rewriting). One strength of MTG, and thus also GMTG, is shown in Productions (5) and (6). There is a determiner in English, but not in Russian, so Production (5) does not have the nonterminal D in the Russian component and (6) applies only to the English component (independent rewriting). Formalisms that do not allow independent rewriting require a corresponding  to appear in the second component on the right-hand side (RHS) of Production (5), and this  would eventually generate the empty string. This approach has the disadvantage that it introduces spurious ambiguity about the position of the “empty” nonterminal with respect to the other nonterminals in its component. Spurious ambiguity leads to wasted effort during parsing. GMTG’s implementation of independent rewriting through the empty tuple () serves a very different function from the empty string. Consider the following GMTG:         (8)           (9)                 (10)    !   "    #    %$  (11) Production (8) asserts that symbol  vanishes in translation. Its application removes both of the nonterminals on the left-hand side (LHS), pre-empting any other production. In contrast, Production (9) 1We write production components both side by side and one above another to save space, but each component is always in parentheses. explicitly relaxes the synchronization constraint, so that the two components can be rewritten independently. The other six productions make assertions about only one component and are agnostic about the other component. Incidentally, generating the same language with only fully synchronized productions would raise the number of required productions to 11, so independent rewriting also helps to reduce grammar size. Independent rewriting is also useful for modeling paraphrasing. Take, for example, [(Tim got a pink slip), (Tim got laid off)]. While the two sentences have the same meaning, the objects of their verb phrases are structured very differently. 
GMTG can express their relationships as follows:  S   S   NP  VP    NP  VP &  (12)  VP   VP   V  NP    V  PP   (13)  NP   PP   DT  A ' N (   VB )* R +  (14)  NP   NP   Tim   Tim  (15) , V   V   got   got  (16) , DT     a    (17)  A     pink    (18)  N     slip    (19) ,   VB     laid  (20)    R     off  (21) As described by Melamed (2003), MTG requires production components to be contiguous, except after binarization. GMTG removes this restriction. Take, for example, the sentence pair [(The doctor treats his teeth), (El m´edico le examino los dientes)] (Dras and Bleam, 2000). The Spanish clitic le and the NP los dientes should both be paired with the English NP his teeth, giving rise to a discontinuous constituent in the Spanish component. A GMTG fragment for the sentence is shown below:  S   S   NP  VP    NP  VP    VP   VP !  V  NP    NP  V  NP    NP   NP !  The doctor   El m´edico   V   V   treats   examino   NP   NP  NP !  his teeth   le  los dientes  Note the discontinuity between le and los dientes. Such discontinuities are marked by commas on both the LHS and the RHS of the relevant component. GMTG’s flexibility allows it to deal with many complex syntactic phenomena. For example, Becker et al. (1991) point out that TAG does not have the generative capacity to model certain kinds of scrambling in German, when the so-called “cooccurrence constraint” is imposed, requiring the derivational pairing between verbs and their complements. They examine the English/German sentence fragment [(... that the detective has promised the client to indict the suspect of the crime), (... daß des Verbrechens der Detektiv den Verd¨achtigen dem Klienten zu ¨uberf¨uhren versprochen hat)]. The verbs versprochen and ¨uberf¨uhren both have two noun phrases as arguments. In German, these noun phrases can appear to the left of the verbs in any order. The following is a GMTG fragment for the above sentence pair2:  S  S     N   has promised N    S (  S ( N   S ( N    S ( versprochen hat  (22)  S  S  S  S    to indict N     N   !  N  " #  N  " $  zu ¨uberf¨uhren  (23) The discontinuities allow the noun arguments of versprochen to be placed in any order with the noun arguments of ¨uberf¨uhren. Rambow (1995) gives a similar analysis. 3 Formal Definitions Let %'& be a finite set of nonterminal symbols and let ( be the set of integers.3 We define )  % & +* ,.-0/21$3  -54 % & 76 4 (98 .4 Elements of )  % & will be called indexed nonterminal symbols. In what follows we also consider a finite set of terminal symbols %;: , disjoint from % & , and work with strings in %=< > , where % > *?)  % & @A%;: . For B 4 %C< > , we define D EGFIHKJ  B L* , 6 !BM*NBPO /21$3 B'O OQBPO%RBPO O 4 % < >  -S/ 1$3T4 )  % & U8 , i.e. the set of indexes that appear in B . An indexed tuple vector, or ITV, is a vector of tuples of strings over % > , having the form B *  B   VVV RB WYX VVV   B[Z  VVV RB;Z W\  where ^]`_ , abc]ed and Bfbhg 4 % < > for _CikjAi  , _Simlnioab . We write B j , _Siojpi  , to denote the j -th component of B and q  B j to denote the arity of such a tuple, which is a.b . When q  B j r*sd , B j is the empty tuple, written  . This should not be confused with  , that is the tuple of arity one containing the empty string. A link is an ITV where 2These are only a small subset of the necessary productions. The subscripts on the nonterminals indicate what terminals they will eventually yield; the terminal productions have been left out to save space. 3Any other infinite set of indexes would suit too. 
4The parentheses around indexes distinguish them from other uses of superscripts in formal language theory. However, we shall omit the parentheses when the context is unambiguous. each Bfbtg consists of one indexed nonterminal and all of these nonterminals are coindexed. As we shall see, the notion of a link generalizes the notion of nonterminal in context-free grammars: each production rewrites a single link. Definition 1 Let  ] _ be some integer constant. A generalized multitext grammar with  dimensions (  -GMTG for short) is a tuple uv*  %'& K% : w   where %'& , % : are finite, disjoint sets of nonterminal and terminal symbols, respectively, x4 % & is the start symbol and w is a finite set of productions. Each production has the form y  z , where y is a  -dimensional link and z is a  dimensional ITV such that q  y j {*|q  z j for _Si}jpi  . If y j contains  , then q  y j c*~_ . We omit symbol  from  -GMTG whenever it is not relevant. To simplify notation, we write productions as €*   VVV PZ  , with each 'b* b  VVV  b W‚   y b  VVV Uy b W‚ , bhg 4 %'& . I.e. we omit the unique index appearing on the LHS of  . Each  b is called a production component. The production component    is called the inactive production component. All other production components are called active and we set ƒG„…KD †‡H   ˆ* , j ‰abTŠ‹df8 . Inactive production components are used to relax synchronous rewriting on some dimensions, that is to implement rewriting on 7Œ  components. When *_ , rewriting is licensed on one component, independently of all the others. Two grammar parameters play an important role in this paper. Let Ž*   VVV PZ  4 w and b* b  VVV  b W‚   y b  VVV Uy b W‚ . Definition 2 The rank ‘ of a production  is the number of links on its RHS: ‘   *  D2E’F“HKJ  y  ””” y WX y  ””” y•Z W\ & . The rank of a GMTG u is ‘  u c*—–+˜’™ š‡›’œ‘   . Definition 3 The fan-out of  b ,  and u are, respectively, q   b ž*|a b , q   ž* Ÿ Z b¢¡  q   b and q  u Q*N–+˜’™ 𣛒œ¤q   . For example, the rank of Production (23) is two and its fan-out is four. In GMTG, the derives relation is defined over ITVs. GMTG derivation proceeds by synchronous application of all the active components in some production. The indexed nonterminals to be rewritten simultaneously must all have the same index 6 , and all nonterminals indexed with 6 in the ITV must be rewritten simultaneously. Some additional notation will help us to define rewriting precisely. A reindexing is a one-to-one function on ( , and is extended to % > by letting #  ¥*  for ¦4 %§: and # -0/21$3 —* -0/‡/21$3$3 for -0/ 1$3N4 )  % & . We also extend # to strings in % < > analogously. We say that y Uy O 4 % < > are independent if D EGF“HKJ  y  D2E’F“HKJ  yO, c* . Definition 4 Let u *  % & K%[: w   be a  -GMTG and let  *   VVV  Z  with  4 w and 'b* b  VVV  b W‚   yb  VVV Uyb W‚ . Let B and  be two ITVs with B j*  B[b  VVV RBfb W‚ and  jT*  b  VVV b W‚ . Assume that y is some concatenation of all y•btg and that B is some concatenation of all Bfbhg , _=iojAi  , _0iŽl ikab , and let # be some reindexing such that strings #  y and B are independent. 
The derives relation B  š  holds whenever there exists an index 6 4 ( such that the following two conditions are satisfied: (i) for each j 4 ƒ’„…KD †’H   we have Bfb  ””” BIb W‚ * BPO b /21$3 b  B'O b  /21$3 b  ””” B'O b W‚  / 1$3 b W‚ BPO b W‚ such that 6 4 D2E’F“HKJ  B O b BPO b  ””” B'O b W‚ , and each bhg is obtained from B[btg by replacing each /21$3 bhg with #  y bhg ; (ii) for each j 4 ƒ’„…UD †’H   we have 6 4 D EGF“HKJ  BIb p””” Bfb W‚ and B j *  j . We generalize the  š relation to  and o< in the usual way, to represent derivations. We can now introduce the notion of generated language (or generated relation). A start link of a  -GMTG is a  -dimensional link where at least one component is  /  3 ,  the start symbol, and the rest of the components are  . Thus, there are  Z _ start links. The language generated by a  -GMTG u is   u —* , B  B < B   B a start link  B  j*  or B  j*  b with  b 4 % < : _—i jži  8 . Each ITV in   u is called a multitext. For every  -GMTG u ,   u can be partitioned into  Z  _ subsets, each containing multitexts derived from a different start link. These subsets are disjoint, since every nonempty tuple of a start link is eventually rewritten as a string, either empty or not.5 A start production is a production whose LHS is a start link. A GMTG writer can choose the combinations of components in which the grammar can generate, by including start productions with the desired combinations of active components. If a grammar contains no start productions with a certain combination of active components, then the corresponding subset of   u will be empty. Allowing a single GMTG u to generate multitexts with 5We are assuming that there are no useless nonterminals. some empty tuples corresponds to modeling relations of different dimensionalities. This capability enables a synchronous grammar to govern lowerdimensional sublanguages/translations. For example, an English/Italian GMTG can include Production (9), an English CFG, and an Italian CFG. A single GMTG can then govern both translingual and monolingual information in applications. Furthermore, this capability simplifies the normalization procedure described in Section 6. Otherwise, this procedure would require exceptions to be made when eliminating epsilons from start productions. 4 Generative Capacity In this section we compare the generative capacity of GMTG with that of mildly context-sensitive grammars. We focus on LCFRS, using the notational variant introduced by Rambow and Satta (1999), briefly summarized below. Throughout this section, strings v4 %=< : and vectors of the form   will be identified. For lack of space, some proofs are only sketched, or entirely omitted when relatively intuitive: Melamed et al. (2004) provide more details. Let %;: be some terminal alphabet. A function $ has rank ¤]od if it is defined on  % < : X"!  % < : $# ! ””” !  %=< : &% , for integers # bc]`_ , _Ci jAi' . Also, $ has fan-out # ] _ if its range is a subset of  %¤< : . Let (*) , +'btg , _ i-,ki # , _ i~j=i. and _ ixlmi # b , be string-valued variables. Function $ is linear regular if it is defined by an equation of the form $ 0/ +   VVV 0+  X 1 VVV  / +32  VVV 0+ 2 &% 1 * / (  VVV 0( 1 (24) where / (  VVV 0( 1 represents some grouping into # strings of all and only the variables appearing in the left-hand side, possibly with some additional terminal symbols. (Symbols ‘ , q and  are overloaded below.) 
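For concreteness, here is a small example of a linear regular function written out in Python; the particular function is ours, chosen only to make the definition tangible. It takes two argument tuples (rank 2) and returns a pair of strings (fan-out 2); every variable from the left-hand side is used exactly once on the right-hand side, possibly interleaved with terminal symbols.

def f(arg1, arg2):
    (x1,) = arg1        # first argument: a string tuple of fan-out 1
    (y1, y2) = arg2     # second argument: a string tuple of fan-out 2
    # Group all variables, each used exactly once, into two output strings.
    return (x1 + "a" + y1, y2 + "b")

print(f(("w",), ("u", "v")))   # ('wau', 'vb')

In an LCFRS production, a function of this kind assembles the string tuple of the left-hand side nonterminal from the string tuples derived by the nonterminals on its right-hand side.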
Definition 5 A Linear Context-Free Rewriting System (LCFRS) is a quadruple u *  % & K%;: w   where % & , %;: and  are as in GMTGs, every 4 % & is associated with an integer q  |] _ with q  5* _ , and w is a finite set of productions of the form  $ 54   4  VVV  476 /98K3 , where ‘ %$ ] d ,  4 b 4 %'& , _+iNjTi ‘ %$ and where $ is a linear regular function having rank ‘ %$ and fan-out q  , defined on  %C< : $: / ; X 3 ! ””” !  %=< : : / ;3<= >$?3 . For every 4 %P& and @ 4  %=< : $: /9A 3 , we write  @ if (i)  $  4 w and $  c*@ ; or else (ii)  $ 54  VVV  4 6 / 8K3 4 w , 4 b  @b 4  %C< : $: / ; ‚ 3 for every _ i j i ‘ %$ , and $  @  VVV @ 6 /98K3 * @ . The language generated by u is defined as   u Q* ,        4 %0< : 8 . Let  4 w , ~*  $ 54   4  VVV  4 6 /98K3 . The rank of  and u are, respectively, ‘   =* ‘ %$ and ‘  u * –+˜’™£š‡›’œ=‘   . The fan-out of  and u are, respectively, q   c* q  and q  u Q*N–+˜’™ 𣛒œ q   . The proof of the following theorem is relatively intuitive and therefore omitted. Theorem 1 For any LCFRS u , there exists some 1-GMTG u O with ‘  u9O, N* ‘  u and q  u O, N* q  u such that   u O, *   u . Next, we show that the generative capacity of GMTG does not exceed that of LCFRS. In order to compare string tuples with bare strings, we introduce two special functions ranging over multitexts. Assume two fresh symbols    4  %': @ % & . For a multitext B we write ’ƒ’F  B * B'O , where BPO j *   if B j *  and B'O j * B j otherwise, _vi jsi  . For a multitext      VVV   Z  with no empty tuple, we write  GH       VVV   Z  *       ”””   Z . We extend both functions to sets of multitexts in the obvious way:  ’H   * ,  ’H   !4 8 and ’ƒ’F   Q* , ’ƒ’F   •4 A8 . In a  -GMTG, a production with active components, _—i i  , is said to be -active. A  -GMTG whose start productions are all  -active is called properly synchronous. Lemma 1 For any properly synchronous  -GMTG u , there exists some LCFRS u O with ‘  u9O, p*N‘  u and q  u9O% *q  u such that   u O, c*  GH    u . Outline of the proof. We set u O *  % O & P%;: w O    , where % O & * ,  Y6  ' 4 w c6 4 D EGF“HKJ  u U8@ ,  8 , D2E’F“HKJ  u is the set of all indexes appearing in the productions of u , and wCO is constructed as follows. Let   'O 4 w with ^*   VVV  Z  , O * O  VVV O Z  , b * b  VVV  b Z   yb  VVV Uyb W‚ , and 'O b * 54 b  VVV  4 b Z   z b  VVV Yz b W  ‚ . Assume that  can rewrite the righthand side of PO , that is  z   VVV Yz W  X VVV   z Z  VVV Yz Z W  \   š     VVV  WYX VVV   Z  VVV Z W\  V Then there must be at least one index 6 such that for each j 4 ƒ’„…UD †’H   ,  zb  VVV Yz b W  ‚ contains exactly ab occurrences of 6 . Let y š * y  ””” y WYX y  ””” y•Z W\ . Also let D2E’F“HKJ  y š Ž* , 6  VVV Y6 6 / š 3 8 and let q  6Rb be the number of occurrences of 6Yb appearing in y š . We define an alphabet  š|* , +'bhg  _ i j i ‘   —_xi l i q  6b U8 . For each j and l with _ i jei  , j 4 ƒ’„…UD †’H   and _~i l i ab , we define a string ,   Yj l over  šn@Ž%[: as follows. Let y bhgn*    !”””  , each M4 % > . Then ,   Yj l L*  O   O  ”””  O  , where   O  *  in case C4 %;: ; and   O  * + 1   in case e4 )  % & , where 6 is the index of  and the indicated occurrence of  is the  -th occurrence of such symbol appearing from left to right in string y š . Next, for every possible  , PO , and 6 as above, we add to wSO a production  1 *  O Y6   $   Y6  VVV   Y6 6 / š 3   where $ 0/ +   VVV 0+  : /21 X 3 1 VVV  / + 6 / š 3  VVV 0+ 6 / š 3 : / 1 <0= ? 3 1 * / ,   _*_ VVV*,     aGZ 1 (each ,   Yj l above satisfies j 4 ƒG„…KD †‡H   ). 
Note that $ is a function with rank ‘   and fan-out Ÿ Z b¢¡  ab * q   . Thus we have ‘   1 * ‘   and q   1 * q   . Without loss of generality, we assume that u contains only one production with  appearing on the left-hand side, having the form   *  VVV      VVV    . To complete the construction of wCO , we then add a last production    $    _  where $ 0/ +   0+   VVV 0+  Z 1 }* / +    +    ”””  +  Z 1 . We claim that, for each  ,  O and 6 as above    VVV   WYX VVV   Z  VVV   Z W\  }<    VVV   WYX VVV   Z  VVV   Z W \  iff 'OY6   /   VVV   WYX     VVV   Z W\ 1 . The lemma follows from this claim. The proof of the next lemma is relatively intuitive and therefore omitted. Lemma 2 For any  -GMTG u , there exists a properly synchronous  -GMTG u O such that ‘  u O, +* ‘  u , q  u9O% ‹* –+˜’™ , q  u  r8 , and   u9O, x* ’ƒ’F    u . Combining Lemmas 1 and 2, we have Theorem 2 For any  -GMTG u , there exists some LCFRS u O with ‘  u9O, * ‘  u and q  u9O% * –+˜’™ , q  u   8 such that   u O, 5*  ’H  ’ƒ’F    u . 5 Weak Language Preservation Property GMTGs have the weak language preservation property, which is one of the defining requirements of synchronous rewriting systems (Rambow and Satta, 1996). Informally stated, the generative capacity of the class of all component grammars of a GMTG exactly corresponds to the class of all projected languages. In other words, the interaction among different grammar components in the rewriting process of GMTG does not increase the generative power beyond the above mentioned class. The next result states this property more formally. Let u be a  -GMTG with production set w . For _5i j i  , the j -th component grammar of u , written    u Yj , is the 1-GMTG with productions w!b~* , 'b    VVV PZ  4 w  b  *    U8 . Similarly, the j -th projected language of   u is      u Yj * ,  b    VVV   Z  4   u   b  *  U8 . In general      u Yj  *      u Yj , because component grammars    u Yj interact with each other in the rewriting process of u . To give a simple example, consider the 2GMTG u with productions          ,     I-0/  3   p/  3  and       /  3   /  3   . Then   u * ,        ]df8 , and thus      u  ¤* ,     ] df8 . On the other hand,      u  ¤* ,        ]‹df8 . Let  LCFRS be the class of all languages generated by LCFRSs. Also let š / 3 and š / [3 be the classes of languages      u  and      u  , respectively, for every |] _ , every  -GMTG u and every with _Si i  . Theorem 3 š / 3 *    and š / f3 *    . Proof. The  cases directly follow from Theorem 1. Let u be some  -GMTG and let be an integer such that _ i i  . It is not difficult to see that  ’H  ’ƒ’F       u  ‰*      u  . Hence      u  can be generated by some LCFRS, by Theorem 2. We now define a LCFRS u O such that   u O ‹*    ‡ƒGF    u  . Assume u O O}*  %'& K% : w   is a properly synchronous  -GMTG generating ’ƒ’F    u (Lemma 2). Let uSO¦*  % O & K%;: w O    , where % O & and w O are constructed from u O O almost as in the proof of Lemma 1. The only difference is in the definition of strings ,   Yj l and the production rewriting   , specified as follows (we use the same notation as in the proof of Lemma 1). ,   Yj l L*  O   O  ”””  O  , where for each  : (i)  O  *  if e4 %[: and j * ; (ii)  O  *  if  4 %[: and j  * ; (iii)  O  * + 1   if {4 )  % & , with 6 ,  as in the original proof. Finally, the production rewriting   has the form    $   _  , where $ 0/ +   0+   VVV 0+  Z 1 =* / +   +  !””” +  Z 1 . To conclude the proof, note that      u  and    ’ƒ’F    u  can differ only with respect to string  . 
The theorem then follows from the fact that LCFRS is closed under intersection with regular languages (Weir, 1988). 6 Generalized Chomsky Normal Form Certain kinds of text analysis require a grammar in a convenient normal form. The prototypical example for CFG is Chomsky Normal Form (CNF), which is required for CKY-style parsing. A  -GMTG is in Generalized Chomsky Normal Form (GCNF) if it has no useless links or useless terminals, and every production is in one of two forms: (i) A nonterminal production has rank = 2 and no terminals or  ’s on the RHS. (ii) A terminal production has exactly one component of the form   , where -x4 % & and  4 %;: . The other components are inactive. The algorithm to convert a GMTG to GCNF has the following steps: (1) add a new start-symbol (2) isolate terminals, (3) binarize productions, (4) remove  ’s, (5) eliminate useless links and terminals, and (6) eliminate unit productions. The steps are generalizations of those presented by Hopcroft et al. (2001) to the multidimensional case with discontinuities. The ordering of these steps is important, as some steps can restore conditions that others eliminate. Traditionally, the terminal isolation and binarization steps came last, but the alternative order reduces the number of productions that can be created during  -elimination. Steps (1), (2), (5) and (6) are the same for CFG and GMTG, except that the notion of nonterminal in CFG is replaced with links in GMTG. Some complications arise, however, in the generalization of steps (3) and (4). 6.1 Step 3: Binarize The third step of converting to GCNF is binarization of the productions, making the rank of the grammar two. For ¤]od and # ] _ , we write D-GMTG / 2 3 to represent the class of all  -GMTGs with rank and fan-out # . A CFG can always be binarized into another CFG: two adjacent nonterminals are replaced with a single nonterminal that yields them. In contrast, it can be impossible to binarize a  -GMTG / 2 3 into an equivalent  -GMTG  . From results presented by Rambow and Satta (1999) it follows that, (S) (S)    N PatV wentP (homeA )early  P (damoyN PatA )ranoV pashol  Pat went home early damoy Pat rano pashol Figure 1: A production that requires an increased fan-out to binarize, and its 2D illustration. for every fan-out # ]  and rank ] , there are some index orderings that can be generated by  -GMTG / 2 3 but not  -GMTG / 2  3 . The distinguishing characteristic of such index orderings is apparent in Figure 1, which shows a production in a grammar with fan-out two, and a graph that illustrates which nonterminals are coindexed. No two nonterminals are adjacent in both components, so replacing any two nonterminals with a single nonterminal causes a discontinuity. Increasing the fanout of the grammar allows a single nonterminal to rewrite as non-adjacent nonterminals in the same string. Increasing the fan-out can be necessary even for binarizing a 1-GMTG production such as:  S,S    N  V  P ( A )  P ( N  A ) V   (25) To binarize, we nondeterministically split each nonterminal production  of rank Š  into two nonterminal productions   and   of rank Œ , but possibly with higher fan-out. Since this algorithm replaces with two productions that have rank Œ , recursively applying the algorithm to productions of rank greater than two will reduce the rank of the grammar to two. The algorithm follows: (i) Nondeterministically chose links to be removed from  and replaced with a single link to make   , where }i i  _ . We call these links the m-links. 
(ii) Create a new ITV B . Two nonterminals are neighbors if they are adjacent in the same string in a production RHS. For each set of mlink neighbors in component in  , place that set of neighbors into the ’th component of B in the order in which they appeared in  , so that each set of neighbors becomes a different string, for _Si i  . (iii) Create a new unique nonterminal, say 4 , and replace each set of neighbors in production  with 4 , to create   . The production   is 4 VVV  4   B For example, binarization of the productions for the English/Russian multitext [(Pat went home early), (damoy Pat rano pashol)]6 in Figure 1 requires that we increase the fan-out of the language to three. The binarized productions are as follows:  S  S    N PatVP   VP  N PatVP   (26)  VP  VP  VP    V  A early  V   A ranoV   (27)  V  V  V    V wentP home  P damoy  V pashol  (28) 6.2 Step 4: Eliminate  ’s Grammars in GCNF cannot have  ’s in their productions. Thus, GCNF is a more restrictive normal form than those used by Wu (1997) and Melamed (2003). The absence of  ’s simplifies parsers for GMTG (Melamed, 2004). Given a GMTG u with  in some productions, we give the construction of a weakly equivalent grammar u9O without any  ’s. First, determine all nullable links and associated strings in u . A link  *  VVV   VVV  Z VVV  Z  is nullable if  <  B , where B *  y   VVV Uy WYX VVV   y•Z  VVV Uy•Z W\  is an ITV where at least one y•bhg is  . We say the link  is nullable and the string at address  a in  is nullable. For each nullable link, we create   versions of the link, where is the number of nullable strings of that link. There is one version for each of the possible combinations of the nullable strings being present or absent. The version of the link with all strings present is its original version. Each non-original version of the link (except in the case of start links) gets a unique subscript, which is applied to all the nonterminals in the link, so that each link is unique in the grammar. We construct a new grammar u O whose set of productions w O is determined as follows: for each production, we identify the nullable links on the RHS and replace them with each combination of the non-original versions found earlier. If a string is left empty during this process, that string is removed from the RHS and the fan-out of the production component is reduced by one. The link on the LHS is replaced with its appropriate matching non-original link. There is one exception to the replacements. If a production consists of all nullable strings, do not include this case. Lastly, we remove all strings on the RHS of productions that have  ’s, and reduce the fan-out of the productions accordingly. Once 6The Russian is topicalized but grammatically correct. again, we replace the LHS link with the appropriate version. Consider the example grammar:      4    54    (29)        4   54   (30) 54  54      (31) 54  54       (32) We first identify which links are nullable. In this case     and 54  54  are nullable so we create a new version of both links:       and 54     . We then alter the productions. Production (31) gets replaced by (40). A new production based on (30) is Production (38). Lastly, Production (29) has two nullable strings on the RHS, so it gets altered to add three new productions, (34), (35) and (36). 
The altered set of productions are the following:       4    54    (33)       4       (34)        4     54   (35)        4        (36)        4   54  (37)         4      (38) 54  54       (39) 54         (40) Melamed et al. (2004) give more details about conversion to GCNF, as well as the full proof of our final theorem: Theorem 4 For each GMTG u there exists a GMTG u O in GCNF generating the same set of multitexts as u but with each  component in a multitext replaced by  . 7 Conclusions Generalized Multitext Grammar is a convenient and intuitive model of parallel text. In this paper, we have presented some formal properties of GMTG, including proofs that the generative capacity of GMTG is comparable to ordinary LCFRS, and that GMTG has the weak language preservation property. We also proposed a synchronous generalization of Chomsky Normal Form, laying the foundation for synchronous CKY parsing under GMTG. In future work, we shall explore the empirical properties of GMTG, by inducing stochastic GMTGs from real multitexts. Acknowledgments Thanks to Owen Rambow and the anonymous reviewers for valuable feedback. This research was supported by an NSF CAREER Award, the DARPA TIDES program, the Italian MIUR under project PRIN No. 2003091149 005, and an equipment gift from Sun Microsystems. References A. Aho and J. Ullman. 1969. Syntax directed translations and the pushdown assembler. Journal of Computer and System Sciences, 3:37–56, February. T. Becker, A. Joshi, and O. Rambow. 1991. Long-distance scrambling and tree adjoining grammars. In Proceedings of the 5th Meeting of the European Chapter of the Association for Computational Linguistics (EACL), Berlin, Germany. E. Bertsch and M. J. Nederhof. 2001. On the complexity of some extensions of RCG parsing. In Proceedings of the 7th International Workshop on Parsing Technologies (IWPT), pages 66–77, Beijing, China. M. Dras and T. Bleam. 2000. How problematic are clitics for S-TAG translation? In Proceedings of the 5th International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+5), Paris, France. J. Hopcroft, R. Motwani, and J. Ullman. 2001. Introduction to Automota Theory, Languages and Computation. AddisonWesley, USA. I. Dan Melamed, G. Satta, and B. Wellington. 2004. Generalized multitext grammars. Technical Report 04-003, NYU Proteus Project. http://nlp.cs.nyu.edu/pubs/. I. Dan Melamed. 2003. Multitext grammars and synchronous parsers. In Proceedings of the Human Language Technology Conference and the North American Association for Computational Linguistics (HLT-NAACL), pages 158–165, Edmonton, Canada. I. Dan Melamed. 2004. Statistical machine translation by parsing. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL), Barcelona, Spain. O. Rambow and G. Satta. 1996. Synchronous models of language. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL), Santa Cruz, USA. O. Rambow and G. Satta. 1999. Independent parallelism in finite copying parallel rewriting systems. Theoretical Computer Science, 223:87–120, July. O. Rambow. 1995. Formal and Computational Aspects of Natural Language Syntax. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA. S. Shieber. 1994. Restricting the weak-generative capactiy of synchronous tree-adjoining grammars. Computational Intelligence, 10(4):371–386. D. J. Weir. 1988. Characterizing Mildly Context-Sensitive Grammar Formalisms. Ph.D. 
thesis, Department of Computer and Information Science, University of Pennsylvania. D. Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-404, September. D. H. Younger. 1967. Recognition and parsing of context-free languages in time n3. Information and Control, 10(2):189-208, February.
Identifying Agreement and Disagreement in Conversational Speech: Use of Bayesian Networks to Model Pragmatic Dependencies Michel Galley , Kathleen McKeown , Julia Hirschberg , Columbia University Computer Science Department 1214 Amsterdam Avenue New York, NY 10027, USA  galley,kathy,julia  @cs.columbia.edu and Elizabeth Shriberg   SRI International Speech Technology and Research Laboratory 333 Ravenswood Avenue Menlo Park, CA 94025, USA [email protected] Abstract We describe a statistical approach for modeling agreements and disagreements in conversational interaction. Our approach first identifies adjacency pairs using maximum entropy ranking based on a set of lexical, durational, and structural features that look both forward and backward in the discourse. We then classify utterances as agreement or disagreement using these adjacency pairs and features that represent various pragmatic influences of previous agreement or disagreement on the current utterance. Our approach achieves 86.9% accuracy, a 4.9% increase over previous work. 1 Introduction One of the main features of meetings is the occurrence of agreement and disagreement among participants. Often meetings include long stretches of controversial discussion before some consensus decision is reached. Our ultimate goal is automated summarization of multi-participant meetings and we hypothesize that the ability to automatically identify agreement and disagreement between participants will help us in the summarization task. For example, a summary might resemble minutes of meetings with major decisions reached (consensus) along with highlighted points of the pros and cons for each decision. In this paper, we present a method to automatically classify utterances as agreement, disagreement, or neither. Previous work in automatic identification of agreement/disagreement (Hillard et al., 2003) demonstrates that this is a feasible task when various textual, durational, and acoustic features are available. We build on their approach and show that we can get an improvement in accuracy when contextual information is taken into account. Our approach first identifies adjacency pairs using maximum entropy ranking based on a set of lexical, durational and structural features that look both forward and backward in the discourse. This allows us to acquire, and subsequently process, knowledge about who speaks to whom. We hypothesize that pragmatic features that center around previous agreement between speakers in the dialog will influence the determination of agreement/disagreement. For example, if a speaker disagrees with another person once in the conversation, is he more likely to disagree with him again? We model context using Bayesian networks that allows capturing of these pragmatic dependencies. Our accuracy for classifying agreements and disagreements is 86.9%, which is a 4.9% improvement over (Hillard et al., 2003). In the following sections, we begin by describing the annotated corpus that we used for our experiments. We then turn to our work on identifying adjacency pairs. In the section on identification of agreement/disagreement, we describe the contextual features that we model and the implementation of the classifier. We close with a discussion of future work. 2 Corpus The ICSI Meeting corpus (Janin et al., 2003) is a collection of 75 meetings collected at the International Computer Science Institute (ICSI), one among the growing number of corpora of humanto-human multi-party conversations. 
These are naturally occurring, regular weekly meetings of various ICSI research teams. Meetings in general run just under an hour each; they have an average of 6.5 participants. These meetings have been labeled with adjacency pairs (AP), which provide information about speaker interaction. They reflect the structure of conversations as paired utterances such as question-answer and offer-acceptance, and their labeling is used in our work to determine who are the addressees in agreements and disagreements. The annotation of the corpus with adjacency pairs is described in (Shriberg et al., 2004; Dhillon et al., 2004). Seven of those meetings were segmented into spurts, defined as periods of speech that have no pauses greater than .5 second, and each spurt was labeled with one of the four categories: agreement, disagreement, backchannel, and other.1 We used spurt segmentation as our unit of analysis instead of sentence segmentation, because our ultimate goal is to build a system that can be fully automated, and in that respect, spurt segmentation is easy to obtain. Backchannels (e.g. "uhhuh" and "okay") were treated as a separate category, since they are generally used by listeners to indicate they are following along, while not necessarily indicating agreement. The proportion of classes is the following: 11.9% are agreements, 6.8% are disagreements, 23.2% are backchannels, and 58.1% are others. Inter-labeler reliability estimated on 500 spurts with 2 labelers was considered quite acceptable, since the kappa coefficient was .63 (Cohen, 1960).

1 Part of these annotated meetings were provided by the authors of (Hillard et al., 2003).

3 Adjacency Pairs

3.1 Overview

Adjacency pairs (AP) are considered fundamental units of conversational organization (Schegloff and Sacks, 1973). Their identification is central to our problem, since we need to know the identity of addressees in agreements and disagreements, and adjacency pairs provide a means of acquiring this knowledge. An adjacency pair is said to consist of two parts (later referred to as A and B) that are ordered, adjacent, and produced by different speakers. The first part makes the second one immediately relevant, as a question does with an answer, or an offer does with an acceptance. Extensive work in conversational analysis uses a less restrictive definition of adjacency pair that does not impose any actual adjacency requirement; this requirement is problematic in many respects (Levinson, 1983). Even when APs are not directly adjacent, the same constraints between pairs and mechanisms for selecting the next speaker remain in place (e.g. the case of embedded question and answer pairs). This relaxation on a strict adjacency requirement is particularly important in interactions of multiple speakers, since other speakers have more opportunities to insert utterances between the two elements of the AP construction (e.g. interrupted, abandoned or ignored utterances; backchannels; APs with multiple second elements, e.g. a question followed by answers of multiple speakers).2 Information provided by adjacency pairs can be used to identify the target of an agreeing or disagreeing utterance. We define the problem of AP identification as follows: given the second element (B) of an adjacency pair, determine who is the speaker of the first element (A).

2 The percentage of APs labeled in our data that have noncontiguous parts is about 21%.
A quite effective baseline algorithm is to select as speaker of utterance A the most recent speaker before the occurrence of utterance B. This strategy selects the right speaker in 79.8% of the cases in the 50 meetings that were annotated with adjacency pairs. The next subsection describes the machine learning framework used to significantly outperform this already quite effective baseline algorithm.

3.2 Maximum Entropy Ranking

We view the problem as an instance of statistical ranking, a general machine learning paradigm used for example in statistical parsing (Collins, 2000) and question answering (Ravichandran et al., 2003).3 The problem is to select, given a set of N possible candidates {s_1, ..., s_N} (in our case, potential A speakers), the one candidate s_i that maximizes a given conditional probability distribution. We use maximum entropy modeling (Berger et al., 1996) to directly model the conditional probability P(s_i | x_i), where each x_i in {x_1, ..., x_N} is an observation associated with the corresponding speaker s_i. x_i is represented here by only one variable for notational ease, but it possibly represents several lexical, durational, structural, and acoustic observations. Given M feature functions f_m(s_i, x_i) and M model parameters λ = (λ_1, ..., λ_M), the probability of the maximum entropy model is defined as:

P_λ(s_i | x_i) = (1 / Z_λ(x)) exp( Σ_{m=1}^{M} λ_m f_m(s_i, x_i) )

The only role of the denominator Z_λ(x) is to ensure that P_λ is a proper probability distribution. It is defined as:

Z_λ(x) = Σ_{j=1}^{N} exp( Σ_{m=1}^{M} λ_m f_m(s_j, x_j) )

To find the most probable speaker of part A, we use the following decision rule:

ŝ = argmax_{s_i} P_λ(s_i | x_i) = argmax_{s_i} exp( Σ_{m=1}^{M} λ_m f_m(s_i, x_i) )

Note that we have also attempted to model the problem as a binary classification problem where each speaker is either classified as speaker A or not, but we abandoned that approach, since it gives much worse performance. This finding is consistent with previous work (Ravichandran et al., 2003) that compares maximum entropy classification and re-ranking on a question answering task.

3 The approach is generally called re-ranking in cases where candidates are assigned an initial rank beforehand.

3.3 Features

We will now describe the features used to train the maximum entropy model mentioned previously. To rank all speakers (aside from the B speaker) and to determine how likely each one is to be the A speaker of the adjacency pair involving speaker B, we use four categories of features: structural, durational, lexical, and dialog act (DA) information. For the remainder of this section, we will interchangeably use A to designate either the potential A speaker or the most recent utterance4 of that speaker, assuming the distinction is generally unambiguous. We use B to designate either the B speaker or the current spurt for which we need to identify a corresponding A part. The feature sets are listed in Table 1. Structural features encode some helpful information regarding ordering and overlap of spurts. Note that with only the first feature listed in the table, the maximum entropy ranker matches exactly the performance of the baseline algorithm (79.8% accuracy). Regarding lexical features, we used a count-based feature selection algorithm to remove many first-word and last-word features that occur infrequently and that are typically uninformative for the task at hand. Remaining features essentially contained function words, in particular sentence-initial indicators of questions ("where", "when", and so on).
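As an illustration of the ranking rule in Section 3.2, the following Python sketch scores each candidate A speaker with a weighted sum of feature values and returns the argmax; the feature names, weights, and normalization step are illustrative assumptions, not the features or parameters of the actual system.

import math

def rank_candidates(candidates, weights):
    # candidates: list of (speaker, features) pairs, where features maps feature
    # names to real values (the observations x_i of Section 3.2).
    # weights: dict mapping feature names to model parameters lambda_m.
    def score(features):
        # Linear score sum_m lambda_m * f_m(s_i, x_i); exponentiating and
        # normalizing gives the maxent probability, but the argmax is unchanged.
        return sum(weights.get(name, 0.0) * value for name, value in features.items())
    scored = [(speaker, score(feats)) for speaker, feats in candidates]
    total = sum(math.exp(s) for _, s in scored)
    probs = {speaker: math.exp(s) / total for speaker, s in scored}  # for inspection only
    best_speaker = max(scored, key=lambda pair: pair[1])[0]
    return best_speaker, probs

# Toy example with invented feature values and weights.
candidates = [
    ("spk1", {"spurts_between": 0, "overlaps_B": 1, "shared_bigrams": 2}),
    ("spk2", {"spurts_between": 3, "overlaps_B": 0, "shared_bigrams": 0}),
]
weights = {"spurts_between": -0.8, "overlaps_B": 0.5, "shared_bigrams": 0.3}
print(rank_candidates(candidates, weights))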
Note that all features in Table 1 are "backward-looking", in the sense that they result from an analysis of context preceding B. For many of them, we built equivalent "forward-looking" features that pertain to the closest utterance of the potential speaker A that follows part B. The motivation for extracting these features is that speaker A is generally expected to react if he or she is addressed, and thus, to take the floor soon after B is produced.

3.4 Results

We used the labeled adjacency pairs of 50 meetings and selected 80% of the pairs for training. To train the maximum entropy ranking model, we used the generalized iterative scaling algorithm (Darroch and Ratcliff, 1972) as implemented in YASMET.5

4 We build features for both the entire speaker turn of A and the most recent spurt of A.
5 http://www.isi.edu/~och/YASMET.html

Structural features:
  number of speakers taking the floor between A and B
  number of spurts between A and B
  number of spurts of speaker B between A and B
  do A and B overlap?
Durational features:
  duration of A
  if A and B do not overlap: time separating A and B
  if they do overlap: duration of overlap
  seconds of overlap with any other speaker
  speech rate in A
Lexical features:
  number of words in A
  number of content words in A
  ratio of words of A (respectively B) that are also in B (respectively A)
  ratio of content words of A (respectively B) that are also in B (respectively A)
  number of n-grams present both in A and B (we built 3 features, for n ranging from 2 to 4)
  first and last word of A
  number of instances at any position of A of each cue word listed in (Hirschberg and Litman, 1994)
  does A contain the first/last name of speaker B?
Table 1. Speaker ranking features

Feature sets                    Accuracy
Baseline                        79.80%
Structural                      83.97%
Durational                      84.71%
Lexical                         75.43%
Structural and durational       87.88%
All                             89.38%
All (only backward looking)     86.99%
All (Gaussian smoothing, FS)    90.20%
Table 2. Speaker ranking accuracy

Table 2 summarizes the accuracy of our statistical ranker on the test data with different feature sets: the performance is 89.38% when using all feature sets, and reaches 90.2% after applying Gaussian smoothing and using incremental feature selection as described in (Berger et al., 1996) and implemented in the yasmetFS package.6 Note that restricting ourselves to only backward-looking features decreases the performance significantly, as we can see in Table 2. We also wanted to determine if information about dialog acts (DA) helps the ranking task. If we hypothesize that only a limited set of paired DAs (e.g. offer-accept, question-answer, and apology-downplay) can be realized as adjacency pairs, then knowing the DA category of the B part and of all potential A parts should help in finding the most meaningful dialog act tag among all potential A parts; for example, the question-accept pair is admittedly more likely to correspond to an AP than e.g. backchannel-accept. We used the DA annotation that we also had available, and used the DA tag sequence of part A and B as a feature.7 When we add the DA feature set, the accuracy reaches 91.34%, which is only slightly better than our 90.20% accuracy, which indicates that lexical, durational, and structural features capture most of the informativeness provided by DAs. This improved accuracy with DA information should of course not be considered as the actual accuracy of our system, since DA information is difficult to acquire automatically (Stolcke et al., 2000).

6 http://www.isi.edu/~ravichan/YASMET.html
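For concreteness, a few of the structural and durational features of Table 1 can be read directly off a list of time-stamped spurts, as in the sketch below; the spurt representation and field names are assumptions made for this example and do not reflect the actual corpus format.

def ranking_features(spurts, a_idx, b_idx):
    # spurts: list of dicts with 'speaker', 'start' and 'end' times in seconds,
    # ordered by start time; a_idx / b_idx index the candidate A part and the B part.
    a, b = spurts[a_idx], spurts[b_idx]
    between = spurts[a_idx + 1:b_idx]
    feats = {
        "speakers_between": len({s["speaker"] for s in between}),  # speakers taking the floor between A and B
        "spurts_between": len(between),                             # spurts between A and B
        "b_spurts_between": sum(1 for s in between if s["speaker"] == b["speaker"]),
        "overlap": int(a["end"] > b["start"]),                      # do A and B overlap?
        "duration_a": a["end"] - a["start"],                        # duration of A
    }
    if feats["overlap"]:
        feats["overlap_time"] = min(a["end"], b["end"]) - b["start"]
    else:
        feats["gap_time"] = b["start"] - a["end"]                   # time separating A and B
    return feats

spurts = [
    {"speaker": "spk1", "start": 0.0, "end": 2.1},
    {"speaker": "spk3", "start": 2.3, "end": 3.0},
    {"speaker": "spk2", "start": 2.9, "end": 4.5},
]
print(ranking_features(spurts, 0, 2))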
4 Agreements and Disagreements

4.1 Overview

This section focusses on the use of contextual information, in particular the influence of previous agreements and disagreements and detected adjacency pairs, to improve the classification of agreements and disagreements. We first define the classification problem, then describe non-contextual features, provide some empirical evidence justifying our choice of contextual features, and finally evaluate the classifier.

4.2 Agreement/Disagreement Classification

We need to first introduce some notational conventions and define the classification problem with the agreement/disagreement tagset. In our classification problem, each spurt s_i among the n spurts of a meeting must be assigned a tag t_i in {AGREE, DISAGREE, BACKCHANNEL, OTHER}. To specify the speaker of the spurt (e.g. speaker B), the notation will sometimes be augmented to incorporate speaker information, as with t_i^B, and to designate the addressee of B (e.g. listener A), we will use the notation t_i^{B→A}. For example, t_i^{B→A} = AGREE simply means that B agrees with A in the spurt of index i. This notation makes it obvious that we do not necessarily assume that agreements and disagreements are reflexive relations. We define prev_i(Y→X) as the tag of the most recent spurt before spurt i that is produced by Y and addresses X. This definition will help our multi-party analyses of agreement and disagreement behaviors.

7 The annotation of DA is particularly fine-grained, with a choice of many optional tags that can be associated with each DA. To deal with this problem, we used various scaled-down versions of the original tagset.

4.3 Local Features

Many of the local features described in this subsection are similar in spirit to the ones used in the previous work of (Hillard et al., 2003). We did not use acoustic features, since the main purpose of the current work is to explore the use of contextual information. Table 3 lists the features that were found most helpful at identifying agreements and disagreements. Regarding lexical features, we selected a list of lexical items we believed are instrumental in the expression of agreements and disagreements: agreement markers, e.g. "yes" and "right", as listed in (Cohen, 2002), general cue phrases, e.g. "but" and "alright" (Hirschberg and Litman, 1994), and adjectives with positive or negative polarity (Hatzivassiloglou and McKeown, 1997). We incorporated a set of durational features that were described in the literature as good predictors of agreements: utterance length distinguishes agreement from disagreement, the latter tending to be longer since the speaker elaborates more on the reasons and circumstances of her disagreement than for an agreement (Cohen, 2002). Duration is also a good predictor of backchannels, since they tend to be quite short. Finally, a fair amount of silence and filled pauses is sometimes an indicator of disagreement, since it is a dispreferred response in most social contexts and can be associated with hesitation (Pomerantz, 1984).

4.4 Contextual Features: An Empirical Study

We first performed several empirical analyses in order to determine to what extent contextual information helps in discriminating between agreement and disagreement. By integrating the interpretation of the pragmatic function of an utterance into a wider context, we aim to detect cases of mismatch between a correct pragmatic interpretation and the surface form of the utterance, e.g.
the case of weak or "empty" agreement, which has some properties of downright agreement (lexical items of positive polarity), but which is commonly considered to be a disagreement (Pomerantz, 1984). While the actual classification problem incorporates four classes, the BACKCHANNEL class is ignored here to make the empirical study easier to interpret. We assume in that study that accurate AP labeling is available, but for the purpose of building and testing a classifier, we use only automatically extracted adjacency pair information.

Structural features:
  is the previous/next spurt of the same speaker?
  is the previous/next spurt involving the same B speaker?
Durational features:
  duration of the spurt
  seconds of overlap with any other speaker
  seconds of silence during the spurt
  speech rate in the spurt
Lexical features:
  number of words in the spurt
  number of content words in the spurt
  perplexity of the spurt with respect to four language models, one for each class
  first and last word of the spurt
  number of instances of adjectives with positive polarity (Hatzivassiloglou and McKeown, 1997)
  idem, with adjectives of negative polarity
  number of instances in the spurt of each cue phrase and agreement/disagreement token listed in (Hirschberg and Litman, 1994; Cohen, 2002)
Table 3. Local features for agreement and disagreement classification

We tested the validity of four pragmatic assumptions:
1. previous tag dependency: a tag t_i is influenced by its predecessor t_{i-1}
2. same-interactants previous tag dependency: a tag t_i^{B→A} is influenced by prev_i(B→A), the most recent tag of the same speaker addressing the same listener; for example, it might be reasonable to assume that if speaker B disagrees with A, B is likely to disagree with A in his or her next speech addressing A.
3. reflexivity: a tag t_i^{B→A} is influenced by prev_i(A→B); the assumption is that t_i^{B→A} is influenced by the polarity (agreement or disagreement) of what A said last to B.
4. transitivity: assuming there is a speaker X for which prev_i(X→A) exists, then a tag t_i^{B→A} is influenced by prev_i(X→A) and by prev_j(B→X), where j is the index of the most recent spurt in which X addressed A before spurt i; an example of such an influence is a case where speaker B first agrees with X, then speaker X disagrees with A, from which one could possibly conclude that B is actually in disagreement with A.

Table 4 presents the results of our empirical evaluation of the first three assumptions. For comparison, the distribution of classes is the following: 18.8% are agreements, 10.6% disagreements, and 70.6% other. The dependencies empirically evaluated in the two last columns are non-local; they create dependencies between spurts separated by an arbitrarily long time span. Such long-range dependencies are often undesirable, since the influence of one spurt on the other is often weak or too difficult to capture with our model. Hence, we made a Markov assumption by limiting context to an arbitrarily chosen maximum distance; the same limit is used in this analysis subsection and for all classification results presented thereafter. The table yields some interesting results, showing quite significant variations in class distribution when it is conditioned on various types of contextual information. We can see, for example, that the proportion of agreements and disagreements (respectively 18.8% and 10.6%) changes to 13.9% and 20.9% respectively when we restrict the counts to spurts that are preceded by a DISAGREE. Similarly, that distribution changes to 21.3% and 7.3% when the previous tag is an AGREE.
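For concreteness, the conditional distributions reported in Table 4 rest on a lookup of prev_i(Y→X); a minimal sketch of that lookup and of the same-interactants tabulation is given below. The spurt record fields are illustrative assumptions, not the annotation format, and the distance limit mentioned above is omitted for brevity.

from collections import Counter, defaultdict

def prev_tag(spurts, i, speaker, addressee):
    # prev_i(speaker -> addressee): tag of the most recent spurt before index i
    # produced by `speaker` and addressed to `addressee`, or None if there is none.
    for j in range(i - 1, -1, -1):
        s = spurts[j]
        if s["speaker"] == speaker and s["addressee"] == addressee:
            return s["tag"]
    return None

def same_interactants_counts(spurts):
    # Counts behind P(t_i | prev_i(B -> A)) for the same speaker/listener pair.
    counts = defaultdict(Counter)
    for i, s in enumerate(spurts):
        context = prev_tag(spurts, i, s["speaker"], s["addressee"])
        if context is not None:
            counts[context][s["tag"]] += 1
    return counts

# Toy dialogue of three spurts (fields are illustrative only).
spurts = [
    {"speaker": "B", "addressee": "A", "tag": "DISAGREE"},
    {"speaker": "A", "addressee": "B", "tag": "OTHER"},
    {"speaker": "B", "addressee": "A", "tag": "DISAGREE"},
]
print(same_interactants_counts(spurts))

The reflexivity column of Table 4 would use the same lookup with the roles of speaker and addressee swapped, i.e. prev_i(A→B).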
The variation is even more noticeable between the unconditioned probabilities P(t_i^{B→A}) and the probabilities conditioned on the same interactants, P(t_i^{B→A} | prev_i(B→A)). In 26.1% of the cases where a given speaker B disagrees with A, he or she will continue to disagree in the next exchange involving the same speaker and the same listener. Similarly with the same probability distribution, a tendency to agree is confirmed in 25% of the cases. The results in the last column are quite different from the two preceding ones. While agreements in response to agreements (.175 in the last column of Table 4) are slightly less probable than agreements without conditioning on any previous tag (.188), the probability of an agreement produced in response to a disagreement is quite high (with 23.4%), even higher than the proportion of agreements in the entire data (18.8%). This last result would arguably be quite different with more quarrelsome meeting participants. Table 5 represents results concerning the fourth pragmatic assumption. While none of the results characterize any strong conditioning of t_i^{B→A} by the two contextual tags of the transitivity assumption, we can nevertheless notice some interesting phenomena. For example, there is a tendency for agreements to be transitive, i.e. if X agrees with A and B agrees with X within a limited segment of speech, then agreement between B and A is confirmed in 22.5% of the cases, while the probability of the agreement class is only 18.8%. The only slightly surprising result appears in the last column of the table, from which we cannot conclude that disagreement with a disagreement is equivalent to agreement. This might be explained by the fact that these sequences of agreement and disagreement do not necessarily concern the same propositional content. The probability distributions presented here are admittedly dependent on the meeting genre and particularly speaker personalities. Nonetheless, we believe this model can as well be used to capture salient interactional patterns specific to meetings with different social dynamics. We will next discuss our choice of a statistical model to classify sequence data that can deal with non-local label dependencies, such as the ones tested in our empirical study.

4.5 Sequence Classification with Maximum Entropy Models

Extensive research has targeted the problem of labeling sequence information to solve a variety of problems in natural language processing. Hidden Markov models (HMM) are widely used and considerably well understood models for sequence labeling. Their drawback is that, as most generative models, they are generally computed to maximize the joint likelihood of the training data. In order to define a probability distribution over the sequences of observation and labels, it is necessary to enumerate all possible sequences of observations. Such enumeration is generally prohibitive when the model incorporates many interacting features and long-range dependencies (the reader can find a discussion of the problem in (McCallum et al., 2000)). Conditional models address these concerns. Conditional Markov models (CMM) (Ratnaparkhi, 1996; Klein and Manning, 2002) have been successfully used in sequence labeling tasks incorporating rich feature sets. In a left-to-right CMM as shown in Figure 1(a), the probability of a sequence of L tags t = (t_1, ..., t_L) is decomposed as:

P(t | x) = Π_{i=1}^{L} P(t_i | t_{i-1}, x_i)

where x is the vector of observations and each i is the index of a spurt. The probability distribution P(t_i | t_{i-1}, x_i) associated with each state of the Markov chain only depends on the preceding tag t_{i-1} and the local observation x_i.
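Read as code, the left-to-right CMM above is just a product of local conditional models, as in this small sketch; the toy local distribution is invented for illustration and stands in for a maximum entropy distribution trained on real features.

def cmm_sequence_prob(tags, local_prob):
    # P(t | x) = prod_i P(t_i | t_{i-1}, x_i) for a left-to-right CMM.
    # local_prob(prev_tag, i, tag) returns the local conditional probability;
    # the observation x_i is folded into the position index i here.
    prob, prev = 1.0, None
    for i, tag in enumerate(tags):
        prob *= local_prob(prev, i, tag)
        prev = tag
    return prob

def toy_local_prob(prev, i, tag):
    # Invented distribution: agreement is a bit more likely right after an agreement.
    after_agree = {"AGREE": 0.30, "DISAGREE": 0.12, "OTHER": 0.58}
    default = {"AGREE": 0.25, "DISAGREE": 0.15, "OTHER": 0.60}
    return (after_agree if prev == "AGREE" else default)[tag]

print(cmm_sequence_prob(["OTHER", "AGREE", "AGREE"], toy_local_prob))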
However, in order to incorporate more than one label dependency and, in particular, to take into account the four pragmatic contextual dependencies discussed in the previous subsection, we must augment the structure of our model to obtain a more general one. Such a model is shown in Figure 1(b), a Bayesian network model that is well-understood and that has precisely defined semantics.

Figure 1. (a) Left-to-right CMM. (b) More complex Bayesian network. Assuming for example that the third label depends directly on the first label as well as the second, the probability model becomes P(t_1 | x_1) P(t_2 | t_1, x_2) P(t_3 | t_1, t_2, x_3). This is a simplifying example; in practice, each label is dependent on a fixed number of other labels.

To this Bayesian network representation, we apply maximum entropy modeling to define a probability distribution at each node (t_i) dependent on the observation variable x_i and the five contextual tags used in the four pragmatic dependencies.8 For notational simplicity, the contextual tags representing these pragmatic dependencies are represented here as a vector c_i (containing t_{i-1}, prev_i(B→A), and so on). Given M feature functions f_m(c_i, x_i, t_i) (both local and contextual, like previous tag features) and M model parameters λ = (λ_1, ..., λ_M), the probability of the model is defined as:

P_λ(t_i | c_i, x_i) = (1 / Z_λ(c_i, x_i)) exp( Σ_{m=1}^{M} λ_m f_m(c_i, x_i, t_i) )

Again, the only role of the denominator Z_λ(c_i, x_i) is to ensure that P_λ sums to 1, and it need not be computed when searching for the most probable tags. Note that in our case, the structure of the Bayesian network is known and need not be inferred, since AP identification is performed before the actual agreement and disagreement classification. Since tag sequences are known during training, the inference of a model for sequence labels is no more difficult than inferring a model in a non-sequential case. We compute the most probable sequence by performing a left-to-right decoding using a beam search. The algorithm is exactly the same as the one described in (Ratnaparkhi, 1996) to find the most probable part-of-speech sequence. We used a large beam of size 100, which is not computationally prohibitive, since the tagset contains only four elements. Note however that this algorithm can lead to search errors. An alternative would be to use a variant of the Viterbi algorithm, which was successfully used in (McCallum et al., 2000) to decode the most probable sequence in a CMM.

8 The transitivity dependency is conditioned on two tags, while all others on only one. These five contextual tags are defaulted to OTHER when dependency spans exceed the context limit introduced in Section 4.4.

P(t_i | context tag), for the three dependency types:
t_i        context     previous tag   same-interactants   reflexivity
AGREE      AGREE       .213           .250                .175
OTHER      AGREE       .713           .643                .737
DISAGREE   AGREE       .073           .107                .088
AGREE      OTHER       .187           .115                .177
OTHER      OTHER       .714           .784                .710
DISAGREE   OTHER       .098           .100                .113
AGREE      DISAGREE    .139           .087                .234
OTHER      DISAGREE    .651           .652                .638
DISAGREE   DISAGREE    .209           .261                .128
Table 4. Contextual dependencies (previous tag, same-interactants previous tag, and reflexivity)

P(t_i^{B→A} | prev_i(X→A), prev_j(B→X)), where j indexes the spurt in which X last addressed A before spurt i:
                        prev_i(X→A):   AGREE    AGREE      DISAGREE   DISAGREE
                        prev_j(B→X):   AGREE    DISAGREE   AGREE      DISAGREE
t_i^{B→A} = AGREE                      .225     .147       .131       .152
t_i^{B→A} = OTHER                      .658     .677       .683       .668
t_i^{B→A} = DISAGREE                   .117     .177       .186       .180
Table 5. Contextual dependencies (transitivity)

4.6 Results

We had 8135 spurts available for training and testing, and performed two sets of experiments to evaluate the performance of our system.
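As a sketch of the decoding step used in these experiments, the left-to-right beam search over the four-tag set might look as follows; the scoring function here is an illustrative stand-in for the maximum entropy distribution over contextual and local features, not the actual model.

import math

TAGS = ["AGREE", "DISAGREE", "BACKCHANNEL", "OTHER"]

def beam_decode(n_spurts, local_log_prob, beam_size=100):
    # Left-to-right beam search: keep the beam_size best partial tag sequences.
    # local_log_prob(prefix, i, tag) should return log P(t_i = tag | context),
    # where the contextual tags (previous tag, prev_i(B->A), ...) can be derived
    # from the already-decoded prefix.
    beam = [([], 0.0)]  # (partial tag sequence, cumulative log-probability)
    for i in range(n_spurts):
        expanded = []
        for prefix, logp in beam:
            for tag in TAGS:
                expanded.append((prefix + [tag], logp + local_log_prob(prefix, i, tag)))
        expanded.sort(key=lambda item: item[1], reverse=True)
        beam = expanded[:beam_size]
    return beam[0][0]

def toy_log_prob(prefix, i, tag):
    # Invented scores: OTHER is most likely overall, and the previous tag is mildly sticky.
    base = {"AGREE": 0.2, "DISAGREE": 0.1, "BACKCHANNEL": 0.2, "OTHER": 0.5}[tag]
    if prefix and prefix[-1] == tag:
        base *= 1.2
    return math.log(base)

print(beam_decode(4, toy_log_prob))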
The tools used to perform the training are the same as those described in section 3.4. In the first set of experiments, we reproduced the experimental setting of (Hillard et al., 2003), a three-way classification (BACKCHANNEL and OTHER are merged) using hand-labeled data of a single meeting as a test set and the remaining data as training material; for this experiment, we used the same training set as (Hillard et al., 2003). Performance is reported in Table 6.

Feature sets                      Accuracy
(Hillard et al., 2003)            82%
Lexical                           84.95%
Structural and durational         71.23%
All (no label dependencies)       85.62%
All (with label dependencies)     86.92%
Table 6. 3-way classification accuracy

In the second set of experiments, we aimed at reducing the expected variance of our experimental results and performed N-fold cross-validation in a four-way classification task, at each step retaining the hand-labeled data of a meeting for testing and the rest of the data for training. Table 7 summarizes the performance of our classifier with the different feature sets in this classification task, distinguishing the case where the four label-dependency pragmatic features are available during decoding from the case where they are not.

Feature sets                 Label dep.     No label dep.
Lexical                      83.54%         82.62%
Structural, durational       62.10%         58.86%
All                          84.07%         83.11%
Table 7. 4-way classification accuracy

First, the analysis of our results shows that with our three local feature sets only, we obtain substantially better results than (Hillard et al., 2003). This might be due to some additional features the latter work didn't exploit (e.g. structural features and adjective polarity), and to the fact that the learning algorithm used in our experiments might be more accurate than decision trees in the given task. Second, the table corroborates the findings of (Hillard et al., 2003) that lexical information provides the most helpful local features. Finally, we observe that by incorporating label-dependency features representing pragmatic influences, we further improve the performance (about 1% in Table 7). This seems to indicate that modeling label dependencies in our classification problem is useful.

5 Conclusion

We have shown how identification of adjacency pairs can help in designing features representing pragmatic dependencies between agreement and disagreement labels. These features are shown to be informative and to help the classification task, yielding a substantial improvement (1.3% to reach an 86.9% accuracy in three-way classification). We also believe that the present work may be useful in other computational pragmatic research focusing on multi-party dialogs, such as dialog act (DA) classification. Most previous work in that area is limited to interaction between two speakers (e.g. Switchboard, (Stolcke et al., 2000)). When more than two speakers are involved, the question of who is the addressee of an utterance is crucial, since it generally determines what DAs are relevant after the addressee's last utterance. So, knowledge about adjacency pairs is likely to help DA classification. In future work, we plan to extend our inference process to treat speaker ranking (i.e. AP identification) and agreement/disagreement classification as a single, joint inference problem. Contextual information about agreements and disagreements can also provide useful cues regarding who is the addressee of a given utterance.
We also plan to incorporate acoustic features to increase the robustness of our procedure in the case where only speech recognition output is available. Acknowledgments We are grateful to Mari Ostendorf and Dustin Hillard for providing us with their agreement and disagreement labeled data. This material is based on research supported by the National Science Foundation under Grant No. IIS-012196. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. References A. Berger, S. Della Pietra, and V Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–72. J. Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological measurements, 20:37–46. S. Cohen. 2002. A computerized scale for monitoring levels of agreement during a conversation. In Proc. of the 26th Penn Linguistics Colloquium. M. Collins. 2000. Discriminative reranking for natural language parsing. In Proc. 17th International Conf. on Machine Learning, pages 175– 182. J. N. Darroch and D. Ratcliff. 1972. Generalized iterative scaling for log-linear models. Annals of Mathematical Statistics, 43:1470–1480. R. Dhillon, S. Bhagat, H. Carvey, and E. Shriberg. 2004. Meeting recorder project: Dialog act labeling guide. Technical Report TR-04-002, ICSI. V. Hatzivassiloglou and K. McKeown. 1997. Predicting the semantic orientation of adjectives. In Proc. of ACL. D. Hillard, M. Ostendorf, and E Shriberg. 2003. Detection of agreement vs. disagreement in meetings: training with unlabeled data. In Proc. of HLT/NAACL. J. Hirschberg and D. Litman. 1994. Empirical studies on the disambiguation of cue phrases. Computational Linguistics, 19(3):501–530. A. Janin, D. Baron, J. Edwards, D. Ellis, D. Gelbart, N. Morgan, B. Peskin, T. Pfau, E. Shriberg, A. Stolcke, and C. Wooters. 2003. The ICSI meeting corpus. In Proc. of ICASSP-03, Hong Kong. D. Klein and C. D. Manning. 2002. Conditional structure versus conditional estimation in NLP models. Technical report. S. Levinson. 1983. Pragmatics. Cambridge University Press. A. McCallum, D. Freitag, and F. Pereira. 2000. Maximum entropy markov models for information extraction and segmentation. In Proc. of ICML. A. Pomerantz. 1984. Agreeing and disagreeing with assessments: some features of preferred/dispreferred turn shapes. In J.M. Atkinson and J.C. Heritage, editors, Structures of Social Action, pages 57–101. A. Ratnaparkhi. 1996. A maximum entropy partof-speech tagger. In Proc. of EMNLP. D. Ravichandran, E. Hovy, and F. J. Och. 2003. Statistical QA - classifier vs re-ranker: What’s the difference? In Proc. of the ACL Workshop on Multilingual Summarization and Question Answering. E. A. Schegloff and H Sacks. 1973. Opening up closings. Semiotica, 7-4:289–327. E. Shriberg, R. Dhillon, S. Bhagat, J. Ang, and H. Carvey. 2004. The ICSI meeting recorder dialog act (MRDA) corpus. In SIGdial Workshop on Discourse and Dialogue, pages 97–100. A. Stolcke, K. Ries, N. Coccaro, E. Shriberg, R. Bates, D. Jurafsky, P. Taylor, R. Martin, C. Van Ess-Dykema, and M. Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339–373.
Using Conditional Random Fields to Predict Pitch Accents in Conversational Speech Michelle L. Gregory Linguistics Department University at Buffalo Buffalo, NY 14260 [email protected] Yasemin Altun Department of Computer Science Brown University Providence, RI 02912 [email protected] Abstract The detection of prosodic characteristics is an important aspect of both speech synthesis and speech recognition. Correct placement of pitch accents aids in more natural sounding speech, while automatic detection of accents can contribute to better wordlevel recognition and better textual understanding. In this paper we investigate probabilistic, contextual, and phonological factors that influence pitch accent placement in natural, conversational speech in a sequence labeling setting. We introduce Conditional Random Fields (CRFs) to pitch accent prediction task in order to incorporate these factors efficiently in a sequence model. We demonstrate the usefulness and the incremental effect of these factors in a sequence model by performing experiments on hand labeled data from the Switchboard Corpus. Our model outperforms the baseline and previous models of pitch accent prediction on the Switchboard Corpus. 1 Introduction The suprasegmental features of speech relay critical information in conversation. Yet, one of the major roadblocks to natural sounding speech synthesis has been the identification and implementation of prosodic characteristics. The difficulty with this task lies in the fact that prosodic cues are never absolute; they are relative to individual speakers, gender, dialect, discourse context, local context, phonological environment, and many other factors. This is especially true of pitch accent, the acoustic cues that make one word more prominent than others in an utterance. For example, a word with a fundamental frequency (f0) of 120 Hz would likely be quite prominent in a male speaker, but not for a typical female speaker. Likewise, the accent on the utterance ”Jon’s leaving.” is critical in determining whether it is the answer to the question ”Who is leaving?” (”JON’s leaving.”) or ”What is Jon doing?” (”Jon’s LEAVING.”). Accurate pitch accent prediction lies in the successful combination of as many of the contextual variables as possible. Syntactic information such as part of speech has proven to be a successful predictor of accentuation (Hirschberg, 1993; Pan and Hirschberg, 2001). In general, function words are not accented, while content words are. Various measures of a word’s informativeness, such as the information content (IC) of a word (Pan and McKeown, 1999) and its collocational strength in a given context (Pan and Hirschberg, 2001) have also proven to be useful models of pitch accent. However, in open topic conversational speech, accent is very unpredictable. Part of speech and the informativeness of a word do not capture all aspects of accentuation, as we see in this example taken from Switchboard, where a function word gets accented (accented words are in uppercase): I, I have STRONG OBJECTIONS to THAT. Accent is also influenced by aspects of rhythm and timing. The length of words, in both number of phones and normalized duration, affect its likelihood of being accented. Additionally, whether the immediately surrounding words bear pitch accent also affect the likelihood of accentuation. In other words, a word that might typically be accented may be unaccented because the surrounding words also bear pitch accent. Phrase boundaries seem to play a role in accentuation as well. 
The first word of intonational phrases (IP) is less likely to be accented while the last word of an IP tends be accented. In short, accented words within the same IP are not independent of each other. Previous work on pitch accent prediction, however, neglected the dependency between labels. Different machine learning techniques, such as decision trees (Hirschberg, 1993), rule induction systems (Pan and McKeown, 1999), bagging (Sun, 2002), boosting (Sun, 2002) have been used in a scenario where the accent of each word is predicted independently. One exception to this line of research is the use of Hidden Markov Models (HMM) for pitch accent prediction (Pan and McKeown, 1999; Conkie et al., 1999). Pan and McKeown (1999) demonstrate the effectiveness of a sequence model over a rule induction system, RIPPER, that treats each label independently by showing that HMMs outperform RIPPER when the same variables are used. Until recently, HMMs were the predominant formalism to model label sequences. However, they have two major shortcomings. They are trained non-discriminatively using maximum likelihood estimation to model the joint probability of the observation and label sequences. Also, they require questionable independence assumptions to achieve efficient inference and learning. Therefore, variables used in Hidden Markov models of pitch accent prediction have been very limited, e.g. part of speech and frequency (Pan and McKeown, 1999). Discriminative learning methods, such as Maximum Entropy Markov Models (McCallum et al., 2000), Projection Based Markov Models (Punyakanok and Roth, 2000), Conditional Random Fields (Lafferty et al., 2001), Sequence AdaBoost (Altun et al., 2003a), Sequence Perceptron (Collins, 2002), Hidden Markov Support Vector Machines (Altun et al., 2003b) and Maximum-Margin Markov Networks (Taskar et al., 2004), overcome the limitations of HMMs. Among these methods, CRFs is the most common technique used in NLP and has been successfully applied to Part-of-Speech Tagging (Lafferty et al., 2001), Named-Entity Recognition (Collins, 2002) and shallow parsing (Sha and Pereira, 2003; McCallum, 2003). The goal of this study is to better identify which words in a string of text will bear pitch accent. Our contribution is two-fold: employing new predictors and utilizing a discriminative model. We combine the advantages of probabilistic, syntactic, and phonological predictors with the advantages of modeling pitch accent in a sequence labeling setting using CRFs (Lafferty et al., 2001). The rest of the paper is organized as follows: In Section 2, we introduce CRFs. Then, we describe our corpus and the variables in Section 3 and Section 4. We present the experimental setup and report results in Section 5. Finally, we discuss our results (Section 6) and conclude (Section 7). 2 Conditional Random Fields CRFs can be considered as a generalization of logistic regression to label sequences. They define a conditional probability distribution of a label sequence y given an observation sequence x. In this paper, x = (x1, x2, . . . , xn) denotes a sentence of length n and y = (y1, y2, . . . , yn) denotes the label sequence corresponding to x. In pitch accent prediction, xt is a word and yt is a binary label denoting whether xt is accented or not. CRFs specify a linear discriminative function F parameterized by Λ over a feature representation of the observation and label sequence Ψ(x, y). 
The model is assumed to be stationary, thus the feature representation can be partitioned with respect to positions t in the sequence and linearly combined with respect to the importance of each feature ψ_k, denoted by λ_k. Then the discriminative function can be stated as in Equation 1:

F(x, y; Λ) = Σ_t ⟨Λ, Ψ_t(x, y)⟩    (1)

Then, the conditional probability is given by

p(y|x; Λ) = (1 / Z(x, Λ)) exp F(x, y; Λ)    (2)

where Z(x, Λ) = Σ_ȳ exp F(x, ȳ; Λ) is a normalization constant which is computed by summing over all possible label sequences ȳ of the observation sequence x. We extract two types of features from a sequence pair:

1. Current label and information about the observation sequence, such as the part-of-speech tag of a word that is within a window centered at the word currently labeled, e.g. Is the current word pitch accented and the part-of-speech tag of the previous word = Noun?
2. Current label and the neighbors of that label, i.e. features that capture the inter-label dependencies, e.g. Is the current word pitch accented and the previous word not accented?

Since CRFs condition on the observation sequence, they can efficiently employ feature representations that incorporate overlapping features, i.e. multiple interacting features or long-range dependencies of the observations, as opposed to HMMs, which generate observation sequences. In this paper, we limit ourselves to 1-order Markov model features to encode inter-label dependencies. The information used to encode the observation-label dependencies is explained in detail in Section 4. In CRFs, the objective function is the log-loss of the model with Λ parameters with respect to a training set D. This function is defined as the negative sum of the log conditional probabilities of each training label sequence yi, given the observation sequence xi, where D ≡ {(xi, yi) : i = 1, . . . , m}. CRFs are known to overfit, especially with noisy data, if not regularized. To overcome this problem, we penalize the objective function by adding a Gaussian prior (a term proportional to the squared norm ||Λ||^2) as suggested in (Johnson et al., 1999). Then the loss function is given as:

L(Λ; D) = − Σ_{i=1}^{m} log p(yi|xi; Λ) + (c/2) ||Λ||^2
        = Σ_{i=1}^{m} [ −F(xi, yi; Λ) + log Z(xi, Λ) ] + (c/2) ||Λ||^2    (3)

where c is a constant. Lafferty et al. (2001) proposed a modification of improved iterative scaling for parameter estimation in CRFs. However, gradient-based methods have often been found to be more efficient for minimizing Equation 3 (Minka, 2001; Sha and Pereira, 2003). In this paper, we use the conjugate gradient descent method to optimize the above objective function. The gradients are computed as in Equation 4:

∇_Λ L = Σ_{i=1}^{m} Σ_t ( E_p[Ψ_t(xi, y)] − Ψ_t(xi, yi) ) + cΛ    (4)

where the expectation is with respect to all possible label sequences of the observation sequence xi and can be computed using the forward-backward algorithm. Given an observation sequence x, the best label sequence is given by:

ŷ = argmax_y F(x, y; Λ̂)    (5)

where Λ̂ is the parameter vector that minimizes L(Λ; D). The best label sequence can be identified by performing the Viterbi algorithm.

3 Corpus

The data for this study were taken from the Switchboard Corpus (Godfrey et al., 1992), which consists of 2430 telephone conversations between adult speakers (approximately 2.4 million words). Participants were both male and female and represented all major dialects of American English.
We used a portion of this corpus that was phonetically hand-transcribed (Greenberg et al., 1996) and segmented into speech boundaries at turn boundaries or pauses of more than 500 ms on both sides. Fragments contained seven words on average. Additionally, each word was coded for probabilistic and contextual information, such as word frequency, conditional probabilities, the rate of speech, and the canonical pronunciation (Fosler-Lussier and Morgan, 1999). The dataset used in all analysis in this study consists of only the first hour of the database, comprised of 1,824 utterances with 13,190 words. These utterances were hand coded for pitch accent and intonational phrase breaks.

3.1 Pitch Accent Coding

The utterances were hand labeled for accents and boundaries according to the Tilt Intonational Model (Taylor, 2000). This model is characterized by a series of intonational events: accents and boundaries. Labelers were instructed to use duration, amplitude, pausing information, and changes in f0 to identify events. In general, labelers followed the basic conventions of EToBI for coding (Taylor, 2000). However, the Tilt coding scheme was simplified. Accents were coded as either major or minor (and some rare level accents) and breaks were either rising or falling. Agreement for the Tilt coding was reported at 86%. The CU coding also used a simplified EToBI coding scheme, with accent types conflated and only major breaks coded. Accent and break coding pair-wise agreement was between 85-95% between coders, with a kappa κ of 71%-74%, where κ corrects the observed agreement for the agreement expected by chance.

4 Variables

The label we were predicting was a binary distinction of accented or not. The variables we used for prediction fall into three main categories: syntactic variables, probabilistic variables, which include word frequency and collocation measures, and phonological variables, which capture aspects of rhythm and timing that affect accentuation.

4.1 Syntactic variables

The only syntactic category we used was a four-way classification for hand-generated part of speech (POS): Function, Noun, Verb, Other, where Other includes all adjectives and adverbs.1 Table 1 gives the percentage of accented and unaccented items by POS.

1 We also tested a categorization of 14 distinct part of speech classes, but the results did not improve, so we only report on the four-way classification.

POS        Accented   Unaccented
Function   21%        79%
Verb       59%        41%
Noun       30%        70%
Other      49%        51%
Table 1: Percentage of accented and unaccented items by POS.

Variable     Definition         Example
Unigram      log p(wi)          and, I
Bigram       log p(wi|wi-1)     roughing it
Rev Bigram   log p(wi|wi+1)     rid of
Joint        log p(wi-1, wi)    and I
Rev Joint    log p(wi, wi+1)    and I
Table 2: Definition of probabilistic variables.

4.2 Probabilistic variables

Following a line of research that incorporates the information content of a word as well as collocation measures (Pan and McKeown, 1999; Pan and Hirschberg, 2001), we have included a number of probabilistic variables. The probabilistic variables we used were the unigram frequency, the predictability of a word given the preceding word (bigram), the predictability of a word given the following word (reverse bigram), the joint probability of a word with the preceding word (joint), and the joint probability of a word with the following word (reverse joint). Table 2 provides the definition for these, as well as high-probability examples from the corpus (the emphasized word being the current target). Note that all probabilistic variables were in log scale.
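As an illustration of the definitions in Table 2, the sketch below computes the five log-scale variables for each word position from raw corpus counts; the unsmoothed estimates, the denominator choices, and the variable names are assumptions made for the example, not the exact estimation procedure used for the full corpus.

import math
from collections import Counter

def probabilistic_variables(corpus):
    # corpus: list of utterances, each a list of words.
    unigrams, bigrams = Counter(), Counter()
    n_tokens, n_bigrams = 0, 0
    for utt in corpus:
        n_tokens += len(utt)
        n_bigrams += max(len(utt) - 1, 0)
        unigrams.update(utt)
        bigrams.update(zip(utt, utt[1:]))

    def logp(count, denom):
        return math.log(count / denom) if count else None  # None when unseen or undefined

    rows = []
    for utt in corpus:
        for i, w in enumerate(utt):
            prev_w = utt[i - 1] if i > 0 else None
            next_w = utt[i + 1] if i + 1 < len(utt) else None
            rows.append({
                "word": w,
                "unigram": logp(unigrams[w], n_tokens),                                          # log p(w_i)
                "bigram": logp(bigrams[(prev_w, w)], unigrams[prev_w]) if prev_w else None,      # log p(w_i | w_{i-1})
                "rev_bigram": logp(bigrams[(w, next_w)], unigrams[next_w]) if next_w else None,  # log p(w_i | w_{i+1})
                "joint": logp(bigrams[(prev_w, w)], n_bigrams) if prev_w else None,              # log p(w_{i-1}, w_i)
                "rev_joint": logp(bigrams[(w, next_w)], n_bigrams) if next_w else None,          # log p(w_i, w_{i+1})
            })
    return rows

corpus = [["i", "have", "strong", "objections", "to", "that"], ["i", "have", "to", "go"]]
print(probabilistic_variables(corpus)[1])  # values for "have" in the first utterance

Binning each variable into five equal categories, as described below, then turns these real values into the categorical features the CRF implementation expects.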
The values for these probabilities were obtained using the entire 2.4 million words of SWBD.2 Table 3 presents the Spearman's rank correlation coefficient between the probabilistic measures and accent (Conover, 1980). These values indicate the strong correlation of accents to the probabilistic variables. As the probability increases, the chance of an accent decreases. Note that all values are significant at the p < .001 level.

2 Our current implementation of CRF only takes categorical variables, thus for the experiments, all probabilistic variables were binned into 5 equal categories. We also tried more bins and produced similar results, so we only report on the 5-binned categories. We computed correlations between pitch accent and the original 5 variables as well as the binned variables and they are very similar.

Variables         Spearman's ρ
Unigram           -.451
Bigram            -.309
Reverse Bigram    -.383
Joint             -.207
Reverse joint     -.265
Table 3: Spearman's correlation values for the probabilistic measures.

We also created a combined part of speech and unigram frequency variable in order to have a variable that corresponds to the variable used in (Pan and McKeown, 1999).

4.3 Phonological variables

The last category of predictors, phonological variables, concerns aspects of rhythm and timing of an utterance. We have two main sources for these variables: those that can be computed solely from a string of text (textual), and those that require some sort of acoustic information (acoustic). Sun (2002) demonstrated that the number of phones in a syllable, the number of syllables in a word, and the position of a word in a sentence are useful predictors of which syllables get accented. While Sun was concerned with predicting accented syllables, some of the same variables apply to word-level targets as well. For our textual phonological features, we included the number of syllables in a word and the number of phones (both in citation form as well as transcribed form). Instead of position in a sentence, we used the position of the word in an utterance, since the fragments do not necessarily correspond to sentences in the database we used. We also made use of the utterance length. Below is the list of our textual features:
• Number of canonical syllables
• Number of canonical phones
• Number of transcribed phones
• The length of the utterance in number of words
• The position of the word in the utterance
The main purpose of this study is to better predict which words in a string of text receive accent. So far, all of our predictors are ones easily computed from a string of text. However, we have included a few variables that affect the likelihood of a word being accented that require some acoustic data. To the best of our knowledge, these features have not been used in acoustic models of pitch accent prediction. These features include the duration of the word, speech rate, and following intonational phrase boundaries.

Feature               χ2      Sig
canonical syllables   1636    p < .001
canonical phones      2430    p < .001
transcribed phones    2741    p < .001
utt length            80      p < .005
utt position          295     p < .001
duration              3073    p < .001
speech rate           101     p < .001
following pause       27      p < .001
foll filled pause     328     p < .001
foll IP boundary      1047    p < .001
Table 4: Significance of phonological features on pitch accent prediction.

Given the nature of the SWBD corpus, there are many disfluencies. Thus, we also
Below is the list of our acoustic features: • Log of duration in milliseconds normalized by number of canonical phones binned into 5 equal categories. • Log Speech Rate; calculated on strings of speech bounded on either side by pauses of 300 ms or greater and binned into 5 equal categories. • Following pause; a binary distinction of whether a word is followed by a period of silence or not. • Following filled pause; a binary distinction of whether a word was followed by a filled pause (uh, um) or not. • Following IP boundary Table 4 indicates that each of these features significantly affect the presence of pitch accent. While certainly all of these variables are not independent of on another, using CRFs, one can incorporate all of these variables into the pitch accent prediction model with the advantage of making use of the dependencies among the labels. 4.4 Surrounding Information Sun (2002) has shown that the values immediately preceding and following the target are good predictors for the value of the target. We also experimented with the effects of the surrounding values by varying the window size of the observation-label feature extraction described in Section 2. When the window size is 1, only values of the word that is labelled are incorporated in the model. When the window size is 3, the values of the previous and the following words as well as the current word are incorporated in the model. Window size 5 captures the values of the current word, the two previous words and the two following words. 5 Experiments and Results All experiments were run using 10 fold crossvalidation. We used Viterbi decoding to find the most likely sequence and report the performance in terms of label accuracy. We ran all experiments with varying window sizes (w ∈{1, 3, 5}). The baseline which simply assigns the most common label, unaccented, achieves 60.53 ± 1.50%. Previous research has demonstrated that part of speech and frequency, or a combination of these two, are very reliable predictors of pitch accent. Thus, to test the worthiness of using a CRF model, the first experiment we ran was a comparison of an HMM to a CRF using just the combination of part of speech and unigram. The HMM score (referred as HMM:POS, Unigram in Table 5) was 68.62 ± 1.78, while the CRF model (referred as CRF:POS, Unigram in Table 5) performed significantly better at 72.56 ± 1.86. Note that Pan and McKeown (1999) reported 74% accuracy with their HMM model. The difference is due to the different corpora used in each case. While they also used spontaneous speech, it was a limited domain in the sense that it was speech from discharge orders from doctors at one medical facility. The SWDB corpus is open domain conversational speech. In order to capture some aspects of the IC and collocational strength of a word, in the second experiment we ran part of speech plus all of the probabilistic variables (referred as CRF:POS, Prob in Table 5). The model accuracy was 73.94%, thus improved over the model using POS and unigram values by 1.38%. In the third experiment we wanted to know if TTS applications that made use of purely textual input could be aided by the addition of timing and rhythm variables that can be gleaned from a text string. Thus, we included the textual features described in Section 4.3 in addition to the probabilistic and syntactic features (referred as CRF:POS, Prob, Txt in Table 5). The accuracy was improved by 1.73%. 
For the final experiment, we added the acoustic variables, resulting in the use of all the variables described in Section 4 (referred as CRF:All in Table 5). We get about a 0.5% increase in accuracy, to 76.1%, with a window of size w = 1. Using larger windows resulted in minor increases in the performance of the model, as summarized in Table 5. Our best accuracy was 76.36%, using all features in a w = 5 window size.

Model: Variables       w = 1    w = 3    w = 5
Baseline               60.53
HMM: POS, Unigram      68.62
CRF: POS, Unigram      72.56
CRF: POS, Prob         73.94    74.19    74.51
CRF: POS, Prob, Txt    75.67    75.74    75.89
CRF: All               76.1     76.23    76.36
Table 5: Test accuracy of pitch accent prediction on SWDB using various variables and window sizes.

6 Discussion

Pitch accent prediction is a difficult task, and the number of different speakers, topics, utterance fragments, and disfluent productions in the SWBD corpus only increases this difficulty. The fact that 21% of the function words are accented indicates that models of pitch accent that mostly rely on part of speech and unigram frequency would not fare well with this corpus. We have presented a model of pitch accent that captures some of the other factors that influence accentuation. In addition to adding more probabilistic variables and phonological factors, we have used a sequence model that captures the interdependence of accents within a phrase. Given the distinct natures of corpora used, it is difficult to compare these results with earlier models. However, in experiment 1 (HMM: POS, Unigram vs CRF: POS, Unigram) we have shown that a CRF model achieves a better performance than an HMM model using the same features. However, the real strength of CRFs comes from their ability to incorporate different sources of information efficiently, as is demonstrated in our experiments. We did not test directly the probabilistic measures (or collocation measures) that have been used before for this task, namely information content (IC) (Pan and McKeown, 1999) and mutual information (Pan and Hirschberg, 2001). However, the measures we have used encompass similar information. For example, IC is only the additive inverse of our unigram measure:

IC(w) = −log p(w)    (6)

Rather than using mutual information as a measure of collocational strength, we used unigram, bigram and joint probabilities. A model that includes both joint probability and the unigram probabilities of wi and wi−1 is comparable to one that includes mutual information. Just as the likelihood of a word being accented is influenced by a following silence or IP boundary, the collocational strength of the target word with the following word (captured by reverse bigram and reverse joint) is also a factor. With the use of POS, unigram, and all bigram and joint probabilities, we have shown that (a) CRFs outperform HMMs, and (b) our probabilistic variables increase accuracy over a model that includes POS + unigram (73.94% compared to 72.56%). For tasks in which pitch accent is predicted solely based on a string of text, without the addition of acoustic data, we have shown that adding aspects of rhythm and timing aids in the identification of accent targets. The number of words in an utterance, where in the utterance a word falls, and how long the word is in both number of syllables and number of phones all affect accentuation. The addition of these variables improved the model by nearly 2%. These results suggest that accent prediction models that only make use of textual information could be improved with the addition of these variables.
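The claim above that joint and unigram probabilities together carry the information in a mutual information feature follows from the identity for pointwise mutual information, written here in the paper's log-scale variables (a sketch under the assumption that pointwise mutual information is the intended collocation measure):

import math

def pointwise_mi(log_joint, log_unigram_prev, log_unigram_cur):
    # PMI(w_{i-1}, w_i) = log p(w_{i-1}, w_i) - log p(w_{i-1}) - log p(w_i),
    # i.e. the Joint variable minus the two Unigram variables of Table 2.
    return log_joint - log_unigram_prev - log_unigram_cur

print(pointwise_mi(math.log(0.001), math.log(0.01), math.log(0.02)))  # equals log 5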
While not trying to provide a complete model of accentuation from acoustic information, in this study we tested a few acoustic variables that have not yet been tested. The nature of the SWBD corpus allowed us to investigate the role of disfluencies and widely variable durations and speech rate on accentuation. Especially speech rate, duration and surrounding silence are good predictors of pitch accent. The addition of these predictors only slightly improved the model (about .5%). Acoustic features are very sensitive to individual speakers. In the corpus, there are many different speakers of varying ages and dialects. These variables might become more useful if one controls for individual speaker differences. To really test the usefulness of these variables, one would have to combine them with acoustic features that have been demonstrated to be good predictors of pitch accent (Sun, 2002; Conkie et al., 1999; Wightman et al., 2000). 7 Conclusion We used CRFs with new measures of collocational strength and new phonological factors that capture aspects of rhythm and timing to model pitch accent prediction. CRFs have the theoretical advantage of incorporating all these factors in a principled and efficient way. We demonstrated that CRFs outperform HMMs also experimentally. We also demonstrated the usefulness of some new probabilistic variables and phonological variables. Our results mainly have implications for the textual prediction of accents in TTS applications, but might also be useful in automatic speech recognition tasks such as automatic transcription of multi-speaker meetings. In the near future we would like to incorporate reliable acoustic information, controlling for individual speaker difference and also apply different discriminative sequence labeling techniques to pitch accent prediction task. 8 Acknowledgements This work was partially funded by CAREER award #IIS 9733067 IGERT. We would also like to thank Mark Johnson for the idea of this project, Dan Jurafsky, Alan Bell, Cynthia Girand, and Jason Brenier for their helpful comments and help with the database. References Y. Altun, T. Hofmann, and M. Johnson. 2003a. Discriminative learning for label sequences via boosting. In Proc. of Advances in Neural Information Processing Systems. Y. Altun, I. Tsochantaridis, and T. Hofmann. 2003b. Hidden markov support vector machines. In Proc. of 20th International Conference on Machine Learning. M. Collins. 2002. Discriminative training methods for Hidden Markov Models: Theory and experiments with perceptron algorithms. In Proc. of Empirical Methods of Natural Language Processing. A. Conkie, G. Riccardi, and R. Rose. 1999. Prosody recognition from speech utterances using acoustic and linguistic based models of prosodic events. In Proc. of EUROSPEECH’99. W. J. Conover. 1980. Practical Nonparametric Statistics. Wiley, New York, 2nd edition. E. Fosler-Lussier and N. Morgan. 1999. Effects of speaking rate and word frequency on conversational pronunci ations. In Speech Communication. J. Godfrey, E. Holliman, and J. McDaniel. 1992. SWITCHBOARD: Telephone speech corpus for research and develo pment. In Proc. of the International Conference on Acoustics, Speech, and Signal Processing. S. Greenberg, D. Ellis, and J. Hollenback. 1996. Insights into spoken language gleaned from phonetic transcripti on of the Switchboard corpus. In Proc. of International Conference on Spoken Language Processsing. J. Hirschberg. 1993. Pitch accent in context: Predicting intonational prominence from text. 
Artificial Intelligence, 63(1-2):305–340. M. Johnson, S. Geman, S. Canon, Z. Chi, and S. Riezler. 1999. Estimators for stochastic unification-based grammars. In Proc. of ACL’99, Association for Computational Linguistics. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of 18th International Conference on Machine Learning. A. McCallum, D. Freitag, and F. Pereira. 2000. Maximum Entropy Markov Models for Information Extraction and Segmentation. In Proc. of 17th International Conference on Machine Learning. A. McCallum. 2003. Efficiently inducing features of Conditional Random Fields. In Proc. of Uncertainty in Artificial Intelligence. T. Minka. 2001. Algorithms for maximum-likelihood logistic regression. Technical report, CMU, Department of Statistics, TR 758. S. Pan and J. Hirschberg. 2001. Modeling local context for pitch accent prediction. In Proc. of ACL’01, Association for Computational Linguistics. S. Pan and K. McKeown. 1999. Word informativeness and automatic pitch accent modeling. In Proc. of the Joint SIGDAT Conference on EMNLP and VLC. V. Punyakanok and D. Roth. 2000. The use of classifiers in sequential inference. In Proc. of Advances in Neural Information Processing Systems. F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. In Proc. of Human Language Technology. Xuejing Sun. 2002. Pitch accent prediction using ensemble machine learning. In Proc. of the International Conference on Spoken Language Processing. B. Taskar, C. Guestrin, and D. Koller. 2004. Max-margin Markov networks. In Proc. of Advances in Neural Information Processing Systems. P. Taylor. 2000. Analysis and synthesis of intonation using the Tilt model. Journal of the Acoustical Society of America. C. W. Wightman, A. K. Syrdal, G. Stemmer, A. Conkie, and M. Beutnagel. 2000. Perceptually Based Automatic Prosody Labeling and Prosodically Enriched Unit Selection Improve Concatenative Text-To-Speech Synthesis. Volume 2, pages 71–74.
Acquiring the Meaning of Discourse Markers Ben Hutchinson School of Informatics University of Edinburgh [email protected] Abstract This paper applies machine learning techniques to acquiring aspects of the meaning of discourse markers. Three subtasks of acquiring the meaning of a discourse marker are considered: learning its polarity, veridicality, and type (i.e. causal, temporal or additive). Accuracy of over 90% is achieved for all three tasks, well above the baselines. 1 Introduction This paper is concerned with automatically acquiring the meaning of discourse markers. By considering the distributions of individual tokens of discourse markers, we classify discourse markers along three dimensions upon which there is substantial agreement in the literature: polarity, veridicality and type. This approach of classifying linguistic types by the distribution of linguistic tokens makes this research similar in spirit to that of Baldwin and Bond (2003) and Stevenson and Merlo (1999). Discourse markers signal relations between discourse units. As such, discourse markers play an important role in the parsing of natural language discourse (Forbes et al., 2001; Marcu, 2000), and their correspondence with discourse relations can be exploited for the unsupervised learning of discourse relations (Marcu and Echihabi, 2002). In addition, generating natural language discourse requires the appropriate selection and placement of discourse markers (Moser and Moore, 1995; Grote and Stede, 1998). It follows that a detailed account of the semantics and pragmatics of discourse markers would be a useful resource for natural language processing. Rather than looking at the finer subtleties in meaning of particular discourse markers (e.g. Bestgen et al. (2003)), this paper aims at a broad scale classification of a subclass of discourse markers: structural connectives. This breadth of coverage is of particular importance for discourse parsing, where a wide range of linguistic realisations must be catered for. This work can be seen as orthogonal to that of Di Eugenio et al. (1997), which addresses the problem of learning if and where discourse markers should be generated. Unfortunately, the manual classification of large numbers of discourse markers has proven to be a difficult task, and no complete classification yet exists. For example, Knott (1996) presents a list of around 350 discourse markers, but his taxonomic classification, perhaps the largest classification in the literature, accounts for only around 150 of these. A general method of automatically classifying discourse markers would therefore be of great utility, both for English and for languages with fewer manually created resources. This paper constitutes a step in that direction. It attempts to classify discourse markers whose classes are already known, and this allows the classifier to be evaluated empirically. The proposed task of learning automatically the meaning of discourse markers raises several questions which we hope to answer: Q1. Difficulty How hard is it to acquire the meaning of discourse markers? Are some aspects of meaning harder to acquire than others? Q2. Choice of features What features are useful for acquiring the meaning of discourse markers? Does the optimal choice of features depend on the aspect of meaning being learnt? Q3. Classifiers Which machine learning algorithms work best for this task? Can the right choice of empirical features make the classification problems linearly separable? Q4. 
Evidence Can corpus evidence be found for the existing classifications of discourse markers? Is there empirical evidence for a separate class of TEMPORAL markers? We proceed by first introducing the classes of discourse markers that we use in our experiments. Section 3 discusses the database of discourse markers used as our corpus. In Section 4 we describe our experiments, including choice of features. The results are presented in Section 5. Finally, we conclude and discuss future work in Section 6. 2 Discourse markers Discourse markers are lexical items (possibly multiword) that signal relations between propositions, events or speech acts. Examples of discourse markers are given in Tables 1, 2 and 3. In this paper we will focus on a subclass of discourse markers known as structural connectives. These markers, even though they may be multiword expressions, function syntactically as if they were coordinating or subordinating conjunctions (Webber et al., 2003). The literature contains many different classifications of discourse markers, drawing upon a wide range of evidence including textual cohesion (Halliday and Hasan, 1976), hypotactic conjunctions (Martin, 1992), cognitive plausibility (Sanders et al., 1992), substitutability (Knott, 1996), and psycholinguistic experiments (Louwerse, 2001). Nevertheless there is also considerable agreement. Three dimensions of classification that recur, albeit under a variety of names, are polarity, veridicality and type. We now discuss each of these in turn. 2.1 Polarity Many discourse markers signal a concession, a contrast or the denial of an expectation. These markers have been described as having the feature polarity=NEG-POL. An example is given in (1). (1) Suzy’s part-time, but she does more work than the rest of us put together. (Taken from Knott (1996, p. 185)) This sentence is true if and only if Suzy both is parttime and does more work than the rest of them put together. In addition, it has the additional effect of signalling that the fact Suzy does more work is surprising — it denies an expectation. A similar effect can be obtained by using the connective and and adding more context, as in (2) (2) Suzy’s efficiency is astounding. She’s part-time, and she does more work than the rest of us put together. The difference is that although it is possible for and to co-occur with a negative polarity discourse relation, it need not. Discourse markers like and are said to have the feature polarity=POS-POL. 1 On 1An alternative view is that discourse markers like and are underspecified with respect to polarity (Knott, 1996). In this the other hand, a NEG-POL discourse marker like but always co-occurs with a negative polarity discourse relation. The gold standard classes of POS-POL and NEGPOL discourse markers used in the learning experiments are shown in Table 1. The gold standards for all three experiments were compiled by consulting a range of previous classifications (Knott, 1996; Knott and Dale, 1994; Louwerse, 2001). 
2 POS-POL NEG-POL after, and, as, as soon as, because, before, considering that, ever since, for, given that, if, in case, in order that, in that, insofar as, now, now that, on the grounds that, once, seeing as, since, so, so that, the instant, the moment, then, to the extent that, when, whenever although, but, even if, even though, even when, only if, only when, or, or else, though, unless, until, whereas, yet Table 1: Discourse markers used in the polarity experiment 2.2 Veridicality A discourse relation is veridical if it implies the truth of both its arguments (Asher and Lascarides, 2003), otherwise it is not. For example, in (3) it is not necessarily true either that David can stay up or that he promises, or will promise, to be quiet. For this reason we will say if has the feature veridicality=NON-VERIDICAL. (3) David can stay up if he promises to be quiet. The disjunctive discourse marker or is also NONVERIDICAL, because it does not imply that both of its arguments are true. On the other hand, and does imply this, and so has the feature veridicality=VERIDICAL. The VERIDICAL and NON-VERIDICAL discourse markers used in the learning experiments are shown in Table 2. Note that the polarity and veridicality are independent, for example even if is both NEGPOL and NON-VERIDICAL. 2.3 Type Discourse markers like because signal a CAUSAL relation, for example in (4). account, discourse markers have positive polarity only if they can never be paraphrased using a discourse marker with negative polarity. Interpreted in these terms, our experiment aims to distinguish negative polarity discourse markers from all others. 2An effort was made to exclude discourse markers whose classification could be contentious, as well as ones which showed ambiguity across classes. Some level of judgement was therefore exercised by the author. VERIDICAL NONVERIDICAL after, although, and, as, as soon as, because, but, considering that, even though, even when, ever since, for, given that, in order that, in that, insofar as, now, now that, on the grounds that, once, only when, seeing as, since, so, so that, the instant, the moment, then, though, to the extent that, until, when, whenever, whereas, while, yet assuming that, even if, if, if ever, if only, in case, on condition that, on the assumption that, only if, or, or else, supposing that, unless Table 2: Discourse markers used in the veridicality experiment (4) The tension in the boardroom rose sharply because the chairman arrived. As a result, because has the feature type=CAUSAL. Other discourse markers that express a temporal relation, such as after, have the feature type=TEMPORAL. Just as a POS-POL discourse marker can occur with a negative polarity discourse relation, the context can also supply a causal relation even when a TEMPORAL discourse marker is used, as in (5). (5) The tension in the boardroom rose sharply after the chairman arrived. If the relation a discourse marker signals is neither CAUSAL or TEMPORAL it has the feature type=ADDITIVE. The need for a distinct class of TEMPORAL discourse relations is disputed in the literature. On the one hand, it has been suggested that TEMPORAL relations are a subclass of ADDITIVE ones on the grounds that the temporal reference inherent in the marking of tense and aspect “more or less” fixes the temporal ordering of events (Sanders et al., 1992). This contrasts with arguments that resolving discourse relations and temporal order occur as distinct but inter-related processes (Lascarides and Asher, 1993). 
On the other hand, several of the discourse markers we count as TEMPORAL, such as as soon as, might be described as CAUSAL (Oberlander and Knott, 1995). One of the results of the experiments described below is that corpus evidence suggests ADDITIVE, TEMPORAL and CAUSAL discourse markers have distinct distributions. The ADDITIVE, TEMPORAL and CAUSAL discourse markers used in the learning experiments are shown in Table 3. These features are independent of the previous ones, for example even though is CAUSAL, VERIDICAL and NEG-POL. ADDITIVE TEMPORAL CAUSAL and, but, whereas after, as soon as, before, ever since, now, now that, once, until, when, whenever although, because, even though, for, given that, if, if ever, in case, on condition that, on the assumption that, on the grounds that, provided that, providing that, so, so that, supposing that, though, unless Table 3: Discourse markers used in the type experiment 3 Corpus The data for the experiments comes from a database of sentences collected automatically from the British National Corpus and the world wide web (Hutchinson, 2004). The database contains example sentences for each of 140 discourse structural connectives. Many discourse markers have surface forms with other usages, e.g. before in the phrase before noon. The following procedure was therefore used to select sentences for inclusion in the database. First, sentences containing a string matching the surface form of a structural connective were extracted. These sentences were then parsed using a statistical parser (Charniak, 2000). Potential structural connectives were then classified on the basis of their syntactic context, in particular their proximity to S nodes. Figure 1 shows example syntactic contexts which were used to identify discourse markers. (S ...) (CC and) (S...) (SBAR (IN after) (S...)) (PP (IN after) (S...)) (PP (VBN given) (SBAR (IN that) (S...))) (NP (DT the) (NN moment) (SBAR...)) (ADVP (RB as) (RB long) (SBAR (IN as) (S...))) (PP (IN in) (SBAR (IN that) (S...))) Figure 1: Identifying structural connectives It is because structural connectives are easy to identify in this manner that the experiments use only this subclass of discourse markers. Due to both parser errors, and the fact that the syntactic heuristics are not foolproof, the database contains noise. Manual analysis of a sample of 500 sentences revealed about 12% of sentences do not contain the discourse marker they are supposed to. Of the discourse markers used in the experiments, their frequencies in the database ranged from 270 for the instant to 331,701 for and. The mean number of instances was 32,770, while the median was 4,948. 4 Experiments This section presents three machine learning experiments into automatically classifying discourse markers according to their polarity, veridicality and type. We begin in Section 4.1 by describing the features we extract for each discourse marker token. Then in Section 4.2 we describe the different classifiers we use. The results are presented in Section 4.3. 4.1 Features used We only used structural connectives in the experiments. This meant that the clauses linked syntactically were also related at the discourse level (Webber et al., 2003). Two types of features were extracted from the conjoined clauses. Firstly, we used lexical co-occurrences with words of various parts of speech. Secondly, we used a range of linguistically motivated syntactic, semantic, and discourse features. 
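Returning to the connective-identification step in Section 3, the sketch below shows one way the syntactic contexts of Figure 1 could be matched against parser output. It is a simplified reconstruction under our own assumptions: only two of the patterns are covered, and nltk.Tree is used purely for convenience rather than being the tooling behind the database.

# Rough sketch of a tree-pattern test in the spirit of Figure 1: keep a candidate
# connective when it sits in (SBAR/PP (IN ...) (S ...)) or coordinates two S nodes.
from nltk import Tree

def is_structural_connective(subtree):
    """True if this subtree matches one of the Figure-1 style contexts."""
    label = subtree.label()
    kids = [k for k in subtree]
    # (SBAR (IN after) (S ...)) or (PP (IN after) (S ...))
    if label in ("SBAR", "PP") and len(kids) >= 2:
        first, rest = kids[0], kids[1:]
        if isinstance(first, Tree) and first.label() == "IN" and \
           any(isinstance(k, Tree) and k.label().startswith("S") for k in rest):
            return True
    # (S ...) (CC and) (S ...): coordination of two clauses
    if label == "S":
        labels = [k.label() for k in kids if isinstance(k, Tree)]
        if "CC" in labels and labels.count("S") >= 2:
            return True
    return False

parse = Tree.fromstring(
    "(S (S (NP (PRP She)) (VP (VBD left))) (CC and) "
    "(S (NP (PRP he)) (VP (VBD stayed))))")
print([t.label() for t in parse.subtrees(is_structural_connective)])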
4.1.1 Lexical co-occurrences Lexical co-occurrences have previously been shown to be useful for discourse level learning tasks (Lapata and Lascarides, 2004; Marcu and Echihabi, 2002). For each discourse marker, the words occurring in their superordinate (main) and subordinate clauses were recorded,3 along with their parts of speech. We manually clustered the Penn Treebank parts of speech together to obtain coarser grained syntactic categories, as shown in Table 4. We then lemmatised each word and excluded all lemmas with a frequency of less than 1000 per million in the BNC. Finally, words were attached a prefix of either SUB or SUPER according to whether they occurred in the sub- or superordinate clause linked by the marker. This distinguished, for example, between occurrences of then in the antecedent (subordinate) and consequent (main) clauses linked by if. We also recorded the presence of other discourse markers in the two clauses, as these had previously 3For coordinating conjunctions, the left clause was taken to be superordinate/main clause, the right, the subordinate clause. New label Penn Treebank labels vb vb vbd vbg vbn vbp vbz nn nn nns nnp jj jj jjr jjs rb rb rbr rbs aux aux auxg md prp prp prp$ in in Table 4: Clustering of POS labels been found to be useful on a related classification task (Hutchinson, 2003). The discourse markers used for this are based on the list of 350 markers given by Knott (1996), and include multiword expressions. Due to the sparser nature of discourse markers, compared to verbs for example, no frequency cutoffs were used. 4.1.2 Linguistically motivated features These included a range of one and two dimensional features representing more abstract linguistic information, and were extracted through automatic analysis of the parse trees. One dimensional features Two one dimensional features recorded the location of discourse markers. POSITION indicated whether a discourse marker occurred between the clauses it linked, or before both of them. It thus relates to information structuring. EMBEDDING indicated the level of embedding, in number of clauses, of the discourse marker beneath the sentence’s highest level clause. We were interested to see if some types of discourse relations are more often deeply embedded. The remaining features recorded the presence of linguistic features that are localised to a particular clause. Like the lexical co-occurrence features, these were indexed by the clause they occurred in: either SUPER or SUB. We expected negation to correlate with negative polarity discourse markers, and approximated negation using four features. NEG-SUBJ and NEGVERB indicated the presence of subject negation (e.g. nothing) or verbal negation (e.g. n’t). We also recorded the occurrence of a set of negative polarity items (NPI), such as any and ever. The features NPI-AND-NEG and NPI-WO-NEG indicated whether an NPI occurred in a clause with or without verbal or subject negation. Eventualities can be placed or ordered in time using not just discourse markers but also temporal expressions. The feature TEMPEX recorded the number of temporal expressions in each clause, as returned by a temporal expression tagger (Mani and Wilson, 2000). If the main verb was an inflection of to be or to do we recorded this using the features BE and DO. Our motivation was to capture any correlation of these verbs with states and events respectively. If the final verb was a modal auxiliary, this ellipsis was evidence of strong cohesion in the text (Halliday and Hasan, 1976). 
We recorded this with the feature VP-ELLIPSIS. Pronouns also indicate cohesion, and have been shown to correlate with subjectivity (Bestgen et al., 2003). A class of features PRONOUN-X represented pronouns, with the subscript X denoting either 1st person, 2nd person, or 3rd person animate, inanimate or plural. The syntactic structure of each clause was captured using two features, one finer grained and one coarser grained. STRUCTURAL-SKELETON identified the major constituents under the S or VP nodes, e.g. a simple double object construction gives “NP VB NP NP”. ARGS identified whether the clause contained an (overt) object, an (overt) subject, or both, or neither. The overall size of a clause was represented using four features. WORDS, NPS and PPS recorded the numbers of words, NPs and PPs in a clause (not counting embedded clauses). The feature CLAUSES counted the number of clauses embedded beneath a clause.
Two dimensional features These features all recorded combinations of linguistic features across the two clauses linked by the discourse marker. For example, the MOOD feature would take the value ⟨DECL, IMP⟩ for the sentence John is coming, but don’t tell anyone! These features were all determined automatically by analysing the auxiliary verbs and the main verbs’ POS tags. The features and the possible values for each clause were as follows: MODALITY: one of FUTURE, ABILITY or NULL; MOOD: one of DECL, IMP or INTERR; PERFECT: either YES or NO; PROGRESSIVE: either YES or NO; TENSE: either PAST or PRESENT.
4.2 Classifier architectures
Two different classifiers, based on local and global methods of comparison, were used in the experiments. The first, 1 Nearest Neighbour (1NN), is an instance-based classifier which assigns each marker to the same class as that of the marker nearest to it. For this, three different distance metrics were explored. The first metric was the Euclidean distance function dist, shown in (6), applied to probability distributions.
dist(q, r) = sqrt( Σ_x (q(x) − r(x))² )   (6)
The second, skew, is a smoothed variant of the information theoretic Kullback-Leibler divergence (Lee, 2001, with α = 0.99). Its definition is given in (7).
skew(q, r) = D( r ‖ α q + (1 − α) r )   (7)
The third metric, jaccW, is a t-test weighted adaptation of the Jaccard coefficient (Curran and Moens, 2002). In its basic form, the Jaccard coefficient is essentially a measure of how much two distributions overlap. The t-test variant weights co-occurrences by the strength of their collocation, using the following function:
tt(w, q) = ( p(w, q) − p(w) p(q) ) / sqrt( p(w) p(q) )
This is then used to define the weighted version of the Jaccard coefficient, as shown in (8). The words associated with distributions q and r are indicated by C_q and C_r, respectively.
jaccW(q, r) = Σ_{w ∈ C_q ∪ C_r} min( tt(w, q), tt(w, r) ) / Σ_{w ∈ C_q ∪ C_r} max( tt(w, q), tt(w, r) )   (8)
skew and jaccW had previously been found to be the best metrics for other tasks involving lexical similarity. dist is included to indicate what can be achieved using a somewhat naive metric. The second classifier used, Naive Bayes, takes the overall distribution of each class into account. It essentially defines a decision boundary in the form of a curved hyperplane. The Weka implementation (Witten and Frank, 2000) was used for the experiments, with 10-fold cross-validation.
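The following sketch gives a compact rendering of the three 1NN distance measures just described: Euclidean distance, the skew divergence, and the weighted Jaccard coefficient computed over feature weights. It is our own illustration; the function names, the toy distributions, and the treatment of zero counts are assumptions rather than the experimental code.

# Minimal versions of the three metrics used by the 1NN classifier.
import math

def euclidean(q, r):
    keys = set(q) | set(r)
    return math.sqrt(sum((q.get(k, 0.0) - r.get(k, 0.0)) ** 2 for k in keys))

def skew(q, r, alpha=0.99):
    """KL(r || alpha*q + (1-alpha)*r); the smoothing keeps the divergence finite."""
    total = 0.0
    for k, rk in r.items():
        if rk > 0:
            mixed = alpha * q.get(k, 0.0) + (1 - alpha) * rk
            total += rk * math.log(rk / mixed)
    return total

def weighted_jaccard(wq, wr):
    """Jaccard over (t-test style) feature weights: sum of mins over sum of maxes."""
    keys = set(wq) | set(wr)
    num = sum(min(wq.get(k, 0.0), wr.get(k, 0.0)) for k in keys)
    den = sum(max(wq.get(k, 0.0), wr.get(k, 0.0)) for k in keys)
    return num / den if den else 0.0

q = {"yes": 0.5, "ok": 0.25, "right": 0.25}
r = {"yes": 0.4, "draw": 0.6}
print(euclidean(q, r), skew(q, r), weighted_jaccard(q, r))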
4.3 Results
We began by comparing the performance of the 1NN classifier using the various lexical co-occurrence features against the gold standards. The results using all lexical co-occurrences are shown in Table 5.

Task           Baseline   All POS (dist / skew / jaccW)   Best single POS (dist / skew / jaccW)   Best subset
polarity       67.4       74.4 / 72.1 / 74.4              76.7 (rb) / 83.7 (rb) / 76.7 (rb)       83.7 (a)
veridicality   73.5       81.6 / 85.7 / 75.5              83.7 (nn) / 91.8 (vb) / 87.8 (vb)       91.8 (b)
type           58.1       74.2 / 64.5 / 81.8              74.2 (in) / 74.2 (rb) / 77.4 (jj)       87.8 (c)
(a) Using skew and either rb or DMs+rb. (b) Using both skew and vb, and jaccW and vb+in. (c) Using skew and vb+aux+in.
Table 5: Results using the 1NN classifier on lexical co-occurrences

The baseline was obtained by assigning discourse markers to the largest class, i.e. the one with the most types. The best results obtained using just a single POS class are also shown. The results across the different metrics suggest that adverbs and verbs are the best single predictors of polarity and veridicality, respectively.
We next applied the 1NN classifier to co-occurrences with discourse markers. The results are shown in Table 7. The results show that for each task 1NN with the weighted Jaccard coefficient performs at least as well as the other three classifiers.

Task           dist     skew     jaccW    Naive Bayes
polarity       74.4     81.4     81.4     81.4
veridicality   83.7     79.6     83.7     73.5
type           74.2     80.1     80.1     58.1
Table 7: Results using co-occurrences with DMs

We also compared using the following combinations of different parts of speech: vb + aux, vb + in, vb + rb, nn + prp, vb + nn + prp, vb + aux + rb, vb + aux + in, vb + aux + nn + prp, nn + prp + in, DMs + rb, DMs + vb and DMs + rb + vb. The best results obtained using all combinations tried are shown in the last column of Table 5. For DMs + rb, DMs + vb and DMs + rb + vb we also tried weighting the co-occurrences so that the sums of the co-occurrences with each of verbs, adverbs and discourse markers were equal. However, this did not lead to any better results.
One property that distinguishes jaccW from the other metrics is that it weights features by the strength of their collocation. We were therefore interested to see which co-occurrences were most informative. Using Weka’s feature selection utility, we ranked discourse marker co-occurrences by their information gain when predicting polarity, veridicality and type. The most informative co-occurrences are listed in Table 6. For example, if also occurs in the subordinate clause then the discourse marker is more likely to be ADDITIVE.

Feature         Positively correlated discourse marker co-occurrences
POS-POL         though (super), but (super), although (super), assuming that (super)
NEG-POL         otherwise (sub), still (super), in truth (sub), still (sub), after that (super), in this way (super), granted that (super), in contrast (super), by then (sub), in the event (sub)
VERIDICAL       obviously (sub), now (sub), even (sub), indeed (super), once more (super), considering that (super), even after (super), once more (sub), at first sight (super)
NON-VERIDICAL   or (super), no doubt (super), in turn (super), then (super), by all means (super), before then (sub)
ADDITIVE        also (sub), in addition (sub), still (sub), only (sub), at the same time (sub), clearly (sub), naturally (sub), now (sub), of course (sub)
TEMPORAL        back (super), once more (super), like (super), and (super), once more (sub), which was why (super), ...
CAUSAL          again (super), altogether (sub), back (sub), finally (sub), also (super), thereby (sub), at once (sub), while (super), clearly (super), ...
Table 6: Most informative discourse marker co-occurrences in the superordinate (super) and subordinate (sub) clauses

The 1NN and Naive Bayes classifiers were then applied to co-occurrences with just the DMs that were most informative for each task. The results, shown in Table 8, indicate that the performance of 1NN drops when we restrict ourselves to this subset. However, Naive Bayes outperforms all previous 1NN classifiers.
Task           Baseline   1NN dist   1NN skew   Naive Bayes
polarity       67.4       72.1       69.8       90.7
veridicality   73.5       85.7       77.6       91.8
type           58.1       67.7       58.1       93.5
Table 8: Results using most informative DMs (the jaccW metric is omitted because it essentially already has its own method of factoring in informativity)

Weka’s feature selection utility was also applied to all the linguistically motivated features described in Section 4.1.2. The most informative features are shown in Table 9.

Feature         Positively correlated features
POS-POL         No significantly informative predictors correlated positively
NEG-POL         NEG-VERBAL (super), NEG-SUBJ (super), ARGS=NONE (super), MODALITY=⟨ABILITY, ABILITY⟩
VERIDICAL       VERB=BE (super), WORDS (sub), WORDS (super), MODALITY=⟨NULL, NULL⟩
NON-VERID       TEMPEX (super), PRONOUN-2nd (super), PRONOUN-2nd (sub)
ADDITIVE        WORDS (sub), WORDS (super), CLAUSES (sub), MODALITY=⟨ABILITY, FUTURE⟩, MODALITY=⟨ABILITY, ABILITY⟩, NPS (sub), MODALITY=⟨FUTURE, FUTURE⟩, MOOD=⟨DECLARATIVE, DECLARATIVE⟩
TEMPORAL        EMBEDDING=7, PRONOUN (super), MOOD=⟨INTERROGATIVE, DECLARATIVE⟩
CAUSAL          NEG-SUBJ (sub), NEG-VERBAL (sub), NPI-WO-NEG (sub), NPI-AND-NEG (sub), MODALITY=⟨NULL, FUTURE⟩
Table 9: The most informative linguistically motivated predictors for each class. The labels (super) and (sub) indicate that a one dimensional feature belongs to the superordinate or subordinate clause, respectively.

Naive Bayes was then applied using both all the linguistically motivated features, and just the most informative ones. The results are shown in Table 10.

Task           Baseline   All features   Most informative
polarity       67.4       74.4           72.1
veridicality   73.5       77.6           79.6
type           58.1       64.5           77.4
Table 10: Naive Bayes and linguistic features

5 Discussion
The results demonstrate that discourse markers can be classified along three different dimensions with an accuracy of over 90%. The best classifiers used a global algorithm (Naive Bayes), with co-occurrences with a subset of discourse markers as features. The success of Naive Bayes shows that with the right choice of features the classification task is highly separable. The high degree of accuracy attained on the type task suggests that there is empirical evidence for a distinct class of TEMPORAL markers.
The results also provide empirical evidence for the correlation between certain linguistic features and types of discourse relation. Here we restrict ourselves to making just five observations. Firstly, verbs and adverbs are the most informative parts of speech when classifying discourse markers. This is presumably because of their close relation to the main predicate of the clause. Secondly, Table 6 shows that the discourse marker DM in the structure X, but/though/although Y DM Z is more likely to be signalling a positive polarity discourse relation between Y and Z than a negative polarity one. This suggests that a negative polarity discourse relation is less likely to be embedded directly beneath another negative polarity discourse relation. Thirdly, negation correlates with the main clause of NEG-POL discourse markers, and it also correlates with the subordinate clause of CAUSAL ones. Fourthly, NON-VERIDICAL correlates with second person pronouns, suggesting that a writer/speaker is less likely to make assertions about the reader/listener than about other entities. Lastly, the best results with knowledge-poor features, i.e. lexical co-occurrences, were better than those with linguistically sophisticated ones. It may be that the sophisticated features are predictive of only certain subclasses of the classes we used, e.g. hypotheticals, or signallers of contrast.
6 Conclusions and future work We have proposed corpus-based techniques for classifying discourse markers along three dimensions: polarity, veridicality and type. For these tasks we were able to classify with accuracy rates of 90.7%, 91.8% and 93.5% respectively. These equate to error reduction rates of 71.5%, 69.1% and 84.5% from the baseline error rates. In addition, we determined which features were most informative for the different classification tasks. In future work we aim to extend our work in two directions. Firstly, we will consider finer-grained classification tasks, such as learning whether a causal discourse marker introduces a cause or a consequence, e.g. distinguishing because from so. Secondly, we would like to see how far our results can be extended to include adverbial discourse markers, such as instead or for example, by using just features of the clauses they occur in. Acknowledgements I would like to thank Mirella Lapata, Alex Lascarides, Bonnie Webber, and the three anonymous reviewers for their comments on drafts of this paper. This research was supported by EPSRC Grant GR/R40036/01 and a University of Sydney Travelling Scholarship. References Nicholas Asher and Alex Lascarides. 2003. Logics of Conversation. Cambridge University Press. Timothy Baldwin and Francis Bond. 2003. Learning the countability of English nouns from corpus data. In Proceedings of ACL 2003, pages 463–470. Yves Bestgen, Liesbeth Degand, and Wilbert Spooren. 2003. On the use of automatic techniques to determine the semantics of connectives in large newspaper corpora: An exploratory study. In Proceedings of the MAD’03 workshop on Multidisciplinary Approaches to Discourse, October. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the First Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-2000), Seattle, Washington, USA. James R. Curran and M. Moens. 2002. Improvements in automatic thesaurus extraction. In Proceedings of the Workshop on Unsupervised Lexical Acquisition, pages 59–67, Philadelphia, PA, USA. Barbara Di Eugenio, Johanna D. Moore, and Massimo Paolucci. 1997. Learning features that predict cue usage. In Proceedings of the 35th Conference of the Association for Computational Linguistics (ACL97), Madrid, Spain, July. Katherine Forbes, Eleni Miltsakaki, Rashmi Prasad, Anoop Sarkar, Aravind Joshi, and Bonnie Webber. 2001. D-LTAG system—discourse parsing with a lexicalised tree adjoining grammar. In Proceedings of the ESSLI 2001 Workshop on Information Structure, Discourse Structure, and Discourse Semantics, Helsinki, Finland. Brigitte Grote and Manfred Stede. 1998. Discourse marker choice in sentence planning. In Eduard Hovy, editor, Proceedings of the Ninth International Workshop on Natural Language Generation, pages 128– 137. Association for Computational Linguistics, New Brunswick, New Jersey. M. Halliday and R. Hasan. 1976. Cohesion in English. Longman. Ben Hutchinson. 2003. Automatic classification of discourse markers by their co-occurrences. In Proceedings of the ESSLLI 2003 workshop on Discourse Particles: Meaning and Implementation, Vienna, Austria. Ben Hutchinson. 2004. Mining the web for discourse markers. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC 2004), Lisbon, Portugal. Alistair Knott and Robert Dale. 1994. Using linguistic phenomena to motivate a set of coherence relations. Discourse Processes, 18(1):35–62. Alistair Knott. 1996. 
A data-driven methodology for motivating a set of coherence relations. Ph.D. thesis, University of Edinburgh. Mirella Lapata and Alex Lascarides. 2004. Inferring sentence-internal temporal relations. In In Proceedings of the Human Language Technology Conference and the North American Chapter of the Association for Computational Linguistics Annual Meeting, Boston, MA. Alex Lascarides and Nicholas Asher. 1993. Temporal interpretation, discourse relations and common sense entailment. Linguistics and Philosophy, 16(5):437– 493. Lillian Lee. 2001. On the effectiveness of the skew divergence for statistical language analysis. Artificial Intelligence and Statistics, pages 65–72. Max M Louwerse. 2001. An analytic and cognitive parameterization of coherence relations. Cognitive Linguistics, 12(3):291–315. Inderjeet Mani and George Wilson. 2000. Robust temporal processing of news. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL 2000), pages 69–76, New Brunswick, New Jersey. Daniel Marcu and Abdessamad Echihabi. 2002. An unsupervised approach to recognizing discourse relations. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL2002), Philadelphia, PA. Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. The MIT Press. Jim Martin. 1992. English Text: System and Structure. Benjamin, Amsterdam. M. Moser and J. Moore. 1995. Using discourse analysis and automatic text generation to study discourse cue usage. In Proceedings of the AAAI 1995 Spring Symposium on Empirical Methods in Discourse Interpretation and Generation, pages 92–98. Jon Oberlander and Alistair Knott. 1995. Issues in cue phrase implicature. In Proceedings of the AAAI Spring Symposium on Empirical Methods in Discourse Interpretation and Generation. Ted J. M. Sanders, W. P. M. Spooren, and L. G. M. Noordman. 1992. Towards a taxonomy of coherence relations. Discourse Processes, 15:1–35. Suzanne Stevenson and Paola Merlo. 1999. Automatic verb classification using distributions of grammatical features. In Proceedings of the 9th Conference of the European Chapter of the ACL, pages 45–52, Bergen, Norway. Bonnie Webber, Matthew Stone, Aravind Joshi, and Alistair Knott. 2003. Anaphora and discourse structure. Computational Linguistics, 29(4):545–588. Ian H. Witten and Eibe Frank. 2000. Data Mining: Practical machine learning tools with Java implementations. Morgan Kaufmann, San Francisco.
FLSA: Extending Latent Semantic Analysis with features for dialogue act classification Riccardo Serafin CEFRIEL Via Fucini 2 20133 Milano, Italy Riccardo.Serafi[email protected] Barbara Di Eugenio Computer Science University of Illinois Chicago, IL 60607 USA [email protected] Abstract We discuss Feature Latent Semantic Analysis (FLSA), an extension to Latent Semantic Analysis (LSA). LSA is a statistical method that is ordinarily trained on words only; FLSA adds to LSA the richness of the many other linguistic features that a corpus may be labeled with. We applied FLSA to dialogue act classification with excellent results. We report results on three corpora: CallHome Spanish, MapTask, and our own corpus of tutoring dialogues. 1 Introduction In this paper, we propose Feature Latent Semantic Analysis (FLSA) as an extension to Latent Semantic Analysis (LSA). LSA can be thought as representing the meaning of a word as a kind of average of the meanings of all the passages in which it appears, and the meaning of a passage as a kind of average of the meaning of all the words it contains (Landauer and Dumais, 1997). It builds a semantic space where words and passages are represented as vectors. LSA is based on Single Value Decomposition (SVD), a mathematical technique that causes the semantic space to be arranged so as to reflect the major associative patterns in the data. LSA has been successfully applied to many tasks, such as assessing the quality of student essays (Foltz et al., 1999) or interpreting the student’s input in an Intelligent Tutoring system (Wiemer-Hastings, 2001). A common criticism of LSA is that it uses only words and ignores anything else, e.g. syntactic information: to LSA, man bites dog is identical to dog bites man. We suggest that an LSA semantic space can be built from the co-occurrence of arbitrary textual features, not just words. We are calling LSA augmented with features FLSA, for Feature LSA. Relevant prior work on LSA only includes Structured Latent Semantic Analysis (Wiemer-Hastings, 2001), and the predication algorithm of (Kintsch, 2001). We will show that for our task, dialogue act classification, syntactic features do not help, but most dialogue related features do. Surprisingly, one dialogue related feature that does not help is the dialogue act history. We applied LSA / FLSA to dialogue act classification. Dialogue systems need to perform dialogue act classification, in order to understand the role the user’s utterance plays in the dialogue (e.g., a question for information or a request to perform an action). In recent years, a variety of empirical techniques have been used to train the dialogue act classifier (Samuel et al., 1998; Stolcke et al., 2000). A second contribution of our work is to show that FLSA is successful at dialogue act classification, reaching comparable or better results than other published methods. With respect to a baseline of choosing the most frequent dialogue act (DA), LSA reduces error rates between 33% and 52%, and FLSA reduces error rates between 60% and 78%. LSA is an attractive method for this task because it is straightforward to train and use. More importantly, although it is a statistical theory, it has been shown to mimic many aspects of human competence / performance (Landauer and Dumais, 1997). Thus, it appears to capture important components of meaning. Our results suggest that LSA / FLSA do so also as concerns DA classification. 
On MapTask, our FLSA classifier agrees with human coders to a satisfactory degree, and makes most of the same mistakes.
2 Feature Latent Semantic Analysis
We will start by discussing LSA. The input to LSA is a Word-Document matrix W with a row for each word, and a column for each document (for us, a document is a unit, e.g. an utterance, tagged with a DA). Cell c(i, j) contains the frequency with which word i appears in document j. (Word frequencies are normally weighted according to specific functions, but we used raw frequencies because we wanted to assess our extensions to LSA independently from any bias introduced by the specific weighting technique.) Clearly, this w × d matrix W will be very sparse. Next, LSA applies Singular Value Decomposition (SVD) to W, decomposing it into the product of three other matrices, W = T0 S0 D0^T, so that T0 and D0 have orthonormal columns and S0 is diagonal. SVD then provides a simple strategy for optimal approximate fit using smaller matrices. If the singular values in S0 are ordered by size, the first k largest may be kept and the remaining smaller ones set to zero. The product of the resulting matrices is a matrix Ŵ of rank k which is approximately equal to W; it is the matrix of rank k with the best possible least-squares fit to W. The number of dimensions k retained by LSA is an empirical question. However, crucially, k is much smaller than the dimension of the original space. The results we will report later are for the best k we experimented with.
Figure 1 shows a hypothetical dialogue annotated with MapTask style DAs. Table 1 shows the Word-Document matrix W that LSA starts with; note that, as usual, stop words such as a, the, you have been eliminated. (We use a very short list of stop words (< 50), as our experiments revealed that for dialogue act annotation LSA is sensitive to the most common words too. This is why to is included in Table 1.) Table 2 shows the approximate representation of W in a much smaller space. To choose the best tag for a document in the test set, we first compute its vector representation in the semantic space LSA computed, then we compare the vector representing the new document with the vector of each document in the training set. The tag of the document which has the highest similarity with our test vector is assigned to the new document; it is customary to use the cosine between the two vectors as a measure of similarity. In our case, the new document is a unit (utterance) to be tagged with a DA, and we assign to it the DA of the document in the training set to which the new document is most similar.
Feature LSA. In general, in FLSA we add extra features to LSA by adding a new “word” for each value that the feature of interest can take (in some cases, e.g. when adding POS tags, we extend the matrix in a different way; see Sec. 4). The only assumption is that there are one or more non-word related features associated with each document that can take a finite number of values. In the Word-Document matrix, the word index is increased to include a new placeholder for each possible value the feature may take. When creating the matrix, a count of one is placed in the rows related to the new indexes if a particular feature applies to the document under analysis. For instance, if we wish to include the speaker identity as a new feature for the dialogue in Figure 1, the initial Word-Document matrix will be modified as in Table 3 (its first 14 rows are as in Table 1).
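As an illustration of the pipeline just described, the sketch below builds the running example's Word-Document matrix (with the two extra FLSA rows for speaker identity), truncates the SVD at k = 2, folds a new utterance into the reduced space, and assigns it the DA of the most cosine-similar training document. This is our own toy reconstruction in plain numpy; the fold-in formula and k = 2 follow standard LSA practice rather than the system actually used for the experiments.

# LSA / FLSA on the running example: word rows plus <Giver>/<Follower> feature rows.
import numpy as np

vocab = ["do", "see", "lake", "black", "swan", "yes", "ok", "draw",
         "line", "straight", "to", "it", "that", "right",
         "<Giver>", "<Follower>"]          # last two rows: FLSA speaker feature
W = np.array([
    [1,1,0,0,0,0,1], [1,0,0,0,0,0,0], [1,0,0,0,1,0,0], [1,0,0,0,0,0,0],
    [1,0,0,0,0,0,0], [0,1,0,0,0,1,0], [0,0,1,0,0,0,1], [0,0,0,1,0,0,0],
    [0,0,0,1,0,0,0], [0,0,0,1,1,0,0], [0,0,0,1,1,0,0], [0,0,0,1,0,0,1],
    [0,0,0,0,0,1,0], [0,0,0,0,0,1,0],
    [1,0,1,1,0,1,0], [0,1,0,0,1,0,1]], dtype=float)
labels = ["Query-yn", "Reply-y", "Ready", "Instruct", "Check", "Reply-y", "Acknowledge"]

k = 2
T, S, Dt = np.linalg.svd(W, full_matrices=False)
T_k, S_k, D_k = T[:, :k], np.diag(S[:k]), Dt[:k, :].T   # rank-k factors

def fold_in(doc_vector):
    """Project a new (word + feature) count vector into the k-dimensional space."""
    return doc_vector @ T_k @ np.linalg.inv(S_k)

def classify(doc_vector):
    v = fold_in(doc_vector)
    sims = [np.dot(v, d) / (np.linalg.norm(v) * np.linalg.norm(d)) for d in D_k]
    return labels[int(np.argmax(sims))]

# New follower utterance "yes, right": counts for yes, right, and <Follower>.
new = np.zeros(len(vocab))
new[vocab.index("yes")] = 1
new[vocab.index("right")] = 1
new[vocab.index("<Follower>")] = 1
print(classify(new))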
This process is easily extended if more than one non-word feature is desired per document, if more than one feature value applies to a single document, or if a single feature appears more than once in a document (Serafin, 2003).

(Doc 1) G: Do you see the lake with the black swan?   [Query-yn]
(Doc 2) F: Yes, I do   [Reply-y]
(Doc 3) G: Ok,   [Ready]
(Doc 4) G: draw a line straight to it   [Instruct]
(Doc 5) F: straight to the lake?   [Check]
(Doc 6) G: yes, that’s right   [Reply-y]
(Doc 7) F: Ok, I’ll do it   [Acknowledge]
Figure 1: A hypothetical dialogue annotated with MapTask tags

            (Doc 1) (Doc 2) (Doc 3) (Doc 4) (Doc 5) (Doc 6) (Doc 7)
do             1       1       0       0       0       0       1
see            1       0       0       0       0       0       0
lake           1       0       0       0       1       0       0
black          1       0       0       0       0       0       0
swan           1       0       0       0       0       0       0
yes            0       1       0       0       0       1       0
ok             0       0       1       0       0       0       1
draw           0       0       0       1       0       0       0
line           0       0       0       1       0       0       0
straight       0       0       0       1       1       0       0
to             0       0       0       1       1       0       0
it             0       0       0       1       0       0       1
that           0       0       0       0       0       1       0
right          0       0       0       0       0       1       0
Table 1: The 14-dimensional word-document matrix W

3 Corpora
We report experiments on three corpora, Spanish CallHome, MapTask, and DIAG-NLP. The Spanish CallHome corpus (Levin et al., 1998; Ries, 1999) comprises 120 unrestricted phone calls in Spanish between family members and friends, for a total of 12066 unique words and 44628 DAs. The Spanish CallHome corpus is annotated at three levels: DAs, dialogue games and dialogue activities. The DA annotation augments a basic tag such as statement along several dimensions, such as whether the statement describes a psychological state of the speaker. This results in 232 different DA tags, many with very low frequencies. In this sort of situation, tag categories are often collapsed when running experiments so as to get meaningful frequencies (Stolcke et al., 2000). In CallHome37, we collapsed different types of statements and backchannels, obtaining 37 different tags. CallHome37 maintains some subcategorizations, e.g. whether a question is yes/no or rhetorical. In CallHome10, we further collapse these categories. CallHome10 is reduced to 8 DAs proper (e.g., statement, question, answer) plus the two tags “%” for abandoned sentences and “x” for noise. CallHome Spanish is further annotated for dialogue games and activities. Dialogue game annotation is based on the MapTask notion of a dialogue game, a set of utterances starting with an initiation and encompassing all utterances up until the purpose of the game has been fulfilled (e.g., the requested information has been transferred) or abandoned (Carletta et al., 1997). Moves are the components of games; they correspond to one or more DAs, and each is tagged as Initiative, Response or Feedback. Each game is also given a label, such as Info(rmation) or Direct(ive). Finally, activities pertain to the main goal of a certain discourse stretch, such as gossip or argue.
The HCRC MapTask corpus is a collection of dialogues regarding a “Map Task” experiment. Two participants sit opposite one another and each of them receives a map, but the two maps differ. The instruction giver (G)’s map has a route indicated while the instruction follower (F)’s map does not include the drawing of the route. The task is for G to give directions to F, so that, at the end, F is able to reproduce G’s route on her map. The MapTask corpus is composed of 128 dialogues, for a total of 1,835 unique words and 27,084 DAs. It has been tagged at various levels, from POS to disfluencies, from syntax to DAs.
The MapTask coding scheme uses 13 DAs (called moves), that include: Instruct (a request that the partner carry out an action), Explain (one of the partners states some information that was not explicitly elicited by the other), Queryyn/-w, Acknowledge, Reply-y/-n/-w and others. The MapTask corpus is also tagged for games as defined above, but differently from CallHome, 6 DAs are identified as potential initiators of games (of course not every initiator DA initiates a game). Finally, transactions provide the subdialogue structure of a dialogue; each is built of several dialogue games and corresponds to one step of the task. DIAG-NLP is a corpus of computer mediated tutoring dialogues between a tutor and a student who is diagnosing a fault in a mechanical system with a tutoring system built with the DIAG authoring tool (Towne, 1997). The student’s input is via menu, the tutor is in a different room and answers via a text window. The DIAG-NLP corpus comprises 23 ’dialogues’ for a total of 607 unique words and 660 DAs (it is thus much smaller than the other two). It has been annotated for a variety of features, including four DAs3 (Glass et al., 2002): problem solving, the tutor gives problem solving directions; judgment, the tutor evaluates the student’s actions or diagnosis; domain knowledge, the tutor imparts domain knowledge; and other, when none of the previous three applies. Other features encode domain objects and their properties, and Consult Type, the type of student query. 4 Results Table 4 reports the results we obtained for each corpus and method (to train and evaluate each method, we used 5-fold cross-validation). We include the baseline, computed as picking the most frequent DA 3They should be more appropriately termed tutor moves. (Doc 1) (Doc 2) (Doc 3) (Doc 4) (Doc 5) (Doc 6) (Doc 7) Dim. 1 1.3076 0.4717 0.1529 1.6668 1.1737 0.1193 0.9101 Dim. 2 1.5991 0.6797 0.0958 -1.3697 -0.4771 0.2844 0.4205 Table 2: The reduced 2-dimensional matrix ˆW (Doc 1) (Doc 2) (Doc 3) (Doc 4) (Doc 5) (Doc 6) (Doc 7) do 1 1 0 0 0 0 1 ... ... ... ... ... ... ... ... right 0 0 0 0 0 1 0 <Giver> 1 0 1 1 0 1 0 <Follower> 0 1 0 0 1 0 1 Table 3: Word-document matrix W augmented with speaker identity in each corpus;4 the accuracy for LSA; the best accuracy for FLSA, and with what combination of features it was obtained; the best published result, taken from (Ries, 1999) and from (Lager and Zinovjeva, 1999) respectively for CallHome and for MapTask. Finally, for both LSA and FLSA, Table 4 includes, in parenthesis, the dimension k of the reduced semantic space. For each LSA method and corpus, we experimented with values of k between 25 and 350. The values of k that give us the best resuls for each method were thus selected empirically. In all cases, we can see that LSA performs much better than baseline. Moreover, we can see that FLSA further improves performance, dramatically in the case of MapTask. FLSA reduces error rates between 60% and 78%, for all corpora other than DIAG-NLP (all differences in performance between LSA and FLSA are significant, other than for DIAG-NLP). DIAG-NLP may be too small a corpus to train FLSA; or Consult Type may not be effective, but it was the only feature appropriate for FLSA (Sec. 5 discusses how we chose appropriate features). Another extension to LSA we developed, Clustered LSA, did give an improvement in performance for DIAG (79.24%) — please see (Serafin, 2003). As regards comparable approaches, the performance of FLSA is as good or better. 
For Spanish CallHome, (Ries, 1999) reports 76.2% accuracy with a hybrid approach that couples Neural Networks and ngram backoff modeling; the former uses prosodic features and POS tags, and interestingly works best with unigram backoff modeling, i.e., without taking into account the DA history – see our discussion of the ineffectiveness of the DA history below. However, (Ries, 1999) does not mention 4The baselines for CallHome37 and CallHome10 are the same because in both statement is the most frequent DA. his target classification, and the reported baseline of picking the most frequent DA appears compatible with both CallHome37 and CallHome10.5 Thus, our results with FLSA are slightly worse (- 1.33%) or better (+ 2.68%) than Ries’, depending on the target classification. On MapTask, (Lager and Zinovjeva, 1999) achieves 62.1% with Transformation Based Learning using single words, bigrams, word position within the utterance, previous DA, speaker and change of speaker. We achieve much better performance on MapTask with a number of our FLSA models. As regards results on DA classification for other corpora, the best performances obtained are up to 75% for task-oriented dialogues such as Verbmobil (Samuel et al., 1998). (Stolcke et al., 2000) reports an impressive 71% accuracy on transcribed Switchboard dialogues, using a tag set of 42 DAs. These are unrestricted English telephone conversations between two strangers that discuss a general interest topic. The DA classification task appears more difficult for corpora such as Switchboard and CallHome Spanish, that cannot benefit from the regularities imposed on the dialogue by a specific task. (Stolcke et al., 2000) employs a combination of HMM, neural networks and decision trees trained on all available features (words, prosody, sequence of DAs and speaker identity). Table 5 reports a breakdown of the experimental results obtained with FLSA for the three tasks for which it was successful (Table 5 does not include k, which is always 25 for CallHome37 and CallHome10, and varies between 25 and 75 for MapTask). For each corpus, under the line we find results that are significantly better than those obtained with LSA. For MapTask, the first 4 results that are 5An inquiry to clarify this issue went unanswered. Corpus Baseline LSA FLSA Features Best known result CallHome37 42.68% 65.36% (k = 50) 74.87% (k = 25) Game + Initiative 76.20% CallHome10 42.68% 68.91% (k = 25) 78.88% (k = 25) Game + Initiative 76.20% MapTask 20.69% 42.77% (k = 75) 73.91% (k = 25) Game + Speaker 62.10% DIAG-NLP 43.64% 75.73% (k = 50) 74.81% (k = 50) Consult Type n.a. Table 4: Accuracy for LSA and FLSA Corpus accuracy Features CallHome37 62.58% Previous DA CallHome37 71.08% Initiative CallHome37 72.69% Game CallHome37 74.87% Game+Initiative CallHome10 68.32% Previous DA CallHome10 73.97% Initiative CallHome10 76.52% Game CallHome10 78.88% Game+Initiative MapTask 41.84% SRule MapTask 43.28% POS MapTask 43.59% Duration MapTask 46.91% Speaker MapTask 47.09% Previous DA MapTask 66.00% Game MapTask 69.37% Game+Prev. DA MapTask 73.25% Game+Speaker+Prev. DA MapTask 73.91% Game+Speaker Table 5: FLSA Accuracy better than LSA (from POS to Previous DA) are still pretty low; there is a difference of 19% in performance for FLSA when Previous DA is added and when Game is added. Analysis. A few general conclusions can be drawn from Table 5, as they apply in all three cases. First, using the previous DA does not help, either at all (CallHome37 and CallHome10), or very little (MapTask). 
Increasing the length of the dialogue history does not improve performance. In other experiments, we increased the length up to n = 4: we found that the higher n, the worse the performance. As we will see in Section 5, introducing any new feature results in a larger and sparser initial matrix, which makes the task harder for FLSA; to be effective, the amount of information provided by the new feature must be sufficient to overcome this handicap. It is clear that, the longer the dialogue history, the sparser the initial matrix becomes, which explains why performance decreases. However, this does not explain why using even only the previous DA does not help. This implies that the previous DA does not provide a lot of information, as in fact is shown numerically in Section 5. This is surprising because the DA history is usually considered an important determinant of the current DA (but (Ries, 1999) observed the same). Second, the notion of Game appears to be really powerful, as it vastly improves performance on two very different corpora such as CallHome and MapTask.6 We will come back to discussing the usage of Game in a real dialogue system in Section 6. Third, the syntactic features we had access to do not seem to improve performance (they were available only for MapTask). In MapTask SRule indicates the main structure of the utterance, such as Declarative or Wh-question. It is not surprising that SRule did not help, since it is well known that syntactic form is not predictive of DAs, especially those of indirect speech act flavor (Searle, 1975). POS tags don’t help LSA either, as has already been observed by (Wiemer-Hastings, 2001; Kanejiya et al., 2003) for other tasks. The likely reason is that it is necessary to add a different ’word’ for each distinct pair word-POS, e.g., route becomes split as routeNN and route-VB. This makes the Word-Document matrix much sparser: for MapTask, the number of rows increases from 1,835 for plain LSA to 2,324 for FLSA. These negative results on adding syntactic information to LSA may just reinforce one of the claims of the LSA proponents, that structural information is irrelevant for determining meaning (Landauer and Dumais, 1997). Alternatively, syntactic information may need to be added to LSA in different ways. (Wiemer-Hastings, 2001) discusses applying LSA to each syntactic component of the sentence (subject, verb, rest of sentence), and averaging out those three measures to obtain a final similarity measure. The results are better than with plain LSA. (Kintsch, 2001) proposes an algorithm that successfully differentiates the senses of predicates on the basis on their arguments, in which items of the semantic neighborhood of a predicate that are relevant to an argument are combined with the [LSA] predicate vector ... through a spreading activation process. 6Using Game in MapTask does not introduce circularity, even if a game is identified by its initiating DA. We checked the matching rates for initiating and non initiating DAs with the FLSA model which employs Game + Speaker: they are 78.12% and 71.67% respectively. Hence, even if Game makes initiating moves easier to classify, it is highly beneficial for the classification of non initiating moves as well. 5 How to select features for FLSA An important issue is how to select features for FLSA. One possible answer is to exhaustively train every FLSA model that corresponds to one possible feature combination. The problem is that training LSA models is in general time consuming. 
For example, training each FLSA model on CallHome37 takes about 35 minutes of CPU time, and on MapTask 17 minutes, on computers with one Pentium 1.7 GHz processor and 1 GB of memory. Thus, it would be better to focus only on the most promising models, especially when the number of features is high, because of the exponential number of combinations. In this work, we trained FLSA on each individual feature. Then, we trained FLSA on each feature combination that we expected to be effective, either because of the good performance of each individual feature, or because it includes features that are deemed predictive of DAs, such as the previous DA(s), even if these did not perform well individually.

After we ran our experiments, we performed a post hoc analysis based on the notion of Information Gain (IG) from decision tree learning (Quinlan, 1993). One approach to choosing the next feature to add to the tree at each iteration is to pick the one with the highest IG. Suppose the data set S is classified using n categories v_1, ..., v_n, each with probability p_i. S's entropy H can be seen as an indicator of how uncertain the outcome of the classification is, and is given by:

H(S) = - \sum_{i=1}^{n} p_i \log_2(p_i)    (1)

If feature F divides S into k subsets S_1, ..., S_k, then IG is the expected reduction in entropy caused by partitioning the data according to the values of F:

IG(S, F) = H(S) - \sum_{i=1}^{k} \frac{|S_i|}{|S|} H(S_i)    (2)

In our case, we first computed the entropy of the corpora with respect to the classification induced by the DA tags (see Table 6, which also includes the LSA accuracy for convenience). Then, we computed the IG of the features or feature combinations we used in the FLSA experiments. Table 7 reports the IG for most of the features from Table 5; it is ordered by FLSA performance.

Table 6: Entropy measures

Corpus       Entropy   LSA
CallHome37   3.004     65.36%
CallHome10   2.51      68.91%
MapTask      3.38      42.77%

Table 7: Information gain for FLSA

Corpus       Features                    IG     FLSA
CallHome37   Previous DA                 0.21   62.58%
CallHome37   Initiative                  0.69   71.08%
CallHome37   Game                        0.59   72.69%
CallHome37   Game + Initiative           1.09   74.87%
CallHome10   Previous DA                 0.13   68.32%
CallHome10   Initiative                  0.53   73.97%
CallHome10   Game                        0.53   76.52%
CallHome10   Game + Initiative           1.01   78.88%
MapTask      Duration                    0.54   43.59%
MapTask      Speaker                     0.31   46.91%
MapTask      Prev. DA                    0.58   47.09%
MapTask      Game                        1.21   66.00%
MapTask      Game + Speaker + Prev. DA   2.04   73.25%
MapTask      Game + Speaker              1.62   73.91%

On the whole, IG appears to be a reasonably accurate predictor of performance. When a feature or feature combination has a high IG, e.g. over 1, there is also a high performance improvement. Occasionally, if the IG is small this does not hold. For example, using the previous DA reduces the entropy by 0.21 for CallHome37, but performance actually decreases. Most likely, the amount of new information introduced is rather low and it is overcome by having a larger and sparser initial matrix, which makes the task harder for FLSA. Also, when performance improves it does not necessarily increase linearly with IG (see e.g. Game + Speaker + Previous DA and Game + Speaker for MapTask). Nevertheless, IG can be effectively used to weed out unpromising features, or to rank feature combinations so that the most promising FLSA models can be trained first.

6 Discussion and future work

In this paper, we have presented a novel extension to LSA, that we have called Feature LSA. Our work is the first to show that FLSA is more effective than LSA, at least for the specific task we worked on, DA classification.
In parallel, we have shown that FLSA can be effectively used to train a DA classifier. We have reached performances comparable to or better than published results on DA classification, and we have used an easily trainable method. FLSA also highlights the effectiveness of other dialogue-related features, such as Game, to classify DAs.

The drawback of features such as Game is that a dialogue system may not have them at its disposal when doing DA classification in real time. However, this problem may be circumvented. The number of different games is in general rather low (8 in CallHome Spanish, 6 in MapTask), and the game label is constant across DAs belonging to the same game. Each DA can be classified by augmenting it with each possible game label, and by choosing the most accurate match among those returned by each of these classification attempts. Further, if the system can reliably recognize the end of a game, the method just described needs to be used only for the first DA of each game. Then, the game label that gives the best result becomes the game label used for the next few DAs, until the end of the current game is detected.

Another reason why we advocate FLSA over other approaches is that it appears to be close to human performance for DA classification, in the same way that LSA approximates well many aspects of human competence / performance (Landauer and Dumais, 1997). To support this claim, first, we used the κ coefficient (Krippendorff, 1980; Carletta, 1996) to assess the agreement between the classification made by FLSA and the classification from the corpora — see Table 8.

Table 8: κ measures of agreement

Corpus       FLSA κ
CallHome37   0.676
CallHome10   0.721
MapTask      0.740

A general rule of thumb on how to interpret the values of κ (Krippendorff, 1980) is to require a value of κ ≥ 0.8, with 0.67 < κ < 0.8 allowing tentative conclusions to be drawn. As a whole, Table 8 shows that FLSA achieves a satisfying level of agreement with human coders. To put Table 8 in perspective, note that expert human coders achieved κ = 0.83 on DA classification for MapTask, but also had available the speech source (Carletta et al., 1997). We also compared the confusion matrix from (Carletta et al., 1997) with the confusion matrix we obtained for our best result on MapTask (FLSA using Game + Speaker). For humans, the largest sources of confusion are between: check and query-yn; instruct and clarify; and acknowledge, reply-y and ready. Likewise, our FLSA method makes the most mistakes when distinguishing between instruct and clarify; and acknowledge, reply-y, and ready. Instead it performs better than humans on distinguishing check and query-yn. Thus, most of the sources of confusion for humans are the same as for FLSA.

Future work includes further investigating how to select promising feature combinations, e.g. by using logistic regression. We are also exploring whether FLSA can be used as the basis for semi-automatic annotation of dialogue acts, to be incorporated into MUP, an annotation tool we have developed (Glass and Di Eugenio, 2002). The problem is that large corpora are necessary to train methods based on LSA. This would seem to defeat the purpose of using FLSA as the basis for semi-automatic dialogue annotation, since, to train FLSA in a new domain, we would need a large hand-annotated corpus to start with. Co-training (Blum and Mitchell, 1998) may offer a solution to this problem.
In co-training, two different classifiers are initially trained on a small set of annotated data, by using different features. Afterwards, each classifier is allowed to label some unlabelled data, and picks its most confidently predicted positive and negative examples; this data is added to the annotated data. The process repeats until the desired performance is achieved. In our scenario, we will experiment with training two different FLSA models, or one FLSA model and a different classifier, such as a naive Bayes classifier, on a small portion of annotated data that includes features like DAs, Game, etc. We will then proceed as described on the unlabelled data.

Finally, we have started applying FLSA to a different problem, that of judging the coherence of texts. Whereas LSA has already been successfully applied to this task (Foltz et al., 1998), the issue is whether FLSA could perform better by also taking into account those features of a text that enhance its coherence for humans, such as appropriate cue words.

Acknowledgments

This work is supported by grant N00014-00-1-0640 from the Office of Naval Research, and in part, by award 0133123 from the National Science Foundation. Thanks to Michael Glass for initially suggesting extending LSA with features and to HCRC (University of Edinburgh) for sharing their annotated MapTask corpus. The work was performed while the first author was at the University of Illinois in Chicago.

References

Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In COLT98, Proceedings of the Conference on Computational Learning Theory.
Jean Carletta, Amy Isard, Stephen Isard, Jacqueline C. Kowtko, Gwyneth Doherty-Sneddon, and Anne H. Anderson. 1997. The reliability of a dialogue structure coding scheme. Computational Linguistics, 23(1):13–31.
Jean Carletta. 1996. Assessing agreement on classification tasks: the Kappa statistic. Computational Linguistics, 22(2):249–254.
Peter W. Foltz, Walter Kintsch, and Thomas K. Landauer. 1998. The measurement of textual coherence with Latent Semantic Analysis. Discourse Processes, 25:285–308.
Peter W. Foltz, Darrell Laham, and Thomas K. Landauer. 1999. The intelligent essay assessor: Applications to educational technology. Interactive Multimedia Electronic Journal of Computer-Enhanced Learning, 1(2).
Michael Glass and Barbara Di Eugenio. 2002. MUP: The UIC standoff markup tool. In The Third SIGdial Workshop on Discourse and Dialogue, Philadelphia, PA, July.
Michael Glass, Heena Raval, Barbara Di Eugenio, and Maarika Traat. 2002. The DIAG-NLP dialogues: coding manual. Technical Report UIC-CS 02-03, University of Illinois - Chicago.
Dharmendra Kanejiya, Arun Kumar, and Surendra Prasad. 2003. Automatic Evaluation of Students' Answers using Syntactically Enhanced LSA. In HLT-NAACL Workshop on Building Educational Applications using Natural Language Processing, pages 53–60, Edmonton, Canada.
Walter Kintsch. 2001. Predication. Cognitive Science, 25:173–202.
Klaus Krippendorff. 1980. Content Analysis: an Introduction to its Methodology. Sage Publications, Beverly Hills, CA.
T. Lager and N. Zinovjeva. 1999. Training a dialogue act tagger with the µ-TBL system. In The Third Swedish Symposium on Multimodal Communication, Linköping University Natural Language Processing Laboratory (NLPLAB).
Thomas K. Landauer and S.T. Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104:211–240.
Lori Levin, Ann Thymé-Gobbel, Alon Lavie, Klaus Ries, and Klaus Zechner. 1998. A discourse coding scheme for conversational Spanish. In Proceedings of ICSLP.
J. Ross Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann.
Klaus Ries. 1999. HMM and Neural Network Based Speech Act Detection. In Proceedings of ICASSP 99, Phoenix, Arizona, March.
Ken Samuel, Sandra Carberry, and K. Vijay-Shanker. 1998. Dialogue act tagging with transformation-based learning. In ACL/COLING 98, Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics (joint with the 17th International Conference on Computational Linguistics), pages 1150–1156.
John R. Searle. 1975. Indirect Speech Acts. In P. Cole and J.L. Morgan, editors, Syntax and Semantics 3. Speech Acts. Academic Press. Reprinted in Pragmatics. A Reader, Steven Davis editor, Oxford University Press, 1991.
Riccardo Serafin. 2003. Feature Latent Semantic Analysis for dialogue act interpretation. Master's thesis, University of Illinois - Chicago.
A. Stolcke, K. Ries, N. Coccaro, E. Shriberg, R. Bates, D. Jurafsky, P. Taylor, R. Martin, C. Van Ess-Dykema, and M. Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339–373.
Douglas M. Towne. 1997. Approximate reasoning techniques for intelligent diagnostic instruction. International Journal of Artificial Intelligence in Education.
Peter Wiemer-Hastings. 2001. Rules for syntax, vectors for semantics. In CogSci01, Proceedings of the Twenty-Third Annual Meeting of the Cognitive Science Society, Edinburgh, Scotland.
Developing A Flexible Spoken Dialog System Using Simulation

Grace Chung
Corporation for National Research Initiatives
1895 Preston White Drive, Suite 100
Reston, VA, 20191
[email protected]

(This research is sponsored in part by SPAWAR SSC-SD. The content of this paper does not necessarily reflect the position or policy of the Government, and no official endorsement should be inferred.)

Abstract

In this paper, we describe a new methodology to develop mixed-initiative spoken dialog systems, which is based on the extensive use of simulations to accelerate the development process. With the help of simulations, a system providing information about a database of nearly 1000 restaurants in the Boston area has been developed. The simulator can produce thousands of unique dialogs which not only benefit dialog development but also provide data to train the speech recognizer and understanding components, in preparation for real user interactions. Also described is a strategy for creating cooperative responses to user queries, incorporating an intelligent language generation capability that produces content-dependent verbal descriptions of listed items.

1 Introduction

Spoken dialog systems have traditionally been difficult to instantiate because of the extensive efforts required for configuring each component, from the natural language (NL) understanding to the domain-specific context tracking and dialog engines. This task is particularly involved when building systems that empower users with greater flexibility at the spoken input via mixed-initiative interaction (Zue et al., 2000; Ferguson and Allen, 1998), and systems that enable querying across large and frequently changing databases.

The goal of this work is to assemble natural spoken dialog interfaces that enable flexible interactions through mixed-initiative dialog and cooperative responses. Such interfaces aim to help users navigate large information spaces such as on-line databases. Conventional systems generally guide the users through a series of scripted prompts, either through a finite-state grammar or a dialog control table.

Table 1: Example human-computer dialog.

User:    Are there any Thai restaurants?
System:  Can you provide a neighborhood or city?
User:    Boston.
System:  There are 21 entries found. Can you be more specific?
User:    A restaurant in Dorchester.
System:  There are no Thai restaurants in Dorchester. Is there anything else I can help you with?
User:    What about in the South End?
System:  There are 2 entries found. Thai Village and House of Siam.

In Table 1, the system prompts present a sequence of questions in an attempt to solicit more constraints from the user, with the goal of obtaining a small data subset from the database. The system responses are generated from a set of rules that only anticipate one of a handful of situations: (1) the set of entries returned is too large, (2) the set of entries is adequately small to enumerate, and (3) no available entries have been returned. A more flexible scenario would allow the user to browse the content by specifying one or more constraints in any order. The system should then return a succinct summary of the content upon user specification of each constraint. This would provide improved feedback to the user about the available choices so far, guard against stilted conversations with a fixed number of dialog turns for every interaction, and mitigate against repeated scenarios where user queries return no items.
However, much effort is then required in configuring the numerous scenarios under which users may issue sequences of queries in various orders. User queries are likely to differ if the database contents shift over time, changing the frequency and availability of certain entries. Furthermore, there remains the well-known "chicken-and-egg" problem of obtaining real-user data. With no real examples of human-computer interactions, it is difficult for developers to instantiate and configure a robust system. Yet without a reasonably operational system, it is equally difficult to convince real users to generate dialogs, particularly those which achieve successful completion. Hence, the usual development process consists of multiple iterations of expensive data collections and incremental system improvements.

This paper presents an alternative paradigm for designing such a spoken dialog system. Our methodology employs simulations to reduce the time and effort required to build the system. Simulations facilitate prototyping and testing of an initial version of the system that automatically produces cooperative responses to user queries. We advocate the use of a suite of simulation techniques to create large numbers of synthetic user interactions with the system, including both typed and spoken inputs, where the speech is generated using a speech synthesizer. The resulting dialogs can be used to (1) diagnose the system for any problematic interactions, (2) enable a developer to examine system responses for large numbers of possible user queries, and (3) create an initial corpus for training the language models and probabilistic NL grammar. Thus, the initial phase of development comprises simulating hundreds of dialogs and iterative refinements prior to real-user data collection.

In the next sections, we first describe our spoken dialog system architecture. This is followed by a description of a simulator, which operates in concert with a language generation system to output synthetic user queries. We elaborate on how the architecture can simulate coherent dialogs, and can be tuned to simulate a cooperative or uncooperative user. Then, methods for generating cooperative responses for a restaurant information domain are described. We detail how simulations have accelerated these developments.

2 System Architecture with Simulator

Figure 1 depicts a spoken dialog system architecture functioning with simulator components, which create synthetic user inputs. Simulations can be customized to generate in text or speech mode. In text mode, text utterances are treated as user inputs to the understanding components. The dialog manager creates reply frames that encode information for generating the system reply string. These are also used by the simulator for selecting a random user response in the next turn. In speech mode, synthetic waveforms are created and recognized by the speech recognizer, yielding an N-best list for the understanding components.

[Figure 1: A spoken dialog system architecture integrated with user simulation components.]

Examples and experiments in this paper are drawn from a Boston restaurant information system.
Obtained from an on-line source, the content offers information for 863 restaurants, located in 106 cities in the Boston metropolitan area (e.g., Newton, Cambridge) and 45 neighborhoods (e.g., Back Bay, South End). Individual restaurant entries are associated with detailed information such as cuisines, phone numbers, opening hours, credit-card acceptance, price range, handicap accessibility, and menu offerings. Additionally, latitude and longitude information for each restaurant location has been obtained.

2.1 Instantiation of a System

The concept of driving the instantiation of a dialog system from the data source was described in (Polifroni et al., 2003). The steps envisioned for creating an initial prototype starting with on-line content are summarized below:

1. Combing the web for database content
2. Identifying the relevant set of keys associated with the domain, and mapping to the information parsed from the content originator
3. Creating an NL grammar covering possible domain queries
4. Configuring the discourse and dialog components for an initial set of interactions
5. Defining templates for system responses

The above steps are sufficient for enabling a working prototype to communicate with the proposed simulator in text mode. The next phase will involve iteratively running simulated dialogs and refinements on the spoken dialog system, followed by examination of successive corpora of simulated dialogs. Later phases will then incorporate the speech recognition and text-to-speech components.

Table 2: Example summary frame derived from the system reply frame.

  c summary
    :count 14
    :categories (
      c cuisine
        :ordered counts ( 4 2 2 2 ... )
        :ordered values ( "american" "indian" ... )
      c price range
        :ordered counts ( 7 2 2 1 )
        :ordered values ( "cheap" "low" "medium" ... ) )

2.2 Simulation with User Modeling

The simulator (Figure 1) is composed of several modular components. The core simulator accepts reply frames from the dialog system, and produces a meaning representation of the next synthetic user response. A text generation component paraphrases the meaning representation into a text string. In text mode, this poses as a typed user input, whereas in speech mode, the text is passed to a synthesizer as part of a synthesize/recognize cycle. Configuring a simulation for any domain involves customizing a simple external text file to control the behavior of the domain-independent simulator module, and tailoring text generation rules to output a variety of example user input sentences from the meaning representation.

One simulated dialog would commence with an initial query such as "what restaurants do you provide?". The synthetic user makes successive queries that constrain the search to data subsets. It may (1) continue to browse more data subsets, or (2) when a small list of data entries is in focus, choose to query attributes pertaining to one or more individual items, or (3) terminate the conversation. The entire system is run continuously through hundreds of dialogs to produce log files of user and system sentences, and dialog information for subsequent analyses. The simulator also generates generic kinds of statements such as asking for help, asking for a repeat, and clearing the dialog history.

2.2.1 Generation of Semantic Frames

The simulator takes input from the system-generated reply frame, and outputs a flat semantic frame, encapsulating the meaning representation of the next intended user query.
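To make this input/output step concrete, here is a minimal Python sketch of how one constraint for the next simulated turn could be drawn from the counts of a summary frame like the one in Table 2. The dictionary layout and field names are illustrative assumptions, not the system's actual frame format; the count-biased draw mirrors the behavior described in Section 2.2.1.

    import random

    # Hypothetical, simplified stand-in for the summary sub-frame of Table 2:
    # each key lists its candidate values together with their frequency counts
    # in the current data subset.
    summary_frame = {
        "count": 14,
        "categories": {
            "cuisine": {"values": ["american", "indian"], "counts": [4, 2]},
            "price range": {"values": ["cheap", "low", "medium"], "counts": [7, 2, 2]},
        },
    }

    def simulate_user_turn(summary, bias_by_counts=True):
        """Pick one (key, value) constraint for the next simulated user query."""
        key = random.choice(list(summary["categories"]))
        cat = summary["categories"][key]
        weights = cat["counts"] if bias_by_counts else None
        value = random.choices(cat["values"], weights=weights, k=1)[0]
        # Return a flat semantic frame for the next simulated query.
        return {"clause": "seek", key: value}

    print(simulate_user_turn(summary_frame))  # e.g. {'clause': 'seek', 'cuisine': 'american'}

Drawing values in proportion to their counts mimics the count-biased selection described below, so that frequently represented values (for example, popular neighborhoods) are queried more often.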
The system reply frame contains the essential entities, used in the paraphrase for creating the system prompt. In addition, a sub-frame, shown in Table 2, retains precomputed counts associated with the frequency of occurrence of values for every key pertaining to the data subset within the discourse focus. During the browsing stage, the simulator randomly selects a key (e.g., a cuisine) from the given frame, and then makes a random selection of the value (e.g., "Chinese"). The simulator may choose one or more of these key-value pairs as constraints to narrow the search. For each key, more than one value from the list of possible values may be specified (e.g., querying for "Chinese or Japanese restaurants"). When querying about individual restaurants, the simulator randomly selects one restaurant entry from a small list, and then seeks to obtain the value of one key characteristic of that entry. For example, this could be a phone number or an address.

Figure 2 illustrates the decision making performed by the simulator at each turn. At each decision point, the system "throws the dice" to determine how to proceed, for example, whether to select an additional key for constraint within the same turn, and whether to persist in querying about the available attributes of the small list of restaurants or to start over.

[Figure 2: A schematic showing the decision making procedure for the simulator.]

The behavior of the simulator at each decision point can be tuned from an external text file, which allows the following to be specified:

- Probability of combining several constraints into a single query
- Probability of querying a different value for a previous key versus selecting from among other keys presented by the reply frame
- Probability of continued querying of the attributes of restaurants from a list of one or more restaurants
- Probability of the user changing his goals, hence querying with alternative constraints

A simple user model is maintained by the simulator to track the key-value pairs that have already been queried in the current dialog. This tracks the dialog history so as to enable the synthetic user to further query about a previously mentioned item. It also prevents the dialog from cycling indefinitely through the same combinations of constraints, helping to make the dialog more coherent.

The external configuration file can effectively tune the level of cooperative behavior of the synthetic user. If the synthetic user selects a single key-value pair from the reply frame at each turn, a non-empty and successively smaller data subset is guaranteed to result at each turn. Moreover, selections can be configured to bias towards frequencies of instance values. The basis for this stems from the hypothesis that locations populated with more restaurants are more likely to be queried; that is, the statistics of the database instances can be directly reflected in the distribution of user queries. For instance, users are more likely to query about "Chinese restaurants in Chinatown." Hence, the output dialogs may be more suitable for training language models. Alternatively, the synthetic user may be configured to select random combinations of various keys and values from the current or stored summary frame at a turn.
Under these circumstances, the subsequent database retrieval may yield no data for those particular combinations of constraints.

2.2.2 Generation of Simulated Utterances

Each semantic frame is input to Genesis, a text generation module (Seneff, 2002), to output a synthetic user utterance. Genesis executes surface-form generation via recursive generation rules and an associated lexicon. A recent addition to Genesis is the ability to randomly generate one of several variant sentences for the same semantic frame. A developer can specify several rules for each linguistic entity, allowing the generator to randomly select one. Due to the hierarchical nature of these templates, numerous output sentences can be produced from a single semantic frame, with only a few variants specified for each rule. Table 3 depicts example semantic frames and corresponding sample sentences from the simulator. In total, the full corpus of simulated sentences is generated from approximately 55 hand-written rules in the restaurant domain. These rules distinguish themselves from previous text generation tasks by the incorporation of spontaneous speech phenomena such as filled pauses and fragments. In the initial phase, this small rule set is not systematically mined from any existing corpora, but is handcrafted by the developer. However, it may be possible in future to incorporate both statistics and observations learned from real data to augment the generation rules.

2.2.3 Synthetic User Waveforms

A concatenative speech synthesizer (Yi et al., 2000) is used to synthesize the simulated user utterances for this domain. The parameters and concatenative units employed in this synthesizer were tailored for a previous domain, and therefore the naturalness and intelligibility of the output waveforms are expected to be poor. However, the occurrence of some recognition errors may help in assessing their impact on the system.

3 Cooperative Response Strategies

We have aimed to design a more cooperative spoken dialog system in two respects. First, the information is delivered so that at each turn a dynamic summary of the database items in focus is presented. Secondly, the dialog manager is augmented with a domain-independent algorithm to handle over-constrained queries. The system gives alternative suggestions that are integrated with the dynamic summaries.

3.1 Flexible System Responses

Response planning is performed both in the dialog manager and in the language generator, Genesis. To enable flexible responses and avoid rigid system prompts, the dialog manager accesses the database at every turn with the current set of user-specified constraints in focus. With this data subset returned, a data refinement server (Polifroni et al., 2003) then computes frequency characteristics of relevant keys for the subset. This is incorporated into the system reply frame as shown in Table 2. Following this, Genesis provides a summary of the characteristics of the data set, utilizing context information provided by the dialog manager and the frequency statistics. Genesis provides control over how to summarize the data linguistically via explicit rules files. The developer can specify threshold variables which control how lists of items are summarized, separately for different classes of data. If the number of items is below a small threshold, all options are enumerated. If the top frequency counts cover more than a specified fraction of the data, then these categories will be suggested (e.g.,
"Some choices are Italian and Chinese."). Alternatively, summaries can indicate values that are missing or common across the set (e.g., "All of them are cheap."). By accessing the database and then examining the data subset at each turn, the system informs the user with a concise description of the choices available at that point in the dialog. This is a more flexible alternative to following a script of prompts where, in the end, the user may arrive at an empty set. Moreover, we argue that performing the summary in real time yields greater robustness against changes in the database contents.

Table 3: Sample semantic frames from the simulator, along with examples of generated sentence outputs. For each example frame, hundreds of simulated variant sentences can be obtained.

Frame                          Example Sentences
c seek                         I'm interested in some low end restaurants in Back Bay please.
  :neighborhood "Back Bay"     Inexpensive restaurants in Back Bay.
  :price range "low"           Okay a cheap restaurant in Back Bay.
                               <uh> Are there any cheap restaurants in Back Bay?
c request property             Can you please tell me the hours for Emma's?
  :property "hours"            When is Emma's open?
  :name "Emma's"               Well what are the hours for Emma's?
                               Okay then what are the opening hours of Emma's?

3.2 Dialog Management

The domain-independent dialog manager is configurable via an external dialog control table. A set of generic functions is triggered by logical conditions specified in formal rules, where typically several rules fire in each turn. The dialog manager has been extended to handle scenarios in which the user constraints yield an empty set. The aim is to avoid simply stating that no data items were found, without providing some guidance on how the user could reformulate the query. Domain-independent routines relax the constraints using a set of pre-defined and configurable criteria. Alternate methods for relaxing constraints are:

- If a geographical key has been specified, relax the value according to a geography ontology. For instance, if a particular street name has been specified, the relaxation generates a subsuming neighborhood constraint in place of the street name.
- If a geographical key has been specified, remove the geographical constraint and search for the nearest item that satisfies the remaining constraints. The algorithm computes the nearest item according to the central latitude/longitude coordinates of the neighborhood or city.
- Relax the key value with alternative values that have been set as defaults in an external file. For instance, if a Vietnamese restaurant is not available at all, the system relaxes the query to alternative Asian cuisines.
- Choose the one constraint to remove that produces the smallest data subset to speak about. If no one constraint is able to produce a non-empty set, successively remove more constraints.

The rationale for finding a constraint combination that produces a small data set is to avoid suggesting very general alternatives: for instance, suggesting and summarizing the "337 cheap restaurants" when "cheap fondue restaurants" were requested. The routine will attempt to apply each of these relaxation techniques in turn until a non-zero data set can be attained.

4 Experiments

4.1 Simulations in Text Mode

The first stage of development involved iteratively running the system in text mode and inspecting log files of the generated interactions for problems.
This development cycle was particularly useful for extending the coverage of the NL parser and ensuring the proper operation of the end-to-end system. Simulations have helped diagnose initial problems overlooked in the rule-based mechanisms for context tracking; this has served to ensure correct inheritance of attributes given the many permutations of sequences of input sentences that are possible within a single conversation. This is valuable because in such a mixed-initiative system, the user is free to change topics and specify new parameters at any time. For instance, a user may or may not follow up with suggestions for restaurants offered by the system. In fact, the user could continue to modify any of the constraints previously specified in the conversation, or query any attribute of a newly mentioned restaurant. There are vast numbers of dialog contexts that can result, and simulations have assisted greatly in detecting problems.

Furthermore, by generating many variations of possible user constraints, simulations have also helped identify initial problems in the summarization rules for system response generation. The text generation component is handcrafted and benefits largely from examples of real queries to ensure its proper operation. These kinds of problems would otherwise normally be encountered only after many user interactions have occurred.

Table 4 shows a typical simulated dialog. In the interaction shown, the simulator provides one or more constraints at each turn. It also selects alternative values according to the previously chosen key. After the dialog has arrived at a small data set, the simulator randomly asks questions about individual items. During one simulation run, we completed 2000 dialogs in text mode. There were a total of 8147 input utterances, resulting in an average of 4.07 input utterances per dialog. Of the input utterances, 5446 were unique. These were generated from 3349 unique semantic frames. There were 4320 unique system replies.

4.2 Simulating Over-Constrained Queries

By configuring the simulator to query with multiple constraints in single turns, we were able to synthesize over-constrained queries, in which, initially, empty data sets were retrieved. In earlier development cycles, simulated dialogs were crucial in finding combinations that yielded problematic responses due to over-constrained queries. In one run after refinements, we tested 300 dialogs whereby two or more constraints were combined in each query by the simulator. As a result, 113 queries required the system to invoke the relaxation algorithm in order to present a suggested alternative data set. In all these cases, the system gave suggestions without fail. Some examples of the replies are shown in Table 5.

4.3 Simulation with Speech Recognition

The next step is to implement the synthesize-and-recognize cycle whereby the text-to-speech output is directly piped to the recognition engine. A speech recognizer (Glass, 2003) is built using the utterances produced by the text mode as training data for the language model. This recognizer uses class n-gram models. With the training set of 8147 utterances, the perplexity of the bigram and trigram models is 6.6 and 5.1, respectively. The speech recognizer, including the restaurant names, contains 2325 vocabulary items.

Table 4: Typical example of a simulated dialog. SIM are the simulated inputs and SYS are the system responses.

SIM: What restaurants do you know?
SYS: There are 863 restaurants. Some of the options are American, Pizza, and Italian. Most of them are located in Boston and Cambridge.
SIM: Any restaurants in Back Bay?
SYS: There are 57 restaurants in Back Bay. Many of them are American, and Nouvelle Cuisine.
SIM: So how about food in The South End?
SYS: There are 42 restaurants in The South End. They are mostly American, and Nouvelle Cuisine. Many of them are located on Columbus Avenue, and Tremont Street.
SIM: I would like a moderately priced restaurant.
SYS: There are 9 medium priced restaurants in The South End. Some of the options are Nouvelle Cuisine, and American. Many of them are on Tremont Street and Appleton Street.
SIM: <oh> What about any cheap American restaurants?
SYS: There are 7 cheap American restaurants in The South End. They are located on Columbus Avenue, Tremont Street, Dartmouth Street, and Washington Street.
SIM: Columbus Avenue.
SYS: I have found 3 cheap American restaurants in The South End on Columbus Avenue. Their names are Charlie's Sandwich Shoppe, Bob The Chefs, and Grill Fish.
SIM: What is the phone number of the first one?
SYS: The phone number of Charlie's Sandwich Shoppe is 617 536 7669.
SIM: What is the price range for Grill Fish?
SYS: The price range of Grill Fish is cheap.

The understanding component would select one sentence of an N-best list output from the speech recognizer. Presently, when a recognition error causes a hard parse failure in the understanding, the simulator is configured to output the same semantic frame as contained in the previous utterance. This will cause the text generator to output a different variant of the same query. If the parse failures occur multiple times in sequence, the simulated user aborts and terminates the dialog.

Table 5: Example summary responses from simulation-generated over-constrained queries after relaxation of user constraints. Various schemes for relaxation are shown: (1) relaxes on the geographical location, (2) offers a nearest alternative, and (3) removes the cuisine constraint, outputting a single alternate selection.

1. Cheap Restaurants on Rowes Wharf:
   There are no cheap restaurants on Rowes Wharf. However, there are in total 5 cheap restaurants in the Financial District. They are on Broad Street, Post Office Square, Federal Street, and Bromfield Street.
2. Cheap Thai Restaurants in Dorchester:
   There are no cheap Thai restaurants in Dorchester. However, there are in total 14 cheap Thai restaurants. They are predominantly in Brighton, and The South End. The nearest one is Thai Village in the South End. It is less than 3 miles away from Dorchester.
3. Expensive Asian Restaurants on Hale Street:
   There are no high end Asian restaurants on Hale Street. Alternatively, Yanks serves high end American food. It is located at 717 Hale Street in Beverly Farms.

4.4 Results from Spoken Dialog System

Our initial run of 36 dialogs yielded 213 sentences. To simplify dialogs, the simulator is configured to specify just one constraint at each turn. After the data subset has been narrowed down to six items or fewer, the simulator queries focus on one of the six items. For the 213 utterances, the recognition word error rate is 11.2%, and the sentence error rate is 32.4%. Because the synthesizer is highly domain specific and was originally trained on another domain, the synthetic waveforms were in fact highly unnatural.
However, the relatively good recognition performance can be attributed to segmental units being well matched to the segment-based recognizer, an exact match to the trained n-gram model, and the lack of spontaneous speech phenomena such as disfluencies. These 36 dialogs were analysed by hand. All dialogs successfully arrived at some small data subset at termination, without aborting due to errors. 29 (80.1%) of the dialogs completed without errors, with the correct desired data set achieved. Of the errorful dialogs, 3 exhibited problems due to recognition errors and 4 exhibited errors in the parse and context tracking mechanisms. All the questions regarding querying of individual restaurants were answered correctly.

5 Discussion

The above evaluations have been conducted on highly restricted scenarios in order to focus development on any fundamental problems that may exist in the system. In all, large numbers of synthetic dialogs have helped us identify problems that in the past would have been discovered only after data collections, and possibly after many failed dialogs with frustrated real users. The hope is that using simulation runs will improve system performance to a level such that the first collection of real user data will contain a reasonable rate of task success, ultimately providing a more useful training corpus. With many software problems already eliminated, a final real-user evaluation will also be more meaningful.

6 Related Work

Recently, researchers have begun to address the rapid prototyping of spoken dialog applications. While some are concerned with the generation of systems from on-line content (Feng et al., 2003), others have addressed portability issues within the dialog manager (Denecke et al., 2002) and the understanding components (Dzikovska et al., 2003). Real user simulations have been employed in other areas of software engineering. Various kinds of human-computer user interfaces can be evaluated for usability by employing simulated human users (Riedl and St. Amant, 2002; Ritter and Young, 2001). These can range from web pages to cockpits and air traffic control systems. Such simulated users have also incorporated perceptual and cognitive models. Previous work in dialog systems has addressed simulation techniques towards the goal of training and evaluation. In (Scheffler and Young, 2000), extensive simulations incorporating user modeling were used to train a system to select dialog strategies in clarification sub-dialogs. These simulations required collecting real-user data to build the user model. Other researchers have used simulations for the evaluation of dialog systems (Hone and Baber, 1995; Araki and Doshita, 1997; Lin and Lee, 2001). In (Lopez et al., 2003), recorded utterances with additive noise were used to run a dialog system in simulation mode. This was used to test alternate confirmation strategies under various recognition accuracies. Their methods did require the recording of scripted user utterances, and hence were limited in the variations of user input. Our specific goals have dealt with creating more cooperative and flexible responses in spoken dialog. The issues of mismatch between user queries and database contents have been addressed by others in database systems (Gaasterland et al., 1992), while the potential for problems with dead-end dialogs caused by over-constrained queries has also been recognized and tackled in (Qu and Green, 2002).
7 Conclusions and Future Work

The use of a simulator has greatly facilitated the development of our dialog system, with the availability of thousands of artificial dialogs. Even relatively restricted synthetic dialogs have already accelerated development. In the next phase, real user data collection will be conducted, along with full-scale evaluation. We plan to compare the efficacy of our language models built from simulated data with those trained from real user data. Future research will address issues of graceful recovery from recognition errors. We believe that the framework of using simulated dialogs, possibly with synthesized speech input augmented with controlled levels of additive noise, can be an effective way to develop and evaluate error recovery strategies.

Current methods for simulating dialogs are quite rudimentary. The generated text only produces certain variants that have been observed but does not respect corpus statistics, nor, in the case of synthetic speech, does it account for spontaneous speech phenomena. Improved simulations could use a set of indexed real speech waveforms invoked by the core simulator to create more realistic input. The main functionalities in the simulator software are now customizable from an external file. The simulator is domain independent and can be tailored for development of similar spoken dialog systems for browsing and navigating large databases. However, further work is needed to incorporate greater configurability into the dialog flow. Increased flexibility for customizing the model of the dialog is needed to enable the software to be applied to the development of other kinds of dialog systems.

8 Acknowledgment

The author wishes to thank Stephanie Seneff for her valuable feedback and the anonymous reviewers for their insightful comments and suggestions.

References

M. Araki and S. Doshita. 1997. Automatic evaluation environment for spoken dialog system evaluation. In Dialog Processing in Spoken Language Systems, 183–194.
M. Denecke et al. 2002. Rapid Prototyping for Spoken Dialog Systems. Proc. COLING, Taipei, Taiwan.
M. Dzikovska et al. 2003. Integrating linguistic and domain knowledge for spoken dialog systems in multiple domains. Proc. IJCAI, Acapulco, Mexico.
J. Feng et al. 2003. Webtalk: Mining Websites for Automatically Building Dialog Systems. Proc. IEEE ASRU, Virgin Islands.
G. Ferguson and J. Allen. 1998. TRIPS: An Integrated Intelligent Problem-Solving Assistant. Proc. of the Fifteenth National Conference on AI (AAAI-98), 26–30. Madison, WI.
T. Gaasterland et al. 1992. An Overview of Cooperative Answering. Journal of Intelligent Information Systems, 1(2), 123–157.
J. Glass. 2003. A Probabilistic Framework for Segment-Based Speech Recognition. Computer Speech and Language, 17, 137–152.
K. Hone and C. Baber. 1995. Using a simulation method to predict the transaction time effects of applying alternative levels of constraint to user utterances within speech interactive dialogs. ESCA Workshop on Spoken Dialog Systems.
B. S. Lin and L. S. Lee. 2001. Computer-aided analysis and design for spoken dialog systems based on quantitative simulations. IEEE Trans. on Speech and Audio Processing, 9(5), 534–548.
R. Lopez-Cozar et al. 2003. Assessment of dialog systems by means of a new simulation technique. Speech Communication, 40, 387–407.
J. Polifroni, G. Chung and S. Seneff. 2003. Towards automatic generation of mixed-initiative dialog systems from web content. Proc. EUROSPEECH, 193–196. Geneva, Switzerland.
Y. Qu and N. Green. 2002. A Constraint-Based Approach for Cooperative Information-Seeking Dialog. Proc. INLG, New York.
M. Riedl and R. St. Amant. 2002. Toward automated exploration of interactive systems. Proc. IUI, 135–142.
F. Ritter and R. Young. 2001. Embodied models as simulated users: Introduction to this special issue on using cognitive models to improve interface design. International Journal of Human-Computer Studies, 55, 1–14.
K. Scheffler and S. Young. 2000. Probabilistic simulation of human-machine dialogs. Proc. ICASSP, 1217–1220. Istanbul, Turkey.
S. Seneff et al. 1998. Galaxy-II: A Reference Architecture For Conversational System Development. Proc. ICSLP. Sydney, Australia.
S. Seneff. 2002. Response Planning and Generation in the MERCURY Flight Reservation System. Computer Speech and Language, 16, 283–312.
V. Zue et al. 2000. JUPITER: A Telephone-Based Conversational Interface for Weather Information. IEEE Transactions on Speech and Audio Processing, 8(1).
J. Yi et al. 2000. A flexible, scalable finite-state transducer architecture for corpus-based concatenative speech synthesis. Proc. ICSLP. Beijing, China.
A High-Performance Semi-Supervised Learning Method for Text Chunking

Rie Kubota Ando    Tong Zhang
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598, U.S.A.
[email protected]  [email protected]

Abstract

In machine learning, whether one can build a more accurate classifier by using unlabeled data (semi-supervised learning) is an important issue. Although a number of semi-supervised methods have been proposed, their effectiveness on NLP tasks is not always clear. This paper presents a novel semi-supervised method that employs a learning paradigm which we call structural learning. The idea is to find "what good classifiers are like" by learning from thousands of automatically generated auxiliary classification problems on unlabeled data. By doing so, the common predictive structure shared by the multiple classification problems can be discovered, which can then be used to improve performance on the target problem. The method produces performance higher than the previous best results on CoNLL'00 syntactic chunking and CoNLL'03 named entity chunking (English and German).

1 Introduction

In supervised learning applications, one can often find a large amount of unlabeled data without difficulty, while labeled data are costly to obtain. Therefore, a natural question is whether we can use unlabeled data to build a more accurate classifier, given the same amount of labeled data. This problem is often referred to as semi-supervised learning. Although a number of semi-supervised methods have been proposed, their effectiveness on NLP tasks is not always clear. For example, co-training (Blum and Mitchell, 1998) automatically bootstraps labels, and such labels are not necessarily reliable (Pierce and Cardie, 2001). A related idea is to use Expectation Maximization (EM) to impute labels. Although useful under some circumstances, when a relatively large amount of labeled data is available, the procedure often degrades performance (e.g. Merialdo (1994)). A number of bootstrapping methods have been proposed for NLP tasks (e.g. Yarowsky (1995), Collins and Singer (1999), Riloff and Jones (1999)). But these typically assume a very small amount of labeled data and have not been shown to improve state-of-the-art performance when a large amount of labeled data is available.

Our goal has been to develop a general learning framework for reliably using unlabeled data to improve performance irrespective of the amount of labeled data available. It is exactly this important and difficult problem that we tackle here. This paper presents a novel semi-supervised method that employs a learning framework called structural learning (Ando and Zhang, 2004), which seeks to discover shared predictive structures (i.e. what good classifiers for the task are like) through jointly learning multiple classification problems on unlabeled data. That is, we systematically create thousands of problems (called auxiliary problems) relevant to the target task using unlabeled data, and train classifiers from the automatically generated 'training data'. We learn the commonality (or structure) of many such classifiers relevant to the task, and use it to improve performance on the target task. One example of such auxiliary problems for chunking tasks is to 'mask' a word and predict whether it is "people" or not from the context, like language modeling.
Another example is to predict the prediction of some classifier trained for the target task. These auxiliary classifiers can be adequately learned since we have very large amounts of 'training data' for them, which we automatically generate from a very large amount of unlabeled data.

The contributions of this paper are two-fold. First, we present a novel robust semi-supervised method based on a new learning model and its application to chunking tasks. Second, we report higher performance than the previous best results on syntactic chunking (the CoNLL'00 corpus) and named entity chunking (the CoNLL'03 English and German corpora). In particular, our results are obtained by using unlabeled data as the only additional resource, while many of the top systems rely on hand-crafted resources such as large name gazetteers or even rule-based post-processing.

2 A Model for Learning Structures

This work uses a linear formulation of structural learning. We first briefly review a standard linear prediction model and then extend it for structural learning. We sketch an optimization algorithm using SVD and compare it to related methods.

2.1 Standard linear prediction model

In the standard formulation of supervised learning, we seek a predictor that maps an input vector x ∈ X to the corresponding output y ∈ Y. Linear prediction models are based on real-valued predictors of the form f(x) = w^T x, where w is called a weight vector. For binary problems, the sign of the linear prediction gives the class label. For k-way classification (with k > 2), a typical method is winner takes all, where we train one predictor per class and choose the class with the highest output value. A frequently used method for finding an accurate predictor \hat{f} is regularized empirical risk minimization (ERM), which minimizes an empirical loss of the predictor (with regularization) on the n training examples {(X_i, Y_i)}:

\hat{f} = \arg\min_f \left( \sum_{i=1}^{n} L(f(X_i), Y_i) + r(f) \right)

L(·) is a loss function to quantify the difference between the prediction f(X_i) and the true output Y_i, and r(·) is a regularization term to control the model complexity. ERM-based methods for discriminative learning are known to be effective for NLP tasks such as chunking (e.g. Kudoh and Matsumoto (2001), Zhang and Johnson (2003)).

2.2 Linear model for structural learning

We present a linear prediction model for structural learning, which extends the traditional model to multiple problems. Specifically, we assume that there exists a low-dimensional predictive structure shared by multiple prediction problems. We seek to discover this structure through joint empirical risk minimization over the multiple problems. Consider m problems indexed by ℓ ∈ {1, ..., m}, each with n_ℓ samples (X_i^ℓ, Y_i^ℓ) indexed by i ∈ {1, ..., n_ℓ}. In our joint linear model, a predictor for problem ℓ takes the following form

f_\ell(\Theta, x) = w_\ell^T x + v_\ell^T \Theta x, \qquad \Theta \Theta^T = I, \qquad (1)

where we use I to denote the identity matrix. Matrix Θ (whose rows are orthonormal) is the common structure parameter shared by all the problems; w_ℓ and v_ℓ are weight vectors specific to each prediction problem ℓ. The idea of this model is to discover a common low-dimensional predictive structure (shared by the m problems) parameterized by the projection matrix Θ. In this setting, the goal of structural learning may also be regarded as learning a good feature map Θx — a low-dimensional feature vector parameterized by Θ.
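As a small numerical illustration of the predictor in equation (1), here is a sketch in Python/NumPy with arbitrary dimensions and random parameters (an illustration only, not the paper's implementation): the shared structure Θ projects the input into a low-dimensional space on which the problem-specific weights v act.

    import numpy as np

    p, h = 10, 3                      # feature dimension and structure dimension (illustrative)
    rng = np.random.default_rng(0)

    # Theta has orthonormal rows (Theta Theta^T = I); here it comes from a QR
    # factorization of a random matrix, purely for illustration.
    Q, _ = np.linalg.qr(rng.normal(size=(p, h)))
    Theta = Q.T                       # shape (h, p)

    w = rng.normal(size=p)            # problem-specific weights on raw features
    v = rng.normal(size=h)            # problem-specific weights on shared features

    def predict(x):
        # f(Theta, x) = w^T x + v^T Theta x  -- equation (1)
        return w @ x + v @ (Theta @ x)

    x = rng.normal(size=p)
    print(predict(x), np.allclose(Theta @ Theta.T, np.eye(h)))  # scalar score, True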
In joint ERM, we seek Θ (and weight vectors) that minimizes the empirical risk summed over all the problems:

[\hat{\Theta}, \{\hat{f}_\ell\}] = \arg\min_{\Theta, \{f_\ell\}} \sum_{\ell=1}^{m} \left( \frac{\sum_{i=1}^{n_\ell} L(f_\ell(\Theta, X_i^\ell), Y_i^\ell)}{n_\ell} + r(f_\ell) \right) \qquad (2)

It can be shown that using joint ERM, we can reliably estimate the optimal joint parameter Θ as long as m is large (even when each n_ℓ is small). This is the key reason why structural learning is effective. A formal PAC-style analysis can be found in (Ando and Zhang, 2004).

2.3 Alternating structure optimization (ASO)

The optimization problem (2) has a simple solution using SVD when we choose square regularization: r(f_ℓ) = λ‖w_ℓ‖_2^2, where the regularization parameter λ is given. For clarity, let u_ℓ be a weight vector for problem ℓ such that u_ℓ = w_ℓ + Θ^T v_ℓ. Then, (2) becomes the minimization of the joint empirical risk written as:

\sum_{\ell=1}^{m} \left( \frac{\sum_{i=1}^{n_\ell} L(u_\ell^T X_i^\ell, Y_i^\ell)}{n_\ell} + \lambda \| u_\ell - \Theta^T v_\ell \|_2^2 \right) \qquad (3)

This minimization can be approximately solved by the following alternating optimization procedure:

- Fix (Θ, {v_ℓ}), and find m predictors {u_ℓ} that minimize the joint empirical risk (3).
- Fix the m predictors {u_ℓ}, and find (Θ, {v_ℓ}) that minimizes the joint empirical risk (3).
- Iterate until a convergence criterion is met.

In the first step, we train m predictors independently. It is the second step that couples all the problems. Its solution is given by the SVD (singular value decomposition) of the predictor matrix U = [u_1, ..., u_m]: the rows of the optimum Θ are given by the most significant left singular vectors of U (in other words, Θ is computed so that the best low-rank approximation of U in the least squares sense is obtained by projecting U onto the row space of Θ; see e.g. Golub and Loan (1996) for SVD). Intuitively, the optimum Θ captures the maximal commonality of the m predictors (each derived from u_ℓ). These m predictors are updated using the new structure matrix Θ in the next iteration, and the process repeats. Figure 1 summarizes the algorithm sketched above, which we call the alternating structure optimization (ASO) algorithm. The formal derivation can be found in (Ando and Zhang, 2004).

Figure 1: SVD-based Alternating Structure Optimization (SVD-ASO) Algorithm

  Input: training data {(X_i^ℓ, Y_i^ℓ)} (ℓ = 1, ..., m)
  Parameters: dimension h and regularization parameter λ
  Output: matrix Θ with h rows
  Initialize: u_ℓ = 0 (ℓ = 1, ..., m), and arbitrary Θ
  iterate
    for ℓ = 1 to m do
      With fixed Θ and v_ℓ = Θ u_ℓ, solve for ŵ_ℓ:
        ŵ_ℓ = argmin_{w_ℓ} [ Σ_{i=1}^{n_ℓ} L(w_ℓ^T X_i^ℓ + (v_ℓ^T Θ) X_i^ℓ, Y_i^ℓ) / n_ℓ + λ ‖w_ℓ‖_2^2 ]
      Let u_ℓ = ŵ_ℓ + Θ^T v_ℓ
    endfor
    Compute the SVD of U = [u_1, ..., u_m].
    Let the rows of Θ be the h left singular vectors of U corresponding to the h largest singular values.
  until converge

2.4 Comparison with existing techniques

It is important to note that this SVD-based ASO (SVD-ASO) procedure is fundamentally different from the usual principal component analysis (PCA), which can be regarded as dimension reduction in the data space X. By contrast, the dimension reduction performed in the SVD-ASO algorithm is on the predictor space (a set of predictors). This is possible because we observe multiple predictors from multiple learning tasks. If we regard the observed predictors as sample points of the predictor distribution in the predictor space (corrupted with estimation error, or noise), then SVD-ASO can be interpreted as finding the "principal components" (or commonality) of these predictors (i.e., "what good predictors are like").
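The structure-update step of Figure 1 (stacking the current predictors into U and taking its top left singular vectors as the rows of Θ) can be sketched in a few lines of NumPy. This illustrates only that single SVD step under simplified assumptions, with synthetic predictors and no loss minimization; it is not the full ASO iteration.

    import numpy as np

    def structure_step(U, h):
        """Given U = [u_1, ..., u_m] (one column per auxiliary predictor),
        return Theta whose h rows are the top left singular vectors of U."""
        # Economy-size SVD: U = L diag(s) R^T; columns of L are left singular vectors.
        L, s, _ = np.linalg.svd(U, full_matrices=False)
        return L[:, :h].T            # shape (h, p): rows are orthonormal

    # Illustration: 50 noisy predictors over 20 features sharing a rank-2 structure.
    rng = np.random.default_rng(0)
    shared = rng.normal(size=(20, 2))
    U = shared @ rng.normal(size=(2, 50)) + 0.1 * rng.normal(size=(20, 50))
    Theta = structure_step(U, h=2)
    print(Theta.shape)               # (2, 20)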
Consequently the method directly looks for low-dimensional structures with the highest predictive power. By contrast, the principle components of input data in the data space (which PCA seeks) may not necessarily have the highest predictive power. The above argument also applies to the feature generation from unlabeled data using LSI (e.g. Ando (2004)). Similarly, Miller et al. (2004) used word-cluster memberships induced from an unannotated corpus as features for named entity chunking. Our work is related but more general, because we can explore additional information from unlabeled data using many different auxiliary problems. Since Miller et al. (2004)’s experiments used a proprietary corpus, direct performance comparison is not possible. However, our preliminary implementation of the word clustering approach did not provide any improvement on our tasks. As we will see, our starting performance is already high. Therefore the additional information discovered by SVD-ASO appears crucial to achieve appreciable improvements. 3 Semi-supervised Learning Method For semi-supervised learning, the idea is to create many auxiliary prediction problems (relevant to the task) from unlabeled data so that we can learn the 3 shared structure  (useful for the task) using the ASO algorithm. In particular, we want to create auxiliary problems with the following properties:  Automatic labeling: we need to automatically generate various “labeled” data for the auxiliary problems from unlabeled data.  Relevancy: auxiliary problems should be related to the target problem. That is, they should share a certain predictive structure. The final classifier for the target task is in the form of (1), a linear predictor for structural learning. We fix  (learned from unlabeled data through auxiliary problems) and optimize weight vectors w and v on the given labeled data. We summarize this semisupervised learning procedure below. 1. Create training data e Z ` = f( e X j ; e Y ` j )g for each auxiliary problem ` from unlabeled data f e X j g. 2. Compute  from f e Z ` g through SVD-ASO. 3. Minimize the empirical risk on the labeled data: ^ f = arg min f P n i=1 L(f (;X i );Y i ) n + kw k 2 2, where f (; x) = w T x + v T x as in (1). 3.1 Auxiliary problem creation The idea is to discover useful features (which do not necessarily appear in the labeled data) from the unlabeled data through learning auxiliary problems. Clearly, auxiliary problems more closely related to the target problem will be more beneficial. However, even if some problems are less relevant, they will not degrade performance severely since they merely result in some irrelevant features (originated from irrelevant -components), which ERM learners can cope with. On the other hand, potential gains from relevant auxiliary problems can be significant. In this sense, our method is robust. We present two general strategies for generating useful auxiliary problems: one in a completely unsupervised fashion, and the other in a partiallysupervised fashion. 3.1.1 Unsupervised strategy In the first strategy, we regard some observable substructures of the input data X as auxiliary class labels, and try to predict these labels using other parts of the input data. Ex 3.1 Predict words. Create auxiliary problems by regarding the word at each position as an auxiliary label, which we want to predict from the context. For instance, predict whether a word is “Smith” or not from its context. 
This problem is relevant to, for instance, named entity chunking since knowing a word is “Smith” helps to predict whether it is part of a name. One binary classification problem can be created for each possible word value (e.g., “IBM”, “he”, “get”,    ). Hence, many auxiliary problems can be obtained using this idea. More generally, given a feature representation of the input data, we may mask some features as unobserved, and learn classifiers to predict these ‘masked’ features based on other features that are not masked. The automatic-labeling requirement is satisfied since the auxiliary labels are observable to us. To create relevant problems, we should choose to (mask and) predict features that have good correlation to the target classes, such as words on text tagging/chunking tasks. 3.1.2 Partially-supervised strategy The second strategy is motivated by co-training. We use two (or more) distinct feature maps:  1 and  2. First, we train a classifier F 1 for the target task, using the feature map  1 and the labeled data. The auxiliary tasks are to predict the behavior of this classifier F 1 (such as predicted labels) on the unlabeled data, by using the other feature map  2. Note that unlike co-training, we only use the classifier as a means of creating auxiliary problems that meet the relevancy requirement, instead of using it to bootstrap labels. Ex 3.2 Predict the top-k choices of the classifier. Predict the combination of k (a few) classes to which F 1 assigns the highest output (confidence) values. For instance, predict whether F 1 assigns the highest confidence values to CLASS1 and CLASS2 in this order. By setting k = 1, the auxiliary task is simply to predict the label prediction of classifier F 1. By setting k > 1, fine-grained distinctions (related to intrinsic sub-classes of target classes) can be learned. From a -way classification problem, !=( k )! binary prediction problems can be created. 4 4 Algorithms Used in Experiments Using auxiliary problems introduced above, we study the performance of our semi-supervised learning method on named entity chunking and syntactic chunking. This section describes the algorithmic aspects of the experimental framework. The taskspecific setup is described in Sections 5 and 6. 4.1 Extension of the basic SVD-ASO algorithm In our experiments, we use an extension of SVDASO. In NLP applications, features have natural grouping according to their types/origins such as ‘current words’, ‘parts-of-speech on the right’, and so forth. It is desirable to perform a localized optimization for each of such natural feature groups. Hence, we associate each feature group with a submatrix of structure matrix . The optimization algorithm for this extension is essentially the same as SVD-ASO in Figure 1, but with the SVD step performed separately for each group. See (Ando and Zhang, 2004) for the precise formulation. In addition, we regularize only those components of w ` which correspond to the non-negative part of u `. The motivation is that positive weights are usually directly related to the target concept, while negative ones often yield much less specific information representing ‘the others’. The resulting extension, in effect, only uses the positive components of U in the SVD computation. 4.2 Chunking algorithm, loss function, training algorithm, and parameter settings As is commonly done, we encode chunk information into word tags to cast the chunking problem to that of sequential word tagging. 
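Since chunking is cast as word tagging here, the following sketch shows one common chunk-to-tag encoding (the B/I/O scheme used in the CoNLL shared tasks). The exact tag set used by the authors is not specified in this excerpt, so this is illustrative only.

```python
def chunks_to_tags(words, chunks):
    """Encode chunk spans as per-word B-X / I-X / O tags (one common scheme).
    chunks: list of (start, end, label) with end exclusive."""
    tags = ["O"] * len(words)
    for start, end, label in chunks:
        tags[start] = "B-" + label
        for i in range(start + 1, end):
            tags[i] = "I-" + label
    return tags

# toy usage: "John Smith visited New York"
print(chunks_to_tags(["John", "Smith", "visited", "New", "York"],
                     [(0, 2, "PER"), (3, 5, "LOC")]))
# ['B-PER', 'I-PER', 'O', 'B-LOC', 'I-LOC']
```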
We perform Viterbi-style decoding to choose the word tag sequence that maximizes the sum of tagging confidence values. In all settings (including baseline methods), the loss function is a modification of Huber’s robust loss for regression: $L(p, y) = \max(0, 1 - py)^2$ if $py \ge -1$, and $-4py$ otherwise; with square regularization ($\lambda = 10^{-4}$). One may select other loss functions such as SVM or logistic regression. The specific choice is not important for the purpose of this paper. The training algorithm is stochastic gradient descent, which is argued to perform well for regularized convex ERM learning formulations (Zhang, 2004). As we will show in Section 7.3, our formulation is relatively insensitive to the change in $h$ (the row-dimension of the structure matrix). We fix $h$ (for each feature group) to 50, and use it in all settings. The most time-consuming process is the training of $m$ auxiliary predictors on the unlabeled data (computing $U$ in Figure 1). Fixing the number of iterations to a constant, it runs in time linear in $m$ and the number of unlabeled instances, and takes hours in our settings that use more than 20 million unlabeled instances.

4.3 Baseline algorithms

Supervised classifier: For comparison, we train a classifier using the same features and algorithm, but without unlabeled data ($\Theta = 0$ in effect).

Co-training: We test co-training since our idea of partially-supervised auxiliary problems is motivated by co-training. Our implementation follows the original work (Blum and Mitchell, 1998). The two (or more) classifiers (with distinct feature maps) are trained with labeled data. We maintain a pool of $q$ unlabeled instances by random selection. The classifier proposes labels for the instances in this pool. We choose $s$ instances for each classifier with high confidence while preserving the class distribution observed in the initial labeled data, and add them to the labeled data. The process is then repeated. We explore $q$ = 50K, 100K; $s$ = 50, 100, 500, 1K; and commonly-used feature splits: ‘current vs. context’ and ‘current+left-context vs. current+right-context’.

Self-training: Single-view bootstrapping is sometimes called self-training. We test the basic self-training [2], which replaces multiple classifiers in the co-training procedure with a single classifier that employs all the features.

Co/self-training oracle performance: To avoid the issue of parameter selection for the co- and self-training, we report their best possible oracle performance, which is the best F-measure number among all the co- and self-training parameter settings including the choice of the number of iterations.

[2] We also tested “self-training with bagging”, which Ng and Cardie (2003) used for co-reference resolution. We omit results since it did not produce better performance than the supervised baseline.

- words, parts-of-speech (POS), character types, 4 characters at the beginning/ending in a 5-word window.
- words in a 3-syntactic chunk window.
- labels assigned to two words on the left.
- bi-grams of the current word and the label on the left.
- labels assigned to previous occurrences of the current word.
Figure 2: Feature types for named entity chunking. POS and syntactic chunk information is provided by the organizer.

5 Named Entity Chunking Experiments

We report named entity chunking performance on the CoNLL’03 shared-task [3] corpora (English and German). We choose this task because the original intention of this shared task was to test the effectiveness of semi-supervised learning methods.
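Before turning to the named entity experiments, here is a small sketch of the modified Huber loss and the stochastic gradient descent step described in Section 4.2 above. The learning rate, data, and update schedule are arbitrary assumptions; only the loss formula itself is taken from the text.

```python
import numpy as np

def modified_huber_loss(p, y):
    """Modification of Huber's robust loss used in the paper:
    max(0, 1 - p*y)^2 if p*y >= -1, and -4*p*y otherwise."""
    py = p * y
    return max(0.0, 1.0 - py) ** 2 if py >= -1.0 else -4.0 * py

def loss_grad_wrt_p(p, y):
    """Derivative of the loss above with respect to the prediction p (piecewise)."""
    py = p * y
    if py >= 1.0:
        return 0.0
    if py >= -1.0:
        return -2.0 * (1.0 - py) * y
    return -4.0 * y

def sgd_epoch(w, X, Y, lam=1e-4, lr=0.1):
    """One pass of stochastic gradient descent with square regularization."""
    for x, y in zip(X, Y):
        p = w @ x
        w -= lr * (loss_grad_wrt_p(p, y) * x + 2.0 * lam * w)
    return w

# toy usage
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5)); Y = np.sign(X[:, 0])
w = sgd_epoch(np.zeros(5), X, Y)
```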
However, it turned out that none of the top performing systems used unlabeled data. The likely reason is that the number of labeled data is relatively large (>200K), making it hard to benefit from unlabeled data. We show that our ASO-based semi-supervised learning method (hereafter, ASO-semi) can produce results appreciably better than all of the top systems, by using unlabeled data as the only additional resource. In particular, we do not use any gazetteer information, which was used in all other systems. The CoNLL corpora are annotated with four types of named entities: persons, organizations, locations, and miscellaneous names (e.g., “World Cup”). We use the official training/development/test splits. Our unlabeled data sets consist of 27 million words (English) and 35 million words (German), respectively. They were chosen from the same sources – Reuters and ECI Multilingual Text Corpus – as the provided corpora but disjoint from them. 5.1 Features Our feature representation is a slight modification of a simpler configuration (without any gazetteer) in (Zhang and Johnson, 2003), as shown in Figure 2. We use POS and syntactic chunk information provided by the organizer. 5.2 Auxiliary problems As shown in Figure 3, we experiment with auxiliary problems from Ex 3.1 and 3.2: “Predict current (or previous or next) words”; and “Predict top-2 choices 3http://cnts.uia.ac.be/conll2003/ner # of aux. Auxiliary Features used for problems labels learning aux problems 1000 previous words all but previous words 1000 current words all but current words 1000 next words all but next words 72 F 1’s top-2 choices  2 (all but left context) 72 F 2’s top-2 choices  1 (left context) 72 F 3’s top-2 choices  4 (all but right context) 72 F 4’s top-2 choices  3 (right context) Figure 3: Auxiliary problems used for named entity chunking. 3000 problems ‘mask’ words and predict them from the other features on unlabeled data. 288 problems predict classifier F i’s predictions on unlabeled data, where F i is trained with labeled data using feature map  i. There are 72 possible top-2 choices from 9 classes (beginning/inside of four types of name chunks and ‘outside’). of the classifier” using feature splits ‘left context vs. the others’ and ‘right context vs. the others’. For word-prediction problems, we only consider the instances whose current words are either nouns or adjectives since named entities mostly consist of these types. Also, we leave out all but at most 1000 binary prediction problems of each type that have the largest numbers of positive examples to ensure that auxiliary predictors can be adequately learned with a sufficiently large number of examples. The results we report are obtained by using all the problems in Figure 3 unless otherwise specified. 5.3 Named entity chunking results methods test diff. from supervised data F prec. recall F English, small (10K examples) training set ASO-semi dev. 81.25 +10.02 +7.00 +8.51 co/self oracle 73.10 +0.32 +0.39 +0.36 ASO-semi test 78.42 +9.39 +10.73 +10.10 co/self oracle 69.63 +0.60 +1.95 +1.31 English, all (204K) training examples ASO-semi dev. 93.15 +2.25 +3.00 +2.62 co/self oracle 90.64 +0.04 +0.20 +0.11 ASO-semi test 89.31 +3.20 +4.51 +3.86 co/self oracle 85.40 0.04 0.05 0.05 German, all (207K) training examples ASO-semi dev. 74.06 +7.04 +10.19 +9.22 co/self oracle 66.47 2.59 +4.39 +1.63 ASO-semi test 75.27 +4.64 +6.59 +5.88 co/self oracle 70.45 1.26 +2.59 +1.06 Figure 4: Named entity chunking results. No gazetteer. 
Fmeasure and performance improvements over the supervised baseline in precision, recall, and F. For co- and self-training (baseline), the oracle performance is shown. Figure 4 shows results in comparison with the supervised baseline in six configurations, each trained 6 with one of three sets of labeled training examples: a small English set (10K examples randomly chosen), the entire English training set (204K), and the entire German set (207K), tested on either the development set or test set. ASO-semi significantly improves both precision and recall in all the six configurations, resulting in improved F-measures over the supervised baseline by +2.62% to +10.10%. Co- and self-training, at their oracle performance, improve recall but often degrade precision; consequently, their F-measure improvements are relatively low: 0.05% to +1.63%. Comparison with top systems As shown in Figure 5, ASO-semi achieves higher performance than the top systems on both English and German data. Most of the top systems boost performance by external hand-crafted resources such as: large gazetteers4; a large amount (2 million words) of labeled data manually annotated with finer-grained named entities (FIJZ03); and rule-based post processing (KSNM03). Hence, we feel that our results, obtained by using unlabeled data as the only additional resource, are encouraging. System Eng. Ger. Additional resources ASO-semi 89.31 75.27 unlabeled data FIJZ03 88.76 72.41 gazetteers; 2M-word labeled data (English) CN03 88.31 65.67 gazetteers (English); (also very elaborated features) KSNM03 86.31 71.90 rule-based post processing Figure 5: Named entity chunking. F-measure on the test sets. Previous best results: FIJZ03 (Florian et al., 2003), CN03 (Chieu and Ng, 2003), KSNM03 (Klein et al., 2003). 6 Syntactic Chunking Experiments Next, we report syntactic chunking performance on the CoNLL’00 shared-task5 corpus. The training and test data sets consist of the Wall Street Journal corpus (WSJ) sections 15–18 (212K words) and section 20, respectively. They are annotated with eleven types of syntactic chunks such as noun phrases. We 4Whether or not gazetteers are useful depends on their coverage. A number of top-performing systems used their own gazetteers in addition to the organizer’s gazetteers and reported significant performance improvements (e.g., FIJZ03, CN03, and ZJ03). 5http://cnts.uia.ac.be/conll2000/chunking  uni- and bi-grams of words and POS in a 5-token window.  word-POS bi-grams in a 3-token window.  POS tri-grams on the left and right.  labels of the two words on the left and their bi-grams.  bi-grams of the current word and two labels on the left. Figure 6: Feature types for syntactic chunking. POS information is provided by the organizer. prec. recall F =1 supervised 93.83 93.37 93.60 ASO-semi 94.57 94.20 94.39 (+0.79) co/self oracle 93.76 93.56 93.66 (+0.06) Figure 7: Syntactic chunking results. use the WSJ articles in 1991 (15 million words) from the TREC corpus as the unlabeled data. 6.1 Features and auxiliary problems Our feature representation is a slight modification of a simpler configuration (without linguistic features) in (Zhang et al., 2002), as shown in Figure 6. We use the POS information provided by the organizer. The types of auxiliary problems are the same as in the named entity experiments. For word predictions, we exclude instances of punctuation symbols. 6.2 Syntactic chunking results As shown in Figure 7, ASO-semi improves both precision and recall over the supervised baseline. 
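As a quick sanity check of the numbers in Figure 7, F1 is the harmonic mean of precision and recall; a two-line computation reproduces the reported values up to rounding of the published precision/recall figures.

```python
def f1(prec, rec):
    return 2 * prec * rec / (prec + rec)

print(round(f1(93.83, 93.37), 2))  # -> 93.6; Figure 7 reports 93.60 for the supervised baseline
print(round(f1(94.57, 94.20), 2))  # -> 94.38; the reported 94.39 is presumably from unrounded P/R
```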
It achieves 94:39% in F-measure, which outperforms the supervised baseline by 0:79%. Co- and selftraining again slightly improve recall but slightly degrade precision at their oracle performance, which demonstrates that it is not easy to benefit from unlabeled data on this task. Comparison with the previous best systems As shown in Figure 8, ASO-semi achieves performance higher than the previous best systems. Though the space constraint precludes providing the detail, we note that ASO-semi outperforms all of the previous top systems in both precision and recall. Unlike named entity chunking, the use of external resources on this task is rare. An exception is the use of output from a grammar-based full parser as features in ZDJ02+, which our system does not use. KM01 and CM03 boost performance by classifier combinations. SP03 trains conditional random fields for NP 7 all NP description ASO-semi 94.39 94.70 +unlabeled data KM01 93.91 94.39 SVM combination CM03 93.74 94.41 perceptron in two layers SP03 – 94.38 conditional random fields ZDJ02 93.57 93.89 generalized Winnow ZDJ02+ 94.17 94.38 +full parser output Figure 8: Syntactic chunking F-measure. Comparison with previous best results: KM01 (Kudoh and Matsumoto, 2001), CM03 (Carreras and Marquez, 2003), SP03 (Sha and Pereira, 2003), ZDJ02 (Zhang et al., 2002). (noun phrases) only. ASO-semi produces higher NP chunking performance than the others. 7 Empirical Analysis 7.1 Effectiveness of auxiliary problems English named entity German named entity 68 70 72 74 76 1 F-measure (%) 85 86 87 88 89 90 dev set F-measure (%) supervised w/ "Predict (previous, current, or next) words" w/ "Predict top-2 choices" w/ "Predict words" + "Predict top-2 choices" Figure 9: Named entity F-measure produced by using individual types of auxiliary problems. Trained with the entire training sets and tested on the test sets. Figure 9 shows F-measure obtained by computing  from individual types of auxiliary problems on named entity chunking. Both types – “Predict words” and “Predict top-2 choices of the classifier” – are useful, producing significant performance improvements over the supervised baseline. The best performance is achieved when  is produced from all of the auxiliary problems. 7.2 Interpretation of  To gain insights into the information obtained from unlabeled data, we examine the  entries associated with the feature ‘current words’, computed for the English named entity task. Figure 10 shows the features associated with the entries of  with the largest values, computed from the 2000 unsupervised auxiliary problems: “Predict previous words” and “Predict next words”. For clarity, the figure only shows row# Features corresponding to Interpretation significant  entries 4 Ltd, Inc, Plc, International, organizations Ltd., Association, Group, Inc. 7 Co, Corp, Co., Company, organizations Authority, Corp., Services 9 PCT, N/A, Nil, Dec, BLN, no names Avg, Year-on-year, UNCH 11 New, France, European, San, locations North, Japan, Asian, India 15 Peter, Sir, Charles, Jose, Paul, persons Lee, Alan, Dan, John, James 26 June, May, July, Jan, March, months August, September, April Figure 10: Interpretation of  computed from wordprediction (unsupervised) problems for named entity chunking. words beginning with upper-case letters (i.e., likely to be names in English). Our method captures the spirit of predictive word-clustering but is more general and effective on our tasks. 
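A Figure-10-style listing can be produced by inspecting each row of the learned structure matrix and printing the features with the largest (absolute) entries. The helper below is a hypothetical illustration, not the authors' code, and the matrix and feature names are toy stand-ins.

```python
import numpy as np

def significant_features(theta, feature_names, top_k=8):
    """For each row of theta, list the features with the largest entries,
    mimicking the style of Figure 10 (the matrix and names here are toy data)."""
    rows = []
    for r, row in enumerate(theta):
        idx = np.argsort(-np.abs(row))[:top_k]
        rows.append((r, [feature_names[i] for i in idx]))
    return rows

# toy usage
rng = np.random.default_rng(3)
theta = rng.normal(size=(5, 12))
names = [f"word={w}" for w in
         ["Ltd", "Inc", "Co", "Corp", "June", "May", "Peter", "John",
          "New", "France", "PCT", "Nil"]]
for r, feats in significant_features(theta, names, top_k=3):
    print(r, feats)
```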
It is possible to develop a general theory to show that the auxiliary problems we use are helpful under reasonable conditions. The intuition is as follows. Suppose we split the features into two parts  1 and  2 and predict  1 based on  2. Suppose features in  1 are correlated to the class labels (but not necessarily correlated among themselves). Then, the auxiliary prediction problems are related to the target task, and thus can reveal useful structures of  2. Under some conditions, it can be shown that features in  2 with similar predictive performance tend to map to similar low-dimensional vectors through . This effect can be empirically observed in Figure 10 and will be formally shown elsewhere. 7.3 Effect of the  dimension 85 87 89 20 40 60 80 100 dimension F-measure (%) ASO-semi supervised Figure 11: F-measure in relation to the row-dimension of . English named entity chunking, test set. Recall that throughout the experiments, we fix the row-dimension of  (for each feature group) to 50. Figure 11 plots F-measure in relation to the rowdimension of , which shows that the method is relatively insensitive to the change of this parameter, at least in the range which we consider. 8 8 Conclusion We presented a novel semi-supervised learning method that learns the most predictive lowdimensional feature projection from unlabeled data using the structural learning algorithm SVD-ASO. On CoNLL’00 syntactic chunking and CoNLL’03 named entity chunking (English and German), the method exceeds the previous best systems (including those which rely on hand-crafted resources) by using unlabeled data as the only additional resource. The key idea is to create auxiliary problems automatically from unlabeled data so that predictive structures can be learned from that data. In practice, it is desirable to create as many auxiliary problems as possible, as long as there is some reason to believe in their relevancy to the task. This is because the risk is relatively minor while the potential gain from relevant problems is large. Moreover, the auxiliary problems used in our experiments are merely possible examples. One advantage of our approach is that one may design a variety of auxiliary problems to learn various aspects of the target problem from unlabeled data. Structural learning provides a framework for carrying out possible new ideas. Acknowledgments Part of the work was supported by ARDA under the NIMD program PNWD-SW-6059. References Rie Kubota Ando and Tong Zhang. 2004. A framework for learning predictive structures from multiple tasks and unlabeled data. Technical report, IBM. RC23462. Rie Kubota Ando. 2004. Semantic lexicon construction: Learning from unlabeled data via spectral analysis. In Proceedings of CoNLL-2004. Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In proceedings of COLT-98. Xavier Carreras and Lluis Marquez. 2003. Phrase recognition by filtering and ranking with perceptrons. In Proceedings of RANLP-2003. Hai Leong Chieu and Hwee Tou Ng. 2003. Named entity recognition with a maximum entropy approach. In Proceedings CoNLL-2003, pages 160–163. Michael Collins and Yoram Singer. 1999. Unsupervised models for named entity classification. In Proceedings of EMNLP/VLC’99. Radu Florian, Abe Ittycheriah, Hongyan Jing, and Tong Zhang. 2003. Named entity recognition through classifier combination. In Proceedings CoNLL-2003, pages 168–171. Gene H. Golub and Charles F. Van Loan. 1996. Matrix computations third edition. 
Dan Klein, Joseph Smarr, Huy Nguyen, and Christopher D. Manning. 2003. Named entity recognition with character-level models. In Proceedings CoNLL2003, pages 188–191. Taku Kudoh and Yuji Matsumoto. 2001. Chunking with support vector machines. In Proceedings of NAACL 2001. Bernard Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20(2):155–171. Scott Miller, Jethran Guinness, and Alex Zamanian. 2004. Name tagging with word clusters and discriminative training. In Proceedings of HLT-NAACL-2004. Vincent Ng and Claire Cardie. 2003. Weakly supervised natural language learning without redundant views. In Proceedings of HLT-NAACL-2003. David Pierce and Claire Cardie. 2001. Limitations of co-training for natural language learning from large datasets. In Proceedings of EMNLP-2001. Ellen Riloff and Rosie Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of AAAI-99. Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In Proceedings of HLT-NAACL’03. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of ACL-95. Tong Zhang and David E. Johnson. 2003. A robust risk minimization based named entity recognition system. In Proceedings CoNLL-2003, pages 204–207. Tong Zhang, Fred Damerau, and David E. Johnson. 2002. Text chunking based on a generalization of Winnow. Journal of Machine Learning Research, 2:615– 637. Tong Zhang. 2004. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In ICML 04, pages 919–926. 9
2005
1
Proceedings of the 43rd Annual Meeting of the ACL, pages 75–82, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Probabilistic CFG with latent annotations Takuya Matsuzaki Yusuke Miyao Jun’ichi Tsujii  Graduate School of Information Science and Technology, University of Tokyo Hongo 7-3-1, Bunkyo-ku, Tokyo 113-0033  CREST, JST(Japan Science and Technology Agency) Honcho 4-1-8, Kawaguchi-shi, Saitama 332-0012  matuzaki, yusuke, tsujii  @is.s.u-tokyo.ac.jp Abstract This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Finegrained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F  , sentences  40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection. 1 Introduction Variants of PCFGs form the basis of several broadcoverage and high-precision parsers (Collins, 1999; Charniak, 1999; Klein and Manning, 2003). In those parsers, the strong conditional independence assumption made in vanilla treebank PCFGs is weakened by annotating non-terminal symbols with many ‘features’ (Goodman, 1997; Johnson, 1998). Examples of such features are head words of constituents, labels of ancestor and sibling nodes, and subcategorization frames of lexical heads. Effective features and their good combinations are normally explored using trial-and-error. This paper defines a generative model of parse trees that we call PCFG with latent annotations (PCFG-LA). This model is an extension of PCFG models in which non-terminal symbols are annotated with latent variables. The latent variables work just like the features attached to non-terminal symbols. A fine-grained PCFG is automatically induced from parsed corpora by training a PCFG-LA model using an EM-algorithm, which replaces the manual feature selection used in previous research. The main focus of this paper is to examine the effectiveness of the automatically trained models in parsing. Because exact inference with a PCFG-LA, i.e., selection of the most probable parse, is NP-hard, we are forced to use some approximation of it. We empirically compared three different approximation methods. One of the three methods gives a performance of 86.6% (F  , sentences  40 words) on the standard test set of the Penn WSJ corpus. Utsuro et al. (1996) proposed a method that automatically selects a proper level of generalization of non-terminal symbols of a PCFG, but they did not report the results of parsing with the obtained PCFG. Henderson’s parsing model (Henderson, 2003) has a similar motivation as ours in that a derivation history of a parse tree is compactly represented by induced hidden variables (hidden layer activation of a neural network), although the details of his approach is quite different from ours. 2 Probabilistic model PCFG-LA is a generative probabilistic model of parse trees. In this model, an observed parse tree is considered as an incomplete data, and the corre75   :  :       the    cat !" # !$ % grinned   the  cat !  ! grinned Figure 1: Tree with latent annotations &(' )+* (complete data) and observed tree & (incomplete data). 
sponding complete data is a tree with latent annotations. Each non-terminal node in the complete data is labeled with a complete symbol of the form ,-' ./* , where , is the non-terminal symbol of the corresponding node in the observed tree and . is a latent annotation symbol, which is an element of a fixed set 0 . A complete/incomplete tree pair of the sentence, “the cat grinned,” is shown in Figure 2. The complete parse tree, &(' )+* (left), is generated through a process just like the one in ordinary PCFGs, but the non-terminal symbols in the CFG rules are annotated with latent symbols, )2143. 65 .87 5:9:9:9<; . Thus, the probability of the complete tree ( &(' )+* ) is = 3>&-' )?* ; 1A@"3CB$' .DE* ;FHG 3CB ' .IE*KJML = ' . 7 *N = ' .POK* ; FQG 3RL = ' .P7K*KJMST&-' .8UK*VLW' .YXK* ; FQG 3RS?&(' .PUK*KJ[Z]\8^ ;F_G 3RL`' .8X<*KJMa<bcZ ; FQG 3CN = ' . O *KJdNe' .8fg* ;FHG 3CNe' .PfK*KJihcjkClmlD^6n ;g5 where @"3CB ' .  * ; denotes the probability of an occurrence of the symbol B$' .  * at a root node and G 3j ; denotes the probability of a CFG rule j . The probability of the observed tree = 3>& ; is obtained by summing = 3>&(' )+* ; for all the assignments to latent annotation symbols, ) : = 3>& ; 1po q Ers o q trs_u:u:u o q % rs = 3>&(' )+* ;g9 (1) Using dynamic programming, the theoretical bound of the time complexity of the summation in Eq. 1 is reduced to be proportional to the number of non-terminal nodes in a parse tree. However, the calculation at node l still has a cost that exponentially grows with the number of l ’s daughters because we must sum up the probabilities of v 0wvyx{z  combinations of latent annotation symbols for a node with n daughters. We thus took a kind of transformation/detransformation approach, in which a tree is binarized before parameter estimation and restored to its original form after parsing. The details of the binarization are explained in Section 4. Using syntactically annotated corpora as training data, we can estimate the parameters of a PCFGLA model using an EM algorithm. The algorithm is a special variant of the inside-outside algorithm of Pereira and Schabes (1992). Several recent work also use similar estimation algorithm as ours, i.e, inside-outside re-estimation on parse trees (Chiang and Bikel, 2002; Shen, 2004). The rest of this section precisely defines PCFGLA models and briefly explains the estimation algorithm. The derivation of the estimation algorithm is largely omitted; see Pereira and Schabes (1992) for details. 2.1 Model definition We define a PCFG-LA | as a tuple | 1 } L~t 5 L€ 5 0 5t-5 @ 5EGƒ‚ , where L ~{…„ a set of observable non-terminal symbols L€ „ a set of terminal symbols 0 „ a set of latent annotation symbols  „ a set of observable CFG rules @"3R,(' .Y* ; „ the probability of the occurrence of a complete symbol ,(' .Y* at a root node G 3j ; „ the probability of a rule j‡†  ' 0ˆ* 9 We use , 5t‰e5:9:9:9 for non-terminal symbols in L~t ; Š :5 Š 7 5:9:9:9 for terminal symbols in L( ; and . 5E‹P5:9:9:9 for latent annotation symbols in 0 . L~t<' 0ˆ* denotes the set of complete non-terminal symbols, i.e., L(~{<' 0ˆ*I1ŒŽ,(' .Y*v,d†L€~{ 5 .†0‘ . Note that latent annotation symbols are not attached to terminal symbols. In the above definition,  is a set of CFG rules of observable (i.e., not annotated) symbols. For simplicity of discussion, we assume that  is a CNF grammar, but extending to the general case is straightforward. 
 ' 0ˆ* is the set of CFG rules of complete symbols, such as N+' ./*J grinned or B$' .Y*KJML = ' ‹ *N = ' ’* . More precisely,  ' 0ˆ*P1iŒ…3R,(' .Y*mJ“Š ; v”3R,HJ[Š ; † -• .–†ˆ0‘˜— Œ…3R,-' ./*KJ ‰ ' ‹ *™T' ’* ; v”3R,šJ ‰ ™ ; † -• . 5E‹P5 ’?†ˆ0‘ 9 76 We assume that non-terminal nodes in a parse tree & are indexed by integers k›1œ 5:9:9:9ž5EŸ , starting from the root node. A complete tree is denoted by &(' )+* , where ) 1 3. 65:9:9:9Ž5 .Y¡ ; †d0 ¡ is a vector of latent annotation symbols and .m¢ is the latent annotation symbol attached to the k -th non-terminal node. We do not assume any structured parametrizations in G and @ ; that is, each G 3j ; 3j£†  ' 0_* ; and @"3R,-' ./* ; 3R,-' ./*ƒ†_L ~{ ' 0ˆ* ; is itself a parameter to be tuned. Therefore, an annotation symbol, say, . , generally does not express any commonalities among the complete non-terminals annotated by . , such as ,-' ./* 5t‰ ' ./* 5 ^6Za . The probability of a complete parse tree &-' )?* is defined as = 3>&(' )+* ; 1A@"3R,  ' .  * ; ¤ ¥ r¦¨§…© ªY« G 3j ;g5 (2) where ,  ' .  * is the label of the root node of &(' )+* and S(¬D­ ®Q¯ denotes the multiset of annotated CFG rules used in the generation of &(' )+* . We have the probability of an observable tree & by marginalizing out the latent annotation symbols in &(' )+* : = 3>& ; 1 o ° rs ± @"3R,€ž' .IE* ; ¤ ¥ r¦¨§…© ª/« G 3j ;g5 (3) where Ÿ is the number of non-terminal nodes in & . 2.2 Forward-backward probability The sum in Eq. 3 can be calculated using a dynamic programming algorithm analogous to the forward algorithm for HMMs. For a sentence Š  Š 7 9:9:9 Š$² and its parse tree & , backward probabilities ³ ¢ ¬ 3. ; are recursively computed for the k -th non-terminal node and for each .A†´0 . In the definition below, L ¢ †›L~t denotes the non-terminal label of the k -th node. µ If node k is a pre-terminal node above a terminal symbol ж , then ³ ¢ ¬ 3. ; 1 G 3RL ¢ ' ./*·J“Š ¶ ; . µ Otherwise, let ¸ and ¹ be the two daughter nodes of k . Then ³ ¢ ¬ 3. ; 1 o q{º6» q޼ rs G 3RL ¢ ' ./*KJL˜¶…' .¶K*VL(½' .P½6* ; F ³ ¶ ¬ 3.¶ ; ³ ½ ¬ 3.P½ ;g9 Using backward probabilities, = 3>& ; is calculated as = 3>& ; 1¿¾ q  rs @"3RL  ' .  * ; ³  ¬ 3. K; . We define forward probabilities À ¢ ¬ 3. ; , which are used in the estimation described below, as follows: µ If node k is the root node (i.e., k = 1), then À ¢ ¬ 3. ; 1A@"3RL ¢ ' .Y* ; . µ If node k has a right sibling ¹ , let ¸ be the mother node of k . Then À ¢ ¬ 3. ; 1 o q{º<» q޼ rs G 3RL¶c' .Á¶K*KJML ¢ ' .Y*VL½Á' .8½Ž* ; F À ¶ ¬ 3.¶ ; ³ ½ ¬ 3.P½ ;g9 µ If node k has a left sibling, À ¢ ¬ 3. ; is defined analogously. 2.3 Estimation We now derive the EM algorithm for PCFG-LA, which estimates the parameters €13 G"5 @ ; . Let Ã[1 Œ:& 65 &·7 5:9:9:9 ‘ be the training set of parse trees and L ¢  5:9:9:9ž5 L ¢ ¡Ä be the labels of non-terminal nodes in &·¢ . Like the derivations of the EM algorithms for other latent variable models, the update formulas for the parameters, which update the parameters from  to ÂŘ1Æ3 G Å 5 @·Å ; , are obtained by constrained optimization of ÇT3R Šv  ; , which is defined as ÇT3R Šv  ; 1 o ¬ Ä rÉÈ o ® Ä rs ± Ä =ƒÊ 3R) ¢ v & ¢;ËÍÌÎ =ƒÊ]Ï 3>& ¢ ' ) ¢ * ;g5 where =ƒÊ and =ƒÊ Ï denote probabilities under  and  Š, and = 3R)›v & ; is the conditional probability of latent annotation symbols given an observed tree & , i.e., = 3R)v & ; 1 = 3>&-' )?* ;EÐ = 3>& ; . Using the Lagrange multiplier method and re-arranging the results using the backward and forward probabilities, we obtain the update formulas in Figure 2. 
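A minimal sketch of the backward-probability recursion of Section 2.2 and of the marginal tree probability summed over latent annotations, assuming a binarized tree and toy rule-probability tables. The tree representation and table keys below are my own, not the paper's data structures.

```python
from itertools import product

# A binarized parse tree node is either ("pre", symbol, word) or
# ("bin", symbol, left_subtree, right_subtree).  Latent annotations are {0,...,L-1}.

def backward(node, beta_lex, beta_bin, L):
    """b[x] for every latent annotation x of this node.
    beta_lex[(A, x, word)]       = beta(A[x] -> word)
    beta_bin[(A, x, B, y, C, z)] = beta(A[x] -> B[y] C[z])"""
    if node[0] == "pre":
        _, sym, word = node
        return [beta_lex.get((sym, x, word), 0.0) for x in range(L)]
    _, sym, left, right = node
    bl = backward(left, beta_lex, beta_bin, L)
    br = backward(right, beta_lex, beta_bin, L)
    lsym, rsym = left[1], right[1]
    b = [0.0] * L
    for x in range(L):
        for y, z in product(range(L), range(L)):
            b[x] += beta_bin.get((sym, x, lsym, y, rsym, z), 0.0) * bl[y] * br[z]
    return b

def tree_probability(root, pi, beta_lex, beta_bin, L):
    """P(T) = sum_x pi(root_symbol[x]) * b_root[x], marginalizing the annotations."""
    b = backward(root, beta_lex, beta_bin, L)
    return sum(pi.get((root[1], x), 0.0) * b[x] for x in range(L))

# toy usage: with L = 1 the model reduces to an ordinary PCFG
tree = ("bin", "S", ("pre", "NP", "he"), ("pre", "VP", "grinned"))
beta_lex = {("NP", 0, "he"): 1.0, ("VP", 0, "grinned"): 1.0}
beta_bin = {("S", 0, "NP", 0, "VP", 0): 1.0}
print(tree_probability(tree, {("S", 0): 1.0}, beta_lex, beta_bin, L=1))  # 1.0
```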
3 Parsing with PCFG-LA In theory, we can use PCFG-LAs to parse a given sentence Š by selecting the most probable parse: &·ÑÓÒ]ÔÕ¨1AÖ× ÎØ ÖÉÙ ¬ rÚ¨ÛÝÜmÞ = 3>&-v Š ; 1AÖ× ÎØ ÖÉÙ ¬ rÚ¨ÛßÜ·Þ = 3>& ;g5 (4) where àT3Š ; denotes the set of possible parses for Š under the observable grammar  . While the optimization problem in Eq. 4 can be efficiently solved 77 á Ïâäã  É æå›ç  è6 ßé êg ßë”ì_íQî  ï © ð]«Dñ §tòCógô  â  Ä ëæî <õ ñ ö ºæ÷ ¼ ÷ øyù ó Covered ö § ò ÷ ï…ú"û…ü ù ý º § ò â ë á âäã  É æå›çþ è: ßé êg ßëÿ ¼ § ò â èÉëÿ ø § ò â ê<ë á Ïâäã  É æåƒëÁì_íQî  ï © ð]«Dñ § ò ógô  â  Ä ëæî  ñ º ó Covered ö §{ò ÷ ïcú ù ý º § ò â ë á âäã  É æåƒë  Ï>âäã  ž ë/ì  î  ñ § ò ó Root ö ô ÷ ï ù  â  Ä ëæî   âäã  É ßëÿ  §tò â ë í ï © ð]« ì ñ § ò óKô  â  Ä ë î  ñ º ó Labeled ö § ò ÷ ï ù ý º § ò â ëÁÿ º § ò â ë Covered â  Ä ã åiçéQë/ì â   ëK Ä º å Ä ¼  Ä ø  § ò â  Ä º  Ä ¼  Ä ø ë/ì âäã ç éQë Covered â  Ä ã åƒë/ì { Ä º å  § ò   Ä º ì ã  Labeled â  Ä ã ë/ì K Ä º ì ã  Root â  ã ë/ì  Ä   the root of  Ä is labeled with ã  Figure 2: Parameter update formulas. for PCFGs using dynamic programming algorithms, the sum-of-products form of = 3>& ; in PCFG-LA models (see Eq. 2 and Eq. 3) makes it difficult to apply such techniques to solve Eq. 4. Actually, the optimization problem in Eq. 4 is NPhard for general PCFG-LA models. Although we omit the details, we can prove the NP-hardness by observing that a stochastic tree substitution grammar (STSG) can be represented by a PCFG-LA model in a similar way to one described by Goodman (1996a), and then using the NP-hardness of STSG parsing (Sima´an, 2002). The difficulty of the exact optimization in Eq. 4 forces us to use some approximations of it. The rest of this section describes three different approximations, which are empirically compared in the next section. The first method simply limits the number of candidate parse trees compared in Eq. 4; we first create N-best parses using a PCFG and then, within the N-best parses, select the one with the highest probability in terms of the PCFG-LA. The other two methods are a little more complicated, and we explain them in separate subsections. 3.1 Approximation by Viterbi complete trees The second approximation method selects the best complete tree & Å ' ) Å * , that is, & Å ' ) Å *·1 Ö× ÎØ ÖÉÙ ¬ rÚ¨ÛÝÜmÞ » ® rs ª  = 3>&(' )+* ;g9 (5) We call & ÅÓ' )+ÅÝ* a Viterbi complete tree. Such a tree can be obtained in T3tv Їv U ; time by regarding the PCFG-LA as a PCFG with annotated symbols.1 The observable part of the Viterbi complete tree &þÅÓ' )eÅß* (i.e., & Å ) does not necessarily coincide with the best observable tree &¨ÑÓÒ]ÔCÕ in Eq. 4. However, if &mÑÓÒ]ÔÕ has some ‘dominant’ assignment  to its latent annotation symbols such that = 3>&mÑÓÒ]ÔCÕt' ˆ* ;  = 3>&mÑÓÒÔÕ ; , then = 3>& Å ;! = 3>&mÑÓÒÔÕ ; because = 3>&mÑÓÒ]ÔCÕt' _* ;  = 3>&þÅÓ' )eÅÝ* ; and = 3>& Å ' ) Å * ;  = 3>& Å ; , and thus & Å and &mÑÓÒ]ÔÕ are almost equally ‘good’ in terms of their marginal probabilities. 3.2 Viterbi parse in approximate distribution In the third method, we approximate the true distribution = 3>&-v Š ; by a cruder distribution ÇT3>&‡v Š ; , and then find the tree with the highest Ç?3>&-v Š ; in polynomial time. We first create a packed representation of àT3Š ; for a given sentence Š .2 Then, the approximate distribution ÇT3>&‡v Š ; is created using the packed forest, and the parameters in ÇT3>&-v Š ; are adjusted so that ÇT3>&-v Š ; approximates = 3>&-v Š ; as closely as possible. 
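Before the third method is spelled out, the first two approximations can be sketched as follows. Here nbest_parse, pcfgla_prob, and annotated_pcfg_parse are placeholders for a PCFG N-best parser, the marginal tree probability (Eq. 3), and a Viterbi parser over annotated symbols, respectively; the complete-tree representation is a hypothetical one, not the paper's.

```python
def rerank_nbest(sentence, nbest_parse, pcfgla_prob, n=1000):
    """Method 1: create N-best parses with a plain PCFG, then keep the one
    with the highest PCFG-LA probability P(T)."""
    return max(nbest_parse(sentence, n), key=pcfgla_prob)

def strip_annotations(node):
    """Remove latent annotations from a complete tree represented as
    ("pre", (sym, x), word) or ("bin", (sym, x), left, right)."""
    if node[0] == "pre":
        return ("pre", node[1][0], node[2])
    return ("bin", node[1][0], strip_annotations(node[2]), strip_annotations(node[3]))

def viterbi_complete_tree(sentence, annotated_pcfg_parse):
    """Method 2: treat the PCFG-LA as an ordinary PCFG over annotated symbols
    A[x], find the Viterbi complete tree, and return its observable part."""
    complete_tree = annotated_pcfg_parse(sentence)   # argmax over T[X] of P(T[X])
    return strip_annotations(complete_tree)
```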
The form of ÇT3>&‡v Š ; is that of a product of the parameters, just like the form of a PCFG model, and it enables us to use a Viterbi algorithm to select the tree with the highest ÇT3>&‡v Š ; . A packed forest is defined as a tuple } " 5$#ɂ . The first component, " , is a multiset of chart items of the form 3R, 5 ³ 5 ^ ; . A chart item 3R, 5 ³ 5 ^ ; † " indicates that there exists a parse tree in à‡3Š ; that contains a constituent with the non-terminal label , that spans 1For efficiency, we did not actually parse sentences with % &˜ but selected a Viterbi complete tree from a packed representation of candidate parses in the experiments in Section 4. 2In practice, fully constructing a packed representation of ' ⠃ë has an unrealistically high cost for most input sentences. Alternatively, we can use a packed representation of a subset of ' ⠃ë , which can be obtained by parsing with beam thresholding, for instance. An approximate distribution ( â ) ƒë on such subsets can be derived in almost the same way as one for the full ' ⠃ë , but the conditional distribution,  â * ë , is renormalized so that the total mass for the subset sums to 1. 78 ã ç é   D +  ã é  ç  D + I ,ì .   # %   ì âäã /0 1 ë  ì â ç /0 2 ë  ì â ç 23 1 ë , Dì â é /4 / ë #Dì â  526 2 ë % ì â + 17 1 ë 8 â  ëYì â  %të â  ë 8 â EëYì â #Eë 8 â të/ì â # % ë 8 â të/ì 9 8 â #]ëYì D. 8 â % ë/ì I$ Figure 3: Two parse trees and packed representation of them. from the ³ -th to ^ -th word in Š . The second component, # , is a function on " that represents dominance relations among the chart items in " ; # 3k ; is a set of possible daughters of k if k is not a pre-terminal node, and # 3k ; 12Œ6Šþ½…‘ if k is a pre-terminal node above Š ½ . Two parse trees for a sentence Š 1 Š  Š 7KŠ U and a packed representation of them are shown in Figure 3. We require that each tree &M†à‡3Š ; has a unique representation as a set of connected chart items in " . A packed representation satisfying the uniqueness condition is created using the CKY algorithm with the observable grammar  , for instance. The approximate distribution, ÇT3>&-v Š ; , is defined as a PCFG, whose CFG rules  Ü is defined as  Ü 1 Œ…3kJ : ; vIk-† " • : † # 3k ; ‘ . We use ;”3j ; to denote the rule probability of rule j †  Ü and ; ¥ 3k ; to denote the probability with which k-† " is generated as a root node. We define Ç?3>&-v Š ; as ÇT3>&-v Š ; 1<; ¥ 3k K; ¡ ¤ ½6=  ;”3k½ J>:½ ;g5 where the set of connected items Œ6k 65:9:9:9Ž5 kC¡€‘@? " is the unique representation of & . To measure the closeness of approximation by ÇT3>&‡v Š ; , we use the ‘inclusive’ KL-divergence, AB 3 = vÍv Ç ; (Frey et al., 2000): AB 3 = vÍv Ç ; 1 o ¬ rÚ¨ÛÝÜmÞ = 3>&‡v Š ;ËÍÌÎ = 3>&-v Š ; ÇT3>&-v Š ; 9 Minimizing AB 3 = vÍv Ç ; under the normalization constraints on ; ¥ and ; yields closed form solutions for ; ¥ and ; , as shown in Figure 4. = in and = out in Figure 4 are similar to ordinary inside/outside probabilities. We define = in as follows: µ If k›1 3R, 5 ¹ 5 ¹ ; † " is a pre-terminal node above Šþ½ , then = in 3kg' .Y* ; 1 G 3R,-' ./*IJ“Š ½ ; . µ Otherwise, = in 3kg' .Y* ; 1 o ¶ ½ rDC{Û ¢ Þ o EŽ» F rs G 3R,-' ./*KJ ‰ ¶' ‹ *™½' ’* ; F = in 3͸8' ‹ * ; = in 3Ó¹m' ’* ;g5 where ‰ ¶ and ™½ denote non-terminal symbols of chart items ¸ and ¹ . The outside probability, = out, is calculated using = in and PCFG-LA parameters along the packed structure, like the outside probabilities for PCFGs. 
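Once the rule parameters of the approximate distribution and the root probabilities have been computed, selecting the tree that maximizes Q(T|w) is a Viterbi search over the packed forest. The sketch below assumes a simple representation of chart items as hashable ids with callables for the daughter function and the q parameters; it is an illustration, not the authors' implementation (pre-terminal items are treated as leaves with score 1, so lexical scores are assumed to be folded into q_rule).

```python
def viterbi_forest(items, daughters, q_rule, q_root):
    """Viterbi search over a packed forest.
    items     : iterable of chart items (hashable ids)
    daughters : item -> list of possible daughter tuples (empty for leaves)
    q_rule    : (item, daughter_tuple) -> probability q(i -> d)
    q_root    : item -> probability of i being generated as a root
    Returns (best_prob, best_tree), best_tree = (item, chosen daughter subtrees...)."""
    memo = {}

    def best(i):
        if i in memo:
            return memo[i]
        if not daughters(i):                      # leaf / pre-terminal item
            memo[i] = (1.0, (i,))
        else:
            cands = []
            for d in daughters(i):
                p = q_rule(i, d)
                subtrees = []
                for j in d:
                    pj, tj = best(j)
                    p *= pj
                    subtrees.append(tj)
                cands.append((p, (i, *subtrees)))
            memo[i] = max(cands, key=lambda c: c[0])
        return memo[i]

    return max(((q_root(i) * best(i)[0], best(i)[1]) for i in items if q_root(i) > 0),
               key=lambda c: c[0])
```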
Once we have computed ;”3kmJG: ; and ; ¥ 3k ; , the parse tree & that maximizes ÇT3>&-v Š ; is found using a Viterbi algorithm, as in PCFG parsing. Several parsing algorithms that also use insideoutside calculation on packed chart have been proposed (Goodman, 1996b; Sima´an, 2003; Clark and Curran, 2004). Those algorithms optimize some evaluation metric of parse trees other than the posterior probability = 3>&-v Š ; , e.g., (expected) labeled constituent recall or (expected) recall rate of dependency relations contained in a parse. It is in contrast with our approach where (approximated) posterior probability is optimized. 4 Experiments We conducted four sets of experiments. In the first set of experiments, the degree of dependency of trained models on initialization was examined because EM-style algorithms yield different results with different initial values of parameters. In the second set of experiments, we examined the relationship between model types and their parsing performances. In the third set of experiments, we compared the three parsing methods described in the previous section. Finally, we show the result of a parsing experiment using the standard test set. We used sections 2 through 20 of the Penn WSJ corpus as training data and section 21 as heldout data. The heldout data was used for early stopping; i.e., the estimation was stopped when the rate 79 H If   , is not a pre-terminal node, for each I ì    8 â ë , let ã ç , and é be non-terminal symbols of   , and  . Then, J â å I ë”ìLK ðKóNM KPO ó6M KRQ ó6M  out â   É ßë á âäã  ž æåç  è: é êg ßëR in â   è6 ßëR in â   ê{ ë K ðKó6M  out â t ž ëR in â E É ßë S H If  , is a pre-terminal node above word ¼ , then J â å¼ ë/ì / . H If  , is a root node, let ã be the non-terminal symbol of - . Then J.T â ë/ì /  â ë ñ ðKó6M  âäã  É ßëR in â  É ßë . Figure 4: Optimal parameters of approximate distribution Ç . U VXW VZY [ \ZW \]Y Figure 5: Original subtree. of increase in the likelihood of the heldout data became lower than a certain threshold. Section 22 was used as test data in all parsing experiments except in the final one, in which section 23 was used. We stripped off all function tags and eliminated empty nodes in the training and heldout data, but any other pre-processing, such as comma raising or base-NP marking (Collins, 1999), was not done except for binarizations. 4.1 Dependency on initial values To see the degree of dependency of trained models on initializations, four instances of the same model were trained with different initial values of parameters.3 The model used in this experiment was created by CENTER-PARENT binarization and v 0wv was set to 16. Table 1 lists training/heldout data loglikelihood per sentence (LL) for the four instances and their parsing performances on the test set (section 22). The parsing performances were obtained using the approximate distribution method in Section 3.2. Different initial values were shown to affect the results of training to some extent (Table 1). 3The initial value for an annotated rule probability, á âäã  É å ç  è6 ßé êg ßë , was created by randomly multiplying the maximum likelihood estimation of the corresponding PCFG rule probability,  âäã åiçéQë , as follows: á âäã  É åiç  è: é êg ßëYì_íQî  ï^._  âäã åiçéQë where ` is a random number that is uniformly distributed in badcfe4g 17 cfe0g 1 and í ï is a normalization constant. 
1 2 3 4 average hji training LL -115 -114 -115 -114 -114 h 0.41 heldout LL -114 -115 -115 -114 -114 h 0.29 LR 86.7 86.3 86.3 87.0 86.6 h 0.27 LP 86.2 85.6 85.5 86.6 86.0 h 0.48 Table 1: Dependency on initial values. CENTER-PARENT CENTER-HEAD U V W k UmlDn VZY k Umopn k Umopn [ \ZW \qY U V W k [ lrn VZY k [ o]n k [ opn [ \ZW \qY LEFT RIGHT U VXW k Umn VZY k Umn [ k Umn \ W \ Y U k Umn k Umn k Umn V W V Y [ \ZW \qY Figure 6: Four types of binarization (H: head daughter). 4.2 Model types and parsing performance We compared four types of binarization. The original form is depicted in Figure 5 and the results are shown in Figure 6. In the first two methods, called CENTER-PARENT and CENTER-HEAD, the headfinding rules of Collins (1999) were used. We obtained an observable grammar  for each model by reading off grammar rules from the binarized training trees. For each binarization method, PCFG-LA models with different numbers of latent annotation symbols, v 0wv1Mœ 5$s5ut”5$v , and œ3w , were trained. 80 72 74 76 78 80 82 84 86 10000 100000 1e+06 1e+07 1e+08 F1 # of parameters CENTER-PARENT CENTER-HEAD RIGHT LEFT Figure 7: Model size vs. parsing performance. The relationships between the number of parameters in the models and their parsing performances are shown in Figure 7. Note that models created using different binarization methods have different numbers of parameters for the same v 0wv . The parsing performances were measured using F  scores of the parse trees that were obtained by re-ranking of 1000-best parses by a PCFG. We can see that the parsing performance gets better as the model size increases. We can also see that models of roughly the same size yield similar performances regardless of the binarization scheme used for them, except the models created using LEFT binarization with small numbers of parameters ( v 0Wvc1 œ and s ). Taking into account the dependency on initial values at the level shown in the previous experiment, we cannot say that any single model is superior to the other models when the sizes of the models are large enough. The results shown in Figure 7 suggest that we could further improve parsing performance by increasing the model size. However, both the memory size and the training time are more than linear in v 0wv , and the training time for the largest ( v 0wv1[œ3w ) models was about 15 hours for the models created using CENTER-PARENT, CENTER-HEAD, and LEFT and about 20 hours for the model created using RIGHT. To deal with larger (e.g., v 0wv = 32 or 64) models, we therefore need to use a model search that reduces the number of parameters while maintaining the model’s performance, and an approximation during training to reduce the training time. 84 84.5 85 85.5 86 86.5 0 1 2 3 4 5 6 7 8 9 10 F1 parsing time (sec) N-best re-ranking Viterbi complete tree approximate distribution Figure 8: Comparison of parsing methods. 4.3 Comparison of parsing methods The relationships between the average parse time and parsing performance using the three parsing methods described in Section 3 are shown in Figure 8. A model created using CENTER-PARENT with v 0Wvc1[œ3w was used throughout this experiment. The data points were made by varying configurable parameters of each method, which control the number of candidate parses. To create the candidate parses, we first parsed input sentences using a PCFG4, using beam thresholding with beam width x . The data points on a line in the figure were created by varying x with other parameters fixed. 
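Figure 6 is not legible in this extraction, so the following sketch shows only one plausible reading of the RIGHT scheme — peeling off the leftmost daughter and folding the rest into a right-branching chain of bar categories — purely to illustrate what binarizing n-ary treebank trees involves; the actual schemes of the paper may differ in detail.

```python
def binarize_right(node):
    """Right-binarize an n-ary tree given as (label, [children]) or (label, word).
    Intermediate nodes get a bar label such as 'A~'.  This is only one plausible
    reading of the RIGHT scheme in Figure 6, shown for illustration."""
    label, rest = node
    if isinstance(rest, str):                      # pre-terminal node
        return node
    children = [binarize_right(c) for c in rest]
    if len(children) <= 2:
        return (label, children)
    # fold children[1:] into a right-branching chain under a bar category
    tail = children[-1]
    for c in reversed(children[1:-1]):
        tail = (label + "~", [c, tail])
    return (label, [children[0], tail])

# toy usage
tree = ("A", [("B1", "b1"), ("B2", "b2"), ("B3", "b3"), ("B4", "b4")])
print(binarize_right(tree))
```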
The first method re-ranked the L -best parses enumerated from the chart after the PCFG parsing. The two lines for the first method in the figure correspond to L = 100 and L = 300. In the second and the third methods, we removed all the dominance relations among chart items that did not contribute to any parses whose PCFG-scores were higher than y = max, where = max is the PCFG-score of the best parse in the chart. The parses remaining in the chart were the candidate parses for the second and the third methods. The different lines for the second and the third methods correspond to different values of y . The third method outperforms the other two methods unless the parse time is very limited (i.e., z 1 4The PCFG used in creating the candidate parses is roughly the same as the one that Klein and Manning (2003) call a ‘markovised PCFG with vertical order = 2 and horizontal order = 1’ and was extracted from Section 02-20. The PCFG itself gave a performance of 79.6/78.5 LP/LR on the development set. This PCFG was also used in the experiment in Section 4.4. 81 { 40 words LR LP CB 0 CB This paper 86.7 86.6 1.19 61.1 Klein and Manning (2003) 85.7 86.9 1.10 60.3 Collins (1999) 88.5 88.7 0.92 66.7 Charniak (1999) 90.1 90.1 0.74 70.1 { 100 words LR LP CB 0 CB This paper 86.0 86.1 1.39 58.3 Klein and Manning (2003) 85.1 86.3 1.31 57.2 Collins (1999) 88.1 88.3 1.06 64.0 Charniak (1999) 89.6 89.5 0.88 67.6 Table 2: Comparison with other parsers. sec is required), as shown in the figure. The superiority of the third method over the first method seems to stem from the difference in the number of candidate parses from which the outputs are selected.5 The superiority of the third method over the second method is a natural consequence of the consistent use of = 3>& ; both in the estimation (as the objective function) and in the parsing (as the score of a parse). 4.4 Comparison with related work Parsing performance on section 23 of the WSJ corpus using a PCFG-LA model is shown in Table 2. We used the instance of the four compared in the second experiment that gave the best results on the development set. Several previously reported results on the same test set are also listed in Table 2. Our result is lower than the state-of-the-art lexicalized PCFG parsers (Collins, 1999; Charniak, 1999), but comparable to the unlexicalized PCFG parser of Klein and Manning (2003). Klein and Manning’s PCFG is annotated by many linguistically motivated features that they found using extensive manual feature selection. In contrast, our method induces all parameters automatically, except that manually written head-rules are used in binarization. Thus, our method can extract a considerable amount of hidden regularity from parsed corpora. However, our result is worse than the lexicalized parsers despite the fact that our model has access to words in the sentences. It suggests that certain types of information used in those lexicalized 5Actually, the number of parses contained in the packed forest is more than 1 million for over half of the test sentences when | = /u} î and ` ì /u} î  , while the number of parses for which the first method can compute the exact probability in a comparable time (around 4 sec) is only about 300. parsers are hard to be learned by our approach. References Eugene Charniak. 1999. A maximum-entropy-inspired parser. Technical Report CS-99-12. David Chiang and Daniel M. Bikel. 2002. Recovering latent information in treebanks. In Proc. COLING, pages 183–189. Stephen Clark and James R. Curran. 2004. 
Parsing the wsj using ccg and log-linear models. In Proc. ACL, pages 104–111. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Brendan J. Frey, Relu Patrascu, Tommi Jaakkola, and Jodi Moran. 2000. Sequentially fitting “inclusive” trees for inference in noisy-OR networks. In Proc. NIPS, pages 493–499. Joshua Goodman. 1996a. Efficient algorithms for parsing the DOP model. In Proc. EMNLP, pages 143–152. Joshua Goodman. 1996b. Parsing algorithms and metric. In Proc. ACL, pages 177–183. Joshua Goodman. 1997. Probabilistic feature grammars. In Proc. IWPT. James Henderson. 2003. Inducing history representations for broad coverage statistical parsing. In Proc. HLT-NAACL, pages 103–110. Mark Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24(4):613–632. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proc. ACL, pages 423–430. Fernando Pereira and Yves Schabes. 1992. Insideoutside reestimation from partially bracketed corpora. In Proc. ACL, pages 128–135. Libin Shen. 2004. Nondeterministic LTAG derivation tree extraction. In Proc. TAG+7, pages 199–203. Khalil Sima´an. 2002. Computational complexity of probabilistic disambiguation. Grammars, 5(2):125– 151. Khalil Sima´an. 2003. On maximizing metrics for syntactic disambiguation. In Proc. IWPT. Takehito Utsuro, Syuuji Kodama, and Yuji Matsumoto. 1996. Generalization/specialization of context free grammars based-on entropy of non-terminals. In Proc. JSAI (in Japanese), pages 327–330. 82
2005
10
Proceedings of the 43rd Annual Meeting of the ACL, pages 83–90, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Probabilistic disambiguation models for wide-coverage HPSG parsing Yusuke Miyao Department of Computer Science University of Tokyo Hongo 7-3-1, Bunkyo-ku, Tokyo, Japan [email protected] Jun’ichi Tsujii Department of Computer Science University of Tokyo Hongo 7-3-1, Bunkyo-ku, Tokyo, Japan CREST, JST [email protected] Abstract This paper reports the development of loglinear models for the disambiguation in wide-coverage HPSG parsing. The estimation of log-linear models requires high computational cost, especially with widecoverage grammars. Using techniques to reduce the estimation cost, we trained the models using 20 sections of Penn Treebank. A series of experiments empirically evaluated the estimation techniques, and also examined the performance of the disambiguation models on the parsing of real-world sentences. 1 Introduction Head-Driven Phrase Structure Grammar (HPSG) (Pollard and Sag, 1994) has been studied extensively from both linguistic and computational points of view. However, despite research on HPSG processing efficiency (Oepen et al., 2002a), the application of HPSG parsing is still limited to specific domains and short sentences (Oepen et al., 2002b; Toutanova and Manning, 2002). Scaling up HPSG parsing to assess real-world texts is an emerging research field with both theoretical and practical applications. Recently, a wide-coverage grammar and a large treebank have become available for English HPSG (Miyao et al., 2004). A large treebank can be used as training and test data for statistical models. Therefore, we now have the basis for the development and the evaluation of statistical disambiguation models for wide-coverage HPSG parsing. The aim of this paper is to report the development of log-linear models for the disambiguation in widecoverage HPSG parsing, and their empirical evaluation through the parsing of the Wall Street Journal of Penn Treebank II (Marcus et al., 1994). This is challenging because the estimation of log-linear models is computationally expensive, and we require solutions to make the model estimation tractable. We apply two techniques for reducing the training cost. One is the estimation on a packed representation of HPSG parse trees (Section 3). The other is the filtering of parse candidates according to a preliminary probability distribution (Section 4). To our knowledge, this work provides the first results of extensive experiments of parsing Penn Treebank with a probabilistic HPSG. The results from the Wall Street Journal are significant because the complexity of the sentences is different from that of short sentences. Experiments of the parsing of realworld sentences can properly evaluate the effectiveness and possibility of parsing models for HPSG. 2 Disambiguation models for HPSG Discriminative log-linear models are now becoming a de facto standard for probabilistic disambiguation models for deep parsing (Johnson et al., 1999; Riezler et al., 2002; Geman and Johnson, 2002; Miyao and Tsujii, 2002; Clark and Curran, 2004b; Kaplan et al., 2004). Previous studies on probabilistic models for HPSG (Toutanova and Manning, 2002; Baldridge and Osborne, 2003; Malouf and van Noord, 2004) also adopted log-linear models. HPSG exploits feature structures to represent linguistic constraints. 
Such constraints are known to introduce inconsistencies in probabilistic models estimated using simple relative frequency (Abney, 1997). Log-linear models are required for credible probabilistic models and are also beneficial for incorporating various overlapping features. This study follows previous studies on the probabilistic models for HPSG. The probability p(t|s) of producing the parse result t from a given sentence s is defined as

p(t|s) = (1/Z_s) p_0(t|s) exp( Σ_i λ_i f_i(t, s) )
Z_s = Σ_{t′ ∈ T(s)} p_0(t′|s) exp( Σ_i λ_i f_i(t′, s) )

where p_0(t|s) is a reference distribution (usually assumed to be a uniform distribution), and T(s) is the set of parse candidates assigned to s. The feature function f_i(t, s) represents a characteristic of t and s, while the corresponding model parameter λ_i is its weight. Model parameters that maximize the log-likelihood of the training data are computed using a numerical optimization method (Malouf, 2002).

Estimation of the above model requires a set of pairs ⟨t_s, T(s)⟩, where t_s is the correct parse for sentence s. While t_s is provided by a treebank, T(s) is computed by parsing each s in the treebank. Previous studies assumed T(s) could be enumerated; however, the assumption is impractical because the size of T(s) is exponentially related to the length of s. The problem of exponential explosion is inevitable in the wide-coverage parsing of real-world texts because many parse candidates are produced to support various constructions in long sentences.

3 Packed representation of HPSG parse trees

To avoid exponential explosion, we represent T(s) in a packed form of HPSG parse trees. A parse tree of HPSG is represented as a set of tuples ⟨m, l, r⟩, where m, l, and r are the signs of mother, left daughter, and right daughter, respectively.¹ In chart parsing, partial parse candidates are stored in a chart, in which phrasal signs are identified and packed into an equivalence class if they are determined to be equivalent and dominate the same word sequence. A set of parse trees is then represented as a set of relations among equivalence classes.

¹For simplicity, only binary trees are considered. Extension to unary and n-ary (n > 2) trees is trivial.

Figure 1: Chart for parsing "he saw a girl with a telescope"

Figure 1 shows a chart for parsing "he saw a girl with a telescope", where the modifiee ("saw" or "girl") of "with" is ambiguous. Each feature structure expresses an equivalence class, and the arrows represent immediate-dominance relations. The phrase "saw a girl with a telescope" has two trees (A in the figure). Since the signs of the top-most nodes are equivalent, they are packed into an equivalence class. The ambiguity is represented as two pairs of arrows that come out of the node.

Formally, a set of HPSG parse trees is represented in a chart as a tuple ⟨E, E_r, α⟩, where E is a set of equivalence classes, E_r ⊆ E is a set of root nodes, and α : E → 2^(E×E) is a function to represent immediate-dominance relations.

Our representation of the chart can be interpreted as an instance of a feature forest (Miyao and Tsujii, 2002; Geman and Johnson, 2002). A feature forest is an "and/or" graph to represent exponentially many tree structures in a packed form. If T(s) is represented in a feature forest, p(t|s) can be estimated using dynamic programming without unpacking the chart. A feature forest is formally defined as a tuple ⟨C, D, R, γ, δ⟩, where C is a set of conjunctive nodes, D is a set of disjunctive nodes, R ⊆ C is a set of root nodes,² γ : D → 2^C is a conjunctive daughter function, and δ : C → 2^D is a disjunctive daughter function.

²For the ease of explanation, the definition of root node is slightly different from the original.
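To make the dynamic-programming claim concrete, the following sketch computes the normalization term of the log-linear model by an inside pass over a feature forest, summing over disjunctive ("or") nodes and multiplying over conjunctive ("and") daughters. This is a minimal illustration under assumed data structures (the class and field names are ours, not the paper's); a real implementation would also accumulate feature expectations and work in log space.

```python
import math

class FeatureForest:
    """A packed 'and/or' representation of a set of trees.

    conj_weight[c] : local weight exp(sum_i lambda_i * f_i(c)) of conjunctive node c
    delta[c]       : list of disjunctive daughters of conjunctive node c
    gamma[d]       : list of conjunctive nodes packed under disjunctive node d
    roots          : root conjunctive nodes
    (The names and this particular API are illustrative, not from the paper.)
    """
    def __init__(self, conj_weight, delta, gamma, roots):
        self.conj_weight = conj_weight
        self.delta = delta
        self.gamma = gamma
        self.roots = roots

def inside_sum(forest):
    """Return Z = sum over all packed trees of the product of conjunctive-node
    weights, computed by dynamic programming without unpacking the forest."""
    memo_c, memo_d = {}, {}

    def inside_conj(c):
        if c not in memo_c:
            value = forest.conj_weight[c]
            for d in forest.delta.get(c, []):       # "and" children: multiply
                value *= inside_disj(d)
            memo_c[c] = value
        return memo_c[c]

    def inside_disj(d):
        if d not in memo_d:                          # "or" children: sum
            memo_d[d] = sum(inside_conj(c) for c in forest.gamma[d])
        return memo_d[d]

    return sum(inside_conj(c) for c in forest.roots)

# Toy forest: one root whose single disjunctive daughter packs two readings.
forest = FeatureForest(
    conj_weight={"root": 1.0, "reading1": math.exp(0.5), "reading2": math.exp(-0.2)},
    delta={"root": ["D1"]},
    gamma={"D1": ["reading1", "reading2"]},
    roots=["root"],
)
print(inside_sum(forest))   # exp(0.5) + exp(-0.2)
```

With expected feature counts obtained from an analogous outside pass, the gradient of the log-likelihood follows directly, which is exactly what the numerical optimizer needs.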
Figure 2: Packed representation of HPSG parse trees in Figure 1

The feature functions f_i(t, s) are assigned to conjunctive nodes. The simplest way to map a chart of HPSG parse trees into a feature forest is to map each equivalence class e ∈ E to a conjunctive node. However, in HPSG parsing, important features for disambiguation are combinations of a mother and its daughters, i.e., ⟨m, l, r⟩. Hence, we map the tuple ⟨e, e_l, e_r⟩ of equivalence classes, which corresponds to ⟨m, l, r⟩, into a conjunctive node. Figure 2 shows (a part of) the HPSG parse trees in Figure 1 represented as a feature forest. Square boxes are conjunctive nodes, dotted lines express a disjunctive daughter function, and solid arrows represent a conjunctive daughter function. The mapping is formally defined as follows.

C = {⟨e, e_l, e_r⟩ | e ∈ E, ⟨e_l, e_r⟩ ∈ α(e)},
D = E,
R = {⟨e, e_l, e_r⟩ ∈ C | e ∈ E_r},
γ(e) = {⟨e, e_l, e_r⟩ | ⟨e_l, e_r⟩ ∈ α(e)}, and
δ(⟨e, e_l, e_r⟩) = {e_l, e_r}.

4 Filtering by preliminary distribution

The above method allows for the tractable estimation of log-linear models on exponentially-many HPSG parse trees. However, despite the development of methods to improve HPSG parsing efficiency (Oepen et al., 2002a), the exhaustive parsing of all sentences in a treebank is still expensive. Our idea is that we can omit the computation of parse trees with low probabilities in the estimation stage because T(s) can be approximated with parse trees with high probabilities. To achieve this, we first prepared a preliminary probabilistic model whose estimation did not require the parsing of a treebank. The preliminary model was used to reduce the search space for parsing a training treebank. The preliminary model in this study is a unigram model,

p̃(t|s) = Π_i p(l_i | w_i)

where w_i is a word in the sentence s, and l_i is a lexical entry assigned to w_i. This model can be estimated without parsing a treebank.

Given this model, we restrict the number of lexical entries used to parse a treebank. With a threshold for the number of lexical entries and a threshold for the accumulated probability, lexical entries are assigned to a word in descending order of probability, until the number of assigned entries exceeds the first threshold, or the accumulated probability exceeds the second. If the lexical entry necessary to produce the correct parse is not assigned, it is additionally assigned to the word.

Figure 3: Filtering of lexical entries for "saw"

Figure 3 shows an example of filtering the lexical entries assigned to "saw". With the thresholds used in the example, four lexical entries are assigned. Although the lexicon includes other lexical entries, such as a verbal entry taking a sentential complement (shown in the figure), they are filtered out. This method reduces the time for parsing a treebank, while this approximation causes bias in the training data and results in lower accuracy. The trade-off between the parsing cost and the accuracy will be examined experimentally.
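As a concrete illustration of this filtering step, the sketch below assigns lexical entries to a word from a unigram model until either an entry-count threshold or an accumulated-probability threshold is reached, and then adds the gold entry back if it was cut off. The function and variable names are ours, and the threshold values and toy lexicon are illustrative, not those used in the paper.

```python
def filter_lexical_entries(word, unigram_probs, gold_entry=None,
                           max_entries=10, prob_mass=0.95):
    """Select lexical entries for `word` from a unigram model p(entry | word).

    unigram_probs : dict mapping lexical entries to p(entry | word)
    gold_entry    : entry needed for the correct parse (added back if filtered out)
    max_entries, prob_mass : filtering thresholds (illustrative values)
    """
    ranked = sorted(unigram_probs.items(), key=lambda kv: kv[1], reverse=True)
    selected, accumulated = [], 0.0
    for entry, prob in ranked:
        if len(selected) >= max_entries or accumulated >= prob_mass:
            break
        selected.append(entry)
        accumulated += prob
    # Guarantee that the treebank parse remains reachable during training.
    if gold_entry is not None and gold_entry not in selected:
        selected.append(gold_entry)
    return selected

# Toy lexicon for "saw": the transitive verb dominates the probability mass.
probs = {"v_np_trans": 0.61, "n_sg": 0.20, "v_np_np_ditrans": 0.10,
         "v_unacc": 0.05, "v_s_comp": 0.04}
print(filter_lexical_entries("saw", probs, gold_entry="v_s_comp",
                             max_entries=3, prob_mass=0.9))
```

The last step is what keeps the approximation from discarding the supervised signal itself: the search space shrinks, but the correct parse is always derivable during training.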
We have several ways to integrate the preliminary model p̃ with the estimated model p. In the experiments, we will empirically compare the following methods in terms of accuracy and estimation time.

Filtering only: The unigram probability is used only for filtering.
Product: The probability is defined as the product of p̃ and the estimated model p.
Reference distribution: p̃ is used as the reference distribution p_0 of p.
Feature function: log p̃ is used as a feature function of p. This method was shown to be a generalization of the reference distribution method (Johnson and Riezler, 2000).

5 Features

Feature functions in the log-linear models are designed to capture the characteristics of a conjunctive node ⟨e, e_l, e_r⟩, i.e., a mother and its daughters. In this paper, we investigate combinations of the atomic features listed in Table 1.

Table 1: Templates of atomic features
RULE    the name of the applied schema
DIST    the distance between the head words of the daughters
COMMA   whether a comma exists between daughters and/or inside of daughter phrases
SPAN    the number of words dominated by the phrase
SYM     the symbol of the phrasal category (e.g. NP, VP)
WORD    the surface form of the head word
POS     the part-of-speech of the head word
LE      the lexical entry assigned to the head word

The following combinations are used for representing the characteristics of the binary/unary schema applications.

f_binary = ⟨RULE, DIST, COMMA, SPAN_l, SYM_l, WORD_l, POS_l, LE_l, SPAN_r, SYM_r, WORD_r, POS_r, LE_r⟩
f_unary = ⟨RULE, SYM, WORD, POS, LE⟩

In addition, the following is for expressing the condition of the root node of the parse tree.

f_root = ⟨SYM, WORD, POS, LE⟩

Figure 4: Example features

Figure 4 shows examples: f_root is for the root node, in which the phrase symbol is S and the surface form, part-of-speech, and lexical entry of the lexical head are "saw", VBD, and a transitive verb, respectively. f_binary is for the binary rule application to "saw a girl" and "with a telescope", in which the applied schema is the Head-Modifier Schema, the left daughter is a VP headed by "saw", and the right daughter is a PP headed by "with", whose part-of-speech is IN and whose lexical entry is a VP-modifying preposition.

In an actual implementation, some of the atomic features are abstracted (i.e., ignored) for smoothing. Table 2 shows the full set of templates of combined features used in the experiments. Each row represents a template of a feature function: a check means the atomic feature is incorporated, while a hyphen means the feature is ignored.

Restricting the domain of feature functions to ⟨e, e_l, e_r⟩ seems to limit the flexibility of feature design. Although this is true to some extent, it does not necessarily mean that features on nonlocal dependencies cannot be incorporated into the model, because a feature forest model does not assume probabilistic independence of conjunctive nodes. This means that we can unpack a part of the forest without changing the model. Actually, in our previous study (Miyao et al., 2003), we successfully developed a probabilistic model including features on nonlocal predicate-argument dependencies. However, since we could not observe significant improvements by incorporating nonlocal features, this paper investigates only the features described above.

Table 2: Feature templates for binary schema (left), unary schema (center), and root condition (right)

Table 3: Accuracy for development/test sets
                             Avg. length   LP      LR      UP      UR      F-score
Section 22 (≤ 40 words)      20.69         87.18   86.23   90.67   89.68   86.70
Section 22 (≤ 100 words)     22.43         86.99   84.32   90.45   87.67   85.63
Section 23 (≤ 40 words)      20.52         87.12   85.45   90.65   88.91   86.27
Section 23 (≤ 100 words)     22.23         86.81   84.64   90.29   88.03   85.71

6 Experiments

We used an HPSG grammar derived from Penn Treebank (Marcus et al., 1994) Sections 02-21 (39,832 sentences) by our method of grammar development (Miyao et al., 2004). The training data was the HPSG treebank derived from the same portion of the Penn Treebank³.
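Returning briefly to the feature templates of Section 5, the sketch below expands the atomic features of one (mother, daughters) application into combined feature strings, abstracting away ("ignoring") selected atoms in the way the hyphens of Table 2 do. The atomic values, the two abstraction patterns, and the string encoding are illustrative choices, not the exact templates of the paper.

```python
def combined_features(atoms, templates):
    """Expand atomic features of one schema application into combined features.

    atoms     : dict of atomic feature name -> value (RULE, DIST, WORD_l, ...)
    templates : list of tuples naming which atoms each template keeps;
                atoms not listed are abstracted away, like the hyphens in Table 2.
    """
    features = []
    for keep in templates:
        parts = [f"{name}={atoms[name]}" for name in keep if name in atoms]
        features.append("&".join(parts))
    return features

# Atomic features of the binary application of the Head-Modifier Schema to
# "saw a girl" + "with a telescope" (values are illustrative).
atoms = {"RULE": "head_mod", "DIST": 3, "COMMA": False,
         "SYM_l": "VP", "WORD_l": "saw", "POS_l": "VBD", "LE_l": "trans_verb",
         "SYM_r": "PP", "WORD_r": "with", "POS_r": "IN", "LE_r": "vp_mod_prep"}

# Two hypothetical rows of Table 2: one fully lexicalized, one backed off to POS.
templates = [("RULE", "DIST", "COMMA", "WORD_l", "POS_l", "WORD_r", "POS_r"),
             ("RULE", "POS_l", "POS_r")]
for f in combined_features(atoms, templates):
    print(f)
```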
For the training, we eliminated sentences with no less than 40 words and for which the parser could not produce the correct parse. The resulting training set consisted of 33,574 sentences. The treebanks derived from Sections 22 and 23 were used as the development (1,644 sentences) and final test sets (2,299 sentences). We measured the accuracy of predicate-argument dependencies output by the parser. A dependency is defined as a tuple       , where  is the predicate type (e.g., adjective, intransitive verb),  is the head word of the predicate,  is the argument label (MODARG, ARG1, ..., ARG4), and  is the head word of the argument. Labeled precision/recall (LP/LR) is the ratio of tuples correctly identified by the parser, while unlabeled precision/recall (UP/UR) is the ratio of  and  correctly identified regardless of  and . The F-score is the harmonic mean of LP and LR. The accuracy was measured by parsing test sentences with part-of-speech tags pro3The programs to make the grammar and the treebank from Penn Treebank are available at http://wwwtsujii.is.s.u-tokyo.ac.jp/enju/. vided by the treebank. The Gaussian prior was used for smoothing (Chen and Rosenfeld, 1999), and its hyper-parameter was tuned for each model to maximize the F-score for the development set. The optimization algorithm was the limited-memory BFGS method (Nocedal and Wright, 1999). All the following experiments were conducted on AMD Opteron servers with a 2.0-GHz CPU and 12-GB memory. Table 3 shows the accuracy for the development/test sets. Features occurring more than twice were included in the model (598,326 features). Filtering was done by the reference distribution method with    and    . The unigram model for filtering was a log-linear model with two feature templates, WORD  POS  LE  and POS  LE  (24,847 features). Our results cannot be strictly compared with other grammar formalisms because each formalism represents predicate-argument dependencies differently; for reference, our results are competitive with the corresponding measures reported for Combinatory Categorial Grammar (CCG) (LP/LR = 86.6/86.3) (Clark and Curran, 2004b). Different from the results of CCG and PCFG (Collins, 1999; Charniak, 2000), the recall was clearly lower than precision. This results from the HPSG grammar having stricter feature constraints and the parser not being able to produce parse results for around one percent of the sentences. To improve recall, we need techniques of robust processing with HPSG. 87 LP LR Estimation time (sec.) Filtering only 34.90 23.34 702 Product 86.71 85.55 1,758 Reference dist. 87.12 85.45 655 Feature function 84.89 83.06 1,203 Table 4: Estimation method vs. accuracy and estimation time   F-score Estimation time (sec.) Parsing time (sec.) Memory usage (MB) 5, 0.80 84.31 161 7,827 2,377 5, 0.90 84.69 207 9,412 2,992 5, 0.95 84.70 240 12,027 3,648 5, 0.98 84.81 340 15,168 4,590 10, 0.80 84.79 164 8,858 2,658 10, 0.90 85.77 298 13,996 4,062 10, 0.95 86.27 654 25,308 6,324 10, 0.98 86.56 1,778 55,691 11,700 15, 0.80 84.68 180 9,337 2,676 15, 0.90 85.85 308 14,915 4,220 15, 0.95 86.68 854 32,757 7,766 Table 5: Filtering threshold vs. accuracy and estimation time Table 4 compares the estimation methods introduced in Section 4. In all of the following experiments, we show the accuracy for the test set ( 40 words) only. Table 4 revealed that our simple method of filtering caused a fatal bias in training data when a preliminary distribution was used only for filtering. 
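The dependency-based metrics described earlier in this section can be made concrete with the following sketch, which computes labeled and unlabeled precision, recall, and F-score from sets of predicate-argument tuples. The tuple layout follows the definition in the text (predicate type, predicate head word, argument label, argument head word); the function itself and the toy tuples are our illustration.

```python
def evaluate_dependencies(gold, system):
    """gold, system: sets of (pred_type, pred_word, arg_label, arg_word) tuples,
    with token positions folded into the word strings to keep tokens distinct."""
    def prf(correct, n_sys, n_gold):
        p = correct / n_sys if n_sys else 0.0
        r = correct / n_gold if n_gold else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f

    labeled_correct = len(gold & system)
    unlabeled_gold = {(pw, aw) for _, pw, _, aw in gold}
    unlabeled_sys = {(pw, aw) for _, pw, _, aw in system}
    unlabeled_correct = len(unlabeled_gold & unlabeled_sys)

    lp, lr, lf = prf(labeled_correct, len(system), len(gold))
    up, ur, _ = prf(unlabeled_correct, len(unlabeled_sys), len(unlabeled_gold))
    return {"LP": lp, "LR": lr, "F": lf, "UP": up, "UR": ur}

gold = {("verb_arg12", "saw@2", "ARG1", "he@1"),
        ("verb_arg12", "saw@2", "ARG2", "girl@4"),
        ("prep_mod", "with@5", "MODARG", "saw@2")}
system = {("verb_arg12", "saw@2", "ARG1", "he@1"),
          ("verb_arg12", "saw@2", "ARG2", "girl@4"),
          ("prep_mod", "with@5", "MODARG", "girl@4")}
print(evaluate_dependencies(gold, system))
```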
In contrast, the models that combined the preliminary model with the estimated model achieved sufficient accuracy. The reference distribution method achieved higher accuracy and lower cost. The feature function method achieved lower accuracy in our experiments. A possible reason is that a hyper-parameter of the prior was set to the same value for all the features, including the feature of the preliminary distribution.

Table 5 shows the results of changing the filtering thresholds, from which we can determine the correlation between the estimation/parsing cost and accuracy. In our experiment, relatively high thresholds for both the number of lexical entries and the accumulated probability seem necessary to preserve the F-score.

Figure 5: Sentence length vs. accuracy

Figure 5 shows the accuracy for each sentence length. It is apparent from this figure that the accuracy was significantly higher for shorter sentences (fewer than 10 words). This implies that experiments with only short sentences overestimate the performance of parsers. Sentences with at least 10 words are necessary to properly evaluate the performance of parsing real-world texts.

Figure 6: Corpus size vs. accuracy

Figure 6 shows the learning curve. A feature set was fixed, while the parameter of the prior was optimized for each model. High accuracy was attained even with small data, and the accuracy seemed to be saturated. This indicates that we cannot further improve the accuracy simply by increasing training data. The exploration of new types of features is necessary for higher accuracy.

Table 6 shows the accuracy with different feature sets. The accuracy was measured by removing some of the atomic features from the final model. The last row denotes the accuracy attained by the preliminary model. The numbers in bold type represent that the difference from the final model was significant according to stratified shuffling tests (Cohen, 1995). The results indicate that DIST, COMMA, SPAN, WORD, and POS features contributed to the final accuracy, although the dif
While attachment/lexical ambiguities are well-known causes, the other is peculiar to deep parsing. Most of the errors cannot be resolved by features we investigated in this study, and the design of other features is crucial for further improvements. 7 Discussion and related work Experiments on deep parsing of Penn Treebank have been reported for Combinatory Categorial Grammar (CCG) (Clark and Curran, 2004b) and Lexical Functional Grammar (LFG) (Kaplan et al., 2004). They developed log-linear models on a packed representation of parse forests, which is similar to our representation. Although HPSG exploits further complicated feature constraints and requires high comError cause # of errors Argument/modifier distinction 58 temporal noun 21 to-infinitive 15 others 22 Attachment 53 prepositional phrase 18 to-infinitive 10 relative clause 8 others 17 Lexical ambiguity 42 participle/adjective 15 preposition/modifier 14 others 13 Comma 19 Coordination 14 Noun phrase identification 13 Zero-pronoun resolution 9 Others 17 Table 7: Error analysis putational cost, our work has proved that log-linear models can be applied to HPSG parsing and attain accurate and wide-coverage parsing. Clark and Curran (2004a) described a method of reducing the cost of parsing a training treebank in the context of CCG parsing. They first assigned to each word a small number of supertags, which correspond to lexical entries in our case, and parsed supertagged sentences. Since they did not mention the probabilities of supertags, their method corresponds to our “filtering only” method. However, they also applied the same supertagger in a parsing stage, and this seemed to be crucial for high accuracy. This means that they estimated the probability of producing a parse tree from a supertagged sentence. Another approach to estimating log-linear models for HPSG is to extract a small informative sample from the original set   (Osborne, 2000). Malouf and van Noord (2004) successfully applied this method to German HPSG. The problem with this method was in the approximation of exponentially many parse trees by a polynomial-size sample. However, their method has the advantage that any features on a parse tree can be incorporated into the model. The trade-off between approximation and locality of features is an outstanding problem. Other discriminative classifiers were applied to the disambiguation in HPSG parsing (Baldridge and Osborne, 2003; Toutanova et al., 2004). The problem of exponential explosion is also inevitable for 89 their methods. An approach similar to ours may be applied to them, following the study on the learning of a discriminative classifier for a packed representation (Taskar et al., 2004). As discussed in Section 6, exploration of other features is indispensable to further improvements. A possible direction is to encode larger contexts of parse trees, which were shown to improve the accuracy (Toutanova and Manning, 2002; Toutanova et al., 2004). Future work includes the investigation of such features, as well as the abstraction of lexical dependencies like semantic classes. References S. P. Abney. 1997. Stochastic attribute-value grammars. Computational Linguistics, 23(4). J. Baldridge and M. Osborne. 2003. Active learning for HPSG parse selection. In CoNLL-03. E. Charniak. 2000. A maximum-entropy-inspiredparser. In Proc. NAACL-2000, pages 132–139. S. Chen and R. Rosenfeld. 1999. A Gaussian prior for smoothing maximum entropy models. Technical Report CMUCS-99-108, Carnegie Mellon University. S. Clark and J. R. 
Curran. 2004a. The importance of supertagging for wide-coverage CCG parsing. In Proc. COLING-04. S. Clark and J. R. Curran. 2004b. Parsing the WSJ using CCG and log-linear models. In Proc. 42th ACL. P. R. Cohen. 1995. Empirical Methods for Artificial Intelligence. MIT Press. M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, Univ. of Pennsylvania. S. Geman and M. Johnson. 2002. Dynamic programming for parsing and estimation of stochastic unification-based grammars. In Proc. 40th ACL. M. Johnson and S. Riezler. 2000. Exploiting auxiliary distributions in stochastic unification-based grammars. In Proc. 1st NAACL. M. Johnson, S. Geman, S. Canon, Z. Chi, and S. Riezler. 1999. Estimators for stochastic “unification-based” grammars. In Proc. ACL’99, pages 535–541. R. M. Kaplan, S. Riezler, T. H. King, J. T. Maxwell III, and A. Vasserman. 2004. Speed and accuracy in shallow and deep stochastic parsing. In Proc. HLT/NAACL’04. R. Malouf and G. van Noord. 2004. Wide coverage parsing with stochastic attribute value grammars. In Proc. IJCNLP-04 Workshop “Beyond Shallow Analyses”. R. Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proc. CoNLL2002. M. Marcus, G. Kim, M. A. Marcinkiewicz, R. MacIntyre, A. Bies, M. Ferguson, K. Katz, and B. Schasberger. 1994. The Penn Treebank: Annotating predicate argument structure. In ARPA Human Language Technology Workshop. Y. Miyao and J. Tsujii. 2002. Maximum entropy estimation for feature forests. In Proc. HLT 2002. Y. Miyao, T. Ninomiya, and J. Tsujii. 2003. Probabilistic modeling of argument structures including non-local dependencies. In Proc. RANLP 2003, pages 285–291. Y. Miyao, T. Ninomiya, and J. Tsujii. 2004. Corpusoriented grammar development for acquiring a Headdriven Phrase Structure Grammar from the Penn Treebank. In Proc. IJCNLP-04. J. Nocedal and S. J. Wright. 1999. Numerical Optimization. Springer. S. Oepen, D. Flickinger, J. Tsujii, and H. Uszkoreit, editors. 2002a. Collaborative Language Engineering: A Case Study in Efficient Grammar-Based Processing. CSLI Publications. S. Oepen, K. Toutanova, S. Shieber, C. Manning, D. Flickinger, and T. Brants. 2002b. The LinGO, Redwoods treebank. motivation and preliminary applications. In Proc. COLING 2002. M. Osborne. 2000. Estimation of stochastic attributevalue grammar using an informative sample. In Proc. COLING 2000. C. Pollard and I. A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press. S. Riezler, T. H. King, R. M. Kaplan, R. Crouch, J. T. Maxwell III, and M. Johnson. 2002. Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques. In Proc. 40th ACL. B. Taskar, D. Klein, M. Collins, D. Koller, and C. Manning. 2004. Max-margin parsing. In EMNLP 2004. K. Toutanova and C. D. Manning. 2002. Feature selection for a rich HPSG grammar using decision trees. In Proc. CoNLL-2002. K. Toutanova, P. Markova, and C. Manning. 2004. The leaf projection path view of parse trees: Exploring string kernels for HPSG parse selection. In EMNLP 2004. 90
2005
11
Proceedings of the 43rd Annual Meeting of the ACL, pages 91–98, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Online Large-Margin Training of Dependency Parsers Ryan McDonald Koby Crammer Fernando Pereira Department of Computer and Information Science University of Pennsylvania Philadelphia, PA {ryantm,crammer,pereira}@cis.upenn.edu Abstract We present an effective training algorithm for linearly-scored dependency parsers that implements online largemargin multi-class training (Crammer and Singer, 2003; Crammer et al., 2003) on top of efficient parsing techniques for dependency trees (Eisner, 1996). The trained parsers achieve a competitive dependency accuracy for both English and Czech with no language specific enhancements. 1 Introduction Research on training parsers from annotated data has for the most part focused on models and training algorithms for phrase structure parsing. The best phrase-structure parsing models represent generatively the joint probability P(x, y) of sentence x having the structure y (Collins, 1999; Charniak, 2000). Generative parsing models are very convenient because training consists of computing probability estimates from counts of parsing events in the training set. However, generative models make complicated and poorly justified independence assumptions and estimations, so we might expect better performance from discriminatively trained models, as has been shown for other tasks like document classification (Joachims, 2002) and shallow parsing (Sha and Pereira, 2003). Ratnaparkhi’s conditional maximum entropy model (Ratnaparkhi, 1999), trained to maximize conditional likelihood P(y|x) of the training data, performed nearly as well as generative models of the same vintage even though it scores parsing decisions in isolation and thus may suffer from the label bias problem (Lafferty et al., 2001). Discriminatively trained parsers that score entire trees for a given sentence have only recently been investigated (Riezler et al., 2002; Clark and Curran, 2004; Collins and Roark, 2004; Taskar et al., 2004). The most likely reason for this is that discriminative training requires repeatedly reparsing the training corpus with the current model to determine the parameter updates that will improve the training criterion. The reparsing cost is already quite high for simple context-free models with O(n3) parsing complexity, but it becomes prohibitive for lexicalized grammars with O(n5) parsing complexity. Dependency trees are an alternative syntactic representation with a long history (Hudson, 1984). Dependency trees capture important aspects of functional relationships between words and have been shown to be useful in many applications including relation extraction (Culotta and Sorensen, 2004), paraphrase acquisition (Shinyama et al., 2002) and machine translation (Ding and Palmer, 2005). Yet, they can be parsed in O(n3) time (Eisner, 1996). Therefore, dependency parsing is a potential “sweet spot” that deserves investigation. We focus here on projective dependency trees in which a word is the parent of all of its arguments, and dependencies are non-crossing with respect to word order (see Figure 1). However, there are cases where crossing dependencies may occur, as is the case for Czech (Hajiˇc, 1998). Edges in a dependency tree may be typed (for instance to indicate grammatical function). Though we focus on the simpler non-typed 91 root John hit the ball with the bat Figure 1: An example dependency tree. 
case, all algorithms are easily extendible to typed structures. The following work on dependency parsing is most relevant to our research. Eisner (1996) gave a generative model with a cubic parsing algorithm based on an edge factorization of trees. Yamada and Matsumoto (2003) trained support vector machines (SVM) to make parsing decisions in a shift-reduce dependency parser. As in Ratnaparkhi’s parser, the classifiers are trained on individual decisions rather than on the overall quality of the parse. Nivre and Scholz (2004) developed a history-based learning model. Their parser uses a hybrid bottom-up/topdown linear-time heuristic parser and the ability to label edges with semantic types. The accuracy of their parser is lower than that of Yamada and Matsumoto (2003). We present a new approach to training dependency parsers, based on the online large-margin learning algorithms of Crammer and Singer (2003) and Crammer et al. (2003). Unlike the SVM parser of Yamada and Matsumoto (2003) and Ratnaparkhi’s parser, our parsers are trained to maximize the accuracy of the overall tree. Our approach is related to those of Collins and Roark (2004) and Taskar et al. (2004) for phrase structure parsing. Collins and Roark (2004) presented a linear parsing model trained with an averaged perceptron algorithm. However, to use parse features with sufficient history, their parsing algorithm must prune heuristically most of the possible parses. Taskar et al. (2004) formulate the parsing problem in the large-margin structured classification setting (Taskar et al., 2003), but are limited to parsing sentences of 15 words or less due to computation time. Though these approaches represent good first steps towards discriminatively-trained parsers, they have not yet been able to display the benefits of discriminative training that have been seen in namedentity extraction and shallow parsing. Besides simplicity, our method is efficient and accurate, as we demonstrate experimentally on English and Czech treebank data. 2 System Description 2.1 Definitions and Background In what follows, the generic sentence is denoted by x (possibly subscripted); the ith word of x is denoted by xi. The generic dependency tree is denoted by y. If y is a dependency tree for sentence x, we write (i, j) ∈y to indicate that there is a directed edge from word xi to word xj in the tree, that is, xi is the parent of xj. T = {(xt, yt)}T t=1 denotes the training data. We follow the edge based factorization method of Eisner (1996) and define the score of a dependency tree as the sum of the score of all edges in the tree, s(x, y) = X (i,j)∈y s(i, j) = X (i,j)∈y w · f(i, j) where f(i, j) is a high-dimensional binary feature representation of the edge from xi to xj. For example, in the dependency tree of Figure 1, the following feature would have a value of 1: f(i, j) =  1 if xi=‘hit’ and xj=‘ball’ 0 otherwise. In general, any real-valued feature may be used, but we use binary features for simplicity. The feature weights in the weight vector w are the parameters that will be learned during training. Our training algorithms are iterative. We denote by w(i) the weight vector after the ith training iteration. Finally we define dt(x) as the set of possible dependency trees for the input sentence x and bestk(x; w) as the set of k dependency trees in dt(x) that are given the highest scores by weight vector w, with ties resolved by an arbitrary but fixed rule. 
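As a small illustration of this edge-factored scoring, the sketch below represents f(i, j) as a set of firing binary feature names and scores a tree by summing w · f(i, j) over its edges. The particular feature names and weights are invented for the example; only the factorization mirrors the text.

```python
def edge_features(sentence, i, j):
    """Binary features of the directed edge from head x_i to dependent x_j
    (a tiny illustrative subset; the real system uses millions of features)."""
    head, dep = sentence[i], sentence[j]
    return {f"head_word={head}&dep_word={dep}",
            f"head_word={head}",
            f"dep_word={dep}",
            f"direction={'right' if i < j else 'left'}"}

def tree_score(sentence, tree, weights):
    """Score of a dependency tree = sum of w . f(i, j) over its edges (i, j)."""
    return sum(weights.get(feat, 0.0)
               for (i, j) in tree
               for feat in edge_features(sentence, i, j))

sentence = ["<root>", "John", "hit", "the", "ball"]
tree = {(0, 2), (2, 1), (2, 4), (4, 3)}        # edges as (head index, dependent index)
weights = {"head_word=hit&dep_word=ball": 1.5, "head_word=hit": 0.3,
           "direction=left": -0.1}
print(tree_score(sentence, tree, weights))
```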
Three basic questions must be answered for models of this form: how to find the dependency tree y with highest score for sentence x; how to learn an appropriate weight vector w from the training data; and finally, what feature representation f(i, j) should be used. The following sections address each of these questions. 2.2 Parsing Algorithm Given a feature representation for edges and a weight vector w, we seek the dependency tree or 92 h1 h1 h2 h2 ⇒ s h1 h1 r r+1 h2 h2 t h1 h1 h2 h2 ⇒ s h1 h1 h2 h2 t h1 h1 s h1 h1 t Figure 2: O(n3) algorithm of Eisner (1996), needs to keep 3 indices at any given stage. trees that maximize the score function, s(x, y). The primary difficulty is that for a given sentence of length n there are exponentially many possible dependency trees. Using a slightly modified version of a lexicalized CKY chart parsing algorithm, it is possible to generate and represent these sentences in a forest that is O(n5) in size and takes O(n5) time to create. Eisner (1996) made the observation that if the head of each chart item is on the left or right periphery, then it is possible to parse in O(n3). The idea is to parse the left and right dependents of a word independently and combine them at a later stage. This removes the need for the additional head indices of the O(n5) algorithm and requires only two additional binary variables that specify the direction of the item (either gathering left dependents or gathering right dependents) and whether an item is complete (available to gather more dependents). Figure 2 shows the algorithm schematically. As with normal CKY parsing, larger elements are created bottom-up from pairs of smaller elements. Eisner showed that his algorithm is sufficient for both searching the space of dependency parses and, with slight modification, finding the highest scoring tree y for a given sentence x under the edge factorization assumption. Eisner and Satta (1999) give a cubic algorithm for lexicalized phrase structures. However, it only works for a limited class of languages in which tree spines are regular. Furthermore, there is a large grammar constant, which is typically in the thousands for treebank parsers. 2.3 Online Learning Figure 3 gives pseudo-code for the generic online learning setting. A single training instance is considered on each iteration, and parameters updated by applying an algorithm-specific update rule to the instance under consideration. The algorithm in Figure 3 returns an averaged weight vector: an auxiliary weight vector v is maintained that accumulates Training data: T = {(xt, yt)}T t=1 1. w0 = 0; v = 0; i = 0 2. for n : 1..N 3. for t : 1..T 4. w(i+1) = update w(i) according to instance (xt, yt) 5. v = v + w(i+1) 6. i = i + 1 7. w = v/(N ∗T) Figure 3: Generic online learning algorithm. the values of w after each iteration, and the returned weight vector is the average of all the weight vectors throughout training. Averaging has been shown to help reduce overfitting (Collins, 2002). 2.3.1 MIRA Crammer and Singer (2001) developed a natural method for large-margin multi-class classification, which was later extended by Taskar et al. (2003) to structured classification: min ∥w∥ s.t. s(x, y) −s(x, y′) ≥L(y, y′) ∀(x, y) ∈T , y′ ∈dt(x) where L(y, y′) is a real-valued loss for the tree y′ relative to the correct tree y. We define the loss of a dependency tree as the number of words that have the incorrect parent. Thus, the largest loss a dependency tree can have is the length of the sentence. 
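Returning for a moment to the parsing algorithm of Section 2.2, the sketch below is a compact rendering of the first-order Eisner recurrences over complete and incomplete spans; it computes only the best projective-tree score under edge-factored scores (a full decoder additionally keeps back-pointers to recover the tree, and a k-best variant is what training uses). The array layout and the toy scores are our own illustration, not the authors' implementation.

```python
import numpy as np

def eisner_best_score(score):
    """First-order Eisner algorithm: maximum score of a projective dependency
    tree rooted at token 0, under edge-factored scores.

    score[h][m] = score of attaching dependent m to head h (n x n array,
    index 0 is an artificial root). Returns only the best score.
    """
    n = len(score)
    NEG = float("-inf")
    # Spans [s..t]: "complete" spans have all dependents attached on that side,
    # "incomplete" spans still carry the edge between their two endpoints.
    complete_r = np.zeros((n, n)); complete_l = np.zeros((n, n))
    incomp_r = np.full((n, n), NEG); incomp_l = np.full((n, n), NEG)

    for length in range(1, n):
        for s in range(n - length):
            t = s + length
            # attach t to s (edge s -> t) or s to t (edge t -> s)
            best_mid = max(complete_r[s][r] + complete_l[r + 1][t]
                           for r in range(s, t))
            incomp_r[s][t] = best_mid + score[s][t]
            incomp_l[s][t] = best_mid + score[t][s]
            # absorb further dependents into complete spans
            complete_r[s][t] = max(incomp_r[s][r] + complete_r[r][t]
                                   for r in range(s + 1, t + 1))
            complete_l[s][t] = max(complete_l[s][r] + incomp_l[r][t]
                                   for r in range(s, t))
    return complete_r[0][n - 1]

# Toy scores for "<root> John hit ball": "hit" should head both other words.
scores = np.full((4, 4), -1.0)
scores[0][2] = 10.0   # root -> hit
scores[2][1] = 8.0    # hit  -> John
scores[2][3] = 9.0    # hit  -> ball
print(eisner_best_score(scores))   # 27.0
```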
Informally, this update looks to create a margin between the correct dependency tree and each incorrect dependency tree at least as large as the loss of the incorrect tree. The more errors a tree has, the farther away its score will be from the score of the correct tree. In order to avoid a blow-up in the norm of the weight vector we minimize it subject to constraints that enforce the desired margin between the correct and incorrect trees1. 1The constraints may be unsatisfiable, in which case we can relax them with slack variables as in SVM training. 93 The Margin Infused Relaxed Algorithm (MIRA) (Crammer and Singer, 2003; Crammer et al., 2003) employs this optimization directly within the online framework. On each update, MIRA attempts to keep the norm of the change to the parameter vector as small as possible, subject to correctly classifying the instance under consideration with a margin at least as large as the loss of the incorrect classifications. This can be formalized by substituting the following update into line 4 of the generic online algorithm, min w(i+1) −w(i) s.t. s(xt, yt) −s(xt, y′) ≥L(yt, y′) ∀y′ ∈dt(xt) (1) This is a standard quadratic programming problem that can be easily solved using Hildreth’s algorithm (Censor and Zenios, 1997). Crammer and Singer (2003) and Crammer et al. (2003) provide an analysis of both the online generalization error and convergence properties of MIRA. In equation (1), s(x, y) is calculated with respect to the weight vector after optimization, w(i+1). To apply MIRA to dependency parsing, we can simply see parsing as a multi-class classification problem in which each dependency tree is one of many possible classes for a sentence. However, that interpretation fails computationally because a general sentence has exponentially many possible dependency trees and thus exponentially many margin constraints. To circumvent this problem we make the assumption that the constraints that matter for large margin optimization are those involving the incorrect trees y′ with the highest scores s(x, y′). The resulting optimization made by MIRA (see Figure 3, line 4) would then be: min w(i+1) −w(i) s.t. s(xt, yt) −s(xt, y′) ≥L(yt, y′) ∀y′ ∈bestk(xt; w(i)) reducing the number of constraints to the constant k. We tested various values of k on a development data set and found that small values of k are sufficient to achieve close to best performance, justifying our assumption. In fact, as k grew we began to observe a slight degradation of performance, indicating some overfitting to the training data. All the experiments presented here use k = 5. The Eisner (1996) algorithm can be modified to find the k-best trees while only adding an additional O(k log k) factor to the runtime (Huang and Chiang, 2005). A more common approach is to factor the structure of the output space to yield a polynomial set of local constraints (Taskar et al., 2003; Taskar et al., 2004). One such factorization for dependency trees is min w(i+1) −w(i) s.t. s(l, j) −s(k, j) ≥1 ∀(l, j) ∈yt, (k, j) /∈yt It is trivial to show that if these O(n2) constraints are satisfied, then so are those in (1). We implemented this model, but found that the required training time was much larger than the k-best formulation and typically did not improve performance. Furthermore, the k-best formulation is more flexible with respect to the loss function since it does not assume the loss function can be factored into a sum of terms for each dependency. 
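Before turning to the feature set, here is a sketch of the k-best MIRA step just described, using sparse dictionaries for feature vectors and a cyclic coordinate-ascent scheme in the spirit of Hildreth's algorithm to solve the small quadratic program approximately. Function names, the iteration cap, and the sparse representation are our assumptions; the paper's implementation solves the QP with Hildreth's algorithm directly.

```python
def sparse_dot(u, v):
    if len(u) > len(v):
        u, v = v, u
    return sum(val * v.get(feat, 0.0) for feat, val in u.items())

def mira_update(weights, gold_feats, kbest, loss, iterations=10):
    """One MIRA step: the smallest change to `weights` such that the gold tree
    outscores each of the k-best incorrect trees by at least its loss.

    gold_feats : sparse feature vector (dict) of the correct tree
    kbest      : list of sparse feature vectors of the k highest-scoring trees
    loss       : list of losses (words with wrong parent), aligned with kbest
    """
    # a_i = f(x, y) - f(x, y'_i): the difference vector defining constraint i
    diffs = []
    for cand in kbest:
        d = dict(gold_feats)
        for feat, val in cand.items():
            d[feat] = d.get(feat, 0.0) - val
        diffs.append({f: v for f, v in d.items() if v != 0.0})

    alphas = [0.0] * len(diffs)
    new_w = dict(weights)
    for _ in range(iterations):
        for i, d in enumerate(diffs):
            norm = sparse_dot(d, d)
            if norm == 0.0:
                continue
            # how much margin constraint i is violated under current weights
            violation = loss[i] - sparse_dot(new_w, d)
            delta = max(-alphas[i], violation / norm)   # keep alpha_i >= 0
            alphas[i] += delta
            for feat, val in d.items():
                new_w[feat] = new_w.get(feat, 0.0) + delta * val
    return new_w
```

In the full training loop of Figure 3 this update is applied once per sentence, and the weight vectors are averaged over all iterations to reduce overfitting.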
2.4 Feature Set Finally, we need a suitable feature representation f(i, j) for each dependency. The basic features in our model are outlined in Table 1a and b. All features are conjoined with the direction of attachment as well as the distance between the two words being attached. These features represent a system of backoff from very specific features over words and partof-speech tags to less sparse features over just partof-speech tags. These features are added for both the entire words as well as the 5-gram prefix if the word is longer than 5 characters. Using just features over the parent-child node pairs in the tree was not enough for high accuracy, because all attachment decisions were made outside of the context in which the words occurred. To solve this problem, we added two other types of features, which can be seen in Table 1c. Features of the first type look at words that occur between a child and its parent. These features take the form of a POS trigram: the POS of the parent, of the child, and of a word in between, for all words linearly between the parent and the child. This feature was particularly helpful for nouns identifying their parent, since 94 a) Basic Uni-gram Features p-word, p-pos p-word p-pos c-word, c-pos c-word c-pos b) Basic Big-ram Features p-word, p-pos, c-word, c-pos p-pos, c-word, c-pos p-word, c-word, c-pos p-word, p-pos, c-pos p-word, p-pos, c-word p-word, c-word p-pos, c-pos c) In Between POS Features p-pos, b-pos, c-pos Surrounding Word POS Features p-pos, p-pos+1, c-pos-1, c-pos p-pos-1, p-pos, c-pos-1, c-pos p-pos, p-pos+1, c-pos, c-pos+1 p-pos-1, p-pos, c-pos, c-pos+1 Table 1: Features used by system. p-word: word of parent node in dependency tree. c-word: word of child node. p-pos: POS of parent node. c-pos: POS of child node. p-pos+1: POS to the right of parent in sentence. p-pos-1: POS to the left of parent. c-pos+1: POS to the right of child. c-pos-1: POS to the left of child. b-pos: POS of a word in between parent and child nodes. it would typically rule out situations when a noun attached to another noun with a verb in between, which is a very uncommon phenomenon. The second type of feature provides the local context of the attachment, that is, the words before and after the parent-child pair. This feature took the form of a POS 4-gram: The POS of the parent, child, word before/after parent and word before/after child. The system also used back-off features to various trigrams where one of the local context POS tags was removed. Adding these two features resulted in a large improvement in performance and brought the system to state-of-the-art accuracy. 2.5 System Summary Besides performance (see Section 3), the approach to dependency parsing we described has several other advantages. The system is very general and contains no language specific enhancements. In fact, the results we report for English and Czech use identical features, though are obviously trained on different data. The online learning algorithms themselves are intuitive and easy to implement. The efficient O(n3) parsing algorithm of Eisner allows the system to search the entire space of dependency trees while parsing thousands of sentences in a few minutes, which is crucial for discriminative training. We compare the speed of our model to a standard lexicalized phrase structure parser in Section 3.1 and show a significant improvement in parsing times on the testing data. The major limiting factor of the system is its restriction to features over single dependency attachments. 
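For concreteness, the feature templates of Table 1 above translate into feature strings roughly as in the following sketch; the exact string encoding, the distance bucketing, and the context handling are our own choices for illustration, not the system's.

```python
def extract_edge_features(words, tags, head, dep):
    """Feature strings for the directed edge words[head] -> words[dep], in the
    spirit of Table 1 (unigram, bigram, in-between POS, surrounding POS),
    all conjoined with attachment direction and a bucketed distance."""
    def tag_at(i):
        return tags[i] if 0 <= i < len(tags) else "<NULL>"

    direction = "R" if head < dep else "L"
    dist = abs(head - dep)
    ctx = f"{direction}:{dist if dist < 5 else '>=5'}"

    hw, ht, dw, dt = words[head], tags[head], words[dep], tags[dep]
    feats = [
        # basic uni-gram and bi-gram templates
        f"hw={hw}", f"ht={ht}", f"hw={hw}&ht={ht}",
        f"dw={dw}", f"dt={dt}", f"dw={dw}&dt={dt}",
        f"hw={hw}&ht={ht}&dw={dw}&dt={dt}",
        f"ht={ht}&dt={dt}", f"hw={hw}&dw={dw}",
        # in-between POS trigrams, one per word between head and dependent
        *[f"btw={ht}_{tag_at(b)}_{dt}"
          for b in range(min(head, dep) + 1, max(head, dep))],
        # surrounding POS 4-gram
        f"ctx4={tag_at(head-1)}_{ht}_{dt}_{tag_at(dep+1)}",
    ]
    return [f + "/" + ctx for f in feats]

words = ["<root>", "John", "hit", "the", "ball"]
tags = ["<root>", "NNP", "VBD", "DT", "NN"]
print(extract_edge_features(words, tags, 2, 4)[:5])
```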
Often, when determining the next dependent for a word, it would be useful to know previous attachment decisions and incorporate these into the features. It is fairly straightforward to modify the parsing algorithm to store previous attachments. However, any modification would result in an asymptotic increase in parsing complexity. 3 Experiments We tested our methods experimentally on the English Penn Treebank (Marcus et al., 1993) and on the Czech Prague Dependency Treebank (Hajiˇc, 1998). All experiments were run on a dual 64-bit AMD Opteron 2.4GHz processor. To create dependency structures from the Penn Treebank, we used the extraction rules of Yamada and Matsumoto (2003), which are an approximation to the lexicalization rules of Collins (1999). We split the data into three parts: sections 02-21 for training, section 22 for development and section 23 for evaluation. Currently the system has 6, 998, 447 features. Each instance only uses a tiny fraction of these features making sparse vector calculations possible. Our system assumes POS tags as input and uses the tagger of Ratnaparkhi (1996) to provide tags for the development and evaluation sets. Table 2 shows the performance of the systems that were compared. Y&M2003 is the SVM-shiftreduce parsing model of Yamada and Matsumoto (2003), N&S2004 is the memory-based learner of Nivre and Scholz (2004) and MIRA is the the system we have described. We also implemented an averaged perceptron system (Collins, 2002) (another online learning algorithm) for comparison. This table compares only pure dependency parsers that do 95 English Czech Accuracy Root Complete Accuracy Root Complete Y&M2003 90.3 91.6 38.4 N&S2004 87.3 84.3 30.4 Avg. Perceptron 90.6 94.0 36.5 82.9 88.0 30.3 MIRA 90.9 94.2 37.5 83.3 88.6 31.3 Table 2: Dependency parsing results for English and Czech. Accuracy is the number of words that correctly identified their parent in the tree. Root is the number of trees in which the root word was correctly identified. For Czech this is f-measure since a sentence may have multiple roots. Complete is the number of sentences for which the entire dependency tree was correct. not exploit phrase structure. We ensured that the gold standard dependencies of all systems compared were identical. Table 2 shows that the model described here performs as well or better than previous comparable systems, including that of Yamada and Matsumoto (2003). Their method has the potential advantage that SVM batch training takes into account all of the constraints from all training instances in the optimization, whereas online training only considers constraints from one instance at a time. However, they are fundamentally limited by their approximate search algorithm. In contrast, our system searches the entire space of dependency trees and most likely benefits greatly from this. This difference is amplified when looking at the percentage of trees that correctly identify the root word. The models that search the entire space will not suffer from bad approximations made early in the search and thus are more likely to identify the correct root, whereas the approximate algorithms are prone to error propagation, which culminates with attachment decisions at the top of the tree. When comparing the two online learning models, it can be seen that MIRA outperforms the averaged perceptron method. This difference is statistically significant, p < 0.005 (McNemar test on head selection accuracy). 
In our Czech experiments, we used the dependency trees annotated in the Prague Treebank, and the predefined training, development and evaluation sections of this data. The number of sentences in this data set is nearly twice that of the English treebank, leading to a very large number of features — 13, 450, 672. But again, each instance uses just a handful of these features. For POS tags we used the automatically generated tags in the data set. Though we made no language specific model changes, we did need to make some data specific changes. In particular, we used the method of Collins et al. (1999) to simplify part-of-speech tags since the rich tags used by Czech would have led to a large but rarely seen set of POS features. The model based on MIRA also performs well on Czech, again slightly outperforming averaged perceptron. Unfortunately, we do not know of any other parsing systems tested on the same data set. The Czech parser of Collins et al. (1999) was run on a different data set and most other dependency parsers are evaluated using English. Learning a model from the Czech training data is somewhat problematic since it contains some crossing dependencies which cannot be parsed by the Eisner algorithm. One trick is to rearrange the words in the training set so that all trees are nested. This at least allows the training algorithm to obtain reasonably low error on the training set. We found that this did improve performance slightly to 83.6% accuracy. 3.1 Lexicalized Phrase Structure Parsers It is well known that dependency trees extracted from lexicalized phrase structure parsers (Collins, 1999; Charniak, 2000) typically are more accurate than those produced by pure dependency parsers (Yamada and Matsumoto, 2003). We compared our system to the Bikel re-implementation of the Collins parser (Bikel, 2004; Collins, 1999) trained with the same head rules of our system. There are two ways to extract dependencies from lexicalized phrase structure. The first is to use the automatically generated dependencies that are explicit in the lexicalization of the trees, we call this system Collinsauto. The second is to take just the phrase structure output of the parser and run the automatic head rules over it to extract the dependencies, we call this sys96 English Accuracy Root Complete Complexity Time Collins-auto 88.2 92.3 36.1 O(n5) 98m 21s Collins-rules 91.4 95.1 42.6 O(n5) 98m 21s MIRA-Normal 90.9 94.2 37.5 O(n3) 5m 52s MIRA-Collins 92.2 95.8 42.9 O(n5) 105m 08s Table 3: Results comparing our system to those based on the Collins parser. Complexity represents the computational complexity of each parser and Time the CPU time to parse sec. 23 of the Penn Treebank. tem Collins-rules. Table 3 shows the results comparing our system, MIRA-Normal, to the Collins parser for English. All systems are implemented in Java and run on the same machine. Interestingly, the dependencies that are automatically produced by the Collins parser are worse than those extracted statically using the head rules. Arguably, this displays the artificialness of English dependency parsing using dependencies automatically extracted from treebank phrase-structure trees. Our system falls in-between, better than the automatically generated dependency trees and worse than the head-rule extracted trees. Since the dependencies returned from our system are better than those actually learnt by the Collins parser, one could argue that our model is actually learning to parse dependencies more accurately. 
However, phrase structure parsers are built to maximize the accuracy of the phrase structure and use lexicalization as just an additional source of information. Thus it is not too surprising that the dependencies output by the Collins parser are not as accurate as our system, which is trained and built to maximize accuracy on dependency trees. In complexity and run-time, our system is a huge improvement over the Collins parser. The final system in Table 3 takes the output of Collins-rules and adds a feature to MIRA-Normal that indicates for given edge, whether the Collins parser believed this dependency actually exists, we call this system MIRA-Collins. This is a well known discriminative training trick — using the suggestions of a generative system to influence decisions. This system can essentially be considered a corrector of the Collins parser and represents a significant improvement over it. However, there is an added complexity with such a model as it requires the output of the O(n5) Collins parser. k=1 k=2 k=5 k=10 k=20 Accuracy 90.73 90.82 90.88 90.92 90.91 Train Time 183m 235m 627m 1372m 2491m Table 4: Evaluation of k-best MIRA approximation. 3.2 k-best MIRA Approximation One question that can be asked is how justifiable is the k-best MIRA approximation. Table 4 indicates the accuracy on testing and the time it took to train models with k = 1, 2, 5, 10, 20 for the English data set. Even though the parsing algorithm is proportional to O(k log k), empirically, the training times scale linearly with k. Peak performance is achieved very early with a slight degradation around k=20. The most likely reason for this phenomenon is that the model is overfitting by ensuring that even unlikely trees are separated from the correct tree proportional to their loss. 4 Summary We described a successful new method for training dependency parsers. We use simple linear parsing models trained with margin-sensitive online training algorithms, achieving state-of-the-art performance with relatively modest training times and no need for pruning heuristics. We evaluated the system on both English and Czech data to display state-of-theart performance without any language specific enhancements. Furthermore, the model can be augmented to include features over lexicalized phrase structure parsing decisions to increase dependency accuracy over those parsers. We plan on extending our parser in two ways. First, we would add labels to dependencies to represent grammatical roles. Those labels are very important for using parser output in tasks like information extraction or machine translation. Second, 97 we are looking at model extensions to allow nonprojective dependencies, which occur in languages such as Czech, German and Dutch. Acknowledgments: We thank Jan Hajiˇc for answering queries on the Prague treebank, and Joakim Nivre for providing the Yamada and Matsumoto (2003) head rules for English that allowed for a direct comparison with our systems. This work was supported by NSF ITR grants 0205456, 0205448, and 0428193. References D.M. Bikel. 2004. Intricacies of Collins parsing model. Computational Linguistics. Y. Censor and S.A. Zenios. 1997. Parallel optimization : theory, algorithms, and applications. Oxford University Press. E. Charniak. 2000. A maximum-entropy-inspired parser. In Proc. NAACL. S. Clark and J.R. Curran. 2004. Parsing the WSJ using CCG and log-linear models. In Proc. ACL. M. Collins and B. Roark. 2004. Incremental parsing with the perceptron algorithm. In Proc. ACL. M. Collins, J. Hajiˇc, L. 
Ramshaw, and C. Tillmann. 1999. A statistical parser for Czech. In Proc. ACL. M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. M. Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proc. EMNLP. K. Crammer and Y. Singer. 2001. On the algorithmic implementation of multiclass kernel based vector machines. JMLR. K. Crammer and Y. Singer. 2003. Ultraconservative online algorithms for multiclass problems. JMLR. K. Crammer, O. Dekel, S. Shalev-Shwartz, and Y. Singer. 2003. Online passive aggressive algorithms. In Proc. NIPS. A. Culotta and J. Sorensen. 2004. Dependency tree kernels for relation extraction. In Proc. ACL. Y. Ding and M. Palmer. 2005. Machine translation using probabilistic synchronous dependency insertion grammars. In Proc. ACL. J. Eisner and G. Satta. 1999. Efficient parsing for bilexical context-free grammars and head-automaton grammars. In Proc. ACL. J. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proc. COLING. J. Hajiˇc. 1998. Building a syntactically annotated corpus: The Prague dependency treebank. Issues of Valency and Meaning. L. Huang and D. Chiang. 2005. Better k-best parsing. Technical Report MS-CIS-05-08, University of Pennsylvania. Richard Hudson. 1984. Word Grammar. Blackwell. T. Joachims. 2002. Learning to Classify Text using Support Vector Machines. Kluwer. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML. M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of english: the penn treebank. Computational Linguistics. J. Nivre and M. Scholz. 2004. Deterministic dependency parsing of english text. In Proc. COLING. A. Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proc. EMNLP. A. Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. Machine Learning. S. Riezler, T. King, R. Kaplan, R. Crouch, J. Maxwell, and M. Johnson. 2002. Parsing the Wall Street Journal using a lexical-functional grammar and discriminative estimation techniques. In Proc. ACL. F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. In Proc. HLT-NAACL. Y. Shinyama, S. Sekine, K. Sudo, and R. Grishman. 2002. Automatic paraphrase acquisition from news articles. In Proc. HLT. B. Taskar, C. Guestrin, and D. Koller. 2003. Max-margin Markov networks. In Proc. NIPS. B. Taskar, D. Klein, M. Collins, D. Koller, and C. Manning. 2004. Max-margin parsing. In Proc. EMNLP. H. Yamada and Y. Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proc. IWPT. 98
2005
12
Proceedings of the 43rd Annual Meeting of the ACL, pages 99–106, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Pseudo-Projective Dependency Parsing Joakim Nivre and Jens Nilsson School of Mathematics and Systems Engineering V¨axj¨o University SE-35195 V¨axj¨o, Sweden {nivre,jni}@msi.vxu.se Abstract In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures. Experiments using data from the Prague Dependency Treebank show that the combined system can handle nonprojective constructions with a precision sufficient to yield a significant improvement in overall parsing accuracy. This leads to the best reported performance for robust non-projective parsing of Czech. 1 Introduction It is sometimes claimed that one of the advantages of dependency grammar over approaches based on constituency is that it allows a more adequate treatment of languages with variable word order, where discontinuous syntactic constructions are more common than in languages like English (Mel’ˇcuk, 1988; Covington, 1990). However, this argument is only plausible if the formal framework allows non-projective dependency structures, i.e. structures where a head and its dependents may correspond to a discontinuous constituent. From the point of view of computational implementation this can be problematic, since the inclusion of non-projective structures makes the parsing problem more complex and therefore compromises efficiency and in practice also accuracy and robustness. Thus, most broad-coverage parsers based on dependency grammar have been restricted to projective structures. This is true of the widely used link grammar parser for English (Sleator and Temperley, 1993), which uses a dependency grammar of sorts, the probabilistic dependency parser of Eisner (1996), and more recently proposed deterministic dependency parsers (Yamada and Matsumoto, 2003; Nivre et al., 2004). It is also true of the adaptation of the Collins parser for Czech (Collins et al., 1999) and the finite-state dependency parser for Turkish by Oflazer (2003). This is in contrast to dependency treebanks, e.g. Prague Dependency Treebank (Hajiˇc et al., 2001b), Danish Dependency Treebank (Kromann, 2003), and the METU Treebank of Turkish (Oflazer et al., 2003), which generally allow annotations with nonprojective dependency structures. The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions. While the proportion of sentences containing non-projective dependencies is often 15–25%, the total proportion of non-projective arcs is normally only 1–2%. As long as the main evaluation metric is dependency accuracy per word, with state-of-the-art accuracy mostly below 90%, the penalty for not handling non-projective constructions is almost negligible. Still, from a theoretical point of view, projective parsing of non-projective structures has the drawback that it rules out perfect accuracy even as an asymptotic goal. 99 (“Only one of them concerns quality.”) R Z (Out-of   ? AuxP P nich them   ? Atr VB je is T jen only   ? AuxZ C jedna one-FEM-SG   ? Sb R na to   ? AuxP N4 kvalitu quality ?   Adv Z: . .)   ? 
AuxZ Figure 1: Dependency graph for Czech sentence from the Prague Dependency Treebank1 There exist a few robust broad-coverage parsers that produce non-projective dependency structures, notably Tapanainen and J¨arvinen (1997) and Wang and Harper (2004) for English, Foth et al. (2004) for German, and Holan (2004) for Czech. In addition, there are several approaches to non-projective dependency parsing that are still to be evaluated in the large (Covington, 1990; Kahane et al., 1998; Duchier and Debusmann, 2001; Holan et al., 2001; Hellwig, 2003). Finally, since non-projective constructions often involve long-distance dependencies, the problem is closely related to the recovery of empty categories and non-local dependencies in constituency-based parsing (Johnson, 2002; Dienes and Dubey, 2003; Jijkoun and de Rijke, 2004; Cahill et al., 2004; Levy and Manning, 2004; Campbell, 2004). In this paper, we show how non-projective dependency parsing can be achieved by combining a datadriven projective parser with special graph transformation techniques. First, the training data for the parser is projectivized by applying a minimal number of lifting operations (Kahane et al., 1998) and encoding information about these lifts in arc labels. When the parser is trained on the transformed data, it will ideally learn not only to construct projective dependency structures but also to assign arc labels that encode information about lifts. By applying an inverse transformation to the output of the parser, arcs with non-standard labels can be lowered to their proper place in the dependency graph, giving rise 1The dependency graph has been modified to make the final period a dependent of the main verb instead of being a dependent of a special root node for the sentence. to non-projective structures. We call this pseudoprojective dependency parsing, since it is based on a notion of pseudo-projectivity (Kahane et al., 1998). The rest of the paper is structured as follows. In section 2 we introduce the graph transformation techniques used to projectivize and deprojectivize dependency graphs, and in section 3 we describe the data-driven dependency parser that is the core of our system. We then evaluate the approach in two steps. First, in section 4, we evaluate the graph transformation techniques in themselves, with data from the Prague Dependency Treebank and the Danish Dependency Treebank. In section 5, we then evaluate the entire parsing system by training and evaluating on data from the Prague Dependency Treebank. 2 Dependency Graph Transformations We assume that the goal in dependency parsing is to construct a labeled dependency graph of the kind depicted in Figure 1. Formally, we define dependency graphs as follows: 1. Let R = {r1, . . . , rm} be the set of permissible dependency types (arc labels). 2. A dependency graph for a string of words W = w1· · ·wn is a labeled directed graph D = (W, A), where (a) W is the set of nodes, i.e. word tokens in the input string, ordered by a linear precedence relation <, (b) A is a set of labeled arcs (wi, r, wj), where wi, wj ∈W, r ∈R, (c) for every wj ∈W, there is at most one arc (wi, r, wj) ∈A. 100 (“Only one of them concerns quality.”) R Z (Out-of   ? AuxP P nich them   ? Atr VB je is T jen only   ? AuxZ C jedna one-FEM-SG   ? Sb R na to   ? AuxP N4 kvalitu quality ?   Adv Z: . .)   ? AuxZ Figure 2: Projectivized dependency graph for Czech sentence 3. A graph D = (W, A) is well-formed iff it is acyclic and connected. 
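To make the preceding definition concrete, the following minimal sketch (our illustration, not code from the paper) represents a labeled dependency graph and checks the three well-formedness conditions: at most one head per token, acyclicity, and connectedness. Token positions stand in for the word tokens themselves.

```python
from collections import defaultdict

class DependencyGraph:
    """Labeled dependency graph D = (W, A) over token positions 0..n-1."""
    def __init__(self, words):
        self.words = list(words)           # W, ordered by linear precedence
        self.arcs = set()                  # A: set of (head, label, dependent) triples

    def add_arc(self, head, label, dep):
        self.arcs.add((head, label, dep))

    def head_of(self, dep):
        heads = [h for (h, r, d) in self.arcs if d == dep]
        return heads[0] if heads else None

    def is_well_formed(self):
        # condition (c): at most one incoming arc per node
        indegree = defaultdict(int)
        for (h, r, d) in self.arcs:
            indegree[d] += 1
        if any(count > 1 for count in indegree.values()):
            return False
        # acyclicity: walking up the head chain must never revisit a node
        for start in range(len(self.words)):
            seen, node = set(), start
            while node is not None:
                if node in seen:
                    return False
                seen.add(node)
                node = self.head_of(node)
        # connectedness, viewing arcs as undirected edges
        if not self.words:
            return True
        adjacent = defaultdict(set)
        for (h, r, d) in self.arcs:
            adjacent[h].add(d)
            adjacent[d].add(h)
        stack, seen = [0], {0}
        while stack:
            for nxt in adjacent[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return len(seen) == len(self.words)
```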
If (wi, r, wj) ∈A, we say that wi is the head of wj and wj a dependent of wi. In the following, we use the notation wi r→wj to mean that (wi, r, wj) ∈A; we also use wi →wj to denote an arc with unspecified label and wi →∗wj for the reflexive and transitive closure of the (unlabeled) arc relation. The dependency graph in Figure 1 satisfies all the defining conditions above, but it fails to satisfy the condition of projectivity (Kahane et al., 1998): 1. An arc wi →wk is projective iff, for every word wj occurring between wi and wk in the string (wi <wj <wk or wi >wj >wk), wi →∗wj. 2. A dependency graph D = (W, A) is projective iff every arc in A is projective. The arc connecting the head jedna (one) to the dependent Z (out-of) spans the token je (is), which is not dominated by jedna. As observed by Kahane et al. (1998), any (nonprojective) dependency graph can be transformed into a projective one by a lifting operation, which replaces each non-projective arc wj →wk by a projective arc wi →wk such that wi →∗wj holds in the original graph. Here we use a slightly different notion of lift, applying to individual arcs and moving their head upwards one step at a time: LIFT(wj →wk) = ( wi →wk if wi →wj undefined otherwise Intuitively, lifting an arc makes the word wk dependent on the head wi of its original head wj (which is unique in a well-formed dependency graph), unless wj is a root in which case the operation is undefined (but then wj →wk is necessarily projective if the dependency graph is well-formed). Projectivizing a dependency graph by lifting nonprojective arcs is a nondeterministic operation in the general case. However, since we want to preserve as much of the original structure as possible, we are interested in finding a transformation that involves a minimal number of lifts. Even this may be nondeterministic, in case the graph contains several non-projective arcs whose lifts interact, but we use the following algorithm to construct a minimal projective transformation D′ = (W, A′) of a (nonprojective) dependency graph D = (W, A): PROJECTIVIZE(W, A) 1 A′ ←A 2 while (W, A′) is non-projective 3 a ←SMALLEST-NONP-ARC(A′) 4 A′ ←(A′ −{a}) ∪{LIFT(a)} 5 return (W, A′) The function SMALLEST-NONP-ARC returns the non-projective arc with the shortest distance from head to dependent (breaking ties from left to right). Applying the function PROJECTIVIZE to the graph in Figure 1 yields the graph in Figure 2, where the problematic arc pointing to Z has been lifted from the original head jedna to the ancestor je. Using the terminology of Kahane et al. (1998), we say that jedna is the syntactic head of Z, while je is its linear head in the projectivized representation. Unlike Kahane et al. (1998), we do not regard a projectivized representation as the final target of the parsing process. Instead, we want to apply an in101 Lifted arc label Path labels Number of labels Baseline d p n Head d↑h p n(n + 1) Head+Path d↑h p↓ 2n(n + 1) Path d↑ p↓ 4n Table 1: Encoding schemes (d = dependent, h = syntactic head, p = path; n = number of dependency types) verse transformation to recover the underlying (nonprojective) dependency graph. In order to facilitate this task, we extend the set of arc labels to encode information about lifting operations. In principle, it would be possible to encode the exact position of the syntactic head in the label of the arc from the linear head, but this would give a potentially infinite set of arc labels and would make the training of the parser very hard. 
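The PROJECTIVIZE procedure above can be transcribed into code roughly as follows. This is only a sketch under simplifying assumptions (arcs are (head, label, dependent) triples over token positions, the graph is well-formed, and the label adjustments of the encoding schemes discussed below are omitted); it is not the authors' implementation.

```python
def is_projective_arc(arc, arcs):
    """An arc w_i -> w_k is projective iff w_i dominates every word strictly between w_i and w_k."""
    head, _, dep = arc
    lo, hi = min(head, dep), max(head, dep)
    heads = {d: h for (h, r, d) in arcs}
    def dominates(ancestor, node):
        while node in heads:
            node = heads[node]
            if node == ancestor:
                return True
        return False
    return all(dominates(head, j) for j in range(lo + 1, hi))

def lift(arc, arcs):
    """LIFT(w_j -> w_k) = w_i -> w_k, where w_i is the head of w_j (undefined if w_j is a root)."""
    wj, label, wk = arc
    for (h, r, d) in arcs:
        if d == wj:
            return (h, label, wk)
    return None

def projectivize(arcs):
    arcs = set(arcs)
    while True:
        nonprojective = [a for a in arcs if not is_projective_arc(a, arcs)]
        if not nonprojective:
            return arcs
        # smallest non-projective arc: shortest head-dependent distance, ties broken left to right
        a = min(nonprojective, key=lambda x: (abs(x[0] - x[2]), min(x[0], x[2])))
        arcs.remove(a)
        arcs.add(lift(a, arcs))
```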
In practice, we can therefore expect a trade-off such that increasing the amount of information encoded in arc labels will cause an increase in the accuracy of the inverse transformation but a decrease in the accuracy with which the parser can construct the labeled representations. To explore this tradeoff, we have performed experiments with three different encoding schemes (plus a baseline), which are described schematically in Table 1. The baseline simply retains the original labels for all arcs, regardless of whether they have been lifted or not, and the number of distinct labels is therefore simply the number n of distinct dependency types.2 In the first encoding scheme, called Head, we use a new label d↑h for each lifted arc, where d is the dependency relation between the syntactic head and the dependent in the non-projective representation, and h is the dependency relation that the syntactic head has to its own head in the underlying structure. Using this encoding scheme, the arc from je to Z in Figure 2 would be assigned the label AuxP↑Sb (signifying an AuxP that has been lifted from a Sb). In the second scheme, Head+Path, we in addition modify the label of every arc along the lifting path from the syntactic to the linear head so that if the original label is p the new label is p↓. Thus, the arc from je to jedna will be labeled Sb↓(to indicate that there is a syntactic head below it). In the third and final scheme, denoted Path, we keep the extra infor2Note that this is a baseline for the parsing experiment only (Experiment 2). For Experiment 1 it is meaningless as a baseline, since it would result in 0% accuracy. mation on path labels but drop the information about the syntactic head of the lifted arc, using the label d↑ instead of d↑h (AuxP↑instead of AuxP↑Sb). As can be seen from the last column in Table 1, both Head and Head+Path may theoretically lead to a quadratic increase in the number of distinct arc labels (Head+Path being worse than Head only by a constant factor), while the increase is only linear in the case of Path. On the other hand, we can expect Head+Path to be the most useful representation for reconstructing the underlying non-projective dependency graph. In approaching this problem, a variety of different methods are conceivable, including a more or less sophisticated use of machine learning. In the present study, we limit ourselves to an algorithmic approach, using a deterministic breadthfirst search. The details of the transformation procedure are slightly different depending on the encoding schemes: • Head: For every arc of the form wi d↑h −→wn, we search the graph top-down, left-to-right, breadth-first starting at the head node wi. If we find an arc wl h −→wm, called a target arc, we replace wi d↑h −→wn by wm d −→wn; otherwise we replace wi d↑h −→wn by wi d −→wn (i.e. we let the linear head be the syntactic head). • Head+Path: Same as Head, but the search only follows arcs of the form wj p↓ −→wk and a target arc must have the form wl h↓ −→wm; if no target arc is found, Head is used as backoff. • Path: Same as Head+Path, but a target arc must have the form wl p↓ −→wm and no outgoing arcs of the form wm p′↓ −→wo; no backoff. In section 4 we evaluate these transformations with respect to projectivized dependency treebanks, and in section 5 they are applied to parser output. 
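As an illustration of the inverse transformation (again a sketch, not the authors' code), the Head scheme could be de-projectivized as follows, using 'd|h' as a plain-text stand-in for a lifted label d↑h; the Head+Path and Path schemes would additionally restrict the search to arcs whose labels carry the ↓ marker.

```python
from collections import deque

def deprojectivize_head(arcs):
    """arcs: set of (head, label, dependent) triples; lifted arcs carry labels of the form 'd|h'."""
    out = set()
    children = {}
    for (h, r, d) in arcs:
        children.setdefault(h, []).append((d, r))
    for h in children:
        children[h].sort()                         # approximate left-to-right order by position
    for (wi, label, wn) in arcs:
        if '|' not in label:
            out.add((wi, label, wn))
            continue
        d, target = label.split('|', 1)            # original relation d, relation h of the syntactic head
        new_head, queue, found = wi, deque([wi]), False
        while queue and not found:                 # top-down, breadth-first search below the linear head
            node = queue.popleft()
            for (child, rel) in children.get(node, []):
                if rel == target and child != wn:
                    new_head, found = child, True  # target arc found: its dependent is the syntactic head
                    break
                queue.append(child)
        out.add((new_head, d, wn))                 # falls back to the linear head if no target arc is found
    return out
```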
Before 102 Feature type Top−1 Top Next Next+1 Next+2 Next+3 Word form + + + + Part-of-speech + + + + + + Dep type of head + leftmost dep + + rightmost dep + Table 2: Features used in predicting the next parser action we turn to the evaluation, however, we need to introduce the data-driven dependency parser used in the latter experiments. 3 Memory-Based Dependency Parsing In the experiments below, we employ a data-driven deterministic dependency parser producing labeled projective dependency graphs,3 previously tested on Swedish (Nivre et al., 2004) and English (Nivre and Scholz, 2004). The parser builds dependency graphs by traversing the input from left to right, using a stack to store tokens that are not yet complete with respect to their dependents. At each point during the derivation, the parser has a choice between pushing the next input token onto the stack – with or without adding an arc from the token on top of the stack to the token pushed – and popping a token from the stack – with or without adding an arc from the next input token to the token popped. More details on the parsing algorithm can be found in Nivre (2003). The choice between different actions is in general nondeterministic, and the parser relies on a memorybased classifier, trained on treebank data, to predict the next action based on features of the current parser configuration. Table 2 shows the features used in the current version of the parser. At each point during the derivation, the prediction is based on six word tokens, the two topmost tokens on the stack, and the next four input tokens. For each token, three types of features may be taken into account: the word form; the part-of-speech assigned by an automatic tagger; and labels on previously assigned dependency arcs involving the token – the arc from its head and the arcs to its leftmost and rightmost dependent, respectively. Except for the left3The graphs satisfy all the well-formedness conditions given in section 2 except (possibly) connectedness. For robustness reasons, the parser may output a set of dependency trees instead of a single tree. most dependent of the next input token, dependency type features are limited to tokens on the stack. The prediction based on these features is a knearest neighbor classification, using the IB1 algorithm and k = 5, the modified value difference metric (MVDM) and class voting with inverse distance weighting, as implemented in the TiMBL software package (Daelemans et al., 2003). More details on the memory-based prediction can be found in Nivre et al. (2004) and Nivre and Scholz (2004). 4 Experiment 1: Treebank Transformation The first experiment uses data from two dependency treebanks. The Prague Dependency Treebank (PDT) consists of more than 1M words of newspaper text, annotated on three levels, the morphological, analytical and tectogrammatical levels (Hajiˇc, 1998). Our experiments all concern the analytical annotation, and the first experiment is based only on the training part. The Danish Dependency Treebank (DDT) comprises about 100K words of text selected from the Danish PAROLE corpus, with annotation of primary and secondary dependencies (Kromann, 2003). The entire treebank is used in the experiment, but only primary dependencies are considered.4 In all experiments, punctuation tokens are included in the data but omitted in evaluation scores. In the first part of the experiment, dependency graphs from the treebanks were projectivized using the algorithm described in section 2. 
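The transformation statistics reported next can be derived directly from the projectivization sketch given earlier. The following helper is illustrative only (it reuses the is_projective_arc and lift functions from that sketch and makes no claim to reproduce the authors' exact counting): it computes the proportion of non-projective sentences and arcs and a histogram of how many lifts each lifted arc needs.

```python
from collections import Counter

def nonprojectivity_stats(treebank):
    """treebank: iterable of sentences, each given as a set of (head, label, dependent) arcs.
    Returns (% non-projective sentences, % non-projective arcs, histogram of lifts per lifted arc)."""
    sent_total = sent_nonp = arc_total = arc_nonp = 0
    lift_histogram = Counter()
    for arcs in treebank:
        sent_total += 1
        arc_total += len(arcs)
        nonp_deps = {d for (h, r, d) in arcs if not is_projective_arc((h, r, d), arcs)}
        arc_nonp += len(nonp_deps)
        sent_nonp += bool(nonp_deps)
        # replay PROJECTIVIZE, counting how often each dependent's arc is lifted
        lifts, current = Counter(), set(arcs)
        while True:
            bad = [a for a in current if not is_projective_arc(a, current)]
            if not bad:
                break
            a = min(bad, key=lambda x: (abs(x[0] - x[2]), min(x[0], x[2])))
            current.remove(a)
            current.add(lift(a, current))
            lifts[a[2]] += 1
        # note: the set of lifted arcs need not coincide exactly with the initially non-projective ones
        lift_histogram.update(lifts.values())
    return (100.0 * sent_nonp / sent_total,
            100.0 * arc_nonp / arc_total,
            lift_histogram)
```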
As shown in Table 3, the proportion of sentences containing some non-projective dependency ranges from about 15% in DDT to almost 25% in PDT. However, the overall percentage of non-projective arcs is less than 2% in PDT and less than 1% in DDT. The last four 4If secondary dependencies had been included, the dependency graphs would not have satisfied the well-formedness conditions formulated in section 2. 103 # Lifts in projectivization Data set # Sentences % NonP # Tokens % NonP 1 2 3 >3 PDT training 73,088 23.15 1,255,333 1.81 93.79 5.60 0.51 0.11 DDT total 5,512 15.48 100,238 0.94 79.49 13.28 4.36 2.87 Table 3: Non-projective sentences and arcs in PDT and DDT (NonP = non-projective) Data set Head H+P Path PDT training (28 labels) 92.3 (230) 99.3 (314) 97.3 (84) DDT total (54 labels) 92.3 (123) 99.8 (147) 98.3 (99) Table 4: Percentage of non-projective arcs recovered correctly (number of labels in parentheses) columns in Table 3 show the distribution of nonprojective arcs with respect to the number of lifts required. It is worth noting that, although nonprojective constructions are less frequent in DDT than in PDT, they seem to be more deeply nested, since only about 80% can be projectivized with a single lift, while almost 95% of the non-projective arcs in PDT only require a single lift. In the second part of the experiment, we applied the inverse transformation based on breadth-first search under the three different encoding schemes. The results are given in Table 4. As expected, the most informative encoding, Head+Path, gives the highest accuracy with over 99% of all non-projective arcs being recovered correctly in both data sets. However, it can be noted that the results for the least informative encoding, Path, are almost comparable, while the third encoding, Head, gives substantially worse results for both data sets. We also see that the increase in the size of the label sets for Head and Head+Path is far below the theoretical upper bounds given in Table 1. The increase is generally higher for PDT than for DDT, which indicates a greater diversity in non-projective constructions. 5 Experiment 2: Memory-Based Parsing The second experiment is limited to data from PDT.5 The training part of the treebank was projectivized under different encoding schemes and used to train memory-based dependency parsers, which were run on the test part of the treebank, consisting of 7,507 5Preliminary experiments using data from DDT indicated that the limited size of the treebank creates a severe sparse data problem with respect to non-projective constructions. sentences and 125,713 tokens.6 The inverse transformation was applied to the output of the parsers and the result compared to the gold standard test set. Table 5 shows the overall parsing accuracy attained with the three different encoding schemes, compared to the baseline (no special arc labels) and to training directly on non-projective dependency graphs. Evaluation metrics used are Attachment Score (AS), i.e. the proportion of tokens that are attached to the correct head, and Exact Match (EM), i.e. the proportion of sentences for which the dependency graph exactly matches the gold standard. In the labeled version of these metrics (L) both heads and arc labels must be correct, while the unlabeled version (U) only considers heads. 
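These metrics amount to a few counters; the following is a minimal sketch (ours, not the evaluation software used in the paper), assuming each analysis is given as a {dependent: (head, label)} dictionary per sentence and ignoring the punctuation filtering mentioned earlier.

```python
def evaluate(gold_sents, pred_sents):
    """Returns unlabeled/labeled attachment score and exact match over a corpus."""
    tok_total = ua = la = 0
    sent_total = uem = lem = 0
    for gold, pred in zip(gold_sents, pred_sents):
        sent_total += 1
        u_ok = l_ok = True
        for dep, (g_head, g_label) in gold.items():
            tok_total += 1
            p_head, p_label = pred.get(dep, (None, None))
            if p_head == g_head:
                ua += 1
                if p_label == g_label:
                    la += 1
                else:
                    l_ok = False
            else:
                u_ok = l_ok = False
        uem += u_ok
        lem += l_ok
    return {'UAS': ua / tok_total, 'LAS': la / tok_total,
            'UEM': uem / sent_total, 'LEM': lem / sent_total}
```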
The first thing to note is that projectivizing helps in itself, even if no encoding is used, as seen from the fact that the projective baseline outperforms the non-projective training condition by more than half a percentage point on attachment score, although the gain is much smaller with respect to exact match. The second main result is that the pseudo-projective approach to parsing (using special arc labels to guide an inverse transformation) gives a further improvement of about one percentage point on attachment score. With respect to exact match, the improvement is even more noticeable, which shows quite clearly that even if non-projective dependencies are rare on the token level, they are nevertheless important for getting the global syntactic structure correct. All improvements over the baseline are statistically significant beyond the 0.01 level (McNemar's test).

Encoding        UAS   LAS   UEM   LEM
Non-projective  78.5  71.3  28.9  20.6
Baseline        79.1  72.0  29.2  20.7
Head            80.1  72.8  31.6  22.2
Head+Path       80.0  72.8  31.8  22.4
Path            80.0  72.7  31.6  22.0

Table 5: Parsing accuracy (AS = attachment score, EM = exact match; U = unlabeled, L = labeled)

            Unlabeled            Labeled
Encoding    P     R     F        P     R     F
Head        61.3  54.1  57.5     55.2  49.8  52.4
Head+Path   63.9  54.9  59.0     57.9  50.6  54.0
Path        58.2  49.5  53.4     51.0  45.7  48.2

Table 6: Precision, recall and F-measure for non-projective arcs

6 The part-of-speech tagging used in both training and testing was the uncorrected output of an HMM tagger distributed with the treebank; cf. Hajič et al. (2001a).

By contrast, when we turn to a comparison of the three encoding schemes it is hard to find any significant differences, and the overall impression is that it makes little or no difference which encoding scheme is used, as long as there is some indication of which words are assigned their linear head instead of their syntactic head by the projective parser. This may seem surprising, given the experiments reported in section 4, but the explanation is probably that the non-projective dependencies that can be recovered at all are of the simple kind that only requires a single lift, where the encoding of path information is often redundant. It is likely that the more complex cases, where path information could make a difference, are beyond the reach of the parser in most cases.

However, if we consider precision, recall and F-measure on non-projective dependencies only, as shown in Table 6, some differences begin to emerge. The most informative scheme, Head+Path, gives the highest scores, although with respect to Head the difference is not statistically significant, while the least informative scheme, Path – with almost the same performance on treebank transformation – is significantly lower (p < 0.01). On the other hand, given that all schemes have similar parsing accuracy overall, this means that the Path scheme is the least likely to introduce errors on projective arcs.

The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. Although the best published result for the Collins parser is 80% UAS (Collins, 1999), this parser reaches 82% when trained on the entire training data set, and an adapted version of Charniak's parser (Charniak, 2000) performs at 84% (Jan Hajič, pers. comm.). However, the accuracy is considerably higher than previously reported results for robust non-projective parsing of Czech, with a best performance of 73% UAS (Holan, 2004).
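The McNemar test used above for significance can be sketched as follows (an illustration over per-token head decisions, not the authors' testing code).

```python
from math import sqrt
from statistics import NormalDist

def mcnemar(gold_heads, heads_a, heads_b):
    """Two-sided McNemar test on head-attachment decisions of systems A and B."""
    b = c = 0
    for i, g in enumerate(gold_heads):
        a_ok, b_ok = heads_a[i] == g, heads_b[i] == g
        if a_ok and not b_ok:
            b += 1
        elif b_ok and not a_ok:
            c += 1
    if b + c == 0:
        return 1.0
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)        # with continuity correction
    z = sqrt(chi2)
    return 2 * (1 - NormalDist().cdf(z))          # chi-square(1) tail expressed via the normal CDF
```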
Compared to related work on the recovery of long-distance dependencies in constituency-based parsing, our approach is similar to that of Dienes and Dubey (2003) in that the processing of non-local dependencies is partly integrated in the parsing process, via an extension of the set of syntactic categories, whereas most other approaches rely on postprocessing only. However, while Dienes and Dubey recognize empty categories in a pre-processing step and only let the parser find their antecedents, we use the parser both to detect dislocated dependents and to predict either the type or the location of their syntactic head (or both) and use post-processing only to transform the graph in accordance with the parser’s analysis. 6 Conclusion We have presented a new method for non-projective dependency parsing, based on a combination of data-driven projective dependency parsing and graph transformation techniques. The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, 105 especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. Acknowledgements This work was supported in part by the Swedish Research Council (621-2002-4207). Memory-based classifiers for the experiments were created using TiMBL (Daelemans et al., 2003). Special thanks to Jan Hajiˇc and Matthias Trautner Kromann for assistance with the Czech and Danish data, respectively, and to Jan Hajiˇc, Tom´aˇs Holan, Dan Zeman and three anonymous reviewers for valuable comments on a preliminary version of the paper. References Cahill, A., Burke, M., O’Donovan, R., Van Genabith, J. and Way, A. 2004. Long-distance dependency resolution in automatically acquired wide-coverage PCFG-based LFG approximations. In Proceedings of ACL. Campbell, R. 2004. Using linguistic principles to recover empty categories. In Proceedings of ACL. Charniak, E. 2000. A maximum-entropy-inspired parser. In Proceedings of NAACL. Collins, M., Hajiˇc, J., Brill, E., Ramshaw, L. and Tillmann, C. 1999. A statistical parser for Czech. In Proceedings of ACL. Collins, M. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Covington, M. A. 1990. Parsing discontinuous constituents in dependency grammar. Computational Linguistics, 16:234– 236. Daelemans, W., Zavrel, J., van der Sloot, K. and van den Bosch, A. 2003. TiMBL: Tilburg Memory Based Learner, version 5.0, Reference Guide. Technical Report ILK 03-10, Tilburg University, ILK. Dienes, P. and Dubey, A. 2003. Deep syntactic processing by combining shallow methods. In Proceedings of ACL. Duchier, D. and Debusmann, R. 2001. Topological dependency trees: A constraint-based account of linear precedence. In Proceedings of ACL. Eisner, J. M. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of COLING. Foth, K., Daum, M. and Menzel, W. 2004. A broad-coverage parser for German based on defeasible constraints. In Proceedings of KONVENS. Hajiˇc, J., Krbec, P., Oliva, K., Kveton, P. and Petkevic, V. 2001. Serial combination of rules and statistics: A case study in Czech tagging. In Proceedings of ACL. Hajiˇc, J., Vidova Hladka, B., Panevov´a, J., Hajiˇcov´a, E., Sgall, P. and Pajas, P. 2001. Prague Dependency Treebank 1.0. LDC, 2001T10. Hajiˇc, J. 1998. Building a syntactically annotated corpus: The Prague Dependency Treebank. 
In Issues of Valency and Meaning, pages 106–132. Karolinum. Hellwig, P. 2003. Dependency unification grammar. In Dependency and Valency, pages 593–635. Walter de Gruyter. Holan, T., Kuboˇn, V. and Pl´atek, M. 2001. Word-order relaxations and restrictions within a dependency grammar. In Proceedings of IWPT. Holan, T. 2004. Tvorba zavislostniho syntaktickeho analyzatoru. In Proceedings of MIS’2004. Jijkoun, V. and de Rijke, M. 2004. Enriching the output of a parser using memory-based learning. In Proceedings of ACL. Johnson, M. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of ACL. Kahane, S., Nasr, A. and Rambow, O. 1998. Pseudoprojectivity: A polynomially parsable non-projective dependency grammar. In Proceedings of ACL-COLING. Kromann, M. T. 2003. The Danish Dependency Treebank and the DTAG treebank tool. In Proceedings of TLT 2003. Levy, R. and Manning, C. 2004. Deep dependencies from context-free statistical parsers: Correcting the surface dependency approximation. In Proceedings of ACL. Mel’ˇcuk, I. 1988. Dependency Syntax: Theory and Practice. State University of New York Press. Nivre, J. and Scholz, M. 2004. Deterministic dependency parsing of English text. In Proceedings of COLING. Nivre, J., Hall, J. and Nilsson, J. 2004. Memory-based dependency parsing. In Proceedings of CoNLL. Nivre, J. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of IWPT. Oflazer, K., Say, B., Hakkani-T¨ur, D. Z. and T¨ur, G. 2003. Building a Turkish treebank. In Treebanks: Building and Using Parsed Corpora, pages 261–277. Kluwer Academic Publishers. Oflazer, K. 2003. Dependency parsing with an extended finitestate approach. Computational Linguistics, 29:515–544. Sleator, D. and Temperley, D. 1993. Parsing English with a link grammar. In Proceedings of IWPT. Tapanainen, P. and J¨arvinen, T. 1997. A non-projective dependency parser. In Proceedings of ANLP. Wang, W. and Harper, M. P. 2004. A statistical constraint dependency grammar (CDG) parser. In Proceedings of the Workshop in Incremental Parsing (ACL). Yamada, H. and Matsumoto, Y. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT. 106
Proceedings of the 43rd Annual Meeting of the ACL, pages 107–114, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics The Distributional Inclusion Hypotheses and Lexical Entailment Maayan Geffet School of Computer Science and Engineering Hebrew University, Jerusalem, Israel, 91904 [email protected] Ido Dagan Department of Computer Science Bar-Ilan University, Ramat-Gan, Israel, 52900 [email protected] Abstract This paper suggests refinements for the Distributional Similarity Hypothesis. Our proposed hypotheses relate the distributional behavior of pairs of words to lexical entailment – a tighter notion of semantic similarity that is required by many NLP applications. To automatically explore the validity of the defined hypotheses we developed an inclusion testing algorithm for characteristic features of two words, which incorporates corpus and web-based feature sampling to overcome data sparseness. The degree of hypotheses validity was then empirically tested and manually analyzed with respect to the word sense level. In addition, the above testing algorithm was exploited to improve lexical entailment acquisition. 1 Introduction Distributional Similarity between words has been an active research area for more than a decade. It is based on the general idea of Harris' Distributional Hypothesis, suggesting that words that occur within similar contexts are semantically similar (Harris, 1968). Concrete similarity measures compare a pair of weighted context feature vectors that characterize two words (Church and Hanks, 1990; Ruge, 1992; Pereira et al., 1993; Grefenstette, 1994; Lee, 1997; Lin, 1998; Pantel and Lin, 2002; Weeds and Weir, 2003). As it turns out, distributional similarity captures a somewhat loose notion of semantic similarity (see Table 1). It does not ensure that the meaning of one word is preserved when replacing it with the other one in some context. However, many semantic information-oriented applications like Question Answering, Information Extraction and Paraphrase Acquisition require a tighter similarity criterion, as was also demonstrated by papers at the recent PASCAL Challenge on Recognizing Textual Entailment (Dagan et al., 2005). In particular, all these applications need to know when the meaning of one word can be inferred (entailed) from another word, so that one word could substitute the other in some contexts. This relation corresponds to several lexical semantic relations, such as synonymy, hyponymy and some cases of meronymy. For example, in Question Answering, the word company in a question can be substituted in the text by firm (synonym), automaker (hyponym) or division (meronym). Unfortunately, existing manually constructed resources of lexical semantic relations, such as WordNet, are not exhaustive and comprehensive enough for a variety of domains and thus are not sufficient as a sole resource for application needs1. Most works that attempt to learn such concrete lexical semantic relations employ a co-occurrence pattern-based approach (Hearst, 1992; Ravichandran and Hovy, 2002; Moldovan et al., 2004). Typically, they use a set of predefined lexicosyntactic patterns that characterize specific semantic relations. If a candidate word pair (like company-automaker) co-occurs within the same sentence satisfying a concrete pattern (like " …companies, such as automakers"), then it is expected that the corresponding semantic relation holds between these words (hypernym-hyponym in this example). 
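For illustration, a minimal pattern matcher of this kind (covering only the "such as" pattern, over raw text rather than parsed input, and not taken from any of the cited systems) could look as follows:

```python
import re

PATTERN = re.compile(
    r'(\w+)\s*,?\s*such as\s+'
    r'(\w+(?:\s*,\s*\w+)*(?:\s*,?\s*and\s+\w+)?)',
    re.IGNORECASE)

def such_as_pairs(text):
    """Extract (hypernym, hyponym) candidates from 'X(,) such as Y1, Y2 and Y3'."""
    pairs = []
    for m in PATTERN.finditer(text):
        hypernym = m.group(1).lower()
        for hyponym in re.split(r'\s*,\s*|\s+and\s+', m.group(2)):
            pairs.append((hypernym, hyponym.lower()))
    return pairs

# such_as_pairs("Shares of companies, such as automakers and banks, fell.")
# -> [('companies', 'automakers'), ('companies', 'banks')]
```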
In recent work (Geffet and Dagan, 2004) we explored the correspondence between the distributional characterization of two words (which may hardly co-occur, as is usually the case for synonyms1) and the kind of tight semantic relationship that might hold between them. We formulated a lexical entailment relation that corresponds to the above mentioned substitutability criterion, and is termed meaning entailing substitutability (which we term here for brevity as lexical entailment). Given a pair of words, this relation holds if there are some contexts in which one of the words can be substituted by the other, such that the meaning of the original word can be inferred from the new one. We then proposed a new feature weighting function (RFF) that yields more accurate distributional similarity lists, which better approximate the lexical entailment relation. Yet, this method still applies a standard measure for distributional vector similarity (over vectors with the improved feature weights), and thus produces many loose similarities that do not correspond to entailment.

1 We found that less than 20% of the lexical entailment relations extracted by our method appeared as direct or indirect WordNet relations (synonyms, hyponyms or meronyms).

This paper explores more deeply the relationship between distributional characterization of words and lexical entailment, proposing two new hypotheses as a refinement of the distributional similarity hypothesis. The main idea is that if one word entails the other then we would expect that virtually all the characteristic context features of the entailing word will actually occur also with the entailed word. To test this idea we developed an automatic method for testing feature inclusion between a pair of words. This algorithm combines corpus statistics with a web-based feature sampling technique. The web is utilized to overcome the data sparseness problem, so that features which are not found with one of the two words can be considered as truly distinguishing evidence.

Using the above algorithm we first tested the empirical validity of the hypotheses. Then, we demonstrated how the hypotheses can be leveraged in practice to improve the precision of automatic acquisition of the entailment relation.

2 Background

2.1 Implementations of Distributional Similarity

This subsection reviews the relevant details of earlier methods that were utilized within this paper. In the computational setting contexts of words are represented by feature vectors. Each word w is represented by a feature vector, where an entry in the vector corresponds to a feature f. Each feature represents another word (or term) with which w co-occurs, and possibly specifies also the syntactic relation between the two words as in (Grefenstette, 1994; Lin, 1998; Weeds and Weir, 2003). Pado and Lapata (2003) demonstrated that using syntactic dependency-based vector space models can help distinguish among classes of different lexical relations, which seems to be more difficult for traditional "bag of words" co-occurrence-based models.

A syntactic feature is defined as a triple <term, syntactic_relation, relation_direction> (the direction is set to 1 if the feature is the word's modifier and to 0 otherwise). For example, given the word "company" the feature <earnings_report, gen, 0> (genitive) corresponds to the phrase "company's earnings report", and <profit, pcomp, 0> (prepositional complement) corresponds to "the profit of the company".
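For concreteness, such feature vectors can be accumulated from any dependency-parsed corpus along the following lines. This is a sketch that assumes parses are already available as (head, relation, dependent) word triples; it does not reproduce Minipar's actual relation inventory.

```python
from collections import Counter, defaultdict

def feature_vectors(parsed_sentences):
    """parsed_sentences: iterable of sentences, each a list of (head_word, relation, dependent_word) triples.
    Returns, for each word, a Counter over features <term, relation, direction>,
    where direction 1 means the feature word modifies the target word, and 0 otherwise."""
    vectors = defaultdict(Counter)
    for sentence in parsed_sentences:
        for head, relation, dependent in sentence:
            vectors[head][(dependent, relation, 1)] += 1   # the dependent modifies the head word
            vectors[dependent][(head, relation, 0)] += 1   # the head governs the dependent word
    return vectors
```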
Throughout this paper we used syntactic features generated by the Minipar dependency parser (Lin, 1993). The value of each entry in the feature vector is determined by some weight function weight(w,f), which quantifies the degree of statistical association between the feature and the corresponding word. The most widely used association weight function is (point-wise) Mutual Information (MI) (Church and Hanks, 1990; Lin, 1998; Dagan, 2000; Weeds et al., 2004).

<=> element, component
<=> gap, spread
*   town, airport
<=  loan, mortgage
=>  government, body
*   warplane, bomb
<=> program, plan
*   tank, warplane
*   match, winner
=>  bill, program
<=  conflict, war
=>  town, location

Table 1: Sample of the data set of top-40 distributionally similar word pairs produced by the RFF-based method of (Geffet and Dagan, 2004). Entailment judgments are marked by the arrow direction, with '*' denoting no entailment.

Once feature vectors have been constructed, the similarity between two words is defined by some vector similarity metric. Different metrics have been used, such as weighted Jaccard (Grefenstette, 1994; Dagan, 2000), cosine (Ruge, 1992), various information theoretic measures (Lee, 1997), and the widely cited and competitive (see (Weeds and Weir, 2003)) measure of Lin (1998) for similarity between two words, w and v, defined as follows:

sim_{Lin}(w, v) = \frac{\sum_{f \in F(w) \cap F(v)} \big( weight(w, f) + weight(v, f) \big)}{\sum_{f \in F(w)} weight(w, f) + \sum_{f \in F(v)} weight(v, f)}

where F(w) and F(v) are the active features of the two words (positive feature weight) and the weight function is defined as MI. As typical for vector similarity measures, it assigns high similarity scores if many of the two word's features overlap, even though some prominent features might be disjoint. This is a major reason for getting such semantically loose similarities, like company - government and country - economy.

Investigating the output of Lin's (1998) similarity measure with respect to the above criterion in (Geffet and Dagan, 2004), we discovered that the quality of similarity scores is often hurt by inaccurate feature weights, which yield rather noisy feature vectors. Hence, we tried to improve the feature weighting function to promote those features that are most indicative of the word meaning. A new weighting scheme was defined for bootstrapping feature weights, termed RFF (Relative Feature Focus). First, basic similarities are generated by Lin's measure. Then, feature weights are recalculated, boosting the weights of features that characterize many of the words that are most similar to the given one2. As a result the most prominent features of a word are concentrated within the top-100 entries of the vector. Finally, word similarities are recalculated by Lin's metric over the vectors with the new RFF weights.

2 In concrete terms RFF is defined by RFF(w, f) = \sum_{v \in N(w) \cap WS(f)} sim(w, v), where sim(w,v) is an initial approximation of the similarity space by Lin's measure, WS(f) is a set of words co-occurring with feature f, and N(w) is the set of the most similar words of w by Lin's measure.

The lexical entailment prediction task of (Geffet and Dagan, 2004) measures how many of the top ranking similarity pairs produced by the RFF-based metric hold the entailment relation, in at least one direction. To this end a data set of 1,200 pairs was created, consisting of top-N (N=40) similar words of 30 randomly selected nouns, which were manually judged by the lexical entailment criterion.
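Before turning to the agreement figures, here is how the two formulas above translate into code. This is an illustrative reimplementation, not the authors' system; the MI-weighted vectors, the initial similarity table and the top-N neighbour lists are assumed to be precomputed.

```python
def lin_similarity(weights_w, weights_v):
    """weights_*: dict mapping feature -> positive association weight (e.g. MI)."""
    shared = weights_w.keys() & weights_v.keys()
    numerator = sum(weights_w[f] + weights_v[f] for f in shared)
    denominator = sum(weights_w.values()) + sum(weights_v.values())
    return numerator / denominator if denominator else 0.0

def rff_weights(w, features_of, sim, top_neighbours, words_with_feature):
    """RFF(w, f) = sum of sim(w, v) over v in N(w) intersected with WS(f).
    features_of: word -> iterable of features; sim: word -> {word: score};
    top_neighbours: word -> iterable N(w); words_with_feature: feature -> set WS(f)."""
    n_w = set(top_neighbours[w])
    return {f: sum(sim[w][v] for v in words_with_feature[f] & n_w)
            for f in features_of[w]}
```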
Quite high Kappa agreement values of 0.75 and 0.83 were reported, indicating that the entailment judgment task was reasonably well defined. A subset of the data set is demonstrated in Table 1. The RFF weighting produced 10% precision improvement over Lin’s original use of MI, suggesting the RFF capability to promote semantically meaningful features. However, over 47% of the word pairs in the top-40 similarities are not related by entailment, which calls for further improvement. In this paper we use the same data set 3 and the RFF metric as a basis for our experiments. 2.2 Predicting Semantic Inclusion Weeds et al. (2004) attempted to refine the distributional similarity goal to predict whether one term is a generalization/specification of the other. They present a distributional generality concept and expect it to correlate with semantic generality. Their conjecture is that the majority of the features of the more specific word are included in the features of the more general one. They define the feature recall of w with respect to v as the weighted proportion of features of v that also appear in the vector of w. Then, they suggest that a hypernym would have a higher feature recall for its hyponyms (specifications), than vice versa. However, their results in predicting the hyponymy-hyperonymy direction (71% precision) are comparable to the naïve baseline (70% precision) that simply assumes that general words are more frequent than specific ones. Possible sources of noise in their experiment could be ignoring word polysemy and data sparseness of word-feature cooccurrence in the corpus. 3 The Distributional Inclusion Hypotheses In this paper we suggest refined versions of the distributional similarity hypothesis which relate distributional behavior with lexical entailment. 3 Since the original data set did not include the direction of entailment, we have enriched it by adding the judgments of entailment direction. 109 Extending the rationale of Weeds et al., we suggest that if the meaning of a word v entails another word w then it is expected that all the typical contexts (features) of v will occur also with w. That is, the characteristic contexts of v are expected to be included within all w's contexts (but not necessarily amongst the most characteristic ones for w). Conversely, we might expect that if v's characteristic contexts are included within all w's contexts then it is likely that the meaning of v does entail w. Taking both directions together, lexical entailment is expected to highly correlate with characteristic feature inclusion. Two additional observations are needed before concretely formulating these hypotheses. As explained in Section 2, word contexts should be represented by syntactic features, which are more restrictive and thus better reflect the restrained semantic meaning of the word (it is difficult to tie entailment to looser context representations, such as co-occurrence in a text window). We also notice that distributional similarity principles are intended to hold at the sense level rather than the word level, since different senses have different characteristic contexts (even though computational common practice is to work at the word level, due to the lack of robust sense annotation). We can now define the two distributional inclusion hypotheses, which correspond to the two directions of inference relating distributional feature inclusion and lexical entailment. 
Let vi and wj be two word senses of the words w and v, correspondingly, and let vi => wj denote the (directional) entailment relation between these senses. Assume further that we have a measure that determines the set of characteristic features for the meaning of each word sense. Then we would hypothesize: Hypothesis I: If vi => wj then all the characteristic (syntacticbased) features of vi are expected to appear with wj. Hypothesis II: If all the characteristic (syntactic-based) features of vi appear with wj then we expect that vi => wj. 4 Word Level Testing of Feature Inclusion To check the validity of the hypotheses we need to test feature inclusion. In this section we present an automated word-level feature inclusion testing method, termed ITA (Inclusion Testing Algorithm). To overcome the data sparseness problem we incorporated web-based feature sampling. Given a test pair of words, three main steps are performed, as detailed in the following subsections: Step 1: Computing the set of characteristic features for each word. Step 2: Testing feature inclusion for each pair, in both directions, within the given corpus data. Step 3: Complementary testing of feature inclusion for each pair in the web. 4.1 Step 1: Corpus-based generation of characteristic features To implement the first step of the algorithm, the RFF weighting function is exploited and its top100 weighted features are taken as most characteristic for each word. As mentioned in Section 2, (Geffet and Dagan, 2004) shows that RFF yields high concentration of good features at the top of the vector. 4.2 Step 2: Corpus-based feature inclusion test We first check feature inclusion in the corpus that was used to generate the characteristic feature sets. For each word pair (w, v) we first determine which features of w do co-occur with v in the corpus. The same is done to identify features of v that co-occur with w in the corpus. 4.3 Step 3: Complementary Webbased Inclusion Test This step is most important to avoid inclusion misses due to the data sparseness of the corpus. A few recent works (Ravichandran and Hovy, 2002; Keller et al., 2002; Chklovski and Pantel, 2004) used the web to collect statistics on word cooccurrences. In a similar spirit, our inclusion test is completed by searching the web for the missing (non-included) features on both sides. We call this web-based technique mutual web-sampling. The web results are further parsed to verify matching of the feature's syntactic relationship. 110 We denote the subset of w's features that are missing for v as M(w, v) (and equivalently M(v, w)). Since web sampling is time consuming we randomly sample a subset of k features (k=20 in our experiments), denoted as M(v,w,k). Mutual Web-sampling Procedure: For each pair (w, v) and their k-subsets M(w, v, k) and M(v, w, k) execute: 1. Syntactic Filtering of “Bag-of-Words” Search: Search the web for sentences including v and a feature f from M(w, v, k) as “bag of words”, i. e. sentences where w and f appear in any distance and in either order. Then filter out the sentences that do not match the defined syntactic relation between f and v (based on parsing). Features that co-occur with w in the correct syntactic relation are removed from M(w, v, k). Do the same search and filtering for w and features from M(v, w, k). 2. Syntactic Filtering of “Exact String” Matching: On the missing features on both sides (which are left in M(w, v, k) and M(v, w, k) after stage 1), apply “exact string” search of the web. 
For this, convert the tuple (v, f) to a string by adding prepositions and articles where needed. For example, for (element, <project, pcomp_of, 1>) generate the corresponding string “element of the project” and search the web for exact matches of the string. Then validate the syntactic relationship of f and v in the extracted sentences. Remove the found features from M(w, v, k) and M(v, w, k), respectively. 3. Missing Features Validation: Since some of the features may be too infrequent or corpus-biased, check whether the remaining missing features do co-occur on the web with their original target words (with which they did occur in the corpus data). Otherwise, they should not be considered as valid misses and are also removed from M(w, v, k) and M(v, w, k). Output: Inclusion in either direction holds if the corresponding set of missing features is now empty. We also experimented with features consisting of words without syntactic relations. For example, exact string, or bag-of-words match. However, almost all the words (also non-entailing) were found with all the features of each other, even for semantically implausible combinations (e.g. a word and a feature appear next to each other but belong to different clauses of the sentence). Therefore we conclude that syntactic relation validation is very important, especially on the web, in order to avoid coincidental co-occurrences. 5 Empirical Results To test the validity of the distributional inclusion hypotheses we performed an empirical analysis on a selected test sample using our automated testing procedure. 5.1 Data and setting We experimented with a randomly picked test sample of about 200 noun pairs of 1,200 pairs produced by RFF (for details see Geffet and Dagan, 2004) under Lin’s similarity scheme (Lin, 1998). The words were judged by the lexical entailment criterion (as described in Section 2). The original percentage of correct (52%) and incorrect (48%) entailments was preserved. To estimate the degree of validity of the distributional inclusion hypotheses we decomposed each word pair of the sample (w, v) to two directional pairs ordered by potential entailment direction: (w, v) and (v, w). The 400 resulting ordered pairs are used as a test set in Sections 5.2 and 5.3. Features were computed from co-occurrences in a subset of the Reuters corpus of about 18 million words. For the web feature sampling the maximal number of web samples for each query (word - feature) was set to 3,000 sentences. 5.2 Automatic Testing the Validity of the Hypotheses at the Word Level The test set of 400 ordered pairs was examined in terms of entailment (according to the manual judgment) and feature inclusion (according to the ITA algorithm), as shown in Table 2. According to Hypothesis I we expect that a pair (w, v) that satisfies entailment will also preserve feature inclusion. On the other hand, by Hypothesis II if all the features of w are included by v then we expect that w entails v. 111 We observed that Hypothesis I is better attested by our data than the second hypothesis. Thus 86% (97 out of 113) of the entailing pairs fulfilled the inclusion condition. Hypothesis II holds for approximately 70% (97 of 139) of the pairs for which feature inclusion holds. In the next section we analyze the cases of violation of both hypotheses and find that the first hypothesis held to an almost perfect extent with respect to word senses. 
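Before examining how well these expectations are borne out, note that the mutual web-sampling procedure of Section 4.3 can be condensed into the following skeleton. It is purely illustrative: web_sentences stands for an unspecified web search interface, holds_syntactically for a parser-based filter of the returned sentences, and both are assumptions rather than components of the published system; the bag-of-words versus exact-string query construction is collapsed into a single flag.

```python
import random

def ita_inclusion(w, v, features_w, cooccurs_in_corpus,
                  web_sentences, holds_syntactically, k=20):
    """Test whether a random sample of w's characteristic features all occur with v.
    Returns True if no valid missing feature remains (feature inclusion of w in v)."""
    # Step 2: corpus-based inclusion test
    missing = [f for f in features_w if not cooccurs_in_corpus(v, f)]
    missing = random.sample(missing, min(k, len(missing)))      # M(w, v, k)

    # Steps 3.1 and 3.2: web-based test, first loose (bag-of-words) then exact-string queries
    for exact in (False, True):
        still_missing = []
        for f in missing:
            sentences = web_sentences(v, f, exact=exact)
            if any(holds_syntactically(v, f, s) for s in sentences):
                continue                                        # found on the web: not missing after all
            still_missing.append(f)
        missing = still_missing

    # Step 3.3: discard features that cannot even be validated with their original word w
    missing = [f for f in missing
               if any(holds_syntactically(w, f, s)
                      for s in web_sentences(w, f, exact=True))]

    return len(missing) == 0
```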
It is also interesting to note that thanks to the web-sampling procedure over 90% of the nonincluded features in the corpus were found on the web, while most of the missing features (in the web) are indeed semantically implausible. 5.3 Manual Sense Level Testing of Hypotheses Validity Since our data was not sense tagged, the automatic validation procedure could only test the hypotheses at the word level. In this section our goal is to analyze the findings of our empirical test at the word sense level as our hypotheses were defined for senses. Basically, two cases of hypotheses invalidity were detected: Case 1: Entailments with non-included features (violation of Hypothesis I); Case 2: Feature Inclusion for non-entailments (violation of Hypothesis II). At the word level we observed 14% invalid pairs of the first case and 30% of the second case. However, our manual analysis shows, that over 90% of the first case pairs were due to a different sense of one of the entailing word, e.g. capital - town (capital as money) and spread - gap (spread as distribution) (Table 3). Note that ambiguity of the entailed word does not cause errors (like town – area, area as domain) (Table 3). Thus the first hypothesis holds at the sense level for over 98% of the cases (Table 4). Two remaining invalid instances of the first case were due to the web sampling method limitations and syntactic parsing filtering mistakes, especially for some less characteristic and infrequent features captured by RFF. Thus, in virtually all the examples tested in our experiment Hypothesis I was valid. We also explored the second case of invalid pairs: non-entailing words that pass the feature inclusion test. After sense based analysis their percentage was reduced slightly to 27.4%. Three possible reasons were discovered. First, there are words with features typical to the general meaning of the domain, which tend to be included by many other words of this domain, like valley – town. The features of valley (“eastern valley”, “central valley”, “attack in valley”, “industry of the valley”) are not discriminative enough to be distinguished from town, as they are all characteristic to any geographic location. Inclusion Entailment + - + 97 16 - 42 245 Table 2: Distribution of 400 entailing/nonentailing ordered pairs that hold/do not hold feature inclusion at the word level. Inclusion Entailment + - + 111 2 - 42 245 Table 4: Distribution of the entailing/nonentailing ordered pairs that hold/do not hold feature inclusion at the sense level. spread – gap (mutually entail each other) <weapon, pcomp_of> The Committee was discussing the Programme of the “Big Eight,” aimed against spread of weapon of mass destruction. town – area (“town” entails “area”) <cooperation, pcomp_for> This is a promising area for cooperation and exchange of experiences. capital – town (“capital” entails “town”) <flow, nn> Offshore financial centers affect cross-border capital flow in China. Table 3: Examples of ambiguity of entailmentrelated words, where the disjoint features belong to a different sense of the word. 112 The second group consists of words that can be entailing, but only in a context-dependent (anaphoric) manner rather than ontologically. For example, government and neighbour, while neighbour is used in the meaning of “neighbouring (country) government”. 
Finally, sometimes one or both of the words are abstract and general enough and also highly ambiguous to appear with a wide range of features on the web, like element (violence – element, with all the tested features of violence included by element). To prevent occurrences of the second case more characteristic and discriminative features should be provided. For this purpose features extracted from the web, which are not domain-biased (like features from the corpus) and multi-word features may be helpful. Overall, though, there might be inherent cases that invalidate Hypothesis II. 6 Improving Lexical Entailment Prediction by ITA (Inclusion Testing Algorithm) In this section we show that ITA can be practically used to improve the (non-directional) lexical entailment prediction task described in Section 2. Given the output of the distributional similarity method, we employ ITA at the word level to filter out non-entailing pairs. Word pairs that satisfy feature inclusion of all k features (at least in one direction) are claimed as entailing. The same test sample of 200 word pairs mentioned in Section 5.1 was used in this experiment. The results were compared to RFF under Lin’s similarity scheme (RFF-top-40 in Table 5). Precision was significantly improved, filtering out 60% of the incorrect pairs. On the other hand, the relative recall (considering RFF recall as 100%) was only reduced by 13%, consequently leading to a better relative F1, when considering the RFF-top-40 output as 100% recall (Table 5). Since our method removes about 35% of the original top-40 RFF output, it was interesting to compare our results to simply cutting off the 35% of the lowest ranked RFF words (top-26). The comparison to the baseline (RFF-top-26 in Table 5) showed that ITA filters the output much better than just cutting off the lowest ranking similarities. We also tried a couple of variations on feature sampling for the web-based procedure. In one of our preliminary experiments we used the top-k RFF features instead of random selection. But we observed that top ranked RFF features are less discriminative than the random ones due to the nature of the RFF weighting strategy, which promotes features shared by many similar words. Then, we attempted doubling the sampling to 40 random features. As expected the recall was slightly decreased, while precision was increased by over 5%. In summary, the behavior of ITA sampling of k=20 and k=40 features is closely comparable (ITA-20 and ITA-40 in Table 5, respectively)4. 7 Conclusions and Future Work The main contributions of this paper were: 1. We defined two Distributional Inclusion Hypotheses that associate feature inclusion with lexical entailment at the word sense level. The Hypotheses were proposed as a refinement for Harris’ Distributional hypothesis and as an extension to the classic distributional similarity scheme. 2. To estimate the empirical validity of the defined hypotheses we developed an automatic inclusion testing algorithm (ITA). The core of the algorithm is a web-based feature inclusion testing procedure, which helped significantly to compensate for data sparseness. 3. Then a thorough analysis of the data behavior with respect to the proposed hypotheses was conducted. The first hypothesis was almost fully attested by the data, particularly at the sense level, while the second hypothesis did not fully hold. 4. Motivated by the empirical analysis we proposed to employ ITA for the practical task of improving lexical entailment acquisition. 
The algorithm was applied as a filtering technique on the distributional similarity (RFF) output. We obtained a 17% increase of precision and succeeded in improving relative F1 by 15% over the baseline. Although the results were encouraging, our manual data analysis shows that we still have to handle word ambiguity. In particular, this is important in order to be able to learn the direction of entailment. To achieve better precision we need to increase feature discriminativeness. To this end syntactic features may be extended to contain more than one word, and ways for automatic extraction of features from the web (rather than from a corpus) may be developed. Finally, further investigation of combining the distributional and the co-occurrence pattern-based approaches over the web is desired.

4 The ITA-40 sampling fits the analysis from sections 5.2 and 5.3 as well.

Method       Precision  Recall  F1
ITA-20       0.700      0.875   0.777
ITA-40       0.740      0.846   0.789
RFF-top-40   0.520      1.000   0.684
RFF-top-26   0.561      0.701   0.624

Table 5: Comparative results of using the filter, with 20 and 40 feature sampling, compared to RFF top-40 and RFF top-26 similarities. ITA-20 and ITA-40 denote the web-sampling method with 20 and 40 random features, respectively.

Acknowledgement

We are grateful to Shachar Mirkin for his help in implementing the web-based sampling procedure heavily employed in our experiments. We thank Idan Szpektor for providing the infrastructure system for web-based data extraction.

References

Chklovski, Timothy and Patrick Pantel. 2004. VerbOcean: Mining the Web for Fine-Grained Semantic Verb Relations. In Proc. of EMNLP-04. Barcelona, Spain.
Church, Kenneth W. and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1), pp. 22–29.
Dagan, Ido. 2000. Contextual Word Similarity. In Rob Dale, Hermann Moisl and Harold Somers (Eds.), Handbook of Natural Language Processing, Marcel Dekker Inc, Chapter 19, pp. 459–476.
Dagan, Ido, Oren Glickman and Bernardo Magnini. 2005. The PASCAL Recognizing Textual Entailment Challenge. In Proc. of the PASCAL Challenges Workshop for Recognizing Textual Entailment. Southampton, U.K.
Geffet, Maayan and Ido Dagan. 2004. Feature Vector Quality and Distributional Similarity. In Proc. of COLING-04. Geneva, Switzerland.
Grefenstette, Gregory. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers.
Harris, Zelig S. 1968. Mathematical Structures of Language. Wiley.
Hearst, Marti. 1992. Automatic acquisition of hyponyms from large text corpora. In Proc. of COLING-92. Nantes, France.
Keller, Frank, Maria Lapata, and Olga Ourioupina. 2002. Using the Web to Overcome Data Sparseness. In Jan Hajic and Yuji Matsumoto, eds., Proc. of EMNLP-02. Philadelphia, PA.
Lee, Lillian. 1997. Similarity-Based Approaches to Natural Language Processing. Ph.D. thesis, Harvard University, Cambridge, MA.
Lin, Dekang. 1993. Principle-Based Parsing without Overgeneration. In Proc. of ACL-93. Columbus, Ohio.
Lin, Dekang. 1998. Automatic Retrieval and Clustering of Similar Words. In Proc. of COLING-ACL-98. Montreal, Canada.
Moldovan, Dan, A. Badulescu, M. Tatu, D. Antohe, and R. Girju. 2004. Models for the semantic classification of noun phrases. In Proc. of the HLT/NAACL-2004 Workshop on Computational Lexical Semantics. Boston.
Pado, Sebastian and Mirella Lapata. 2003. Constructing semantic space models from parsed corpora. In Proc. of ACL-03. Sapporo, Japan.
Pantel, Patrick and Dekang Lin. 2002. Discovering Word Senses from Text. In Proc. of ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD-02). Edmonton, Canada.
Pereira, Fernando, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proc. of ACL-93. Columbus, Ohio.
Ravichandran, Deepak and Eduard Hovy. 2002. Learning Surface Text Patterns for a Question Answering System. In Proc. of ACL-02. Philadelphia, PA.
Ruge, Gerda. 1992. Experiments on linguistically-based term associations. Information Processing & Management, 28(3), pp. 317–332.
Weeds, Julie and David Weir. 2003. A General Framework for Distributional Similarity. In Proc. of EMNLP-03. Sapporo, Japan.
Weeds, Julie, David Weir, and Diana McCarthy. 2004. Characterizing Measures of Lexical Distributional Similarity. In Proc. of COLING-04. Geneva, Switzerland.
Proceedings of the 43rd Annual Meeting of the ACL, pages 115–124, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales Bo Pang   and Lillian Lee    (1) Department of Computer Science, Cornell University (2) Language Technologies Institute, Carnegie Mellon University (3) Computer Science Department, Carnegie Mellon University Abstract We address the rating-inference problem, wherein rather than simply decide whether a review is “thumbs up” or “thumbs down”, as in previous sentiment analysis work, one must determine an author’s evaluation with respect to a multi-point scale (e.g., one to five “stars”). This task represents an interesting twist on standard multi-class text categorization because there are several different degrees of similarity between class labels; for example, “three stars” is intuitively closer to “four stars” than to “one star”. We first evaluate human performance at the task. Then, we apply a metaalgorithm, based on a metric labeling formulation of the problem, that alters a given  -ary classifier’s output in an explicit attempt to ensure that similar items receive similar labels. We show that the meta-algorithm can provide significant improvements over both multi-class and regression versions of SVMs when we employ a novel similarity measure appropriate to the problem. 1 Introduction There has recently been a dramatic surge of interest in sentiment analysis, as more and more people become aware of the scientific challenges posed and the scope of new applications enabled by the processing of subjective language. (The papers collected by Qu, Shanahan, and Wiebe (2004) form a representative sample of research in the area.) Most prior work on the specific problem of categorizing expressly opinionated text has focused on the binary distinction of positive vs. negative (Turney, 2002; Pang, Lee, and Vaithyanathan, 2002; Dave, Lawrence, and Pennock, 2003; Yu and Hatzivassiloglou, 2003). But it is often helpful to have more information than this binary distinction provides, especially if one is ranking items by recommendation or comparing several reviewers’ opinions: example applications include collaborative filtering and deciding which conference submissions to accept. Therefore, in this paper we consider generalizing to finer-grained scales: rather than just determine whether a review is “thumbs up” or not, we attempt to infer the author’s implied numerical rating, such as “three stars” or “four stars”. Note that this differs from identifying opinion strength (Wilson, Wiebe, and Hwa, 2004): rants and raves have the same strength but represent opposite evaluations, and referee forms often allow one to indicate that one is very confident (high strength) that a conference submission is mediocre (middling rating). Also, our task differs from ranking not only because one can be given a single item to classify (as opposed to a set of items to be ordered relative to one another), but because there are settings in which classification is harder than ranking, and vice versa. One can apply standard  -ary classifiers or regression to this rating-inference problem; independent work by Koppel and Schler (2005) considers such 115 methods. 
But an alternative approach that explicitly incorporates information about item similarities together with label similarity information (for instance, "one star" is closer to "two stars" than to "four stars") is to think of the task as one of metric labeling (Kleinberg and Tardos, 2002), where label relations are encoded via a distance metric. This observation yields a meta-algorithm, applicable to both semi-supervised (via graph-theoretic techniques) and supervised settings, that alters a given n-ary classifier's output so that similar items tend to be assigned similar labels.

In what follows, we first demonstrate that humans can discern relatively small differences in (hidden) evaluation scores, indicating that rating inference is indeed a meaningful task. We then present three types of algorithms — one-vs-all, regression, and metric labeling — that can be distinguished by how explicitly they attempt to leverage similarity between items and between labels. Next, we consider what item similarity measure to apply, proposing one based on the positive-sentence percentage. Incorporating this new measure within the metric-labeling framework is shown to often provide significant improvements over the other algorithms. We hope that some of the insights derived here might apply to other scales for text classification that have been considered, such as clause-level opinion strength (Wilson, Wiebe, and Hwa, 2004); affect types like disgust (Subasic and Huettner, 2001; Liu, Lieberman, and Selker, 2003); reading level (Collins-Thompson and Callan, 2004); and urgency or criticality (Horvitz, Jacobs, and Hovel, 1999).

2 Problem validation and formulation

We first ran a small pilot study on human subjects in order to establish a rough idea of what a reasonable classification granularity is: if even people cannot accurately infer labels with respect to a five-star scheme with half stars, say, then we cannot expect a learning algorithm to do so. Indeed, some potential obstacles to accurate rating inference include lack of calibration (e.g., what an understated author intends as high praise may seem lukewarm), author inconsistency at assigning fine-grained ratings, and ratings not entirely supported by the text1.

Table 1: Human accuracy at determining relative positivity. Rating differences are given in "notches". Parentheses enclose the number of pairs attempted.
Rating diff.          Pooled   Subject 1   Subject 2
3 or more             100%     100% (35)   100% (15)
2 (e.g., 1 star)      83%      77% (30)    100% (11)
1 (e.g., 1/2 star)    69%      65% (57)    90% (10)
0                     55%      47% (15)    80% (5)

For data, we first collected Internet movie reviews in English from four authors, removing explicit rating indicators from each document's text automatically. Now, while the obvious experiment would be to ask subjects to guess the rating that a review represents, doing so would force us to specify a fixed rating-scale granularity in advance. Instead, we examined people's ability to discern relative differences, because by varying the rating differences represented by the test instances, we can evaluate multiple granularities in a single experiment. Specifically, at intervals over a number of weeks, we authors (a non-native and a native speaker of English) examined pairs of reviews, attempting to determine whether the first review in each pair was (1) more positive than, (2) less positive than, or (3) as positive as the second. The texts in any particular review pair were taken from the same author to factor out the effects of cross-author divergence.
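To make the bookkeeping behind this protocol concrete, the short sketch below (our own illustration, not code from the study; the judgment records and field names are hypothetical) pools pairwise judgments by the hidden rating difference, measured in notches, and reports per-bucket accuracy in the style of Table 1.

from collections import defaultdict

# Hypothetical judgment records from the pairwise-comparison study:
# (subject id, hidden rating difference in notches, was the judgment correct?).
judgments = [
    ("subject1", 3, True), ("subject1", 1, False),
    ("subject2", 2, True), ("subject2", 0, True),
]

def bucket(diff):
    # Collapse large separations into a single bucket, as in Table 1.
    return "3 or more" if diff >= 3 else str(diff)

def accuracy_by_difference(records):
    counts = defaultdict(lambda: [0, 0])          # bucket -> [correct, attempted]
    for _subject, diff, correct in records:
        counts[bucket(diff)][0] += int(correct)
        counts[bucket(diff)][1] += 1
    return {b: (c / n, n) for b, (c, n) in counts.items()}

for b, (acc, n) in sorted(accuracy_by_difference(judgments).items(), reverse=True):
    print(f"rating diff {b}: {acc:.0%} ({n} pairs)")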
As Table 1 shows, both subjects performed perfectly when the rating separation was at least 3 “notches” in the original scale (we define a notch as a half star in a four- or five-star scheme and 10 points in a 100-point scheme). Interestingly, although human performance drops as rating difference decreases, even at a one-notch separation, both subjects handily outperformed the random-choice baseline of 33%. However, there was large variation in accuracy between subjects.2 1For example, the critic Dennis Schwartz writes that “sometimes the review itself [indicates] the letter grade should have been higher or lower, as the review might fail to take into consideration my overall impression of the film — which I hope to capture in the grade” (http://www.sover.net/˜ozus/cinema.htm). 2One contributing factor may be that the subjects viewed disjoint document sets, since we wanted to maximize experimental coverage of the types of document pairs within each difference class. We thus cannot report inter-annotator agreement, 116 Because of this variation, we defined two different classification regimes. From the evidence above, a three-class task (categories 0, 1, and 2 — essentially “negative”, “middling”, and “positive”, respectively) seems like one that most people would do quite well at (but we should not assume 100% human accuracy: according to our one-notch results, people may misclassify borderline cases like 2.5 stars). Our study also suggests that people could do at least fairly well at distinguishing full stars in a zero- to four-star scheme. However, when we began to construct five-category datasets for each of our four authors (see below), we found that in each case, either the most negative or the most positive class (but not both) contained only about 5% of the documents. To make the classes more balanced, we folded these minority classes into the adjacent class, thus arriving at a four-class problem (categories 0-3, increasing in positivity). Note that the four-class problem seems to offer more possibilities for leveraging class relationship information than the three-class setting, since it involves more class pairs. Also, even the two-category version of the rating-inference problem for movie reviews has proven quite challenging for many automated classification techniques (Pang, Lee, and Vaithyanathan, 2002; Turney, 2002). We applied the above two labeling schemes to a scale dataset3 containing four corpora of movie reviews. All reviews were automatically preprocessed to remove both explicit rating indicators and objective sentences; the motivation for the latter step is that it has previously aided positive vs. negative classification (Pang and Lee, 2004). All of the 1770, 902, 1307, or 1027 documents in a given corpus were written by the same author. This decision facilitates interpretation of the results, since it factors out the effects of different choices of methods for calibrating authors’ scales.4 We point out that but since our goal is to recover a reviewer’s “true” recommendation, reader-author agreement is more relevant. While another factor might be degree of English fluency, in an informal experiment (six subjects viewing the same three pairs), native English speakers made the only two errors. 3Available at http://www.cs.cornell.edu/People/pabo/moviereview-data as scale dataset v1.0. 4From the Rotten Tomatoes website’s FAQ: “star systems are not consistent between critics. 
For critics like Roger Ebert and James Berardinelli, 2.5 stars or lower out of 4 stars is always negative. For other critics, 2.5 stars can either be positive or negative. Even though Eric Lurio uses a 5 star system, his grading is very relaxed. So, 2 stars can be positive." Thus, calibration may sometimes require strong familiarity with the authors involved, as anyone who has ever needed to reconcile conflicting referee reports probably knows.

it is possible to gather author-specific information in some practical applications: for instance, systems that use selected authors (e.g., the Rotten Tomatoes movie-review website — where, we note, not all authors provide explicit ratings) could require that someone submit rating-labeled samples of newly-admitted authors' work. Moreover, our results at least partially generalize to mixed-author situations (see Section 5.2).

3 Algorithms

Recall that the problem we are considering is multi-category classification in which the labels can be naturally mapped to a metric space (e.g., points on a line); for simplicity, we assume the distance metric d(l, l') = |l − l'| throughout. In this section, we present three approaches to this problem in order of increasingly explicit use of pairwise similarity information between items and between labels. In order to make comparisons between these methods meaningful, we base all three of them on Support Vector Machines (SVMs) as implemented in Joachims' (1999) SVMlight package.

3.1 One-vs-all

The standard SVM formulation applies only to binary classification. One-vs-all (OVA) (Rifkin and Klautau, 2004) is a common extension to the n-ary case. Training consists of building, for each label l, an SVM binary classifier distinguishing label l from "not l". We consider the final output to be a label preference function π_ova(x, l), defined as the signed distance of (test) item x to the l side of the l-vs.-not-l decision plane. Clearly, OVA makes no explicit use of pairwise label or item relationships. However, it can perform well if each class exhibits sufficiently distinct language; see Section 4 for more discussion.

3.2 Regression

Alternatively, we can take a regression perspective by assuming that the labels come from a discretization of a continuous function g mapping from the feature space to a metric space.5 If we choose g from a family of sufficiently "gradual" functions, then similar items necessarily receive similar labels. In particular, we consider linear, ε-insensitive SVM regression (Vapnik, 1995; Smola and Schölkopf, 1998); the idea is to find the hyperplane that best fits the training data, but where training points whose labels are within distance ε of the hyperplane incur no loss. Then, for (test) instance x, the label preference function π_reg(x, l) is the negative of the distance between l and the value predicted for x by the fitted hyperplane function.

Wilson, Wiebe, and Hwa (2004) used SVM regression to classify clause-level strength of opinion, reporting that it provided lower accuracy than other methods. However, independently of our work, Koppel and Schler (2005) found that applying linear regression to classify documents (in a different corpus than ours) with respect to a three-point rating scale provided greater accuracy than OVA SVMs and other algorithms.
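For concreteness, the two label-preference functions above can be sketched as follows. This is our own illustration rather than the authors' code: the paper uses the SVMlight package, whereas the sketch substitutes scikit-learn's LinearSVC and LinearSVR as stand-ins, and X_train, y_train, and X_test are assumed placeholders for a document-feature matrix and integer rating labels.

import numpy as np
from sklearn.svm import LinearSVC, LinearSVR

def ova_preferences(X_train, y_train, X_test, labels):
    # pi_ova(x, l): signed distance of test item x to the "l vs. not-l" decision plane.
    prefs = {}
    for l in labels:
        clf = LinearSVC().fit(X_train, (np.asarray(y_train) == l).astype(int))
        prefs[l] = clf.decision_function(X_test)   # one score per test item
    return prefs

def regression_preferences(X_train, y_train, X_test, labels, epsilon=0.1):
    # pi_reg(x, l): negative distance between l and the fitted regressor's prediction for x.
    reg = LinearSVR(epsilon=epsilon).fit(X_train, y_train)
    predictions = reg.predict(X_test)
    return {l: -np.abs(predictions - l) for l in labels}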
3.3 Metric labeling

Regression implicitly encodes the "similar items, similar labels" heuristic, in that one can restrict consideration to "gradual" functions. But we can also think of our task as a metric labeling problem (Kleinberg and Tardos, 2002), a special case of the maximum a posteriori estimation problem for Markov random fields, to explicitly encode our desideratum. Suppose we have an initial label preference function π(x, l), perhaps computed via one of the two methods described above. Also, let d be a distance metric on labels, and let nn_k(x) denote the k nearest neighbors of item x according to some item-similarity function sim. Then, it is quite natural to pose our problem as finding a mapping of instances x to labels l_x (respecting the original labels of the training instances) that minimizes

∑_{x ∈ test} [ −π(x, l_x) + α · ∑_{y ∈ nn_k(x)} f(d(l_x, l_y)) · sim(x, y) ],

where f is monotonically increasing (we chose f(d) = d unless otherwise specified) and α is a trade-off and/or scaling parameter. (The inner summation is familiar from work in locally-weighted learning6 (Atkeson, Moore, and Schaal, 1997).) In a sense, we are using explicit item and label similarity information to increasingly penalize the initial classifier as it assigns more divergent labels to similar items. In this paper, we only report supervised-learning experiments in which the nearest neighbors for any given test item were drawn from the training set alone. In such a setting, the labeling decisions for different test items are independent, so that solving the requisite optimization problem is simple.

5We discuss the ordinal regression variant in Section 6.
6If we ignore the −π(x, l_x) term, different choices of f correspond to different versions of nearest-neighbor learning, e.g., majority-vote, weighted average of labels, or weighted median of labels.

Aside: transduction The above formulation also allows for transductive semi-supervised learning as well, in that we could allow nearest neighbors to come from both the training and test sets. We intend to address this case in future work, since there are important settings in which one has a small number of labeled reviews and a large number of unlabeled reviews, in which case considering similarities between unlabeled texts could prove quite helpful. In full generality, the corresponding multi-label optimization problem is intractable, but for many families of f functions (e.g., convex) there exist practical exact or approximation algorithms based on techniques for finding minimum s-t cuts in graphs (Ishikawa and Geiger, 1998; Boykov, Veksler, and Zabih, 1999; Ishikawa, 2003). Interestingly, previous sentiment analysis research found that a minimum-cut formulation for the binary subjective/objective distinction yielded good results (Pang and Lee, 2004). Of course, there are many other related semi-supervised learning algorithms that we would like to try as well; see Zhu (2005) for a survey.
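To make the supervised case of this optimization concrete — the setting in which each test item's label can be chosen independently — the sketch below scores every candidate label by adding the negated preference −π(x, l) to the α-weighted neighborhood penalty and returns the minimizer. It is our own illustration under stated assumptions: pref and sim are caller-supplied functions standing in for π and the item-similarity measure, neighbors are drawn from the training set only, d(l, l') = |l − l'|, and f defaults to the identity, matching the default choice described above.

def metric_label(x, train_items, train_labels, labels, pref, sim,
                 k=5, alpha=1.0, f=lambda d: d):
    # The k nearest training neighbors of x under the item-similarity function.
    nearest = sorted(range(len(train_items)),
                     key=lambda i: sim(x, train_items[i]), reverse=True)[:k]

    def cost(l):
        # -pi(x, l) + alpha * sum over neighbors y of f(d(l, l_y)) * sim(x, y)
        penalty = sum(f(abs(l - train_labels[i])) * sim(x, train_items[i])
                      for i in nearest)
        return -pref(x, l) + alpha * penalty

    return min(labels, key=cost)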
4 Class struggle: finding a label-correlated item-similarity function

We need to specify an item similarity function sim to use the metric-labeling formulation described in Section 3.3. We could, as is commonly done, employ a term-overlap-based measure such as the cosine between term-frequency-based document vectors (henceforth "TO(cos)"). However, Table 2 shows that in aggregate, the vocabularies of distant classes overlap to a degree surprisingly similar to that of the vocabularies of nearby classes.

Table 2: Average over authors and class pairs of between-class vocabulary overlap as the class labels of the pair grow farther apart.
Label difference:     1      2      3
Three-class data     37%    33%     —
Four-class data      34%    31%    30%

Thus, item similarity as measured by TO(cos) may not correlate well with similarity of the item's true labels. We can potentially develop a more useful similarity metric by asking ourselves what, intuitively, accounts for the label relationships that we seek to exploit. A simple hypothesis is that ratings can be determined by the positive-sentence percentage (PSP) of a text, i.e., the number of positive sentences divided by the number of subjective sentences. (Term-based versions of this premise have motivated much sentiment-analysis work for over a decade (Das and Chen, 2001; Tong, 2001; Turney, 2002).) But counterexamples are easy to construct: reviews can contain off-topic opinions, or recount many positive aspects before describing a fatal flaw.

We therefore tested the hypothesis as follows. To avoid the need to hand-label sentences as positive or negative, we first created a sentence polarity dataset7 consisting of 10,662 movie-review "snippets" (a striking extract usually one sentence long) downloaded from www.rottentomatoes.com; each snippet was labeled with its source review's label (positive or negative) as provided by Rotten Tomatoes. Then, we trained a Naive Bayes classifier on this data set and applied it to our scale dataset to identify the positive sentences (recall that objective sentences were already removed). Figure 1 shows that all four authors tend to exhibit a higher PSP when they write a more positive review, and we expect that most typical reviewers would follow suit. Hence, PSP appears to be a promising basis for computing document similarity for our rating-inference task. In particular, we defined PSP⃗(x) to be the two-dimensional vector (PSP(x), 1 − PSP(x)), and then set the item-similarity function required by the metric-labeling optimization function (Section 3.3) to sim(x, y) = cos(PSP⃗(x), PSP⃗(y)).8

7Available at http://www.cs.cornell.edu/People/pabo/moviereview-data as sentence polarity dataset v1.0.
8While admittedly we initially chose this function because it was convenient to work with cosines, post hoc analysis revealed that the corresponding metric space "stretched" certain distances in a useful way.

Figure 1: Average and standard deviation of PSP for reviews expressing different ratings (mean and standard deviation of PSP plotted against rating in notches, one curve per author a–d).

But before proceeding, we note that it is possible that similarity information might yield no extra benefit at all. For instance, we don't need it if we can reliably identify each class just from some set of distinguishing terms. If we define such terms as frequent ones that appear in a single class 50% or more of the time, then we do find many instances; some examples for one author are: "meaningless", "disgusting" (class 0); "pleasant", "uneven" (class 1); and "oscar", "gem" (class 2) for the three-class case, and, in the four-class case, "flat", "tedious" (class 1) versus "straightforward", "likeable" (class 2). Some unexpected distinguishing terms for this author are "lion" for class 2 (three-class case), and for class 2 in the four-class case, "jennifer", for a wide variety of Jennifers.

5 Evaluation

This section compares the accuracies of the approaches outlined in Section 3 on the four corpora comprising our scale dataset. (Results using L1 error were qualitatively similar.) Throughout, when
119 we refer to something as “significant”, we mean statistically so with respect to the paired ~ -test, € x |r‚ . The results that follow are based on \ƒ !$#&%('*) ’s default parameter settings for SVM regression and OVA. Preliminary analysis of the effect of varying the regression parameter 5 in the four-class case revealed that the default value was often optimal. The notation “A I B” denotes metric labeling where method A provides the initial label preference function + and B serves as similarity measure. To train, we first select the meta-parameters @ and J by running 9-fold cross-validation within the training set. Fixing @ and J to those values yielding the best performance, we then re-train A (but with SVM parameters fixed, as described above) on the whole training set. At test time, the nearest neighbors of each item are also taken from the full training set. 5.1 Main comparison Figure 2 summarizes our average 10-fold crossvalidation accuracy results. We first observe from the plots that all the algorithms described in Section 3 always definitively outperform the simple baseline of predicting the majority class, although the improvements are smaller in the four-class case. Incidentally, the data was distributed in such a way that the absolute performance of the baseline itself does not change much between the three- and four-class case (which implies that the three-class datasets were relatively more balanced); and Author c’s datasets seem noticeably easier than the others. We now examine the effect of implicitly using label and item similarity. In the four-class case, regression performed better than OVA (significantly so for two authors, as shown in the righthand table); but for the three-category task, OVA significantly outperforms regression for all four authors. One might initially interprete this “flip” as showing that in the four-class scenario, item and label similarities provide a richer source of information relative to class-specific characteristics, especially since for the non-majority classes there is less data available; whereas in the three-class setting the categories are better modeled as quite distinct entities. However, the three-class results for metric labeling on top of OVA and regression (shown in Figure 2 by black versions of the corresponding icons) show that employing explicit similarities always improves results, often to a significant degree, and yields the best overall accuracies. Thus, we can in fact effectively exploit similarities in the three-class case. Additionally, in both the three- and four- class scenarios, metric labeling often brings the performance of the weaker base method up to that of the stronger one (as indicated by the “disappearance” of upward triangles in corresponding table rows), and never hurts performance significantly. In the four-class case, metric labeling and regression seem roughly equivalent. One possible interpretation is that the relevant structure of the problem is already captured by linear regression (and perhaps a different kernel for regression would have improved its three-class performance). However, according to additional experiments we ran in the four-class situation, the test-set-optimal parameter settings for metric labeling would have produced significant improvements, indicating there may be greater potential for our framework. At any rate, we view the fact that metric labeling performed quite well for both rating scales as a definitely positive result. 
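The meta-parameter search just described can be sketched as follows (our own illustration: run_metric_labeling is a hypothetical stand-in for training the base SVM and applying metric labeling with a given k and α on a held-out fold, and the grids of candidate values are assumptions of the sketch).

from itertools import product
import numpy as np
from sklearn.model_selection import KFold

def select_k_alpha(train_items, train_labels, k_grid, alpha_grid, run_metric_labeling):
    # Choose (k, alpha) by 9-fold cross-validation within the training set only.
    folds = list(KFold(n_splits=9, shuffle=True, random_state=0).split(train_items))
    best, best_acc = None, -1.0
    for k, alpha in product(k_grid, alpha_grid):
        fold_accs = []
        for tr_idx, dev_idx in folds:
            preds = run_metric_labeling([train_items[i] for i in tr_idx],
                                        [train_labels[i] for i in tr_idx],
                                        [train_items[i] for i in dev_idx],
                                        k, alpha)
            fold_accs.append(np.mean([p == train_labels[i]
                                      for p, i in zip(preds, dev_idx)]))
        if np.mean(fold_accs) > best_acc:
            best, best_acc = (k, alpha), float(np.mean(fold_accs))
    return best   # then re-train on the full training set with these values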
5.2 Further discussion Q: Metric labeling looks like it’s just combining SVMs with nearest neighbors, and classifier combination often improves performance. Couldn’t we get the same kind of results by combining SVMs with any other reasonable method? A: No. For example, if we take the strongest base SVM method for initial label preferences, but replace PSP with the term-overlap-based cosine (TO(cos)), performance often drops significantly. This result, which is in accordance with Section 4’s data, suggests that choosing an item similarity function that correlates well with label similarity is important. (ova I PSP „P„P„P„ ova I TO(cos) [3c]; reg I PSP „ reg I TO(cos) [4c]) Q: Could you explain that notation, please? A: Triangles point toward the significantly better algorithm for some dataset. For instance, “M „P„P… N [3c]” means, “In the 3-class task, method M is significantly better than N for two author datasets and significantly worse for one dataset (so the algorithms were statistically indistinguishable on the remaining dataset)”. When the algorithms being compared are statistically indistinguishable on 120 Average accuracies, three-class data Average accuracies, four-class data 0.35 0.4 0.45 0.5 0.55 0.6 0.65 0.7 0.75 0.8 Author a Author b Author c Author d majority ova ova+PSP reg reg+PSP 0.35 0.4 0.45 0.5 0.55 0.6 0.65 0.7 0.75 0.8 Author a Author b Author c Author d majority ova ova+PSP reg reg+PSP Average ten-fold cross-validation accuracies. Open icons: SVMs in either one-versus-all (square) or regression (circle) mode; dark versions: metric labeling using the corresponding SVM together with the positive-sentence percentage (PSP). The W -axes of the two plots are aligned. Significant differences, three-class data Significant differences, four-class data ova ova+PSP reg reg+PSP a b c d a b c d a b c d a b c d ova †V†?† . „?„?„V„ . „ . . ova+PSP ‡?‡?‡ . „?„?„V„ „?„?„ . reg †?†?†V† †V†?†?† . † . † reg+PSP . † . . †V†?† . . ‡ . ‡ ova ova+PSP reg reg+PSP a b c d a b c d a b c d a b c d ova . †?†?† †?† . . † . . † ova+PSP . ‡?‡?‡ † . . . † . . . reg „?„ . . „ . . . . . . . reg+PSP „ . . „ „ . . . . . . . Triangles point towards significantly better algorithms for the results plotted above. Specifically, if the difference between a row and a column algorithm for a given author dataset (a, b, c, or d) is significant, a triangle points to the better one; otherwise, a dot (.) is shown. Dark icons highlight the effect of adding PSP information via metric labeling. Figure 2: Results for main experimental comparisons. all four datasets (the “no triangles” case), we indicate this with an equals sign (“=”). Q: Thanks. Doesn’t Figure 1 show that the positive-sentence percentage would be a good classifier even in isolation, so metric labeling isn’t necessary? A: No. Predicting class labels directly from the PSP value via trained thresholds isn’t as effective (ova I PSP „P„P„P„ threshold PSP [3c]; reg I PSP „P„ threshold PSP [4c]). Alternatively, we could use only the PSP component of metric labeling by setting the label preference function to the constant function 0, but even with test-set-optimal parameter settings, doing so underperforms the trained metric labeling algorithm with access to an initial SVM classifier (ova I PSP „P„P„P„ 0 I k  k‰ˆ [3c]; reg I PSP „P„ 0 I k  kŠˆ [4c]). Q: What about using PSP as one of the features for input to a standard classifier? A: Our focus is on investigating the utility of similarity information. 
In our particular rating-inference setting, it so happens that the basis for our pairwise similarity measure can be incorporated as an 121 item-specific feature, but we view this as a tangential issue. That being said, preliminary experiments show that metric labeling can be better, barely (for test-set-optimal parameter settings for both algorithms: significantly better results for one author, four-class case; statistically indistinguishable otherwise), although one needs to determine an appropriate weight for the PSP feature to get good performance. Q: You defined the “metric transformation” function T as the identity function T U‹Œ , imposing greater loss as the distance between labels assigned to two similar items increases. Can you do just as well if you penalize all non-equal label assignments by the same amount, or does the distance between labels really matter? A: You’re asking for a comparison to the Potts model, which sets T to the function  T U  l if  Ž | , | otherwise. In the one setting in which there is a significant difference between the two, the Potts model does worse (ova I PSP „ ova  I PSP [3c]). Also, employing the Potts model generally leads to fewer significant improvements over a chosen base method (compare Figure 2’s tables with: reg  I PSP „ reg [3c]; ova  I PSP „P„ ova [3c]; ova  I PSP  ova [4c]; but note that reg  I PSP „ reg [4c]). We note that optimizing the Potts model in the multi-label case is NPhard, whereas the optimal metric labeling with the identity metric-transformation function can be efficiently obtained (see Section 3.3). Q: Your datasets had many labeled reviews and only one author each. Is your work relevant to settings with many authors but very little data for each? A: As discussed in Section 2, it can be quite difficult to properly calibrate different authors’ scales, since the same number of “stars” even within what is ostensibly the same rating system can mean different things for different authors. But since you ask: we temporarily turned a blind eye to this serious issue, creating a collection of 5394 reviews by 496 authors with at most 80 reviews per author, where we pretended that our rating conversions mapped correctly into a universal rating scheme. Preliminary results on this dataset were actually comparable to the results reported above, although since we are not confident in the class labels themselves, more work is needed to derive a clear analysis of this setting. (Abusing notation, since we’re already playing fast and loose: [3c]: baseline 52.4%, reg 61.4%, reg I PSP 61.5%, ova (65.4%) … ova I PSP (66.3%); [4c]: baseline 38.8%, reg (51.9%) … reg I PSP (52.7%), ova (53.8%) … ova I PSP (54.6%)) In future work, it would be interesting to determine author-independent characteristics that can be used on (or suitably adapted to) data for specific authors. Q: How about trying — A: —Yes, there are many alternatives. A few that we tested are described in the Appendix, and we propose some others in the next section. We should mention that we have not yet experimented with all-vs.-all (AVA), another standard binary-tomulti-category classifier conversion method, because we wished to focus on the effect of omitting pairwise information. In independent work on 3-category rating inference for a different corpus, Koppel and Schler (2005) found that regression outperformed AVA, and Rifkin and Klautau (2004) argue that in principle OVA should do just as well as AVA. But we plan to try it out. 
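Returning briefly to the metric-transformation comparison above, the difference between the default and the Potts model amounts to two choices of f in the Section 3.3 objective; a minimal sketch (our own illustration) is shown below.

def identity_f(d):
    # The default: the penalty grows with the distance between the assigned labels.
    return d

def potts_f(d):
    # Potts model: every pair of unequal labels incurs the same unit penalty.
    return 0 if d == 0 else 1

# For labels two notches apart: identity_f(2) == 2, while potts_f(2) == 1.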
6 Related work and future directions In this paper, we addressed the rating-inference problem, showing the utility of employing label similarity and (appropriate choice of) item similarity — either implicitly, through regression, or explicitly and often more effectively, through metric labeling. In the future, we would like to apply our methods to other scale-based classification problems, and explore alternative methods. Clearly, varying the kernel in SVM regression might yield better results. Another choice is ordinal regression (McCullagh, 1980; Herbrich, Graepel, and Obermayer, 2000), which only considers the ordering on labels, rather than any explicit distances between them; this approach could work well if a good metric on labels is lacking. Also, one could use mixture models (e.g., combine “positive” and “negative” language models) to capture class relationships (McCallum, 1999; Schapire and Singer, 2000; Takamura, Matsumoto, and Yamada, 2004). We are also interested in framing multi-class but non-scale-based categorization problems as metric 122 labeling tasks. For example, positive vs. negative vs. neutral sentiment distinctions are sometimes considered in which neutral means either objective (Engstr¨om, 2004) or a conflation of objective with a rating of mediocre (Das and Chen, 2001). (Koppel and Schler (2005) in independent work also discuss various types of neutrality.) In either case, we could apply a metric in which positive and negative are closer to objective (or objective+mediocre) than to each other. As another example, hierarchical label relationships can be easily encoded in a label metric. Finally, as mentioned in Section 3.3, we would like to address the transductive setting, in which one has a small amount of labeled data and uses relationships between unlabeled items, since it is particularly well-suited to the metric-labeling approach and may be quite important in practice. Acknowledgments We thank Paul Bennett, Dave Blei, Claire Cardie, Shimon Edelman, Thorsten Joachims, Jon Kleinberg, Oren Kurland, John Lafferty, Guy Lebanon, Pradeep Ravikumar, Jerry Zhu, and the anonymous reviewers for many very useful comments and discussion. We learned of Moshe Koppel and Jonathan Schler’s work while preparing the cameraready version of this paper; we thank them for so quickly answering our request for a pre-print. Our descriptions of their work are based on that pre-print; we apologize in advance for any inaccuracies in our descriptions that result from changes between their pre-print and their final version. We also thank CMU for its hospitality during the year. This paper is based upon work supported in part by the National Science Foundation (NSF) under grant no. IIS-0329064 and CCR-0122581; SRI International under subcontract no. 03-000211 on their project funded by the Department of the Interior’s National Business Center; and by an Alfred P. Sloan Research Fellowship. Any opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of any sponsoring institutions, the U.S. government, or any other entity. References Atkeson, Christopher G., Andrew W. Moore, and Stefan Schaal. 1997. Locally weighted learning. Artificial Intelligence Review, 11(1):11–73. Boykov, Yuri, Olga Veksler, and Ramin Zabih. 1999. Fast approximate energy minimization via graph cuts. In Proceedings of the International Conference on Computer Vision (ICCV), pages 377–384. 
Journal version in IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 23(11):1222–1239, 2001. Collins-Thompson, Kevyn and Jamie Callan. 2004. A language modeling approach to predicting reading difficulty. In HLTNAACL: Proceedings of the Main Conference, pages 193– 200. Das, Sanjiv and Mike Chen. 2001. Yahoo! for Amazon: Extracting market sentiment from stock message boards. In Proceedings of the Asia Pacific Finance Association Annual Conference (APFA). Dave, Kushal, Steve Lawrence, and David M. Pennock. 2003. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. In Proceedings of WWW, pages 519–528. Engstr¨om, Charlotta. 2004. Topic dependence in sentiment classification. Master’s thesis, University of Cambridge. Herbrich, Ralf, Thore Graepel, and Klaus Obermayer. 2000. Large margin rank boundaries for ordinal regression. In Alexander J. Smola, Peter L. Bartlett, Bernhard Sch¨olkopf, and Dale Schuurmans, editors, Advances in Large Margin Classifiers, Neural Information Processing Systems. MIT Press, pages 115–132. Horvitz, Eric, Andy Jacobs, and David Hovel. 1999. Attentionsensitive alerting. In Proceedings of the Conference on Uncertainty and Artificial Intelligence, pages 305–313. Ishikawa, Hiroshi. 2003. Exact optimization for Markov random fields with convex priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(10). Ishikawa, Hiroshi and Davi Geiger. 1998. Occlusions, discontinuities, and epipolar lines in stereo. In Proceedings of the 5th European Conference on Computer Vision (ECCV), volume I, pages 232–248, London, UK. Springer-Verlag. Joachims, Thorsten. 1999. Making large-scale SVM learning practical. In Bernhard Sch¨olkopf and Alexander Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT Press, pages 44–56. Kleinberg, Jon and ´Eva Tardos. 2002. Approximation algorithms for classification problems with pairwise relationships: Metric labeling and Markov random fields. Journal of the ACM, 49(5):616–639. Koppel, Moshe and Jonathan Schler. 2005. The importance of neutral examples for learning sentiment. In Workshop on the Analysis of Informal and Formal Information Exchange during Negotiations (FINEXIN). Liu, Hugo, Henry Lieberman, and Ted Selker. 2003. A model of textual affect sensing using real-world knowledge. In Proceedings of Intelligent User Interfaces (IUI), pages 125–132. McCallum, Andrew. 1999. Multi-label text classification with a mixture model trained by EM. In AAAI Workshop on Text Learning. McCullagh, Peter. 1980. Regression models for ordinal data. Journal of the Royal Statistical Society, 42(2):109–42. 123 Pang, Bo and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the ACL, pages 271–278. Pang, Bo, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of EMNLP, pages 79–86. Qu, Yan, James Shanahan, and Janyce Wiebe, editors. 2004. Proceedings of the AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications. AAAI Press. AAAI technical report SS-04-07. Rifkin, Ryan M. and Aldebaro Klautau. 2004. In defense of one-vs-all classification. Journal of Machine Learning Research, 5:101–141. Schapire, Robert E. and Yoram Singer. 2000. BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2/3):135–168. Smola, Alex J. and Bernhard Sch¨olkopf. 1998. 
A tutorial on support vector regression. Technical Report NeuroCOLT NC-TR-98-030, Royal Holloway College, University of London. Subasic, Pero and Alison Huettner. 2001. Affect analysis of text using fuzzy semantic typing. IEEE Transactions on Fuzzy Systems, 9(4):483–496. Takamura, Hiroya, Yuji Matsumoto, and Hiroyasu Yamada. 2004. Modeling category structures with a kernel function. In Proceedings of CoNLL, pages 57–64. Tong, Richard M. 2001. An operational system for detecting and tracking opinions in on-line discussion. SIGIR Workshop on Operational Text Classification. Turney, Peter. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the ACL, pages 417–424. Vapnik, Vladimir. 1995. The Nature of Statistical Learning Theory. Springer. Wilson, Theresa, Janyce Wiebe, and Rebecca Hwa. 2004. Just how mad are you? Finding strong and weak opinion clauses. In Proceedings of AAAI, pages 761–769. Yu, Hong and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proceedings of EMNLP. Zhu, Xiaojin (Jerry). 2005. Semi-Supervised Learning with Graphs. Ph.D. thesis, Carnegie Mellon University. A Appendix: other variations attempted A.1 Discretizing binary classification In our setting, we can also incorporate class relations by directly altering the output of a binary classifier, as follows. We first train a standard SVM, treating ratings greater than 0.5 as positive labels and others as negative labels. If we then consider the resulting classifier to output a positivity-preference function +  107 , we can then learn a series of thresholds to convert this value into the desired label set, under the assumption that the bigger +  10? is, the more positive the review.9 This algorithm always outperforms the majority-class baseline, but not to the degree that the best of SVM OVA and SVM regression does. Koppel and Schler (2005) independently found in a three-class study that thresholding a positive/negative classifier trained only on clearly positive or clearly negative examples did not yield large improvements. A.2 Discretizing regression In our experiments with SVM regression, we discretized regression output via a set of fixed decision thresholds 3| x ‚ <lx ‚ { x ‚ *xdxdx&‘ to map it into our set of class labels. Alternatively, we can learn the thresholds instead. Neither option clearly outperforms the other in the four-class case. In the three-class setting, the learned version provides noticeably better performance in two of the four datasets. But these results taken together still mean that in many cases, the difference is negligible, and if we had started down this path, we would have needed to consider similar tweaks for one-vs-all SVM as well. We therefore stuck with the simpler version in order to maintain focus on the central issues at hand. 9This is not necessarily true: if the classifier’s goal is to optimize binary classification error, its major concern is to increase confidence in the positive/negative distinction, which may not correspond to higher confidence in separating “five stars” from “four stars”. 124
Proceedings of the 43rd Annual Meeting of the ACL, pages 125–132, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Inducing Ontological Co-occurrence Vectors Patrick Pantel Information Sciences Institute University of Southern California 4676 Admiralty Way Marina del Rey, CA 90292 [email protected] Abstract In this paper, we present an unsupervised methodology for propagating lexical cooccurrence vectors into an ontology such as WordNet. We evaluate the framework on the task of automatically attaching new concepts into the ontology. Experimental results show 73.9% attachment accuracy in the first position and 81.3% accuracy in the top-5 positions. This framework could potentially serve as a foundation for ontologizing lexical-semantic resources and assist the development of other largescale and internally consistent collections of semantic information. 1 Introduction Despite considerable effort, there is still today no commonly accepted semantic corpus, semantic framework, notation, or even agreement on precisely which aspects of semantics are most useful (if at all). We believe that one important reason for this rather startling fact is the absence of truly wide-coverage semantic resources. Recognizing this, some recent work on wide coverage term banks, like WordNet (Miller 1990) and CYC (Lenat 1995), and annotated corpora, like FrameNet (Baker et al. 1998), Propbank (Kingsbury et al. 2002) and Nombank (Meyers et al. 2004), seeks to address the problem. But manual efforts such as these suffer from two drawbacks: they are difficult to tailor to new domains, and they have internal inconsistencies that can make automating the acquisition process difficult. In this work, we introduce a general framework for inducing co-occurrence feature vectors for nodes in a WordNet-like ontology. We believe that this framework will be useful for a variety of applications, including adding additional semantic information to existing semantic term banks by disambiguating lexical-semantic resources. Ontologizing semantic resources Recently, researchers have applied text- and web-mining algorithms for automatically creating lexical semantic resources like similarity lists (Lin 1998), semantic lexicons (Riloff and Shepherd 1997), hyponymy lists (Shinzato and Torisawa 2004; Pantel and Ravichandran 2004), partwhole lists (Girgu et al. 2003), and verb relation graphs (Chklovski and Pantel 2004). However, none of these resources have been directly linked into an ontological framework. For example, in VERBOCEAN (Chklovski and Pantel 2004), we find the verb relation “to surpass is-stronger-than to hit”, but it is not specified that it is the achieving sense of hit where this relation applies. We term ontologizing a lexical-semantic resource as the task of sense disambiguating the resource. This problem is different but not orthogonal to word-sense disambiguation. If we could disambiguate large collections of text with high accuracy, then current methods for building lexical-semantic resources could easily be applied to ontologize them by treating each word’s senses as separate words. Our method does not require the disambiguation of text. Instead, it relies on the principle of distributional similarity and that polysemous words that are similar in one sense are dissimilar in their other senses. 125 Given the enriched ontologies produced by our method, we believe that ontologizing lexicalsemantic resources will be feasible. 
For example, consider the example verb relation “to surpass isstronger-than to hit” from above. To disambiguate the verb hit, we can look at all other verbs that to surpass is stronger than (for example, in VERBOCEAN, “to surpass is-stronger-than to overtake” and “to surpass is-stronger-than to equal”). Now, we can simply compare the lexical co-occurrence vectors of overtake and equal with the ontological feature vectors of the senses of hit (which are induced by our framework). The sense whose feature vector is most similar is selected. It remains to be seen in future work how well this approach performs on ontologizing various semantic resources. In this paper, we focus on the general framework for inducing the ontological co-occurrence vectors and we apply it to the task of linking new terms into the ontology. 2 Relevant work Our framework aims at enriching WordNet-like ontologies with syntactic features derived from a non-annotated corpus. Others have also made significant additions to WordNet. For example, in eXtended WordNet (Harabagiu et al. 1999), the rich glosses in WordNet are enriched by disambiguating the nouns, verbs, adverbs, and adjectives with synsets. Another work has enriched WordNet synsets with topically related words extracted from the Web (Agirre et al. 2001). While this method takes advantage of the redundancy of the web, our source of information is a local document collection, which opens the possibility for domain specific applications. Distributional approaches to building semantic repositories have shown remarkable power. The underlying assumption, called the Distributional Hypothesis (Harris 1985), links the semantics of words to their lexical and syntactic behavior. The hypothesis states that words that occur in the same contexts tend to have similar meaning. Researchers have mostly looked at representing words by their surrounding words (Lund and Burgess 1996) and by their syntactical contexts (Hindle 1990; Lin 1998). However, these representations do not distinguish between the different senses of words. Our framework utilizes these principles and representations to induce disambiguated feature vectors. We describe these representations further in Section 3. In supervised word sense disambiguation, senses are commonly represented by their surrounding words in a sense-tagged corpus (Gale et al. 1991). If we had a large collection of sensetagged text, then we could extract disambiguated feature vectors by collecting co-occurrence features for each word sense. However, since there is little sense-tagged text available, the feature vectors for a random WordNet concept would be very sparse. In our framework, feature vectors are induced from much larger untagged corpora (currently 3GB of newspaper text). Another approach to building semantic repositories is to collect and merge existing ontologies. Attempts to automate the merging process have not been particularly successful (Knight and Luk 1994; Hovy 1998; Noy and Musen 1999). The principal problems of partial and unbalanced coverage and of inconsistencies between ontologies continue to hamper these approaches. 3 Resources The framework we present in Section 4 propagates any type of lexical feature up an ontology. In previous work, lexicals have often been represented by proximity and syntactic features. Consider the following sentence: The tsunami left a trail of horror. In a proximity approach, a word is represented by a window of words surrounding it. 
For the above sentence, a window of size 1 would yield two features (-1:the and +1:left) for the word tsunami. In a syntactic approach, more linguistically rich features are extracted by using each grammatical relation in which a word is involved (e.g. the features for tsunami are determiner:the and subject-of:leave). For the purposes of this work, we consider the propagation of syntactic features. We used Minipar (Lin 1994), a broad coverage parser, to analyze text. We collected the statistics on the grammatical relations (contexts) output by Minipar and used these as the feature vectors. Following Lin (1998), we measure each feature f for a word e not by its frequency but by its pointwise mutual information, mief: 126 ( ) ( ) ( ) f P e P f e P mief × = , log 4 Inducing ontological features The resource described in the previous section yields lexical feature vectors for each word in a corpus. We term these vectors lexical because they are collected by looking only at the lexicals in the text (i.e. no sense information is used). We use the term ontological feature vector to refer to a feature vector whose features are for a particular sense of the word. In this section, we describe our framework for inducing ontological feature vectors for each node of an ontology. Our approach employs two phases. A divide-and-conquer algorithm first propagates syntactic features to each node in the ontology. A final sweep over the ontology, which we call the Coup phase, disambiguates the feature vectors of lexicals (leaf nodes) in the ontology. 4.1 Divide-and-conquer phase In the first phase of the algorithm, we propagate features up the ontology in a bottom-up approach. Figure 1 gives an overview of this phase. The termination condition of the recursion is met when the algorithm processes a leaf node. The feature vector that is assigned to this node is an exact copy of the lexical feature vector for that leaf (obtained from a large corpus as described in Section 3). For example, for the two leaf nodes labeled chair in Figure 2, we assign to both the same ambiguous lexical feature vector, an excerpt of which is shown in Figure 3. When the recursion meets a non-leaf node, like chairwoman in Figure 2, the algorithm first recursively applies itself to each of the node’s children. Then, the algorithm selects those features common to its children to propagate up to its own ontological feature vector. The assumption here is that features of other senses of polysemous words will not be propagated since they will not be common across the children. Below, we describe the two methods we used to propagate features: Shared and Committee. Shared propagation algorithm The first technique for propagating features to a concept node n from its children C is the simplest and scored best in our evaluation (see Section 5.2). The goal is that the feature vector for n Input: A node n and a corpus C. Step 1: Termination Condition: If n is a leaf node then assign to n its lexical feature vector as described in Section 3. Step 2: Recursion Step: For each child c of n, reecurse on c and C. Assign a feature vector to n by propagating features from its children. Output: A feature vector assigned to each node of the tree rooted by n. Figure 1. Divide-and-conquer phase. 
chair stool armchair chaiselongue taboret music stool step stool cutty stool desk chair chair seating furniture furniture furniture bed mirror table concept leaf node Legend: chair chairman president chairwoman vice chairman vice chairman chairwoman leader Decomposable object Figure 2. Subtrees of WordNet illustrating two senses of chair. "chair" conjunction: sofa 77 11.8 professor 11 6.0 dining room 2 5.6 cushion 1 4.5 council member 1 4.4 President 9 2.9 foreign minister 1 2.8 nominal subject Ottoman 8 12.1 director 22 9.1 speaker 8 8.6 Joyner 2 8.22 recliner 2 7.7 candidate 1 3.5 Figure 3. Excerpt of a lexical feature vector for the word chair. Grammatical relations are in italics (conjunction and nominal-subject). The first column of numbers are frequency counts and the other are mutual information scores. In bold are the features that intersect with the induced ontological feature vector for the parent concept of chair’s chairwoman sense. 127 represents the general grammatical behavior that its children will have. For example, for the concept node furniture in Figure 2, we would like to assign features like object-of:clean since mosttypes of furniture can be cleaned. However, even though you can eat on a table, we do not want the feature on:eat for the furniture concept since we do not eat on mirrors or beds. In the Shared propagation algorithm, we propagate only those features that are shared by at least t children. In our experiments, we experimentally set t = min(3, |C|). The frequency of a propagated feature is obtained by taking a weighted sum of the frequency of the feature across its children. Let fi be the frequency of the feature for child i, let ci be the total frequency of child i, and let N be the total frequency of all children. Then, the frequency f of the propagated feature is given by: ∑ × = i i i N c f f (1) Committee propagation algorithm The second propagation algorithm finds a set of representative children from which to propagate features. Pantel and Lin (2002) describe an algorithm, called Clustering By Committee (CBC), which discovers clusters of words according to their meanings in test. The key to CBC is finding for each class a set of representative elements, called a committee, which most unambiguously describe the members of the class. For example, for the color concept, CBC discovers the following committee members: purple, pink, yellow, mauve, turquoise, beige, fuchsia Words like orange and violet are avoided because they are polysemous. For a given concept c, we build a committee by clustering its children according to their similarity and then keep the largest and most interconnected cluster (see Pantel and Lin (2002) for details). The propagated features are then those that are shared by at least two committee members. The frequency of a propagated feature is obtained using Eq. 1 where the children i are chosen only among the committee members. Generating committees using CBC works best for classes with many members. In its original application (Pantel and Lin 2002), CBC discovered a flat list of coarse concepts. In the finer grained concept hierarchy of WordNet, there are many fewer children for each concept so we expect to have more difficulty finding committees. 4.2 Coup phase At the end of the Divide-and-conquer phase, the non-leaf nodes of the ontology contain disambiguated features1. By design of the propagation algorithm, each concept node feature is shared by at least two of its children. 
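For concreteness, the Shared propagation step and the weighted sum of Eq. 1 described above can be sketched as follows before turning to the details of the Coup phase. This is our own illustration: representing each child's feature vector as a map from features to frequencies is an assumption of the sketch, not the paper's data structure.

from collections import Counter

def shared_propagation(children_vectors):
    # children_vectors: one Counter per child of the concept node,
    # mapping each syntactic feature to its frequency for that child.
    t = min(3, len(children_vectors))                        # t = min(3, |C|)
    totals = [sum(v.values()) for v in children_vectors]     # c_i
    N = sum(totals)
    propagated = Counter()
    for feature in set().union(*children_vectors):
        carriers = [i for i, v in enumerate(children_vectors) if feature in v]
        if len(carriers) >= t:
            # Eq. 1: f = sum_i f_i * c_i / N, a weighted sum across the children.
            propagated[feature] = sum(children_vectors[i][feature] * totals[i] / N
                                      for i in carriers)
    return propagated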
We assume that two polysemous words, w1 and w2, that are similar in one sense will be dissimilar in its other senses. Under the distributional hypothesis, similar words occur in the same grammatical contexts and dissimilar words occur in different grammatical contexts. We expect then that most features that are shared between w1 and w2 will be the grammatical contexts of their similar sense. Hence, mostly disambiguated features are propagated up the ontology in the Divide-and-conquer phase. However, the feature vectors for the leaf nodes remain ambiguous (e.g. the feature vectors for both leaf nodes labeled chair in Figure 2 are identical). In this phase of the algorithm, leaf node feature vectors are disambiguated by looking at the parents of their other senses. Leaf nodes that are unambiguous in the ontology will have unambiguous feature vectors. For ambiguous leaf nodes (i.e. leaf nodes that have more than one concept parent), we apply the algorithm described in Figure 4. Given a polysemous leaf node n, we remove from its ambiguous 1 By disambiguated features, we mean that the features are co-occurrences with a particular sense of a word; the features themselves are not sense-tagged. Input: A node n and the enriched ontology O output from the algorithm in Figure 1. Step 1: If n is not a leaf node then return. Step 2: Remove from n’s feature vector all features that intersect with the feature vector of any of n’s other senses’ parent concepts, but are not in n’s parent concept feature vector. Output: A disambiguated feature vector for each leaf node n. Figure 4. Coup phase. 128 feature vector those features that intersect with the ontological feature vector of any of its other senses’ parent concept but that are not in its own parent’s ontological feature vector. For example, consider the furniture sense of the leaf node chair in Figure 2. After the Divide-and-conquer phase, the node chair is assigned the ambiguous lexical feature vector shown in Figure 3. Suppose that chair only has one other sense in WordNet, which is the chairwoman sense illustrated in Figure 2. The features in bold in Figure 3 represent those features of chair that intersect with the ontological feature vector of chairwoman. In the Coup phase of our system, we remove these bold features from the furniture sense leaf node chair. What remains are features like “chair and sofa”, “chair and cushion”, “Ottoman is a chair”, and “recliner is a chair”. Similarly, for the chairwoman sense of chair, we remove those features that intersect with the ontological feature vector of the chair concept (the parent of the other chair leaf node). As shown in the beginning of this section, concept node feature vectors are mostly unambiguous after the Divide-and-conquer phase. However, the Divide-and-conquer phase may be repeated after the Coup phase using a different termination condition. Instead of assigning to leaf nodes ambiguous lexical feature vectors, we use the leaf node feature vectors from the Coup phase. In our experiments, we did not see any significant performance difference by skipping this extra Divide-and-conquer step. 5 Experimental results In this section, we provide a quantitative and qualitative evaluation of our framework. 5.1 Experimental Setup We used Minipar (Lin 1994), a broad coverage parser, to parse two 3GB corpora (TREC-9 and TREC-2002). We collected the frequency counts of the grammatical relations (contexts) output by Minipar and used these to construct the lexical feature vectors as described in Section 3. 
WordNet 2.0 served as our testing ontology. Using the algorithm presented in Section 4, we induced ontological feature vectors for the noun nodes in WordNet using the lexical co-occurrence features from the TREC-2002 corpus. Due to memory limitations, we were only able to propagate features to one quarter of the ontology. We experimented with both the Shared and Committee propagation models described in Section 4.1. 5.2 Quantitative evaluation To evaluate the resulting ontological feature vectors, we considered the task of attaching new nodes into the ontology. To automatically evaluate this, we randomly extracted a set of 1000 noun leaf nodes from the ontology and accumulated lexical feature vectors for them using the TREC-9 corpus (a separate corpus than the one used to propagate features, but of the same genre). We experimented with two test sets: • Full: The 424 of the 1000 random nodes that existed in the TREC-9 corpus • Subset: Subset of Full where only nodes that do not have concept siblings are kept (380 nodes). For each random node, we computed the similarity of the node with each concept node in the ontology by computing the cosine of the angle (Salton and McGill 1983) between the lexical feature vector of the random node ei and the ontological feature vector of the concept nodes ej: ( ) ∑ ∑ ∑ × × = f f e f f e f f e f e j i j i j i mi mi mi mi e e sim 2 2 , We only kept those similar nodes that had a similarity above a threshold σ. We experimentally set σ = 0.1. Top-K accuracy We collected the top-K most similar concept nodes (attachment points) for each node in the test sets and computed the accuracy of finding a correct attachment point in the top-K list. Table 1 shows the result. We expected the algorithm to perform better on the Subset data set since only concepts that have exclusively lexical children must be considered for attachment. In the Full data set, the algorithm must consider each concept in the ontology as a potential attachment point. However, considering the top-5 best attachments, the algorithm performed equally well on both data sets. The Shared propagation algorithm performed consistently slightly better than the Committee method. As described in Section 4.1, building a 129 committee performs best for concepts with many children. Since many nodes in WordNet have few direct children, the Shared propagation method is more appropriate. One possible extension of the Committee propagation algorithm is to find committee members from the full list of descendants of a node rather than only its immediate children. Precision and Recall We computed the precision and recall of our system on varying numbers of returned attachments. Figure 5 and Figure 6 show the attachment precision and recall of our system when the maximum number of returned attachments ranges between 1 and 5. In Figure 5, we see that the Shared propagation method has better precision than the Committee method. Both methods perform similarly on recall. The recall of the system increases most dramatically when returning two attachments without too much of a hit on precision. The low recall when returning only one attachment is due to both system errors and also to the fact that many nodes in the hierarchy are polysemous. In the next section, we discuss further experiments on polysemous nodes. Figure 6 illustrates the large difference on both precision and recall when using the simpler Subset data set. All 95% confidence bounds in Figure 5 and Figure 6 range between ±2.8% and ±5.3%. 
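The attachment step just described can be pictured with a small sketch: cosine similarity between a new node's lexical feature vector and each concept's ontological feature vector, keeping candidates above the threshold σ and returning the top K. Feature vectors are assumed to be dictionaries mapping features to weights; the function names and data layout are illustrative rather than the system's actual code.

```python
import math

def cosine(v1, v2):
    """Cosine of the angle between two sparse feature vectors (dicts)."""
    dot = sum(w * v2.get(f, 0.0) for f, w in v1.items())
    n1 = math.sqrt(sum(w * w for w in v1.values()))
    n2 = math.sqrt(sum(w * w for w in v2.values()))
    if n1 == 0.0 or n2 == 0.0:
        return 0.0
    return dot / (n1 * n2)

def top_k_attachments(node_vector, concept_vectors, k=5, sigma=0.1):
    """Rank concept nodes as attachment points for a new lexical node.

    `concept_vectors` maps concept identifiers to their ontological
    feature vectors; only candidates whose similarity exceeds `sigma`
    are kept, and the `k` most similar are returned.
    """
    scored = [(concept, cosine(node_vector, vec))
              for concept, vec in concept_vectors.items()]
    scored = [(c, s) for c, s in scored if s > sigma]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]
```

With σ = 0.1 and K between 1 and 5, this corresponds to the evaluation setting reported above.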
Polysemous nodes 84 of the nodes in the Full data set are polysemous (they are attached to more than one concept node in the ontology). On average, these nodes have 2.6 senses for a total of 219 senses. Figure 7 compares the precision and recall of the system on all nodes in the Full data set vs. the 84 polysemous nodes. The 95% confidence intervals range between ±3.8% and ±5.0% for the Full data set and between ±1.2% and ±9.4% for the polysemous nodes. The precision on the polysemous nodes is consistently better since these have more possible correct attachments. Clearly, when the system returns at most one or two attachments, the recall on the polysemous nodes is lower than on the Full set. However, it is interesting to note that recall on the polysemous nodes equals the recall on the Full set after K=3. Table 1. Correct attachment point in the top-K attachments (with 95% conf.) K Shared (Full) Committee (Full) Shared (Subset) Committee (Subset) 1 73.9% ± 4.5% 72.0% ± 4.9% 77.4% ± 3.6% 76.1% ± 5.1% 2 78.7% ± 4.1% 76.6% ± 4.2% 80.7% ± 4.0% 79.1% ± 4.5% 3 79.9% ± 4.0% 78.2% ± 4.2% 81.2% ± 3.9% 80.5% ± 4.8% 4 80.6% ± 4.1% 79.0% ± 4.0% 81.5% ± 4.1% 80.8% ± 5.0% 5 81.3% ± 3.8% 79.5% ± 3.9% 81.7% ± 4.1% 81.3% ± 4.9% Figure 5. Attachment precision and recall for the Shared and Committee propagation methods when returning at most K attachments (on the Full set). Precision and Recall (Shared and Committee) vs. Number of Returned Attachments 0.5 0.6 0.7 0.8 0.9 1 1 2 3 4 5 K Precision (Shared) Recall (Shared) Precision (Committee) Recall (Committee) Precision and Recall (Full and Subset) vs. Number of Returned Attachments 0.5 0.6 0.7 0.8 0.9 1 1 2 3 4 5 K Precision (Full) Recall (Full) Precision (Subset) Recall (Subset) Figure 6. Attachment precision and recall for the Full and Subset data sets when returning at most K attachments (using the Shared propagation method). 130 5.3 Qualitative evaluation Inspection of errors revealed that the system often makes plausible attachments. Table 2 shows some example errors generated by our system. For the word arsenic, the system attached it to the concept trioxide, which is the parent of the correct attachment. The system results may be useful to help validate the ontology. For example, for the word law, the system attached it to the regulation (as an organic process) and ordinance (legislative act) concepts. According to WordNet, law has seven possible attachment points, none of which are a legislative act. Hence, the system has found that in the TREC-9 corpus, the word law has a sense of legislative act. Similarly, the system discovered the symptom sense of vomiting. The system discovered a potential anomaly in WordNet with the word slob. The system classified slob as follows: fool Æ simpleton Æ someone whereas WordNet classifies it as: vulgarian Æ unpleasant person Æ unwelcome person Æ someone The ontology could use this output to verify if fool should link in the unpleasant person subtree. Capitalization is not very trustworthy in large collections of text. One of our design decisions was to ignore the case of words in our corpus, which in turn caused some errors since WordNet is case sensitive. For example, the lexical node Munch (Norwegian artist) was attached to the munch concept (food) by error because our system accumulated all features of the word Munch in text regardless of its capitalization. 6 Discussion One question that remains unanswered is how clean an ontology must be in order for our methodology to work. 
Since the structure of the ontology guides the propagation of features, a very noisy ontology will result in noisy feature vectors. However, the framework is tolerant to some amount of noise and can in fact be used to correct some errors (as shown in Section 5.3). We showed in Section 1 how our framework can be used to disambiguate lexical-semantic resources like hyponym lists, verb relations, and unknown words or terms. Other avenues of future work include: Adapting/extending existing ontologies It takes a large amount of time to build resources like WordNet. However, adapting existing resources to a new corpus might be possible using our framework. Once we have enriched the ontology with features from a corpus, we can rearrange the ontological structure according to the inter-conceptual similarity of nodes. For example, consider the word computer in WordNet, which has two senses: a) a machine; and b) a person who calculates. In a computer science corpus, sense b) occurs very infrequently and possibly a new sense of computer (e.g. a processing chip) occurs. A system could potentially remove sense b) since the similarity of the other children of b) and computer is very low. It could also uncover the new processing chip sense by finding a high similarity between computer and the processing chip concept. Validating ontologies This is a holy grail problem in the knowledge representation community. As a small step, our framework can be used to flag potential anomalies to the knowledge engineer. What makes a chair different from a recliner? Given an enriched ontology, we can remove from the feature vectors of chair and recliner those features that occur in their parent furniture concept. The features that remain describe their different syntactic behaviors in text. Figure 7. Attachment precision and recall on the Full set vs. the polysemous nodes in the Full set when the system returns at most K attachments. Precision and Recall (All vs. Polysemous Nodes) 0.4 0.5 0.6 0.7 0.8 0.9 1 1 2 3 4 5 K Precision (All) Recall (All) Precision (Polysemous) Recall (Polysemous) 131 7 Conclusions We presented a framework for inducing ontological feature vectors from lexical co-occurrence vectors. Our method does not require the disambiguation of text. Instead, it relies on the principle of distributional similarity and the fact that polysemous words that are similar in one sense tend to be dissimilar in their other senses. On the task of attaching new words to WordNet using our framework, our experiments showed that the first attachment has 73.9% accuracy and that a correct attachment is in the top-5 attachments with 81.3% accuracy. We believe this work to be useful for a variety of applications. Not only can sense selection tasks such as word sense disambiguation, parsing, and semantic analysis benefit from our framework, but more inference-oriented tasks such as question answering and text summarization as well. We hope that this work will assist with the development of other large-scale and internally consistent collections of semantic information. References Agirre, E.; Ansa, O.; Martinez, D.; and Hovy, E. 2001. Enriching WordNet concepts with topic signatures. In Proceedings of the NAACL workshop on WordNet and Other Lexical Resources: Applications, Extensions and Customizations. Pittsburgh, PA. Baker, C.; Fillmore, C.; and Lowe, J. 1998. The Berkeley FrameNet project. In Proceedings of COLING-ACL. Montreal, Canada. Chklovski, T., and Pantel, P. 
VERBOCEAN: Mining the Web for Fine-Grained Semantic Verb Relations. In Proceedings of EMNLP-2004. pp. 33-40. Barcelona, Spain. Gale, W.; Church, K.; and Yarowsky, D. 1992. A method for disambiguating word senses in a large corpus. Computers and Humanities, 26:415-439. Girju, R.; Badulescu, A.; and Moldovan, D. 2003. Learning semantic constraints for the automatic discovery of part-whole relations. In Proceedings of HLT/NAACL-03. pp. 80-87. Edmonton, Canada. Harabagiu, S.; Miller, G.; and Moldovan, D. 1999. WordNet 2 - A Morphologically and Semantically Enhanced Resource. In Proceedings of SIGLEX-99. pp.1-8. University of Maryland. Harris, Z. 1985. Distributional structure. In: Katz, J. J. (ed.) The Philosophy of Linguistics. New York: Oxford University Press. pp. 26-47. Hovy, E. 1998. Combining and standardizing large-scale, practical ontologies for machine translation and other uses. In Proceedings LREC-98. pp. 535-542. Granada, Spain. Hindle, D. 1990. Noun classification from predicate-argument structures. In Proceedings of ACL-90. pp. 268-275. Pittsburgh, PA. Kingsbury, P; Palmer, M.; and Marcus, M. 2002. Adding semantic annotation to the Penn TreeBank. In Proceedings of HLT2002. San Diego, California. Knight, K. and Luk, S. K. 1994. Building a large-scale knowledge base for machine translation. In Proceedings of AAAI1994. Seattle, WA. Lenat, D. 1995. CYC: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11):33-38. Lin, D. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING/ACL-98. pp. 768-774. Montreal, Canada. Lin, D. 1994. Principar - an efficient, broad-coverage, principlebased parser. Proceedings of COLING-94. pp. 42-48. Kyoto, Japan. Lund, K. and Burgess, C. 1996. Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments, and Computers, 28:203-208. Meyers, A.; Reeves, R.; Macleod, C.; Szekely, R.; Zielinska, V.; Young, B.; and Grishman, R. Annotating noun argument structure for NomBank. In Proceedings of LREC-2004. Lisbon, Portugal. Miller, G. 1990. WordNet: An online lexical database. International Journal of Lexicography, 3(4). Noy, N. F. and Musen, M. A. 1999. An algorithm for merging and aligning ontologies: Automation and tool support. In Proceedings of the Workshop on Ontology Management (AAAI-99). Orlando, FL. Pantel, P. and Lin, D. 2002. Discovering Word Senses from Text. In Proceedings of SIGKDD-02. pp. 613-619. Edmonton, Canada. Riloff, E. and Shepherd, J. 1997. A corpus-based approach for building semantic lexicons. In Proceedings of EMNLP-1997. Salton, G. and McGill, M. J. 1983. Introduction to Modern Information Retrieval. McGraw Hill. Shinzato, K. and Torisawa, K. 2004. Acquiring hyponymy relations from web documents. In Proceedings of HLT-NAACL2004. pp. 73-80. Boston, MA. Table 2. Example attachment errors by our system. Node System Attachment Correct Attachment arsenic* trioxide arsenic OR element law regulation law OR police OR … Munch† munch Munch slob fool slob vomiting fever emesis * the system’s attachment was a parent of the correct attachment. † error due to case mix-up (our algorithm does not differentiate between case). 132
Proceedings of the 43rd Annual Meeting of the ACL, pages 133–140, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Extracting Semantic Orientations of Words using Spin Model Hiroya Takamura Takashi Inui Manabu Okumura Precision and Intelligence Laboratory Tokyo Institute of Technology 4259 Nagatsuta Midori-ku Yokohama, 226-8503 Japan {takamura,oku}@pi.titech.ac.jp, [email protected] Abstract We propose a method for extracting semantic orientations of words: desirable or undesirable. Regarding semantic orientations as spins of electrons, we use the mean field approximation to compute the approximate probability function of the system instead of the intractable actual probability function. We also propose a criterion for parameter selection on the basis of magnetization. Given only a small number of seed words, the proposed method extracts semantic orientations with high accuracy in the experiments on English lexicon. The result is comparable to the best value ever reported. 1 Introduction Identification of emotions (including opinions and attitudes) in text is an important task which has a variety of possible applications. For example, we can efficiently collect opinions on a new product from the internet, if opinions in bulletin boards are automatically identified. We will also be able to grasp people’s attitudes in questionnaire, without actually reading all the responds. An important resource in realizing such identification tasks is a list of words with semantic orientation: positive or negative (desirable or undesirable). Frequent appearance of positive words in a document implies that the writer of the document would have a positive attitude on the topic. The goal of this paper is to propose a method for automatically creating such a word list from glosses (i.e., definition or explanation sentences ) in a dictionary, as well as from a thesaurus and a corpus. For this purpose, we use spin model, which is a model for a set of electrons with spins. Just as each electron has a direction of spin (up or down), each word has a semantic orientation (positive or negative). We therefore regard words as a set of electrons and apply the mean field approximation to compute the average orientation of each word. We also propose a criterion for parameter selection on the basis of magnetization, a notion in statistical physics. Magnetization indicates the global tendency of polarization. We empirically show that the proposed method works well even with a small number of seed words. 2 Related Work Turney and Littman (2003) proposed two algorithms for extraction of semantic orientations of words. To calculate the association strength of a word with positive (negative) seed words, they used the number of hits returned by a search engine, with a query consisting of the word and one of seed words (e.g., “word NEAR good”, “word NEAR bad”). They regarded the difference of two association strengths as a measure of semantic orientation. They also proposed to use Latent Semantic Analysis to compute the association strength with seed words. An empirical evaluation was conducted on 3596 words extracted from General Inquirer (Stone et al., 1966). Hatzivassiloglou and McKeown (1997) focused on conjunctive expressions such as “simple and 133 well-received” and “simplistic but well-received”, where the former pair of words tend to have the same semantic orientation, and the latter tend to have the opposite orientation. 
They first classify each conjunctive expression into the same-orientation class or the different-orientation class. They then use the classified expressions to cluster words into the positive class and the negative class. The experiments were conducted with the dataset that they created on their own. Evaluation was limited to adjectives. Kobayashi et al. (2001) proposed a method for extracting semantic orientations of words with bootstrapping. The semantic orientation of a word is determined on the basis of its gloss, if any of their 52 hand-crafted rules is applicable to the sentence. Rules are applied iteratively in the bootstrapping framework. Although Kobayashi et al.’s work provided an accurate investigation on this task and inspired our work, it has drawbacks: low recall and language dependency. They reported that the semantic orientations of only 113 words are extracted with precision 84.1% (the low recall is due partly to their large set of seed words (1187 words)). The handcrafted rules are only for Japanese. Kamps et al. (2004) constructed a network by connecting each pair of synonymous words provided by WordNet (Fellbaum, 1998), and then used the shortest paths to two seed words “good” and “bad” to obtain the semantic orientation of a word. Limitations of their method are that a synonymy dictionary is required, that antonym relations cannot be incorporated into the model. Their evaluation is restricted to adjectives. The method proposed by Hu and Liu (2004) is quite similar to the shortest-path method. Hu and Liu’s method iteratively determines the semantic orientations of the words neighboring any of the seed words and enlarges the seed word set in a bootstrapping manner. Subjective words are often semantically oriented. Wiebe (2000) used a learning method to collect subjective adjectives from corpora. Riloff et al. (2003) focused on the collection of subjective nouns. We later compare our method with Turney and Littman’s method and Kamps et al.’s method. The other pieces of research work mentioned above are related to ours, but their objectives are different from ours. 3 Spin Model and Mean Field Approximation We give a brief introduction to the spin model and the mean field approximation, which are wellstudied subjects both in the statistical mechanics and the machine learning communities (Geman and Geman, 1984; Inoue and Carlucci, 2001; Mackay, 2003). A spin system is an array of N electrons, each of which has a spin with one of two values “+1 (up)” or “−1 (down)”. Two electrons next to each other energetically tend to have the same spin. This model is called the Ising spin model, or simply the spin model (Chandler, 1987). The energy function of a spin system can be represented as E(x, W) = −1 2 X ij wijxixj, (1) where xi and xj (∈x) are spins of electrons i and j, matrix W = {wij} represents weights between two electrons. In a spin system, the variable vector x follows the Boltzmann distribution : P(x|W) = exp(−βE(x, W)) Z(W) , (2) where Z(W) = P x exp(−βE(x, W)) is the normalization factor, which is called the partition function and β is a constant called the inversetemperature. As this distribution function suggests, a configuration with a higher energy value has a smaller probability. Although we have a distribution function, computing various probability values is computationally difficult. The bottleneck is the evaluation of Z(W), since there are 2N configurations of spins in this system. We therefore approximate P(x|W) with a simple function Q(x; θ). 
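To make the intractability concrete, the short sketch below computes the energy of Equation (1) and evaluates Z(W) by brute-force enumeration of all 2^N spin configurations, which is only feasible for toy-sized N. This is purely illustrative and not part of the proposed method.

```python
import itertools
import math

def energy(x, W):
    """Ising energy E(x, W) = -1/2 * sum_ij w_ij * x_i * x_j."""
    n = len(x)
    return -0.5 * sum(W[i][j] * x[i] * x[j]
                      for i in range(n) for j in range(n))

def partition_function(W, beta=1.0):
    """Z(W) by enumerating all 2^N configurations (intractable for large N)."""
    n = len(W)
    return sum(math.exp(-beta * energy(x, W))
               for x in itertools.product([-1, 1], repeat=n))

# Tiny 3-spin system with hypothetical weights:
W = [[0.0, 1.0, 0.0],
     [1.0, 0.0, -1.0],
     [0.0, -1.0, 0.0]]
print(partition_function(W, beta=1.0))
```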
The set of parameters θ for Q is determined such that Q(x; θ) becomes as similar to P(x|W) as possible. As a measure of the distance between P and Q, the variational free energy F is often used, which is defined as the difference between the mean energy with respect to Q and the entropy of Q:

F(\theta) = \beta \sum_{x} Q(x;\theta)\,E(x;W) - \Big(-\sum_{x} Q(x;\theta)\log Q(x;\theta)\Big).   (3)

The parameters θ that minimize the variational free energy will be chosen. It has been shown that minimizing F is equivalent to minimizing the Kullback-Leibler divergence between P and Q (Mackay, 2003). We next assume that the function Q(x; θ) has the factorial form:

Q(x;\theta) = \prod_{i} Q(x_i;\theta_i).   (4)

Simple substitution and transformation leads us to the following variational free energy:

F(\theta) = -\frac{\beta}{2} \sum_{ij} w_{ij}\,\bar{x}_i \bar{x}_j - \sum_{i}\Big(-\sum_{x_i} Q(x_i;\theta_i)\log Q(x_i;\theta_i)\Big).   (5)

With the usual method of Lagrange multipliers, we obtain the mean field equation:

\bar{x}_i = \frac{\sum_{x_i} x_i \exp\big(\beta x_i \sum_j w_{ij}\bar{x}_j\big)}{\sum_{x_i} \exp\big(\beta x_i \sum_j w_{ij}\bar{x}_j\big)}.   (6)

This equation is solved by the iterative update rule:

\bar{x}_i^{new} = \frac{\sum_{x_i} x_i \exp\big(\beta x_i \sum_j w_{ij}\bar{x}_j^{old}\big)}{\sum_{x_i} \exp\big(\beta x_i \sum_j w_{ij}\bar{x}_j^{old}\big)}.   (7)

4 Extraction of Semantic Orientation of Words with Spin Model

We use the spin model to extract semantic orientations of words. Each spin has a direction taking one of two values: up or down. Two neighboring spins tend to have the same direction for energetic reasons. Regarding each word as an electron and its semantic orientation as the spin of the electron, we construct a lexical network by connecting two words if, for example, one word appears in the gloss of the other word. The intuition behind this is that if a word is semantically oriented in one direction, then the words in its gloss tend to be oriented in the same direction. Using the mean-field method developed in statistical mechanics, we determine the semantic orientations on the network in a global manner. The global optimization enables the incorporation of possibly noisy resources such as glosses and corpora, while existing simple methods such as the shortest-path method and the bootstrapping method cannot work in the presence of such noisy evidence. Those methods depend on less noisy data such as a thesaurus.

4.1 Construction of Lexical Networks

We construct a lexical network by linking two words if one word appears in the gloss of the other word. Each link belongs to one of two groups: the same-orientation links SL and the different-orientation links DL. If at least one word precedes a negation word (e.g., not) in the gloss of the other word, the link is a different-orientation link. Otherwise the link is a same-orientation link. We next set weights W = (w_{ij}) to links:

w_{ij} = \begin{cases} \frac{1}{\sqrt{d(i)d(j)}} & (l_{ij} \in SL) \\ -\frac{1}{\sqrt{d(i)d(j)}} & (l_{ij} \in DL) \\ 0 & \text{otherwise}, \end{cases}   (8)

where l_{ij} denotes the link between word i and word j, and d(i) denotes the degree of word i, that is, the number of words linked with word i. Two words without connections are regarded as being connected by a link of weight 0. We call this network the gloss network (G). We construct another network, the gloss-thesaurus network (GT), by linking synonyms, antonyms and hypernyms, in addition to the above linked words. Only antonym links are in DL. We enhance the gloss-thesaurus network with co-occurrence information extracted from a corpus. As mentioned in Section 2, Hatzivassiloglou and McKeown (1997) used conjunctive expressions in a corpus. Following their method, we connect two adjectives if the adjectives appear in a conjunctive form in the corpus.
If the adjectives are connected by "and", the link belongs to SL. If they are connected by "but", the link belongs to DL. We call this network the gloss-thesaurus-corpus network (GTC).

4.2 Extraction of Orientations

We suppose that a small number of seed words are given. In other words, we know beforehand the semantic orientations of those given words. We incorporate this small labeled dataset by modifying the previous update rule. Instead of βE(x, W) in Equation (2), we use the following function H(β, x, W):

H(\beta, x, W) = -\frac{\beta}{2} \sum_{ij} w_{ij} x_i x_j + \alpha \sum_{i \in L} (x_i - a_i)^2,   (9)

where L is the set of seed words, a_i is the orientation of seed word i, and α is a positive constant. This expression means that if x_i (i ∈ L) is different from a_i, the state is penalized. Using function H, we obtain the new update rule for x_i (i ∈ L):

\bar{x}_i^{new} = \frac{\sum_{x_i} x_i \exp\big(\beta x_i s_i^{old} - \alpha (x_i - a_i)^2\big)}{\sum_{x_i} \exp\big(\beta x_i s_i^{old} - \alpha (x_i - a_i)^2\big)},   (10)

where s_i^{old} = \sum_j w_{ij}\bar{x}_j^{old}. \bar{x}_i^{old} and \bar{x}_i^{new} are the averages of x_i before and after the update, respectively. What is discussed here was constructed with reference to the work by Inoue and Carlucci (2001), in which they applied the spin glass model to image restoration. Initially, the averages of the seed words are set according to their given orientations. The other averages are set to 0. When the difference in the value of the variational free energy before and after an update is smaller than a threshold, we regard the computation as converged. The words with high final average values are classified as positive words. The words with low final average values are classified as negative words.

4.3 Hyper-parameter Prediction

The performance of the proposed method largely depends on the value of the hyper-parameter β. In order to make the method more practical, we propose criteria for determining its value. When a large labeled dataset is available, we can obtain a reliable pseudo leave-one-out error rate:

\frac{1}{|L|} \sum_{i \in L} \big[\, a_i \bar{x}'_i \,\big],   (11)

where [t] is 1 if t is negative and 0 otherwise, and \bar{x}'_i is calculated with the right-hand side of Equation (6), that is, with the penalty term α(\bar{x}_i - a_i)^2 of Equation (10) ignored. We choose the β that minimizes this value. However, when a large amount of labeled data is unavailable, the pseudo leave-one-out error rate is not reliable. In such cases, we use the magnetization m for hyper-parameter prediction:

m = \frac{1}{N} \sum_{i} \bar{x}_i.   (12)

At a high temperature, spins are randomly oriented (paramagnetic phase, m ≈ 0). At a low temperature, most of the spins have the same direction (ferromagnetic phase, m ≠ 0). It is known that at some intermediate temperature, the ferromagnetic phase suddenly changes to the paramagnetic phase. This phenomenon is called phase transition. Slightly before the phase transition, spins are locally polarized; strongly connected spins have the same polarity, but not in a global way. Intuitively, the state of the lexical network is locally polarized. Therefore, we calculate values of m with several different values of β and select the value just before the phase transition.

4.4 Discussion on the Model

In our model, the semantic orientations of words are determined according to the average values of the spins. Despite the heuristic flavor of this decision rule, it has a theoretical background related to maximizer of posterior marginal (MPM) estimation, or 'finite-temperature decoding' (Iba, 1999; Marroquin, 1985). In MPM, the average is the marginal distribution over x_i obtained from the distribution over x.
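A minimal sketch of the procedure in Sections 4.1 to 4.3 is given below, assuming the lexical network is provided as a weight matrix and the seed orientations as a dictionary. Updates are applied synchronously and convergence is checked on the change in the averages rather than on the variational free energy, both of which are simplifications; the function and variable names are illustrative only.

```python
import math

def mean_field_orientations(W, seeds, beta=1.0, alpha=1.0,
                            max_iter=100, tol=1e-6):
    """Mean-field estimation of word orientations on a lexical network.

    W     : N x N list of lists with link weights w_ij (Equation 8).
    seeds : dict mapping node index -> orientation a_i in {+1, -1}.
    Returns the list of average spins; positive averages are read as
    positive orientations, negative averages as negative ones.
    """
    n = len(W)
    xbar = [float(seeds.get(i, 0.0)) for i in range(n)]
    for _ in range(max_iter):
        new = list(xbar)
        for i in range(n):
            s = sum(W[i][j] * xbar[j] for j in range(n))
            num, den = 0.0, 0.0
            for xi in (-1.0, 1.0):
                # Seed nodes receive the penalty alpha * (x_i - a_i)^2.
                pen = alpha * (xi - seeds[i]) ** 2 if i in seeds else 0.0
                weight = math.exp(beta * xi * s - pen)
                num += xi * weight
                den += weight
            new[i] = num / den
        delta = max(abs(a - b) for a, b in zip(new, xbar))
        xbar = new
        if delta < tol:
            break
    return xbar

def magnetization(xbar):
    """m = (1/N) * sum_i xbar_i (Equation 12)."""
    return sum(xbar) / len(xbar)
```

In such a sketch, β would be chosen as the largest value whose magnetization stays below a small threshold, mirroring the "just before the phase transition" criterion above.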
We should note that the finite-temperature decoding is quite different from annealing type algorithms or ‘zero-temperature decoding’, which correspond to maximum a posteriori (MAP) estimation and also often used in natural language processing (Cowie et al., 1992). Since the model estimation has been reduced to simple update calculations, the proposed model is similar to conventional spreading activation approaches, which have been applied, for example, to word sense disambiguation (Veronis and Ide, 1990). Actually, the proposed model can be regarded as a spreading activation model with a specific update 136 rule, as long as we are dealing with 2-class model (2-Ising model). However, there are some advantages in our modelling. The largest advantage is its theoretical background. We have an objective function and its approximation method. We thus have a measure of goodness in model estimation and can use another better approximation method, such as Bethe approximation (Tanaka et al., 2003). The theory tells us which update rule to use. We also have a notion of magnetization, which can be used for hyperparameter estimation. We can use a plenty of knowledge, methods and algorithms developed in the field of statistical mechanics. We can also extend our model to a multiclass model (Q-Ising model). Another interesting point is the relation to maximum entropy model (Berger et al., 1996), which is popular in the natural language processing community. Our model can be obtained by maximizing the entropy of the probability distribution Q(x) under constraints regarding the energy function. 5 Experiments We used glosses, synonyms, antonyms and hypernyms of WordNet (Fellbaum, 1998) to construct an English lexical network. For part-of-speech tagging and lemmatization of glosses, we used TreeTagger (Schmid, 1994). 35 stopwords (quite frequent words such as “be” and “have”) are removed from the lexical network. Negation words include 33 words. In addition to usual negation words such as “not” and “never”, we include words and phrases which mean negation in a general sense, such as “free from” and “lack of”. The whole network consists of approximately 88,000 words. We collected 804 conjunctive expressions from Wall Street Journal and Brown corpus as described in Section 4.2. The labeled dataset used as a gold standard is General Inquirer lexicon (Stone et al., 1966) as in the work by Turney and Littman (2003). We extracted the words tagged with “Positiv” or “Negativ”, and reduced multiple-entry words to single entries. As a result, we obtained 3596 words (1616 positive words and 1980 negative words) 1. In the computation of 1Although we preprocessed in the same way as Turney and Littman, there is a slight difference between their dataset and our dataset. However, we believe this difference is insignificant. Table 1: Classification accuracy (%) with various networks and four different sets of seed words. In the parentheses, the predicted value of β is written. For cv, no value is written for β, since 10 different values are obtained. seeds GTC GT G cv 90.8 (—) 90.9 (—) 86.9 (—) 14 81.9 (1.0) 80.2 (1.0) 76.2 (1.0) 4 73.8 (0.9) 73.7 (1.0) 65.2 (0.9) 2 74.6 (1.0) 61.8 (1.0) 65.7 (1.0) accuracy, seed words are eliminated from these 3596 words. We conducted experiments with different values of β from 0.1 to 2.0, with the interval 0.1, and predicted the best value as explained in Section 4.3. The threshold of the magnetization for hyper-parameter estimation is set to 1.0 × 10−5. 
That is, the predicted optimal value of β is the largest β whose corresponding magnetization does not exceeds the threshold value. We performed 10-fold cross validation as well as experiments with fixed seed words. The fixed seed words are the ones used by Turney and Littman: 14 seed words {good, nice, excellent, positive, fortunate, correct, superior, bad, nasty, poor, negative, unfortunate, wrong, inferior}; 4 seed words {good, superior, bad, inferior}; 2 seed words {good, bad}. 5.1 Classification Accuracy Table 1 shows the accuracy values of semantic orientation classification for four different sets of seed words and various networks. In the table, cv corresponds to the result of 10-fold cross validation, in which case we use the pseudo leave-one-out error for hyper-parameter estimation, while in other cases we use magnetization. In most cases, the synonyms and the cooccurrence information from corpus improve accuracy. The only exception is the case of 2 seed words, in which G performs better than GT. One possible reason of this inversion is that the computation is trapped in a local optimum, since a small number of seed words leave a relatively large degree of freedom in the solution space, resulting in more local optimal points. We compare our results with Turney and 137 Table 2: Actual best classification accuracy (%) with various networks and four different sets of seed words. In the parenthesis, the actual best value of β is written, except for cv. seeds GTC GT G cv 91.5 (—) 91.5 (—) 87.0 (—) 14 81.9 (1.0) 80.2 (1.0) 76.2 (1.0) 4 74.4 (0.6) 74.4 (0.6) 65.3 (0.8) 2 75.2 (0.8) 61.9 (0.8) 67.5 (0.5) Littman’s results. With 14 seed words, they achieved 61.26% for a small corpus (approx. 1 × 107 words), 76.06% for a medium-sized corpus (approx. 2×109 words), 82.84% for a large corpus (approx. 1×1011 words). Without a corpus nor a thesaurus (but with glosses in a dictionary), we obtained accuracy that is comparable to Turney and Littman’s with a medium-sized corpus. When we enhance the lexical network with corpus and thesaurus, our result is comparable to Turney and Littman’s with a large corpus. 5.2 Prediction of β We examine how accurately our prediction method for β works by comparing Table 1 above and Table 2 below. Our method predicts good β quite well especially for 14 seed words. For small numbers of seed words, our method using magnetization tends to predict a little larger value. We also display the figure of magnetization and accuracy in Figure 1. We can see that the sharp change of magnetization occurs at around β = 1.0 (phrase transition). At almost the same point, the classification accuracy reaches the peak. 5.3 Precision for the Words with High Confidence We next evaluate the proposed method in terms of precision for the words that are classified with high confidence. We regard the absolute value of each average as a confidence measure and evaluate the top words with the highest absolute values of averages. The result of this experiment is shown in Figure 2, for 14 seed words as an example. The top 1000 words achieved more than 92% accuracy. This result shows that the absolute value of each average -0.1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0 1 2 3 4 5 6 7 8 9 10 40 45 50 55 60 65 70 75 80 85 90 Magnetization Accuracy Beta magnetization accuracy Figure 1: Example of magnetization and classification accuracy(14 seed words). 75 80 85 90 95 100 0 500 1000 1500 2000 2500 3000 3500 4000 Precision Number of selected words GTC GT G Figure 2: Precision (%) with 14 seed words. 
138 Table 3: Precision (%) for selected adjectives. Comparison between the proposed method and the shortest-path method. seeds proposed short. path 14 73.4 (1.0) 70.8 4 71.0 (1.0) 64.9 2 68.2 (1.0) 66.0 Table 4: Precision (%) for adjectives. Comparison between the proposed method and the bootstrapping method. seeds proposed bootstrap 14 83.6 (0.8) 72.8 4 82.3 (0.9) 73.2 2 83.5 (0.7) 71.1 can work as a confidence measure of classification. 5.4 Comparison with other methods In order to further investigate the model, we conduct experiments in restricted settings. We first construct a lexical network using only synonyms. We compare the spin model with the shortest-path method proposed by Kamps et al. (2004) on this network, because the shortestpath method cannot incorporate negative links of antonyms. We also restrict the test data to 697 adjectives, which is the number of examples that the shortest-path method can assign a non-zero orientation value. Since the shortest-path method is designed for 2 seed words, the method is extended to use the average shortest-path lengths for 4 seed words and 14 seed words. Table 3 shows the result. Since the only difference is their algorithms, we can conclude that the global optimization of the spin model works well for the semantic orientation extraction. We next compare the proposed method with a simple bootstrapping method proposed by Hu and Liu (2004). We construct a lexical network using synonyms and antonyms. We restrict the test data to 1470 adjectives for comparison of methods. The result in Table 4 also shows that the global optimization of the spin model works well for the semantic orientation extraction. We also tested the shortest path method and the bootstrapping method on GTC and GT, and obtained low accuracies as expected in the discussion in Section 4. 5.5 Error Analysis We investigated a number of errors and concluded that there were mainly three types of errors. One is the ambiguity of word senses. For example, one of the glosses of “costly”is “entailing great loss or sacrifice”. The word “great” here means “large”, although it usually means “outstanding” and is positively oriented. Another is lack of structural information. For example, “arrogance” means “overbearing pride evidenced by a superior manner toward the weak”. Although “arrogance” is mistakingly predicted as positive due to the word “superior”, what is superior here is “manner”. The last one is idiomatic expressions. For example, although “brag” means “show off”, neither of “show” and “off” has the negative orientation. Idiomatic expressions often does not inherit the semantic orientation from or to the words in the gloss. The current model cannot deal with these types of errors. We leave their solutions as future work. 6 Conclusion and Future Work We proposed a method for extracting semantic orientations of words. In the proposed method, we regarded semantic orientations as spins of electrons, and used the mean field approximation to compute the approximate probability function of the system instead of the intractable actual probability function. We succeeded in extracting semantic orientations with high accuracy, even when only a small number of seed words are available. There are a number of directions for future work. One is the incorporation of syntactic information. Since the importance of each word consisting a gloss depends on its syntactic role. syntactic information in glosses should be useful for classification. Another is active learning. 
To decrease the amount of manual tagging for seed words, an active learning scheme is desired, in which a small number of good seed words are automatically selected. Although our model can easily extended to a 139 multi-state model, the effectiveness of using such a multi-state model has not been shown yet. Our model uses only the tendency of having the same orientation. Therefore we can extract semantic orientations of new words that are not listed in a dictionary. The validation of such extension will widen the possibility of application of our method. Larger corpora such as web data will improve performance. The combination of our method and the method by Turney and Littman (2003) is promising. Finally, we believe that the proposed model is applicable to other tasks in computational linguistics. References Adam L. Berger, Stephen Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. David Chandler. 1987. Introduction to Modern Statistical Mechanics. Oxford University Press. Jim Cowie, Joe Guthrie, and Louise Guthrie. 1992. Lexical disambiguation using simulated annealing. In Proceedings of the 14th conference on Computational linguistics, volume 1, pages 359–365. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database, Language, Speech, and Communication Series. MIT Press. Stuart Geman and Donald Geman. 1984. Stochastic relaxation, gibbs distributions, and the bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741. Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1997. Predicting the semantic orientation of adjectives. In Proceedings of the Thirty-Fifth Annual Meeting of the Association for Computational Linguistics and the Eighth Conference of the European Chapter of the Association for Computational Linguistics, pages 174–181. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the 2004 ACM SIGKDD international conference on Knowledge discovery and data mining (KDD-2004), pages 168–177. Yukito Iba. 1999. The nishimori line and bayesian statistics. Journal of Physics A: Mathematical and General, pages 3875–3888. Junichi Inoue and Domenico M. Carlucci. 2001. Image restoration using the q-ising spin glass. Physical Review E, 64:036121–1 – 036121–18. Jaap Kamps, Maarten Marx, Robert J. Mokken, and Maarten de Rijke. 2004. Using wordnet to measure semantic orientation of adjectives. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004), volume IV, pages 1115–1118. Nozomi Kobayashi, Takashi Inui, and Kentaro Inui. 2001. Dictionary-based acquisition of the lexical knowledge for p/n analysis (in Japanese). In Proceedings of Japanese Society for Artificial Intelligence, SLUD-33, pages 45–50. David J. C. Mackay. 2003. Information Theory, Inference and Learning Algorithms. Cambridge University Press. Jose L. Marroquin. 1985. Optimal bayesian estimators for image segmentation and surface reconstruction. Technical Report A.I. Memo 839, Massachusetts Institute of Technology. Ellen Riloff, Janyce Wiebe, and Theresa Wilson. 2003. Learning subjective nouns using extraction pattern bootstrapping. In Proceedings of the Seventh Conference on Natural Language Learning (CoNLL-03), pages 25–32. Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of International Conference on New Methods in Language Processing, pages 44–49. 
Philip J. Stone, Dexter C. Dunphy, Marshall S. Smith, and Daniel M. Ogilvie. 1966. The General Inquirer: A Computer Approach to Content Analysis. The MIT Press. Kazuyuki Tanaka, Junichi Inoue, and Mike Titterington. 2003. Probabilistic image processing by means of the bethe approximation for the q-ising model. Journal of Physics A: Mathematical and General, 36:11023– 11035. Peter D. Turney and Michael L. Littman. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Transactions on Information Systems, 21(4):315–346. Jean Veronis and Nancy M. Ide. 1990. Word sense disambiguation with very large neural networks extracted from machine readable dictionaries. In Proceedings of the 13th Conference on Computational Linguistics, volume 2, pages 389–394. Janyce M. Wiebe. 2000. Learning subjective adjectives from corpora. In Proceedings of the 17th National Conference on Artificial Intelligence (AAAI2000), pages 735–740. 140
Proceedings of the 43rd Annual Meeting of the ACL, pages 141–148, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Modeling Local Coherence: An Entity-based Approach Regina Barzilay Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology [email protected] Mirella Lapata School of Informatics University of Edinburgh [email protected] Abstract This paper considers the problem of automatic assessment of local coherence. We present a novel entity-based representation of discourse which is inspired by Centering Theory and can be computed automatically from raw text. We view coherence assessment as a ranking learning problem and show that the proposed discourse representation supports the effective learning of a ranking function. Our experiments demonstrate that the induced model achieves significantly higher accuracy than a state-of-the-art coherence model. 1 Introduction A key requirement for any system that produces text is the coherence of its output. Not surprisingly, a variety of coherence theories have been developed over the years (e.g., Mann and Thomson, 1988; Grosz et al. 1995) and their principles have found application in many symbolic text generation systems (e.g., Scott and de Souza, 1990; Kibble and Power, 2004). The ability of these systems to generate high quality text, almost indistinguishable from human writing, makes the incorporation of coherence theories in robust large-scale systems particularly appealing. The task is, however, challenging considering that most previous efforts have relied on handcrafted rules, valid only for limited domains, with no guarantee of scalability or portability (Reiter and Dale, 2000). Furthermore, coherence constraints are often embedded in complex representations (e.g., Asher and Lascarides, 2003) which are hard to implement in a robust application. This paper focuses on local coherence, which captures text relatedness at the level of sentence-tosentence transitions, and is essential for generating globally coherent text. The key premise of our work is that the distribution of entities in locally coherent texts exhibits certain regularities. This assumption is not arbitrary — some of these regularities have been recognized in Centering Theory (Grosz et al., 1995) and other entity-based theories of discourse. The algorithm introduced in the paper automatically abstracts a text into a set of entity transition sequences, a representation that reflects distributional, syntactic, and referential information about discourse entities. We argue that this representation of discourse allows the system to learn the properties of locally coherent texts opportunistically from a given corpus, without recourse to manual annotation or a predefined knowledge base. We view coherence assessment as a ranking problem and present an efficiently learnable model that orders alternative renderings of the same information based on their degree of local coherence. Such a mechanism is particularly appropriate for generation and summarization systems as they can produce multiple text realizations of the same underlying content, either by varying parameter values, or by relaxing constraints that control the generation process. A system equipped with a ranking mechanism, could compare the quality of the candidate outputs, much in the same way speech recognizers employ language models at the sentence level. 
Our evaluation results demonstrate the effectiveness of our entity-based ranking model within the general framework of coherence assessment. First, we evaluate the utility of the model in a text ordering task where our algorithm has to select a maximally coherent sentence order from a set of candidate permutations. Second, we compare the rankings produced by the model against human coherence judgments elicited for automatically generated summaries. In both experiments, our method yields 141 a significant improvement over a state-of-the-art coherence model based on Latent Semantic Analysis (Foltz et al., 1998). In the following section, we provide an overview of existing work on the automatic assessment of local coherence. Then, we introduce our entity-based representation, and describe our ranking model. Next, we present the experimental framework and data. Evaluation results conclude the paper. 2 Related Work Local coherence has been extensively studied within the modeling framework put forward by Centering Theory (Grosz et al., 1995; Walker et al., 1998; Strube and Hahn, 1999; Poesio et al., 2004; Kibble and Power, 2004). One of the main assumptions underlying Centering is that a text segment which foregrounds a single entity is perceived to be more coherent than a segment in which multiple entities are discussed. The theory formalizes this intuition by introducing constraints on the distribution of discourse entities in coherent text. These constraints are formulated in terms of focus, the most salient entity in a discourse segment, and transition of focus between adjacent sentences. The theory also establishes constraints on the linguistic realization of focus, suggesting that it is more likely to appear in prominent syntactic positions (such as subject or object), and to be referred to with anaphoric expressions. A great deal of research has attempted to translate principles of Centering Theory into a robust coherence metric (Miltsakaki and Kukich, 2000; Hasler, 2004; Karamanis et al., 2004). Such a translation is challenging in several respects: one has to specify the “free parameters” of the system (Poesio et al., 2004) and to determine ways of combining the effects of various constraints. A common methodology that has emerged in this research is to develop and evaluate coherence metrics on manually annotated corpora. For instance, Miltsakaki and Kukich (2000) annotate a corpus of student essays with transition information, and show that the distribution of transitions correlates with human grades. Karamanis et al. (2004) use a similar methodology to compare coherence metrics with respect to their usefulness for text planning in generation. The present work differs from these approaches in two key respects. First, our method does not require manual annotation of input texts. We do not aim to produce complete centering annotations; instead, our inference procedure is based on a discourse representation that preserves essential entity transition information, and can be computed automatically from raw text. Second, we learn patterns of entity distribution from a corpus, without attempting to directly implement or refine Centering constraints. 3 The Coherence Model In this section we introduce our entity-based representation of discourse. We describe how it can be computed and how entity transition patterns can be extracted. The latter constitute a rich feature space on which probabilistic inference is performed. 
Text Representation Each text is represented by an entity grid, a two-dimensional array that captures the distribution of discourse entities across text sentences. We follow Miltsakaki and Kukich (2000) in assuming that our unit of analysis is the traditional sentence (i.e., a main clause with accompanying subordinate and adjunct clauses). The rows of the grid correspond to sentences, while the columns correspond to discourse entities. By discourse entity we mean a class of coreferent noun phrases. For each occurrence of a discourse entity in the text, the corresponding grid cell contains information about its grammatical role in the given sentence. Each grid column thus corresponds to a string from a set of categories reflecting the entity’s presence or absence in a sequence of sentences. Our set consists of four symbols: S (subject), O (object), X (neither subject nor object) and – (gap which signals the entity’s absence from a given sentence). Table 1 illustrates a fragment of an entity grid constructed for the text in Table 2. Since the text contains six sentences, the grid columns are of length six. Consider for instance the grid column for the entity trial, [O – – – – X]. It records that trial is present in sentences 1 and 6 (as O and X respectively) but is absent from the rest of the sentences. Grid Computation The ability to identify and cluster coreferent discourse entities is an important prerequisite for computing entity grids. The same entity may appear in different linguistic forms, e.g., Microsoft Corp., Microsoft, and the company, but should still be mapped to a single entry in the grid. Table 1 exemplifies the entity grid for the text in Table 2 when coreference resolution is taken into account. To automatically compute entity classes, 142 Department Trial Microsoft Evidence Competitors Markets Products Brands Case Netscape Software Tactics Government Suit Earnings 1 S O S X O – – – – – – – – – – 1 2 – – O – – X S O – – – – – – – 2 3 – – S O – – – – S O O – – – – 3 4 – – S – – – – – – – – S – – – 4 5 – – – – – – – – – – – – S O – 5 6 – X S – – – – – – – – – – – O 6 Table 1: A fragment of the entity grid. Noun phrases are represented by their head nouns. 1 [The Justice Department]S is conducting an [anti-trust trial]O against [Microsoft Corp.]X with [evidence]X that [the company]S is increasingly attempting to crush [competitors]O. 2 [Microsoft]O is accused of trying to forcefully buy into [markets]X where [its own products]S are not competitive enough to unseat [established brands]O. 3 [The case]S revolves around [evidence]O of [Microsoft]S aggressively pressuring [Netscape]O into merging [browser software]O. 4 [Microsoft]S claims [its tactics]S are commonplace and good economically. 5 [The government]S may file [a civil suit]O ruling that [conspiracy]S to curb [competition]O through [collusion]X is [a violation of the Sherman Act]O. 6 [Microsoft]S continues to show [increased earnings]O despite [the trial]X. Table 2: Summary augmented with syntactic annotations for grid computation. we employ a state-of-the-art noun phrase coreference resolution system (Ng and Cardie, 2002) trained on the MUC (6–7) data sets. The system decides whether two NPs are coreferent by exploiting a wealth of features that fall broadly into four categories: lexical, grammatical, semantic and positional. Once we have identified entity classes, the next step is to fill out grid entries with relevant syntactic information. 
We employ a robust statistical parser (Collins, 1997) to determine the constituent structure for each sentence, from which subjects (s), objects (o), and relations other than subject or object (x) are identified. Passive verbs are recognized using a small set of patterns, and the underlying deep grammatical role for arguments involved in the passive construction is entered in the grid (see the grid cell o for Microsoft, Sentence 2, Table 2). When a noun is attested more than once with a different grammatical role in the same sentence, we default to the role with the highest grammatical ranking: subjects are ranked higher than objects, which in turn are ranked higher than the rest. For example, the entity Microsoft is mentioned twice in Sentence 1 with the grammatical roles x (for Microsoft Corp.) and s (for the company), but is represented only by s in the grid (see Tables 1 and 2). Coherence Assessment We introduce a method for coherence assessment that is based on grid representation. A fundamental assumption underlying our approach is that the distribution of entities in coherent texts exhibits certain regularities reflected in grid topology. Some of these regularities are formalized in Centering Theory as constraints on transitions of local focus in adjacent sentences. Grids of coherent texts are likely to have some dense columns (i.e., columns with just a few gaps such as Microsoft in Table 1) and many sparse columns which will consist mostly of gaps (see markets, earnings in Table 1). One would further expect that entities corresponding to dense columns are more often subjects or objects. These characteristics will be less pronounced in low-coherence texts. Inspired by Centering Theory, our analysis revolves around patterns of local entity transitions. A local entity transition is a sequence {S, O, X,–}n that represents entity occurrences and their syntactic roles in n adjacent sentences. Local transitions can be easily obtained from a grid as continuous subsequences of each column. Each transition will have a certain probability in a given grid. For instance, the probability of the transition [S –] in the grid from Table 1 is 0.08 (computed as a ratio of its frequency (i.e., six) divided by the total number of transitions of length two (i.e., 75)). Each text can thus be viewed as a distribution defined over transition types. We believe that considering all entity transitions may uncover new patterns relevant for coherence assessment. We further refine our analysis by taking into account the salience of discourse entities. Centering and other discourse theories conjecture that the way an entity is introduced and mentioned depends on its global role in a given discourse. Therefore, we discriminate between transitions of salient entities and the rest, collecting statistics for each group separately. We identify salient entities based on their 143 S S S O S X S – O S O O O X O – X S X O X X X – – S – O – X – – d1 0 0 0 .03 0 0 0 .02 .07 0 0 .12 .02 .02 .05 .25 d2 0 0 0 .02 0 .07 0 .02 0 0 .06 .04 0 0 0 .36 d3 .02 0 0 .03 0 0 0 .06 0 0 0 .05 .03 .07 .07 .29 Table 3: Example of a feature-vector document representation using all transitions of length two given syntactic categories: S, O, X, and –. frequency,1 following the widely accepted view that the occurrence frequency of an entity correlates with its discourse prominence (Morris and Hirst, 1991; Grosz et al., 1995). Ranking We view coherence assessment as a ranking learning problem. 
The ranker takes as input a set of alternative renderings of the same document and ranks them based on their degree of local coherence. Examples of such renderings include a set of different sentence orderings of the same text and a set of summaries produced by different systems for the same document. Ranking is more suitable than classification for our purposes since in text generation, a system needs a scoring function to compare among alternative renderings. Furthermore, it is clear that coherence assessment is not a categorical decision but a graded one: there is often no single coherent rendering of a given text but many different possibilities that can be partially ordered. As explained previously, coherence constraints are modeled in the grid representation implicitly by entity transition sequences. To employ a machine learning algorithm to our problem, we encode transition sequences explicitly using a standard feature vector notation. Each grid rendering j of a document di is represented by a feature vector Φ(xij) = (p1(xij), p2(xij),..., pm(xij)), where m is the number of all predefined entity transitions, and pt(xij) the probability of transition t in grid xij. Note that considerable latitude is available when specifying the transition types to be included in a feature vector. These can be all transitions of a given length (e.g., two or three) or the most frequent transitions within a document collection. An example of a feature space with transitions of length two is illustrated in Table 3. The training set consists of ordered pairs of renderings (xij,xik), where xij and xik are renderings 1The frequency threshold is empirically determined on the development set. See Section 5 for further discussion. of the same document di, and xij exhibits a higher degree of coherence than xik. Without loss of generality, we assume j > k. The goal of the training procedure is to find a parameter vector ⃗w that yields a “ranking score” function ⃗w · Φ(xij), which minimizes the number of violations of pairwise rankings provided in the training set. Thus, the ideal ⃗w would satisfy the condition ⃗w·(Φ(xij)−Φ(xik)) > 0 ∀j,i,k such that j > k. The problem is typically treated as a Support Vector Machine constraint optimization problem, and can be solved using the search technique described in Joachims (2002a). This approach has been shown to be highly effective in various tasks ranging from collaborative filtering (Joachims, 2002a) to parsing (Toutanova et al., 2004). In our ranking experiments, we use Joachims’ (2002a) SVMlight package for training and testing with all parameters set to their default values. 4 Evaluation Set-Up In this section we describe two evaluation tasks that assess the merits of the coherence modeling framework introduced above. We also give details regarding our data collection, and parameter estimation. Finally, we introduce the baseline method used for comparison with our approach. 4.1 Text Ordering Text structuring algorithms (Lapata, 2003; Barzilay and Lee, 2004; Karamanis et al., 2004) are commonly evaluated by their performance at information-ordering. The task concerns determining a sequence in which to present a pre-selected set of information-bearing items; this is an essential step in concept-to-text generation, multi-document summarization, and other text-synthesis problems. Since local coherence is a key property of any well-formed text, our model can be used to rank alternative sentence orderings. 
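As a concrete illustration of the feature representation described in the previous section, the sketch below converts an entity grid (one string over {S, O, X, -} per entity column, with the gap symbol written here as a plain hyphen) into a vector of transition probabilities for a fixed transition length. Grid construction, coreference resolution and the split between salient and non-salient entities are assumed to have been done already, and the function name is illustrative.

```python
from itertools import product

ROLES = ["S", "O", "X", "-"]

def transition_probabilities(grid_columns, length=2):
    """Map an entity grid to transition-probability features.

    `grid_columns` is a list of strings, one per discourse entity,
    e.g. "SO----" for an entity realized as subject in sentence 1,
    object in sentence 2 and absent afterwards. Each transition type
    of the given length is assigned its relative frequency in the grid
    (cf. the [S -] transition with probability 6/75 = 0.08 in Table 1).
    """
    counts = {"".join(t): 0 for t in product(ROLES, repeat=length)}
    total = 0
    for column in grid_columns:
        for start in range(len(column) - length + 1):
            counts[column[start:start + length]] += 1
            total += 1
    if total == 0:
        return counts
    return {t: c / total for t, c in counts.items()}

# Toy grid with two entities over three sentences:
features = transition_probabilities(["SO-", "--X"], length=2)
print(features["SO"], features["O-"], features["--"])
```

Each rendering of a document then yields one such vector, and the pairwise rankings over these vectors are what the ranking SVM is trained on.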
We do not assume that local coherence is sufficient to uniquely determine the best ordering — other constraints clearly play a role here. However, we expect that the accuracy of a coherence model is reflected in its performance in the ordering task. Data To acquire a large collection for training and testing, we create synthetic data, wherein the candidate set consists of a source document and permutations of its sentences. This framework for data acquisition is widely used in evaluation of ordering algorithms as it enables large scale automatic evalu144 ation. The underlying assumption is that the original sentence order in the source document must be coherent, and so we should prefer models that rank it higher than other permutations. Since we do not know the relative quality of different permutations, our corpus includes only pairwise rankings that comprise the original document and one of its permutations. Given k original documents, each with n randomly generated permutations, we obtain k · n (trivially) annotated pairwise rankings for training and testing. Using the technique described above, we collected data in two different genres: newspaper articles and accident reports written by government officials. The first collection consists of Associated Press articles from the North American News Corpus on the topic of natural disasters. The second includes narratives from the National Transportation Safety Board’s database2. Both sets have documents of comparable length – the average number of sentences is 10.4 and 11.5, respectively. For each set, we used 100 source articles with 20 randomly generated permutations for training. The same number of pairwise rankings (i.e., 2000) was used for testing. We held out 10 documents (i.e., 200 pairwise rankings) from the training data for development purposes. 4.2 Summary Evaluation We further test the ability of our method to assess coherence by comparing model induced rankings against rankings elicited by human judges. Admittedly, the information ordering task only partially approximates degrees of coherence violation using different sentence permutations of a source document. A stricter evaluation exercise concerns the assessment of texts with naturally occurring coherence violations as perceived by human readers. A representative example of such texts are automatically generated summaries which often contain sentences taken out of context and thus display problems with respect to local coherence (e.g., dangling anaphors, thematically unrelated sentences). A model that exhibits high agreement with human judges not only accurately captures the coherence properties of the summaries in question, but ultimately holds promise for the automatic evaluation of machine-generated texts. Existing automatic evaluation measures such as BLEU (Papineni et al., 2002) and ROUGE (Lin 2The collections are available from http://www.csail. mit.edu/regina/coherence/. and Hovy, 2003), are not designed for the coherence assessment task, since they focus on content similarity between system output and reference texts. Data Our evaluation was based on materials from the Document Understanding Conference (DUC, 2003), which include multi-document summaries produced by human writers and by automatic summarization systems. In order to learn a ranking, we require a set of summaries, each of which have been rated in terms of coherence. 
We therefore elicited judgments from human subjects.3 We randomly selected 16 input document clusters and five systems that had produced summaries for these sets, along with summaries composed by several humans. To ensure that we do not tune a model to a particular system, we used the output summaries of distinct systems for training and testing. Our set of training materials contained 4 · 16 summaries (average length 4.8), yielding 4 2  ·16 = 96 pairwise rankings. In a similar fashion, we obtained 32 pairwise rankings for the test set. Six documents from the training data were used as a development set. Coherence ratings were obtained during an elicitation study by 177 unpaid volunteers, all native speakers of English. The study was conducted remotely over the Internet. Participants first saw a set of instructions that explained the task, and defined the notion of coherence using multiple examples. The summaries were randomized in lists following a Latin square design ensuring that no two summaries in a given list were generated from the same document cluster. Participants were asked to use a seven point scale to rate how coherent the summaries were without having seen the source texts. The ratings (approximately 23 per summary) given by our subjects were averaged to provide a rating between 1 and 7 for each summary. The reliability of the collected judgments is crucial for our analysis; we therefore performed several tests to validate the quality of the annotations. First, we measured how well humans agree in their coherence assessment. We employed leaveone-out resampling4 (Weiss and Kulikowski, 1991), by correlating the data obtained from each participant with the mean coherence ratings obtained from all other participants. The inter-subject agree3The ratings are available from http://homepages.inf. ed.ac.uk/mlap/coherence/. 4We cannot apply the commonly used Kappa statistic for measuring agreement since it is appropriate for nominal scales, whereas our summaries are rated on an ordinal scale. 145 ment was r = .768. Second, we examined the effect of different types of summaries (human- vs. machine-generated.) An ANOVA revealed a reliable effect of summary type: F(1;15) = 20.38, p < 0.01 indicating that human summaries are perceived as significantly more coherent than system-generated ones. Finally, the judgments of our participants exhibit a significant correlation with DUC evaluations (r = .41, p < 0.01). 4.3 Parameter Estimation Our model has two free parameters: the frequency threshold used to identify salient entities and the length of the transition sequence. These parameters were tuned separately for each data set on the corresponding held-out development set. For our ordering and summarization experiments, optimal saliencebased models were obtained for entities with frequency ≥2. The optimal transition length was ≤3 for ordering and ≤2 for summarization. 4.4 Baseline We compare our algorithm against the coherence model proposed by Foltz et al. (1998) which measures coherence as a function of semantic relatedness between adjacent sentences. Semantic relatedness is computed automatically using Latent Semantic Analysis (LSA, Landauer and Dumais 1997) from raw text without employing syntactic or other annotations. 
This model is a good point of comparison for several reasons: (a) it is fully automatic, (b) it is a not a straw-man baseline; it correlates reliably with human judgments and has been used to analyze discourse structure, and (c) it models an aspect of coherence which is orthogonal to ours (their model is lexicalized). Following Foltz et al. (1998) we constructed vector-based representations for individual words from a lemmatized version of the North American News Text Corpus5 (350 million words) using a term-document matrix. We used singular value decomposition to reduce the semantic space to 100 dimensions obtaining thus a space similar to LSA. We represented the meaning of a sentence as a vector by taking the mean of the vectors of its words. The similarity between two sentences was determined by measuring the cosine of their means. An overall text coherence measure was obtained by averaging the cosines for all pairs of adjacent sentences. 5Our selection of this corpus was motivated by its similarity to the DUC corpus which primarily consists of news stories. In sum, each text was represented by a single feature, its sentence-to-sentence semantic similarity. During training, the ranker learns an appropriate threshold value for this feature. 4.5 Evaluation Metric Model performance was assessed in the same way for information ordering and summary evaluation. Given a set of pairwise rankings, we measure accuracy as the ratio of correct predictions made by the model over the size of the test set. In this setup, random prediction results in an accuracy of 50%. 5 Results The evaluation of our coherence model was driven by two questions: (1) How does the proposed model compare to existing methods for coherence assessment that make use of distinct representations? (2) What is the contribution of linguistic knowledge to the model’s performance? Table 4 summarizes the accuracy of various configurations of our model for the ordering and coherence assessment tasks. We first compared a linguistically rich grid model that incorporates coreference resolution, expressive syntactic information, and a salience-based feature space (Coreference+Syntax+Salience) against the LSA baseline (LSA). As can be seen in Table 4, the grid model outperforms the baseline in both ordering and summary evaluation tasks, by a wide margin. We conjecture that this difference in performance stems from the ability of our model to discriminate between various patterns of local sentence transitions. In contrast, the baseline model only measures the degree of overlap across successive sentences, without taking into account the properties of the entities that contribute to the overlap. Not surprisingly, the difference between the two methods is more pronounced for the second task — summary evaluation. Manual inspection of our summary corpus revealed that low-quality summaries often contain repetitive information. In such cases, simply knowing about high cross-sentential overlap is not sufficient to distinguish a repetitive summary from a well-formed one. In order to investigate the contribution of linguistic knowledge on model performance we compared the full model introduced above against models using more impoverished representations. 
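As a point of reference for the comparisons that follow, the LSA baseline of Section 4.4 amounts to a few lines of vector arithmetic once word vectors are available. The sketch below assumes a pre-computed word-vector table (random placeholders here stand in for the 100-dimensional SVD space); sentence meanings are the means of their word vectors, and the text-level score is the average cosine over adjacent sentence pairs.

import numpy as np

def sentence_vector(sentence, word_vectors, dim=100):
    # Mean of the vectors of the sentence's words (unknown words skipped).
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def lsa_coherence(sentences, word_vectors):
    # Average cosine similarity over all pairs of adjacent sentences.
    vecs = [sentence_vector(s, word_vectors) for s in sentences]
    sims = [cosine(vecs[i], vecs[i + 1]) for i in range(len(vecs) - 1)]
    return sum(sims) / len(sims) if sims else 0.0

# Placeholder 100-dimensional "semantic space": random vectors per word.
rng = np.random.default_rng(0)
vocab = "the microsoft company acquired a rival its earnings rose sharply".split()
word_vectors = {w: rng.standard_normal(100) for w in vocab}

text = ["Microsoft acquired a rival .", "Its earnings rose sharply ."]
print(lsa_coherence(text, word_vectors))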
We focused on three sources of linguistic knowledge — syntax, coreference resolution, and salience — which play 146 Model Ordering (Set1) Ordering (Set2) Summarization Coreference+Syntax+Salience 87.3 90.4 68.8 Coreference+Salience 86.9 88.3 62.5 Syntax+Salience 83.4 89.7 81.3 Coreference+Syntax 76.5 88.8 75.0 LSA 72.1 72.1 25.0 Table 4: Ranking accuracy measured as the fraction of correct pairwise rankings in the test set. a prominent role in Centering analyses of discourse coherence. An additional motivation for our study is exploration of the trade-off between robustness and richness of linguistic annotations. NLP tools are typically trained on human-authored texts, and may deteriorate in performance when applied to automatically generated texts with coherence violations. Syntax To evaluate the effect of syntactic knowledge, we eliminated the identification of grammatical relations from our grid computation and recorded solely whether an entity is present or absent in a sentence. This leaves only the coreference and salience information in the model, and the results are shown in Table 4 under (Coreference+Salience). The omission of syntactic information causes a uniform drop in performance on both tasks, which confirms its importance for coherence analysis. Coreference To measure the effect of fullyfledged coreference resolution, we constructed entity classes simply by clustering nouns on the basis of their identity. In other words, each noun in a text corresponds to a different entity in a grid, and two nouns are considered coreferent only if they are identical. The performance of the model (Syntax+Salience) is shown in the third row of Table 4. While coreference resolution improved model performance in ordering, it caused a decrease in accuracy in summary evaluation. This drop in performance can be attributed to two factors related to the nature of our corpus — machine-generated texts. First, an automatic coreference resolution tool expectedly decreases in accuracy because it was trained on well-formed human-authored texts. Second, automatic summarization systems do not use anaphoric expressions as often as humans do. Therefore, a simple entity clustering method is more suitable for automatic summaries. Salience Finally, we evaluate the contribution of salience information by comparing our original model (Coreference+Syntax+Salience) which accounts separately for patterns of salient and non-salient entities against a model that does not attempt to discriminate between them (Coreference+Syntax). Our results on the ordering task indicate that models that take salience information into account consistently outperform models that do not. The effect of salience is less pronounced for the summarization task when it is combined with coreference information (Coreference + Salience). This is expected, since accurate identification of coreferring entities is prerequisite to deriving accurate salience models. However, as explained above, our automatic coreference tool introduces substantial noise in our representation. Once this noise is removed (see Syntax+Salience), the salience model has a clear advantage over the other models. 6 Discussion and Conclusions In this paper we proposed a novel framework for representing and measuring text coherence. Central to this framework is the entity grid representation of discourse which we argue captures important patterns of sentence transitions. 
We re-conceptualize coherence assessment as a ranking task and show that our entity-based representation is well suited for learning an appropriate ranking function; we achieve good performance on text ordering and summary coherence evaluation. On the linguistic side, our results yield empirical support to some of Centering Theory’s main claims. We show that coherent texts are characterized by transitions with particular properties which do not hold for all discourses. Our work, however, not only validates these findings, but also quantitatively measures the predictive power of various linguistic features for the task of coherence assessment. An important future direction lies in augmenting our entity-based model with lexico-semantic knowledge. One way to achieve this goal is to cluster entities based on their semantic relatedness, thereby cre147 ating a grid representation over lexical chains (Morris and Hirst, 1991). An entirely different approach is to develop fully lexicalized models, akin to traditional language models. Cache language models (Kuhn and Mori, 1990) seem particularly promising in this context. In the discourse literature, entity-based theories are primarily applied at the level of local coherence, while relational models, such as Rhetorical Structure Theory (Mann and Thomson, 1988; Marcu, 2000), are used to model the global structure of discourse. We plan to investigate how to combine the two for improved prediction on both local and global levels, with the ultimate goal of handling longer texts. Acknowledgments The authors acknowledge the support of the National Science Foundation (Barzilay; CAREER grant IIS-0448168 and grant IIS-0415865) and EPSRC (Lapata; grant GR/T04540/01). We are grateful to Claire Cardie and Vincent Ng for providing us the results of their system on our data. Thanks to Eli Barzilay, Eugene Charniak, Michael Elhadad, Noemie Elhadad, Frank Keller, Alex Lascarides, Igor Malioutov, Smaranda Muresan, Martin Rinard, Kevin Simler, Caroline Sporleder, Chao Wang, Bonnie Webber and three anonymous reviewers for helpful comments and suggestions. Any opinions, findings, and conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of the National Science Foundation or EPSRC. References N. Asher, A. Lascarides. 2003. Logics of Conversation. Cambridge University Press. R. Barzilay, L. Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of HLT-NAACL, 113– 120. M. Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the ACL/EACL, 16–23. P. W. Foltz, W. Kintsch, T. K. Landauer. 1998. Textual coherence using latent semantic analysis. Discourse Processes, 25(2&3):285–307. B. Grosz, A. K. Joshi, S. Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–225. L. Hasler. 2004. An investigation into the use of centering transitions for summarisation. In Proceedings of the 7th Annual CLUK Research Colloquium, 100– 107, University of Birmingham. T. Joachims. 2002a. Optimizing search engines using clickthrough data. In Proceesings of KDD, 133–142. N. Karamanis, M. Poesio, C. Mellish, J. Oberlander. 2004. Evaluating centering-based metrics of coherence for text structuring using a reliably annotated corpus. In Proceedings of the ACL, 391–398. R. Kibble, R. Power. 2004. Optimising referential coherence in text generation. 
Computational Linguistics, 30(4):401–416. R. Kuhn, R. D. Mori. 1990. A cache-based natural language model for speech recognition. IEEE Transactions on PAMI, 12(6):570–583. T. K. Landauer, S. T. Dumais. 1997. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction and representation of knowledge. Psychological Review, 104(2):211–240. M. Lapata. 2003. Probabilistic text structuring: Experiments with sentence ordering. In Proceedings of the ACL, 545–552. C.-Y. Lin, E. H. Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of HLT-NAACL, 71–78. W. C. Mann, S. A. Thomson. 1988. Rhetorical structure theory. Text, 8(3):243–281. D. Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. MIT Press. E. Miltsakaki, K. Kukich. 2000. The role of centering theory’s rough-shift in the teaching and evaluation of writing skills. In Proceedings of the ACL, 408–415. J. Morris, G. Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 1(17):21–43. V. Ng, C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the ACL, 104–111. K. Papineni, S. Roukos, T. Ward, W.-J. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the ACL, 311–318. M. Poesio, R. Stevenson, B. D. Eugenio, J. Hitzeman. 2004. Centering: a parametric theory and its instantiations. Computational Linguistics, 30(3):309–363. E. Reiter, R. Dale. 2000. Building Natural-Language Generation Systems. Cambridge University Press. D. Scott, C. S. de Souza. 1990. Getting the message across in RST-based text generation. In R. Dale, C. Mellish, M. Zock, eds., Current Research in Natural Language Generation, 47–73. Academic Press. M. Strube, U. Hahn. 1999. Functional centering – grounding referential coherence in information structure. Computational Linguistics, 25(3):309–344. K. Toutanova, P. Markova, C. D. Manning. 2004. The leaf projection path view of parse trees: Exploring string kernels for HPSG parse selection. In Proceedings of the EMNLP, 166–173. M. Walker, A. Joshi, E. Prince, eds. 1998. Centering Theory in Discourse. Clarendon Press. S. M. Weiss, C. A. Kulikowski. 1991. Computer Systems that Learn: Classification and Prediction Methods from, Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann. 148
Proceedings of the 43rd Annual Meeting of the ACL, pages 149–156, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Modelling the substitutability of discourse connectives Ben Hutchinson School of Informatics University of Edinburgh [email protected] Abstract Processing discourse connectives is important for tasks such as discourse parsing and generation. For these tasks, it is useful to know which connectives can signal the same coherence relations. This paper presents experiments into modelling the substitutability of discourse connectives. It shows that substitutability effects distributional similarity. A novel variancebased function for comparing probability distributions is found to assist in predicting substitutability. 1 Introduction Discourse coherence relations contribute to the meaning of texts, by specifying the relationships between semantic objects such as events and propositions. They also assist in the interpretation of anaphora, verb phrase ellipsis and lexical ambiguities (Hobbs, 1985; Kehler, 2002; Asher and Lascarides, 2003). Coherence relations can be implicit, or they can be signalled explicitly through the use of discourse connectives, e.g. because, even though. For a machine to interpret a text, it is important that it recognises coherence relations, and so as explicit markers discourse connectives are of great assistance (Marcu, 2000). When discourse connectives are not present, the task is more difficult. For such cases, unsupervised approaches have been developed for predicting relations, by using sentences containing discourse connectives as training data (Marcu and Echihabi, 2002; Lapata and Lascarides, 2004). However the nature of the relationship between the coherence relations signalled by discourse connectives and their empirical distributions has to date been poorly understood. In particular, one might wonder whether connectives with similar meanings also have similar distributions. Concerning natural language generation, texts are easier for humans to understand if they are coherently structured. Addressing this, a body of research has considered the problems of generating appropriate discourse connectives (for example (Moser and Moore, 1995; Grote and Stede, 1998)). One such problem involves choosing which connective to generate, as the mapping between connectives and relations is not one-to-one, but rather many-to-many. Siddharthan (2003) considers the task of paraphrasing a text while preserving its rhetorical relations. Clauses conjoined by but, or and when are separated to form distinct orthographic sentences, and these conjunctions are replaced by the discourse adverbials however, otherwise and then, respectively. The idea underlying Siddharthan’s work is that one connective can be substituted for another while preserving the meaning of a text. Knott (1996) studies the substitutability of discourse connectives, and proposes that substitutability can motivate theories of discourse coherence. Knott uses an empirical methodology to determine the substitutability of pairs of connectives. However this methodology is manually intensive, and Knott derives relationships for only about 18% of pairs of connectives. It would thus be useful if substitutability could be predicted automatically. 149 This paper proposes that substitutability can be predicted through statistical analysis of the contexts in which connectives appear. 
Similar methods have been developed for predicting the similarity of nouns and verbs on the basis of their distributional similarity, and many distributional similarity functions have been proposed for these tasks (Lee, 1999). However substitutability is a more complex notion than similarity, and we propose a novel variance-based function for assisting in this task. This paper constitutes a first step towards predicting substitutability of cnonectives automatically. We demonstrate that the substitutability of connectives has significant effects on both distributional similarity and the new variance-based function. We then attempt to predict substitutability of connectives using a simplified task that factors out the prior likelihood of being substitutable. 2 Relationships between connectives Two types of relationships between connectives are of interest: similarity and substitutability. 2.1 Similarity The concept of lexical similarity occupies an important role in psychology, artificial intelligence, and computational linguistics. For example, in psychology, Miller and Charles (1991) report that psychologists ‘have largely abandoned “synonymy” in favour of “semantic similarity”.’ In addition, work in automatic lexical acquisition is based on the proposition that distributional similarity correlates with semantic similarity (Grefenstette, 1994; Curran and Moens, 2002; Weeds and Weir, 2003). Several studies have found subjects’ judgements of semantic similarity to be robust. For example, Miller and Charles (1991) elicit similarity judgements for 30 pairs of nouns such as cord–smile, and found a high correlation with judgements of the same data obtained over 25 years previously (Rubenstein and Goodenough, 1965). Resnik (1999) repeated the experiment, and calculated an inter-rater agreement of 0.90. Resnik and Diab (2000) also performed a similar experiment with pairs of verbs (e.g. bathe–kneel). The level of inter-rater agreement was again significant (r = 0.76). 1. Take an instance of a discourse connective in a corpus. Imagine you are the writer that produced this text, but that you need to choose an alternative connective. 2. Remove the connective from the text, and insert another connective in its place. 3. If the new connective achieves the same discourse goals as the original one, it is considered substitutable in this context. Figure 1: Knott’s Test for Substitutability Given two words, it has been suggested that if words have the similar meanings, then they can be expected to have similar contextual distributions. The studies listed above have also found evidence that similarity ratings correlate positively with the distributional similarity of the lexical items. 2.2 Substitutability The notion of substitutability has played an important role in theories of lexical relations. A definition of synonymy attributed to Leibniz states that two words are synonyms if one word can be used in place of the other without affecting truth conditions. Unlike similarity, the substitutability of discourse connectives has been previously studied. Halliday and Hasan (1976) note that in certain contexts otherwise can be paraphrased by if not, as in (1) It’s the way I like to go to work. One person and one line of enquiry at a time. Otherwise/if not, there’s a muddle. They also suggest some other extended paraphrases of otherwise, such as under other circumstances. Knott (1996) systematises the study of the substitutability of discourse connectives. 
His first step is to propose a Test for Substitutability for connectives, which is summarised in Figure 1. An application of the Test is illustrated by (2). Here seeing as was the connective originally used by the writer, however because can be used instead. 150 w1 w2 (a) w1 and w2 are SYNONYMS w1 w2 (b) w1 is a HYPONYM of w2 w1 w2 (c) w1 and w2 are CONTINGENTLY SUBSTITUTABLE w1 w2 (d) w1 and w2 are EXCLUSIVE Figure 2: Venn diagrams representing relationships between distributions (2) Seeing as/because we’ve got nothing but circumstantial evidence, it’s going to be difficult to get a conviction. (Knott, p. 177) However the ability to substitute is sensitive to the context. In other contexts, for example (3), the substitution of because for seeing as is not valid. (3) It’s a fairly good piece of work, seeing as/#because you have been under a lot of pressure recently. (Knott, p. 177) Similarly, there are contexts in which because can be used, but seeing as cannot be substituted for it: (4) That proposal is useful, because/#seeing as it gives us a fallback position if the negotiations collapse. (Knott, p. 177) Knott’s next step is to generalise over all contexts a connective appears in, and to define four substitutability relationships that can hold between a pair of connectives w1 and w2. These relationships are illustrated graphically through the use of Venn diagrams in Figure 2, and defined below. • w1 is a SYNONYM of w2 if w1 can always be substituted for w2, and vice versa. • w1 and w2 are EXCLUSIVE if neither can ever be substituted for the other. • w1 is a HYPONYM of w2 if w2 can always be substituted for w1, but not vice versa. • w1 and w2 are CONTINGENTLY SUBSTITUTABLE if each can sometimes, but not always, be substituted for the other. Given examples (2)–(4) we can conclude that because and seeing as are CONTINGENTLY SUBSTITUTABLE (henceforth “CONT. SUBS.”). However this is the only relationship that can be established using a finite number of linguistic examples. The other relationships all involve generalisations over all contexts, and so rely to some degree on the judgement of the analyst. Examples of each relationship given by Knott (1996) include: given that and seeing as are SYNONYMS, on the grounds that is a HYPONYM of because, and because and now that are EXCLUSIVE. Although substitutability is inherently a more complex notion than similarity, distributional similarity is expected to be of some use in predicting substitutability relationships. For example, if two discourse connectives are SYNONYMS then we would expect them to have similar distributions. On the other hand, if two connectives are EXCLUSIVE, then we would expect them to have dissimilar distributions. However if the relationship between two connectives is HYPONYMY or CONT. SUBS. then we expect to have partial overlap between their distributions (consider Figure 2), and so distributional similarity might not distinguish these relationships. The Kullback-Leibler (KL) divergence function is a distributional similarity function that is of particular relevance here since it can be described informally in terms of substitutability. Given cooccurrence distributions p and q, its mathematical definition can be written as: D(p||q) = X x p(x)(log 1 q(x) −log 1 p(x)) (5) 151 w1 w2 (a) w1 and w2 are SYNONYMS w2 w1 (b) w2 is a HYPONYM of w1 w1 w2 (c) w1 is a HYPONYM of w2 w1 w2 (d) w1 and w2 are CONT. SUBS. 
w2 w1 (e) w1 and w2 are EXCLUSIVE Figure 3: Surprise in substituting w2 for w1 (darker shading indicates higher surprise) The value log 1 p(x) has an informal interpretation as a measure of how surprised an observer would be to see event x, given prior likelihood expectations defined by p. Thus, if p and q are the distributions of words w1 and w2 then D(p||q) = Ep(surprise in seeing w2 −surprise in seeing w1) (6) where Ep is the expectation function over the distribution of w1 (i.e. p). That is, KL divergence measures how much more surprised we would be, on average, to see word w2 rather than w1, where the averaging is weighted by the distribution of w1. 3 A variance-based function for distributional analysis A distributional similarity function provides only a one-dimensional comparison of two distributions, namely how similar they are. However we can obtain an additional perspective by using a variancebased function. We now introduce a new function V by taking the variance of the surprise in seeing w2, over the contexts in which w1 appears: V (p, q) = V ar(surprise in seeing w2) = Ep((Ep(log 1 q(x)) −log 1 q(x))2) (7) Note that like KL divergence, V (p, q) is asymmetric. We now consider how the substitutability of connectives affects our expectations of the value of V . If two connectives are SYNONYMS then each can always be used in place of other. Thus we would always expect a low level of surprise in seeing one Relationship Function of w1 to w2 D(p||q) D(q||p) V (p, q) V (q, p) SYNONYM Low Low Low Low HYPONYM Low Medium Low High CONT. SUBS. Medium Medium High High EXCLUSIVE High High Low Low Table 1: Expectations for distributional functions connective in place of the other, and this low level of surprise is indicated via light shading in Figure 3a. It follows that the variance in surprise is low. On the other hand, if two connectives are EXCLUSIVE then there would always be a high degree of surprise in seeing one in place of the other. This is indicated using dark shading in Figure 3e. Only one set is shaded because we need only consider the contexts in which w1 is appropriate. In this case, the variance in surprise is again low. The situation is more interesting when we consider two connectives that are CONT. SUBS.. In this case substitutability (and hence surprise) is dependent on the context. This is illustrated using light and dark shading in Figure 3d. As a result, the variance in surprise is high. Finally, with HYPONYMY, the variance in surprise depends on whether the original connective was the HYPONYM or the HYPERNYM. Table 1 summarises our expectations of the values of KL divergence and V , for the various substitutability relationships. (KL divergence, unlike most similarity functions, is sensitive to the order of arguments related by hyponymy (Lee, 1999).) The 152 Something happened and something else happened. Something happened or something else happened. ⃝0 ⃝1 ⃝2 ⃝3 ⃝4 ⃝5 Figure 4: Example experimental item experiments described below test these expectations using empirical data. 4 Experiments We now describe our empirical experiments which investigate the connections between a) subjects’ ratings of the similarity of discourse connectives, b) the substitutability of discourse connectives, and c) KL divergence and the new function V applied to the distributions of connectives. Our motivation is to explore how distributional properties of words might be used to predict substitutability. The experiments are restricted to connectives which relate clauses within a sentence. 
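To make the two functions compared in Table 1 concrete, the sketch below implements equations (5) and (7) over a pair of toy co-occurrence distributions. The small smoothing step that mixes q with p is an added assumption so that q(x) is never zero where p(x) is positive (a skewed variant of this kind is used in the experiments that follow); the distributions themselves are placeholders.

import math

def kl_divergence(p, q):
    # D(p||q) = sum_x p(x) * (log 1/q(x) - log 1/p(x)), as in equation (5).
    return sum(px * (math.log(1.0 / q[x]) - math.log(1.0 / px))
               for x, px in p.items() if px > 0)

def surprise_variance(p, q):
    # V(p, q): variance, under p, of the surprise -log q(x), as in equation (7).
    mean_surprise = sum(px * math.log(1.0 / q[x]) for x, px in p.items() if px > 0)
    return sum(px * (mean_surprise - math.log(1.0 / q[x])) ** 2
               for x, px in p.items() if px > 0)

def skew(q, p, alpha=0.95):
    # Smooth q towards p so that q(x) > 0 wherever p(x) > 0.
    keys = set(p) | set(q)
    return {x: alpha * q.get(x, 0.0) + (1 - alpha) * p.get(x, 0.0) for x in keys}

# Toy co-occurrence distributions over three context words.
p = {"however": 0.5, "then": 0.3, "otherwise": 0.2}
q = {"however": 0.2, "then": 0.1, "otherwise": 0.7}
q_s = skew(q, p)
print(kl_divergence(p, q_s), surprise_variance(p, q_s))

Note that both functions are asymmetric in their arguments, which is what makes the order-sensitivity predicted in Table 1 observable.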
These include coordinating conjunctions (e.g. but) and a range of subordinators including conjunctions (e.g. because) as well as phrases introducing adverbial clauses (e.g. now that, given that, for the reason that). Adverbial discourse connectives are therefore not considered. 4.1 Experiment 1: Subject ratings of similarity This experiment tests the hypotheses that 1) subjects agree on the degree of similarity between pairs of discourse connectives, and 2) similarity ratings correlate with the degree of substitutability. 4.1.1 Methodology We randomly selected 48 pairs of discourse connectives such that there were 12 pairs standing in each of the four substitutability relationships.To do this, we used substitutability judgements made by Knott (1996), supplemented with some judgements of our own. Each experimental item consisted of the two discourse connectives along with dummy clauses, as illustrated in Figure 4. The format of the experimental items was designed to indicate how a phrase could be used as a discourse connective (e.g. it may not be obvious to a subject that the phrase the moment is a discourse connective), but without Mean HYP CONT. SUBS. EXCL SYNONYM 3.97 * * * HYPONYM 3.43 * * CONT. SUBS. 1.79 * EXCLUSIVE 1.08 Table 2: Similarity by substitutability relationship providing complete semantics for the clauses, which might bias the subjects’ ratings. Forty native speakers of English participated in the experiment, which was conducted remotely via the internet. 4.1.2 Results Leave-one-out resampling was used to compare each subject’s ratings are with the means of their peers’ (Weiss and Kulikowski, 1991). The average inter-subject correlation was 0.75 (Min = 0.49, Max = 0.86, StdDev = 0.09), which is comparable to previous results on verb similarity ratings (Resnik and Diab, 2000). The effect of substitutability on similarity ratings can be seen in Table 2. Post-hoc Tukey tests revealed all differences between means in Table 2 to be significant. The results demonstrate that subjects’ ratings of connective similarity show significant agreement and are robust enough for effects of substitutability to be found. 4.2 Experiment 2: Modelling similarity This experiment compares subjects’ ratings of similarity with lexical co-occurrence data. It hypothesises that similarity ratings correlate with distributional similarity, but that neither correlates with the new variance in surprise function. 4.2.1 Methodology Sentences containing discourse connectives were gathered from the British National Corpus and the world wide web, with discourse connectives identified on the basis of their syntactic contexts (for details, see Hutchinson (2004b)). The mean number of sentences per connective was about 32, 000, although about 12% of these are estimated to be errors. From these sentences, lexical co-occurrence data were collected. Only co-occurrences with dis153 0 0.5 1 1.5 2 2.5 0 1 2 3 4 5 Divergence of DM co-occurrences Similarity judgements best fit SYNONYM HYPONYM CONT SUBS EXCLUSIVE Figure 5: Similarity versus distributional divergence course adverbials and other structural discourse connectives were stored, as these had previously been found to be useful for predicting semantic features of connectives (Hutchinson, 2004a). 4.2.2 Results A skewed variant of the Kullback-Leibler divergence function was used to compare co-occurrence distributions (Lee, 1999, with α = 0.95). Spearman’s correlation coefficient for ranked data showed a significant correlation (r = −0.51, p < 0.001). 
(The correlation is negative because KL divergence is lower when distributions are more similar.) The strength of this correlation is comparable with similar results achieved for verbs (Resnik and Diab, 2000), but not as great as has been observed for nouns (McDonald, 2000). Figure 5 plots the mean similarity judgements against the distributional divergence obtained using discourse markers, and also indicates the substitutability relationship for each item. (Two outliers can be observed in the upper left corner; these were excluded from the calculations.) The “variance in surprise” function introduced in the previous section was applied to the same cooccurrence data.1 These variances were compared to distributional divergence and the subjects’ similarity ratings, but in both cases Spearman’s correlation coefficient was not significant. In combination with the previous experiment, 1In practice, the skewed variant V (p, 0.95q + 0.05p) was used, in order to avoid problems arising when q(x) = 0. these results demonstrate a three way correspondence between the human ratings of the similarity of a pair of connectives, their substitutability relationship, and their distributional similarity. Hutchinson (2005) presents further experiments on modelling connective similarity, and discusses their implications. This experiment also provides empirical evidence that the new variance in surprise function is not a measure of similarity. 4.3 Experiment 3: Predicting substitutability The previous experiments provide hope that substitutability of connectives might be predicted on the basis of their empirical distributions. However one complicating factor is that EXCLUSIVE is by far the most likely relationship, holding between about 70% of pairs. Preliminary experiments showed that the empirical evidence for other relationships was not strong enough to overcome this prior bias. We therefore attempted two pseudodisambiguation tasks which eliminated the effects of prior likelihoods. The first task involved distinguishing between the relationships whose connectives subjects rated as most similar, namely SYNONYMY and HYPONYMY. Triples of connectives ⟨p, q, q′⟩were collected such that SYNONYM(p, q) and either HYPONYM(p, q′) or HYPONYM(q′, p) (we were not attempting to predict the order of HYPONYMY). The task was then to decide automatically which of q and q′ is the SYNONYM of p. The second task was identical in nature to the first, however here the relationship between p and q was either SYNONYMY or HYPONYMY, while p and q′ were either CONT. SUBS. or EXCLUSIVE. These two sets of relationships are those corresponding to high and low similarity, respectively. In combination, the two tasks are equivalent to predicting SYNONYMY or HYPONYMY from the set of all four relationships, by first distinguishing the high similarity relationships from the other two, and then making a finer-grained distinction between the two. 4.3.1 Methodology Substitutability relationships between 49 structural discourse connectives were extracted from Knott’s (1996) classification. In order to obtain more evaluation data, we used Knott’s methodology to obtain relationships between an additional 32 connec154 max(D1, D2) max(V1, V2) (V1 −V2)2 SYN 0.627 4.44 3.29 HYP 0.720 5.16 8.02 CONT 1.057 4.85 7.81 EXCL 1.069 4.79 7.27 Table 3: Distributional analysis by substitutability tives. This resulted in 46 triples ⟨p, q, q′⟩for the first task, and 10,912 triples for the second task. The co-occurrence data from the previous section were re-used. 
These were used to calculate D(p||q) and V (p, q). Both of these are asymmetric, so for our purposes we took the maximum of applying their arguments in both orders. Recall from Table 1 that when two connectives are in a HYPONYMY relation we expect V to be sensitive to the order in which the connectives are given as arguments. To test this, we also calculated (V (p, q) −V (q, p))2, i.e. the square of the difference of applying the arguments to V in both orders. The average values are summarised in Table 3, with D1 and D2 (and V1 and V2) denoting different orderings of the arguments to D (and V ), and max denoting the function which selects the larger of two numbers. These statistics show that our theoretically motivated expectations are supported. In particular, (1) SYNONYMOUS connectives have the least distributional divergence and EXCLUSIVE connectives the most, (2) CONT. SUBS. and HYPONYMOUS connectives have the greatest values for V , and (3) V shows the greatest sensitivity to the order of its arguments in the case of HYPONYMY. The co-occurrence data were used to construct a Gaussian classifier, by assuming the values for D and V are generated by Gaussians.2 First, normal functions were used to calculate the likelihood ratio of p and q being in the two relationships: P(syn|data) P(hyp|data) = P(syn) P(hyp) · P(data|syn) P(data|hyp) (8) = 1·n(max(D1, D2); µsyn, σsyn) n(max(D1, D2); µhyp, σhyp) (9) 2KL divergence is right skewed, so a log-normal model was used to model D, whereas a normal model used for V . Input to Gaussian SYN vs SYN/HYP vs Model HYP EX/CONT max(D1, D2) 50.0% 76.1% max(V1, V2) 84.8% 60.6% Table 4: Accuracy on pseudodisambiguation task where n(x; µ, σ) is the normal function with mean µ and standard deviation σ, and where µsyn, for example, denotes the mean of the Gaussian model for SYNONYMY. Next the likelihood ratio for p and q was divided by that for p and q′. If this value was greater than 1, the model predicted p and q were SYNONYMS, otherwise HYPONYMS. The same technique was used for the second task. 4.3.2 Results A leave-one-out cross validation procedure was used. For each triple ⟨p, q, q′⟩, the data concerning the pairs p, q and p, q′ were held back, and the remaining data used to construct the models. The results are shown in Table 4. For comparison, a random baseline classifier achieves 50% accuracy. The results demonstrate the utility of the new variance-based function V . The new variance-based function V is better than KL divergence at distinguishing HYPONYMY from SYNONYMY (χ2 = 11.13, df = 1, p < 0.001), although it performs worse on the coarser grained task. This is consistent with the expectations of Table 1. The two classifiers were also combined by making a naive Bayes assumption. This gave an accuracy of 76.1% on the first task, which is significantly better than just using KL divergence (χ2 = 5.65, df = 1, p < 0.05), and not significantly worse than using V . The combination’s accuracy on the second task was 76.2%, which is about the same as using KL divergence. This shows that combining similarity- and variancebased measures can be useful can improve overall performance. 5 Conclusions The concepts of lexical similarity and substitutability are of central importance to psychology, artificial intelligence and computational linguistics. 155 To our knowledge this is the first modelling study of how these concepts relate to lexical items involved in discourse-level phenomena. 
We found a three way correspondence between data sources of quite distinct types: distributional similarity scores obtained from lexical co-occurrence data, substitutability judgements made by linguists, and the similarity ratings of naive subjects. The substitutability of lexical items is important for applications such as text simplification, where it can be desirable to paraphrase one discourse connective using another. Ultimately we would like to automatically predict substitutability for individual tokens. However predicting whether one connective can either a) always, b) sometimes or c) never be substituted for another is a step towards this goal. Our results demonstrate that these general substitutability relationships have empirical correlates. We have introduced a novel variance-based function of two distributions which complements distributional similarity. We demonstrated the new function’s utility in helping to predict the substitutability of connectives, and it can be expected to have wider applicability to lexical acquisition tasks. In particular, it is expected to be useful for learning relationships which cannot be characterised purely in terms of similarity, such as hyponymy. In future work we will analyse further the empirical properties of the new function, and investigate its applicability to learning relationships between other classes of lexical items such as nouns. Acknowledgements I would like to thank Mirella Lapata, Alex Lascarides, Alistair Knott, and the anonymous ACL reviewers for their helpful comments. This research was supported by EPSRC Grant GR/R40036/01 and a University of Sydney Travelling Scholarship. References Nicholas Asher and Alex Lascarides. 2003. Logics of Conversation. Cambridge University Press. James R. Curran and M. Moens. 2002. Improvements in automatic thesaurus extraction. In Proceedings of the Workshop on Unsupervised Lexical Acquisition, Philadelphia, USA. Gregory Grefenstette. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers, Boston. Brigitte Grote and Manfred Stede. 1998. Discourse marker choice in sentence planning. In Eduard Hovy, editor, Proceedings of the Ninth International Workshop on Natural Language Generation, pages 128–137, New Brunswick, New Jersey. Association for Computational Linguistics. M. Halliday and R. Hasan. 1976. Cohesion in English. Longman. Jerry A Hobbs. 1985. On the coherence and structure of discourse. Technical Report CSLI-85-37, Center for the Study of Language and Information, Stanford University. Ben Hutchinson. 2004a. Acquiring the meaning of discourse markers. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL 2004), pages 685–692. Ben Hutchinson. 2004b. Mining the web for discourse markers. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC 2004), pages 407–410, Lisbon, Portugal. Ben Hutchinson. 2005. Modelling the similarity of discourse connectives. To appear in Proceedings of the the 27th Annual Meeting of the Cognitive Science Society (CogSci2005). Andrew Kehler. 2002. Coherence, Reference and the Theory of Grammar. CSLI publications. Alistair Knott. 1996. A data-driven methodology for motivating a set of coherence relations. Ph.D. thesis, University of Edinburgh. Mirella Lapata and Alex Lascarides. 2004. Inferring sentenceinternal temporal relations. 
In In Proceedings of the Human Language Technology Conference and the North American Chapter of the Association for Computational Linguistics Annual Meeting, Boston, MA. Lillian Lee. 1999. Measures of distributional similarity. In Proceedings of ACL 1999. Daniel Marcu and Abdessamad Echihabi. 2002. An unsupervised approach to recognizing discourse relations. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-2002), Philadelphia, PA. Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. The MIT Press. Scott McDonald. 2000. Environmental determinants of lexical processing effort. Ph.D. thesis, University of Edinburgh. George A. Miller and William G. Charles. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1–28. M. Moser and J. Moore. 1995. Using discourse analysis and automatic text generation to study discourse cue usage. In Proceedings of the AAAI 1995 Spring Symposium on Empirical Methods in Discourse Interpretation and Generation. Philip Resnik and Mona Diab. 2000. Measuring verb similarity. In Proceedings of the Twenty Second Annual Meeting of the Cognitive Science Society, Philadelphia, US, August. Philip Resnik. 1999. Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language. Journal of Artificial Intelligence Research, 11:95–130. H. Rubenstein and J. B. Goodenough. 1965. Contextual correlates of synonymy. Computational Linguistics, 8:627–633. Advaith Siddharthan. 2003. Preserving discourse structure when simplifying text. In Proceedings of the 2003 European Natural Language Generation Workshop. Julie Weeds and David Weir. 2003. A general framework for distributional similarity. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2003), Sapporo, Japan, July. Sholom M. Weiss and Casimir A. Kulikowski. 1991. Computer systems that learn. Morgan Kaufmann, San Mateo, CA. 156
Proceedings of the 43rd Annual Meeting of the ACL, pages 10–17, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Scaling Conditional Random Fields Using Error-Correcting Codes Trevor Cohn Department of Computer Science and Software Engineering University of Melbourne, Australia [email protected] Andrew Smith Division of Informatics University of Edinburgh United Kingdom [email protected] Miles Osborne Division of Informatics University of Edinburgh United Kingdom [email protected] Abstract Conditional Random Fields (CRFs) have been applied with considerable success to a number of natural language processing tasks. However, these tasks have mostly involved very small label sets. When deployed on tasks with larger label sets, the requirements for computational resources mean that training becomes intractable. This paper describes a method for training CRFs on such tasks, using error correcting output codes (ECOC). A number of CRFs are independently trained on the separate binary labelling tasks of distinguishing between a subset of the labels and its complement. During decoding, these models are combined to produce a predicted label sequence which is resilient to errors by individual models. Error-correcting CRF training is much less resource intensive and has a much faster training time than a standardly formulated CRF, while decoding performance remains quite comparable. This allows us to scale CRFs to previously impossible tasks, as demonstrated by our experiments with large label sets. 1 Introduction Conditional random fields (CRFs) (Lafferty et al., 2001) are probabilistic models for labelling sequential data. CRFs are undirected graphical models that define a conditional distribution over label sequences given an observation sequence. They allow the use of arbitrary, overlapping, non-independent features as a result of their global conditioning. This allows us to avoid making unwarranted independence assumptions over the observation sequence, such as those required by typical generative models. Efficient inference and training methods exist when the graphical structure of the model forms a chain, where each position in a sequence is connected to its adjacent positions. CRFs have been applied with impressive empirical results to the tasks of named entity recognition (McCallum and Li, 2003), simplified part-of-speech (POS) tagging (Lafferty et al., 2001), noun phrase chunking (Sha and Pereira, 2003) and extraction of tabular data (Pinto et al., 2003), among other tasks. CRFs are usually estimated using gradient-based methods such as limited memory variable metric (LMVM). However, even with these efficient methods, training can be slow. Consequently, most of the tasks to which CRFs have been applied are relatively small scale, having only a small number of training examples and small label sets. For much larger tasks, with hundreds of labels and millions of examples, current training methods prove intractable. Although training can potentially be parallelised and thus run more quickly on large clusters of computers, this in itself is not a solution to the problem: tasks can reasonably be expected to increase in size and complexity much faster than any increase in computing power. In order to provide scalability, the factors which most affect the resource usage and runtime of the training method 10 must be addressed directly – ideally the dependence on the number of labels should be reduced. 
This paper presents an approach which enables CRFs to be used on larger tasks, with a significant reduction in the time and resources needed for training. This reduction does not come at the cost of performance – the results obtained on benchmark natural language problems compare favourably, and sometimes exceed, the results produced from regular CRF training. Error correcting output codes (ECOC) (Dietterich and Bakiri, 1995) are used to train a community of CRFs on binary tasks, with each discriminating between a subset of the labels and its complement. Inference is performed by applying these ‘weak’ models to an unknown example, with each component model removing some ambiguity when predicting the label sequence. Given a sufficient number of binary models predicting suitably diverse label subsets, the label sequence can be inferred while being robust to a number of individual errors from the weak models. As each of these weak models are binary, individually they can be efficiently trained, even on large problems. The number of weak learners required to achieve good performance is shown to be relatively small on practical tasks, such that the overall complexity of error-correcting CRF training is found to be much less than that of regular CRF training methods. We have evaluated the error-correcting CRF on the CoNLL 2003 named entity recognition (NER) task (Sang and Meulder, 2003), where we show that the method yields similar generalisation performance to standardly formulated CRFs, while requiring only a fraction of the resources, and no increase in training time. We have also shown how the errorcorrecting CRF scales when applied to the larger task of POS tagging the Penn Treebank and also the even larger task of simultaneously noun phrase chunking (NPC) and POS tagging using the CoNLL 2000 data-set (Sang and Buchholz, 2000). 2 Conditional random fields CRFs are undirected graphical models used to specify the conditional probability of an assignment of output labels given a set of input observations. We consider only the case where the output labels of the model are connected by edges to form a linear chain. The joint distribution of the label sequence, y, given the input observation sequence, x, is given by p(y|x) = 1 Z(x) exp T+1 X t=1 X k λkfk(t, yt−1, yt, x) where T is the length of both sequences and λk are the parameters of the model. The functions fk are feature functions which map properties of the observation and the labelling into a scalar value. Z(x) is the partition function which ensures that p is a probability distribution. A number of algorithms can be used to find the optimal parameter values by maximising the loglikelihood of the training data. Assuming that the training sequences are drawn IID from the population, the conditional log likelihood L is given by L = X i log p(y(i)|x(i)) = X i    T (i)+1 X t=1 X k λkfk(t, y(i) t−1, y(i) t , x(i)) − log Z(x(i)) o where x(i) and y(i) are the ith observation and label sequence. Note that a prior is often included in the L formulation; it has been excluded here for clarity of exposition. CRF estimation methods include generalised iterative scaling (GIS), improved iterative scaling (IIS) and a variety of gradient based methods. In recent empirical studies on maximum entropy models and CRFs, limited memory variable metric (LMVM) has proven to be the most efficient method (Malouf, 2002; Wallach, 2002); accordingly, we have used LMVM for CRF estimation. 
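For concreteness, the following sketch scores a labelling under a toy linear-chain model and computes log Z(x) with the forward recursion described just below. Folding the weighted feature sums into one (L x L) matrix of clique scores per position is our own simplification, and the random potentials are placeholders rather than the authors' model.

import numpy as np

def toy_potentials(T, n_labels, seed=0):
    # Stand-in for sum_k lambda_k f_k(t, y_{t-1}, y_t, x): one
    # (n_labels x n_labels) matrix of clique log-scores per position t.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((T, n_labels, n_labels))

def log_partition(phi):
    # log Z(x) via the forward recursion, computed in log space for stability.
    log_alpha = phi[0, 0, :]                   # start from a fixed initial label 0
    for t in range(1, phi.shape[0]):
        scores = log_alpha[:, None] + phi[t]   # add the previous forward values
        m = scores.max(axis=0)
        log_alpha = m + np.log(np.exp(scores - m).sum(axis=0))
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

def log_prob(y, phi):
    # log p(y|x) = sum_t phi_t(y_{t-1}, y_t) - log Z(x)
    score = phi[0, 0, y[0]] + sum(phi[t, y[t - 1], y[t]] for t in range(1, len(y)))
    return float(score - log_partition(phi))

phi = toy_potentials(T=6, n_labels=3)
print(log_prob([0, 0, 1, 0, 0, 2], phi))

A full implementation would additionally need the backward recursion and the pairwise marginals to compute the gradient, as described next.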
Every iteration of LMVM training requires the computation of the log-likelihood and its derivative with respect to each parameter. The partition function Z(x) can be calculated efficiently using dynamic programming with the forward algorithm. Z(x) is given by P y αT (y) where α are the forward values, defined recursively as αt+1(y) = X y′ αt(y′) exp X k λkfk(t + 1, y′, y, x) 11 The derivative of the log-likelihood is given by ∂L ∂λk = X i    T (i)+1 X t=1 fk(t, y(i) t−1, y(i) t , x(i)) − X y p(y|x(i)) T (i)+1 X t=1 fk(t, yt−1, yt, x(i))    The first term is the empirical count of feature k, and the second is the expected count of the feature under the model. When the derivative equals zero – at convergence – these two terms are equal. Evaluating the first term of the derivative is quite simple. However, the sum over all possible labellings in the second term poses more difficulties. This term can be factorised, yielding X t X y′,y p(Yt−1 = y′, Yt = y|x(i))fk(t, y′, y, x(i)) This term uses the marginal distribution over pairs of labels, which can be efficiently computed from the forward and backward values as αt−1(y′) exp P k λkfk(t, y′, y, x(i))βt(y) Z(x(i)) The backward probabilities β are defined by the recursive relation βt(y) = X y′ βt+1(y′) exp X k λkfk(t + 1, y, y′, x) Typically CRF training using LMVM requires many hundreds or thousands of iterations, each of which involves calculating of the log-likelihood and its derivative. The time complexity of a single iteration is O(L2NTF) where L is the number of labels, N is the number of sequences, T is the average length of the sequences, and F is the average number of activated features of each labelled clique. It is not currently possible to state precise bounds on the number of iterations required for certain problems; however, problems with a large number of sequences often require many more iterations to converge than problems with fewer sequences. Note that efficient CRF implementations cache the feature values for every possible clique labelling of the training data, which leads to a memory requirement with the same complexity of O(L2NTF) – quite demanding even for current computer hardware. 3 Error Correcting Output Codes Since the time and space complexity of CRF estimation is dominated by the square of the number of labels, it follows that reducing the number of labels will significantly reduce the complexity. Error-correcting coding is an approach which recasts multiple label problems into a set of binary label problems, each of which is of lesser complexity than the full multiclass problem. Interestingly, training a set of binary CRF classifiers is overall much more efficient than training a full multi-label model. This is because error-correcting CRF training reduces the L2 complexity term to a constant. Decoding proceeds by predicting these binary labels and then recovering the encoded actual label. Error-correcting output codes have been used for text classification, as in Berger (1999), on which the following is based. Begin by assigning to each of the m labels a unique n-bit string Ci, which we will call the code for this label. Now train n binary classifiers, one for each column of the coding matrix (constructed by taking the labels’ codes as rows). The jth classifier, γj, takes as positive instances those with label i where Cij = 1. In this way, each classifier learns a different concept, discriminating between different subsets of the labels. We denote the set of binary classifiers as Γ = {γ1, γ2, . . . 
This set of classifiers, Γ, can be used for prediction as follows. Classify a novel instance x with each of the binary classifiers, yielding an n-bit vector Γ(x) = {γ1(x), γ2(x), . . . , γn(x)}. Now compare this vector to the codes for each label. The vector may not exactly match any of the labels due to errors in the individual classifiers, and thus we choose the label which minimises the distance, argmin_i ∆(Γ(x), Ci). Typically the Hamming distance is used, which simply measures the number of differing bit positions. In this manner, prediction is resilient to a number of prediction errors by the binary classifiers, provided the codes for the labels are sufficiently diverse.

3.1 Error-correcting CRF training

Error-correcting codes can also be applied to sequence labellers, such as CRFs, which are capable of multiclass labelling. ECOCs can be used with CRFs in a similar manner to that given above for classifiers. A series of CRFs are trained, each on a relabelled variant of the training data. The relabelling for each binary CRF maps the labels into binary space using the relevant column of the coding matrix, such that label i is taken as a positive example for the jth model if Cij = 1. Training with a binary label set reduces the time and space complexity for each training iteration to O(NTF); the L^2 term is now a constant. Provided the code is relatively short (i.e. there are few binary models, or weak learners), this translates into considerable time and space savings. Coding theory doesn’t offer any insights into the optimal code length (i.e. the number of weak learners). When using a very short code, the error-correcting CRF will not adequately model the decision boundaries between all classes. However, using a long code will lead to a higher degree of dependency between pairs of classifiers, where both model similar concepts. The generalisation performance should improve quickly as the number of weak learners (code length) increases, but these gains will diminish as the inter-classifier dependence increases.

3.2 Error-correcting CRF decoding

While training of error-correcting CRFs is simply a logical extension of the ECOC classifier method to sequence labellers, decoding is a different matter. We have applied three different decoding strategies. The Standalone method requires each binary CRF to find the Viterbi path for a given sequence, yielding a string of 0s and 1s for each model. For each position t in the sequence, the tth bit from each model is taken, and the resultant bit string is compared to each of the label codes. The label with the minimum Hamming distance is then chosen as the predicted label for that site. This method allows error correction to occur at each site; however, it discards information about the uncertainty of each weak learner, considering only the most probable paths. The Marginals method of decoding uses the marginal probability distribution at each position in the sequence instead of the Viterbi paths. This distribution is easily computed using the forward–backward algorithm. Decoding proceeds as before; however, instead of a bit string we have a vector of probabilities. This vector is compared to each of the label codes using the L1 distance, and the closest label is chosen. While this method incorporates the uncertainty of the binary models, it does so at the expense of the path information in the sequence.
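The per-position decision in the Standalone and Marginals strategies can be sketched as follows: Hamming distance against the label codes for hard bit predictions, and an L1 distance for per-bit marginal probabilities. The code matrix and inputs are made up for illustration.

import numpy as np

def standalone_decode_site(bits, code):
    """Pick the label whose code row has minimum Hamming distance to the
    t-th bits taken from each weak learner's Viterbi path."""
    dists = (code != np.asarray(bits)).sum(axis=1)
    return int(dists.argmin())

def marginals_decode_site(probs, code):
    """Same idea, but compare the vector of marginal probabilities
    p_j(bit = 1 at position t) to each code row under the L1 distance."""
    dists = np.abs(code - np.asarray(probs, dtype=float)).sum(axis=1)
    return int(dists.argmin())

code = np.array([[0, 0, 1],      # toy 4-label, 3-column code
                 [0, 1, 0],
                 [1, 0, 0],
                 [1, 1, 1]])
print(standalone_decode_site([1, 1, 0], code))
print(marginals_decode_site([0.9, 0.2, 0.1], code))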
Neither of these decoding methods allows the models to interact, although each individual weak learner may benefit from the predictions of the other weak learners. The Product decoding method addresses this problem. It treats each weak model as an independent predictor of the label sequence, such that the probability of the label sequence given the observations can be re-expressed as the product of the probabilities assigned by each weak model. A given labelling y is projected into a bit string for each weak learner, such that the ith entry in the string is Ckj for the jth weak learner, where k is the index of label yi. The weak learners can then estimate the probability of the bit string; these are then combined into a global product to give the probability of the label sequence

p(y|x) = \frac{1}{Z'(x)} \prod_j p_j(b_j(y)|x)

where pj(q|x) is the probability of q given x predicted by the jth weak learner, bj(y) is the bit string representing y for the jth weak learner and Z′(x) is the partition function. The log probability is

\sum_j \left\{ F_j(b_j(y), x) \cdot \lambda_j - \log Z_j(x) \right\} - \log Z'(x)

where F_j(y, x) = \sum_{t=1}^{T+1} f_j(t, y_{t-1}, y_t, x). This log probability can then be maximised using the Viterbi algorithm as before, noting that the two log terms are constant with respect to y and thus need not be evaluated. Note that this decoding is an equivalent formulation to a uniformly weighted logarithmic opinion pool, as described in Smith et al. (2005). Of the three decoding methods, Standalone has the lowest complexity, requiring only a binary Viterbi decoding for each weak learner. Marginals is slightly more complex, requiring the forward and backward values. Product, however, requires Viterbi decoding with the full label set, and many features – the union of the features of each weak learner – which can be quite computationally demanding.

3.3 Choice of code

The accuracy of ECOC methods is highly dependent on the quality of the code. The ideal code has diverse rows, yielding a high error-correcting capability, and diverse columns such that the weak learners model highly independent concepts. When the number of labels, k, is small, an exhaustive code with every unique column is reasonable, given there are 2^{k-1} - 1 unique columns. With larger label sets, columns must be selected with care to maximise the inter-row and inter-column separation. This can be done by randomly sampling the column space, in which case the probability of poor separation diminishes quickly as the number of columns increases (Berger, 1999). Algebraic codes, such as BCH codes, are an alternative coding scheme which can provide near-optimal error-correcting capability (MacWilliams and Sloane, 1977); however, these codes provide no guarantee of good column separation.

4 Experiments

Our experiments show that error-correcting CRFs are highly accurate on benchmark problems with small label sets, as well as on larger problems with many more labels, which would otherwise prove intractable for traditional CRFs. Moreover, with a good code, the time and resources required for training and decoding can be much less than those of the standardly formulated CRF.

4.1 Named entity recognition

CRFs have been used with strong results on the CoNLL 2003 NER task (McCallum, 2003) and thus this task is included here as a benchmark. This data set consists of 14,987 training sentences (204,567 tokens) drawn from news articles, tagged for person, location, organisation and miscellaneous entities. There are 8 IOB-2 style labels.
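Returning to the Product strategy of Section 3.2, the sketch below sums the weak learners' clique scores on the projected binary labels and runs a single Viterbi pass over the full label set, dropping the constant log Z terms. The scoring functions stand in for trained binary CRFs and are not real models.

import numpy as np

def viterbi(x, n_labels, clique_score, start=None):
    """Standard Viterbi over the full label set with an arbitrary clique scorer."""
    T = len(x)
    delta = np.full((T, n_labels), -np.inf)
    back = np.zeros((T, n_labels), dtype=int)
    for y in range(n_labels):
        delta[0, y] = clique_score(0, start, y, x)
    for t in range(1, T):
        for y in range(n_labels):
            scores = [delta[t - 1, yp] + clique_score(t, yp, y, x)
                      for yp in range(n_labels)]
            back[t, y] = int(np.argmax(scores))
            delta[t, y] = max(scores)
    path = [int(np.argmax(delta[T - 1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def product_decode(x, code, weak_scores):
    """Sum the weak learners' clique scores on the projected (binary) labels;
    log Z_j(x) and log Z'(x) are constant in y, so Viterbi can ignore them."""
    def combined(t, y_prev, y, x):
        return sum(score(t,
                         None if y_prev is None else int(code[y_prev, j]),
                         int(code[y, j]),
                         x)
                   for j, score in enumerate(weak_scores))
    return viterbi(x, code.shape[0], combined)

# Toy stand-ins for the binary models' clique scores lambda_j . f_j(t, b', b, x)
code = np.array([[0, 0], [0, 1], [1, 1]])        # 3 labels, 2 weak learners
weak_scores = [lambda t, bp, b, x: 0.7 * b + (0.2 if bp == b else 0.0),
               lambda t, bp, b, x: 0.4 * (b == (t % 2))]
print(product_decode(["a", "b", "c", "d"], code, weak_scores))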
A multiclass (standardly formulated) CRF was trained on these data using features covering word identity, word prefix and suffix, orthographic tests for digits, case and internal punctuation, word length, POS tag and POS tag bigrams before and after the current word. Only features seen at least once in the training data were included in the model, resulting in 450,345 binary features. The model was trained both without regularisation and with a Gaussian prior. An exhaustive code was created with all 127 unique columns. All of the weak learners were trained with the same feature set, each having around 315,000 features. The performance of the standard and error-correcting models is shown in Table 1.

Table 1: F1 scores on NER task.
Model        Decoding     MLE      Regularised
Multiclass   –            88.04    89.78
Coded        standalone   88.23*   88.67†
Coded        marginals    88.23*   89.19
Coded        product      88.69*   89.69

We tested for statistical significance using the matched pairs test (Gillick and Cox, 1989) at p < 0.001. Those results which are significantly better than the corresponding multiclass MLE or regularised model are flagged with a *, and those which are significantly worse with a †. These results show that error-correcting CRF training achieves quite similar performance to the multiclass CRF on the task (which incidentally exceeds McCallum (2003)’s result of 89.0 using feature induction). Product decoding was the best of the three methods, giving the best performance both with and without regularisation, although this difference was only statistically significant between the regularised standalone and the regularised product decoding. The unregularised error-correcting CRF significantly outperformed the multiclass CRF with all decoding strategies, suggesting that the method already provides some regularisation, or corrects some inherent bias in the model. Using such a large number of weak learners is costly, in this case taking roughly ten times longer to train than the multiclass CRF. However, much shorter codes can also achieve similar results. The simplest code, where each weak learner predicts only a single label (a.k.a. one-vs-all), achieved an F score of 89.56, while only requiring 8 weak learners and less than half the training time of the multiclass CRF. This code has no error-correcting capability, suggesting that the code’s column separation (and thus the interdependence between weak learners) is more important than its row separation. An exhaustive code was used in this experiment simply for illustrative purposes: many columns in this code were unnecessary, yielding only a slight gain in performance over much simpler codes while incurring a very large increase in training time. Therefore, by selecting a good subset of the exhaustive code, it should be possible to reduce the training time while preserving the strong generalisation performance. One approach is to incorporate skew in the label distribution in our choice of code – the code should minimise the confusability of commonly occurring labels more so than that of rare labels. Assuming that errors made by the weak learners are independent, the probability of a single error, q, as a function of the code length n can be bounded by

q(n) \le 1 - \sum_l p(l) \sum_{i=0}^{\lfloor (h_l - 1)/2 \rfloor} \binom{n}{i} \hat{p}^i (1 - \hat{p})^{n-i}

where p(l) is the marginal probability of the label l, h_l is the minimum Hamming distance between l and any other label, and \hat{p} is the maximum probability of an error by a weak learner.
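This bound translates directly into a code-selection criterion: score randomly sampled coding matrices by the bound and keep the one with the smallest value. The label distribution and the error rate below are placeholders; in the experiments the error rate is estimated on development data.

import random
from math import comb

def min_hamming_to_others(code, l):
    """Minimum Hamming distance between label l's code row and any other row."""
    return min(sum(a != b for a, b in zip(code[l], code[m]))
               for m in range(len(code)) if m != l)

def loss_bound(code, label_probs, p_hat):
    """Upper bound on the per-site error q(n), assuming independent
    weak-learner errors each occurring with probability at most p_hat."""
    n = len(code[0])
    p_site_correct = 0.0
    for l, p_l in enumerate(label_probs):
        h_l = min_hamming_to_others(code, l)
        correctable = (h_l - 1) // 2            # floor((h_l - 1) / 2) errors are correctable
        p_site_correct += p_l * sum(
            comb(n, i) * p_hat ** i * (1 - p_hat) ** (n - i)
            for i in range(correctable + 1))
    return 1.0 - p_site_correct

def best_random_code(n_labels, n_cols, label_probs, p_hat, samples=2000, seed=0):
    """Sample random 0/1 coding matrices and keep the minimum-bound one."""
    rng = random.Random(seed)
    best, best_q = None, float("inf")
    for _ in range(samples):
        code = [[rng.randint(0, 1) for _ in range(n_cols)]
                for _ in range(n_labels)]
        q = loss_bound(code, label_probs, p_hat)
        if q < best_q:
            best, best_q = code, q
    return best, best_q

code, q = best_random_code(8, 20, [1 / 8.0] * 8, p_hat=0.1)
print(round(q, 4))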
The performance achieved by selecting the code with the minimum loss bound from a large random sample of codes is shown in Figure 1, using standalone decoding, where \hat{p} was estimated on the development set. For comparison, randomly sampled codes and a greedy oracle are shown. The two randomly sampled codes show those samples where no column is repeated, and where duplicate columns are permitted (random with replacement). The oracle repeatedly adds to the code the column which most improves its F1 score. The minimum loss bound method allows the performance plateau to be reached more quickly than random sampling; i.e. shorter codes can be used, thus allowing more efficient training and decoding. Note also that multiclass CRF training required 830Mb of memory, while error-correcting training required only 380Mb. Decoding of the test set (51,362 tokens) with the error-correcting model (exhaustive, MLE) took between 150 seconds for standalone decoding and 173 seconds for integrated decoding. The multiclass CRF was much faster, taking only 31 seconds; however, this time difference could be reduced with suitable optimisations.

Figure 1 (F1 score vs. code length): NER F1 scores for standalone decoding with random codes, a minimum loss code and a greedy oracle, compared against the MLE and regularised multiclass CRFs.

Table 2: POS tagging accuracy.
Coding        Decoding     MLE      Regularised
Multiclass    –            95.69    95.78
Coded – 200   standalone   95.63    96.03
Coded – 200   marginals    95.68    96.03
One-vs-all    product      94.90    96.57

4.2 Part-of-speech Tagging

CRFs have been applied to POS tagging, although only with a very simple feature set and a small training sample (Lafferty et al., 2001). We used the Penn Treebank Wall Street Journal articles, training on sections 2–21 and testing on section 24. In this task there are 45,110 training sentences, a total of 1,023,863 tokens and 45 labels. The features used included word identity, prefix and suffix, whether the word contains a number, uppercase letter or a hyphen, and the words one and two positions before and after the current word. A random code of 200 columns was used for this task. These results are shown in Table 2, along with those of a multiclass CRF and an alternative one-vs-all coding. As for the NER experiment, the decoding performance levelled off after 100 bits, beyond which the improvements from longer codes were only very slight. This is a very encouraging characteristic, as only a small number of weak learners are required for good performance. The random code of 200 bits required 1,300Mb of RAM, taking a total of 293 hours to train and 3 hours to decode (54,397 tokens) on similar machines to those used before. We do not have figures regarding the resources used by Lafferty et al.’s CRF for the POS tagging task, and our attempts to train a multiclass CRF for full-scale POS tagging were thwarted by a lack of sufficient available computing resources. Instead we trained on a 10,000 sentence subset of the training data, which required approximately 17Gb of RAM and 208 hours to train. Our best result on the task was achieved using a one-vs-all code, which reduced the training time to 25 hours, as it only required training 45 binary models. This result exceeds Lafferty et al.’s accuracy of 95.73% using a CRF but falls short of Toutanova et al. (2003)’s state-of-the-art 97.24%.
This is most probably due to our only using a first-order Markov model and a fairly simple feature set, whereas Toutanova et al. include a richer set of features in a third-order model.

4.3 Part-of-speech Tagging and Noun Phrase Segmentation

The joint task of simultaneous POS tagging and noun phrase chunking (NPC) was included in order to demonstrate the scalability of error-correcting CRFs. The data was taken from the CoNLL 2000 NPC shared task, with the model predicting both the chunk tags and the POS tags. The training corpus consisted of 8,936 sentences, with 47,377 tokens and 118 labels. A 200-bit random code was used, with the following features: word identity within a window, prefix and suffix of the current word, and the presence of a digit, hyphen or upper case letter in the current word. This resulted in about 420,000 features for each weak learner. A joint tagging accuracy of 90.78% was achieved using MLE training and standalone decoding. Despite the large increase in the number of labels in comparison to the earlier tasks, the performance also began to plateau at around 100 bits. This task required 220Mb of RAM and took a total of 30 minutes to train each of the 200 binary CRFs, this time on Pentium 4 machines with 1Gb RAM. Decoding of the 47,377 test tokens took 9,748 seconds and 9,870 seconds for the standalone and marginals methods respectively. Sutton et al. (2004) applied a variant of the CRF, the dynamic CRF (DCRF), to the same task, modelling the data with two interconnected chains where one chain predicted NPC tags and the other POS tags. They achieved better performance and training times than our model; however, this is not a fair comparison, as the two approaches are orthogonal. Indeed, applying the error-correcting CRF algorithms to DCRF models could feasibly decrease the complexity of the DCRF, allowing the method to be applied to larger tasks with richer graphical structures and larger label sets. In all three experiments, error-correcting CRFs have achieved consistently good generalisation performance. The number of weak learners required to achieve these results was shown to be relatively small, even for tasks with large label sets. The time and space requirements were lower than those of a traditional CRF for the larger tasks and, most importantly, did not increase substantially when the number of labels was increased.

5 Related work

Most recent work on improving CRF performance has focused on feature selection. McCallum (2003) describes a technique for greedily adding to a CRF those feature conjuncts which significantly improve the model’s log-likelihood. His experimental results show that feature induction yields a large increase in performance; however, our results show that standardly formulated CRFs can perform well above their reported 73.3%, casting doubt on the magnitude of the possible improvement. Roark et al. (2004) have also applied feature selection to the huge task of language modelling with a CRF, by partially training a voted perceptron and then removing all features that are ignored by the perceptron. The act of automatic feature selection can be quite time consuming in itself, while the performance and runtime gains are often modest. Even with a reduced number of features, tasks with a very large label space are likely to remain intractable.

6 Conclusion

Standard training methods for CRFs suffer greatly from their dependency on the number of labels, making tasks with large label sets either difficult or impossible.
As CRFs are deployed more widely to tasks with larger label sets this problem will become more evident. The current ‘solutions’ to these scaling problems – namely feature selection, and the use of large clusters – don’t address the heart of the problem: the dependence on the square of number of labels. Error-correcting CRF training allows CRFs to be applied to larger problems and those with larger label sets than were previously possible, without requiring computationally demanding methods such as feature selection. On standard tasks we have shown that error-correcting CRFs provide comparable or better performance than the standardly formulated CRF, while requiring less time and space to train. Only a small number of weak learners were required to obtain good performance on the tasks with large label sets, demonstrating that the method provides efficient scalability to the CRF framework. Error-correction codes could be applied to other sequence labelling methods, such as the voted perceptron (Roark et al., 2004). This may yield an increase in performance and efficiency of the method, as its runtime is also heavily dependent on the number of labels. We plan to apply error-correcting coding to dynamic CRFs, which should result in better modelling of naturally layered tasks, while increasing the efficiency and scalability of the method. We also plan to develop higher order CRFs, using error-correcting codes to curb the increase in complexity. 7 Acknowledgements This work was supported in part by a PORES travelling scholarship from the University of Melbourne, allowing Trevor Cohn to travel to Edinburgh. References Adam Berger. 1999. Error-correcting output coding for text classification. In Proceedings of IJCAI: Workshop on machine learning for information filtering. Thomas G. Dietterich and Ghulum Bakiri. 1995. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Reseach, 2:263–286. L. Gillick and Stephen Cox. 1989. Some statistical issues in the comparison of speech recognition algorithms. In Proceedings of the IEEE Conference on Acoustics, Speech and Signal Processing, pages 532–535, Glasgow, Scotland. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labelling sequence data. In Proceedings of ICML 2001, pages 282–289. Florence MacWilliams and Neil Sloane. 1977. The theory of error-correcting codes. North Holland, Amsterdam. Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of CoNLL 2002, pages 49–55. Andrew McCallum and Wei Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proceedings of CoNLL 2003, pages 188–191. Andrew McCallum. 2003. Efficiently inducing features of conditional random fields. In Proceedings of UAI 2003, pages 403–410. David Pinto, Andrew McCallum, Xing Wei, and Bruce Croft. 2003. Table extraction using conditional random fields. In Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 235–242. Brian Roark, Murat Saraclar, Michael Collins, and Mark Johnson. 2004. Discriminative language modeling with conditional random fields and the perceptron algorithm. In Proceedings of ACL 2004, pages 48–55. Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. 
In Proceedings of CoNLL 2000 and LLL 2000, pages 127–132. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of CoNLL 2003, pages 142–147, Edmonton, Canada. Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In Proceedings of HLT-NAACL 2003, pages 213–220. Andrew Smith, Trevor Cohn, and Miles Osborne. 2005. Logarithmic opinion pools for conditional random fields. In Proceedings of ACL 2005. Charles Sutton, Khashayar Rohanimanesh, and Andrew McCallum. 2004. Dynamic conditional random fields: Factorized probabilistic models for labelling and segmenting sequence data. In Proceedings of ICML 2004. Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of HLT-NAACL 2003, pages 252–259. Hanna Wallach. 2002. Efficient training of conditional random fields. Master’s thesis, University of Edinburgh.
Proceedings of the 43rd Annual Meeting of the ACL, pages 157–164, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Machine Learning for Coreference Resolution: From Local Classification to Global Ranking Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 [email protected] Abstract In this paper, we view coreference resolution as a problem of ranking candidate partitions generated by different coreference systems. We propose a set of partition-based features to learn a ranking model for distinguishing good and bad partitions. Our approach compares favorably to two state-of-the-art coreference systems when evaluated on three standard coreference data sets. 1 Introduction Recent research in coreference resolution — the problem of determining which noun phrases (NPs) in a text or dialogue refer to which real-world entity — has exhibited a shift from knowledgebased approaches to data-driven approaches, yielding learning-based coreference systems that rival their hand-crafted counterparts in performance (e.g., Soon et al. (2001), Ng and Cardie (2002b), Strube et al. (2002), Yang et al. (2003), Luo et al. (2004)). The central idea behind the majority of these learningbased approaches is to recast coreference resolution as a binary classification task. Specifically, a classifier is first trained to determine whether two NPs in a document are co-referring or not. A separate clustering mechanism then coordinates the possibly contradictory pairwise coreference classification decisions and constructs a partition on the given set of NPs, with one cluster for each set of coreferent NPs. Though reasonably successful, this “standard” approach is not as robust as one may think. First, design decisions such as the choice of the learning algorithm and the clustering procedure are apparently critical to system performance, but are often made in an ad-hoc and unprincipled manner that may be suboptimal from an empirical point of view. Second, this approach makes no attempt to search through the space of possible partitions when given a set of NPs to be clustered, employing instead a greedy clustering procedure to construct a partition that may be far from optimal. Another potential weakness of this approach concerns its inability to directly optimize for clusteringlevel accuracy: the coreference classifier is trained and optimized independently of the clustering procedure to be used, and hence improvements in classification accuracy do not guarantee corresponding improvements in clustering-level accuracy. Our goal in this paper is to improve the robustness of the standard approach by addressing the above weaknesses. Specifically, we propose the following procedure for coreference resolution: given a set of NPs to be clustered, (1) use pre-selected learningbased coreference systems to generate candidate partitions of the NPs, and then (2) apply an automatically acquired ranking model to rank these candidate hypotheses, selecting the best one to be the final partition. The key features of this approach are: Minimal human decision making. In contrast to the standard approach, our method obviates, to a large extent, the need to make tough or potentially suboptimal design decisions.1 For instance, if we 1We still need to determine the  coreference systems to be employed in our framework, however. 
Fortunately, the choice of  is flexible, and can be as large as we want subject to the 157 cannot decide whether learner is better to use than learner  in a coreference system, we can simply create two copies of the system with one employing and the other  , and then add both into our preselected set of coreference systems. Generation of multiple candidate partitions. Although an exhaustive search for the best partition is not computationally feasible even for a document with a moderate number of NPs, our approach explores a larger portion of the search space than the standard approach via generating multiple hypotheses, making it possible to find a potentially better partition of the NPs under consideration. Optimization for clustering-level accuracy via ranking. As mentioned above, the standard approach trains and optimizes a coreference classifier without necessarily optimizing for clustering-level accuracy. In contrast, we attempt to optimize our ranking model with respect to the target coreference scoring function, essentially by training it in such a way that a higher scored candidate partition (according to the scoring function) would be assigned a higher rank (see Section 3.2 for details). Perhaps even more importantly, our approach provides a general framework for coreference resolution. Instead of committing ourselves to a particular resolution method as in previous approaches, our framework makes it possible to leverage the strengths of different methods by allowing them to participate in the generation of candidate partitions. We evaluate our approach on three standard coreference data sets using two different scoring metrics. In our experiments, our approach compares favorably to two state-of-the-art coreference systems adopting the standard machine learning approach, outperforming them by as much as 4–7% on the three data sets for one of the performance metrics. 2 Related Work As mentioned before, our approach differs from the standard approach primarily by (1) explicitly learning a ranker and (2) optimizing for clustering-level accuracy. In this section we will focus on discussing related work along these two dimensions. Ranking candidate partitions. Although we are not aware of any previous attempt on training a available computing resources. ranking model using global features of an NP partition, there is some related work on partition ranking where the score of a partition is computed via a heuristic function of the probabilities of its NP pairs being coreferent.2 For instance, Harabagiu et al. (2001) introduce a greedy algorithm for finding the highest-scored partition by performing a beam search in the space of possible partitions. At each step of this search process, candidate partitions are ranked based on their heuristically computed scores. Optimizing for clustering-level accuracy. Ng and Cardie (2002a) attempt to optimize their rulebased coreference classifier for clustering-level accuracy, essentially by finding a subset of the learned rules that performs the best on held-out data with respect to the target coreference scoring program. Strube and M¨uller (2003) propose a similar idea, but aim instead at finding a subset of the available features with which the resulting coreference classifier yields the best clustering-level accuracy on held-out data. To our knowledge, our work is the first attempt to optimize a ranker for clustering-level accuracy. 
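The two-step procedure outlined in the introduction (generate candidate partitions with a set of pre-selected resolvers, then let a learned ranker pick one) can be sketched as follows; every component below, the resolvers, the features and the scorer, is a stand-in placeholder rather than one of the systems actually used.

def resolve_by_ranking(document, resolvers, ranker, featurize):
    """Generate one candidate partition per pre-selected coreference system,
    then return the candidate the ranking model scores highest."""
    candidates = [r(document) for r in resolvers]           # list of NP partitions
    scores = [ranker(featurize(document, p, r_id))
              for r_id, p in enumerate(candidates)]
    best = max(range(len(candidates)), key=scores.__getitem__)
    return candidates[best]

# Toy usage with stand-in components: a "partition" is a list of NP clusters.
resolvers = [lambda doc: [{"Bush", "he"}, {"the bill"}],
             lambda doc: [{"Bush"}, {"he", "the bill"}]]
featurize = lambda doc, part, r_id: [len(part), r_id]       # placeholder features
ranker = lambda feats: -feats[0] + 0.1 * feats[1]           # placeholder scorer
print(resolve_by_ranking("...", resolvers, ranker=ranker, featurize=featurize))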
3 A Ranking Approach to Coreference Our ranking approach operates by first dividing the available training texts into two disjoint subsets: a training subset and a held-out subset. More specifically, we first train each of our pre-selected coreference systems on the documents in the training subset, and then use these resolvers to generate candidate partitions for each text in the held-out subset from which a ranking model will be learned. Given a test text, we use our coreference systems to create candidate partitions as in training, and select the highest-ranked partition according to the ranking model to be the final partition.3 The rest of this section describes how we select these learning-based coreference systems and acquire the ranking model. 3.1 Selecting Coreference Systems A learning-based coreference system can be defined by four elements: the learning algorithm used to train the coreference classifier, the method of creating training instances for the learner, the feature set 2Examples of such scoring functions include the DempsterShafer rule (see Kehler (1997) and Bean and Riloff (2004)) and its variants (see Harabagiu et al. (2001) and Luo et al. (2004)). 3The ranking model breaks ties randomly. 158 used to represent a training or test instance, and the clustering algorithm used to coordinate the coreference classification decisions. Selecting a coreference system, then, is a matter of instantiating these elements with specific values. Now we need to define the set of allowable values for each of these elements. In particular, we want to define them in such a way that the resulting coreference systems can potentially generate good candidate partitions. Given that machine learning approaches to the problem have been promising, our choices will be guided by previous learning-based coreference systems, as described below. Training instance creation methods. A training instance represents two NPs, NP and NP  , having a class value of COREFERENT or NOT COREFERENT depending on whether the NPs co-refer in the associated text. We consider three previously-proposed methods of creating training instances. In McCarthy and Lehnert’s method, a positive instance is created for each anaphoric NP paired with each of its antecedents, and a negative instance is created by pairing each NP with each of its preceding non-coreferent noun phrases. Hence, the number of instances created by this method is quadratic in the number of NPs in the associated text. The large number of instances can potentially make the training process inefficient. In an attempt to reduce the training time, Soon et al.’s method creates a smaller number of training instances than McCarthy and Lehnert’s. Specifically, a positive instance is created for each anaphoric NP, NP  , and its closest antecedent, NP ; and a negative instance is created for NP  paired with each of the intervening NPs, NP  , NP  , , NP   . Unlike Soon et al., Ng and Cardie’s method generates a positive instance for each anaphoric NP and its most confident antecedent. For a non-pronominal NP, the most confident antecedent is assumed to be its closest non-pronominal antecedent. For pronouns, the most confident antecedent is simply its closest preceding antecedent. Negative instances are generated as in Soon et al.’s method. Feature sets. We employ two feature sets for representing an instance, as described below. Soon et al.’s feature set consists of 12 surfacelevel features, each of which is computed based on one or both NPs involved in the instance. 
The features can be divided into four groups: lexical, grammatical, semantic, and positional. Space limitations preclude a description of these features. Details can be found in Soon et al. (2001). Ng and Cardie expand Soon et al.’s feature set from 12 features to a deeper set of 53 to allow more complex NP string matching operations as well as finer-grained syntactic and semantic compatibility tests. See Ng and Cardie (2002b) for details. Learning algorithms. We consider three learning algorithms, namely, the C4.5 decision tree induction system (Quinlan, 1993), the RIPPER rule learning algorithm (Cohen, 1995), and maximum entropy classification (Berger et al., 1996). The classification model induced by each of these learners returns a number between 0 and 1 that indicates the likelihood that the two NPs under consideration are coreferent. In this work, NP pairs with class values above 0.5 are considered COREFERENT; otherwise the pair is considered NOT COREFERENT. Clustering algorithms. We employ three clustering algorithms, as described below. The closest-first clustering algorithm selects as the antecedent of NP  its closest preceding coreferent NP. If no such NP exists, then NP  is assumed to be non-anaphoric (i.e., no antecedent is selected). On the other hand, the best-first clustering algorithm selects as the antecedent of NP  the closest NP with the highest coreference likelihood value from its set of preceding coreferent NPs. If this set is empty, then no antecedent is selected for NP  . Since the most likely antecedent is chosen for each NP, best-first clustering may produce partitions with higher precision than closest-first clustering. Finally, in aggressive-merge clustering, each NP is merged with all of its preceding coreferent NPs. Since more merging occurs in comparison to the previous two algorithms, aggressive-merge clustering may yield partitions with higher recall. Table 1 summarizes the previous work on coreference resolution that employs the learning algorithms, clustering algorithms, feature sets, and instance creation methods discussed above. With three learners, three training instance creation methods, two feature sets, and three clustering algorithms, we can produce 54 coreference systems in total. 159 Decision tree learners Aone and Bennett (1995), McCarthy and Lehnert (1995), Soon et al. (2001), Learning (C4.5/C5/CART) Strube et al. (2002), Strube and M¨uller (2003), Yang et al. (2003) algorithm RIPPER Ng and Cardie (2002b) Maximum entropy Kehler (1997), Morton (2000), Luo et al. (2004) Instance McCarthy and Lehnert’s McCarthy and Lehnert (1995), Aone and Bennett (1995) creation Soon et al.’s Soon et al. (2001), Strube et al. (2002), Iida et al. (2003) method Ng and Cardie’s Ng and Cardie (2002b) Feature Soon et al.’s Soon et al. (2001) set Ng and Cardie’s Ng and Cardie (2002b) Clustering Closest-first Soon et al. (2001), Strube et al. (2002) algorithm Best-first Aone and Bennett (1995), Ng and Cardie (2002b), Iida et al. (2003) Aggressive-merge McCarthy and Lehnert (1995) Table 1: Summary of the previous work on coreference resolution that employs the learning algorithms, the clustering algorithms, the feature sets, and the training instance creation methods discussed in Section 3.1. 3.2 Learning to Rank Candidate Partitions We train an SVM-based ranker for ranking candidate partitions by means of Joachims’ (2002) SVM  package, with all the parameters set to their default values. 
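The three clustering regimes of Section 3.1 differ only in how antecedents are selected from the pairwise coreference probabilities. The rough sketch below illustrates them on invented probabilities; it is not the implementation used in the paper.

def _merge(partition, i, j):
    """Merge the clusters containing NPs i and j (in place)."""
    ci = next(c for c in partition if i in c)
    cj = next(c for c in partition if j in c)
    if ci is not cj:
        ci |= cj
        partition.remove(cj)

def cluster(n_nps, prob, strategy="closest_first", threshold=0.5):
    """Greedy clustering from pairwise coreference probabilities prob(i, j), i < j."""
    partition = [{j} for j in range(n_nps)]      # start from singletons
    for j in range(n_nps):                       # resolve each NP_j in turn
        links = [(i, prob(i, j)) for i in range(j) if prob(i, j) > threshold]
        if not links:
            continue                             # NP_j treated as non-anaphoric
        if strategy == "closest_first":          # closest preceding coreferent NP
            _merge(partition, max(links)[0], j)
        elif strategy == "best_first":           # most likely NP, closest on ties
            best_p = max(p for _, p in links)
            _merge(partition, max(i for i, p in links if p == best_p), j)
        elif strategy == "aggressive_merge":     # merge with all coreferent NPs
            for i, _ in links:
                _merge(partition, i, j)
    return partition

# Toy pairwise model: NPs 0, 2 and 3 likely co-refer (invented numbers).
toy = {(0, 2): 0.8, (0, 3): 0.7, (2, 3): 0.9}
print(cluster(4, lambda i, j: toy.get((i, j), 0.1), "best_first"))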
To create training data, we first generate 54 candidate partitions for each text in the held-out subset as described above and then convert each partition into a training instance consisting of a set of partition-based features and method-based features. Partition-based features are used to characterize a candidate partition and can be derived directly from the partition itself. Following previous work on using global features of candidate structures to learn a ranking model (Collins, 2002), the global (i.e., partition-based) features we consider here are simple functions of the local features that capture the relationship between NP pairs. Specifically, we define our partition-based features in terms of the features in the Ng and Cardie (N&C) feature set (see Section 3.1) as follows. First, let us assume that  is the  -th nominal feature in N&C’s feature set and  is the -th possible value of  . Next, for each  and , we create two partitionbased features,  and   .  is computed over the set of coreferent NP pairs (with respect to the candidate partition), denoting the probability of encountering    in this set when the pairs are represented as attribute-value vectors using N&C’s features. On the other hand,   is computed over the set of non-coreferent NP pairs (with respect to the candidate partition), denoting the probability of encountering    in this set when the pairs are represented as attribute-value vectors using N&C’s features. One partition-based feature, for instance, would denote the probability that two NPs residing in the same cluster have incompatible gender values. Intuitively, a good NP partition would have a low probability value for this feature. So, having these partition-based features can potentially help us distinguish good and bad candidate partitions. Method-based features, on the other hand, are used to encode the identity of the coreference system that generated the candidate partition under consideration. Specifically, we have one method-based feature representing each pre-selected coreference system. The feature value is 1 if the corresponding coreference system generated the candidate partition and 0 otherwise. These features enable the learner to learn how to distinguish good and bad partitions based on the systems that generated them, and are particularly useful when some coreference systems perform consistently better than the others. Now, we need to compute the “class value” for each training instance, which is a positive integer denoting the rank of the corresponding partition among the 54 candidates generated for the training document under consideration. Recall from the introduction that we want to train our ranking model so that higher scored partitions according to the target coreference scoring program are ranked higher. To this end, we compute the rank of each candidate partition as follows. First, we apply the target scoring program to score each candidate partition against the correct partition derived from the training text. We then assign rank  to the  -th lowest scored partition.4 Effectively, the learning algorithm learns what a good partition is from the scoring program. 4Two partitions with the same score will have the same rank. 160 Training Corpus Test Corpus # Docs # Tokens # Docs # Tokens BNEWS 216 67470 51 18357 NPAPER 76 71944 17 18174 NWIRE 130 85688 29 20528 Table 2: Statistics for the ACE corpus. 
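To make the partition-based features and rank targets of Section 3.2 concrete, the following sketch computes, for one candidate partition, the relative frequency of each (feature, value) pair over its coreferent and non-coreferent NP pairs, and converts scorer outputs into ranks. The pairwise feature and the scores are invented for illustration.

from collections import Counter
from itertools import combinations

def partition_features(nps, partition, pair_features):
    """For every (feature, value) seen, estimate its probability among the
    coreferent and among the non-coreferent NP pairs of this candidate partition."""
    cluster_of = {m: cid for cid, c in enumerate(partition) for m in c}
    coref, non = Counter(), Counter()
    n_coref = n_non = 0
    for i, j in combinations(nps, 2):
        same = cluster_of[i] == cluster_of[j]
        bucket = coref if same else non
        if same:
            n_coref += 1
        else:
            n_non += 1
        for fv in pair_features(i, j).items():
            bucket[fv] += 1
    feats = {("COREF",) + fv: c / max(n_coref, 1) for fv, c in coref.items()}
    feats.update({("NONCOREF",) + fv: c / max(n_non, 1) for fv, c in non.items()})
    return feats

def ranks_from_scores(scores):
    """Rank i is assigned to the i-th lowest score; equal scores share a rank."""
    order = sorted(set(scores))
    return [order.index(s) + 1 for s in scores]

# Toy example: gender (dis)agreement as the only pairwise feature.
gender = {"Mary": "fem", "she": "fem", "the board": "neut"}
pair_features = lambda i, j: {"gender_match": gender[i] == gender[j]}
partition = [{"Mary", "she"}, {"the board"}]
print(partition_features(list(gender), partition, pair_features))
print(ranks_from_scores([61.2, 58.0, 61.2, 64.9]))   # invented scorer outputs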
4 Evaluation 4.1 Experimental Setup For evaluation purposes, we use the ACE (Automatic Content Extraction) coreference corpus, which is composed of three data sets created from three different news sources, namely, broadcast news (BNEWS), newspaper (NPAPER), and newswire (NWIRE).5 Statistics of these data sets are shown in Table 2. In our experiments, we use the training texts to acquire coreference classifiers and evaluate the resulting systems on the test texts with respect to two commonly-used coreference scoring programs: the MUC scorer (Vilain et al., 1995) and the B-CUBED scorer (Bagga and Baldwin, 1998). 4.2 Results Using the MUC Scorer Baseline systems. We employ as our baseline systems two existing coreference resolvers: our duplication of the Soon et al. (2001) system and the Ng and Cardie (2002b) system. Both resolvers adopt the standard machine learning approach and therefore can be characterized using the four elements discussed in Section 3.1. Specifically, Soon et al.’s system employs a decision tree learner to train a coreference classifier on instances created by Soon’s method and represented by Soon’s feature set, coordinating the classification decisions via closest-first clustering. Ng and Cardie’s system, on the other hand, employs RIPPER to train a coreference classifier on instances created by N&C’s method and represented by N&C’s feature set, inducing a partition on the given NPs via best-first clustering. The baseline results are shown in rows 1 and 2 of Table 3, where performance is reported in terms of recall, precision, and F-measure. As we can see, the N&C system outperforms the Duplicated Soon system by about 2-6% on the three ACE data sets. 5See http://www.itl.nist.gov/iad/894.01/ tests/ace for details on the ACE research program. Our approach. Recall that our approach uses labeled data to train both the coreference classifiers and the ranking model. To ensure a fair comparison of our approach with the baselines, we do not rely on additional labeled data for learning the ranker; instead, we use half of the training texts for training classifiers and the other half for ranking purposes. Results using our approach are shown in row 3 of Table 3. Our ranking model, when trained to optimize for F-measure using both partition-based features and method-based features, consistently provides substantial gains in F-measure over both baselines. In comparison to the stronger baseline (i.e., N&C), F-measure increases by 7.4, 7.2, and 4.6 for the BNEWS, NPAPER, and NWIRE data sets, respectively. Perhaps more encouragingly, gains in Fmeasure are accompanied by simultaneous increase in recall and precision for all three data sets. Feature contribution. In an attempt to gain additional insight into the contribution of partition-based features and method-based features, we train our ranking model using each type of features in isolation. Results are shown in rows 4 and 5 of Table 3. For the NPAPER and NWIRE data sets, we still see gains in F-measure over both baseline systems when the model is trained using either type of features. The gains, however, are smaller than those observed when the two types of features are applied in combination. Perhaps surprisingly, the results for BNEWS do not exhibit the same trend as those for the other two data sets. Here, the method-based features alone are strongly predictive of good candidate partitions, yielding even slightly better performance than when both types of features are applied. 
Overall, however, these results seem to suggest that both partition-based and method-based features are important to learning a good ranking model. Random ranking. An interesting question is: how much does supervised ranking help? If all of our candidate partitions are of very high quality, then ranking will not be particularly important because choosing any of these partitions may yield good results. To investigate this question, we apply a random ranking model, which randomly selects a candidate partition for each test text. Row 6 of Table 3 shows the results (averaged over five runs) when the random ranker is used in place of the supervised 161 BNEWS NPAPER NWIRE System Variation R P F R P F R P F 1 Duplicated Soon et al. baseline 52.7 47.5 50.0 63.3 56.7 59.8 48.7 40.9 44.5 2 Ng and Cardie baseline 56.5 58.6 57.5 57.1 68.0 62.1 43.1 59.9 50.1 3 Ranking framework 62.2 67.9 64.9 67.4 71.4 69.3 50.1 60.3 54.7 4 Partition-based features only 54.5 55.5 55.0 66.3 63.0 64.7 50.7 51.2 51.0 5 Method-based features only 62.0 68.5 65.1 67.5 61.2 64.2 51.1 49.9 50.5 6 Random ranking model 48.6 54.8 51.5 57.4 63.3 60.2 40.3 44.3 42.2 7 Perfect ranking model 66.0 69.3 67.6 70.4 71.2 70.8 56.6 59.7 58.1 Table 3: Results for the three ACE data sets obtained via the MUC scoring program. ranker. In comparison to the results in row 3, we see that the supervised ranker surpasses its random counterpart by about 9-13% in F-measure, implying that ranking plays an important role in our approach. Perfect ranking. It would be informative to see whether our ranking model is performing at its upper limit, because further performance improvement beyond this point would require enlarging our set of candidate partitions. So, we apply a perfect ranking model, which uses an oracle to choose the best candidate partition for each test text. Results in row 7 of Table 3 indicate that our ranking model performs at about 1-3% below the perfect ranker, suggesting that we can further improve coreference performance by improving the ranking model. 4.3 Results Using the B-CUBED Scorer Baseline systems. In contrast to the MUC results, the B-CUBED results for the two baseline systems are mixed (see rows 1 and 2 of Table 4). Specifically, while there is no clear winner for the NWIRE data set, N&C performs better on BNEWS but worse on NPAPER than the Duplicated Soon system. Our approach. From row 3 of Table 4, we see that our approach achieves small but consistent improvements in F-measure over both baseline systems. In comparison to the better baseline, F-measure increases by 0.1, 1.1, and 2.0 for the BNEWS, NPAPER, and NWIRE data sets, respectively. Feature contribution. Unlike the MUC results, using more features to train the ranking model does not always yield better performance with respect to the B-CUBED scorer (see rows 3-5 of Table 4). In particular, the best result for BNEWS is achieved using only method-based features, whereas the best result for NPAPER is obtained using only partitionbased features. Nevertheless, since neither type of features offers consistently better performance than the other, it still seems desirable to apply the two types of features in combination to train the ranker. Random ranking. Comparing rows 3 and 6 of Table 4, we see that the supervised ranker yields a nontrivial improvement of 2-3% in F-measure over the random ranker for the three data sets. 
Hence, ranking still plays an important role in our approach with respect to the B-CUBED scorer despite its modest performance gains over the two baseline systems. Perfect ranking. Results in rows 3 and 7 of Table 4 indicate that the supervised ranker underperforms the perfect ranker by about 5% for BNEWS and 3% for both NPAPER and NWIRE in terms of F-measure, suggesting that the supervised ranker still has room for improvement. Moreover, by comparing rows 1-2 and 7 of Table 4, we can see that the perfect ranker outperforms the baselines by less than 5%. This is essentially an upper limit on how much our approach can improve upon the baselines given the current set of candidate partitions. In other words, the performance of our approach is limited in part by the quality of the candidate partitions, more so with B-CUBED than with the MUC scorer. 5 Discussion Two questions naturally arise after examining the above results. First, which of the 54 coreference systems generally yield superior results? Second, why is the same set of candidate partitions scored so differently by the two scoring programs? To address the first question, we take the 54 coreference systems that were trained on half of the available training texts (see Section 4) and apply them to the three ACE test data sets. Table 5 shows the bestperforming resolver for each test set and scoring program combination. Interestingly, with respect to the 162 BNEWS NPAPER NWIRE System Variation R P F R P F R P F 1 Duplicated Soon et al. baseline 53.4 78.4 63.5 58.0 75.4 65.6 56.0 75.3 64.2 2 Ng and Cardie baseline 59.9 72.3 65.5 61.8 64.9 63.3 62.3 66.7 64.4 3 Ranking framework 57.0 77.1 65.6 62.8 71.2 66.7 59.3 75.4 66.4 4 Partition-based features only 55.0 79.1 64.9 61.3 74.7 67.4 57.1 76.8 65.5 5 Method-based features only 63.1 69.8 65.8 58.4 75.2 65.8 58.9 75.5 66.1 6 Random ranking model 52.5 79.9 63.4 58.4 69.2 63.3 54.3 77.4 63.8 7 Perfect ranking model 64.5 76.7 70.0 61.3 79.1 69.1 63.2 76.2 69.1 Table 4: Results for the three ACE data sets obtained via the B-CUBED scoring program. MUC scorer, the best performance on the three data sets is achieved by the same resolver. The results with respect to B-CUBED are mixed, however. For each resolver shown in Table 5, we also compute the average rank of the partitions generated by the resolver for the corresponding test texts.6 Intuitively, a resolver that consistently produces good partitions (relative to other candidate partitions) would achieve a low average rank. Hence, we can infer from the fairly high rank associated with the top B-CUBED resolvers that they do not perform consistently better than their counterparts. Regarding our second question of why the same set of candidate partitions is scored differently by the two scoring programs, the reason can be attributed to two key algorithmic differences between these scorers. First, while the MUC scorer only rewards correct identification of coreferent links, B-CUBED additionally rewards successful recognition of noncoreference relationships. Second, the MUC scorer applies the same penalty to each erroneous merging decision, whereas B-CUBED penalizes erroneous merging decisions involving two large clusters more heavily than those involving two small clusters. Both of the above differences can potentially cause B-CUBED to assign a narrower range of Fmeasure scores to each set of 54 candidate partitions than the MUC scorer, for the following reasons. 
First, our candidate partitions in general agree more on singleton clusters than on non-singleton clusters. Second, by employing a non-uniform penalty function B-CUBED effectively removes a bias inherent in the MUC scorer that leads to under-penalization of partitions in which entities are over-clustered. Nevertheless, our B-CUBED results suggest that 6The rank of a partition is computed in the same way as in Section 3.2, except that we now adopt the common convention of assigning rank to the -th highest scored partition. (1) despite its modest improvement over the baselines, our approach offers robust performance across the data sets; and (2) we could obtain better scores by improving the ranking model and expanding our set of candidate partitions, as elaborated below. To improve the ranking model, we can potentially (1) design new features that better characterize a candidate partition (e.g., features that measure the size and the internal cohesion of a cluster), and (2) reserve more labeled data for training the model. In the latter case we may have less data for training coreference classifiers, but at the same time we can employ weakly supervised techniques to bootstrap the classifiers. Previous attempts on bootstrapping coreference classifiers have only been mildly successful (e.g., M¨uller et al. (2002)), and this is also an area that deserves further research. To expand our set of candidate partitions, we can potentially incorporate more high-performing coreference systems into our framework, which is flexible enough to accommodate even those that adopt knowledge-based (e.g., Harabagiu et al. (2001)) and unsupervised approaches (e.g., Cardie and Wagstaff (1999), Bean and Riloff (2004)). Of course, we can also expand our pre-selected set of coreference systems via incorporating additional learning algorithms, clustering algorithms, and feature sets. Once again, we may use previous work to guide our choices. For instance, Iida et al. (2003) and Zelenko et al. (2004) have explored the use of SVM, voted perceptron, and logistic regression for training coreference classifiers. McCallum and Wellner (2003) and Zelenko et al. (2004) have employed graph-based partitioning algorithms such as correlation clustering (Bansal et al., 2002). Finally, Strube et al. (2002) and Iida et al. (2003) have proposed new edit-distance-based string-matching features and centering-based features, respectively. 163 Scoring Average Coreference System Test Set Program Rank Instance Creation Method Feature Set Learner Clustering Algorithm BNEWS MUC 7.2549 McCarthy and Lehnert’s Ng and Cardie’s C4.5 aggressive-merge BCUBED 16.9020 McCarthy and Lehnert’s Ng and Cardie’s C4.5 aggressive-merge NPAPER MUC 1.4706 McCarthy and Lehnert’s Ng and Cardie’s C4.5 aggressive-merge B-CUBED 9.3529 Soon et al.’s Soon et al.’s RIPPER closest-first NWIRE MUC 7.7241 McCarthy and Lehnert’s Ng and Cardie’s C4.5 aggressive-merge B-CUBED 13.1379 Ng and Cardie’s Ng and Cardie’s MaxEnt closest-first Table 5: The coreference systems that achieved the highest F-measure scores for each test set and scorer combination. The average rank of the candidate partitions produced by each system for the corresponding test set is also shown. Acknowledgments We thank the three anonymous reviewers for their valuable comments on an earlier draft of the paper. References C. Aone and S. W. Bennett. 1995. Evaluating automated and manual acquisition of anaphora resolution strategies. In Proc. of the ACL, pages 122–129. A. Bagga and B. Baldwin. 1998. 
Entity-based crossdocument coreferencing using the vector space model. In Proc. of COLING-ACL, pages 79–85. N. Bansal, A. Blum, and S. Chawla. 2002. Correlation clustering. In Proc. of FOCS, pages 238–247. D. Bean and E. Riloff. 2004. Unsupervised learning of contextual role knowledge for coreference resolution. In Proc. of HLT/NAACL, pages 297–304. A. Berger, S. Della Pietra, and V. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. C. Cardie and K. Wagstaff. 1999. Noun phrase coreference as clustering. In Proc. of EMNLP/VLC, pages 82–89. W. Cohen. 1995. Fast effective rule induction. In Proc. of ICML, pages 115–123. M. Collins. 2002. Discriminative training methods for Hidden Markov Models: Theory and experiments with perceptron algorithms. In Proc. of EMNLP, pages 1–8. S. Harabagiu, R. Bunescu, and S. Maiorano. 2001. Text and knowledge mining for coreference resolution. In Proc. of NAACL, pages 55–62. R. Iida, K. Inui, H. Takamura, and Y. Matsumoto. 2003. Incorporating contextual cues in trainable models for coreference resolution. In Proc. of the EACL Workshop on The Computational Treatment of Anaphora. T. Joachims. 2002. Optimizing search engines using clickthrough data. In Proc. of KDD, pages 133–142. A. Kehler. 1997. Probabilistic coreference in information extraction. In Proc. of EMNLP, pages 163–173. X. Luo, A. Ittycheriah, H. Jing, N. Kambhatla, and S. Roukos. 2004. A mention-synchronous coreference resolution algorithm based on the Bell tree. In Proc. of the ACL, pages 136–143. A. McCallum and B. Wellner. 2003. Toward conditional models of identity uncertainty with application to proper noun coreference. In Proc. of the IJCAI Workshop on Information Integration on the Web. J. McCarthy and W. Lehnert. 1995. Using decision trees for coreference resolution. In Proc. of the IJCAI, pages 1050–1055. T. Morton. 2000. Coreference for NLP applications. In Proc. of the ACL. C. M¨uller, S. Rapp, and M. Strube. 2002. Applying cotraining to reference resolution. In Proc. of the ACL, pages 352–359. V. Ng and C. Cardie. 2002a. Combining sample selection and error-driven pruning for machine learning of coreference rules. In Proc. of EMNLP, pages 55–62. V. Ng and C. Cardie. 2002b. Improving machine learning approaches to coreference resolution. In Proc. of the ACL, pages 104–111. J. R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann. W. M. Soon, H. T. Ng, and D. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. M. Strube and C. M¨uller. 2003. A machine learning approach to pronoun resolution in spoken dialogue. In Proc. of the ACL, pages 168–175. M. Strube, S. Rapp, and C. M¨uller. 2002. The influence of minimum edit distance on reference resolution. In Proc. of EMNLP, pages 312–319. M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proc. of the Sixth Message Understanding Conference (MUC-6), pages 45–52. X. Yang, G. D. Zhou, J. Su, and C. L. Tan. 2003. Coreference resolution using competitive learning approach. In Proc. of the ACL, pages 176–183. D. Zelenko, C. Aone, and J. Tibbetts. 2004. Coreference resolution for information extraction. In Proc. of the ACL Workshop on Reference Resolution and its Applications, pages 9–16. 164
Proceedings of the 43rd Annual Meeting of the ACL, pages 165–172, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Improving Pronoun Resolution Using Statistics-Based Semantic Compatibility Information Xiaofeng Yang†‡ Jian Su† Chew Lim Tan‡ †Institute for Infocomm Research 21 Heng Mui Keng Terrace, Singapore, 119613 {xiaofengy,sujian}@i2r.a-star.edu.sg ‡ Department of Computer Science National University of Singapore, Singapore, 117543 {yangxiao,tancl}@comp.nus.edu.sg Abstract In this paper we focus on how to improve pronoun resolution using the statisticsbased semantic compatibility information. We investigate two unexplored issues that influence the effectiveness of such information: statistics source and learning framework. Specifically, we for the first time propose to utilize the web and the twin-candidate model, in addition to the previous combination of the corpus and the single-candidate model, to compute and apply the semantic information. Our study shows that the semantic compatibility obtained from the web can be effectively incorporated in the twin-candidate learning model and significantly improve the resolution of neutral pronouns. 1 Introduction Semantic compatibility is an important factor for pronoun resolution. Since pronouns, especially neutral pronouns, carry little semantics of their own, the compatibility between an anaphor and its antecedent candidate is commonly evaluated by examining the relationships between the candidate and the anaphor’s context, based on the statistics that the corresponding predicate-argument tuples occur in a particular large corpus. Consider the example given in the work of Dagan and Itai (1990): (1) They know full well that companies held tax money aside for collection later on the basis that the government said it1 was going to collect it2. For anaphor it1, the candidate government should have higher semantic compatibility than money because government collect is supposed to occur more frequently than money collect in a large corpus. A similar pattern could also be observed for it2. So far, the corpus-based semantic knowledge has been successfully employed in several anaphora resolution systems. Dagan and Itai (1990) proposed a heuristics-based approach to pronoun resolution. It determined the preference of candidates based on predicate-argument frequencies. Recently, Bean and Riloff (2004) presented an unsupervised approach to coreference resolution, which mined the co-referring NP pairs with similar predicatearguments from a large corpus using a bootstrapping method. However, the utility of the corpus-based semantics for pronoun resolution is often argued. Kehler et al. (2004), for example, explored the usage of the corpus-based statistics in supervised learning based systems, and found that such information did not produce apparent improvement for the overall pronoun resolution. Indeed, existing learning-based approaches to anaphor resolution have performed reasonably well using limited and shallow knowledge (e.g., Mitkov (1998), Soon et al. (2001), Strube and Muller (2003)). Could the relatively noisy semantic knowledge give us further system improvement? In this paper we focus on improving pronominal anaphora resolution using automatically computed semantic compatibility information. We propose to enhance the utility of the statistics-based knowledge from two aspects: Statistics source. Corpus-based knowledge usually suffers from data sparseness problem. 
That is, many predicate-argument tuples would be unseen even in a large corpus. A possible solution is the 165 web. It is believed that the size of the web is thousands of times larger than normal large corpora, and the counts obtained from the web are highly correlated with the counts from large balanced corpora for predicate-argument bi-grams (Keller and Lapata, 2003). So far the web has been utilized in nominal anaphora resolution (Modjeska et al., 2003; Poesio et al., 2004) to determine the semantic relation between an anaphor and candidate pair. However, to our knowledge, using the web to help pronoun resolution still remains unexplored. Learning framework. Commonly, the predicateargument statistics is incorporated into anaphora resolution systems as a feature. What kind of learning framework is suitable for this feature? Previous approaches to anaphora resolution adopt the singlecandidate model, in which the resolution is done on an anaphor and one candidate at a time (Soon et al., 2001; Ng and Cardie, 2002). However, as the purpose of the predicate-argument statistics is to evaluate the preference of the candidates in semantics, it is possible that the statistics-based semantic feature could be more effectively applied in the twincandidate (Yang et al., 2003) that focusses on the preference relationships among candidates. In our work we explore the acquisition of the semantic compatibility information from the corpus and the web, and the incorporation of such semantic information in the single-candidate model and the twin-candidate model. We systematically evaluate the combinations of different statistics sources and learning frameworks in terms of their effectiveness in helping the resolution. Results on the MUC data set show that for neutral pronoun resolution in which an anaphor has no specific semantic category, the web-based semantic information would be the most effective when applied in the twin-candidate model: Not only could such a system significantly improve the baseline without the semantic feature, it also outperforms the system with the combination of the corpus and the single-candidate model (by 11.5% success). The rest of this paper is organized as follows. Section 2 describes the acquisition of the semantic compatibility information from the corpus and the web. Section 3 discusses the application of the statistics in the single-candidate and twin-candidate learning models. Section 4 gives the experimental results, and finally, Section 5 gives the conclusion. 2 Computing the Statistics-based Semantic Compatibility In this section, we introduce in detail how to compute the semantic compatibility, using the predicateargument statistics obtained from the corpus or the web. 2.1 Corpus-Based Semantic Compatibility Three relationships, possessive-noun, subject-verb and verb-object, are considered in our work. Before resolution a large corpus is prepared. Documents in the corpus are processed by a shallow parser that could generate predicate-argument tuples of the above three relationships1. To reduce data sparseness, the following steps are applied in each resulting tuple, automatically: • Only the nominal or verbal heads are retained. • Each Named-Entity (NE) is replaced by a common noun which corresponds to the semantic category of the NE (e.g. “IBM” →“company”) 2. • All words are changed to their base morphologic forms (e.g. “companies →company”). During resolution, for an encountered anaphor, each of its antecedent candidates is substituted with the anaphor . 
According to the role and type of the anaphor in its context, a predicate-argument tuple is extracted and the above three steps for data-sparse reduction are applied. Consider the sentence (1), for example. The anaphors “it1” and “it2” indicate a subject verb and verb object relationship, respectively. Thus, the predicate-argument tuples for the two candidates “government” and “money” would be (collect (subject government)) and (collect (subject money)) for “it1”, and (collect (object government)) and (collect (object money)) for “it2”. Each extracted tuple is searched in the prepared tuples set of the corpus, and the times the tuple occurs are calculated. For each candidate, its semantic 1The possessive-noun relationship involves the forms like “NP2 of NP1” and “NP1’s NP2”. 2In our study, the semantic category of a NE is identified automatically by the pre-processing NE recognition component. 166 compatibility with the anaphor could be represented simply in terms of frequency StatSem(candi, ana) = count(candi, ana) (1) where count(candi, ana) is the count of the tuple formed by candi and ana, or alternatively, in terms of conditional probability (P(candi, ana|candi)), where the count of the tuple is divided by the count of the single candidate in the corpus. That is StatSem(candi, ana) = count(candi, ana) count(candi) (2) In this way, the statistics would not bias candidates that occur frequently in isolation. 2.2 Web-Based Semantic Compatibility Unlike documents in normal corpora, web pages could not be preprocessed to generate the predicateargument reserve. Instead, the predicate-argument statistics has to be obtained via a web search engine like Google and Altavista. For the three types of predicate-argument relationships, queries are constructed in the forms of “NPcandi VP” (for subjectverb), “VP NPcandi” (for verb-object), and “NPcandi ’s NP” or “NP of NPcandi” (for possessive-noun). Consider the following sentence: (2) Several experts suggested that IBM’s accounting grew much more liberal since the mid 1980s as its business turned sour. For the pronoun “its” and the candidate “IBM”, the two generated queries are “business of IBM” and “IBM’s business”. To reduce data sparseness, in an initial query only the nominal or verbal heads are retained. Also, each NE is replaced by the corresponding common noun. (e.g, “IBM’s business” →“company’s business” and “business of IBM” →“business of company”). A set of inflected queries is generated by expanding a term into all its possible morphological forms. For example, in Sentence (1), “collect money” becomes “collected|collecting|... money”, and in (2) “business of company” becomes “business of company|companies”). Besides, determiners are inserted for every noun. If the noun is the candidate under consideration, only the definite article the is inserted. For other nouns, instead, a/an, the and the empty determiners (for bare plurals) would be added (e.g., “the|a business of the company|companies”). Queries are submitted to a particular web search engine (Google in our study). All queries are performed as exact matching. Similar to the corpusbased statistics, the compatibility for each candidate and anaphor pair could be represented using either frequency (Eq. 1) or probability (Eq. 2) metric. In such a situation, count(candi, ana) is the hit number of the inflected queries returned by the search engine, while count(candi) is the hit number of the query formed with only the head of the candidate (i.e.,“the + candi”). 
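To make the two metrics concrete, the sketch below shows one way the compatibility of Eq. 1 and Eq. 2 could be computed once the corpus tuples have been extracted and normalized; the function and table names are illustrative assumptions rather than the authors' implementation, and the web-based variant would simply replace the two table lookups with the hit counts returned by the search engine for the inflected queries described above.

# Hypothetical data layout: tuple_counts maps normalized
# (predicate, role, argument-head) triples to their corpus frequencies,
# and head_counts maps a candidate head to its corpus frequency.

def stat_sem(cand_head, predicate, role, tuple_counts, head_counts,
             use_probability=False):
    """Semantic compatibility of a candidate in the anaphor's context."""
    joint = tuple_counts.get((predicate, role, cand_head), 0)   # count(cand_i, ana), Eq. 1
    if not use_probability:
        return joint
    single = head_counts.get(cand_head, 0)                      # count(cand_i)
    return joint / single if single else 0.0                    # Eq. 2

# For sentence (1) and anaphor "it1", the two candidates would be scored as
# stat_sem("government", "collect", "subject", ...) and
# stat_sem("money", "collect", "subject", ...).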
3 Applying the Semantic Compatibility In this section, we discuss how to incorporate the statistics-based semantic compatibility for pronoun resolution, in a machine learning framework. 3.1 The Single-Candidate Model One way to utilize the semantic compatibility is to take it as a feature under the single-candidate learning model as employed by Ng and Cardie (2002). In such a learning model, each training or testing instance takes the form of i{C, ana}, where ana is the possible anaphor and C is its antecedent candidate. An instance is associated with a feature vector to describe their relationships. During training, for each anaphor in a given text, a positive instance is created by pairing the anaphor and its closest antecedent. Also a set of negative instances is formed by pairing the anaphor and each of the intervening candidates. Based on the training instances, a binary classifier is generated using a certain learning algorithm, like C5 (Quinlan, 1993) in our work. During resolution, given a new anaphor, a test instance is created for each candidate. This instance is presented to the classifier, which then returns a positive or negative result with a confidence value indicating the likelihood that they are co-referent. The candidate with the highest confidence value would be selected as the antecedent. 3.2 Features In our study we only consider those domainindependent features that could be obtained with low 167 Feature Description DefNp 1 if the candidate is a definite NP; else 0 Pron 1 if the candidate is a pronoun; else 0 NE 1 if the candidate is a named entity; else 0 SameSent 1 if the candidate and the anaphor is in the same sentence; else 0 NearestNP 1 if the candidate is nearest to the anaphor; else 0 ParalStuct 1 if the candidate has an parallel structure with ana; else 0 FirstNP 1 if the candidate is the first NP in a sentence; else 0 Reflexive 1 if the anaphor is a reflexive pronoun; else 0 Type Type of the anaphor (0: Single neuter pronoun; 1: Plural neuter pronoun; 2: Male personal pronoun; 3: Female personal pronoun) StatSem∗ the statistics-base semantic compatibility of the candidate SemMag∗∗ the semantic compatibility difference between two competing candidates Table 1: Feature set for our pronoun resolution system(*ed feature is only for the single-candidate model while **ed feature is only for the twin-candidate mode) computational cost but with high reliability. Table 1 summarizes the features with their respective possible values. The first three features represent the lexical properties of a candidate. The POS properties could indicate whether a candidate refers to a hearerold entity that would have a higher preference to be selected as the antecedent (Strube, 1998). SameSent and NearestNP mark the distance relationships between an anaphor and the candidate, which would significantly affect the candidate selection (Hobbs, 1978). FirstNP aims to capture the salience of the candidate in the local discourse segment. ParalStuct marks whether a candidate and an anaphor have similar surrounding words, which is also a salience factor for the candidate evaluation (Mitkov, 1998). Feature StatSem records the statistics-based semantic compatibility computed, from the corpus or the web, by either frequency or probability metric, as described in the previous section. If a candidate is a pronoun, this feature value would be set to that of its closest nominal antecedent. As described, the semantic compatibility of a candidate is computed under the context of the current anaphor. 
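A minimal sketch of the single-candidate training and resolution procedure of Section 3.1 is given below. It assumes a hypothetical extract_features helper that builds the feature vector of Table 1 for a candidate-anaphor pair, markables carrying a position attribute, and a classifier reduced to a single co-reference confidence score; it is an outline of the scheme, not the system's actual code.

def make_training_instances(anaphor, candidates, closest_antecedent,
                            extract_features):
    """One positive instance for the closest antecedent, plus a negative
    instance for every candidate intervening between it and the anaphor."""
    instances = [(extract_features(closest_antecedent, anaphor), 1)]
    for cand in candidates:
        if closest_antecedent.position < cand.position < anaphor.position:
            instances.append((extract_features(cand, anaphor), 0))
    return instances

def resolve(anaphor, candidates, classify, extract_features):
    """Select the candidate the classifier is most confident about."""
    if not candidates:
        return None
    scored = [(classify(extract_features(c, anaphor)), c) for c in candidates]
    return max(scored, key=lambda pair: pair[0])[1]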
Consider two occurrences of anaphors “. ..it1 collected . . .” and “. ..it2 said .. .”. As “NP collected” should occur less frequently than “NP said”, the candidates of it1 would generally have predicate-argument statistics lower than those of it2. That is, a positive instance for it1 might bear a lower semantic feature value than a negative instance for it2. The consequence is that the learning algorithm would think such a feature is not that ”indicative” and reduce its salience in the resulting classifier. One way to tackle this problem is to normalize the feature by the frequencies of the anaphor’s context, e.g., “count(collected)” and “count(said)”. This, however, would require extra calculation. In fact, as candidates of a specific anaphor share the same anaphor context, we can just normalize the semantic feature of a candidate by that of its competitor: StatSemN(C, ana) = StatSem(C, ana) max ci∈candi set(ana) StatSem(ci, ana) The value (0 ∼1) represents the rank of the semantic compatibility of the candidate C among candi set(ana), the current candidates of ana. 3.3 The Twin-Candidate Model Yang et al. (2003) proposed an alternative twincandidate model for anaphora resolution task. The strength of such a model is that unlike the singlecandidate model, it could capture the preference relationships between competing candidates. In the model, candidates for an anaphor are paired and features from two competing candidates are put together for consideration. This property could nicely deal with the above mentioned training problem of different anaphor contexts, because the semantic feature would be considered under the current candidate set only. In fact, as semantic compatibility is 168 a preference-based factor for anaphor resolution, it would be incorporated in the twin-candidate model more naturally. In the twin-candidate model, an instance takes a form like i{C1, C2, ana}, where C1 and C2 are two candidates. We stipulate that C2 should be closer to ana than C1 in distance. The instance is labelled as “10” if C1 the antecedent, or “01” if C2 is. During training, for each anaphor, we find its closest antecedent, Cante. A set of “10” instances, i{Cante, C, ana}, is generated by pairing Cante and each of the interning candidates C. Also a set of “01” instances, i{C, Cante, ana}, is created by pairing Cante with each candidate before Cante until another antecedent, if any, is reached. The resulting pairwise classifier would return “10” or “01” indicating which candidate is preferred to the other. During resolution, candidates are paired one by one. The score of a candidate is the total number of the competitors that the candidate wins over. The candidate with the highest score would be selected as the antecedent. Features The features for the twin-candidate model are similar to those for the single-candidate model except that a duplicate set of features has to be prepared for the additional candidate. Besides, a new feature, SemMag, is used in place of StatSem to represent the difference magnitude between the semantic compatibility of two candidates. Let mag = StatSem(C1, ana)/StatSem(C2, ana), feature SemMag is defined as follows, SemMag(C1, C2, ana) = ½ mag −1 : mag >= 1 1 −mag−1 : mag < 1 The positive or negative value marks the times that the statistics of C1 is larger or smaller than C2. 4 Evaluation and Discussion 4.1 Experiment Setup In our study we were only concerned about the thirdperson pronoun resolution. 
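For reference, the candidate-set normalization and the SemMag feature defined in Section 3, together with the round-robin scoring used at resolution time under the twin-candidate model, might be outlined as follows; the pairwise classifier interface, the candidate ordering and the smoothing constant are assumptions made only for this sketch.

def stat_sem_n(cand, anaphor, candidates, stat_sem):
    """Normalize a candidate's compatibility by the maximum over the
    anaphor's current candidate set, giving a value in [0, 1]."""
    denom = max(stat_sem(c, anaphor) for c in candidates)
    return stat_sem(cand, anaphor) / denom if denom else 0.0

def sem_mag(c1, c2, anaphor, stat_sem, eps=1e-6):
    """Signed magnitude of the compatibility difference between two
    competing candidates; eps avoids division by zero and is not
    discussed in the paper."""
    mag = (stat_sem(c1, anaphor) + eps) / (stat_sem(c2, anaphor) + eps)
    return mag - 1 if mag >= 1 else 1 - 1 / mag

def resolve_twin(anaphor, candidates, pairwise_classify):
    """Round-robin resolution: a candidate scores one point for every
    competitor it beats; candidates are assumed ordered from farthest
    to closest so that c2 is always closer to the anaphor than c1."""
    scores = {c: 0 for c in candidates}
    for i, c1 in enumerate(candidates):
        for c2 in candidates[i + 1:]:
            winner = c1 if pairwise_classify(c1, c2, anaphor) == "10" else c2
            scores[winner] += 1
    return max(scores, key=scores.get) if scores else None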
To examine the effectiveness of the semantic feature on different types of pronouns, the whole resolution task was divided into neutral pronoun (it & they) resolution and personal pronoun (he & she) resolution. The experiments were done on the newswire domain, using the MUC corpus (Wall Street Journal articles). The training was done on 150 documents from the MUC-6 coreference data set, while the testing was done on the 50 formal-test documents of MUC-6 (30) and MUC-7 (20). Throughout the experiments, default learning parameters were applied to the C5 algorithm. The performance was evaluated based on success, the ratio of the number of correctly resolved anaphors over the total number of anaphors. An input raw text was preprocessed automatically by a pipeline of NLP components. The noun phrase identification and the predicate-argument extraction were done based on the results of a chunk tagger, which was trained for the shared task of CoNLL-2000 and achieved 92% accuracy (Zhou et al., 2000). The recognition of NEs as well as their semantic categories was done by an HMM-based NER, which was trained for the MUC NE task and obtained high F-scores of 96.9% (MUC-6) and 94.3% (MUC-7) (Zhou and Su, 2002). For each anaphor, the markables occurring within the current and previous two sentences were taken as the initial candidates. Those with mismatched number and gender agreement were filtered from the candidate set. Also, pronouns or NEs that disagreed in person with the anaphor were removed in advance. The training set contains 645 neutral pronouns and 385 personal pronouns with non-empty candidate sets, while the testing set contains 245 and 197, respectively.

4.2 The Corpus and the Web

The corpus for the predicate-argument statistics computation was the TIPSTER Text Research Collection (v1994). Consisting of 173,252 Wall Street Journal articles from the years 1988 to 1992, the data set contained about 76 million words. The documents were preprocessed using the same POS tagging and NE-recognition components as in the pronoun resolution task. Cass (Abney, 1996), a robust chunk parser, was then applied to generate the shallow parse trees, which resulted in 353,085 possessive-noun tuples, 759,997 verb-object tuples and 1,090,121 subject-verb tuples. We examined the capacity of the web and the corpus in terms of zero-count ratio and count number. On average, among the predicate-argument tuples that have non-zero corpus counts, above 93% also have non-zero web counts; conversely, only around 40% of the tuples with non-zero web counts also have non-zero corpus counts. And for the predicate-argument tuples that could be seen in both data sources, the count from the web is above 2000 times larger than that from the corpus.

Learning Model     System                     Neutral Pron     Personal Pron    Overall
                                              Corpus   Web     Corpus   Web     Corpus   Web
Single-Candidate   baseline                     65.7             86.8             75.1
                   +frequency                  67.3    69.9     86.8    86.8     76.0    76.9
                   +normalized frequency       66.9    67.8     86.8    86.8     75.8    76.2
                   +probability                65.7    65.7     86.8    86.8     75.1    75.1
                   +normalized probability     67.7    70.6     86.8    86.8     76.2    77.8
Twin-Candidate     baseline                     73.9             91.9             81.9
                   +frequency                  76.7    79.2     91.4    91.9     83.3    84.8
                   +probability                75.9    78.0     91.4    92.4     82.8    84.4

Table 2: The performance of different resolution systems

Relationship        N-Pron   P-Pron
Possessive-Noun     0.508    0.517
Verb-Object         0.503    0.526
Subject-Verb        0.619    0.676

Table 3: Correlation between web and corpus counts on the seen predicate-argument tuples
Although much less sparse, the web counts are significantly noisier than the corpus count since no tagging, chunking and parsing could be carried out on the web pages. However, previous study (Keller and Lapata, 2003) reveals that the large amount of data available for the web counts could outweigh the noisy problems. In our study we also carried out a correlation analysis3 to examine whether the counts from the web and the corpus are linearly related, on the predicate-argument tuples that can be seen in both data sources. From the results listed in Table 3, we observe moderately high correlation, with coefficients ranging from 0.5 to 0.7 around, between the counts from the web and the corpus, for both neutral pronoun (N-Pron) and personal pronoun (PPron) resolution tasks. 4.3 System Evaluation Table 2 summarizes the performance of the systems with different combinations of statistics sources and learning frameworks. The systems without the se3All the counts were log-transformed and the correlation coefficients were evaluated based on Pearsons’ r. mantic feature were used as the baseline. Under the single-candidate (SC) model, the baseline system obtains a success of 65.7% and 86.8% for neutral pronoun and personal pronoun resolution, respectively. By contrast, the twin-candidate (TC) model achieves a significantly (p ≤0.05, by two-tailed ttest) higher success of 73.9% and 91.9%, respectively. Overall, for the whole pronoun resolution, the baseline system under the TC model yields a success 81.9%, 6.8% higher than SC does4. The performance is comparable to most state-of-the-art pronoun resolution systems on the same data set. Web-based feature vs. Corpus-based feature The third column of the table lists the results using the web-based compatibility feature for neutral pronouns. Under both SC and TC models, incorporation of the web-based feature significantly boosts the performance of the baseline: For the best system in the SC model and the TC model, the success rate is improved significantly by around 4.9% and 5.3%, respectively. A similar pattern of improvement could be seen for the corpus-based semantic feature. However, the increase is not as large as using the web-based feature: Under the two learning models, the success rate of the best system with the corpus-based feature rises by up to 2.0% and 2.8% respectively, about 2.9% and 2.5% less than that of the counterpart systems with the web-based feature. The larger size and the better counts of the web against the corpus, as reported in Section 4.2, 4The improvement against SC is higher than that reported in (Yang et al., 2003). It should be because we now used 150 training documents rather than 30 ones as in the previous work. The TC model would benefit from larger training data set as it uses more features (more than double) than SC. 170 should contribute to the better performance. Single-candidate model vs. Twin-Candidate model The difference between the SC and the TC model is obvious from the table. For the N-Pron and P-Pron resolution, the systems under TC could outperform the counterpart systems under SC by above 5% and 8% success, respectively. In addition, the utility of the statistics-based semantic feature is more salient under TC than under SC for N-Pron resolution: the best gains using the corpus-based and the web-based semantic features under TC are 2.9% and 5.3% respectively, higher than those under the SC model using either un-normalized semantic features (1.6% and 3.3%), or normalized semantic features (2.0% and 4.9%). 
Although under SC, the normalized semantic feature could result in a gain close to under TC, its utility is not stable: with metric frequency, using the normalized feature performs even worse than using the un-normalized one. These results not only affirm the claim by Yang et al. (2003) that the TC model is superior to the SC model for pronoun resolution, but also indicate that TC is more reliable than SC in applying the statistics-based semantic feature, for N-Pron resolution. Web+TC vs. Other combinations The above analysis has exhibited the superiority of the web over the corpus, and the TC model over the SC model. The experimental results also reveal that using the the web-based semantic feature together with the TC model is able to further boost the resolution performance for neutral pronouns. The system with such a Web+TC combination could achieve a high success of 79.2%, defeating all the other possible combinations. Especially, it considerably outperforms (up to 11.5% success) the system with the Corpus+SC combination, which is commonly adopted in previous work (e.g., Kehler et al. (2004)). Personal pronoun resolution vs. Neutral pronoun resolution Interestingly, the statistics-based semantic feature has no effect on the resolution of personal pronouns, as shown in the table 2. We found in the learned decision trees such a feature did not occur (SC) or only occurred in bottom nodes (TC). This should be because personal pronouns have strong restriction on the semantic category (i.e., human) of the candidates. A non-human candidate, even with a high predicate-argument statistics, could Feature Group Isolated Combined SemMag (Web-based) 61.2 61.2 Type+Reflexive 53.1 61.2 ParaStruct 53.1 61.2 Pron+DefNP+InDefNP+NE 57.1 67.8 NearestNP+SameSent 53.1 70.2 FirstNP 65.3 79.2 Table 4: Results of different feature groups under the TC model for N-pron resolution SameSent_1 = 0: :..SemMag > 0: : :..Pron_2 = 0: 10 (200/23) : : Pron_2 = 1: ... : SemMag <= 0: : :..Pron_2 = 1: 01 (75/1) : Pron_2 = 0: : :..SemMag <= -28: 01 (110/19) : SemMag > -28: ... SameSent_1 = 1: :..SameSent_2 = 0: 01 (1655/49) SameSent_2 = 1: :..FirstNP_2 = 1: 01 (104/1) FirstNP_2 = 0: :..ParaStruct_2 = 1: 01 (3) ParaStruct_2 = 0: :..SemMag <= -151: 01 (27/2) SemMag > -151:... Figure 1: Top portion of the decision tree learned under TC model for N-pron resolution (features ended with “ 1” are for the first candidate C1 and those with “ 2” are for C2.) not be used as the antecedent (e.g. company said in the sentence “. . . the company . . . he said .. .”). In fact, our analysis of the current data set reveals that most P-Prons refer back to a P-Pron or NE candidate whose semantic category (human) has been determined. That is, simply using features NE and Pron is sufficient to guarantee a high success, and thus the relatively weak semantic feature would not be taken in the learned decision tree for resolution. 4.4 Feature Analysis In our experiment we were also concerned about the importance of the web-based compatibility feature (using frequency metric) among the feature set. For this purpose, we divided the features into groups, and then trained and tested on one group at a time. Table 4 lists the feature groups and their respective results for N-Pron resolution under the TC model. 171 The second column is for the systems with only the current feature group, while the third column is with the features combined with the existing feature set. 
We see that used in isolation, the semantic compatibility feature is able to achieve a success up to 61% around, just 4% lower than the best indicative feature FirstNP. In combination with other features, the performance could be improved by as large as 18% as opposed to being used alone. Figure 1 shows the top portion of the pruned decision tree for N-Pron resolution under the TC model. We could find that: (i) When comparing two candidates which occur in the same sentence as the anaphor, the web-based semantic feature would be examined in the first place, followed by the lexical property of the candidates. (ii) When two nonpronominal candidates are both in previous sentences before the anaphor, the web-based semantic feature is still required to be examined after FirstNP and ParaStruct. The decision tree further indicates that the web-based feature plays an important role in N-Pron resolution. 5 Conclusion Our research focussed on improving pronoun resolution using the statistics-based semantic compatibility information. We explored two issues that affect the utility of the semantic information: statistics source and learning framework. Specifically, we proposed to utilize the web and the twin-candidate model, in addition to the common combination of the corpus and single-candidate model, to compute and apply the semantic information. Our experiments systematically evaluated different combinations of statistics sources and learning models. The results on the newswire domain showed that the web-based semantic compatibility could be the most effectively incorporated in the twin-candidate model for the neutral pronoun resolution. While the utility is not obvious for personal pronoun resolution, we can still see the improvement on the overall performance. We believe that the semantic information under such a configuration would be even more effective on technical domains where neutral pronouns take the majority in the pronominal anaphors. Our future work would have a deep exploration on such domains. References S. Abney. 1996. Partial parsing via finite-state cascades. In Workshop on Robust Parsing, 8th European Summer School in Logic, Language and Information, pages 8–15. D. Bean and E. Riloff. 2004. Unsupervised learning of contextual role knowledge for coreference resolution. In Proceedings of 2004 North American chapter of the Association for Computational Linguistics annual meeting. I. Dagan and A. Itai. 1990. Automatic processing of large corpora for the resolution of anahora references. In Proceedings of the 13th International Conference on Computational Linguistics, pages 330–332. J. Hobbs. 1978. Resolving pronoun references. Lingua, 44:339–352. A. Kehler, D. Appelt, L. Taylor, and A. Simma. 2004. The (non)utility of predicate-argument frequencies for pronoun interpretation. In Proceedings of 2004 North American chapter of the Association for Computational Linguistics annual meeting. F. Keller and M. Lapata. 2003. Using the web to obtain freqencies for unseen bigrams. Computational Linguistics, 29(3):459–484. R. Mitkov. 1998. Robust pronoun resolution with limited knowledge. In Proceedings of the 17th Int. Conference on Computational Linguistics, pages 869–875. N. Modjeska, K. Markert, and M. Nissim. 2003. Using the web in machine learning for other-anaphora resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 176–183. V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. 
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 104–111, Philadelphia. M. Poesio, R. Mehta, A. Maroudas, and J. Hitzeman. 2004. Learning to resolve bridging references. In Proceedings of 42th Annual Meeting of the Association for Computational Linguistics. J. R. Quinlan. 1993. C4.5: Programs for machine learning. Morgan Kaufmann Publishers, San Francisco, CA. W. Soon, H. Ng, and D. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. M. Strube and C. Muller. 2003. A machine learning approach to pronoun resolution in spoken dialogue. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 168–175, Japan. M. Strube. 1998. Never look back: An alternative to centering. In Proceedings of the 17th Int. Conference on Computational Linguistics and 36th Annual Meeting of ACL, pages 1251– 1257. X. Yang, G. Zhou, J. Su, and C. Tan. 2003. Coreference resolution using competition learning approach. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Japan. G. Zhou and J. Su. 2002. Named Entity recognition using a HMM-based chunk tagger. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia. G. Zhou, J. Su, and T. Tey. 2000. Hybrid text chunking. In Proceedings of the 4th Conference on Computational Natural Language Learning, pages 163–166, Lisbon, Portugal. 172
Proceedings of the 43rd Annual Meeting of the ACL, pages 173–180, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Coarse-to-fine n-best parsing and MaxEnt discriminative reranking Eugene Charniak and Mark Johnson Brown Laboratory for Linguistic Information Processing (BLLIP) Brown University Providence, RI 02912 {mj|ec}@cs.brown.edu Abstract Discriminative reranking is one method for constructing high-performance statistical parsers (Collins, 2000). A discriminative reranker requires a source of candidate parses for each sentence. This paper describes a simple yet novel method for constructing sets of 50-best parses based on a coarse-to-fine generative parser (Charniak, 2000). This method generates 50-best lists that are of substantially higher quality than previously obtainable. We used these parses as the input to a MaxEnt reranker (Johnson et al., 1999; Riezler et al., 2002) that selects the best parse from the set of parses for each sentence, obtaining an f-score of 91.0% on sentences of length 100 or less. 1 Introduction We describe a reranking parser which uses a regularized MaxEnt reranker to select the best parse from the 50-best parses returned by a generative parsing model. The 50-best parser is a probabilistic parser that on its own produces high quality parses; the maximum probability parse trees (according to the parser’s model) have an f-score of 0.897 on section 23 of the Penn Treebank (Charniak, 2000), which is still state-of-the-art. However, the 50 best (i.e., the 50 highest probability) parses of a sentence often contain considerably better parses (in terms of f-score); this paper describes a 50-best parsing algorithm with an oracle f-score of 96.8 on the same data. The reranker attempts to select the best parse for a sentence from the 50-best list of possible parses for the sentence. Because the reranker only has to consider a relatively small number of parses per sentences, it is not necessary to use dynamic programming, which permits the features to be essentially arbitrary functions of the parse trees. While our reranker does not achieve anything like the oracle f-score, the parses it selects do have an f-score of 91.0, which is considerably better than the maximum probability parses of the n-best parser. In more detail, for each string s the n-best parsing algorithm described in section 2 returns the n highest probability parses Y(s) = {y1(s), . . . , yn(s)} together with the probability p(y) of each parse y according to the parser’s probability model. The number n of parses was set to 50 for the experiments described here, but some simple sentences actually received fewer than 50 parses (so n is actually a function of s). Each yield or terminal string in the training, development and test data sets is mapped to such an n-best list of parse/probability pairs; the cross-validation scheme described in Collins (2000) was used to avoid training the n-best parser on the sentence it was being used to parse. A feature extractor, described in section 3, is a vector of m functions f = (f1, . . . , fm), where each fj maps a parse y to a real number fj(y), which is the value of the jth feature on y. So a feature extractor maps each y to a vector of feature values f(y) = (f1(y), . . . , fm(y)). Our reranking parser associates a parse with a 173 score vθ(y), which is a linear function of the feature values f(y). 
That is, each feature fj is associated with a weight θj, and the feature values and weights define the score vθ(y) of each parse y as follows: vθ(y) = θ · f(y) = m X j=1 θjfj(y). Given a string s, the reranking parser’s output ˆy(s) on string s is the highest scoring parse in the n-best parses Y(s) for s, i.e., ˆy(s) = arg max y∈Y(s) vθ(y). The feature weight vector θ is estimated from the labelled training corpus as described in section 4. Because we use labelled training data we know the correct parse y⋆(s) for each sentence s in the training data. The correct parse y⋆(s) is not always a member of the n-best parser’s output Y(s), but we can identify the parses Y+(s) in Y(s) with the highest f-scores. Informally, the estimation procedure finds a weight vector θ that maximizes the score vθ(y) of the parses y ∈Y+(s) relative to the scores of the other parses in Y(s), for each s in the training data. 2 Recovering the n-best parses using coarse-to-fine parsing The major difficulty in n-best parsing, compared to 1-best parsing, is dynamic programming. For example, n-best parsing is straight-forward in best-first search or beam search approaches that do not use dynamic programming: to generate more than one parse, one simply allows the search mechanism to create successive versions to one’s heart’s content. A good example of this is the Roark parser (Roark, 2001) which works left-to right through the sentence, and abjures dynamic programming in favor of a beam search, keeping some large number of possibilities to extend by adding the next word, and then re-pruning. At the end one has a beam-width’s number of best parses (Roark, 2001). The Collins parser (Collins, 1997) does use dynamic programming in its search. That is, whenever a constituent with the same history is generated a second time, it is discarded if its probability is lower than the original version. If the opposite is true, then the original is discarded. This is fine if one only wants the first-best, but obviously it does not directly enumerate the n-best parses. However, Collins (Collins, 2000; Collins and Koo, in submission) has created an nbest version of his parser by turning off dynamic programming (see the user’s guide to Bikel’s re-implementation of Collins’ parser, http://www.cis.upenn.edu/ dbikel/software.html#statparser). As with Roark’s parser, it is necessary to add a beam-width constraint to make the search tractable. With a beam width of 1000 the parser returns something like a 50-best list (Collins, personal communication), but the actual number of parses returned for each sentences varies. However, turning off dynamic programming results in a loss in efficiency. Indeed, Collins’s n-best list of parses for section 24 of the Penn tree-bank has some sentences with only a single parse, because the n-best parser could not find any parses. Now there are two known ways to produce n-best parses while retaining the use of dynamic programming: the obvious way and the clever way. The clever way is based upon an algorithm developed by Schwartz and Chow (1990). Recall the key insight in the Viterbi algorithm: in the optimal parse the parsing decisions at each of the choice points that determine a parse must be optimal, since otherwise one could find a better parse. This insight extends to n-best parsing as follows. Consider the secondbest parse: if it is to differ from the best parse, then at least one of its parsing decisions must be suboptimal. 
In fact, all but one of the parsing decisions in second-best parse must be optimal, and the one suboptimal decision must be the second-best choice at that choice point. Further, the nth-best parse can only involve at most n suboptimal parsing decisions, and all but one of these must be involved in one of the second through the n−1th-best parses. Thus the basic idea behind this approach to n-best parsing is to first find the best parse, then find the second-best parse, then the third-best, and so on. The algorithm was originally described for hidden Markov models. Since this first draft of this paper we have become aware of two PCFG implementations of this algorithm (Jimenez and Marzal, 2000; Huang and Chang, 2005). The first was tried on relatively small grammars, while the second was implemented on top of the Bikel re-implementation of the Collins 174 parser (Bikel, 2004) and achieved oracle results for 50-best parses similar to those we report below. Here, however, we describe how to find n-best parses in a more straight-forward fashion. Rather than storing a single best parse of each edge, one stores n of them. That is, when using dynamic programming, rather than throwing away a candidate if it scores less than the best, one keeps it if it is one of the top n analyses for this edge discovered so far. This is really very straight-forward. The problem is space. Dynamic programming parsing algorithms for PCFGs require O(m2) dynamic programming states, where m is the length of the sentence, so an n-best parsing algorithm requires O(nm2). However things get much worse when the grammar is bilexicalized. As shown by Eisner (Eisner and Satta, 1999) the dynamic programming algorithms for bilexicalized PCFGs require O(m3) states, so a n-best parser would require O(nm3) states. Things become worse still in a parser like the one described in Charniak (2000) because it conditions on (and hence splits the dynamic programming states according to) features of the grandparent node in addition to the parent, thus multiplying the number of possible dynamic programming states even more. Thus nobody has implemented this version. There is, however, one particular feature of the Charniak parser that mitigates the space problem: it is a “coarse-to-fine” parser. By “coarse-to-fine” we mean that it first produces a crude version of the parse using coarse-grained dynamic programming states, and then builds fine-grained analyses by splitting the most promising of coarse-grained states. A prime example of this idea is from Goodman (1997), who describes a method for producing a simple but crude approximate grammar of a standard context-free grammar. He parses a sentence using the approximate grammar, and the results are used to constrain the search for a parse with the full CFG. He finds that total parsing time is greatly reduced. A somewhat different take on this paradigm is seen in the parser we use in this paper. Here the parser first creates a parse forest based upon a much less complex version of the complete grammar. In particular, it only looks at standard CFG features, the parent and neighbor labels. Because this grammar encodes relatively little state information, its dynamic programming states are relatively coarse and hence there are comparatively few of them, so it can be efficiently parsed using a standard dynamic programming bottom-up CFG parser. 
However, precisely because this first stage uses a grammar that ignores many important contextual features, the best parse it finds will not, in general, be the best parse according to the finer-grained second-stage grammar, so clearly we do not want to perform best-first parsing with this grammar. Instead, the output of the first stage is a polynomial-sized packed parse forest which records the left and right string positions for each local tree in the parses generated by this grammar. The edges in the packed parse forest are then pruned, to focus attention on the coarsegrained states that are likely to correspond to highprobability fine-grained states. The edges are then pruned according to their marginal probability conditioned on the string s being parsed as follows: p(ni j,k | s) = α(ni j,k)β(ni j,k) p(s) (1) Here ni j,k is a constituent of type i spanning the words from j to k, α(ni j,k) is the outside probability of this constituent, and β(ni j,k) is its inside probability. From parse forest both α and β can be computed in time proportional to the size of the compact forest. The parser then removes all constituents ni j,k whose probability falls below some preset threshold. In the version of this parser available on the web, this threshold is on the order of 10−4. The unpruned edges are then exhaustively evaluated according to the fine-grained probabilistic model; in effect, each coarse-grained dynamic programming state is split into one or more fine-grained dynamic programming states. As noted above, the fine-grained model conditions on information that is not available in the coarse-grained model. This includes the lexical head of one’s parents, the part of speech of this head, the parent’s and grandparent’s category labels, etc. The fine-grained states investigated by the parser are constrained to be refinements of the coarse-grained states, which drastically reduces the number of fine-grained states that need to be investigated. It is certainly possible to do dynamic programming parsing directly with the fine-grained grammar, but precisely because the fine-grained grammar 175 conditions on a wide variety of non-local contextual information there would be a very large number of different dynamic programming states, so direct dynamic programming parsing with the fine-grained grammar would be very expensive in terms of time and memory. As the second stage parse evaluates all the remaining constituents in all of the contexts in which they appear (e.g., what are the possible grand-parent labels) it keeps track of the most probable expansion of the constituent in that context, and at the end is able to start at the root and piece together the overall best parse. Now comes the easy part. To create a 50-best parser we simply change the fine-grained version of 1-best algorithm in accordance with the “obvious” scheme outlined earlier in this section. The first, coarse-grained, pass is not changed, but the second, fine-grained, pass keeps the n-best possibilities at each dynamic programming state, rather than keeping just first best. When combining two constituents to form a larger constituent, we keep the best 50 of the 2500 possibilities they offer. Naturally, if we keep each 50-best list sorted, we do nothing like 2500 operations. The experimental question is whether, in practice, the coarse-to-fine architecture keeps the number of dynamic programming states sufficiently low that space considerations do not defeat us. The answer seems to be yes. 
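The two steps just described, discarding coarse-grained edges whose marginal probability (Eq. 1) falls below the threshold and keeping only the top n combinations when two constituents are joined, can be sketched roughly as below. The edge attributes, the combine function and the heap-based lazy enumeration are illustrative choices (the lazy merge is one standard way to avoid scoring all 2500 pairs and requires the two input lists to be sorted in descending order of score); this is not the parser's actual code.

import heapq

def prune_edges(edges, sentence_prob, threshold=1e-4):
    """Keep edges whose marginal probability alpha * beta / p(s)
    exceeds the pruning threshold (Eq. 1); edges are assumed to carry
    precomputed inside and outside scores."""
    return [e for e in edges
            if e.outside * e.inside / sentence_prob > threshold]

def merge_kbest(left, right, combine, k=50):
    """Lazily enumerate the k best combinations of two n-best lists,
    each sorted in descending order, without scoring every pair."""
    if not left or not right:
        return []
    heap = [(-combine(left[0], right[0]), 0, 0)]
    seen = {(0, 0)}
    best = []
    while heap and len(best) < k:
        neg_score, i, j = heapq.heappop(heap)
        best.append((-neg_score, left[i], right[j]))
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(left) and nj < len(right) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (-combine(left[ni], right[nj]), ni, nj))
    return best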
We ran the algorithm on section 24 of the Penn WSJ tree-bank using the default pruning settings mentioned above. Table 1 shows how the number of fine-grained dynamic programming states increases as a function of sentence length for the sentences in section 24 of the Treebank. There are no sentences of length greater than 69 in this section. Columns two to four show the number of sentences in each bucket, their average length, and the average number of fine-grained dynamic programming structures per sentence. The final column gives the value of the function 100 * L^1.5, where L is the average length of sentences in the bucket. Except for bucket 6, which is abnormally low, this ad hoc function seems to track the number of structures quite well. Thus the number of dynamic programming states does not grow as L^2, much less as L^3.

Len      Num sents   Av sen length   Av strs per sent   100 * L^1.5
0–9        225          6.04              1167               1484
10–19      725         15.0               4246               5808
20–29      795         24.2               9357              11974
30–39      465         33.8              15893              19654
40–49      162         43.2              21015              28440
50–59       35         52.8              30670              38366
60–69        9         62.8              23405              49740

Table 1: Number of structures created as a function of sentence length

n          1       2      10      25      50
f-score  0.897   0.914   0.948   0.960   0.968

Table 2: Oracle f-score as a function of the number n of n-best parses

To put the number of these structures per sentence in perspective, consider the size of such structures. Each one must contain a probability, the nonterminal label of the structure, and a vector of pointers to its children (an average parent has slightly more than two children). If one were concerned about every byte this could be made quite small. In our implementation probably the biggest factor is the STL overhead on vectors. If we figure we are using, say, 25 bytes per structure, the total space required is only 1.25Mb even for 50,000 dynamic programming states, so it is clearly not worth worrying about the memory required. The resulting n-bests are quite good, as shown in Table 2. (The results are for all sentences of section 23 of the WSJ tree-bank of length ≤100.) From the 1-best result we see that the base accuracy of the parser is 89.7%.1 2-best and 10-best show dramatic oracle-rate improvements. After that things start to slow down, and we achieve an oracle rate of 0.968 at 50-best. To put this in perspective, Roark (Roark, 2001) reports oracle results of 0.941 (with the same experimental setup) using his parser to return a variable number of parses. For the case cited his parser returns, on average, 70 parses per sentence. Finally, we note that 50-best parsing is only a factor of two or three slower than 1-best.

1 Charniak in (Charniak, 2000) cites an accuracy of 89.5%. Fixing a few very small bugs discovered by users of the parser accounts for the difference.
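The oracle rates of Table 2 simply measure how good the best parse in each n-best list is. Under the assumption of a bracket_counts(parse, gold) helper returning matched, proposed and gold bracket counts, one plausible way to compute them (the exact aggregation used in the paper is not spelled out) is:

def fscore(matched, proposed, gold):
    precision = matched / proposed if proposed else 0.0
    recall = matched / gold if gold else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

def oracle_fscore(nbest_lists, gold_trees, bracket_counts):
    """Pick, for every sentence, the candidate with the best
    sentence-level f-score, then report the corpus-level PARSEVAL
    f-score of the selected parses."""
    tot_m = tot_p = tot_g = 0
    for candidates, gold in zip(nbest_lists, gold_trees):
        best = max(candidates,
                   key=lambda y: fscore(*bracket_counts(y, gold)))
        m, p, g = bracket_counts(best, gold)
        tot_m, tot_p, tot_g = tot_m + m, tot_p + p, tot_g + g
    return fscore(tot_m, tot_p, tot_g)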
For example, the feature feat pizza is an instance of the “Heads” schema. Feature schema are often parameterized in various ways. For example, the “Heads” schema is parameterized by the type of heads that the feature schema identifies. Following Grimshaw (1997), we associate each phrase with a lexical head and a function head. For example, the lexical head of an NP is a noun while the functional head of an NP is a determiner, and the lexical head of a VP is a main verb while the functional head of VP is an auxiliary verb. We experimented with various kinds of feature selection, and found that a simple count threshold performs as well as any of the methods we tried. Specifically, we ignored all features that did not vary on the parses of at least t sentences, where t is the count threshold. In the experiments described below t = 5, though we also experimented with t = 2. The rest of this section outlines the feature schemata used in the experiments below. These feature schemata used here were developed using the n-best parses provided to us by Michael Collins approximately a year before the n-best parser described here was developed. We used the division into preliminary training and preliminary development data sets described in Collins (2000) while experimenting with feature schemata; i.e., the first 36,000 sentences of sections 2–20 were used as preliminary training data, and the remaining sentences of sections 20 and 21 were used as preliminary development data. It is worth noting that developing feature schemata is much more of an art than a science, as adding or deleting a single schema usually does not have a significant effect on performance, yet the overall impact of many well-chosen schemata can be dramatic. Using the 50-best parser output described here, there are 1,148,697 features that meet the count threshold of at least 5 on the main training data (i.e., Penn treebank sections 2–21). We list each feature schema’s name, followed by the number of features in that schema with a count of at least 5, together with a brief description of the instances of the schema and the schema’s parameters. CoPar (10) The instances of this schema indicate conjunct parallelism at various different depths. For example, conjuncts which have the same label are parallel at depth 0, conjuncts with the same label and whose children have the same label are parallel at depth 1, etc. CoLenPar (22) The instances of this schema indicate the binned difference in length (in terms of number of preterminals dominated) in adjacent conjuncts in the same coordinated structures, conjoined with a boolean flag that indicates whether the pair is final in the coordinated phrase. RightBranch (2) This schema enables the reranker to prefer right-branching trees. One instance of this schema returns the number of nonterminal nodes that lie on the path from the root node to the right-most non-punctuation preterminal node, and the other instance of this schema counts the number of the other nonterminal nodes in the parse tree. Heavy (1049) This schema classifies nodes by their category, their binned length (i.e., the number of preterminals they dominate), whether they are at the end of the sentence and whether they are followed by punctuation. Neighbours (38,245) This schema classifies nodes by their category, their binned length, and the part of speech categories of the ℓ1 preterminals to the node’s left and the ℓ2 preterminals to the 177 node’s right. 
ℓ1 and ℓ2 are parameters of this schema; here ℓ1 = 1 or ℓ1 = 2 and ℓ2 = 1. Rule (271,655) The instances of this schema are local trees, annotated with varying amounts of contextual information controlled by the schema’s parameters. This schema was inspired by a similar schema in Collins and Koo (in submission). The parameters to this schema control whether nodes are annotated with their preterminal heads, their terminal heads and their ancestors’ categories. An additional parameter controls whether the feature is specialized to embedded or non-embedded clauses, which roughly corresponds to Emonds’ “nonroot” and “root” contexts (Emonds, 1976). NGram (54,567) The instances of this schema are ℓ-tuples of adjacent children nodes of the same parent. This schema was inspired by a similar schema in Collins and Koo (in submission). This schema has the same parameters as the Rule schema, plus the length ℓof the tuples of children (ℓ= 2 here). Heads (208,599) The instances of this schema are tuples of head-to-head dependencies, as mentioned above. The category of the node that is the least common ancestor of the head and the dependent is included in the instance (this provides a crude distinction between different classes of arguments). The parameters of this schema are whether the heads involved are lexical or functional heads, the number of heads in an instance, and whether the lexical item or just the head’s part of speech are included in the instance. LexFunHeads (2,299) The instances of this feature are the pairs of parts of speech of the lexical head and the functional head of nodes in parse trees. WProj (158,771) The instances of this schema are preterminals together with the categories of ℓof their closest maximal projection ancestors. The parameters of this schema control the number ℓ of maximal projections, and whether the preterminals and the ancestors are lexicalized. Word (49,097) The instances of this schema are lexical items together with the categories of ℓ of their immediate ancestor nodes, where ℓis a schema parameter (ℓ= 2 or ℓ= 3 here). This feature was inspired by a similar feature in Klein and Manning (2003). HeadTree (72,171) The instances of this schema are tree fragments consisting of the local trees consisting of the projections of a preterminal node and the siblings of such projections. This schema is parameterized by the head type (lexical or functional) used to determine the projections of a preterminal, and whether the head preterminal is lexicalized. NGramTree (291,909) The instances of this schema are subtrees rooted in the least common ancestor of ℓcontiguous preterminal nodes. This schema is parameterized by the number ℓof contiguous preterminals (ℓ= 2 or ℓ= 3 here) and whether these preterminals are lexicalized. 4 Estimating feature weights This section explains how we estimate the feature weights θ = (θ1, . . . , θm) for the feature functions f = (f1, . . . , fm). We use a MaxEnt estimator to find the feature weights ˆθ, where L is the loss function and R is a regularization penalty term: ˆθ = arg min θ LD(θ) + R(θ). The training data D = (s1, . . . , sn′) is a sequence of sentences and their correct parses y⋆(s1), . . . , y⋆(sn). We used the 20-fold crossvalidation technique described in Collins (2000) to compute the n-best parses Y(s) for each sentence s in D. 
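Before continuing with the estimation, the count-threshold feature selection mentioned in Section 3 (a feature is kept only if its value varies over the n-best parses of at least t sentences) can be sketched as follows; the sparse feature-dictionary layout is an assumption of the sketch, not a description of the actual system.

from collections import defaultdict

def select_features(nbest_lists, extract_features, threshold=5):
    """Return the set of features whose value varies across the n-best
    parses of at least `threshold` sentences (the count threshold t)."""
    varies_in = defaultdict(int)
    for parses in nbest_lists:                        # one n-best list per sentence
        fvecs = [extract_features(p) for p in parses]     # feature -> value dicts
        for feat in set().union(*fvecs):
            values = {fv.get(feat, 0) for fv in fvecs}    # absent features count as 0
            if len(values) > 1:
                varies_in[feat] += 1
    return {feat for feat, n in varies_in.items() if n >= threshold}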
In general the correct parse y⋆(s) is not a member of Y(s), so instead we train the reranker to identify one of the best parses Y+(s) = arg maxy∈Y(s) Fy⋆(s)(y) in the n-best parser’s output, where Fy⋆(y) is the Parseval f-score of y evaluated with respect to y⋆. Because there may not be a unique best parse for each sentence (i.e., |Y+(s)| > 1 for some sentences s) we used the variant of MaxEnt described in Riezler et al. (2002) for partially labelled training data. 178 Recall the standard MaxEnt conditional probability model for a parse y ∈Y: Pθ(y|Y) = exp vθ(y) P y′∈Y exp vθ(y′), where vθ(y) = θ · f(y) = m X j=1 θjfj(y). The loss function LD proposed in Riezler et al. (2002) is just the negative log conditional likelihood of the best parses Y+(s) relative to the n-best parser output Y(s): LD(θ) = − n′ X i=1 log Pθ(Y+(si)|Y(si)), where Pθ(Y+|Y) = X y∈Y+ Pθ(y|Y) The partial derivatives of this loss function, which are required by the numerical estimation procedure, are: ∂LD θj = n′ X i=1 Eθ[fj|Y(si)] −Eθ[fj|Y+(si)] Eθ[f|Y] = X y∈Y f(y)Pθ(y|Y) In the experiments reported here, we used a Gaussian or quadratic regularizer R(w) = c Pm j=1 w2 j, where c is an adjustable parameter that controls the amount of regularization, chosen to optimize the reranker’s f-score on the development set (section 24 of the treebank). We used the Limited Memory Variable Metric optimization algorithm from the PETSc/TAO optimization toolkit (Benson et al., 2004) to find the optimal feature weights ˆθ because this method seems substantially faster than comparable methods (Malouf, 2002). The PETSc/TAO toolkit provides a variety of other optimization algorithms and flags for controlling convergence, but preliminary experiments on the Collins’ trees with different algorithms and early stopping did not show any performance improvements, so we used the default PETSc/TAO setting for our experiments here. 5 Experimental results We evaluated the performance of our reranking parser using the standard PARSEVAL metrics. We n-best trees f-score New 0.9102 Collins 0.9037 Table 3: Results on new n-best trees and Collins nbest trees, with weights estimated from sections 2– 21 and the regularizer constant c adjusted for optimal f-score on section 24 and evaluated on sentences of length less than 100 in section 23. trained the n-best parser on sections 2–21 of the Penn Treebank, and used section 24 as development data to tune the mixing parameters of the smoothing model. Similarly, we trained the feature weights θ with the MaxEnt reranker on sections 2–21, and adjusted the regularizer constant c to maximize the f-score on section 24 of the treebank. We did this both on the trees supplied to us by Michael Collins, and on the output of the n-best parser described in this paper. The results are presented in Table 3. The n-best parser’s most probable parses are already of state-of-the-art quality, but the reranker further improves the f-score. 6 Conclusion This paper has described a dynamic programming n-best parsing algorithm that utilizes a heuristic coarse-to-fine refinement of parses. Because the coarse-to-fine approach prunes the set of possible parse edges beforehand, a simple approach which enumerates the n-best analyses of each parse edge is not only practical but quite efficient. We use the 50-best parses produced by this algorithm as input to a MaxEnt discriminative reranker. The reranker selects the best parse from this set of parses using a wide variety of features. 
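For concreteness, the training objective of Section 4 (the negative conditional log-likelihood of the best-in-list parses under the regularized MaxEnt model) can be summarized in the short sketch below; the sparse feature dictionaries are an assumed representation, the gradient (the expectation differences given above) is left to a generic numerical optimizer, and the sketch is illustrative rather than the actual implementation.

import math

def score(theta, fvec):
    """v_theta(y) = theta . f(y) for a sparse feature dictionary."""
    return sum(theta.get(f, 0.0) * v for f, v in fvec.items())

def regularized_loss(theta, nbest_feats, best_indices, c=1.0):
    """L_D(theta) + c * ||theta||^2, where nbest_feats[i] holds the feature
    dicts of sentence i's n-best parses and best_indices[i] the (non-empty)
    indices of its highest-f-score parses Y+(s_i)."""
    loss = c * sum(w * w for w in theta.values())
    for feats, best in zip(nbest_feats, best_indices):
        scores = [score(theta, f) for f in feats]
        m = max(scores)
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))
        log_z_plus = m + math.log(sum(math.exp(scores[i] - m) for i in best))
        loss += log_z - log_z_plus            # -log P_theta(Y+ | Y)
    return loss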
The system we described here has an f-score of 0.91 when trained and tested using the standard PARSEVAL framework. This result is only slightly higher than the highest reported result for this test-set, Bod’s (.907) (Bod, 2003). More to the point, however, is that the system we describe is reasonably efficient so it can be used for the kind of routine parsing currently being handled by the Charniak or Collins parsers. A 91.0 f-score represents a 13% reduction in f179 measure error over the best of these parsers.2 Both the 50-best parser, and the reranking parser can be found at ftp://ftp.cs.brown.edu/pub/nlparser/, named parser and reranker respectively. Acknowledgements We would like to thanks Michael Collins for the use of his data and many helpful comments, and Liang Huang for providing an early draft of his paper and very useful comments on our paper. Finally thanks to the National Science Foundation for its support (NSF IIS-0112432, NSF 9721276, and NSF DMS-0074276). References Steve Benson, Lois Curfman McInnes, Jorge J. Mor, and Jason Sarich. 2004. Tao users manual. Technical Report ANL/MCS-TM-242-Revision 1.6, Argonne National Laboratory. Daniel M. Bikel. 2004. Intricacies of collins parsing model. Computational Linguistics, 30(4). Rens Bod. 2003. An efficient implementation of an new dop model. In Proceedings of the European Chapter of the Association for Computational Linguists. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In The Proceedings of the North American Chapter of the Association for Computational Linguistics, pages 132–139. Michael Collins and Terry Koo. in submission. Discriminative reranking for natural language parsing. Technical report, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology. Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In The Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, San Francisco. Morgan Kaufmann. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Machine Learning: Proceedings of the Seventeenth International Conference (ICML 2000), pages 175–182, Stanford, California. Jason Eisner and Giorgio Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In Proceedings of the 37th Annual 2This probably underestimates the actual improvement. There are no currently accepted figures for inter-annotater agreement on Penn WSJ, but it is no doubt well short of 100%. If we take 97% as a reasonable estimate of the the upper bound on tree-bank accuracy, we are instead talking about an 18% error reduction. Meeting of the Association for Computational Linguistics, pages 457–464. Joseph Emonds. 1976. A Transformational Approach to English Syntax: Root, Structure-Preserving and Local Transformations. Academic Press, New York, NY. Joshua Goodman. 1997. Global thresholding and multiple-pass parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 1997). Jane Grimshaw. 1997. Projection, heads, and optimality. Linguistic Inquiry, 28(3):373–422. Liang Huang and David Chang. 2005. Better k-best parsing. Technical Report MS-CIS-05-08, Department of Computer Science, University of Pennsylvania. Victor M. Jimenez and Andres Marzal. 2000. Computation of the n best parse trees for weighted and stochastic context-free grammars. In Proceedings of the Joint IAPR International Workshops on Advances in Pattern Recognition. 
Springer LNCS 1876. Mark Johnson, Stuart Geman, Stephen Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic “unification-based” grammars. In The Proceedings of the 37th Annual Conference of the Association for Computational Linguistics, pages 535–541, San Francisco. Morgan Kaufmann. Dan Klein and Christopher Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of the Sixth Conference on Natural Language Learning (CoNLL-2002), pages 49–55. Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. III Maxwell, and Mark Johnson. 2002. Parsing the wall street journal using a lexical-functional grammar and discriminative estimation techniques. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 271–278. Morgan Kaufmann. Brian Roark. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249–276. R. Schwartz and Y.L. Chow. 1990. The n-best algorithm: An efficient and exact procedure for finding the n most likely sentence hypotheses. In Proceedings of the IEEE International Conference on Acoustic, Speech, and Signal, Processing, pages 81–84. 180
Proceedings of the 43rd Annual Meeting of the ACL, pages 181–188, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Data-Defined Kernels for Parse Reranking Derived from Probabilistic Models James Henderson School of Informatics University of Edinburgh 2 Buccleuch Place Edinburgh EH8 9LW, United Kingdom [email protected] Ivan Titov Department of Computer Science University of Geneva 24, rue G´en´eral Dufour CH-1211 Gen`eve 4, Switzerland [email protected] Abstract Previous research applying kernel methods to natural language parsing have focussed on proposing kernels over parse trees, which are hand-crafted based on domain knowledge and computational considerations. In this paper we propose a method for defining kernels in terms of a probabilistic model of parsing. This model is then trained, so that the parameters of the probabilistic model reflect the generalizations in the training data. The method we propose then uses these trained parameters to define a kernel for reranking parse trees. In experiments, we use a neural network based statistical parser as the probabilistic model, and use the resulting kernel with the Voted Perceptron algorithm to rerank the top 20 parses from the probabilistic model. This method achieves a significant improvement over the accuracy of the probabilistic model. 1 Introduction Kernel methods have been shown to be very effective in many machine learning problems. They have the advantage that learning can try to optimize measures related directly to expected testing performance (i.e. “large margin” methods), rather than the probabilistic measures used in statistical models, which are only indirectly related to expected testing performance. Work on kernel methods in natural language has focussed on the definition of appropriate kernels for natural language tasks. In particular, most of the work on parsing with kernel methods has focussed on kernels over parse trees (Collins and Duffy, 2002; Shen and Joshi, 2003; Shen et al., 2003; Collins and Roark, 2004). These kernels have all been hand-crafted to try reflect properties of parse trees which are relevant to discriminating correct parse trees from incorrect ones, while at the same time maintaining the tractability of learning. Some work in machine learning has taken an alternative approach to defining kernels, where the kernel is derived from a probabilistic model of the task (Jaakkola and Haussler, 1998; Tsuda et al., 2002). This way of defining kernels has two advantages. First, linguistic knowledge about parsing is reflected in the design of the probabilistic model, not directly in the kernel. Designing probabilistic models to reflect linguistic knowledge is a process which is currently well understood, both in terms of reflecting generalizations and controlling computational cost. Because many NLP problems are unbounded in size and complexity, it is hard to specify all possible relevant kernel features without having so many features that the computations become intractable and/or the data becomes too sparse.1 Second, the kernel is defined using the trained parameters of the probabilistic model. Thus the kernel is in part determined by the training data, and is automatically tailored to reflect properties of parse trees which are relevant to parsing. 1For example, see (Henderson, 2004) for a discussion of why generative models are better than models parameterized to estimate the a posteriori probability directly. 
181 In this paper, we propose a new method for deriving a kernel from a probabilistic model which is specifically tailored to reranking tasks, and we apply this method to natural language parsing. For the probabilistic model, we use a state-of-the-art neural network based statistical parser (Henderson, 2003). The resulting kernel is then used with the Voted Perceptron algorithm (Freund and Schapire, 1998) to reranking the top 20 parses from the probabilistic model. This method achieves a significant improvement over the accuracy of the probabilistic model alone. 2 Kernels Derived from Probabilistic Models In recent years, several methods have been proposed for constructing kernels from trained probabilistic models. As usual, these kernels are then used with linear classifiers to learn the desired task. As well as some empirical successes, these methods are motivated by theoretical results which suggest we should expect some improvement with these classifiers over the classifier which chooses the most probable answer according to the probabilistic model (i.e. the maximum a posteriori (MAP) classifier). There is guaranteed to be a linear classifier for the derived kernel which performs at least as well as the MAP classifier for the probabilistic model. So, assuming a large-margin classifier can optimize a more appropriate criteria than the posterior probability, we should expect the derived kernel’s classifier to perform better than the probabilistic model’s classifier, although empirical results on a given task are never guaranteed. In this section, we first present two previous kernels and then propose a new kernel specifically for reranking tasks. In each of these discussions we need to characterize the parsing problem as a classification task. Parsing can be regarded as a mapping from an input space of sentences x∈X to a structured output space of parse trees y∈Y. On the basis of training sentences, we learn a discriminant function F : X × Y →R. The parse tree y with the largest value for this discriminant function F(x, y) is the output parse tree for the sentence x. We focus on the linear discriminant functions: Fw(x, y) = <w, φ(x, y)>, where φ(x, y) is a feature vector for the sentencetree pair, w is a parameter vector for the discriminant function, and <a, b> is the inner product of vectors a and b. In the remainder of this section, we will characterize the kernel methods we consider in terms of the feature extractor φ(x, y). 2.1 Fisher Kernels The Fisher kernel (Jaakkola and Haussler, 1998) is one of the best known kernels belonging to the class of probability model based kernels. Given a generative model of P(z|ˆθ) with smooth parameterization, the Fisher score of an example z is a vector of partial derivatives of the log-likelihood of the example with respect to the model parameters: φˆθ(z) = ( ∂log P(z|ˆθ) ∂θ1 , . . . , ∂log P(z|ˆθ) ∂θl ). This score can be regarded as specifying how the model should be changed in order to maximize the likelihood of the example z. Then we can define the similarity between data points as the inner product of the corresponding Fisher scores. This kernel is often referred to as the practical Fisher kernel. The theoretical Fisher kernel depends on the Fisher information matrix, which is not feasible to compute for most practical tasks and is usually omitted. The Fisher kernel is only directly applicable to binary classification tasks. 
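As a concrete illustration of the practical Fisher kernel just described, the sketch below builds the Fisher score of an example as the gradient of an arbitrary log-likelihood function with respect to the model parameters, approximated here by finite differences, and takes the inner product of two such scores. The toy Bernoulli model and the finite-difference approximation are our own assumptions for illustration; in practice the gradients would come from the model itself.

```python
import math

# Sketch of the *practical* Fisher kernel: the feature map is the gradient of
# log P(z | theta) with respect to the parameters, and the kernel is the plain
# inner product of two such gradients (the Fisher information matrix is
# omitted, as noted above). Gradients are approximated by central finite
# differences purely for illustration.

def fisher_score(log_p, z, theta, eps=1e-5):
    score = []
    for j in range(len(theta)):
        up = list(theta); up[j] += eps
        dn = list(theta); dn[j] -= eps
        score.append((log_p(z, up) - log_p(z, dn)) / (2 * eps))
    return score

def fisher_kernel(log_p, z1, z2, theta):
    s1 = fisher_score(log_p, z1, theta)
    s2 = fisher_score(log_p, z2, theta)
    return sum(a * b for a, b in zip(s1, s2))


if __name__ == "__main__":
    # Toy generative model: independent Bernoulli features with
    # p_j = sigmoid(theta_j); z is a binary vector. Illustrative only.
    def log_p(z, theta):
        total = 0.0
        for zj, tj in zip(z, theta):
            pj = 1.0 / (1.0 + math.exp(-tj))
            total += math.log(pj if zj else 1.0 - pj)
        return total

    theta = [0.3, -0.7, 1.2]
    print(fisher_kernel(log_p, [1, 0, 1], [1, 1, 0], theta))
```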
We can apply it to our task by considering an example z to be a sentencetree pair (x, y), and classifying the pairs into correct parses versus incorrect parses. When we use the Fisher score φˆθ(x, y) in the discriminant function F, we can interpret the value as the confidence that the tree y is correct, and choose the y in which we are the most confident. 2.2 TOP Kernels Tsuda (2002) proposed another kernel constructed from a probabilistic model, called the Tangent vectors Of Posterior log-odds (TOP) kernel. Their TOP kernel is also only for binary classification tasks, so, as above, we treat the input z as a sentence-tree pair and the output category c ∈{−1, +1} as incorrect/correct. It is assumed that the true probability distribution is included in the class of probabilistic models and that the true parameter vector θ⋆is unique. The feature extractor of the TOP kernel for 182 the input z is defined by: φˆθ(z) = (v(z, ˆθ), ∂v(z,ˆθ) ∂θ1 , . . . , ∂v(z,ˆθ) ∂θl ), where v(z, ˆθ) = log P(c=+1|z, ˆθ) − log P(c=−1|z, ˆθ). In addition to being at least as good as the MAP classifier, the choice of the TOP kernel feature extractor is motivated by the minimization of the binary classification error of a linear classifier <w, φˆθ(z)> + b. Tsuda (2002) demonstrates that this error is closely related to the estimation error of the posterior probability P(c=+1|z, θ⋆) by the estimator g(<w, φˆθ(z)> + b), where g is the sigmoid function g(t) = 1/(1 + exp (−t)). The TOP kernel isn’t quite appropriate for structured classification tasks because φˆθ(z) is motivated by binary classificaton error minimization. In the next subsection, we will adapt it to structured classification. 2.3 A TOP Kernel for Reranking We define the reranking task as selecting a parse tree from the list of candidate trees suggested by a probabilistic model. Furthermore, we only consider learning to rerank the output of a particular probabilistic model, without requiring the classifier to have good performance when applied to a candidate list provided by a different model. In this case, it is natural to model the probability that a parse tree is the best candidate given the list of candidate trees: P(yk|x, y1, . . . , ys) = P(x,yk) P t P(x,yt), where y1, . . . , ys is the list of candidate parse trees. To construct a new TOP kernel for reranking, we apply an approach similar to that used for the TOP kernel (Tsuda et al., 2002), but we consider the probability P(yk|x, y1, . . . , ys, θ⋆) instead of the probability P(c=+1|z, θ⋆) considered by Tsuda. The resulting feature extractor is given by: φˆθ(x, yk) = (v(x, yk, ˆθ), ∂v(x,yk,ˆθ) ∂θ1 , . . . , ∂v(x,yk,ˆθ) ∂θl ), where v(x, yk, ˆθ) = log P(yk|y1, . . . , ys, ˆθ) − log P t̸=k P(yt|y1, . . . , ys, ˆθ). We will call this kernel the TOP reranking kernel. 3 The Probabilistic Model To complete the definition of the kernel, we need to choose a probabilistic model of parsing. For this we use a statistical parser which has previously been shown to achieve state-of-the-art performance, namely that proposed in (Henderson, 2003). This parser has two levels of parameterization. The first level of parameterization is in terms of a historybased generative probability model, but this level is not appropriate for our purposes because it defines an infinite number of parameters (one for every possible partial parse history). When parsing a given sentence, the bounded set of parameters which are relevant to a given parse are estimated using a neural network. 
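Before turning to the probabilistic model's internals, here is a minimal sketch of the TOP reranking feature extractor defined above. The candidate probabilities are renormalized over the n-best list as in the definition, and the parameter derivatives of v are approximated by finite differences purely for illustration; the joint-probability function and its parameterization are assumptions, not the actual parser.

```python
import math

# Sketch of the TOP reranking kernel's feature extractor.
# `log_joint(x, y, theta)` is assumed to return log P(x, y | theta) for a
# candidate parse y; candidate probabilities are renormalized over the n-best
# list, as in the text. Gradients of v are finite-difference approximations.

def v(log_joint, x, cands, k, theta):
    # log P(y_k | x, y_1..y_s) - log sum_{t != k} P(y_t | x, y_1..y_s)
    logs = [log_joint(x, y, theta) for y in cands]
    m = max(logs)
    weights = [math.exp(l - m) for l in logs]
    p_k = weights[k] / sum(weights)
    return math.log(p_k) - math.log(1.0 - p_k)

def top_rerank_features(log_joint, x, cands, k, theta, eps=1e-5):
    phi = [v(log_joint, x, cands, k, theta)]
    for j in range(len(theta)):
        up = list(theta); up[j] += eps
        dn = list(theta); dn[j] -= eps
        phi.append((v(log_joint, x, cands, k, up)
                    - v(log_joint, x, cands, k, dn)) / (2 * eps))
    return phi


if __name__ == "__main__":
    # Toy joint model: score each candidate by a dot product with theta.
    def log_joint(x, y, theta):
        return sum(t * f for t, f in zip(theta, y))
    cands = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]   # candidate "feature vectors"
    print(top_rerank_features(log_joint, None, cands, k=0, theta=[0.4, -0.1]))
```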
The weights of this neural network form the second level of parameterization. There is a finite number of these parameters. Neural network training is applied to determine the values of these parameters, which in turn determine the values of the probability model’s parameters, which in turn determine the probabilistic model of parse trees. We do not use the complete set of neural network weights to define our kernels, but instead we define a third level of parameterization which only includes the network’s output layer weights. These weights define a normalized exponential model, with the network’s hidden layer as the input features. When we tried using the complete set of weights in some small scale experiments, training the classifier was more computationally expensive, and actually performed slightly worse than just using the output weights. Using just the output weights also allows us to make some approximations in the TOP reranking kernel which makes the classifier learning algorithm more efficient. 3.1 A History-Based Probability Model As with many other statistical parsers (Ratnaparkhi, 1999; Collins, 1999; Charniak, 2000), Henderson (2003) uses a history-based model of parsing. He defines the mapping from phrase structure trees to parse sequences using a form of left-corner parsing strategy (see (Henderson, 2003) for more details). The parser actions include: introducing a new constituent with a specified label, attaching one constituent to another, and predicting the next word of the sentence. A complete parse consists of a sequence of these actions, d1,..., dm, such that performing d1,..., dm results in a complete phrase structure tree. Because this mapping to parse sequences is 183 one-to-one, and the word prediction actions in a complete parse d1,..., dm specify the sentence, P(d1,..., dm) is equivalent to the joint probability of the output phrase structure tree and the input sentence. This probability can be then be decomposed into the multiplication of the probabilities of each action decision di conditioned on that decision’s prior parse history d1,..., di−1. P(d1,..., dm) = ΠiP(di|d1,..., di−1) 3.2 Estimating Decision Probabilities with a Neural Network The parameters of the above probability model are the P(di|d1,..., di−1). There are an infinite number of these parameters, since the parse history d1,..., di−1 grows with the length of the sentence. In other work on history-based parsing, independence assumptions are applied so that only a finite amount of information from the parse history can be treated as relevant to each parameter, thereby reducing the number of parameters to a finite set which can be estimated directly. Instead, Henderson (2003) uses a neural network to induce a finite representation of this unbounded history, which we will denote h(d1,..., di−1). Neural network training tries to find such a history representation which preserves all the information about the history which is relevant to estimating the desired probability. P(di|d1,..., di−1) ≈P(di|h(d1,..., di−1)) Using a neural network architecture called Simple Synchrony Networks (SSNs), the history representation h(d1,..., di−1) is incrementally computed from features of the previous decision di−1 plus a finite set of previous history representations h(d1,..., dj), j < i −1. Each history representation is a finite vector of real numbers, called the network’s hidden layer. 
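The history-based decomposition at the start of this section is straightforward to state in code. The sketch below scores a complete derivation as a sum of per-decision log probabilities; the conditional model P(di | d1,..., di−1) is passed in as a function, standing in for the SSN-based estimator developed in the rest of this section. The toy uniform model is purely illustrative.

```python
import math

# Sketch of the history-based decomposition
#   P(d_1, ..., d_m) = prod_i P(d_i | d_1, ..., d_{i-1}),
# computed in log space. `cond_prob(decision, history)` stands in for the
# model's estimate of P(d_i | d_1, ..., d_{i-1}).

def derivation_log_prob(derivation, cond_prob):
    log_p = 0.0
    for i, decision in enumerate(derivation):
        history = derivation[:i]
        log_p += math.log(cond_prob(decision, history))
    return log_p


if __name__ == "__main__":
    # Toy conditional model: a uniform distribution over three parser actions.
    actions = ["new-constituent", "attach", "predict-word"]
    def uniform(decision, history):
        return 1.0 / len(actions)
    print(derivation_log_prob(["new-constituent", "predict-word", "attach"], uniform))
```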
As long as the history representation for position i −1 is always included in the inputs to the history representation for position i, any information about the entire sequence could be passed from history representation to history representation and be used to estimate the desired probability. However, learning is biased towards paying more attention to information which passes through fewer history representations. To exploit this learning bias, structural locality is used to determine which history representations are input to which others. First, each history representation is assigned to the constituent which is on the top of the parser’s stack when it is computed. Then earlier history representations whose constituents are structurally local to the current representation’s constituent are input to the computation of the correct representation. In this way, the number of representations which information needs to pass through in order to flow from history representation i to history representation j is determined by the structural distance between i’s constituent and j’s constituent, and not just the distance between i and j in the parse sequence. This provides the neural network with a linguistically appropriate inductive bias when it learns the history representations, as explained in more detail in (Henderson, 2003). Once it has computed h(d1,..., di−1), the SSN uses a normalized exponential to estimate a probability distribution over the set of possible next decisions di given the history: P(di|d1,..., di−1, θ) ≈ exp(<θdi,h(d1,...,di−1)>) P t∈N(di−1) exp(<θt,h(d1,...,di−1)>), where by θt we denote the set of output layer weights, corresponding to the parser action t, N(di−1) defines a set of possible next parser actions after the step di−1 and θ denotes the full set of model parameters. We trained SSN parsing models, using the on-line version of Backpropagation to perform the gradient descent with a maximum likelihood objective function. This learning simultaneously tries to optimize the parameters of the output computation and the parameters of the mappings h(d1,..., di−1). With multilayered networks such as SSNs, this training is not guaranteed to converge to a global optimum, but in practice a network whose criteria value is close to the optimum can be found. 4 Large-Margin Optimization Once we have defined a kernel over parse trees, general techniques for linear classifier optimization can be used to learn the given task. The most sophisticated of these techniques (such as Support Vector Machines) are unfortunately too computationally expensive to be used on large datasets like the Penn Treebank (Marcus et al., 1993). Instead we use a 184 method which has often been shown to be virtually as good, the Voted Perceptron (VP) (Freund and Schapire, 1998) algorithm. The VP algorithm was originally applied to parse reranking in (Collins and Duffy, 2002) with the Tree kernel. We modify the perceptron training algorithm to make it more suitable for parsing, where zero-one classification loss is not the evaluation measure usually employed. We also develop a variant of the kernel defined in section 2.3, which is more efficient when used with the VP algorithm. Given a list of candidate trees, we train the classifier to select the tree with largest constituent F1 score. The F1 score is a measure of the similarity between the tree in question and the gold standard parse, and is the standard way to evaluate the accuracy of a parser. 
We denote the k’th candidate tree for the j’th sentence xj by yj k. Without loss of generality, let us assume that yj 1 is the candidate tree with the largest F1 score. The Voted Perceptron algorithm is an ensemble method for combining the various intermediate models which are produced during training a perceptron. It demonstrates more stable generalization performance than the normal perceptron algorithm when the problem is not linearly separable (Freund and Schapire, 1998), as is usually the case. We modify the perceptron algorithm by introducing a new classification loss function. This modification enables us to treat differently the cases where the perceptron predicts a tree with an F1 score much smaller than that of the top candidate and the cases where the predicted and the top candidates have similar score values. The natural choice for the loss function would be ∆(yj k, yj 1) = F1(yj 1) −F1(yj k), where F1(yj k) denotes the F1 score value for the parse tree yj k. This approach is very similar to slack variable rescaling for Support Vector Machines proposed in (Tsochantaridis et al., 2004). The learning algorithm we employed is presented in figure 1. When applying kernels with a large training corpus, we face efficiency issues because of the large number of the neural network weights. Even though we use only the output layer weights, this vector grows with the size of the vocabulary, and thus can be large. The kernels presented in section 2 all lead to feature vectors without many zero values. This w = 0 for j = 1 .. n for k = 2 .. s if <w, φ(xj, yj k)> > <w, φ(xj, yj 1)> w = w + ∆(yj k, yj 1)(φ(xj, yj 1) −φ(xj, yj k)) Figure 1: The modified perceptron algorithm happens because we compute the derivative of the normalization factor used in the network’s estimation of P(di|d1,..., di−1). This normalization factor depends on the output layer weights corresponding to all the possible next decisions (see section 3.2). This makes an application of the VP algorithm infeasible in the case of a large vocabulary. We can address this problem by freezing the normalization factor when computing the feature vector. Note that we can rewrite the model logprobability of the tree as: log P(y|θ) = P i log ( exp(<θdi,h(d1,...,di−1)>) P t∈N(di−1) exp(<θt,h(d1,...,di−1)>)) = P i(<θdi, h(d1,..., di−1)>)− P i log P t∈N(di−1) exp(<θt, h(d1,..., di−1)>). We treat the parameters used to compute the first term as different from the parameters used to compute the second term, and we define our kernel only using the parameters in the first term. This means that the second term does not effect the derivatives in the formula for the feature vector φ(x, y). Thus the feature vector for the kernel will contain nonzero entries only in the components corresponding to the parser actions which are present in the candidate derivation for the sentence, and thus in the first vector component. We have applied this technique to the TOP reranking kernel, the result of which we will call the efficient TOP reranking kernel. 5 The Experimental Results We used the Penn Treebank WSJ corpus (Marcus et al., 1993) to perform empirical experiments on the proposed parsing models. In each case the input to the network is a sequence of tag-word pairs.2 We report results for two different vocabulary sizes, varying in the frequency with which tag-word pairs must 2We used a publicly available tagger (Ratnaparkhi, 1996) to provide the tags. 185 occur in the training set in order to be included explicitly in the vocabulary. 
A frequency threshold of 200 resulted in a vocabulary of 508 tag-word pairs (including tag-unknown word pairs) and a threshold of 20 resulted in 4215 tag-word pairs. We denote the probabilistic model trained with the vocabulary of 508 by the SSN-Freq≥200, the model trained with the vocabulary of 4215 by the SSN-Freq≥20. Testing the probabilistic parser requires using a beam search through the space of possible parses. We used a form of beam search which prunes the search after the prediction of each word. We set the width of this post-word beam to 40 for both testing of the probabilistic model and generating the candidate list for reranking. For training and testing of the kernel models, we provided a candidate list consisting of the top 20 parses found by the generative probabilistic model. When using the Fisher kernel, we added the log-probability of the tree given by the probabilistic model as the feature. This was not necessary for the TOP kernels because they already contain a feature corresponding to the probability estimated by the probabilistic model (see section 2.3). We trained the VP model with all three kernels using the 508 word vocabulary (Fisher-Freq≥200, TOP-Freq≥200, TOP-Eff-Freq≥200) but only the efficient TOP reranking kernel model was trained with the vocabulary of 4215 words (TOP-Eff-Freq≥20). The non-sparsity of the feature vectors for other kernels led to the excessive memory requirements and larger testing time. In each case, the VP model was run for only one epoch. We would expect some improvement if running it for more epochs, as has been empirically demonstrated in other domains (Freund and Schapire, 1998). To avoid repeated testing on the standard testing set, we first compare the different models with their performance on the validation set. Note that the validation set wasn’t used during learning of the kernel models or for adjustment of any parameters. Standard measures of accuracy are shown in table 1.3 Both the Fisher kernel and the TOP kernels show better accuracy than the baseline probabilistic 3All our results are computed with the evalb program following the standard criteria in (Collins, 1999), and using the standard training (sections 2–22, 39,832 sentences, 910,196 words), validation (section 24, 1346 sentence, 31507 words), and testing (section 23, 2416 sentences, 54268 words) sets (Collins, 1999). LR LP Fβ=1 SSN-Freq≥200 87.2 88.5 87.8 Fisher-Freq≥200 87.2 88.8 87.9 TOP-Freq≥200 87.3 88.9 88.1 TOP-Eff-Freq≥200 87.3 88.9 88.1 SSN-Freq≥20 88.1 89.2 88.6 TOP-Eff-Freq≥20 88.2 89.7 88.9 Table 1: Percentage labeled constituent recall (LR), precision (LP), and a combination of both (Fβ=1) on validation set sentences of length at most 100. model, but only the improvement of the TOP kernels is statistically significant.4 For the TOP kernel, the improvement over baseline is about the same with both vocabulary sizes. Also note that the performance of the efficient TOP reranking kernel is the same as that of the original TOP reranking kernel, for the smaller vocabulary. For comparison to previous results, table 2 lists the results on the testing set for our best model (TOP-Efficient-Freq≥20) and several other statistical parsers (Collins, 1999; Collins and Duffy, 2002; Collins and Roark, 2004; Henderson, 2003; Charniak, 2000; Collins, 2000; Shen and Joshi, 2004; Shen et al., 2003; Henderson, 2004; Bod, 2003). 
First note that the parser based on the TOP efficient kernel has better accuracy than (Henderson, 2003), which used the same parsing method as our baseline model, although the trained network parameters were not the same. When compared to other kernel methods, our approach performs better than those based on the Tree kernel (Collins and Duffy, 2002; Collins and Roark, 2004), and is only 0.2% worse than the best results achieved by a kernel method for parsing (Shen et al., 2003; Shen and Joshi, 2004). 6 Related Work The first application of kernel methods to parsing was proposed by Collins and Duffy (2002). They used the Tree kernel, where the features of a tree are all its connected tree fragments. The VP algorithm was applied to rerank the output of a probabilistic model and demonstrated an improvement over the baseline. 4We measured significance with the randomized significance test of (Yeh, 2000). 186 LR LP Fβ=1∗ Collins99 88.1 88.3 88.2 Collins&Duffy02 88.6 88.9 88.7 Collins&Roark04 88.4 89.1 88.8 Henderson03 88.8 89.5 89.1 Charniak00 89.6 89.5 89.5 TOP-Eff-Freq≥20 89.1 90.1 89.6 Collins00 89.6 89.9 89.7 Shen&Joshi04 89.5 90.0 89.8 Shen et al.03 89.7 90.0 89.8 Henderson04 89.8 90.4 90.1 Bod03 90.7 90.8 90.7 * Fβ=1 for previous models may have rounding errors. Table 2: Percentage labeled constituent recall (LR), precision (LP), and a combination of both (Fβ=1) on the entire testing set. Shen and Joshi (2003) applied an SVM based voting algorithm with the Preference kernel defined over pairs for reranking. To define the Preference kernel they used the Tree kernel and the Linear kernel as its underlying kernels and achieved state-ofthe-art results with the Linear kernel. In (Shen et al., 2003) it was pointed out that most of the arbitrary tree fragments allowed by the Tree kernel are linguistically meaningless. The authors suggested the use of Lexical Tree Adjoining Grammar (LTAG) based features as a more linguistically appropriate set of features. They empirically demonstrated that incorporation of these features helps to improve reranking performance. Shen and Joshi (2004) proposed to improve margin based methods for reranking by defining the margin not only between the top tree and all the other trees in the candidate list but between all the pairs of parses in the ordered candidate list for the given sentence. They achieved the best results when training with an uneven margin scaled by the heuristic function of the candidates positions in the list. One potential drawback of this method is that it doesn’t take into account the actual F1 score of the candidate and considers only the position in the list ordered by the F1 score. We expect that an improvement could be achieved by combining our approach of scaling updates by the F1 loss with the all pairs approach of (Shen and Joshi, 2004). Use of the F1 loss function during training demonstrated better performance comparing to the 0-1 loss function when applied to a structured classification task (Tsochantaridis et al., 2004). All the described kernel methods are limited to the reranking of candidates from an existing parser due to the complexity of finding the best parse given a kernel (i.e. the decoding problem). (Taskar et al., 2004) suggested a method for maximal margin parsing which employs the dynamic programming approach to decoding and parameter estimation problems. The efficiency of dynamic programming means that the entire space of parses can be considered, not just a candidate list. 
However, not all kernels are suitable for this method. The dynamic programming approach requires the feature vector of a tree to be decomposable into a sum over parts of the tree. In particular, this is impossible with the TOP and Fisher kernels derived from the SSN model. Also, it isn’t clear whether the algorithm remains tractable for a large training set with long sentences, since the authors only present results for sentences of length less than or equal to 15. 7 Conclusions This paper proposes a method for deriving a kernel for reranking from a probabilistic model, and demonstrates state-of-the-art accuracy when this method is applied to parse reranking. Contrary to most of the previous research on kernel methods in parsing, linguistic knowledge does not have to be expressed through a list of features, but instead can be expressed through the design of a probability model. The parameters of this probability model are then trained, so that they reflect what features of trees are relevant to parsing. The kernel is then derived from this trained model in such a way as to maximize its usefulness for reranking. We performed experiments on parse reranking using a neural network based statistical parser as both the probabilistic model and the source of the list of candidate parses. We used a modification of the Voted Perceptron algorithm to perform reranking with the kernel. The results were amongst the best current statistical parsers, and only 0.2% worse than the best current parsing methods which use kernels. We would expect further improvement if we used different models to derive the kernel and to gener187 ate the candidates, thereby exploiting the advantages of combining multiple models, as do the better performing methods using kernels. In recent years, probabilistic models have become commonplace in natural language processing. We believe that this approach to defining kernels would simplify the problem of defining kernels for these tasks, and could be very useful for many of them. In particular, maximum entropy models also use a normalized exponential function to estimate probabilities, so all the methods discussed in this paper would be applicable to maximum entropy models. This approach would be particularly useful for tasks where there is less data available than in parsing, for which large-margin methods work particularly well. References Rens Bod. 2003. An efficient implementation of a new DOP model. In Proc. 10th Conf. of European Chapter of the Association for Computational Linguistics, Budapest, Hungary. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proc. 1st Meeting of North American Chapter of Association for Computational Linguistics, pages 132–139, Seattle, Washington. Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures and the voted perceptron. In Proc. 40th Meeting of Association for Computational Linguistics, pages 263–270. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proc. 42th Meeting of Association for Computational Linguistics, Barcelona, Spain. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proc. 17th Int. Conf. on Machine Learning, pages 175–182, Stanford, CA. Yoav Freund and Robert E. Schapire. 1998. 
Large margin classification using the perceptron algorithm. In Proc. of the 11th Annual Conf. on Computational Learning Theory, pages 209–217, Madisson WI. James Henderson. 2003. Inducing history representations for broad coverage statistical parsing. In Proc. joint meeting of North American Chapter of the Association for Computational Linguistics and the Human Language Technology Conf., pages 103–110, Edmonton, Canada. James Henderson. 2004. Discriminative training of a neural network statistical parser. In Proc. 42nd Meeting of Association for Computational Linguistics, Barcelona, Spain. Tommi S. Jaakkola and David Haussler. 1998. Exploiting generative models in discriminative classifiers. Advances in Neural Information Processes Systems 11. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proc. Conf. on Empirical Methods in Natural Language Processing, pages 133–142, Univ. of Pennsylvania, PA. Adwait Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. Machine Learning, 34:151–175. Libin Shen and Aravind K. Joshi. 2003. An SVM based voting algorithm with application to parse reranking. In Proc. of the 7th Conf. on Computational Natural Language Learning, pages 9–16, Edmonton, Canada. Libin Shen and Aravind K. Joshi. 2004. Flexible margin selection for reranking with full pairwise samples. In Proc. of the 1st Int. Joint Conf. on Natural Language Processing, Hainan Island, China. Libin Shen, Anoop Sarkar, and Aravind K. Joshi. 2003. Using LTAG based features in parse reranking. In Proc. of Conf. on Empirical Methods in Natural Language Processing, Sapporo, Japan. Ben Taskar, Dan Klein, Michael Collins, Daphne Koller, and Christopher Manning. 2004. Max-margin parsing. In Proc. Conf. on Empirical Methods in Natural Language Processing, Barcelona, Spain. Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In Proc. 21st Int. Conf. on Machine Learning, pages 823–830, Banff, Alberta, Canada. K. Tsuda, M. Kawanabe, G. Ratsch, S. Sonnenburg, and K. Muller. 2002. A new discriminative kernel from probabilistic models. Neural Computation, 14(10):2397–2414. Alexander Yeh. 2000. More accurate tests for the statistical significance of the result differences. In Proc. 17th International Conf. on Computational Linguistics, pages 947–953, Saarbruken, Germany. 188
Proceedings of the 43rd Annual Meeting of the ACL, pages 189–196, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Boosting-based parse reranking with subtree features Taku Kudo ∗ Jun Suzuki Hideki Isozaki NTT Communication Science Laboratories. 2-4 Hikaridai, Seika-cho, Soraku, Kyoto, Japan {taku,jun,isozaki}@cslab.kecl.ntt.co.jp Abstract This paper introduces a new application of boosting for parse reranking. Several parsers have been proposed that utilize the all-subtrees representation (e.g., tree kernel and data oriented parsing). This paper argues that such an all-subtrees representation is extremely redundant and a comparable accuracy can be achieved using just a small set of subtrees. We show how the boosting algorithm can be applied to the all-subtrees representation and how it selects a small and relevant feature set efficiently. Two experiments on parse reranking show that our method achieves comparable or even better performance than kernel methods and also improves the testing efficiency. 1 Introduction Recent work on statistical natural language parsing and tagging has explored discriminative techniques. One of the novel discriminative approaches is reranking, where discriminative machine learning algorithms are used to rerank the n-best outputs of generative or conditional parsers. The discriminative reranking methods allow us to incorporate various kinds of features to distinguish the correct parse tree from all other candidates. With such feature design flexibility, it is nontrivial to employ an appropriate feature set that has a good discriminative ability for parse reranking. In early studies, feature sets were given heuristically by simply preparing task-dependent feature templates (Collins, 2000; Collins, 2002). These ad-hoc solutions might provide us with reasonable levels of per∗Currently, Google Japan Inc., [email protected] formance. However, they are highly task dependent and require careful design to create the optimal feature set for each task. Kernel methods offer an elegant solution to these problems. They can work on a potentially huge or even infinite number of features without a loss of generalization. The best known kernel for modeling a tree is the tree kernel (Collins and Duffy, 2002), which argues that a feature vector is implicitly composed of the counts of subtrees. Although kernel methods are general and can cover almost all useful features, the set of subtrees that is used is extremely redundant. The main question addressed in this paper concerns whether it is possible to achieve a comparable or even better accuracy using just a small and non-redundant set of subtrees. In this paper, we present a new application of boosting for parse reranking. While tree kernel implicitly uses the all-subtrees representation, our boosting algorithm uses it explicitly. Although this set-up makes the feature space large, the l1-norm regularization achived by boosting automatically selects a small and relevant feature set. Such a small feature set is useful in practice, as it is interpretable and makes the parsing (reranking) time faster. We also incorporate a variant of the branch-and-bound technique to achieve efficient feature selection in each boosting iteration. 2 General setting of parse reranking We describe the general setting of parse reranking. • Training data T is a set of input/output pairs, e.g., T = {⟨x1, y1⟩, . . . , ⟨xL, yL⟩}, where xi is an input sentence, and yi is a correct parse associated with the sentence xi. 
• Let Y(x) be a function that returns a set of candi189 date parse trees for a particular sentence x. • We assume that Y(xi) contains the correct parse tree yi, i.e., yi ∈Y(xi) ∗ • Let Φ(y) ∈Rd be a feature function that maps the given parse tree y into Rd space. w ∈Rd is a parameter vector of the model. The output parse ˆy of this model on input sentence x is given as: ˆy = argmaxy∈Y(x) w · Φ(y). There are two questions as regards this formulation. One is how to set the parameters w, and the other is how to design the feature function Φ(y). We briefly describe the well-known solutions to these two problems in the next subsections. 2.1 Parameter estimation We usually adopt a general loss function Loss(w), and set the parameters w that minimize the loss, i.e., ˆw = argminw∈Rd Loss(w). Generally, the loss function has the following form: Loss(w) = L X i=1 L(w, Φ(yi), xi), where L(w, Φ(yi), xi) is an arbitrary loss function. We can design a variety of parameter estimation methods by changing the loss function. The following three loss functions, LogLoss, HingeLoss, and BoostLoss, have been widely used in parse reranking tasks. LogLoss = −log ţ X y∈Y(xi) exp ş w · [Φ(yi) −Φ(y)] ť ű HingeLoss = X y∈Y(xi) max(0, 1 −w · [Φ(yi) −Φ(y)]) BoostLos = X y∈Y(xi) exp ş −w · [Φ(yi) −Φ(y)] ť LogLoss is based on the standard maximum likelihood optimization, and is used with maximum entropy models. HingeLoss captures the errors only when w · [Φ(yi) −Φ(y)]) < 1. This loss is closely related to the maximum margin strategy in SVMs (Vapnik, 1998). BoostLoss is analogous to the boosting algorithm and is used in (Collins, 2000; Collins, 2002). ∗In the real setting, we cannot assume this condition. In this case, we select the parse tree ˆy that is the most similar to yi and take ˆy as the correct parse tree yi. 2.2 Definition of feature function It is non-trivial to define an appropriate feature function Φ(y) that has a good ability to distinguish the correct parse yi from all other candidates In early studies, the feature functions were given heuristically by simply preparing feature templates (Collins, 2000; Collins, 2002). However, such heuristic selections are task dependent and would not cover all useful features that contribute to overall accuracy. When we select the special family of loss functions, the problem can be reduced to a dual form that depends only on the inner products of two instances Φ(y1) · Φ(y2). This property is important as we can use a kernel trick and we do not need to provide an explicit feature function. For example, tree kernel (Collins and Duffy, 2002), one of the convolution kernels, implicitly maps the instance represented in a tree into all-subtrees space. Even though the feature space is large, inner products under this feature space can be calculated efficiently using dynamic programming. Tree kernel is more general than feature templates since it can use the all-subtrees representation without loss of efficiency. 3 RankBoost with subtree features A simple question related to kernel-based parse reranking asks whether all subtrees are really needed to construct the final parameters w. Suppose we have two large trees t and t′, where t′ is simply generated by attaching a single node to t. In most cases, these two trees yield an almost equivalent discriminative ability, since they are very similar and highly correlated with each other. Even when we exploit all subtrees, most of them are extremely redundant. The motivation of this paper is based on the above observation. 
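To make the three loss functions of Section 2.1 concrete, the sketch below computes each of them for one training example from the margins w · [Φ(yi) − Φ(y)] over the candidate set Y(xi). The sparse-dictionary feature encoding is our assumption, and LogLoss is written in its equivalent margin form (the negative conditional log-likelihood).

```python
import math

# Sketch of the three per-example loss functions, all expressed through the
# margins m_y = w . [Phi(y_i) - Phi(y)] over the candidate set Y(x_i).
# Feature vectors are sparse dicts; this encoding is illustrative only.

def dot(w, phi):
    return sum(w.get(k, 0.0) * v for k, v in phi.items())

def margins(w, phi_correct, phi_candidates):
    score_correct = dot(w, phi_correct)
    return [score_correct - dot(w, phi) for phi in phi_candidates]

def log_loss(w, phi_correct, phi_candidates):
    # negative conditional log-likelihood, -log P(y_i | x_i), in margin form
    return math.log(sum(math.exp(-m) for m in margins(w, phi_correct, phi_candidates)))

def hinge_loss(w, phi_correct, phi_candidates):
    return sum(max(0.0, 1.0 - m) for m in margins(w, phi_correct, phi_candidates))

def boost_loss(w, phi_correct, phi_candidates):
    return sum(math.exp(-m) for m in margins(w, phi_correct, phi_candidates))


if __name__ == "__main__":
    w = {"t1": 0.4}
    correct = {"t1": 1.0, "t2": 1.0}
    cands = [correct, {"t1": 1.0}, {"t2": 1.0}]
    print(log_loss(w, correct, cands),
          hinge_loss(w, correct, cands),
          boost_loss(w, correct, cands))
```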
We think that only a small set of subtrees is needed to express the final parameters. A compact, non-redundant, and highly relevant feature set is useful in practice, as it is interpretable and increases the parsing (reranking) speed. To realize this goal, we propose a new boostingbased reranking algorithm based on the all-subtrees representation. First, we describe the architecture of our reranking method. Second, we show a connection between boosting and SVMs, and describe how the algorithm realizes the sparse feature representa190               Figure 1: Labeled ordered tree and subtree relation tion described above. 3.1 Preliminaries Let us introduce a labeled ordered tree (or simply ’tree’), its definition and notations, first. Definition 1 Labeled ordered tree (Tree) A labeled ordered tree is a tree where each node is associated with a label and is ordered among its siblings, that is, there is a first child, second child, third child, etc. Definition 2 Subtree Let t and u be labeled ordered trees. We say that t matches u, or t is a subtree of u (t ⊆u), if there is a one-to-one function ψ from nodes in t to u, satisfying the conditions: (1) ψ preserves the parent-daughter relation, (2) ψ preserves the sibling relation, (3) ψ preserves the labels. We denote the number of nodes in t as |t|. Figure 1 shows an example of a labeled ordered tree and its subtree and non-subtree. 3.2 Feature space given by subtrees We first assume that a parse tree y is represented in a labeled ordered tree. Note that the outputs of partof-speech tagging, shallow parsing, and dependency analysis can be modeled as labeled ordered trees. The feature set F consists of all subtrees seen in the training data, i.e., F = ∪i,y∈Y(xi){t | t ⊆y}. The feature mapping Φ(y) is then given by letting the existence of a tree t be a single dimension, i.e., Φ(y) = {I(t1 ⊆y), . . . , I(tm ⊆y)} ∈{0, 1}m, where I(·) is the indicator function, m = |F|, and {t1, . . . , tm} ∈F. The feature space is essentially the same as that of tree kernel † †Strictly speaking, tree kernel uses the cardinality of each subtree 3.3 RankBoost algorithm The parameter estimation method we adopt is a variant of the RankBoost algorithm introduced in (Freund et al., 2003). Collins et al. used RankBoost to parse reranking tasks (Collins, 2000; Collins, 2002). The algorithm proceeds for K iterations and tries to minimize the BoostLoss for given training data‡. At each iteration, a single feature (hypothesis) is chosen, and its weight is updated. Suppose we have current parameters: w = {w1, w2, . . . , wm} ∈Rm. New parameters w∗⟨k,δ⟩∈Rm are then given by selecting a single feature k and updating the weight through an increment δ: w∗ ⟨k,δ⟩= {w1, w2, . . . , wk + δ, . . . , wm}. After the update, the new loss is given: Loss(w∗ ⟨k,δ⟩) = X i, y∈Y(xi) exp ş −w∗ ⟨k,δ⟩· [Φ(yi) −Φ(y)] ť . (1) The RankBoost algorithm iteratively selects the optimal pair ⟨ˆk, ˆδ⟩that minimizes the loss, i.e., ⟨ˆk, ˆδ⟩= argmin ⟨k,δ⟩ Loss(w∗ ⟨k,δ⟩). By setting the differential of (1) at 0, the following optimal solutions are obtained: ˆk = argmax k=1,...,m ŕŕŕŕ q W + k − q W − k ŕŕŕŕ , and δ = 1 2 log W + ˆk W − ˆk , (2) where W b k = P i,y∈Y(xi) D(yi, y) · I[I(tk ⊆yi) − I(tk ⊆y) = b], b ∈{+1, −1}, and D(yi, y) = exp ( −w · [Φ(yi) −Φ(y)]). Following (Freund et al., 2003; Collins, 2000), we introduce smoothing to prevent the case when either W + k or W − k is 0 §: δ = 1 2 log W + ˆk + ϵZ W − ˆk + ϵZ , where Z = X i,y∈Y(xi) D(yi, y) and ϵ ∈R+. 
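The sketch below spells out one boosting iteration over an explicit (small) feature set: it computes W+_k, W−_k, and Z from the current example weights D(yi, y), picks the feature with the largest gain, and applies the smoothed update of Equation (2). Enumerating features explicitly like this is only feasible for a toy feature set; the point of the paper is precisely that the real search over subtrees is done with the branch-and-bound mining described in Section 4. Data structures are our own assumptions.

```python
import math

# Sketch of a single RankBoost iteration with an explicit feature set.
# Each training pair (y_i, y) is represented by its weight D(y_i, y) and by
# the value b_k = I(t_k in y_i) - I(t_k in y) in {-1, +1} for every feature
# t_k on which the pair differs (b_k = 0 entries are simply omitted).

def rankboost_step(pairs, features, eps=0.001):
    # pairs: list of (D, {feature: b}) with b in {-1, +1}
    Z = sum(D for D, _ in pairs)
    W_pos = {k: 0.0 for k in features}
    W_neg = {k: 0.0 for k in features}
    for D, bvals in pairs:
        for k, b in bvals.items():
            if b == +1:
                W_pos[k] += D
            elif b == -1:
                W_neg[k] += D

    def gain(k):
        return abs(math.sqrt(W_pos[k]) - math.sqrt(W_neg[k]))

    k_hat = max(features, key=gain)
    delta = 0.5 * math.log((W_pos[k_hat] + eps * Z) / (W_neg[k_hat] + eps * Z))
    return k_hat, delta


if __name__ == "__main__":
    pairs = [(1.0, {"t1": +1, "t2": -1}),
             (0.5, {"t1": +1}),
             (2.0, {"t2": +1})]
    print(rankboost_step(pairs, ["t1", "t2"]))
```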
The function Y(x) is usually performed by a probabilistic history-based parser, which can output not only a parse tree but the log probability of the ‡In our experiments, optimal settings for K were selected by using development data. §For simplicity, we fix ϵ at 0.001 in all our experiments. 191 tree. We incorporate the log probability into the reranking by using it as a feature: Φ(y) = {L(y), I(t1 ⊆y), . . . , I(tm ⊆y)}, and w = {w0, w1, w2, . . . , wm}, where L(y) is the log probability of a tree y under the base parser and w0 is the parameter of L(y). Note that the update algorithm (2) does not allow us to calculate the parameter w0, since (2) is restricted to binary features. To prevent this problem, we use the approximation technique introduced in (Freund et al., 2003). 3.4 Sparse feature representation Recent studies (Schapire et al., 1997; R¨atsch, 2001) have shown that both boosting and SVMs (Vapnik, 1998) work according to similar strategies: constructing optimal parameters w that maximize the smallest margin between positive and negative examples. The critical difference is the definition of margin or the way they regularize the vector w. (R¨atsch, 2001) shows that the iterative feature selection performed in boosting asymptotically realizes an l1-norm ||w||1 regularization. In contrast, it is well known that SVMs are reformulated as an l2norm ||w||2 regularized algorithm. The relationship between two regularizations has been studied in the machine learning community. (Perkins et al., 2003) reported that l1-norm should be chosen for a problem where most given features are irrelevant. On the other hand, l2-norm should be chosen when most given features are relevant. An advantage of the l1-norm regularizer is that it often leads to sparse solutions where most wk are exactly 0. The features assigned zero weight are thought to be irrelevant features as regards classifications. The l1-norm regularization is useful for our setting, since most features (subtrees) are redundant and irrelevant, and these redundant features are automatically eliminated. 4 Efficient Computation In each boosting iteration, we have to solve the following optimization problem: ˆk = argmax k=1,...,m gain(tk), where gain(tk) = ¯¯¯ q W + k − q W − k ¯¯¯. It is non-trivial to find the optimal tree tˆk that maximizes gain(tk), since the number of subtrees is exponential to its size. In fact, the problem is known to be NP-hard (Yang, 2004). However, in real applications, the problem is manageable, since the maximum number of subtrees is usually bounded by a constant. To solve the problem efficiently, we now adopt a variant of the branch-and-bound algorithm, similar to that described in (Kudo and Matsumoto, 2004) 4.1 Efficient Enumeration of Trees Abe and Zaki independently proposed an efficient method, rightmost-extension, for enumerating all subtrees from a given tree (Abe et al., 2002; Zaki, 2002). First, the algorithm starts with a set of trees consisting of single nodes, and then expands a given tree of size (n−1) by attaching a new node to it to obtain trees of size n. However, it would be inefficient to expand nodes at arbitrary positions of the tree, as duplicated enumeration is inevitable. The algorithm, rightmost extension, avoids such duplicated enumerations by restricting the position of attachment. Here we give the definition of rightmost extension to describe this restriction in detail. Definition 3 Rightmost Extension (Abe et al., 2002; Zaki, 2002) Let t and t′ be labeled ordered trees. 
We say t′ is a rightmost extension of t, if and only if t and t′ satisfy the following three conditions: (1) t′ is created by adding a single node to t, (i.e., t ⊂t′ and |t| + 1 = |t′|). (2) A node is added to a node existing on the unique path from the root to the rightmost leaf (rightmostpath) in t. (3) A node is added as the rightmost sibling. Consider Figure 2, which illustrates example tree t with labels drawn from the set L = {a, b, c}. For the sake of convenience, each node in this figure has its original number (depth-first enumeration). The rightmost-path of the tree t is (a(c(b))), and it occurs at positions 1, 4 and 6 respectively. The set of rightmost extended trees is then enumerated by simply adding a single node to a node on the rightmost path. Since there are three nodes on the rightmost path and the size of the label set is 3 (= |L|), a to192 b a c 1 2 4 a b 5 6 c 3 b a c 1 2 4 a b 5 6 c 3 b a c 1 2 4 a b 5 6 c 3 b a c 1 2 4 a b 5 6 c 3 rightmost- path t rightmost extension 7 7 7 t’ } , , { c b a L = } , , { c b a } , , { c b a } , , { c b a Figure 2: Rightmost extension tal of 9 trees are enumerated from the original tree t. By repeating the rightmost-extension process recursively, we can create a search space in which all trees drawn from the set L are enumerated. 4.2 Pruning Rightmost extension defines a canonical search space in which we can enumerate all subtrees from a given set of trees. Here we consider an upper bound of the gain that allows subspace pruning in this canonical search space. The following observation provides a convenient way of computing an upper bound of the gain(tk) for any super-tree tk′ of tk. Observation 1 Upper bound of the gain(tk) For any tk′ ⊇tk, the gain of tk′ is bounded by µ(tk): gain(tk′) = ŕŕŕŕ q W + k′ − q W − k′ ŕŕŕŕ ≤ max( q W + k′, q W − k′) ≤ max( q W + k , q W − k ) = µ(tk), since tk′ ⊇tk ⇒W b k′ ≤W b k, b ∈{+1, −1}. We can efficiently prune the search space spanned by the rightmost extension using the upper bound of gain µ(t). During the traverse of the subtree lattice built by the recursive process of rightmost extension, we always maintain the temporally suboptimal gain τ of all the previously calculated gains. If µ(t) < τ, the gain of any super-tree t′ ⊇t is no greater than τ, and therefore we can safely prune the search space spanned from the subtree t. In contrast, if µ(t) ≥τ, we cannot prune this space, since there might be a super-tree t′ ⊇t such that gain(t′) ≥τ. 4.3 Ad-hoc techniques In real applications, we also employ the following practical methods to reduce the training costs. • Size constraint Larger trees are usually less effective to discrimination. Thus, we give a size threshold s, and use subtrees whose size is no greater than s. This constraint is easily realized by controlling the rightmost extension according to the size of the trees. • Frequency constraint The frequency-based cut-off has been widely used in feature selections. We employ a frequency threshold f, and use subtrees seen on at least one parse for at least f different sentences. Note that a similar branch-and-bound technique can also be applied to the cut-off. When we find that the frequency of a tree t is no greater than f, we can safely prune the space spanned from t as the frequencies of any super-trees t′ ⊇t are also no greater than f. 
• Pseudo iterations After several 5- or 10-iterations of boosting, we alternately perform 100- or 300 pseudo iterations, in which the optimal feature (subtree) is selected from the cache that maintains the features explored in the previous iterations. The idea is based on our observation that a feature in the cache tends to be reused as the number of boosting iterations increases. Pseudo iterations converge very fast, and help the branch-and-bound algorithm find new features that are not in the cache. 5 Experiments 5.1 Parsing Wall Street Journal Text In our experiments, we used the same data set that used in (Collins, 2000). Sections 2-21 of the Penn Treebank were used as training data, and section 23 was used as test data. The training data contains about 40,000 sentences, each of which has an average of 27 distinct parses. Of the 40,000 training sentences, the first 36,000 sentences were used to perform the RankBoost algorithm. The remaining 4,000 sentences were used as development data. Model2 of (Collins, 1999) was used to parse both the training and test data. To capture the lexical information of the parse trees, we did not use a standard CFG tree but a lexicalized-CFG tree where each non-terminal node has an extra lexical node labeled with the head word of the constituent. Figure 3 shows an example of the lexicalized-CFG tree used in our experiments. The 193 TOP S (saw) NP (I) PRP I VP (saw) VBD saw NP (girl) DT a NN girl Figure 3: Lexicalized CFG tree for WSJ parsing head word, e.g., (saw), is put as a leftmost constituent size parameter s and frequency parameter f were experimentally set at 6 and 10, respectively. As the data set is very large, it is difficult to employ the experiments with more unrestricted parameters. Table 1 lists results on test data for the Model2 of (Collins, 1999), for several previous studies, and for our best model. We achieve recall and precision of 89.3/%89.6% and 89.9%/90.1% for sentences with ≤100 words and ≤40 words, respectively. The method shows a 1.2% absolute improvement in average precision and recall (from 88.2% to 89.4% for sentences ≤100 words), a 10.1% relative reduction in error. (Collins, 2000) achieved 89.6%/89.9% recall and precision for the same datasets (sentences ≤100 words) using boosting and manually constructed features. (Charniak, 2000) extends PCFG and achieves similar performance to (Collins, 2000). The tree kernel method of (Collins and Duffy, 2002) uses the all-subtrees representation and achieves 88.6%/88.9% recall and precision, which are slightly worse than the results obtained with our model. (Bod, 2001) also uses the all-subtrees representation with a very different parameter estimation method, and realizes 90.06%/90.08% recall and precision for sentences of ≤40 words. 5.2 Shallow Parsing We used the same data set as the CoNLL 2000 shared task (Tjong Kim Sang and Buchholz, 2000). Sections 15-18 of the Penn Treebank were used as training data, and section 20 was used as test data. As a baseline model, we used a shallow parser based on Conditional Random Fields (CRFs), very similar to that described in (Sha and Pereira, 2003). CRFs have shown remarkable results in a number of tagging and chunking tasks in NLP. 
5.2 Shallow Parsing
We used the same data set as the CoNLL 2000 shared task (Tjong Kim Sang and Buchholz, 2000). Sections 15-18 of the Penn Treebank were used as training data, and section 20 was used as test data. As a baseline model, we used a shallow parser based on Conditional Random Fields (CRFs), very similar to that described in (Sha and Pereira, 2003). CRFs have shown remarkable results in a number of tagging and chunking tasks in NLP. n-best outputs were obtained by a combination of forward Viterbi search and backward A* search. Note that this search algorithm yields optimal n-best results in terms of the CRFs score. Each sentence has at most 20 distinct parses. The log probability from the CRFs shallow parser was incorporated into the reranking. Following (Collins, 2000), the training set was split into 5 portions, and the CRFs shallow parser was trained on 4/5 of the data, then used to decode the remaining 1/5. The outputs of the base parser, which consist of base phrases, were converted into right-branching trees by assuming that two adjacent base phrases are in a parent-child relationship. Figure 4 shows an example of the tree used for the shallow parsing task. We also put two virtual nodes, left/right boundaries, to capture local transitions. The size parameter s and the frequency parameter f were experimentally set at 6 and 5, respectively.

[Figure 4: Tree representation for shallow parsing (here for the sentence "I saw a girl"), represented as a right-branching tree with two virtual nodes (L/R boundaries) and an EOS node.]

Table 2 lists results on the test data for the baseline CRFs parser, for several previous studies, and for our best model. Our model achieves a 94.12 F-measure, and outperforms the baseline CRFs parser and the SVMs parser (Kudo and Matsumoto, 2001). (Zhang et al., 2002) reported a higher F-measure with a generalized winnow using additional linguistic features. The accuracy of our model is very similar to that of (Zhang et al., 2002) without using such additional features. Table 3 shows the results for our best model per chunk type.

MODEL                                           Fβ=1
CRFs (baseline)                                 93.76
SVMs-voting (Kudo and Matsumoto, 2001)          93.91
RW + linguistic features (Zhang et al., 2002)   94.17
Boosting (our model)                            94.12

Table 2: Results of shallow parsing. Fβ=1 is the harmonic mean of precision and recall.

6 Discussion
6.1 Interpretability and Efficiency
The numbers of active (non-zero) features selected by boosting are around 8,000 and 3,000 in the WSJ parsing and shallow parsing tasks, respectively. Although almost all the subtrees are used as feature candidates, boosting selects a small and highly relevant subset of features. If we were to explicitly enumerate the subtrees used by the tree kernel, the number of active features might amount to millions or more. Note that the accuracies obtained under such sparse feature spaces are still comparable to those obtained with the tree kernel. This result supports our first intuition that we do not always need all the subtrees to construct the parameters. The sparse feature representations are useful in practice as they allow us to analyze what kinds of features are relevant.
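To see why such a sparse model is cheap to apply and easy to inspect, consider the following sketch of the reranking step (an illustrative reconstruction, not the authors' code): the score of a candidate parse is the sum of the weights of the selected subtree features it contains, optionally combined with the log probability of the base parser, and the highest-scoring candidate is returned. `contains_subtree` is an assumed helper that tests whether a candidate tree includes a given subtree.

def rerank(candidates, active_features, contains_subtree, base_logprobs=None):
    """Pick the best candidate parse under a sparse subtree-feature model.

    candidates       : candidate trees produced by the base parser (n-best list)
    active_features  : list of (subtree, weight) pairs selected by boosting
    contains_subtree : assumed helper, contains_subtree(tree, subtree) -> bool
    base_logprobs    : optional list of base-parser log probabilities
    """
    best_tree, best_score = None, float("-inf")
    for i, tree in enumerate(candidates):
        score = base_logprobs[i] if base_logprobs is not None else 0.0
        for subtree, weight in active_features:
            if contains_subtree(tree, subtree):
                score += weight          # absent features contribute nothing
        if score > best_score:
            best_tree, best_score = tree, score
    return best_tree, best_score

Because only the few thousand selected subtrees are ever consulted, no kernel evaluations against stored training examples are needed at test time, which is consistent with the reranking speeds reported below.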
Table 4 shows examples of active features along with their weights wk. In the shallow parsing task, subordinate phrases (SBAR) are difficult to analyze without seeing long-distance dependencies. Subordinate phrases usually precede a sentence (NP and VP). However, Markov-based shallow parsers, such as MEMMs or CRFs, cannot capture such long dependencies. Our model automatically selects useful subtrees to obtain an improvement on subordinate phrases. It is interesting that the tree (SBAR(IN(for))(NP(VP(TO)))) has a large positive weight, while the tree (SBAR((IN(for))(NP(O)))) has a negative weight. The improvement on subordinate phrases is considerable: we achieve a 19% relative error reduction for subordinate phrases (from 87.68 to 90.02 in F-measure).

            Precision   Recall     Fβ=1
ADJP        80.35%      73.41%     76.72
ADVP        83.88%      82.33%     83.10
CONJP       42.86%      66.67%     52.17
INTJ        50.00%      50.00%     50.00
LST          0.00%       0.00%      0.00
NP          94.45%      94.36%     94.41
PP          97.24%      98.07%     97.65
PRT         76.92%      75.47%     76.19
SBAR        90.70%      89.35%     90.02
VP          93.95%      94.72%     94.33
Overall     94.11%      94.13%     94.12

Table 3: Results of shallow parsing per chunk type.

WSJ parsing
    w         active trees that contain the word "in"
    0.3864    (VP(NP(NNS(plants)))(PP(in)))
    0.3326    (VP(VP(PP)(PP(in)))(VP))
    0.2196    (NP(VP(VP(PP)(PP(in)))))
    0.1748    (S(NP(NNP))(PP(in)(NP)))
    ...       ...
   -1.1217    (PP(in)(NP(NP(effect))))
   -1.1634    (VP(yield)(PP(PP))(PP(in)))
   -1.3574    (NP(PP(in)(NP(NN(way)))))
   -1.8030    (NP(PP(in)(NP(trading)(JJ))))
shallow parsing
    w         active trees that contain the phrase "SBAR"
    1.4500    (SBAR(IN(for))(NP(VP(TO))))
    0.6177    (VP(SBAR(NP(VBD)))
    0.6173    (SBAR(NP(VP("))))
    0.5644    (VP(SBAR(NP(VP(JJ)))))
    ...       ...
   -0.9034    (SBAR(IN(for))(NP(O)))
   -0.9181    (SBAR(NP(O)))
   -1.0695    (ADVP(NP(SBAR(NP(VP)))))
   -1.1699    (SBAR(NP(NN)(NP)))

Table 4: Examples of active features (subtrees). All trees are represented in S-expressions. In the shallow parsing task, O is a special phrase label that means "out of chunk".

The testing speed of our model is much higher than that of other models. The speeds of reranking for WSJ parsing and shallow parsing are 0.055 sec./sent. and 0.042 sec./sent. respectively, which are fast enough for real applications. (We ran these tests on a Linux PC with a Pentium 4 3.2 GHz processor.)

6.2 Relationship to previous work
The tree kernel uses the all-subtrees representation not explicitly but implicitly, by reducing the problem to the calculation of the inner products of two trees. The implicit calculation yields a practical computation in training. However, in testing, kernel methods require a large number of kernel evaluations, which are too heavy to allow us to realize real applications. Moreover, the tree kernel needs to incorporate a decay factor to downweight the contribution of larger subtrees. It is non-trivial to set the optimal decay factor, as the accuracies are sensitive to its selection. Similar to our model, data oriented parsing (DOP) methods (Bod, 1998) deal with the all-subtrees representation explicitly. Since the exact computation of scores for DOP is NP-complete, several approximations are employed to perform efficient parsing. The critical difference between our model and DOP is that our model leads to an extremely sparse solution and automatically eliminates redundant subtrees. Within the DOP framework, (Bod, 2001) also employs constraints (e.g., on the depth of subtrees) to select relevant subtrees and achieves the best results for WSJ parsing. However, these techniques are not based on the regularization framework focused on in this paper and do not always eliminate all the redundant subtrees.
Even using the methods of (Bod, 2001), millions of subtrees are still exploited, which leads to inefficiency in real problems. 7 Conclusions In this paper, we presented a new application of boosting for parse reranking, in which all subtrees are potentially used as distinct features. Although this set-up greatly increases the feature space, the l1-norm regularization performed by boosting selects a compact and relevant feature set. Our model achieved a comparable or even better accuracy than kernel methods even with an extremely small number of features (subtrees). References Kenji Abe, Shinji Kawasoe, Tatsuya Asai, Hiroki Arimura, and Setsuo Arikawa. 2002. Optimized substructure discovery for semi-structured data. In Proc. of PKDD, pages 1–14. Rens Bod. 1998. Beyond Grammar: An Experience Based Theory of Language. CSLI Publications/Cambridge University Press. Rens Bod. 2001. What is the minimal set of fragments that achieves maximal parse accuracy? In Proc. of ACL, pages 66–73. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proc. of NAACL, pages 132–139. Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Proc. of ACL. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proc. of ICML, pages 175–182. Michael Collins. 2002. Ranking algorithms for named-entity extraction: Boosting and the voted perceptron. In Proc. of ACL, pages 489–496. Yoav Freund, Raj D. Iyer, Robert E. Schapire, and Yoram Singer. 2003. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933– 969. Taku Kudo and Yuji Matsumoto. 2001. Chunking with support vector machines. In Proc. of NAACL, pages 192–199. Taku Kudo and Yuji Matsumoto. 2004. A boosting algorithm for classification of semi-structured text. In Proc. of EMNLP, pages 301–308. Simon Perkins, Kevin Lacker, and James Thiler. 2003. Grafting: Fast, incremental feature selection by gradient descent in function space. Journal of Machine Learning Research, 3:1333–1356. Gunnar. R¨atsch. 2001. Robust Boosting via Convex Optimization. Ph.D. thesis, Department of Computer Science, University of Potsdam. Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. 1997. Boosting the margin: a new explanation for the effectiveness of voting methods. In Proc. of ICML, pages 322–330. Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In Proc. of HLT-NAACL, pages 213–220. Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 Shared Task: Chunking. In Proc. of CoNLL-2000 and LLL-2000, pages 127–132. Vladimir N. Vapnik. 1998. Statistical Learning Theory. WileyInterscience. Guizhen Yang. 2004. The complexity of mining maximal frequent itemsets and maximal frequent patterns. In Proc. of SIGKDD. Mohammed Zaki. 2002. Efficiently mining frequent trees in a forest. In Proc. of SIGKDD, pages 71–80. Tong Zhang, Fred Damerau, and David Johnson. 2002. Text chunking based on a generalization of winnow. Journal of Machine Learning Research, 2:615–637. 196
Proceedings of the 43rd Annual Meeting of the ACL, pages 197–204, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Automatic Measurement of Syntactic Development in Child Language Kenji Sagae and Alon Lavie Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15232 {sagae,alavie}@cs.cmu.edu Brian MacWhinney Department of Psychology Carnegie Mellon University Pittsburgh, PA 15232 [email protected] Abstract To facilitate the use of syntactic information in the study of child language acquisition, a coding scheme for Grammatical Relations (GRs) in transcripts of parent-child dialogs has been proposed by Sagae, MacWhinney and Lavie (2004). We discuss the use of current NLP techniques to produce the GRs in this annotation scheme. By using a statistical parser (Charniak, 2000) and memorybased learning tools for classification (Daelemans et al., 2004), we obtain high precision and recall of several GRs. We demonstrate the usefulness of this approach by performing automatic measurements of syntactic development with the Index of Productive Syntax (Scarborough, 1990) at similar levels to what child language researchers compute manually. 1 Introduction Automatic syntactic analysis of natural language has benefited greatly from statistical and corpus-based approaches in the past decade. The availability of syntactically annotated data has fueled the development of high quality statistical parsers, which have had a large impact in several areas of human language technologies. Similarly, in the study of child language, the availability of large amounts of electronically accessible empirical data in the form of child language transcripts has been shifting much of the research effort towards a corpus-based mentality. However, child language researchers have only recently begun to utilize modern NLP techniques for syntactic analysis. Although it is now common for researchers to rely on automatic morphosyntactic analyses of transcripts to obtain part-of-speech and morphological analyses, their use of syntactic parsing is rare. Sagae, MacWhinney and Lavie (2004) have proposed a syntactic annotation scheme for the CHILDES database (MacWhinney, 2000), which contains hundreds of megabytes of transcript data and has been used in over 1,500 studies in child language acquisition and developmental language disorders. This annotation scheme focuses on syntactic structures of particular importance in the study of child language. In this paper, we describe the use of existing NLP tools to parse child language transcripts and produce automatically annotated data in the format of the scheme of Sagae et al. We also validate the usefulness of the annotation scheme and our analysis system by applying them towards the practical task of measuring syntactic development in children according to the Index of Productive Syntax, or IPSyn (Scarborough, 1990), which requires syntactic analysis of text and has traditionally been computed manually. Results obtained with current NLP technology are close to what is expected of human performance in IPSyn computations, but there is still room for improvement. 2 The Index of Productive Syntax (IPSyn) The Index of Productive Syntax (Scarborough, 1990) is a measure of development of child language that provides a numerical score for grammatical complexity. IPSyn was designed for investigating individual differences in child language acqui197 sition, and has been used in numerous studies. 
It addresses weaknesses in the widely popular Mean Length of Utterance measure, or MLU, with respect to the assessment of development of syntax in children. Because it addresses syntactic structures directly, it has gained popularity in the study of grammatical aspects of child language learning in both research and clinical settings. After about age 3 (Klee and Fitzgerald, 1985), MLU starts to reach ceiling and fails to properly distinguish between children at different levels of syntactic ability. For these purposes, and because of its higher content validity, IPSyn scores often tells us more than MLU scores. However, the MLU holds the advantage of being far easier to compute. Relatively accurate automated methods for computing the MLU for child language transcripts have been available for several years (MacWhinney, 2000). Calculation of IPSyn scores requires a corpus of 100 transcribed child utterances, and the identification of 56 specific language structures in each utterance. These structures are counted and used to compute numeric scores for the corpus in four categories (noun phrases, verb phrases, questions and negations, and sentence structures), according to a fixed score sheet. Each structure in the four categories receives a score of zero (if the structure was not found in the corpus), one (if it was found once in the corpus), or two (if it was found two or more times). The scores in each category are added, and the four category scores are added into a final IPSyn score, ranging from zero to 112.1 Some of the language structures required in the computation of IPSyn scores (such as the presence of auxiliaries or modals) can be recognized with the use of existing child language analysis tools, such as the morphological analyzer MOR (MacWhinney, 2000) and the part-of-speech tagger POST (Parisse and Le Normand, 2000). However, more complex structures in IPSyn require syntactic analysis that goes beyond what POS taggers can provide. Examples of such structures include the presence of an inverted copula or auxiliary in a wh-question, conjoined clauses, bitransitive predicates, and fronted or center-embedded subordinate clauses. 1See (Scarborough, 1990) for a complete listing of targeted structures and the IPSyn score sheet used for calculation of scores. Sentence (input): We eat the cheese sandwich Grammatical Relations (output): [Leftwall] We eat the cheese sandwich SUBJ ROOT OBJ DET MOD Figure 1: Input sentence and output produced by our system. 3 Automatic Syntactic Analysis of Child Language Transcripts A necessary step in the automatic computation of IPSyn scores is to produce an automatic syntactic analysis of the transcripts being scored. We have developed a system that parses transcribed child utterances and identifies grammatical relations (GRs) according to the CHILDES syntactic annotation scheme (Sagae et al., 2004). This annotation scheme was designed specifically for child-parent dialogs, and we have found it suitable for the identification of the syntactic structures necessary in the computation of IPSyn. Our syntactic analysis system takes a sentence and produces a labeled dependency structure representing its grammatical relations. An example of the input and output associated with our system can be seen in figure 1. The specific GRs identified by the system are listed in figure 2. The three main steps in our GR analysis are: text preprocessing, unlabeled dependency identification, and dependency labeling. 
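These three steps can be pictured as a small pipeline. The following sketch is only a schematic outline under our own naming assumptions: the component functions passed in stand for the actual tools (CLAN/MOR/POST preprocessing, a constituent parser with head-finding rules, and a dependency-label classifier) that are described in the subsections that follow.

from typing import Callable, List, Tuple

Dependency = Tuple[int, int, str]        # (dependent index, head index, GR label)

def analyze_grs(chat_utterance: str,
                clean_and_tag: Callable,        # step 1: cleanup + POS/morphology
                parse_constituents: Callable,   # step 2a: constituent parser
                extract_dependencies: Callable, # step 2b: head rules -> (dep, head) pairs
                label_dependency: Callable      # step 3: classifier assigning a GR label
                ) -> List[Dependency]:
    """Schematic three-step grammatical-relation analysis of one utterance."""
    # Step 1: text preprocessing (remove disfluencies, retracings, repetitions;
    # produce tokens with part-of-speech and morphological tags).
    tokens, pos_tags = clean_and_tag(chat_utterance)

    # Step 2: unlabeled dependency identification from a constituent parse.
    tree = parse_constituents(tokens)
    unlabeled = extract_dependencies(tree)       # list of (dependent, head) index pairs

    # Step 3: dependency labeling with one of the GR labels (SUBJ, OBJ, DET, ...).
    labeled = []
    for dependent, head in unlabeled:
        gr = label_dependency(tokens, pos_tags, tree, dependent, head)
        labeled.append((dependent, head, gr))
    return labeled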
In the following subsections, we examine each of them in more detail. 3.1 Text Preprocessing The CHAT transcription system2 is the format followed by all transcript data in the CHILDES database, and it is the input format we use for syntactic analysis. CHAT specifies ways of transcribing extra-grammatical material such as disfluency, retracing, and repetition, common in spontaneous spoken language. Transcripts of child language may contain a large amount of extra-grammatical mate2http://childes.psy.cmu.edu/manuals/CHAT.pdf 198 SUBJ, ESUBJ, CSUBJ, XSUBJ COMP, XCOMP JCT, CJCT, XJCT OBJ, OBJ2, IOBJ PRED, CPRED, XPRED MOD, CMOD, XMOD AUX NEG DET QUANT POBJ PTL CPZR COM INF VOC COORD ROOT Subject, expletive subject, clausal subject (finite and non−finite) Object, second object, indirect object Clausal complement (finite and non−finite) Predicative, clausal predicative (finite and non−finite) Adjunct, clausal adjunct (finite and non−finite) Nominal modifier, clausal nominal modifier (finite and non−finite) Auxiliary Negation Determiner Quantifier Prepositional object Verb particle Communicator Complementizer Infinitival "to" Vocative Coordinated item Top node Figure 2: Grammatical relations in the CHILDES syntactic annotation scheme. rial that falls outside of the scope of the syntactic annotation system and our GR identifier, since it is already clearly marked in CHAT transcripts. By using the CLAN tools (MacWhinney, 2000), designed to process transcripts in CHAT format, we remove disfluencies, retracings and repetitions from each sentence. Furthermore, we run each sentence through the MOR morphological analyzer (MacWhinney, 2000) and the POST part-of-speech tagger (Parisse and Le Normand, 2000). This results in fairly clean sentences, accompanied by full morphological and part-of-speech analyses. 3.2 Unlabeled Dependency Identification Once we have isolated the text that should be analyzed in each sentence, we parse it to obtain unlabeled dependencies. Although we ultimately need labeled dependencies, our choice to produce unlabeled structures first (and label them in a later step) is motivated by available resources. Unlabeled dependencies can be readily obtained by processing constituent trees, such as those in the Penn Treebank (Marcus et al., 1993), with a set of rules to determine the lexical heads of constituents. This lexicalization procedure is commonly used in statistical parsing (Collins, 1996) and produces a dependency tree. This dependency extraction procedure from constituent trees gives us a straightforward way to obtain unlabeled dependencies: use an existing statistical parser (Charniak, 2000) trained on the Penn Treebank to produce constituent trees, and extract unlabeled dependencies using the aforementioned head-finding rules. Our target data (transcribed child language) is from a very different domain than the one of the data used to train the statistical parser (the Wall Street Journal section of the Penn Treebank), but the degradation in the parser’s accuracy is acceptable. An evaluation using 2,018 words of in-domain manually annotated dependencies shows that the dependency accuracy of the parser is 90.1% on child language transcripts (compared to over 92% on section 23 of the Wall Street Journal portion of the Penn Treebank). Despite the many differences with respect to the domain of the training data, our domain features sentences that are much shorter (and therefore easier to parse) than those found in Wall Street Journal articles. 
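The head-rule step just described can be sketched as follows. This is an illustration under our own assumptions (a toy head-rule table and nested-list trees with (word, position) leaves), not the actual rule set used with the Charniak parser: each constituent's head child is chosen by the rules, and the head word of every non-head child is attached to the head word of the head child, producing an unlabeled dependency tree.

TOY_HEAD_RULES = {                       # toy stand-in for a full head-rule table
    "TOP": ["S"], "S": ["VP"], "VP": ["VBD", "VB"], "NP": ["NN", "PRP"],
}

def head_and_deps(node):
    """Return (head leaf, list of (dependent leaf, head leaf) arcs) for `node`.
    Leaves are (word, position) pairs, e.g. ("saw", 2)."""
    node_label, children = node[0], node[1:]
    if len(children) == 1 and not isinstance(children[0], list):
        return children[0], []           # preterminal such as ["VBD", ("saw", 2)]
    analyzed = [head_and_deps(child) for child in children]
    heads = [h for h, _ in analyzed]
    arcs = [arc for _, child_arcs in analyzed for arc in child_arcs]
    head = heads[0]                      # default: leftmost child is the head
    for preferred in TOY_HEAD_RULES.get(node_label, []):
        matched = [h for child, h in zip(children, heads) if child[0] == preferred]
        if matched:
            head = matched[0]
            break
    arcs += [(other, head) for other in heads if other is not head]
    return head, arcs

# Example with the tree for "I saw a girl":
tree = ["TOP", ["S", ["NP", ["PRP", ("I", 1)]],
                     ["VP", ["VBD", ("saw", 2)],
                            ["NP", ["DT", ("a", 3)], ["NN", ("girl", 4)]]]]]
root, dependencies = head_and_deps(tree)
print(root)                              # ('saw', 2): the head word of the sentence
print(dependencies)                      # each non-head word paired with its head word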
The average sentence length varies from transcript to transcript, because of factors such as the age and verbal ability of the child, but it is usually less than 15 words. 3.3 Dependency Labeling After obtaining unlabeled dependencies as described above, we proceed to label those dependencies with the GR labels listed in Figure 2. Determining the labels of dependencies is in general an easier task than finding unlabeled dependencies in text.3 Using a classifier, we can choose one of the 30 possible GR labels for each dependency, given a set of features derived from the dependencies. Although we need manually labeled data to train the classifier for labeling dependencies, the size of this training set is far smaller than what would be necessary to train a parser to find labeled dependen3Klein and Manning (2002) offer an informal argument that constituent labels are much more easily separable in multidimensional space than constituents/distituents. The same argument applies to dependencies and their labels. 199 cies in one pass. We use a corpus of about 5,000 words with manually labeled dependencies to train TiMBL (Daelemans et al., 2003), a memory-based learner (set to use the k-nn algorithm with k=1, and gain ratio weighing), to classify each dependency with a GR label. We extract the following features for each dependency: • The head and dependent words; • The head and dependent parts-of-speech; • Whether the dependent comes before or after the head in the sentence; • How many words apart the dependent is from the head; • The label of the lowest node in the constituent tree that includes both the head and dependent. The accuracy of the classifier in labeling dependencies is 91.4% on the same 2,018 words used to evaluate unlabeled accuracy. There is no intersection between the 5,000 words used for training and the 2,018-word test set. Features were tuned on a separate development set of 582 words. When we combine the unlabeled dependencies obtained with the Charniak parser (and head-finding rules) and the labels obtained with the classifier, overall labeled dependency accuracy is 86.9%, significantly above the results reported (80%) by Sagae et al. (2004) on very similar data. Certain frequent and easily identifiable GRs, such as DET, POBJ, INF, and NEG were identified with precision and recall above 98%. Among the most difficult GRs to identify were clausal complements COMP and XCOMP, which together amount to less than 4% of the GRs seen the training and test sets. Table 1 shows the precision and recall of GRs of particular interest. Although not directly comparable, our results are in agreement with state-of-the-art results for other labeled dependency and GR parsers. Nivre (2004) reports a labeled (GR) dependency accuracy of 84.4% on modified Penn Treebank data. Briscoe and Carroll (2002) achieve a 76.5% F-score on a very rich set of GRs in the more heterogeneous and challenging Susanne corpus. Lin (1998) evaluates his MINIPAR system at 83% F-score on identification of GRs, also in data from the Susanne corpus (but using simpler GR set than Briscoe and Carroll). GR Precision Recall F-score SUBJ 0.94 0.93 0.93 OBJ 0.83 0.91 0.87 COORD 0.68 0.85 0.75 JCT 0.91 0.82 0.86 MOD 0.79 0.92 0.85 PRED 0.80 0.83 0.81 ROOT 0.91 0.92 0.91 COMP 0.60 0.50 0.54 XCOMP 0.58 0.64 0.61 Table 1: Precision, recall and F-score (harmonic mean) of selected Grammatical Relations. 
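The feature set listed above can be assembled into one classification instance per dependency. In the sketch below, scikit-learn's 1-nearest-neighbor classifier is used purely as a stand-in for TiMBL (an assumption for illustration; the actual system uses TiMBL's k-nn with k=1 and gain-ratio feature weighting), and the label of the lowest constituent covering both words is assumed to be precomputed.

from sklearn.feature_extraction import DictVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def dependency_features(sent, dep_idx, head_idx, lowest_common_label):
    """Feature dict for one (dependent, head) arc; `sent` is a list of (word, POS) pairs."""
    dep_word, dep_pos = sent[dep_idx]
    head_word, head_pos = sent[head_idx]
    return {
        "head_word": head_word,
        "dep_word": dep_word,
        "head_pos": head_pos,
        "dep_pos": dep_pos,
        "dep_before_head": dep_idx < head_idx,
        "distance": abs(dep_idx - head_idx),
        "lowest_common_constituent": lowest_common_label,
    }

# A 1-nearest-neighbor labeler standing in for TiMBL (illustrative only).
gr_labeler = make_pipeline(DictVectorizer(), KNeighborsClassifier(n_neighbors=1))

# Training and use (X_train is a list of feature dicts, y_train the gold GR labels):
#   gr_labeler.fit(X_train, y_train)
#   gr_labeler.predict([dependency_features(sent, 0, 1, "S")])   # -> e.g. ["SUBJ"]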
4 Automating IPSyn Calculating IPSyn scores manually is a laborious process that involves identifying 56 syntactic structures (or their absence) in a transcript of 100 child utterances. Currently, researchers work with a partially automated process by using transcripts in electronic format and spreadsheets. However, the actual identification of syntactic structures, which accounts for most of the time spent on calculating IPSyn scores, still has to be done manually. By using part-of-speech and morphological analysis tools, it is possible to narrow down the number of sentences where certain structures may be found. The search for such sentences involves patterns of words and parts-of-speech (POS). Some structures, such as the presence of determiner-noun or determiner-adjective-noun sequences, can be easily identified through the use of simple patterns. Other structures, such as front or center-embedded clauses, pose a greater challenge. Not only are patterns for such structures difficult to craft, they are also usually inaccurate. Patterns that are too general result in too many sentences to be manually examined, but more restrictive patterns may miss sentences where the structures are present, making their identification highly unlikely. Without more syntactic analysis, automatic searching for structures in IPSyn is limited, and computation of IPSyn scores still requires a great deal of manual inspection. Long, Fey and Channell (2004) have developed a software package, Computerized Profiling (CP), for child language study, which includes a (mostly) 200 automated computation of IPSyn.4 CP is an extensively developed example of what can be achieved using only POS and morphological analysis. It does well on identifying items in IPSyn categories that do not require deeper syntactic analysis. However, the accuracy of overall scores is not high enough to be considered reliable in practical usage, in particular for older children, whose utterances are longer and more sophisticated syntactically. In practice, researchers usually employ CP as a first pass, and manually correct the automatic output. Section 5 presents an evaluation of the CP version of IPSyn. Syntactic analysis of transcripts as described in section 3 allows us to go a step further, fully automating IPSyn computations and obtaining a level of reliability comparable to that of human scoring. The ability to search for both grammatical relations and parts-of-speech makes searching both easier and more reliable. As an example, consider the following sentences (keeping in mind that there are no explicit commas in spoken language): (a) Then [,] he said he ate. (b) Before [,] he said he ate. (c) Before he ate [,] he ran. Sentences (a) and (b) are similar, but (c) is different. If we were looking for a fronted subordinate clause, only (c) would be a match. However, each one of the sentences has an identical part-speechsequence. If this were an isolated situation, we might attempt to fix it by having tags that explicitly mark verbs that take clausal complements, or by adding lexical constraints to a search over part-ofspeech patterns. However, even by modifying this simple example slightly, we find more problems: (d) Before [,] he told the man he was cold. (e) Before he told the story [,] he was cold. Once again, sentences (d) and (e) have identical part-of-speech sequences, but only sentence (e) features a fronted subordinate clause. 
These limited toy examples only scratch the surface of the difficulties in identifying syntactic structures without syntactic 4Although CP requires that a few decisions be made manually, such as the disambiguation of the lexical item “’s” as copula vs. genitive case marker, and the definition of sentence breaks for long utterances, the computation of IPSyn scores is automated to a large extent. analysis beyond part-of-speech and morphological tagging. In these sentences, searching with GRs is easy: we simply find a GR of clausal type (e.g. CJCT, COMP, CMOD, etc) where the dependent is to the left of its head. For illustration purposes of how searching for structures in IPSyn is done with GRs, let us look at how to find other IPSyn structures5: • Wh-embedded clauses: search for wh-words whose head, or transitive head (its head’s head, or head’s head’s head...) is a dependent in GR of types [XC]SUBJ, [XC]PRED, [XC]JCT, [XC]MOD, COMP or XCOMP; • Relative clauses: search for a CMOD where the dependent is to the right of the head; • Bitransitive predicate: search for a word that is a head of both OBJ and OBJ2 relations. Although there is still room for under- and overgeneralization with search patterns involving GRs, finding appropriate ways to search is often made trivial, or at least much more simple and reliable than searching without GRs. An evaluation of our automated version of IPSyn, which searches for IPSyn structures using POS, morphology and GR information, and a comparison to the CP implementation, which uses only POS and morphology information, is presented in section 5. 5 Evaluation We evaluate our implementation of IPSyn in two ways. The first is Point Difference, which is calculated by taking the (unsigned) difference between scores obtained manually and automatically. The point difference is of great practical value, since it shows exactly how close automatically produced scores are to manually produced scores. The second is Point-to-Point Accuracy, which reflects the overall reliability over each individual scoring decision in the computation of IPSyn scores. It is calculated by counting how many decisions (identification of presence/absence of language structures in the transcript being scored) were made correctly, and dividing that 5More detailed descriptions and examples of each structure are found in (Scarborough, 1990), and are omitted here for space considerations, since the short descriptions are fairly selfexplanatory. 201 number by the total number of decisions. The pointto-point measure is commonly used for assessing the inter-rater reliability of metrics such as the IPSyn. In our case, it allows us to establish the reliability of automatically computed scores against human scoring. 5.1 Test Data We obtained two sets of transcripts with corresponding IPSyn scoring (total scores, and each individual decision) from two different child language research groups. The first set (A) contains 20 transcripts of children of ages ranging between two and three. The second set (B) contains 25 transcripts of children of ages ranging between eight and nine. Each transcript in set A was scored fully manually. Researchers looked for each language structure in the IPSyn scoring guide, and recorded its presence in a spreadsheet. In set B, scoring was done in a two-stage process. In the first stage, each transcript was scored automatically by CP. In the second stage, researchers checked each automatic decision made by CP, and corrected any errors manually. 
Two transcripts in each set were held out for development and debugging. The final test sets contained: (A) 18 transcripts with a total of 11,704 words and a mean length of utterance of 2.9, and (B) 23 transcripts with a total of 40,819 words and a mean length of utterance of 7.0. 5.2 Results Scores computed automatically from transcripts parsed as described in section 3 were very close to the scores computed manually. Table 2 shows a summary of the results, according to our two evaluation metrics. Our system is labeled as GR, and manually computed scores are labeled as HUMAN. For comparison purposes, we also show the results of running Long et al.’s automated version of IPSyn, labeled as CP, on the same transcripts. Point Difference The average (absolute) point difference between automatically computed scores (GR) and manually computed scores (HUMAN) was 3.3 (the range of HUMAN scores on the data was 21-91). There was no clear trend on whether the difference was positive or negative. In some cases, the automated scores were higher, in other cases lower. The minimum difSystem Avg. Pt. Difference Point-to-Point to HUMAN Reliability GR (Total) 3.3 92.8% CP (Total) 8.3 85.4% GR (Set A) 3.7 92.5% CP (Set A) 6.2 86.2% GR (Set B) 2.9 93.0% CP (Set B) 10.2 84.8% Table 2: Summary of evaluation results. GR is our implementation of IPSyn based on grammatical relations, CP is Long et al.’s (2004) implementation of IPSyn, and HUMAN is manual scoring. Histogram of Point Differences (3 point bins) 0 10 20 30 40 50 60 3 6 9 12 15 18 21 Point Difference Frequency (%) GR CP Figure 3: Histogram of point differences between HUMAN scores and GR (black), and CP (white). ference was zero, and the maximum difference was 12. Only two scores differed by 10 or more, and 17 scores differed by two or less. The average point difference between HUMAN and the scores obtained with Long et al.’s CP was 8.3. The minimum was zero and the maximum was 21. Sixteen scores differed by 10 or more, and six scores differed by 2 or less. Figure 3 shows the point differences between GR and HUMAN, and CP and HUMAN. It is interesting to note that the average point differences between GR and HUMAN were similar on sets A and B (3.7 and 2.9, respectively). Despite the difference in age ranges, the two averages were less than one point apart. On the other hand, the average difference between CP and HUMAN was 6.2 on set A, and 10.2 on set B. The larger difference reflects CP’s difficulty in scoring transcripts of older children, whose sentences are more syntactically complex, using only POS analysis. 202 Point-to-Point Accuracy In the original IPSyn reliability study (Scarborough, 1990), point-to-point measurements using 75 transcripts showed the mean inter-rater agreement for IPSyn among human scorers at 94%, with a minimum agreement of 90% of all decisions within a transcript. The lowest agreement between HUMAN and GR scoring for decisions within a transcript was 88.5%, with a mean of 92.8% over the 41 transcripts used in our evaluation. Although comparisons of agreement figures obtained with different sets of transcripts are somewhat coarse-grained, given the variations within children, human scorers and transcript quality, our results are very satisfactory. For direct comparison purposes using the same data, the mean point-to-point accuracy of CP was 85.4% (a relative increase of about 100% in error). 
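Before turning to the results, the two measures just introduced can be stated compactly. The sketch below is our own formalization and assumes each scoring is available as a total IPSyn score plus a map from IPSyn item to the credit (0, 1 or 2) assigned to it; the point-to-point measure then counts the proportion of items on which the two scorings agree.

def point_difference(auto_total, human_total):
    """Unsigned difference between automatically and manually computed IPSyn totals."""
    return abs(auto_total - human_total)

def point_to_point_accuracy(auto_items, human_items):
    """Proportion of individual scoring decisions on which two scorings agree.
    Both arguments map an IPSyn item id (e.g. "S11") to the credit assigned."""
    items = list(human_items)
    agreed = sum(1 for item in items if auto_items.get(item) == human_items[item])
    return agreed / len(items)

# Hypothetical example:
#   point_difference(87, 84)                                   -> 3
#   point_to_point_accuracy({"S11": 2, "V15": 1},
#                           {"S11": 2, "V15": 0})              -> 0.5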
In their separate evaluation of CP, using 30 samples of typically developing children, Long and Channell (2001) found a 90.7% point-to-point accuracy between fully automatic and manually corrected IPSyn scores.6 However, Long and Channell compared only CP output with manually corrected CP output, while our set A was manually scored from scratch. Furthermore, our set B contained only transcripts from significantly older children (as in our evaluation, Long and Channell observed decreased accuracy of CP’s IPSyn with more complex language usage). These differences, and the expected variation from using different transcripts from different sources, account for the difference in our results and Long and Channell’s. 5.3 Error Analysis Although the overall accuracy of our automatically computed scores is in large part comparable to manual IPSyn scoring (and significantly better than the only option currently available for automatic scoring), our system suffers from visible deficiencies in the identification of certain structures within IPSyn. Four of the 56 structures in IPSyn account for almost half of the number of errors made by our system. Table 3 lists these IPSyn items, with their respective percentages of the total number of errors. 6Long and Channell’s evaluation also included samples from children with language disorders. Their 30 samples of typically developing children (with a mean age of 5) are more directly comparable to the data used in our evaluation. IPSyn item Error S11 (propositional complement) 16.9% V15 (copula, modal or aux for 12.3% emphasis or ellipsis) S16 (relative clause) 10.6% S14 (bitransitive predicate) 5.8% Table 3: IPSyn structures where errors occur most frequently, and their percentages of the total number of errors over 41 transcripts. Errors in items S11 (propositional complements), S16 (relative clauses), and S14 (bitransitive predicates) are caused by erroneous syntactic analyses. For an example of how GR assignments affect IPSyn scoring, let us consider item S11. Searching for the relation COMP is a crucial part in finding propositional complements. However, COMP is one of the GRs that can be identified the least reliably in our set (precision of 0.6 and recall of 0.5, see table 1). As described in section 2, IPSyn requires that we credit zero points to item S11 for no occurrences of propositional complements, one point for a single occurrence, and two points for two or more occurrences. If there are several COMPs in the transcript, we should find about half of them (plus others, in error), and correctly arrive at a credit of two points. However, if there are very few or none, our count is likely to be incorrect. Most errors in item V15 (emphasis or ellipsis) were caused not by incorrect GR assignments, but by imperfect search patterns. The searching failed to account for a number of configurations of GRs, POS tags and words that indicate that emphasis or ellipsis exists. This reveals another general source of error in our IPSyn implementation: the search patterns that use GR analyzed text to make the actual IPSyn scoring decisions. Although our patterns are far more reliable than what we could expect from POS tags and words alone, these are still hand-crafted rules that need to be debugged and perfected over time. This was the first evaluation of our system, and only a handful of transcripts were used during development. 
We expect that once child language researchers have had the opportunity to use the system in practical settings, their feedback will allow us to refine the search patterns at a more rapid pace. 203 6 Conclusion and Future Work We have presented an automatic way to annotate transcripts of child language with the CHILDES syntactic annotation scheme. By using existing resources and a small amount of annotated data, we achieved state-of-the-art accuracy levels. GR identification was then used to automate the computation of IPSyn scores to measure grammatical development in children. The reliability of our automatic IPSyn was very close to the inter-rater reliability among human scorers, and far higher than that of the only other computational implementation of IPSyn. This demonstrates the value of automatic GR assignment to child language research. From the analysis in section 5.3, it is clear that the identification of certain GRs needs to be made more accurately. We intend to annotate more in-domain training data for GR labeling, and we are currently investigating the use of other applicable GR parsing techniques. Finally, IPSyn score calculation could be made more accurate with the knowledge of the expected levels of precision and recall of automatic assignment of specific GRs. It is our intuition that in a number of cases it would be preferable to trade recall for precision. We are currently working on a framework for soft-labeling of GRs, which will allow us to manipulate the precision/recall trade-off as discussed in (Carroll and Briscoe, 2002). Acknowledgments This work was supported in part by the National Science Foundation under grant IIS-0414630. References Edward J. Briscoe and John A. Carroll. 2002. Robust accurate statistical annotation of general text. Proceedings of the 3rd International Conference on Language Resources and Evaluation, (pp. 1499–1504). Las Palmas, Gran Canaria. John A. Carroll and Edward J. Briscoe. 2002. High precision extraction of grammatical relations. Proceedings of the 19th International Conference on Computational Linguistics, (pp. 134-140). Taipei, Taiwan. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the First Annual Meeting of the North American Chapter of the Association for Computational Linguistics. Seattle, WA. Michael Collins. 1996. A new statistical parser based on bigram lexical dependencies. Proceedings of the 34th Meeting of the Association for Computational Linguistics (pp. 184-191). Santa Cruz, CA. Walter Daelemans, Jacub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2004. TiMBL: Tilburg Memory Based Learner, version 5.1, Reference Guide. ILK Research Group Technical Report Series no. 04-02, 2004. T. Klee and M. D. Fitzgerald. 1985. The relation between grammatical development and mean length of utterance in morphemes. Journal of Child Language, 12, 251-269. Dan Klein and Christopher D. Manning. 2002. A generative constituent-context model for improved grammar induction. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (pp. 128-135). Dekang Lin. 1998. Dependency-based evaluation of MINIPAR. In Proceedings of the Workshop on the Evaluation of Parsing Systems. Granada, Spain. Steve H. Long and Ron W. Channell. 2001. Accuracy of four language analysis procedures performed automatically. American Journal of Speech-Language Pathology, 10(2). Steven H. Long, Marc E. Fey, and Ron W. Channell. 2004. Computerized Profiling (Version 9.6.0). 
Cleveland, OH: Case Western Reserve University. Brian MacWhinney. 2000. The CHILDES Project: Tools for Analyzing Talk. Mahwah, NJ: Lawrence Erlbaum Associates. Mitchel P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewics. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19. Joakim Nivre and Mario Scholz. 2004. Deterministic dependency parsing of English text. Proceedings of International Conference on Computational Linguistics (pp. 64-70). Geneva, Switzerland. Christophe Parisse and Marie-Thrse Le Normand. 2000. Automatic disambiguation of the morphosyntax in spoken language corpora. Behavior Research Methods, Instruments, and Computers, 32, 468-481. Kenji Sagae, Alon Lavie, and Brian MacWhinney. 2004. Adding Syntactic annotations to transcripts of parentchild dialogs. Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC 2004). Lisbon, Portugal. Hollis S. Scarborough. 1990. Index of Productive Syntax. In Applied Psycholinguistics, 11, 1-22. 204
Proceedings of the 43rd Annual Meeting of the ACL, pages 205–214, Ann Arbor, June 2005. c⃝2005 Association for Computational Linguistics Experiments with Interactive Question-Answering Sanda Harabagiu, Andrew Hickl, John Lehmann, and Dan Moldovan Language Computer Corporation Richardson, Texas USA [email protected] Abstract This paper describes a novel framework for interactive question-answering (Q/A) based on predictive questioning. Generated off-line from topic representations of complex scenarios, predictive questions represent requests for information that capture the most salient (and diverse) aspects of a topic. We present experimental results from large user studies (featuring a fully-implemented interactive Q/A system named FERRET) that demonstrates that surprising performance is achieved by integrating predictive questions into the context of a Q/A dialogue. 1 Introduction In this paper, we propose a new architecture for interactive question-answering based on predictive questioning. We present experimental results from a currently-implemented interactive Q/A system, named FERRET, that demonstrates that surprising performance is achieved by integrating sources of topic information into the context of a Q/A dialogue. In interactive Q/A, professional users engage in extended dialogues with automatic Q/A systems in order to obtain information relevant to a complex scenario. Unlike Q/A in isolation, where the performance of a system is evaluated in terms of how well answers returned by a system meet the specific information requirements of a single question, the performance of interactive Q/A systems have traditionally been evaluated by analyzing aspects of the dialogue as a whole. Q/A dialogues have been evaluated in terms of (1) efficiency, defined as the number of questions that the user must pose to find particular information, (2) effectiveness, defined by the relevance of the answers returned, (3) user satisfaction. In order to maximize performance in these three areas, interactive Q/A systems need a predictive dialogue architecture that enables them to propose related questions about the relevant information that could be returned to a user, given a domain of interest. We argue that interactive Q/A systems depend on three factors: (1) the effective representation of the topic of a dialogue, (2) the dynamic recognition of the structure of the dialogue, and (3) the ability to return relevant answers to a particular question. In this paper, we describe results from experiments we conducted with our own interactive Q/A system, FERRET, under the auspices of the ARDA AQUAINT1 program, involving 8 different dialogue scenarios and more than 30 users. The results presented here illustrate the role of predictive questioning in enhancing the performance of Q/A interactions. In the remainder of this paper, we describe a new architecture for interactive Q/A. Section 2 presents the functionality of several of FERRET’s modules and describes the NLP techniques it relies upon. In Section 3, we present one of the dialogue scenarios and the topic representations we have employed. Section 4 highlights the management of the interaction between the user and FERRET, while Section 5 presents the results of evaluating our proposed 1AQUAINT is an acronym for Advanced QUestion Answering for INTelligence. 
205 Dialogue Management Collection Document Question Similarity Answer Fusion (PDN) Network Dialogue Predictive Answer Fusion Context Management Dialogue Shell Online Question Answering Topic Predictive Dialogue Question Answer Decomposition Question Information Extraction Representation Off−line Question Answering Database (QUAB) Question−Answer Figure 1: FERRET - A Predictive Interactive Question-Answering Architecture. model, and Section 6 summarizes the conclusions. 2 Interactive Question-Answering We have found that the quality of interactions produced by an interactive Q/A system can be greatly enhanced by predicting the range of questions that a user might ask in the context of a given topic. If a large database of topic-relevant questions were available for a wide variety of topics, the accuracy of a state-of-the-art Q/A system such as (Harabagiu et al., 2003) could be enhanced. In FERRET, our interactive Q/A system, we store such “predicted” pairs of questions and answers in a database known as the Question Answer Database (or QUAB). FERRET uses this large set of topicrelevant question-and-answer pairs to improve the interaction with the user by suggesting new questions. For example, when a user asks a question like (Q1) (as illustrated in Table 1), FERRET returns an answer to the question (A1) and proposes (Q2), (Q3), and (Q4) as suggestions of possible continuations of the dialogue. Users then choose how to continue the interaction by either (1) ignoring the suggestions made by the system and proposing a different question, or by (2) selecting one of the proposed questions and examining its answer. Figure 1 illustrates the architecture of FERRET. The interactions are managed by a dialogue shell, which processes questions by transforming them into their corresponding predicate-argument structures2. The data collection used in our experiments was 2We have employed the same representation of predicateargument structures as those encoded in PropBank. We use a semantic parser (described in (Surdeanu et al., 2003)) that recognizes predicate-argument structures. (Q1) What weapons are included in Egypt’s stockpiles? (A1) The Israelis point to comments made by former President Anwar Sadat, who in 1970 stated that Egypt has biological weapons stored in refrigerators ready to use against Israel if need be. The program might include ”plague, botulism toxin, encephalitis virus, anthrax, Rift Valley fever and mycotoxicosis.” (Q2) Where did Egypt inherit its first stockpiles of chemical weapons? (Q3) Is there evidence that Egypt has dismantled its stockpiles of weapons? (Q4) Where are Egypt’s weapons stockpiles located? (Q5) Who oversees Egypt’s weapons stockpiles? Table 1: User question and proposed questions from QUABs made available by the Center for Non-Proliferation Studies (CNS)3. Modules from the FERRET’s dialogue shell interact with modules from the predictive dialogue block. Central to the predictive dialogue is the topic representation for each scenario, which enables the population of a Predictive Dialogue Network (PDN). The PDN consists of a large set of questions that were asked or predicted for each topic. It is a network because questions are related by “similarity” links, which are computed by the Question Similarity module. The topic representation enables an Information Extraction module based on (Surdeanu and Harabagiu, 2002) to find topic-relevant information in the document collection and to use it as answers for the QUABs. 
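To illustrate how a QUAB can drive the suggestion step described above, the toy sketch below stores question-answer pairs and, for an incoming user question, returns a stored answer together with the most similar stored questions as proposed continuations. This is purely illustrative: in FERRET the answer itself comes from the Q/A engine and question similarity is computed by a dedicated module, whereas here a plain bag-of-words cosine is used as a stand-in similarity measure and the top QUAB hit's answer stands in for the system answer.

import math
from collections import Counter

def bag_of_words(text):
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    numerator = sum(a[w] * b[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return numerator / norm if norm else 0.0

class Quab:
    """A toy question-answer database with similarity-based question suggestion."""
    def __init__(self, qa_pairs):
        self.qa_pairs = qa_pairs                      # list of (question, answer) pairs
        self.vectors = [bag_of_words(q) for q, _ in qa_pairs]

    def respond(self, user_question, n_suggestions=3):
        query = bag_of_words(user_question)
        ranked = sorted(range(len(self.qa_pairs)),
                        key=lambda i: cosine(query, self.vectors[i]), reverse=True)
        answer = self.qa_pairs[ranked[0]][1]          # stand-in for the Q/A engine's answer
        suggestions = [self.qa_pairs[i][0] for i in ranked[1:1 + n_suggestions]]
        return answer, suggestions

# quab = Quab([("Where are Egypt's weapons stockpiles located?", "..."),
#              ("Who oversees Egypt's weapons stockpiles?", "...")])
# answer, proposed = quab.respond("What weapons are included in Egypt's stockpiles?")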
The questions associated with each predicted answer are generated from patterns that are related to the extraction patterns used for identifying topic relevant information. The quality of the dialog between the user and FERRET depends on the quality of the topic representations and the coverage of the QUABs. 3The Center for Non-Proliferation Studies at the Monterrey Institute of International Studies distributes collections of print and online documents on weapons of mass destruction. More information at: http://cns.miis.edu. 206 GENERAL BACKGROUND 1) Country Profile 3) Military Operations: Army, Navy, Air Force, Leaders, Capabilities, Intentions 4) Allies/Partners: Coalition Forces 5) Weapons: Chemical, Biological, Materials, Stockpiles, Facilities, Access, Research Efforts, Scientists 6) Citizens: Population, Growth Rate, Education 8) Economics: Growth Domestic Product, Growth Rate, Imports 9) Threat Perception: Border and Surrounding States, International, Terrorist Groups 10) Behaviour: Threats, Invasions, Sponsorship and Harboring of Bad Actors 13) Leadership: 7) Industrial: Major Industrires, Exports, Power Sources 14) Behaviour: Threats to use WMDs, Actual Usage, Sophistication of Attack, Anectodal or Simultaneous Serving as a background to the scenarios, the following list contains subject areas that may be relevant to the scenarios under examination, and it is provided to assist the analyst in generating questions. 2) Government: Type of, Leadership, Relations SCENARIO: Assessment of Egypt’s Biological Weapons As terrorist Activity in Egypt increases, the Commander of the United States Army believes a better understanding of Egypt’s Military capabilities is needed. Egypt’s biological weapons database needs to be updated to correspond with the Commander’s request. Focus your investigation on Egypt’s access to old technology, assistance received from the Soviet Union for development of their pharmaceutical infrastructure, production of toxins and BW agents, stockpiles, exportation of these materials and development technology to Middle Eastern countries, and the effect that this information will have on the United States and Coalition Forces in the Middle East. Please incorporate any other related information to your report. 11) Transportation Infrastructure: Kilometers of Road, Rail, Air Runways, Harbors and Ports, Rivers 12) Beliefs: Ideology, Goals, Intentions 15) Weapons: Chemical, Bilogical, Materials, Stockpiles, Facilities, Access Figure 2: Example of a Dialogue Scenario. 3 Modeling the Dialogue Topic Our experiments in interactive Q/A were based on several scenarios that were presented to us as part of the ARDA Metrics Challenge Dialogue Workshop. Figure 2 illustrates one of these scenarios. It is to be noted that the general background consists of a list of subject areas, whereas the scenario is a narration in which several sub-topics are identified (e.g. production of toxins or exportation of materials). The creation of scenarios for interactive Q/A requires several different types of domain-specific knowledge and a level of operational expertise not available to most system developers. In addition to identifying a particular domain of interest, scenarios must specify the set of relevant actors, outcomes, and related topics that are expected to operate within the domain of interest, the salient associations that may exist between entities and events in the scenario, and the specific timeframe and location that bound the scenario in space and time. 
In addition, real-world scenarios also need to identify certain operational parameters as well, such as the identity of the scenario’s sponsor (i.e. the organization sponsoring the research) and audience (i.e. the organization receiving the information), as well as a series of evidence conditions which specify how much verification information must be subject to before it can be accepted as fact. We assume the set of sub-topics mentioned in the general background and the scenario can be used together to define a topic structure that will govern future interactions with the Q/A system. In order to model this structure, the topic representation that we create considers separate topic signatures for each sub-topic. The notion of topic signatures was first introduced in (Lin and Hovy, 2000). For each subtopic in a scenario, given (a) documents relevant to the sub-topic and (b) documents not relevant to the subtopic, a statistical method based on the likelihood ratio is used to discover a weighted list of the most topic-specific concepts, known as the topic signature. Later work by (Harabagiu, 2004) demonstrated that topic signatures can be further enhanced by discovering the most relevant relations that exist between pairs of concepts. However, both of these types of topic representations are limited by the fact that they require the identification of topic-relevant documents prior to the discovery of the topic signatures. In our experiments, we were only presented with a set of documents relevant to a particular scenario; no further relevance information was provided for individual subject areas or sub-topics. In order to solve the problem of finding relevant documents for each subtopic, we considered four different approaches: Approach 1: All documents in the CNS collection were initially clustered using K-Nearest Neighbor (KNN) clustering (Dudani, 1976). Each cluster that contained at least one keyword that described the sub-topic was deemed relevant to the topic. Approach 2: Since individual documents may contain discourse segments pertaining to different sub-topics, we first used TextTiling (Hearst, 1994) to automatically segment all of the documents in the CNS collection into individual text tiles. These individual discourse segments 207 then served as input to the KNN clustering algorithm described in Approach 1. Approach 3: In this approach, relevant documents were discovered simultaneously with the discovery of topic signatures. First, we associated a binary seed relation  for each each sub-topic   . (Seed relations were created both by hand and using the method presented in (Harabagiu, 2004).) Since seed relations are by definition relevant to a particular subtopic, they can be used to determine a binary partition of the document collection  into (1) a relevant set of documents  (that is, the documents relevant to relation  ) and (2) a set of non-relevant documents   . Inspired by the method presented in (Yangarber et al., 2000), a topic signature (as calculated by (Harabagiu, 2004)) is then produced for the set of documents in  . For each subtopic   defined as part of the dialogue scenario, documents relevant to a corresponding seed relation  are added to  iff the relation  meets the density criterion (as defined in (Yangarber et al., 2000)). If represents the set of documents where  is recognized, then the density criterion can be defined as:     . Once is added to  , then a new topic signature is calculated for  . 
Relations extracted from the new topic signature can then be used to determine a new document partition by re-iterating the discovery of the topic signature and of the documents relevant to each subtopic. Approach 4: Approach 4 implements the technique described in Approach 3, but operates at the level of discourse segments (or texttiles) rather than at the level of full documents. As with Approach 2, segments were produced using the TextTiling algorithm. In modeling the dialogue scenarios, we considered three types of topic-relevant relations: (1) structural relations, which represent hypernymy or meronymy relations between topic-relevant concepts, (2) definition relations, which uncover the characteristic properties of a concept, and (3) extraction relations, which model the most relevant events or states associated with a sub-topic. Although structural relations and definition relations are discovered reliably using patterns available from our Q/A system (Harabagiu et al., 2003), we found only extraction relations to be useful in determining the set of documents relevant to a subtopic. Structural relations were available from concept ontologies implemented in the Q/A system. The definition relations were identified by patterns used for processing definition questions. Extraction relations are discovered by processing documents in order to identify three types of relations, including: (1) syntactic attachment relations (including subject-verb, object-verb, and verb-PP relations), (2) predicate-argument relations, and (3) salience-based relations that can be used to encode long-distance dependencies between topic-relevant concepts. (Salience-based relations are discovered using a technique first reported in (Harabagiu, 2004) which approximates a Centering Theory-style approach (Kameyama, 1997) to the resolution of coreference.) Subtopic: Egypt’s production of toxins and BW agents Topic Signature: produce − phosphorous trichloride (TOXIN) house − ORGANIZATION cultivate − non−pathogenic Bacilus Subtilis (TOXIN) produce − mycotoxins (TOXIN) acquire − FACILITY Subtopic: Egypt’s allies and partners Topic Signature: provide − COUNTRY cultivate − COUNTRY supply − precursors cooperate − COUNTRY train − PERSON supply − know−how Figure 3: Example of two topic signatures acquired for the scenario illustrated in Figure 2. We made the extraction relations associated with each topic signature more general (a) by replacing words with their (morphological) root form (e.g. wounded with wound, weapons with weapon), (b) by replacing lexemes with their subsuming category from an ontology of 100,000 words (e.g. truck is replaced by VEHICLE, ARTIFACT, or OBJECT), and (c) by replacing each name with its name class (Egypt with COUNTRY). Figure 3 illustrates the topic signatures resulting for the scenario illustrated in Figure 2. Once extraction relations were obtained for a particular set of documents, the resulting set of relations were ranked according to a method proposed in (Yangarber, 2003). Under this approach, 208 the score associated with each relation is given by:          !   , where " #" represents the cardinality of the documents where the relation is identified, and   !   represents support associated with the relation .   !   is defined as the sum of the relevance of each document in :   !  $ %'&)( *,+ . . The relevance of a document that contains a topic-significant relation can be defined as: */+ .0 214365  (,7  143 8    9 , where :  represents the topic signature of the subtopic4. 
The accuracy (precision) of a relation, $Prec(r)$, is then given by: $Prec(r) = \frac{1}{|H|} \sum_{d \in H} \left[ Rel_i(d) - \sum_{j \neq i} Rel_j(d) \right]$. Here, $Rel_i(d)$ measures the relevance of a sub-topic $ST_i$ to a particular document $d$, while $Rel_j(d)$ measures the relevance of $d$ to another sub-topic $ST_j$. We use a different learner for each sub-topic in order to train simultaneously on each iteration. (The calculation of topic signatures continues to iterate until there are no more relations that can be added to the overall topic signature.) When the precision of a relation with respect to a sub-topic $ST_i$ is computed, it thus takes into account the negative evidence of its relevance to any other sub-topic $ST_j \neq ST_i$. If $Prec(r)$ falls below a fixed threshold, the relation is not included in the topic signature; otherwise, relations are ranked by the score $Score(r) = Prec(r) \cdot \log(Sup(r))$.

Representing topics in terms of relevant concepts and relations is important for the processing of questions asked within the context of a given topic. For interactive Q/A, however, the ideal topic-structured representation would be in the form of question-answer pairs (QUABs) that model the individual segments of the scenario. We have currently created two sets of QUABs: a handcrafted set and an automatically-generated set. For the manually-created set of QUABs, 4 linguists manually generated a total of 3210 question-answer pairs for the 8 dialogue scenarios considered in our experiments. In a separate effort, we devised a process for automatically populating the QUAB for each scenario. In order to generate question-answer pairs for each sub-topic, we first identified relevant text passages in the document collection to serve as "answers" and then generated individual questions that could be answered by each answer passage.

• Answer Identification: We defined an answer passage as a contiguous sequence of sentences with a positive answer rank and a passage price $p < 4$. To select answer passages for each sub-topic $ST_i$, we calculate an answer rank, $Rank(A)$, that sums the scores $Score(r_k)$ of each relation from the topic signature that is identified in the same text window. Initially, the text window is set to one sentence. (If the sentence is part of a quote, however, the text window is immediately expanded to encompass the entire sentence that contains the quote.) Each passage with $Rank(A) > 0$ is then considered to be a candidate answer passage. The text window of each candidate answer passage is then expanded to include the following sentence. If the answer rank does not increase with the addition of the succeeding sentence, then the price $p$ of the candidate answer passage is incremented by 1; otherwise it is decremented by 1. The text window of each candidate answer passage continues to expand until $p = 4$. Before the ranked list of candidate answers can be considered by the Question Generation module, answer passages with a positive price $p$ are stripped of their last $p$ sentences. A schematic sketch of this procedure is given below.
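The sketch below illustrates this expansion procedure. Sentence segmentation is taken as given, `relations_in` is a hypothetical stand-in for FERRET's relation recognizer, and the quote-handling detail is omitted; the demo data at the bottom are invented.

```python
# Schematic sketch of answer-passage identification as described above.
# `relations_in` is a hypothetical stand-in for FERRET's relation recognizer.

def answer_rank(sentences, signature_scores, relations_in):
    """Sum the topic-signature scores of all relations found in the window."""
    return sum(signature_scores.get(r, 0.0)
               for sent in sentences
               for r in relations_in(sent))

def find_answer_passages(sentences, signature_scores, relations_in, max_price=4):
    passages = []
    for start in range(len(sentences)):
        window = [sentences[start]]
        rank = answer_rank(window, signature_scores, relations_in)
        if rank <= 0:
            continue                      # needs a positive answer rank
        price, end = 0, start + 1
        while price < max_price and end < len(sentences):
            new_rank = answer_rank(window + [sentences[end]],
                                   signature_scores, relations_in)
            price += 1 if new_rank <= rank else -1
            window.append(sentences[end])
            rank, end = new_rank, end + 1
        if price > 0:
            window = window[:-price]      # strip the last `price` sentences
        passages.append((rank, " ".join(window)))
    return sorted(passages, key=lambda x: x[0], reverse=True)

if __name__ == "__main__":
    sents = ["Egypt produced mycotoxins.", "The facility was acquired in 1970.",
             "Unrelated sentence."]
    scores = {"produce-mycotoxins": 2.0, "acquire-FACILITY": 1.5}
    spot = lambda s: [r for r in scores if r.split("-")[0][:4] in s.lower()]
    print(find_answer_passages(sents, scores, spot))
```

In FERRET itself, the relation recognizer and the quote-expansion rule described above take the place of these placeholders; the sketch only captures the rank and price bookkeeping.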
[Figure 4: Associating Questions with Answers. The example answer "In the early 1970s, Egyptian President Anwar Sadat validates that Egypt has a BW stockpile" is processed into entities (E1: "in the early 1970s", TIME; E2: "Egyptian President Anwar Sadat", PERSON; E3: "Egypt", COUNTRY; E4: "BW stockpile", UNKNOWN; E5: "BW program") and predicates (P1 = "validate", P2 = "have", P3 = "admit"), linked by definitional, metonymic, part-whole, and relational references; question patterns such as "Who is X?" and "When did E3 P3 to P2 E4?" then yield questions like "Who is Anwar Sadat?" and "When did Egypt admit to having BW stockpiles?".]

• Question Generation: In order to automatically generate questions from answer passages, we considered the following two problems:

Problem 1: Every word in an answer passage can refer to an entity, a relation, or an event. In order for question generation to be successful, we must determine whether a particular reference is "interesting" enough to the scenario that it deserves to be mentioned in a topic-relevant question. For example, Figure 4 illustrates an answer that includes two predicates and four entities. In this case, four types of reference are used to associate these linguistic objects with other related objects: (a) definitional reference, used to link the entity "Anwar Sadat" to a corresponding attribute "Egyptian President", (b) metonymic reference, since the president can be coerced into the country he represents, (c) part-whole reference, since "BW stockpiles" (E4) necessarily imply the existence of a "BW program" (E5), and (d) relational reference, since validating is subsumed as part of the meaning of declaring (as determined by WordNet glosses), while admitting can be defined in terms of declaring, as in declaring [to be true].

Problem 2: We have found that the identification of the association between a candidate answer and a question depends on (a) the recognition of predicates and entities, based on the output of both a named entity recognizer and a semantic parser (Surdeanu et al., 2003), and their structuring into predicate-argument frames, (b) the resolution of reference (addressed in Problem 1), and (c) the recognition of implicit relations between predications stated in the answer. Some of these implicit relations are referential, as is the relation between the predicates P1 and P3 illustrated in Figure 4. A special case of implicit relations are the causal relations. Figure 5 illustrates an answer where a causal relation exists and is marked by the cue phrase because.

[Figure 5: Questions for Implied Causal Relations. The example answer "Egyptian Deputy Minister Mahmud Salim states that Egypt's enemies would never use BW because they are aware that the Egyptians have 'adequate means of retaliating without delay'" is analyzed into the predicates P′1 = "state", P′2 = "never use", P′3 = "be aware", P′4 = "have", the nominalization P″4 = "the possession", and the implied resultative P′5 = "obstacle"; the pattern "Does Egypt P′6 P″4(BW) as a P′5?" (with P′6 = "view") yields questions such as "Does Egypt view the possession of BW as an obstacle?" and "Does Egypt view the possession of BW as a deterrent?".]

Predicates like those in Figure 5 can be phrasal or negative (like P′2, "never use"). Causality is established between the predicates that ultimately determine the selection of the answer. The predicate P′4 ("have") can be substituted by its nominalization P″4 ("the possession"); since the argument of P′2 is BW, the same argument is transferred to P″4.
The causality implied by the answer in Figure 5 has two components: (1) the effect and (2) the result, which eliminates the semantic effect of the negative polarity item never by implying the predicate "obstacle" (P′5). The questions that are generated are based on question patterns associated with causal relations and therefore allow different degrees of specificity for the resultative, i.e. obstacle or deterrent.

We generated several questions for each answer passage. Questions were generated based on patterns that were acquired to model interrogations using relations between predicates and their arguments. Such interrogations are based on (1) associations between the answer type (e.g. DATE) and the question stem (e.g. "when") and (2) the relation between predicates, the question stem, and the words that determine the answer type (Narayanan and Harabagiu, 2004). In order to obtain these predicate-argument patterns, we used 30% (approximately 1500 questions) of the handcrafted question-answer pairs, selected at random from each of the 8 dialogue scenarios. As Figures 4 and 5 illustrate, we used patterns based on (a) embedded predicates and (b) causal or counterfactual predicates.

4 Managing Interactive Q/A Dialogues

As illustrated in Figure 1, the main idea behind managing dialogues in which interactions with the Q/A system occur is based on the notion of predictions, i.e. proposing to the user a small set of questions that tackle the same subject as her question (as illustrated in Table 1). The advantage is that the user can follow up with one of the pre-processed questions, which has a correct answer and resides in one of the QUABs. This enhances the effectiveness of the dialogue. It may also improve its efficiency, i.e. reduce the number of questions being asked, if the QUABs have good coverage of the subject areas of the scenario. Moreover, complex questions, which generally are not processed with high accuracy by current state-of-the-art Q/A systems, are associated with predictive questions that represent decompositions based on similarities between the predicates and arguments of the original question and those of the predicted questions.

The selection of the questions from the QUABs that are proposed for each user question is based on a similarity metric that ranks the QUAB questions. To compute the similarity metric, we have experimented with seven different metrics. The first four metrics were introduced in (Lytinen and Tomuro, 2002).

Similarity Metric 1 is based on two processing steps: (a) the content words of the questions are weighted using the tf-idf measure used in Information Retrieval, $w_i = tf_i \cdot \log(N / n_i)$, where $N$ is the number of questions in the QUAB, $n_i$ is the number of questions containing the term $t_i$, and $tf_i$ is the number of times $t_i$ appears in the question. This allows the user question and any QUAB question to be transformed into two weighted term vectors, $\vec{Q} = (w_1, \ldots, w_n)$ and $\vec{Q'} = (w'_1, \ldots, w'_n)$; (b) the term vector (cosine) similarity is used to compute the similarity between the user question and any question from the QUAB: $sim(Q, Q') = \frac{\sum_i w_i \, w'_i}{\sqrt{\sum_i w_i^2}\,\sqrt{\sum_i (w'_i)^2}}$.

Similarity Metric 2 is based on the percentage of user question terms that appear in the QUAB question. It is obtained by finding the intersection of the terms in the term vectors of the two questions. A compact sketch of these first two metrics is given below.
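The following sketch illustrates Similarity Metrics 1 and 2 under simplifying assumptions: questions are treated as plain lowercase token lists, with no stop-word removal or stemming; the example questions are taken loosely from Figure 6.

```python
import math
from collections import Counter

# Sketch of Similarity Metrics 1 (tf-idf weighted cosine) and 2 (term overlap).
# Tokenization is naive; both metrics are simplified for illustration.

def tfidf_vector(question, quab):
    """Weight each term by tf * log(N / n), with N = |QUAB|."""
    tf = Counter(question.lower().split())
    N = len(quab)
    weights = {}
    for term, freq in tf.items():
        n = sum(1 for q in quab if term in q.lower().split())
        weights[term] = freq * math.log(N / n) if n else 0.0
    return weights

def cosine(v1, v2):
    dot = sum(w * v2.get(t, 0.0) for t, w in v1.items())
    norm = math.sqrt(sum(w * w for w in v1.values())) * \
           math.sqrt(sum(w * w for w in v2.values()))
    return dot / norm if norm else 0.0

def term_overlap(user_q, quab_q):
    """Metric 2: fraction of user-question terms appearing in the QUAB question."""
    u, q = set(user_q.lower().split()), set(quab_q.lower().split())
    return len(u & q) / len(u) if u else 0.0

if __name__ == "__main__":
    quab = ["What CW does Iran produce?",
            "Where are Iran's stockpiles of CW?",
            "What are Iran's future CW plans?"]
    user = "What CW does Iran have?"
    v_user = tfidf_vector(user, quab)
    for q in quab:
        print(round(cosine(v_user, tfidf_vector(q, quab)), 3),
              round(term_overlap(user, q), 2), q)
```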
Similarity Metric 3 is based on semantic information available from WordNet. It involves: (a) finding the minimum path between WordNet concepts. Given two terms $t_1$ and $t_2$, with $m$ and $n$ WordNet senses $\{s_1^1, \ldots, s_1^m\}$ and $\{s_2^1, \ldots, s_2^n\}$ respectively, the semantic distance between the terms, $D(t_1, t_2)$, is defined as the minimum of all the possible pairwise semantic distances between their senses: $D(t_1, t_2) = \min_{i,j} d(s_1^i, s_2^j)$, where $d(s_1^i, s_2^j)$ is the path length between the senses $s_1^i$ and $s_2^j$; and (b) defining the semantic similarity between the user question $Q$ and the QUAB question $Q'$ by aggregating, for each term of one question, its best match in the other question (with term-to-term similarity inversely related to the distance $D$), normalized by the lengths of the two questions. A sketch of the sense-level distance computation is given below.
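A small sketch of the sense-level minimum-path distance, using NLTK's WordNet interface (the WordNet corpus must be installed first, e.g. via nltk.download("wordnet")); the conversion of the distance into a term similarity is a simplifying assumption rather than the paper's exact formulation.

```python
# Illustrative sketch of D(t1, t2), the minimum path length over all pairs of
# WordNet senses of two terms, as used by Similarity Metric 3.

from nltk.corpus import wordnet as wn

def term_distance(t1, t2):
    """D(t1, t2): minimum path length over all sense pairs of the two terms."""
    best = None
    for s1 in wn.synsets(t1):
        for s2 in wn.synsets(t2):
            d = s1.shortest_path_distance(s2)
            if d is not None and (best is None or d < best):
                best = d
    return best  # None if the terms share no connected senses

def term_similarity(t1, t2):
    """Turn the distance into a similarity in (0, 1]; an assumed mapping."""
    d = term_distance(t1, t2)
    return 0.0 if d is None else 1.0 / (1.0 + d)

if __name__ == "__main__":
    print(term_similarity("weapon", "missile"))
```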
Similarity Metric 4 is based on question-type similarity. Instead of using the question class determined by the question stem, whenever we could recognize the answer type expected by the question, we used it for matching. Only as a backoff did we use a question-type similarity based on a matrix akin to the one reported in (Lytinen and Tomuro, 2002).

Similarity Metric 5 is based on question concepts rather than question terms. In order to translate question terms into concepts, we replaced (a) question stems (i.e. a WH-word + NP construction) with expected answer types (taken from the answer type hierarchy employed by FERRET's Q/A system) and (b) named entities with their corresponding classes. Remaining nouns and verbs were also replaced with their WordNet semantic classes. (In the case of ambiguous nouns and verbs associated with multiple WordNet classes, all possible classes for a term were considered in matching.) Each concept was then associated with a weight: concepts derived from named entity classes were weighted heavier than concepts derived from answer types, which were in turn weighted heavier than concepts taken from WordNet classes. Similarity was then computed across "matching" concepts. The resulting similarity score was based on three variables: $S$, the sum of the weights of all concepts matched between a user query ($Q$) and a QUAB query ($Q'$); $U$, the sum of the weights of all unmatched concepts in $Q$; and $V$, the sum of the weights of all unmatched concepts in $Q'$. The similarity between $Q$ and $Q'$ was calculated as $S - (\alpha \cdot U) - (\alpha' \cdot V)$, where $\alpha$ and $\alpha'$ are coefficients used to penalize the contribution of unmatched concepts in $Q$ and $Q'$ respectively. (We set $\alpha$ = 0.4 and $\alpha'$ = 0.1 in our experiments.)

Similarity Metric 6 is based on the fact that the QUAB questions are clustered according to their mapping to a vector of important concepts in the QUAB. The clustering was done using the K-Nearest Neighbor (KNN) method (Dudani, 1976). Instead of measuring the similarity between the user question and each question in the QUAB, similarities are computed only between the user question and the centroid of each cluster.

Similarity Metric 7 was derived from the results of Similarity Metrics 5 and 6 above. In this case, if the QUAB question ($Q'$) deemed most similar to a user question ($Q$) under Similarity Metric 5 is contained in the cluster of QUAB questions deemed most similar to $Q$ under Similarity Metric 6, then $Q'$ receives a cluster adjustment score in order to boost its ranking within its QUAB cluster. The cluster adjustment score is computed from the question's similarity score and from the difference in rank between the centroid of the cluster and the previous rank of the QUAB question $Q'$.

In the currently-implemented version of FERRET, we used Similarity Metric 5 to automatically identify the set of 10 QUAB questions that were most similar to a user's question; a simplified sketch of this selection step is given below. These question-and-answer pairs were then returned to the user, along with answers from FERRET's automatic Q/A system, as potential continuations of the Q/A dialogue. We used the remaining 6 similarity metrics described in this section to manually assess the impact of similarity on a Q/A dialogue.
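The following sketch illustrates the concept-based score of Similarity Metric 5 and its use to select the top QUAB suggestions. Concept extraction is assumed to have been performed already (each question is a mapping from concepts to weights), and all names and data are invented for the example; only the penalty coefficients follow the values reported above.

```python
# Sketch of Similarity Metric 5 (S - alpha*U - alpha'*V) and of the selection
# of the top-k QUAB suggestions. Concept extraction is assumed to be done:
# each question is represented as a dict {concept: weight}.

ALPHA, ALPHA_PRIME = 0.4, 0.1   # penalty coefficients reported above

def concept_similarity(user_concepts, quab_concepts,
                       alpha=ALPHA, alpha_prime=ALPHA_PRIME):
    matched = set(user_concepts) & set(quab_concepts)
    s = sum(user_concepts[c] for c in matched)
    u = sum(w for c, w in user_concepts.items() if c not in matched)
    v = sum(w for c, w in quab_concepts.items() if c not in matched)
    return s - alpha * u - alpha_prime * v

def top_k_suggestions(user_concepts, quab, k=10):
    """Rank QUAB entries (question, concepts, answer) and keep the k best."""
    ranked = sorted(quab,
                    key=lambda entry: concept_similarity(user_concepts, entry[1]),
                    reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    user = {"COUNTRY": 1.0, "CW": 1.0, "produce": 0.5}
    quab = [
        ("What CW does Iran produce?",
         {"COUNTRY": 1.0, "CW": 1.0, "produce": 0.5}, "..."),
        ("Where are Iran's CW facilities located?",
         {"COUNTRY": 1.0, "CW": 1.0, "LOCATION": 0.8}, "..."),
        ("Who is Anwar Sadat?",
         {"PERSON": 1.0, "DEFINITION": 0.8}, "..."),
    ]
    for question, _, _ in top_k_suggestions(user, quab, k=2):
        print(question)
```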
5 Experiments with Interactive Q/A Dialogues

To date, we have used FERRET to produce over 90 Q/A dialogues with human users. Figure 6 illustrates three turns from a real dialogue with a human user investigating Iran's chemical weapons program.

[Figure 6: A sample interactive Q/A dialogue.
Q1: Does Iran have an indigenous CW program? QUABs: (1a) How did Iran start its CW program? (1b) Has the plant at Qazvin been linked to CW production? (1c) What CW does Iran produce? Answer (A1): Although Iran is making a concerted effort to attain an independent production capability for all aspects of its chemical weapons program, it remains dependent on foreign sources for chemical warfare-related technologies.
Q2: Where are Iran's CW facilities located? QUABs: (2a) What factories in Iran could produce CW? (2b) Where are Iran's stockpiles of CW? (2c) Where has Iran bought equipment to produce CW? Answer (A2): According to several sources, Iran's primary suspected chemical weapons production facility is located in the city of Damghan.
Q3: What is Iran's goal for its CW program? QUABs: (3a) What motivated Iran to expand its chemical weapons program? (3b) How do CW figure into Iran's long-term strategic plan? (3c) What are Iran's future CW plans? Answer (A3): In their pursuit of regional hegemony, Iran and Iraq probably regard CW weapons and missiles as necessary to support their political and military objectives. Possession of chemical weapons would likely lead to increased intimidation of their Gulf neighbors, as well as increased willingness to confront the United States.]

As can be seen, coherence can be established between the user's questions and the system's answers (e.g. Q3 is related to both A1 and A3) as well as between the QUABs and the user's follow-up questions (e.g. QUAB (1b) is more related to Q2 than to either Q1 or A1). Coherence alone is not sufficient to analyze the quality of interactions, however. In order to better understand interactive Q/A dialogues, we have conducted three sets of experiments with human users of FERRET. In these experiments, users were allotted two hours to interact with FERRET to gather information requested by a dialogue scenario similar to the one presented in Figure 2. In Experiment 1 (E1), 8 U.S. Navy Reserve (USNR) intelligence analysts used FERRET to research 8 different scenarios related to chemical and biological weapons. Experiment 2 and Experiment 3 considered several of the same scenarios addressed in E1: E2 included 24 mixed teams of analysts and novice users working with 2 scenarios, while E3 featured 4 USNR analysts working with 6 of the original 8 scenarios. (Details for each experiment are provided in Table 2.) Users were also given a task to focus their research; in E1 and E3, users prepared a short report detailing their findings; in E2, users were given a list of "challenge" questions to answer.

Exp | Users | QUABs? | Scenarios | Topics
E1  | 8     | Yes    | 8         | Egypt BW, Russia CW, South Africa CW, India CW, North Korea CBW, Pakistan CW, Libya CW, Iran CW
E2  | 24    | Yes    | 2         | Egypt BW, Russia CW
E3  | 4     | No     | 6         | Egypt BW, Russia CW, North Korea CBW, Pakistan CW, India CW, Libya CW, Iran CW
Table 2: Experiment details

In E1 and E2, users had access to a total of 3210 QUAB questions that had been hand-created by developers for the 8 dialogue scenarios. (Table 3 provides totals for each scenario.) In E3, users performed research with a version of FERRET that included no QUABs at all.

Scenario     | Handcrafted QUABs
INDIA        | 460
LIBYA        | 414
IRAN         | 522
NORTH KOREA  | 316
PAKISTAN     | 322
SOUTH AFRICA | 454
RUSSIA       | 366
EGYPT        | 356
Total        | 3210
Table 3: QUAB distribution over scenarios

We have evaluated FERRET by measuring efficiency, effectiveness, and user satisfaction.

Efficiency: FERRET's QUAB collection enabled users in our experiments to find more relevant information by asking fewer questions. When manually-created QUABs were available (E1 and E2), users submitted an average of 12.25 questions each session. When no QUABs were available (E3), users entered an average of 44.5 questions per session. Table 4 lists the number of QUAB question-answer pairs selected by users and the number of user questions entered during the 8 scenarios considered in E1. In E2, freed from the task of writing a research report, users asked significantly (p < 0.05) fewer questions and selected fewer QUABs than they did in E1 (see Table 5).

Country    | n  | QUAB (avg.) | User Q (avg.) | Total (avg.)
India      | 2  | 21.5        | 13.0          | 34.5
Libya      | 2  | 12.0        | 9.0           | 21.0
Iran       | 2  | 18.5        | 11.0          | 29.5
N. Korea   | 2  | 16.5        | 7.5           | 24.0
Pakistan   | 2  | 29.5        | 15.5          | 45.0
S. Africa  | 2  | 14.5        | 6.0           | 20.5
Russia     | 2  | 13.5        | 15.5          | 29.0
Egypt      | 2  | 15.0        | 20.5          | 35.5
TOTAL (E1) | 16 | 17.63       | 12.25         | 29.88
Table 4: Efficiency of Dialogues in Experiment 1

Country    | n  | QUAB (avg.) | User Q (avg.) | Total (avg.)
Russia     | 24 | 8.2         | 5.5           | 13.7
Egypt      | 24 | 10.8        | 7.6           | 18.4
TOTAL (E2) | 48 | 9.50        | 6.55          | 16.05
Table 5: Efficiency of Dialogues in Experiment 2

Effectiveness: QUAB question-answer pairs also improved the overall accuracy of the answers returned by FERRET. To measure the effectiveness of a Q/A dialogue, human annotators were used to perform a post-hoc analysis of how relevant the QUAB pairs returned by FERRET were to each question entered by a user: each QUAB pair returned was graded as "relevant" or "irrelevant" to a user question in a forced-choice task. Aggregate relevance scores were used to calculate (1) the percentage of relevant QUAB pairs returned and (2) the mean reciprocal rank (MRR) for each user question. MRR is defined as $\frac{1}{n} \sum_{i=1}^{n} \frac{1}{r_i}$, where $r_i$ is the lowest rank of any relevant answer for the $i$-th user query. (We chose MRR as our scoring metric because it reflects the fact that a user is most likely to examine the first few answers from any system, but that all correct answers returned by the system have some value, because users will sometimes examine a very large list of query results.)
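For concreteness, a small worked example of the MRR computation (the relevance judgments are invented):

```python
# Mean reciprocal rank over a set of user queries. For each query we are given
# the rank of the first relevant response (None if no response was relevant);
# the example judgments below are invented.

def mean_reciprocal_rank(first_relevant_ranks):
    if not first_relevant_ranks:
        return 0.0
    return sum(0.0 if r is None else 1.0 / r
               for r in first_relevant_ranks) / len(first_relevant_ranks)

if __name__ == "__main__":
    # Four user queries: relevant answers first appeared at ranks 1, 2, 5,
    # and never for the last query.
    print(mean_reciprocal_rank([1, 2, 5, None]))   # 0.425
```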
Table 6 describes the performance of FERRET when each of the 7 similarity measures presented in Section 4 is used to return QUAB pairs in response to a query. When only answers from FERRET's automatic Q/A system were available to users, only 15.7% of system responses were deemed to be relevant to a user's query. In contrast, when manually-generated QUAB pairs were introduced, as high as 84% of the system's responses were deemed to be relevant. The results listed in Table 6 show that the best metric is Similarity Metric 5. These results suggest that the selection of relevant questions depends on sophisticated similarity measures that rely on conceptual hierarchies and semantic recognizers.

             | % of Top 5 Responses Relevant to User Q | % of Top 1 Responses Relevant to User Q | MRR
Without QUAB | 15.73%                                  | 26.85%                                  | 0.325
Similarity 1 | 82.61%                                  | 60.63%                                  | 0.703
Similarity 2 | 79.95%                                  | 58.45%                                  | 0.681
Similarity 3 | 79.47%                                  | 56.04%                                  | 0.664
Similarity 4 | 78.26%                                  | 46.14%                                  | 0.592
Similarity 5 | 84.06%                                  | 68.36%                                  | 0.753
Similarity 6 | 81.64%                                  | 56.04%                                  | 0.671
Similarity 7 | 84.54%                                  | 64.01%                                  | 0.730
Table 6: Effectiveness of dialogues

We evaluated the quality of each of the four sets of automatically-generated QUABs in a similar fashion. For each question submitted by a user in E1, E2, and E3, we collected the top 5 QUAB question-answer pairs (as determined by Similarity Metric 5) that FERRET returned. As with the manually-generated QUABs, the automatically-generated pairs were submitted to human assessors, who annotated each as "relevant" or "irrelevant" to the user's query. Aggregate scores are presented in Table 7.

           | Egypt                               |       | Russia                              |
Approach   | % of Top 5 Responses Rel. to User Q | MRR   | % of Top 5 Responses Rel. to User Q | MRR
Approach 1 | 40.01%                              | 0.295 | 60.25%                              | 0.310
Approach 2 | 36.00%                              | 0.243 | 72.00%                              | 0.475
Approach 3 | 44.62%                              | 0.271 | 60.00%                              | 0.297
Approach 4 | 68.05%                              | 0.510 | 68.00%                              | 0.406
Table 7: Quality of QUABs acquired automatically

User Satisfaction: Users were consistently satisfied with their interactions with FERRET. In all three experiments, respondents claimed that they found that FERRET (1) gave meaningful answers, (2) provided useful suggestions, (3) helped answer specific questions, and (4) promoted their general understanding of the issues considered in the scenario. Complete results of this study are presented in Table 8. (Evaluation scale: 1 = does not describe the system, 5 = completely describes the system.)

Factor                         | E1   | E2   | E3
Promoted understanding         | 3.40 | 3.20 | 3.75
Helped with specific questions | 3.70 | 3.60 | 3.25
Made good use of questions     | 3.40 | 3.55 | 3.00
Gave new scenario insights     | 3.00 | 3.10 | 2.20
Gave good collection coverage  | 3.75 | 3.70 | 3.75
Stimulated user thinking       | 3.50 | 3.20 | 2.75
Easy to use                    | 3.50 | 3.55 | 4.10
Expanded understanding         | 3.40 | 3.20 | 3.00
Gave meaningful answers        | 4.10 | 3.60 | 2.75
Was helpful                    | 4.00 | 3.75 | 3.25
Helped with new search methods | 2.75 | 3.05 | 2.25
Provided novel suggestions     | 3.25 | 3.40 | 2.65
Is ready for work environment  | 2.85 | 2.80 | 3.25
Would speed up work            | 3.25 | 3.25 | 3.00
Overall liking of the system   | 3.75 | 3.60 | 3.75
Table 8: User Satisfaction Survey Results

6 Conclusions

We believe that the quality of Q/A interactions depends on the modeling of scenario topics. An ideal model is provided by question-answer databases (QUABs) that are created off-line and then used to make suggestions to a user of potentially relevant continuations of a discourse. In this paper, we have presented FERRET, an interactive Q/A system which makes use of a novel Q/A architecture that integrates QUAB question-answer pairs into the processing of questions. Experiments with FERRET have shown that, in addition to being rapidly adopted by users as valid suggestions, the incorporation of QUABs into Q/A can greatly improve the overall accuracy of an interactive Q/A dialogue.

References

S. Dudani. 1976. The distance-weighted k-nearest-neighbour rule. IEEE Transactions on Systems, Man, and Cybernetics, SMC-6(4):325–327.

S. Harabagiu, D. Moldovan, C. Clark, M. Bowden, J. Williams, and J. Bensley. 2003. Answer Mining by Combining Extraction Techniques with Abductive Reasoning.
In Proceedings of the Twelfth Text Retrieval Conference (TREC 2003).

Sanda Harabagiu. 2004. Incremental Topic Representations. In Proceedings of the 20th COLING Conference, Geneva, Switzerland.

Marti Hearst. 1994. Multi-Paragraph Segmentation of Expository Text. In Proceedings of the 32nd Meeting of the Association for Computational Linguistics, pages 9–16.

Megumi Kameyama. 1997. Recognizing Referential Links: An Information Extraction Perspective. In Workshop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts (ACL-97/EACL-97), pages 46–53.

Chin-Yew Lin and Eduard Hovy. 2000. The Automated Acquisition of Topic Signatures for Text Summarization. In Proceedings of the 18th COLING Conference, pages 495–501.

S. Lytinen and N. Tomuro. 2002. The Use of Question Types to Match Questions in FAQFinder. In Papers from the 2002 AAAI Spring Symposium on Mining Answers from Texts and Knowledge Bases, pages 46–53.

Srini Narayanan and Sanda Harabagiu. 2004. Question Answering Based on Semantic Structures. In Proceedings of the 20th COLING Conference, Geneva, Switzerland.

Mihai Surdeanu and Sanda M. Harabagiu. 2002. Infrastructure for open-domain information extraction. In Proceedings of the Conference on Human Language Technology (HLT-2002).

Mihai Surdeanu, Sanda M. Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument structures for information extraction. In Proceedings of ACL, pages 8–15.

Roman Yangarber, Ralph Grishman, Pasi Tapanainen, and Silja Huttunen. 2000. Automatic Acquisition of Domain Knowledge for Information Extraction. In Proceedings of the 18th COLING Conference, pages 940–946.

Roman Yangarber. 2003. Counter-Training in Discovery of Semantic Patterns. In Proceedings of the 41st Meeting of the Association for Computational Linguistics, pages 343–350.