Convolution Kernels with Feature Selection for Natural Language Processing Tasks Jun Suzuki, Hideki Isozaki and Eisaku Maeda NTT Communication Science Laboratories, NTT Corp. 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto,619-0237 Japan {jun, isozaki, maeda}@cslab.kecl.ntt.co.jp Abstract Convolution kernels, such as sequence and tree kernels, are advantageous for both the concept and accuracy of many natural language processing (NLP) tasks. Experiments have, however, shown that the over-fitting problem often arises when these kernels are used in NLP tasks. This paper discusses this issue of convolution kernels, and then proposes a new approach based on statistical feature selection that avoids this issue. To enable the proposed method to be executed efficiently, it is embedded into an original kernel calculation process by using sub-structure mining algorithms. Experiments are undertaken on real NLP tasks to confirm the problem with a conventional method and to compare its performance with that of the proposed method. 1 Introduction Over the past few years, many machine learning methods have been successfully applied to tasks in natural language processing (NLP). Especially, state-of-the-art performance can be achieved with kernel methods, such as Support Vector Machine (Cortes and Vapnik, 1995). Examples include text categorization (Joachims, 1998), chunking (Kudo and Matsumoto, 2002) and parsing (Collins and Duffy, 2001). Another feature of this kernel methodology is that it not only provides high accuracy but also allows us to design a kernel function suited to modeling the task at hand. Since natural language data take the form of sequences of words, and are generally analyzed using discrete structures, such as trees (parsed trees) and graphs (relational graphs), discrete kernels, such as sequence kernels (Lodhi et al., 2002), tree kernels (Collins and Duffy, 2001), and graph kernels (Suzuki et al., 2003a), have been shown to offer excellent results. These discrete kernels are related to convolution kernels (Haussler, 1999), which provides the concept of kernels over discrete structures. Convolution kernels allow us to treat structural features without explicitly representing the feature vectors from the input object. That is, convolution kernels are well suited to NLP tasks in terms of both accuracy and concept. Unfortunately, experiments have shown that in some cases there is a critical issue with convolution kernels, especially in NLP tasks (Collins and Duffy, 2001; Cancedda et al., 2003; Suzuki et al., 2003b). That is, the over-fitting problem arises if large “substructures” are used in the kernel calculations. As a result, the machine learning approach can never be trained efficiently. To solve this issue, we generally eliminate large sub-structures from the set of features used. However, the main reason for using convolution kernels is that we aim to use structural features easily and efficiently. If use is limited to only very small structures, it negates the advantages of using convolution kernels. This paper discusses this issue of convolution kernels, and proposes a new method based on statistical feature selection. The proposed method deals only with those features that are statistically significant for kernel calculation, large significant substructures can be used without over-fitting. Moreover, the proposed method can be executed efficiently by embedding it in an original kernel calculation process by using sub-structure mining algorithms. 
In the next section, we provide a brief overview of convolution kernels. Section 3 discusses one issue of convolution kernels, the main topic of this paper, and introduces some conventional methods for solving this issue. In Section 4, we propose a new approach based on statistical feature selection to offset the issue of convolution kernels using an example consisting of sequence kernels. In Section 5, we briefly discuss the application of the proposed method to other convolution kernels. In Section 6, we compare the performance of conventional methods with that of the proposed method by using real NLP tasks: question classification and sentence modality identification. The experimental results described in Section 7 clarify the advantages of the proposed method. 2 Convolution Kernels Convolution kernels have been proposed as a concept of kernels for discrete structures, such as sequences, trees and graphs. This framework defines the kernel function between input objects as the convolution of “sub-kernels”, i.e. the kernels for the decompositions (parts) of the objects. Let X and Y be discrete objects. Conceptually, convolution kernels K(X, Y ) enumerate all substructures occurring in X and Y and then calculate their inner product, which is simply written as: K(X, Y ) = ⟨φ(X), φ(Y )⟩= X i φi(X) · φi(Y ). (1) φ represents the feature mapping from the discrete object to the feature space; that is, φ(X) = (φ1(X), . . . , φi(X), . . .). With sequence kernels (Lodhi et al., 2002), input objects X and Y are sequences, and φi(X) is a sub-sequence. With tree kernels (Collins and Duffy, 2001), X and Y are trees, and φi(X) is a sub-tree. When implemented, these kernels can be efficiently calculated in quadratic time by using dynamic programming (DP). Finally, since the size of the input objects is not constant, the kernel value is normalized using the following equation. ˆK(X, Y ) = K(X, Y ) p K(X, X) · K(Y, Y ) (2) The value of ˆK(X, Y ) is from 0 to 1, ˆK(X, Y ) = 1 if and only if X = Y . 2.1 Sequence Kernels To simplify the discussion, we restrict ourselves hereafter to sequence kernels. Other convolution kernels are briefly addressed in Section 5. Many kinds of sequence kernels have been proposed for a variety of different tasks. This paper basically follows the framework of word sequence kernels (Cancedda et al., 2003), and so processes gapped word sequences to yield the kernel value. Let Σ be a set of finite symbols, and Σn be a set of possible (symbol) sequences whose sizes are n or less that are constructed by symbols in Σ. The meaning of “size” in this paper is the number of symbols in the sub-structure. Namely, in the case of sequence, size n means length n. S and T can represent any sequence. si and tj represent the ith and jth symbols in S and T, respectively. Therefore, a S T 1 2 1 1 2 1 λ + λ λ 1 λ λ 1 1 1 1 a, b, c, aa, ab, ac, ba, bc, aba, aac, abc, bac, abac abc S = abac T = prod. 1 0 1 0 1 0 0 1 0 2 1 1 0 1 3 λ λ + 0 λ 0 0 λ 0 (a, b, c, ab, ac, bc, abc) (a, b, c, aa, ab, ac, ba, bc, aba, aac, abc, bac, abac) u 3 5 3λ λ + + kernel value λ sequences sub-sequences 1 0 0 Figure 1: Example of sequence kernel output sequence S can be written as S = s1 . . . si . . . s|S|, where |S| represents the length of S. If sequence u is contained in sub-sequence S[i : j] def = si . . . sj of S (allowing the existence of gaps), the position of u in S is written as i = (i1 : i|u|). The length of S[i] is l(i) = i|u| −i1 + 1. 
For example, if u = ab and S = cacbd, then i = (2 : 4) and l(i) = 4 −2 + 1 = 3. By using the above notations, sequence kernels can be defined as: KSK(S, T) = X u∈Σn X i|u=S[i] λγ(i) X j|u=T [j] λγ(j), (3) where λ is the decay factor that handles the gap present in a common sub-sequence u, and γ(i) = l(i)−|u|. In this paper, | means “such that”. Figure 1 shows a simple example of the output of this kernel. However, in general, the number of features |Σn|, which is the dimension of the feature space, becomes very high, and it is computationally infeasible to calculate Equation (3) explicitly. The efficient recursive calculation has been introduced in (Cancedda et al., 2003). To clarify the discussion, we redefine the sequence kernels with our notation. The sequence kernel can be written as follows: KSK(S, T) = n X m=1 X 1≤i≤|S| X 1≤j≤|T | Jm(Si, Tj). (4) where Si and Tj represent the sub-sequences Si = s1, s2, . . . , si and Tj = t1, t2, . . . , tj, respectively. Let Jm(Si, Tj) be a function that returns the value of common sub-sequences if si = tj. Jm(Si, Tj) = J′ m−1(Si, Tj) · I(si, tj) (5) I(si, tj) is a function that returns a matching value between si and tj. This paper defines I(si, tj) as an indicator function that returns 1 if si = tj, otherwise 0. Then, J′ m(Si, Tj) and J′′ m(Si, Tj) are introduced to calculate the common gapped sub-sequences between Si and Tj. J′ m(Si, Tj) = 1 if m = 0, 0 if j = 0 and m > 0, λJ′ m(Si, Tj−1) + J′′ m(Si, Tj−1) otherwise (6) J′′ m(Si, Tj) = 0 if i = 0, λJ′′ m(Si−1, Tj) + Jm(Si−1, Tj) otherwise (7) If we calculate Equations (5) to (7) recursively, Equation (4) provides exactly the same value as Equation (3). 3 Problem of Applying Convolution Kernels to NLP tasks This section discusses an issue that arises when applying convolution kernels to NLP tasks. According to the original definition of convolution kernels, all the sub-structures are enumerated and calculated for the kernels. The number of substructures in the input object usually becomes exponential against input object size. As a result, all kernel values ˆK(X, Y ) are nearly 0 except the kernel value of the object itself, ˆK(X, X), which is 1. In this situation, the machine learning process becomes almost the same as memory-based learning. This means that we obtain a result that is very precise but with very low recall. To avoid this, most conventional methods use an approach that involves smoothing the kernel values or eliminating features based on the sub-structure size. For sequence kernels, (Cancedda et al., 2003) use a feature elimination method based on the size of sub-sequence n. This means that the kernel calculation deals only with those sub-sequences whose size is n or less. For tree kernels, (Collins and Duffy, 2001) proposed a method that restricts the features based on sub-trees depth. These methods seem to work well on the surface, however, good results are achieved only when n is very small, i.e. n = 2. The main reason for using convolution kernels is that they allow us to employ structural features simply and efficiently. When only small sized substructures are used (i.e. n = 2), the full benefits of convolution kernels are missed. Moreover, these results do not mean that larger sized sub-structures are not useful. In some cases we already know that larger sub-structures are significant features as regards solving the target problem. 
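As a concrete reference point for the sequence kernel of Section 2.1, the following sketch (our own illustrative Python, not the authors' implementation; characters stand in for word tokens) computes Equation (3) by brute-force enumeration and Equations (4)-(7) by dynamic programming. The brute-force route is exponential and only intended for checking the DP on toy inputs; both routes give λ³ + 3λ + 5 for the Figure 1 pair S = abac, T = abc, and Equation (2)'s normalization is included at the end.

from itertools import combinations

def explicit_kernel(s, t, n, lam):
    """Equation (3) by brute force: enumerate every gapped sub-sequence u with
    |u| <= n, weight each occurrence i by lam**gamma(i) = lam**(l(i) - |u|),
    and take the inner product of the resulting feature vectors."""
    def phi(seq):
        feats = {}
        for m in range(1, n + 1):
            for idx in combinations(range(len(seq)), m):
                u = tuple(seq[k] for k in idx)
                gaps = (idx[-1] - idx[0] + 1) - m          # gamma(i) = l(i) - |u|
                feats[u] = feats.get(u, 0.0) + lam ** gaps
        return feats
    fs, ft = phi(s), phi(t)
    return sum(w * ft.get(u, 0.0) for u, w in fs.items())

def dp_kernel(s, t, n, lam):
    """Equations (4)-(7): the same value by dynamic programming, with tables
    J[m][i][j], Jp[m][i][j] (J'_m) and Jpp[m][i][j] (J''_m); index 0 is the empty prefix."""
    S, T = len(s), len(t)
    J   = [[[0.0] * (T + 1) for _ in range(S + 1)] for _ in range(n + 1)]
    Jp  = [[[0.0] * (T + 1) for _ in range(S + 1)] for _ in range(n + 1)]
    Jpp = [[[0.0] * (T + 1) for _ in range(S + 1)] for _ in range(n + 1)]
    for m in range(n + 1):
        for i in range(S + 1):
            for j in range(T + 1):
                # Equation (7)
                Jpp[m][i][j] = 0.0 if i == 0 else lam * Jpp[m][i - 1][j] + J[m][i - 1][j]
                # Equation (6)
                if m == 0:
                    Jp[m][i][j] = 1.0
                else:
                    Jp[m][i][j] = 0.0 if j == 0 else lam * Jp[m][i][j - 1] + Jpp[m][i][j - 1]
                # Equation (5): J_m is non-zero only where s_i = t_j
                match = m >= 1 and i >= 1 and j >= 1 and s[i - 1] == t[j - 1]
                J[m][i][j] = Jp[m - 1][i][j] if match else 0.0
    # Equation (4)
    return sum(J[m][i][j] for m in range(1, n + 1)
               for i in range(1, S + 1) for j in range(1, T + 1))

def normalized(kernel, s, t, n, lam):
    """Equation (2): length normalisation."""
    return kernel(s, t, n, lam) / (kernel(s, s, n, lam) * kernel(t, t, n, lam)) ** 0.5

if __name__ == "__main__":
    lam = 0.5
    k_explicit = explicit_kernel("abac", "abc", 3, lam)
    k_dp = dp_kernel("abac", "abc", 3, lam)
    assert abs(k_explicit - k_dp) < 1e-9
    print(k_dp, lam ** 3 + 3 * lam + 5)        # both 6.625, i.e. lambda^3 + 3*lambda + 5
    print(normalized(dp_kernel, "abac", "abc", 3, lam))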
That is, these significant larger sub-structures, Table 1: Contingency table and notation for the chisquared value c ¯c P row u Ouc = y Ou¯c Ou = x ¯u O¯uc O¯u¯c O¯u P column Oc = M O¯c N which the conventional methods cannot deal with efficiently, should have a possibility of improving the performance furthermore. The aim of the work described in this paper is to be able to use any significant sub-structure efficiently, regardless of its size, to solve NLP tasks. 4 Proposed Feature Selection Method Our approach is based on statistical feature selection in contrast to the conventional methods, which use sub-structure size. For a better understanding, consider the twoclass (positive and negative) supervised classification problem. In our approach we test the statistical deviation of all the sub-structures in the training samples between the appearance of positive samples and negative samples. This allows us to select only the statistically significant sub-structures when calculating the kernel value. Our approach, which uses a statistical metric to select features, is quite natural. We note, however, that kernels are calculated using the DP algorithm. Therefore, it is not clear how to calculate kernels efficiently with a statistical feature selection method. First, we briefly explain a statistical metric, the chisquared (χ2) value, and provide an idea of how to select significant features. We then describe a method for embedding statistical feature selection into kernel calculation. 4.1 Statistical Metric: Chi-squared Value There are many kinds of statistical metrics, such as chi-squared value, correlation coefficient and mutual information. (Rogati and Yang, 2002) reported that chi-squared feature selection is the most effective method for text classification. Following this information, we use χ2 values as statistical feature selection criteria. Although we selected χ2 values, any other statistical metric can be used as long as it is based on the contingency table shown in Table 1. We briefly explain how to calculate the χ2 value by referring to Table 1. In the table, c and ¯c represent the names of classes, c for the positive class S T 1 2 1 1 2 1 λ + λ λ 1 λ λ 1 ( ) 2 u χ 0.1 0.5 1.2 1 1 1 1.5 0.9 0.8 a, b, c, aa, ab, ac, ba, bc, aba, aac, abc, bac, abac abc S = abac T = prod. 1 0 1 0 1 0 0 1 0 2 1 1 0 1 3 λ λ + 0 λ 0 0 λ 0 1.0 τ = threshold 2.5 1 1 λ (a, b, c, ab, ac, bc, abc) (a, b, c, aa, ab, ac, ba, bc, aba, aac, abc, bac, abac) u 3 5 3λ λ + + 2 λ + 0 0 0 0 2 1 1 0 1 3 λ λ + 0 λ 0 0 λ 0 kernel value kernel value under the feature selection feature selection λ sequences sub-sequences 1 0 0 0 Figure 2: Example of statistical feature selection and ¯c for the negative class. Ouc, Ou¯c, O¯uc and O¯u¯c represent the number of u that appeared in the positive sample c, the number of u that appeared in the negative sample ¯c, the number of u that did not appear in c, and the number of u that did not appear in ¯c, respectively. Let y be the number of samples of positive class c that contain sub-sequence u, and x be the number of samples that contain u. Let N be the total number of (training) samples, and M be the number of positive samples. Since N and M are constant for (fixed) data, χ2 can be written as a function of x and y, χ2(x, y) = N(Ouc · O¯u¯c −O¯uc · Ou¯c)2 Ou · O¯u · Oc · O¯c . (8) χ2 expresses the normalized deviation of the observation from the expectation. We simply represent χ2(x, y) as χ2(u). 4.2 Feature Selection Criterion The basic idea of feature selection is quite natural. 
First, we decide the threshold τ of the χ2 value. If χ2(u) < τ holds, that is, u is not statistically significant, then u is eliminated from the features and the value of u is presumed to be 0 for the kernel value. The sequence kernel with feature selection (FSSK) can be defined as follows: KFSSK(S, T) = X τ≤χ2(u)|u∈Σn X i|u=S[i] λγ(i) X j|u=T [j] λγ(j). (9) The difference between Equations (3) and (9) is simply the condition of the first summation. FSSK selects significant sub-sequence u by using the condition of the statistical metric τ ≤χ2(u). Figure 2 shows a simple example of what FSSK calculates for the kernel value. 4.3 Efficient χ2(u) Calculation Method It is computationally infeasible to calculate χ2(u) for all possible u with a naive exhaustive method. In our approach, we use a sub-structure mining algorithm to calculate χ2(u). The basic idea comes from a sequential pattern mining technique, PrefixSpan (Pei et al., 2001), and a statistical metric pruning (SMP) method, Apriori SMP (Morishita and Sese, 2000). By using these techniques, all the significant sub-sequences u that satisfy τ ≤χ2(u) can be found efficiently by depth-first search and pruning. Below, we briefly explain the concept involved in finding the significant features. First, we denote uv, which is the concatenation of sequences u and v. Then, u is a specific sequence and uv is any sequence that is constructed by u with any suffix v. The upper bound of the χ2 value of uv can be defined by the value of u (Morishita and Sese, 2000). χ2(uv)≤max χ2(yu, yu), χ2(xu −yu, 0) =bχ2(u) where xu and yu represent the value of x and y of u. This inequation indicates that if bχ2(u) is less than a certain threshold τ, all sub-sequences uv can be eliminated from the features, because no subsequence uv can be a feature. The PrefixSpan algorithm enumerates all the significant sub-sequences by using a depth-first search and constructing a TRIE structure to store the significant sequences of internal results efficiently. Specifically, PrefixSpan algorithm evaluates uw, where uw represents a concatenation of a sequence u and a symbol w, using the following three conditions. 1. τ ≤χ2(uw) 2. τ > χ2(uw), τ > bχ2(uw) 3. τ > χ2(uw), τ ≤bχ2(uw) With 1, sub-sequence uw is selected as a significant feature. With 2, sub-sequence uw and arbitrary subsequences uwv, are less than the threshold τ. Then w is pruned from the TRIE, that is, all uwv where v represents any suffix pruned from the search space. With 3, uw is not selected as a significant feature because the χ2 value of uw is less than τ, however, uwv can be a significant feature because the upperbound χ2 value of uwv is greater than τ, thus the search is continued to uwv. 
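To make Equation (8) and the pruning bound concrete, the sketch below (our own code; the function names are ours) computes the χ² value of a sub-sequence directly from the contingency table of Table 1, the Morishita-Sese upper bound bχ²(u) = max(χ²(y_u, y_u), χ²(x_u − y_u, 0)), and the classification of an extension uw into the three conditions above. For instance, with N = 5 and M = 2 a sub-sequence occurring in x = 2 samples, both negative (y = 0), has χ² ≈ 2.2 and is selected at τ = 1.0, which appears consistent with the values still legible in Figure 3.

def chi2(x, y, N, M):
    """Equation (8): x = #samples containing u, y = #positive samples containing u,
    N = total #samples, M = #positive samples (notation of Table 1)."""
    if x == 0 or x == N:                      # degenerate table: no deviation to measure
        return 0.0
    o_uc, o_unc   = y, x - y                  # u occurs:  in class c / in class not-c
    o_nuc, o_nunc = M - y, (N - M) - (x - y)  # u absent:  in class c / in class not-c
    numerator = N * (o_uc * o_nunc - o_nuc * o_unc) ** 2
    denominator = x * (N - x) * M * (N - M)   # O_u * O_not-u * O_c * O_not-c
    return numerator / denominator

def chi2_upper_bound(x, y, N, M):
    """Morishita-Sese bound: chi2(uv) <= max(chi2(y_u, y_u), chi2(x_u - y_u, 0))
    for every extension uv of u, so the branch below u can be pruned whenever
    this bound falls under the threshold tau."""
    return max(chi2(y, y, N, M), chi2(x - y, 0, N, M))

def smp_case(x, y, N, M, tau):
    """Which of the three search conditions applies when u is extended to uw."""
    value, bound = chi2(x, y, N, M), chi2_upper_bound(x, y, N, M)
    if value >= tau:
        return 1          # condition 1: uw is selected as a significant feature
    if bound < tau:
        return 2          # condition 2: prune, no extension uwv can become significant
    return 3              # condition 3: not significant itself, but keep searching

# Toy check with the Figure 3 setting (N = 5 training sequences, M = 2 positive):
# smp_case(2, 0, 5, 2, tau=1.0) == 1   (chi-squared ~ 2.2, as for symbol d)
# smp_case(4, 2, 5, 2, tau=1.0) == 3   (chi-squared ~ 0.8, bound 5.0, as for symbol b)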
Figure 3 shows a simple example of PrefixSpan with SMP that searches for the significant features a b c c d b c a b a c a c d a b d a b c c d b c b a c a c d a b d ⊥ a b c d b c 1.0 τ = b: c: d: +1 -1 +1 -1 -1 a u = w = ( ) 2 uw χ ( ) 2ˆ uw χ TRIE representation x y +1 -1 +1 -1 +1 ab u = d c … w 2 3 1 1 2 1 +1 -1 +1 -1 -1 class training data suffix c: d: w = x y 1 1 1 0 5.0 0.0 5.0 0.8 5.0 0.8 2.2 2.2 1.9 0.1 1.9 1.9 0.8 0.8 5.0 2.2 a: b: c: d: +1 -1 +1 -1 -1 u = Λ w = x y 5 4 4 2 2 2 2 0 c d 1.9 1.9 0.8 0.8 … a b c c d b c a b a c a c d a b d suffix suffix a b c c d b c b a c a c d a b d 5 N = 2 M = 2 3 1 4 5 search order pruned pruned Figure 3: Efficient search for statistically significant sub-sequences using the PrefixSpan algorithm with SMP by using a depth-first search with a TRIE representation of the significant sequences. The values of each symbol represent χ2(u) and bχ2(u) that can be calculated from the number of xu and yu. The TRIE structure in the figure represents the statistically significant sub-sequences that can be shown in a path from ⊥to the symbol. We exploit this TRIE structure and PrefixSpan pruning method in our kernel calculation. 4.4 Embedding Feature Selection in Kernel Calculation This section shows how to integrate statistical feature selection in the kernel calculation. Our proposed method is defined in the following equations. KFSSK(S, T) = n X m=1 X 1≤i≤|S| X 1≤j≤|T | Km(Si, Tj) (10) Let Km(Si, Tj) be a function that returns the sum value of all statistically significant common subsequences u if si = tj. Km(Si, Tj) = X u∈Γm(Si,Tj) Ju(Si, Tj), (11) where Γm(Si, Tj) represents a set of sub-sequences whose size |u| is m and that satisfy the above condition 1. The Γm(Si, Tj) is defined in detail in Equation (15). Then, let Ju(Si, Tj), J ′ u(Si, Tj) and J ′′ u (Si, Tj) be functions that calculate the value of the common sub-sequences between Si and Tj recursively, as well as equations (5) to (7) for sequence kernels. We introduce a special symbol Λ to represent an “empty sequence”, and define Λw = w and |Λw| = 1. Juw(Si, Tj) = J ′ u(Si, Tj) · I(w) if uw ∈bΓ|uw|(Si, Tj), 0 otherwise (12) where I(w) is a function that returns a matching value of w. In this paper, we define I(w) is 1. bΓm(Si, Tj) has realized conditions 2 and 3; the details are defined in Equation (16). J ′ u(Si, Tj) = 1 if u = Λ, 0 if j = 0 and u ̸= Λ, λJ ′ u(Si, Tj−1) + J ′′ u (Si, Tj−1) otherwise (13) J ′′ u (Si, Tj) = 0 if i = 0, λJ ′′ u (Si−1, Tj) + Ju(Si−1, Tj) otherwise (14) The following five equations are introduced to select a set of significant sub-sequences. Γm(Si, Tj) and bΓm(Si, Tj) are sets of sub-sequences (features) that satisfy condition 1 and 3, respectively, when calculating the value between Si and Tj in Equations (11) and (12). Γm(Si, Tj) = {u | u ∈bΓm(Si, Tj), τ ≤χ2(u)} (15) bΓm(Si, Tj) = Ψ(bΓ′ m−1(Si, Tj), si) if si = tj ∅ otherwise (16) Ψ(F, w) = {uw | u ∈F, τ ≤bχ2(uw)}, (17) where F represents a set of sub-sequences. Notice that Γm(Si, Tj) and bΓm(Si, Tj) have only subsequences u that satisfy τ ≤χ2(uw) or τ ≤ bχ2(uw), respectively, if si = tj(= w); otherwise they become empty sets. The following two equations are introduced for recursive set operations to calculate Γm(Si, Tj) and bΓm(Si, Tj). 
bΓ′ m(Si, Tj) = {Λ} if m = 0, ∅ if j = 0 and m > 0, bΓ′ m(Si, Tj−1) ∪bΓ′′ m(Si, Tj−1) otherwise (18) bΓ′′ m(Si, Tj) = ∅ if i = 0 , bΓ′′ m(Si−1, Tj) ∪bΓm(Si−1, Tj) otherwise (19) In the implementation, Equations (11) to (14) can be performed in the same way as those used to calculate the original sequence kernels, if the feature selection condition of Equations (15) to (19) has been removed. Then, Equations (15) to (19), which select significant features, are performed by the PrefixSpan algorithm described above and the TRIE representation of statistically significant features. The recursive calculation of Equations (12) to (14) and Equations (16) to (19) can be executed in the same way and at the same time in parallel. As a result, statistical feature selection can be embedded in oroginal sequence kernel calculation based on a dynamic programming technique. 4.5 Properties The proposed method has several important advantages over the conventional methods. First, the feature selection criterion is based on a statistical measure, so statistically significant features are automatically selected. Second, according to Equations (10) to (18), the proposed method can be embedded in an original kernel calculation process, which allows us to use the same calculation procedure as the conventional methods. The only difference between the original sequence kernels and the proposed method is that the latter calculates a statistical metric χ2(u) by using a sub-structure mining algorithm in the kernel calculation. Third, although the kernel calculation, which unifies our proposed method, requires a longer training time because of the feature selection, the selected sub-sequences have a TRIE data structure. This means a fast calculation technique proposed in (Kudo and Matsumoto, 2003) can be simply applied to our method, which yields classification very quickly. In the classification part, the features (subsequences) selected in the learning part must be known. Therefore, we store the TRIE of selected sub-sequences and use them during classification. 5 Proposed Method Applied to Other Convolution Kernels We have insufficient space to discuss this subject in detail in relation to other convolution kernels. However, our proposals can be easily applied to tree kernels (Collins and Duffy, 2001) by using string encoding for trees. We enumerate nodes (labels) of tree in postorder traversal. After that, we can employ a sequential pattern mining technique to select statistically significant sub-trees. This is because we can convert to the original sub-tree form from the string encoding representation. Table 2: Parameter values of proposed kernels and Support Vector Machines parameter value soft margin for SVM (C) 1000 decay factor of gap (λ) 0.5 threshold of χ2 (τ) 2.7055 3.8415 As a result, we can calculate tree kernels with statistical feature selection by using the original tree kernel calculation with the sequential pattern mining technique introduced in this paper. Moreover, we can expand our proposals to hierarchically structured graph kernels (Suzuki et al., 2003a) by using a simple extension to cover hierarchical structures. 6 Experiments We evaluated the performance of the proposed method in actual NLP tasks, namely English question classification (EQC), Japanese question classification (JQC) and sentence modality identification (MI) tasks. 
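Before turning to the comparison systems, the pieces above can be tied together in a small explicit-feature sketch of FSSK (Equation (9) and Figure 2): χ² statistics are gathered over a training set, sub-sequences below the threshold τ are discarded, and the kernel is the inner product restricted to the surviving features. This is our own brute-force reference code with hypothetical toy data, exponential in sequence length since it enumerates features explicitly; avoiding that enumeration is exactly what the DP-embedded selection of Section 4.4 achieves.

from itertools import combinations

def gapped_features(seq, n, lam):
    """Feature map of Section 2: u -> sum over occurrences i of lam**(l(i) - |u|)."""
    feats = {}
    for m in range(1, n + 1):
        for idx in combinations(range(len(seq)), m):
            u = tuple(seq[k] for k in idx)
            feats[u] = feats.get(u, 0.0) + lam ** ((idx[-1] - idx[0] + 1) - m)
    return feats

def chi2(x, y, N, M):
    """Equation (8); in terms of x and y the numerator reduces to N * (N*y - M*x)**2."""
    if x == 0 or x == N:
        return 0.0
    return N * (N * y - M * x) ** 2 / (x * (N - x) * M * (N - M))

def significant_subsequences(train, labels, n, lam, tau):
    """Collect (x_u, y_u) over the training set and keep every u with chi2(u) >= tau."""
    N = len(train)
    M = sum(1 for label in labels if label > 0)
    counts = {}                                   # u -> [x_u, y_u]
    for seq, label in zip(train, labels):
        for u in gapped_features(seq, n, lam):    # each u counted once per sequence
            x_y = counts.setdefault(u, [0, 0])
            x_y[0] += 1
            x_y[1] += 1 if label > 0 else 0
    return {u for u, (x, y) in counts.items() if chi2(x, y, N, M) >= tau}

def fssk(s, t, selected, n, lam):
    """Equation (9): the sequence kernel restricted to statistically significant u."""
    fs, ft = gapped_features(s, n, lam), gapped_features(t, n, lam)
    return sum(w * ft[u] for u, w in fs.items() if u in ft and u in selected)

if __name__ == "__main__":
    # Hypothetical toy corpus (characters stand in for word tokens).
    train  = ["abccd", "bcd", "abac", "acd", "abd"]
    labels = [+1, -1, +1, -1, -1]
    selected = significant_subsequences(train, labels, n=3, lam=0.5, tau=1.0)
    print(fssk("abac", "abc", selected, n=3, lam=0.5))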
We compared the proposed method (FSSK) with a conventional method (SK), as discussed in Section 3, and with bag-of-words (BOW) Kernel (BOW-K)(Joachims, 1998) as baseline methods. Support Vector Machine (SVM) was selected as the kernel-based classifier for training and classification. Table 2 shows some of the parameter values that we used in the comparison. We set thresholds of τ = 2.7055 (FSSK1) and τ = 3.8415 (FSSK2) for the proposed methods; these values represent the 10% and 5% level of significance in the χ2 distribution with one degree of freedom, which used the χ2 significant test. 6.1 Question Classification Question classification is defined as a task similar to text categorization; it maps a given question into a question type. We evaluated the performance by using data provided by (Li and Roth, 2002) for English and (Suzuki et al., 2003b) for Japanese question classification and followed the experimental setting used in these papers; namely we use four typical question types, LOCATION, NUMEX, ORGANIZATION, and TIME TOP for JQA, and “coarse” and “fine” classes for EQC. We used the one-vs-rest classifier of SVM as the multi-class classification method for EQC. Figure 4 shows examples of the question classification data used here. question types input object : word sequences ([ ]: information of chunk and ⟨⟩: named entity) ABBREVIATION what,[B-NP] be,[B-VP] the,[B-NP] abbreviation,[I-NP] for,[B-PP] Texas,[B-NP],⟨B-GPE⟩?,[O] DESCRIPTION what,[B-NP] be,[B-VP] Aborigines,[B-NP] ?,[O] HUMAN who,[B-NP] discover,[B-VP] America,[B-NP],⟨B-GPE⟩?,[O] Figure 4: Examples of English question classification data Table 3: Results of the Japanese question classification (F-measure) (a) TIME TOP (b) LOCATION (c) ORGANIZATION (d) NUMEX n FSSK1 FSSK2 SK BOW-K 1 2 3 4 ∞ - .961 .958 .957 .956 - .961 .956 .957 .956 - .946 .910 .866 .223 .902 .909 .886 .855 1 2 3 4 ∞ - .795 .793 .798 .792 - .788 .799 .804 .800 - .791 .775 .732 .169 .744 .768 .756 .747 1 2 3 4 ∞ - .709 .720 .720 .723 - .703 .710 .716 .720 - .705 .668 .594 .035 .641 690 .636 .572 1 2 3 4 ∞ - .912 .915 .908 .908 - .913 .916 .911 .913 - .912 .885 .817 .036 .842 .852 .807 .726 6.2 Sentence Modality Identification For example, sentence modality identification techniques are used in automatic text analysis systems that identify the modality of a sentence, such as “opinion” or “description”. The data set was created from Mainichi news articles and one of three modality tags, “opinion”, “decision” and “description” was applied to each sentence. The data size was 1135 sentences consisting of 123 sentences of “opinion”, 326 of “decision” and 686 of “description”. We evaluated the results by using 5-fold cross validation. 7 Results and Discussion Tables 3 and 4 show the results of Japanese and English question classification, respectively. Table 5 shows the results of sentence modality identification. n in each table indicates the threshold of the sub-sequence size. n = ∞means all possible subsequences are used. First, SK was consistently superior to BOW-K. This indicates that the structural features were quite efficient in performing these tasks. In general we can say that the use of structural features can improve the performance of NLP tasks that require the details of the contents to perform the task. Most of the results showed that SK achieves its maximum performance when n = 2. The performance deteriorates considerably once n exceeds 4. This implies that SK with larger sub-structures degrade classification performance. 
These results show the same tendency as the previous studies discussed in Section 3. Table 6 shows the precision and recall of SK when n = ∞. As shown in Table 6, the classifier offered high precision but low recall. This is evidence of over-fitting in learning. As shown by the above experiments, FSSK proTable 6: Precision and recall of SK: n = ∞ Precision Recall F MI:Opinion .917 .209 .339 JQA:LOCATION .896 .093 .168 vided consistently better performance than the conventional methods. Moreover, the experiments confirmed one important fact. That is, in some cases maximum performance was achieved with n = ∞. This indicates that sub-sequences created using very large structures can be extremely effective. Of course, a larger feature space also includes the smaller feature spaces, Σn ⊂Σn+1. If the performance is improved by using a larger n, this means that significant features do exist. Thus, we can improve the performance of some classification problems by dealing with larger substructures. Even if optimum performance was not achieved with n = ∞, difference between the performance of smaller n are quite small compared to that of SK. This indicates that our method is very robust as regards substructure size; It therefore becomes unnecessary for us to decide sub-structure size carefully. This indicates our approach, using large sub-structures, is better than the conventional approach of eliminating sub-sequences based on size. 8 Conclusion This paper proposed a statistical feature selection method for convolution kernels. Our approach can select significant features automatically based on a statistical significance test. Our proposed method can be embedded in the DP based kernel calculation process for convolution kernels by using substructure mining algorithms. Table 4: Results of English question classification (Accuracy) (a) coarse (b) fine n FSSK1 FSSK2 SK BOW-K 1 2 3 4 ∞ - .908 .914 .916 .912 - .902 .896 .902 .906 - .912 .914 .912 .892 .728 .836 .864 .858 1 2 3 4 ∞ - .852 .854 .852 .850 - .858 .856 .854 .854 - .850 .840 .830 .796 .754 .792 .790 .778 Table 5: Results of sentence modality identification (F-measure) (a) opinion (b) decision (c) description n FSSK1 FSSK2 SK BOW-K 1 2 3 4 ∞ - .734 .743 .746 .751 - .740 .748 .750 .750 - .706 .672 .577 .058 .507 .531 .438 .368 1 2 3 4 ∞ - .828 .858 .854 .857 - .824 .855 .859 .860 - .816 .834 .830 .339 .652 .708 .686 .665 1 2 3 4 ∞ - .896 .906 .910 .910 - .894 .903 .909 .909 - .902 .913 .910 .808 .819 .839 .826 .793 Experiments show that our method is superior to conventional methods. Moreover, the results indicate that complex features exist and can be effective. Our method can employ them without over-fitting problems, which yields benefits in terms of concept and performance. References N. Cancedda, E. Gaussier, C. Goutte, and J.-M. Renders. 2003. Word-Sequence Kernels. Journal of Machine Learning Research, 3:1059–1082. M. Collins and N. Duffy. 2001. Convolution Kernels for Natural Language. In Proc. of Neural Information Processing Systems (NIPS’2001). C. Cortes and V. N. Vapnik. 1995. Support Vector Networks. Machine Learning, 20:273–297. D. Haussler. 1999. Convolution Kernels on Discrete Structures. In Technical Report UCS-CRL99-10. UC Santa Cruz. T. Joachims. 1998. Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In Proc. of European Conference on Machine Learning (ECML ’98), pages 137– 142. T. Kudo and Y. Matsumoto. 2002. Japanese Dependency Analysis Using Cascaded Chunking. In Proc. 
of the 6th Conference on Natural Language Learning (CoNLL 2002), pages 63–69. T. Kudo and Y. Matsumoto. 2003. Fast Methods for Kernel-based Text Analysis. In Proc. of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-2003), pages 24–31. X. Li and D. Roth. 2002. Learning Question Classifiers. In Proc. of the 19th International Conference on Computational Linguistics (COLING 2002), pages 556–562. H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, and C. Watkins. 2002. Text Classification Using String Kernels. Journal of Machine Learning Research, 2:419–444. S. Morishita and J. Sese. 2000. Traversing Itemset Lattices with Statistical Metric Pruning. In Proc. of ACM SIGACT-SIGMOD-SIGART Symp. on Database Systems (PODS'00), pages 226–236. J. Pei, J. Han, B. Mortazavi-Asl, and H. Pinto. 2001. PrefixSpan: Mining Sequential Patterns Efficiently by Prefix-Projected Pattern Growth. In Proc. of the 17th International Conference on Data Engineering (ICDE 2001), pages 215–224. M. Rogati and Y. Yang. 2002. High-performing Feature Selection for Text Classification. In Proc. of the 2002 ACM CIKM International Conference on Information and Knowledge Management, pages 659–661. J. Suzuki, T. Hirao, Y. Sasaki, and E. Maeda. 2003a. Hierarchical Directed Acyclic Graph Kernel: Methods for Natural Language Data. In Proc. of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-2003), pages 32–39. J. Suzuki, Y. Sasaki, and E. Maeda. 2003b. Kernels for Structured Natural Language Data. In Proc. of the 17th Annual Conference on Neural Information Processing Systems (NIPS2003).
Improving Pronoun Resolution by Incorporating Coreferential Information of Candidates Xiaofeng Yang†‡ Jian Su† Guodong Zhou† Chew Lim Tan‡ †Institute for Infocomm Research 21 Heng Mui Keng Terrace, Singapore, 119613 {xiaofengy,sujian,zhougd} @i2r.a-star.edu.sg ‡ Department of Computer Science National University of Singapore, Singapore, 117543 {yangxiao,tancl}@comp.nus.edu.sg Abstract Coreferential information of a candidate, such as the properties of its antecedents, is important for pronoun resolution because it reflects the salience of the candidate in the local discourse. Such information, however, is usually ignored in previous learning-based systems. In this paper we present a trainable model which incorporates coreferential information of candidates into pronoun resolution. Preliminary experiments show that our model will boost the resolution performance given the right antecedents of the candidates. We further discuss how to apply our model in real resolution where the antecedents of the candidate are found by a separate noun phrase resolution module. The experimental results show that our model still achieves better performance than the baseline. 1 Introduction In recent years, supervised machine learning approaches have been widely explored in reference resolution and achieved considerable success (Ge et al., 1998; Soon et al., 2001; Ng and Cardie, 2002; Strube and Muller, 2003; Yang et al., 2003). Most learning-based pronoun resolution systems determine the reference relationship between an anaphor and its antecedent candidate only from the properties of the pair. The knowledge about the context of anaphor and antecedent is nevertheless ignored. However, research in centering theory (Sidner, 1981; Grosz et al., 1983; Grosz et al., 1995; Tetreault, 2001) has revealed that the local focusing (or centering) also has a great effect on the processing of pronominal expressions. The choices of the antecedents of pronouns usually depend on the center of attention throughout the local discourse segment (Mitkov, 1999). To determine the salience of a candidate in the local context, we may need to check the coreferential information of the candidate, such as the existence and properties of its antecedents. In fact, such information has been used for pronoun resolution in many heuristicbased systems. The S-List model (Strube, 1998), for example, assumes that a co-referring candidate is a hearer-old discourse entity and is preferred to other hearer-new candidates. In the algorithms based on the centering theory (Brennan et al., 1987; Grosz et al., 1995), if a candidate and its antecedent are the backwardlooking centers of two subsequent utterances respectively, the candidate would be the most preferred since the CONTINUE transition is always ranked higher than SHIFT or RETAIN. In this paper, we present a supervised learning-based pronoun resolution system which incorporates coreferential information of candidates in a trainable model. For each candidate, we take into consideration the properties of its antecedents in terms of features (henceforth backward features), and use the supervised learning method to explore their influences on pronoun resolution. In the study, we start our exploration on the capability of the model by applying it in an ideal environment where the antecedents of the candidates are correctly identified and the backward features are optimally set. 
The experiments on MUC-6 (1995) and MUC-7 (1998) corpora show that incorporating coreferential information of candidates boosts the system performance significantly. Further, we apply our model in the real resolution where the antecedents of the candidates are provided by separate noun phrase resolution modules. The experimental results show that our model still outperforms the baseline, even with the low recall of the non-pronoun resolution module. The remaining of this paper is organized as follows. Section 2 discusses the importance of the coreferential information for candidate evaluation. Section 3 introduces the baseline learning framework. Section 4 presents and evaluates the learning model which uses backward features to capture coreferential information, while Section 5 proposes how to apply the model in real resolution. Section 6 describes related research work. Finally, conclusion is given in Section 7. 2 The Impact of Coreferential Information on Pronoun Resolution In pronoun resolution, the center of attention throughout the discourse segment is a very important factor for antecedent selection (Mitkov, 1999). If a candidate is the focus (or center) of the local discourse, it would be selected as the antecedent with a high possibility. See the following example, <s> Gitano1 has pulled offa clever illusion2 with its3 advertising4. <s> <s> The campaign5 gives its6 clothes a youthful and trendy image to lure consumers into the store. <s> Table 1: A text segment from MUC-6 data set In the above text, the pronoun “its6” has several antecedent candidates, i.e., “Gitano1”, “a clever illusion2”, “its3”, “its advertising4” and “The campaign5”. Without looking back, “The campaign5” would be probably selected because of its syntactic role (Subject) and its distance to the anaphor. However, given the knowledge that the company Gitano is the focus of the local context and “its3” refers to “Gitano1”, it would be clear that the pronoun “its6” should be resolved to “its3” and thus “Gitano1”, rather than other competitors. To determine whether a candidate is the “focus” entity, we should check how the status (e.g. grammatical functions) of the entity alternates in the local context. Therefore, it is necessary to track the NPs in the coreferential chain of the candidate. For example, the syntactic roles (i.e., subject) of the antecedents of “its3” would indicate that “its3” refers to the most salient entity in the discourse segment. In our study, we keep the properties of the antecedents as features of the candidates, and use the supervised learning method to explore their influence on pronoun resolution. Actually, to determine the local focus, we only need to check the entities in a short discourse segment. That is, for a candidate, the number of its adjacent antecedents to be checked is limited. Therefore, we could evaluate the salience of a candidate by looking back only its closest antecedent instead of each element in its coreferential chain, with the assumption that the closest antecedent is able to provide sufficient information for the evaluation. 3 The Baseline Learning Framework Our baseline system adopts the common learning-based framework employed in the system by Soon et al. (2001). In the learning framework, each training or testing instance takes the form of i{ana, candi}, where ana is the possible anaphor and candi is its antecedent candidate1. An instance is associated with a feature vector to describe their relationships. 
As listed in Table 2, we only consider those knowledge-poor and domain-independent features which, although superficial, have been proved efficient for pronoun resolution in many previous systems. During training, for each anaphor in a given text, a positive instance is created by paring the anaphor and its closest antecedent. Also a set of negative instances is formed by paring the anaphor and each of the intervening candidates. Based on the training instances, a binary classifier is generated using C5.0 learning algorithm (Quinlan, 1993). During resolution, each possible anaphor ana, is paired in turn with each preceding antecedent candidate, candi, from right to left to form a testing instance. This instance is presented to the classifier, which will then return a positive or negative result indicating whether or not they are co-referent. The process terminates once an instance i{ana, candi} is labelled as positive, and ana will be resolved to candi in that case. 4 The Learning Model Incorporating Coreferential Information The learning procedure in our model is similar to the above baseline method, except that for each candidate, we take into consideration its closest antecedent, if possible. 4.1 Instance Structure During both training and testing, we adopt the same instance selection strategy as in the baseline model. The only difference, however, is the structure of the training or testing instances. Specifically, each instance in our model is composed of three elements like below: 1In our study candidates are filtered by checking the gender, number and animacy agreements in advance. Features describing the candidate (candi) 1. candi DefNp 1 if candi is a definite NP; else 0 2. candi DemoNP 1 if candi is an indefinite NP; else 0 3. candi Pron 1 if candi is a pronoun; else 0 4. candi ProperNP 1 if candi is a proper name; else 0 5. candi NE Type 1 if candi is an “organization” named-entity; 2 if “person”, 3 if other types, 0 if not a NE 6. candi Human the likelihood (0-100) that candi is a human entity (obtained from WordNet) 7. candi FirstNPInSent 1 if candi is the first NP in the sentence where it occurs 8. candi Nearest 1 if candi is the candidate nearest to the anaphor; else 0 9. candi SubjNP 1 if candi is the subject of the sentence it occurs; else 0 Features describing the anaphor (ana): 10. ana Reflexive 1 if ana is a reflexive pronoun; else 0 11. ana Type 1 if ana is a third-person pronoun (he, she,. . . ); 2 if a single neuter pronoun (it,. . . ); 3 if a plural neuter pronoun (they,. . . ); 4 if other types Features describing the relationships between candi and ana: 12. SentDist Distance between candi and ana in sentences 13. ParaDist Distance between candi and ana in paragraphs 14. CollPattern 1 if candi has an identical collocation pattern with ana; else 0 Table 2: Feature set for the baseline pronoun resolution system i{ana, candi, ante-of-candi} where ana and candi, similar to the definition in the baseline model, are the anaphor and one of its candidates, respectively. The new added element in the instance definition, anteof-candi, is the possible closest antecedent of candi in its coreferential chain. The ante-ofcandi is set to NIL in the case when candi has no antecedent. Consider the example in Table 1 again. For the pronoun “it6”, three training instances will be generated, namely, i{its6, The compaign5, NIL}, i{its6, its advertising4, NIL}, and i{its6, its3, Gitano1}. 
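As an illustration of this instance structure, the sketch below shows one way the three-element instances i{ana, candi, ante-of-candi} and a partial feature vector could be assembled under the Soon et al.-style pairing scheme (one positive instance with the closest antecedent, a negative one for each intervening candidate). The class and feature names are our own simplification: the actual system derives the full feature sets of Tables 2 and 3 from NP chunking, named-entity and WordNet information, none of which is reproduced here.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Markable:
    """A noun phrase with just enough properties for a few of the features
    in Tables 2 and 3 (a simplification of the real preprocessing output)."""
    text: str
    is_pronoun: bool = False
    is_definite: bool = False
    is_subject: bool = False
    sent_index: int = 0
    antecedent: Optional["Markable"] = None      # closest antecedent, or None

def backward_features(cand: Markable) -> dict:
    """Backward features describing ante-of-candi (Table 3)."""
    ante = cand.antecedent
    if ante is None:
        # candi_NoAntecedent = 1 forces the other backward features to 0
        return {"candi_NoAntecedent": 1, "ante_candi_Pron": 0, "ante_candi_SubjNP": 0}
    return {"candi_NoAntecedent": 0,
            "ante_candi_Pron": int(ante.is_pronoun),
            "ante_candi_SubjNP": int(ante.is_subject)}

def instance_features(ana: Markable, cand: Markable) -> dict:
    """A partial feature vector for the instance i{ana, candi, ante-of-candi}."""
    feats = {"candi_Pron": int(cand.is_pronoun),
             "candi_DefNp": int(cand.is_definite),
             "candi_SubjNP": int(cand.is_subject),
             "SentDist": ana.sent_index - cand.sent_index}
    feats.update(backward_features(cand))
    return feats

def training_instances(ana: Markable, candidates: List[Markable],
                       closest_antecedent: Markable):
    """Pair the anaphor with each candidate from right to left: the closest
    antecedent yields the positive instance, the intervening candidates the
    negative ones (as in the baseline framework of Section 3)."""
    instances = []
    for cand in candidates:                      # candidates ordered right to left
        label = 1 if cand is closest_antecedent else 0
        instances.append((instance_features(ana, cand), label))
        if cand is closest_antecedent:
            break
    return instances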
4.2 Backward Features In addition to the features adopted in the baseline system, we introduce a set of backward features to describe the element ante-of-candi. The ten features (15-24) are listed in Table 3 with their respective possible values. Like feature 1-9, features 15-22 describe the lexical, grammatical and semantic properties of ante-of-candi. The inclusion of the two features Apposition (23) and candi NoAntecedent (24) is inspired by the work of Strube (1998). The feature Apposition marks whether or not candi and ante-of-candi occur in the same appositive structure. The underlying purpose of this feature is to capture the pattern that proper names are accompanied by an appositive. The entity with such a pattern may often be related to the hearers’ knowledge and has low preference. The feature candi NoAntecedent marks whether or not a candidate has a valid antecedent in the preceding text. As stipulated in Strube’s work, co-referring expressions belong to hearer-old entities and therefore have higher preference than other candidates. When the feature is assigned value 1, all the other backward features (15-23) are set to 0. 4.3 Results and Discussions In our study we used the standard MUC6 and MUC-7 coreference corpora. In each data set, 30 “dry-run” documents were annotated for training as well as 20-30 documents for testing. The raw documents were preprocessed by a pipeline of automatic NLP components (e.g. NP chunker, part-of-speech tagger, named-entity recognizer) to determine the boundary of the NPs, and to provide necessary information for feature calculation. In an attempt to investigate the capability of our model, we evaluated the model in an optimal environment where the closest antecedent of each candidate is correctly identified. MUC6 and MUC-7 can serve this purpose quite well; the annotated coreference information in the data sets enables us to obtain the correct closest Features describing the antecedent of the candidate (ante-of-candi): 15. ante-candi DefNp 1 if ante-of-candi is a definite NP; else 0 16. ante-candi IndefNp 1 if ante-of-candi is an indefinite NP; else 0 17. ante-candi Pron 1 if ante-of-candi is a pronoun; else 0 18. ante-candi Proper 1 if ante-of-candi is a proper name; else 0 19. ante-candi NE Type 1 if ante-of-candi is an “organization” named-entity; 2 if “person”, 3 if other types, 0 if not a NE 20. ante-candi Human the likelihood (0-100) that ante-of-candi is a human entity 21. ante-candi FirstNPInSent 1 if ante-of-candi is the first NP in the sentence where it occurs 22. ante-candi SubjNP 1 if ante-of-candi is the subject of the sentence where it occurs Features describing the relationships between the candidate (candi) and ante-of-candi: 23. Apposition 1 if ante-of-candi and candi are in an appositive structure Features describing the candidate (candi): 24. candi NoAntecedent 1 if candi has no antecedent available; else 0 Table 3: Backward features used to capture the coreferential information of a candidate antecedent for each candidate and accordingly generate the training and testing instances. In the next section we will further discuss how to apply our model into the real resolution. Table 4 shows the performance of different systems for resolving the pronominal anaphors 2 in MUC-6 and MUC-7. Default learning parameters for C5.0 were used throughout the experiments. 
In this table we evaluated the performance based on two kinds of measurements: • “Recall-and-Precision”: Recall = #positive instances classified correctly #positive instances Precision = #positive instances classified correctly #instances classified as positive The above metrics evaluate the capability of the learned classifier in identifying positive instances3. F-measure is the harmonic mean of the two measurements. • “Success”: Success = #anaphors resolved correctly #total anaphors The metric4 directly reflects the pronoun resolution capability. The first and second lines of Table 4 compare the performance of the baseline system (Base2The first and second person pronouns are discarded in our study. 3The testing instances are collected in the same ways as the training instances. 4In the experiments, an anaphor is considered correctly resolved only if the found antecedent is in the same coreferential chain of the anaphor. ante-candi_SubjNP = 1: 1 (49/5) ante-candi_SubjNP = 0: :..candi_SubjNP = 1: :..SentDist = 2: 0 (3) : SentDist = 0: : :..candi_Human > 0: 1 (39/2) : : candi_Human <= 0: : : :..candi_NoAntecedent = 0: 1 (8/3) : : candi_NoAntecedent = 1: 0 (3) : SentDist = 1: : :..ante-candi_Human <= 50 : 0 (4) : ante-candi_Human > 50 : 1 (10/2) : candi_SubjNP = 0: :..candi_Pron = 1: 1 (32/7) candi_Pron = 0: :..candi_NoAntecedent = 1: :..candi_FirstNPInSent = 1: 1 (6/2) : candi_FirstNPInSent = 0: ... candi_NoAntecedent = 0: ... Figure 1: Top portion of the decision tree learned on MUC-6 with the backward features line) and our system (Optimal), where DTpron and DTpron−opt are the classifiers learned in the two systems, respectively. The results indicate that our system outperforms the baseline system significantly. Compared with Baseline, Optimal achieves gains in both recall (6.4% for MUC-6 and 4.1% for MUC-7) and precision (1.3% for MUC-6 and 9.0% for MUC-7). For Success, we also observe an apparent improvement by 4.7% (MUC-6) and 3.5% (MUC-7). Figure 1 shows the portion of the pruned decision tree learned for MUC-6 data set. It visualizes the importance of the backward features for the pronoun resolution on the data set. From Testing Backward feature MUC-6 MUC-7 Experiments classifier assigner* R P F S R P F S Baseline DTpron NIL 77.2 83.4 80.2 70.0 71.9 68.6 70.2 59.0 Optimal DTpron−opt (Annotated) 83.6 84.7 84.1 74.7 76.0 77.6 76.8 62.5 RealResolve-1 DTpron−opt DTpron−opt 75.8 83.8 79.5 73.1 62.3 77.7 69.1 53.8 RealResolve-2 DTpron−opt DTpron 75.8 83.8 79.5 73.1 63.0 77.9 69.7 54.9 RealResolve-3 DT ′ pron DTpron 79.3 86.3 82.7 74.7 74.7 67.3 70.8 60.8 RealResolve-4 DT ′ pron DT ′ pron 79.3 86.3 82.7 74.7 74.7 67.3 70.8 60.8 Table 4: Results of different systems for pronoun resolution on MUC-6 and MUC-7 (*Here we only list backward feature assigner for pronominal candidates. In RealResolve-1 to RealResolve-4, the backward features for non-pronominal candidates are all found by DTnon−pron.) the tree we could find that: 1.) Feature ante-candi SubjNP is of the most importance as the root feature of the tree. The decision tree would first examine the syntactic role of a candidate’s antecedent, followed by that of the candidate. This nicely proves our assumption that the properties of the antecedents of the candidates provide very important information for the candidate evaluation. 2.) Both features ante-candi SubjNP and candi SubjNP rank top in the decision tree. 
That is, for the reference determination, the subject roles of the candidate’s referent within a discourse segment will be checked in the first place. This finding supports well the suggestion in centering theory that the grammatical relations should be used as the key criteria to rank forward-looking centers in the process of focus tracking (Brennan et al., 1987; Grosz et al., 1995). 3.) candi Pron and candi NoAntecedent are to be examined in the cases when the subject-role checking fails, which confirms the hypothesis in the S-List model by Strube (1998) that co-refereing candidates would have higher preference than other candidates in the pronoun resolution. 5 Applying the Model in Real Resolution In Section 4 we explored the effectiveness of the backward feature for pronoun resolution. In those experiments our model was tested in an ideal environment where the closest antecedent of a candidate can be identified correctly when generating the feature vector. However, during real resolution such coreferential information is not available, and thus a separate module has algorithm PRON-RESOLVE input: DTnon−pron: classifier for resolving non-pronouns DTpron: classifier for resolving pronouns begin: M1..n:= the valid markables in the given document Ante[1..n] := 0 for i = 1 to N for j = i - 1 downto 0 if (Mi is a non-pron and DTnon−pron(i{Mi, Mj}) == + ) or (Mi is a pron and DTpron(i{Mi, Mj, Ante[j]}) == +) then Ante[i] := Mj break return Ante Figure 2: The pronoun resolution algorithm by incorporating coreferential information of candidates to be employed to obtain the closest antecedent for a candidate. We describe the algorithm in Figure 2. The algorithm takes as input two classifiers, one for the non-pronoun resolution and the other for pronoun resolution. Given a testing document, the antecedent of each NP is identified using one of these two classifiers, depending on the type of NP. Although a separate nonpronoun resolution module is required for the pronoun resolution task, this is usually not a big problem as these two modules are often integrated in coreference resolution systems. We just use the results of the one module to improve the performance of the other. 5.1 New Training and Testing Procedures For a pronominal candidate, its antecedent can be obtained by simply using DTpron−opt. For Training Procedure: T1. Train a non-pronoun resolution classifier DTnon−pron and a pronoun resolution classifier DTpron, using the baseline learning framework (without backward features). T2. Apply DTnon−pron and DTpron to identify the antecedent of each non-pronominal and pronominal markable, respectively, in a given document. T3. Go through the document again. Generate instances with backward features assigned using the antecedent information obtained in T2. T4. Train a new pronoun resolution classifier DT ′ pron on the instances generated in T3. Testing Procedure: R1. For each given document, do T2∼T3. R2. Resolve pronouns by applying DT ′ pron. Table 5: New training and testing procedures a non-pronominal candidate, we built a nonpronoun resolution module to identify its antecedent. The module is a duplicate of the NP coreference resolution system by Soon et al. (2001)5 , which uses the similar learning framework as described in Section 3. In this way, we could do pronoun resolution just by running PRON-RESOLVE(DTnon−pron, DTpron−opt), where DTnon−pron is the classifier of the non-pronoun resolution module. 
One problem, however, is that DTpron−opt is trained on the instances whose backward features are correctly assigned. During real resolution, the antecedent of a candidate is found by DTnon−pron or DTpron−opt, and the backward feature values are not always correct. Indeed, for most noun phrase resolution systems, the recall is not very high. The antecedent sometimes can not be found, or is not the closest one in the preceding coreferential chain. Consequently, the classifier trained on the “perfect” feature vectors would probably fail to output anticipated results on the noisy data during real resolution. Thus we modify the training and testing procedures of the system. For both training and testing instances, we assign the backward feature values based on the results from separate NP resolution modules. The detailed procedures are described in Table 5. 5Details of the features can be found in Soon et al. (2001) algorithm REFINE-CLASSIFIER begin: DT1 pron := DT ′ pron for i = 1 to ∞ Use DTi pron to update the antecedents of pronominal candidates and the corresponding backward features; Train DTi+1 pron based on the updated training instances; if DTi+1 pron is not better than DTi pron then break; return DTi pron Figure 3: The classifier refining algorithm The idea behind our approach is to train and test the pronoun resolution classifier on instances with feature values set in a consistent way. Here the purpose of DTpron and DTnon−pron is to provide backward feature values for training and testing instances. From this point of view, the two modules could be thought of as a preprocessing component of our pronoun resolution system. 5.2 Classifier Refining If the classifier DT ′ pron outperforms DTpron as expected, we can employ DT ′ pron in place of DTpron to generate backward features for pronominal candidates, and then train a classifier DT ′′ pron based on the updated training instances. Since DT ′ pron produces more correct feature values than DTpron, we could expect that DT ′′ pron will not be worse, if not better, than DT ′ pron. Such a process could be repeated to refine the pronoun resolution classifier. The algorithm is described in Figure 3. In algorithm REFINE-CLASSIFIER, the iteration terminates when the new trained classifier DTi+1 pron provides no further improvement than DTi pron. In this case, we can replace DTi+1 pron by DTi pron during the i+1(th) testing procedure. That means, by simply running PRON-RESOLVE(DTnon−pron,DTi pron), we can use for both backward feature computation and instance classification tasks, rather than applying DTpron and DT ′ pron subsequently. 5.3 Results and Discussions In the experiments we evaluated the performance of our model in real pronoun resolution. The performance of our model depends on the performance of the non-pronoun resolution classifier, DTnon−pron. Hence we first examined the coreference resolution capability of DTnon−pron based on the standard scoring scheme by Vilain et al. (1995). For MUC-6, the module obtains 62.2% recall and 78.8% precision, while for MUC-7, it obtains 50.1% recall and 75.4% precision. The poor recall and comparatively high precision reflect the capability of the state-ofthe-art learning-based NP resolution systems. The third block of Table 4 summarizes the performance of the classifier DTpron−opt in real resolution. 
In the systems RealResolve-1 and RealResolve-2, the antecedents of pronominal candidates are found by DTpron−opt and DTpron respectively, while in both systems the antecedents of non-pronominal candidates are by DTnon−pron. As shown in the table, compared with the Optimal where the backward features of testing instances are optimally assigned, the recall rates of two systems drop largely by 7.8% for MUC-6 and by about 14% for MUC-7. The scores of recall are even lower than those of Baseline. As a result, in comparison with Optimal, we see the degrade of the F-measure and the success rate, which confirms our hypothesis that the classifier learned on perfect training instances would probably not perform well on the noisy testing instances. The system RealResolve-3 listed in the fifth line of the table uses the classifier trained and tested on instances whose backward features are assigned according to the results from DTnon−pron and DTpron. From the table we can find that: (1) Compared with Baseline, the system produces gains in recall (2.1% for MUC-6 and 2.8% for MUC-7) with no significant loss in precision. Overall, we observe the increase in F-measure for both data sets. If measured by Success, the improvement is more apparent by 4.7% (MUC-6) and 1.8% (MUC-7). (2) Compared with RealResolve-1(2), the performance decrease of RealResolve-3 against Optimal is not so large. Especially for MUC-6, the system obtains a success rate as high as Optimal. The above results show that our model can be successfully applied in the real pronoun resolution task, even given the low recall of the current non-pronoun resolution module. This should be owed to the fact that for a candidate, its adjacent antecedents, even not the closest one, could give clues to reflect its salience in the local discourse. That is, the model prefers a high precision to a high recall, which copes well with the capability of the existing non-pronoun resolution module. In our experiments we also tested the classifier refining algorithm described in Figure 3. We found that for both MUC-6 and MUC-7 data set, the algorithm terminated in the second round. The comparison of DT2 pron and DT1 pron (i.e. DT ′ pron) showed that these two trees were exactly the same. The algorithm converges fast probably because in the data set, most of the antecedent candidates are non-pronouns (89.1% for MUC-6 and 83.7% for MUC-7). Consequently, the ratio of the training instances with backward features changed may be not substantial enough to affect the classifier generation. Although the algorithm provided no further refinement for DT ′ pron, we can use DT ′ pron, as suggested in Section 5.2, to calculate backward features and classify instances by running PRON-RESOLVE(DTnon−pron, DT ′ pron). The results of such a system, RealResolve-4, are listed in the last line of Table 4. For both MUC6 and MUC-7, RealResolve-4 obtains exactly the same performance as RealResolve-3. 6 Related Work To our knowledge, our work is the first effort that systematically explores the influence of coreferential information of candidates on pronoun resolution in learning-based ways. Iida et al. (2003) also take into consideration the contextual clues in their coreference resolution system, by using two features to reflect the ranking order of a candidate in Salience Reference List (SRL). However, similar to common centering models, in their system the ranking of entities in SRL is also heuristic-based. 
The coreferential chain length of a candidate, or its variants such as occurrence frequency and TFIDF, has been used as a salience factor in some learning-based reference resolution systems (Iida et al., 2003; Mitkov, 1998; Paul et al., 1999; Strube and Muller, 2003). However, for an entity, the coreferential length only reflects its global salience in the whole text(s), instead of the local salience in a discourse segment which is nevertheless more informative for pronoun resolution. Moreover, during resolution, the found coreferential length of an entity is often incomplete, and thus the obtained length value is usually inaccurate for the salience evaluation. 7 Conclusion and Future Work In this paper we have proposed a model which incorporates coreferential information of candidates to improve pronoun resolution. When evaluating a candidate, the model considers its adjacent antecedent by describing its properties in terms of backward features. We first examined the effectiveness of the model by applying it in an optimal environment where the closest antecedent of a candidate is obtained correctly. The experiments show that it boosts the success rate of the baseline system for both MUC-6 (4.7%) and MUC-7 (3.5%). Then we proposed how to apply our model in the real resolution where the antecedent of a non-pronoun is found by an additional non-pronoun resolution module. Our model can still produce Success improvement (4.7% for MUC-6 and 1.8% for MUC-7) against the baseline system, despite the low recall of the non-pronoun resolution module. In the current work we restrict our study only to pronoun resolution. In fact, the coreferential information of candidates is expected to be also helpful for non-pronoun resolution. We would like to investigate the influence of the coreferential factors on general NP reference resolution in our future work. References S. Brennan, M. Friedman, and C. Pollard. 1987. A centering approach to pronouns. In Proceedings of the 25th Annual Meeting of the Association for Compuational Linguistics, pages 155–162. N. Ge, J. Hale, and E. Charniak. 1998. A statistical approach to anaphora resolution. In Proceedings of the 6th Workshop on Very Large Corpora. B. Grosz, A. Joshi, and S. Weinstein. 1983. Providing a unified account of definite noun phrases in discourse. In Proceedings of the 21st Annual meeting of the Association for Computational Linguistics, pages 44–50. B. Grosz, A. Joshi, and S. Weinstein. 1995. Centering: a framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–225. R. Iida, K. Inui, H. Takamura, and Y. Matsumoto. 2003. Incorporating contextual cues in trainable models for coreference resolution. In Proceedings of the 10th Conference of EACL, Workshop ”The Computational Treatment of Anaphora”. R. Mitkov. 1998. Robust pronoun resolution with limited knowledge. In Proceedings of the 17th Int. Conference on Computational Linguistics, pages 869–875. R. Mitkov. 1999. Anaphora resolution: The state of the art. Technical report, University of Wolverhampton. MUC-6. 1995. Proceedings of the Sixth Message Understanding Conference. Morgan Kaufmann Publishers, San Francisco, CA. MUC-7. 1998. Proceedings of the Seventh Message Understanding Conference. Morgan Kaufmann Publishers, San Francisco, CA. V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 104–111, Philadelphia. M. Paul, K. 
Yamamoto, and E. Sumita. 1999. Corpus-based anaphora resolution towards antecedent preference. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, Workshop ”Coreference and It’s Applications”, pages 47–52. J. R. Quinlan. 1993. C4.5: Programs for machine learning. Morgan Kaufmann Publishers, San Francisco, CA. C. Sidner. 1981. Focusing for interpretation of pronouns. American Journal of Computational Linguistics, 7(4):217–231. W. Soon, H. Ng, and D. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. M. Strube and C. Muller. 2003. A machine learning approach to pronoun resolution in spoken dialogue. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 168–175, Japan. M. Strube. 1998. Never look back: An alternative to centering. In Proceedings of the 17th Int. Conference on Computational Linguistics and 36th Annual Meeting of ACL, pages 1251–1257. J. R. Tetreault. 2001. A corpus-based evaluation of centering and pronoun resolution. Computational Linguistics, 27(4):507–520. M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the Sixth Message understanding Conference (MUC-6), pages 45–52, San Francisco, CA. Morgan Kaufmann Publishers. X. Yang, G. Zhou, J. Su, and C. Tan. 2003. Coreference resolution using competition learning approach. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Japan. | 2004 | 17 |
A Mention-Synchronous Coreference Resolution Algorithm Based on the Bell Tree Xiaoqiang Luo and Abe Ittycheriah Hongyan Jing and Nanda Kambhatla and Salim Roukos 1101 Kitchawan Road Yorktown Heights, NY 10598, U.S.A. {xiaoluo,abei,hjing,nanda,roukos}@us.ibm.com Abstract This paper proposes a new approach for coreference resolution which uses the Bell tree to represent the search space and casts the coreference resolution problem as finding the best path from the root of the Bell tree to the leaf nodes. A Maximum Entropy model is used to rank these paths. The coreference performance on the 2002 and 2003 Automatic Content Extraction (ACE) data will be reported. We also train a coreference system using the MUC6 data and competitive results are obtained. 1 Introduction In this paper, we will adopt the terminologies used in the Automatic Content Extraction (ACE) task (NIST, 2003). Coreference resolution in this context is defined as partitioning mentions into entities. A mention is an instance of reference to an object, and the collection of mentions referring to the same object in a document form an entity. For example, in the following sentence, mentions are underlined: “The American Medical Association voted yesterday to install the heir apparent as its president-elect, rejecting a strong, upstart challenge by a District doctor who argued that the nation’s largest physicians’ group needs stronger ethics and new leadership.” “American Medical Association”, “its” and “group” belong to the same entity as they refer to the same object. Early work of anaphora resolution focuses on finding antecedents of pronouns (Hobbs, 1976; Ge et al., 1998; Mitkov, 1998), while recent advances (Soon et al., 2001; Yang et al., 2003; Ng and Cardie, 2002; Ittycheriah et al., 2003) employ statistical machine learning methods and try to resolve reference among all kinds of noun phrases (NP), be it a name, nominal, or pronominal phrase – which is the scope of this paper as well. One common strategy shared by (Soon et al., 2001; Ng and Cardie, 2002; Ittycheriah et al., 2003) is that a statistical model is trained to measure how likely a pair of mentions corefer; then a greedy procedure is followed to group mentions into entities. While this approach has yielded encouraging results, the way mentions are linked is arguably suboptimal in that an instant decision is made when considering whether two mentions are linked or not. In this paper, we propose to use the Bell tree to represent the process of forming entities from mentions. The Bell tree represents the search space of the coreference resolution problem – each leaf node corresponds to a possible coreference outcome. We choose to model the process from mentions to entities represented in the Bell tree, and the problem of coreference resolution is cast as finding the “best” path from the root node to leaves. A binary maximum entropy model is trained to compute the linking probability between a partial entity and a mention. The rest of the paper is organized as follows. In Section 2, we present how the Bell tree can be used to represent the process of creating entities from mentions and the search space. We use a maximum entropy model to rank paths in the Bell tree, which is discussed in Section 3. After presenting the search strategy in Section 4, we show the experimental results on the ACE 2002 and 2003 data, and the Message Understanding Conference (MUC) (MUC, 1995) data in Section 5. We compare our approach with some recent work in Section 6. 
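As a self-contained illustration (not part of the original paper) of the search space formalized in the next section, the following snippet enumerates every way of grouping a small list of mentions into entities; these groupings are exactly the leaves of the Bell tree, and their count is the Bell number.

```python
# Illustrative only: enumerate all ways of partitioning mentions into
# non-empty entities (the leaves of the Bell tree of Section 2).

def partitions(mentions):
    """Yield every partition of `mentions` into non-empty entities."""
    if not mentions:
        yield []
        return
    first, rest = mentions[0], mentions[1:]
    for part in partitions(rest):
        # link `first` with an existing entity ...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ... or start a new entity
        yield [[first]] + part

print(len(list(partitions([1, 2, 3]))))        # 5 possible outcomes
print(len(list(partitions(list(range(10))))))  # 115975: grows very fast
```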
2 Bell Tree: From Mention to Entity

Let us consider traversing mentions in a document from beginning (left) to end (right). The process of forming entities from mentions can be represented by a tree structure. The root node is the initial state of the process, which consists of a partial entity containing the first mention of a document. The second mention is added in the next step by either linking to the existing entity, or starting a new entity.

Figure 1: Bell tree representation for three mentions: numbers in [] denote a partial entity. In-focus entities are marked on the solid arrows, and active mentions are marked by *. Solid arrows signify that a mention is linked with an in-focus partial entity while dashed arrows indicate starting of a new entity.

A second layer of nodes is created to represent the two possible outcomes. Subsequent mentions are added to the tree in the same manner. The process is mention-synchronous in that each layer of tree nodes is created by adding one mention at a time. Since the number of tree leaves is the number of possible coreference outcomes and it equals the Bell Number (Bell, 1934), the tree is called the Bell tree. The Bell Number is the number of ways of partitioning distinguishable objects (i.e., mentions) into non-empty disjoint subsets (i.e., entities). The Bell Number has a “closed” formula
B(n) = \frac{1}{e} \sum_{k=0}^{\infty} \frac{k^n}{k!}, and it increases rapidly as n increases: B(20) \approx 5.17 \times 10^{13}. Clearly, an efficient search strategy is necessary, and it will be addressed in Section 4.

Figure 1 illustrates how the Bell tree is created for a document with three mentions. The initial node consists of the first partial entity [1] (i.e., node (a) in Figure 1). Next, mention 2 becomes active (marked by “*” in node (a)) and can either link with the partial entity [1] and result in a new node (b1), or start a new entity and create another node (b2). The partial entity which the active mention considers linking with is said to be in-focus. In-focus entities are highlighted on the solid arrows in Figure 1. Similarly, mention 3 will be active in the next stage and can take five possible actions, which create five possible coreference results shown in node (c1) through (c5). Under the derivation illustrated in Figure 1, each leaf node in the Bell tree corresponds to a possible coreference outcome, and there is no other way to form entities. The Bell tree clearly represents the search space of the coreference resolution problem. The coreference resolution can therefore be cast equivalently as finding the “best” leaf node. Since the search space is large (even for a document with a moderate number of mentions), it is difficult to estimate a distribution over leaves directly. Instead, we choose to model the process from mentions to entities, or in other words, score paths from the root to leaves in the Bell tree. A nice property of the Bell tree representation is that the number of linking or starting steps is the same for all the hypotheses. This makes it easy to rank them using the “local” linking and starting probabilities as the number of factors is the same. The Bell tree representation is also incremental in that mentions are added sequentially. This makes it easy to design a decoder and search algorithm.

3 Coreference Model

3.1 Linking and Starting Model

We use a binary conditional model to compute the probability that an active mention links with an in-focus partial entity. The conditions include all the partially-formed entities before, the focus entity index, and the active mention. Formally, let \{m_i : 1 \le i \le n\} be the n mentions in a document. Mention index i represents the order in which it appears in the document. Let e_j be an entity, and g : i \mapsto j be the (many-to-one) map from mention index i to entity index j. For an active mention index t (1 \le t \le n), define A_t = \{j : j = g(i) \text{ for some } 1 \le i \le t-1\}, the set of indices of the partially-established entities to the left of m_t (note that A_1 = \emptyset), and E_t = \{e_j : j \in A_t\}, the set of the partially-established entities. The link model is then

P(L = 1 \mid E_t, m_t, A = j),   (1)

the probability of linking the active mention m_t with the in-focus entity e_j. The random variable A takes value from the set A_t and signifies which entity is in focus; L takes binary value and is 1 if m_t links with e_j. As an example, for the branch from (b2) to (c4) in Figure 1, the active mention is “3”, the set of partial entities to the left of “3” is E_3 = \{[1], [2]\}, and the active entity is the second partial entity “[2]”. Probability P(L = 1 \mid E_3, m_3, A = 2) measures how likely mention “3” links with the entity “[2].” The model P(L = 1 \mid E_t, m_t, A = j) only computes how likely m_t links with e_j; it does not say anything about the possibility that m_t starts a new entity. Fortunately, the starting probability can be computed using link probabilities (1), as shown now. Since starting a new entity means that m_t does not link with any entities in E_t, the probability of starting a new entity, P(L = 0 \mid E_t, m_t), can be computed as

P(L = 0 \mid E_t, m_t) = \sum_{j \in A_t} P(L = 0, A = j \mid E_t, m_t)   (2)
= 1 - \sum_{j \in A_t} P(A = j \mid E_t, m_t) \, P(L = 1 \mid E_t, m_t, A = j).   (3)

(3) indicates that the probability of starting an entity can be computed using the linking probabilities P(L = 1 \mid E_t, m_t, A = j), provided that the marginal P(A = j \mid E_t, m_t) is known. In this paper, P(A = j \mid E_t, m_t) is approximated as:

P(A = j \mid E_t, m_t) \approx 1 if j = \arg\max_{j' \in A_t} P(L = 1 \mid E_t, m_t, A = j'), and 0 otherwise.   (4)

With the approximation (4), the starting probability (3) is

P(L = 0 \mid E_t, m_t) \approx 1 - \max_{j \in A_t} P(L = 1 \mid E_t, m_t, A = j).   (5)

The linking model (1) and approximated starting model (5) can be used to score paths in the Bell tree. For example, the score for the path (a)-(b2)-(c4) in Figure 1 is the product of the start probability from (a) to (b2) and the linking probability from (b2) to (c4). Since (5) is an approximation, not a true probability, a constant \alpha is introduced to balance the linking probability and starting probability, and the starting probability becomes:

P'(L = 0 \mid E_t, m_t) = \alpha \, P(L = 0 \mid E_t, m_t).   (6)

If \alpha < 1, it penalizes creating new entities; therefore, \alpha is called the start penalty. The start penalty can be used to balance entity miss and false alarm.

3.2 Model Training and Features

The model P(L = 1 \mid E_t, m_t, A = j) depends on all partial entities E_t, which can be very expensive. After making some modeling assumptions, we can approximate it as:

P(L = 1 \mid E_t, m_t, A = j)   (7)
\approx P(L = 1 \mid e_j, m_t)   (8)
\approx \max_{m_i \in e_j} P(L = 1 \mid m_i, m_t).   (9)

From (7) to (8), entities other than the one in focus, e_j, are assumed to have no influence on the decision of linking m_t with e_j. (9) further assumes that the entity-mention score can be obtained by the maximum mention-pair score. The model (9) is very similar to the model in (Morton, 2000; Soon et al., 2001; Ng and Cardie, 2002) while (8) has more conditions. We use a maximum entropy model (Berger et al., 1996) for both the mention-pair model (9) and the entity-mention model (8):

P(L \mid m_i, m_t) = \frac{\exp\left(\sum_k \lambda_k g_k(m_i, m_t, L)\right)}{Z(m_i, m_t)},   (10)

P(L \mid e_j, m_t) =
7#" $ % '&(*) .0/ 1 2 7'Q G + G (11) where 9 3 G4 GZUI is a feature and 5 is its weight; 2 ! G6 is a normalizing factor to ensure that (10) or (11) is a probability. Effective training algorithm exists (Berger et al., 1996) once the set of features * 9 ! G6 GdUD 5 is selected. The basic features used in the models are tabulated in Table 1. Features in the lexical category are applicable to non-pronominalmentions only. Distance features characterize how far the two mentions are, either by the number of tokens, by the number of sentences, or by the number of mentions in-between. Syntactic features are derived from parse trees output from a maximum entropy parser (Ratnaparkhi, 1997). The “Count” feature calculates how many times a mention string is seen. For pronominal mentions, attributes such as gender, number, possessiveness and reflexiveness are also used. Apart from basic features in Table 1, composite features can be generated by taking conjunction of basic features. For example, a distance feature together with reflexiveness of a pronoun mention can help to capture that the antecedent of a reflexive pronoun is often closer than that of a non-reflexive pronoun. The same set of basic features in Table 1 is used in the entity-mention model, but feature definitions are slightly different. Lexical features, including the acronym features, and the apposition feature are computed by testing any mention in the entity 7 Q against the active mention + . Editing distance for 7 Q G + is defined as the minimum distance over any non-pronoun mentions and the active mention. Distance features are computed by taking minimum between mentions in the entity and the active mention. In the ACE data, mentions are annotated with three levels: NAME, NOMINAL and PRONOUN. For each ACE entity, a canonical mention is defined as the longest NAME mention if available; or if the entity does not have a NAME mention, the most recent NOMINAL mention; if there is no NAME and NOMINAL mention, the most recent pronoun mention. In the entity-mention model, “ncd”,“spell” and “count” features are computed over the canonical mention of the in-focus entity and the active mention. Conjunction features are used in the entity-mention model too. The mention-pair model is appealing for its simplicity: features are easy to compute over a pair of menCategory Features Remark Lexical exact_strm 1 if two mentions have the same spelling; 0 otherwise left_subsm 1 if one mention is a left substring of the other; 0 otherwise right_subsm 1 if one mention is a right substring of the other; 0 otherwise acronym 1 if one mention is an acronym of the other; 0 otherwise edit_dist quantized editing distance between two mention strings spell pair of actual mention strings ncd number of different capitalized words in two mentions Distance token_dist how many tokens two mentions are apart (quantized) sent_dist how many sentences two mentions are apart (quantized) gap_dist how many mentions in between the two mentions in question (quantized) Syntax POS_pair POS-pair of two mention heads apposition 1 if two mentions are appositive; 0 otherwise Count count pair of (quantized) numbers, each counting how many times a mention string is seen Pronoun gender pair of attributes of {female, male, neutral, unknown } number pair of attributes of {singular, plural, unknown} possessive 1 if a pronoun is possessive; 0 otherwise reflexive 1 if a pronoun is reflexive; 0 otherwise Table 1: Basic features used in the maximum entropy model. 
tions; its drawback is that information outside the mention pair is ignored. Suppose a document has three mentions “Mr. Clinton”, “Clinton” and “she”, appearing in that order. When considering the mention pair “Clinton” and “she”, the model may tend to link them because of their proximity; But this mistake can be easily avoided if “Mr. Clinton” and “Clinton” have been put into the same entity and the model knows “Mr. Clinton” referring to a male while “she” is female. Since gender and number information is propagated at the entity level, the entity-mention model is able to check the gender consistency when considering the active mention “she”. 3.3 Discussion There is an in-focus entity in the condition of the linking model (1) while the starting model (2) conditions on all left entities. The disparity is intentional as the starting action is influenced by all established entities on the left. (4) is not the only way STY C W O G + can be approximated. For example, one could use a uniform distribution over B . We experimented several schemes of approximation, including a uniform distribution, and (4) worked the best and is adopted here. One may consider training STY C W O G + directly and use it to score paths in the Bell tree. The problem is that 1) the size of B from which Y takes value is variable; 2) the start action depends on all entities in O , which makes it difficult to train STY C W O G + directly. 4 Search Issues As shown in Section 2, the search space of the coreference problem can be represented by the Bell tree. Thus, the search problem reduces to creating the Bell tree while keeping track of path scores and picking the top-N best paths. This is exactly what is described in Algorithm 1. In Algorithm 1, contains all the hypotheses, or paths from the root to the current layer of nodes. Variable VO stores the cumulative score for a coreference result O . At line 1, is initialized with a single entity consisting of mention + , which corresponds to the root node of the Bell tree in Figure 1. Line 2 to 15 loops over the remaining mentions ( + to + ), and for each mention + , the algorithm extends each result O in (or a path in the Bell tree) by either linking + with an existing entity 7 , (line 5 to 10), or starting an entity [ + ] (line 11 to 14). The loop from line 2 to 12 corresponds to creating a new layer of nodes for the active mention + in the Bell tree. in line 4 and in line 6 and 11 have to do with pruning, which will be discussed shortly. The last line returns top results, where O ) / denotes the Q
result ranked by 3 : VO ) / O ) / 66 VO ) / " Algorithm 1 Search Algorithm Input: mentions *+;,6. &bG" ""G)65 ; Output: top entity results 1:Initialize: . * O . *[ + ]5b5 O
f& 2:for ? to 3: foreach node O R 4: compute . 5: foreach 1 R B 6: if ( S VUA & W OTG + GdY 1 ) { 8: Extend O to O e , by linking + with 7P, 9: VO e , . O S UAf& W O G + GdY 1F 10: } 11: if( SHUA W OTG + ) { 12: Extend O to O e by starting [ + ] . 13: O e . VO S VUA W OTG + 14: } 15: . * O e5 * O e , . 1 R B 5 . 16:return * O ) / GZO ) / G646GZO ) / 5 The complexity of the search Algorithm 1 is the total number of nodes in the Bell tree, which is ? , where ? is the Bell Number. Since the Bell number increases rapidly as a function of the number of mentions, pruning is necessary. We prune the search space in the following places: Local pruning: any children with a score below a fixed factor of the maximum score are pruned. This is done at line 6 and 11 in Algorithm 1. The operation in line 4 is: . * * S VU W OTG + 5 * STVU f&\W OTG + GZY 1F . 1 R B 5" Block 8-9 is carried out only if STU & W OTG + GZY 1F and block 12-13 is carried out only if SUA W OTG + . Global pruning: similar to local pruning except that this is done using the cumulative score O . Pruning based on the global scores is carried out at line 15 of Algorithm 1. Limit hypotheses: we set a limit on the maximum number of live paths. This is useful when a document contains many mentions, in which case excessive number of paths may survive local and global pruning. Whenever available, we check the compatibility of entity types between the in-focus entity and the active mention. A hypothesis with incompatible entity types is discarded. In the ACE annotation, every mention has an entity type. Therefore we can eliminate hypotheses with two mentions of different types. 5 Experiments 5.1 Performance Metrics The official performance metric for the ACE task is ACE-value. ACE-value is computed by first calculating the weighted cost of entity insertions, deletions and substitutions; The cost is then normalized against the cost of a nominal coreference system which outputs no entities; The ACE-value is obtained by subtracting the normalized cost from & . Weights are designed to emphasize NAME entities, while PRONOUN entities (i.e., an entity consisting of only pronominal mentions) carry very low weights. A perfect coreference system will get a &'b ACE-value while a system outputs no entities will get a ACE-value. Thus, the ACE-value can be interpreted as percentage of value a system has, relative to the perfect system. Since the ACE-value is an entity-level metric and is weighted heavily toward NAME entities, we also measure our system’s performance by an entity-constrained mention F-measure (henceforth “ECM-F”). The metric first aligns the system entities with the reference entities so that the number of common mentions is maximized. Each system entity is constrained to align with at most one reference entity, and vice versa. For example, suppose that a reference document contains three entities: *\[ + ]G [ + G + ( ]G [ + ]5 while a system outputs four entities: * [ + G + ]^G [ + ( ]G [ + ]G [ + ]5 , then the best alignment (from reference to system) would be [ + ] [ + G + ] , [ + G + ( ] [ + ( ] and other entities are not aligned. The number of common mentions of the best alignment is (i.e., + and + ( ), which leads to a mention recall and precision . The ECM-F measures the percentage of mentions that are in the “right” entities. For tests on the MUC data, we report both F-measure using the official MUC score (Vilain et al., 1995) and ECM-F. The MUC score counts the common links between the reference and the system output. 
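Before turning to the results, the search procedure of Section 4 can be summarized in the following sketch, which is our reading of Algorithm 1 with its local and global pruning. Here link_prob and start_prob are assumed to wrap the trained linking model (1) and the penalized starting model (6); the beam size and pruning factor r are tuning constants and are not the values used in the original experiments.

```python
# A sketch of the Bell-tree beam search (Algorithm 1), under the
# assumption that link_prob(entities, m, j) and start_prob(entities, m)
# wrap the trained models (start_prob including the start penalty).

def resolve(mentions, link_prob, start_prob, beam=20, r=0.01, top_n=1):
    hyps = [([[mentions[0]]], 1.0)]              # root node: entity [m1]
    for m in mentions[1:]:                       # one layer per mention
        new_hyps = []
        for entities, score in hyps:
            link_scores = [link_prob(entities, m, j)
                           for j in range(len(entities))]
            start = start_prob(entities, m)
            best = max(link_scores + [start])    # for local pruning
            for j, p in enumerate(link_scores):  # link m with entity j
                if p >= r * best:
                    ext = [e + [m] if k == j else e
                           for k, e in enumerate(entities)]
                    new_hyps.append((ext, score * p))
            if start >= r * best:                # start a new entity [m]
                new_hyps.append((entities + [[m]], score * start))
        # global pruning, approximated here by a simple beam limit on
        # the cumulative scores of the surviving hypotheses
        new_hyps.sort(key=lambda h: h[1], reverse=True)
        hyps = new_hyps[:beam]
    return hyps[:top_n]                          # top-N entity results
```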
5.2 Results on the ACE data The system is first developed and tested using the ACE data. The ACE coreference system is trained with &
documents (about &b words) of ACE 2002 training data. A separate b documents ( words) is used as the development-test (Devtest) set. In 2002, NIST released two test sets in February (Feb02) and September (Sep02), respectively. Statistics of the three test sets is summarized in Table 2. We will report coreference results on the true mentions of the three test sets. TestSet #-docs #-words #-mentions #-entities Devtest 90 50426 7470 2891 Feb02 97 52677 7665 3104 Sep02 186 69649 10577 4355 Table 2: Statistics of three test sets. For the mention-pair model, training events are generated for all compatible mention-pairs, which results in about b events, about &' of which are positive examples. The full mention-pair model uses about & & features; Most are conjunction features. For the entity-mention model, events are generated by walking through the Bell tree. Only events on the true path (i.e., positive examples) and branches emitting from a node on the true path to a node not on the true path (i.e., negative examples) are generated. For example, in Figure 1, suppose that the path (a)-(b2)-(c4) is the truth, then positive training examples are starting event from (a) to (b2) and linking event from (b2) to (c4); While the negative examples are linking events from (a) to (b1), (b2) to (c3), and the starting event from (b2) to (c5). This scheme generates about c events, out of which about & are positive training examples. The full entity-mention model has about #" features, due to less number of conjunction features and training examples. Coreference results on the true mentions of the Devtest, Feb02, and Sep02 test sets are tabulated in Table 3. These numbers are obtained with a fixed search beam b and pruning threshold " #& (widening the search beam or using a smaller pruning threshold did not change results significantly). The mention-pair model in most cases performs better than the mention-entity model by both ACE-value and ECM-F measure although none of the differences is statistically significant (pair-wise t-test) at p-value #" . Note that, however, the mention-pair model uses times more features than the entity-pair model. We also observed that, because the score between the infocus entity and the active mention is computed by (9) in the mention-pair model, the mention-pair sometimes mistakenly places a male pronoun and female pronoun into the same entity, while the same mistake is avoided in the entity-mention model. Using the canonical mentions when computing some features (e.g., “spell”) in the entity-mention model is probably not optimal and it is an area that needs further research. When the same mention-pair model is used to score the ACE 2003 evaluation data, an ACE-value c " is obtained on the system1 mentions. After retrained with Chinese and Arabic data (much less training data than English), the system got " and " ACE-value on the system mentions of ACE 2003 evaluation data for Chinese and Arabic, respectively. The results for all three languages are among the top-tier submission systems. Details of the mention detection and coreference system can be found in (Florian et al., 2004). Since the mention-pair model is better, subsequent analyses are done with the mention pair model only. 5.2.1 Feature Impact To see how each category of features affects the performance, we start with the aforementioned mentionpair model, incrementally remove each feature category, retrain the system and test it on the Devtest set. The result is summarized in Table 4. 
The last column lists the number of features. The second row is the full mention-pair model, the third through seventh row correspond to models by removing the syntactic features (i.e., POS tags and apposition features), count features, distance features, mention type and level information, and pair of mention-spelling features. If a basic feature is removed, conjunction features using that basic feature are also removed. It is striking that the smallest system consisting of only c features (string and substring match, acronym, edit distance and number of different capitalized words) can get as much as #" ACE-value. Table 4 shows clearly that these lexical features and the distance features are the most important. Sometimes the ACE-value increases after removing a set of features, but the ECM-F measure tracks nicely the trend that the more features there are, the better the performance is. This is because the ACE-value 1System mentions are output from a mention detection system. −2.5 −2 −1.5 −1 −0.5 0 0.65 0.7 0.75 0.8 0.85 0.9 log α ACE−value or ECM−F ECM−F ACE−value Figure 2: Performance vs. log start penalty is a weighted metric. A small fluctuation of NAME entities will impact the ACE-value more than many NOMINAL or PRONOUN entities. Model ACE-val(%) ECM-F(%) #-features Full 89.8 73.20 ( 2.9) 171K -syntax 89.0 72.6 ( 2.5) 71K -count 89.4 72.0 ( 3.3) 70K -dist 86.7 *66.2 ( 3.9) 24K -type/level 86.8 65.7 ( 2.2) 5.4K -spell 86.0 64.4 ( 1.9) 39 Table 4: Impact of feature categories. Numbers after are the standard deviations. * indicates that the result is significantly (pair-wise t-test) different from the line above at #" . 5.2.2 Effect of Start Penalty As discussed in Section 3.1, the start penalty can be used to balance the entity miss and false alarm. To see this effect, we decode the Devtest set by varying the start penalty and the result is depicted in Figure 2. The ACE-value and ECM-F track each other fairly well. Both achieve the optimal when J #" . 5.3 Experiments on the MUC data To see how the proposed algorithm works on the MUC data, we test our algorithm on the MUC6 data. To minimize the change to the coreference system, we first map the MUC data into the ACE style. The original MUC coreference data does not have entity types (i.e., “ORGANIZATION”, “LOCATION” etc), required in the ACE style. Part of entity types can be recovered from the corresponding named-entity annotations. The recovered named-entity label is propagated to all mentions belonging to the same entity. There are 504 out of 2072 mentions of the MUC6 formal test set and 695 out of 2141 mentions of the MUC6 dry-run test set that cannot be assigned labels by this procedure. A Devtest Feb02 Sep02 Model ACE-val(%) ECM-F(%) ACE-val(%) ECM-F(%) ACE-val(%) ECM-F(%) MP 89.8 73.2 ( 2.9) 90.0 73.1 ( 4.0) 88.0 73.1 ( 6.8) EM 89.9 71.7 ( 2.4) 88.2 70.8 ( 3.9) 87.6 72.4 ( 6.2) Table 3: Coreference results on true mentions: MP – mention-pair model; EM – entity-mention model; ACE-val: ACE-value; ECM-F: Entity-constrained Mention F-measure. MP uses & & features while EM uses only " features. None of the ECM-F differences between MP and EM is statistically significant at #" . generic type “UNKNOWN” is assigned to these mentions. 
Mentions that can be found in the named-entity annotation are assumed to have the ACE mention level “NAME”; All other mentions other than English pronouns are assigned the level “NOMINAL.” After the MUC data is mapped into the ACE-style, the same set of feature templates is used to train a coreference system. Two coreference systems are trained on the MUC6 data: one trained with 30 dry-run test documents (henceforth “MUC6-small”); the other trained with 191 “dryrun-train” documents that have both coreference and named-entity annotations (henceforth “MUC6-big”) in the latest LDC release. To use the official MUC scorer, we convert the output of the ACE-style coreference system back into the MUC format. Since MUC does not require entity label and level, the conversion from ACE to MUC is “lossless.” Table 5 tabulates the test results on the true mentions of the MUC6 formal test set. The numbers in the table represent the optimal operating point determined by ECM-F. The MUC scorer cannot be used since it inherently favors systems that output fewer number of entities (e.g., putting all mentions of the MUC6 formal test set into one entity will yield a &'b recall and " precision of links, which gives an #" F-measure). The MUC6-small system compares favorably with the similar experiment in Harabagiu et al. (2001) in which an &b" F-measure is reported. When measured by the ECM-F measure, the MUC6-small system has the same level of performance as the ACE system, while the MUC6-big system performs better than the ACE system. The results show that the algorithm works well on the MUC6 data despite some information is lost in the conversion from the MUC format to the ACE format. System MUC F-measure ECM-F MUC6-small 83.9% 72.1% MUC6-big 85.7% 76.8% Table 5: Results on the MUC6 formal test set. 6 Related Work There exists a large body of literature on the topic of coreference resolution. We will compare this study with some relevant work using machine learning or statistical methods only. Soon et al. (2001) uses a decision tree model for coreference resolution on the MUC6 and MUC7 data. Leaves of the decision tree are labeled with “link” or “not-link” in training. At test time, the system checks a mention against all its preceding mentions, and the first one labeled with “link” is picked as the antecedent. Their work is later enhanced by (Ng and Cardie, 2002) in several aspects: first, the decision tree returns scores instead of a hard-decision of “link” or “not-link” so that Ng and Cardie (2002) is able to pick the “best” candidate on the left, as opposed the first in (Soon et al., 2001); Second, Ng and Cardie (2002) expands the feature sets of (Soon et al., 2001). The model in (Yang et al., 2003) expands the conditioning scope by including a competing candidate. Neither (Soon et al., 2001) nor (Ng and Cardie, 2002) searches for the global optimal entity in that they make locally independent decisions during search. In contrast, our decoder always searches for the best result ranked by the cumulative score (subject to pruning), and subsequent decisions depend on earlier ones. Recently, McCallum and Wellner (2003) proposed to use graphical models for computing probabilities of entities. The model is appealing in that it can potentially overcome the limitation of mention-pair model in which dependency among mentions other than the two in question is ignored. 
However, models in (McCallum and Wellner, 2003) compute directly the probability of an entity configuration conditioned on mentions, and it is not clear how the models can be factored to do the incremental search, as it is impractical to enumerate all possible entities even for documents with a moderate number of mentions. The Bell tree representation proposed in this paper, however, provides us with a naturally incremental framework for coreference resolution. Maximum entropy method has been used in coreference resolution before. For example, Kehler (1997) uses a mention-pair maximum entropy model, and two methods are proposed to compute entity scores based on the mention-pair model: 1) a distribution over entity space is deduced; 2) the most recent mention of an entity, together with the candidate mention, is used to compute the entity-mention score. In contrast, in our mention pair model, an entity-mention pair is scored by taking the maximum score among possible mention pairs. Our entity-mention model eliminates the need to synthesize an entity-mention score from mention-pair scores. Morton (2000) also uses a maximum entropy mention-pair model, and a special “dummy” mention is used to model the event of starting a new entity. Features involving the dummy mention are essentially computed with the single (normal) mention, and therefore the starting model is weak. In our model, the starting model is obtained by “complementing” the linking scores. The advantage is that we do not need to train a starting model. To compensate the model inaccuracy, we introduce a “starting penalty” to balance the linking and starting scores. To our knowledge, the paper is the first time the Bell tree is used to represent the search space of the coreference resolution problem. 7 Conclusion We propose to use the Bell tree to represent the process of forming entities from mentions. The Bell tree represents the search space of the coreference resolution problem. We studied two maximum entropy models, namely the mention-pair model and the entitymention model, both of which can be used to score entity hypotheses. A beam search algorithm is used to search the best entity result. State-of-the-art performance has been achieved on the ACE coreference data across three languages. Acknowledgments This work was partially supported by the Defense Advanced Research Projects Agency and monitored by SPAWAR under contract No. N66001-99-2-8916. The views and findings contained in this material are those of the authors and do not necessarily reflect the position of policy of the Government and no official endorsement should be inferred. We also would like to thank the anonymous reviewers for suggestions of improving the paper. References E.T. Bell. 1934. Exponential numbers. Amer. Math. Monthly, pages 411–419. Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71, March. R Florian, H Hassan, A Ittycheriah, H Jing, N Kambhatla, X Luo, N Nicolov, and S Roukos. 2004. A statistical model for multilingual entity detection and tracking. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 1–8, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. Niyu Ge, John Hale, and Eugene Charniak. 1998. A statistical approach to anaphora resolution. In Proc. of the sixth Workshop on Very Large Corpora. Sanda M. Harabagiu, Razvan C. Bunescu, and Steven J. 
Maiorano. 2001. Text and knowledge mining for coreference resolution. In Proc. of NAACL. J. Hobbs. 1976. Pronoun resolution. Technical report, Dept. of Computer Science, CUNY, Technical Report TR76-1. A. Ittycheriah, L. Lita, N. Kambhatla, N. Nicolov, S. Roukos, and M. Stys. 2003. Identifying and tracking entity mentions in a maximum entropy framework. In HLT-NAACL 2003: Short Papers, May 27 - June 1. Andrew Kehler. 1997. Probabilistic coreference in information extraction. In Proc. of EMNLP. Andrew McCallum and Ben Wellner. 2003. Toward conditional models of identity uncertainty with application to proper noun coreference. In IJCAI Workshop on Information Integration on the Web. R. Mitkov. 1998. Robust pronoun resolution with limited knowledge. In Procs. of the 17th Internaltional Conference on Computational Linguistics, pages 869–875. Thomas S. Morton. 2000. Coreference for NLP applications. In In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics. MUC-6. 1995. Proceedings of the Sixth Message Understanding Conference(MUC-6), San Francisco, CA. Morgan Kaufmann. Vincent Ng and Claire Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proc. of ACL, pages 104–111. NIST. 2003. The ACE evaluation plan. www.nist.gov/speech/tests/ace/index.htm. Adwait Ratnaparkhi. 1997. A Linear Observed Time Statistical Parser Based on Maximum Entropy Models. In Second Conference on Empirical Methods in Natural Language Processing, pages 1 – 10. Wee Meng Soon, Hwee Tou Ng, and Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. M. Vilain, J. Burger, J. Aberdeen, D. Connolly, , and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In In Proc. of MUC6, pages 45–52. Xiaofeng Yang, Guodong Zhou, Jian Su, and Chew Lim Tan. 2003. Coreference resolution using competition learning approach. In Proc. of the & Q ACL. | 2004 | 18 |
LEARNING TO RESOLVE BRIDGING REFERENCES Massimo Poesio,♣Rahul Mehta,♣Axel Maroudas,♣and Janet Hitzeman♠ ♣Dept. of Comp. Science, University of Essex, UK poesio at essex dot ac dot uk ♠MITRE Corporation, USA hitz at mitre dot org Abstract We use machine learning techniques to find the best combination of local focus and lexical distance features for identifying the anchor of mereological bridging references. We find that using first mention, utterance distance, and lexical distance computed using either Google or WordNet results in an accuracy significantly higher than obtained in previous experiments. 1 Introduction BRIDGING REFERENCES (BR) (Clark, 1977)– anaphoric expressions that cannot be resolved purely on the basis of string matching and thus require the reader to ’bridge’ the gap using commonsense inferences–are arguably the most interesting and, at the same time, the most challenging problem in anaphora resolution. Work such as (Poesio et al., 1998; Poesio et al., 2002; Poesio, 2003) provided an experimental confirmation of the hypothesis first put forward by Sidner (1979) that BRIDGING DESCRIPTIONS (BD)1 are more similar to pronouns than to other types of definite descriptions, in that they are sensitive to the local rather than the global focus (Grosz and Sidner, 1986). This previuous work also suggested that simply choosing the entity whose description is lexically closest to that of the bridging description among those in the current focus space gives poor results; in fact, better results are obtained by always choosing as ANCHOR of the bridging reference2 the first-mentioned entity of the previous sentence (Poesio, 2003). But neither source of information in isolation resulted in an accuracy over 40%. In short, this earlier work suggested that a combination of salience and lexical / 1We will use the term bridging descriptions to indicate bridging references realized by definite descriptions, equated here with noun phrases with determiner the, like the top. 2Following (Poesio and Vieira, 1998), we use the term ‘anchor’ as as a generalization of the term ANTECEDENT, to indicate the discourse entity which an anaphoric expression either realizes, or is related to by an associative relation; reserving ‘antecedent’ for the cases of identity. commonsense information is needed to choose the most likely anchor; the problem remained of how to combine this information. In the work described in this paper, we used machine learning techniques to find the best combination of local focus features and lexical distance features, focusing on MEREOLOGICAL bridging references:3 references referring to parts of an object already introduced (the cabinet), such as the panels or the top (underlined) in the following example from the GNOME corpus (Poesio et al., 2004). (1) The combination of rare and expensive materials used on [this cabinet]i indicates that it was a particularly expensive commission. The four Japanese lacquer panels date from the mid- to late 1600s and were created with a technique known as kijimaki-e. For this type of lacquer, artisans sanded plain wood to heighten its strong grain and used it as the background of each panel. They then added the scenic elements of landscape, plants, and animals in raised lacquer. Although this technique was common in Japan, such large panels were rarely incorporated into French eighteenth-century furniture. Heavy Ionic pilasters, whose copper-filled flutes give an added rich color and contrast to the giltbronze mounts, flank the panels. 
Yellow jasper, a semiprecious stone, rather than the usual marble, forms the top. 2 Two sources of information for bridging reference resolution 2.1 Lexical information The use of different sources of lexical knowledge for resolving bridging references has been investigated in a series of papers by Poesio et al. all using as dataset the Bridging Descriptions (BDs) contained in the corpus used by Vieira and Poesio 3We make use of the classification of bridging references proposed by Vieira and Poesio (2000). ‘Mereological’ bridging references are one of the the ‘WordNet’ bridging classes, which cover cases where the information required to bridge the gap may be found in a resource such as WordNet (Fellbaum, 1998): synonymy, hyponymy, and meronymy. (2000). In these studies, the lexical distance between a BD and its antecedent was used to choose the anchor for the BD among the antecedents in the previous five sentences. In (Poesio et al., 1997; Vieira and Poesio, 2000) WordNet 1.6 was used as a lexical resource, with poor or mediocre results. These results were due in part to missing entries and / or relations; in part to the fact that because of the monotonic organization of information in WordNet, complex searches are required even to find apparently close associations (like that between wheel and car). Similar results using WordNet 1.6 were reported at around the same time by other groups - e.g., (Humphreys et al., 1997; Harabagiu and Moldovan, 1998) and have been confirmed by more recent studies studying both hyponymy (Markert et al., 2003) and more specifically mereological BDs. Poesio (2003) found that none of the 58 mereological references in the GNOME corpus (discussed below) had a direct mereological link to their anchor: for example, table is not listed as a possible holonym of drawer, nor is house listed as a possible holonym for furniture. Garcia-Almanza (2003) found that only 16 of these 58 mereological references could be resolved by means of more complex searches in WordNet, including following the hypernymy hierarchy for both the anchor and the bridging reference, and a ’spreading activation’ search. Poesio et al. (1998) explored the usefulness of vector-space representations of lexical meaning for BDs that depended on lexical knowledge about hyponymy and synonymy. The HAL model discussed in Lund et al. (1995) was used to find the anchor of the BDs in the dataset already used by Poesio et al. (1997). However, using vectorial representations did not improve the results for the ‘WordNet’ BDs: for the synonymy cases the results were comparable to those obtained with WordNet (4/12, 33%), but for the hyponymy BDs (2/14, as opposed to 8/14 with WordNet) and especially for mereological references (2/12) they were clearly worse. On the other hand, the post-hoc analysis of results suggested that the poor results were in part due to the lack of mechanisms for choosing the most salient (or most recent) BDs. The poor results for mereological BDs with both WordNet and vectorial representations indicated that a different approach was needed to acquire information about part-of relations. Grefenstette’s work on semantic similarity (Grefenstette, 1993) and Hearst’s work on acquiring taxonomic information (Hearst, 1998) suggested that certain syntactic constructions could be usefully viewed as reflecting underlying semantic relations. 
In (Ishikawa, 1998; Poesio et al., 2002) it was proposed that syntactic patterns (henceforth: CONSTRUCTIONS) such as the wheel of the car could indicate that wheel and car stood in a part-of relation.4 Vectorbased lexical representations whose elements encoded the strength of associations identified by means of constructions like the one discussed were constructed from the British National Corpus, using Abney’s CASS chunker. These representations were then used to choose the anchor of BDs, using again the same dataset and the same methods as in the previous two attempts, and using mutual information to determine the strength of association. The results on mereological BDs–recall .67, precision=.73–were drastically better than those obtained with WordNet or with simple vectorial representations. The results with the three types of lexical resources and the different types of BDs in the Vieira / Poesio dataset are summarized in Table 1. Finally, a number of researchers recently argued for using the Web as a way of addressing data sparseness (Keller and Lapata, 2003). The Web has proven a useful resource for work in anaphora resolution as well. Uryupina (2003) used the Web to estimate ‘Definiteness probabilities’ used as a feature to identify discourse-new definites. Markert et al. (2003) used the Web and the construction method to extract information about hyponymy used to resolve other-anaphora (achieving an f value of around 67%) as well as the BDs in the Vieira-Poesio dataset (their results for these cases were not better than those obtained by (Vieira and Poesio, 2000)). Markert et al. also found a sharp difference between using the Web as a a corpus and using the BNC, the results in the latter case being significantly worse than when using WordNet. Poesio (2003) used the Web to choose between the hypotheses concerning the anchors of mereological BDs in the GNOME corpus generated on the basis of Centering information (see below). 2.2 Salience One of the motivations behind Grosz and Sidner’s (1986) distinction between two aspects of the attentional state - the LOCAL FOCUS and the GLOBAL FOCUS–is the difference between the interpretive preferences of pronouns and definite descriptions. According to Grosz and Sidner, the interpretation for pronouns is preferentially found in the local focus, whereas that of definite descriptions is preferentially found in the global focus. 4A similar approach was pursued in parallel by Berland and Charniak (1999). Synonymy Hyponymy Meronymy Total WN Total BDs BDs in Vieira / Poesio corpus 12 14 12 38 204 Using WordNet 4 (33.3%) 8(57.1%) 3(33.3%) 15 (39%) 34 (16.7%) Using HAL Lexicon 4 (33.3%) 2(14.3%) 2(16.7%) 8 (22.2%) 46(22.7%) Using Construction Lexicon 1 (8.3%) 0 8(66.7%) 9 (23.7%) 34(16.7%) Table 1: BD resolution results using only lexical distance with WordNet, HAL-style vectorial lexicon, and construction-based lexicon. However, already Sidner (1979) hypothesized that BDs are different from other definite descriptions, in that the local focus is preferred for their interpretation. As already mentioned, the error analysis of Poesio et al. (1998) supported this finding: the study found that the strategy found to be optimal for anaphoric definite descriptions by Vieira and Poesio (2000), considering as equally likely all antecedents in the previous five-sentence window (as opposed to preferring closer antecedents), gave poor results for bridging references; entities introduced in the last two sentences and ‘main entities’ were clearly preferred. 
The following example illustrates how the local focus affects the interpretation of a mereological BD, the sides, in the third sentence. (2) [Cartonnier (Filing Cabinet)]i with Clock [This piece of mid-eighteenth-century furniture]i was meant to be used like a modern filing cabinet; papers were placed in [leatherfronted cardboard boxes]j (now missing) that were fitted into the open shelves. [A large table]k decorated in the same manner would have been placed in front for working with those papers. Access to [the cartonnier]i’s lower half can only be gained by the doors at the sides, because the table would have blocked the front. The three main candidate anchors in this example– the cabinet, the boxes, and the table–all have sides. However, the actual anchor, the cabinet, is clearly the Backward-Looking Center (CB) (Grosz et al., 1995) of the first sentence after the title;5 and if we assume that entities can be indirectly realized– see (Poesio et al., 2004)–the cabinet is the CB of all three sentences, including the one containing the BR, and therefore a preferred candidate. In (Poesio, 2003), the impact on associative BD resolution of both relatively simple salience features (such as distance and order or mention) and of more complex ones (such as whether the anchor was a CB or not) was studied using the GNOME corpus (discussed below) and the CB-tracking techniques developed to compare alternative ways of instantiating 5The CB is Centering theory’s (Grosz et al., 1995) implementation of the notion of ‘topic’ or ‘main entity’. the parameters of Centering by Poesio et al. (2004). Poesio (2003) analyzed, first of all, the distance between the BD and the closest mention of the anchor, finding that of the 169 associative BDs, 77.5% had an anchor occurring either in the same sentence (59) or the previous one (72); and that only 4.2% of anchors were realized more than 5 sentences back. These percentages are very similar to those found with pronouns (Hobbs, 1978). Next, Poesio analyzed the order of mention of the anchors of the 72 associative BD whose anchor was in the previous sentence, finding that 49/72, 68%, were realized in first position. This finding is consistent with the preference for first-mentioned entities (as opposed to the most recent ones) repeatedly observed in the psychological literature on anaphora (Gernsbacher and Hargreaves, 1988; Gordon et al., 1993). Finally, Poesio examined the hypothesis that finding the anchor of a BD involves knowing which entities are the CB and the CP in the sense of Centering (Grosz et al., 1995). He found that CB(U-1) is the anchor of 37/72 of the BDs whose anchor is in the previous utterance (51.3%), and only 33.6% overall. (CP(U-1) was the anchor for 38.2% associative BDs.) Clearly, simply choosing the CB (or the CP) of the previous sentence as the anchor doesn’t work very well. However, Poesio also found that 89% of the anchors of associative BDs had been CBs or CPs. This suggested that while knowing the local focus isn’t sufficient to determine the anchor of a BD, restricting the search for anchors to CBs and CPs only might increase the precision of the BD resolution process. This hypothesis was supported by a preliminary test with 20 associative BDs. 
The anchor for a BD with head noun NBD was chosen among the subset of all potential antecedents (PA) in the previous five sentences that had been CBs or CPs by calling Google (by hand) with the query “the NBD of the NPA”, where NPA is the head noun of the potential antecedent, and choosing the PA with the highest hit count. 14 mereological BDs (70%) were resolved correctly this way. 3 Methods The results just discussed suggest that lexical information and salience information combine to determine the anchor of associative BRs. The goal of the experiments discussed in this paper was to test more thoroughly this hypothesis using machine learning techniques to combine the two types of information, using a larger dataset than used in this previous work, and using completely automatic techniques. We concentrated on mereological BDs, but our methods could be used to study other types of bridging references, using, e.g., the constructions used by Markert et al. (2003).6 3.1 The corpus We used for these experiments the GNOME corpus, already used in (Poesio, 2003). An important property of this corpus for the purpose of studying BR resolution is that fewer types of BDs are annotated than in the original Vieira / Poesio dataset, but the annotation is reliable (Poesio et al., 2004).7 The corpus also contains more mereological BDs and BRs than the original dataset used by Poesio and Vieira. The GNOME corpus contains about 500 sentences and 3000 NPs. A variety of semantic and discourse information has been annotated (the manual is available from the GNOME project’s home page at http://www.hcrc.ed.ac.uk/ ˜ gnome). Four types of anaphoric relations were annotated: identity (IDENT), set membership (ELEMENT), subset (SUBSET), and ‘generalized possession’ (POSS), which also includes part-of relations. A total of 2073 anaphoric relations were annotated; these include 1164 identity relations (including those realized with synonyms and hyponyms) and 153 POSS relations. Bridging references are realized by noun phrases of different types, including indefinites (as in I bought a book and a page fell out (Prince, 1981)). Of the 153 mereological references, 58 mereological references are realized by definite descriptions. 6In (Poesio, 2003), bridging descriptions based on set relations (element, subset) were also considered, but we found that this class of BDs required completely different methods. 7A serious problem when working with bridging references is the fact that subjects, when asked for judgments about bridging references in general, have a great deal of difficulty in agreeing on which expressions in the corpus are bridging references, and what their anchors are (Poesio and Vieira, 1998). This finding raises a number of interesting theoretical questions concerning the extent of agreement on semantic judgments, but also the practical question of whether it is possible to evaluate the performance of a system on this task. Subsequent work found, however, that restricting the type of bridging inferences required does make it possible for annotators to agree among themselves (Poesio et al., 2004). In the GNOME corpus only a few types of associative relations are marked, but these can be marked reliably, and do include part-of relations like that between the top and the cabinet that we are concerned with. 3.2 Features Our classifiers use two types of input features. Lexical features Only one lexical feature was used: lexical distance, but extracted from two different lexical sources. 
Google distance was computed as in (Poesio, 2003) (see also Markert et al. (2003)): given head nouns NBD of the BD and NPA of a potential antecedent, Google is called (via the Google API) with a query of the form “the NBD of the NPA” (e.g., the sides of the table) and the number of hits NHits is computed. Then

Google distance = 1 if NHits = 0; otherwise Google distance = 1 / NHits.

The query “the NBD of NPA” (e.g., the amount of cream) is used when NPA is used as a mass noun (information about mass vs count is annotated in the GNOME corpus). If the potential antecedent is a pronoun, the head of the closest realization of the same discourse entity is used. We also reconsidered WordNet (1.7.1) as an alternative way of establishing lexical distance, but made a crucial change from the studies reported above. Both earlier studies such as (Poesio et al., 1997) and more recent ones (Poesio, 2003; Garcia-Almanza, 2003) had shown that mereological information in WordNet is extremely sparse. However, these studies also showed that information about hypernyms is much more extensive. This suggested trading precision for recall with an alternative way of using WordNet to compute lexical distance: instead of requiring the path between the head predicate of the associative BD and the head predicate of the potential antecedent to contain at least one mereological link (various strategies for performing a search of this type were considered in (Garcia-Almanza, 2003)), we consider only hypernymy and hyponymy links. To compute our second measure of lexical distance between NBD and NPA (defined as above), WordNet distance, the following algorithm was used. Let distance(s, s′) be the number of hypernym links between concepts s and s′. Then:

1. Get from WordNet all the senses of both NBD and NPA;
2. Get the hypernym tree of each of these senses;
3. For each pair of senses s_i (of NBD) and s_j (of NPA), find their Most Specific Common Subsumer s_ij^comm (the closest concept which is a hypernym of both senses);
4. The ShortestWNDistance between NBD and NPA is then computed as the shortest distance between any of the senses of NBD and any of the senses of NPA:

ShtstWNDist(NBD, NPA) = min over i,j of [ distance(s_i, s_ij^comm) + distance(s_ij^comm, s_j) ]

5. Finally, a normalized WordNet distance in the range 0..1 is obtained by dividing ShtstWNDist by a MaxWNDist factor (30 in our experiments). WordNet distance is set to 1 if no path between the concepts was found:

WN distance = 1 if no path exists; otherwise WN distance = ShtstWNDist / MaxWNDist.

(A sketch of how these two distance measures might be computed is given below.)

Salience features In choosing the salience features we took into account the results in (Poesio, 2003), but we only used features that were easy to compute, hoping that they would approximate the more complex features used in (Poesio, 2003). The first of these features was utterance distance, the distance between the utterance in which the BR occurs and the utterance containing the potential antecedent. (Sentences are used as utterances, as suggested by the results of (Poesio et al., 2004).) As discussed above, studies such as (Poesio, 2003) suggested that bridging references were sensitive to distance, in the same way as pronouns (Hobbs, 1978; Clark and Sengul, 1979). This finding was confirmed in our study: all anchors of the 58 mereological BDs occurred within the previous five sentences, and 47/58 (81%) in the previous two. (It is interesting to note that no anchor occurred in the same sentence as the BD.)
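The two lexical distance measures just defined can be sketched in a few lines of code. The sketch below is illustrative only: it assumes NLTK's WordNet interface as a stand-in for the WordNet 1.7.1 database used in the paper, assumes the Google hit count has already been obtained via the search API, and all function names are ours rather than the authors'.

from nltk.corpus import wordnet as wn

MAX_WN_DIST = 30  # normalization factor used above

def google_distance(n_hits):
    # Google distance from the hit count of the query "the N_BD of the N_PA".
    return 1.0 if n_hits == 0 else 1.0 / n_hits

def hypernym_link_counts(synset):
    # Map every hypernym of `synset` (including itself) to the minimum number
    # of hypernym links needed to reach it from `synset`.
    counts = {}
    for path in synset.hypernym_paths():            # each path runs root -> synset
        for links_up, ancestor in enumerate(reversed(path)):
            if links_up < counts.get(ancestor, float("inf")):
                counts[ancestor] = links_up
    return counts

def shortest_wn_distance(n_bd, n_pa):
    # ShtstWNDist: minimum, over all sense pairs, of the hypernym-link distance
    # through a common subsumer; None if the two nouns share no subsumer.
    best = None
    for s_bd in wn.synsets(n_bd, pos=wn.NOUN):
        d_bd = hypernym_link_counts(s_bd)
        for s_pa in wn.synsets(n_pa, pos=wn.NOUN):
            d_pa = hypernym_link_counts(s_pa)
            for common in set(d_bd) & set(d_pa):    # candidate common subsumers
                dist = d_bd[common] + d_pa[common]
                if best is None or dist < best:
                    best = dist
    return best

def wordnet_distance(n_bd, n_pa):
    # Normalized WordNet distance in [0, 1]; 1 if no path is found.
    dist = shortest_wn_distance(n_bd, n_pa)
    return 1.0 if dist is None else min(dist / MAX_WN_DIST, 1.0)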
The second salience feature was boolean: whether the potential antecedent had been realized in first mention position in a sentence (Poesio, 2003; Gernsbacher and Hargreaves, 1988; Gordon et al., 1993). Two forms of this feature were tried: local first mention (whether the entity had been realized in first position within the previous five sentences) and global first mention (whether it had been realized in first position anywhere). 269 entities are realized in first position in the five sentences preceding one of the 58 BDs; 298 entities are realized in first position anywhere in the preceding text. For 31/58 of the anchors of mereological BDs, 53.5%, local first mention = 1; global first mention = 1 for 33/58 of anchors, 56.9%. 3.3 Training Methods Constructing the data set The data set used to train and test BR resolution consisted of a set of positive instances (the actual anchors of the mereological BRs) and a set of negative instances (other entities mentioned in the previous five sentences of the text). However, preliminary tests showed that simply including all potential antecedents as negative instances would make the data set too unbalanced, particularly when only bridging descriptions were considered: in this case we would have had 58 positive instances vs. 1672 negative ones. We therefore developed a parametric script that could create datasets with different positive / negative ratios - 1:1, 1:2, 1:3 - by including, with each positive instance, a varying number of negative instances (1, 2, 3, ...) randomly chosen among the other potential antecedents, the number of negative instances to be included for each positive one being a parameter chosen by the experimenter. We report the results obtained with 1:1 and 1:3 ratios. The dataset thus constructed was used for both training and testing, by means of a 10-fold crossvalidation. Types of Classifiers Used Multi-layer perceptrons (MLPs) have been claimed to work well with small datasets; we tested both our own implementation of an MLP with back-propagation in MatLab 6.5, experimenting with different configurations, and an off-the-shelf MLP included in the Weka Machine Learning Library8, Weka-NN. The best configuration for our own MLP proved to be one with a sigle hidden layer and 10 hidden nodes. We also used the implementation of a Naive Bayes classifier included in the Weka MLL, as Modjeska et al. (2003) reported good results. 4 Experimental Results In the first series of experiments only mereological Bridging Descriptions were considered (i.e., only bridging references realized by the-NPs). In a second series of experiments we considered all 153 mereological BRs, including ones realized with indefinites. Finally, we tested a classifier trained on balanced data (1:1 and 1:3) to find the anchors of BDs among all possible anchors. 4.1 Experiment 1: Mereological descriptions The GNOME corpus contains 58 mereological BDs. The five sentences preceding these 58 BDs contain a total of 1511 distinct entities for which a head could be recovered, possibly by examining their antecedents. This means an average of 26 distinct potential antecedents per BD, and 5.2 entities per sentence. The simplest baselines for the task of finding 8The library is available from http://www.cs.waikato.ac.nz/ml/weka/. the anchor are therefore 4% (by randomly choosing one antecedent among those in the previous five sentences) and 19.2% (by randomly choosing one antecedent among those in the previous sentence only). 
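The simple baselines computed here and in the next paragraphs can be phrased as selection functions over the set of potential antecedents of a bridging description. The sketch below is not the authors' code; it assumes each candidate is a dictionary carrying an id, a first_mention flag, and precomputed google_dist and wn_dist values.

import random

def random_choice_baseline(candidates):
    # Pick one potential antecedent at random.
    return random.choice(candidates) if candidates else None

def first_mention_baseline(candidates):
    # Pick randomly among candidates realized in first-mention position.
    first_mentioned = [c for c in candidates if c["first_mention"]]
    return random_choice_baseline(first_mentioned or candidates)

def min_distance_baseline(candidates, key="google_dist"):
    # Pick the candidate with the smallest lexical distance
    # (key is "google_dist" or "wn_dist").
    return min(candidates, key=lambda c: c[key]) if candidates else None

def baseline_accuracy(test_items, selector):
    # test_items: list of (candidate_list, true_anchor_id) pairs, one per BD.
    correct = 0
    for candidates, anchor_id in test_items:
        choice = selector(candidates)
        if choice is not None and choice["id"] == anchor_id:
            correct += 1
    return correct / len(test_items) if test_items else 0.0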
As 4.6 entities on average were realized in first mention position in the five sentences preceding a BD (269/58), choosing randomly among the first-mentioned entities gives a slightly higher accuracy of 21.3%. A few further baselines can be established by examining each feature separately. Google didn't return any hits for 1089 out of 1511 distinct PAs, and no hit for 24/58 anchors; in 8/58 of cases (13.8%) the entity with the minimum Google distance is the correct anchor. We saw before that the method for computing WordNet distance used in (Poesio, 2003) didn't find a path for any of the mereological BDs; however, not trying to follow mereological links worked much better, achieving the same accuracy as Google distance (8/58, 13.8%) and finding connections for much higher percentages of concepts: no path could be found for only 10/58 of actual anchors, and for 503/1511 potential antecedents. Pairwise combinations of these features were also considered. The best such combination, choosing the first mentioned entity in the previous sentence, achieves an accuracy of 18/58, 31%. These baseline results are summarized in the following table. Notice how even the best baselines achieve pretty low accuracy, and how even simple 'salience' measures work better than lexical distance measures.

Baseline                                                   Accuracy
Random choice between entities in previous 5               4%
Random choice between entities in previous 1               19%
Random choice between First Ment. entities in previous 5   21.3%
Entity with min Google distance                            13.8%
Entity with min WordNet distance                           13.8%
FM entity in previous sentence                             31%
Min Google distance in previous sentence                   17.2%
Min WN distance in previous sentence                       25.9%
FM and Min Google distance                                 12%
FM and Min WN distance                                     24.1%

Table 2: Baselines for the BD task

The features utterance distance, local first mention, and global f.m. were used in all machine learning experiments. But since one of our goals was to compare different lexical resources, only one lexical distance feature was used in the first two experiments. The three classifiers were trained to classify a potential antecedent as either 'anchor' or 'not anchor'. The classification results with Google distance and WN distance for all three classifiers and the 1:1 data set (116 instances in total: 58 real anchors, 58 negative instances), for all elements of the data set, and averaging across the 10 cross-validations, are shown in Table 3.

                     WN Distance (Correct)   Google Distance (Correct)
Our own MLP          92 (79.3%)              89 (76.7%)
Weka NN              91 (78.4%)              86 (74.1%)
Weka Naive Bayes     88 (75.9%)              85 (73.3%)

Table 3: Classification results for BDs

These results are clearly better than those obtained with any of the baseline methods discussed above. The differences between WN distance and Google distance, and that between our own MLP and the Weka implementation of Naive Bayes, are also significant (by a sign test, p ≤ .05), whereas the pairwise differences between our own MLP and Weka's NN, and between this and the Naive Bayes classifier, aren't. In other words, although we find little difference between using WordNet and Google to compute lexical distance, using WordNet leads to slightly better results for BDs. (A sketch of this kind of pairwise sign test is given below.)
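The significance figures above come from a pairwise sign test over the per-instance decisions of two classifiers. The paper does not give implementation details, so the exact two-sided test below is only one plausible realization, written against two parallel lists recording whether each classifier got each test instance right.

from math import comb

def sign_test(correct_a, correct_b):
    # Exact two-sided sign test on paired classifier decisions.
    plus = sum(1 for a, b in zip(correct_a, correct_b) if a and not b)
    minus = sum(1 for a, b in zip(correct_a, correct_b) if b and not a)
    n = plus + minus                  # ties are discarded
    if n == 0:
        return 1.0
    k = min(plus, minus)
    # Probability of a result at least this extreme under a fair-coin null.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)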
The next table shows precision, recall and f-values for the positive data points, for the feature sets using WN distance and Google distance, respectively: Precision Recall F-value WN features 75.4% 84.5% 79.6% Google features 70.6% 86.2% 77.6% Table 4: Precision and recall for positive instances Using a 1:3 dataset (3 negative data points for each anchor), overall accuracy increases (to 82% using Google distance) and accuracy with Google distance is better than with Wordnet distance (80.6%); however, the precision and recall figures for the positive data points get much worse: 56.7% with Google, 55.7% with Wordnet. 4.2 All mereological references Clearly, 58 positive instances is a fairly small dataset. In order to have a larger dataset, we included every bridging reference in the corpus, including those realized with indefinite NPs, thus bringing the total to 153 positive instances. We then ran a second series of experiments using the same methods as before. The results were slightly lower than those for BDs only, but in this case there was no difference between using Google and using WN. Fmeasure on positive instances was 76.3% with WN, 75.8% with Google. 4.3 A harder test In a last experiment, we used classifiers trained on balanced and moderately unbalanced data to determine the anchor of 6 randomly chosen BDs among WN Distance Google Distance (Correct) (Correct) Weka NN 227(74.2%) 230(75.2%) Table 5: Classification results for all BDs all of their 346 possible antecedents in context. For these experiments, we also tried to use both Google and WordNet simultaneously. The results for BDs are shown in Table 6. The first column of the table specifies the lexical resource used; the second the degree of balance; the next two columns percentage correct and F value on a testing set with the same balance as the training set; the final two columns perc. correct and F value on the harder test set. The best results,F=.5, are obtained using both Google and WN distance, and using a larger (if unbalanced) training corpus. These results are not as good as those obtained (by hand) by Poesio (which, however, used a complete focus tracking mechanism), but the F measure is still 66% higher than that obtained with the highest baseline (FM only), and not far off from the results obtained with direct anaphoric definite descriptions (e.g., by (Poesio and Alexandrov-Kabadjov, 2004)). It’s also conforting to note that results with the harder test improve the more data are used, which suggests that better results could be obtained with a larger corpus. 5 Related work In recent years there has been a lot of work to develop anaphora resolution algorithms using both symbolic and statistical methods that could be quantitatively evaluated (Humphreys et al., 1997; Ng and Cardie, 2002) but this work focused on identity relations; bridging references were explicitly excluded from the MUC coreference task because of the problems with reliability discussed earlier. Thus, most work on bridging has been theoretical, like the work by Asher and Lascarides (1998). Apart from the work by Poesio et al., the main other studies attempting quantitative evaluations of bridging reference resolution are (Markert et al., 1996; Markert et al., 2003). Markert et al. (1996) also argue for the need to use both Centering information and conceptual knowledge, and attempt to characterize the ‘best’ paths on the basis of an analysis of part-of relations, but use a hand-coded, domain-dependent knowledge base. Markert et al. 
(2003) focus on other anaphora, using Hearst’ patterns to mine information about hyponymy from the Web, but do not use focusing knowledge. 6 Discussion and Conclusions The two main results of this study are, first of all, that combining ’salience’ features with ’lexical’ features leads to much better results than using either method in isolation; and that these results are an improvement over those previously reported in the literature. A secondary, but still interesting, result is that using WordNet in a different way –taking advantage of its extensive information about hypernyms to obviate its lack of information about meronymy–obviates the problems previously reported in the literature on using WordNet for resolving mereological bridging references, leading to results comparable to those obtained using Google. (Of course, from a practical perspective Google may still be preferrable, particularly for languages for which no WordNet exists.) The main limitation of the present work is that the number of BDs and BRs considered, while larger than in our previous studies, is still fairly small. Unfortunately, creating a reasonably accurate gold standard for this type of semantic interpretation process is slow work. Our first priority will be therefore to extend the data set, including also the original cases studied by Poesio and Vieira. Current and future work will also include incorporating the methods tested here in an actual anaphora resolution system, the GUITAR system (Poesio and Alexandrov-Kabadjov, 2004). We are also working on methods for automatically recognizing bridging descriptions, and dealing with other types of (non-associative) bridging references based on synonymy and hyponymy. Acknowledgments The creation of the GNOME corpus was supported by the EPSRC project GNOME, GR/L51126/01. References N. Asher and A. Lascarides. 1998. Bridging. Journal of Semantics, 15(1):83–13. M. Berland and E. Charniak. 1999. Finding parts in very large corpora. In Proc. of the 37th ACL. H. H. Clark and C. J. Sengul. 1979. In search of referents for nouns and pronouns. Memory and Cognition, 7(1):35–41. H. H. Clark. 1977. Bridging. In P. N. JohnsonLaird and P.C. Wason, editors, Thinking: Readings in Cognitive Science. Cambridge. C. Fellbaum, editor. 1998. WordNet: An electronic lexical database. The MIT Press. A. Garcia-Almanza. 2003. Using WordNet for mereological anaphora resolution. Master’s thesis, University of Essex. Lex Res Balance Perc on bal F on bal Perc on Hard F on Hard WN 1:1 70.2% .7 80.2% .2 1:3 75.9% .4 91.7% 0 Google 1:1 64.4% .7 63.6% .1 1.3 79.8% .5 88.4% .3 WN + 1:1 66.3% .6 65.3% .2 Google 1.3 77.9% .4 92.5% .5 Table 6: Results using a classifier trained on balanced data on unbalanced ones. M. A. Gernsbacher and D. Hargreaves. 1988. Accessing sentence participants. Journal of Memory and Language, 27:699–717. P. C. Gordon, B. J. Grosz, and L. A. Gillion. 1993. Pronouns, names, and the centering of attention in discourse. Cognitive Science, 17:311–348. G. Grefenstette. 1993. SEXTANT: extracting semantics from raw text. Heuristics. B. J. Grosz and C. L. Sidner. 1986. Attention, intention, and the structure of discourse. Computational Linguistics, 12(3):175–204. B. J. Grosz, A. K. Joshi, and S. Weinstein. 1995. Centering. Computational Linguistics, 21(2):202–225. S. Harabagiu and D. Moldovan. 1998. Knowledge processing on extended WordNet. In (Fellbaum, 1998), pages 379–405. M. A. Hearst. 1998. Automated discovery of Wordnet relations. In (Fellbaum, 1998). J. R. Hobbs. 1978. 
Resolving pronoun references. Lingua, 44:311–338. K. Humphreys, R. Gaizauskas, S. Azzam, C. Huyck, B. Mitchell, and H. Cunningham Y. Wilks. 1997. Description of the LaSIE-II System as used for MUC-7. In Proc. of the 7th Message Understanding Conference (MUC-7). T. Ishikawa. 1998. Acquisition of associative information and resolution of bridging descriptions. Master’s thesis, University of Edinburgh. F. Keller and M. Lapata. 2003. Using the Web to obtain frequencies for unseen bigrams. Computational Linguistics, 29(3). K. Lund, C. Burgess, and R. A. Atchley. 1995. Semantic and associative priming in highdimensional semantic space. In Proc. of the 17th Conf. of the Cogn. Science Soc., pages 660–665. K. Markert, M. Strube, and U. Hahn. 1996. Inferential realization constraints on functional anaphora in the centering model. In Proc. of 18th Conf. of the Cog. Science Soc., pages 609–614. K. Markert, M. Nissim, and N.. Modjeska. 2003. Using the Web for nominal anaphora resolution. In Proc. of the EACL Workshop on the Computational Treatment of Anaphora, pages 39–46. N. Modjeska, K. Markert, and M. Nissim. 2003. Using the Web in ML for anaphora resolution. In Proc. of EMNLP-03, pages 176–183. V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Meeting of the ACL. M. Poesio and R. Vieira. 1998. A corpus-based investigation of definite description use. Computational Linguistics, 24(2):183–216, June. M. Poesio, R. Vieira, and S. Teufel. 1997. Resolving bridging references in unrestricted text. In R. Mitkov, editor, Proc. of the ACL Workshop on Robust Anaphora Resolution, pages 1–6, Madrid. M. Poesio, S. Schulte im Walde, and C. Brew. 1998. Lexical clustering and definite description interpretation. In Proc. of the AAAI Spring Symposium on Learning for Discourse, pages 82–89. M. Poesio, T. Ishikawa, S. Schulte im Walde, and R. Vieira. 2002. Acquiring lexical knowledge for anaphora resolution. In Proc. of the 3rd LREC. M. Poesio and M. Alexandrov-Kabadjov. 2004. A general-purpose, off the shelf anaphoric resolver. In Proc. of the 4th LREC, Lisbon. M. Poesio, R. Stevenson, B. Di Eugenio, and J. M. Hitzeman. 2004. Centering: A parametric theory and its instantiations. Comp. Linguistics. 30(3). M. Poesio. 2003. Associative descriptions and salience. In Proc. of the EACL Workshop on Computational Treatments of Anaphora. E. F. Prince. 1981. Toward a taxonomy of givennew information. In P. Cole, editor, Radical Pragmatics, pages 223–256. Academic Press. C. L. Sidner. 1979. Towards a computational theory of definite anaphora comprehension in English discourse. Ph.D. thesis, MIT. O. Uryupina. 2003. High-precision identification of discourse-new and unique noun phrases. In Proc. of ACL 2003 Stud. Workshop, pages 80–86. R. Vieira and M. Poesio. 2000. An empiricallybased system for processing definite descriptions. Computational Linguistics, 26(4), December. | 2004 | 19 |
Constructivist Development of Grounded Construction Grammars Luc Steels University of Brussels (VUB AI Lab) SONY Computer Science Lab - Paris 6 Rue Amyot, 75005 Paris [email protected] Abstract The paper reports on progress in building computational models of a constructivist approach to language development. It introduces a formalism for construction grammars and learning strategies based on invention, abduction, and induction. Examples are drawn from experiments exercising the model in situated language games played by embodied artificial agents. 1 Introduction The constructivist approach to language learning proposes that ”children acquire linguistic competence (...) only gradually, beginning with more concrete linguistic structures based on particular words and morphemes, and then building up to more abstract and productive structures based on various types of linguistic categories, schemas, and constructions.” (TomaselloBrooks, 1999), p. 161. The approach furthermore assumes that language development is (i) grounded in cognition because prior to (or in a co-development with language) there is an understanding and conceptualisation of scenes in terms of events, objects, roles that objects play in events, and perspectives on the event, and (ii) grounded in communication because language learning is intimately embedded in interactions with specific communicative goals. In contrast to the nativist position, defended, for example, by Pinker (Pinker, 1998), the constructivist approach does not assume that the semantic and syntactic categories as well as the linking rules (specifying for example that the agent of an action is linked to the subject of a sentence) are universal and innate. Rather, semantic and syntactic categories as well as the way they are linked is built up in a gradual developmental process, starting from quite specific ‘verb-island constructions’. Although the constructivist approach appears to explain a lot of the known empirical data about child language acquisition, there is so far no worked out model that details how constructivist language development works concretely, i.e. what kind of computational mechanisms are implied and how they work together to achieve adult (or even child) level competence. Moreover only little work has been done so far to build computational models for handling the sort of ’construction grammars’ assumed by this approach. Both challenges inform the research discussed in this paper. 2 Abductive Learning In the constructivist literature, there is often the implicit assumption that grammatical development is the result of observational learning, and several research efforts are going on to operationalise this approach for acquiring grounded lexicons and grammars (see e.g. (Roy, 2001)). The agents are given pairs with a real world situation, as perceived by the sensori-motor apparatus, and a language utterance. For example, an image of a ball is shown and at the same time a stretch of speech containing the word “ball”. Based on a generalisation process that uses statistical pattern recognition algorithms or neural networks, the learner then gradually extracts what is common between the various situations in which the same word or construction is used, thus progressively building a grounded lexicon and grammar of a language. The observational learning approach has had some success in learning words for objects and acquiring simple grammatical constructions, but there seem to be two inherent limitations. 
First, there is the well known poverty of the stimulus argument, widely accepted in linguistics, which says that there is not enough data in the sentences normally available to the language learner to arrive at realistic lexicons and grammars, let alone learn at the same time the categorisations and conceptualisations of the world implied by the language. This has lead many linguists to adopt the nativist position mentioned earlier. The nativist position could in principle be integrated in an observational learning framework by introducing strong biases on the generalisation process, incorporating the constraints of universal grammar, but it has been difficult to identify and operationalise enough of these constraints to do concrete experiments in realistic settings. Second, observational learning assumes that the language system (lexicon and grammar) exists as a fixed static system. However, observations of language in use shows that language users constantly align their language conventions to suit the purposes of specific conversations (ClarkBrennan, 1991). Natural languages therefore appear more to be like complex adaptive systems, similar to living systems that constantly adapt and evolve. This makes it difficult to rely exclusively on statistical generalisation. It does not capture the inherently creative nature of language use. This paper explores an alternative approach, which assumes a much more active stance from language users based on the Peircian notion of abduction (Fann, 1970). The speaker first attempts to use constructions from his existing inventory to express whatever he wants to express. However when that fails or is judged unsatisfactory, the speaker may extend his existing repertoire by inventing new constructions. These new constructions should be such that there is a high chance that the hearer may be able to guess their meaning. The hearer also uses as much as possible constructions stored in his own inventory to make sense of what is being said. But when there are unknown constructions, or the meanings do not fit with the situation being talked about, the hearer makes an educated guess about what the meaning of the unknown language constructions could be, and adds them as new hypotheses to his own inventory. Abductive constructivist learning hence relies crucially on the fact that both agents have sufficient common ground, share the same situation, have established joint attention, and share communicative goals. Both speaker and hearer use themselves as models of the other in order to guess how the other one will interpret a sentence or why the speaker says things in a particular way. Because both speaker and hearer are taking risks making abductive leaps, a third activity is needed, namely induction, not in the sense of statistical generalisation as in observational learning but in the sense of Peirce (Fann, 1970): A hypothesis arrived at by making educated guesses is tested against further data coming from subsequent interactions. When a construction leads to a successful interaction, there is some evidence that this construction is (or could become) part of the set of conventions adopted by the group, and language users should therefore prefer it in the future. When the construction fails, the language user should avoid it if alternatives are available. Implementing these visions of language learning and use is obviously an enormous challenge for computational linguistics. 
It requires not only cognitiveand communicative grounding, but also grammar formalisms and associated parsing and production algorithms which are extremely flexible, both from the viewpoint of getting as far as possible in the interpretation or production process despite missing rules or incompatibilities in the inventories of speaker and hearer, and from the viewpoint of supporting continuous change. 3 Language Games The research reported here uses a methodological approach which is quite common in Artificial Life research but still relatively novel in (computational) linguistics: Rather than attempting to develop simulations that generate natural phenomena directly, as one does when using Newton’s equations to simulate the trajectory of a ball falling from a tower, we engage in computational simulations and robotic experiments that create (new) artificial phenomena that have some of the characteristics of natural phenomena and hence are seen as explaining them. Specifically, we implement artificial agents with components modeling certain cognitive operations (such as introducing a new syntactic category, computing an analogy between two events, etc.), and then see what language phenomena result if these agents exercise these components in embodied situated language games. This way we can investigate very precisely what causal factors may underly certain phenomena and can focus on certain aspects of (grounded) language use without having to face the vast full complexity of real human languages. A survey of work which follows a similar methodology is found in (CangelosiParisi, 2003). The artificial agents used in the experiments driving our research observe real-world scenes through their cameras. The scenes consist of interactions between puppets, as shown in figure 1. These scenes enact common events like movement of people and objects, actions such as push or pull, give or take, etc. In order to achieve the cognitive grounding assumed in constructivistlanguage learning, the scenes are processed by a battery of relatively standard machine vision algorithms that segment objects based on color and movement, track objects in real-time, and compute a stream of lowlevel features indicating which objects are touching, in which direction objects are moving, etc. These low-level features are input to an eventrecognition system that uses an inventory of hierarchical event structures and matches them against the data streaming in from low-level vision, similar to the systems described in (SteelsBaillie, 2003). Figure 1: Scene enacted with puppets so that typical interactions between humans involving agency can be perceived and described. In order to achieve the communicative grounding required for constructivist learning, agents go through scripts in which they play various language games, similar to the setups described in (Steels, 2003). These language games are deliberately quite similar to the kind of scenes and interactions used in a lot of child language research. A language game is a routinised interaction between two agents about a shared situation in the world that involves the exchange of symbols. Agents take turns playing the role of speaker and hearer and give each other feedback about the outcome of the game. In the game further used in this paper, one agent describes to another agent an event that happened in the most recently experienced scene. The game succeeds if the hearer agrees that the event being described occurred in the recent scene. 
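Schematically, one round of the description game can be organized as in the sketch below. This is only an outline written for this summary: the agent methods (perceive, choose_topic, conceptualize, produce, parse, interpret, confirms, update_scores) are placeholders standing in for the embodied vision, conceptualization and grammar machinery described above, not actual interfaces of the system.

import random

def play_description_game(agents, scene):
    # One round of the description game over a shared, recently perceived scene.
    speaker, hearer = random.sample(agents, 2)

    # Both agents perceive the same scene and build their own world models.
    speaker_model = speaker.perceive(scene)
    hearer_model = hearer.perceive(scene)

    # The speaker picks an event as topic and conceptualizes a set of facts
    # that discriminates it from the other events and objects in the context.
    topic = speaker.choose_topic(speaker_model)
    meaning = speaker.conceptualize(topic, speaker_model)

    # Production: apply lexical and grammatical rules; invention may extend
    # the speaker's inventory when coverage fails.
    utterance = speaker.produce(meaning)

    # The hearer parses the utterance (possibly abducing meanings for unknown
    # words) and interprets the result against its own world model.
    semantic_structure = hearer.parse(utterance)
    interpretation = hearer.interpret(semantic_structure, hearer_model)

    # The game succeeds if the hearer's interpretation picks out the event the
    # speaker had in mind; both agents then update their rule scores (induction).
    success = interpretation is not None and speaker.confirms(interpretation, topic)
    speaker.update_scores(success)
    hearer.update_scores(success)
    return success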
4 The Lexicon Visual processing and event recognition results in a world model in the form of a series of facts describing the scene. To play the description game, the speaker selects one event as the topic and then seeks a series of facts which discriminate this event and its objects against the other events and objects in the context. We use a standard predicate calculus-style representation for meanings. A semantic structure consists of a set of units where each unit has a referent, which is the object or event to which the unit draws attention, and a meaning, which is a set of clauses constraining the referent. A semantic structure with one unit is for example written down as follows: [1] unit1 ev1 fall(ev1,true), fall-1(ev1,obj1),ball(obj1) where unit1 is the unit, ev1 the referent, and fall(ev1, true), fall-1(ev1,obj1), ball(obj1) the meaning. The different arguments of an event are decomposed into different predicates. For example, for “John gives a book to Mary”, there would be four clauses: give(ev1,true) for the event itself, give-1(ev1, John), for the one who gives, give-2(ev1,book1), for the object given, and give-3(ev1,Mary), for the recipient. This representation is more flexible and makes it possible to add new components (like the manner of an event) at any time. Syntactic structures mirror semantic structures. They also consist of units and the name of units are shared with semantic structures so that crossreference between them is straightforward. The form aspects of the sentence are represented in a declarative predicate calculus style, using the units as arguments. For example, the following unit is constrained as introducing the string “fall”: [2] unit1 string(unit1, “fall”) The rule formalism we have developed uses ideas from several existing formalisms, particularly unification grammars and is most similar to the Embodied Construction Grammars proposed in (BergenChang, 2003). Lexical rules link parts of semantic structure with parts of syntactic structure. All rules are reversable. When producing, the left side of a rule is matched against the semantic structure and, if there is a match, the right side is unified with the syntactic structure. Conversely when parsing, the right side is matched against the syntactic structure and the left side unified with the semantic structure. Here is a lexical entry for the word ”fall”. [3] ?unit ?ev fall(?ev,?state), fall-1(?ev,?obj) ?unit string(?unit,“fall”) It specifies that a unit whose meaning is fall(?ev,?state), fall-1(?ev,?obj) is expressed with the string “fall”. Variables are written down with a question mark in front. Their scope is restricted to the structure or rule in which they appear and rule application often implies the renaming of certain variables to take care of the scope constraints. Here is a lexical entry for “ball”: [4] ?unit ?obj ball(?obj) ?unit string(?unit,“ball”) Lexicon lookup attempts to find the minimal set of rules that covers the total semantic structure. New units may get introduced (both in the syntactic and semantic structure) if the meaning of a unit is broken down in the lexicon into more than one word. Thus, the original semantic structure in [1] results after the application of the two rules [3] and [4] in the following syntactic and semantic structures: [5] unit1 ev1 fall(ev1,true), fall-1(ev1,obj1) unit2 obj1 ball(obj1) —– unit1 string(unit1, “fall”) unit2 string(unit2, “ball”) If this syntactic structure is rendered, it produces the utterance “fall ball”. No syntax is implied yet. 
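To make the unit-based representation and lexicon lookup more concrete, the sketch below gives toy data structures for units and lexical rules together with a greedy production step. It simplifies heavily under assumptions of ours: matching is done on predicate names only, variable renaming and unification are omitted, and the introduction of new units during lookup (as when going from [1] to [5]) is not modelled.

from dataclasses import dataclass, field

@dataclass
class Unit:
    # One unit of a semantic or syntactic structure (cf. structures [1]-[5]).
    name: str
    referent: str = ""
    meaning: set = field(default_factory=set)  # e.g. {"fall(ev1,true)", "fall-1(ev1,obj1)"}
    form: set = field(default_factory=set)     # e.g. {'string(unit1,"fall")'}

@dataclass
class LexicalRule:
    # A reversible form-meaning pairing; `covers` abbreviates the clause
    # patterns of rules like [3] and [4] to bare predicate names.
    covers: frozenset
    word: str
    score: float = 0.5  # local score used when competing rules exist

LEXICON = [
    LexicalRule(frozenset({"fall", "fall-1"}), "fall"),
    LexicalRule(frozenset({"ball"}), "ball"),
]

def predicates(clauses):
    # Predicate names occurring in clauses such as "fall-1(ev1,obj1)".
    return {c.split("(")[0] for c in clauses}

def produce(semantic_units, lexicon=LEXICON):
    # Greedy sketch of lexicon lookup in production: cover each unit's meaning
    # with the highest-scoring rules; clauses left uncovered would trigger invention.
    syntactic_units = []
    for unit in semantic_units:
        uncovered = predicates(unit.meaning)
        for rule in sorted(lexicon, key=lambda r: r.score, reverse=True):
            if rule.covers <= uncovered:
                syntactic_units.append(
                    Unit(unit.name, form={f'string({unit.name},"{rule.word}")'}))
                uncovered -= rule.covers
    return syntactic_units

# The two semantic units of [5] render as the words "fall" and "ball".
sem = [Unit("unit1", "ev1", {"fall(ev1,true)", "fall-1(ev1,obj1)"}),
       Unit("unit2", "obj1", {"ball(obj1)"})]
print([u.form for u in produce(sem)])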
In the reverse direction, the parser starts with the two units forming the syntactic structure in [5] and application of the rules produces the following semantic structure: [6] unit1 ?ev fall(?ev,?state), fall-1(?ev,?obj) unit2 ?obj1 ball(?obj1) The semantic structure in [6] now contains variables for the referent of each unit and for the various predicate-arguments in their meanings. The interpretation process matches these variables against the facts in the world model. If a single consistent series of bindings can be found, then interpretation is successful. For example, assume that the facts in the meaning part of [1] are in the world model then matching [6] against them results in the bindings: [7] ?ev/ev1, ?state/true, ?obj/obj1, ?obj1/obj1 When the same word or the same meaning is covered by more than one rule, a choice needs to be made. Competing rules may develop if an agent invented a new word for a particular meaning but is later confronted with another word used by somebody else for the same meaning. Every rule has a score and in production and parsing, rules with the highest score are preferred. When the speaker performs lexicon lookup and rules were found to cover the complete semantic structure, no new rules are needed. But when some part is uncovered, the speaker should create a new rule. We have experimented so far with a simple strategy where agents lump together the uncovered facts in a unit and create a brand new word, consisting of a randomly chosen configuration of syllables. For example, if no word for ball(obj1) exists yet to cover the semantic structure in [1], a new rule such as [4] can be constructed by the speaker and subsequently used. If there is no word at all for the whole semantic structure in [1], a single word covering the whole meaning will be created, giving the effect of holophrases. The hearer first attempts to parse as far as possible the given sentence, and then interprets the resulting semantic structure, possibly using joint attention or other means that may help to find the intended interpretation. If this results in a unique set of bindings, the language game is deemed successful. But if there were parts of the sentence which were not covered by any rule, then the hearer can use abductive learning. The first critical step is to guess as well as possible the meaning of the unknown word(s). Thus suppose the sentence is “fall ball”, resulting in the semantic structure: [8] unit1 ?ev fall(?ev,?state), fall-1(?ev,?obj) If this structure is matched, bindings for ?ev and ?obj are found. The agent can now try to find the possible meaning of the unknown word “ball”. He can assume that this meaning must somehow help in the interpretation process. He therefore conceptualises the same way as if he would be the speaker and constructs a distinctive description that draws attention to the event in question, for example by constraining the referent of ?obj with an additional predicate. Although there are usually several ways in which obj1 differs from other objects in the context. There is a considerable chance that the predicate ball is chosen and hence ball(?obj) is abductively inferred as the meaning of “ball” resulting in a rule like [4]. Agents use induction to test whether the rules they created by invention and abduction have been adopted by the group. Every rule has a score, which is local to each agent. 
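The score bookkeeping assumed here can be sketched as follows, reusing the LexicalRule objects from the sketch above. The update amounts and the notion of a competing rule are illustrative choices of ours; the paper states the qualitative dynamics (reward on success, lateral inhibition of competitors, punishment on failure) but not the exact values.

REWARD = 0.1       # illustrative increments, not taken from the paper
PENALTY = 0.1
INHIBITION = 0.1

def competes_with(rule_a, rule_b):
    # Two lexical rules compete if they cover the same meaning or use the same word.
    return rule_a.covers == rule_b.covers or rule_a.word == rule_b.word

def update_scores(inventory, used_rule, success):
    # Adjust an agent's local rule scores after a language game.
    if success:
        used_rule.score = min(1.0, used_rule.score + REWARD)
        for rule in inventory:
            if rule is not used_rule and competes_with(rule, used_rule):
                rule.score = max(0.0, rule.score - INHIBITION)
    else:
        used_rule.score = max(0.0, used_rule.score - PENALTY)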
When the speaker or hearer has success with a particular rule, its score is increased and the score of competing rules is decreased, thus implementing lateral inhibition. When there is a failure, the score of the rule that was used is decreased. Because the agents prefer rules with the highest score, there is a positive feedback in the system. The more a word is used for a particular meaning, the more success that word will have. Figure 2: Winner-take-all effect in words competing for same meaning. The x-axis plots language games and the y-axis the use frequency. Scores rise in all the agents for these words and so progressively we see a winner-take-all effect with one word dominating for the expression of a particular meaning (see figure 2). Many experiments have by now been performed showing that this kind of lateral inhibition dynamics allows a population of agents to negotiate a shared inventory of formmeaning pairs for content words (Steels, 2003). 5 Syntactisation The reader may have noticed that the semantic structure in [6] resulting from parsing the sentence “fall ball”, includes two variables which will both get bound to the same object, namely ?obj, introduced by the predicate fall-1(?ev,?obj), and ?obj1, introduced by the predicate ball(?obj1). We say that in this case ?obj and ?obj1 form an equality. Just from parsing the two words, the hearer cannot know that the object involved in the fall event is the same as the object introduced by ball. He can only figure this out when looking at the scene (i.e. the world model). In fact, if there are several balls in the scene and only one of them is falling, there is no way to know which object is intended. And even if the hearer can figure it out, it is still desirable that the speaker should provide extra-information about equalities to optimise the hearer’s interpretation efforts. A major thesis of the present paper is that resolving equivalences between variables is the main motor for the introduction of syntax. To achieve it, the agents could, as a first approximation, use rules like the following one, to be applied after all lexical rules have been applied: [9] ?unit1 ?ev1 fall-1(?ev1,?obj2) ?unit2 ?obj2 ball(?obj2) ?unit1 string(?unit1, ”fall”) ?unit2 string(?unit2, ”ball”) This rule is formally equivalent to the lexical rules discussed earlier in the sense that it links parts of a semantic structure with parts of a syntactic structure. But now more than one unit is involved. Rule [9] will do the job, because when unifying its right side with the semantic structure (in parsing) ?obj2 unifies with the variables ?obj (supplied by ”fall”) and ?obj1 (supplied by ”ball”) and this forces them to be equivalent. Note that ?unit1 in [9] only contains those parts of the original meaning that involve the variables which need to be made equal. The above rule works but is completely specific to this case. It is an example of the ad hoc ‘verb-island’ constructions reported in an early stage of child language development. Obviously it is much more desirable to have a more general rule, which can be achieved by introducing syntactic and semantic categories. A semantic category (such as agent, perfective, countable, male) is a categorisation of a conceptual relation, which is used to constrain the semantic side of grammatical rules. A syntactic category (such as noun, verb, nominative) is a categorisation of a word or a group of words, which can be used to constrain the syntactic side of grammatical rules. 
A rule using categories can be formed by taking rule [9] above and turning all predicates or content words into semantic or syntactic categories. [10] ?unit1 ?ev1 semcat1(?ev1,?obj2) ?unit2 ?obj2 semcat2(?obj2) ?unit1 syncat1 (?unit1) ?unit2 syncat2(?unit2) The agent then needs to create sem-rules to categorise a predicate as belonging to a semantic category, as in: [11] ?unit1 ?ev1 fall-1(?ev1,?obj2) ?unit1 ?ev1 semcat1(?ev1,?obj1) and syn-rules to categorise a word as belonging to a syntactic category, as in: [12] ?unit1 string(?unit1,”fall”) ?unit1 ?ev1 syncat1(?unit1) These rules have arrows going only in one direction because they are only applied in one way.1 During production, the sem-rules are applied first, then the lexical rules, next the syn-rules and then the gram1Actually if word morphology is integrated, syn-rules need to be bi-directional, but this topic is not discussed further here due to space limitations. matical rules. In parsing, the lexical rules are applied first (in reverse direction), then the syn-rules and the sem-rules, and only then the grammatical rules (in reverse direction). The complete syntactic and semantic structures for example [9] look as follows: [13] unit1 ?ev1 fall(?ev1,?state), fall-1(?ev1,?obj), semcat1(?ev1,?obj) unit2 ?obj1 ball(?obj1), semcat2(?obj1) —– unit1 string(unit1, “fall”), syncat-1(unit1) unit2 string(unit2, “ball”), syncat-2(unit2) The right side of rule [10] matches with this syntactic structure, and if the left side of rule [10] is unified with the semantic structure in [13] the variable ?obj2 unifies with ?obj and ?obj1, thus resolving the equality before semantic interpretation (matching against the world model) starts. How can language users develop such rules? The speaker can detect equalities that need to be resolved by re-entrance: Before rendering a sentence and communicating it to the hearer, the speaker reparses his own sentence and interprets it against the facts in his own world model. If the resulting set of bindings contains variables that are bound to the same object after interpretation, then these equalities are candidates for the construction of a rule and new syntactic and semantic categories are made as a side effect. Note how the speaker uses himself as a model of the hearer and fixes problems that the hearer might otherwise encounter. The hearer can detect equalities by first interpreting the sentence based on the constructions that are already part of his own inventory and the shared situation and prior joint attention. These equalities are candidates for new rules to be constructed by the hearer, and they again involve the introduction of syntactic and semantic categories. Note that syntactic and semantic categories are always local to an agent. The same lateral inhibition dynamics is used for grammatical rules as for lexical rules, and so is also a positive feedback loop leading to a winner-take-all effect for grammatical rules. 6 Hierarchy Natural languages heavily use categories to tighten rule application, but they also introduce additional syntactic markings, such as word order, function words, affixes, morphological variation of word forms, and stress or intonation patterns. These markings are often used to signal to which category certain words belong. They can be easily incorporated in the formalism developed so far by adding additional descriptors of the units in the syntactic structure. 
For example, rule [10] can be expanded with word order constraints and the introduction of a particle “ba”: [14] ?unit1 ?ev1 semcat1(?ev1,?obj2) ?unit2 ?obj2 semcat2(?obj2) ?unit1 syncat1 (?unit1) ?unit2 syncat2(?unit2) ?unit3 string (?unit3, “ba”) ?unit4 syn-subunits ( ?unit1, ?unit2, ?unit3 ), preceeds(?unit2, ?unit3) Note that it was necessary to introduce a superunit ?unit4 in order to express the word order constraints between the ba-particle and the unit that introduces the object. Applying this rule as well as the synrules and sem-rules discussed earlier to the semantic structure in [5] yields: [13] unit1 ev1 fall(ev1,true), fall-1(ev1,obj), semcat1(ev1,obj) unit2 obj1 ball(obj1), semcat2(obj1) —– unit1 string(unit1, “fall”), syncat-1(unit1) unit2 string(unit2, “ball”), syncat-2(unit2) unit3 string(unit3, “ba”) unit4 syn-subunits( unit1,unit2,unit3 ), preceeds(unit2,unit3) When this syntactic structure is rendered, it produces ”fall ball ba”, or equivalently ”ball ba fall”, because only the order between “ball” and “ba” is constrained. Obviously the introduction of additional syntactic features makes the learning of grammatical rules more difficult. Natural languages appear to have meta-level strategies for invention and abduction. For example, a language (like Japanese) tends to use particles for expressing the roles of objects in events and this usage is a strategy both for inventing the expression of a new relation and for guessing what the use of an unknown word in the sentence might be. Another language (like Swahili) uses morphological variations similar to Latin for the same purpose and thus has ended up with a rich set of affixes. In our experiments so far, we have implemented such strategies directly, so that invention and abduction is strongly constrained. We still need to work out a formalism for describing these strategies as metarules and research the associated learning mechanisms. Figure 3: The graph shows the dependency structure as well as the phrase-structure emerging through the application of multiple rules When the same word participates in several rules, we automatically get the emergence of hierarchical structures. For example, suppose that two predicates are used to draw attention to obj1 in [5]: ball and red. If the lexicon has two separate words for each predicate, then the initial semantic structure would introduce different variables so that the meaning after parsing ”fall ball ba red” would be: [15] fall(?ev,?state), fall-1(?ev,?obj), ball (?obj), red(?obj2) To resolve the equality between ?obj and ?obj2, the speaker could create the following rule: [14] ?unit1 ?obj semcat3(?obj) ?unit2 ?obj semcat4(?obj) ?unit1 syncat3(?unit1) ?unit2 syncat4(?unit2) ?unit3 syn-subunits ( unit1,unit2 ), preceeds(unit1,unit2) The predicate ball is declared to belong to semcat4 and the word “ball” to syncat4. The predicate red belongs to semcat3 and the word “red” to syncat3. Rendering the syntactic structure after application of this rule gives the sentence ”fall red ball ba”. A hierarchical structure (figure 3) emerges because “ball” participates in two rules. 7 Re-use Agents obviously should not invent new conventions from scratch every time they need one, but rather use as much as possible existing categorisations and hence existing rules. This simple economy principle quickly leads to the kind of syntagmatic and paradigmatic regularities that one finds in natural grammars. 
For example, if the speaker wants to express that a block is falling, no new semantic or syntactic categories or linking rules are needed but block can simply be declared to belong to semcat4 and “block” to syncat3 and rule [14] applies. Re-use should be driven by analogy. In one of the largest experiments we have carried out so far, agents had a way to compute the similarity between two event-structures by pairing the primitive operations making up an event. For example, a pick-up action is decomposed into: an object moving into the direction of another stationary object, the first object then touching the second object, and next the two objects moving together in (roughly) the opposite direction. A put-down action has similar subevents, except that their ordering is different. The roles of the objects involved (the hand, the object being picked up) are identical and so their grammatical marking could be re-used with very low risk of being misunderstood. When a speaker reuses a grammatical marking for a particular semantic category, this gives a strong hint to the hearer what kind of analogy is expected. By using these invention and abduction strategies, semantic categories like agent or patient gradually emerged in the artificial grammars. Figure 4 visualises the result of this experiment (after 700 games between 2 agents taking turns). The x-axis (randomly) ranks the different predicate-argument relations, the y-axis their markers. Without re-use, every argument would have its own marker. Now several markers (such as “va” or “zu”) cover more than one relation. Figure 4: More compact grammars result from reuse based on semantic analogies. 8 Conclusions The paper reports significant steps towards the computational modeling of a constructivist approach to language development. It has introduced aspects of a construction grammar formalism that is designed to handle the flexibility required for emergent developing grammars. It also proposed that invention, abduction, and induction are necessary and sufficient for language learning. Much more technical work remains to be done but already significant experimental results have been obtained with embodied agents playing situated language games. Most of the open questions concern under what circumstances syntactic and semantic categories should be re-used. Research funded by Sony CSL with additional funding from ESF-OMLL program, EU FET-ECAgents and CNRS OHLL. References Bergen, B.K. and N.C. Chang. 2003. Embodied Construction Grammar in Simulation-Based Language Understanding. TR 02-004, ICSI, Berkeley. Cangelosi, and D. Parisi 2003. Simulating the Evolution of Language. Springer-Verlag, Berlin. Clark, H. and S. Brennan 1991. Grounding in communication. In: Resnick, L. J. Levine and S. Teasley (eds.) Perspectives on Socially Shared Cognition. APA Books, Washington. p. 127-149. Fann, K.T. 1970. Peirce’s Theory of Abduction Martinus Nijhoff, The Hague. Roy, D. 2001. Learning Visually Grounded Words and Syntax of Natural Spoken Language. Evolution of communication 4(1). Pinker, S. 1998. Learnability and Cognition: The acquisition of Argument Structure. The MIT Press, Cambridge Ma. Steels, L. 2003 Evolving grounded communication for robots. Trends in Cognitive Science. Volume 7, Issue 7, July 2003 , pp. 308-312. Steels, L. and J-C. Baillie 2003. Shared Grounding of Event Descriptions by Autonomous Robots. Journal of Robotics and Autonomous Systems 43, 2003, pp. 163-173. Tomasello, M. and P.J. Brooks 1999. 
Early syntactic development: A Construction Grammar approach. In: Barrett, M. (ed.) (1999) The Development of Language. Psychology Press, London. pp. 161-190. | 2004 | 2 |
Learning Noun Phrase Anaphoricity to Improve Coreference Resolution: Issues in Representation and Optimization Vincent Ng Department of Computer Science Cornell University Ithaca, NY 14853-7501 [email protected] Abstract Knowledge of the anaphoricity of a noun phrase might be profitably exploited by a coreference system to bypass the resolution of non-anaphoric noun phrases. Perhaps surprisingly, recent attempts to incorporate automatically acquired anaphoricity information into coreference systems, however, have led to the degradation in resolution performance. This paper examines several key issues in computing and using anaphoricity information to improve learning-based coreference systems. In particular, we present a new corpus-based approach to anaphoricity determination. Experiments on three standard coreference data sets demonstrate the effectiveness of our approach. 1 Introduction Noun phrase coreference resolution, the task of determining which noun phrases (NPs) in a text refer to the same real-world entity, has long been considered an important and difficult problem in natural language processing. Identifying the linguistic constraints on when two NPs can co-refer remains an active area of research in the community. One significant constraint on coreference, the non-anaphoricity constraint, specifies that a nonanaphoric NP cannot be coreferent with any of its preceding NPs in a given text. Given the potential usefulness of knowledge of (non-)anaphoricity for coreference resolution, anaphoricity determination has been studied fairly extensively. One common approach involves the design of heuristic rules to identify specific types of (non-)anaphoric NPs such as pleonastic pronouns (e.g., Paice and Husk (1987), Lappin and Leass (1994), Kennedy and Boguraev (1996), Denber (1998)) and definite descriptions (e.g., Vieira and Poesio (2000)). More recently, the problem has been tackled using unsupervised (e.g., Bean and Riloff (1999)) and supervised (e.g., Evans (2001), Ng and Cardie (2002a)) approaches. Interestingly, existing machine learning approaches to coreference resolution have performed reasonably well without anaphoricity determination (e.g., Soon et al. (2001), Ng and Cardie (2002b), Strube and M¨uller (2003), Yang et al. (2003)). Nevertheless, there is empirical evidence that resolution systems might further be improved with anaphoricity information. For instance, our coreference system mistakenly identifies an antecedent for many non-anaphoric common nouns in the absence of anaphoricity information (Ng and Cardie, 2002a). Our goal in this paper is to improve learningbased coreference systems using automatically computed anaphoricity information. In particular, we examine two important, yet largely unexplored, issues in anaphoricity determination for coreference resolution: representation and optimization. Constraint-based vs. feature-based representation. How should the computed anaphoricity information be used by a coreference system? From a linguistic perspective, knowledge of nonanaphoricity is most naturally represented as “bypassing” constraints, with which the coreference system bypasses the resolution of NPs that are determined to be non-anaphoric. But for learning-based coreference systems, anaphoricity information can be simply and naturally accommodated into the machine learning framework by including it as a feature in the instance representation. Local vs. global optimization. 
Should the anaphoricity determination procedure be developed independently of the coreference system that uses the computed anaphoricity information (local optimization), or should it be optimized with respect to coreference performance (global optimization)? The principle of software modularity calls for local optimization. However, if the primary goal is to improve coreference performance, global optimization appears to be the preferred choice. Existing work on anaphoricity determination for anaphora/coreference resolution can be characterized along these two dimensions. Interestingly, most existing work employs constraintbased, locally-optimized methods (e.g., Mitkov et al. (2002) and Ng and Cardie (2002a)), leaving the remaining three possibilities largely unexplored. In particular, to our knowledge, there have been no attempts to (1) globally optimize an anaphoricity determination procedure for coreference performance and (2) incorporate anaphoricity into coreference systems as a feature. Consequently, as part of our investigation, we propose a new corpus-based method for achieving global optimization and experiment with representing anaphoricity as a feature in the coreference system. In particular, we systematically evaluate all four combinations of local vs. global optimization and constraint-based vs. feature-based representation of anaphoricity information in terms of their effectiveness in improving a learning-based coreference system. Results on three standard coreference data sets are somewhat surprising: our proposed globally-optimized method, when used in conjunction with the constraint-based representation, outperforms not only the commonly-adopted locallyoptimized approach but also its seemingly more natural feature-based counterparts. The rest of the paper is structured as follows. Section 2 focuses on optimization issues, discussing locally- and globally-optimized approaches to anaphoricity determination. In Section 3, we give an overview of the standard machine learning framework for coreference resolution. Sections 4 and 5 present the experimental setup and evaluation results, respectively. We examine the features that are important to anaphoricity determination in Section 6 and conclude in Section 7. 2 The Anaphoricity Determination System: Local vs. Global Optimization In this section, we will show how to build a model of anaphoricity determination. We will first present the standard, locally-optimized approach and then introduce our globally-optimized approach. 2.1 The Locally-Optimized Approach In this approach, the anaphoricity model is simply a classifier that is trained and optimized independently of the coreference system (e.g., Evans (2001), Ng and Cardie (2002a)). Building a classifier for anaphoricity determination. A learning algorithm is used to train a classifier that, given a description of an NP in a document, decides whether or not the NP is anaphoric. Each training instance represents a single NP and consists of a set of features that are potentially useful for distinguishing anaphoric and non-anaphoric NPs. The classification associated with a training instance — one of ANAPHORIC or NOT ANAPHORIC — is derived from coreference chains in the training documents. Specifically, a positive instance is created for each NP that is involved in a coreference chain but is not the head of the chain. A negative instance is created for each of the remaining NPs. Applying the classifier. 
To determine the anaphoricity of an NP in a test document, an instance is created for it as during training and presented to the anaphoricity classifier, which returns a value of ANAPHORIC or NOT ANAPHORIC. 2.2 The Globally-Optimized Approach To achieve global optimization, we construct a parametric anaphoricity model with which we optimize the parameter1 for coreference accuracy on heldout development data. In other words, we tighten the connection between anaphoricity determination and coreference resolution by using the parameter to generate a set of anaphoricity models from which we select the one that yields the best coreference performance on held-out data. Global optimization for a constraint-based representation. We view anaphoricity determination as a problem of determining how conservative an anaphoricity model should be in classifying an NP as (non-)anaphoric. Given a constraint-based representation of anaphoricity information for the coreference system, if the model is too liberal in classifying an NP as non-anaphoric, then many anaphoric NPs will be misclassified, ultimately leading to a deterioration of recall and of the overall performance of the coreference system. On the other hand, if the model is too conservative, then only a small fraction of the truly non-anaphoric NPs will be identified, and so the resulting anaphoricity information may not be effective in improving the coreference system. The challenge then is to determine a “good” degree of conservativeness. As a result, we can design a parametric anaphoricity model whose conservativeness can be adjusted via a conservativeness parameter. To achieve global optimization, we can simply tune this parameter to optimize for coreference performance on held-out development data. Now, to implement this conservativeness-based anaphoricity determination model, we propose two methods, each of which is built upon a different definition of conservativeness. Method 1: Varying the Cost Ratio Our first method exploits a parameter present in many off-the-shelf machine learning algorithms for 1We can introduce multiple parameters for this purpose, but to simply the optimization process, we will only consider single-parameter models in this paper. training a classifier — the cost ratio (cr), which is defined as follows. cr := cost of misclassifying a positive instance cost of misclassifying a negative instance Inspection of this definition shows that cr provides a means of adjusting the relative misclassification penalties placed on training instances of different classes. In particular, the larger cr is, the more conservative the classifier is in classifying an instance as negative (i.e., non-anaphoric). Given this observation, we can naturally define the conservativeness of an anaphoricity classifier as follows. We say that classifier A is more conservative than classifier B in determining an NP as non-anaphoric if A is trained with a higher cost ratio than B. Based on this definition of conservativeness, we can construct an anaphoricity model parameterized by cr. Specifically, the parametric model maps a given value of cr to the anaphoricity classifier trained with this cost ratio. (For the purpose of training anaphoricity classifiers with different values of cr, we use RIPPER (Cohen, 1995), a propositional rule learning algorithm.) It should be easy to see that increasing cr makes the model more conservative in classifying an NP as non-anaphoric. 
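To make Method 1 concrete, the sketch below builds the cr-parameterized family of anaphoricity classifiers. Since RIPPER is not assumed to be available here, a class-weighted logistic regression is used as a stand-in whose positive-class weight plays the role of the cost ratio; the feature matrix is synthetic, not the paper's 37-feature representation.

```python
# A hedged sketch of Method 1 (not the paper's implementation): a class-weighted
# logistic regression stands in for RIPPER, with the positive-class weight
# playing the role of the cost ratio cr. Feature data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_anaphoricity_classifier(X, y, cr=1.0):
    """y: 1 = ANAPHORIC, 0 = NOT ANAPHORIC. A larger cr penalizes misclassified
    anaphoric NPs more, so the classifier emits NOT ANAPHORIC more conservatively."""
    return LogisticRegression(max_iter=1000,
                              class_weight={1: cr, 0: 1.0}).fit(X, y)

# Toy stand-in for the 37 anaphoricity features of each NP instance.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 37))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

# One classifier per candidate cost ratio: a family of models indexed by cr.
models = {cr: train_anaphoricity_classifier(X, y, cr) for cr in (1, 2, 4, 8)}
```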
With this parametric model, we can tune cr to optimize for coreference performance on held-out data. Method 2: Varying the Classification Threshold We can also define conservativeness in terms of the number of NPs classified as non-anaphoric for a given set of NPs. Specifically, given two anaphoricity models A and B and a set of instances I to be classified, we say that A is more conservative than B in determining an NP as non-anaphoric if A classifies fewer instances in I as non-anaphoric than B. Again, this definition is consistent with our intuition regarding conservativeness. We can now design a parametric anaphoricity model based on this definition. First, we train in a supervised fashion a probablistic model of anaphoricity PA(c | i), where i is an instance representing an NP and c is one of the two possible anaphoricity values. (In our experiments, we use maximum entropy classification (MaxEnt) (Berger et al., 1996) to train this probability model.) Then, we can construct a parametric model making binary anaphoricity decisions from PA by introducing a threshold parameter t as follows. Given a specific t (0 ≤t ≤1) and a new instance i, we define an anaphoricity model Mt A in which Mt A(i) = NOT ANAPHORIC if and only if PA(c = NOT ANAPHORIC | i) ≥t. It should be easy to see that increasing t yields progressively more conservative anaphoricity models. Again, t can be tuned using held-out development data. Global optimization for a feature-based representation. We can similarly optimize our proposed conservativeness-based anaphoricity model for coreference performance when anaphoricity information is represented as a feature for the coreference system. Unlike in a constraint-based representation, however, we cannot expect that the recall of the coreference system would increase with the conservativeness parameter. The reason is that we have no control over whether or how the anaphoricity feature is used by the coreference learner. In other words, the behavior of the coreference system is less predictable in comparison to a constraint-based representation. Other than that, the conservativenessbased anaphoricity model is as good to use for global optimization with a feature-based representation as with a constraint-based representation. We conclude this section by pointing out that the locally-optimized approach to anaphoricity determination is indeed a special case of the global one. Unlike the global approach in which the conservativeness parameter values are tuned based on labeled data, the local approach uses “default” parameter values. For instance, when RIPPER is used to train an anaphoricity classifier in the local approach, cr is set to the default value of one. Similarly, when probabilistic anaphoricity decisions generated via a MaxEnt model are converted to binary anaphoricity decisions for subsequent use by a coreference system, t is set to the default value of 0.5. 3 The Machine Learning Framework for Coreference Resolution The coreference system to which our automatically computed anaphoricity information will be applied implements the standard machine learning approach to coreference resolution combining classification and clustering. Below we will give a brief overview of this standard approach. Details can be found in Soon et al. (2001) or Ng and Cardie (2002b). Training an NP coreference classifier. 
After a pre-processing step in which the NPs in a document are automatically identified, a learning algorithm is used to train a classifier that, given a description of two NPs in the document, decides whether they are COREFERENT or NOT COREFERENT. Applying the classifier to create coreference chains. Test texts are processed from left to right. Each NP encountered, NPj, is compared in turn to each preceding NP, NPi. For each pair, a test instance is created as during training and is presented to the learned coreference classifier, which returns a number between 0 and 1 that indicates the likelihood that the two NPs are coreferent. The NP with the highest coreference likelihood value among the preceding NPs with coreference class values above 0.5 is selected as the antecedent of NPj; otherwise, no antecedent is selected for NPj. 4 Experimental Setup In Section 2, we examined how to construct locallyand globally-optimized anaphoricity models. Recall that, for each of these two types of models, the resulting (non-)anaphoricity information can be used by a learning-based coreference system either as hard bypassing constraints or as a feature. Hence, given a coreference system that implements the twostep learning approach shown above, we will be able to evaluate the four different combinations of computing and using anaphoricity information for improving the coreference system described in the introduction. Before presenting evaluation details, we will describe the experimental setup. Coreference system. In all of our experiments, we use our learning-based coreference system (Ng and Cardie, 2002b). Features for anaphoricity determination. In both the locally-optimized and the globallyoptimized approaches to anaphoricity determination described in Section 2, an instance is represented by 37 features that are specifically designed for distinguishing anaphoric and non-anaphoric NPs. Space limitations preclude a description of these features; see Ng and Cardie (2002a) for details. Learning algorithms. For training coreference classifiers and locally-optimized anaphoricity models, we use both RIPPER and MaxEnt as the underlying learning algorithms. However, for training globally-optimized anaphoricity models, RIPPER is always used in conjunction with Method 1 and MaxEnt with Method 2, as described in Section 2.2. In terms of setting learner-specific parameters, we use default values for all RIPPER parameters unless otherwise stated. For MaxEnt, we always train the feature-weight parameters with 100 iterations of the improved iterative scaling algorithm (Della Pietra et al., 1997), using a Gaussian prior to prevent overfitting (Chen and Rosenfeld, 2000). Data sets. We use the Automatic Content Extraction (ACE) Phase II data sets.2 We choose ACE rather than the more widely-used MUC corpus (MUC-6, 1995; MUC-7, 1998) simply because 2See http://www.itl.nist.gov/iad/894.01/ tests/ace for details on the ACE research program. BNEWS NPAPER NWIRE Number of training texts 216 76 130 Number of test texts 51 17 29 Number of training insts (for anaphoricity) 20567 21970 27338 Number of training insts (for coreference) 97036 148850 122168 Table 1: Statistics of the three ACE data sets ACE provides much more labeled data for both training and testing. However, our system was set up to perform coreference resolution according to the MUC rules, which are fairly different from the ACE guidelines in terms of the identification of markables as well as evaluation schemes. 
Since our goal is to evaluate the effect of anaphoricity information on coreference resolution, we make no attempt to modify our system to adhere to the rules specifically designed for ACE. The coreference corpus is composed of three data sets made up of three different news sources: Broadcast News (BNEWS), Newspaper (NPAPER), and Newswire (NWIRE). Statistics collected from these data sets are shown in Table 1. For each data set, we train an anaphoricity classifier and a coreference classifier on the (same) set of training texts and evaluate the coreference system on the test texts. 5 Evaluation In this section, we will compare the effectiveness of four approaches to anaphoricity determination (see the introduction) in improving our baseline coreference system. 5.1 Coreference Without Anaphoricity As mentioned above, we use our coreference system as the baseline system where no explicit anaphoricity determination system is employed. Results using RIPPER and MaxEnt as the underlying learners are shown in rows 1 and 2 of Table 2 where performance is reported in terms of recall, precision, and F-measure using the model-theoretic MUC scoring program (Vilain et al., 1995). With RIPPER, the system achieves an F-measure of 56.3 for BNEWS, 61.8 for NPAPER, and 51.7 for NWIRE. The performance of MaxEnt is comparable to that of RIPPER for the BNEWS and NPAPER data sets but slightly worse for the NWIRE data set. 5.2 Coreference With Anaphoricity The Constraint-Based, Locally-Optimized (CBLO) Approach. As mentioned before, in constraint-based approaches, the automatically computed non-anaphoricity information is used as System Variation BNEWS NPAPER NWIRE Experiments L R P F C R P F C R P F C 1 No RIP 57.4 55.3 56.3 60.0 63.6 61.8 53.2 50.3 51.7 2 Anaphoricity ME 60.9 52.1 56.2 65.4 58.6 61.8 54.9 46.7 50.4 3 ConstraintRIP 42.5 77.2 54.8 cr=1 46.7 79.3 58.8† cr=1 42.1 64.2 50.9 cr=1 4 Based, RIP 45.4 72.8 55.9 t=0.5 52.2 75.9 61.9 t=0.5 36.9 61.5 46.1† t=0.5 5 LocallyME 44.4 76.9 56.3 cr=1 50.1 75.7 60.3 cr=1 43.9 63.0 51.7 cr=1 6 Optimized ME 47.3 70.8 56.7 t=0.5 57.1 70.6 63.1∗ t=0.5 38.1 60.0 46.6† t=0.5 7 FeatureRIP 53.5 61.3 57.2 cr=1 58.7 69.7 63.7∗ cr=1 54.2 46.8 50.2† cr=1 8 Based, RIP 58.3 58.3 58.3∗ t=0.5 63.5 57.0 60.1† t=0.5 63.4 35.3 45.3† t=0.5 9 LocallyME 59.6 51.6 55.3† cr=1 65.6 57.9 61.5 cr=1 55.1 46.2 50.3 cr=1 10 Optimized ME 59.6 51.6 55.3† t=0.5 66.0 57.7 61.6 t=0.5 54.9 46.7 50.4 t=0.5 11 ConstraintRIP 54.5 68.6 60.8∗ cr=5 58.4 68.8 63.2∗ cr=4 50.5 56.7 53.4∗ cr=3 12 Based, RIP 54.1 67.1 59.9∗ t=0.7 56.5 68.1 61.7 t=0.65 50.3 53.8 52.0 t=0.7 13 GloballyME 54.8 62.9 58.5∗ cr=5 62.4 65.6 64.0∗ cr=3 52.2 57.0 54.5∗ cr=3 14 Optimized ME 54.1 60.6 57.2 t=0.7 61.7 64.0 62.8∗ t=0.7 52.0 52.8 52.4∗ t=0.7 15 FeatureRIP 60.8 56.1 58.4∗ cr=8 62.2 61.3 61.7 cr=6 54.6 49.4 51.9 cr=8 16 Based, RIP 59.7 57.0 58.3∗ t=0.6 63.6 59.1 61.3 t=0.8 56.7 48.4 52.3 t=0.7 17 GloballyME 59.9 51.0 55.1† cr=9 66.5 57.1 61.4 cr=1 56.3 46.9 51.2∗ cr=10 18 Optimized ME 59.6 51.6 55.3† t=0.95 65.9 57.5 61.4 t=0.95 56.5 46.7 51.1∗ t=0.5 Table 2: Results of the coreference systems using different approaches to anaphoricity determination on the three ACE test data sets. Information on which Learner (RIPPER or MaxEnt) is used to train the coreference classifier, as well as performance results in terms of Recall, Precision, F-measure and the corresponding Conservativeness parameter are provided whenever appropriate. The strongest result obtained for each data set is boldfaced. 
In addition, results that represent statistically significant gains and drops with respect to the baseline are marked with an asterisk (*) and a dagger (†), respectively. hard bypassing constraints, with which the coreference system attempts to resolve only NPs that the anaphoricity classifier determines to be anaphoric. As a result, we hypothesized that precision would increase in comparison to the baseline system. In addition, we expect that recall will drop owing to the anaphoricity classifier’s misclassifications of truly anaphoric NPs. Consequently, overall performance is not easily predictable: F-measure will improve only if gains in precision can compensate for the loss in recall. Results are shown in rows 3-6 of Table 2. Each row corresponds to a different combination of learners employed in training the coreference and anaphoricity classifiers.3 As mentioned in Section 2.2, locally-optimized approaches are a special case of their globally-optimized counterparts, with the conservativeness parameter set to the default value of one for RIPPER and 0.5 for MaxEnt. In comparison to the baseline, we see large gains in precision at the expense of recall. Moreover, CBLO does not seem to be very effective in improving the baseline, in part due to the dramatic loss in recall. In particular, although we see improvements in F-measure in five of the 12 experiments in this group, only one of them is statistically significant.4 3Bear in mind that different learners employed in training anaphoricity classifiers correspond to different parametric methods. For ease of exposition, however, we will refer to the method simply by the learner it employs. 4The Approximate Randomization test described in Noreen Worse still, F-measure drops significantly in three cases. The Feature-Based, Locally-Optimized (FBLO) Approach. The experimental setting employed here is essentially the same as that in CBLO, except that anaphoricity information is incorporated into the coreference system as a feature rather than as constraints. Specifically, each training/test coreference instance i(NPi,NPj) (created from NPj and a preceding NP NPi) is augmented with a feature whose value is the anaphoricity of NPj as computed by the anaphoricity classifier. In general, we hypothesized that FBLO would perform better than the baseline: the addition of an anaphoricity feature to the coreference instance representation might give the learner additional flexibility in creating coreference rules. Similarly, we expect FBLO to outperform its constraint-based counterpart: since anaphoricity information is represented as a feature in FBLO, the coreference learner can incorporate the information selectively rather than as universal hard constraints. Results using the FBLO approach are shown in rows 7-10 of Table 2. Somewhat unexpectedly, this approach is not effective in improving the baseline: F-measure increases significantly in only two of the 12 cases. Perhaps more surprisingly, we see significant drops in F-measure in five cases. To get a bet(1989) is applied to determine if the differences in the Fmeasure scores between two coreference systems are statistically significant at the 0.05 level or higher. 
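For readers unfamiliar with the significance test named in the footnote, the following is a generic, hedged sketch of an approximate randomization test. For simplicity its statistic is a difference of mean per-document scores; the paper instead recomputes the corpus-level MUC F-measure on every shuffle, so this illustrates the idea rather than reproducing the authors' exact procedure.

```python
# Generic sketch of an approximate randomization test (Noreen, 1989).
import random

def approximate_randomization(scores_a, scores_b, trials=9999, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(scores_a) - sum(scores_b))
    extreme = 0
    for _ in range(trials):
        diff = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:       # randomly swap the systems' outputs
                a, b = b, a
            diff += a - b
        if abs(diff) >= observed:
            extreme += 1
    return (extreme + 1) / (trials + 1)  # estimated p-value

p = approximate_randomization([0.61, 0.58, 0.64, 0.55],
                              [0.57, 0.56, 0.60, 0.54])
print(f"significant at 0.05: {p < 0.05} (p = {p:.3f})")
```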
System Variation BNEWS (dev) NPAPER (dev) NWIRE (dev) Experiments L R P F C R P F C R P F C 1 ConstraintRIP 62.6 76.3 68.8 cr=5 65.5 73.0 69.1 cr=4 56.1 58.9 57.4 cr=3 2 Based, RIP 62.5 75.5 68.4 t=0.7 63.0 71.7 67.1 t=0.65 56.7 54.8 55.7 t=0.7 3 GloballyME 63.1 71.3 66.9 cr=5 66.2 71.8 68.9 cr=3 57.9 59.7 58.8 cr=3 4 Optimized ME 62.9 70.8 66.6 t=0.7 61.4 74.3 67.3 t=0.65 58.4 55.3 56.8 t=0.7 Table 3: Results of the coreference systems using a constraint-based, globally-optimized approach to anaphoricity determination on the three ACE held-out development data sets. Information on which Learner (RIPPER or MaxEnt) is used to train the coreference classifier as well as performance results in terms of Recall, Precision, F-measure and the corresponding Conservativeness parameter are provided whenever appropriate. The strongest result obtained for each data set is boldfaced. ter idea of why F-measure decreases, we examine the relevant coreference classifiers induced by RIPPER. We find that the anaphoricity feature is used in a somewhat counter-intuitive manner: some of the induced rules posit a coreference relationship between NPj and a preceding NP NPi even though NPj is classified as non-anaphoric. These results seem to suggest that the anaphoricity feature is an irrelevant feature from a machine learning point of view. In comparison to CBLO, the results are mixed: there does not appear to be a clear winner in any of the three data sets. Nevertheless, it is worth noticing that the CBLO systems can be characterized as having high precision/low recall, whereas the reverse is true for FBLO systems in general. As a result, even though CBLO and FBLO systems achieve similar performance, the former is the preferred choice in applications where precision is critical. Finally, we note that there are other ways to encode anaphoricity information in a coreference system. For instance, it is possible to represent anaphoricity as a real-valued feature indicating the probability of an NP being anaphoric rather than as a binary-valued feature. Future work will examine alternative encodings of anaphoricity. The Constraint-Based, Globally-Optimized (CBGO) Approach. As discussed above, we optimize the anaphoricity model for coreference performance via the conservativeness parameter. In particular, we will use this parameter to maximize the F-measure score for a particular data set and learner combination using held-out development data. To ensure a fair comparison between global and local approaches, we do not rely on additional development data in the former; instead we use 2 3 of the original training texts for acquiring the anaphoricity and coreference classifiers and the remaining 1 3 for development for each of the data sets. As far as parameter tuning is concerned, we tested values of 1, 2, . . . , 10 as well as their reciprocals for cr and 0.05, 0.1, . . . , 1.0 for t. In general, we hypothesized that CBGO would outperform both the baseline and the locallyoptimized approaches, since coreference performance is being explicitly maximized. Results using CBGO, which are shown in rows 11-14 of Table 2, are largely consistent with our hypothesis. The best results on all of the three data sets are achieved using this approach. In comparison to the baseline, we see statistically significant gains in F-measure in nine of the 12 experiments in this group. Improvements stem primarily from large gains in precision accompanied by smaller drops in recall. 
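The global optimization just described reduces to a simple search loop over the conservativeness parameter; a minimal sketch is shown below with the parameter grids reported above. The two callables are hypothetical hooks into the surrounding coreference system (anaphoricity training and MUC-scored evaluation), not functions defined in the paper.

```python
# Sketch of the held-out tuning loop for the conservativeness parameter.
def tune_conservativeness(param_grid, train_model, coref_f_on_dev):
    """train_model(value) -> anaphoricity model at that conservativeness;
    coref_f_on_dev(model) -> coreference F-measure on development texts.
    Both callables are assumed hooks into the actual system."""
    best_value, best_f = None, float("-inf")
    for value in param_grid:
        f = coref_f_on_dev(train_model(value))
        if f > best_f:
            best_value, best_f = value, f
    return best_value, best_f

# The grids reported above: cr in 1..10 plus their reciprocals, t in 0.05..1.0.
cr_grid = list(range(1, 11)) + [1 / i for i in range(2, 11)]
t_grid = [round(0.05 * i, 2) for i in range(1, 21)]
```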
Perhaps more importantly, CBGO never produces results that are significantly worse than those of the baseline systems on these data sets, unlike CBLO and FBLO. Overall, these results suggest that CBGO is more robust than the locally-optimized approaches in improving the baseline system. As can be seen, CBGO fails to produce statistically significant improvements over the baseline in three cases. The relatively poorer performance in these cases can potentially be attributed to the underlying learner combination. Fortunately, we can use the development data not only for parameter tuning but also in predicting the best learner combination. Table 3 shows the performance of the coreference system using CBGO on the development data, along with the value of the conservativeness parameter used to achieve the results in each case. Using the notation Learner1/Learner2 to denote the fact that Learner1 and Learner2 are used to train the underlying coreference classifier and anaphoricity classifier respectively, we can see that the RIPPER/RIPPER combination achieves the best performance on the BNEWS development set, whereas MaxEnt/RIPPER works best for the other two. Hence, if we rely on the development data to pick the best learner combination for use in testing, the resulting coreference system will outperform the baseline in all three data sets and yield the bestperforming system on all but the NPAPER data sets, achieving an F-measure of 60.8 (row 11), 63.2 (row 11), and 54.5 (row 13) for the BNEWS, NPAPER, 1 2 3 4 5 6 7 8 9 10 50 55 60 65 70 75 80 85 cr Score Recall Precision F−measure Figure 1: Effect of cr on the performance of the coreference system for the NPAPER development data using RIPPER/RIPPER and NWIRE data sets, respectively. Moreover, the high correlation between the relative coreference performance achieved by different learner combinations on the development data and that on the test data also reflects the stability of CBGO. In comparison to the locally-optimized approaches, CBGO achieves better F-measure scores in almost all cases. Moreover, the learned conservativeness parameter in CBGO always has a larger value than the default value employed by CBLO. This provides empirical evidence that the CBLO anaphoricity classifiers are too liberal in classifying NPs as non-anaphoric. To examine the effect of the conservativeness parameter on the performance of the coreference system, we plot in Figure 1 the recall, precision, Fmeasure curves against cr for the NPAPER development data using the RIPPER/RIPPER learner combination. As cr increases, recall rises and precision drops. This should not be surprising, since (1) increasing cr causes fewer anaphoric NPs to be misclassified and allows the coreference system to find a correct antecedent for some of them, and (2) decreasing cr causes more truly non-anaphoric NPs to be correctly classified and prevents the coreference system from attempting to resolve them. The best F-measure in this case is achieved when cr=4. The Feature-Based, Globally-Optimized (FBGO) Approach. The experimental setting employed here is essentially the same as that in the CBGO setting, except that anaphoricity information is incorporated into the coreference system as a feature rather than as constraints. Specifically, each training/test instance i(NPi,NPj) is augmented with a feature whose value is the computed anaphoricity of NPj. 
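A minimal sketch of that augmentation step is shown below; the dict-based instance layout and the anaphoricity callable are assumptions for illustration, not the system's actual data structures.

```python
# Sketch of the feature-based representation: every coreference instance for a
# pair (NP_i, NP_j) gains one feature holding the computed anaphoricity of NP_j.
def augment_with_anaphoricity(instances, anaphoricity_of):
    """instances: dicts with a 'features' map and an 'np_j' handle (assumed layout)."""
    augmented = []
    for inst in instances:
        features = dict(inst["features"])
        features["np_j_is_anaphoric"] = anaphoricity_of(inst["np_j"])
        augmented.append({**inst, "features": features})
    return augmented
```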
The development data is used to select the anaphoricity model (and hence the parameter value) that yields the best-performing coreference system. This model is then used to compute the anaphoricity value for the test instances. As mentioned before, we use the same parametric anaphoricity model as in CBGO for achieving global optimization. Since the parametric model is designed with a constraint-based representation in mind, we hypothesized that global optimization in this case would not be as effective as in CBGO. Nevertheless, we expect that this approach is still more effective in improving the baseline than the locally-optimized approaches. Results using FBGO are shown in rows 15-18 of Table 2. As expected, FBGO is less effective than CBGO in improving the baseline, underperforming its constraint-based counterpart in 11 of the 12 cases. In fact, FBGO is able to significantly improve the corresponding baseline in only four cases. Somewhat surprisingly, FBGO is by no means superior to the locally-optimized approaches with respect to improving the baseline. These results seem to suggest that global optimization is effective only if we have a “good” parameterization that is able to take into account how anaphoricity information will be exploited by the coreference system. Nevertheless, as discussed before, effective global optimization with a feature-based representation is not easy to accomplish. 6 Analyzing Anaphoricity Features So far we have focused on computing and using anaphoricity information to improve the performance of a coreference system. In this section, we examine which anaphoricity features are important in order to gain linguistic insights into the problem. Specifically, we measure the informativeness of a feature by computing its information gain (see p.22 of Quinlan (1993) for details) on our three data sets for training anaphoricity classifiers. Overall, the most informative features are HEAD MATCH (whether the NP under consideration has the same head as one of its preceding NPs), STR MATCH (whether the NP under consideration is the same string as one of its preceding NPs), and PRONOUN (whether the NP under consideration is a pronoun). The high discriminating power of HEAD MATCH and STR MATCH is a probable consequence of the fact that an NP is likely to be anaphoric if there is a lexically similar noun phrase preceding it in the text. The informativeness of PRONOUN can also be expected: most pronominal NPs are anaphoric. Features that determine whether the NP under consideration is a PROPER NOUN, whether it is a BARE SINGULAR or a BARE PLURAL, and whether it begins with an “a” or a “the” (ARTICLE) are also highly informative. This is consistent with our intuition that the (in)definiteness of an NP plays an important role in determining its anaphoricity. 7 Conclusions We have examined two largely unexplored issues in computing and using anaphoricity information for improving learning-based coreference systems: representation and optimization. In particular, we have systematically evaluated all four combinations of local vs. global optimization and constraint-based vs. feature-based representation of anaphoricity information in terms of their effectiveness in improving a learning-based coreference system. 
Extensive experiments on the three ACE coreference data sets using a symbolic learner (RIPPER) and a statistical learner (MaxEnt) for training coreference classifiers demonstrate the effectiveness of the constraint-based, globally-optimized approach to anaphoricity determination, which employs our conservativeness-based anaphoricity model. Not only does this approach improve a “no anaphoricity” baseline coreference system, it is more effective than the commonly-adopted locally-optimized approach without relying on additional labeled data. Acknowledgments We thank Regina Barzilay, Claire Cardie, Bo Pang, and the anonymous reviewers for their invaluable comments on earlier drafts of the paper. This work was supported in part by NSF Grant IIS–0208028. References David Bean and Ellen Riloff. 1999. Corpus-based identification of non-anaphoric noun phrases. In Proceedings of the ACL, pages 373–380. Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. Stanley Chen and Ronald Rosenfeld. 2000. A survey of smoothing techniques for ME models. IEEE Transactions on Speech on Audio Processing, 8(1):37–50. William Cohen. 1995. Fast effective rule induction. In Proceedings of ICML. Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. 1997. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380–393. Michel Denber. 1998. Automatic resolution of anaphora in English. Technical report, Eastman Kodak Co. Richard Evans. 2001. Applying machine learning toward an automatic classification of it. Literary and Linguistic Computing, 16(1):45–57. Christopher Kennedy and Branimir Boguraev. 1996. Anaphor for everyone: Pronominal anaphora resolution without a parser. In Proceedings of COLING, pages 113–118. Shalom Lappin and Herbert Leass. 1994. An algorithm for pronominal anaphora resolution. Computational Linguistics, 20(4):535–562. Ruslan Mitkov, Richard Evans, and Constantin Orasan. 2002. A new, fully automatic version of Mitkov’s knowledge-poor pronoun resolution method. In Al. Gelbukh, editor, Computational Linguistics and Intelligent Text Processing, pages 169–187. MUC-6. 1995. Proceedings of the Sixth Message Understanding Conference (MUC-6). MUC-7. 1998. Proceedings of the Seventh Message Understanding Conference (MUC-7). Vincent Ng and Claire Cardie. 2002a. Identifying anaphoric and non-anaphoricnoun phrases to improve coreference resolution. In Proceedings of COLING, pages 730–736. Vincent Ng and Claire Cardie. 2002b. Improving machine learning approaches to coreference resolution. In Proceedings of the ACL, pages 104–111. Eric W. Noreen. 1989. Computer Intensive Methods for Testing Hypothesis: An Introduction. John Wiley & Sons. Chris Paice and Gareth Husk. 1987. Towards the automatic recognition of anaphoric features in English text: the impersonal pronoun ’it’. Computer Speech and Language, 2. J. Ross Quinlan. 1993. C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann. Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. Michael Strube and Christoph M¨uller. 2003. A machine learning approach to pronoun resolution in spoken dialogue. In Proceedings of the ACL, pages 168–175. Renata Vieira and Massimo Poesio. 2000. An empirically-based system for processing definite descriptions. 
Computational Linguistics, 26(4):539– 593. Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A modeltheoretic coreference scoring scheme. In Proceedings of the Sixth Message Understanding Conference (MUC-6), pages 45–52. Xiaofeng Yang, Guodong Zhou, Jian Su, and Chew Lim Tan. 2003. Coreference resolution using competitive learning approach. In Proceedings of the ACL, pages 176–183. | 2004 | 20 |
A Joint Source-Channel Model for Machine Transliteration Li Haizhou, Zhang Min, Su Jian Institute for Infocomm Research 21 Heng Mui Keng Terrace, Singapore 119613 {hli,sujian,mzhang}@i2r.a-star.edu.sg Abstract Most foreign names are transliterated into Chinese, Japanese or Korean with approximate phonetic equivalents. The transliteration is usually achieved through intermediate phonemic mapping. This paper presents a new framework that allows direct orthographical mapping (DOM) between two different languages, through a joint source-channel model, also called n-gram transliteration model (TM). With the n-gram TM model, we automate the orthographic alignment process to derive the aligned transliteration units from a bilingual dictionary. The n-gram TM under the DOM framework greatly reduces system development effort and provides a quantum leap in improvement in transliteration accuracy over that of other state-of-the-art machine learning algorithms. The modeling framework is validated through several experiments for English-Chinese language pair. 1 Introduction In applications such as cross-lingual information retrieval (CLIR) and machine translation, there is an increasing need to translate out-of-vocabulary words from one language to another, especially from alphabet language to Chinese, Japanese or Korean. Proper names of English, French, German, Russian, Spanish and Arabic origins constitute a good portion of out-of-vocabulary words. They are translated through transliteration, the method of translating into another language by preserving how words sound in their original languages. For writing foreign names in Chinese, transliteration always follows the original romanization. Therefore, any foreign name will have only one Pinyin (romanization of Chinese) and thus in Chinese characters. In this paper, we focus on automatic Chinese transliteration of foreign alphabet names. Because some alphabet writing systems use various diacritical marks, we find it more practical to write names containing such diacriticals as they are rendered in English. Therefore, we refer all foreign-Chinese transliteration to English-Chinese transliteration, or E2C. Transliterating English names into Chinese is not straightforward. However, recalling the original from Chinese transliteration is even more challenging as the E2C transliteration may have lost some original phonemic evidences. The Chinese-English backward transliteration process is also called back-transliteration, or C2E (Knight & Graehl, 1998). In machine transliteration, the noisy channel model (NCM), based on a phoneme-based approach, has recently received considerable attention (Meng et al. 2001; Jung et al, 2000; Virga & Khudanpur, 2003; Knight & Graehl, 1998). In this paper we discuss the limitations of such an approach and address its problems by firstly proposing a paradigm that allows direct orthographic mapping (DOM), secondly further proposing a joint source-channel model as a realization of DOM. Two other machine learning techniques, NCM and ID3 (Quinlan, 1993) decision tree, also are implemented under DOM as reference to compare with the proposed n-gram TM. This paper is organized as follows: In section 2, we present the transliteration problems. In section 3, a joint source-channel model is formulated. In section 4, several experiments are carried out to study different aspects of proposed algorithm. In section 5, we relate our algorithms to other reported work. Finally, we conclude the study with some discussions. 
2 Problems in transliteration Transliteration is a process that takes a character string in source language as input and generates a character string in the target language as output. The process can be seen conceptually as two levels of decoding: segmentation of the source string into transliteration units; and relating the source language transliteration units with units in the target language, by resolving different combinations of alignments and unit mappings. A unit could be a Chinese character or a monograph, a digraph or a trigraph and so on for English. 2.1 Phoneme-based approach The problems of English-Chinese transliteration have been studied extensively in the paradigm of noisy channel model (NCM). For a given English name E as the observed channel output, one seeks a posteriori the most likely Chinese transliteration C that maximizes P(C|E). Applying Bayes rule, it means to find C to maximize P(E,C) = P(E | C)*P(C) (1) with equivalent effect. To do so, we are left with modeling two probability distributions: P(E|C), the probability of transliterating C to E through a noisy channel, which is also called transformation rules, and P(C), the probability distribution of source, which reflects what is considered good Chinese transliteration in general. Likewise, in C2E backtransliteration, we would find E that maximizes P(E,C) = P(C | E)*P(E) (2) for a given Chinese name. In eqn (1) and (2), P(C) and P(E) are usually estimated using n-gram language models (Jelinek, 1991). Inspired by research results of grapheme-tophoneme research in speech synthesis literature, many have suggested phoneme-based approaches to resolving P(E|C) and P(C|E), which approximates the probability distribution by introducing a phonemic representation. In this way, we convert the names in the source language, say E, into an intermediate phonemic representation P, and then convert the phonemic representation into the target language, say Chinese C. In E2C transliteration, the phoneme-based approach can be formulated as P(C|E) = P(C|P)P(P|E) and conversely we have P(E|C) = P(E|P)P(P|C) for C2E back-transliteration. Several phoneme-based techniques have been proposed in the recent past for machine transliteration using transformation-based learning algorithm (Meng et al. 2001; Jung et al, 2000; Virga & Khudanpur, 2003) and using finite state transducer that implements transformation rules (Knight & Graehl, 1998), where both handcrafted and data-driven transformation rules have been studied. However, the phoneme-based approaches are limited by two major constraints, which could compromise transliterating precision, especially in English-Chinese transliteration: 1) Latin-alphabet foreign names are of different origins. For instance, French has different phonic rules from those of English. The phoneme-based approach requires derivation of proper phonemic representation for names of different origins. One may need to prepare multiple language-dependent grapheme-to-phoneme (G2P) conversion systems accordingly, and that is not easy to achieve (The Onomastica Consortium, 1995). For example, /Lafontant/ is transliterated into 拉丰唐(La-FengTang) while /Constant/ becomes 康斯坦特(KangSi-Tan-Te) ,where syllable /-tant/ in the two names are transliterated differently depending on the names’ language of origin. 
2) Suppose that language-dependent grapheme-to-phoneme systems are attainable; obtaining Chinese orthography will still need two further steps: a) conversion from the generic phonemic representation to Chinese Pinyin; b) conversion from Pinyin to Chinese characters. Each step introduces a level of imprecision. Virga and Khudanpur (2003) reported an 8.3% absolute accuracy drop when converting from Pinyin to Chinese characters, due to homophone confusion. Unlike Japanese katakana or the Korean alphabet, Chinese characters are more ideographic than phonetic. To arrive at an appropriate Chinese transliteration, one cannot rely solely on the intermediate phonemic representation.

2.2 Useful orthographic context

To illustrate the importance of contextual information in transliteration, let's take the name /Minahan/ as an example; the correct segmentation should be /Mi-na-han/, to be transliterated as 米-纳-汉 (Pinyin: Mi-Na-Han).

English  /mi-  -na-  -han/
Chinese   米    纳    汉
Pinyin    Mi    Na    Han

However, a possible segmentation /Min-ah-an/ could lead to an undesirable syllabication of 明-阿-安 (Pinyin: Min-A-An).

English  /min-  -ah-  -an/
Chinese   明     阿    安
Pinyin    Min    A     An

According to the transliteration guidelines, a wise segmentation can be reached only after exploring the combination of the left and right context of transliteration units. From the computational point of view, this strongly suggests using a contextual n-gram as the knowledge base for the alignment decision. Another example will show how one-to-many mappings can be resolved by context. Let's take another name, /Smith/, as an example. Although we can arrive at an obvious segmentation /s-mi-th/, there are three Chinese characters for each of /s-/, /-mi-/ and /-th/. Furthermore, /s-/ and /-th/ correspond to overlapping characters as well, as shown next.

English    /s-  -mi-  -th/
Chinese 1   史   米    斯
Chinese 2   斯   密    史
Chinese 3   思   麦    瑟

A human translator will use transliteration rules between the English syllable sequence and the Chinese character sequence to obtain the best mapping 史-密-斯, as indicated in italics in the table above. To address these issues in transliteration, we propose a direct orthographic mapping (DOM) framework through a joint source-channel model that fully explores orthographic contextual information, aiming at alleviating the imprecision introduced by the multiple-step phoneme-based approach.

3 Joint source-channel model

In view of the close coupling of the source and target transliteration units, we propose to estimate P(E,C) by a joint source-channel model, or n-gram transliteration model (TM). For K aligned transliteration units, we have

P(E,C) = P(e_1, e_2, ..., e_K, c_1, c_2, ..., c_K)
       = P(<e,c>_1, <e,c>_2, ..., <e,c>_K)                          (3)
       = ∏_{k=1}^{K} P(<e,c>_k | <e,c>_1^{k-1})

which provides an alternative to the phoneme-based approach for resolving eqn. (1) and (2) by eliminating the intermediate phonemic representation. Unlike the noisy-channel model, the joint source-channel model does not try to capture how source names can be mapped to target names, but rather how source and target names can be generated simultaneously. In other words, we estimate a joint probability model that can be easily marginalized in order to yield conditional probability models for both transliteration and back-transliteration. Suppose that we have an English name α = x_1 x_2 ... x_m and a Chinese transliteration β = y_1 y_2 ... y_n, where the x_i are letters and the y_j are Chinese characters. Oftentimes, the number of letters is different from the number of Chinese characters.
A Chinese character may correspond to a letter substring in English or vice versa. That is, the letter sequence x_1 x_2 x_3 ... x_i x_{i+1} ... x_m is segmented and aligned with the character sequence y_1 y_2 ... y_j ... y_n, where there exists an alignment γ with <e,c>_1 = <x_1, y_1>, <e,c>_2 = <x_2 x_3, y_2>, ..., and <e,c>_K = <x_m, y_n>. A transliteration unit correspondence <e,c> is called a transliteration pair. Then, the E2C transliteration can be formulated as

β = argmax_{β,γ} P(α, β, γ)                                          (4)

and similarly the C2E back-transliteration as

α = argmax_{α,γ} P(α, β, γ)                                          (5)

An n-gram transliteration model is defined as the conditional probability, or transliteration probability, of a transliteration pair <e,c>_k depending on its immediate n predecessor pairs:

P(E,C) = P(α, β, γ) = ∏_{k=1}^{K} P(<e,c>_k | <e,c>_{k-n+1}^{k-1})   (6)

3.1 Transliteration alignment

A bilingual dictionary contains entries mapping English names to their respective Chinese transliterations. Like many other solutions in computational linguistics, it is possible to automatically analyze the bilingual dictionary to acquire knowledge in order to map new English names to Chinese and vice versa. Based on the transliteration formulation above, a transliteration model can be built with transliteration unit's n-gram statistics. To obtain the statistics, the bilingual dictionary needs to be aligned. The maximum likelihood approach, through the EM algorithm (Dempster, 1977), allows us to infer such an alignment easily as described in the table below. The aligning process is different from that of transliteration given in eqn. (4) or (5) in that, here we have fixed bilingual entries, α and β. The aligning process is just to find the alignment segmentation γ between the two strings that maximizes the joint probability:

γ = argmax_γ P(α, β, γ)                                              (7)

A set of transliteration pairs that is derived from the aligning process forms a transliteration table, which is in turn used in the transliteration decoding. As the decoder is bounded by this table, it is important to make sure that the training database covers as much as possible the potential transliteration patterns. Here are some examples of resulting alignment pairs:

斯|s 尔|l 特|t 德|d 克|k 布|b 格|g 尔|r 尔|ll 克|c 罗|ro 里|ri 曼|man 姆|m 普|p 德|de 拉|ra 尔|le 阿|a 伯|ber 拉|la 森|son 顿|ton 特|tt 雷|re 科|co 奥|o 埃|e 马|ma 利|ley 利|li 默|mer

Knowing that the training data set will never be sufficient for every n-gram unit, different smoothing approaches are applied, for example, by using backoff or class-based models, which can be found in statistical language modeling literatures (Jelinek, 1991).

3.2 DOM: n-gram TM vs. NCM

Although in the literature, most noisy channel models (NCM) are studied under the phoneme-based paradigm for machine transliteration, NCM can also be realized under direct orthographic mapping (DOM). Next, let's look into a bigram case to see what n-gram TM and NCM present to us. For E2C conversion, re-writing eqn (1) and eqn (6), we have

P(α, β, γ) ≈ ∏_{k=1}^{K} P(e_k | c_k) P(c_k | c_{k-1})               (8)

P(α, β, γ) ≈ ∏_{k=1}^{K} P(<e,c>_k | <e,c>_{k-1})                    (9)

The formulation of eqn. (8) could be interpreted as a hidden Markov model with Chinese characters as its hidden states and English transliteration units as the observations (Rabiner, 1989). The number of parameters in the bigram TM is potentially T^2, while in the noisy channel model (NCM) it's T + C^2, where T is the number of transliteration pairs and C is the number of Chinese transliteration units. In eqn. (9), the current transliteration depends on both the Chinese and English transliteration history, while in eqn. (8) it depends only on the previous Chinese unit. As T^2 >> T + C^2, an n-gram TM gives a finer description than that of NCM.
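To make the contrast between eqn. (8) and eqn. (9) concrete, the following sketch scores the same aligned pair sequence under both factorizations; the probability lookups are hypothetical inputs (assumed to come from trained, smoothed models), not part of the paper's system.

```python
# Scoring an aligned transliteration pair sequence under the two bigram
# factorizations. `pairs` includes a boundary pair ("<s>", "<s>") at position 0.
import math

def ncm_bigram_score(pairs, p_e_given_c, p_c_bigram):
    """Eqn. (8): channel P(e_k | c_k) times Chinese language model P(c_k | c_{k-1})."""
    logp = 0.0
    for (_, c_prev), (e, c) in zip(pairs, pairs[1:]):
        logp += math.log(p_e_given_c[(e, c)]) + math.log(p_c_bigram[(c, c_prev)])
    return logp

def tm_bigram_score(pairs, p_pair_bigram):
    """Eqn. (9): each pair is conditioned on the whole previous (English, Chinese) pair."""
    return sum(math.log(p_pair_bigram[(cur, prev)])
               for prev, cur in zip(pairs, pairs[1:]))
```

The only difference is the conditioning event, which is exactly why the TM has up to T^2 parameters against the NCM's T + C^2.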
The actual size of the models largely depends on the availability of training data. In Table 1, one can get an idea of how they unfold in a real scenario. With adequately sufficient training data, n-gram TM is expected to outperform NCM in the decoding. A perplexity study in section 4.1 will look at the model from another perspective.

4 The experiments

We use a database from the bilingual dictionary "Chinese Transliteration of Foreign Personal Names", which was edited by Xinhua News Agency and was considered the de facto standard of personal name transliteration in today's Chinese press. The database includes a collection of 37,694 unique English entries and their official Chinese transliterations. The listing includes personal names of English, French, Spanish, German, Arabic, Russian and many other origins. The database is initially randomly distributed into 13 subsets. In the open test, one subset is withheld for testing while the remaining 12 subsets are used as the training materials. This process is repeated 13 times to yield an average result, which is called the 13-fold open test. After experiments, we found that each of the 13-fold open tests gave consistent error rates with less than 1% deviation. Therefore, for simplicity, we randomly select one of the 13 subsets, which consists of 2,896 entries, as the standard open test set to report results. In the close test, all data entries are used for training and testing.

1 demo at http://nlp.i2r.a-star.edu.sg/demo.htm

The Expectation-Maximization algorithm
1. Bootstrap an initial random alignment
2. Expectation: Update n-gram statistics to estimate the probability distribution
3. Maximization: Apply the n-gram TM to obtain a new alignment
4. Go to step 2 until the alignment converges
5. Derive a list of transliteration units from the final alignment as the transliteration table

4.1 Modeling

The alignment of transliteration units is done fully automatically along with the n-gram TM training process. To model the boundary effects, we introduce two extra units <s> and </s> for the start and end of each name in both languages. The EM iteration converges at the 8th round, when no further alignment changes are reported. Next are some statistics as a result of the model training:

# close set bilingual entries (full data)      37,694
# unique Chinese transliteration (close)       28,632
# training entries for open test               34,777
# test entries for open test                    2,896
# unique transliteration pairs T                5,640
# total transliteration pairs W_T             119,364
# unique English units E                        3,683
# unique Chinese units C                          374
# bigram TM P(<e,c>_k | <e,c>_{k-1})           38,655
# NCM Chinese bigram P(c_k | c_{k-1})          12,742
Table 1. Modeling statistics

The most common metric for evaluating an n-gram model is the probability that the model assigns to test data, or perplexity (Jelinek, 1991). For a test set W composed of V names, where each name has been aligned into a sequence of transliteration pair tokens, we can calculate the probability of the test set

p(W) = ∏_{v=1}^{V} P(α_v, β_v, γ_v)

by applying the n-gram models to the token sequence. The cross-entropy H_p(W) of a model on data W is defined as

H_p(W) = - (1/W_T) log_2 p(W)

where W_T is the total number of aligned transliteration pair tokens in the data W.
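A small sketch of the two quantities just defined is given below; `pair_log2prob(pair, history)` is a hypothetical hook standing in for the trained model's per-pair conditional log2-probability.

```python
# Sketch of computing p(W) and H_p(W) over an aligned test set.
def cross_entropy(aligned_names, pair_log2prob):
    """aligned_names: one list of transliteration pairs per bilingual entry."""
    log2_p_W = 0.0   # accumulates log2 p(W)
    W_T = 0          # total number of aligned transliteration pair tokens
    for pairs in aligned_names:
        for k, pair in enumerate(pairs):
            log2_p_W += pair_log2prob(pair, pairs[:k])  # log2 P(<e,c>_k | history)
            W_T += 1
    return -log2_p_W / W_T   # H_p(W)
```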
The perplexity PP_p(W) of a model is the reciprocal of the average probability assigned by the model to each aligned pair in the test set W:

PP_p(W) = 2^{H_p(W)}

Clearly, lower perplexity means that the model describes the data better. It is easy to understand that the closed test always gives lower perplexity than the open test.

          TM open   NCM open   TM closed   NCM closed
1-gram      670        729        655         716
2-gram      324        512        151         210
3-gram      306        487         68         127
Table 2. Perplexity study of the bilingual database

We report the perplexity in Table 2 on the aligned bilingual dictionary, a database of 119,364 aligned tokens. The NCM perplexity is computed using n-gram equivalents of eqn. (8) for E2C transliteration, while the TM perplexity is based on those of eqn. (9), which applies to both E2C and C2E. It is shown that TM consistently gives lower perplexity than NCM in open and closed tests. We therefore have good reason to expect TM to provide better transliteration results, which we expect to be confirmed later in the experiments. The Viterbi algorithm produces the best sequence by maximizing the overall probability P(α, β, γ). In CLIR or multilingual corpus alignment (Virga and Khudanpur, 2003), N-best results will be very helpful to increase the chances of correct hits. In this paper, we adopted an N-best stack decoder (Schwartz and Chow, 1990) in both the TM and NCM experiments to search for N-best results. The algorithm also allows us to apply a higher-order n-gram such as a trigram in the search.

4.2 E2C transliteration

In this experiment, we conduct both open and closed tests for the TM and NCM models under the DOM paradigm. Results are reported in Table 3 and Table 4.

         open (word)   open (char)   closed (word)   closed (char)
1-gram      45.6%         21.1%          44.8%           20.4%
2-gram      31.6%         13.6%          10.8%            4.7%
3-gram      29.9%         10.8%           1.6%            0.8%
Table 3. E2C error rates for n-gram TM tests

         open (word)   open (char)   closed (word)   closed (char)
1-gram      47.3%         23.9%          46.9%           22.1%
2-gram      39.6%         20.0%          16.4%           10.9%
3-gram      39.0%         18.8%           7.8%            1.9%
Table 4. E2C error rates for n-gram NCM tests

In the word error report, a word is considered correct only if the transliteration exactly matches the reference. The character error rate is the sum of deletion, insertion and substitution errors. Only the top choice in the N-best results is used for error rate reporting. Not surprisingly, one can see that n-gram TM, which benefits from the joint source-channel model coupling both source and target contextual information into the model, is superior to NCM in all the test cases.
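As an illustration of how the transliteration table and the bigram TM combine at decoding time, here is a minimal Viterbi-style 1-best sketch for E2C; the tiny table and the flat probability function are toy stand-ins for the trained model, and the actual system uses an N-best stack decoder rather than this search.

```python
import math

def decode_e2c(name, translit_table, pair_bigram_logp, max_unit_len=7):
    """1-best segmentation and transliteration of `name` under a bigram TM."""
    # best[i] = (score, pair sequence) for the best analysis of name[:i];
    # position 0 starts with the boundary pair.
    best = {0: (0.0, [("<s>", "<s>")])}
    for i in range(1, len(name) + 1):
        for j in range(max(0, i - max_unit_len), i):
            if j not in best:
                continue
            eng_unit = name[j:i]
            for chi_unit in translit_table.get(eng_unit, []):
                prev_score, prev_pairs = best[j]
                score = prev_score + pair_bigram_logp((eng_unit, chi_unit),
                                                      prev_pairs[-1])
                if i not in best or score > best[i][0]:
                    best[i] = (score, prev_pairs + [(eng_unit, chi_unit)])
    return best.get(len(name), (float("-inf"), [("<s>", "<s>")]))[1][1:]

# Toy run (table and probabilities are illustrative only):
table = {"s": ["史", "斯"], "mi": ["密", "米"], "th": ["斯", "史"]}
uniform_logp = lambda cur, prev: math.log(0.5)
print(decode_e2c("smith", table, uniform_logp))
```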
Based on the same alignment tokenization, we estimate the monolingual language perplexity for Chinese and English independently using the n-gram language models ) | ( 1 1 − + − k n k k c c P and ) | ( 1 1 − + − k n k k e e P . Without surprise, Chinese names have much lower perplexity than English names thanks to fewer Chinese units. This contributes to the success of E2C but presents a great challenge to C2E backtransliteration. 1-gram 2-gram 3-gram Chinese 207/206 97/86 79/45 English 710/706 265/152 234/67 Table 5 language perplexity comparison (open/closed test) open (word) open (letter) closed (word) closed (letter) 1 gram 82.3% 28.2% 81% 27.7% 2 gram 63.8% 20.1% 40.4% 12.3% 3 gram 62.1% 19.6% 14.7% 5.0% Table 6. C2E error rate for n-gram TM tests E2C open E2C closed C2E open C2E closed 1-best 29.9% 1.6% 62.1% 14.7% 5-best 8.2% 0.94% 43.3% 5.2% 10-best 5.4% 0.90% 24.6% 4.8% Table 7. N-best word error rates for 3-gram TM tests A back-transliteration is considered correct if it falls within the multiple valid orthographically correct options. Experiment results are reported in Table 6. As expected, C2E error rate is much higher than that of E2C. In this paper, the n-gram TM model serves as the sole knowledge source for transliteration. However, if secondary knowledge, such as a lookup table of valid target transliterations, is available, it can help reduce error rate by discarding invalid transliterations top-down the N choices. In Table 7, the word error rates for both E2C and C2E are reported which imply potential error reduction by secondary knowledge source. The N-best error rates are reduced significantly at 10-best level as reported in Table 7. 5 Discussions It would be interesting to relate n-gram TM to other related framework. 5.1 DOM: n-gram TM vs. ID3 In section 4, one observes that contextual information in both source and target languages is essential. To capture them in the modeling, one could think of decision tree, another popular machine learning approach. Under the DOM framework, here is the first attempt to apply decision tree in E2C and C2E transliteration. With the decision tree, given a fixed size learning vector, we used top-down induction trees to predict the corresponding output. Here we implement ID3 (Quinlan, 1993) algorithm to construct the decision tree which contains questions and return values at terminal nodes. Similar to n-gram TM, for unseen names in open test, ID3 has backoff smoothing, which lies on the default case which returns the most probable value as its best guess for a partial tree path according to the learning set. In the case of E2C transliteration, we form a learning vector of 6 attributes by combining 2 left and 2 right letters around the letter of focus ke and 1 previous Chinese unit 1 − kc . The process is illustrated in Table 8, where both English and Chinese contexts are used to infer a Chinese character. Similarly, 4 attributes combining 1 left, 1 centre and 1 right Chinese character and 1 previous English unit are used for the learning vector in C2E test. An aligned bilingual dictionary is needed to build the decision tree. To minimize the effects from alignment variation, we use the same alignment results from section 4. Two trees are built for two directions, E2C and C2E. The results are compared with those 3-gram TM in Table 9. 2 − ke 1 − ke ke 1 + ke 2 + ke 1 − kc kc _ _ N I C _ > 尼 _ N I C E 尼 > _ N I C E _ _ > 斯 I C E _ _ 斯 > _ Table 8. 
E2C transliteration using ID3 decision tree for transliterating Nice to 尼斯 (尼|NI 斯|CE) open closed ID3 E2C 39.1% 9.7% 3-gram TM E2C 29.9% 1.6% ID3 C2E 63.3% 38.4% 3-gram TM C2E 62.1% 14.7% Table 9. Word error rate ID3 vs. 3-gram TM One observes that n-gram TM consistently outperforms ID3 decision tree in all tests. Three factors could have contributed: 1) English transliteration unit size ranges from 1 letter to 7 letters. The fixed size windows in ID3 obviously find difficult to capture the dynamics of various ranges. n-gram TM seems to have better captured the dynamics of transliteration units; 2) The backoff smoothing of n-gram TM is more effective than that of ID3; 3) Unlike n-gram TM, ID3 requires a separate aligning process for bilingual dictionary. The resulting alignment may not be optimal for tree construction. Nevertheless, ID3 presents another successful implementation of DOM framework. 5.2 DOM vs. phoneme-based approach Due to lack of standard data sets, it is difficult to compare the performance of the n-gram TM to that of other approaches. For reference purpose, we list some reported studies on other databases of E2C transliteration tasks in Table 10. As in the references, only character and Pinyin error rates are reported, we only include our character and Pinyin error rates for easy reference. The reference data are extracted from Table 1 and 3 of (Virga and Khudanpur 2003). As we have not found any C2E result in the literature, only E2C results are compared here. The first 4 setups by Virga et al all adopted the phoneme-based approach in the following steps: 1) English name to English phonemes; 2) English phonemes to Chinese Pinyin; 3) Chinese Pinyin to Chinese characters. It is obvious that the n-gram TM compares favorably to other techniques. n-gram TM presents an error reduction of 74.6%=(42.5-10.8)/42.5% for Pinyin over the best reported result, Huge MT (Big MT) test case, which is noteworthy. The DOM framework shows a quantum leap in performance with n-gram TM being the most successful implementation. The n-gram TM and ID3 under direct orthographic mapping (DOM) paradigm simplify the process and reduce the chances of conversion errors. As a result, n-gram TM and ID3 do not generate Chinese Pinyin as intermediate results. It is noted that in the 374 legitimate Chinese characters for transliteration, character to Pinyin mapping is unique while Pinyin to character mapping could be one to many. Since we have obtained results in character already, we expect less Pinyin error than character error should a character-to-Pinyin mapping be needed. System Trainin g size Test size Pinyin errors Char errors Meng et al 2,233 1,541 52.5% N/A Small MT 2,233 1,541 50.8% 57.4% Big MT 3,625 250 49.1% 57.4% Huge MT (Big MT) 309,01 9 3,122 42.5% N/A 3-gram TM/DOM 34,777 2,896 < 10.8% 10.8% ID3/DOM 34,777 2,896 < 15.6% 15.6% Table 10. Performance reference in recent studies 6 Conclusions In this paper, we propose a new framework (DOM) for transliteration. n-gram TM is a successful realization of DOM paradigm. It generates probabilistic orthographic transformation rules using a data driven approach. By skipping the intermediate phonemic interpretation, the transliteration error rate is reduced significantly. Furthermore, the bilingual aligning process is integrated into the decoding process in n-gram TM, which allows us to achieve a joint optimization of alignment and transliteration automatically. 
Unlike other related work where pre-alignment is needed, the new framework greatly reduces the development effort for machine transliteration systems. Although the framework is implemented on an English-Chinese personal name data set, it applies, without loss of generality, to the transliteration of other language pairs such as English/Korean and English/Japanese. It is noted that place and company names are sometimes translated by a combination of transliteration and meaning; for example, /Victoria-Fall/ becomes 维多利亚瀑布 (Pinyin: Wei Duo Li Ya Pu Bu). As the proposed framework allows direct orthographic mapping, it can also easily be extended to handle such name translation. We expect the proposed model to be further explored in other related areas.

References

Dempster, A. P., N. M. Laird and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, Vol. 39, pp. 1-38.

Helen M. Meng, Wai-Kit Lo, Berlin Chen and Karen Tang. 2001. Generate phonetic cognates to handle named entities in English-Chinese cross-language spoken document retrieval. ASRU 2001.

Jelinek, F. 1991. Self-organized language modeling for speech recognition. In Waibel, A. and Lee, K. F. (eds), Readings in Speech Recognition, Morgan Kaufmann, San Mateo, CA.

K. Knight and J. Graehl. 1998. Machine transliteration. Computational Linguistics, 24(4).

Paola Virga and Sanjeev Khudanpur. 2003. Transliteration of proper names in crosslingual information retrieval. ACL 2003 Workshop MLNER.

Quinlan, J. R. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA.

Rabiner, Lawrence R. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2).

Schwartz, R. and Chow, Y. L. 1990. The N-best algorithm: an efficient and exact procedure for finding the N most likely sentence hypotheses. Proceedings of ICASSP 1990, Albuquerque, pp. 81-84.

Sung Young Jung, Sung Lim Hong and Eunok Paek. 2000. An English to Korean transliteration model of extended Markov window. Proceedings of COLING 2000.

The Onomastica Consortium. 1995. The Onomastica interlanguage pronunciation lexicon. Proceedings of EuroSpeech, Madrid, Spain, Vol. 1, pp. 829-832.

Xinhua News Agency. 1992. Chinese Transliteration of Foreign Personal Names. The Commercial Press.
Collocation Translation Acquisition Using Monolingual Corpora

Yajuan LÜ
Microsoft Research Asia
5F Sigma Center, No. 49 Zhichun Road, Haidian District, Beijing, China, 100080

Ming ZHOU
Microsoft Research Asia
5F Sigma Center, No. 49 Zhichun Road, Haidian District, Beijing, China, 100080

Abstract

Collocation translation is important for machine translation and many other NLP tasks. Unlike previous methods using bilingual parallel corpora, this paper presents a new method for acquiring collocation translations by making use of monolingual corpora and linguistic knowledge. First, dependency triples are extracted from Chinese and English corpora with dependency parsers. Then, a dependency triple translation model is estimated using the EM algorithm based on a dependency correspondence assumption. The generated triple translation model is used to extract collocation translations from two monolingual corpora. Experiments show that our approach outperforms the existing monolingual corpus based methods in dependency triple translation and achieves promising results in collocation translation extraction.

1 Introduction

A collocation is an arbitrary and recurrent word combination (Benson, 1990). Previous work in collocation acquisition varies in the kinds of collocations detected. These range from two-word to multi-word, with or without syntactic structure (Smadja, 1993; Lin, 1998; Pearce, 2001; Seretan et al., 2003). In this paper, a collocation refers to a recurrent word pair linked with a certain syntactic relation. For instance, <solve, verb-object, problem> is a collocation with the syntactic relation verb-object.

Translation of collocations is difficult for non-native speakers. Many collocation translations are idiosyncratic in the sense that they are unpredictable from syntactic or semantic features. Consider Chinese to English translation. The translations of "解决" can be "solve" or "resolve". The translations of "问题" can be "problem" or "issue". However, translating the collocation "解决 ~ 问题" as "solve~problem" or "resolve~issue" is preferred over "solve~issue" or "resolve~problem". Automatically acquiring these collocation translations will be very useful for machine translation, cross-language information retrieval, second language learning and many other NLP applications (Smadja et al., 1996; Gao et al., 2002; Wu and Zhou, 2003).

Some studies have been done on acquiring collocation translations using parallel corpora (Smadja et al., 1996; Kupiec, 1993; Echizen-ya et al., 2003). These works implicitly assume that a bilingual corpus on a large scale can be obtained easily. However, despite efforts in compiling parallel corpora, sufficient amounts of such corpora are still unavailable. Instead of relying heavily on bilingual corpora, this paper aims to solve the bottleneck in a different way: to mine bilingual knowledge from structured monolingual corpora, which can be more easily obtained in large volumes.

Our method is based on the observation that despite the great differences between Chinese and English, the main dependency relations tend to have a strong direct correspondence (Zhou et al., 2001). Based on this assumption, a new translation model based on dependency triples is proposed. The translation probabilities are estimated from two monolingual corpora using the EM algorithm with the help of a bilingual translation dictionary. Experimental results show that the proposed triple translation model outperforms the other three models in our comparison.
The obtained triple translation model is also used for collocation translation extraction. Evaluation results demonstrate the effectiveness of our method.

The remainder of this paper is organized as follows. Section 2 provides a brief description of related work. Section 3 describes our triple translation model and training algorithm. Section 4 extracts collocation translations from two independent monolingual corpora. Section 5 evaluates the proposed method, and the last section draws conclusions and presents future work.

2 Related work

There has been much previous work on monolingual collocation extraction. Approaches can in general be classified into two types: window-based and syntax-based methods. The former extracts collocations within a fixed window (Church and Hanks, 1990; Smadja, 1993). The latter extracts collocations which have a syntactic relationship (Lin, 1998; Seretan et al., 2003). The syntax-based method has become more favorable with recent significant increases in parsing efficiency and accuracy. Several metrics have been adopted to measure the association strength in collocation extraction. Thanopoulos et al. (2002) give comparative evaluations of these metrics.

Most previous research on translation knowledge acquisition is based on parallel corpora (Brown et al., 1993). As for collocation translation, Smadja et al. (1996) implement a system to extract collocation translations from a parallel English-French corpus. English collocations are first extracted using the Xtract system, then corresponding French translations are sought based on the Dice coefficient. Echizen-ya et al. (2003) propose a method to extract bilingual collocations using recursive chain-link-type learning. In addition to collocation translation, there is also some related work on acquiring phrase or term translations from parallel corpora (Kupiec, 1993; Yamamoto and Matsumoto, 2000).

Since large aligned bilingual corpora are hard to obtain, some research has been conducted to exploit translation knowledge from non-parallel corpora. This work is mainly at the word level. Koehn and Knight (2000) present an approach to estimating word translation probabilities using unrelated monolingual corpora with the EM algorithm. The method exhibits promising results in selecting the right translation among several options provided by a bilingual dictionary. Zhou et al. (2001) propose a method to simulate translation probability with a cross-language similarity score, which is estimated from monolingual corpora based on mutual information. The method achieves good results in word translation selection. In addition, Dagan and Itai (1994) and Li (2002) propose using two monolingual corpora for word sense disambiguation. Fung (1998) uses an IR approach to induce new word translations from non-parallel, comparable texts. Rapp (1999) and Koehn and Knight (2002) extract new word translations from non-parallel corpora. Cao and Li (2002) acquire noun phrase translations by making use of web data. Wu and Zhou (2003) also make full use of large-scale monolingual corpora and limited bilingual corpora for synonymous collocation extraction.

3 Training a triple translation model from monolingual corpora

In this section, we first describe the dependency correspondence assumption underlying our approach. Then a dependency triple translation model and the monolingual corpus based training algorithm are proposed. The obtained triple translation model will be used for collocation translation extraction in the next section.
3.1 Dependency correspondence between Chinese and English

A dependency triple consists of a head, a dependant, and a dependency relation. Using a dependency parser, a sentence can be analyzed into dependency triples. We represent a triple as (w1, r, w2), where w1 and w2 are words and r is the dependency relation; it means that w2 has a dependency relation r with w1. For example, the triple (overcome, verb-object, difficulty) means that "difficulty" is the object of the verb "overcome". Among all the dependency relations, we only consider the following three key types that we think are the most important in text analysis and machine translation: verb-object (VO), noun-adj (AN), and verb-adv (AV).

It is our observation that there is a strong correspondence in major dependency relations in the translation between English and Chinese. For example, a verb-object relation in Chinese (e.g. (克服, VO, 困难)) is usually translated into the same verb-object relation in English (e.g. (overcome, VO, difficulty)). This assumption has been experimentally justified on a large and balanced bilingual corpus in our previous work (Zhou et al., 2001). We came to the conclusion that more than 80% of the above dependency relations have a one-to-one mapping between Chinese and English. We can conclude that there is indeed a very strong correspondence between Chinese and English in the three considered dependency relations. This fact will be used to estimate the triple translation model using two monolingual corpora.

3.2 Triple translation model

According to Bayes' theorem, given a Chinese triple c_tri = (c1, r_c, c2) and the set of its candidate English triple translations e_tri = (e1, r_e, e2), the best English triple ê_tri = (ê1, r_e, ê2) is the one that maximizes Equation (1):

  ê_tri = argmax_{e_tri} p(e_tri | c_tri)
        = argmax_{e_tri} p(e_tri) p(c_tri | e_tri) / p(c_tri)
        = argmax_{e_tri} p(e_tri) p(c_tri | e_tri)                                   (1)

where p(e_tri) is usually called the language model and p(c_tri | e_tri) is usually called the translation model.

Language Model

The language model p(e_tri) is calculated from the English triple database. In order to tackle the data sparseness problem, we smooth the language model with an interpolation method, as described below. When the given English triple occurs in the corpus, we can calculate it as in Equation (2):

  p(e_tri) = freq(e1, r_e, e2) / N                                                   (2)

where freq(e1, r_e, e2) is the frequency of the triple e_tri and N is the total count of all English triples in the training corpus.

For an English triple e_tri = (e1, r_e, e2), if we assume that the two words e1 and e2 are conditionally independent given the relation r_e, Equation (2) can be rewritten as in (3) (Lin, 1998):

  p(e_tri) = p(r_e) p(e1 | r_e) p(e2 | r_e)                                          (3)

where

  p(r_e) = freq(*, r_e, *) / N,
  p(e1 | r_e) = freq(e1, r_e, *) / freq(*, r_e, *),
  p(e2 | r_e) = freq(*, r_e, e2) / freq(*, r_e, *),

and the wildcard symbol * means that the position can be filled by any word or relation. With Equations (2) and (3), we get the interpolated language model shown in (4):

  p(e_tri) = λ freq(e_tri) / N + (1 - λ) p(r_e) p(e1 | r_e) p(e2 | r_e)              (4)

where 0 < λ < 1 and λ is calculated as:

  λ = 1 - 1 / (1 + freq(e_tri))                                                      (5)
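For concreteness, the following Python sketch implements the interpolated triple language model of Equations (2)-(5) over a list of (e1, relation, e2) tuples. It is a minimal illustration under the assumption that the triple database fits in memory, not the authors' implementation.

    # Minimal sketch (not the authors' code) of the interpolated triple
    # language model of Equations (2)-(5).
    from collections import Counter

    class TripleLM:
        def __init__(self, triples):
            self.N = len(triples)                                      # total triple count
            self.triple_freq = Counter(triples)                        # freq(e1, r, e2)
            self.rel_freq = Counter(r for _, r, _ in triples)          # freq(*, r, *)
            self.head_rel = Counter((e1, r) for e1, r, _ in triples)   # freq(e1, r, *)
            self.rel_dep = Counter((r, e2) for _, r, e2 in triples)    # freq(*, r, e2)

        def prob(self, e1, r, e2):
            f_tri = self.triple_freq[(e1, r, e2)]
            lam = 1.0 - 1.0 / (1.0 + f_tri)                 # Equation (5)
            mle = f_tri / self.N                            # Equation (2)
            f_r = self.rel_freq[r]
            if f_r == 0:
                return 0.0
            backoff = (f_r / self.N) * (self.head_rel[(e1, r)] / f_r) \
                      * (self.rel_dep[(r, e2)] / f_r)       # Equation (3)
            return lam * mle + (1.0 - lam) * backoff        # Equation (4)

For a frequent triple the weight λ approaches 1 and the estimate is dominated by the relative frequency; for an unseen triple λ is 0 and the model backs off entirely to the independence approximation of Equation (3).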
Translation Model

We simplify the translation model according to the following two assumptions.

Assumption 1: Given an English triple e_tri and the corresponding Chinese dependency relation r_c, c1 and c2 are conditionally independent. We have:

  p(c_tri | e_tri) = p(c1, r_c, c2 | e_tri)
                   = p(c1 | r_c, e_tri) p(c2 | r_c, e_tri) p(r_c | e_tri)            (6)

Assumption 2: For an English triple e_tri, assume that c_i only depends on e_i (i ∈ {1, 2}) and that r_c only depends on r_e. Equation (6) is then rewritten as:

  p(c_tri | e_tri) = p(c1 | r_c, e_tri) p(c2 | r_c, e_tri) p(r_c | e_tri)
                   = p(c1 | e1) p(c2 | e2) p(r_c | r_e)                              (7)

Notice that p(c1 | e1) and p(c2 | e2) are translation probabilities within triples; they are different from the unrestricted probabilities such as the ones in the IBM models (Brown et al., 1993). We distinguish the translation probability of the head (p(c1 | e1)) from that of the dependant (p(c2 | e2)). In the rest of the paper, we use p_head(c | e) and p_dep(c | e) to denote the head translation probability and the dependant translation probability respectively. As the correspondence between the same dependency relation across English and Chinese is strong, we simply assume p(r_c | r_e) = 1 for corresponding r_e and r_c, and p(r_c | r_e) = 0 in the other cases.

p_head(c1 | e1) and p_dep(c2 | e2) cannot be estimated directly because no triple-aligned corpus is available. Here, we present an approach to estimating these probabilities from two monolingual corpora based on the EM algorithm.

3.3 Estimation of word translation probability using the EM algorithm

The Chinese and English corpora are first parsed using a dependency parser, and two dependency triple databases are generated. The candidate English translation set of a Chinese triple is generated through a bilingual dictionary and the assumption of strong correspondence of dependency relations. There is a risk that unrelated triples in Chinese and English are connected by this method. However, as the conditions used to make the connection are quite strong (i.e. possible word translations in the same triple structure), we believe that this risk is not very severe. Then, the expectation maximization (EM) algorithm is introduced to iteratively strengthen the correct connections and weaken the incorrect connections.

EM Algorithm

According to section 3.2, the translation probabilities from a Chinese triple c_tri to an English triple e_tri can be computed using the English triple language model p(e_tri) and a translation model from English to Chinese p(c_tri | e_tri). The English language model can be estimated using Equation (4) and the translation model can be calculated using Equation (7). The translation probabilities p_head(c | e) and p_dep(c | e) are initially set to a uniform distribution as follows:

  p_head(c | e) = p_dep(c | e) = 1 / |Γ_e|   if c ∈ Γ_e,
                                 0           otherwise                               (8)

where Γ_e represents the translation set of the English word e. Then, the word translation probabilities are estimated iteratively using the EM algorithm. Figure 1 gives a formal description of the EM algorithm. The basic idea is that, under the restriction of the English triple language model p(e_tri) and the translation dictionary, we wish to estimate the translation probabilities p_head(c | e) and p_dep(c | e) that best explain the Chinese triple database as a translation of the English triple database.

Figure 1: EM algorithm
  Train the language model for English triples, p(e_tri);
  Initialize the word translation probabilities p_head(c | e) and p_dep(c | e) uniformly as in Equation (8);
  Iterate
    Set score_head(c | e) and score_dep(c | e) to 0 for all dictionary entries (c, e);
    for all Chinese triples c_tri = (c1, r_c, c2)
      for all candidate English triple translations e_tri = (e1, r_e, e2)
        compute the triple translation probability p(e_tri | c_tri) as
          p(e_tri) p_head(c1 | e1) p_dep(c2 | e2) p(r_c | r_e)
      endfor
      normalize p(e_tri | c_tri) so that the probabilities sum to 1;
      for all triple translations e_tri = (e1, r_e, e2)
        add p(e_tri | c_tri) to score_head(c1 | e1)
        add p(e_tri | c_tri) to score_dep(c2 | e2)
      endfor
    endfor
    for all translation pairs (c, e)
      set p_head(c | e) to the normalized score_head(c | e);
      set p_dep(c | e) to the normalized score_dep(c | e);
    endfor
  enditerate

In each iteration, the normalized triple translation probabilities are used to update the word translation probabilities. Intuitively, after finding the most probable translations of a Chinese triple, we collect counts for the word translations it contains. Since the English triple language model provides context information for the disambiguation of the Chinese words, only the appropriate occurrences are counted. Now, with the language model estimated using Equation (4) and the translation probabilities estimated using the EM algorithm, we can compute the best triple translation for a given Chinese triple using Equations (1) and (7).

4 Collocation translation extraction from two monolingual corpora

This section describes how to extract collocation translations from independent monolingual corpora. First, collocations are extracted from a monolingual triple database. Then, collocation translations are acquired using the triple translation model obtained in Section 3.

4.1 Monolingual collocation extraction

As introduced in Section 2, much work has been done on extracting collocations. Among all the association metrics, the log likelihood ratio (LLR) has proved to give better results (Dunning, 1993; Thanopoulos et al., 2002). In this paper, we take LLR as the metric to extract collocations from the dependency triple database. For a given Chinese triple c_tri = (c1, r_c, c2), the LLR score is calculated as follows:

  Logl = a log a + b log b + c log c + d log d
         - (a+b) log(a+b) - (a+c) log(a+c) - (b+d) log(b+d) - (c+d) log(c+d)
         + N log N                                                                   (9)

where

  a = freq(c1, r_c, c2),
  b = freq(c1, r_c, *) - a,
  c = freq(*, r_c, c2) - a,
  d = N - a - b - c,

and N is the total count of all Chinese triples. Those triples whose LLR values are larger than a given threshold are taken as collocations. This syntax-based notion of collocation has the advantage that it can represent both adjacent and long-distance word associations. Here, we only extract the three main types of collocation mentioned in Section 3.1.
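The LLR computation of Equation (9) can be written compactly as below. This is an illustrative Python sketch rather than the authors' implementation; the three frequency tables are assumed to be pre-computed from the parsed Chinese triple database.

    # Hedged sketch of the LLR score of Equation (9) for one dependency triple.
    import math

    def llr_score(c1, r, c2, triple_freq, head_rel_freq, rel_dep_freq, N):
        """triple_freq[(c1, r, c2)] = freq(c1, r, c2)
        head_rel_freq[(c1, r)]      = freq(c1, r, *)
        rel_dep_freq[(r, c2)]       = freq(*, r, c2)
        N                           = total number of Chinese triples."""
        xlogx = lambda x: x * math.log(x) if x > 0 else 0.0  # treat 0 log 0 as 0
        a = triple_freq[(c1, r, c2)]
        b = head_rel_freq[(c1, r)] - a
        c = rel_dep_freq[(r, c2)] - a
        d = N - a - b - c
        return (xlogx(a) + xlogx(b) + xlogx(c) + xlogx(d)
                - xlogx(a + b) - xlogx(a + c) - xlogx(b + d) - xlogx(c + d)
                + xlogx(N))

Triples whose score exceeds a chosen threshold are then kept as collocations, e.g. [t for t in triple_freq if llr_score(*t, triple_freq, head_rel_freq, rel_dep_freq, N) > threshold].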
4.2 Collocation translation extraction

For the acquired collocations, we try to extract their translations from the other monolingual corpus using the triple translation model trained with the method proposed in Section 3. Our objective is to acquire collocation translations as translation knowledge for a machine translation system, so only highly reliable collocation translations are extracted. Figure 2 describes the algorithm for Chinese-English collocation translation extraction.

Figure 2: Collocation translation extraction

  For each Chinese collocation c_col:
    a. Acquire the best English triple translation ê_tri using the C-E triple translation model:
         ê_tri = argmax_{e_tri} p(e_tri) p(c_col | e_tri)
    b. For the acquired ê_tri, calculate the best Chinese triple translation ĉ_tri using the E-C triple translation model:
         ĉ_tri = argmax_{c_tri} p(c_tri) p(ê_tri | c_tri)
    c. If c_col = ĉ_tri, add c_col <-> ê_tri to the collocation translation database.

It can be seen that the best English triple candidate is extracted as the translation of the given Chinese collocation only if the Chinese collocation is also the best translation candidate of that English triple. The English triple, however, is not necessarily a collocation itself. English collocation translations can be extracted in a similar way.
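A compact way to express the round-trip check of Figure 2 is sketched below. It is an illustrative reimplementation rather than the authors' code, and the best_translation() method on the two direction-specific models is a hypothetical interface standing in for the argmax computations in steps (a) and (b).

    # Hedged sketch of the bidirectional filter in Figure 2 (not the authors' code).
    # ce_model / ec_model are assumed to expose a hypothetical best_translation()
    # returning argmax_t p(t) * p(source | t), or None when no candidate exists.

    def extract_collocation_translations(chinese_collocations, ce_model, ec_model):
        translations = {}
        for c_col in chinese_collocations:
            e_best = ce_model.best_translation(c_col)    # step (a): C -> E
            if e_best is None:
                continue
            c_back = ec_model.best_translation(e_best)   # step (b): E -> C
            if c_back == c_col:                          # step (c): keep round-trip matches only
                translations[c_col] = e_best
        return translations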
4.3 Implementation of our approach

Our English corpus is from the Wall Street Journal (1987-1992) and the Associated Press (1988-1990), and our Chinese corpus is from the People's Daily (1980-1998). The two corpora are parsed using the NLPWin parser (Heidorn, 2000), a rule-based parser developed at Microsoft Research which parses several languages including Chinese and English; its output can be a phrase structure parse tree or a logical form represented with dependency triples. The statistics for the three main types of dependency triples are shown in Tables 1 and 2, where Token refers to the total number of triple occurrences and Type refers to the number of unique triples in the corpus. Statistics for the extracted Chinese collocations and the collocation translations are shown in Table 3.

  Class   #Type       #Token
  VO      1,579,783   19,168,229
  AN      311,560     5,383,200
  AV      546,054     9,467,103
  Table 1: Chinese dependency triples

  Class   #Type       #Token
  VO      1,526,747   8,943,903
  AN      1,163,440   6,386,097
  AV      215,110     1,034,410
  Table 2: English dependency triples

  Class   #Type    #Translated
  VO      99,609   28,841
  AN      35,951   12,615
  AV      46,515   6,176
  Table 3: Extracted Chinese collocations and E-C translation pairs

The translation dictionaries we used in training and translation combine two dictionaries, HITDic and NLPWinDic, built by the Harbin Institute of Technology and Microsoft Research respectively. The final E-C dictionary contains 126,135 entries, and the C-E dictionary contains 91,275 entries.

5 Experiments and evaluation

To evaluate the effectiveness of our method, two experiments were conducted. The first compares our method with three other monolingual corpus based methods on triple translation. The second evaluates the accuracy of the acquired collocation translations.

5.1 Dependency triple translation

Triple translation experiments are conducted from Chinese to English. We randomly selected 2,000 Chinese triples (whose frequency is larger than 2) from the dependency triple database. The standard translation answer sets were built manually by three linguistic experts; for each Chinese triple, its English translation set contains the English triples provided by any of the three linguists. Among the 2,000 candidate triples, there are 101 triples that cannot be translated into English triples with the same relation. For example, the Chinese triple (讲, VO, 价钱) should be translated as "bargain"; the two words in the triple cannot be translated separately. We call this kind of collocation translation a non-compositional translation, and our current model cannot deal with it. In addition, there are also 157 erroneous dependency triples, which result from parsing mistakes. We filtered out these two kinds of triples and obtained a standard test set with 1,742 Chinese triples and 4,645 translations in total.

We compare our triple translation model with three other models on the same standard test set with the same translation dictionary. As the baseline experiment, Model A selects the highest-frequency translation for each word in the triple. Model B selects the translation with the maximal target triple probability, as proposed in (Dagan and Itai, 1994). Model C selects the translation using both a language model and a translation model, but the translation probability is simulated by a similarity score estimated from a monolingual corpus using a mutual information measure (Zhou et al., 2001). Model D is our triple translation model. Suppose c_tri = (c1, r_c, c2) is the Chinese triple to be translated. The four compared models can be formally expressed as follows:

  Model A:  ê_tri = ( argmax_{e1 ∈ Trans(c1)} freq(e1),  r_e,  argmax_{e2 ∈ Trans(c2)} freq(e2) )

  Model B:  ê_tri = argmax_{e1 ∈ Trans(c1), e2 ∈ Trans(c2)} p(e_tri) = argmax p(e1, r_e, e2)

  Model C:  ê_tri = argmax_{e1 ∈ Trans(c1), e2 ∈ Trans(c2)} p(e_tri) likelihood(c_tri | e_tri)
                  = argmax_{e1, e2} p(e_tri) Sim(e1, c1) Sim(e2, c2)

            where Sim(e, c) is the similarity score between e and c (Zhou et al., 2001).

  Model D (our model):
            ê_tri = argmax_{e1 ∈ Trans(c1), e2 ∈ Trans(c2)} p(e_tri) p(c_tri | e_tri)
                  = argmax_{e1, e2} p(e_tri) p_head(c1 | e1) p_dep(c2 | e2) p(r_c | r_e)

             Coverage (%)   Top 1 (%)   Top 3 (%)   Oracle (%)
  Model A                   17.21       ----
  Model B                   33.56       53.79
  Model C                   35.88       57.74
  Model D      83.98        36.91       58.58       66.30
  Table 4: Translation results comparison

The evaluation results on the standard test set are shown in Table 4, where coverage is the percentage of triples which can be translated. Some triples cannot be translated by Models B, C and D because of missing dictionary translations or data sparseness in triples. In fact, the coverage of Model A is 100%; it was set to the same coverage as the others in order to compare accuracy on the same test set. The oracle score is the upper-bound accuracy under the conditions of the current translation dictionary and standard test set. Top N accuracy is defined as the percentage of triples whose selected top N translations include a correct translation.

We can see that both Model C and Model D achieve better results than Model B. This shows that the translation model trained from monolingual corpora really helps to improve translation performance. Our model also outperforms Model C, which demonstrates that the probabilities trained by our EM algorithm perform better than heuristic similarity scores. In fact, our evaluation method is very rigorous: to avoid bias, we take human translation results as the standard, so the real translation accuracy is reasonably better than the evaluation results suggest. But as we can see, compared to the oracle score, the current models still have much room for improvement, and coverage is also not high due to the limitations of the translation dictionary and the sparse-data problem.

5.2 Collocation translation extraction

47,632 Chinese collocation translations were extracted with the method proposed in Section 4. We randomly selected 1,000 translations for evaluation.
Three linguistic experts tagged the acceptability of each translation. Those translations tagged as acceptable by at least two experts are counted as correct. The evaluation results are shown in Table 5.

             Total   Accepted   Accuracy (%)
  VO         590     373        63.22
  AN         292     199        68.15
  AV         118     60         50.85
  All        1,000   632        63.20
  ColTrans   334     241        72.16
  Table 5: Extracted collocation translation results

We can see that the extracted collocation translations achieve a much better result than triple translation. The average accuracy is 63.20%, and the collocations with the AN relation achieve the highest accuracy of 68.15%. If we only consider those Chinese collocations whose translations are also English collocations, we obtain an even better accuracy of 72.16%, as shown in the last row (ColTrans) of Table 5. These results justify our idea that we can acquire reliable translations for collocations by making use of the triple translation model in both directions.

The acquired collocation translations are very valuable for building translation knowledge. Manually crafting collocation translations is time-consuming and cannot ensure high quality in a consistent way. Our work will certainly improve the quality and efficiency of collocation translation acquisition.

5.3 Discussion

Although our approach achieves promising results, it still has some limitations to be remedied in future work.

(1) Translation dictionary extension. Due to the limited coverage of the dictionary, a correct translation may not be stored in the dictionary at all, which naturally limits the coverage of triple translations. Some research has been done on expanding translation dictionaries using non-parallel corpora (Rapp, 1999; Koehn and Knight, 2002); it could be used to improve our work.

(2) Noise filtering for parsers. Since we use parsers to generate the dependency triple databases, parsing mistakes are inevitably introduced. In our triple translation test data, 7.85% (157/2000) of the triple types are erroneous. These errors will certainly influence the translation probability estimation in the training process. We need to find an effective way to filter out mistakes and perform the necessary automatic correction.

(3) Non-compositional collocation translation. Our model is based on the dependency correspondence assumption, which assumes that a triple's translation is also a triple. But there are still some collocations that cannot be translated word by word. For example, the Chinese triple (富有, VO, 成效) is usually translated as "be effective", and the English triple (take, VO, place) is usually translated as "发生"; the two words in the triple cannot be translated separately. Our current model cannot deal with this kind of non-compositional collocation translation. Melamed (1997) and Lin (1999) have done some research on non-compositional phrase discovery. We will consider taking their work as a complement to our model.

6 Conclusion and future work

This paper proposes a novel method to train a triple translation model and extract collocation translations from two independent monolingual corpora. Evaluation results show that it outperforms the existing monolingual corpus based methods in triple translation, mainly due to the use of the EM algorithm in cross-language translation probability estimation. By making use of the acquired triple translation model in both directions, promising results are achieved in collocation translation extraction.
Our work also demonstrates the possibility of making full use of monolingual resources, such as corpora and parsers, for bilingual tasks. This can help overcome the bottleneck caused by the lack of large-scale bilingual corpora. The approach is also applicable to comparable corpora, which are easier to obtain than bilingual corpora.

In future work, we are interested in extending our method to the problem of non-compositional collocation translation. We are also interested in incorporating our triple translation model into sentence-level translation.

7 Acknowledgements

The authors would like to thank John Chen, Jianfeng Gao and Yunbo Cao for their valuable suggestions and comments on a preliminary draft of this paper.

References

Morton Benson. 1990. Collocations and general-purpose dictionaries. International Journal of Lexicography, 3(1):23-35.

Yunbo Cao and Hang Li. 2002. Base noun phrase translation using Web data and the EM algorithm. The 19th International Conference on Computational Linguistics, pp. 127-133.

Kenneth W. Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.

Ido Dagan and Alon Itai. 1994. Word sense disambiguation using a second language monolingual corpus. Computational Linguistics, 20(4):563-596.

Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61-74.

Hiroshi Echizen-ya, Kenji Araki, Yoshi Momouchi and Koji Tochinai. 2003. Effectiveness of automatic extraction of bilingual collocations using recursive chain-link-type learning. The 9th Machine Translation Summit, pp. 102-109.

Pascale Fung and Yee Lo Yuen. 1998. An IR approach for translating new words from nonparallel, comparable texts. The 36th Annual Conference of the Association for Computational Linguistics, pp. 414-420.

Jianfeng Gao, Jianyun Nie, Hongzhao He, Weijun Chen and Ming Zhou. 2002. Resolving query translation ambiguity using a decaying co-occurrence model and syntactic dependence relations. The 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 183-190.

G. Heidorn. 2000. Intelligent writing assistant. In R. Dale, H. Moisl and H. Somers, editors, A Handbook of Natural Language Processing: Techniques and Applications for the Processing of Language as Text. Marcel Dekker.

Philipp Koehn and Kevin Knight. 2000. Estimating word translation probabilities from unrelated monolingual corpora using the EM algorithm. National Conference on Artificial Intelligence, pp. 711-715.

Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. Unsupervised Lexical Acquisition: Workshop of the ACL Special Interest Group on the Lexicon, pp. 9-16.

Julian Kupiec. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. The 31st Annual Meeting of the Association for Computational Linguistics, pp. 23-30.

Cong Li and Hang Li. 2002. Word translation disambiguation using bilingual bootstrapping. The 40th Annual Conference of the Association for Computational Linguistics, pp. 343-351.

Dekang Lin. 1998. Extracting collocations from text corpora. First Workshop on Computational Terminology, pp. 57-63.

Dekang Lin. 1999. Automatic identification of non-compositional phrases. The 37th Annual Meeting of the Association for Computational Linguistics, pp. 317-324.

Ilya Dan Melamed. 1997. Automatic discovery of non-compositional compounds in parallel data.
The 2nd Conference on Empirical Methods in Natural Language Processing, pp. 97-108.

Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra and Robert L. Mercer. 1993. The mathematics of machine translation: parameter estimation. Computational Linguistics, 19(2):263-313.

Reinhard Rapp. 1999. Automatic identification of word translations from unrelated English and German corpora. The 37th Annual Conference of the Association for Computational Linguistics, pp. 519-526.

Violeta Seretan, Luka Nerima and Eric Wehrli. 2003. Extraction of multi-word collocations using syntactic bigram composition. International Conference on Recent Advances in NLP, pp. 424-431.

Frank Smadja. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19(1):143-177.

Frank Smadja, Kathleen R. McKeown and Vasileios Hatzivassiloglou. 1996. Translating collocations for bilingual lexicons: a statistical approach. Computational Linguistics, 22:1-38.

Aristomenis Thanopoulos, Nikos Fakotakis and George Kokkinakis. 2002. Comparative evaluation of collocation extraction metrics. The 3rd International Conference on Language Resources and Evaluation, pp. 620-625.

Hua Wu and Ming Zhou. 2003. Synonymous collocation extraction using translation information. The 41st Annual Conference of the Association for Computational Linguistics, pp. 120-127.

Kaoru Yamamoto and Yuji Matsumoto. 2000. Acquisition of phrase-level bilingual correspondence using dependency structure. The 18th International Conference on Computational Linguistics, pp. 933-939.

Ming Zhou, Ding Yuan and Changning Huang. 2001. Improving translation selection with a new translation model trained by independent monolingual corpora. Computational Linguistics & Chinese Language Processing, 6(1):1-26.
Statistical Machine Translation with Word- and Sentence-Aligned Parallel Corpora

Chris Callison-Burch    David Talbot    Miles Osborne
School of Informatics
University of Edinburgh
2 Buccleuch Place
Edinburgh, EH8 9LW

Abstract

The parameters of statistical translation models are typically estimated from sentence-aligned parallel corpora. We show that significant improvements in the alignment and translation quality of such models can be achieved by additionally including word-aligned data during training. Incorporating word-level alignments into the parameter estimation of the IBM models reduces alignment error rate and increases the Bleu score when compared to training the same models only on sentence-aligned data. On the Verbmobil data set, we attain a 38% reduction in the alignment error rate and a higher Bleu score with half as many training examples. We discuss how varying the ratio of word-aligned to sentence-aligned data affects the expected performance gain.

1 Introduction

Machine translation systems based on probabilistic translation models (Brown et al., 1993) are generally trained using sentence-aligned parallel corpora. For many language pairs these exist in abundant quantities. However, for new domains or uncommon language pairs extensive parallel corpora are often hard to come by.

Two factors could increase the performance of statistical machine translation for new language pairs and domains: a reduction in the cost of creating new training data, and the development of more efficient methods for exploiting existing training data. Approaches such as harvesting parallel corpora from the web (Resnik and Smith, 2003) address the creation of data. We take the second, complementary approach: we address the problem of efficiently exploiting existing parallel corpora by adding explicit word-level alignments between a number of the sentence pairs in the training corpus. We modify the standard parameter estimation procedure for the IBM Models and HMM variants so that they can exploit these additional word-level alignments. Our approach uses both word- and sentence-level alignments as training material.

In this paper we:

1. Describe how the parameter estimation framework of Brown et al. (1993) can be adapted to incorporate word-level alignments;
2. Report significant improvements in alignment error rate and translation quality when training on data with word-level alignments;
3. Demonstrate that the inclusion of word-level alignments is more effective than using a bilingual dictionary;
4. Show the importance of amplifying the contribution of word-aligned data during parameter estimation.

This paper shows that word-level alignments improve the parameter estimates for translation models, which in turn results in improved statistical translation for languages that do not have large sentence-aligned parallel corpora.

2 Parameter Estimation Using Sentence-Aligned Corpora

The task of statistical machine translation is to choose the source sentence e that is the most probable translation of a given sentence f in a foreign language. Rather than choosing the e* that directly maximizes p(e|f), Brown et al. (1993) apply Bayes' rule and select the source sentence:

  e* = argmax_e p(e) p(f|e).                                                         (1)

In this equation p(e) is a language model probability and p(f|e) is a translation model probability. A series of increasingly sophisticated translation models, referred to as the IBM Models, was defined in Brown et al. (1993).
The translation model p(f|e) is defined as a marginal probability obtained by summing over word-level alignments, a, between the source and target sentences:

  p(f|e) = Σ_a p(f, a|e).                                                            (2)

While word-level alignments are a crucial component of the IBM models, the model parameters are generally estimated from sentence-aligned parallel corpora without explicit word-level alignment information. The reason for this is that word-aligned parallel corpora do not generally exist. Consequently, word-level alignments are treated as hidden variables. To estimate the values of these hidden variables, the expectation maximization (EM) framework for maximum likelihood estimation from incomplete data is used (Dempster et al., 1977).

The previous section describes how the translation probability of a given sentence pair is obtained by summing over all alignments, p(f|e) = Σ_a p(f, a|e). EM seeks to maximize the marginal log likelihood, log p(f|e), indirectly by iteratively maximizing a bound on this term known as the expected complete log likelihood, ⟨log p(f, a|e)⟩_q(a):

  log p(f|e) = log Σ_a p(f, a|e)                                                     (3)
             = log Σ_a q(a) p(f, a|e) / q(a)                                         (4)
             ≥ Σ_a q(a) log [ p(f, a|e) / q(a) ]                                     (5)
             = ⟨log p(f, a|e)⟩_q(a) + H(q(a))

where ⟨·⟩_q(·) denotes an expectation with respect to q(·) and the bound in (5) is given by Jensen's inequality. By choosing q(a) = p(a|f, e) this bound becomes an equality. The maximization consists of two steps:

• E-step: calculate the posterior probability under the current model of every permissible alignment for each sentence pair in the sentence-aligned training corpus;

• M-step: maximize the expected log likelihood under this posterior distribution, ⟨log p(f, a|e)⟩_q(a), with respect to the model's parameters.

While in standard maximum likelihood estimation events are counted directly to estimate parameter settings, in EM we effectively collect fractional counts of events (here permissible alignments weighted by their posterior probability) and use these to iteratively update the parameters.

Since only some of the permissible alignments make sense linguistically, we would like EM to use the posterior alignment probabilities calculated in the E-step to weight plausible alignments higher than the large number of bogus alignments which are included in the expected complete log likelihood. This in turn should encourage the parameter adjustments made in the M-step to converge to linguistically plausible values.

Since the number of permissible alignments for a sentence pair grows exponentially in the length of the sentences for the later IBM Models, a large number of informative example sentence pairs is required to distinguish between plausible and implausible alignments. Given sufficient data, the distinction emerges because words which are mutual translations appear together more frequently in aligned sentences in the corpus. Given the high number of model parameters and permissible alignments, however, huge amounts of data will be required to estimate reasonable translation models from sentence-aligned data alone.

3 Parameter Estimation Using Word- and Sentence-Aligned Corpora

As an alternative to collecting a huge amount of sentence-aligned training data, by annotating some of our sentence pairs with word-level alignments we can explicitly provide information to highlight plausible alignments and thereby help the parameters converge on reasonable settings with less training data.
Since word alignments are inherent in the IBM translation models, it is straightforward to incorporate this information into the parameter estimation procedure. For sentence pairs with explicit word-level alignments marked, fractional counts over all permissible alignments need not be collected. Instead, whole counts are collected for the single hand-annotated alignment of each sentence pair which has been word-aligned. By doing this the expected complete log likelihood collapses to a single term, the complete log likelihood log p(f, a|e), and the E-step is circumvented.

The parameter estimation procedure now involves maximizing the likelihood of data aligned only at the sentence level and also of data aligned at the word level. The mixed likelihood function, M, combines the expected information contained in the sentence-aligned data with the complete information contained in the word-aligned data:

  M = Σ_{s=1}^{Ns} (1 - λ) ⟨log p(f_s, a_s | e_s)⟩_q(a_s)  +  Σ_{w=1}^{Nw} λ log p(f_w, a_w | e_w)      (6)

Here s and w index the Ns sentence-aligned sentence pairs and the Nw word-aligned sentence pairs in our corpora respectively. Thus M combines the expected complete log likelihood and the complete log likelihood. In order to control the relative contributions of the sentence-aligned and word-aligned data in the parameter estimation procedure, we introduce a mixing weight λ that can take values between 0 and 1.

3.1 The impact of word-level alignments

The impact of word-level alignments on parameter estimation is closely tied to the structure of the IBM Models. Since translation and word alignment parameters are shared between all sentences, the posterior alignment probability of a source-target word pair in the sentence-aligned section of the corpus will tend to be relatively high if the pair was aligned in the word-aligned section. In this way, the alignments from the word-aligned data effectively percolate through to the sentence-aligned data, indirectly constraining the E-step of EM.

3.2 Weighting the contribution of word-aligned data

With λ incorporated, Equation 6 becomes an interpolation of the expected complete log likelihood provided by the sentence-aligned data and the complete log likelihood provided by the word-aligned data. The use of a weight to balance the contributions of unlabeled and labeled data in maximum likelihood estimation was proposed by Nigam et al. (2000). λ quantifies our relative confidence in the expected statistics and the observed statistics estimated from the sentence- and word-aligned data respectively.

Standard maximum likelihood estimation (MLE), which weighs all training samples equally, corresponds to an implicit value of lambda equal to the proportion of word-aligned data in the whole of the training set: λ = Nw / (Nw + Ns). However, when the total amount of sentence-aligned data is much larger than the amount of word-aligned data, this implies a value of λ close to zero, which means that M can be maximized while essentially ignoring the likelihood of the word-aligned data. Since we believe that the explicit word-alignment information will be highly effective in distinguishing plausible alignments in the corpus as a whole, we expect to see benefits from setting λ to amplify the contribution of the word-aligned data, particularly when this is a relatively small portion of the corpus.
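In practice, the λ-weighting of Equation 6 amounts to scaling the sufficient statistics that each kind of data contributes before the M-step re-normalizes them. The following Python sketch is a simplified, IBM Model 1-style illustration of that idea and is not the actual GIZA++ modification; the sentence pairs, hand alignments, and translation table t are assumed inputs, and the small floor value is only there to avoid division by zero.

    # Hedged, Model 1-style sketch of lambda-weighted count collection (Equation 6).
    from collections import defaultdict

    def e_step_counts(f_sent, e_sent, t):
        """Fractional counts: posterior over which English word generated each
        foreign word, under the current translation table t."""
        counts = defaultdict(float)
        for fw in f_sent:
            z = sum(t.get((fw, ew), 1e-12) for ew in e_sent)
            for ew in e_sent:
                counts[(fw, ew)] += t.get((fw, ew), 1e-12) / z
        return counts

    def annotated_counts(f_sent, e_sent, alignment):
        """Whole counts from a hand alignment given as (f_index, e_index) pairs."""
        counts = defaultdict(float)
        for i, j in alignment:
            counts[(f_sent[i], e_sent[j])] += 1.0
        return counts

    def weighted_m_step(sentence_pairs, word_aligned_pairs, t, lam):
        """One EM update mixing expected and observed statistics as in Equation (6)."""
        counts = defaultdict(float)
        for f_sent, e_sent in sentence_pairs:                    # (1 - lambda) * expected counts
            for pair, c in e_step_counts(f_sent, e_sent, t).items():
                counts[pair] += (1.0 - lam) * c
        for f_sent, e_sent, align in word_aligned_pairs:         # lambda * observed counts
            for pair, c in annotated_counts(f_sent, e_sent, align).items():
                counts[pair] += lam * c
        totals = defaultdict(float)                              # normalize per English word
        for (fw, ew), c in counts.items():
            totals[ew] += c
        return {(fw, ew): c / totals[ew] for (fw, ew), c in counts.items()}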
4 Experimental Design

To perform our experiments with word-level alignments we modified GIZA++, an existing and freely available implementation of the IBM models and HMM variants (Och and Ney, 2003). Our modifications involved circumventing the E-step for sentences which had word-level alignments and incorporating these observed alignment statistics in the M-step. The observed and expected statistics were weighted by λ and (1 - λ) respectively, as were their contributions to the mixed log likelihood.

In order to measure the accuracy of the predictions that the statistical translation models make under our various experimental settings, we chose the alignment error rate (AER) metric, which is defined in Och and Ney (2003). We also investigated whether improved AER leads to improved translation quality. We used the alignments created during our AER experiments as the input to a phrase-based decoder, translated a test set of 350 sentences, and used the Bleu metric (Papineni et al., 2001) to automatically evaluate machine translation quality.

We used the Verbmobil German-English parallel corpus as a source of training data because it has been used extensively in evaluating statistical translation and alignment accuracy. This data set comes with a manually word-aligned set of 350 sentences which we used as our test set. Our experiments additionally required a very large set of word-aligned sentence pairs to be incorporated in the training set. Since previous work has shown that an alignment error rate as low as 6% can be achieved for the Verbmobil data when training on the complete set of 34,000 sentence pairs, we automatically generated a set of alignments for the entire training data set using the unmodified version of GIZA++. We wanted to use automatic alignments in lieu of actual hand alignments so that we would be able to perform experiments using large data sets. We ran a pilot experiment to test whether our automatic alignments would produce results similar to manual alignments: we divided our manual word alignments into training and test sets and compared the performance of models trained on human-aligned data against models trained on automatically aligned data. A 100-fold cross validation showed that manual and automatic alignments produced AER results that were similar to each other to within 0.1%. (Note that we stripped out probable alignments from our manually produced alignments. Probable alignments are large blocks of words which the annotator was uncertain of how to align. The many possible word-to-word translations implied by the manual alignments led to lower results than with the automatic alignments, which contained fewer word-to-word translation possibilities.)

Having satisfied ourselves that automatic alignments were a sufficient stand-in for manual alignments, we performed our main experiments, which fell into the following categories:

1. Verifying that the use of word-aligned data has an impact on the quality of alignments predicted by the IBM Models, and comparing the quality increase to that gained by using a bilingual dictionary in the estimation stage.
2. Evaluating whether improved parameter estimates of alignment quality lead to improved translation quality.
3. Experimenting with how increasing the ratio of word-aligned to sentence-aligned data affects performance.
4. Experimenting with our λ parameter, which allows us to weight the relative contributions of the word-aligned and sentence-aligned data, and relating it to the ratio experiments.
5. Showing that the improvements in AER and translation quality hold for another corpus.
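As a concrete reference for the evaluation that follows, here is a minimal Python sketch of the AER computation as defined by Och and Ney (2003). It assumes the gold annotations are split into sure (S) and possible (P) alignment links, each link being an (i, j) word-position pair; it is an illustration rather than the evaluation script used in these experiments.

    # Minimal sketch of the alignment error rate (AER) of Och and Ney (2003).
    # sure and possible are the gold link sets (with sure a subset of possible);
    # predicted is the set of alignment links produced by the model.

    def aer(sure, possible, predicted):
        sure, possible, predicted = set(sure), set(possible), set(predicted)
        if not predicted and not sure:
            return 0.0
        overlap = len(predicted & sure) + len(predicted & possible)
        return 1.0 - overlap / (len(predicted) + len(sure))

    # Example: one sure link recovered, one spurious link predicted.
    print(aer(sure={(0, 0)}, possible={(0, 0), (1, 2)}, predicted={(0, 0), (1, 1)}))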
5 Results

5.1 Improved alignment quality

As a starting point for comparison we trained GIZA++ using four different-sized portions of the Verbmobil corpus. For each of those portions we output the most probable alignments of the test data for Model 1, the HMM, Model 3, and Model 4 (using the default training schemes for GIZA++ and leaving the model smoothing parameters at their default settings), and evaluated their AERs. Table 1 gives the alignment error rates when training on 500, 2,000, 8,000, and 16,000 sentence pairs from the Verbmobil corpus without using any word-aligned training data.

              Size of training corpus
  Model      .5k      2k       8k       16k
  Model 1    29.64    24.66    22.64    21.68
  HMM        18.74    15.63    12.39    12.04
  Model 3    26.07    18.64    14.39    13.87
  Model 4    20.59    16.05    12.63    12.17
  Table 1: Alignment error rates for the various IBM Models trained with sentence-aligned data

We obtained much better results when incorporating word alignments with our mixed likelihood function. Table 2 shows the results for the different corpus sizes when all of the sentence pairs have been word-aligned.

              Size of training corpus
  Model      .5k      2k       8k       16k
  Model 1    21.43    18.04    16.49    16.20
  HMM        14.42    10.47    9.09     8.80
  Model 3    20.56    13.25    10.82    10.51
  Model 4    14.19    10.13    7.87     7.52
  Table 2: Alignment error rates for the various IBM Models trained with word-aligned data

The best performing model with the unmodified GIZA++ code was the HMM trained on 16,000 sentence pairs, which had an alignment error rate of 12.04%. With our modified code the best performing model was Model 4 trained on 16,000 sentence pairs (where all the sentence pairs are word-aligned), with an alignment error rate of 7.52%. The difference between the best performing models represents a 38% relative reduction in AER. Interestingly, we achieve a lower AER than the best performing unmodified models using a corpus that is one-eighth the size of the sentence-aligned data.

Figure 1 shows an example of the improved alignments that are achieved when using the word-aligned data. The example alignments are for a held-out sentence pair that was aligned after training on 500 sentence pairs. The alignments produced when training on word-aligned data are dramatically better than when training on sentence-aligned data.

We contrasted these improvements with the improvements to be had from incorporating a bilingual dictionary into the estimation process. For this experiment we allowed a bilingual dictionary to constrain which words can act as translations of each other during the initial estimates of translation probabilities (as described in Och and Ney (2003)). As can be seen in Table 3, using a dictionary reduces the AER when compared to using GIZA++ without a dictionary, but not as dramatically as integrating the word alignments. We further tried combining a dictionary with our word alignments but found that the dictionary results in only very minimal improvements over using word alignments alone.
Figure 1: Example alignments of the held-out sentence pair "Dann reserviere ich zwei Einzelzimmer, nehme ich mal an." / "Then I will reserve two single rooms, I assume.": (a) alignment using sentence-aligned training data, (b) alignment using word-aligned data, (c) reference manual alignment.

              Size of training corpus
  Model      .5k      2k       8k       16k
  Model 1    23.56    20.75    18.69    18.37
  HMM        15.71    12.15    9.91     10.13
  Model 3    22.11    16.93    13.78    12.33
  Model 4    17.07    13.60    11.49    10.77
  Table 3: The improved alignment error rates when using a dictionary instead of word-aligned data to constrain word translations

           Sentence-aligned       Word-aligned
  Size     AER      Bleu          AER      Bleu
  500      20.59    0.211         14.19    0.233
  2000     16.05    0.247         10.13    0.260
  8000     12.63    0.265         7.87     0.278
  16000    12.17    0.270         7.52     0.282
  Table 4: Improved AER leads to improved translation quality

5.2 Improved translation quality

The fact that using word-aligned data in estimating the parameters for machine translation leads to better alignments is predictable. A more significant question is whether it leads to improved translation quality. In order to test whether our improved parameter estimates lead to better translation quality, we used a state-of-the-art phrase-based decoder to translate a held-out set of German sentences into English. The phrase-based decoder extracts phrases from the word alignments produced by GIZA++ and computes translation probabilities based on the frequency of one phrase being aligned with another (Koehn et al., 2003). We trained a language model using the 34,000 English sentences from the training set.

Table 4 shows that using word-aligned data leads to better translation quality than using sentence-aligned data. In particular, significantly less data is needed to achieve a high Bleu score when using word alignments: training on a corpus of 8,000 sentence pairs with word alignments results in a higher Bleu score than training on a corpus of 16,000 sentence pairs without word alignments.

5.3 Weighting the word-aligned data

We have seen that using training data consisting entirely of word-aligned sentence pairs leads to better alignment accuracy and translation quality. However, because manually word-aligning sentence pairs costs more than just using sentence-aligned data, it is unlikely that we will ever want to label an entire corpus. Instead we will likely have a relatively small portion of the corpus word-aligned. We want to be sure that this small amount of data labeled with word alignments does not get overwhelmed by a larger amount of unlabeled data.

Thus we introduced the λ weight into our mixed likelihood function. Table 5 compares the natural setting of λ (where it is proportional to the amount of labeled data in the corpus) to a value that amplifies the contribution of the word-aligned data. Figure 2 plots AER for a range of values of λ and shows that AER decreases as λ increases. Placing nearly all the weight onto the word-aligned data seems to be most effective (at λ = 1, not shown in Figure 2, the data that is only sentence-aligned is ignored and the AER is therefore higher). Note that this did not vary the training data size, only the relative contributions of sentence- and word-aligned training material.

            AER when            AER when
  Ratio     λ = standard MLE    λ = 0.9
  0.1       11.73               9.40
  0.2       10.89               8.66
  0.3       10.23               8.13
  0.5       8.65                8.19
  0.7       8.29                8.03
  0.9       7.78                7.78
  Table 5: The effect of weighting word-aligned data more heavily than its proportion in the training data (corpus size 16,000 sentence pairs)

Figure 2: The effect on AER of varying λ for a training corpus of 16K sentence pairs with various proportions of word alignments (curves for 20%, 50%, 70% and 100% word-aligned data; AER on the y-axis, λ on the x-axis).
5.4 Ratio of word- to sentence-aligned data

We also varied the ratio of word-aligned to sentence-aligned data, evaluated the AER and Bleu scores, and assigned a high value to λ (= 0.9). Figure 3 shows how AER improves as more word-aligned data is added. Each curve on the graph represents a corpus size and shows its reduction in error rate as more word-aligned data is added. For example, the bottom curve shows the performance of a corpus of 16,000 sentence pairs, which starts with an AER of just over 12% with no word-aligned training data and decreases to an AER of 7.5% when all 16,000 sentence pairs are word-aligned. This curve essentially levels off after 30% of the data is word-aligned. This shows that a small amount of word-aligned data is very useful: if we wanted to achieve a low AER, we would only have to label 4,800 examples with their word alignments rather than the entire corpus.

Figure 3: The effect on AER of varying the ratio of word-aligned to sentence-aligned data (one curve each for corpora of 500, 2,000, 8,000 and 16,000 sentence pairs; AER on the y-axis, ratio on the x-axis).

Figure 4: The effect on Bleu of varying the ratio of word-aligned to sentence-aligned data (one curve each for corpora of 500, 2,000, 8,000 and 16,000 sentence pairs; Bleu score on the y-axis, ratio on the x-axis).

Figure 4 shows how the Bleu score improves as more word-aligned data is added. This graph also reinforces the fact that a small amount of word-aligned data is useful: a corpus of 8,000 sentence pairs with only 800 of them labeled with word alignments achieves a higher Bleu score than a corpus of 16,000 sentence pairs with no word alignments.

5.5 Evaluation using a larger training corpus

We additionally tested whether incorporating word-level alignments into the estimation improved results for a larger corpus. We repeated our experiments using the Canadian Hansards French-English parallel corpus. Table 6 gives a summary of the improvements in AER and Bleu score for that corpus, when testing on a held-out set of 484 hand-aligned sentences.

On the whole, alignment error rates are higher and Bleu scores are considerably lower for the Hansards corpus. This is probably due to the differences between the corpora.
6 Related Work
Och and Ney (2003) is the most extensive analysis to date of how many different factors contribute towards improved alignment error rates, but the inclusion of word-alignments is not considered. Och and Ney do not give any direct analysis of how improved word alignment accuracy contributes toward better translation quality, as we do here. Mihalcea and Pedersen (2003) described a shared task where the goal was to achieve the best AER. A number of different methods were tried, but none of them used word-level alignments. Since the best performing system used an unmodified version of Giza++, we would expect that our modified version would show enhanced performance. Naturally this would need to be tested in future work. Melamed (1998) describes the process of manually creating a large set of word-level alignments of sentences in a parallel text. Nigam et al. (2000) described the use of a weight to balance the respective contributions of labeled and unlabeled data to a mixed likelihood function. Corduneanu (2002) provides a detailed discussion of the instability of maximum likelihood solutions estimated from a mixture of labeled and unlabeled data.
7 Discussion and Future Work
In this paper we show that, with an appropriate modification of EM, significant gains can be had by labeling word alignments in a bilingual corpus. Because of this, significantly less data is required to achieve a low alignment error rate or a high Bleu score. This holds even when using noisy word alignments such as our automatically created set. One should take our research into account when trying to efficiently create a statistical machine translation system for a language pair for which a parallel corpus is not available. Germann (2001) describes the cost of building a Tamil-English parallel corpus from scratch, and finds that the cost of using professional translations is prohibitively high. In our experience it is quicker to manually word-align translated sentence pairs than to translate a sentence, and word-level alignment can be done by someone who might not be fluent enough to produce translations. It might therefore be possible to achieve a higher performance at a fraction of the cost by hiring a non-professional to produce word alignments after a limited set of sentences has been translated. We plan to investigate whether it is feasible to use active learning to select which examples will be most useful when aligned at the word level. Section 5.4 shows that word-aligning a fraction of the sentence pairs in a training corpus, rather than the entire training corpus, can still yield most of the benefits described in this paper. One would hope that by selectively sampling which sentences are to be manually word-aligned we would achieve nearly the same performance as word-aligning the entire corpus.
Acknowledgements
The authors would like to thank Franz Och, Hermann Ney, and Richard Zens for providing the Verbmobil data, and Linear B for providing its phrase-based decoder.
References
Peter Brown, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer. 1993. The mathematics of machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311, June.
Adrian Corduneanu. 2002. Stable mixing of complete and incomplete information. Master's thesis, Massachusetts Institute of Technology, February.
A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1–38, Nov.
Ulrich Germann. 2001.
Building a statistical machine translation system from scratch: How much bang for the buck can we expect? In ACL 2001 Workshop on Data-Driven Machine Translation, Toulouse, France, July 7.
Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the HLT/NAACL.
I. Dan Melamed. 1998. Manual annotation of translational equivalence: The blinker project. Cognitive Science Technical Report 98/07, University of Pennsylvania.
Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. In Rada Mihalcea and Ted Pedersen, editors, HLT-NAACL 2003 Workshop: Building and Using Parallel Texts.
Kamal Nigam, Andrew K. McCallum, Sebastian Thrun, and Tom M. Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103–134.
Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51, March.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. IBM Research Report RC22176(W0109-022), IBM.
Philip Resnik and Noah Smith. 2003. The web as a parallel corpus. Computational Linguistics, 29(3):349–380, September.
Finding Ideographic Representations of Japanese Names Written in Latin Script via Language Identification and Corpus Validation
Yan Qu, Clairvoyance Corporation, 5001 Baum Boulevard, Suite 700, Pittsburgh, PA 15213-1854, USA, [email protected]
Gregory Grefenstette∗, LIC2M/LIST/CEA, 18, route du Panorama, BP 6, Fontenay-aux-Roses, 92265 France, [email protected]
Abstract
Multilingual applications frequently involve dealing with proper names, but names are often missing in bilingual lexicons. This problem is exacerbated for applications involving translation between Latin-scripted languages and Asian languages such as Chinese, Japanese and Korean (CJK) where simple string copying is not a solution. We present a novel approach for generating the ideographic representations of a CJK name written in a Latin script. The proposed approach involves first identifying the origin of the name, and then back-transliterating the name to all possible Chinese characters using language-specific mappings. To reduce the massive number of possibilities for computation, we apply a three-tier filtering process by filtering first through a set of attested bigrams, then through a set of attested terms, and lastly through the WWW for a final validation. We illustrate the approach with English-to-Japanese back-transliteration. Against test sets of Japanese given names and surnames, we have achieved average precisions of 73% and 90%, respectively.
1 Introduction
Multilingual processing in the real world often involves dealing with proper names. Translations of names, however, are often missing in bilingual resources. This absence adversely affects multilingual applications such as machine translation (MT) or cross language information retrieval (CLIR) for which names are generally good discriminating terms for high IR performance (Lin et al., 2003). For language pairs with different writing systems, such as Japanese and English, and for which simple string-copying of a name from one language to another is not a solution, researchers have studied techniques for transliteration, i.e., phonetic translation across languages. For example, European names are often transcribed in Japanese using the syllabic katakana alphabet. Knight and Graehl (1998) used a bilingual English-katakana dictionary, a katakana-to-English phoneme mapping, and the CMU Speech Pronunciation Dictionary to create a series of weighted finite-state transducers between English words and katakana that produce and rank transliteration candidates. Using similar methods, Qu et al. (2003) showed that integrating automatically discovered transliterations of unknown katakana sequences, i.e. those not included in a large Japanese-English dictionary such as EDICT1, improves CLIR results. Transliteration of names between alphabetic and syllabic scripts has also been studied for languages such as Japanese/English (Fujii & Ishikawa, 2001), English/Korean (Jeong et al., 1999), and English/Arabic (Al-Onaizan and Knight, 2002). In work closest to ours, Meng et al. (2001), working in cross-language retrieval of phonetically transcribed spoken text, studied how to transliterate names into Chinese phonemes (though not into Chinese characters). Given a list of identified names, Meng et al. first separated the names into Chinese names and English names. Romanized Chinese names were detected by a left-to-right longest match segmentation method, using the Wade-Giles2 and the pinyin syllable inventories in sequence.
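Such a left-to-right longest-match segmenter can be sketched as follows. This is our own reading of the cited approach rather than code from Meng et al., and the tiny two-syllable inventory in the usage comment is purely illustrative, standing in for the actual Wade-Giles and pinyin tables.

    def longest_match_segment(name, syllables):
        """Greedy left-to-right segmentation of a romanized name into
        syllables drawn from a given inventory. Returns None if some
        prefix cannot be matched, i.e. the name is not fully segmentable."""
        segments, i = [], 0
        while i < len(name):
            match = None
            # try the longest remaining candidate first
            for j in range(len(name), i, -1):
                if name[i:j] in syllables:
                    match = name[i:j]
                    break
            if match is None:
                return None
            segments.append(match)
            i += len(match)
        return segments

    # e.g. longest_match_segment("meihua", {"mei", "hua"}) -> ["mei", "hua"]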
If a name could be segmented successfully, then the name was considered a Chinese name. As their spoken document collection had already been transcribed into pinyin, retrieval was based on pinyin-to-pinyin matching; pinyin to Chinese character conversion was not addressed. Names other than Chinese names were considered as foreign names and were converted into Chinese phonemes using a language model derived from a list of English-Chinese equivalents, both sides of which were represented in phonetic equivalents. ∗ The work was done by the author while at Clairvoyance Corporation. 1 http://www.csse.monash.edu.au/~jwb/edict.html 2 http://lcweb.loc.gov/catdir/pinyin/romcover.html The above English-to-Japanese or English-toChinese transliteration techniques, however, only solve a part of the name translation problem. In multilingual applications such as CLIR and Machine Translation, all types of names must be translated. Techniques for name translation from Latin scripts into CJK scripts often depend on the origin of the name. Some names are not transliterated into a nearly deterministic syllabic script but into ideograms that can be associated with a variety of pronunciations. For example, Chinese, Korean and Japanese names are usually written using Chinese characters (or kanji) in Japanese, while European names are transcribed using katakana characters, with each character mostly representing one syllable. In this paper, we describe a method for converting a Japanese name written with a Latin alphabet (or romanji), back into Japanese kanji3. Transcribing into Japanese kanji is harder than transliteration of a foreign name into syllabic katakana, since one phoneme can correspond to hundreds of possible kanji characters. For example, the sound “kou” can be mapped to 670 kanji characters. Our method for back-transliterating Japanese names from English into Japanese consists of the following steps: (1) language identification of the origins of names in order to know what languagespecific transliteration approaches to use, (2) generation of possible transliterations using sound and kanji mappings from the Unihan database (to be described in section 3.1) and then transliteration validation through a three-tier filtering process by filtering first through a set of attested bigrams, then through a set of attested terms, and lastly through the Web. The rest of the paper is organized as follows: in section 2, we describe and evaluate our name origin identifier; section 3 presents in detail the steps for back transliterating Japanese names written in Latin script into Japanese kanji representations; section 4 presents the evaluation setup and section 5 discusses the evaluation results; we conclude the paper in section 6. 2 Language Identification of Names Given a name in English for which we do not have a translation in a bilingual English-Japanese dictionary, we first have to decide whether the name is of Japanese, Chinese, Korean or some European origin. In order to determine the origin of names, we created a language identifier for names, using a trigram language identification 3 We have applied the same technique to Chinese and Korean names, though the details are not presented here. method (Cavner and Trenkle, 1994). During training, for Chinese names, we used a list of 11,416 Chinese names together with their frequency information4. For Japanese names, we used the list of 83,295 Japanese names found in ENAMDICT5. For English names, we used the list of 88,000 names found at the US. Census site6. 
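To make the procedure concrete, a minimal sketch of such a trigram-based origin identifier is given below. It is our own illustration, not the authors' code: the name lists cited above serve as training data, and the counting and normalization scheme it uses is the one described in the following paragraph.

    from collections import Counter

    def train_profile(names):
        """Build a normalized trigram profile from a list of names."""
        counts = Counter()
        for name in names:
            name = name.lower()
            for i in range(len(name) - 2):
                counts[name[i:i + 3]] += 1
        total = sum(counts.values()) or 1
        return {tri: c / total for tri, c in counts.items()}

    def identify(name, profiles):
        """Score a name against each language profile by summing normalized
        trigram counts; return the language with the maximum sum."""
        name = name.lower()
        trigrams = [name[i:i + 3] for i in range(len(name) - 2)]
        scores = {lang: sum(profile.get(t, 0.0) for t in trigrams)
                  for lang, profile in profiles.items()}
        return max(scores, key=scores.get)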
(We did not obtain any training data for Korean names, so origin identification for Korean names is not available.) Each list of names7 was converted into trigrams; the trigrams for each list were then counted and normalized by dividing the count of the trigram by the number of all the trigrams. To identify a name as Chinese, Japanese or English (Other, actually), we divide the name into trigrams, and sum up the normalized trigram counts from each language. A name is identified with the language which provides the maximum sum of normalized trigrams in the word. Table 1 presents the results of this simple trigram-based language identifier over the list of names used for training the trigrams. The following are examples of identification errors: Japanese names recognized as English, e.g., aa, abason, abire, aebakouson; Japanese names recognized as Chinese, e.g., abeseimei, abei, adan, aden, afun, agei, agoin. These errors show that the language identifier can be improved, possibly by taking into account language-specific features, such as the number of syllables in a name. For origin detection of Japanese names, the current method works well enough for a first pass with an accuracy of 92%. Input names As JAP As CHI As ENG Accuracy Japanese 76816 5265 1212 92% Chinese 1147 9947 321 87% English 12115 14893 61701 70% Table 1: Accuracy of language origin identification for names in the training set (JAP, CHI, and ENG stand for Japanese, Chinese, and English, respectively) 4 http://www.geocities.com/hao510/namelist/ 5 http://www.csse.monash.edu.au/~jwb/ enamdict_doc.html 6 http://www.census.gov/genealogy/names 7 Some names appear in multiple name lists: 452 of the names are found both in the Japanese name list and in the Chinese name list; 1529 names appear in the Japanese name list and the US Census name list; and 379 names are found both in the Chinese name list and the US Census list. 3 English-Japanese Back-Transliteration Once the origin of a name in Latin scripts is identified, we apply language-specific rules for back-transliteration. For non-Asian names, we use a katakana transliteration method as described in (Qu et al., 2003). For Japanese and Chinese names, we use the method described below. For example, “koizumi” is identified as a name of Japanese origin and thus is back-transliterated to Japanese using Japanese specific phonetic mappings between romanji and kanji characters. 3.1 Romanji-Kanji Mapping To obtain the mappings between kanji characters and their romanji representations, we used the Unihan database, prepared by the Unicode Consortium 8 . The Unihan database, which currently contains 54,728 kanji characters found in Chinese, Japanese, and Korean, provides rich information about these kanji characters, such as the definition of the character, its values in different encoding systems, and the pronunciation(s) of the character in Chinese (listed under the feature kMandarin in the Unihan database), in Japanese (both the On reading and the Kun reading 9 : kJapaneseKun and kJapaneseOn), and in Korean (kKorean). For example, for the kanji character , coded with Unicode hexadecimal character 91D1, the Unihan database lists 49 features; we list below its pronunciations in Japanese, Chinese, and Korean: U+91D1 kJapaneseKun KANE U+91D1 kJapaneseOn KIN KON U+91D1 kKorean KIM KUM U+91D1 kMandarin JIN1 JIN4 In the example above, is represented in its Unicode scalar value in the first column, with a feature name in the second column and the values of the feature in the third column. 
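Records in this format are straightforward to turn into the reading-to-character table used in the following steps. The sketch below assumes a tab-separated text dump of the Unihan reading fields, one record per line as in the example above; it is meant only to illustrate the construction of the mapping and is not the authors' code.

    def build_romanji_to_kanji(unihan_lines):
        """Collect kJapaneseKun and kJapaneseOn readings into a mapping from
        a lower-cased romanji reading to the set of code points having it."""
        mapping = {}
        for line in unihan_lines:
            fields = line.strip().split("\t")
            if len(fields) < 3:
                continue
            codepoint, feature, values = fields[0], fields[1], fields[2]
            if feature in ("kJapaneseKun", "kJapaneseOn"):
                for reading in values.split():
                    mapping.setdefault(reading.lower(), set()).add(codepoint)
        return mapping

    # e.g. the record "U+91D1<TAB>kJapaneseKun<TAB>KANE" contributes
    # mapping["kane"] == {"U+91D1"}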
The Japanese Kun reading of 金 (U+91D1) is KANE, while its Japanese On readings are KIN and KON. From the Unicode database, we construct mappings between Japanese readings of a character in romanji and the kanji characters in its Unicode representation. As kanji characters in Japanese names can have either the Kun reading or the On reading, we consider both readings as candidates for each kanji character. The mapping table has a total of 5,525 entries. A typical mapping is as follows:
kou U+4EC0 U+5341 U+554F U+5A09 U+5B58 U+7C50 U+7C58 ......
in which the first field specifies a pronunciation in romanji, while the rest of the fields specify the possible kanji characters into which the pronunciation can be mapped. There is a wide variation in the distribution of these mappings. For example, kou can be the pronunciation of 670 kanji characters, while the sound katakumi can be mapped to only one kanji character.
8 http://www.unicode.org/charts/unihan.html
9 Historically, when kanji characters were introduced into the Japanese writing system, two methods of transcription were used. One is called "on-yomi" (i.e., On reading), where the Chinese sounds of the characters were adopted for Japanese words. The other method is called "kun-yomi" (i.e., Kun reading), where a kanji character preserved its meaning in Chinese, but was pronounced using the Japanese sounds.
3.2 Romanji Name Back-Transliteration
In theory, once we have the mappings between romanji characters and the kanji characters, we can first segment a Japanese name written in romanji and then apply the mappings to back-transliterate the romanji characters into all possible kanji representations. However, for some segmentations, the number of possible kanji combinations can be so large as to make the problem computationally intractable. For example, consider the short Japanese name "koizumi." This name can be segmented into the romanji characters "ko-i-zu-mi" using the Romanji-Kanji mapping table described in section 3.1, but this segmentation then has 182*230*73*49 (over 149 million) possible kanji combinations. Here, 182, 230, 73, and 49 represent the numbers of possible kanji characters for the romanji characters "ko", "i", "zu", and "mi", respectively. In this study, we present an efficient procedure for back-transliterating romanji names to kanji characters that avoids this complexity. The procedure consists of the following steps: (1) romanji name segmentation, (2) kanji name generation, (3) kanji name filtering via a monolingual Japanese corpus, and (4) kanji-romanji combination filtering via the WWW. Our procedure relies on filtering using corpus statistics to reduce the hypothesis space in the last three steps. We illustrate the steps below using the romanji name "koizumi" (小泉).
3.2.1 Romanji Name Segmentation
With the romanji characters from the Romanji-Kanji mapping table, we first segment a name recognized as Japanese into sequences of romanji characters. Note that a greedy segmentation method, such as the left-to-right longest match method, often results in segmentation errors. For example, for "koizumi", the longest match segmentation method produces the segmentation "koi-zu-mi", while the correct segmentation is "ko-izumi". Motivated by this observation, we generate all the possible segmentations for a given name.
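A simple recursive enumeration over the reading inventory of the mapping table is sufficient for strings of this length. The sketch below is our own illustration: it takes the set of romanji readings from the Romanji-Kanji mapping table and returns every way of splitting a name into known readings.

    def all_segmentations(name, readings):
        """Return every segmentation of `name` into strings from `readings`."""
        if not name:
            return [[]]
        results = []
        for i in range(1, len(name) + 1):
            prefix = name[:i]
            if prefix in readings:
                for rest in all_segmentations(name[i:], readings):
                    results.append([prefix] + rest)
        return results

    # For "koizumi", with an inventory containing "ko", "koi", "i", "zu",
    # "mi", and "izumi", this yields exactly the three segmentations
    # listed below.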
The possible segmentations for “koizumi” are: ko-izumi koi-zu-mi ko-i-zu-mi 3.2.2 Kanji Name Segmentation Using the same Romanji-Kanji mapping table, we obtain the possible kanji combinations for a segmentation of a romanji name produced by the previous step. For the segmentation “ko-izumi”, we have a total of 546 (182*3) combinations (we use the Unicode scale value to represent the kanji characters and use spaces to separate them): U+5C0F U+6CC9 U+53E4 U+6CC9 ...... We do not produce all possible combinations. As we have discussed earlier, such a generation method can produce so many combinations as to make computation infeasible for longer segmentations. To control this explosion, we eliminate unattested combinations using a bigram model of the possible kanji sequences in Japanese. From the Japanese evaluation corpus of the NTCIR-4 CLIR track 10 , we collected bigram statistics by first using a statistical part-of-speech tagger of Japanese (Qu et al., 2004). All valid Japanese terms and their frequencies from the tagger output were extracted. From this term list, we generated kanji bigram statistics (as well as an attested term list used below in step 3). With this bigram-based model, our hypothesis space is significantly reduced. For example, with the segmentation “ko-i-zu-mi”, even though “ko-i” can have 182*230 possible combinations, we only retain the 42 kanji combinations that are attested in the corpus. Continuing with the romanji segments “i-zu”, we generate the possible kanji combinations for “i-zu” that can continue one of the 42 candidates for “koi”. This results in only 6 candidates for the segments “ko-i-zu”. Lastly, we consider the romanji segments “zumi”, and retain with only 4 candidates for the segmentation “ko-i-zu-mi” whose bigram sequences are attested in our language model: U+5C0F U+53F0 U+982D U+8EAB U+5B50 U+610F U+56F3 U+5B50 U+5C0F U+610F U+56F3 U+5B50 U+6545 U+610F U+56F3 U+5B50 10 http://research.nii.ac.jp/ntcir-ws4/clir/index.html Thus, for the segmentation “ko-i-zu-mi”, the bigram-based language model effectively reduces the hypothesis space from 182*230*73*49 possible kanji combinations to 4 candidates. For the other alternative segmentation “koi-zu-mi”, no candidates can be generated by the language model. 3.2.3 Corpus-based Kanji name Filtering In this step, we use a monolingual Japanese corpus to validate whether the kanji name candidates generated by step (2) are attested in the corpus. Here, we simply use Japanese term list extracted from the segmented NTCIR-4 corpus created for the previous step to filter out unattested kanji combinations. For the segmentation “koizumi”, the following kanji combinations are attested in the corpus (preceded by their frequency in the corpus): 4167 koizumi 16 koizumi 4 koizumi None of the four kanji candidates from the alternate segmentation “ko-i-zu-mi” is attested in the corpus. While step 2 filters out candidates using bigram sequences, step 3 uses corpus terms in their entirety to validate candidates. 3.2.4 Romanji-Kanji Combination Validation Here, we take the corpus-validated kanji candidates (but for which we are not yet sure if they correspond to the same reading as the original Japanese name written in romanji) and use the Web to validate the pairings of kanji-romanji combinations (e.g., AND koizumi). This is motivated by two observations. First, in contrast to monolingual corpus, Web pages are often mixedlingual. It is often possible to find a word and its translation on the same Web pages. 
Second, person names and specialized terminology are among the most frequent mixed-lingual items. Thus, we would expect that the appearance of both representations in close proximity on the same pages gives us more confidence in the kanji representations. For example, with the Google search engine, all three kanji-romanji combinations for “koizumi” are attested: 23,600 pages - koizumi 302 pages - koizumi 1 page - koizumi Among the three, the koizumi combination is the most common one, being the name of the current Japanese Prime Minister. 4 Evaluation In this section, we describe the gold standards and evaluation measures for evaluating the effectiveness of the above method for backtransliterating Japanese names. 4.1 Gold Standards Based on two publicly accessible name lists and a Japanese-to-English name lexicon, we have constructed two Gold Standards. The Japanese-toEnglish name lexicon is ENAMDICT 11 , which contains more than 210,000 Japanese-English name translation pairs. Gold Standard – Given Names (GS-GN): to construct a gold standard for Japanese given names, we obtained 7,151 baby names in romanji from http://www.kabalarians.com/. Of these 7,151 names, 5,115 names have kanji translations in the ENAMDICT12. We took the 5115 romanji names and their kanji translations in the ENAMDICT as the gold standard for given names. Gold Standard – Surnames (GS-SN): to construct a gold standard for Japanese surnames, we downloaded 972 surnames in romanji from http://business.baylor.edu/Phil_VanAuken/Japanes eSurnames.html. Of these names, 811 names have kanji translations in the ENAMDICT. We took these 811 romanji surnames and their kanji translations in the ENAMDICT as the gold standard for Japanese surnames. 4.2 Evaluation Measures Each name in romanji in the gold standards has at least one kanji representation obtained from the ENAMDICT. For each name, precision, recall, and F measures are calculated as follows: • Precision: number of correct kanji output / total number of kanji output • Recall: number of correct kanji output / total number of kanji names in gold standard • F-measure: 2*Precision*Recall / (Precision + Recall) Average Precision, Average Recall, and Average F-measure are computed over all the names in the test sets. 5 Evaluation Results and Analysis 5.1 Effectiveness of Corpus Validation Table 2 and Table 3 present the precision, recall, and F statistics for the gold standards GS-GN and 11 http://mirrors.nihongo.org/monash/ enamdict_doc.html 12 The fact that above 2000 of these names were missing from ENAMDICT is a further justification for a name translation method as described in this paper. GS-SN, respectively. For given names, corpus validation produces the best average precision of 0.45, while the best average recall is a low 0.27. With the additional step of Web validation of the romanji-kanji combinations, the average precision increased by 62.2% to 0.73, while the best average recall improved by 7.4% to 0.29. We observe a similar trend for surnames. The results demonstrate that, through a large, mixed-lingual corpus such as the Web, we can improve both precision and recall for automatically transliterating romanji names back to kanji. Avg Prec Avg Recall F (1) Corpus 0.45 0.27 0.33 (2) Web (over (1)) 0.73 (+62.2%) 0.29 (+7.4%) 0.38 (+15.2%) Table 2: The best Avg Precision, Avg Recall, and Avg F statistics achieved through corpus validation and Web validation for GS-GN. 
Avg Prec Avg Recall F (1) Corpus 0.69 0.44 0.51 (2) Web (over (1)) 0.90 (+23.3%) 0.45 (+2.3%) 0.56 (+9.8%) Table 3: The best Avg Precision, Avg Recall, and Avg F statistics achieved through corpus validation and Web validation for GS-SN. We also observe that the performance statistics for the surnames are significantly higher than those of the given names, which might reflect the different degrees of flexibility in using surnames and given names in Japanese. We would expect that the surnames form a somewhat closed set, while the given names belong to a more open set. This may account for the higher recall for surnames. 5.2 Effectiveness of Corpus Validation If the big, mixed-lingual Web can deliver better validation than the limited-sized monolingual corpus, why not use it at every stage of filtering? Technically, we could use the Web as the ultimate corpus for validation at any stage when a corpus is required. In practice, however, each Web access involves additional computation time for file IO, network connections, etc. For example, accessing Google took about 2 seconds per name13; gathering 13 We inserted a 1 second sleep between calls to the search engine so as not to overload the engine. statistics for about 30,000 kanji-romanji combinations14 took us around 15 hours. In the procedure described in section 3.2, we have aimed to reduce computation complexity and time at several stages. In step 2, we use bigrambased language model from a corpus to reduce the hypothesis space. In step 3, we use corpus filtering to obtain a fast validation of the candidates, before passing the output to the Web validation in step 4. Table 4 illustrates the savings achieved through these steps. GS-GN GS-SN All possible 2.0e+017 296,761,622,763 2gram model 21,306,322 (-99.9%) 2,486,598 (-99.9%) Corpus validate 30,457 (-99.9%) 3,298 (-99.9%) Web validation 20,787 (-31.7%) 2,769 (-16.0%) Table 4: The numbers of output candidates of each step to be passed to the next step. The percentages specify the amount of reduction in hypothesis space. 5.3 Thresholding Effects We have examined whether we should discard the validated candidates with low frequencies either from the corpus or the Web. The cutoff points examined include initial low frequency range 1 to 10 and then from 10 up to 400 in with increments of 5. Figure 1 and Figure 2 illustrate that, to achieve best overall performance, it is beneficial to discard candidates with very low frequencies, e.g., frequencies below 5. Even though we observe a stabling trend after reaching certain threshold points for these validation methods, it is surprising to see that, for the corpus validation method with GS-GN, with stricter thresholds, average precisions are actually decreasing. We are currently investigating this exception. 5.4 Error Analysis Based on a preliminary error analysis, we have identified three areas for improvements. First, our current method does not account for certain phonological transformations when the On/Kun readings are concatenated together. Consider the name “matsuda” ( ). The segmentation step correctly segmented the romanji to “matsu-da”. However, in the Unihan database, 14 At this rate, checking the 21 million combinations remaining after filtering with bigrams using the Web (without the corpus filtering step) would take more than a year. the Kun reading of is “ta”, while its On reading is “den”. 
Therefore, using the mappings from the Unihan database, we failed to obtain the mapping between the pronunciation “da” and the kanji , which resulted in both low precision and recall for “matsuda”. This suggests for introducing language-specific phonological transformations or alternatively fuzzy matching to deal with the mismatch problem. Avg Precision - GS_GN 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 1 6 15 50 100 150 200 250 300 350 400 Threshold for frequency cutoff Avg Precision corpus+web corpus Figure 1: Average precisions achieved via both corpus and corpus+Web validation with different frequency-based cutoff thresholds for GS-GN Avg Precision - GS_SN 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1 6 15 50 100 150 200 250 300 350 400 Threshold for frequency cutoff Avg Precision corpus+web corpus Figure 2: Average precisions achieved via both corpus and corpus+Web validation with different frequency-based cutoff thresholds for GS-SN Second, ENAMDICT contains mappings between kanji and romanji that are not available from the Unihan database. For example, for the name “hiroshi” in romanji, based on the mappings from the Unihan database, we can obtain two possible segmentations: “hiro-shi” and “hi-ro-shi”. Our method produces two- and three-kanji character sequences that correspond to these romanji characters. For example, corpus validation produces the following kanji candidates for “hiroshi”: 2 hiroshi 10 hiroshi 5 hiroshi 1 hiroshi 2 hiroshi 11 hiroshi 33
hiroshi 311
hiroshi ENAMDCIT, however, in addition to the 2- and 3-character kanji names, also contains 1-character kanji names, whose mappings are not found in the Unihan database, e.g., Hiroshi Hiroshi Hiroshi Hiroshi Hiroshi Hiroshi This suggests the limitation of relying solely on the Unihan database for building mappings between romanji characters and kanji characters. Other mapping resources, such as ENAMDCIT, should be considered in our future work. Third, because the statistical part-of-speech tagger we used for Japanese term identification does not have a lexicon of all possible names in Japanese, some unknown names, which are incorrectly separated into individual kanji characters, are therefore not available for correct corpus-based validation. We are currently exploring methods using overlapping character bigrams, instead of the tagger-produced terms, as the basis for corpus-based validation and filtering. 6 Conclusions In this study, we have examined a solution to a previously little treated problem of transliterating CJK names written in Latin scripts back into their ideographic representations. The solution involves first identifying the origins of the CJK names and then back-transliterating the names to their respective ideographic representations with language-specific sound-to-character mappings. We have demonstrated that a simple trigram-based language identifier can serve adequately for identifying names of Japanese origin. During back-transliteration, the possibilities can be massive due to the large number of mappings between a Japanese sound and its kanji representations. To reduce the complexity, we apply a three-tier filtering process which eliminates most incorrect candidates, while still achieving an F measure of 0.38 on a test set of given names, and an F measure of 0.56 on a test of surnames. The three filtering steps involve using a bigram model derived from a large segmented Japanese corpus, then using a list of attested corpus terms from the same corpus, and lastly using the whole Web as a corpus. The Web is used to validate the backtransliterations using statistics of pages containing both the candidate kanji translation as well as the original romanji name. Based on the results of this study, our future work will involve testing the effectiveness of the current method in real CLIR applications, applying the method to other types of proper names and other language pairs, and exploring new methods for improving precision and recall for romanji name back-transliteration. In cross-language applications such as English to Japanese retrieval, dealing with a romaji name that is missing in the bilingual lexicon should involve (1) identifying the origin of the name for selecting the appropriate language-specific mappings, and (2) automatically generating the back-transliterations of the name in the right orthographic representations (e.g., Katakana representations for foreign Latin-origin names or kanji representations for native Japanese names). To further improve precision and recall, one promising technique is fuzzy matching (Meng et al, 2001) for dealing with phonological transformations in name generation that are not considered in our current approach (e.g., “matsuda” vs “matsuta”). 
Lastly, we will explore whether the proposed romanji-to-kanji back-transliteration approach applies to other types of names such as place names and study the effectiveness of the approach for back-transliterating romanji names of Chinese origin and Korean origin to their respective kanji representations.
References
Yaser Al-Onaizan and Kevin Knight. 2002. Machine Transliteration of Names in Arabic Text. In Proc. of the ACL Workshop on Computational Approaches to Semitic Languages.
William B. Cavnar and John M. Trenkle. 1994. N-gram-based text categorization. In 3rd Annual Symposium on Document Analysis and Information Retrieval, 161–175.
Atsushi Fujii and Tetsuya Ishikawa. 2001. Japanese/English Cross-Language Information Retrieval: Exploration of Query Translation and Transliteration. Computers and the Humanities, 35(4): 389–420.
K. S. Jeong, Sung-Hyon Myaeng, J. S. Lee, and K. S. Choi. 1999. Automatic identification and back-transliteration of foreign words for information retrieval. Information Processing and Management, 35(4): 523–540.
Kevin Knight and Jonathan Graehl. 1998. Machine Transliteration. Computational Linguistics, 24(4): 599–612.
Wen-Cheng Lin, Changhua Yang and Hsin-Hsi Chen. 2003. Foreign Name Backward Transliteration in Chinese-English Cross-Language Image Retrieval. In Proceedings of the CLEF 2003 Workshop, Trondheim, Norway.
Helen Meng, Wai-Kit Lo, Berlin Chen, and Karen Tang. 2001. Generating Phonetic Cognates to Handle Named Entities in English-Chinese Cross-Language Spoken Document Retrieval. In Proc. of the Automatic Speech Recognition and Understanding Workshop (ASRU 2001), Trento, Italy, Dec.
Yan Qu, Gregory Grefenstette, and David A. Evans. 2003. Automatic transliteration for Japanese-to-English text retrieval. In Proceedings of SIGIR 2003: 353–360.
Yan Qu, Gregory Grefenstette, David A. Hull, David A. Evans, Toshiya Ueda, Tatsuo Kato, Daisuke Noda, Motoko Ishikawa, Setsuko Nara, and Kousaku Arita. 2004. Justsystem-Clairvoyance CLIR Experiments at NTCIR-4 Workshop. In Proceedings of the NTCIR-4 Workshop.
Extracting Regulatory Gene Expression Networks from PubMed
Jasmin Šarić, EML Research gGmbH, Heidelberg, Germany, [email protected]
Lars J. Jensen, EMBL, Heidelberg, Germany, [email protected]
Rossitza Ouzounova, EMBL, Heidelberg, Germany, [email protected]
Isabel Rojas, EML Research gGmbH, Heidelberg, Germany, [email protected]
Peer Bork, EMBL, Heidelberg, Germany, [email protected]
Abstract
We present an approach using syntacto-semantic rules for the extraction of relational information from biomedical abstracts. The results show that by overcoming the hurdle of technical terminology, high precision results can be achieved. From abstracts related to baker's yeast, we manage to extract a regulatory network comprised of 441 pairwise relations from 58,664 abstracts with an accuracy of 83–90%. To achieve this, we made use of a resource of gene/protein names considerably larger than those used in most other biology related information extraction approaches. This list of names was included in the lexicon of our retrained part-of-speech tagger for use on molecular biology abstracts. For the domain in question an accuracy of 93.6–97.7% was attained on POS-tags. The method is easily adapted to other organisms than yeast, allowing us to extract many more biologically relevant relations.
1 Introduction and related work
A massive amount of information is buried in scientific publications (more than 500,000 publications per year). Therefore, the need for information extraction (IE) and text mining in the life sciences is drastically increasing. Most of the ongoing work is dedicated to dealing with PubMed1 abstracts. The technical terminology of biomedicine presents the main challenge of applying IE to such a corpus (Hobbs, 2003). The goal of our work is to extract from biological abstracts which proteins are responsible for regulating the expression (i.e. transcription or translation) of which genes. This means extracting a specific type of pairwise relation between biological entities. This differs from the BioCreAtIvE competition tasks2 that aimed at classifying entities (gene products) into classes based on Gene Ontology (Ashburner et al., 2000). A task closely related to ours, which has received some attention over the past five years, is the extraction of protein–protein interactions from abstracts. This problem has mainly been addressed by statistical "bag of words" approaches (Marcotte et al., 2001), with the notable exception of Blaschke et al. (1999). All of these approaches differ significantly from ours by only attempting to extract the type of interaction and the participating proteins, disregarding agens and patiens. Most NLP based studies tend to have focused on extraction of events involving one particular verb, e.g. bind (Thomas et al., 2000) or inhibit (Pustejovsky et al., 2002). From a biological point of view, there are two problems with such approaches: 1) the meaning of the extracted events will depend strongly on the selectional restrictions and 2) the same meaning can be expressed using a number of different verbs.
1 PubMed is a bibliographic database covering life sciences with a focus on biomedicine, comprising around 12 × 10⁶ articles, roughly half of them including an abstract (http://www.ncbi.nlm.nih.gov/PubMed/).
2 Critical Assessment of Information Extraction systems in Biology, http://www.mitre.org/public/biocreative/
In contrast and alike (Friedman et al., 2001), we instead set out to handle only one specific biological problem and, in return, extract the related events with their whole range of syntactic variations. The variety in the biological terminology used to describe regulation of gene expression presents a major hurdle to an IE approach; in many cases the information is buried to such an extent that even a human reader is unable to extract it unless having a scientific background in biology. In this paper we will show that by overcoming the terminological barrier, high precision extraction of entity relations can be achieved within the field of molecular biology. 2 The biological task and our approach To extract relations, one should first recognize the named entities involved. This is particularly difficult in molecular biology where many forms of variation frequently occur. Synonymy is very common due to lack of standardization of gene names; BYP1, CIF1, FDP1, GGS1, GLC6, TPS1, TSS1, and YBR126C are all synonyms for the same gene/protein. Additionally, these names are subject to orthographic variation originating from differences in capitalization and hyphenation as well as syntactic variation of multiword terms (e.g. riboflavin synthetase beta chain = beta chain of riboflavin synthetase). Moreover, many names are homonyms since a gene and its gene product are usually named identically, causing cross-over of terms between semantic classes. Finally, paragrammatical variations are more frequent in life science publications than in common English due to the large number of publications by non-native speakers (Netzel et al., 2003). Extracting that a protein regulates the expression of a gene is a challenging problem as this fact can be expressed in a variety of ways—possibly mentioning neither the biological process (expression) nor any of the two biological entities (genes and proteins). Figure 1 shows a simplified ontology providing an overview of the biological entities involved in gene expression, their ontological relationships, and how they can interact with Gene Transcript Gene product Stable RNA Promoter Binding site Upstream activating sequence Upstream repressing sequence mRNA Protein Transcription regulator Transcription activator Transcription repressor is a part of produces binds to Figure 1: A simplified ontology for transcription regulation. The background color used for each term signifies its semantic role in relations: regulator (white), target (black), or either (gray). one another. An ontology is a great help when writing extraction rules, as it immediately suggests a large number of relevant relations to be extracted. Examples include “promoter contains upstream activating sequence” and “transcription regulator binds to promoter”, both of which follow from indirect relationships via binding site. It is often not known whether the regulation takes place at the level of gene transcription or translation or by an indirect mechanism. For this reason, and for simplicity, we decided against trying to extract how the regulation of expression takes place. We do, however, strictly require that the extracted relations provide information about a protein (the regulator, R) regulating the expression of a gene (the target, X), for which reason three requirements must be fulfilled: 1. It must be ascertained that the sentence mentions gene expression. “The protein R activates X” fails this requirement, as R might instead activate X post-translationally. 
Thus, whether the event should be extracted or not depends on the type of the accusative object X (e.g. gene or gene product). Without a head noun specifying the type, X remains ambiguous, leaving the whole relation underspecified, for which reason it should not be extracted. It should be noted that two thirds of the gene/protein names mentioned in our corpus are ambiguous for this reason. 2. The identity of the regulator (R) must be known. “The X promoter activates X expression” fails this requirement, as it is not known which transcription factor activates the expression when binding to the X promoter. Linguistically this implies that noun chunks of certain semantic types should be disallowed as agens. 3. The identity of the target (X) must be known. “The transcription factor R activates R dependent expression” fails this requirement, as it is not know which gene’s expression is dependent on R. The semantic types allowed for patiens should thus also be restricted. The two last requirements are important to avoid extraction from non-informative sentences that— despite them containing no information—occur quite frequently in scientific abstracts. The coloring of the entities in Figure 1 helps discern which relations are meaningful and which are not. The ability to genetically modify an organism in experiments brings about further complication to IE: biological texts often mention what takes place when an organism is artificially modified in a particular way. In some cases such modification can reverse part of the meaning of the verb: from the sentence “Deletion of R increased X expression” one can conclude that R represses expression of X. The key point is to identify that “deletion of R” implies that the sentence describes an experiment in which R has been removed, but that R would normally be present and that the biological impact of R is thus the opposite of what the verb increased alone would suggest. In other cases the verb will lose part of its meaning: “Mutation of R increased X expression” implies that R regulates expression X, but we cannot infer whether R is an activator or a repressor. In this case mutation is dealt in a manner similar to deletion in the previous example. Finally, there are those relations that should be completely avoided as they exist only because they have been artificially introduced through genetic engineering. In our extraction method we address all three cases. We have opted for a rule based approach (implemented as finite state automata) to extract the relations for two reasons. The first is, that a rule based approach allows us to directly ensure that the three requirements stated above are fulfilled for the extracted relations. This is desired to attain high accuracy on the extracted relations, which is what matters to the biologist. Hence, we focus in our evaluation on the semantic correctness of our method rather than on its grammatical correctness. As long as grammatical errors do not result in semantic errors, we do not consider it an error. Conversely, even a grammatically correct extraction is considered an error if it is semantically wrong. Our second reason for choosing a rule based approach is that our approach is theory-driven and highly interdisciplinary, involving computational linguists, bioinformaticians, and biologists. The rule based approach allows us to benefit more from the interplay of scientists with different backgrounds, as known biological constraints can be explicitly incorporated in the extraction rules. 
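To give a flavour of how such constraints can be encoded, the fragment below shows a single, deliberately simplified extraction pattern over semantically pre-tagged chunks; it is our own illustration and not the CASS grammar cascade described in the next section. The pattern only fires when the agens is a protein-type chunk, the sentence explicitly mentions expression, and the patiens is a named gene, mirroring the three requirements above.

    import re

    # The token stream is assumed to be pre-chunked, e.g.
    #   "<protein:GCN4> activates the expression of <gene:ATR1>"
    # where the semantic class and identifier come from earlier tagging stages.
    PATTERN = re.compile(
        r"<protein:(?P<regulator>\w+)>\s+"          # agens must be a protein chunk
        r"(activates|induces|represses|regulates)\s+"
        r"(the\s+)?expression\s+of\s+"              # the sentence must mention expression
        r"<gene:(?P<target>\w+)>"                   # patiens must be a named gene
    )

    def extract(sentence):
        m = PATTERN.search(sentence)
        if m:
            return m.group("regulator"), m.group("target")
        return None

    # extract("<protein:GCN4> activates the expression of <gene:ATR1>")
    # -> ("GCN4", "ATR1")

Real sentences of course require the full chunking cascade and the treatment of genetic modifications discussed above; the point here is only that the biological constraints can be stated directly in the rule.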
3 Methods Table 1 shows an overview of the architecture of our IE system. It is organized in levels such that the output of one level is the input of the next one. The following sections describe each level in detail. 3.1 The corpus The PubMed resource was downloaded on January 19, 2004. 58,664 abstracts related to the yeast Saccharomyces cerevisiae were extracted by looking for occurrences of the terms “Saccharomyces cerevisiae”, “S. cerevisiae”, “Baker’s yeast”, “Brewer’s yeast”, and “Budding yeast” in the title/abstract or as head of a MeSH term3. These abstracts were filtered to obtain the 15,777 that mention at least two names (see section 3.4) and subsequently divided into a training and an evaluation set of 9137 and 6640 abstracts respectively. 3Medical Subject Headings (MeSH) is a controlled vocabulary for manually annoting PubMed articles. Level Component L0 Tokenization and multiwords Word and sentence boundaries are detected and multiwords are recognized and recomposed to one token. L1 POS-Tagging A part-of-speech tag is assigned to each word (or multiword) of the tokenized corpus. L2 Semantic labeling A manually built taxonomy is used to assign semantic labels to tokens. The taxonomy consists of gene names, cue words relevant for entity recognition, and classes of verbs for relation extraction. L3 Named entity chunking Based on the POS-tags and the semantic labels, a cascaded chunk grammar recognizes noun chunks relevant for the gene transcription domain, e.g. [nxgene The GAL4 gene ]. L4 Relation chunking Relations between entities are recognized, e.g. The expression of the cytochrome genes CYC1 and CYC7 is controlled by HAP1. L5 Output and visualization Information is gathered from the recognised patterns and transformed into pre-defined records. From the example in L4 we extract that HAP1 regulates the expression of CYC1 and CYC7. Table 1: Overview over the extraction architecture 3.2 Tokenization and multiword detection The process of tokenization consists of two steps (Grefenstette and Tapanainen, 1994): segmentation of the input text into a sequence of tokens and the detection of sentential boundaries. We use the tokenizer developed by Helmut Schmid at IMS (University of Stuttgart) because it combines a high accuracy (99.56% on the Brown corpus) with unsupervised learning (i.e. no manually labelled data is needed) (Schmid, 2000). The determination of token boundaries in technical or scientific texts is one of the main challenges within information extraction or retrieval. On the one hand, technical terms contain special characters such as brackets, colons, hyphens, slashes, etc. On the other hand, they often appear as multiword expressions which makes it hard to detect the left and right boundaries of the terms. Although a lot of work has been invested in the detection of technical terms within biology related texts (see Nenadi´c et al. (2003) or Yamamoto et al. (2003) for representative results) this task is not yet solved to a satisfying extent. As we are interested in very special terms and high precision results we opted for multiword detection based on semi-automatical acquisition of multiwords (see sections 3.4 and 3.5). 3.3 Part-of-speech tagging To improve the accuracy of POS-tagging on PubMed abstracts, TreeTagger (Schmid, 1994) was retrained on the GENIA 3.0 corpus (Kim et al., 2003). Furthermore, we expanded the POStagger lexicon with entries relevant for our application such as gene names (see section 3.4) and multiwords (see section 3.5). 
As tag set we use the UPenn tag set (Santorini, 1991) plus some minor extensions for distinguishing auxiliary verbs. The GENIA 3.0 corpus consists of PubMed abstracts and has 466,179 manually annotated tokens. For our application we made two changes in the annotation. The first one concerns seemingly undecideable cases like in/or annotated as in|cc. These were split into three tokens: in, /, and or each annotated with its own tag. This was done because TreeTagger is not able to annotate two POS-tags for one token. The second set of changes was to adapt the tag set so that vb... is used for derivates of to be, vh... for derivates of to have, and vv... for all other verbs. 3.4 Recognizing gene/protein names To be able to recognize gene/protein names as such, and to associate them with the appropriate database identifiers, a list of synonymous names and identifiers in six eukaryotic model organisms was compiled from several sources (available from http://www.bork.embl. de/synonyms/). For S. cerevisiae specifically, 51,640 uniquely resolvable names and identifiers were obtained from Saccharomyces Genome Database (SGD) and SWISS-PROT (Dwight et al., 2002; Boeckmann et al., 2003). Before matching these names against the POStagged corpus, the list of names was expanded to include different orthographic variants of each name. Firstly, the names were allowed to have various combinations of uppercase and lowercase letters: all uppercase, all lowercase, first letter uppercase, and (for multiword names) first letter of each word uppercase. In each of these versions, we allowed whitespace to be replaced by hyphen, and hyphen to be removed or replaced by whitespace. In addition, from each gene name a possible protein name was generated by appending the letter p. The resulting list containing all orthographic variations comprises 516,799 entries. The orthographically expanded name list was fed into the multiword detection, the POS-tagger lexicon, and was subsequently matched against the POS-tagged corpus to retag gene/protein names as such (nnpg). By accepting only matches to words tagged as common nouns (nn), the problem of homonymy was reduced since e.g. the name MAP can occur as a verb as well. 3.5 Semantic tagging In addition to the recognition of the gene and protein names, we recognize several other terms and annotate them with semantic tags. This set of semantically relevant terms mainly consists of nouns and verbs, as well as some few prepositions like from, or adjectives like dependent. The first main set of terms consists of nouns, which are classified as follows: • Relevant concepts in our ontology: gene, protein, promoter, binding site, transcription factor, etc. (153 entries). • Relational nouns, like nouns of activation (e.g. derepression and positive regulation), nouns of repression (e.g. suppression and negative regulation), nouns of regulation (e.g. affect and control) (69 entries). • Triggering experimental (artificial) contexts: mutation, deletion, fusion, defect, vector, plasmids, etc. (11 entries). • Enzymes: gyrase, kinase, etc. (569 entries). • Organism names extracted from the NCBI taxonomy of organisms (Wheeler et al., 2004) (20,746 entries). The second set of terms contains 50 verbs and their inflections. They were classified according to their relevance in gene transcription. These verbs are crucial for the extraction of relations between entities: • Verbs of activation e.g. enhance, increase, induce, and positively regulate. • Verbs of repression e.g. 
block, decrease, downregulate, and down regulate. • Verbs of regulation e.g. affect and control. • Other selected verbs like code (or encode) and contain where given their own tags. Each of the terms consisting of more than one word was utilized for multiword recognition. We also have have two additional classes of words to prevent false positive extractions. The first contains words of negation, like not, cannot, etc. The other contains nouns that are to be distinguished from other common nouns to avoid them being allowed within named entitities, e.g. allele and diploid. 3.6 Extraction of named entities In the preceding steps we classified relevant nouns according to semantic criteria. This allows us to chunk noun phrases generalizing over both POStags and semantic tags. Syntacto-semantic chunking was performed to recognize named entities using cascades of finite state rules implemented as a CASS grammar (Abney, 1996). As an example we recognize gene noun phrases: [nx gene [dt the] [nnpg CYC1] [gene gene] [in in] [yeast Saccharomyces cerevisiae]] Other syntactic variants, as for example “the glucokinase gene GLK1” are recognized too. Similarly, we detect at this early level noun chunks denoting other biological entities such as proteins, activators, repressors, transcription factors etc. Subsequently, we recognize more complex noun chunks on the basis of the simpler ones, e.g. promoters, upstream activating/repressing sequences (UAS/URS), binding sites. At this point it becomes important to distinguish between agens and patiens forms of certain entities. Since a binding site is part of a target gene, it can be referred to either by the name of this gene or by the name of the regulator protein that binds to it. It is thus necessary to discriminate between “binding site of” and “binding site for”. As already mentioned, we have annotated a class of nouns that trigger experimental context. On the basis of these we identify noun chunks mentioning, as for example deletion, mutation, or overexpression of genes. At a fairly late stage we recognize events that can occur as arguments for verbs like “expression of”. 3.7 Extraction of relations between entities This step of processing concerns the recognition of three types of relations between the recognized named entities: up-regulation, down-regulation, and (underspecified) regulation of expression. We combine syntactic properties (subcategorization restrictions) and semantic properties (selectional restrictions) of the relevant verbs to map them to one of the three relation types. The following shows a reduced bracketed structure consting of three parts, a promoter chunk, a verbal complex chunk, and a UAS chunk in patiens: [nx prom the ATR1 promoter region] [contain contains] [nx uas pt [dt−a a] [bs binding site] [for for] [nx activator the GCN4 activator protein]]. From this we extract that the GCN4 protein activates the expression of the ATR1 gene. We identify passive constructs too e.g. “RNR1 expression is reduced by CLN1 or CLN2 overexpression”. In this case we extract two pairwise relations, namely that both CLN1 and CLN2 down-regulate the expression of the RNR1 gene. We also identify nominalized relations as exemplified by “the binding of GCN4 protein to the SER1 promoter in vitro”. 4 Results Using our relation extraction rules, we were able to extract 422 relation chunks from our complete corpus. Since one entity chunk can mention several different named entities, these corresponded to a total of 597 extracted pairwise relations. 
4 Results
Using our relation extraction rules, we were able to extract 422 relation chunks from our complete corpus. Since one entity chunk can mention several different named entities, these corresponded to a total of 597 extracted pairwise relations. However, as several relation chunks may mention the same pairwise relations, this reduces to 441 unique pairwise relations, comprising 126 up-regulations, 90 down-regulations, and 225 regulations of unknown direction. Figure 2 displays these 441 relations as a regulatory network in which the nodes represent genes or proteins and the arcs are expression regulation relations. Known transcription factors according to the Saccharomyces Genome Database (SGD) (Dwight et al., 2002) are denoted by black nodes. From a biological point of view, it is reassuring that these tend to correspond to proteins serving as regulators in our relations.

Figure 2: The extracted network of gene regulation. The extracted relations are shown as a directed graph, in which each node corresponds to a gene or protein and each arc represents a pairwise relation. The arcs point from the regulator to the target and the type of regulation is specified by the type of arrow head. Known transcription factors are highlighted as black nodes.

4.1 Evaluation of relation extraction
To evaluate the accuracy of the extracted relations, we manually inspected all relations extracted from the evaluation corpus using the TIGERSearch visualization tool (Lezius, 2002). The accuracy of the relations was evaluated at the semantic rather than the grammatical level. We thus carried out the evaluation in such a way that relations were counted as correct if they extracted the correct biological conclusion, even if the analysis of the sentence is not as to be desired from a linguistic point of view. Conversely, a relation was counted as an error if the biological conclusion was wrong. 75 of the 90 relation chunks (83%) extracted from the evaluation corpus were entirely correct, meaning that the relation corresponded to expression regulation, the regulator (R) and the regulatee (X) were correctly identified, and the direction of regulation (up or down) was correct if extracted. A further 6 relation chunks extracted the wrong direction of regulation but were otherwise correct; our accuracy increases to 90% if we allow for this minor type of error. Approximately half of the errors made by our method stem from overlooked genetic modifications—although mentioned in the sentence, the extracted relation is not biologically relevant.

4.2 Entity recognition
For the sake of consistency, we have also evaluated our ability to correctly identify named entities at the level of semantic rather than grammatical correctness. Manual inspection of 500 named entities from the evaluation corpus revealed 14 errors, which corresponds to an estimated accuracy of just over 97%. Surprisingly, many of these errors were committed when recognizing proteins, for which our accuracy was only 95%. Phrases such as "telomerase associated protein" (which got confused with "telomerase protein" itself) were responsible for about half of these errors. Among the 153 entities involved in relations no errors were detected, which is fewer than expected from our estimated accuracy on entity recognition (99% confidence according to a hypergeometric test). This suggests that the templates used for relation extraction are unlikely to match those sentence constructs on which the entity recognition goes wrong. False identification of named entities is thus unlikely to have an impact on the accuracy of relation extraction.
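As a back-of-the-envelope check of the significance claim in Section 4.2 (before the tagging evaluation below): if the 153 relation entities are treated as a random sample of the 500 manually inspected entities, 14 of which were erroneous — an assumption of ours about how the test was set up, not a statement from the paper — the probability of observing zero errors is well under 1%, consistent with the reported 99% confidence.

```python
from scipy.stats import hypergeom

# Probability of drawing 0 erroneous entities when sampling 153 entities
# without replacement from a population of 500 that contains 14 errors.
population, n_errors, sample_size = 500, 14, 153
p_zero = hypergeom.pmf(0, population, n_errors, sample_size)
print(f"P(0 errors in the sample) = {p_zero:.4f}")   # roughly 0.006, i.e. < 1%
```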
4.3 POS-tagging and tokenization
We compared the POS-tagging performance of two parameter files on 55,166 tokens from the GENIA corpus that were not used for retraining. Using the retrained tagger, 93.6% of the tokens were correctly tagged, 4.1% carried questionable tags (e.g. confusing proper nouns for common nouns), and 2.3% were clear tagging errors. This compares favourably to the 85.7% correct, 8.5% questionable tags, and 5.8% errors obtained when using the Standard English parameter file. Retraining thus reduced the error rate more than two-fold. Of 198 sentences evaluated, the correct sentence boundary was detected in all cases. In addition, three abbreviations incorrectly resulted in a sentence marker, corresponding to an overall precision of 98.5%.

5 Conclusions
We have developed a method that allows us to extract information on regulation of gene expression from biomedical abstracts. This is a highly relevant biological problem, since much is known about it although this knowledge has yet to be collected in a database. Also, knowledge on how gene expression is regulated is crucial for interpreting the enormous amounts of gene expression data produced by high-throughput methods like spotted microarrays and GeneChips. Although we developed and evaluated our method on abstracts related to baker's yeast only, we have successfully applied the method to other organisms including humans (to be published elsewhere). The main adaptation required was to replace the list of synonymous gene/protein names to reflect the change of organism. Furthermore, we also intend to reuse the recognition of named entities to extract other, specific types of interactions between biological entities.

Acknowledgments
The authors wish to thank Sean Hooper for help with Figure 2. Jasmin Šarić is funded by the Klaus Tschira Foundation gGmbH, Heidelberg (http://www.kts.villa-bosch.de). Lars Juhl Jensen is funded by the Bundesministerium für Forschung und Bildung, BMBF-01-GG-9817.

References
S. Abney. 1996. Partial parsing via finite-state cascades. In Proceedings of the ESSLLI '96 Robust Parsing Workshop, pages 8–15, Prague, Czech Republic.
M. Ashburner, C. A. Ball, J. A. Blake, D. Botstein, H. Butler, J. M. Cherry, A. P. Davis, K. Dolinski, S. S. Dwight, J. T. Eppig, M. A. Harris, D. P. Hill, L. Issel-Tarver, A. Kasarskis, S. Lewis, J. C. Matese, J. E. Richardson, M. Ringwald, G. M. Rubin, and G. Sherlock. 2000. Gene Ontology: tool for the unification of biology. Nature Genetics, 25:25–29.
C. Blaschke, M. A. Andrade, C. Ouzounis, and A. Valencia. 1999. Automatic extraction of biological information from scientific text: protein–protein interactions. In Proc., Intelligent Systems for Molecular Biology, volume 7, pages 60–67, Menlo Park, CA. AAAI Press.
B. Boeckmann, A. Bairoch, R. Apweiler, M. C. Blatter, A. Estreicher, E. Gasteiger, M. J. Martin, K. Michoud, C. O'Donovan, I. Phan, S. Pilbout, and M. Schneider. 2003. The SWISS-PROT protein knowledgebase and its supplement TrEMBL in 2003. Nucleic Acids Res., 31:365–370.
S. S. Dwight, M. A. Harris, K. Dolinski, C. A. Ball, G. Binkley, K. R. Christie, D. G. Fisk, L. Issel-Tarver, M. Schroeder, G. Sherlock, A. Sethuraman, S. Weng, D. Botstein, and J. M. Cherry. 2002. Saccharomyces Genome Database (SGD) provides secondary gene annotation using the Gene Ontology (GO). Nucleic Acids Res., 30:69–72.
C. Friedman, P. Kra, H. Yu, M. Krauthammer, and A. Rzhetsky. 2001. GENIES: a natural-language processing system for the extraction of molecular pathways from journal articles. Bioinformatics, 17 Suppl. 1:S74–S82.
G. Grefenstette and P. Tapanainen. 1994. What is a word, what is a sentence? Problems of tokenization. In The 3rd International Conference on Computational Lexicography, pages 79–87.
J. R. Hobbs. 2003. Information extraction from biomedical text. J. Biomedical Informatics.
J.-D. Kim, T. Ohta, Y. Tateisi, and J. Tsujii. 2003. GENIA corpus—a semantically annotated corpus for bio-textmining. Bioinformatics, 19 suppl. 1:i180–i182.
W. Lezius. 2002. TIGERSearch—ein Suchwerkzeug für Baumbanken. In S. Busemann, editor, Proceedings der 6. Konferenz zur Verarbeitung natürlicher Sprache (KONVENS 2002), Saarbrücken, Germany.
E. M. Marcotte, I. Xenarios, and D. Eisenberg. 2001. Mining literature for protein–protein interactions. Bioinformatics, 17:359–363.
G. Nenadić, S. Rice, I. Spasić, S. Ananiadou, and B. Stapley. 2003. Selecting text features for gene name classification: from documents to terms. In S. Ananiadou and J. Tsujii, editors, Proceedings of the ACL 2003 Workshop on Natural Language Processing in Biomedicine, pages 121–128.
R. Netzel, C. Perez-Iratxeta, P. Bork, and M. A. Andrade. 2003. The way we write. EMBO Rep., 4:446–451.
J. Pustejovsky, J. Castaño, J. Zhang, M. Kotecki, and B. Cochran. 2002. Robust relational parsing over biomedical literature: Extracting inhibit relations. In Proceedings of the Seventh Pacific Symposium on Biocomputing, pages 362–373, Hawaii. World Scientific.
B. Santorini. 1991. Part-of-speech tagging guidelines for the Penn Treebank project. Technical report, University of Pennsylvania.
H. Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In International Conference on New Methods in Language Processing, Manchester, UK.
H. Schmid. 2000. Unsupervised learning of period disambiguation for tokenisation. Technical report, Institut für Maschinelle Sprachverarbeitung, University of Stuttgart.
J. Thomas, D. Milward, C. Ouzounis, S. Pulman, and M. Carroll. 2000. Automatic extraction of protein interactions from scientific abstracts. In Proceedings of the Fifth Pacific Symposium on Biocomputing, pages 707–709, Hawaii. World Scientific.
D. L. Wheeler, D. M. Church, R. Edgar, S. Federhen, W. Helmberg, T. L. Madden, J. U. Pontius, G. D. Schuler, L. M. Schriml, E. Sequeira, T. O. Suzek, T. A. Tatusova, and L. Wagner. 2004. Database resources of the National Center for Biotechnology Information: update. Nucleic Acids Res., 32:D35–40.
K. Yamamoto, T. Kudo, A. Konagaya, and Y. Matsumoto. 2003. Protein name tagging for biomedical annotation in text. In S. Ananiadou and J. Tsujii, editors, Proceedings of the ACL 2003 Workshop on Natural Language Processing in Biomedicine, pages 65–72.
Linguistic Profiling for Author Recognition and Verification Hans van Halteren Language and Speech, Univ. of Nijmegen P.O. Box 9103 NL-6500 HD, Nijmegen, The Netherlands [email protected] Abstract A new technique is introduced, linguistic profiling, in which large numbers of counts of linguistic features are used as a text profile, which can then be compared to average profiles for groups of texts. The technique proves to be quite effective for authorship verification and recognition. The best parameter settings yield a False Accept Rate of 8.1% at a False Reject Rate equal to zero for the verification task on a test corpus of student essays, and a 99.4% 2-way recognition accuracy on the same corpus. 1 Introduction There are several situations in language research or language engineering where we are in need of a specific type of extra-linguistic information about a text (document) and we would like to determine this information on the basis of linguistic properties of the text. Examples are the determination of the language variety or genre of a text, or a classification for document routing or information retrieval. For each of these applications, techniques have been developed focusing on specific aspects of the text, often based on frequency counts of functions words in linguistics and of content words in language engineering. In the technique we are introducing in this paper, linguistic profiling, we make no a priori choice for a specific type of word (or more complex feature) to be counted. Instead, all possible features are included and it is determined by the statistics for the texts under consideration, and the distinction to be made, how much weight, if any, each feature is to receive. Furthermore, the frequency counts are not used as absolute values, but rather as deviations from a norm, which is again determined by the situation at hand. Our hypothesis is that this technique can bring a useful contribution to all tasks where it is necessary to distinguish one group of texts from another. In this paper the technique is tested for one specific type of group, namely the group of texts written by the same author. 2 Tasks and Application Scenarios Traditionally, work on the attribution of a text to an author is done in one of two environments. The first is that of literary and/or historical research where attribution is sought for a work of unknown origin (e.g. Mosteller & Wallace, 1984; Holmes, 1998). As secondary information generally identifies potential authors, the task is authorship recognition: selection of one author from a set of known authors. Then there is forensic linguistics, where it needs to be determined if a suspect did or did not write a specific, probably incriminating, text (e.g. Broeders, 2001; Chaski, 2001). Here the task is authorship verification: confirming or denying authorship by a single known author. We would like to focus on a third environment, viz. that of the handling of large numbers of student essays. For some university courses, students have to write one or more essays every week and submit them for grading. Authorship recognition is needed in the case the sloppy student, who forgets to include his name in the essay. If we could link such an essay to the correct student ourselves, this would prevent delays in handling the essay. 
Authorship verification is needed in the case of the fraudulous student, who has decided that copying is much less work than writing an essay himself, which is only easy to spot if the original is also submitted by the original author. In both scenarios, the test material will be sizable, possibly around a thousand words, and at least several hundred. Training material can be sufficiently available as well, as long as text collection for each student is started early enough. Many other authorship verification scenarios do not have the luxury of such long stretches of test text. For now, however, we prefer to test the basic viability of linguistic profiling on such longer stretches. Afterwards, further experiments can show how long the test texts need to be to reach an acceptable recognition/verification quality. 2.1 Quality Measures For recognition, quality is best expressed as the percentage of correct choices when choosing between N authors, where N generally depends on the attribution problem at hand. We will use the percentage of correct choices between two authors, in order to be able to compare with previous work. For verification, quality is usually expressed in terms of erroneous decisions. When the system is asked to verify authorship for the actual author of a text and decides that the text was not written by that author, we speak of a False Reject. The False Reject Rate (FRR) is the percentage of cases in which this happens, the percentage being taken from the cases which should be accepted. Similarly, the False Accept Rate (FAR) is the percentage of cases where somebody who has not written the test text is accepted as having written the text. With increasing threshold settings, FAR will go down, while FRR goes up. The behaviour of a system can be shown by one of several types of FAR/FRR curve, such as the Receiver Operating Characteristic (ROC). Alternatively, if a single number is preferred, a popular measure is the Equal Error Rate (EER), viz. the threshold value where FAR is equal to FRR. However, the EER may be misleading, since it does not take into account the consequences of the two types of errors. Given the example application, plagiarism detection, we do not want to reject, i.e. accuse someone of plagiarism, unless we are sure. So we would like to measure the quality of the system with the False Accept Rate at the threshold at which the False Reject Rate becomes zero. 2.2 The Test Corpus Before using linguistic profiling for any real task, we should test the technique on a benchmark corpus. The first component of the Dutch Authorship Benchmark Corpus (ABC-NL1) appears to be almost ideal for this purpose. It contains widely divergent written texts produced by firstyear and fourth-year students of Dutch at the University of Nijmegen. The ABC-NL1 consists of 72 Dutch texts by 8 authors, controlled for age and educational level of the authors, and for register, genre and topic of the texts. It is assumed that the authors’ language skills were advanced, but their writing styles were as yet at only weakly developed and hence very similar, unlike those in literary attribution problems. Each author was asked to write nine texts of about a page and a half. In the end, it turned out that some authors were more productive than others, and that the text lengths varied from 628 to 1342 words. The authors did not know that the texts were to be used for authorship attribution studies, but instead assumed that their writing skill was measured. 
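Returning briefly to the quality measures of Section 2.1 before the corpus description continues: given verification scores for texts that should be accepted (written by the claimed author) and texts that should be rejected, the FAR at the threshold where the FRR becomes zero can be computed as in the sketch below. The scores are invented and the code is only an illustration of the definition.

```python
import numpy as np

def far_frr(accept_scores, reject_scores, threshold):
    """FAR: impostor texts wrongly accepted; FRR: genuine texts wrongly rejected."""
    far = np.mean(np.asarray(reject_scores) >= threshold)
    frr = np.mean(np.asarray(accept_scores) < threshold)
    return far, frr

def far_at_zero_frr(accept_scores, reject_scores):
    # the highest threshold that still accepts every genuine text
    threshold = min(accept_scores)
    return far_frr(accept_scores, reject_scores, threshold)[0]

accept = [0.92, 0.71, 0.85]            # scores of texts by the claimed author
reject = [0.20, 0.74, 0.40, 0.66]      # scores of texts by other authors
print(far_at_zero_frr(accept, reject))  # 0.25: one impostor scores above 0.71
```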
The topics for the nine texts were fixed, so that each author produced three argumentative non-fiction texts, on the television program Big Brother, the unification of Europe and smoking, three descriptive non-fiction texts, about soccer, the (then) upcoming new millennium and the most recent book they read, and three fiction texts, namely a fairy tale about Little Red Riding Hood, a murder story at the university and a chivalry romance. The ABC-NL1 corpus is not only well-suited because of its contents. It has also been used in previously published studies into authorship attribution. A ‘traditional’ authorship attribution method, i.e. using the overall relative frequencies of the fifty most frequent function words and a Principal Components Analysis (PCA) on the correlation matrix of the corresponding 50dimensional vectors, fails completely (Baayen et al., 2002). The use of Linear Discriminant Analysis (LDA) on overall frequency vectors for the 50 most frequent words achieves around 60% correct attributions when choosing between two authors, which can be increased to around 80% by the application of cross-sample entropy weighting (Baayen et al., 2002). Weighted Probability Distribution Voting (WPDV) modeling on the basis of a very large number of features achieves 97.8% correct attributions (van Halteren et al., To Appear). Although designed to produce a hard recognition task, the latter result show that very high recognition quality is feasible. Still, this appears to be a good test corpus to examine the effectiveness of a new technique. 3 Linguistic Profiling In linguistic profiling, the occurrences in a text are counted of a large number of linguistic features, either individual items or combinations of items. These counts are then normalized for text length and it is determined how much (i.e. how many standard deviations) they differ from the mean observed in a profile reference corpus. For the authorship task, the profile reference corpus consists of the collection of all attributed and non-attributed texts, i.e. the entire ABC-NL1 corpus. For each text, the deviation scores are combined into a profile vector, on which a variety of distance measures can be used to position the text in relation to any group of other texts. 3.1 Features Many types of linguistic features can be profiled, such as features referring to vocabulary, lexical patterns, syntax, semantics, pragmatics, information content or item distribution through a text. However, we decided to restrict the current experiments to a few simpler types of features to demonstrate the overall techniques and methodology for profiling before including every possible type of feature. In this paper, we first show the results for lexical features and continue with syntactic features, since these are the easiest ones to extract automatically for these texts. Other features will be the subject of further research. 3.2 Authorship Score Calculation In the problem at hand, the system has to decide if an unattributed text is written by a specific author, on the basis of attributed texts by that and other authors. We test our system’s ability to make this distinction by means of a 9-fold crossvalidation experiment. In each set of runs of the system, the training data consists of attributed texts for eight of the nine essay topics. The test data consists of the unattributed texts for the ninth essay topic. 
This means that for all runs, the test data is not included in the training data and is about a different topic than what is present in the training material. During each run within a set, the system only receives information about whether each training text is written by one specific author. All other texts are only marked as "not by this author".

3.3 Raw Score
The system first builds a profile to represent text written by the author in question. This is simply the featurewise average of the profile vectors of all text samples marked as being written by the author in question. The system then determines a raw score for all text samples in the list. Rather than using the normal distance measure, we opted for a non-symmetric measure which is a weighted combination of two factors: a) the difference between sample score and author score for each feature and b) the sample score by itself. This makes it possible to assign more importance to features whose count deviates significantly from the norm. The following distance formula is used:

∆T = ( Σ_i |T_i − A_i|^D · |T_i|^S )^(1/(D+S))

In this formula, T_i and A_i are the values for the i-th feature for the text sample profile and the author profile respectively, and D and S are the weighting factors that can be used to assign more or less importance to the two factors described. We will see below how the effectiveness of the measure varies with their setting. The distance measure is then transformed into a score by the formula

Score_T = ( Σ_i |T_i|^(D+S) )^(1/(D+S)) − ∆T

In this way, the score will grow with the similarity between text sample profile and author profile. Also, the first component serves as a correction factor for the length of the text sample profile vector.

3.4 Normalization and Renormalization
The order of magnitude of the score values varies with the setting of D and S. Furthermore, the values can fluctuate significantly with the sample collection. To bring the values into a range which is suitable for subsequent calculations, we express them as the number of standard deviations they differ from the mean of the scores of the text samples marked as not being written by the author in question. In the experiments described in this paper, a rather special condition holds. In all tests, we know that the eight test samples are comparable in that they address the same topic, and that the author to be verified produced exactly one of the eight test samples. Under these circumstances, we should expect one sample to score higher than the others in each run, and we can profit from this knowledge by performing a renormalization, viz. to the number of standard deviations the score differs from the mean of the scores of the unattributed samples. However, this renormalization only makes sense in the situation that we have a fixed set of authors who each produced one text for each topic. This is in fact yet a different task than those mentioned above, say authorship sorting. Therefore, we will report on the results with renormalization, but only as additional information. The main description of the results will focus on the normalized scores.
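The scoring pipeline of Sections 3.3 and 3.4 can be written down compactly. The sketch below follows the formula reconstruction given above, represents profiles as plain NumPy vectors, and uses as defaults the D and S values reported later as the best lexical settings; it is an illustration, not the original implementation.

```python
import numpy as np

def raw_score(text_profile, author_profile, D=0.575, S=0.15):
    # Delta_T = (sum_i |T_i - A_i|^D * |T_i|^S)^(1/(D+S))
    T, A = np.asarray(text_profile, float), np.asarray(author_profile, float)
    delta = np.sum(np.abs(T - A) ** D * np.abs(T) ** S) ** (1.0 / (D + S))
    # Score_T = (sum_i |T_i|^(D+S))^(1/(D+S)) - Delta_T
    return np.sum(np.abs(T) ** (D + S)) ** (1.0 / (D + S)) - delta

def normalized_scores(author_profile, sample_profiles, non_author_idx, D=0.575, S=0.15):
    # Section 3.4: express each raw score as the number of standard deviations
    # it differs from the mean score of the "not by this author" samples.
    scores = np.array([raw_score(p, author_profile, D, S) for p in sample_profiles])
    ref = scores[non_author_idx]
    return (scores - ref.mean()) / ref.std()

# toy profiles: 4 samples over 3 features; samples 1-3 are marked "not by this author"
samples = np.array([[1.0, -0.2, 0.9], [0.1, 0.4, -0.5], [0.0, 1.2, 0.3], [-0.6, 0.2, 0.1]])
author = samples[0]   # author profile = featurewise mean of the author's samples (one here)
print(normalized_scores(author, samples, non_author_idx=[1, 2, 3]))
```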
4 Profiling with Lexical Features
The most straightforward features that can be used are simply combinations of tokens in the text.

4.1 Lexical features
Sufficiently frequent tokens, i.e. those that were observed at least a certain number of times (in this case 5) in some language reference corpus (in this case the Eindhoven corpus; uit den Boogaart, 1975), are used as features by themselves. For less frequent tokens we determine a token pattern consisting of the sequence of character types, e.g., the token "Uefa-cup" is represented by the pattern "#L#6+/CL-L", where the first "L" indicates low frequency, 6+ the size bracket, and the sequence "CL-L" a capital letter followed by one or more lower case letters followed by a hyphen and again one or more lower case letters. For lower case words, the final three letters of the word are included too, e.g. "waarmaken" leads to "#L#6+/L/ken". These patterns were originally designed for English and Dutch and will probably have to be extended when other languages are being handled. In addition to the form of the token, we also use the potential syntactic usage of the token as a feature. We apply the first few modules of a morphosyntactic tagger (in this case Wotan-Lite; Van Halteren et al., 2001) to the text, which determine which word class tags could apply to each token. For known words, the tags are taken from a lexicon; for unknown words, they are estimated on the basis of the word patterns described above. The three (if present) most likely tags are combined into a feature, e.g. "niet" leads to "#H#Adv(stell,onverv)-N(ev,neut)" and "waarmaken" to "#L#V(inf)-N(mv,neut)-V(verldw,onverv)". Note that the most likely tags are determined on the basis of the token itself and that the context is not consulted. The modules of the tagger which do context-dependent disambiguation are not applied. On top of the individual token and tag features we use all possible bi- and trigrams which can be built with them, e.g. the token combination "kon niet waarmaken" leads to features such as "wcw=#H#kon#H#Adv(stell,onverv)-N(ev,neut) #L#6+/L/ken". Since the number of features quickly grows too high for efficient processing, we filter the set of features by demanding that a feature occurs in a set minimum number of texts in the profile reference corpus (in this case two). A feature which is filtered out instead contributes to a rest category feature, e.g. the feature above would contribute to "wcw=<OTHER>". For the current corpus, this filtering leads to a feature set of about 100K features. The lexical features currently also include features for utterance length. Each utterance leads to two such features, viz. the exact length (e.g. "len=15") and the length bracket (e.g. "len=10-19").
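The token-pattern features just described can be approximated as follows. The exact size brackets and pattern inventory of the original system are not fully specified, so the details below (the "6+" bracket, the three-letter suffix, the character classes) are guesses calibrated to reproduce the two examples given in the text.

```python
import re

def char_class(c):
    if c.isupper(): return "C"
    if c.islower(): return "L"
    if c.isdigit(): return "N"
    return c                                    # keep punctuation as-is

def collapse(s):
    """Collapse runs of identical character classes (e.g. 'CLLL-LLL' -> 'CL-L')."""
    out = []
    for c in s:
        if not out or out[-1] != c:
            out.append(c)
    return "".join(out)

def token_feature(token, high_freq_lexicon):
    if token.lower() in high_freq_lexicon:       # sufficiently frequent token
        return f"#H#{token.lower()}"
    size = str(len(token)) if len(token) < 6 else "6+"   # assumed size brackets
    pattern = collapse("".join(char_class(c) for c in token))
    feat = f"#L#{size}/{pattern}"
    if token.islower():
        feat += "/" + token[-3:]                 # final three letters for lower-case words
    return feat

lexicon = {"kon", "niet"}
print(token_feature("Uefa-cup", lexicon))        # -> #L#6+/CL-L
print(token_feature("waarmaken", lexicon))       # -> #L#6+/L/ken
```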
4.2 Results with lexical features
A very rough first reconnaissance of settings for D and S suggested that the best results could be achieved with D between 0.1 and 2.4 and S between 0.0 and 1.0. Further examination of this area leads to FAR FRR=0 scores ranging down to around 15%. Figure 1 shows the scores at various settings for D and S. The z-axis is inverted (i.e. 1 − FAR FRR=0 is used) to show better scores as peaks rather than troughs. The most promising area is the ridge along the trough at D=0.0, S=0.0. A closer investigation of this area shows that the best settings are D=0.575 and S=0.15. The FAR FRR=0 score here is 14.9%, i.e. there is a threshold setting such that if all texts by the authors themselves are accepted, only 14.9% of texts by other authors are falsely accepted.

Figure 1: The variation of FAR (or rather 1−FAR) as a function of D and S, with D ranging from 0.1 to 2.4 and S from 0.0 to 1.0.

The very low value for S is surprising. It indicates that it is undesirable to give too much attention to features which deviate much in the sample being measured; still, in the area in question, the score does peak at a positive S value, indicating that some such weighting does have effect. Successful low scores for S can also be seen in the hill leading around D=1.0, S=0.3, which peaks at an FAR FRR=0 score of around 17 percent. From the shape of the surface it would seem that an investigation of the area across the S=0.0 divide might still be worthwhile, which is in contradiction with the initial finding that negative values produce no useful results.

5 Beyond Lexical Features
As stated above, once the basic viability of the technique was confirmed, more types of features would be added. As yet, this is limited to syntactic features. We will first describe the system quality using only syntactic features, and then describe the results when using lexical and syntactic features in combination.

5.1 Syntactic Features
We used the Amazon parser to derive syntactic constituent analyses of each utterance (Coppen, 2003). We did not use the full rewrites, but rather constituent N-grams. The N-grams used were:
• left hand side label, examining constituent occurrence
• left hand side label plus one label from the right hand side, examining dominance
• left hand side label plus two labels from the right hand side, in their actual order, examining dominance and linear precedence
For each label, two representations are used. The first is only the syntactic constituent label, the second is the constituent label plus the head word. This is done for each part of the N-grams independently, leading to 2, 4 and 8 features respectively for the three types of N-gram. Furthermore, each feature is used once by itself, once with an additional marking for the depth of the rewrite in the analysis tree, once with an additional marking for the length of the rewrite, and once with both these markings. This means another multiplication factor of four for a total of 8, 16 and 32 features respectively. After filtering for a minimum number of observations, again at least an observation in two different texts, there are about 900K active syntactic features, nine times as many as for the lexical features. Investigation of the results for various settings has not been as exhaustive as for the lexical features. The best settings so far, D=1.3, S=1.4, yield an FAR FRR=0 of 24.8%, much worse than the 14.9% seen for lexical features.

5.2 Combining Lexical and Syntactic Features
From the FAR FRR=0 score, it would seem that syntactic features are not worth pursuing any further, since they perform much worse than lexical ones. However, they might still be useful if we combine their scores with those for the lexical features. For now, rather than calculating new combined profiles, we just added the scores from the two individual systems. The combination of the best two individual systems leads to an FAR FRR=0 of 10.3%, a solid improvement over lexical features by themselves. However, the best individual systems are not necessarily the best combiners. The best combination systems produce FAR FRR=0 measurements down to 8.1%, with settings in different parts of the parameter space. It should be observed that the improvement gained by combination is linked to the chosen quality measure. If we examine the ROC curves for several types of systems (plotting the FAR against the FRR; Figure 2), we see that the combination curves as a whole do not differ much from the lexical feature curve. In fact, the EER for the 'best' combination system is worse than that for the best lexical feature system.
This means that we should be very much aware of the relative importance of FAR and FRR in any specific application when determining the ‘optimal’ features and parameters. 6 Parameter Settings A weak point in the system so far is that there is no automatic parameter selection. The best results reported above are the ones at optimal settings. One would hope that optimal settings on training/tuning data will remain good settings for new data. Further experiments on other data will have to shed more light on this. Another choice which cannot yet be made automatically is that of a threshold. So far, the presentation in this paper has been based on a single threshold for all author/text combinations. That there is an enormous potential for improvement can be shown by assuming a few more informed methods of threshold selection. The first method uses the fact that, in our experiments, there are always one true and seven false authors. This means we can choose the threshold at some point below the highest of the eight scores. We can hold on to the single threshold strategy if we first renormalize, as described in Section 3.4, and then choose a single value to threshold the renormalized values against. The second method assumes that we will be able to find an optimal threshold for each individual run of the system. The maximum effect of this can be estimated with an oracle providing the optimal threshold. Basically, since the oracle threshold will be at the score for the text by the author, we Figure 2: ROC (FAR plotted against FRR) for a varying threshold at good settings of D and S for different types of features. The top pane shows the whole range (0 to 1) for FAR and FRR. The bottom pane shows the area from 0.0 to 0.2. are examining how many texts by other authors score better than the text by the actual author. Table 1 compares the results for the best settings for these two new scenarios with the results presented above. Renormalizing already greatly improves the results. Interestingly, in this scenario, the syntactic features outperform the lexical ones, something which certainly merits closer investigation after the parameter spaces have been charted more extensively. The full potential of profiling becomes clear in the Oracle threshold scenario, which shows extremely good scores. Still, this potential will yet have to be realized by finding the right automatic threshold determination mechanism. 7 Comparison to Previous Authorship Attribution Work Above, we focused on the authorship verification task, since it is the harder problem, given that the potential group of authors is unknown. However, as mentioned in Section 2, previous work with this data has focused on the authorship recognition problem, to be exact on selecting the correct author out of two potential authors. We repeat the previously published results in Table 2, together with linguistic profiling scores, both for the 2way and for the 8-way selection problem. To do attribution with linguistic profiling, we calculated the author scores for each author from the set for a given text, and then selected the author with the highest score. The results are shown in Table 2, using lexical or syntactic features or both, and with and without renormalization. The Oracle scenario is not applicable as we are comparing rather than thresholding. In each case, the best results are not just found at a single parameter setting, but rather over a larger area in the parameter space. 
This means that the choice of optimal parameters will be more robust with regard to changes in authors and text types. We also observe that the optimal settings for recognition are very different from those for verification. A more detailed examination of the results is necessary to draw conclusions about these differences, which is again not possible until the parameter spaces have been charted more exhaustively.

                                         Lexical Features   Syntactic Features   Combination
Single threshold                         14.9%              24.8%                8.1%
Single threshold after renormalization   9.3%               6.0%                 2.4%
Oracle threshold per run                 0.8%               1.6%                 0.2%

Table 1: Best FAR FRR=0 scores for verification with various feature types and threshold selection mechanisms.

                                          2-way errors /504   2-way percent correct   8-way errors /72   8-way percent correct
50 function words, PCA                    -                   ± 50%                   -                  -
followed by LDA                           -                   ± 60%                   -                  -
LDA with cross-sample entropy weighting   -                   ± 80%                   -                  -
all tokens, WPDV modeling                 -                   97.8%                   -                  -
Lexical                                   6                   98.8%                   5                  93%
Syntactic                                 14                  98.2%                   10                 86%
Combined                                  3                   99.4%                   2                  97%
Lexical (renorm.)                         1                   99.8%                   1                  99%
Syntactic (renorm.)                       4                   99.2%                   3                  96%
Combined (renorm.)                        0                   100.0%                  0                  100%

Table 2: Authorship recognition quality for various methods.

All results with normalized scores are already better than the previously published results. When applying renormalization, which might be claimed to be justified in this particular authorship attribution problem, the combination system reaches the incredible level of making no mistakes at all.

8 Conclusion
Linguistic profiling has certainly shown its worth for authorship recognition and verification. At the best settings found so far, a profiling system using a combination of lexical and syntactic features is able to select the correct author for 97% of the texts in the test corpus. It is also able to perform the verification task in such a way that it rejects no texts that should be accepted, while accepting only 8.1% of the texts that should be rejected. Using additional knowledge about the test corpus can improve this to 100% and 2.4%. The next step in the investigation of linguistic profiling for this task should be a more exhaustive charting of the parameter space, and especially the search for an automatic parameter selection procedure. Another avenue of future research is the inclusion of even more types of features. Here, however, it would be useful to define an even harder verification task, as the current system scores already very high and further improvements might be hard to measure. With the current corpus, the task might be made harder by limiting the size of the test texts. Other corpora might also serve to provide more obstinate data, although it must be said that the current test corpus was already designed specifically for this purpose. Use of further corpora will also help with parameter space charting, as they will show the similarities and/or differences in behaviour between data sets. Finally, with the right types of corpora, the worth of the technique for actual application scenarios could be investigated. So there are several possible routes to further improvement. Still, the current quality of the system is already such that the system could be applied as is. Certainly for authorship recognition and verification, as we hope to show by our participation in Patrick Juola's Ad-hoc Authorship Attribution Contest (to be presented at ALLC/ACH 2004), for language verification (cf.
van Halteren and Oostdijk, 2004), and possibly also for other text classification tasks, such as language or language variety recognition, genre recognition, or document classification for IR purposes.

References
Harald Baayen, Hans van Halteren, Anneke Neijt, and Fiona Tweedie. 2002. An Experiment in Authorship Attribution. Proc. JADT 2002, pp. 69-75.
Ton Broeders. 2001. Forensic Speech and Audio Analysis, Forensic Linguistics 1998-2001 – A Review. Proc. 13th Interpol Forensic Science Symposium, Lyon, France.
C. Chaski. 2001. Empirical Evaluations of Language-Based Author Identification Techniques. Forensic Linguistics 8(1): 1-65.
Peter Arno Coppen. 2003. Rejuvenating the Amazon parser. Poster presentation CLIN2003, Antwerp, Dec. 19, 2003.
David Holmes. 1998. Authorship attribution. Literary and Linguistic Computing 13(3):111-117.
F. Mosteller and D. L. Wallace. 1984. Applied Bayesian and Classical Inference in the Case of the Federalist Papers (2nd edition). Springer Verlag, New York.
P. C. Uit den Boogaart. 1975. Woordfrequenties in geschreven en gesproken Nederlands. Oosthoek, Scheltema & Holkema, Utrecht.
Hans van Halteren, Jakub Zavrel, and Walter Daelemans. 2001. Improving accuracy in word class tagging through the combination of machine learning systems. Computational Linguistics 27(2):199-230.
Hans van Halteren and Nelleke Oostdijk. 2004. Linguistic Profiling of Texts for the Purpose of Language Verification. Proc. COLING 2004.
Hans van Halteren, Marco Haverkort, Harald Baayen, Anneke Neijt, and Fiona Tweedie. To appear. New Machine Learning Methods Demonstrate the Existence of a Human Stylome. Journal of Quantitative Linguistics.
An Empirical Study of Information Synthesis Tasks
Enrique Amigó, Julio Gonzalo, Víctor Peinado, Anselmo Peñas and Felisa Verdejo
Departamento de Lenguajes y Sistemas Informáticos, Universidad Nacional de Educación a Distancia
c/Juan del Rosal, 16 - 28040 Madrid - Spain
{enrique,julio,victor,anselmo,felisa}@lsi.uned.es

Abstract
This paper describes an empirical study of the "Information Synthesis" task, defined as the process of (given a complex information need) extracting, organizing and inter-relating the pieces of information contained in a set of relevant documents, in order to obtain a comprehensive, non-redundant report that satisfies the information need. Two main results are presented: a) the creation of an Information Synthesis testbed with 72 reports manually generated by nine subjects for eight complex topics with 100 relevant documents each; and b) an empirical comparison of similarity metrics between reports, under the hypothesis that the best metric is the one that best distinguishes between manual and automatically generated reports. A metric based on key concept overlap gives better results than metrics based on n-gram overlap (such as ROUGE) or sentence overlap.

1 Introduction
A classical Information Retrieval (IR) system helps the user find relevant documents in a given text collection. In most cases, however, this is only the first step towards fulfilling an information need. The next steps consist of extracting, organizing and relating the relevant pieces of information, in order to obtain a comprehensive, non-redundant report that satisfies the information need. In this paper, we will refer to this process as Information Synthesis. It is normally understood as an (intellectually challenging) human task, and perhaps the Google Answer Service (http://answers.google.com) is the best general-purpose illustration of how it works. In this service, users send complex queries which cannot be answered simply by inspecting the first two or three documents returned by a search engine. These are a couple of real, representative examples:
a) I'm looking for information concerning the history of text compression both before and with computers.
b) Provide an analysis on the future of web browsers, if any.
Answers to such complex information needs are provided by experts who, commonly, search the Internet, select the best sources, and assemble the most relevant pieces of information into a report, organizing the most important facts and providing additional web hyperlinks for further reading. This Information Synthesis task is understood, in Google Answers, as a human task for which a search engine only provides the initial starting point. Our midterm goal is to develop computer assistants that help users to accomplish Information Synthesis tasks. From a Computational Linguistics point of view, Information Synthesis can be seen as a kind of topic-oriented, informative multi-document summarization, where the goal is to produce a single text as a compressed version of a set of documents with a minimum loss of relevant information. Unlike indicative summaries (which help to determine whether a document is relevant to a particular topic), informative summaries must be helpful to answer, for instance, factual questions about the topic. In the remainder of the paper, we will use the term "reports" to refer to the summaries produced in an Information Synthesis task, in order to distinguish them from other kinds of summaries.
Topic-oriented multi-document summarization has already been studied in other evaluation initiatives which provide testbeds to compare alternative approaches (Over, 2003; Goldstein et al., 2000; Radev et al., 2000). Unfortunately, those studies have been restricted to very small summaries (around 100 words) and small document sets (10-20 documents). These are relevant summarization tasks, but hardly representative of the Information Synthesis problem we are focusing on. The first goal of our work has been, therefore, to create a suitable testbed that permits qualitative and quantitative studies on the information synthesis task. Section 2 describes the creation of such a testbed, which includes the manual generation of 72 reports by nine different subjects across 8 complex topics with 100 relevant documents each. Using this testbed, our second goal has been to compare alternative similarity metrics for the Information Synthesis task. A good similarity metric provides a way of evaluating Information Synthesis systems (comparing their output with manually generated reports), and should also shed some light on the common properties of manually generated reports. Our working hypothesis is that the best metric will best distinguish between manual and automatically generated reports. We have compared several similarity metrics, including a few baseline measures (based on document, sentence and vocabulary overlap) and a state-of-the-art measure to evaluate summarization systems, ROUGE (Lin and Hovy, 2003). We also introduce another proximity measure based on key concept overlap, which turns out to be substantially better than ROUGE for a relevant class of topics. Section 3 describes these metrics and the experimental design to compare them; in Section 4, we analyze the outcome of the experiment, and Section 5 discusses related work. Finally, Section 6 draws the main conclusions of this work.

2 Creation of an Information Synthesis testbed
We refer to Information Synthesis as the process of generating a topic-oriented report from a non-trivial amount of relevant, possibly interrelated documents. The first goal of our work is the generation of a testbed (ISCORPUS) with manually produced reports that serve as a starting point for further empirical studies and evaluation of information synthesis systems. This section describes how this testbed has been built.

2.1 Document collection and topic set
The testbed must have a certain number of features which, altogether, differentiate the task from current multi-document summarization evaluations: Complex information needs. Since Information Synthesis is a step that immediately follows a document retrieval process, it seems natural to start with standard IR topics as used in evaluation conferences such as TREC2, CLEF3 or NTCIR4. The title/description/narrative topics commonly used in such evaluation exercises are especially well suited for an Information Synthesis task: they are complex
We have slightly reworded the topics to change the document retrieval focus (“Find documents that...”) into an information synthesis wording (“Generate a report about...”). Table 1 shows the eight selected topics. C042: Generate a report about the invasion of Haiti by UN/US soldiers. C045: Generate a report about the main negotiators of the Middle East peace treaty between Israel and Jordan, giving detailed information on the treaty. C047: What are the reasons for the military intervention of Russia in Chechnya? C048: Reasons for the withdrawal of United Nations (UN) peace- keeping forces from Bosnia. C050: Generate a report about the uprising of Indians in Chiapas (Mexico). C085: Generate a report about the operation “Turquoise”, the French humanitarian program in Rwanda. C056: Generate a report about campaigns against racism in Europe. C080: Generate a report about hunger strikes attempted in order to attract attention to a cause. Table 1: Topic set This set of eight CLEF topics has two differentiated subsets: in a majority of cases (first six topics), it is necessary to study how a situation evolves in time; the importance of every event related to the topic can only be established in relation with the others. The invasion of Haiti by UN and USA troops (C042) is an example of such a topic. We will refer to them as “Topic Tracking” (TT) reports, because they resemble the kind of topics used in such task. The last two questions (56 and 80), however, resemble Information Extraction tasks: essentially, the user has to detect and describe instances of a generic event (cases of hunger strikes and campaigns against racism in Europe); hence we will refer to them as “IE” reports. Topic tracking reports need a more elaborated treatment of the information in the documents, and therefore are more interesting from the point of view of Information Synthesis. We have, however, decided to keep the two IE topics; first, because they also reflect a realistic synthesis task; and second, because they can provide contrastive information as compared to TT reports. Large document sets. All the selected CLEF topics have more than one hundred documents judged as relevant by the CLEF assessors. For homogeneity, we have restricted the task to the first 100 documents for each topic (using a chronological order). Complex reports. The elaboration of a comprehensive report requires more space than is allowed in current multi-document summarization experiences. We have established a maximum of fifty sentences per summary, i.e., half a sentence per document. This limit satisfies three conditions: a) it is large enough to contain the essential information about the topic, b) it requires a substantial compression effort from the user, and c) it avoids defaulting to a “first sentence” strategy by lazy (or tired) users, because this strategy would double the maximum size allowed. We decided that the report generation would be an extractive task, which consists of selecting sentences from the documents. Obviously, a realistic information synthesis process also involves rewriting and elaboration of the texts contained in the documents. Keeping the task extractive has, however, two major advantages: first, it permits a direct comparison to automatic systems, which will typically be extractive; and second, it is a simpler task which produces less fatigue. 2.2 Generation of manual reports Nine subjects between 25 and 35 years-old were recruited for the manual generation of reports. 
All of them self-reported university degrees and extensive experience using search engines and performing information searches. All subjects were given an in-place detailed description of the task in order to minimize divergent interpretations. They were told that, in a first step, they had to generate reports with a maximum of information about every topic within the fifty-sentence space limit. In a second step, which would take place six months afterwards, they would be examined on each of the eight topics. The only documentation allowed during the exam would be the reports generated in the first phase of the experiment. Subjects scoring best would be rewarded. These instructions had two practical effects: first, the competitive setup was an extra motivation for achieving better results. And second, users tried to take advantage of all available space, and thus most reports were close to the fifty-sentence limit. The time limit per topic was set to 30 minutes, which is tight for the information synthesis task, but prevents the effects of fatigue. We implemented an interface to facilitate the generation of extractive reports. The system displays a list with the titles of relevant documents in chronological order. Clicking on a title displays the full document, where the user can select any sentence(s) and add them to the final report. A different frame displays the selected sentences (also in chronological order), together with one bar indicating the remaining time and another bar indicating the remaining space. The 50-sentence limit can be temporarily exceeded and, when the 30-minute limit has been reached, the user can still remove sentences from the report until the sentence limit is reached again.

2.3 Questionnaires
After summarizing every topic, the following questionnaire was filled in by every user:
• Who are the main people involved in the topic?
• What are the main organizations participating in the topic?
• What are the key factors in the topic?
Users provided free-text answers to these questions, with their freshly generated summary at hand. We did not provide any suggestions or constraints at this point, except that a maximum of eight slots were available per question (i.e. a maximum of 8×3 = 24 key concepts per topic, per user). This is, for instance, the answer of one user for topic 42, about the invasion of Haiti by UN and USA troops in 1994:

People: Jean Bertrand Aristide, Clinton, Raoul Cedras, Philippe Biambi, Michel Josep Francois
Organizations: ONU (UN), EEUU (USA), OEA (OAS)
Factors: militares golpistas (coup attempting soldiers), golpe militar (coup attempt), restaurar la democracia (reinstatement of democracy)

Finally, a single list of key concepts is generated for each topic, joining all the different answers. Redundant concepts (e.g. "war" and "conflict") were inspected and collapsed by hand. These lists of key concepts constitute the gold standard for the similarity metric described in Section 3.2.5. Besides identifying key concepts, users also filled in the following questionnaire:
• Were you familiar with the topic?
• Was it hard for you to elaborate the report?
• Did you miss the possibility of introducing annotations or rewriting parts of the report by hand?
• Do you consider that you generated a good report?
• Are you tired?
Out of the answers provided by users, the most remarkable facts are that:
• only in 6% of the cases the user missed "a lot" the possibility of rewriting/adding comments to the topic.
The fact that reports are made extractively did not seem to be a significant problem for our users. • in 73% of the cases, the user was quite or very satisfied about his summary. These are indications that the practical constraints imposed on the task (time limit and extractive nature of the summaries) do not necessarily compromise the representativeness of the testbed. The time limit is very tight, but the temporal arrangement of documents and their highly redundant nature facilitates skipping repetitive material (some pieces of news are discarded just by looking at the title, without examining the content). 2.4 Generation of baseline reports We have automatically generated baseline reports in two steps: • For every topic, we have produced 30 tentative baseline reports using DUC style criteria: – 18 summaries consist only of picking the first sentence out of each document in 18 different document subsets. The subsets are formed using different strategies, e.g. the most relevant documents for the query (according to the Inquery search engine), one document per day, the first or last 50 documents in chronological order, etc. – The other 12 summaries consist of a) picking the first n sentences out of a set of selected documents (with different values for n and different sets of documents) and b) taking the full content of a few documents. In both cases, document sets are formed with similar criteria as above. • Out of these 30 baseline reports, we have selected the 10 reports which have the highest sentence overlap with the manual summaries. The second step increases the quality of the baselines, making the task of differentiating manual and baseline reports more challenging. 3 Comparison of similarity metrics Formal aspects of a summary (or report), such as legibility, grammatical correctness, informativeness, etc., can only be evaluated manually. However, automatic evaluation metrics can play a useful role in the evaluation of how well the information from the original sources is preserved (Mani, 2001). Previous studies have shown that it is feasible to evaluate the output of summarization systems automatically (Lin and Hovy, 2003). The process is based in similarity metrics between texts. The first step is to establish a (manual) reference summary, and then the automatically generated summaries are ranked according to their similarity to the reference summary. The challenge is, then, to define an appropriate proximity metric for reports generated in the information synthesis task. 3.1 How to compare similarity metrics without human judgments? The QARLA estimation In tasks such as Machine Translation and Summarization, the quality of a proximity metric is measured in terms of the correlation between the ranking produced by the metric, and a reference ranking produced by human judges. An optimal similarity metric should produce the same ranking as human judges. In our case, acquiring human judgments about the quality of the baseline reports is too costly, and probably cannot be done reliably: a fine-grained evaluation of 50-sentence reports summarizing sets of 100 documents is a very complex task, which would probably produce different rankings from different judges. We believe there is a cheaper and more robust way of comparing similarity metrics without using human assessments. We assume a simple hypothesis: the best metric should be the one that best discriminates between manual and automatically generated reports. 
In other words, a similarity metric that cannot distinguish manual and automatic reports cannot be a good metric. Then, all we need is an estimation of how well a similarity metric separates manual and automatic reports. We propose to use the probability that, given any manual report Mref, any other manual report M is closer to Mref than any other automatic report A:

QARLA(sim) = P( sim(M, Mref) > sim(A, Mref) ),   with M, Mref ∈ M and A ∈ A

where M is the set of manually generated reports, A is the set of automatically generated reports, and "sim" is the similarity metric being evaluated. We refer to this value as the QARLA estimation (a quality criterion for reports evaluation metrics). QARLA has two interesting features:
• No human assessments are needed to compute QARLA. Only a set of manually produced summaries and a set of automatic summaries, for each topic considered. This reduces the cost of creating the testbed and, in addition, eliminates the possible bias introduced by human judges.
• It is easy to collect enough data to achieve statistically significant results. For instance, our testbed provides 720 combinations per topic to estimate the QARLA probability (we have nine manual plus ten automatic summaries per topic).
A good QARLA value does not guarantee that a similarity metric will produce the same rankings as human judges, but a good similarity metric must have a good QARLA value: it is unlikely that a measure that cannot distinguish between manual and automatic summaries can still produce high-quality rankings of automatic summaries by comparison to manual reference summaries.

3.2 Similarity metrics
We have compared five different metrics using the QARLA estimation. The first three are meant as baselines; the fourth is the standard similarity metric used to evaluate summaries (ROUGE); and the last one, introduced in this paper, is based on the overlapping of key concepts.

3.2.1 Baseline 1: Document co-selection metric
The following metric estimates the similarity of two reports from the set of documents which are represented in both reports (i.e. at least one sentence in each report belongs to the document):

DocSim(Mr, M) = |Doc(Mr) ∩ Doc(M)| / |Doc(Mr)|

where Mr is the reference report, M a second report and Doc(Mr), Doc(M) are the documents to which the sentences in Mr, M belong.

3.2.2 Baselines 2 and 3: Sentence co-selection
The more sentences in common between two reports, the more similar their content will be. We can measure Recall (how many sentences from the reference report are also in the contrastive report) and Precision (how many sentences from the contrastive report are also in the reference report):

SentenceSimR(Mr, M) = |S(Mr) ∩ S(M)| / |S(Mr)|
SentenceSimP(Mr, M) = |S(Mr) ∩ S(M)| / |S(M)|

where S(Mr), S(M) are the sets of sentences in the reports Mr (reference) and M (contrastive).

3.2.3 Baseline 4: Perplexity
A language model is a probability distribution over word sequences obtained from some training corpora (see e.g. Manning and Schutze, 1999). Perplexity is a measure of the degree of surprise of a text or corpus given a language model. In our case, we build a language model LM(Mr) for the reference report Mr, and measure the perplexity of the contrastive report M as compared to that language model:

PerplexitySim(Mr, M) = 1 / Perp(LM(Mr), M)

We have used the Good-Turing discount algorithm to compute the language models (Clarkson and Rosenfeld, 1997). Note that this is also a baseline metric, because it only measures whether the content of the contrastive report is compatible with the reference report, but it does not consider the coverage: a single sentence from the reference report will have a low perplexity, even if it covers only a small fraction of the whole report. This problem is mitigated by the fact that we are comparing reports of approximately the same size and without repeated sentences.
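Once a similarity function is fixed, the QARLA estimation itself is only a few lines of code. The sketch below iterates over all (Mref, M, A) combinations exactly as in the definition above — with nine manual and ten automatic reports per topic this yields the 720 combinations mentioned earlier — and plugs in the sentence-recall baseline of Section 3.2.2, with reports represented simply as sets of sentence identifiers. The data are invented.

```python
from itertools import product

def qarla(manual_reports, automatic_reports, sim):
    # fraction of combinations in which a manual report M is closer to the
    # manual reference Mref than an automatic report A is
    hits = total = 0
    for m_ref in manual_reports:
        others = [m for m in manual_reports if m is not m_ref]
        for m, a in product(others, automatic_reports):
            hits += sim(m, m_ref) > sim(a, m_ref)
            total += 1
    return hits / total

def sentence_sim_r(report, reference):
    # SentenceSimR: fraction of reference sentences also present in the report
    return len(report & reference) / len(reference)

manual = [{1, 2, 3}, {2, 3, 4}, {1, 3, 4}]     # manual reports (sentence ids)
automatic = [{7, 8}, {3, 9}]                   # automatic reports (sentence ids)
print(qarla(manual, automatic, sentence_sim_r))
```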
Note that this is also a baseline metric, because it only measures whether the content of the contrastive report is compatible with the reference report, but it does not consider the coverage: a single sentence from the reference report will have a low perplexity, even if it covers only a small fraction of the whole report. This problem is mitigated by the fact that we are comparing reports of approximately the same size and without repeated sentences. 3.2.4 ROUGE metric The distance between two summaries can be established as a function of their vocabulary (unigrams) and how this vocabulary is used (n-grams). From this point of view, some of the measures used in the evaluation of Machine Translation systems, such as BLEU (Papineni et al., 2002), have been imported into the summarization task. BLEU is based in the precision and n-gram co-ocurrence between an automatic translation and a reference manual translation. (Lin and Hovy, 2003) tried to apply BLEU as a measure to evaluate summaries, but the results were not as good as in Machine Translation. Indeed, some of the characteristics that define a good translation are not related with the features of a good summary; then Lin and Hovy proposed a recallbased variation of BLEU, known as ROUGE. The idea is the same: the quality of a proposed summary can be calculated as a function of the n-grams in common between the units of a model summary. The units can be sentences or discourse units: ROUGEn = P C∈{MU} P n-gram∈C Countm P C∈{MU} P n-gram∈C Count where MU is the set of model units, Countm is the maximum number of n-grams co-ocurring in a peer summary and a model unit, and Count is the number of n-grams in the model unit. It has been established that unigram and bigram based metrics permit to create a ranking of automatic summaries better (more similar to a human-produced ranking) than n-grams with n > 2. For our experiment, we have only considered unigrams (lemmatized words, excluding stop words), which gives good results with standard summaries (Lin and Hovy, 2003). 3.2.5 Key concepts metric Two summaries generated by different subjects may differ in the documents that contribute to the summary, in the sentences that are chosen, and even in the information that they provide. In our Information Synthesis settings, where topics are complex and the number of documents to summarize is large, it is likely to expect that similarity measures based on document, sentence or n-gram overlap do not give large similarity values between pairs of manually generated summaries. Our hypothesis is that two manual reports, even if they differ in their information content, will have the same (or very similar) key concepts; if this is true, comparing the key concepts of two reports can be a better similarity measure than the previous ones. In order to measure the overlap of key concepts between two reports, we create a vector ⃗kc for every report, such that every element in the vector represents the frequency of a key concept in the report in relation to the size of the report: kc(M)i = freq(Ci, M) |words(M)| being freq(Ci, M) the number of times the key concept Ci appears in the report M, and |words(M)| the number of words in the report. 
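The two formulas in this subsection are flattened in the layout above; under the standard reading of ROUGE-n recall and of the key-concept vector definition just given, they are:

\mathrm{ROUGE}_n = \frac{\sum_{C \in \{MU\}} \sum_{n\text{-gram} \in C} \mathrm{Count}_m}
                        {\sum_{C \in \{MU\}} \sum_{n\text{-gram} \in C} \mathrm{Count}}
\qquad\qquad
kc(M)_i = \frac{\mathrm{freq}(C_i, M)}{|\mathrm{words}(M)|}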
The key concept similarity NICOS (Nuclear Informative Concept Similarity) between two reports M and Mr can then be defined as the inverse of the Euclidean distance between their associated concept vectors:

NICOS(M, Mr) = 1 / |kc(Mr) − kc(M)|

In our experiment, the dimensions of kc vectors correspond to the list of key concepts provided by our test subjects (see Section 2.3). This list is our gold standard for every topic.

4 Experimental results

Figure 1 shows, for every topic (horizontal axis), the QARLA estimation obtained for each similarity metric, i.e., the probability of a manual report being closer to another manual report than to an automatic report. Table 2 shows the average QARLA measure across all topics.

Metric          TT topics   IE topics
Perplexity        0.19        0.60
DocSim            0.20        0.34
SentenceSimR      0.29        0.52
SentenceSimP      0.38        0.57
ROUGE             0.54        0.53
NICOS             0.77        0.52

Table 2: Average QARLA

For the six TT topics, the key concept similarity NICOS performs 43% better than ROUGE, and all baselines give poor results (all their QARLA probabilities are below chance, QARLA < 0.5). A nonparametric Wilcoxon sign test confirms that the difference between NICOS and ROUGE is highly significant (p < 0.005). This is an indication that the Information Synthesis task, as we have defined it, should not be studied as a standard summarization problem. It also confirms our hypothesis that key concepts tend to be stable across different users, and may help to generate the reports. The behavior of the two Information Extraction (IE) topics is substantially different from that of the TT topics. While the ROUGE measure remains stable (0.53 versus 0.54), the key concept similarity is much worse with IE topics (0.52 versus 0.77). On the other hand, all baselines improve, and some of them (SentenceSim precision and perplexity) give better results than both ROUGE and NICOS. Of course, no reliable conclusion can be obtained from only two IE topics. But the observed differences suggest that TT and IE may need different approaches, both to the automatic generation of reports and to their evaluation.

Figure 1: Comparison of similarity metrics by topic

One possible reason for this different behavior is that IE topics do not have a set of consistent key concepts; every case of a hunger strike, for instance, involves different people, organizations and places. The average number of different key concepts is 18.7 for TT topics and 28.5 for IE topics, a difference that reveals less agreement between subjects, supporting this argument.

5 Related work

Besides the measures included in our experiment, there are other criteria for comparing summaries which could also be tested for Information Synthesis:

Annotation of relevant sentences in a corpus. (Khandelwal et al., 2001) propose a task, called "Temporal Summarization", that combines summarization and topic tracking. The paper describes the creation of an evaluation corpus in which the most relevant sentences in a set of related news were annotated. Summaries are evaluated with a measure called "novel recall", based on sentences selected by a summarization system and sentences manually associated with events in the corpus. The agreement rate between subjects in the identification of key events and the sentence annotation does not correspond with the agreement between reports that we have obtained in our experiments. There are, at least, two reasons to explain this:

• (Khandelwal et al., 2001) work on an average of 43 documents, half the size of the topics in our corpus.
• Although there are topics in both experiments, the information needs in our testbed are more complex (e.g. motivations for the invasion of Chechnya) Factoids. One of the problems in the evaluation of summaries is the versatility of human language. Two different summaries may contain the same information. In (Halteren and Teufel, 2003), the content of summaries is manually represented, decomposing sentences in factoids or simple facts. They also annotate the composition, generalization and implication relations between extracted factoids. The resulting measure is different from unigram based similarity. The main problem of factoids, as compared to other metrics, is that they require a costly manual processing of the summaries to be evaluated. 6 Conclusions In this paper, we have reported an empirical study of the “Information Synthesis” task, defined as the process of (given a complex information need) extracting, organizing and relating the pieces of information contained in a set of relevant documents, in order to obtain a comprehensive, non redundant report that satisfies the information need. We have obtained two main results: • The creation of an Information Synthesis testbed (ISCORPUS) with 72 reports manually generated by 9 subjects for 8 complex topics with 100 relevant documents each. • The empirical comparison of candidate metrics to estimate the similarity between reports. Our empirical comparison uses a quantitative criterion (the QARLA estimation) based on the hypothesis that a good similarity metric will be able to distinguish between manual and automatic reports. According to this measure, we have found evidence that the Information Synthesis task is not a standard multi-document summarization problem: state-ofthe-art similarity metrics for summaries do not perform equally well with the reports in our testbed. Our most interesting finding is that manually generated reports tend to have the same key concepts: a similarity metric based on overlapping key concepts (NICOS) gives significantly better results than metrics based on language models, n-gram coocurrence and sentence overlapping. This is an indication that detecting relevant key concepts is a promising strategy in the process of generating reports. Our results, however, has also some intrinsic limitations. Firstly, manually generated summaries are extractive, which is good for comparison purposes, but does not faithfully reflect a natural process of human information synthesis. Another weakness is the maximum time allowed per report: 30 minutes seems too little to examine 100 documents and extract a decent report, but allowing more time would have caused an excessive fatigue to users. Our volunteers, however, reported a medium to high satisfaction with the results of their work, and in some occasions finished their task without reaching the time limit. ISCORPUS is available at: http://nlp.uned.es/ISCORPUS Acknowledgments This research has been partially supported by a grant of the Spanish Government, project HERMES (TIC-2000-0335-C03-01). We are indebted to E. Hovy for his comments on an earlier version of this paper, and C. Y. Lin for his assistance with the ROUGE measure. Thanks also to our volunteers for their valuable cooperation. References P. Clarkson and R. Rosenfeld. 1997. Statistical language modeling using the CMU-Cambridge toolkit. In Proceeding of Eurospeech ’97, Rhodes, Greece. J. Goldstein, V. O. Mittal, J. G. Carbonell, and J. P. Callan. 2000. 
Creating and Evaluating Multi-Document Sentence Extract Summaries. In Proceedings of Ninth International Conferences on Information Knowledge Management (CIKM´00), pages 165–172, McLean, VA. H. V. Halteren and S. Teufel. 2003. Examining the Consensus between Human Summaries: Initial Experiments with Factoids Analysis. In HLT/NAACL-2003 Workshop on Automatic Summarization, Edmonton, Canada. V. Khandelwal, R. Gupta, and J. Allan. 2001. An Evaluation Corpus for Temporal Summarization. In Proceedings of the First International Conference on Human Language Technology Research (HLT 2001), Tolouse, France. C. Lin and E. H. Hovy. 2003. Automatic Evaluation of Summaries Using N-gram Co-ocurrence Statistics. In Proceeding of the 2003 Language Technology Conference (HLT-NAACL 2003), Edmonton, Canada. I. Mani. 2001. Automatic Summarization, volume 3 of Natural Language Processing. John Benjamins Publishing Company, Amsterdam/Philadelphia. C. D. Manning and H. Schutze. 1999. Foundations of statistical natural language processing. MIT Press, Cambridge Mass. P. Over. 2003. Introduction to DUC-2003: An Intrinsic Evaluation of Generic News Text Summarization Systems. In Proceedings of Workshop on Automatic Summarization (DUC 2003). K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311– 318, Philadelphia. C. Peters, M. Braschler, J. Gonzalo, and M. Kluck, editors. 2002. Evaluation of Cross-Language Information Retrieval Systems, volume 2406 of Lecture Notes in Computer Science. SpringerVerlag, Berlin-Heidelberg-New York. D. R. Radev, J. Hongyan, and M. Budzikowska. 2000. Centroid-Based Summarization of Multiple Documents: Sentence Extraction, UtilityBased Evaluation, and User Studies. In Proceedings of the Workshop on Automatic Summarization at the 6th Applied Natural Language Processing Conference and the 1st Conference of the North American Chapter of the Association for Computational Linguistics, Seattle, WA, April. | 2004 | 27 |
Mining metalinguistic activity in corpora to create lexical resources using Information Extraction techniques: the MOP system Carlos Rodríguez Penagos Language Engineering Group, Engineering Institute UNAM, Ciudad Universitaria A.P. 70-472 Coyoacán 04510 Mexico City, México [email protected] Abstract This paper describes and evaluates MOP, an IE system for automatic extraction of metalinguistic information from technical and scientific documents. We claim that such a system can create special databases to bootstrap compilation and facilitate update of the huge and dynamically changing glossaries, knowledge bases and ontologies that are vital to modern-day research. 1 Introduction Availability of large-scale corpora has made it possible to mine specific knowledge from free or semi-structured text, resulting in what many consider by now a reasonably mature NLP technology. Extensive research in Information Extraction (IE) techniques, especially with the series of Message Understanding Conferences of the nineties, has focused on tasks such as creating and updating databases of corporate join ventures or terrorist and guerrilla attacks, while the ACQUILEX project used similar methods for creating lexical databases using the highly structured environment of machine-readable dictionary entries and other resources. Gathering knowledge from unstructured text often requires manually crafting knowledgeengineering rules both complex and deeply dependent of the domain at hand, although some successful experiences using learning algorithms have been reported (Fisher et al., 1995; Chieu et al., 2003). Although mining specific semantic relations and subcategorization information from free-text has been successfully carried out in the past (Hearst, 1999; Manning, 1993), automatically extracting lexical resources (including terminological definitions) from text in special domains has been a field less explored, but recent experiences (Klavans et al., 2001; Rodríguez, 2001; Cartier, 1998) show that compiling the extensive resources that modern scientific and technical disciplines need in order to manage the explosive growth of their knowledge, is both feasible and practical. A good example of this NLP-based processing need is the MedLine abstract database maintained by the National Library of Medicine1 (NLM), which incorporates around 40,000 Health Sciences papers each month. Researchers depend on these electronic resources to keep abreast of their rapidly changing field. In order to maintain and update vital indexing references such as the Unified Medical Language System (UMLS) resources, the MeSH and SPECIALIST vocabularies, the NLM staff needs to review 400,000 highly-technical papers each year. Clearly, neology detection, terminological information update and other tasks can benefit from applications that automatically search text for information, e.g., when a new term is introduced or an existing one is modified due to data or theory-driven concerns, or, in general, when new information about sublanguage usage is being put forward. But the usefulness of robust NLP applications for special-domain text goes beyond glossary updates. The kind of categorization information implicit in many definitions can help improve anaphora resolution, semantic typing or acronym identification in these corpora, as well as enhance “semantic rerendering” of special-domain ontologies and thesaurii (Pustejovsky et al., 2002). 
In this paper we describe and evaluate the MOP2 IE system, implemented to automatically create Metalinguistic Information Databases (MIDs) from large collections of special-domain 1 http://www.nlm.nih.gov/ 2 Metalinguistic Operation Processor research papers. Section 2 will lay out the theory, methodology and the empirical research grounding the application, while Section 3 will describe the first phase of the MOP tasks: accurate location of good candidate metalinguistic sentences for further processing. We experimented both with manually coded rules and with learning algorithms for this task. Section 4 focuses on the problem of identifying and organizing into a useful database structure the different linguistic constituents of the candidate predications, a phase similar to what are known in the IE literature as Named-Entity recognition, Element and Scenario template fill-up tasks. Finally, Section 5 discusses results and problems of our experiments, as well as future lines of research. 2 Metalanguage and term evolution in scientific disciplines 2.1 Explicit Metalinguistic Operations Preliminary empirical work to explore how researchers modify the terminological framework of their highly complex conceptual systems, included manual review of a corpus of 19 sociology articles (138,183 words) published in various British, American and Canadian academic journals with strict peer-review policies. We look at how term manipulation was done as well as how metalinguistic activity was signaled in text, both by lexical and paralinguistic means. Some of the indicators found included verbs and verbal phrases like called, known as, defined as, termed, coined, dubbed, and descriptors such as term and word. Other non-lexical markers included quotation marks, apposition and text formatting. A collection of potential metalinguistic patterns identified in the exploratory Sociology corpus was expanded (using other verbal tenses and forms) to 116 queries sent to the scientific and learned domains of the British National Corpus. The resulting 10,937 sentences were manually classified as metalinguistic or otherwise, with 5,407 (49.6% of total) found to be truly metalinguistic sentences. The presence of three components described below (autonym, informative segment and markers/operators) was the criteria for classification. Reliability of human subjects for this task has not been reported in the literature, and was not evaluated in our experiments. Careful analysis of this extensive corpus presented some interesting facts about what we have termed “Explicit Metalinguistic Operations” (or EMOs) in specialized discourse: A) EMOs usually do not follow the genusdifferentia scheme of aristotelian definitions, nor conform to the rigid and artificial structure of dictionary entries. More often than not, specific information about language use and term definition is provided by sentences such as: (1) This means that they ingest oxygen from the air via fine hollow tubes, known as tracheae, in which the term trachea is linked to the description fine hollow tubes in the context of a globally nonmetalinguistic sentence. Partial and heterogeneous information, rather that a complete definition, are much more common. B) Introduction of metalinguistic information in discourse is highly regular, regardless of the specific domain. 
This can be credited to the fact that the writer needs to mark these sentences for special processing by the reader, as they dissect across two different semiotic levels: a metalanguage and its object language, to use the terminology of logic where these concepts originate.3 Its constitutive markedness means that most of the times these sentences will have at least two indicators present, for example a verb and a descriptor, or quotation marks, or even have preceding sentences that announce them in some way. These formal and cognitive properties of EMOs facilitate the task of locating them accurately in text. C) EMOs can be further analyzed into 3 distinct components, each with its own properties and linguistic realizations: i) An autonym (see note 3): One or more selfreferential lexical items that are the logical or grammatical subject of a predication that needs not be a complete grammatical sentence. 3 At a very basic semiotic level natural language has to be split (at least methodologically) into two distinct systems that share the same rules and elements: a metalanguage, which is a language that is used to talk about another one, and an object language, which in turn can refer to and describe objects in the mind or in the physical world. The two are isomorphic and this accounts for reflexivity, the property of referring to itself, as when linguistic items are mentioned instead of being used normally in an utterance. Rey-Debove (1978) and Carnap (1934) call this condition autonymy. ii) An informative segment: a contribution of relevant information about the meaning, status, coding or interpretation of a linguistic unit. Informative segments constitute what we state about the autonymical element. iii) Markers/Operators: Elements used to mark or made prominent whole discourse operation, on account of its non-referential, metalinguistic nature. They are usually lexical, typographic or pragmatic elements that articulate autonyms and informative segments into a predication. Thus, in a sentence such as (2), the [autonym] is marked in square brackets, the {informational segment} in curly brackets and the <markeroperators> in angular brackets: (2) {The bit sequences representing quanta of knowledge} <will be called “>[Kenes]<”>, {a neologism intentionally similar to 'genes'}. 2.2 Defaults, knowledge and knowledge of language The 5,400 metalinguistic sentences from our BNC-based test corpus (henceforth, the EMO corpus) reflect an important aspect of scientific sublanguages, and of the scientific enterprise in general. Whenever scientists and scholars advance the state of the art of a discipline, the language they use has to evolve and change, and this buildup is carried out under metalinguistic control. Previous knowledge is transformed into new scientific common ground and ontological commitments are introduced and defended when semantic reference is established. That is why when we want to structure and acquire new knowledge we have to go through a resource-costly cognitive process that integrates, within coherent conceptual structures, a considerable amount of new and very complex lexical items and terms. It has to be pointed out that non-specialized language is not abundant4 in these kinds of metalinguistic exchanges because (unless in the context of language acquisition) we usually rely on a lexical competence that, although subsequently modified and enhanced, reaches the plateau of a generalized lexicon relatively early in our adult life. 
Technical terms can be thought of as semantic anomalies, in the sense that they are ad hoc 4 Our study shows that they represent between 1 and 6% of all sentences across different domains. constructs strongly bounded to a model, a domain or a context, and are not, by definition, part of the far larger linguistic competence from a first native language. The information provided by EMOs is not usually inferable from previous one available to the speaker’s community or expert group, and does not depend on general language competence by itself, but nevertheless is judged important and relevant enough to warrant the additional processing effort involved. Conventional resources like lexicons and dictionaries compile established meaning definitions. They can be seen as repositories of the default, core lexical information of words or terms used by a community (that is, the information available to an average, idealized speaker). A Metalinguistic Information Database (MID), on the other hand, compiles the real-time data provided by metalanguage analysis of leading-edge research papers, and can be conceptualized as an anti-dictionary: a listing of exceptions, special contexts and specific usage, of instances where meaning, value or pragmatic conditions have been spotlighted by discourse for cognitive reasons. The non-default and highly relevant information from MIDs could provide the material for new interpretation rules in reasoning applications, when inferences won’t succeed because the states of the lexicoconceptual system have changed. When interpreting text, regular lexical information is applied by default under normal conditions, but more specific pragmatic or discursive information can override it if necessary, or if context demands so (Lascarides & Copestake, 1995). A neologism or a word in an unexpected technical sense could stump a NLP system that assumes it will be able to use default information from a machine-readable dictionary. 3 Locating metalinguistic information in text: two approaches When implementingan IE application to mine metalinguistic information from text, the first issue to tackle is how to obtain a reliable set of candidate sentences from free text for input into the next phases of extraction. From our initial corpus analysis we selected 44 patterns that showed the best reliability for being EMO indicators. We start our processing5 by tokenizing text, which then is 5 Our implementation is Python-based, using the run through a cascade of finite-state devices based on identification patterns that extract a candidate set for filtering. Our filtering strategies in effect distinguish between useful results such as (3) from non-metalinguistic instances like (4): (3) Since the shame that was elicited by the coding procedure was seldom explicitly mentioned by the patient or the therapist, Lewis called it unacknowledged shame. (4) It was Lewis (1971;1976) who called attention to emotional elements in what until then had been construed as a perceptual phenomenon . For this task, we experimented with two strategies: First, we used corpus-based collocations to discard non-metalinguistic instances, for example the presence of attention in sentence (4) next to the marker called. Since immediate co-text seems important for this classification task, we also implemented learning algorithms that were trained on a subset from our EMO corpus, using as vectors either POS tags or word forms, at 1, 2, and 3 positions adjacent before and after our markers. 
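A minimal sketch of how such context vectors can be built from a tagged sentence follows; the function and parameter names are ours (the paper's own pipeline is NLTK-based, which we do not reproduce here).

def context_vector(tokens, tags, i, width=3, use_tags=True):
    # Build the (left, marker, right) feature triple for the marker at
    # position i, using `width` POS tags or word forms on each side,
    # e.g. ('VB WP NNP', 'calls', 'DT NN NN').
    feats = tags if use_tags else tokens
    left = " ".join(feats[max(0, i - width):i])
    right = " ".join(feats[i + 1:i + 1 + width])
    return (left, tokens[i], right)

# A training instance pairs one such vector with a manually assigned
# YES/NO label: (context_vector(tokens, tags, i), 'YES')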
These approaches are representative of wider paradigmatic approaches to NLP: symbolic and statistic techniques, each with their own advantages and limitations. Our evaluations of the MOP system are based on test runs over 3 document sets: a) our original exploratory corpus of sociology research papers [5581 sentences, 243 EMOs]; b) an online histology textbook [5146 sentences, 69 EMOs] ; and c) a small sample from the MedLine abstract database [1403 sentences, 10 EMOs]. Using collocational information, our first approach fared very well, presenting good precision numbers, but not so encouraging recall. The sociology corpus, for example, gave 0.94 precision (P) and 0.68 recall (R), while the histology one presented 0.9 P and 0.5 R. These low recall numbers reflect the fact that we only selected a subset of the most reliable and common metalinguistic patterns, and our list is not exhaustive. Example (5) shows one kind of metalinguistic sentence (with a copulative structure) attested in corpora, NLTK toolkit (nltk.sf.net) developed by E. Loper and S. Byrd at the University of Pennsylvania, although we have replaced stochastic POS taggers with an implementation of the Brill algorithm by Hugo Liu at MIT. Our output files follow XML standards to ensure transparency, portability and accessibility but that the system does not attempt to extract or process: (5) “Intercursive” power , on the other hand , is power in Weber's sense of constraint by an actor or group of actors over others. In order to better compare our two strategies, we decided to also zoom in on a more limited subset of verb forms for extraction (namely, calls, called, call), which presented ratios of metalinguistic relevance in our MOP corpus, ranging from 100% positives (for the pattern so called + quotation marks) to 77% (called, by itself) to 31% (call). Restricted to these verbs, our metrics show precision and recall rates of around 0.97, and an overall F-measure of 0.97.6 Of 5581 sentences (96 of which were metalinguistic sentences signaled by our cluster of verbs), 83 were extracted, with 13 (or 15.6% of candidates) filtered-out by collocations. For our learning experiments (an approach we have called contextual feature language models), we selected two well-known algorithms that showed promise for this classification task.7 The naive Bayes (NB) algorithm estimates the conditional probability of a set of features given a label, using the product of the probabilities of the individual features given that label. The Maximum Entropy model establishes a probability distribution that favors entropy, or uniformity, subject to the constraints encoded in the feature-label correlation. When training our ME classifiers, Generalized (GISMax) and Improved Iterative Scaling (IISMax) algorithms are used to estimate the optimal maximum entropy of a feature set, given a corpus. 1,371 training sentences were converted into labeled vectors, for example using 3 positions and POS tags: ('VB WP NNP', 'calls', 'DT NN NN') /'YES'@[102]. The different number of positions considered to the left and right of the markers in our training corpus, as well as the nature of the features selected (there are many more word-types than POS tags) ensured that our 3-part vector introduced a wide range of features against our 2 possible YES-NO labels for processing by our algorithms. 
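A hand-rolled, Laplace-smoothed naive Bayes filter over such labeled vectors could look as follows; this is only a sketch under our own naming, and it does not reproduce the Maximum Entropy models (GIS/IIS) used in the experiments.

import math
from collections import Counter, defaultdict

def train_nb(labeled_vectors):
    # labeled_vectors: list of ((left, marker, right), 'YES'/'NO') pairs.
    label_counts = Counter(label for _, label in labeled_vectors)
    feat_counts = defaultdict(Counter)        # label -> (slot, value) counts
    vocab = defaultdict(set)                  # slot  -> observed values
    for vec, label in labeled_vectors:
        for slot, value in zip("LMR", vec):
            feat_counts[label][(slot, value)] += 1
            vocab[slot].add(value)
    return label_counts, feat_counts, vocab

def classify_nb(model, vec):
    label_counts, feat_counts, vocab = model
    total = sum(label_counts.values())
    def score(label):
        n = label_counts[label]
        s = math.log(n / total)
        for slot, value in zip("LMR", vec):
            c = feat_counts[label][(slot, value)]
            s += math.log((c + 1) / (n + len(vocab[slot]) + 1))
        return s
    return max(label_counts, key=score)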
6 With a β factor of 1.0, and within the sociology document set.
7 See Ratnaparkhi (1997) and Berger et al. (1996) for a formal description of these algorithms.

Although our test runs using only collocations showed initially that structural regularities would perform well, both with our restricted lemma cluster and with our wider set of verbs and markers, our intuitions about improvement with more features (more positions to the right or left of the markers) or a more controlled and grammatically restricted environment (a finite set of surrounding POS tags) turned out to be overly optimistic. Nevertheless, stochastic approaches that used short-range features did perform very well, in line with the hand-coded approach. The results of the different algorithms, restricted to the lexeme call, are presented in Table 1, while Figures 1 and 2 present best results in the learning experiments for the complete set of patterns used in the collocation approach, over two of our evaluation corpora.

Type     Positions   Tags/Words   Features   Accuracy   Precision   Recall
GISMax       1           W           1254       0.97       0.96       0.98
IISMax       1           T            136       0.95       0.96       0.94
IISMax       1           W           1252       0.92       0.97       0.9
GISMax       1           T            138       0.91       0.9        0.96
GISMax       2           T            796       0.88       0.93       0.92
IISMax       2           T            794       0.86       0.95       0.89
IISMax       3           W           4290       0.87       0.85       0.98
GISMax       3           W           4292       0.87       0.85       0.98
IISMax       2           W           3186       0.86       0.87       0.95
GISMax       2           W           3188       0.86       0.87       0.95
NB           1           T            136       0.88       0.97       0.84
NB           2           T            794       0.87       0.96       0.84
NB           3           W           4290       0.73       0.86       0.77

Table 1. Best metrics for "call" lexeme, sorted by F-measure and classifier accuracy

Figure 1. Best metrics for Sociology corpus (P/R/F for NB (3/T), IIS (1/W) and GIS (1/W))
Figure 2. Best metrics for Histology corpus (P/R/F for NB (3/W), IIS (3/W) and GIS (1/W))
Figures 1 & 2. Best results for filtering algorithms.8

8 Legend: P: Precision; R: Recall; F: F-Measure. NB: naïve Bayes; IIS: Maximum Entropy trained with Improved Iterative Scaling; GIS: Maximum Entropy trained with Generalized Iterative Scaling. (Positions/Feature type)

Both Knowledge-Engineering and supervised learning approaches can be adequate for extraction of metalinguistic sentences, although learning algorithms can be helpful when procedural rules have not been compiled; they also allow easier transport of systems to new thematic domains. We plan further research into stochastic approaches to fine-tune them for the task. One issue that merits special attention is why some of the algorithms and features work well with one corpus, but not so well with another. This fact is in line with observations in Nigam et al. (1999) that naive Bayes and Maximum Entropy do not show fundamental baseline superiorities, but are dependent on other factors. A hybrid approach that combines hand-crafted collocations with classifiers customized to each pattern's behavior and morpho-syntactic contexts in corpora might offer better results in future experiments.

4 Processing EMOs to compile metalinguistic information databases

Once we have extracted candidate EMOs, the MOP system conforms to a general processing architecture shown in Figure 3. POS tagging is followed by shallow parsing that attempts limited PP-attachment. The resulting chunks are then tagged semantically as Autonyms, Agents, Markers, Anaphoric elements or simply as Noun Chunks, using heuristics based on syntactic, pragmatic and argument structure observation of the extraction patterns.
Next, a predicate processing phase selects the most likely surface realization of informational segments, autonyms and makers-operators, and proceeds to fill the templates in our databases. This was done by following different processing routes customized for each pattern using corpus analysis as well as FrameNet data from Name conferral and Name bearing frames to establish relevant arguments and linguistic realizations. Figure 3. MOP Architecture As mentioned earlier, informational segments present many realizations that distance them from the clarity, completeness and conciseness of lexicographic entries. In fact, they may show up as full-fledged clauses (6), as inter- or intrasentential anaphoric elements (7 and 8, the first one a relative clause), supply a categorization descriptor (9), or even (10) restrict themselves semantically to what we could call a sententiallyunrealized “existential variable” (with logical form ›x) indicating only that certain discourse entity is being introduced. (6) In 1965 the term soliton was coined to describe waves with this remarkable behaviour. (7) This leap brings cultural citizenship in line with what has been called the politics of citizenship . (8) They are called “endothermic compounds.” (9) One of the most enduring aspects of all social theories are those conceptual entities known as structures or groups. (10) A ›x so called cell-type-specific TF can be used by closely related cells, e.g., in erythrocytes and megakaryocytes. We have not included an anaphora-resolution module in our present system, so that instances 7, 8 and 10 will only display in the output as unresolved surface element or as existential variable place-holders,9 but these issues will be explored in future versions of the system. Nevertheless, much more common occurrences as in (11) and (12) are enough to create MIDs quite useful for lexicographers and for NLP lexical resources. (11) The Jovian magnetic field exerts an influence out to near a surface, called the "magnetopause". (12) Here we report the discovery of a soluble decoy receptor, termed decoy receptor 3 (DcR3)... The correct database entry for example 12 is presented in Table 4. Reference: MedLine sample # 6 Autonym: decoy receptor 3 (DcR3) Information a soluble decoy receptor Markers/ Operators: termed Table 4. Sample entry of MID The final processing stage presents metrics shown in Figure 4, using a ß factor of 1.0 to estimate F-measures. To better reflect overall performance in all template slots, we introduced a threshold of similarity of 65% for comparison between a golden standard slot entry and the one provided by the application. Thus, if the autonym or the informational segment is at least 2/3 of the correct response, it is counted as a positive, in many cases leveling the field for the expected errors in the prepositional phrase- or acronym- attachment algorithms, but accounting for a (basically) correct selection of superficial sentence segments. 9 For sentence (8) the system would retrieve a previous sentence: (“A few have positive enthalpies of formation”). to define “endothermic compounds”. Corpus Tokenization Candidate extraction MID Candidate Filtering Collocations ♦ Learning POS tagging & Partial parsing Semantic labeling Database template fillup 5 Results, comparisons and discussion The DEFINDER system (Klavans et al, 2001) at Columbia University is, to my knowledge, the only one fully comparable with MOP, both in scope and goals, but some basic differences between them exist. 
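One simple reading of the 65% slot-similarity threshold (our own, using token overlap as the comparison) and of the Table 4 template is sketched below; the field names follow the table, everything else is illustrative.

def slot_correct(predicted, gold, threshold=0.65):
    # Count a filled slot as correct if the predicted string covers at
    # least `threshold` of the gold-standard slot (token overlap here).
    gold_tokens = gold.lower().split()
    pred_tokens = set(predicted.lower().split())
    if not gold_tokens:
        return not predicted
    overlap = sum(1 for t in gold_tokens if t in pred_tokens)
    return overlap / len(gold_tokens) >= threshold

# The MID entry of Table 4, for example (12):
entry = {"Reference": "MedLine sample # 6",
         "Autonym": "decoy receptor 3 (DcR3)",
         "Information": "a soluble decoy receptor",
         "Markers/Operators": "termed"}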
First, DEFINDER examines user-oriented documents that are bound to contain fully-developed definitions for the layman, as the general goal of the PERSIVAL project is to present medical information to patients in a less technical language than the one of reference literature. MOP focuses on leading-edge research papers that present the less predictable informational templates of highly technical language. Secondly, by the very nature of DEFINDER’s goals their qualitative evaluation criteria include readability, usefulness and completeness as judged by lay subjects, criteria which we have not adopted here. Neither have we determined coverage against existing online dictionaries, as they have done. Taking into account the above-mentioned differences between the two systems’ methods and goals, MOP compares well with the 0.8 Precision and 0.75 Recall of DEFINDER. While the resulting MOP “definitions” generally do not present high readability or completeness, these informational segments are not meant to be read by laymen, but used by domain lexicographers reviewing existing glossaries for neological change, or, for example, in machine-readable form by applications that attempt automatic categorization for semantic rerendering of an expert ontology, since definitional contexts provide sortal information as a natural part of the process of precisely situating a term or concept against the meaning network of interrelated lexical items. The Metalinguistic Information Databases in their present form are not, in full justice, lexical knowledge bases comparable with the highly-structured and sophisticated resources that use inheritance and typed features, like LKB (Copestake et al., 1993). MIDs are semi-structured resources (midway between raw corpora and structured lexical bases) that can be further processed to convert them into usable data sources, along the lines suggested by Vossen and Copestake (1993) for the syntactic kernels of lexicographic definitions, or by Pustejovsky et al. (2002) using corpus analytics to increase the semantic type coverage of the NLM UMLS ontology. Another interesting possibility is to use a dynamically-updated MID to trace the conceptual and terminological evolution of a discipline. We believe that low recall rates in our tests are in part due to the fact that we are dealing with the wider realm of metalinguistic information, as opposed to structured definitional sentences that have been distilled by an expert for consumeroriented documents. We have opted in favor of exploiting less standardized, non-default metalinguistic information that is being put forward in text because it can’t be assumed to be part of the collective expert-domain competence (Section 2.1). In doing so, we have exposed our system to the less predictable and highly charged lexical environment of leading-edge research literature, the cauldron where knowledge and terminological systems are forged in real time, and where scientiFigure 4. Metrics for 3 corpora (# of Records/Global F-Measure) 0.6 0.7 0.8 0.9 1 Precision Recall Precision Recall Precision Recall Global Informational Segments Autonyms Histology (35/0.71) Sociology (143/0.77) MedLine (10/0.78) fic meaning and interpretation are constantly debated, modified and agreed. We have not performed major customization of the system (like enriching the tagging lexicon with medical terms), in order to preserve the ability to use the system across different domains. Domain customization may improve metrics, but at a cost for portability. 
The implementation we have described here undoubtedly shows room for improvement in some areas, including: adding other patterns for better overall recall rates, deeper parsing for more accurate semantic typing of sentence arguments, etc. Also, the issue of which learning algorithms can better perform the initial filtering of EMO candidates is still very much an open question. Applications that can turn MIDs into truly useful lexical resources by further processing them need to be written. We plan to continue development of our proof-of-concept system to explore those areas. DEFINDER and MOP both show great potential as robust lexical acquisition systems capable of handling the vast electronic resources available today to researchers and laymen alike, helping to make them more accessible and useful. In doing so, they are also fulfilling the promise of NLP techniques as mature and practical technologies. References ACQUILEX projects, final report available at: http://www.cl.cam.ac.uk/Research/NL/acquilex/ Berger, A., S. Della Pietra et al., 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, vol. 22, no. 1. Carnap, R. 1934. The Logical Syntax of Language. Routledge and Kegan, Londres 1964. Cartier, E. 1998. Analyse Automatique des textes: l’example des informations définitoires. RIFRA 1998. Sfax, Tunisia. Chieu, Hai Leong, Ng, Hwee Tou, & Lee, Yoong Keok. 2003. Closing the Gap: Learning-Based Information Extraction Rivaling KnowledgeEngineering Methods. 41st ACL. Sapporo, Japan. Copestake, A., Sanfilippo, A., Briscoe, T. and de Pavia, V. 1993. The ACQUILEX LKB: An introduction. In: Inheritance, Defaults and the Lexicon. Cambridge University Press. Fisher, D., S. Soderland, J. McCarthy, F. Feng, and W. Lehnert. 1995. Description of the UMass system as used for MUC-6. In Proceedings of MUC-6 Hearst, M. 1998. Automated discovery of wordnet relations. In Christiane Fellbaum, editor, WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA Klavans, J. and S. Muresan. 2001. Evaluation of the DEFINDER System for Fully Automatic Glossary Construction, proceedings of the American Medical Informatics Association Symposium 2001 Lascarides, A. and Copestake A. 1995. The Pragmatics of Word Meaning, Proceedings of the AAAI Spring Symposium Series: Representation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity and Generativity, Stanford CA. Manning, Ch. 1993. Automatic acquisition of a large subcategorization dictionary from corpora, In Proceedings of the 31st ACL, Columbus, OH. Nigam, K., Lafferty, J., and McCallum, A. 1999. Using Maximum Entropy for Text Classification, IJCAI-99 Workshop on Machine Learning for Information Filtering, pp. 61-67 Pustejovsky J., A. Rumshisky and J. Castaño. 2002. Rerendering Semantic Ontologies: Automatic Extensions to UMLS through Corpus Analytics. LREC 2002 Workshop on Ontologies and Lexical Knowledge Bases. Las Palmas, Canary Islands, Spain. Ratnaparkhi A. 1997. A Simple Introduction to Maximum Entropy Models for Natural Language Processing, TR 97-08, Institute for Research in Cognitive Science, University of Pennsylvania Rey-Debove, J. 1978. Le Métalangage. Le Robert, Paris. Rodríguez, C. 2001. Parsing Metalinguistic Knowledge from Texts, Selected papers from CICLING-2000 Collection in Computer Science (CCC); National Polytechnic Institute (IPN), Mexico. Vossen, P. and Copestake, A. 1993. Untangling Definition Structure into Knowledge Representation. In: Inheritance, Defaults and the Lexicon. | 2004 | 28 |
Optimizing Typed Feature Structure Grammar Parsing through Non-Statistical Indexing Cosmin Munteanu and Gerald Penn University of Toronto 10 King’s College Rd. Toronto M5S 3G4 Canada mcosmin,gpenn @cs.toronto.edu Abstract This paper introduces an indexing method based on static analysis of grammar rules and type signatures for typed feature structure grammars (TFSGs). The static analysis tries to predict at compile-time which feature paths will cause unification failure during parsing at run-time. To support the static analysis, we introduce a new classification of the instances of variables used in TFSGs, based on what type of structure sharing they create. The indexing actions that can be performed during parsing are also enumerated. Non-statistical indexing has the advantage of not requiring training, and, as the evaluation using large-scale HPSGs demonstrates, the improvements are comparable with those of statistical optimizations. Such statistical optimizations rely on data collected during training, and their performance does not always compensate for the training costs. 1 Introduction Developing efficient all-paths parsers has been a long-standing goal of research in computational linguistics. One particular class still in need of parsing time improvements is that of TFSGs. While simpler formalisms such as context-free grammars (CFGs) also face slow all-paths parsing times when the size of the grammar increases significantly, TFSGs (which generally have fewer rules than largescale CFGs) become slow as a result of the complex structures used to describe the grammatical categories. In HPSGs (Pollard and Sag, 1994), one category description could contain hundreds of feature values. This has been a barrier in transferring CFGsuccessful techniques to TFSG parsing. For TFSG chart parsers, one of the most timeconsuming operations is the retrieval of categories from the chart during rule completion (closing of constituents in the chart under a grammar rule). Looking in the chart for a matching edge for a daughter is accomplished by attempting unifications with edges stored in the chart, resulting in many failed unifications. The large and complex structure of TFS descriptions (Carpenter, 1992) leads to slow unification times, affecting the parsing times. Thus, failing unifications must be avoided during retrieval from the chart. To our knowledge, there have been only four methods proposed for improving the retrieval component of TFSG parsing. One (Penn and Munteanu, 2003) addresses only the cost of copying large categories, and was found to reduce parsing times by an average of 25% on a large-scale TFSG (MERGE). The second, a statistical method known as quickcheck (Malouf et al., 2000), determines the paths that are likely to cause unification failure by profiling a large sequence of parses over representative input, and then filters unifications at run-time by first testing these paths for type consistency. This was measured as providing up to a 50% improvement in parse times on the English Resource Grammar (Flickinger, 1999, ERG). The third (Penn, 1999b) is a similar but more conservative approach that uses the profile to re-order sister feature values in the internal data structure. This was found to improve parse times on the ALE HPSG by up to 33%. The problem with these statistical methods is that the improvements in parsing times may not justify the time spent on profiling, particularly during grammar development. 
The static analysis method introduced here does not use profiling, although it does not preclude it either. Indeed, an evaluation of statistical methods would be more relevant if measured on top of an adequate extent of non-statistical optimizations. Although quick-check is thought to produce parsing time improvements, its evaluation used a parser with only a superficial static analysis of chart indexing. That analysis, rule filtering (Kiefer et al., 1999), reduces parse times by filtering out mother-daughter unifications that can be determined to fail at compile-time. True indexing organizes the data (in this case, chart edges) to avoid unnecessary retrievals altogether, does not require the operations that it performs to be repeated once full unification is deemed necessary, and offers the support for easily adding information extracted from further static analysis of the grammar rules, while maintaining the same indexing strategy. Flexibility is one of the reasons for the successful employment of indexing in databases (Elmasri and Navathe, 2000) and automated reasoning (Ramakrishnan et al., 2001). In this paper, we present a general scheme for indexing TFS categories during parsing (Section 3). We then present a specific method for statically analyzing TFSGs based on the type signature and the structure of category descriptions in the grammar rules, and prove its soundness and completeness (Section 4.2.1). We describe a specific indexing strategy based on this analysis (Section 4), and evaluate it on two large-scale TFSGs (Section 5). The result is a purely non-statistical method that is competitive with the improvements gained by statistical optimizations, and is still compatible with further statistical improvements. 2 TFSG Terminology TFSs are used as formal representatives of rich grammatical categories. In this paper, the formalism from (Carpenter, 1992) will be used. A TFSG is defined relative to a fixed set of types and set of features, along with constraints, called appropriateness conditions. These are collectively known as the type signature (Figure 3). For each type, appropriateness specifies all and only the features that must have values defined in TFSs of that type. It also specifies the types of the values that those features can take. The set of types is partially ordered, and has a unique most general type ( – “bottom”). This order is called subsumption ( ): more specific (higher) types inherit appropriate features from their more general (lower) supertypes. Two types t1 and t2 unify (t1 t2 ) iff they have a least upper bound in the hierarchy. Besides a type signature, TFSGs contain a set of grammar (phrase) rules and lexical descriptions. A simple example of a lexical description is: john SYNSEM: SYN: np SEM: j , while an example of a phrase rule is given in Figure 1. SYN: s SEM: VPSem AGENT: NPSem SYN: np AGR: Agr SEM : NPSem , SYN: vp AGR: Agr SEM: VPSem . Figure 1: A phrase rule stating that the syntactic category s can be combined from np and vp if their values for agr are the same. The semantics of s is that of the verb phrase, while the semantics of the noun phrase serves as agent. 2.1 Typed Feature Structures A TFS (Figure 2) is like a recursively defined record in a programming language: it has a type and features with values that can be TFSs, all obeying the appropriateness conditions of the type signature. TFSs can also be seen as rooted graphs, where arcs correspond to features and nodes to substructures. 
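As a rough illustration of this record/rooted-graph view (a sketch only; the class and function names are ours, and real systems such as ALE use much more compact encodings), the TFS of Figure 2 can be written as:

class Node:
    # A TFS node: a type plus one arc per appropriate feature; feature
    # values are nodes themselves, so a TFS is a rooted graph and
    # structure sharing is simply two arcs pointing at the same node.
    def __init__(self, type_, **features):
        self.type = type_
        self.features = dict(features)

def delta(node, path):
    # Path value function: follow a sequence of features from `node`.
    for f in path:
        node = node.features[f]
    return node

# Figure 2: THROWER and THROWN share one NUMBER node (tag [1]).
shared_number = Node("singular")
throwing = Node("throwing",
                THROWER=Node("index", PERSON=Node("third"),
                             NUMBER=shared_number,
                             GENDER=Node("masculine")),
                THROWN=Node("index", PERSON=Node("third"),
                            NUMBER=shared_number,
                            GENDER=Node("neuter")))
assert delta(throwing, ["THROWER", "NUMBER"]) is delta(throwing, ["THROWN", "NUMBER"])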
A node typing function θ(q) associates a type to every node q in a TFS. Every TFS F has a unique starting or root node, qF. For a given TFS, the feature value partial function δ(f, q) specifies the node reachable from q by feature f when one exists. The path value partial function δ(π, q) specifies the node reachable from q by following a path of features π when one exists. TFSs can be unified as well. The result represents the most general consistent combination of the information from two TFSs. That information includes typing (by unifying the types), feature values (by recursive unification), and structure sharing (by an equivalence closure taken over the nodes of the arguments). For large TFSs, unification is computationally expensive, since all the nodes of the two TFSs are visited. In this process, many nodes are collapsed into equivalence classes because of structure sharing. A node x in a TFS F with root qF and a node x′ in a TFS F′ with root qF′ are equivalent (≈) with respect to F and F′ iff x = qF and x′ = qF′, or if there is a path π such that δ(π, qF) = x and δ(π, qF′) = x′
. NUMBER: PERSON: GENDER: masculine third [1]singular NUMBER: PERSON: GENDER: third neuter [1] throwing THROWER: index THROWN: index Figure 2: A TFS. Features are written in uppercase, while types are written with bold-face lowercase. Structure sharing is indicated by numerical tags, such as [1]. THROWER: THROWN: index index masculine feminine neuter singular plural first second third num gend pers PERSON: GENDER: NUMBER: pers num gend throwing index Figure 3: A type signature. For each type, appropriateness declares the features that must be defined on TFSs of that type, along with the type restrictions applying to their values. 2.2 Structure Sharing in Descriptions TFSGs are typically specified using descriptions, which logically denote sets of TFSs. Descriptions can be more terse because they can assume all of the information about their TFSs that can be inferred from appropriateness. Each non-disjunctive description can be associated with a unique most general feature structure in its denotation called a most general satisfier (MGSat). While a formal presentation can be found in (Carpenter, 1992), we limit ourselves to an intuitive example: the TFS from Figure 2 is the MGSat of the description: throwing THROWER: PERSON: third NUMBER: singular Nr GENDER : masculine THROWN : PERSON : third NUMBER : Nr GENDER : neuter . Descriptions can also contain variables, such as Nr. Structure sharing is enforced in descriptions through the use of variables. In TFSGs, the scope of a variable extends beyond a single description, resulting in structure sharing between different TFSs. In phrase structure rules (Figure 1), this sharing can occur between different daughter categories in a rule, or between a mother and a daughter. Unless the term description is explicitly used, we will use “mother” and “daughter” to refer to the MGSat of a mother or daughter description. We can classify instances of variables based on what type of structure sharing they create. Internal variables are the variables that represent internal structure sharing (such as in Figure 2). The occurrences of such variables are limited to a single category in a phrase structure rule. External variables are the variables used to share structure between categories. If a variable is used for structure sharing both inside a category and across categories, then it is also considered an external variable. For a specific category, two kinds of external variable instances can be distinguished, depending on their occurrence relative to the parsing control strategy: active external variables and inactive external variables. Active external variables are instances of external variables that are shared between the description of a category D and one or more descriptions of categories in the same rule as D visited by the parser before D as the rule is extended (completed). Inactive external variables are the external variable instances that are not active. For example, in bottom-up left-to-right parsing, all of a mother’s external variable instances would be active because, being external, they also occur in one of the daughter descriptions. Similarly, all of the leftmost daughter’s external variable instances would be inactive because this is the first description used by the parser. In Figure 1, Agr is an active external variable in the second daughter, but it is inactive in the first daughter. 
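Under the bottom-up, left-to-right control strategy assumed here, the active/inactive distinction can be read directly off a rule's descriptions. The sketch below treats each category description simply as the set of variable names it mentions; the function name and representation are ours.

def classify_variables(daughter_vars, mother_vars):
    # daughter_vars: one set of variable names per daughter, in the order
    # the parser visits them; mother_vars: variables of the mother.
    # A variable instance is external if it also occurs in some other
    # category of the rule, and active if one of those categories has
    # already been visited.
    categories = daughter_vars + [mother_vars]
    seen, result = set(), []
    for i, vars_ in enumerate(daughter_vars):
        others = set().union(*(v for j, v in enumerate(categories) if j != i))
        external = vars_ & others
        active = external & seen
        result.append({"active": active, "inactive": external - active})
        seen |= vars_
    # All of the mother's external variable instances are active, since
    # every daughter has been visited by the time the rule is completed.
    mother_external = mother_vars & set().union(*daughter_vars)
    result.append({"active": mother_external, "inactive": set()})
    return result

# Figure 1: Agr is active in the second daughter, inactive in the first.
classify_variables([{"Agr", "NPSem"}, {"Agr", "VPSem"}], {"NPSem", "VPSem"})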
The active external variable instances are important for path indexing (Section 4.2), because they represent the points at which the parser must copy structure between TFSs. They are therefore substructures that must be provided to a rule by the parsing chart if these unifications could potentially fail. They also represent shared nodes in the MGSats of a rule’s category descriptions. In our definitions, we assume without loss of generality that parsing proceeds bottom-up, with left-to-right of rule daughters. This is the ALE system’s (Carpenter and Penn, 1996) parsing strategy. Definition 1. If D1 Dn are daughter descriptions in a rule and the rules are extended from left to right, then Ext MGSat Di is the set of nodes shared between MGSat Di and MGSat D1 MGSat Di 1 . For a mother description M, Ext MGSat M is the set of nodes shared with any daughter in the same rule. Because the completion of TFSG rules can cause the categories to change in structure (due to external variable sharing), we need some extra notation to refer to a phrase structure rule’s categories at different times during a single application of that rule. By M we symbolize the mother M after M’s rule is completed (all of the rule’s daughters are matched with edges in the chart). D symbolizes the daughter D after all daughters to D’s left in D’s rule were unified with edges from the chart. An important relation exists between M and M: if qM is M’s root and qM is M’s root, then x M x M such that π for which δ π qM x and δ π qM x, θ x θ x . In other words, extending the rule extends the information states of its categories monotonically. A similar relation exists between D and D. The set of all nodes x in M such that π for which δ π qM x and δ π qM x will be denoted by x 1 (and likewise for nodes in D). There may be more than one node in x 1 because of unifications that occur during the extension of M to M. 3 The Indexing Timeline Indexing can be applied at several moments during parsing. We introduce a general strategy for indexed parsing, with respect to what actions should be taken at each stage. Three main stages can be identified. The first one consists of indexing actions that can be taken off-line (along with other optimizations that can be performed at compile-time). The second and third stages refer to actions performed at run time. Stage 1. In the off-line phase, a static analysis of grammar rules can be performed. The complete content of mothers and daughters may not be accessible, due to variables that will be instantiated during parsing, but various sources of information, such as the type signature, appropriateness specifications, and the types and features of mother and daughter descriptions, can be analyzed and an appropriate indexing scheme can be specified. This phase of indexing may include determining: (1a) which daughters in which rules will certainly not unify with a specific mother, and (1b) what information can be extracted from categories during parsing that can constitute indexing keys. It is desirable to perform as much analysis as possible off-line, since the cost of any action taken during run time prolongs the parsing time. Stage 2. During parsing, after a rule has been completed, all variables in the mother have been extended as far as they can be before insertion into the chart. This offers the possibility of further investigating the mother’s content and extracting supplemental information from the mother that contributes to the indexing keys. 
However, the choice of such investigative actions must be carefully studied, since it might burden the parsing process. Stage 3. While completing a rule, for each daughter a matching edge is searched in the chart. At this moment, the daughter’s active external variables have been extended as far as they can be before unification with a chart edge. The information identified in stage (1b) can be extracted and unified as a precursor to the remaining steps involved in category unification. These steps also take place at this stage. 4 TFSG Indexing To reduce the time spent on failures when searching for an edge in the chart, each edge (edge’s category) has an associated index key which uniquely identifies the set of daughter categories that can potentially match it. When completing a rule, edges unifying with a specific daughter are searched for in the chart. Instead of visiting all edges in the chart, the daughter’s index key selects a restricted number of edges for traversal, thus reducing the number of unification attempts. The passive edges added to the chart represent specializations of rules’ mothers. When a rule is completed, its mother M is added to the chart according to M’s indexing scheme, which is the set of index keys of daughters that might possibly unify with M. The index is implemented as a hash, where the hash function applied to a daughter yields the daughter’s index key (a selection of chart edges). For a passive edge representing M, M’s indexing scheme provides the collection of hash entries where it will be added. Each daughter is associated with a unique index key. During parsing, a specific daughter is searched for in the chart by visiting only those edges that have a matching key, thus reducing the time needed for traversing the chart. The index keys can be computed off-line (when daughters are indexed by position), or during parsing. 4.1 Positional Indexing In positional indexing, the index key for each daughter is represented by its position (rule number and daughter position in the rule). The structure of the index can be determined at compile-time (first stage). For each mother M in the grammar, a collection L M Ri D j daughters that can match M is created (M’s indexing scheme), where each element of L M represents the rule number Ri and daughter position D j inside rule Ri (1 j arity Ri ) of a category that can match with M. For TFSGs it is not possible to compute off-line the exact list of mother-daughter matching pairs, but it is possible to rule out certain non-unifiable pairs before parsing — a compromise that pays off with a very low index management time. During parsing, each time an edge (representing a rule’s mother M) is added to the chart, it is inserted into the hash entries associated with the positions Ri D j from the list L M (the number of entries where M is inserted is L M ). The entry associated with the key Ri D j will contain only categories that can possibly unify with the daughter at position Ri D j in the grammar. Because our parsing algorithm closes categories depth-first under leftmost daughter matching, only daughters Di with i 2 are searched for in the chart (and consequently, indexed). We used the EFD-based modification of this algorithm (Penn and Munteanu, 2003), which needs no active edges, and requires a constant two copies per edges, rather than the standard one copy per retrieval found in Prolog parsers. Without this, the cost of copying TFS categories would have overwhelmed the benefit of the index. 
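As a concrete picture of positional indexing, the sketch below files a completed mother under every (rule, daughter-position) key it might fill and retrieves candidates by that key. This is only a sketch, not the ALE implementation: the `may_unify` predicate stands in for the off-line mother-daughter compatibility analysis, and the class and method names are invented.

```python
# Hypothetical sketch of positional indexing (illustrative names throughout).

from collections import defaultdict

class PositionalIndex:
    def __init__(self, rules, may_unify):
        self.rules = rules                  # rule_no -> number of daughters (arity)
        self.may_unify = may_unify          # stands in for the off-line analysis
        self.entries = defaultdict(list)    # (rule_no, daughter_pos) -> edges

    def add_edge(self, mother_edge):
        # File the completed mother under every position in L(M); the paper
        # computes this list at compile time, here it is recomputed for brevity.
        for rule_no, arity in self.rules.items():
            # Only daughters at positions >= 2 are ever looked up in the chart,
            # since the leftmost daughter is matched depth-first.
            for pos in range(2, arity + 1):
                if self.may_unify(mother_edge, rule_no, pos):
                    self.entries[(rule_no, pos)].append(mother_edge)

    def candidates(self, rule_no, daughter_pos):
        # Unification is attempted only against edges filed under this key,
        # rather than against every edge in the chart.
        return self.entries[(rule_no, daughter_pos)]
```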
4.2 Path Indexing Path indexing is an extension of positional indexing. Although it shares the same underlying principle as the path indexing used in automated reasoning (Ramakrishnan et al., 2001), its functionality is related to quick check: extract a vector of types from a mother (which will become an edge) and a daughter, and test the unification of the two vectors before attempting to unify the edge and the daughter. Path indexing differs from quick-check in that it identifies these paths by a static analysis of grammar rules, performed off-line and with no training required. Path indexing is also built on top of positional indexing, therefore the vector of types can be different for each potentially unifiable motherdaughter pair. 4.2.1 Static Analysis of Grammar Rules Similar to the abstract interpretation used in program verification (Cousot and Cousot, 1992), the static analysis tries to predict a run-time phenomenon (specifically, unification failures) at compile-time. It tries to identify nodes in a mother that carry no relevant information with respect to unification with a particular daughter. For a mother M unifiable with a daughter D, these nodes will be grouped in a set StaticCut M D . Intuitively, these nodes can be left out or ignored while computing the unification of M and D. The StaticCut can be divided into two subsets: StaticCut M D RigidCut M D VariableCut M D The RigidCut represents nodes that can be left out because neither they, nor one of their δπ-ancestors, can have their type values changed by means of external variable sharing. The VariableCut represents nodes that are either externally shared, or have an externally shared ancestor, but still can be left out. Definition 2. RigidCut M D is the largest subset of nodes x M such that, y D for which x y: 1. x Ext M , y Ext D , 2. x
M s.t. π s.t. δ π x
x, x
Ext M , and 3. y
D s.t. π s.t. δ π y
y, y
Ext D . Definition 3. VariableCut is the largest subset of nodes x M such that: 1. x RigidCut M D , and 2. y D for which x y, s θ x t θ y , s t exists. In words, a node can be left out even if it is externally shared (or has an externally shared ancestor) if all possible types this node can have unify with all possible types its corresponding nodes in D can have. Due to structure sharing, the types of nodes in M and D can change during parsing, by being specialized to one of their subtypes. Condition 2 ensures that the types of these nodes will remain compatible (have a least upper bound), even if they specialize during rule completion. An intuitive example (real-life examples cannot be reproduced here — a category in a typical TFSG can have hundreds of nodes) is presented in Figure 4. y2 y1 y3 y5 t1 t6 t6 y4 t1 t5 F: G: H: G: K: D x1 x2 x3 x4 F: H: G: I: t7 t7 t3 t1 G:t1 H:t6 F:t6 K:t1 I:t3 t1 t5 t3 G:t5 t4 t2 J:t5 t7 t6 t0 T t8 M Figure 4: Given the above type signature, mother M and daughter D (externally shared nodes are pointed to by dashed arrows), nodes x1 x2 and x3 from M can be left out when unifying M with D during parsing. x1 and x3 RigidCut M D , while x2 VariableCut M D (θ y2 can promote only to t7, thus x2 and y2 will always be compatible). x4 is not included in the StaticCut, because if θ y5 promotes to t5, then θ y4 will promote to t5 (not unifiable with t3). When computing the unification between a mother and a daughter during parsing, the same outcome (success or failure) will be reached by using a reduced representation of the mother ( M sD), with nodes in StaticCut M D removed from M. Proposition 1. For a mother M and a daughter D, if M D before parsing, and M (as an edge in the chart) and D exist, then during parsing: (1) M sD D M D , (2) M sD D M D . Proof. The second part ( M sD D M D ) of Proposition 1 has a straightforward proof: if M sD D , then z M sD D such that t for which x z
t θ x . Since M sD M, z M D such that t for which x z
t θ x , and therefore, M D . The first part of the proposition will be proven by showing that z M D, a consistent type can be assigned to z
, where z
is the set of nodes in M and D equivalent to z with respect to the unification of M and D.1 Three lemmata need to be formulated: Lemma 1. If x M and x x 1, then θ x θ x . Similarly, for y D, y y 1, θ y θ y . Lemma 2. If types t0 t1 tn are such that t
0 t0 i 1 n , t
0 ti , then t t0 such that i 1 n , t ti. 1Because we do not assume inequated TFSs (Carpenter, 1992) here, unification failure must result from type inconsistency. Lemma 3. If x M and y D for which x y, then x x 1 y y 1 such that x y. In proving the first part of Proposition 1, four cases are identified: Case A: z M 1 and z D 1, Case B: z
M 1 and z
D 1, Case C: z M 1 and z
D 1, Case D: z
M 1 and z
D 1. Case A is trivial, and D is a generalization of B and C. Case B. It will be shown that t Type such that y z D and for x z
M, t θ y and t θ x . Subcase B.i: x M x M sD. y z
D, y x. Therefore, according to Lemma 3, x x 1 y y 1 such that x y. Thus, according to Condition 2 of Definition 3, s θ y t θ x , s t . But according to Lemma 1, θ y θ y and θ x θ x . Therefore, y z
D, s θ y , t θ x , s t , and hence, y z
D t θ x t θ y . Thus, according to Lemma 2, t θ x y z D, t θ y . Subcase B.ii: x M x M sD. Since M sD D , t θ x such that y z
D, t θ y . Case C. It will be shown that t θ y such that x z
, t θ x . Let y z D. The set z
M can be divided into two subsets: Sii x z
M x M sD , and Si x z
M x M x M sD, and x VariableCut M D . If x were in RigidCut M D , then necessarily z
M would be 1. Since Sii M sD and M sD D , then t
θ y such that x Sii t
θ x (*). However, x Sii, x y. Therefore, according to Lemma 3, x Sii x x 1 y y 1 such that x y. Thus, since x VariableCut M D , Condition 2 of Definition 3 holds, and therefore, according to Lemma 1, s1 θ x s2 θ y s1 s2 . More than this, since t
θ y (for the type t
from (*)), s1 θ x s
2 t
s1 s
2 , and hence, s
2 t
s
2 θ x . Thus, according to Lemma 2 and to (*), t t
θ y such that x Sii t θ x Thus, t such that x z
, t θ x . While Proposition 1 could possibly be used by grammar developers to simplify TFSGs themselves at the source-code level, here we only exploit it for internally identifying index keys for more efficient chart parsing with the existing grammar. There may be better static analyses, and better uses of this static analysis. In particular, future work will focus on using static analysis to determine smaller representations (by cutting nodes in Static Cuts) of the chart edges themselves. 4.2.2 Building the Path Index The indexing schemes used in path indexing are built on the same principles as those in positional indexing. The main difference is the content of the indexing keys, which now includes a third element. Each mother M has its indexing scheme defined as: L M Ri D j Vi j . The pair Ri D j is the positional index key (as in positional indexing), while Vi j is the path index vector containing type values extracted from M. A different set of types is extracted for each mother-daughter pair. So, path indexing uses a two-layer indexing method: the positional key for daughters, and types extracted from the typed feature structure. Each daughter’s index key is now given by L D j Ri Vi j , where Ri is the rule number of a potentially matching mother, and Vi j is the path index vector containing types extracted from D j. The types extracted for the indexing vectors are those of nodes found at the end of indexing paths. A path π is an indexing path for a motherdaughter pair M D iff: (1) π is defined for both M and D, (2) x StaticCut M D f s.t. δ f x δ π qM (qM is M’s root), and (3) δ π qM StaticCut M D . Indexing paths are the “frontiers” of the non-statically-cut nodes of M. A similar key extraction could be performed during Stage 2 of indexing (as outlined in Section 3), using M rather than M. We have found that this online path discovery is generally too expensive to be performed during parsing, however. As stated in Proposition 1, the nodes in StaticCut M D do not affect the success/failure of M D. Therefore, the types of first nodes not included in StaticCut M D along each path π that stems from the root of M and D are included in the indexing key, since these nodes might contribute to the success/failure of the unification. It should be mentioned that the vectors Vi j are filled with values extracted from M after M’s rule is completed, and from D after all daughters to the left of D are unified with edges in the chart. As an example, assuming that the indexing paths are THROWER:PERSON, THROWN, and THROWN:GENDER, the path index vector for the TFS shown in Figure 2 is third index neuter . 4.2.3 Using the Path Index Inserting and retrieving edges from the chart using path indexing is similar to the general method presented at the beginning of this section. The first layer of the index is used to insert a mother as an edge into appropriate chart entries, according to the positional keys for the daughters it can match. Along with the mother, its path index vector is inserted into the chart. When searching for a matching edge for a daughter, the search is restricted by the first indexing layer to a single entry in the chart (labeled with the positional index key for the daughter). The second layer restricts searches to the edges that have a compatible path index vector. 
The compatibility is defined as type unification: the type pointed to by the element Vi j n of an edge’s vector Vi j should unify with the type pointed to by the element Vi j n of the path index vector Vi j of the daughter on position D j in a rule Ri. 5 Experimental Evaluation Two TFSGs were used to evaluate the performance of indexing: a pre-release version of the MERGE grammar, and the ALE port of the ERG (in its final form). MERGE is an adaptation of the ERG which uses types more conservatively in favour of relations, macros and complex-antecedent constraints. This pre-release version has 17 rules, 136 lexical items, 1157 types, and 144 introduced features. The ERG port has 45 rules, 1314 lexical entries, 4305 types and 155 features. MERGE was tested on 550 sentences of lengths between 6 and 16 words, extracted from the Wall Street Journal annotated parse trees (where phrases not covered by MERGE’s vocabulary were replaced by lexical entries having the same parts of speech), and from MERGE’s own test corpus. ERG was tested on 1030 sentences of lengths between 6 and 22 words, extracted from the Brown Corpus and from the Wall Street Journal annotated parse trees. Rather than use the current version of ALE, TFSs were encoded as Prolog terms as prescribed in (Penn, 1999a), where the number of argument positions is the number of colours needed to colour the feature graph. This was extended to allow for the enforcement of type constraints during TFS unification. Types were encoded as attributed variables in SICStus Prolog (Swedish Institute of Computer Science, 2004). 5.1 Positional and path indexing evaluation The average and best improvements in parsing times of positional and path indexing over the same EFDbased parser without indexing are presented in Table 1. The parsers were implemented in SICStus 3.10.1 for Solaris 8, running on a Sun Server with 16 GB of memory and 4 UltraSparc v.9 processors at 1281 MHz. For MERGE, parsing times range from 10 milliseconds to 1.3 seconds. For ERG, parsing times vary between 60 milliseconds and 29.2 seconds. Positional Index Path Index average best average best MERGE 1.3% 50% 1.3% 53.7% ERG 13.9% 36.5% 12% 41.6% Table 1: Parsing time improvements of positional and path indexing over the non-indexed EFD parser. 5.2 Comparison with statistical optimizations Non-statistical optimizations can be seen as a first step toward a highly efficient parser, while statistical optimization can be applied as a second step. However, one of the purposes of non-statistical indexing is to eliminate the burden of training while offering comparable improvements in parsing times. A quick-check parser was also built and evaluated and the set-up times for the indexed parsers and the quick-check parser were compared (Table 2). Quick-check was trained on a 300-sentence training corpus, as prescribed in (Malouf et al., 2000). The training corpus included 150 sentences also used in testing. The number of paths in path indexing is different for each mother-daughter pair, ranging from 1 to 43 over the two grammars. Positional Path Quick Index Index Check Compiling grammar 6’30” Compiling index 2” 1’33” Training 3h28’14” Total set-up time: 6’32” 8’3” 3h34’44” Table 2: The set-up times for non-statistically indexed parsers and statistically optimized parsers for MERGE. As seen in Table 3, quick-check alone surpasses positional and path indexing for the ERG. However, it is outperformed by them on the MERGE, recording slower times than even the baseline. 
But the combination of quick-check and path indexing is faster than quick-check alone on both grammars. Path indexing at best provided no decrease in performance over positional indexing alone in these experiments, attesting to the difficulty of maintaining efficient index keys in an implementation. Positional Path Quick Quick + Indexing Indexing Check Path MERGE 1.3% 1.3% -4.5% -4.3% ERG 13.9% 12% 19.8% 22% Table 3: Comparison of average improvements over nonindexed parsing among all parsers. The quick-check evaluation presented in (Malouf et al., 2000) uses only sentences with a length of at most 10 words, and the authors do not report the set-up times. Quick-check has an additional advantage in the present comparison, because half of the training sentences were included in the test corpus. While quick-check improvements on the ERG confirm other reports on this method, it must be Grammar Successful Failed unifications Failure rate reduction (vs. no index) unifications EFD Positional Path Quick Positional Path Quick non-indexed Index Index Check Index Index Check MERGE 159 755 699 552 370 7.4% 26.8% 50.9% ERG 1078 215083 109080 108610 18040 49.2% 49.5% 91.6% Table 4: The number of successful and failed unifications for the non-indexed, positional indexing, path indexing, and quick-check parsers, over MERGE and ERG (collected on the slowest sentence in the corresponding test sets.) noted that quick-check appears to be parochially very well-suited to the ERG (indeed quick-check was developed alongside testing on the ERG). Although the recommended first 30 most probable failure-causing paths account for a large part of the failures recorded in training on both grammars (94% for ERG and 97% for MERGE), only 51 paths caused failures at all for MERGE during training, compared to 216 for the ERG. Further training with quick-check for determining a better vector length for MERGE did not improve its performance. This discrepancy in the number of failure-causing paths could be resulting in an overfitted quick-check vector, or, perhaps the 30 paths chosen for MERGE really are not the best 30 (quick-check uses a greedy approximation). In addition, as shown in Table 4, the improvements made by quick-check on the ERG are explained by the drastic reduction of (chart lookup) unification failures during parsing relative to the other methods. It appears that nothing short of a drastic reduction is necessary to justify the overhead of maintaining the index, which is the largest for quick-check because some of its paths must be traversed at run-time — path indexing only uses paths available at compile-time in the grammar source. Note that path indexing outperforms quick-check on MERGE in spite of its lower failure reduction rate, because of its smaller overhead. 6 Conclusions and Future Work The indexing method proposed here is suitable for several classes of unification-based grammars. The index keys are determined statically and are based on an a priori analysis of grammar rules. A major advantage of such indexing methods is the elimination of the lengthy training processes needed by statistical methods. Our experimental evaluation demonstrates that indexing by static analysis is a promising alternative to optimizing parsing with TFSGs, although the time consumed by on-line maintenance of the index is a significant concern — echoes of an observation that has been made in applications of term indexing to databases and programming languages (Graf, 1996). 
Further work on efficient implementations and data structures is therefore required. Indexing by static analysis of grammar rules combined with statistical methods can also provide a higher aggregate benefit. The current static analysis of grammar rules used as a basis for indexing does not consider the effect of the universally quantified constraints that typically augment the signature and grammar rules. Future work will investigate this extension as well. References B. Carpenter and G. Penn. 1996. Compiling typed attribute-value logic grammars. In H. Bunt and M. Tomita, editors, Recent Advances in Parsing Technologies, pages 145–168. Kluwer. B. Carpenter. 1992. The Logic of Typed Feature Structures. Cambridge University Press. P. Cousot and R. Cousot. 1992. Abstract interpretation and application to logic programs. Journal of Logic Programming, 13(2–3). R. Elmasri and S. Navathe. 2000. Fundamentals of database systems. Addison-Wesley. D. Flickinger. 1999. The English Resource Grammar. http://lingo.stanford.edu/erg.html. P. Graf. 1996. Term Indexing. Springer. B. Kiefer, H.U. Krieger, J. Carroll, and R. Malouf. 1999. A bag of useful techniques for efficient and robust parsing. In Proceedings of the 37th Annual Meeting of the ACL. R. Malouf, J. Carroll, and A. Copestake. 2000. Efficient feature structure operations without compilation. Natural Language Engineering, 6(1). G. Penn and C. Munteanu. 2003. A tabulation-based parsing method that reduces copying. In Proceedings of the 41st Annual Meeting of the ACL, Sapporo, Japan. G. Penn. 1999a. An optimised Prolog encoding of typed feature structures. Technical Report 138, SFB 340, Tübingen. G. Penn. 1999b. Optimising don’t-care nondeterminism with statistical information. Technical Report 140, SFB 340, Tübingen. C. Pollard and I. Sag. 1994. Head-driven Phrase Structure Grammar. The University of Chicago Press. I.V. Ramakrishnan, R. Sekar, and A. Voronkov. 2001. Term indexing. In Handbook of Automated Reasoning, volume II, chapter 26. Elsevier Science. Swedish Institute of Computer Science. 2004. SICStus Prolog 3.11.0. http://www.sics.se/sicstus.
Head-Driven Parsing for Word Lattices Christopher Collins Department of Computer Science University of Toronto Toronto, ON, Canada [email protected] Bob Carpenter Alias I, Inc. Brooklyn, NY, USA [email protected] Gerald Penn Department of Computer Science University of Toronto Toronto, ON, Canada [email protected] Abstract We present the first application of the head-driven statistical parsing model of Collins (1999) as a simultaneous language model and parser for largevocabulary speech recognition. The model is adapted to an online left to right chart-parser for word lattices, integrating acoustic, n-gram, and parser probabilities. The parser uses structural and lexical dependencies not considered by ngram models, conditioning recognition on more linguistically-grounded relationships. Experiments on the Wall Street Journal treebank and lattice corpora show word error rates competitive with the standard n-gram language model while extracting additional structural information useful for speech understanding. 1 Introduction The question of how to integrate high-level knowledge representations of language with automatic speech recognition (ASR) is becoming more important as (1) speech recognition technology matures, (2) the rate of improvement of recognition accuracy decreases, and (3) the need for additional information (beyond simple transcriptions) becomes evident. Most of the currently best ASR systems use an n-gram language model of the type pioneered by Bahl et al. (1983). Recently, research has begun to show progress towards application of new and better models of spoken language (Hall and Johnson, 2003; Roark, 2001; Chelba and Jelinek, 2000). Our goal is integration of head-driven lexicalized parsing with acoustic and n-gram models for speech recognition, extracting high-level structure from speech, while simultaneously selecting the best path in a word lattice. Parse trees generated by this process will be useful for automated speech understanding, such as in higher semantic parsing (Ng and Zelle, 1997). Collins (1999) presents three lexicalized models which consider long-distance dependencies within a sentence. Grammar productions are conditioned on headwords. The conditioning context is thus more focused than that of a large n-gram covering the same span, so the sparse data problems arising from the sheer size of the parameter space are less pressing. However, sparse data problems arising from the limited availability of annotated training data become a problem. We test the head-driven statistical lattice parser with word lattices from the NIST HUB-1 corpus, which has been used by others in related work (Hall and Johnson, 2003; Roark, 2001; Chelba and Jelinek, 2000). Parse accuracy and word error rates are reported. We present an analysis of the effects of pruning and heuristic search on efficiency and accuracy and note several simplifying assumptions common to other reported experiments in this area, which present challenges for scaling up to realworld applications. This work shows the importance of careful algorithm and data structure design and choice of dynamic programming constraints to the efficiency and accuracy of a head-driven probabilistic parser for speech. We find that the parsing model of Collins (1999) can be successfully adapted as a language model for speech recognition. In the following section, we present a review of recent works in high-level language modelling for speech recognition. We describe the word lattice parser developed in this work in Section 3. 
Section 4 is a description of current evaluation metrics, and suggestions for new metrics. Experiments on strings and word lattices are reported in Section 5, and conclusions and opportunities for future work are outlined in Section 6. 2 Previous Work The largest improvements in word error rate (WER) have been seen with n-best list rescoring. The best n hypotheses of a simple speech recognizer are processed by a more sophisticated language model and re-ranked. This method is algorithmically simpler than parsing lattices, as one can use a model developed for strings, which need not operate strictly left to right. However, we confirm the observation of (Ravishankar, 1997; Hall and Johnson, 2003) that parsing word lattices saves computation time by only parsing common substrings once. Chelba (2000) reports WER reduction by rescoring word lattices with scores of a structured language model (Chelba and Jelinek, 2000), interpolated with trigram scores. Word predictions of the structured language model are conditioned on the two previous phrasal heads not yet contained in a bigger constituent. This is a computationally intensive process, as the dependencies considered can be of arbitrarily long distances. All possible sentence prefixes are considered at each extension step. Roark (2001) reports on the use of a lexicalized probabilistic top-down parser for word lattices, evaluated both on parse accuracy and WER. Our work is different from Roark (2001) in that we use a bottom-up parsing algorithm with dynamic programming based on the parsing model II of Collins (1999). Bottom-up chart parsing, through various forms of extensions to the CKY algorithm, has been applied to word lattices for speech recognition (Hall and Johnson, 2003; Chappelier and Rajman, 1998; Chelba and Jelinek, 2000). Full acoustic and n-best lattices filtered by trigram scores have been parsed. Hall and Johnson (2003) use a best-first probabilistic context free grammar (PCFG) to parse the input lattice, pruning to a set of local trees (candidate partial parse trees), which are then passed to a version of the parser of Charniak (2001) for more refined parsing. Unlike (Roark, 2001; Chelba, 2000), Hall and Johnson (2003) achieve improvement in WER over the trigram model without interpolating its lattice parser probabilities directly with trigram probabilities. 3 Word Lattice Parser Parsing models based on headword dependency relationships have been reported, such as the structured language model of Chelba and Jelinek (2000). These models use much less conditioning information than the parsing models of Collins (1999), and do not provide Penn Treebank format parse trees as output. In this section we outline the adaptation of the Collins (1999) parsing model to word lattices. The intended action of the parser is illustrated in Figure 1, which shows parse trees built directly upon a word lattice.

Figure 1: Example of a partially-parsed word lattice. Different paths through the lattice are simultaneously parsed. The example shows two final parses, one of low probability (S*) and one of high probability (S).

3.1 Parameterization The parameterization of model II of Collins (1999) is used in our word lattice parser. Parameters are
maximum likelihood estimates of conditional probabilities — the probability of some event of interest (e.g., a left-modifier attachment) given a context (e.g., parent non-terminal, distance, headword). One notable difference between the word lattice parser and the original implementation of Collins (1999) is the handling of part-of-speech (POS) tagging of unknown words (words seen fewer than 5 times in training). The conditioning context of the parsing model parameters includes POS tagging. Collins (1999) falls back to the POS tagging of Ratnaparkhi (1996) for words seen fewer than 5 times in the training corpus. As the tagger of Ratnaparkhi (1996) cannot tag a word lattice, we cannot back off to this tagging. We rely on the tag assigned by the parsing model in all cases. Edges created by the bottom-up parsing are assigned a score which is the product of the inside and outside probabilities of the Collins (1999) model. 3.2 Parsing Algorithm The algorithm is a variation of probabilistic online, bottom-up, left-to-right Cocke-KasamiYounger parsing similar to Chappelier and Rajman (1998). Our parser produces trees (bottom-up) in a rightbranching manner, using unary extension and binary adjunction. Starting with a proposed headword, left modifiers are added first using right-branching, then right modifiers using left-branching. Word lattice edges are iteratively added to the agenda. Complete closure is carried out, and the next word edge is added to the agenda. This process is repeated until all word edges are read from the lattice, and at least one complete parse is found. Edges are each assigned a score, used to rank parse candidates. For parsing of strings, the score for a chart edge is the product of the scores of any child edges and the score for the creation of the new edge, as given by the model parameters. This score, defined solely by the parsing model, will be referred to as the parser score. The total score for chart edges for the lattice parsing task is a combination of the parser score, an acoustic model score, and a trigram model score. Scaling factors follow those of (Chelba and Jelinek, 2000; Roark, 2001). 3.3 Smoothing and Pruning The parameter estimation techniques (smoothing and back-off) of Collins (1999) are reimplemented. Additional techniques are required to prune the search space of possible parses, due to the complexity of the parsing algorithm and the size of the word lattices. The main technique we employ is a variation of the beam search of Collins (1999) to restrict the chart size by excluding low probability edges. The total score (combined acoustic and language model scores) of candidate edges are compared against edge with the same span and category. Proposed edges with score outside the beam are not added to the chart. The drawback to this process is that we can no longer guarantee that a model-optimal solution will be found. In practice, these heuristics have a negative effect on parse accuracy, but the amount of pruning can be tuned to balance relative time and space savings against precision and recall degradation (Collins, 1999). Collins (1999) uses a fixed size beam (10 000). We experiment with several variable beam (ˆb) sizes, where the beam is some function of a base beam (b) and the edge width (the number of terminals dominated by an edge). The base beam starts at a low beam size and increases iteratively by a specified increment if no parse is found. This allows parsing to operate quickly (with a minimal number of edges added to the chart). 
However, if many iterations are required to obtain a parse, the utility of starting with a low beam and iterating becomes questionable (Goodman, 1997). The base beam is limited to control the increase in the chart size. The selection of the base beam, beam increment, and variable beam function is governed by the familiar speed/accuracy trade-off.1 The variable beam function found to allow fast convergence with minimal loss of accuracy is: ˆb b log w 2 2 (1) 1Details of the optimization can be found in Collins (2004). Charniak et al. (1998) introduce overparsing as a technique to improve parse accuracy by continuing parsing after the first complete parse tree is found. The technique is employed by Hall and Johnson (2003) to ensure that early stages of parsing do not strongly bias later stages. We adapt this idea to a single stage process. Due to the restrictions of beam search and thresholds, the first parse found by the model may not be the model optimal parse (i.e., we cannot guarantee best-first search). We therefore employ a form of overparsing — once a complete parse tree is found, we further extend the base beam by the beam increment and parse again. We continue this process as long as extending the beam results in an improved best parse score. 4 Expanding the Measures of Success Given the task of simply generating a transcription of speech, WER is a useful and direct way to measure language model quality for ASR. WER is the count of incorrect words in hypothesis ˆW per word in the true string W. For measurement, we must assume prior knowledge of W and the best alignment of the reference and hypothesis strings.2 Errors are categorized as insertions, deletions, or substitutions. Word Error Rate 100Insertions Substitutions Deletions Total Words in Correct Transcript (2) It is important to note that most models — Mangu et al. (2000) is an innovative exception — minimize sentence error. Sentence error rate is the percentage of sentences for which the proposed utterance has at least one error. Models (such as ours) which optimize prediction of test sentences Wt, generated by the source, minimize the sentence error. Thus even though WER is useful practically, it is formally not the appropriate measure for the commonly used language models. Unfortunately, as a practical measure, sentence error rate is not as useful — it is not as fine-grained as WER. Perplexity is another measure of language model quality, measurable independent of ASR performance (Jelinek, 1997). Perplexity is related to the entropy of the source model which the language model attempts to estimate. These measures, while informative, do not capture success of extraction of high-level information from speech. Task-specific measures should be used in tandem with extensional measures such as perplexity and WER. Roark (2002), when reviewing 2SCLITE (http://www.nist.gov/speech/ tools/) by NIST is the most commonly used alignment tool. parsing for speech recognition, discusses a modelling trade-off between producing parse trees and producing strings. Most models are evaluated either with measures of success for parsing or for word recognition, but rarely both. Parsing models are difficult to implement as word-predictive language models due to their complexity. Generative random sampling is equally challenging, so the parsing correlate of perplexity is not easy to measure. Traditional (i.e., n-gram) language models do not produce parse trees, so parsing metrics are not useful. 
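Equation 2 is straightforward to compute once the hypothesis has been aligned with the reference transcript. The short sketch below is not part of the system described here; it obtains the counts from a standard minimum-edit-distance alignment, which is the kind of alignment SCLITE performs, and the function name and example strings are invented.

```python
# Illustrative sketch of Equation 2: WER from the minimal edit alignment of a
# hypothesis against the reference transcript.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j]: minimal edits turning the first i reference words
    # into the first j hypothesis words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i                      # deletions
    for j in range(len(hyp) + 1):
        dist[0][j] = j                      # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # substitution or match
    return 100.0 * dist[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the yen could rise", "the yen will rise"))  # 25.0
```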
However, Roark (2001) argues for using parsing metrics, such as labelled precision and recall,3 along with WER, for parsing applications in ASR. Weighted WER (Weber et al., 1997) is also a useful measurement, as the most often ill-recognized words are short, closed-class words, which are not as important to speech understanding as phrasal head words. We will adopt the testing strategy of Roark (2001), but find that measurement of parse accuracy and WER on the same data set is not possible given currently available corpora. Use of weighted WER and development of methods to simultaneously measure WER and parse accuracy remain a topic for future research. 5 Experiments The word lattice parser was evaluated with several metrics — WER, labelled precision and recall, crossing brackets, and time and space resource usage. Following Roark (2001), we conducted evaluations using two experimental sets — strings and word lattices. We optimized settings (thresholds, variable beam function, base beam value) for parsing using development test data consisting of strings for which we have annotated parse trees. The parsing accuracy for parsing word lattices was not directly evaluated as we did not have annotated parse trees for comparison. Furthermore, standard parsing measures such as labelled precision and recall are not directly applicable in cases where the number of words differs between the proposed parse tree and the gold standard. Results show scores for parsing strings which are lower than the original implementation of Collins (1999). The WER scores for this, the first application of the Collins (1999) model to parsing word lattices, are comparable to other recent work in syntactic language modelling, and better than a simple trigram model trained on the same data. 3Parse trees are commonly scored with the PARSEVAL set of metrics (Black et al., 1991). 5.1 Parsing Strings The lattice parser can parse strings by creating a single-path lattice from the input (all word transitions are assigned an input score of 1.0). The lattice parser was trained on sections 02-21 of the Wall Street Journal portion of the Penn Treebank (Taylor et al., 2003) Development testing was carried out on section 23 in order to select model thresholds and variable beam functions. Final testing was carried out on section 00, and the PARSEVAL measures (Black et al., 1991) were used to evaluate the performance. The scores for our experiments are lower than the scores of the original implementation of model II (Collins, 1999). This difference is likely due in part to differences in POS tagging. Tag accuracy for our model was 93.2%, whereas for the original implementation of Collins (1999), model II achieved tag accuracy of 96.75%. In addition to different tagging strategies for unknown words, mentioned above, we restrict the tag-set considered by the parser for each word to those suggested by a simple first-stage tagger.4 By reducing the tag-set considered by the parsing model, we reduce the search space and increase the speed. However, the simple tagger used to narrow the search also introduces tagging error. The utility of the overparsing extension can be seen in Table 1. Each of the PARSEVAL measures improves when overparsing is used. 5.2 Parsing Lattices The success of the parsing model as a language model for speech recognition was measured both by parsing accuracy (parsing strings with annotated reference parses), and by WER. 
WER is measured by parsing word lattices and comparing the sentence yield of the highest scoring parse tree to the reference transcription (using NIST SCLITE for alignment and error calculation).5 We assume the parsing performance achieved by parsing strings carries over approximately to parsing word lattices. Two different corpora were used in training the parsing model on word lattices: sections 02-21 of the WSJ Penn Treebank (the same sections as used to train the model for parsing strings) [1 million words] 4The original implementation (Collins, 1999) of this model considered all tags for all words. 5To properly model language using a parser, one should sum parse tree scores for each sentence hypothesis, and choose the sentence with the best sum of parse tree scores. We choose the yield of the parse tree with the highest score. Summation is too computationally expensive given the model —we do not even generate all possible parse trees, but instead restrict generation using dynamic programming. Exp. OP LP (%) LR (%) CB 0 CB (%) 2 CB (%) Ref N 88.7 89.0 0.95 65.7 85.6 1 N 79.4 80.6 1.89 46.2 74.5 2 Y 80.8 81.4 1.70 44.3 80.4 Table 1: Results for parsing section 0 ( 40 words) of the WSJ Penn Treebank: OP = overparsing, LP/LR = labelled precision/recall. CB is the average number of crossing brackets per sentence. 0 CB, 2 CB are the percentage of sentences with 0 or 2 crossing brackets respectively. Ref is model II of (Collins, 1999). section “1987” of the BLLIP corpus (Charniak et al., 1999) [20 million words] The BLLIP corpus is a collection of Penn Treebank-style parses of the three-year (1987-1989) Wall Street Journal collection from the ACL/DCI corpus (approximately 30 million words).6 The parses were automatically produced by the parser of Charniak (2001). As the memory usage of our model corresponds directly to the amount of training data used, we were restricted by available memory to use only one section (1987) of the total corpus. Using the BLLIP corpus, we expected to get lower quality parse results due to the higher parse error of the corpus, when compared to the manually annotated Penn Treebank. The WER was expected to improve, as the BLLIP corpus has much greater lexical coverage. The training corpora were modified using a utility by Brian Roark to convert newspaper text to speechlike text, before being used as training input to the model. Specifically, all numbers were converted to words (60 sixty) and all punctuation was removed. We tested the performance of our parser on the word lattices from the NIST HUB-1 evaluation task of 1993. The lattices are derived from a set of utterances produced from Wall Street Journal text — the same domain as the Penn Treebank and the BLLIP training data. The word lattices were previously pruned to the 50-best paths by Brian Roark, using the A* decoding of Chelba (2000). The word lattices of the HUB-1 corpus are directed acyclic graphs in the HTK Standard Lattice Format (SLF), consisting of a set of vertices and a set of edges. Vertices, or nodes, are defined by a time-stamp and labelled with a word. The set of labelled, weighted edges, represents the word utterances. A word w is hypothesized over edge e if e ends at a vertex v labelled w. Edges are associated with transition probabilities and are labelled with an acoustic score and a language model score. The lattices of the HUB6The sentences of the HUB-1 corpus are a subset of those in BLLIP. We removed all HUB-1 sentences from the BLLIP corpus used in training. 
1 corpus are annotated with trigram scores trained using a 20 thousand word vocabulary and 40 million word training sample. The word lattices have a unique start and end point, and each complete path through a lattice represents an utterance hypothesis. As the parser operates in a left-to-right manner, and closure is performed at each node, the input lattice edges must be processed in topological order. Input lattices were sorted before parsing. This corpus has been used in other work on syntactic language modelling (Chelba, 2000; Roark, 2001; Hall and Johnson, 2003). The word lattices of the HUB-1 corpus are annotated with an acoustic score, a, and a trigram probability, lm, for each edge. The input edge score stored in the word lattice is: log Pinput αlog a βlog lm (3) where a is the acoustic score and lm is the trigram score stored in the lattice. The total edge weight in the parser is a scaled combination of these scores with the parser score derived with the model parameters: log w αlog a βlog lm s (4) where w is the edge weight, and s is the score assigned by the parameters of the parsing model. We optimized performance on a development subset of test data, yielding α 1 16 and β 1. There is an important difference in the tokenization of the HUB-1 corpus and the Penn Treebank format. Clitics (i.e., he’s, wasn’t) are split from their hosts in the Penn Treebank (i.e., he ’s, was n’t), but not in the word lattices. The Treebank format cannot easily be converted into the lattice format, as often the two parts fall into different parse constituents. We used the lattices modified by Chelba (2000) in dealing with this problem — contracted words are split into two parts and the edge scores redistributed. We followed Hall and Johnson (2003) and used the Treebank tokenization for measuring the WER. The model was tested with and without overparsing. We see from Table 2 that overparsing has little effect on the WER. The word sequence most easily parsed by the model (i.e., generating the first complete parse tree) is likely also the word sequence found by overparsing. Although overparsing may have little effect on WER, we know from the experiments on strings that overparsing increases parse accuracy. This introduces a speed-accuracy tradeoff: depending on what type of output is required from the model (parse trees or strings), the additional time and resource requirements of overparsing may or may not be warranted. 5.3 Parsing N-Best Lattices vs. N-Best Lists The application of the model to 50-best word lattices was compared to rescoring the 50-best paths individually (50-best list parsing). The results are presented in Table 2. The cumulative number of edges added to the chart per word for n-best lists is an order of magnitude larger than for corresponding n-best lattices, in all cases. As the WERs are similar, we conclude that parsing n-best lists requires more work than parsing n-best lattices, for the same result. Therefore, parsing lattices is more efficient. This is because common substrings are only considered once per lattice. The amount of computational savings is dependent on the density of the lattices — for very dense lattices, the equivalent n-best list parsing will parse common substrings up to n times. In the limit of lowest density, a lattice may have paths without overlap, and the number of edges per word would be the same for the lattice and lists. 
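For concreteness, the score combination of Equations 3 and 4 above can be written out directly. The sketch below uses the reported weights α = 1/16 and β = 1 and assumes that the acoustic and trigram scores arrive as probabilities while the parser score s is already in the log domain; that representation is an assumption made for the sketch, not something fixed by the text.

```python
# Minimal restatement of Equations 3 and 4 (illustrative only).
import math

ALPHA, BETA = 1.0 / 16.0, 1.0   # weights tuned on the development subset

def input_edge_score(acoustic, trigram):
    # Equation 3: the score stored on a lattice edge before parsing.
    return ALPHA * math.log(acoustic) + BETA * math.log(trigram)

def chart_edge_weight(acoustic, trigram, parser_log_score):
    # Equation 4: acoustic and trigram scores combined with the parser score s.
    return ALPHA * math.log(acoustic) + BETA * math.log(trigram) + parser_log_score
```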
5.4 Time and Space Requirements The algorithms and data structures were designed to minimize parameter lookup times and memory usage by the chart and parameter set (Collins, 2004). To increase parameter lookup speed, all parameter values are calculated for all levels of back-off at training time. By contrast, (Collins, 1999) calculates parameter values by looking up event counts at run-time. The implementation was then optimized using a memory and processor profiler and debugger. Parsing the complete set of HUB-1 lattices (213 sentences, a total of 3,446 words) on average takes approximately 8 hours, on an Intel Pentium 4 (1.6GHz) Linux system, using 1GB memory. Memory requirements for parsing lattices is vastly greater than equivalent parsing of a single sentence, as chart size increases with the number of divergent paths in a lattice. Additional analysis of resource issues can be found in Collins (2004). 5.5 Comparison to Previous Work The results of our best experiments for lattice- and list-parsing are compared with previous results in Table 3. The oracle WER7 for the HUB-1 corpus is 3.4%. For the pruned 50-best lattices, the oracle WER is 7.8%. We see that by pruning the lattices using the trigram model, we already introduce additional error. Because of the memory usage and time required for parsing word lattices, we were unable to test our model on the original “acoustic” HUB-1 lattices, and are thus limited by the oracle WER of the 50-best lattices, and the bias introduced by pruning using a trigram model. Where available, we also present comparative scores of the sentence error rate (SER) — the percentage of sentences in the test set for which there was at least one recognition error. Note that due to the small (213 samples) size of the HUB-1 corpus, the differences seen in SER may not be significant. We see an improvement in WER for our parsing model alone (α β 0) trained on 1 million words of the Penn Treebank compared to a trigram model trained on the same data — the “Treebank Trigram” noted in Table 3. This indicates that the larger context considered by our model allows for performance improvements over the trigram model alone. Further improvement is seen with the combination of acoustic, parsing, and trigram scores (α 1 16 β 1). However, the combination of the parsing model (trained on 1M words) with the lattice trigram (trained on 40M words) resulted in a higher WER than the lattice trigram alone. This indicates that our 1M word training set is not sufficient to permit effective combination with the lattice trigram. When the training of the head-driven parsing model was extended to the BLLIP 1987 corpus (20M words), the combination of models (α 1 16 β 1) achieved additional improvement in WER over the lattice trigram alone. The current best-performing models, in terms of WER, for the HUB-1 corpus, are the models of Roark (2001), Charniak (2001) (applied to n-best lists by Hall and Johnson (2003)), and the SLM of Chelba and Jelinek (2000) (applied to n-best lists by Xu et al. (2002)). However, n-best list parsing, as seen in our evaluation, requires repeated analysis of common subsequences, a less efficient process than directly parsing the word lattice. The reported results of (Roark, 2001) and (Chelba, 2000) are for parsing models interpolated with the lattice trigram probabilities. Hall and John7The WER of the hypothesis which best matches the true utterance, i.e., the lowest WER possible given the hypotheses set. 
Training Size Lattice/List OP WER Number of Edges S D I T (per word) 1M Lattice N 10.4 3.3 1.5 15.2 1788 1M List N 10.4 3.2 1.4 15.0 10211 1M Lattice Y 10.3 3.2 1.4 14.9 2855 1M List Y 10.2 3.2 1.4 14.8 16821 20M Lattice N 9.0 3.1 1.0 13.1 1735 20M List N 9.0 3.1 1.0 13.1 9999 20M Lattice Y 9.0 3.1 1.0 13.1 2801 20M List Y 9.0 3.3 0.9 13.3 16030 Table 2: Results for parsing HUB-1 n-best word lattices and lists: OP = overparsing, S = substutitions (%), D = deletions (%), I = insertions (%), T = total WER (%). Variable beam function: ˆb b log w 2 2 . Training corpora: 1M = Penn Treebank sections 02-21; 20M = BLLIP section 1987. Model n-best List/Lattice Training Size WER (%) SER (%) Oracle (50-best lattice) Lattice 7.8 Charniak (2001) List 40M 11.9 Xu (2002) List 20M 12.3 Roark (2001) (with EM) List 2M 12.7 Hall (2003) Lattice 30M 13.0 Chelba (2000) Lattice 20M 13.0 Current (α 1 16 β 1) List 20M 13.1 71.0 Current (α 1 16 β 1) Lattice 20M 13.1 70.4 Roark (2001) (no EM) List 1M 13.4 Lattice Trigram Lattice 40M 13.7 69.0 Current (α 1 16 β 1) List 1M 14.8 74.3 Current (α 1 16 β 1) Lattice 1M 14.9 74.0 Current (α β 0) Lattice 1M 16.0 75.5 Treebank Trigram Lattice 1M 16.5 79.8 No language model Lattice 16.8 84.0 Table 3: Comparison of WER for parsing HUB-1 words lattices with best results of other works. SER = sentence error rate. WER = word error rate. “Speech-like” transformations were applied to all training corpora. Xu (2002) is an implementation of the model of Chelba (2000) for n-best list parsing. Hall (2003) is a lattice-parser related to Charniak (2001). son (2003) does not use the lattice trigram scores directly. However, as in other works, the lattice trigram is used to prune the acoustic lattice to the 50 best paths. The difference in WER between our parser and those of Charniak (2001) and Roark (2001) applied to word lists may be due in part to the lower PARSEVAL scores of our system. Xu et al. (2002) report inverse correlation between labelled precision/recall and WER. We achieve 73.2/76.5% LP/LR on section 23 of the Penn Treebank, compared to 82.9/82.4% LP/LR of Roark (2001) and 90.1/90.1% LP/LR of Charniak (2000). Another contributing factor to the accuracy of Charniak (2001) is the size of the training set — 20M words larger than that used in this work. The low WER of Roark (2001), a top-down probabilistic parsing model, was achieved by training the model on 1 million words of the Penn Treebank, then performing a single pass of Expectation Maximization (EM) on a further 1.2 million words. 6 Conclusions In this work we present an adaptation of the parsing model of Collins (1999) for application to ASR. The system was evaluated over two sets of data: strings and word lattices. As PARSEVAL measures are not applicable to word lattices, we measured the parsing accuracy using string input. The resulting scores were lower than that original implementation of the model. Despite this, the model was successful as a language model for speech recognition, as measured by WER and ability to extract high-level information. Here, the system performs better than a simple n-gram model trained on the same data, while simultaneously providing syntactic information in the form of parse trees. WER scores are comparable to related works in this area. The large size of the parameter set of this parsing model necessarily restricts the size of training data that may be used. 
In addition, the resource requirements currently present a challenge for scaling up from the relatively sparse word lattices of the NIST HUB-1 corpus (created in a lab setting by professional readers) to lattices created with spontaneous speech in non-ideal conditions. An investigation into the relevant importance of each parameter for the speech recognition task may allow a reduction in the size of the parameter space, with minimal loss of recognition accuracy. A speedup may be achieved, and additional training data could be used. Tuning of parameters using EM has lead to improved WER for other models. We encourage investigation of this technique for lexicalized head-driven lattice parsing. Acknowledgements This research was funded in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Advice on training and test data was provided by Keith Hall of Brown University. References L. R. Bahl, F. Jelinek, and R. L. Mercer. 1983. A maximum likelihood approach to continuous speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5:179–190. E. Black, S. Abney, D. Flickenger, C. Gdaniec, R. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. A procedure for quantitatively comparing the syntactic coverage of English grammars. In Proceedings of Fourth DARPA Speech and Natural Language Workshop, pages 306– 311. J.-C. Chappelier and M. Rajman. 1998. A practical bottom-up algorithm for on-line parsing with stochastic context-free grammars. Technical Report 98-284, Swiss Federal Institute of Technology, July. Eugene Charniak, Sharon Goldwater, and Mark Johnson. 1998. Edge-Based Best-First Chart Parsing. In 6th Annual Workshop for Very Large Corpora, pages 127–133. Eugene Charniak, Don Blaheta, Niyu Ge, Keith Hall, John Hale, and Mark Johnson. 1999. BLLIP 1987-89 WSJ Corpus Release 1. Linguistic Data Consortium. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 2000 Conference of the North American Chapter of the Association for Computational Linguistics, pages 132–129, New Brunswick, U.S.A. Eugene Charniak. 2001. Immediate-head parsing for language models. In Proceedings of the 39th Annual Meeting of the ACL. Ciprian Chelba and Frederick Jelinek. 2000. Structured language modeling. Computer Speech and Language, 14:283–332. Ciprian Chelba. 2000. Exploiting Syntactic Structure for Natural Language Modeling. Ph.D. thesis, Johns Hopkins University. Christopher Collins. 2004. Head-Driven Probabilistic Parsing for Word Lattices. M.Sc. thesis, University of Toronto. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Joshua Goodman. 1997. Global thresholding and multiple-pass parsing. In Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing. Keith Hall and Mark Johnson. 2003. Language modeling using efficient best-first bottom-up parsing. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop. Frederick Jelinek. 1997. Information Extraction From Speech And Text. MIT Press. Lidia Mangu, Eric Brill, and Andreas Stolcke. 2000. Finding consensus in speech recognition: Word error minimization and other applications of confusion networks. Computer Speech and Language, 14(4):373– 400. Hwee Tou Ng and John Zelle. 1997. Corpus-based approaches to semantic interpretation in natural language processing. 
AI Magazine, 18:45–54. A. Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Conference on Empirical Methods in Natural Language Processing, May. Mosur K. Ravishankar. 1997. Some results on search complexity vs accuracy. In DARPA Speech Recognition Workshop, pages 104–107, February. Brian Roark. 2001. Robust Probabilistic Predictive Syntactic Processing: Motivations, Models, and Applications. Ph.D. thesis, Brown University. Brian Roark. 2002. Markov parsing: Lattice rescoring with a statistical parser. In Proceedings of the 40th Annual Meeting of the ACL, pages 287–294. Ann Taylor, Mitchell Marcus, and Beatrice Santorini. 2003. The Penn TreeBank: An Overview, chapter 1. Kluwer, Dordrecht, The Netherlands. Hans Weber, Jörg Spilker, and Günther Görz. 1997. Parsing n best trees from a word lattice. Künstliche Intelligenz, pages 279–288. Peng Xu, Ciprian Chelba, and Frederick Jelinek. 2002. A study on richer syntactic dependencies in structured language modeling. In Proceedings of the 40th Annual Meeting of the ACL, pages 191–198.
Balancing Clarity and Efficiency in Typed Feature Logic through Delaying Gerald Penn University of Toronto 10 King's College Rd. Toronto M5S 3G4 Canada [email protected] Abstract The purpose of this paper is to re-examine the balance between clarity and efficiency in HPSG design, with particular reference to the design decisions made in the English Resource Grammar (LinGO, 1999, ERG). It is argued that a simple generalization of the conventional delay statements used in logic programming is sufficient to restore much of the functionality and concomitant benefit that the ERG elected to forego, with an acceptable although still perceptible computational cost. 1 Motivation By convention, current HPSGs consist, at the very least, of a deductive backbone of extended phrase structure rules, in which each category is a description of a typed feature structure (TFS), augmented with constraints that enforce the principles of grammar. These principles typically take the form of statements, "for all TFSs, ψ holds," where ψ is usually an implication. Historically, HPSG used a much richer set of formal descriptive devices, however, mostly on analogy to developments in the use of types and description logics in programming language theory (Aït-Kaci, 1984), which had served as the impetus for HPSG's invention (Pollard, 1998). This included logic-programming-style relations (Höhfeld and Smolka, 1988), a powerful description language in which expressions could denote sets of TFSs through the use of an explicit disjunction operator, and the full expressive power of implications, in which antecedents of the above-mentioned ψ principles could be arbitrarily complex. Early HPSG-based natural language processing systems faithfully supported large chunks of this richer functionality, in spite of their inability to handle it efficiently — so much so that when the designers of the ERG set out to select formal descriptive devices for their implementation with the aim of "balancing clarity and efficiency," (Flickinger, 2000), they chose to include none of these amenities. The ERG uses only phrase-structure rules and type-antecedent constraints, pushing all would-be description-level disjunctions into its type system or rules. In one respect, this choice was successful, because it did at least achieve a respectable level of efficiency. But the ERG's selection of functionality has acquired an almost liturgical status within the HPSG community in the intervening seven years. Keeping this particular faith, moreover, comes at a considerable cost in clarity, as will be argued below. This paper identifies what it is precisely about this extra functionality that we miss (modularity, Section 2), determines what it would take at a minimum computationally to get it back (delaying, Section 3), and attempts to measure exactly how much that minimal computational overhead would cost (about 4 µs per delay, Section 4). This study has not been undertaken before; the ERG designers' decision was based on largely anecdotal accounts of performance relative to then-current implementations that had not been designed with the intention of minimizing this extra cost (indeed, the ERG baseline had not yet been devised). 2 Modularity: the cost in clarity Semantic types and inheritance serve to organize the constraints and overall structure of an HPSG grammar. 
This is certainly a familiar, albeit vague justification from programming languages research, but the comparison between HPSG and modern programming languages essentially ends with this statement. Programming languages with inclusional polymorphism (subtyping) invariably provide functions or relations and allow these to be reified as methods within user-defined subclasses/subtypes. In HPSG, however, values of features must necessarily be TFSs themselves, and the only method (implicitly) provided by the type signature to act on these values is unification. In the absence of other methods and in the absence of an explicit disjunction operator, the type signature itself has the responsibility of not only declaring definitional subclass relationships, but expressing all other non-definitional disjunctions in the grammar (as subtyping relationships).
[Figure 1: Relative clauses in the ERG (partial) — a type hierarchy including fin-wh-fill-rel-cl, inf-wh-fill-rel-cl, red-rel-cl, simp-inf-rel-cl, fin-hd-fill-ph, inf-hd-fill-ph, wh-rel-cl, non-wh-rel-cl, hd-fill-ph, hd-comp-ph, inter-cl, rel-cl, hd-adj-ph, hd-nexus-ph, clause, non-hd-ph, hd-ph, headed phrase, and phrase.]
It must also encode the necessary accoutrements for implementing all other necessary means of combination as unification, such as difference lists for appending lists, or the so-called qeq constraints of Minimal Recursion Semantics (Copestake et al., 2003) to encode semantic embedding constraints. Unification, furthermore, is an inherently nonmodular, global operation because it can only be defined relative to the structure of the entire partial order of types (as a least upper bound). Of course, some partial orders are more modularizable than others, but legislating the global form that type signatures must take on is not an easy property to enforce without more local guidance. The conventional wisdom in programming languages research is indeed that types are responsible for mediating the communication between modules. A simple type system such as HPSG's can thus only mediate very simple communication. Modern programming languages incorporate some degree of parametric polymorphism, in addition to subtyping, in order to accommodate more complex communication. To date, HPSG's use of parametric types has been rather limited, although there have been some recent attempts to apply them to the ERG (Penn and Hoetmer, 2003). Without this, one obtains type signatures such as Figure 1 (a portion of the ERG's for relative clauses), in which both the semantics of the subtyping links themselves (normally, subset inclusion) and the multi-dimensionality of the empirical domain's analysis erode into a collection of arbitrary naming conventions that are difficult to validate or modify. A more avant-garde view of typing in programming languages research, inspired by the Curry-Howard isomorphism, is that types are equivalent to relations, which is to say that a relation can mediate communication between modules through its arguments, just as a parametric type can through its parameters. The fact that we witness some of these mediators as types and others as relations is simply an intensional reflection of how the grammar writer thinks of them. In classical HPSG, relations were generally used as goals in some proof resolution strategy (such as Prolog's SLD resolution), but even this has a parallel in the world of typing. 
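To make the point about unification's global character concrete, the following Python sketch computes type unification as a least-upper-bound search over a small invented type table; the types and the table are hypothetical illustrations, not part of the ERG or of any HPSG system.

# Type unification as a least upper bound over a toy type hierarchy.
# "More specific" types are subtypes; unifying two types means finding
# their unique most general common subtype, if one exists.
PARENTS = {                      # immediate supertypes (invented signature)
    "bot": [],
    "head": ["bot"], "verb": ["head"], "noun": ["head"],
    "marking": ["bot"], "fin": ["marking"], "inf": ["marking"],
}

def supertypes(t):
    """All supertypes of t, including t itself."""
    seen, stack = set(), [t]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(PARENTS[x])
    return seen

def subtypes(t):
    """All types at least as specific as t."""
    return {s for s in PARENTS if t in supertypes(s)}

def unify_types(t1, t2):
    """Most general common subtype of t1 and t2, or None if incompatible."""
    common = subtypes(t1) & subtypes(t2)
    minimal = [t for t in common
               if not any(s != t and s in supertypes(t) for s in common)]
    return minimal[0] if len(minimal) == 1 else None

print(unify_types("head", "verb"))   # 'verb'
print(unify_types("verb", "noun"))   # None: the unification fails

Note that unify_types has to consult the whole PARENTS table; the result cannot be decided from the two argument types alone, which is precisely the sense in which unification is a global, non-modular operation.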
[Figure 2: Implementing SLD resolution over the append relation as sort resolution — the type append (Arg1: list, Arg2: list, Arg3: list) sits above ⊥ and has subtypes appendbase (Arg1: e_list) and appendrec (Arg1: ne_list, Junk: append), with the constraints:
appendbase =⇒ Arg2: L ∧ Arg3: L.
appendrec =⇒ Arg1: [H|L1] ∧ Arg2: L2 ∧ Arg3: [H|L3] ∧ Junk: (append ∧ A1: L1 ∧ A2: L2 ∧ Arg3: L3).]
Using the type signature and principles of Figure 2, for example, we can perform proof resolution by attempting to sort resolve every TFS to a maximally specific type. This is actually consistent with HPSG's use of feature logic, although most TFS-based NLP systems do not sort resolve because type inference under sort resolution is NP-complete (Penn, 2001). Phrase structure rules, on the other hand, while they can be encoded inside a logic programming relation, are more naturally viewed as algebraic generators. In this respect, they are more similar to the immediate subtyping declarations that grammar writers use to specify type signatures — both chart parsing and transitive closure are instances of all-source shortest-path problems on the same kind of algebraic structure, called a closed semi-ring. The only notion of modularity ever proven to hold of phrase structure rule systems (Wintner, 2002), furthermore, is an algebraic one. 3 Delaying: the missing link of functionality If relations are used in the absence of recursive data structures, a grammar could be specified using relations, and the relations could then be unfolded offline into relation-free descriptions. In this usage, relations are just macros, and not at all inefficient. Early HPSG implementations, however, used quite a lot of recursive structure where it did not need to be, and the structures they used, such as lists, buried important data deep inside substructures that made parsing much slower. Provided that grammar writers use more parsimonious structures, which is a good idea even in the absence of relations, there is nothing wrong with the speed of logic programming relations (Van Roy, 1990). Recursive datatypes are also prone to non-termination problems, however. This can happen when partially instantiated and potentially recursive data structures are submitted to a proof resolution procedure which explores the further instantiations of these structures too aggressively. Although this problem has received significant attention over the last fifteen years in the constraint logic programming (CLP) community, no true CLP implementation yet exists for the logic of typed feature structures (Carpenter, 1992, LTFS). Some aspects of general solution strategies, including incremental entailment simplification (Aït-Kaci et al., 1992), deterministic goal expansion (Doerre, 1993), and guard statements for relations (Doerre et al., 1996) have found their way into the less restrictive sorted feature constraint systems from which LTFS descended. The CUF implementation (Doerre et al., 1996), notably, allowed for delay statements to be attached to relation definitions, which would wait until each argument was at least as specific as some variable-free, disjunction-free description before resolving. In the remainder of this section, a method is presented for reducing delays on any inequation-free description, including variables and disjunctions, to the SICStus Prolog when/2 primitive (Section 3.4). This method takes full advantage of the restrictions inherent to LTFS (Section 3.1) to maximize run-time efficiency. 
In addition, by delaying calls to subgoals individually rather than the (universally quantified) relation definitions themselves,1 we can also use delays to postpone non-deterministic search on disjunctive descriptions (Section 3.3) and to implement complex-antecedent constraints (Section 3.2). As a result, this single method restores all of the functionality we were missing. For simplicity, it will be assumed that the target language of our compiler is Prolog itself. This is inconsequential to the general proposal, although implementing logic programs in Prolog certainly involves less effort. 1 Delaying relational definitions is a subcase of this functionality, which can be made more accessible through some extra syntactic sugar. 3.1 Restrictions inherent to LTFS LTFS is distinguished by its possession of appropriateness conditions that mediate the occurrence of features and types in these records. Appropriateness conditions stipulate, for every type, a finite set of features that can and must have values in TFSs of that type. This effectively forces TFSs to be finite-branching terms with named attributes. Appropriateness conditions also specify a type to which the value of an appropriate feature is restricted (a value restriction). These conditions make LTFS very convenient for linguistic purposes because the combination of typing with named attributes allows for a very terse description language that can easily make reference to a sparse amount of information in what are usually extremely large structures/records: Definition: Given a finite meet semi-lattice of types, Type, a fixed finite set of features, Feat, and a countable set of variables, Var, Φ is the least set of descriptions that contains: • v, v ∈ Var, • τ, τ ∈ Type, • F : φ, F ∈ Feat, φ ∈ Φ, • φ1 ∧ φ2, φ1, φ2 ∈ Φ, and • φ1 ∨ φ2, φ1, φ2 ∈ Φ. A nice property of this description language is that every non-disjunctive description with a non-empty denotation has a unique most general TFS in its denotation. This is called its most general satisfier. We will assume that appropriateness guarantees that there is a unique most general type, Intro(F), to which a given feature, F, is appropriate. This is called unique feature introduction. Where unique feature introduction is not assumed, it can be added automatically in O(F · T) time, where F is the number of features and T is the number of types (Penn, 2001). Meet semi-latticehood can also be restored automatically, although this involves adding exponentially many new types in the worst case. 3.2 Complex Antecedent Constraints It will be assumed here that all complex-antecedent constraints are implicitly universally quantified, and are of the form: α =⇒ (γ ∧ ρ) where α, γ are descriptions from the core description language, Φ, and ρ is drawn from a definite clause language of relations, whose arguments are also descriptions from Φ. As mentioned above, the ERG uses the same form, but where α can only be a type description, τ, and ρ is the trivial goal, true. The approach taken here is to allow for arbitrary antecedents, α, but still to interpret the implications of principles using subsumption by α, i.e., for every TFS (the implicit universal quantification is still there), either the consequent holds, or the TFS is not subsumed by the most general satisfier of α. The subsumption convention dates back to the TDL (Krieger and Schäfer, 1994) and ALE (Carpenter and Penn, 1996) systems, and has earlier antecedents in work that applied lexical rules by subsumption (Krieger and Nerbone, 1991). 
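Unique feature introduction, as appealed to in Section 3.1, can be checked directly from the appropriateness declarations. The following Python sketch does so over a small invented signature; the tables are illustrative assumptions only, not the ERG's or ALE's.

# Intro(F): the unique most general type to which feature F is appropriate.
PARENTS = {"bot": [], "sign": ["bot"], "synsem": ["bot"], "local": ["bot"],
           "category": ["bot"], "head": ["bot"], "verb": ["head"]}
DECLARES = {                      # feature -> types that declare it (invented)
    "SYNSEM": {"sign"}, "LOC": {"synsem"}, "CAT": {"local"},
    "HEAD": {"category"}, "MARKING": {"category"}, "VFORM": {"head"},
}

def supertypes(t):
    seen, stack = set(), [t]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(PARENTS[x])
    return seen

def intro(feature):
    """Return Intro(feature) if introduction is unique, otherwise None."""
    carriers = DECLARES[feature]
    most_general = [t for t in carriers
                    if not any(s != t and s in carriers for s in supertypes(t))]
    return most_general[0] if len(most_general) == 1 else None

print(intro("SYNSEM"))   # 'sign'
print(intro("HEAD"))     # 'category'

If two incomparable types declared the same feature, intro would return None; the O(F · T) repair mentioned above would then amount to adding a new common introducer for that feature.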
The ConTroll constraint solver (Goetz and Meurers, 1997) attempted to handle complex antecedents, but used a classical interpretation of implication and no deductive phrase-structure backbone, which created a very large search space with severe non-termination problems. Within CLP more broadly, there is some related work on guarded constraints (Smolka, 1994) and on inferring guards automatically by residuation of implicational rules (Smolka, 1991), but implicit universal quantification of all constraints seems to be unique to linguistics. In most CLP, constraints on a class of terms or objects must be explicitly posted to a store for each member of that class. If a constraint is not posted for a particular term, then it does not apply to that term. The subsumption-based approach is sound with respect to the classical interpretation of implication for those principles where the classical interpretation really is the correct one. For completeness, some additional resolution method (in the form of a logic program with relations) must be used. As is normally the case in CLP, deductive search is used alongside constraint resolution. Under such assumptions, our principles can be converted to: trigger(α) =⇒ v ∧ whenfs((v = α), ((v = γ) ∧ ρ)) Thus, with an implementation of type-antecedent constraints and an implementation of whenfs/2 (Section 3.3), which delays the goal in its second argument until v is subsumed by (one of) the most general satisfier(s) of description α, all that remains is a method for finding the trigger, the most efficient type antecedent to use, i.e., the most general one that will not violate soundness. trigger(α) can be defined as follows: • trigger(v) = ⊥, • trigger(τ) = τ, • trigger(F : φ) = Intro(F), • trigger(φ1 ∧ φ2) = trigger(φ1) ⊔ trigger(φ2), and • trigger(φ1 ∨ φ2) = trigger(φ1) ⊓ trigger(φ2), where ⊔ and ⊓ are respectively unification and generalization in the type semi-lattice. In this and the next two subsections, we can use Figure 3 as a running example of the various stages of compilation of a typical complex-antecedent constraint, namely the Finiteness Marking Principle for German (1). This constraint is stated relative to the signature shown in Figure 4. The description to the left of the arrow in Figure 3 (1) selects TFSs whose substructure on the path SYNSEM:LOC:CAT satisfies two requirements: its HEAD value has type verb, and its MARKING value has type fin. The principle says that every TFS that satisfies that description must also have a SYNSEM:LOC:CAT:HEAD:VFORM value of type bse. To find the trigger in Figure 3 (1), we can observe that the antecedent is a feature value description (F:φ), so the trigger is Intro(SYNSEM), the unique introducer of the SYNSEM feature, which happens to be the type sign. We can then transform this constraint as above (Figure 3 (2)). The cons and goal operators in (2)–(5) are ALE syntax, used respectively to separate the type antecedent of a constraint from the description component of the consequent (in this case, just the variable, X), and to separate the description component of the consequent from its relational attachment. We know that any TFS subsumed by the original antecedent will also be subsumed by the most general TFS of type sign, because sign introduces SYNSEM.
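The trigger computation itself is easy to prototype. The sketch below re-implements the five cases in Python over a toy fragment of the Figure 4 signature, with descriptions written as nested tuples; the encoding and the tables are assumptions made for illustration, not ALE's actual representation.

# trigger(alpha) over a toy semi-lattice; join = unification, meet = generalization.
PARENTS = {"bot": [], "sign": ["bot"], "synsem": ["bot"], "local": ["bot"],
           "category": ["bot"], "head": ["bot"], "verb": ["head"], "noun": ["head"]}
INTRO = {"SYNSEM": "sign", "LOC": "synsem", "CAT": "local",
         "HEAD": "category", "MARKING": "category", "VFORM": "head"}

def sups(t):
    seen, stack = set(), [t]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(PARENTS[x])
    return seen

def subs(t):
    return {s for s in PARENTS if t in sups(s)}

def join(t1, t2):   # type unification (least upper bound), None on failure
    common = subs(t1) & subs(t2)
    mins = [t for t in common if not any(s != t and s in sups(t) for s in common)]
    return mins[0] if len(mins) == 1 else None

def meet(t1, t2):   # type generalization (greatest lower bound)
    common = sups(t1) & sups(t2)
    maxs = [t for t in common if not any(s != t and t in sups(s) for s in common)]
    return maxs[0] if len(maxs) == 1 else None

def trigger(desc):
    kind = desc[0]
    if kind == "var":  return "bot"                  # trigger(v) = bottom
    if kind == "type": return desc[1]                # trigger(tau) = tau
    if kind == "feat": return INTRO[desc[1]]         # trigger(F:phi) = Intro(F)
    if kind == "and":  return join(trigger(desc[1]), trigger(desc[2]))
    if kind == "or":   return meet(trigger(desc[1]), trigger(desc[2]))

# Antecedent of the Finiteness Marking Principle, (1) in Figure 3:
antecedent = ("feat", "SYNSEM", ("feat", "LOC", ("feat", "CAT",
              ("and", ("feat", "HEAD", ("type", "verb")),
                      ("feat", "MARKING", ("type", "fin"))))))
print(trigger(antecedent))   # 'sign'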
3.3 Reducing Complex Conditionals Let us now implement our delay predicate, whenfs(V=Desc,Goal). Without loss of generality, it can be assumed that the first argument is actually drawn from a more general conditional language, including those of the form Vi = Desci closed under conjunction and disjunction. It can also be assumed that the variables of each Desci are distinct. Such a complex conditional can easily be converted into a normal form in which each atomic conditional contains a non-disjunctive description. Conjunction and disjunction of atomic conditionals then reduce as follows (using the Prolog convention of comma for AND and semi-colon for OR):
whenfs((VD1,VD2),Goal) :-
    whenfs(VD1,whenfs(VD2,Goal)).
whenfs((VD1;VD2),Goal) :-
    whenfs(VD1,(Trigger = 0 -> Goal ; true)),
    whenfs(VD2,(Trigger = 1 -> Goal ; true)).
The binding of the variable Trigger is necessary to ensure that Goal is only resolved once in case the goals for both conditionals eventually unsuspend.
Figure 3: Reduction stages for the Finiteness Marking Principle.
(1) synsem:loc:cat:(head:verb,marking:fin) =⇒ synsem:loc:cat:head:vform:bse.
(2) sign cons X goal whenfs((X=synsem:loc:cat:(head:verb,marking:fin)), (X=synsem:loc:cat:head:vform:bse)).
(3) sign cons X goal whentype(sign,X,(farg(synsem,X,SynVal), whentype(synsem,SynVal,(farg(loc,SynVal,LocVal), whentype(local,LocVal,(farg(cat,LocVal,CatVal), whenfs((CatVal=(head:verb,marking:fin)), (X=synsem:loc:cat:head:vform:bse)))))))).
(4) sign cons X goal (whentype(sign,X,(farg(synsem,X,SynVal), whentype(synsem,SynVal,(farg(loc,SynVal,LocVal), whentype(local,LocVal,(farg(cat,LocVal,CatVal), whentype(category,CatVal,(farg(head,CatVal,HdVal), whentype(verb,HdVal, whentype(category,CatVal,(farg(marking,CatVal,MkVal), whentype(fin,MkVal, (X=synsem:loc:cat:head:vform:bse)))))))))))))).
(5) sign cons X goal (farg(synsem,X,SynVal), farg(loc,SynVal,LocVal), farg(cat,LocVal,CatVal), farg(head,CatVal,HdVal), whentype(verb,HdVal,(farg(marking,CatVal,MkVal), whentype(fin,MkVal, (X=synsem:loc:cat:head:vform:bse))))).
(6) sign(e_list(_),e_list(_),SynVal,DelayVar)
(7) whentype(Type,FS,Goal) :- functor(FS,CurrentType,Arity), (sub_type(Type,CurrentType) -> call(Goal) ; arg(Arity,FS,DelayVar), whentype(Type,DelayVar,Goal)).
Figure 4: Part of the signature underlying the constraint in Figure 3 — ⊥ with immediate subtypes vform (subtypes bse, ind), marking (subtypes fin, inf), head (VFORM:vform; subtypes verb, noun), sign (QRETR:list, QSTORE:list, SYNSEM:synsem), synsem (LOC:local), category (HEAD:head, MARKING:marking), and local (CAT:category).
For atomic conditionals, we must thread two extra arguments, VsIn, and VsOut, which track which variables have been seen so far. Delaying on atomic type conditionals is implemented by a special whentype/3 primitive (Section 3.4), and feature descriptions reduce using unique feature introduction:
whenfs(V=T,Goal,Vs,Vs) :-
    type(T) -> whentype(T,V,Goal).
whenfs(V=(F:Desc),Goal,VsIn,VsOut) :-
    unique_introducer(F,Intro),
    whentype(Intro,V,
      (farg(F,V,FVal),
       whenfs(FVal=Desc,Goal,VsIn,VsOut))).
farg(F,V,FVal) binds FVal to the argument position of V that corresponds to the feature F once V has been instantiated to a type for which F is appropriate. In the variable case, whenfs/4 simply binds the variable when it first encounters it, but subsequent occurrences of that variable create a suspension using Prolog when/2, checking for identity with the previous occurrences. This implements a primitive delay on structure sharing (Section 3.4):
whenfs(V=X,Goal,VsIn,VsOut) :-
    var(X),
    (select(VsIn,X,VsOut) ->    % not first X - wait
       when(?=(V,X), ((V==X) -> call(Goal) ; true))
    ;  % first X - bind
       VsOut=VsIn, V=X, call(Goal)).
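To isolate the control regime being described, here is a small Python simulation of the same idea — goals suspended until a cell's type is promoted far enough, with the conjunction and disjunction reductions (including the fire-once flag) mirroring the whenfs/2 clauses above. It is an illustrative analogue only, not SICStus when/2 or ALE, and the toy type table is invented.

# A cell carries a type that can only be promoted to a subtype; when_type
# suspends a goal until the cell's type reaches the target type.
SUBS = {"bot": {"bot", "head", "verb", "marking", "fin"},
        "head": {"head", "verb"}, "verb": {"verb"},
        "marking": {"marking", "fin"}, "fin": {"fin"}}

class Cell:
    def __init__(self):
        self.typ, self.watchers = "bot", []
    def promote(self, new_typ):
        assert new_typ in SUBS[self.typ], "promotion must move to a subtype"
        self.typ = new_typ
        for watcher in self.watchers[:]:
            watcher()

def when_type(target, cell, goal):
    def check():
        if cell.typ in SUBS[target]:
            cell.watchers.remove(check)
            goal()
    if cell.typ in SUBS[target]:
        goal()
    else:
        cell.watchers.append(check)

# Conditions are functions that accept a goal; conjunction nests suspensions,
# disjunction uses a fire-once flag, as in the whenfs/2 clauses above.
def when_and(cond1, cond2, goal):
    cond1(lambda: cond2(goal))

def when_or(cond1, cond2, goal):
    fired = []
    def once():
        if not fired:
            fired.append(True)
            goal()
    cond1(once)
    cond2(once)

head_val, marking_val = Cell(), Cell()
verb_cond = lambda g: when_type("verb", head_val, g)
fin_cond = lambda g: when_type("fin", marking_val, g)
when_and(verb_cond, fin_cond, lambda: print("antecedent satisfied: apply consequent"))
head_val.promote("verb")       # still suspended on the MARKING value
marking_val.promote("fin")     # now the goal fires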
In practice, whenfs/2 can be partially evaluated by a compiler. In the running example, Figure 3, we can compile the whenfs/2 subgoal in (2) into simpler whentype/2 subgoals, that delay until X reaches a particular type. The second case of whenfs/4 tells us that this can be achieved by successively waiting for the types that introduce each of the features, SYNSEM, LOC, and CAT. As shown in Figure 4, those types are sign, synsem and local, respectively (Figure 3 (3)). The description that CatVal is suspended on is a conjunction, so we successively suspend on each conjunct. The type that introduces both HEAD and MARKING is category (4). In practice, static analysis can greatly reduce the complexity of the resulting relational goals. In this case, static analysis of the type system tells us that all four of these whentype/2 calls can be eliminated (5), since X must be a sign in this context, synsem is the least appropriate type of any SYNSEM value, local is the least appropriate type of any LOC value, and category is the least appropriate type of any CAT value. 3.4 Primitive delay statements The two fundamental primitives typically provided for Prolog terms, e.g., by SICStus Prolog when/2, are: (1) suspending until a variable is instantiated, and (2) suspending until two variables are equated or inequated. The latter corresponds exactly to structure-sharing in TFSs, and to shared variables in descriptions; its implementation was already discussed in the previous section. The former, if carried over directly, would correspond to delaying until a variable is promoted to a type more specific than ⊥, the most general type in the type semilattice. There are degrees of instantiation in LTFS, however, corresponding to long subtyping chains that terminate in ⊥. A more general and useful primitive in a typed language with such chains is suspending until a variable is promoted to a particular type. whentype(Type,X,Goal), i.e., delaying subgoal Goal until variable X reaches Type, is then the non-universally-quantified cousin of the type-antecedent constraints that are already used in the ERG. How whentype(Type,X,Goal) is implemented depends on the data structure used for TFSs, but in Prolog they invariably use the underlying Prolog implementation of when/2. In ALE, for example, TFSs are represented with reference chains that extend every time their type changes. One can simply wait for a variable position at the end of this chain to be instantiated, and then compare the new type to Type. Figure 3 (6) shows a schematic representation of a sign-typed TFS with SYNSEM value SynVal, and two other appropriate feature values. Acting upon this as its second argument, the corresponding definition of whentype(Type,X,Goal) in Figure 3 (7) delays on the variable in the extra, fourth argument position. This variable will be instantiated to a similar term when this TFS promotes to a subtype of sign. As described above, delaying until the antecedent of the principle in Figure 3 (1) is true or false ultimately reduces to delaying until various feature values attain certain types using whentype/3. A TFS may not have substructures that are specific enough to determine whether an antecedent holds or not. In this case, we must wait until it is known whether the antecedent is true or false before applying the consequent. If we reach a deadlock, where several constraints are suspended on their antecedents, then we must use another resolution method to begin testing more specific extensions of the TFS in turn. 
The choice of these other methods characterizes a true CLP solution for LTFS, all of which are enabled by the method presented in this paper. In the case of the signature in Figure 4, one of these methods may test whether a marking-typed substructure is consistent with either fin or inf. If it is consistent with fin, then this branch of the search may unsuspend the Finiteness Marking Principle on a sign-typed TFS that contains this substructure. 4 Measuring the cost of delaying How much of a cost do we pay for using delaying? In order to answer this question definitively, we would need to reimplement a large-scale grammar which was substantially identical in every way to the ERG but for its use of delay statements. The construction of such a grammar is outside the scope of this research programme, but we do have access to MERGE,2 which was designed to have the same extensional coverage of English as the ERG. Internally, the MERGE is quite unlike the ERG. Its TFSs are far larger because each TFS category carries inside it the phrase structure daughters of the rule that created it. It also has far fewer types, more feature values, a heavy reliance on lists, about a third as many phrase structure rules with daughter categories that are an average of 32% larger, and many more constraints. Because of these differences, this version of MERGE runs on average about 300 times slower than the ERG. On the other hand, MERGE uses delaying for all three of the purposes that have been discussed in this paper: complex antecedents, explicit whenfs/2 calls to avoid non-termination problems, and explicit whenfs/2 calls to avoid expensive non-deterministic searches. While there is currently no delay-free grammar to compare it to, we can pop open the hood on our implementation and measure delaying relative to other system functions on MERGE with its test suite. The results are shown in Figure 5.
Figure 5: Run-time allocation of functionality in MERGE. Times were measured on an HP Omnibook XE3 laptop with an 850MHz Pentium II processor and 512MB of RAM, running SICStus Prolog 3.11.0 on Windows 98 SE.
Function            | avg. µs per call | per sent. avg. # calls | avg. % parse time
PS rules            | 1458             | 410                    | 0.41
Chart access        | 13.3             | 13426                  | 0.12
Relations           | 4.0              | 1380288                | 1.88
Delays              | 2.6              | 3633406                | 6.38
Path compression    | 2.0              | 955391                 | 1.31
Constraints         | 1.6              | 1530779                | 1.62
Unification         | 1.5              | 37187128               | 38.77
Dereferencing       | 0.5              | 116731777              | 38.44
Add type MGSat      | 0.3              | 5131391                | 0.97
Retrieve feat. val. | 0.02             | 19617973               | 0.21
These results show that while the per call cost of delaying is on a par with other system functions such as constraint enforcement and relational goal resolution, delaying takes between three and five times more of the percentage of sentence parse time because it is called so often. This reflects, in part, design decisions of the MERGE grammar writers, but it also underscores the importance of having an efficient implementation of delaying for large-scale use. Even if delaying could be eliminated entirely from this grammar at no cost, however, a 6% reduction in parsing speed would not, in the present author's view, warrant the loss of modularity in a grammar of this size.
2 The author sincerely thanks Kordula DeKuthy and Detmar Meurers for their assistance in providing the version of MERGE (0.9.6) and its test suite (1347 sentences, average word length 6.3, average chart size 410 edges) for this evaluation. MERGE is still under development.
5 Conclusion It has been shown that a simple generalization of conventional delay statements to LTFS, combined with a subsumption-based interpretation of implicational constraints and unique feature introduction, is sufficient to restore much of the functionality and concomitant benefit that has been routinely sacrificed in HPSG in the name of parsing efficiency. While a definitive measurement of the computational cost of this functionality has yet to emerge, there is at least no apparent indication from the experiments that we can conduct that disjunction, complex antecedents and/or a judicious use of recursion pose a significant obstacle to tractable grammar design when the right control strategy (CLP with subsumption testing) is adopted. References H. Aït-Kaci, A. Podelski, and G. Smolka. 1992. A feature-based constraint system for logic programming with entailment. In Proceedings of the International Conference on Fifth Generation Computer Systems. H. Aït-Kaci. 1984. A Lattice-theoretic Approach to Computation based on a Calculus of Partially Ordered Type Structures. Ph.D. thesis, University of Pennsylvania. B. Carpenter and G. Penn. 1996. Compiling typed attribute-value logic grammars. In H. Bunt and M. Tomita, editors, Recent Advances in Parsing Technologies, pages 145–168. Kluwer. B. Carpenter. 1992. The Logic of Typed Feature Structures. Cambridge. A. Copestake, D. Flickinger, C. Pollard, and I. Sag. 2003. Minimal Recursion Semantics: An introduction. Journal submission, November 2003. J. Doerre, M. Dorna, J. Junger, and K. Schneider, 1996. The CUF User's Manual. IMS Stuttgart, 2.0 edition. J. Doerre. 1993. Generalizing Earley deduction for constraint-based grammars. Technical Report R1.2.A, DYANA Deliverable. D. Flickinger. 2000. On building a more efficient grammar by exploiting types. Natural Language Engineering, 6(1):15–28. T. Goetz and W.D. Meurers. 1997. Interleaving universal principles and relational constraints over typed feature logic. In Proceedings of the 35th ACL / 8th EACL, pages 1–8. M. Höhfeld and G. Smolka. 1988. Definite relations over constraint languages. LILOG Report 53, IBM Deutschland. H.-U. Krieger and J. Nerbone. 1991. Feature-based inheritance networks for computational lexicons. In Proceedings of the ACQUILEX Workshop on Default Inheritance in the Lexicon, number 238 in University of Cambridge, Computer Laboratory Technical Report. H.-U. Krieger and U. Schäfer. 1994. TDL — a type description language for HPSG part 1: Overview. Technical Report RR-94-37, Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), November. LinGO. 1999. The LinGO grammar and lexicon. Available on-line at http://lingo.stanford.edu. G. Penn and K. Hoetmer. 2003. In search of epistemic primitives in the English Resource Grammar. In Proceedings of the 10th International Conference on Head-driven Phrase Structure Grammar, pages 318–337. G. Penn. 2001. Tractability and structural closures in attribute logic signatures. In Proceedings of the 39th ACL, pages 410–417. C. J. Pollard. 1998. Personal communication to the author. G. Smolka. 1991. Residuation and guarded rules for constraint logic programming. Technical Report RR-91-13, DFKI. G. Smolka. 1994. A calculus for higher-order concurrent constraint programming with deep guards. Technical Report RR-94-03, DFKI. P. Van Roy. 1990. Can Logic Programming Execute as Fast as Imperative Programming? Ph.D. thesis, University of California, Berkeley. S. Wintner. 2002. Modular context-free grammars. Grammars, 5(1):41–63. 
| 2004 | 31 |
Minimal Recursion Semantics as Dominance Constraints: Translation, Evaluation, and Analysis Ruth Fuchss,1 Alexander Koller,1 Joachim Niehren,2 and Stefan Thater1 1 Dept. of Computational Linguistics, Saarland University, Saarbrücken, Germany ∗ 2 INRIA Futurs, Lille, France {fuchss,koller,stth}@coli.uni-sb.de ∗ Supported by the CHORUS project of the SFB 378 of the DFG. Abstract We show that a practical translation of MRS descriptions into normal dominance constraints is feasible. We start from a recent theoretical translation and verify its assumptions on the outputs of the English Resource Grammar (ERG) on the Redwoods corpus. The main assumption of the translation— that all relevant underspecified descriptions are nets—is validated for a large majority of cases; all non-nets computed by the ERG seem to be systematically incomplete. 1 Introduction Underspecification is the standard approach to dealing with scope ambiguity (Alshawi and Crouch, 1992; Pinkal, 1996). The readings of underspecified expressions are represented by compact and concise descriptions, instead of being enumerated explicitly. Underspecified descriptions are easier to derive in syntax-semantics interfaces (Egg et al., 2001; Copestake et al., 2001), useful in applications such as machine translation (Copestake et al., 1995), and can be resolved by need. Two important underspecification formalisms in the recent literature are Minimal Recursion Semantics (MRS) (Copestake et al., 2004) and dominance constraints (Egg et al., 2001). MRS is the underspecification language which is used in large-scale HPSG grammars, such as the English Resource Grammar (ERG) (Copestake and Flickinger, 2000). The main advantage of dominance constraints is that they can be solved very efficiently (Althaus et al., 2003; Bodirsky et al., 2004). Niehren and Thater (2003) defined, in a theoretical paper, a translation from MRS into normal dominance constraints. This translation clarified the precise relationship between these two related formalisms, and made the powerful meta-theory of dominance constraints accessible to MRS. Their goal was to also make the large grammars for MRS and the efficient constraint solvers for dominance constraints available to the other formalism. However, Niehren and Thater made three technical assumptions: 1. that EP-conjunction can be resolved in a preprocessing step; 2. that the qeq relation in MRS is simply dominance; 3. and (most importantly) that all linguistically correct and relevant MRS expressions belong to a certain class of constraints called nets. This means that it is not obvious whether their result can be immediately applied to the output of practical grammars like the ERG. In this paper, we evaluate the truth of these assumptions on the MRS expressions which the ERG computes for the sentences in the Redwoods Treebank (Oepen et al., 2002). The main result of our evaluation is that 83% of the Redwoods sentences are indeed nets, and 17% aren't. A closer analysis of the non-nets reveals that they seem to be systematically incomplete, i. e. they predict more readings than the sentence actually has. This supports the claim that all linguistically correct MRS expressions are indeed nets. We also verify the other two assumptions, one empirically and one by proof. Our results are practically relevant because dominance constraint solvers are much faster and have more predictable runtimes when solving nets than the LKB solver for MRS (Copestake, 2002), as we also show here. In addition, nets might be useful as a debugging tool to identify potentially problematic semantic outputs when designing a grammar. Plan of the Paper. We first recall the definitions of MRS (§2) and dominance constraints (§3). We present the translation from MRS-nets to dominance constraints (§4) and prove that it can be extended to MRS-nets with EP-conjunction (§5). Finally we evaluate the net hypothesis and the qeq assumption on the Redwoods corpus, and compare runtimes (§6). 2 Minimal Recursion Semantics This section presents a definition of Minimal Recursion Semantics (MRS) (Copestake et al., 2004) including EP-conjunctions with a merging semantics. Full MRS with qeq-semantics, top handles, and event variables will be discussed in the last paragraph. 
In addition, nets might be useful as a debugging tool to identify potentially problematic semantic outputs when designing a grammar. Plan of the Paper. We first recall the definitions of MRS (§2) and dominance constraints (§3). We present the translation from MRS-nets to dominance constraints (§4) and prove that it can be extended to MRS-nets with EP-conjunction (§5). Finally we evaluate the net hypothesis and the qeq assumption on the Redwoods corpus, and compare runtimes (§6). 2 Minimal Recursion Semantics This section presents a definition of Minimal Recursion Semantics (MRS) (Copestake et al., 2004) including EP-conjunctions with a merging semantics. Full MRS with qeq-semantics, top handles, and event variables will be discussed in the last paragraph. MRS Syntax. MRS constraints are conjunctive formulas over the following vocabulary: 1. An infinite set of variables ranged over by h. Variables are also called handles. 2. An infinite set of constants x,y,z denoting indivual variables of the object language. 3. A set of function symbols ranged over by P, and a set of quantifier symbols ranged over by Q. Pairs Qx are further function symbols. 4. The binary predicate symbol ‘=q’. MRS constraints have three kinds of literals, two kinds of elementary predications (EPs) in the first two lines and handle constraints in the third line: 1. h : P(x1,...,xn,h1,...,hm), where n,m ≥0 2. h : Qx(h1,h2) 3. h1 =q h2 In EPs, label positions are on the left of ‘:’ and argument positions on the right. Let M be a set of literals. The label set lab(M) contains all handles of M that occur in label but not in argument position, and the argument handle set arg(M) contains all handles of M that occur in argument but not in label position. Definition 1 (MRS constraints). An MRS constraint (MRS for short) is a finite set M of MRSliterals such that: M1 every handle occurs at most once in argument position in M, M2 handle constraints h =q h′ always relate argument handles h to labels h′, and M3 for every constant (individual variable) x in argument position in M there is a unique literal of the form h : Qx(h1,h2) in M. We say that an MRS M is compact if every handle h in M is either a label or an argument handle. Compactness simplifies the following proofs, but it is no serious restriction in practice. We usually represent MRSs as directed graphs: the nodes of the graph are the handles of the MRS, EPs are represented as solid lines, and handle constraints are represented as dotted lines. For instance, the following MRS is represented by the graph on the left of Fig. 1. {h5 : somey(h6,h8),h7 : book(y),h1 : everyx(h2,h4), h3 : student(x),h9 : read(x,y),h2 =q h3,h6 =q h7} everyx somey studentx booky readx,y everyx somey studentx booky readx,y everyx somey studentx booky readx,y Figure 1: An MRS and its two configurations. Note that the relation between bound variables and their binders is made explicit by binding edges drawn as dotted lines (cf. C2 below); transitively redundand binding edges (e. g., from somey to booky) however are omited. MRS Semantics. Readings of underspecified representations correspond to configurations of MRS constraints. Intuitively, a configuration is an MRS where all handle constraints have been resolved by plugging the “tree fragments” into each other. Let M be an MRS and h,h′ be handles in M. 
MRS Semantics. Readings of underspecified representations correspond to configurations of MRS constraints. Intuitively, a configuration is an MRS where all handle constraints have been resolved by plugging the "tree fragments" into each other. Let M be an MRS and h, h′ be handles in M. We say that h immediately outscopes h′ in M if there is an EP in M with label h and argument handle h′, and we say that h outscopes h′ in M if the pair (h,h′) belongs to the reflexive transitive closure of the immediate outscope relation of M. Definition 2 (MRS configurations). An MRS M is a configuration if it satisfies conditions C1 and C2: C1 The graph of M is a tree of solid edges: (i) all handles are labels, i.e., arg(M) = ∅ and M contains no handle constraints, (ii) handles don't properly outscope themselves, and (iii) all handles are pairwise connected by EPs in M. C2 If h : Qx(h1,h2) and h′ : P(...,x,...) belong to M, then h outscopes h′ in M, i.e., binding edges in the graph of M are transitively redundant. We say that a configuration M is a configuration of an MRS M′ if there exists a partial substitution σ : lab(M′) ⇝ arg(M′) that states how to identify labels with argument handles of M′ so that: C3 M = {σ(E) | E is an EP in M′}, and C4 for all h =q h′ in M′, h outscopes σ(h′) in M. The value σ(E) is obtained by substituting all labels in dom(σ) in E while leaving all other handles unchanged. The MRS on the left of Fig. 1, for instance, has two configurations given to the right. EP-conjunctions. Definitions 1 and 2 generalize the idealized definition of MRS of Niehren and Thater (2003) by EP-conjunctions with a merging semantics. An MRS M contains an EP-conjunction if it contains different EPs with the same label h. The intuition is that EP-conjunctions are interpreted by object language conjunctions.
[Figure 2: An unsolvable MRS with EP-conjunction — {h1 : P1(h2), h1 : P2(h3), h4 : P3, h2 =q h4, h3 =q h4}.]
[Figure 3: A solvable MRS without merging-free configuration.]
Fig. 2 shows an MRS with an EP-conjunction and its graph. The function symbols of both EPs are conjoined and their arguments are merged into a set. The MRS does not have configurations since the argument handles of the merged EPs cannot jointly outscope the node P4. We call a configuration merging if it contains EP-conjunctions, and merging-free otherwise. Merging configurations are needed to solve EP-conjunctions such as {h : P1, h : P2}. Unfortunately, they can also solve MRSs without EP-conjunctions, such as the MRS in Fig. 3. The unique configuration of this MRS is a merging configuration: the labels of P1 and P2 must be identified with the only available argument handle. The admission of merging configurations may thus have important consequences for the solution space of arbitrary MRSs. Standard MRS. Standard MRS requires three further extensions: (i) qeq-semantics, (ii) top handles, and (iii) event variables. These extensions are less relevant for our comparison. The qeq-semantics restricts the interpretation of handle constraints beyond dominance. Let M be an MRS with handles h, h′. We say that h is qeq h′ in M if either h = h′, or there is an EP h : Qx(h0,h1) in M and h1 is qeq h′ in M. Every qeq-configuration is a configuration as defined above, but not necessarily vice versa. The qeq-restriction is relevant in theory but will turn out unproblematic in practice (see §6). Standard MRS requires the existence of top handles in all MRS constraints. This condition doesn't matter for MRSs with connected graphs (see (Bodirsky et al., 2004) for the proof idea). MRSs with unconnected graphs clearly do not play any role in practical underspecified semantics.
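Conditions C1 and C2 can be checked mechanically on a candidate configuration. The sketch below does so in Python for the "every > some" reading of the example MRS, with handle constraints already resolved; it is an illustrative helper under invented names, not the MRS solver.

# The "every > some" configuration of the example MRS; all handles are labels.
EPS = [
    ("h1", "every_x", ["x"], ["h3", "h5"]),
    ("h3", "student", ["x"], []),
    ("h5", "some_y",  ["y"], ["h7", "h9"]),
    ("h7", "book",    ["y"], []),
    ("h9", "read",    ["x", "y"], []),
]
BINDER = {"x": "h1", "y": "h5"}      # which EP binds which variable

def outscopes(eps):
    """Reflexive-transitive closure of immediate outscoping."""
    edges = {(l, h) for (l, _, _, hs) in eps for h in hs}
    reach = {l: {l} for (l, _, _, _) in eps}
    changed = True
    while changed:
        changed = False
        for (a, b) in edges:
            new = reach[a] | reach.get(b, {b})
            if new != reach[a]:
                reach[a] = new
                changed = True
    return reach

def is_tree(eps):
    """C1: solid edges form a tree (unique root, one parent each, all reachable)."""
    labs = {l for (l, _, _, _) in eps}
    args = [h for (_, _, _, hs) in eps for h in hs]
    roots = labs - set(args)
    if not (set(args) <= labs and len(args) == len(set(args)) and len(roots) == 1):
        return False
    (root,) = roots
    return outscopes(eps)[root] == labs

def binding_ok(eps):
    """C2: each quantifier outscopes every EP that uses its variable."""
    reach = outscopes(eps)
    return all(l in reach[BINDER[v]] for (l, _, vs, _) in eps for v in vs)

print(is_tree(EPS), binding_ok(EPS))   # True True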
Finally, MRSs permit event variables e, e′ as a second form of constants. They are treated equally to individual variables except that they cannot be bound by quantifiers. 3 Dominance Constraints Dominance constraints are a general framework for describing trees. For scope underspecification, they are used to describe the syntax trees of object language formulas. Dominance constraints are the core language underlying CLLS (Egg et al., 2001) which adds parallelism and binding constraints. Syntax and semantics. We assume a possibly infinite signature Σ = {f, g, ...} of function symbols with fixed arities (written ar(f)) and an infinite set of variables ranged over by X, Y, Z. A dominance constraint ϕ is a conjunction of dominance, inequality, and labeling literals of the following form, where ar(f) = n: ϕ ::= X ◁∗ Y | X ≠ Y | X : f(X1,...,Xn) | ϕ ∧ ϕ′ Dominance constraints are interpreted over finite constructor trees, i.e., ground terms constructed from the function symbols in Σ. We identify ground terms with trees that are rooted, ranked, edge-ordered and labeled. A solution for a dominance constraint ϕ consists of a tree τ and an assignment α that maps the variables in ϕ to nodes of τ such that all constraints are satisfied: labeling literals X : f(X1,...,Xn) are satisfied iff α(X) is labeled with f and its daughters are α(X1),...,α(Xn) in this order; dominance literals X ◁∗ Y are satisfied iff α(X) dominates α(Y) in τ; and inequality literals X ≠ Y are satisfied iff α(X) and α(Y) are distinct nodes. Solved forms. Satisfiable dominance constraints have infinitely many solutions. Constraint solvers for dominance constraints therefore do not enumerate solutions but solved forms, i.e., "tree shaped" constraints. To this end, we consider (weakly) normal dominance constraints (Bodirsky et al., 2004). We call a variable a hole of ϕ if it occurs in argument position in ϕ and a root of ϕ otherwise. Definition 3. A dominance constraint ϕ is normal if it satisfies the following conditions. N1 (a) each variable of ϕ occurs at most once in the labeling literals of ϕ. (b) each variable of ϕ occurs at least once in the labeling literals of ϕ. N2 for distinct roots X and Y of ϕ, X ≠ Y is in ϕ. N3 (a) if X ◁∗ Y occurs in ϕ, Y is a root in ϕ. (b) if X ◁∗ Y occurs in ϕ, X is a hole in ϕ. We call ϕ weakly normal if it satisfies the above properties except for N1 (b) and N3 (b). Note that Definition 3 imposes compactness: the height of tree fragments is always one. This is not a serious restriction, as weakly normal dominance constraints can be compactified, provided that dominance links relate either roots or holes with roots.
[Figure 4: A normal dominance constraint (left) and its two solved forms (right).]
Weakly normal dominance constraints ϕ can be represented by dominance graphs. The dominance graph of ϕ is a directed graph G = (V, ET ⊎ ED) defined as follows. The nodes of G are the variables of ϕ. Labeling literals X : f(X1,...,Xk) are represented by tree edges (X,Xi) ∈ ET, for 1 ≤ i ≤ k, and dominance literals X ◁∗ X′ are represented by dominance edges (X,X′) ∈ ED. Inequality literals are not represented in the graph. In pictures, labeling literals are drawn with solid lines and dominance edges with dotted lines. We say that a constraint ϕ is in solved form if its graph is in solved form. A graph G is in solved form iff it is a forest.
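The forest test is simple to state in code. The following Python sketch represents the dominance graph of the constraint in Figure 4 with invented node names and checks whether a set of dominance edges puts it in solved form; it is an illustration of the definition, not the LEDA-based solver used later.

# The dominance graph of the Figure 4 constraint: two quantifier fragments
# with two holes each, three atomic fragments, and four dominance edges.
TREE_EDGES = [("every_x", "e_restr"), ("every_x", "e_scope"),
              ("some_y", "s_restr"), ("some_y", "s_scope")]
DOM_EDGES = [("e_restr", "student_x"), ("e_scope", "read_xy"),
             ("s_restr", "book_y"), ("s_scope", "read_xy")]

def is_solved_form(tree_edges, dom_edges):
    """A graph is in solved form iff it is a forest: no node has two incoming
    edges and there is no directed cycle."""
    edges = tree_edges + dom_edges
    targets = [b for (_, b) in edges]
    if len(targets) != len(set(targets)):     # some node has two parents
        return False
    nodes = {n for e in edges for n in e}
    out = {n: {b for (a, b) in edges if a == n} for n in nodes}
    while True:
        sinks = [n for n in nodes if not out[n]]
        if not sinks:
            break
        nodes -= set(sinks)
        for n in nodes:
            out[n] -= set(sinks)
    return not nodes          # everything peeled away, so no cycle

print(is_solved_form(TREE_EDGES, DOM_EDGES))   # False: read_xy has two parents

# One of the two minimal solved forms ("every" outscopes "some"):
SOLVED = [("e_restr", "student_x"), ("e_scope", "some_y"),
          ("s_restr", "book_y"), ("s_scope", "read_xy")]
print(is_solved_form(TREE_EDGES, SOLVED))      # True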
The solved forms of G are solved forms G′ which are more specific than G, i.e., they differ only in their dominance edges and the reachability relation of G extends the reachability of G′. A minimal solved form is a solved form which is minimal with respect to specificity. Simple solved forms are solved forms where every hole has exactly one outgoing dominance edge. Fig. 4 shows as a concrete example the translation of the MRS description in Fig. 1 together with its two minimal solved forms. Both solved forms are simple. 4 Translating Merging-Free MRS-Nets This section defines MRS-nets without EP-conjunctions, and sketches their translation to normal dominance constraints. We define nets equally for MRSs and dominance constraints. The key semantic property of nets is that different notions of solutions coincide. In this section, we show that merging-free configurations coincide with minimal solved forms. §5 generalizes the translation by adding EP-conjunctions and permitting merging semantics. Pre-translation. An MRS constraint M can be represented as a corresponding dominance constraint ϕM as follows: The variables of ϕM are the handles of M, and the literals of ϕM correspond to those of M in the following sense:
h : P(x1,...,xn,h1,...,hk) → h : Px1,...,xn(h1,...,hk)
h : Qx(h1,h2) → h : Qx(h1,h2)
h =q h′ → h ◁∗ h′
Additionally, dominance literals h ◁∗ h′ are added to ϕM for all h, h′ s.t. h : Qx(h1,h2) and h′ : P(...,x,...) belong to M (cf. C2), and literals h ≠ h′ are added to ϕM for all h, h′ in distinct label position in M. Lemma 1. If a compact MRS M does not contain EP-conjunctions then ϕM is weakly normal, and the graph of M is the transitive reduction of the graph of ϕM. Nets. A hypernormal path (Althaus et al., 2003) in a constraint graph is a path in the undirected graph that contains for every leaf X at most one incident dominance edge. Let ϕ be a weakly normal dominance constraint and let G be the constraint graph of ϕ. We say that ϕ is a dominance net if the transitive reduction G′ of G is a net. G′ is a net if every tree fragment F of G′ satisfies one of the following three conditions, illustrated in Fig. 5:
[Figure 5: Fragment schemata of nets — (a) strong, (b) weak, (c) island.]
Strong. Every hole of F has exactly one outgoing dominance edge, and there is no weak root-to-root dominance edge. Weak. Every hole except for the last one has exactly one outgoing dominance edge; the last hole has no outgoing dominance edge, and there is exactly one weak root-to-root dominance edge. Island. The fragment has one hole X, and all variables which are connected to X by dominance edges are connected by a hypernormal path in the graph where F has been removed. We say that an MRS M is an MRS-net if the pre-translation of its literals results in a dominance net ϕM. We say that an MRS-net M is connected if ϕM is connected; ϕM is connected if the graph of ϕM is connected. Note that this notion of MRS-nets implies that MRS-nets cannot contain EP-conjunctions as otherwise the resulting dominance constraint would not be weakly normal. §5 shows that EP-conjunctions can be resolved, i.e., MRSs with EP-conjunctions can be mapped to corresponding MRSs without EP-conjunctions. If M is an MRS-net (without EP-conjunctions), then M can be translated into a corresponding dominance constraint ϕ by first pre-translating M into ϕM and then normalizing ϕM by replacing weak root-to-root dominance edges in weak fragments by dominance edges which start from the open last hole.
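The strong and weak conditions amount to simple edge counts per fragment, which the Python sketch below makes explicit; the island condition additionally needs the hypernormal-path test and is only stubbed here. The names and data are invented for illustration; this is not the checker used for the evaluation in Section 6.

# Classify one fragment of a dominance graph as "strong" or "weak"; the island
# case would additionally need the hypernormal-path test, which is omitted.
def classify_fragment(root, holes, dom_edges, all_roots):
    per_hole = [sum(1 for (a, _) in dom_edges if a == h) for h in holes]
    weak_edges = sum(1 for (a, b) in dom_edges if a == root and b in all_roots)
    if all(n == 1 for n in per_hole) and weak_edges == 0:
        return "strong"
    if (per_hole and all(n == 1 for n in per_hole[:-1])
            and per_hole[-1] == 0 and weak_edges == 1):
        return "weak"
    if len(holes) == 1:
        return "island?"       # needs the hypernormal-path check, omitted here
    return "violates the net conditions"

ALL_ROOTS = {"every_x", "some_y", "student_x", "book_y", "read_xy"}
DOM_EDGES = [("e_restr", "student_x"), ("e_scope", "read_xy"),
             ("s_restr", "book_y"), ("s_scope", "read_xy")]
print(classify_fragment("every_x", ["e_restr", "e_scope"], DOM_EDGES, ALL_ROOTS))
# -> 'strong': both holes have exactly one outgoing dominance edge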
Theorem 1 (Niehren and Thater, 2003). Let M be an MRS and ϕM be the translation of M. If M is a connected MRS-net, then the merging-free configurations of M bijectively correspond to the minimal solved forms of ϕM. The following section generalizes this result to MRS-nets with a merging semantics. 5 Merging and EP-Conjunctions We now show that if an MRS is a net, then all its configurations are merging-free, which in particular means that the translation can be applied to the more general version of MRS with a merging semantics. Lemma 2 (Niehren and Thater, 2003). All minimal solved forms of a connected dominance net are simple. Lemma 3. If all solved forms of a normal dominance constraint are simple, then all of its solved forms are minimal. Theorem 2. The configurations of an MRS-net M are merging-free. Proof. Let M′ be a configuration of M and let σ be the underlying substitution. We construct a solved form ϕM′ as follows: the labeling literals of ϕM′ are the pre-translations of the EPs in M, and ϕM′ has a dominance literal h′ ◁∗ h iff (h,h′) ∈ σ, and inequality literals X ≠ Y for all distinct roots in ϕM′. By condition C1 in Def. 2, the graph of M′ is a tree, hence the graph of ϕM′ must also be a tree, i.e., ϕM′ is a solved form. ϕM′ must also be more specific than the graph of ϕM because the graph of M′ satisfies all dominance requirements of the handle constraints in M, hence ϕM′ is a solved form of ϕM. M′ clearly solves ϕM′. By Lemmata 2 and 3, ϕM′ must be simple and minimal because ϕM is a net. But then M′ cannot contain EP-conjunctions, i.e., M′ is merging-free. The merging semantics of MRS is needed to solve EP-conjunctions. As we have seen, the merging semantics is not relevant for MRS constraints which are nets. This also verifies Niehren and Thater's (2003) assumption that EP-conjunctions are "syntactic sugar" which can be resolved in a preprocessing step: EP-conjunctions can be resolved by exhaustively applying the following rule which adds new literals to make the implicit conjunction explicit: h : E1(h1,...,hn), h : E2(h′1,...,h′m) ⇒ h : 'E1&E2'(h1,...,hn,h′1,...,h′m), where E(h1,...,hn) stands for an EP with argument handles h1,...,hn, and where 'E1&E2' is a complex function symbol. If this rule is applied exhaustively to an MRS M, we obtain an MRS M′ without EP-conjunctions. It should be intuitively clear that the configurations of M and M′ correspond; therefore, the configurations of M also correspond to the minimal solved forms of the translation of M′. 6 Evaluation The two remaining assumptions underlying the translation are the "net-hypothesis" that all linguistically relevant MRS expressions are nets, and the "qeq-hypothesis" that handle constraints can be given a dominance semantics in practice. In this section, we empirically show that both assumptions are met in practice. As an interesting side effect, we also compare the run-times of the constraint-solvers we used, and we find that the dominance constraint solver typically outperforms the MRS solver, often by significant margins. Grammar and Resources. We use the English Resource Grammar (ERG), a large-scale HPSG grammar, in connection with the LKB system, a grammar development environment for typed feature grammars (Copestake and Flickinger, 2000). We use the system to parse sentences and output MRS constraints which we then translate into dominance constraints. As a test corpus, we use the Redwoods Treebank (Oepen et al., 2002) which contains 6612 sentences. 
We exclude the sentences that cannot be parsed due to memory capacities or words and grammatical structures that are not included in the ERG, or which produce ill-formed MRS expressions (typically violating M1), and thus base our evaluation on a corpus containing 6242 sentences. In case of syntactic ambiguity, we only use the first reading output by the LKB system. To enumerate the solutions of MRS constraints and their translations, we use the MRS solver built into the LKB system and a solver for weakly normal dominance constraints (Bodirsky et al., 2004), which is implemented in C++ and uses LEDA, a class library for efficient data types and algorithms (Mehlhorn and Näher, 1999). 6.1 Relevant Constraints are Nets We check for 6242 constraints whether they constitute nets. It turns out that 5200 (83.31%) constitute nets while 1042 (16.69%) violate one or more net conditions.
[Figure 6: Two classes of non-nets — (a) open hole, (b) ill-formed island.]
Non-nets. The evaluation shows that the hypothesis that all relevant constraints are nets seems to be falsified: there are constraints that are not nets. However, a closer analysis suggests that these constraints are incomplete and predict more readings than the sentence actually has. This can also be illustrated with the average number of solutions: For the Redwoods corpus in combination with the ERG, nets have 1836 solutions on average, while non-nets have 14039 solutions, which is a factor of 7.7. The large number of solutions for non-nets is due to the "structural weakness" of non-nets; often, non-nets have only merging configurations. Non-nets can be classified into two categories (see Fig. 6): The first class are violated "strong" fragments which have holes without outgoing dominance edge and without a corresponding root-to-root dominance edge. The second class are violated "island" fragments where several outgoing dominance edges from one hole lead to nodes which are not hypernormally connected. There are two more possibilities for violated "weak" fragments — having more than one weak dominance edge or having a weak dominance edge without empty hole — but they occur infrequently (4.4%). If those weak fragments were normalized, they would constitute violated island fragments, so we count them as such. 124 (11.9%) of the non-nets contain empty holes, 762 (73.13%) contain violated island fragments, and 156 (14.97%) contain both. Those constraints that contain only empty holes and no violated island fragments cannot be configured, as in configurations, all holes must be filled. Fragments with open holes occur frequently, but not in all contexts, for constraints representing for example time specifications (e.g., "from nine to twelve" or "a three o'clock flight") or intensional expressions (e.g., "Is it?" or "I suppose").
[Figure 7: An MRS for "A sauna and a cafeteria are available" (top) and two of sixteen merging configurations (below); the two unconnected subconstraints below the top fragment are marked ϕ1 and ϕ2.]
[Figure 8: The "repaired" MRS from Fig. 7.]
Ill-formed island fragments are often triggered by some kind of coordination, like "a restaurant and/or a sauna" or "a hundred and thirty Marks", also implicit ones like "one hour thirty minutes" or "one thirty". 
Constraints with both kinds of violated fragments emerge when there is some input that yields an open hole and another part of the input yields a violated island fragment (for example in constructions like “from nine to eleven thirty” or “the ten o’clock flight Friday or Thursday”, but not necessarily as obviously as in those examples). The constraint on the left in Fig. 7 gives a concrete example for violated island fragments. The topmost fragment has outgoing dominance edges to otherwise unconnected subconstraints ϕ1 and ϕ2. Under the merging-free semantics of the MRS dialect used in (Niehren and Thater, 2003) where every hole has to be filled exactly once, this constraint cannot be configured: there is no hole into which “available” could be plugged. However, standard MRS has merging configuration where holes can be filled more than once. For the constraint in Fig. 7 this means that “available” can be merged in almost everywhere, only restricted by the “qeq-semantics” which forbids for instance “available” to be merged with “sauna.” In fact, the MRS constraint solver derives sixteen configurations for the constraint, two of which are given in Fig. 7, although the sentence has only two scope readings. We conjecture that non-nets are semantically “incomplete” in the sense that certain constraints are missing. For instance, an alternative analysis for the above constraint is given in Fig. 8. The constraint adds an additional argument handle to “and” and places a dominance edge from this handle to “available.” In fact, the constraint is a net; it has exactly two readings. 6.2 Qeq is dominance For all nets, the dominance constraint solver calculates the same number of solutions as the MRS solver does, with 3 exceptions that hint at problems in the syntax-semantics interface. As every configuration that satisfies proper qeq-constraints is also a configuration if handle constraints are interpreted under the weaker notion of dominance, the solutions computed by the dominance constraint solver and the MRS solver must be identical for every constraint. This means that the additional expressivity of proper qeq-constraints is not used in practice, which in turn means that in practice, the translation is sound and correct even for the standard MRS notion of solution, given the constraint is a net. 6.3 Comparison of Runtimes The availability of a large body of underspecified descriptions both in MRS and in dominance constraint format makes it possible to compare the solvers for the two underspecification formalisms. We measured the runtimes on all nets using a Pentium III CPU at 1.3 GHz. The tests were run in a multi-user environment, but as the MRS and dominance measurements were conducted pairwise, conditions were equal for every MRS constraint and corresponding dominance constraint. The measurements for all MRS-nets with less than thirty dominance edges are plotted in Fig. 9. Inputs are grouped according to the constraint size. The filled circles indicate average runtimes within each size group for enumerating all solutions using the dominance solver, and the empty circles indicate the same for the LKB solver. The brackets around each point indicate maximum and minimum runtimes in that group. Note that the vertical axis is logarithmic. We excluded cases in which one or both of the solvers did not return any results: There were 173 sentences (3.33% of all nets) on which the LKB solver ran out of memory, and 1 sentence (0.02%) that took the dominance solver more than two minutes to solve. 
Figure 9: Comparison of runtimes for the MRS and dominance constraint solvers (time in ms, logarithmic scale, against size in number of dominance edges; DC solver (LEDA) and MRS solver).

The graph shows that the dominance constraint solver is generally much faster than the LKB solver: the average runtime is lower by a factor of 50 for constraints of size 10, and this grows to a factor of 500 for constraints of size 25. Our experiments show that the dominance solver outperforms the LKB solver in 98% of the cases. In addition, its runtimes are much more predictable, as the brackets in the graph are also shorter by two or three orders of magnitude, and the standard deviation is much smaller (not shown).

7 Conclusion

We developed Niehren and Thater's (2003) theoretical translation into a practical system for translating MRS into dominance constraints, applied it systematically to MRSs produced by the English Resource Grammar for the Redwoods treebank, and evaluated the results. We showed that:

1. most "real life" MRS expressions are MRS-nets, which means that the translation is correct in these cases;
2. for nets, merging is not necessary (or even possible);
3. the practical translation works perfectly for all MRS-nets from the corpus; in particular, the =q relation can be taken as synonymous with dominance in practice.

Because the translation works so well in practice, we were able to compare the runtimes of MRS and dominance constraint solvers on the same inputs. This evaluation shows that the dominance constraint solver outperforms the MRS solver and displays more predictable runtimes. A researcher working with MRS can now solve MRS-nets using the efficient dominance constraint solvers.

A small but significant number of the MRS constraints derived by the ERG are not nets. We have argued that these constraints seem to be systematically incomplete, and their correct completions are indeed nets. A more detailed evaluation is an important task for future research, but if our "net hypothesis" is true, a system that tests whether all outputs of a grammar are nets (or a formal "safety criterion" that would prove this theoretically) could be a useful tool for developing and debugging grammars.

From a more abstract point of view, our evaluation contributes to the fundamental question of what expressive power an underspecification formalism needs. It turned out that the distinction between qeq and dominance hardly plays a role in practice. If the net hypothesis is true, it also follows that merging is not necessary because EP-conjunctions can be converted into ordinary conjunctions. More research along these lines could help unify different underspecification formalisms and the resources that are available for them.

Acknowledgments

We are grateful to Ann Copestake for many fruitful discussions, and to our reviewers for helpful comments.

References

H. Alshawi and R. Crouch. 1992. Monotonic semantic interpretation. In Proc. 30th ACL, pages 32–39. Ernst Althaus, Denys Duchier, Alexander Koller, Kurt Mehlhorn, Joachim Niehren, and Sven Thiel. 2003. An efficient graph algorithm for dominance constraints. Journal of Algorithms, 48:194–219. Manuel Bodirsky, Denys Duchier, Joachim Niehren, and Sebastian Miele. 2004. An efficient algorithm for weakly normal dominance constraints. In ACM-SIAM Symposium on Discrete Algorithms. The ACM Press. Ann Copestake and Dan Flickinger. 2000. An open-source grammar development environment and broad-coverage English grammar using HPSG.
In Conference on Language Resources and Evaluation. Ann Copestake, Dan Flickinger, Rob Malouf, Susanne Riehemann, and Ivan Sag. 1995. Translation using Minimal Recursion Semantics. Leuven. Ann Copestake, Alex Lascarides, and Dan Flickinger. 2001. An algebra for semantic construction in constraint-based grammars. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pages 132–139, Toulouse, France. Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan Sag. 2004. Minimal recursion semantics: An introduction. Journal of Language and Computation. To appear. Ann Copestake. 2002. Implementing Typed Feature Structure Grammars. CSLI Publications, Stanford, CA. Markus Egg, Alexander Koller, and Joachim Niehren. 2001. The Constraint Language for Lambda Structures. Logic, Language, and Information, 10:457–485. K. Mehlhorn and S. Näher. 1999. The LEDA Platform of Combinatorial and Geometric Computing. Cambridge University Press, Cambridge. See also http://www.mpi-sb.mpg.de/LEDA/. Joachim Niehren and Stefan Thater. 2003. Bridging the gap between underspecification formalisms: Minimal recursion semantics as dominance constraints. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Stephan Oepen, Kristina Toutanova, Stuart Shieber, Christopher Manning, Dan Flickinger, and Thorsten Brants. 2002. The LinGO Redwoods treebank: Motivation and preliminary applications. In Proceedings of the 19th International Conference on Computational Linguistics (COLING'02), pages 1253–1257. Manfred Pinkal. 1996. Radical underspecification. In 10th Amsterdam Colloquium, pages 587–606.
Learning with Unlabeled Data for Text Categorization Using Bootstrapping and Feature Projection Techniques Youngjoong Ko Dept. of Computer Science, Sogang Univ. Sinsu-dong 1, Mapo-gu Seoul, 121-742, Korea [email protected] Jungyun Seo Dept. of Computer Science, Sogang Univ. Sinsu-dong 1, Mapo-gu Seoul, 121-742, Korea [email protected] Abstract A wide range of supervised learning algorithms has been applied to Text Categorization. However, the supervised learning approaches have some problems. One of them is that they require a large, often prohibitive, number of labeled training documents for accurate learning. Generally, acquiring class labels for training data is costly, while gathering a large quantity of unlabeled data is cheap. We here propose a new automatic text categorization method for learning from only unlabeled data using a bootstrapping framework and a feature projection technique. From results of our experiments, our method showed reasonably comparable performance compared with a supervised method. If our method is used in a text categorization task, building text categorization systems will become significantly faster and less expensive. 1 Introduction Text categorization is the task of classifying documents into a certain number of pre-defined categories. Many supervised learning algorithms have been applied to this area. These algorithms today are reasonably successful when provided with enough labeled or annotated training examples. For example, there are Naive Bayes (McCallum and Nigam, 1998), Rocchio (Lewis et al., 1996), Nearest Neighbor (kNN) (Yang et al., 2002), TCFP (Ko and Seo, 2002), and Support Vector Machine (SVM) (Joachims, 1998). However, the supervised learning approach has some difficulties. One key difficulty is that it requires a large, often prohibitive, number of labeled training data for accurate learning. Since a labeling task must be done manually, it is a painfully time-consuming process. Furthermore, since the application area of text categorization has diversified from newswire articles and web pages to E-mails and newsgroup postings, it is also a difficult task to create training data for each application area (Nigam et al., 1998). In this light, we consider learning algorithms that do not require such a large amount of labeled data. While labeled data are difficult to obtain, unlabeled data are readily available and plentiful. Therefore, this paper advocates using a bootstrapping framework and a feature projection technique with just unlabeled data for text categorization. The input to the bootstrapping process is a large amount of unlabeled data and a small amount of seed information to tell the learner about the specific task. In this paper, we consider seed information in the form of title words associated with categories. In general, since unlabeled data are much less expensive and easier to collect than labeled data, our method is useful for text categorization tasks including online data sources such as web pages, E-mails, and newsgroup postings. To automatically build up a text classifier with unlabeled data, we must solve two problems; how we can automatically generate labeled training documents (machine-labeled data) from only title words and how we can handle incorrectly labeled documents in the machine-labeled data. This paper provides solutions for these problems. For the first problem, we employ the bootstrapping framework. For the second, we use the TCFP classifier with robustness from noisy data (Ko and Seo, 2004). 
How can labeled training data be automatically created from unlabeled data and title words? Maybe unlabeled data don’t have any information for building a text classifier because they do not contain the most important information, their category. Thus we must assign the class to each document in order to use supervised learning approaches. Since text categorization is a task based on pre-defined categories, we know the categories for classifying documents. Knowing the categories means that we can choose at least a representative title word of each category. This is the starting point of our proposed method. As we carry out a bootstrapping task from these title words, we can finally get labeled training data. Suppose, for example, that we are interested in classifying newsgroup postings about specially ‘Autos’ category. Above all, we can select ‘automobile’ as a title word, and automatically extract keywords (‘car’, ‘gear’, ‘transmission’, ‘sedan’, and so on) using co-occurrence information. In our method, we use context (a sequence of 60 words) as a unit of meaning for bootstrapping from title words; it is generally constructed as a middle size of a sentence and a document. We then extract core contexts that include at least one of the title words and the keywords. We call them centroid-contexts because they are regarded as contexts with the core meaning of each category. From the centroidcontexts, we can gain many words contextually cooccurred with the title words and keywords: ‘driver’, ‘clutch’, ‘trunk’, and so on. They are words in first-order co-occurrence with the title words and the keywords. To gather more vocabulary, we extract contexts that are similar to centroid-contexts by a similarity measure; they contain words in second-order co-occurrence with the title words and the keywords. We finally construct context-cluster of each category as the combination of centroid-contexts and contexts selected by the similarity measure. Using the context-clusters as labeled training data, a Naive Bayes classifier can be built. Since the Naive Bayes classifier can label all unlabeled documents for their category, we can finally obtain labeled training data (machine-labeled data). When the machine-labeled data is used to learn a text classifier, there is another difficult in that they have more incorrectly labeled documents than manually labeled data. Thus we develop and employ the TCFP classifiers with robustness from noisy data. The rest of this paper is organized as follows. Section 2 reviews previous works. In section 3 and 4, we explain the proposed method in detail. Section 5 is devoted to the analysis of the empirical results. The final section describes conclusions and future works. 2 Related Works In general, related approaches for using unlabeled data in text categorization have two directions; One builds classifiers from a combination of labeled and unlabeled data (Nigam, 2001; Bennett and Demiriz, 1999), and the other employs clustering algorithms for text categorization (Slonim et al., 2002). Nigam studied an Expected Maximization (EM) technique for combining labeled and unlabeled data for text categorization in his dissertation. He showed that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training data with a large pool of unlabeled data. Bennet and Demiriz achieved small improvements on some UCI data sets using SVM. 
It seems that SVMs assume that decision boundaries lie between classes in low-density regions of instance space, and the unlabeled examples help find these areas. Slonim suggested clustering techniques for unsupervised document classification. Given a collection of unlabeled data, he attempted to find clusters that are highly correlated with the true topics of documents by unsupervised clustering methods. In his paper, Slonim proposed a new clustering method, the sequential Information Bottleneck (sIB) algorithm.

3 The Bootstrapping Algorithm for Creating Machine-labeled Data

The bootstrapping framework described in this paper consists of the following steps. Each module is described in detail in the following sections.

1. Preprocessing: Contexts are separated from unlabeled documents and content words are extracted from them.
2. Constructing context-clusters for training:
   - Keywords of each category are created
   - Centroid-contexts are extracted and verified
   - Context-clusters are created by a similarity measure
3. Learning a classifier: a Naive Bayes classifier is learned by using the context-clusters

3.1 Preprocessing

The preprocessing module has two main roles: extracting content words and reconstructing the collected documents into contexts. We use the Brill POS tagger to extract content words (Brill, 1995). Generally, the supervised learning approach with labeled data regards a document as a unit of meaning. But since we can use only the title words and unlabeled data, we define a context as a unit of meaning and employ it as the meaning unit to bootstrap the meaning of each category. In our system, we regard a sequence of 60 content words within a document as a context. To extract contexts from a document, we use a sliding window technique (Maarek et al., 1991). The window slides from the first word of the document to the last, with a window size of 60 words and an interval of 30 words between windows. Therefore, the final output of preprocessing is a set of context vectors, each represented by the content words of its context.

3.2 Constructing Context-Clusters for Training

First, we automatically create keywords from a title word for each category using co-occurrence information. Then centroid-contexts are extracted using the title word and keywords; they contain at least one of the title word and keywords. Finally, we can gain more information about each category by assigning the remaining contexts, which do not contain any keywords or title words, to each context-cluster using a similarity measure technique.

3.2.1 Creating Keyword Lists

The starting point of our method is that we have title words and collected documents. A title word can represent the main meaning of each category, but by itself it can be insufficient for representing the category in text categorization. Thus we need to find words that are semantically related to a title word, and we define them as keywords of each category.

The score of semantic similarity between a title word, T, and a word, W, is calculated by the cosine metric as follows:

sim(T, W) = \frac{\sum_{i=1}^{n} t_i \times w_i}{\sqrt{\sum_{i=1}^{n} t_i^2} \times \sqrt{\sum_{i=1}^{n} w_i^2}}   (1)

where t_i and w_i represent the occurrence (binary value: 0 or 1) of words T and W in the i-th document, respectively, and n is the total number of documents in the collected documents. This method calculates the similarity score between words based on the degree of their co-occurrence in the same document.
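For concreteness, the following is a minimal sketch of formula (1). It assumes the collected documents are available as sets of content words (one set per document); this representation, and the function name, are illustrative choices rather than the authors' implementation.

```python
# Cosine similarity of formula (1) over binary occurrence vectors.
import math

def title_word_similarity(title_word, word, docs):
    # docs: list of sets of content words, one set per collected document
    t = [1 if title_word in doc else 0 for doc in docs]
    w = [1 if word in doc else 0 for doc in docs]
    numerator = sum(ti * wi for ti, wi in zip(t, w))
    # for binary values, t_i^2 == t_i, so the norms reduce to sqrt of the sums
    denominator = math.sqrt(sum(t)) * math.sqrt(sum(w))
    return numerator / denominator if denominator > 0 else 0.0
```

Applied between every candidate word and every title word, this routine yields the similarity scores that the next step turns into keyword rankings.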
Since the keywords for text categorization must have the power to discriminate categories as well as similarity with the title words, we assign a word to the keyword list of the category with the maximum similarity score and recalculate the score of the word in that category using the following formula:

Score(W, c_{max}) = sim(T_{max}, W) + (sim(T_{max}, W) - sim(T_{secondmax}, W))   (2)

where T_{max} is the title word with the maximum similarity score with the word W, c_{max} is the category of the title word T_{max}, and T_{secondmax} is the title word with the second highest similarity score with the word W. This formula means that a word ranked high in a category has a high similarity score with the title word of that category and a large similarity-score difference from the other title words. We sort the words assigned to each category by the calculated score in descending order and then choose the top m words as keywords of the category. Table 1 shows the list of keywords (top 5) for each category in the WebKB data set.

Table 1. The list of keywords in the WebKB data set
Category  Title Word  Keywords
course    course      assignments, hours, instructor, class, fall
faculty   professor   associate, ph.d, fax, interests, publications
project   project     system, systems, research, software, information
student   student     graduate, computer, science, page, university

3.2.2 Extracting and Verifying Centroid-Contexts

We choose contexts with a keyword or a title word of a category as centroid-contexts. Among the centroid-contexts, some contexts may not contain good features of a category even though they include keywords of the category. To rank the importance of centroid-contexts, we compute an importance score for each centroid-context. First of all, the weight W_{ij} of word w_i in the j-th category is calculated using the Term Frequency (TF) within a category and the Inverse Category Frequency (ICF) (Cho and Kim, 1997) as follows:

W_{ij} = TF_{ij} \times ICF_i = TF_{ij} \times (\log(M) - \log(CF_i))   (3)

where CF_i is the number of categories that contain w_i and M is the total number of categories. Using the word weights W_{ij} calculated by formula 3, the score of a centroid-context S_k in the j-th category c_j is computed as follows:

Score(S_k, c_j) = \frac{W_{1j} + W_{2j} + \cdots + W_{Nj}}{N}   (4)

where N is the number of words in the centroid-context. As a result, we obtain a set of words in first-order co-occurrence from the centroid-contexts of each category.

3.2.3 Creating Context-Clusters

We gather the second-order co-occurrence information by assigning the remaining contexts to the context-cluster of each category. As the assignment criterion, we calculate the similarity between the remaining contexts and the centroid-contexts of each category. For this we employ the similarity measure technique of Karov and Edelman (1998). In our method, a part of this technique is reformulated for our purpose, and the remaining contexts are assigned to each context-cluster by that revised technique.

1) Measurement of word and context similarities

As similar words tend to appear in similar contexts, we can compute the similarity by using contextual information. Words and contexts play complementary roles: contexts are similar to the extent that they contain similar words, and words are similar to the extent that they appear in similar contexts (Karov and Edelman, 1998). This definition is circular, so it is applied iteratively using two matrices, WSM and CSM. Each category has a word similarity matrix WSM_n and a context similarity matrix CSM_n.
In each iteration n, we update WSM_n, whose rows and columns are labeled by all content words encountered in the centroid-contexts of each category and in the input remaining contexts. In that matrix, the cell (i, j) holds a value between 0 and 1, indicating the extent to which the i-th word is contextually similar to the j-th word. We also keep and update CSM_n, which holds similarities among contexts. The rows of CSM_n correspond to the remaining contexts and the columns to the centroid-contexts. In this paper, the number of input contexts per row and column of CSM is limited to 200, considering execution time and memory allocation, and the number of iterations is set to 3.

To compute the similarities, we initialize WSM_n to the identity matrix. The following steps are iterated until the changes in the similarity values are small enough:

1. Update the context similarity matrix CSM_n, using the word similarity matrix WSM_n.
2. Update the word similarity matrix WSM_n, using the context similarity matrix CSM_n.

2) Affinity formulae

To simplify the symmetric iterative treatment of similarity between words and contexts, we define an auxiliary relation between words and contexts, called affinity. The affinity formulae are defined as follows (Karov and Edelman, 1998):

aff_n(W, X) = \max_{W_i \in X} sim_n(W, W_i)   (5)

aff_n(X, W) = \max_{W \in X_j} sim_n(X, X_j)   (6)

In the above formulae, n denotes the iteration number, and the similarity values are defined by WSM_n and CSM_n. Every word has some affinity to a context, and a context can be represented by a vector indicating the affinity of each word to it.

3) Similarity formulae

The similarity of W_1 to W_2 is the average affinity of the contexts that include W_1 to W_2, and the similarity of a context X_1 to X_2 is a weighted average of the affinity of the words in X_1 to X_2. The similarity formulae are defined as follows:

sim_{n+1}(X_1, X_2) = \sum_{W \in X_1} weight(W, X_1) \cdot aff_n(W, X_2)   (7)

sim_{n+1}(W_1, W_2) = \begin{cases} 1 & \text{if } W_1 = W_2 \\ \sum_{X \ni W_1} weight(X, W_1) \cdot aff_n(X, W_2) & \text{otherwise} \end{cases}   (8)

The weights in formula 7 are computed so as to reflect global frequency, log-likelihood factors, and part of speech, as in (Karov and Edelman, 1998). The weights in formula 8 are the reciprocal of the number of contexts that contain W_1, so that they sum to 1.

4) Assigning remaining contexts to a category

We decide the similarity value of each remaining context to each category using the following method:

sim(X, c_i) = \mathrm{aver}_{S_j \in CC_{c_i}} sim(X, S_j)   (9)

In formula 9, i) X is a remaining context, ii) C = \{c_1, c_2, \ldots, c_m\} is the category set, and iii) CC_{c_i} = \{S_1, \ldots, S_n\} is the centroid-context set of category c_i. Each remaining context is assigned to the category with the maximum similarity value. But there may exist noisy remaining contexts which do not belong to any category. To remove these noisy remaining contexts, we set up a dropping threshold using a normal distribution of similarity values as follows (Ko and Seo, 2000):

\max_{c_i \in C} \{ sim(X, c_i) \} \geq \mu + \theta\sigma   (10)

where i) X is a remaining context, ii) µ is the average of the similarity values sim(X, c_i), c_i ∈ C, iii) σ is their standard deviation, and iv) θ is the numerical value corresponding to the threshold (%) in the normal distribution table. Finally, a remaining context is assigned to the context-cluster of a category only when that category has the maximum similarity and the value is above the dropping threshold.
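A hedged sketch of this assignment step (formulas 9 and 10) is given below. Here `sim` stands for the converged context similarity from the iteration above, and `theta` for the threshold parameter; the exact collection of similarity values over which µ and σ are computed is not fully specified, so taking them over the per-category scores of the context at hand is an assumption made only for illustration.

```python
# Assigning a remaining context x to a context-cluster (formulas 9 and 10).
import statistics

def assign_remaining_context(x, centroid_contexts, sim, theta):
    # formula (9): average similarity of x to each category's centroid-contexts
    scores = {c: statistics.mean(sim(x, s) for s in contexts)
              for c, contexts in centroid_contexts.items()}
    # formula (10): dropping threshold mu + theta*sigma; mu and sigma are taken
    # here over the per-category scores of x (an assumption, see above)
    values = list(scores.values())
    mu, sigma = statistics.mean(values), statistics.pstdev(values)
    best = max(scores, key=scores.get)
    if scores[best] >= mu + theta * sigma:
        return best
    return None  # treated as a noisy context and dropped
```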
In this paper, we empirically use a 15% threshold value, determined from an experiment on a validation set.

3.3 Learning the Naive Bayes Classifier Using Context-Clusters

In the above section, we obtained labeled training data: the context-clusters. Since the training data are labeled at the context unit, we employ a Naive Bayes classifier, because it can be built by estimating word probabilities in a category rather than in a document. That is, unlike other classifiers, the Naive Bayes classifier does not require data labeled at the document unit. We use the Naive Bayes classifier with minor modifications based on Kullback-Leibler Divergence (Craven et al., 2000). We classify a document d_i according to the following formula:

P(c_j | d_i; \hat\theta) = \frac{P(c_j | \hat\theta) P(d_i | c_j; \hat\theta)}{P(d_i | \hat\theta)} \approx P(c_j | \hat\theta) \prod_{t=1}^{|V|} P(w_t | c_j; \hat\theta)^{N(w_t, d_i)} \propto \frac{\log P(c_j | \hat\theta)}{n} + \sum_{t=1}^{|V|} P(w_t | d_i; \hat\theta) \log \frac{P(w_t | c_j; \hat\theta)}{P(w_t | d_i; \hat\theta)}   (11)

where i) n is the number of words in document d_i, ii) w_t is the t-th word in the vocabulary V, and iii) N(w_t, d_i) is the frequency of word w_t in document d_i, so that P(w_t | d_i; \hat\theta) = N(w_t, d_i)/n. Here, Laplace smoothing is used to estimate the probability of word w_t in class c_j and the probability of class c_j as follows:

P(w_t | c_j; \hat\theta) = \frac{1 + N(w_t, G_{c_j})}{|V| + \sum_{t=1}^{|V|} N(w_t, G_{c_j})}   (12)

P(c_j | \hat\theta) = \frac{1 + |G_{c_j}|}{|C| + \sum_i |G_{c_i}|}   (13)

where N(w_t, G_{c_j}) is the count of the number of times word w_t occurs in the context-cluster G_{c_j} of category c_j.

4 Using a Feature Projection Technique for Handling Noisy Data of Machine-labeled Data

We finally obtain labeled data at the document unit: the machine-labeled data. Now we can learn text classifiers using them. But since the machine-labeled data are created by our method, they generally include far more incorrectly labeled documents than human-labeled data. Thus we employ a feature projection technique for our method. By the property of the feature projection technique, a classifier (the TCFP classifier) can be robust to noisy data (Ko and Seo, 2004). As seen in our experimental results, TCFP showed the highest performance among conventional classifiers when using machine-labeled data.

The TCFP classifier with robustness to noisy data

Here, we briefly describe the TCFP classifier using the feature projection technique (Ko and Seo, 2002; 2004). In this approach, the classification knowledge is represented as sets of projections of the training data on each feature dimension. The classification of a test document is based on the voting of each feature of that test document. That is, the final prediction score is calculated by accumulating the voting scores of all features. First of all, we must calculate the voting ratio of each category for all features. Since elements with a high TF-IDF value in the projections of a feature make more useful classification criteria for that feature, we use only elements with TF-IDF values above the average TF-IDF value for voting. The selected elements participate in proportional voting with an importance equal to the TF-IDF value of each element.
The voting ratio of each category c_j for a feature t_m is calculated by the following formula:

r(c_j, t_m) = \frac{\sum_{l \in I_m} y(c_j, t_m(l)) \cdot w(t_m(l), \vec{d})}{\sum_{l \in I_m} w(t_m(l), \vec{d})}   (14)

In formula 14, w(t_m, \vec{d}) is the weight of term t_m in document \vec{d}, I_m denotes the set of elements selected for voting, and y(c_j, t_m(l)) \in \{0, 1\} is a function whose value is 1 if the category of element t_m(l) is equal to c_j, and 0 otherwise.

Next, since each feature votes separately on the feature projections, contextual information is missing. Thus we calculate the co-occurrence frequency of features in the training data and modify the TF-IDF values of two terms t_i and t_j in a test document by the co-occurrence frequency between them; terms with a high co-occurrence frequency value obtain higher term weights. Finally, the voting score of each category c_j for the m-th feature t_m of a test document \vec{d} is calculated by the following formula:

vs(c_j, t_m) = tw(t_m, \vec{d}) \cdot r(c_j, t_m) \cdot \log(1 + \chi^2(t_m))   (15)

where tw(t_m, \vec{d}) denotes the term weight modified by the co-occurrence frequency and \chi^2(t_m) denotes the calculated \chi^2 statistic of t_m.

The outline of the TCFP classifier is as follows:

1. Input: a test document \vec{d} = <t_1, t_2, ..., t_n>
2. Main process:
   For each feature t_i, calculate tw(t_i, \vec{d}).
   For each feature t_i and each category c_j, vote[c_j] = vote[c_j] + vs(c_j, t_i) by formula 15.
   prediction = argmax_{c_j} vote[c_j]

Table 2. The top micro-avg F1 scores and precision-recall breakeven points of each method.
Data set     OurMethod(basis)  OurMethod(NB)  OurMethod(Rocchio)  OurMethod(kNN)  OurMethod(SVM)  OurMethod(TCFP)
Newsgroups   79.36             83.46          83                  79.95           82.49           86.19
WebKB        73.63             73.22          75.28               68.04           73.74           75.47
Reuters      88.62             88.23          86.26               85.65           87.41           89.09

5 Empirical Evaluation

5.1 Data Sets and Experimental Settings

To test our method, we used three different kinds of data sets: UseNet newsgroups (20 Newsgroups), web pages (WebKB), and newswire articles (Reuters 21578). For fair evaluation on Newsgroups and WebKB, we employed five-fold cross-validation. The Newsgroups data set, collected by Ken Lang, contains about 20,000 articles evenly divided among 20 UseNet discussion groups (McCallum and Nigam, 1998). In this paper, we used only 16 categories after removing 4 categories: three miscellaneous categories (talk.politics.misc, talk.religion.misc, and comp.os.ms-windows.misc) and one category with duplicated meaning (comp.sys.ibm.pc.hardware). The second data set comes from the WebKB project at CMU (Craven et al., 2000). This data set contains web pages gathered from university computer science departments. The Reuters 21578 Distribution 1.0 data set consists of 12,902 articles and 90 topic categories from the Reuters newswire. Like the study in (Nigam, 2001), we used the ten most populous categories to identify the news topic. About 25% of the documents from the training data of each data set are selected as a validation set. We applied a statistical feature selection method (\chi^2 statistics) at the preprocessing stage of each classifier (Yang and Pedersen, 1997). As performance measures, we followed the standard definitions of recall, precision, and the F1 measure. To evaluate performance averaged across categories, we used the micro-averaging method (Yang et al., 2002). Results on Reuters are reported as precision-recall breakeven points, which is a standard information retrieval measure for binary classification (Joachims, 1998).
Title words in our experiment are selected according to the category names of each data set (see Table 1 as an example).

5.2 Experimental Results

5.2.1 Observing the Performance According to the Number of Keywords

First of all, we determine the number of keywords in our method using the validation set. The number of keywords is limited to the top m keywords from the ordered list of each category. Figure 1 displays the performance at different numbers of keywords (from 0 to 20) on each data set.

Figure 1. The comparison of performance according to the number of keywords (micro-avg. F1 for 0 to 20 keywords on Newsgroups, WebKB, and Reuters)

We set the number of keywords to 2 for Newsgroups, 5 for WebKB, and 3 for Reuters empirically. Generally, we recommend that the number of keywords be between 2 and 5.

5.2.2 Comparing our Method Using TCFP with those Using other Classifiers

In this section, we show the superiority of TCFP over the other classifiers (SVM, kNN, Naive Bayes (NB), Rocchio) on noisy training data such as machine-labeled data. As shown in Table 2, we obtained the best performance using TCFP on all three data sets. Let us define the notation: OurMethod(basis) denotes the Naive Bayes classifier using labeled contexts, and OurMethod(NB) denotes the Naive Bayes classifier using machine-labeled data as training data. The same naming applies to the other classifiers. OurMethod(TCFP) achieved higher scores than OurMethod(basis): by 6.83 on Newsgroups, 1.84 on WebKB, and 0.47 on Reuters.

5.2.3 Comparing with the Supervised Naive Bayes Classifier

For this experiment, we consider two possible cases for the labeling task. The first is to label a part of the collected documents and the second is to label all of them. For the first case, we built a new training data set consisting of 500 different documents randomly chosen from the appropriate categories, as in the experiment of (Slonim et al., 2002). As a result, we report the performance of two Naive Bayes classifiers, learned from the 500 training documents and from the whole training documents, respectively.

Table 3. The comparison of our method and the supervised NB classifier
Data set     OurMethod(TCFP)  NB(500)  NB(All)
Newsgroups   86.19            72.68    91.72
WebKB        75.47            74.1     85.29
Reuters      89.09            82.1     91.64

In Table 3, the results of our method are higher than those of NB(500) and are comparable to those of NB(All) on all data sets. In particular, the result on Reuters comes within 2.55 points of NB(All), even though NB(All) uses the whole labeled training data.

5.2.4 Enhancing our Method by Choosing Keywords Manually

The main problem of our method is that the performance depends on the quality of the keywords and title words. As we have seen in Table 3, we obtained the worst performance on the WebKB data set. In fact, the title words and keywords of each category in the WebKB data set also occur with high frequency in other categories. We think these factors contribute to the comparatively poor performance of our method there. If keywords as well as title words are supplied by humans, our method may achieve higher performance. However, choosing proper keywords for each category is a very difficult task. Moreover, keywords from developers who have insufficient knowledge about an application domain do not guarantee high performance.
In order to overcome this problem, we propose a hybrid method for choosing keywords. That is, a developer obtains 10 candidate keywords from our keyword extraction method and then they can choose proper keywords from them. Table 4 shows the results from three data sets. Table 4. The comparison of our method and enhancing method OurMethod (TCFP) Enhancing (TCFP)) Improvement Newsgroups 86.19 86.23 +0.04 WebKB 75.47 77.59 +2.12 Reuters 89.09 89.52 +0.43 As shown in Table 4, especially we could achieve significant improvement in the WebKb data set. Thus we find that the new method for choosing keywords is more useful in a domain with confused keywords between categories such as the WebKB data set. 5.2.5 Comparing with a Clustering Technique In related works, we presented two approaches using unlabeled data in text categorization; one approach combines unlabeled data and labeled data, and the other approach uses the clustering technique for text categorization. Since our method does not use any labeled data, it cannot be fairly compared with the former approaches. Therefore, we compare our method with a clustering technique. Slonim et al. (2002) proposed a new clustering algorithm (sIB) for unsupervised document classification and verified the superiority of his algorithm. In his experiments, the sIB algorithm was superior to other clustering algorithms. As we set the same experimental settings as in Slonim’s experiments and conduct experiments, we verify that our method outperforms ths sIB algorithm. In our experiments, we used the micro-averaging precision as performance measure and two revised data sets: revised_NG, revised_Reuters. These data sets were revised in the same way according to Slonim’s paper as follows: In revised_NG, the categories of Newsgroups were united with respect to 10 meta-categories: five comp categories, three politics categories, two sports categories, three religions categories, and two transportation categories into five big metacategories. The revised_Reuters used the 10 most frequent categories in the Reuters 21578 corpus under the ModApte split. As shown in Table 5, our method shows 6.65 advanced score in revised_NG and 3.2 advanced score in revised_Reuters. Table 5. The comparison of our method and sIB sIB OurMethod (TCFP) Improvement revised_NG 79.5 86.15 +6.65 revised_Reuters 85.8 89 +3.2 6 Conclusions and Future Works This paper has addressed a new unsupervised or semi-unsupervised text categorization method. Though our method uses only title words and unlabeled data, it shows reasonably comparable performance in comparison with that of the supervised Naive Bayes classifier. Moreover, it outperforms a clustering method, sIB. Labeled data are expensive while unlabeled data are inexpensive and plentiful. Therefore, our method is useful for low-cost text categorization. Furthermore, if some text categorization tasks require high accuracy, our method can be used as an assistant tool for easily creating labeled training data. Since our method depends on title words and keywords, we need additional studies about the characteristics of candidate words for title words and keywords according to each data set. Acknowledgement This work was supported by grant No. R01-2003000-11588-0 from the basic Research Program of the KOSEF References K. Bennett and A. Demiriz, 1999, Semi-supervised Support Vector Machines, Advances in Neural Information Processing Systems 11, pp. 368-374. E. 
Brill, 1995, Transformation-Based Error-driven Learning and Natural Language Processing: A Case Study in Part of Speech Tagging, Computational Linguistics, Vol.21, No. 4. K. Cho and J. Kim, 1997, Automatic Text Categorization on Hierarchical Category Structure by using ICF (Inverse Category Frequency) Weighting, In Proc. of KISS conference, pp. 507-510. M. Craven, D. DiPasquo, D. Freitag, A. McCallum, T. Mitchell, K. Nigam, and S. Slattery, 2000, Learning to construct knowledge bases from the World Wide Web, Artificial Intelligence, 118(1-2), pp. 69-113. T. Joachims, 1998, Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In Proc. of ECML, pp. 137-142. Y. Karov and S. Edelman, 1998, Similarity-based Word Sense Disambiguation, Computational Linguistics, Vol. 24, No. 1, pp. 41-60. Y. Ko and J. Seo, 2000, Automatic Text Categorization by Unsupervised Learning, In Proc. of COLING’2000, pp. 453-459. Y. Ko and J. Seo, 2002, Text Categorization using Feature Projections, In Proc. of COLING’2002, pp. 467-473. Y. Ko and J. Seo, 2004, Using the Feature Projection Technique based on the Normalized Voting Method for Text Classification, Information Processing and Management, Vol. 40, No. 2, pp. 191-208. D.D. Lewis, R.E. Schapire, J.P. Callan, and R. Papka, 1996, Training Algorithms for Linear Text Classifiers. In Proc. of SIGIR’96, pp.289-297. Y. Maarek, D. Berry, and G. Kaiser, 1991, An Information Retrieval Approach for Automatically Construction Software Libraries, IEEE Transaction on Software Engineering, Vol. 17, No. 8, pp. 800813. A. McCallum and K. Nigam, 1998, A Comparison of Event Models for Naive Bayes Text Classification. AAAI ’98 workshop on Learning for Text Categorization, pp. 41-48. K. P. Nigam, A. McCallum, S. Thrun, and T. Mitchell, 1998, Learning to Classify Text from Labeled and Unlabeled Documents, In Proc. of AAAI-98. K. P. Nigam, 2001, Using Unlabeled Data to Improve Text Classification, The dissertation for the degree of Doctor of Philosophy. N. Slonim, N. Friedman, and N. Tishby, 2002, Unsupervised Document Classification using Sequential Information Maximization, In Proc. of SIGIR’02, pp. 129-136. Y. Yang and J. P. Pedersen. 1997, Feature selection in statistical leaning of text categorization. In Proc. of ICML’97, pp. 412-420. Y. Yang, S. Slattery, and R. Ghani. 2002, A study of approaches to hypertext categorization, Journal of Intelligent Information Systems, Vol. 18, No. 2. | 2004 | 33 |
The Sentimental Factor: Improving Review Classification via Human-Provided Information Philip Beineke∗and Trevor Hastie Dept. of Statistics Stanford University Stanford, CA 94305 Shivakumar Vaithyanathan IBM Almaden Research Center 650 Harry Rd. San Jose, CA 95120-6099 Abstract Sentiment classification is the task of labeling a review document according to the polarity of its prevailing opinion (favorable or unfavorable). In approaching this problem, a model builder often has three sources of information available: a small collection of labeled documents, a large collection of unlabeled documents, and human understanding of language. Ideally, a learning method will utilize all three sources. To accomplish this goal, we generalize an existing procedure that uses the latter two. We extend this procedure by re-interpreting it as a Naive Bayes model for document sentiment. Viewed as such, it can also be seen to extract a pair of derived features that are linearly combined to predict sentiment. This perspective allows us to improve upon previous methods, primarily through two strategies: incorporating additional derived features into the model and, where possible, using labeled data to estimate their relative influence. 1 Introduction Text documents are available in ever-increasing numbers, making automated techniques for information extraction increasingly useful. Traditionally, most research effort has been directed towards “objective” information, such as classification according to topic; however, interest is growing in producing information about the opinions that a document contains; for instance, Morinaga et al. (2002). In March, 2004, the American Association for Artificial Intelligence held a symposium in this area, entitled “Exploring Affect and Attitude in Text.” One task in opinion extraction is to label a review document d according to its prevailing sentiment s ∈{−1, 1} (unfavorable or favorable). Several previous papers have addressed this problem by building models that rely exclusively upon labeled documents, e.g. Pang et al. (2002), Dave et al. (2003). By learning models from labeled data, one can apply familiar, powerful techniques directly; however, in practice it may be difficult to obtain enough labeled reviews to learn model parameters accurately. A contrasting approach (Turney, 2002) relies only upon documents whose labels are unknown. This makes it possible to use a large underlying corpus – in this case, the entire Internet as seen through the AltaVista search engine. As a result, estimates for model parameters are subject to a relatively small amount of random variation. The corresponding drawback to such an approach is that its predictions are not validated on actual documents. In machine learning, it has often been effective to use labeled and unlabeled examples in tandem, e.g. Nigam et al. (2000). Turney’s model introduces the further consideration of incorporating human-provided knowledge about language. In this paper we build models that utilize all three sources: labeled documents, unlabeled documents, and human-provided information. The basic concept behind Turney’s model is quite simple. The “sentiment orientation” (Hatzivassiloglou and McKeown, 1997) of a pair of words is taken to be known. These words serve as “anchors” for positive and negative sentiment. Words that co-occur more frequently with one anchor than the other are themselves taken to be predictive of sentiment. 
As a result, information about a pair of words is generalized to many words, and then to documents. In the following section, we relate this model to Naive Bayes classification, showing that Turney's classifier is a "pseudo-supervised" approach: it effectively generates a new corpus of labeled documents, upon which it fits a Naive Bayes classifier. This insight allows the procedure to be represented as a probability model that is linear on the logistic scale, which in turn suggests generalizations that are developed in subsequent sections.

2 A Logistic Model for Sentiment

2.1 Turney's Sentiment Classifier

In Turney's model, the "sentiment orientation" σ of word w is estimated as follows:

\hat\sigma(w) = \log \frac{N_{(w,\text{excellent})} / N_{\text{excellent}}}{N_{(w,\text{poor})} / N_{\text{poor}}}   (1)

Here, N_a is the total number of sites on the Internet that contain an occurrence of a, a feature that can be a word type or a phrase. N_{(w,a)} is the number of sites in which features w and a appear "near" each other, i.e., in the same passage of text, within a span of ten words. Both numbers are obtained from the hit count that results from a query of the AltaVista search engine. The rationale for this estimate is that words that express similar sentiment often co-occur, while words that express conflicting sentiment co-occur more rarely. Thus, a word that co-occurs more frequently with excellent than poor is estimated to have a positive sentiment orientation.

To extrapolate from words to documents, the estimated sentiment \hat{s} \in \{-1, 1\} of a review document d is the sign of the average sentiment orientation of its constituent features.[1] To represent this estimate formally, we introduce the following notation: W is a "dictionary" of features (w_1, \ldots, w_p). Each feature's respective sentiment orientation is represented as an entry in the vector \hat\sigma of length p:

\hat\sigma_j = \hat\sigma(w_j)   (2)

Given a collection of n review documents, the i-th document d_i is also represented as a vector of length p, with d_{ij} equal to the number of times that feature w_j occurs in d_i. The length of a document is its total number of features, |d_i| = \sum_{j=1}^{p} d_{ij}.

Turney's classifier for the i-th document's sentiment s_i can now be written:

\hat{s}_i = \mathrm{sign}\left( \frac{\sum_{j=1}^{p} \hat\sigma_j d_{ij}}{|d_i|} \right)   (3)

Using a carefully chosen collection of features, this classifier produces correct results on 65.8% of a collection of 120 movie reviews, where 60 are labeled positive and 60 negative. Although this is not a particularly encouraging result, movie reviews tend to be a difficult domain. Accuracy on sentiment classification in other domains exceeds 80% (Turney, 2002).

[1] Note that not all words or phrases need to be considered as features. In Turney (2002), features are selected according to part-of-speech labels.

2.2 Naive Bayes Classification

Bayes' Theorem provides a convenient framework for predicting a binary response s \in \{-1, 1\} from a feature vector x:

\Pr(s = 1 | x) = \frac{\Pr(x | s = 1)\pi_1}{\sum_{k \in \{-1, 1\}} \Pr(x | s = k)\pi_k}   (4)

For a labeled sample of data (x_i, s_i), i = 1, \ldots, n, a class's marginal probability \pi_k can be estimated trivially as the proportion of training samples belonging to the class. Thus the critical aspect of classification by Bayes' Theorem is to estimate the conditional distribution of x given s. Naive Bayes simplifies this problem by making a "naive" assumption: within a class, the different feature values are taken to be independent of one another,

\Pr(x | s) = \prod_j \Pr(x_j | s)   (5)

As a result, the estimation problem is reduced to univariate distributions.
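Before turning to the multinomial case, a small sketch may help fix ideas about the unsupervised classifier of Section 2.1 (Equations 1 and 3). The `hits` mapping stands in for the AltaVista hit counts; its form, and the omission of any handling of zero counts, are simplifying assumptions rather than the authors' implementation.

```python
# Turney-style classification from hit counts (Equations 1 and 3).
import math

def sentiment_orientation(word, hits):
    # Equation 1: log ratio of co-occurrence rates with the two anchors.
    # hits[w] is a single-term hit count; hits[(w, a)] a NEAR co-occurrence count.
    return math.log((hits[(word, "excellent")] / hits["excellent"]) /
                    (hits[(word, "poor")] / hits["poor"]))

def classify_review(counts, hits):
    # Equation 3: sign of the average sentiment orientation of the document's
    # features, where counts maps each feature to its frequency in the document
    total = sum(c * sentiment_orientation(w, hits) for w, c in counts.items())
    length = sum(counts.values())
    return 1 if total / length > 0 else -1
```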
• Naive Bayes for a Multinomial Distribution

We consider a "bag of words" model for a document that belongs to class k, where features are assumed to result from a sequence of |d_i| independent multinomial draws with outcome probability vector q_k = (q_{k1}, \ldots, q_{kp}). Given a collection of documents with labels, (d_i, s_i), i = 1, \ldots, n, a natural estimate for q_{kj} is the fraction of all features in documents of class k that equal w_j:

\hat{q}_{kj} = \frac{\sum_{i: s_i = k} d_{ij}}{\sum_{i: s_i = k} |d_i|}   (6)

In the two-class case, the logit transformation provides a revealing representation of the class posterior probabilities of the Naive Bayes model:

\widehat{\mathrm{logit}}(s|d) \triangleq \log \frac{\widehat{\Pr}(s = 1 | d)}{\widehat{\Pr}(s = -1 | d)}   (7)
= \log \frac{\hat\pi_1}{\hat\pi_{-1}} + \sum_{j=1}^{p} d_j \log \frac{\hat{q}_{1j}}{\hat{q}_{-1j}}   (8)
= \hat\alpha_0 + \sum_{j=1}^{p} d_j \hat\alpha_j   (9)

where

\hat\alpha_0 = \log \frac{\hat\pi_1}{\hat\pi_{-1}}   (10)
\hat\alpha_j = \log \frac{\hat{q}_{1j}}{\hat{q}_{-1j}}   (11)

Observe that the estimate for the logit in Equation 9 has a simple structure: it is a linear function of d. Models that take this form are commonplace in classification.

2.3 Turney's Classifier as Naive Bayes

Although Naive Bayes classification requires a labeled corpus of documents, we show in this section that Turney's approach corresponds to a Naive Bayes model. The necessary documents and their corresponding labels are built from the spans of text that surround the anchor words excellent and poor. More formally, a labeled corpus may be produced by the following procedure:

1. For a particular anchor a_k, locate all of the sites on the Internet where it occurs.
2. From all of the pages within a site, gather the features that occur within ten words of an occurrence of a_k, with any particular feature included at most once. This list comprises a new "document," representing that site.[2]
3. Label this document +1 if a_k = excellent, -1 if a_k = poor.

When a Naive Bayes model is fit to the corpus described above, it results in a vector \hat\alpha of length p, consisting of coefficient estimates for all features. In Propositions 1 and 2 below, we show that Turney's estimates of sentiment orientation \hat\sigma are closely related to \hat\alpha, and that both estimates produce identical classifiers.

Proposition 1

\hat\alpha = C_1 \hat\sigma   (12)

where

C_1 = \frac{N_{\text{exc.}} / \sum_{i: s_i = 1} |d_i|}{N_{\text{poor}} / \sum_{i: s_i = -1} |d_i|}   (13)

Proof: Because a feature is restricted to at most one occurrence in a document,

\sum_{i: s_i = k} d_{ij} = N_{(w, a_k)}   (14)

Then from Equations 6 and 11:

\hat\alpha_j = \log \frac{\hat{q}_{1j}}{\hat{q}_{-1j}}   (15)
= \log \frac{N_{(w,\text{exc.})} / \sum_{i: s_i = 1} |d_i|}{N_{(w,\text{poor})} / \sum_{i: s_i = -1} |d_i|}   (16)
= C_1 \hat\sigma_j   (17)

□

[2] If both anchors occur on a site, then there will actually be two documents, one for each sentiment.

Proposition 2 Turney's classifier is identical to a Naive Bayes classifier fit on this corpus, with \pi_1 = \pi_{-1} = 0.5.

Proof: A Naive Bayes classifier typically assigns an observation to its most probable class. This is equivalent to classifying according to the sign of the estimated logit. So for any document, we must show that both the logit estimate and the average sentiment orientation are identical in sign. When \pi_1 = 0.5, \alpha_0 = 0. Thus the estimated logit is

\widehat{\mathrm{logit}}(s|d) = \sum_{j=1}^{p} \hat\alpha_j d_j   (18)
= C_1 \sum_{j=1}^{p} \hat\sigma_j d_j   (19)

This is a positive multiple of Turney's classifier (Equation 3), so they clearly match in sign. □

3 A More Versatile Model

3.1 Desired Extensions

By understanding Turney's model within a Naive Bayes framework, we are able to interpret its output as a probability model for document classes. In the presence of labeled examples, this insight also makes it possible to estimate the intercept term \alpha_0. Further, we are able to view this model as a member of a broad class: linear estimates for the logit.
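To make the linear-logit view concrete, here is a small illustrative sketch of Equations 9-11 combined with Proposition 1. The variable names are ours, and `c1` stands for the constant C1, so this is an illustration of the correspondence rather than the authors' code.

```python
# Logit estimate of Equation 9 with alpha_j = C1 * sigma_j (Proposition 1).
import math

def logit_estimate(d, sigma_hat, c1, pi_pos=0.5):
    alpha0 = math.log(pi_pos / (1.0 - pi_pos))   # Equation 10; zero when pi = 0.5
    alpha = [c1 * s for s in sigma_hat]          # Proposition 1
    return alpha0 + sum(dj * aj for dj, aj in zip(d, alpha))   # Equation 9

def naive_bayes_classify(d, sigma_hat, c1):
    # With pi_1 = pi_-1 = 0.5 this is exactly the sign of Turney's average
    # sentiment orientation (Proposition 2)
    return 1 if logit_estimate(d, sigma_hat, c1) > 0 else -1
```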
This understanding facilitates further extensions, in particular, utilizing the following: 1. Labeled documents 2. More anchor words The reason for using labeled documents is straightforward; labels offer validation for any chosen model. Using additional anchors is desirable in part because it is inexpensive to produce lists of words that are believed to reflect positive sentiment, perhaps by reference to a thesaurus. In addition, a single anchor may be at once too general and too specific. An anchor may be too general in the sense that many common words have multiple meanings, and not all of them reflect a chosen sentiment orientation. For example, poor can refer to an objective economic state that does not necessarily express negative sentiment. As a result, a word such as income appears 4.18 times as frequently with poor as excellent, even though it does not convey negative sentiment. Similarly, excellent has a technical meaning in antiquity trading, which causes it to appear 3.34 times as frequently with furniture. An anchor may also be too specific, in the sense that there are a variety of different ways to express sentiment, and a single anchor may not capture them all. So a word like pretentious carries a strong negative sentiment but co-occurs only slightly more frequently (1.23 times) with excellent than poor. Likewise, fascination generally reflects a positive sentiment, yet it appears slightly more frequently (1.06 times) with poor than excellent. 3.2 Other Sources of Unlabeled Data The use of additional anchors has a drawback in terms of being resource-intensive. A feature set may contain many words and phrases, and each of them requires a separate AltaVista query for every chosen anchor word. In the case of 30,000 features and ten queries per minute, downloads for a single anchor word require over two days of data collection. An alternative approach is to access a large collection of documents directly. Then all cooccurrences can be counted in a single pass. Although this approach dramatically reduces the amount of data available, it does offer several advantages. • Increased Query Options Search engine queries of the form phrase NEAR anchor may not produce all of the desired cooccurrence counts. For instance, one may wish to run queries that use stemmed words, hyphenated words, or punctuation marks. One may also wish to modify the definition of NEAR, or to count individual co-occurrences, rather than counting sites that contain at least one co-occurrence. • Topic Matching Across the Internet as a whole, features may not exhibit the same correlation structure as they do within a specific domain. By restricting attention to documents within a domain, one may hope to avoid cooccurrences that are primarily relevant to other subjects. • Reproducibility On a fixed corpus, counts of word occurrences produce consistent results. Due to the dynamic nature of the Internet, numbers may fluctuate. 3.3 Co-Occurrences and Derived Features The Naive Bayes coefficient estimate ˆαj may itself be interpreted as an intercept term plus a linear combination of features of the form log N(wj,ak). Num. of Labeled Occurrences Correlation 1 - 5 0.022 6 - 10 0.082 11 - 25 0.113 26 - 50 0.183 51 - 75 0.283 76 - 100 0.316 Figure 1: Correlation between Supervised and Unsupervised Coefficient Estimates ˆαj = log N(j,exc.)/ P i:si=1 |di| N(j,pr.)/ P i:si=−1 |di| (20) = log C1 + log N(j,exc.) −log N(j,pr.) 
(21)

We generalize this estimate as follows: for a collection of K different anchor words, we consider a general linear combination of logged co-occurrence counts:

\hat\alpha_j = \sum_{k=1}^{K} \gamma_k \log N_{(w_j, a_k)}   (22)

In the special case of a Naive Bayes model, \gamma_k = 1 when the k-th anchor word a_k conveys positive sentiment, and -1 when it conveys negative sentiment. Replacing the logit estimate in Equation 9 with an estimate of this form, the model becomes:

\widehat{\mathrm{logit}}(s|d) = \hat\alpha_0 + \sum_{j=1}^{p} d_j \hat\alpha_j   (23)
= \hat\alpha_0 + \sum_{j=1}^{p} \sum_{k=1}^{K} d_j \gamma_k \log N_{(w_j, a_k)}   (24)
= \gamma_0 + \sum_{k=1}^{K} \gamma_k \sum_{j=1}^{p} d_j \log N_{(w_j, a_k)}   (25)

This model has only K + 1 parameters: \gamma_0, \gamma_1, \ldots, \gamma_K. These can be learned straightforwardly from labeled documents by a method such as logistic regression. Observe that a document receives a score \sum_{j=1}^{p} d_j \log N_{(w_j, a_k)} for each anchor word. Effectively, the predictor variables in this model are no longer counts of the original features d_j. Rather, they are inner products between the entire feature vector d and the logged co-occurrence vector N_{(w, a_k)}. In this respect, the vector of logged co-occurrences is used to produce a derived feature.

Figure 2: Unsupervised versus Supervised Coefficient Estimates (scatterplot of Turney Naive Bayes coefficients against traditional Naive Bayes coefficients)

4 Data Analysis

4.1 Accuracy of Unsupervised Coefficients

By means of a Perl script that uses the Lynx browser, Version 2.8.3rel.1, we download AltaVista hit counts for queries of the form "target NEAR anchor." The initial list of targets consists of 44,321 word types extracted from the Pang corpus of 1400 labeled movie reviews. After preprocessing, this number is reduced to 28,629.[3]

In Figure 1, we compare estimates produced by two Naive Bayes procedures. For each feature w_j, we estimate \alpha_j by using Turney's procedure, and by fitting a traditional Naive Bayes model to the labeled documents. The traditional estimates are smoothed by assuming a Beta prior distribution that is equivalent to having four previous observations of w_j in documents of each class:

\frac{\hat{q}_{1j}}{\hat{q}_{-1j}} = C_2 \frac{4 + \sum_{i: s_i = 1} d_{ij}}{4 + \sum_{i: s_i = -1} d_{ij}}   (27)

where

C_2 = \frac{4p + \sum_{i: s_i = 1} |d_i|}{4p + \sum_{i: s_i = -1} |d_i|}   (28)

Here, d_{ij} is used to indicate feature presence:

d_{ij} = \begin{cases} 1 & \text{if } w_j \text{ appears in } d_i \\ 0 & \text{otherwise} \end{cases}   (29)

[3] We eliminate extremely rare words by requiring each target to co-occur at least once with each anchor. In addition, certain types, such as words containing hyphens, apostrophes, or other punctuation marks, do not appear to produce valid counts, so they are discarded.

Figure 3: Selected Anchor Words
Positive: best, brilliant, excellent, spectacular, wonderful
Negative: awful, bad, pathetic, poor, worst

We choose this fitting procedure among several candidates because it performs well in classifying test documents. In Figure 1, each entry in the right-hand column is the observed correlation between these two estimates over a subset of features. For features that occur in five documents or fewer, the correlation is very weak (0.022). This is not surprising, as it is difficult to estimate a coefficient from such a small number of labeled examples. Correlations are stronger for more common features, but never strong. As a baseline for comparison, Naive Bayes coefficients can be estimated using a subset of their labeled occurrences. With two independent sets of 51-75 occurrences, Naive Bayes coefficient estimates had a correlation of 0.475.
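As an illustration of the derived features in Equation 25, the sketch below reduces a document to K anchor scores and fits the K+1 weights γ by logistic regression on labeled documents. The use of scikit-learn with plain L2 regularization is an assumption made for illustration; in particular, it does not implement the synonym/antonym regularization introduced later in Section 4.4 (Equations 30 and 31). Features with no recorded co-occurrence for an anchor are simply skipped, echoing the requirement in footnote 3 that each target co-occur at least once with each anchor.

```python
# Derived anchor scores (Equation 25) and a logistic-regression fit for gamma.
import math
from sklearn.linear_model import LogisticRegression

def derived_features(doc_counts, cooccurrence, anchors):
    # doc_counts: {feature: count in the document}
    # cooccurrence: {(feature, anchor): co-occurrence count N_(w, a)}
    return [sum(count * math.log(cooccurrence[(w, a)])
                for w, count in doc_counts.items() if (w, a) in cooccurrence)
            for a in anchors]

def fit_gamma(docs, labels, cooccurrence, anchors, reg_strength=1.0):
    # one K-dimensional derived-feature vector per labeled document
    X = [derived_features(d, cooccurrence, anchors) for d in docs]
    return LogisticRegression(C=reg_strength).fit(X, labels)
```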
Figure 2 is a scatterplot of the same coefficient estimates for word types that appear in 51 to 100 documents. The great majority of features do not have large coefficients, but even for the ones that do, there is not a tight correlation. 4.2 Additional Anchors We wish to learn how our model performance depends on the choice and number of anchor words. Selecting from WordNet synonym lists (Fellbaum, 1998), we choose five positive anchor words and five negative (Figure 3). This produces a total of 25 different possible pairs for use in producing coefficient estimates. Figure 4 shows the classification performance of unsupervised procedures using the 1400 labeled Pang documents as test data. Coefficients ˆαj are estimated as described in Equation 22. Several different experimental conditions are applied. The methods labeled ”Count” use the original un-normalized coefficients, while those labeled “Norm.” have been normalized so that the number of co-occurrences with each anchor have identical variance. Results are shown when rare words (with three or fewer occurrences in the labeled corpus) are included and omitted. The methods “pair” and “10” describe whether all ten anchor coefficients are used at once, or just the ones that correspond to a single pair of Method Feat. Misclass. St.Dev Count Pair >3 39.6% 2.9% Norm. Pair >3 38.4% 3.0% Count Pair all 37.4% 3.1% Norm. Pair all 37.3% 3.0% Count 10 > 3 36.4% – Norm. 10 > 3 35.4% – Count 10 all 34.6% – Norm. 10 all 34.1% – Figure 4: Classification Error Rates for Different Unsupervised Approaches anchor words. For anchor pairs, the mean error across all 25 pairs is reported, along with its standard deviation. Patterns are consistent across the different conditions. A relatively large improvement comes from using all ten anchor words. Smaller benefits arise from including rare words and from normalizing model coefficients. Models that use the original pair of anchor words, excellent and poor, perform slightly better than the average pair. Whereas mean performance ranges from 37.3% to 39.6%, misclassification rates for this pair of anchors ranges from 37.4% to 38.1%. 4.3 A Smaller Unlabeled Corpus As described in Section 3.2, there are several reasons to explore the use of a smaller unlabeled corpus, rather than the entire Internet. In our experiments, we use additional movie reviews as our documents. For this domain, Pang makes available 27,886 reviews.4 Because this corpus offers dramatically fewer instances of anchor words, we modify our estimation procedure. Rather than discarding words that rarely co-occur with anchors, we use the same feature set as before and regularize estimates by the same procedure used in the Naive Bayes procedure described earlier. Using all features, and ten anchor words with normalized scores, test error is 35.0%. This suggests that comparable results can be attained while referring to a considerably smaller unlabeled corpus. Rather than requiring several days of downloads, the count of nearby co-occurrences was completed in under ten minutes. Because this procedure enables fast access to counts, we explore the possibility of dramatically enlarging our collection of anchor words. We col4This corpus is freely available on the following website: http://www.cs.cornell.edu/people/pabo/movie-review-data/. 100 200 300 400 500 600 0.30 0.32 0.34 0.36 0.38 0.40 Num. of Labeled Documents Classif. Error Misclassification versus Sample Size Figure 5: Misclassification with Labeled Documents. 
The solid curve represents a latent factor model with estimated coefficients. The dashed curve uses a Naive Bayes classifier. The two horizontal lines represent unsupervised estimates; the upper one is for the original unsupervised classifier, and the lower is for the most successful unsupervised method. lect data for the complete set of WordNet synonyms for the words good, best, bad, boring, and dreadful. This yields a total of 83 anchor words, 35 positive and 48 negative. When all of these anchors are used in conjunction, test error increases to 38.3%. One possible difficulty in using this automated procedure is that some synonyms for a word do not carry the same sentiment orientation. For instance, intense is listed as a synonym for bad, even though its presence in a movie review is a strongly positive indication.5 4.4 Methods with Supervision As demonstrated in Section 3.3, each anchor word ak is associated with a coefficient γk. In unsupervised models, these coefficients are assumed to be known. However, when labeled documents are available, it may be advantageous to estimate them. Figure 5 compares the performance of a model with estimated coefficient vector γ, as opposed to unsupervised models and a traditional supervised approach. When a moderate number of labeled documents are available, it offers a noticeable improvement. The supervised method used for reference in this case is the Naive Bayes model that is described in section 4.1. Naive Bayes classification is of particular interest here because it converges faster to its asymptotic optimum than do discriminative methods (Ng, A. Y. and Jordan, M., 2002). Further, with 5In the labeled Pang corpus, intense appears in 38 positive reviews and only 6 negative ones. a larger number of labeled documents, its performance on this corpus is comparable to that of Support Vector Machines and Maximum Entropy models (Pang et al., 2002). The coefficient vector γ is estimated by regularized logistic regression. This method has been used in other text classification problems, as in Zhang and Yang (2003). In our case, the regularization6 is introduced in order to enforce the beliefs that: γ1 ≈ γ2, if a1, a2 synonyms (30) γ1 ≈ −γ2, if a1, a2 antonyms (31) For further information on regularized model fitting, see for instance, Hastie et al. (2001). 5 Conclusion In business settings, there is growing interest in learning product reputations from the Internet. For such problems, it is often difficult or expensive to obtain labeled data. As a result, a change in modeling strategies is needed, towards approaches that require less supervision. In this paper we provide a framework for allowing human-provided information to be combined with unlabeled documents and labeled documents. We have found that this framework enables improvements over existing techniques, both in terms of the speed of model estimation and in classification accuracy. As a result, we believe that this is a promising new approach to problems of practical importance. References Kushal Dave, Steve Lawrence, and David M. Pennock. 2003. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. C. Fellbaum. 1998. Wordnet an electronic lexical database. T. Hastie, R. Tibshirani, and J. Friedman. 2001. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag. Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1997. Predicting the semantic orientation of adjectives. In Philip R. 
Cohen and Wolfgang Wahlster, editors, Proceedings of the Thirty-Fifth Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics, pages 174–181, Somerset, New Jersey. Association for Computational Linguistics. 6By cross-validation, we choose the regularization term λ = 1.5/sqrt(n), where n is the number of labeled documents. Satoshi Morinaga, Kenji Yamanishi, Kenji Tateishi, and Toshikazu Fukushima. 2002. Mining product reputations on the web. Ng, A. Y. and Jordan, M. 2002. On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. Advances in Neural Information Processing Systems, 14. Kamal Nigam, Andrew K. McCallum, Sebastian Thrun, and Tom M. Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103–134. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP). P.D. Turney and M.L. Littman. 2002. Unsupervised learning of semantic orientation from a hundredbillion-word corpus. Peter Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL’02), pages 417– 424, Philadelphia, Pennsylvania. Association for Computational Linguistics. Janyce Wiebe. 2000. Learning subjective adjectives from corpora. In Proc. 17th National Conference on Artificial Intelligence (AAAI-2000), Austin, Texas. Jian Zhang and Yiming Yang. 2003. ”robustness of regularized linear classification methods in text categorization”. In Proceedings of the 26th Annual International ACM SIGIR Conference (SIGIR 2003). | 2004 | 34 |
A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts Bo Pang and Lillian Lee Department of Computer Science Cornell University Ithaca, NY 14853-7501 {pabo,llee}@cs.cornell.edu Abstract Sentiment analysis seeks to identify the viewpoint(s) underlying a text span; an example application is classifying a movie review as “thumbs up” or “thumbs down”. To determine this sentiment polarity, we propose a novel machine-learning method that applies text-categorization techniques to just the subjective portions of the document. Extracting these portions can be implemented using efficient techniques for finding minimum cuts in graphs; this greatly facilitates incorporation of cross-sentence contextual constraints. 1 Introduction The computational treatment of opinion, sentiment, and subjectivity has recently attracted a great deal of attention (see references), in part because of its potential applications. For instance, informationextraction and question-answering systems could flag statements and queries regarding opinions rather than facts (Cardie et al., 2003). Also, it has proven useful for companies, recommender systems, and editorial sites to create summaries of people’s experiences and opinions that consist of subjective expressions extracted from reviews (as is commonly done in movie ads) or even just a review’s polarity — positive (“thumbs up”) or negative (“thumbs down”). Document polarity classification poses a significant challenge to data-driven methods, resisting traditional text-categorization techniques (Pang, Lee, and Vaithyanathan, 2002). Previous approaches focused on selecting indicative lexical features (e.g., the word “good”), classifying a document according to the number of such features that occur anywhere within it. In contrast, we propose the following process: (1) label the sentences in the document as either subjective or objective, discarding the latter; and then (2) apply a standard machine-learning classifier to the resulting extract. This can prevent the polarity classifier from considering irrelevant or even potentially misleading text: for example, although the sentence “The protagonist tries to protect her good name” contains the word “good”, it tells us nothing about the author’s opinion and in fact could well be embedded in a negative movie review. Also, as mentioned above, subjectivity extracts can be provided to users as a summary of the sentiment-oriented content of the document. Our results show that the subjectivity extracts we create accurately represent the sentiment information of the originating documents in a much more compact form: depending on choice of downstream polarity classifier, we can achieve highly statistically significant improvement (from 82.8% to 86.4%) or maintain the same level of performance for the polarity classification task while retaining only 60% of the reviews’ words. Also, we explore extraction methods based on a minimum cut formulation, which provides an efficient, intuitive, and effective means for integrating inter-sentencelevel contextual information with traditional bag-ofwords features. 2 Method 2.1 Architecture One can consider document-level polarity classification to be just a special (more difficult) case of text categorization with sentiment- rather than topic-based categories. 
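Before the architecture is described in detail, the two-step process proposed in the introduction can be summarised in a few lines. This is a minimal sketch under assumed interfaces (a sentence splitter, a sentence-level subjectivity classifier, and a document-level polarity classifier), not the authors' implementation.

```python
def classify_polarity(review, split_sentences, is_subjective, polarity_clf):
    """Step 1: keep only the sentences labelled subjective.
    Step 2: run a standard polarity classifier on the resulting extract."""
    sentences = split_sentences(review)
    extract = [s for s in sentences if is_subjective(s)]   # discard objective sentences
    return polarity_clf(" ".join(extract))                 # "thumbs up" / "thumbs down"
```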
Hence, standard machinelearning classification techniques, such as support vector machines (SVMs), can be applied to the entire documents themselves, as was done by Pang, Lee, and Vaithyanathan (2002). We refer to such classification techniques as default polarity classifiers. However, as noted above, we may be able to improve polarity classification by removing objective sentences (such as plot summaries in a movie review). We therefore propose, as depicted in Figure 1, to first employ a subjectivity detector that determines whether each sentence is subjective or not: discarding the objective ones creates an extract that should better represent a review’s subjective content to a default polarity classifier. s1 s2 s3 s4 s_n +/− s4 s1 subjectivity detector yes no no yes n−sentence review subjective sentence? m−sentence extract (m<=n) review? positive or negative default classifier polarity subjectivity extraction Figure 1: Polarity classification via subjectivity detection. To our knowledge, previous work has not integrated sentence-level subjectivity detection with document-level sentiment polarity. Yu and Hatzivassiloglou (2003) provide methods for sentencelevel analysis and for determining whether a document is subjective or not, but do not combine these two types of algorithms or consider document polarity classification. The motivation behind the singlesentence selection method of Beineke et al. (2004) is to reveal a document’s sentiment polarity, but they do not evaluate the polarity-classification accuracy that results. 2.2 Context and Subjectivity Detection As with document-level polarity classification, we could perform subjectivity detection on individual sentences by applying a standard classification algorithm on each sentence in isolation. However, modeling proximity relationships between sentences would enable us to leverage coherence: text spans occurring near each other (within discourse boundaries) may share the same subjectivity status, other things being equal (Wiebe, 1994). We would therefore like to supply our algorithms with pair-wise interaction information, e.g., to specify that two particular sentences should ideally receive the same subjectivity label but not state which label this should be. Incorporating such information is somewhat unnatural for classifiers whose input consists simply of individual feature vectors, such as Naive Bayes or SVMs, precisely because such classifiers label each test item in isolation. One could define synthetic features or feature vectors to attempt to overcome this obstacle. However, we propose an alternative that avoids the need for such feature engineering: we use an efficient and intuitive graph-based formulation relying on finding minimum cuts. Our approach is inspired by Blum and Chawla (2001), although they focused on similarity between items (the motivation being to combine labeled and unlabeled data), whereas we are concerned with physical proximity between the items to be classified; indeed, in computer vision, modeling proximity information via graph cuts has led to very effective classification (Boykov, Veksler, and Zabih, 1999). 2.3 Cut-based classification Figure 2 shows a worked example of the concepts in this section. Suppose we have n items x1, . . . 
, xn to divide into two classes C1 and C2, and we have access to two types of information: • Individual scores ind_j(x_i): non-negative estimates of each x_i's preference for being in C_j based on just the features of x_i alone; and • Association scores assoc(x_i, x_k): non-negative estimates of how important it is that x_i and x_k be in the same class.1 We would like to maximize each item's "net happiness": its individual score for the class it is assigned to, minus its individual score for the other class. But, we also want to penalize putting tightly-associated items into different classes. Thus, after some algebra, we arrive at the following optimization problem: assign the x_i's to C1 and C2 so as to minimize the partition cost

\sum_{x \in C_1} ind_2(x) + \sum_{x \in C_2} ind_1(x) + \sum_{x_i \in C_1,\, x_k \in C_2} assoc(x_i, x_k).

The problem appears intractable, since there are 2^n possible binary partitions of the x_i's. However, suppose we represent the situation in the following manner. Build an undirected graph G with vertices {v_1, . . . , v_n, s, t}; the last two are, respectively, the source and sink. Add n edges (s, v_i), each with weight ind_1(x_i), and n edges (v_i, t), each with weight ind_2(x_i). Finally, add \binom{n}{2} edges (v_i, v_k), each with weight assoc(x_i, x_k). Then, cuts in G are defined as follows: Definition 1 A cut (S, T) of G is a partition of its nodes into sets S = {s} ∪ S′ and T = {t} ∪ T′, where s ∉ S′, t ∉ T′. Its cost cost(S, T) is the sum of the weights of all edges crossing from S to T. A minimum cut of G is one of minimum cost.

1Asymmetry is allowed, but we used symmetric scores.

Figure 2: Graph for classifying three items Y ("yes"), M ("maybe") and N ("no"). Brackets enclose example values; here, the individual scores happen to be probabilities: ind_1(Y) = .8, ind_2(Y) = .2; ind_1(M) = .5, ind_2(M) = .5; ind_1(N) = .1, ind_2(N) = .9; assoc(Y, M) = 1.0, assoc(Y, N) = .1, assoc(M, N) = .2. The accompanying table lists each candidate C1 with its individual penalties, association penalties and total cost: {Y,M}: .2 + .5 + .1 and .1 + .2, total 1.1; (none): .8 + .5 + .1 and 0, total 1.4; {Y,M,N}: .2 + .5 + .9 and 0, total 1.6; {Y}: .2 + .5 + .1 and 1.0 + .1, total 1.9; {N}: .8 + .5 + .9 and .1 + .2, total 2.5; {M}: .8 + .5 + .1 and 1.0 + .2, total 2.6; {Y,N}: .2 + .5 + .9 and 1.0 + .2, total 2.8; {M,N}: .8 + .5 + .9 and 1.0 + .1, total 3.3. Based on individual scores alone, we would put Y in C1, N in C2, and be undecided about M. But the association scores favor cuts that put Y and M in the same class, as shown in the table. Thus, the minimum cut, indicated by the dashed line, places M together with Y in C1.

Observe that every cut corresponds to a partition of the items and has cost equal to the partition cost. Thus, our optimization problem reduces to finding minimum cuts. Practical advantages As we have noted, formulating our subjectivity-detection problem in terms of graphs allows us to model item-specific and pairwise information independently. Note that this is a very flexible paradigm. For instance, it is perfectly legitimate to use knowledge-rich algorithms employing deep linguistic knowledge about sentiment indicators to derive the individual scores. And we could also simultaneously use knowledge-lean methods to assign the association scores. Interestingly, Yu and Hatzivassiloglou (2003) compared an individual-preference classifier against a relationship-based method, but didn't combine the two; the ability to coordinate such algorithms is precisely one of the strengths of our approach.
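As a sanity check on the cost function, the worked example of Figure 2 can be reproduced by brute force; for document-sized graphs one would of course compute the minimum cut with a max-flow routine rather than enumerate partitions. The sketch below is illustrative only and hard-codes the example values from Figure 2.

```python
from itertools import combinations

ind1 = {"Y": 0.8, "M": 0.5, "N": 0.1}          # preference for C1
ind2 = {"Y": 0.2, "M": 0.5, "N": 0.9}          # preference for C2
assoc = {("Y", "M"): 1.0, ("Y", "N"): 0.1, ("M", "N"): 0.2}

items = list(ind1)

def partition_cost(c1):
    c1 = set(c1)
    # individual penalties: ind2 for items placed in C1, ind1 for items placed in C2
    cost = sum(ind2[x] for x in c1) + sum(ind1[x] for x in items if x not in c1)
    # association penalties: edges whose endpoints end up in different classes
    cost += sum(w for (a, b), w in assoc.items() if (a in c1) != (b in c1))
    return cost

# enumerate every candidate C1 subset and keep the cheapest partition
best = min((frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)), key=partition_cost)
print(sorted(best), partition_cost(best))       # ['M', 'Y'] 1.1, matching Figure 2
```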
But a crucial advantage specific to the utilization of a minimum-cut-based approach is that we can use maximum-flow algorithms with polynomial asymptotic running times — and near-linear running times in practice — to exactly compute the minimumcost cut(s), despite the apparent intractability of the optimization problem (Cormen, Leiserson, and Rivest, 1990; Ahuja, Magnanti, and Orlin, 1993).2 In contrast, other graph-partitioning problems that have been previously used to formulate NLP classification problems3 are NP-complete (Hatzivassiloglou and McKeown, 1997; Agrawal et al., 2003; Joachims, 2003). 2Code available at http://www.avglab.com/andrew/soft.html. 3Graph-based approaches to general clustering problems are too numerous to mention here. 3 Evaluation Framework Our experiments involve classifying movie reviews as either positive or negative, an appealing task for several reasons. First, as mentioned in the introduction, providing polarity information about reviews is a useful service: witness the popularity of www.rottentomatoes.com. Second, movie reviews are apparently harder to classify than reviews of other products (Turney, 2002; Dave, Lawrence, and Pennock, 2003). Third, the correct label can be extracted automatically from rating information (e.g., number of stars). Our data4 contains 1000 positive and 1000 negative reviews all written before 2002, with a cap of 20 reviews per author (312 authors total) per category. We refer to this corpus as the polarity dataset. Default polarity classifiers We tested support vector machines (SVMs) and Naive Bayes (NB). Following Pang et al. (2002), we use unigram-presence features: the ith coordinate of a feature vector is 1 if the corresponding unigram occurs in the input text, 0 otherwise. (For SVMs, the feature vectors are length-normalized). Each default documentlevel polarity classifier is trained and tested on the extracts formed by applying one of the sentencelevel subjectivity detectors to reviews in the polarity dataset. Subjectivity dataset To train our detectors, we need a collection of labeled sentences. Riloff and Wiebe (2003) state that “It is [very hard] to obtain collections of individual sentences that can be easily identified as subjective or objective”; the polarity-dataset sentences, for example, have not 4Available at www.cs.cornell.edu/people/pabo/moviereview-data/ (review corpus version 2.0). been so annotated.5 Fortunately, we were able to mine the Web to create a large, automaticallylabeled sentence corpus6. To gather subjective sentences (or phrases), we collected 5000 moviereview snippets (e.g., “bold, imaginative, and impossible to resist”) from www.rottentomatoes.com. To obtain (mostly) objective data, we took 5000 sentences from plot summaries available from the Internet Movie Database (www.imdb.com). We only selected sentences or snippets at least ten words long and drawn from reviews or plot summaries of movies released post-2001, which prevents overlap with the polarity dataset. Subjectivity detectors As noted above, we can use our default polarity classifiers as “basic” sentencelevel subjectivity detectors (after retraining on the subjectivity dataset) to produce extracts of the original reviews. We also create a family of cut-based subjectivity detectors; these take as input the set of sentences appearing in a single document and determine the subjectivity status of all the sentences simultaneously using per-item and pairwise relationship information. 
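The unigram-presence representation used above for the default polarity classifiers is straightforward to realise. The helper below is a hedged sketch (not the authors' feature extractor), including the length normalisation applied in the SVM case.

```python
import math

def presence_vector(text, vocabulary):
    """ith coordinate is 1 if the ith vocabulary unigram occurs in the text, else 0."""
    tokens = set(text.lower().split())
    return [1.0 if w in tokens else 0.0 for w in vocabulary]

def length_normalize(vec):
    """Scale to unit Euclidean length, as done for the SVM feature vectors."""
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm else vec

vocab = ["good", "bad", "plot", "boring", "brilliant"]
print(length_normalize(presence_vector("A brilliant plot , simply brilliant .", vocab)))
```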
Specifically, for a given document, we use the construction in Section 2.2 to build a graph wherein the source s and sink t correspond to the class of subjective and objective sentences, respectively, and each internal node v_i corresponds to the document's ith sentence s_i. We can set the individual scores ind_1(s_i) to Pr_sub^NB(s_i) and ind_2(s_i) to 1 − Pr_sub^NB(s_i), as shown in Figure 3, where Pr_sub^NB(s) denotes Naive Bayes' estimate of the probability that sentence s is subjective; or, we can use the weights produced by the SVM classifier instead.7 If we set all the association scores to zero, then the minimum-cut classification of the sentences is the same as that of the basic subjectivity detector. Alternatively, we incorporate the degree of proximity between pairs of sentences, controlled by three parameters. The threshold T specifies the maximum distance two sentences can be separated by and still be considered proximal. The non-increasing function f(d) specifies how the influence of proximal sentences decays with respect to distance d; in our experiments, we tried f(d) = 1, e^{1−d}, and 1/d^2. The constant c controls the relative influence of the association scores: a larger c makes the minimum-cut algorithm more loath to put proximal sentences in different classes. With these in hand8, we set (for j > i):

assoc(s_i, s_j) := f(j − i) · c if (j − i) ≤ T, and 0 otherwise.

5We therefore could not directly evaluate sentence-classification accuracy on the polarity dataset. 6Available at www.cs.cornell.edu/people/pabo/moviereview-data/ , sentence corpus version 1.0. 7We converted SVM output d_i, which is a signed distance (negative = objective) from the separating hyperplane, to non-negative numbers by ind_1(s_i) := 1 if d_i > 2; (2 + d_i)/4 if −2 ≤ d_i ≤ 2; 0 if d_i < −2; and ind_2(s_i) = 1 − ind_1(s_i). Note that scaling is employed only for consistency; the algorithm itself does not require probabilities for individual scores. 8Parameter training is driven by optimizing the performance of the downstream polarity classifier rather than the detector itself because the subjectivity dataset's sentences come from different reviews, and so are never proximal.
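Taken together, the edge weights just defined are simple to compute. The sketch below is an illustration under assumed inputs (per-sentence SVM decision values, and example settings of T, c and f), not the authors' code.

```python
import math

def ind_scores_from_svm(decision_values):
    """Rescale signed distances d_i from the hyperplane into [0, 1] individual scores."""
    ind1 = []
    for d in decision_values:
        if d > 2:
            ind1.append(1.0)
        elif d < -2:
            ind1.append(0.0)
        else:
            ind1.append((2.0 + d) / 4.0)
    return ind1, [1.0 - v for v in ind1]        # ind2 = 1 - ind1

def assoc_scores(n_sentences, T=3, c=0.5, f=lambda d: math.exp(1 - d)):
    """assoc(s_i, s_j) = f(j - i) * c if (j - i) <= T, else 0  (for j > i)."""
    assoc = {}
    for i in range(n_sentences):
        for j in range(i + 1, n_sentences):
            assoc[(i, j)] = f(j - i) * c if (j - i) <= T else 0.0
    return assoc

ind1, ind2 = ind_scores_from_svm([3.1, 0.4, -2.5, 1.0])
print(ind1, ind2)
print(assoc_scores(4, T=2, c=0.5, f=lambda d: 1.0))
```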
4 Experimental Results

Below, we report average accuracies computed by ten-fold cross-validation over the polarity dataset. Section 4.1 examines our basic subjectivity extraction algorithms, which are based on individual-sentence predictions alone. Section 4.2 evaluates the more sophisticated form of subjectivity extraction that incorporates context information via the minimum-cut paradigm. As we will see, the use of subjectivity extracts can in the best case provide satisfying improvement in polarity classification, and otherwise can at least yield polarity-classification accuracies indistinguishable from employing the full review. At the same time, the extracts we create are both smaller on average than the original document and more effective as input to a default polarity classifier than the same-length counterparts produced by standard summarization tactics (e.g., first- or last-N sentences). We therefore conclude that subjectivity extraction produces effective summaries of document sentiment.

4.1 Basic subjectivity extraction

As noted in Section 3, both Naive Bayes and SVMs can be trained on our subjectivity dataset and then used as a basic subjectivity detector. The former has somewhat better average ten-fold cross-validation performance on the subjectivity dataset (92% vs. 90%), and so for space reasons, our initial discussions will focus on the results attained via NB subjectivity detection. Employing Naive Bayes as a subjectivity detector (ExtractNB) in conjunction with a Naive Bayes document-level polarity classifier achieves 86.4% accuracy.9 This is a clear improvement over the 82.8% that results when no extraction is applied (Full review); indeed, the difference is highly statistically significant (p < 0.01, paired t-test). With SVMs as the polarity classifier instead, the Full review performance rises to 87.15%, but comparison via the paired t-test reveals that this is statistically indistinguishable from the 86.4% that is achieved by running the SVM polarity classifier on ExtractNB input. (More improvements to extraction performance are reported later in this section.) These findings indicate10 that the extracts preserve (and, in the NB polarity-classifier case, apparently clarify) the sentiment information in the originating documents, and thus are good summaries from the polarity-classification point of view. Further support comes from a "flipping" experiment: if we give as input to the default polarity classifier an extract consisting of the sentences labeled objective, accuracy drops dramatically to 71% for NB and 67% for SVMs. This confirms our hypothesis that sentences discarded by the subjectivity extraction process are indeed much less indicative of sentiment polarity. Moreover, the subjectivity extracts are much more compact than the original documents (an important feature for a summary to have): they contain on average only about 60% of the source reviews' words. (This word preservation rate is plotted along the x-axis in the graphs in Figure 5.) This prompts us to study how much reduction of the original documents subjectivity detectors can perform and still accurately represent the texts' sentiment information. We can create subjectivity extracts of varying lengths by taking just the N most subjective sentences11 from the originating review. As one baseline to compare against, we take the canonical summarization standard of extracting the first N sentences — in general settings, authors often begin documents with an overview. We also consider the last N sentences: in many documents, concluding material may be a good summary, and www.rottentomatoes.com tends to select "snippets" from the end of movie reviews (Beineke et al., 2004). Finally, as a sanity check, we include results from the N least subjective sentences according to Naive Bayes. Figure 4 shows the polarity classifier results as N ranges between 1 and 40.

Figure 3: Graph-cut-based creation of subjective extracts. (An n-sentence review is split into sentences s1 ... sn; each sentence node is linked to the source and sink with the individual subjectivity-probability weights Pr_sub^NB(si) and 1 − Pr_sub^NB(si), and to nearby sentence nodes with proximity links; the minimum cut is computed and the sentences on the subjective side of the cut form the m-sentence extract, m ≤ n.)

9This result and others are depicted in Figure 5; for now, consider only the y-axis in those plots. 10Recall that direct evidence is not available because the polarity dataset's sentences lack subjectivity labels. 11These are the N sentences assigned the highest probability by the basic NB detector, regardless of whether their probabilities exceed 50% and so would actually be classified as subjective by Naive Bayes. For reviews with fewer than N sentences, the entire review will be returned.
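The varying-length extracts described above reduce to a one-line ranking step. A possible rendering follows (not the authors' code; prob_subjective stands in for the basic NB detector's per-sentence probabilities, and sentences are kept in document order as one reasonable choice).

```python
def top_n_extract(sentences, prob_subjective, n):
    """Return the N sentences the detector considers most subjective,
    in their original order (whole review if it has fewer than N sentences)."""
    if len(sentences) <= n:
        return list(sentences)
    ranked = sorted(range(len(sentences)), key=lambda i: prob_subjective[i], reverse=True)
    keep = set(ranked[:n])
    return [s for i, s in enumerate(sentences) if i in keep]

sents = ["The plot follows a detective.", "A stunning, heartfelt film.", "It runs 120 minutes."]
probs = [0.2, 0.9, 0.1]
print(top_n_extract(sents, probs, 1))   # -> ['A stunning, heartfelt film.']
```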
Our first observation is that the NB detector provides very good "bang for the buck": with subjectivity extracts containing as few as 15 sentences, accuracy is quite close to what one gets if the entire review is used. In fact, for the NB polarity classifier, just using the 5 most subjective sentences is almost as informative as the Full review while containing on average only about 22% of the source reviews' words. Also, it so happens that at N = 30, performance is actually slightly better than (but statistically indistinguishable from) Full review even when the SVM default polarity classifier is used (87.2% vs. 87.15%).12 This suggests potentially effective extraction alternatives other than using a fixed probability threshold (which resulted in the lower accuracy of 86.4% reported above). Furthermore, we see in Figure 4 that the N most-subjective-sentences method generally outperforms the other baseline summarization methods (which perhaps suggests that sentiment summarization cannot be treated the same as topic-based summarization, although this conjecture would need to be verified on other domains and data). It's also interesting to observe how much better the last N sentences are than the first N sentences; this may reflect a (hardly surprising) tendency for movie-review authors to place plot descriptions at the beginning rather than the end of the text and conclude with overtly opinionated statements.

12Note that roughly half of the documents in the polarity dataset contain more than 30 sentences (average = 32.3, standard deviation 15).

Figure 4: Accuracies using N-sentence extracts for NB (left) and SVM (right) default polarity classifiers. (Each panel plots average accuracy against N for the most subjective N sentences, the first N sentences, the last N sentences, and the least subjective N sentences, with the Full review accuracy shown for reference.)

Figure 5: Word preservation rate vs. accuracy, NB (left) and SVMs (right) as default polarity classifiers. Also indicated are results for some statistical significance tests. (Each panel plots average accuracy against the percentage of words extracted for ExtractNB, ExtractSVM, ExtractNB+Prox and ExtractSVM+Prox, relative to the Full review.)

4.2 Incorporating context information

The previous section demonstrated the value of subjectivity detection. We now examine whether context information, particularly regarding sentence proximity, can further improve subjectivity extraction. As discussed in Sections 2.2 and 3, contextual constraints are easily incorporated via the minimum-cut formalism but are not natural inputs for standard Naive Bayes and SVMs.
Figure 5 shows the effect of adding in proximity information. ExtractNB+Prox and ExtractSVM+Prox are the graph-based subjectivity detectors using Naive Bayes and SVMs, respectively, for the individual scores; we depict the best performance achieved by a single setting of the three proximity-related edge-weight parameters over all ten data folds13 (parameter selection was not a focus of the current work). The two comparisons we are most interested in are ExtractNB+Prox versus ExtractNB and ExtractSVM+Prox versus ExtractSVM. We see that the context-aware graph-based subjectivity detectors tend to create extracts that are more informative (statistically significant so (paired t-test) for SVM subjectivity detectors only), although these extracts are longer than their contextblind counterparts. We note that the performance 13Parameters are chosen from T ∈ {1, 2, 3}, f(d) ∈ {1, e1−d, 1/d2}, and c ∈[0, 1] at intervals of 0.1. enhancements cannot be attributed entirely to the mere inclusion of more sentences regardless of whether they are subjective or not — one counterargument is that Full review yielded substantially worse results for the NB default polarity classifier— and at any rate, the graph-derived extracts are still substantially more concise than the full texts. Now, while incorporating a bias for assigning nearby sentences to the same category into NB and SVM subjectivity detectors seems to require some non-obvious feature engineering, we also wish to investigate whether our graph-based paradigm makes better use of contextual constraints that can be (more or less) easily encoded into the input of standard classifiers. For illustrative purposes, we consider paragraph-boundary information, looking only at SVM subjectivity detection for simplicity’s sake. It seems intuitively plausible that paragraph boundaries (an approximation to discourse boundaries) loosen coherence constraints between nearby sentences. To capture this notion for minimum-cutbased classification, we can simply reduce the association scores for all pairs of sentences that occur in different paragraphs by multiplying them by a cross-paragraph-boundary weight w ∈[0, 1]. For standard classifiers, we can employ the trick of having the detector treat paragraphs, rather than sentences, as the basic unit to be labeled. This enables the standard classifier to utilize coherence between sentences in the same paragraph; on the other hand, it also (probably unavoidably) poses a hard constraint that all of a paragraph’s sentences get the same label, which increases noise sensitivity.14 Our experiments reveal the graph-cut formulation to be the better approach: for both default polarity classifiers (NB and SVM), some choice of parameters (including w) for ExtractSVM+Prox yields statistically significant improvement over its paragraphunit non-graph counterpart (NB: 86.4% vs. 85.2%; SVM: 86.15% vs. 85.45%). 5 Conclusions We examined the relation between subjectivity detection and polarity classification, showing that subjectivity detection can compress reviews into much shorter extracts that still retain polarity information at a level comparable to that of the full review. In fact, for the Naive Bayes polarity classifier, the subjectivity extracts are shown to be more effective input than the originating document, which suggests 14For example, in the data we used, boundaries may have been missed due to malformed html. that they are not only shorter, but also “cleaner” representations of the intended polarity. 
We have also shown that employing the minimum-cut framework results in the development of efficient algorithms for sentiment analysis. Utilizing contextual information via this framework can lead to statistically significant improvement in polarity-classification accuracy. Directions for future research include developing parameterselection techniques, incorporating other sources of contextual cues besides sentence proximity, and investigating other means for modeling such information. Acknowledgments We thank Eric Breck, Claire Cardie, Rich Caruana, Yejin Choi, Shimon Edelman, Thorsten Joachims, Jon Kleinberg, Oren Kurland, Art Munson, Vincent Ng, Fernando Pereira, Ves Stoyanov, Ramin Zabih, and the anonymous reviewers for helpful comments. This paper is based upon work supported in part by the National Science Foundation under grants ITR/IM IIS-0081334 and IIS-0329064, a Cornell Graduate Fellowship in Cognitive Studies, and by an Alfred P. Sloan Research Fellowship. Any opinions, findings, and conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of the National Science Foundation or Sloan Foundation. References Agrawal, Rakesh, Sridhar Rajagopalan, Ramakrishnan Srikant, and Yirong Xu. 2003. Mining newsgroups using networks arising from social behavior. In WWW, pages 529–535. Ahuja, Ravindra, Thomas L. Magnanti, and James B. Orlin. 1993. Network Flows: Theory, Algorithms, and Applications. Prentice Hall. Beineke, Philip, Trevor Hastie, Christopher Manning, and Shivakumar Vaithyanathan. 2004. Exploring sentiment summarization. In AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications (AAAI tech report SS-04-07). Blum, Avrim and Shuchi Chawla. 2001. Learning from labeled and unlabeled data using graph mincuts. In Intl. Conf. on Machine Learning (ICML), pages 19–26. Boykov, Yuri, Olga Veksler, and Ramin Zabih. 1999. Fast approximate energy minimization via graph cuts. In Intl. Conf. on Computer Vision (ICCV), pages 377–384. Journal version in IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI) 23(11):1222–1239, 2001. Cardie, Claire, Janyce Wiebe, Theresa Wilson, and Diane Litman. 2003. Combining low-level and summary representations of opinions for multiperspective question answering. In AAAI Spring Symposium on New Directions in Question Answering, pages 20–27. Cormen, Thomas H., Charles E. Leiserson, and Ronald L. Rivest. 1990. Introduction to Algorithms. MIT Press. Das, Sanjiv and Mike Chen. 2001. Yahoo! for Amazon: Extracting market sentiment from stock message boards. In Asia Pacific Finance Association Annual Conf. (APFA). Dave, Kushal, Steve Lawrence, and David M. Pennock. 2003. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. In WWW, pages 519–528. Dini, Luca and Giampaolo Mazzini. 2002. Opinion classification through information extraction. In Intl. Conf. on Data Mining Methods and Databases for Engineering, Finance and Other Fields, pages 299–310. Durbin, Stephen D., J. Neal Richter, and Doug Warner. 2003. A system for affective rating of texts. In KDD Wksp. on Operational Text Classification Systems (OTC-3). Hatzivassiloglou, Vasileios and Kathleen McKeown. 1997. Predicting the semantic orientation of adjectives. In 35th ACL/8th EACL, pages 174–181. Joachims, Thorsten. 2003. Transductive learning via spectral graph partitioning. In Intl. Conf. on Machine Learning (ICML). Liu, Hugo, Henry Lieberman, and Ted Selker. 2003. 
A model of textual affect sensing using real-world knowledge. In Intelligent User Interfaces (IUI), pages 125–132. Montes-y-G´omez, Manuel, Aurelio L´opez-L´opez, and Alexander Gelbukh. 1999. Text mining as a social thermometer. In IJCAI Wksp. on Text Mining, pages 103–107. Morinaga, Satoshi, Kenji Yamanishi, Kenji Tateishi, and Toshikazu Fukushima. 2002. Mining product reputations on the web. In KDD, pages 341– 349. Industry track. Pang, Bo, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In EMNLP, pages 79–86. Qu, Yan, James Shanahan, and Janyce Wiebe, editors. 2004. AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications. AAAI technical report SS-04-07. Riloff, Ellen and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In EMNLP. Riloff, Ellen, Janyce Wiebe, and Theresa Wilson. 2003. Learning subjective nouns using extraction pattern bootstrapping. In Conf. on Natural Language Learning (CoNLL), pages 25–32. Subasic, Pero and Alison Huettner. 2001. Affect analysis of text using fuzzy semantic typing. IEEE Trans. Fuzzy Systems, 9(4):483–496. Tong, Richard M. 2001. An operational system for detecting and tracking opinions in on-line discussion. SIGIR Wksp. on Operational Text Classification. Turney, Peter. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In ACL, pages 417–424. Wiebe, Janyce M. 1994. Tracking point of view in narrative. Computational Linguistics, 20(2):233– 287. Yi, Jeonghee, Tetsuya Nasukawa, Razvan Bunescu, and Wayne Niblack. 2003. Sentiment analyzer: Extracting sentiments about a given topic using natural language processing techniques. In IEEE Intl. Conf. on Data Mining (ICDM). Yu, Hong and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In EMNLP. | 2004 | 35 |
Finding Predominant Word Senses in Untagged Text Diana McCarthy & Rob Koeling & Julie Weeds & John Carroll Department of Informatics, University of Sussex Brighton BN1 9QH, UK dianam,robk,juliewe,johnca @sussex.ac.uk Abstract In word sense disambiguation (WSD), the heuristic of choosing the most common sense is extremely powerful because the distribution of the senses of a word is often skewed. The problem with using the predominant, or first sense heuristic, aside from the fact that it does not take surrounding context into account, is that it assumes some quantity of handtagged data. Whilst there are a few hand-tagged corpora available for some languages, one would expect the frequency distribution of the senses of words, particularly topical words, to depend on the genre and domain of the text under consideration. We present work on the use of a thesaurus acquired from raw textual corpora and the WordNet similarity package to find predominant noun senses automatically. The acquired predominant senses give a precision of 64% on the nouns of the SENSEVAL2 English all-words task. This is a very promising result given that our method does not require any hand-tagged text, such as SemCor. Furthermore, we demonstrate that our method discovers appropriate predominant senses for words from two domainspecific corpora. 1 Introduction The first sense heuristic which is often used as a baseline for supervised WSD systems outperforms many of these systems which take surrounding context into account. This is shown by the results of the English all-words task in SENSEVAL-2 (Cotton et al., 1998) in figure 1 below, where the first sense is that listed in WordNet for the PoS given by the Penn TreeBank (Palmer et al., 2001). The senses in WordNet are ordered according to the frequency data in the manually tagged resource SemCor (Miller et al., 1993). Senses that have not occurred in SemCor are ordered arbitrarily and after those senses of the word that have occurred. The figure distinguishes systems which make use of hand-tagged data (using HTD) such as SemCor, from those that do not (without HTD). The high performance of the first sense baseline is due to the skewed frequency distribution of word senses. Even systems which show superior performance to this heuristic often make use of the heuristic where evidence from the context is not sufficient (Hoste et al., 2001). Whilst a first sense heuristic based on a sense-tagged corpus such as SemCor is clearly useful, there is a strong case for obtaining a first, or predominant, sense from untagged corpus data so that a WSD system can be tuned to the genre or domain at hand. SemCor comprises a relatively small sample of 250,000 words. There are words where the first sense in WordNet is counter-intuitive, because of the size of the corpus, and because where the frequency data does not indicate a first sense, the ordering is arbitrary. For example the first sense of tiger in WordNet is audacious person whereas one might expect that carnivorous animal is a more common usage. There are only a couple of instances of tiger within SemCor. Another example is embryo, which does not occur at all in SemCor and the first sense is listed as rudimentary plant rather than the anticipated fertilised egg meaning. 
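The first sense heuristic itself is trivial to apply once a sense inventory is available. The sketch below uses NLTK's WordNet interface as a modern stand-in; the paper works with WordNet 1.6, so sense numbering and glosses may differ from what this prints.

```python
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def first_sense(word, pos=wn.NOUN):
    """Return the first-listed WordNet synset for the word, or None if unknown."""
    synsets = wn.synsets(word, pos=pos)
    return synsets[0] if synsets else None

for w in ("tiger", "embryo", "star"):
    s = first_sense(w)
    print(w, "->", s.name(), "-", s.definition())
```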
We believe that an automatic means of finding a predominant sense would be useful for systems that use it as a means of backing-off (Wilks and Stevenson, 1998; Hoste et al., 2001) and for systems that use it in lexical acquisition (McCarthy, 1997; Merlo and Leybold, 2001; Korhonen, 2002) because of the limited size of hand-tagged resources. More importantly, when working within a specific domain one would wish to tune the first sense heuristic to the domain at hand. The first sense of star in SemCor is celestial body; however, if one were disambiguating popular news, celebrity would be preferred. Assuming that one had an accurate WSD system then one could obtain frequency counts for senses and rank them with these counts. However, the most accurate WSD systems are those which require manually sense tagged data in the first place, and their accuracy depends on the quantity of training examples (Yarowsky and Florian, 2002) available. We are therefore investigating a method of automatically ranking WordNet senses from raw text.

Figure 1: The first sense heuristic compared with the SENSEVAL-2 English all-words task results. (The plot shows precision against recall for the participating systems, distinguishing those using hand-tagged data from those without, with the first sense baseline marked.)

Many researchers are developing thesauruses from automatically parsed data. In these each target word is entered with an ordered list of "nearest neighbours". The neighbours are words ordered in terms of the "distributional similarity" that they have with the target. Distributional similarity is a measure indicating the degree that two words, a word and its neighbour, occur in similar contexts. From inspection, one can see that the ordered neighbours of such a thesaurus relate to the different senses of the target word. For example, the neighbours of star in a dependency-based thesaurus provided by Lin1 include superstar, player, teammate and actor early in the list, but one can also see words that are related to another sense of star, e.g. galaxy, sun, world and planet, further down the list. We expect that the quantity and similarity of the neighbours pertaining to different senses will reflect the dominance of the sense to which they pertain. This is because there will be more relational data for the more prevalent senses compared to the less frequent senses. In this paper we describe and evaluate a method for ranking senses of nouns to obtain the predominant sense of a word using the neighbours from automatically acquired thesauruses. The neighbours for a word in a thesaurus are words themselves, rather than senses. In order to associate the neighbours with senses we make use of another notion of similarity, "semantic similarity", which exists between senses, rather than words. We experiment with several WordNet similarity measures (Patwardhan and Pedersen, 2003) which aim to capture semantic relatedness within the WordNet hierarchy. We use WordNet as our sense inventory for this work.

1Available at http://www.cs.ualberta.ca/˜lindek/demos/depsim.htm

The paper is structured as follows. We discuss our method in the following section. Sections 3 and 4 concern experiments using predominant senses from the BNC evaluated against the data in SemCor and the SENSEVAL-2 English all-words task respectively. In section 5 we present results of the method on two domain specific sections of the Reuters corpus for a sample of words. We describe some related work in section 6 and conclude in section 7.
2 Method

In order to find the predominant sense of a target word we use a thesaurus acquired from automatically parsed text based on the method of Lin (1998). This provides the nearest neighbours to each target word, along with the distributional similarity score between the target word and its neighbour. We then use the WordNet similarity package (Patwardhan and Pedersen, 2003) to give us a semantic similarity measure (hereafter referred to as the WordNet similarity measure) to weight the contribution that each neighbour makes to the various senses of the target word.

To find the first sense of a word (w) we take each sense in turn and obtain a score reflecting the prevalence which is used for ranking. Let N_w = \{n_1, n_2, \ldots, n_k\} be the ordered set of the top k scoring neighbours of w from the thesaurus with associated distributional similarity scores dss(w, n_1), dss(w, n_2), \ldots, dss(w, n_k). Let senses(w) be the set of senses of w. For each sense of w (ws_i \in senses(w)) we obtain a ranking score by summing over the dss(w, n_j) of each neighbour (n_j \in N_w) multiplied by a weight. This weight is the WordNet similarity score (wnss) between the target sense (ws_i) and the sense of n_j (ns_x \in senses(n_j)) that maximises this score, divided by the sum of all such WordNet similarity scores for senses(w) and n_j. Thus we rank each sense ws_i \in senses(w) using:

PrevalenceScore(ws_i) = \sum_{n_j \in N_w} dss(w, n_j) \times \frac{wnss(ws_i, n_j)}{\sum_{ws_{i'} \in senses(w)} wnss(ws_{i'}, n_j)}   (1)

where:

wnss(ws_i, n_j) = \max_{ns_x \in senses(n_j)} wnss(ws_i, ns_x)

2.1 Acquiring the Automatic Thesaurus

The thesaurus was acquired using the method described by Lin (1998). For input we used grammatical relation data extracted using an automatic parser (Briscoe and Carroll, 2002). For the experiments in sections 3 and 4 we used the 90 million words of written English from the BNC. For each noun we considered the co-occurring verbs in the direct object and subject relation, the modifying nouns in noun-noun relations and the modifying adjectives in adjective-noun relations. We could easily extend the set of relations in the future. A noun, w, is thus described by a set of co-occurrence triples <w, r, x> and associated frequencies, where r is a grammatical relation and x is a possible co-occurrence with w in that relation. For every pair of nouns, where each noun had a total frequency in the triple data of 10 or more, we computed their distributional similarity using the measure given by Lin (1998). If T(w) is the set of co-occurrence types (r, x) such that I(w, r, x) is positive then the similarity between two nouns, w and n, can be computed as:

dss(w, n) = \frac{\sum_{(r,x) \in T(w) \cap T(n)} \left( I(w, r, x) + I(n, r, x) \right)}{\sum_{(r,x) \in T(w)} I(w, r, x) + \sum_{(r,x) \in T(n)} I(n, r, x)}

where:

I(w, r, x) = \log \frac{P(x \mid w, r)}{P(x \mid r)}

A thesaurus entry of size k for a target noun w is then defined as the k most similar nouns to w.

2.2 The WordNet Similarity Package

We use the WordNet Similarity Package 0.05 and WordNet version 1.6.2 The WordNet Similarity package supports a range of WordNet similarity scores. We experimented using six of these to provide the wnss in equation 1 above and obtained results well over our baseline, but because of space limitations give results for the two which perform the best. We briefly summarise the two measures here; for a more detailed summary see (Patwardhan et al., 2003). The measures provide a similarity score between two WordNet senses (s_1 and s_2), these being synsets within WordNet.

2We use this version of WordNet since it allows us to map information to WordNets of other languages more accurately. We are of course able to apply the method to other versions of WordNet.

lesk (Banerjee and Pedersen, 2002) This score maximises the number of overlapping words in the gloss, or definition, of the senses. It uses the glosses of semantically related (according to WordNet) senses too.

jcn (Jiang and Conrath, 1997) This score uses corpus data to populate classes (synsets) in the WordNet hierarchy with frequency counts. Each synset is incremented with the frequency counts from the corpus of all words belonging to that synset, directly or via the hyponymy relation. The frequency data is used to calculate the "information content" (IC) of a class: IC(s) = -\log(p(s)). Jiang and Conrath specify a distance measure:

D_{jcn}(s_1, s_2) = IC(s_1) + IC(s_2) - 2 \times IC(s_3), where the third class (s_3)
U ) is 32%. Both WordNet similarity measures beat this baseline. The random baseline for a> L ( K F Q< e Q C L LRQ C LRQNL*S U ) is 24%. Again, the automatic ranking outperforms this by a large margin. The first sense in SemCor provides an upperbound for this task of 67%. Since both measures gave comparable results we restricted our remaining experiments to jcn because this gave good results for finding the predominant sense, and is much more efficient than lesk, given the precompilation of the IC files. 3.2 Discussion From manual analysis, there are cases where the acquired first sense disagrees with SemCor, yet is intuitively plausible. This is to be expected regardless of any inherent shortcomings of the ranking technique since the senses within SemCor will differ compared to those of the BNC. For example, in WordNet the first listed sense of pipe is tobacco pipe, and this is ranked joint first according to the Brown files in SemCor with the second sense tube made of metal or plastic used to carry water, oil or gas etc.... The automatic ranking from the BNC data lists the latter tube sense first. This seems quite reasonable given the nearest neighbours: tube, cable, wire, tank, hole, cylinder, fitting, tap, cistern, plate.... Since SemCor is derived from the Brown corpus, which predates the BNC by up to 30 years 5 and contains a higher proportion of fiction 6, the high ranking for the tobacco pipe sense according to SemCor seems plausible. Another example where the ranking is intuitive, is soil. The first ranked sense according to SemCor is the filth, stain: state of being unclean sense whereas the automatic ranking lists dirt, ground, earth as the first sense, which is the second ranked 5The text in the Brown corpus was produced in 1961, whereas the bulk of the written portion of the BNC contains texts produced between 1975 and 1993. 66 out of the 15 Brown genres are fiction, including one specifically dedicated to detective fiction, whilst only 20% of the BNC text represents imaginative writing, the remaining 80% being classified as informative. sense according to SemCor. This seems intuitive given our expected relative usage of these senses in modern British English. Even given the difference in text type between SemCor and the BNC the results are encouraging, especially given that our a> L results are for polysemous nouns. In the English all-words SENSEVAL-2, 25% of the noun data was monosemous. Thus, if we used the sense ranking as a heuristic for an “all nouns” task we would expect to get precision in the region of 60%. We test this below on the SENSEVAL-2 English all-words data. 4 Experiment on SENSEVAL-2 English all Words Data In order to see how well the automatically acquired predominant sense performs on a WSD task from which the WordNet sense ordering has not been taken, we use the SENSEVAL-2 all-words data (Palmer et al., 2001). 7 This is a hand-tagged test suite of 5,000 words of running text from three articles from the Penn Treebank II. We use an allwords task because the predominant senses will reflect the sense distributions of all nouns within the documents, rather than a lexical sample task, where the target words are manually determined and the results will depend on the skew of the words in the sample. We do not assume that the predominant sense is a method of WSD in itself. To disambiguate senses a system should take context into account. However, it is important to know the performance of this heuristic for any systems that use it. 
We generated a thesaurus entry for all polysemous nouns in WordNet as described in section 2.1 above. We obtained the predominant sense for each of these words and used these to label the instances in the noun data within the SENSEVAL-2 English all-words task. We give the results for this WSD task in table 2. We compare results using the first sense listed in SemCor, and the first sense according to the SENSEVAL-2 English all-words test data itself. For the latter, we only take a first sense where there is more than one occurrence of the noun in the test data and one sense has occurred more times than any of the others. We trivially labelled all monosemous items. Our automatically acquired predominant sense performs nearly as well as the first sense provided by SemCor, which is very encouraging given that our method only uses raw text, with no manual labelling. The performance of the predominant sense provided in the SENSEVAL-2 test data provides an upper bound for this task. The items that were not covered by our method were those with insufficient grammatical relations for the tuples employed. Two such words, today and one, each occurred 5 times in the test data. Extending the grammatical relations used for building the thesaurus should improve the coverage. There were a similar number of words that were not covered by a predominant sense in SemCor. For these one would need to obtain more sense-tagged text in order to use this heuristic. Our automatic ranking gave 67% precision on these items. This demonstrates that our method of providing a first sense from raw text will help when sense-tagged data is not available.

7In order to do this we use the mapping provided at http://www.lsi.upc.es/˜nlp/tools/mapping.html (Daudé et al., 2000) for obtaining the SENSEVAL-2 data in WordNet 1.6. We discounted the few items for which there was no mapping. This amounted to only 3% of the data.

Table 2: Evaluating predominant sense information on SENSEVAL-2 all-words data
method | precision | recall
Automatic | 64 | 63
SemCor | 69 | 68
SENSEVAL-2 | 92 | 72

5 Experiments with Domain Specific Corpora

A major motivation for our work is to try to capture changes in ranking of senses for documents from different domains. In order to test this we applied our method to two specific sections of the Reuters corpus. We demonstrate that choosing texts from a particular domain has a significant influence on the sense ranking. We chose the domains of SPORTS and FINANCE since there is sufficient material for these domains in this publicly available corpus.

5.1 Reuters Corpus

The Reuters corpus (Rose et al., 2002) is a collection of about 810,000 Reuters English language news stories. Many of the articles are economy related, but several other topics are included too. We selected documents from the SPORTS domain (topic code: GSPO) and a limited number of documents from the FINANCE domain (topic codes: ECAT and MCAT). The SPORTS corpus consists of 35317 documents (about 9.1 million words). The FINANCE corpus consists of 117734 documents (about 32.5 million words). We acquired thesauruses for these corpora using the procedure described in section 2.1.

5.2 Two Experiments

There is no existing sense-tagged data for these domains that we could use for evaluation. We therefore decided to select a limited number of words and to evaluate these words qualitatively. The words included in this experiment are not a random sample, since we anticipated different predominant senses in the SPORTS and FINANCE domains for these words.
Additionally, we evaluated our method quantitatively using the Subject Field Codes (SFC) resource (Magnini and Cavaglià, 2000) which annotates WordNet synsets with domain labels. The SFC contains an economy label and a sports label. For this domain label experiment we selected all the words in WordNet that have at least one synset labelled economy and at least one synset labelled sports. The resulting set consisted of 38 words. We contrast the distribution of domain labels for these words in the two domain specific corpora. 5.3 Discussion The results for 10 of the words from the qualitative experiment are summarized in table 3 with the WordNet sense number for each word supplied alongside synonyms or hypernyms from WordNet for readability. The results are promising. Most words show the change in predominant sense (PS) that we anticipated. It is not always intuitively clear which of the senses to expect as predominant sense for either a particular domain or for the BNC, but the first senses of words like division and goal shift towards the more specific senses (league and score respectively). Moreover, the chosen senses of the word tie proved to be a textbook example of the behaviour we expected. The word share is among the words whose predominant sense remained the same for all three corpora. We anticipated that the stock certificate sense would be chosen for the FINANCE domain, but this did not happen. However, that particular sense ended up higher in the ranking for the FINANCE domain. Figure 2 displays the results of the second experiment with the domain specific corpora. This figure shows the domain labels assigned to the predominant senses for the set of 38 words after ranking the words using the SPORTS and the FINANCE corpora. We see that both domains have a similarly high percentage of factotum (domain independent) labels, but as we would expect, the other peaks correspond to the economy label for the FINANCE corpus, and the sports label for the SPORTS corpus. 
Table 3: Domain specific results 
  Word          PS BNC                       PS FINANCE           PS SPORTS 
  pass          1 (accomplishment)           14 (attempt)         15 (throw) 
  share         2 (portion, asset)           2                    2 
  division      4 (admin. unit)              4                    6 (league) 
  head          1 (body part)                4 (leader)           4 
  loss          2 (transf. property)         2                    8 (death, departure) 
  competition   2 (contest, social event)    3 (rivalry)          2 
  match         2 (contest)                  7 (equal, person)    2 
  tie           1 (neckwear)                 2 (affiliation)      3 (draw) 
  strike        1 (work stoppage)            1                    6 (hit, success) 
  goal          1 (end, mental object)       1                    2 (score) 
Figure 2: Distribution of domain labels of predominant senses for 38 polysemous words ranked using the SPORTS and FINANCE corpora. 
6 Related Work Most research in WSD concentrates on using contextual features, typically neighbouring words, to help determine the correct sense of a target word. In contrast, our work is aimed at discovering the predominant senses from raw text because the first sense heuristic is such a useful one, and because hand-tagged data is not always available. A major benefit of our work, rather than reliance on hand-tagged training data such as SemCor, is that this method permits us to produce predominant senses for the domain and text type required.
Buitelaar and Sacaleanu (2001) have previously explored ranking and selection of synsets in GermaNet for specific domains using the words in a given synset, and those related by hyponymy, and a term relevance measure taken from information retrieval. Buitelaar and Sacaleanu have evaluated their method on identifying domain specific concepts using human judgements on 100 items. We have evaluated our method using publically available resources, both for balanced and domain specific text. Magnini and Cavagli`a (2000) have identified WordNet word senses with particular domains, and this has proven useful for high precision WSD (Magnini et al., 2001); indeed in section 5 we used these domain labels for evaluation. Identification of these domain labels for word senses was semiautomatic and required a considerable amount of hand-labelling. Our approach is complementary to this. It only requires raw text from the given domain and because of this it can easily be applied to a new domain, or sense inventory, given sufficient text. Lapata and Brew (2004) have recently also highlighted the importance of a good prior in WSD. They used syntactic evidence to find a prior distribution for verb classes, based on (Levin, 1993), and incorporate this in a WSD system. Lapata and Brew obtain their priors for verb classes directly from subcategorisation evidence in a parsed corpus, whereas we use parsed data to find distributionally similar words (nearest neighbours) to the target word which reflect the different senses of the word and have associated distributional similarity scores which can be used for ranking the senses according to prevalence. There has been some related work on using automatic thesauruses for discovering word senses from corpora Pantel and Lin (2002). In this work the lists of neighbours are themselves clustered to bring out the various senses of the word. They evaluate using the lin measure described above in section 2.2 to determine the precision and recall of these discovered classes with respect to WordNet synsets. This method obtains precision of 61% and recall 51%. If WordNet sense distinctions are not ultimately required then discovering the senses directly from the neighbours list is useful because sense distinctions discovered are relevant to the corpus data and new senses can be found. In contrast, we use the neighbours lists and WordNet similarity measures to impose a prevalence ranking on the WordNet senses. We believe automatic ranking techniques such as ours will be useful for systems that rely on WordNet, for example those that use it for lexical acquisition or WSD. It would be useful however to combine our method of finding predominant senses with one which can automatically find new senses within text and relate these to WordNet synsets, as Ciaramita and Johnson (2003) do with unknown nouns. We have restricted ourselves to nouns in this work, since this PoS is perhaps most affected by domain. We are currently investigating the performance of the first sense heuristic, and this method, for other PoS on SENSEVAL-3 data (McCarthy et al., 2004), although not yet with rankings from domain specific corpora. The lesk measure can be used when ranking adjectives, and adverbs as well as nouns and verbs (which can also be ranked using jcn). Another major advantage that lesk has is that it is applicable to lexical resources which do not have the hierarchical structure that WordNet does, but do have definitions associated with word senses. 
7 Conclusions We have devised a method that uses raw corpus data to automatically find a predominant sense for nouns in WordNet. We use an automatically acquired thesaurus and a WordNet Similarity measure. The automatically acquired predominant senses were evaluated against the hand-tagged resources SemCor and the SENSEVAL-2 English all-words task giving us a WSD precision of 64% on an all-nouns task. This is just 5% lower than results using the first sense in the manually labelled SemCor, and we obtain 67% precision on polysemous nouns that are not in SemCor. In many cases the sense ranking provided in SemCor differs to that obtained automatically because we used the BNC to produce our thesaurus. Indeed, the merit of our technique is the very possibility of obtaining predominant senses from the data at hand. We have demonstrated the possibility of finding predominant senses in domain specific corpora on a sample of nouns. In the future, we will perform a large scale evaluation on domain specific corpora. In particular, we will use balanced and domain specific corpora to isolate words having very different neighbours, and therefore rankings, in the different corpora and to detect and target words for which there is a highly skewed sense distribution in these corpora. There is plenty of scope for further work. We want to investigate the effect of frequency and choice of distributional similarity measure (Weeds et al., 2004). Additionally, we need to determine whether senses which do not occur in a wide variety of grammatical contexts fare badly using distributional measures of similarity, and what can be done to combat this problem using relation specific thesauruses. Whilst we have used WordNet as our sense inventory, it would be possible to use this method with another inventory given a measure of semantic relatedness between the neighbours and the senses. The lesk measure for example, can be used with definitions in any standard machine readable dictionary. Acknowledgements We would like to thank Siddharth Patwardhan and Ted Pedersen for making the WN Similarity package publically available. This work was funded by EU-2001-34460 project MEANING: Developing Multilingual Web-scale Language Technologies, UK EPSRC project Robust Accurate Statistical Parsing (RASP) and a UK EPSRC studentship. References Satanjeev Banerjee and Ted Pedersen. 2002. An adapted Lesk algorithm for word sense disambiguation using WordNet. In Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-02), Mexico City. Edward Briscoe and John Carroll. 2002. Robust accurate statistical annotation of general text. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC), pages 1499–1504, Las Palmas, Canary Islands, Spain. Paul Buitelaar and Bogdan Sacaleanu. 2001. Ranking and selecting synsets by domain relevance. In Proceedings of WordNet and Other Lexical Resources: Applications, Extensions and Customizations, NAACL 2001 Workshop, Pittsburgh, PA. Massimiliano Ciaramita and Mark Johnson. 2003. Supersense tagging of unknown nouns in WordNet. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2003). Scott Cotton, Phil Edmonds, Adam Kilgarriff, and Martha Palmer. 1998. SENSEVAL-2. http://www.sle.sharp.co.uk/senseval2/. Jordi Daud´e, Lluis Padr´o, and German Rigau. 2000. Mapping wordnets using structural information. 
In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, Hong Kong. V´eronique Hoste, Anne Kool, and Walter Daelemans. 2001. Classifier optimization and combination in the English all words task. In Proceedings of the SENSEVAL-2 workshop, pages 84–86. Jay Jiang and David Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In International Conference on Research in Computational Linguistics, Taiwan. Anna Korhonen. 2002. Semantically motivated subcategorization acquisition. In Proceedings of the ACL Workshop on Unsupervised Lexical Acquisition, Philadelphia, USA. Mirella Lapata and Chris Brew. 2004. Verb class disambiguation using informative priors. Computational Linguistics, 30(1):45–75. Beth Levin. 1993. English Verb Classes and Alternations: a Preliminary Investigation. University of Chicago Press, Chicago and London. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL 98, Montreal, Canada. Bernardo Magnini and Gabriela Cavagli`a. 2000. Integrating subject field codes into WordNet. In Proceedings of LREC-2000, Athens, Greece. Bernardo Magnini, Carlo Strapparava, Giovanni Pezzuli, and Alfio Gliozzo. 2001. Using domain information for word sense disambiguation. In Proceedings of the SENSEVAL-2 workshop, pages 111–114. Diana McCarthy, Rob Koeling, Julie Weeds, and John Carrolł. 2004. Using automatically acquired predominant senses for word sense disambiguation. In Proceedings of the ACL SENSEVAL-3 workshop. Diana McCarthy. 1997. Word sense disambiguation for acquisition of selectional preferences. In Proceedings of the ACL/EACL 97 Workshop Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications, pages 52–61. Paola Merlo and Matthias Leybold. 2001. Automatic distinction of arguments and modifiers: the case of prepositional phrases. In Proceedings of the Workshop on Computational Language Learning (CoNLL 2001), Toulouse, France. George A. Miller, Claudia Leacock, Randee Tengi, and Ross T Bunker. 1993. A semantic concordance. In Proceedings of the ARPA Workshop on Human Language Technology, pages 303–308. Morgan Kaufman. Martha Palmer, Christiane Fellbaum, Scott Cotton, Lauren Delfs, and Hoa Trang Dang. 2001. English tasks: All-words and verb lexical sample. In Proceedings of the SENSEVAL-2 workshop, pages 21–24. Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 613–619, Edmonton, Canada. Siddharth Patwardhan and Ted Pedersen. 2003. The cpan wordnet::similarity package. http://search.cpan.org/author/SID/WordNetSimilarity-0.03/. Siddharth Patwardhan, Satanjeev Banerjee, and Ted Pedersen. 2003. Using measures of semantic relatedness for word sense disambiguation. In Proceedings of the Fourth International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2003), Mexico City. Tony G. Rose, Mary Stevenson, and Miles Whitehead. 2002. The Reuters Corpus volume 1 from yesterday’s news to tomorrow’s language resources. In Proc. of Third International Conference on Language Resources and Evaluation, Las Palmas de Gran Canaria. Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising measures of lexical distributional similarity. Yorick Wilks and Mark Stevenson. 1998. The grammar of sense: using part-of speech tags as a first step in semantic disambiguation. Natural Language Engineering, 4(2):135–143. 
David Yarowsky and Radu Florian. 2002. Evaluating sense disambiguation performance across diverse parameter spaces. Natural Language Engineering, 8(4):293–310.
Unsupervised Sense Disambiguation Using Bilingual Probabilistic Models Indrajit Bhattacharya Dept. of Computer Science University of Maryland College Park, MD, USA [email protected] Lise Getoor Dept. of Computer Science University of Maryland College Park, MD, USA [email protected] Yoshua Bengio Dept. IRO Universit´e de Montr´eal Montr´eal, Qu´ebec, Canada [email protected] Abstract We describe two probabilistic models for unsupervised word-sense disambiguation using parallel corpora. The first model, which we call the Sense model, builds on the work of Diab and Resnik (2002) that uses both parallel text and a sense inventory for the target language, and recasts their approach in a probabilistic framework. The second model, which we call the Concept model, is a hierarchical model that uses a concept latent variable to relate different language specific sense labels. We show that both models improve performance on the word sense disambiguation task over previous unsupervised approaches, with the Concept model showing the largest improvement. Furthermore, in learning the Concept model, as a by-product, we learn a sense inventory for the parallel language. 1 Introduction Word sense disambiguation (WSD) has been a central question in the computational linguistics community since its inception. WSD is fundamental to natural language understanding and is a useful intermediate step for many other language processing tasks (Ide and Veronis, 1998). Many recent approaches make use of ideas from statistical machine learning; the availability of shared sense definitions (e.g. WordNet (Fellbaum, 1998)) and recent international competitions (Kilgarrif and Rosenzweig, 2000) have enabled researchers to compare their results. Supervised approaches which make use of a small hand-labeled training set (Bruce and Wiebe, 1994; Yarowsky, 1993) typically outperform unsupervised approaches (Agirre et al., 2000; Litkowski, 2000; Lin, 2000; Resnik, 1997; Yarowsky, 1992; Yarowsky, 1995), but tend to be tuned to a specific corpus and are constrained by scarcity of labeled data. In an effort to overcome the difficulty of finding sense-labeled training data, researchers have begun investigating unsupervised approaches to wordsense disambiguation. For example, the use of parallel corpora for sense tagging can help with word sense disambiguation (Brown et al., 1991; Dagan, 1991; Dagan and Itai, 1994; Ide, 2000; Resnik and Yarowsky, 1999). As an illustration of sense disambiguation from translation data, when the English word bank is translated to Spanish as orilla, it is clear that we are referring to the shore sense of bank, rather than the financial institution sense. The main inspiration for our work is Diab and Resnik (2002), who use translations and linguistic knowledge for disambiguation and automatic sense tagging. Bengio and Kermorvant (2003) present a graphical model that is an attempt to formalize probabilistically the main ideas in Diab and Resnik (2002). They assume the same semantic hierarchy (in particular, WordNet) for both the languages and assign English words as well as their translations to WordNet synsets. Here we present two variants of the graphical model in Bengio and Kermorvant (2003), along with a method to discover a cluster structure for the Spanish senses. We also present empirical word sense disambiguation results which demonstrate the gain brought by this probabilistic approach, even while only using the translated word to provide disambiguation information. 
Our first generative model, the Sense Model, groups semantically related words from the two languages into senses, and translations are generated by probabilistically choosing a sense and then words from the sense. We show that this improves on the results of Diab and Resnik (2002). Our next model, which we call the Concept Model, aims to improve on the above sense structure by modeling the senses of the two languages separately and relating senses from both languages through a higher-level, semantically less precise concept. The intuition here is that not all of the senses that are possible for a word will be relevant for a concept. In other words, the distribution over the senses of a word given a concept can be expected to have a lower entropy than the distribution over the senses of the word in the language as a whole. In this paper, we look at translation data as a resource for identification of semantic concepts. Note that actual translated word pairs are not always good matches semantically, because the translation process is not on a word by word basis. This introduces a kind of noise in the translation, and an additional hidden variable to represent the shared meaning helps to take it into account. Improved performance over the Sense Model validates the use of concepts in modeling translations. An interesting by-product of the Concept Model is a semantic structure for the secondary language. This is automatically constructed using background knowledge of the structure for the primary language and the observed translation pairs. In the model, words sharing the same sense are synonyms while senses under the same concept are semantically related in the corpus. An investigation of the model trained over real data reveals that it can indeed group related words together. It may be noted that predicting senses from translations need not necessarily be an end result in itself. As we have already mentioned, lack of labeled data is a severe hindrance for supervised approaches to word sense disambiguation. At the same time, there is an abundance of bilingual documents and many more can potentially be mined from the web. It should be possible using our approach to (noisily) assign sense tags to words in such documents, thus providing huge resources of labeled data for supervised approaches to make use of. For the rest of this paper, for simplicity we will refer to the primary language of the parallel document as English and to the secondary as Spanish. The paper is organized as follows. We begin by formally describing the models in Section 2. We describe our approach for constructing the senses and concepts in Section 3. Our algorithm for learning the model parameters is described in Section 4. We present experimental results in Section 5 and our analysis in Section 6. We conclude in Section 7. 2 Probabilistic Models for Parallel Corpora We motivate the use of a probabilistic model by illustrating that disambiguation using translations is possible even when a word has a unique translation. For example, according to WordNet, the word prevention has two senses in English, which may be abbreviated as hindrance (the act of hindering or obstruction) and control (by prevention, e.g. the control of a disease). It has a single translation in our corpus, that being prevenci´on. The first English sense, hindrance, also has other words like bar that occur in the corpus and all of these other words are observed to be translated in Spanish as the word obstrucci´on. 
In addition, none of these other words translate to prevención. So it is not unreasonable to suppose that the intended sense for prevention when translated as prevención is different from that of bar. Therefore, the intended sense is most likely to be control. At the very heart of the reasoning is probabilistic analysis and independence assumptions. We are assuming that senses and words have certain occurrence probabilities and that the choice of the word can be made independently once the sense has been decided. This is the flavor that we look to add to modeling parallel documents for sense disambiguation. We formally describe the two generative models that use these ideas in Subsections 2.2 and 2.3. 
Figure 1: Graphical representations of the (a) Sense Model and the (b) Concept Model. 
2.1 Notation Throughout, we use uppercase letters to denote random variables and lowercase letters to denote specific instances of the random variables. A translation pair is (W_e, W_s), where the subscripts e and s indicate the primary language (English) and the secondary language (Spanish). W_e ranges over the English vocabulary and W_s over the Spanish vocabulary. We use the shorthand P(w_e) for P(W_e = w_e). 2.2 The Sense Model The Sense Model makes the assumption, inspired by ideas in Diab and Resnik (2002) and Bengio and Kermorvant (2003), that the English word W_e and the Spanish word W_s in a translation pair share the same precise sense. In other words, the set of sense labels for the words in the two languages is the same and may be collapsed into one set of senses that is responsible for both English and Spanish words, and the single latent variable in the model is the sense label T ∈ {t^1, ..., t^k} for both words w_e and w_s. We also make the assumption that the words in both languages are conditionally independent given the sense label. The generative parameters θ for the model are the prior probability P(t) of each sense t and the conditional probabilities P(w_e | t) and P(w_s | t) of each word in the two languages given the sense. The generation of a translation pair by this model may be viewed as a two-step process that first selects a sense according to the priors on the senses and then selects a word from each language using the conditional probabilities for that sense. This may be imagined as a factoring of the joint distribution: P(W_e, W_s, T) = P(T) P(W_e | T) P(W_s | T). Note that in the absence of labeled training data, two of the random variables, W_e and W_s, are observed, while the sense variable T is not. However, we can derive the possible values for our sense labels from WordNet, which gives us the possible senses for each English word w_e. The Sense Model is shown in Figure 1(a). 2.3 The Concept Model The assumption of a one-to-one association between sense labels made in the Sense Model may be too simplistic to hold for arbitrary languages. In particular, it does not take into account that translation is from sentence to sentence (with a shared meaning), while the data we are modeling are aligned single-word translations (w_e, w_s), in which the intended meaning of w_e does not always match perfectly with the intended meaning of w_s. Generally, a set of related senses in one language may be translated by one of several related senses in the other. This many-to-many mapping is captured in our alternative model using a second level hidden variable called a concept. Thus we have three hidden variables in the Concept Model — the English sense T_e, the Spanish sense T_s and the concept C, where T_e ∈ {t_e^1, ..., t_e^{k_e}}, T_s ∈ {t_s^1, ..., t_s^{k_s}} and C ∈ {c^1, ..., c^{k_c}}. We make the assumption that the senses T_e and T_s are independent of each other given the shared concept C. The generative parameters θ in the model are the prior probabilities P(c) over the concepts, the conditional probabilities P(t_e | c) and P(t_s | c) for the English and Spanish senses given the concept, and the conditional probabilities P(w_e | t_e) and P(w_s | t_s) for the words in each language given their senses. We can now imagine the generative process of a translation pair by the Concept Model as first selecting a concept according to the priors, then a sense for each language given the concept, and finally a word for each sense using the conditional probabilities of the words. As in Bengio and Kermorvant (2003), this generative procedure may be captured by factoring the joint distribution using the conditional independence assumptions as P(W_e, W_s, T_e, T_s, C) = P(C) P(T_e | C) P(W_e | T_e) P(T_s | C) P(W_s | T_s). The Concept Model is shown in Figure 1(b). 3 Constructing the Senses and Concepts Building the structure of the model is crucial for our task. Choosing the dimensionality of the hidden variables by selecting the number of senses and concepts, as well as taking advantage of prior knowledge to impose constraints, are very important aspects of building the structure. If certain words are not possible for a given sense, or certain senses are not possible for a given concept, their corresponding parameters should be 0. For instance, for all words w_e that do not belong to a sense t_e, the corresponding parameter θ_{w_e|t_e} would be permanently set to 0. Only the remaining parameters need to be modeled explicitly. While model selection is an extremely difficult problem in general, an important and interesting option is the use of world knowledge. Semantic hierarchies for some languages have been built. We should be able to make use of these known taxonomies in constructing our model. We make heavy use of the WordNet ontology to assign structure to both our models, as we discuss in the following subsections. There are two major tasks in building the structure — determining the possible sense labels for each word, both English and Spanish, and constructing the concepts, which involves choosing the number of concepts and the probable senses for each concept. 3.1 Building the Sense Model Each word in WordNet can belong to multiple synsets in the hierarchy, which are its possible senses. In both of our models, we directly use the WordNet senses as the English sense labels. All WordNet senses for which a word has been observed in the corpus form our set of English sense labels. The Sense Model holds that the sense labels for the two domains are the same. So we must use the same WordNet labels for the Spanish words as well. We include a Spanish word w_s for a sense t if w_s is the translation of any English word in t. 3.2 Building the Concept Model Unlike the Sense Model, the Concept Model does not constrain the Spanish senses to be the same as the English ones. So the two major tasks in building the Concept Model are constructing the Spanish senses and then clustering the English and Spanish senses to build the concepts. 
Figure 2: The Sense and Concept models for prevention, bar, prevención and obstrucción. 
For each Spanish word w_s, we have its set of English translations {w_e^1, ..., w_e^m}. One possibility is to group Spanish words looking at their translations. However, a more robust approach is to consider the relevant English senses for w_s. Each English translation w_e^i for w_s has its set of English sense labels drawn from WordNet. So the relevant English sense labels for w_s may be defined as the union of the sense labels of its translations. We call this the English sense map, or esm, for w_s. We use the esms to define the Spanish senses. We may imagine each Spanish word to come from one or more Spanish senses. If each word has a single sense, then we add a Spanish sense t_s for each esm, and all Spanish words that share that esm belong to that sense. Otherwise, the esms have to be split into frequently occurring subgroups. Frequently co-occurring subsets of esms can define more refined Spanish senses. We identify these subsets by looking at pairs of esms and computing their intersections. An intersection is considered to be a Spanish sense if it occurs for a significant number of pairs of esms. We consider both ways of building Spanish senses. In either case, a constructed Spanish sense t_s comes with its relevant set of English senses, which we denote as esm(t_s). Once we have the Spanish senses, we cluster them to form concepts. We use the esm corresponding to each Spanish sense to define a measure of similarity for a pair of Spanish senses. There are many options to choose from here. We use a simple measure that counts the number of common items in the two esms.1 The similarity measure is now used to cluster the Spanish senses t_s. Since this measure is not transitive, it does not directly define equivalence classes over the t_s. Instead, we get a similarity graph where the vertices are the Spanish senses and we add an edge between two senses if their similarity is above a threshold. We now pick each connected component from this graph as a cluster of similar Spanish senses. 
1 Another option would be to use a measure of similarity for English senses, proposed in Resnik (1995) for two synsets in a concept hierarchy like WordNet. Our initial results with this measure were not favorable. 
Now we build the concepts from the Spanish sense clusters. We recall that a concept is defined by a set of English senses and a set of Spanish senses that are related. Each cluster represents a concept. A particular concept is formed by the set of Spanish senses in the cluster and the English senses relevant for them. The relevant English senses for any Spanish sense are given by its esm. Therefore, the union of the esms of all the Spanish senses in the cluster forms the set of English senses for each concept.
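To make the construction of Section 3.2 concrete, the following is a minimal sketch of one way the esm-based grouping and the connected-component clustering could be implemented; the input dictionaries, the single-sense-per-word simplification, and the similarity threshold are hypothetical placeholders, not the authors' actual procedure or code.

    # Minimal sketch: Spanish words sharing the same English sense map (esm)
    # form one Spanish sense; Spanish senses are then clustered into concepts
    # via connected components of a similarity graph.
    from collections import defaultdict

    def build_spanish_senses(translations, english_senses):
        """translations: dict w_s -> list of English translations.
        english_senses: dict w_e -> set of WordNet sense labels."""
        esm = {w_s: frozenset().union(*[english_senses.get(w_e, set()) for w_e in w_es])
               for w_s, w_es in translations.items()}
        senses = defaultdict(list)            # one Spanish sense per distinct esm
        for w_s, sense_map in esm.items():
            senses[sense_map].append(w_s)
        return senses                          # esm(t_s) -> Spanish words in t_s

    def cluster_senses(senses, threshold=2):
        """Connected components of the graph whose edges join Spanish senses
        sharing at least `threshold` English senses (threshold is illustrative)."""
        nodes = list(senses)
        parent = {n: n for n in nodes}
        def find(n):                           # union-find with path compression
            while parent[n] != n:
                parent[n] = parent[parent[n]]
                n = parent[n]
            return n
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                if len(a & b) >= threshold:
                    parent[find(a)] = find(b)  # merge the two components
        clusters = defaultdict(list)
        for n in nodes:
            clusters[find(n)].append(n)
        return list(clusters.values())         # each cluster defines one concept

Each returned cluster, together with the union of its esms, corresponds to one concept in the sense of Section 3.2.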
4 Learning the Model Parameters Once the model is built, we use the popular EM algorithm (Dempster et al., 1977) for hidden variables to learn the parameters for both models. The algorithm repeatedly iterates over two steps. The first step maximizes the expected log-likelihood of the joint probability of the observed data with the current parameter settings θ. The next step then re-estimates the values of the parameters of the model. Below we summarize the re-estimation steps for each model; here the training data consist of N aligned translation pairs (w_e^i, w_s^i), i = 1, ..., N, and p~(· | w_e^i, w_s^i, θ) denotes the posterior computed in the expectation step under the current parameters. 4.1 EM for the Sense Model 
P(t) = (1/N) Σ_{i=1..N} p~(T = t | w_e^i, w_s^i, θ) 
P(w_e | t) = Σ_{i: w_e^i = w_e} p~(T = t | w_e^i, w_s^i, θ) / Σ_{i=1..N} p~(T = t | w_e^i, w_s^i, θ) 
P(w_s | t) follows similarly. 4.2 EM for the Concept Model 
P(c) = (1/N) Σ_{i=1..N} p~(C = c | w_e^i, w_s^i, θ) 
P(t_e | c) = Σ_{i=1..N} p~(C = c, T_e = t_e | w_e^i, w_s^i, θ) / Σ_{i=1..N} p~(C = c | w_e^i, w_s^i, θ) 
P(w_e | t_e) = Σ_{i: w_e^i = w_e} p~(T_e = t_e | w_e^i, w_s^i, θ) / Σ_{i=1..N} p~(T_e = t_e | w_e^i, w_s^i, θ) 
P(t_s | c) and P(w_s | t_s) follow similarly. 4.3 Initialization of Model Probabilities Since the EM algorithm performs gradient ascent as it iteratively improves the log-likelihood, it is prone to getting caught in local maxima, and selection of the initial conditions is crucial for the learning procedure. Instead of opting for a uniform or random initialization of the probabilities, we make use of prior knowledge about the English words and senses available from WordNet. WordNet provides occurrence frequencies for each synset in the SemCor corpus that may be normalized to derive probabilities p_wn(t_e) for each English sense t_e. For the Sense Model, these probabilities form the initial priors over the senses, while all English (and Spanish) words belonging to a sense are initially assumed to be equally likely. However, initialization of the Concept Model using the same knowledge is trickier. We would like each English sense t_e to have P_init(t_e) = p_wn(t_e). But the fact that each sense belongs to multiple concepts and the constraint Σ_{t_e ∈ c} P(t_e | c) = 1 makes the solution non-trivial. Instead, we settle for a compromise. We set P_init(t_e | c) = p_wn(t_e) and P_init(c) = Σ_{t_e ∈ c} p_wn(t_e). Subsequent normalization takes care of the sum constraints. For a Spanish sense, we set P_init(t_s) = Σ_{t_e ∈ esm(t_s)} p_wn(t_e). Once we have the Spanish sense probabilities, we follow the same procedure for setting P(t_s | c) for each concept. All the Spanish and English words for a sense are set to be equally likely, as in the Sense Model. It turned out in our experiments on real data that this initialization makes a significant difference in model performance.
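As an illustration of the update cycle just summarized, here is a minimal sketch of EM for the Sense Model alone; the parameter data structures and the fixed iteration count are hypothetical stand-ins for the WordNet-derived structure and initialization described above, and smoothing and convergence checks are omitted.

    # Minimal sketch of EM for the Sense Model. pairs: list of (w_e, w_s)
    # translation pairs. prior[t], p_e[t][w_e], p_s[t][w_s]: current parameters
    # (zero/absent entries encode words a sense cannot generate).
    from collections import defaultdict

    def em_sense_model(pairs, prior, p_e, p_s, iterations=20):
        for _ in range(iterations):
            n_t = defaultdict(float)                       # expected sense counts
            n_e = defaultdict(lambda: defaultdict(float))  # expected (t, w_e) counts
            n_s = defaultdict(lambda: defaultdict(float))  # expected (t, w_s) counts
            for w_e, w_s in pairs:
                # E-step: posterior p~(T = t | w_e, w_s) over candidate senses
                joint = {t: prior[t] * p_e[t].get(w_e, 0.0) * p_s[t].get(w_s, 0.0)
                         for t in prior}
                z = sum(joint.values())
                if z == 0.0:
                    continue
                for t, v in joint.items():
                    post = v / z
                    n_t[t] += post
                    n_e[t][w_e] += post
                    n_s[t][w_s] += post
            # M-step: re-estimate P(t), P(w_e | t), P(w_s | t)
            total = sum(n_t.values())
            for t in list(prior):
                if total:
                    prior[t] = n_t[t] / total
                if n_t[t] > 0.0:
                    p_e[t] = {w: c / n_t[t] for w, c in n_e[t].items()}
                    p_s[t] = {w: c / n_t[t] for w, c in n_s[t].items()}
        return prior, p_e, p_s

The Concept Model version follows the same pattern with an outer posterior over concepts and inner posteriors over the language-specific senses.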
5 Experimental Evaluation Both the models are generative probabilistic models learned from parallel corpora and are expected to fit the training and subsequent test data. A good fit should be reflected in good prediction accuracy over a test set. The prediction task of interest is the sense of an English word when its translation is provided. We estimate the prediction accuracy and recall of our models on Senseval data.2 In addition, the Concept Model learns a sense structure for the Spanish language. While it is hard to objectively evaluate the quality of such a structure, we present some interesting concepts that are learned as an indication of the potential of our approach. 
2 Accuracy is the ratio of the number of correct predictions and the number of attempted predictions. Recall is the ratio of the number of correct predictions and the size of the test set. 
5.1 Evaluation with Senseval Data In our experiments with real data, we make use of the parallel corpora constructed by Diab and Resnik (2002) for evaluation purposes. We chose to work on these corpora in order to permit a direct comparison with their results. The sense-tagged portion of the English corpus is comprised of the English "all-words" section of the SENSEVAL-2 test data. The remainder of this corpus is constructed by adding the Brown Corpus, the SENSEVAL-1 corpus, the SENSEVAL-2 English Lexical Sample test, trial and training corpora and the Wall Street Journal sections 18-24 from the Penn Treebank. This English corpus is translated into Spanish using two commercially available MT systems: Globalink Pro 6.4 and Systran Professional Premium. The GIZA++ implementation of the IBM statistical MT models was used to derive the most-likely word-level alignments, and these define the English/Spanish word co-occurrences. To take into account variability of translation, we combine the translations from the two systems for each English word, following in the footsteps of Diab and Resnik (2002). For our experiments, we focus only on nouns, of which there are 875 occurrences in our tagged data. The sense tags for the English domain are derived from the WordNet 1.7 inventory. After pruning stopwords, we end up with 16,186 English words, 31,862 Spanish words and 2,385,574 instances of 41,850 distinct translation pairs. The English words come from 20,361 WordNet senses. 
Table 1: Comparison with Diab's Model 
  Model        Accuracy   Recall   Parameters 
  Diab           0.618     0.572 
  Sense M.       0.624     0.616    154,947 
  Concept M.     0.672     0.651    120,268 
As can be seen from Table 1, both our models clearly outperform Diab (2003), which is an improvement over Diab and Resnik (2002), in both accuracy and recall, while the Concept Model does significantly better than the Sense Model with fewer parameters. The comparison is restricted to the same subset of the test data. For our best results, the Sense Model has 20,361 senses, while the Concept Model has 20,361 English senses, 11,961 Spanish senses and 7,366 concepts. The Concept Model results are for the version that allows multiple senses for a Spanish word. Results for the single-sense model are similar. In Figure 3, we compare the prediction accuracy and recall against those of the 21 Senseval-2 English All Words participants and that of Diab (2003), when restricted to the same set of noun instances from the gold standard. 
Figure 3: Comparison with Senseval-2 systems (accuracy vs. recall for the supervised and unsupervised participants, Diab, and our Sense and Concept models). 
It can be seen that our models outperform all the unsupervised approaches in recall and many supervised ones as well. No unsupervised approach is better in both accuracy and recall. It needs to be kept in mind that we take into account only bilingual data for our predictions, and not monolingual features like context of the word as most other WSD approaches do. 5.2 Semantic Grouping of Spanish Senses Table 2 shows some interesting examples of different Spanish senses for discovered concepts.3 The context of most concepts, like the ones shown, can be easily understood. For example, the first concept is about government actions and the second deals with murder and accidental deaths. The penultimate concept is interesting because it deals with different kinds of association and involves three different senses containing the word conexión. The other words in two of these senses suggest that they are about union and relation respectively. The third probably involves the link sense of connection. Conciseness of the concepts depends on the similarity threshold that is selected. Some may bring together loosely-related topics, which can be separated by a higher threshold. 
3 Some English words are found to occur in the Spanish senses. This is because the machine translation system used to create the Spanish document left certain words untranslated. 
6 Model Analysis In this section, we back up our experimental results with an in-depth analysis of the performance of our two models. Our Sense Model was motivated by Diab and Resnik (2002) but the flavors of the two are quite different.
The most important distinction is that the Sense Model is a probabilistic generative model for parallel corpora, where interaction between different words stemming from the same sense comes into play, even if the words are not related through translations, and this interdependence of the senses through common words plays a role in sense disambiguation. We started off with our discussions on semantic ambiguity with the intuition that identification of semantic concepts in the corpus that relate multiple senses should help disambiguate senses. The Sense Model falls short of this target since it only brings together a single sense from each language. We will now revisit the motivating example from Section 2 and see how concepts help in disambiguation by grouping multiple related senses together. For the Sense Model, P(prevención | t_e2) > P(prevención | t_e1), since prevención is the only word that t_e2, the control sense, can generate. However, this difference is compensated for by the higher prior probability P(t_e1), which is strengthened by both the translation pairs. Since the probability of joint occurrence is given by the product P(t) P(w_e | t) P(w_s | t) for any sense t, the model does not develop a clear preference for either of the two senses. The critical difference in the Concept Model can be appreciated directly from the corresponding joint probability P(c) P(t_e | c) P(w_e | t_e) P(t_s | c) P(w_s | t_s), where c is the relevant concept in the model. The preference for a particular instantiation in the model is dependent not on the prior P(t_e) over a sense, but on the sense conditional P(t_e | c). In our example, since (bar, obstrucción) can be generated only through one concept, P(t_e1 | c) for that concept is the only English sense conditional boosted by it. (prevention, prevención) is generated through a different concept, where the higher sense conditional gradually strengthens one of the possible instantiations for it, and the other one becomes increasingly unlikely as the iterations progress. The inference is that only one sense of prevention is possible in the context of the parallel corpus. The key factor in this disambiguation was that two senses of prevention separated out in two different concepts. The other significant difference between the models is in the constraints on the parameters and the effect that they have on sense disambiguation. In the Sense Model, Σ_t P(t) = 1, while in the Concept Model, Σ_{t_e ∈ c} P(t_e | c) = 1 separately for each concept c. Now for two relevant senses for an English word, a slight difference in their priors will tend to get ironed out when normalized over the entire set of senses for the corpus. In contrast, if these two senses belong to the same concept in the Concept Model, the difference in the sense conditionals will be highlighted since the normalization occurs over a very small set of senses — the senses for only that concept, which in the best possible scenario will contain only the two contending senses, as in the prevention concept of our example. 
Table 2: Example Spanish Senses in a Concept. For each concept, each row is a separate sense. Dictionary senses of Spanish words are provided in English within parentheses where necessary. 
actos accidente accidentes supremas muertes (deaths) decisión decisiones casualty gobernando gobernante matar (to kill) matanzas (slaughter) muertes-le gubernamentales slaying gobernación gobierno-proporciona derramamiento-de-sangre (spilling-of-blood) prohibir prohibiendo prohibitivo prohibitiva cachiporra (bludgeon) obligar (force) obligando (forcing) gubernamental gobiernos asesinato (murder) asesinatos 
linterna-eléctrica linterna (lantern) manía craze faros-automóvil (headlight) culto (cult) cultos proto-senility linternas-portuarias (harbor-light) delirio delirium antorcha (torch) antorchas antorchas-pino-nudo rabias (fury) rabia farfulla (do hastily) 
oportunidad oportunidades diferenciación ocasión ocasiones distinción distinciones riesgo (risk) riesgos peligro (danger) especialización destino sino (fate) maestría (mastery) fortuna suerte (fate) peculiaridades particularidades peculiaridades-inglesas probabilidad probabilidades especialidad especialidades 
diablo (devil) diablos modelo parangón dickens ideal ideales heller santo (saint) santos san lucifer satan satanás idol idols ídolo deslumbra (dazzle) dios god dioses cromo (chromium) divinidad divinity meteoro meteoros meteor meteoros-blue inmortal (immortal) inmortales meteorito meteoritos teología teolog pedregosos (rocky) deidad deity deidades 
variación variaciones minutos minuto discordancia desacuerdo (discord) discordancias momento momentos un-momento desviación (deviation) desviaciones desviaciones-normales minutos momentos momento segundos discrepancia discrepancias fugaces (fleeting) variación diferencia instante momento disensión pestañeo (blink) guiña (wink) pestañean 
adhesión adherencia ataduras (tying) pasillo (corridor) enlace (connection) ataduras aisle atadura ataduras pasarela (footbridge) conexión conexiones hall vestíbulos conexión une (to unite) pasaje (passage) relación conexión callejón (alley) callejas-ciegas (blind alley) callejones-ocultos implicación (complicity) envolvimiento 
As can be seen from Table 1, the Concept Model not only outperforms the Sense Model, it does so with significantly fewer parameters. This may be counter-intuitive since the Concept Model involves an extra concept variable. However, the dissociation of Spanish and English senses can significantly reduce the parameter space. Imagine two Spanish words that are associated with ten English senses and accordingly each of them has a probability for belonging to each of these ten senses. Aided with a concept variable, it is possible to model the same relationship by creating a separate Spanish sense that contains these two words and relating this Spanish sense with the ten English senses through a concept variable. Thus these words now need to belong to only one sense as opposed to ten. Of course, now there are new transition probabilities for each of the eleven senses from the new concept node. The exact reduction in the parameter space will depend on the frequent subsets discovered for the esms of the Spanish words. Longer and more frequent subsets will lead to larger reductions. It must also be borne in mind that this reduction comes with the independence assumptions made in the Concept Model. 7 Conclusions and Future Work We have presented two novel probabilistic models for unsupervised word sense disambiguation using parallel corpora and have shown that both models outperform existing unsupervised approaches. In addition, we have shown that our second model, the Concept model, can be used to learn a sense inventory for the secondary language. An advantage of the probabilistic models is that they can easily incorporate additional information, such as context information. In future work, we plan to investigate the use of additional monolingual context. We would also like to perform additional validation of the learned secondary language sense inventory. 8 Acknowledgments The authors would like to thank Mona Diab and Philip Resnik for many helpful discussions and insightful comments for improving the paper and also for making their data available for our experiments. This study was supported by NSF Grant 0308030. References E. Agirre, J. Atserias, L. Padró, and G. Rigau. 2000. Combining supervised and unsupervised lexical knowledge methods for word sense disambiguation. In Computers and the Humanities, Special Double Issue on SensEval. Eds. Martha Palmer and Adam Kilgarriff. 34:1,2. Yoshua Bengio and Christopher Kermorvant. 2003. Extracting hidden sense probabilities from bitexts. Technical report, TR 1231, Département d'informatique et recherche opérationnelle, Université de Montréal. Peter F. Brown, Stephen Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1991. Word-sense disambiguation using statistical methods. In Meeting of the Association for Computational Linguistics, pages 264–270. Rebecca Bruce and Janyce Wiebe. 1994. A new approach to sense identification. In ARPA Workshop on Human Language Technology. Ido Dagan and Alon Itai. 1994. Word sense disambiguation using a second language monolingual corpus. Computational Linguistics, 20(4):563–596. Ido Dagan. 1991. Lexical disambiguation: Sources of information and their statistical realization. In Meeting of the Association for Computational Linguistics, pages 341–342. A.P. Dempster, N.M. Laird, and D.B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B 39:1–38. Mona Diab and Philip Resnik. 2002. An unsupervised method for word sense tagging using parallel corpora. In Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics (ACL-02). Mona Diab. 2003. Word Sense Disambiguation Within a Multilingual Framework. Ph.D.
thesis, University of Maryland, College Park. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press. Nancy Ide and Jean Veronis. 1998. Word sense disambiguation: The state of the art. Computational Linguistics, 28(1):1–40. Nancy Ide. 2000. Cross-lingual sense determination: Can it work? In Computers and the Humanities: Special Issue on Senseval, 34:147-152. Adam Kilgarrif and Joseph Rosenzweig. 2000. Framework and results for english senseval. Computers and the Humanities, 34(1):15–48. Dekang Lin. 2000. Word sense disambiguation with a similarity based smoothed library. In Computers and the Humanities: Special Issue on Senseval, 34:147-152. K. C. Litkowski. 2000. Senseval: The cl research experience. In Computers and the Humanities, 34(1-2), pp. 153-8. Philip Resnik and David Yarowsky. 1999. Distinguishing systems and distinguishing senses: new evaluation methods for word sense disambiguation. Natural Language Engineering, 5(2). Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 448–453. Philip Resnik. 1997. Selectional preference and sense disambiguation. In Proceedings of ACL Siglex Workshop on Tagging Text with Lexical Semantics, Why, What and How?, Washington, April 4-5. David Yarowsky. 1992. Word-sense disambiguation using statistical models of Roget's categories trained on large corpora. In Proceedings of COLING-92, pages 454–460, Nantes, France, July. David Yarowsky. 1993. One sense per collocation. In Proceedings, ARPA Human Language Technology Workshop, Princeton. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Meeting of the Association for Computational Linguistics, pages 189–196.
Chinese Verb Sense Discrimination Using an EM Clustering Model with Rich Linguistic Features Jinying Chen, Martha Palmer Department of Computer and Information Science University of Pennsylvania Philadelphia, PA, 19104 {jinying,mpalmer}@linc.cis.upenn.edu Abstract This paper discusses the application of the Expectation-Maximization (EM) clustering algorithm to the task of Chinese verb sense discrimination. The model utilized rich linguistic features that capture predicateargument structure information of the target verbs. A semantic taxonomy for Chinese nouns, which was built semi-automatically based on two electronic Chinese semantic dictionaries, was used to provide semantic features for the model. Purity and normalized mutual information were used to evaluate the clustering performance on 12 Chinese verbs. The experimental results show that the EM clustering model can learn sense or sense group distinctions for most of the verbs successfully. We further enhanced the model with certain fine-grained semantic categories called lexical sets. Our results indicate that these lexical sets improve the model’s performance for the three most challenging verbs chosen from the first set of experiments. 1 Introduction Highly ambiguous words may lead to irrelevant document retrieval and inaccurate lexical choice in machine translation (Palmer et al., 2000), which suggests that word sense disambiguation (WSD) is beneficial and sometimes even necessary in such NLP tasks. This paper addresses WSD in Chinese through developing an Expectation-Maximization (EM) clustering model to learn Chinese verb sense distinctions. The major goal is to do sense discrimination rather than sense labeling, similar to (Schütze, 1998). The basic idea is to divide instances of a word into several clusters that have no sense labels. The instances in the same cluster are regarded as having the same meaning. Word sense discrimination can be applied to document retrieval and similar tasks in information access, and to facilitating the building of large annotated corpora. In addition, since the clustering model can be trained on large unannotated corpora and evaluated on a relatively small sense-tagged corpus, it can be used to find indicative features for sense distinctions through exploring huge amount of available unannotated text data. The EM clustering algorithm (Hofmann and Puzicha, 1998) used here is an unsupervised machine learning algorithm that has been applied in many NLP tasks, such as inducing a semantically labeled lexicon and determining lexical choice in machine translation (Rooth et al., 1998), automatic acquisition of verb semantic classes (Schulte im Walde, 2000) and automatic semantic labeling (Gildea and Jurafsky, 2002). In our task, we equipped the EM clustering model with rich linguistic features that capture the predicate-argument structure information of verbs and restricted the feature set for each verb using knowledge from dictionaries. We also semiautomatically built a semantic taxonomy for Chinese nouns based on two Chinese electronic semantic dictionaries, the Hownet dictionary1 and the Rocling dictionary.2 The 7 top-level categories of this taxonomy were used as semantic features for the model. Since external knowledge is used to obtain the semantic features and guide feature selection, the model is not completely unsupervised from this perspective; however, it does not make use of any annotated training data. Two external quality measures, purity and normalized mutual information (NMI) (Strehl. 
2002), were used to evaluate the model's performance on 12 Chinese verbs. The experimental results show that rich linguistic features and the semantic taxonomy are both very useful in sense discrimination. The model generally performs well in learning sense group distinctions for difficult, highly polysemous verbs and sense distinctions for other verbs. Enhanced by certain fine-grained semantic categories called lexical sets (Hanks, 1996), the model's performance improved in a preliminary experiment for the three most difficult verbs chosen from the first set of experiments. The paper is organized as follows: we briefly introduce the EM clustering model in Section 2 and describe the features used by the model in Section 3. In Section 4, we introduce a semantic taxonomy for Chinese nouns, which is built semi-automatically for our task but can also be used in other NLP tasks such as co-reference resolution and relation detection in information extraction. We report our experimental results in Section 5 and conclude our discussion in Section 6. 2 EM Clustering Model The basic idea of our EM clustering approach is similar to the probabilistic model of co-occurrence described in detail in (Hofmann and Puzicha, 1998). In our model, we treat a set of features {f_1, f_2, ..., f_m}, which are extracted from the parsed sentences that contain a target verb, as observed variables. These variables are assumed to be independent given a hidden variable c, the sense of the target verb. Therefore the joint probability of the observed variables (features) for each verb instance, i.e., each parsed sentence containing the target verb, is defined in equation (1), 
p(f_1, f_2, ..., f_m) = Σ_c p(c) Π_{i=1..m} p(f_i | c)    (1) 
The f_i's are discrete-valued features that can take multiple values. A typical feature used in our model is shown in (2), 
f_i = 0 iff the target verb has no sentential complement, 
      1 iff the target verb has a nonfinite sentential complement,    (2) 
      2 iff the target verb has a finite sentential complement 
At the beginning of training (i.e., clustering), the model's parameters p(c) and p(f_i | c) are randomly initialized.3 Then, the probability of c conditioned on the observed features is computed in the expectation step (E-step), using equation (3), 
p~(c | f_1, f_2, ..., f_m) = p(c) Π_{i=1..m} p(f_i | c) / Σ_c p(c) Π_{i=1..m} p(f_i | c)    (3) 
3 In our experiments, for verbs with more than 3 senses, syntactic and semantic restrictions derived from dictionary entries are used to constrain the random initialization. 
In the maximization step (M-step), p(c) and p(f_i | c) are re-computed by maximizing the log-likelihood of all the observed data, which is calculated by using p~(c | f_1, f_2, ..., f_m) estimated in the E-step. The E-step and M-step are repeated for a fixed number of rounds, which is set to 20 in our experiments,4 or until the amount of change of p(c) and p(f_i | c) is under the threshold 0.001. When doing classification, for each verb instance, the model calculates the same conditional probability as in equation (3) and assigns the instance to the cluster with the maximal p~(c | f_1, f_2, ..., f_m). 
4 In our experiments, we set 20 as the maximal number of rounds after trying different numbers of rounds (20, 40, 60, 80, 100) in a preliminary experiment.
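As a concrete illustration of equations (1)-(3), the following is a minimal sketch (not the authors' implementation) of the classification step: a verb instance, represented as a dictionary of feature values, is assigned to the cluster with the highest posterior. The parameter structures and the example feature names are hypothetical.

    # Minimal sketch of equation (3): prior[c] and cond[c][f][v] are assumed to
    # have been estimated by EM; only the verb's own feature set is used.
    def assign_cluster(instance, prior, cond):
        posteriors = {}
        for c in prior:
            p = prior[c]
            for f, v in instance.items():      # features independent given c
                p *= cond[c][f].get(v, 0.0)
            posteriors[c] = p
        z = sum(posteriors.values()) or 1.0    # normalizing constant of eq. (3)
        best = max(posteriors, key=posteriors.get)
        return best, {c: p / z for c, p in posteriors.items()}

    # Hypothetical usage for one instance of a verb:
    # instance = {"obj_sem": "Event", "transitive": 1, "in_compound": 0}
    # cluster, posterior = assign_cluster(instance, prior, cond)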
3 Features Used in the Model The EM clustering model uses a set of linguistic features to capture the predicate-argument structure information of the target verbs. These features are usually more indicative of verb sense distinctions than simple features such as words next to the target verb or their POS tags. For example, the Chinese verb "出|chu1" has a sense of produce; the distinction between this sense and the verb's other senses, such as happen and go out, largely depends on the semantic category of the verb's direct object. Typical examples are shown in (1), 
(1) a. 他们/their 县/county 出/produce 香蕉/banana "Their county produces bananas." 
    b. 他们/their 县/county 出/happen 大/big 事/event 了/ASP "A big event happened in their county." 
    c. 他们/their 县/county 出/go out 门/door 就/right away 是/be 山/mountain "In their county, you can see mountains as soon as you step out of the doors." 
The verb has the sense produce in (1a) and its object should be something producible, such as "香蕉/banana". While in (1b), with the sense happen, the verb typically takes an event or event-like object, such as "大事/big event", "事故/accident" or "问题/problem", etc. In (1c), the verb's object "门/door" is closely related to location, consistent with the sense go out. In contrast, simple lexical or POS tag features sometimes fail to capture such information, which can be seen clearly in (2), 
(2) a. 去年/last year 出/produce 香蕉/banana 3000 公斤/kilogram "3000 kilograms of bananas were produced last year." 
    b. 要/in order to 出/produce 海南/Hainan 最好/best 的/DE 香蕉/banana "In order to produce the best bananas in Hainan, ……" 
The verb's object "香蕉/banana", which is next to the verb in (2a), is far away from the verb in (2b). For (2b), a classifier only looking at the adjacent positions of the target verb tends to be misled by the NP right after the verb, i.e., "海南/Hainan", which is a province in China and a typical object of the verb with the sense go out. Five types of features are used in our model: 
1. Semantic category of the subject of the target verb 
2. Semantic category of the object of the target verb 
3. Transitivity of the target verb 
4. Whether the target verb takes a sentential complement and which type of sentential complement (finite or nonfinite) it takes 
5. Whether the target verb occurs in a verb compound 
We obtain the values for the first two types of features (1) and (2) from a semantic taxonomy for Chinese nouns, which we will introduce in detail in the next section. In our implementation, the model uses different features for different verbs. The criteria for feature selection are from the electronic CETA dictionary file5 and a hard copy English-Chinese dictionary, The Warmth Modern Chinese-English Dictionary.6 For example, the verb "出|chu1" never takes sentential complements, thus the fourth type of feature is not used for it. 
5 Licensed from the Department of Defense. 
6 The Warmth Modern Chinese-English Dictionary, Wang-Wen Books Ltd, 1997.
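To illustrate how the verb-specific features just listed could be assembled for one instance, here is a minimal sketch; the parse-access functions and the noun-to-semantic-category lookup are hypothetical placeholders for the parser output and the taxonomy of Section 4, not the actual implementation.

    # Minimal sketch (hypothetical helpers): build the feature dict for one
    # instance of a target verb from its parsed sentence.
    def extract_features(parse, verb, feature_set, noun_category):
        """parse: parsed sentence; verb: target verb token; feature_set: the
        verb-specific subset of the five feature types; noun_category: maps a
        noun to one of the 7 top-level semantic categories."""
        features = {}
        if "subj_sem" in feature_set:
            subj = parse.subject_of(verb)             # hypothetical parser accessor
            features["subj_sem"] = noun_category.get(subj, "UNKNOWN")
        if "obj_sem" in feature_set:
            obj = parse.object_of(verb)
            features["obj_sem"] = noun_category.get(obj, "UNKNOWN")
        if "transitive" in feature_set:
            features["transitive"] = int(parse.object_of(verb) is not None)
        if "sent_comp" in feature_set:                # 0/1/2 as in equation (2)
            comp = parse.sentential_complement_of(verb)
            features["sent_comp"] = 0 if comp is None else (2 if comp.finite else 1)
        if "in_compound" in feature_set:
            features["in_compound"] = int(parse.in_verb_compound(verb))
        return features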
Since all the features used in our model are extracted from automatically parsed sentences that inevitably have preprocessing errors such as segmentation, POS tagging and parsing errors, using verb-specific sets of features can alleviate the problem caused by noisy data to some extent. For example, if the model already knows 5 Licensed from the Department of Defense 6 The Warmth Modern Chinese-English Dictionary, Wang-Wen Books Ltd, 1997. that a verb like “出|chu1” can never take sentential complements (i.e., it does not use the fourth type of feature for that verb), it will not be misled by erroneous parsing information saying that the verb takes sentential complements in certain sentences. Since the corresponding feature is not included, the noisy data is filtered out. In our EM clustering model, all the features selected for a target verb are treated in the same way, as described in Section 2. 4 A Semantic Taxonomy Built Semiautomatically Examples in (1) have shown that the semantic category of the object of a verb sometimes is crucial in distinguishing certain Chinese verb senses. And our previous work on information extraction in Chinese (Chen et al., 2004) has shown that semantic features, which are more general than lexical features but still contain rich information about words, can be used to improve a model’s capability of handling unknown words, thus alleviating potential sparse data problems. We have two Chinese electronic semantic dictionaries: the Hownet dictionary, which assigns 26,106 nouns to 346 semantic categories, and the Rocling dictionary, which assigns 4,474 nouns to 110 semantic categories.7 A preliminary experimental result suggests that these semantic categories might be too fine-grained for the EM clustering model (see Section 5.2 for greater details). An analysis of the sense distinctions of several Chinese verbs also suggests that more general categories on top of the Hownet and Rocling categories could still be informative and most importantly, could enable the model to generate meaningful clusters more easily. We therefore built a three-level semantic taxonomy based on the two semantic dictionaries using both automatic methods and manual effort. The taxonomy was built in three steps. First, a simple mapping algorithm was used to map semantic categories defined in Hownet and Rocling into 27 top-level WordNet categories.8 The Hownet or Rocling semantic categories have English glosses. For each category gloss, the algorithm looks through the hypernyms of its first sense in WordNet and chooses the first WordNet top-level category it finds. 7 Hownet assigns multiple entries (could be different semantic categories) to polysemous words. The Rocling dictionary we used only assigns one entry (i.e., one semantic category) to each noun. 8 The 27 categories contain 25 unique beginners for noun source files in WordNet, as defined in (Fellbaum, 1998) and two higher level categories Entity and Abstraction. The mapping obtained from step 1 needs further modification for two reasons. First, the glosses of Hownet or Rocling semantic categories usually have multiple senses in WordNet. Sometimes, the first sense in WordNet for a category gloss is not its intended meaning in Hownet or Rocling. In this case, the simple algorithm cannot get the correct mapping. Second, Hownet and Rocling sometimes use adjectives or non-words as category glosses, such as animate and LandVehicle etc., which have no WordNet nominal hypernyms at all. 
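Step 1 above — walking up from the first WordNet sense of a category gloss until a top-level category is reached — can be sketched as a short lookup routine. The sketch is illustrative only: it assumes NLTK's WordNet interface and a hand-listed TOP_LEVEL set standing in for the 27 top-level categories, neither of which is claimed by the paper; the two failure cases just described surface here as a wrong first sense going undetected and as glosses with no nominal reading returning None.

    from nltk.corpus import wordnet as wn

    # Stand-in for the 27 top-level categories (25 WordNet unique beginners plus
    # Entity and Abstraction); the exact synset inventory is assumed, and truncated.
    TOP_LEVEL = {"entity.n.01", "abstraction.n.06", "event.n.01", "state.n.02",
                 "act.n.02", "group.n.01", "psychological_feature.n.01"}

    def map_category_gloss(gloss):
        # Map a Hownet/Rocling category gloss (e.g. "livestock", "money") to the
        # first top-level WordNet category found above its first noun sense.
        senses = wn.synsets(gloss, pos=wn.NOUN)
        if not senses:
            return None               # e.g. "animate", "LandVehicle": left to step 2
        for path in senses[0].hypernym_paths():    # each path runs root-to-sense
            for node in reversed(path):            # scan upward from the sense
                if node.name() in TOP_LEVEL:
                    return node.name()
        return None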
However, those adjectives or non-words usually have straightforward meanings and can be easily reassigned to an appropriate WordNet category. Although not accurate, the automatic mapping in step 1 provides a basic framework or skeleton for the semantic taxonomy we want to build and makes subsequent work easier. In step 2, hand correction, we found that we could make judgments and necessary adjustments on about 80% of the mappings by only looking at the category glosses used by Hownet or Rocling, such as livestock, money, building and so on. For the other 20%, we could make quick decisions by looking them up in an electronic table we created. For each Hownet or Rocling category, our table lists all the nouns assigned to it by the two dictionaries. We merged two WordNet categories into others and subdivided three categories that seemed more coarse-grained than others into 2~5 subcategories. Step 2 took three days and 35 intermediate-level categories were generated. In step 3, we manually clustered the 35 intermediate-level categories into 7 top-level semantic categories. Figure 1 shows part of the taxonomy. The EM clustering model uses the 7 top-level categories to define the first two types of features that were introduced in Section 3. For example, the value of a feature kf is 1 if and only if the object NP of the target verb belongs to the semantic category Event and is otherwise 0. 5 Clustering Experiments Since we need labeled data to evaluate the clustering performance but have limited sense- tagged corpora, we applied the clustering model to 12 Chinese verbs in our experiments. The verbs are chosen from 28 annotated verbs in Penn Chinese Treebank so that they have at least two verb meanings in the corpus and for each of them, the number of instances for a single verb sense does not exceed 90% of the total number of instances. In our task, we generally do not include senses for other parts of speech of the selected words, such as noun, preposition, conjunction and particle etc., since the parser we used has a very high accuracy in distinguishing different parts of speech of these words (>98% for most of them). However, we do include senses for conjunctional and/or prepositional usage of two words, “到|dao4” and “为|wei4”, since our parser cannot distinguish the verb usage from the conjunctional or prepositional usage for the two words very well. Five verbs, the first five listed in Table 1, are both highly polysemous and difficult for a supervised word sense classifier (Dang et al., 2002). 9 In our experiments, we manually grouped the verb senses for the five verbs. The criteria for the grouping are similar to Palmer et al.’s (to appear) work on English verbs, which considers both sense coherence and predicate-argument structure distinctions. Figure 2 gives an example of 9 In the supervised task, their accuracies are lower than 85%, and four of them are even lower than the baselines. Entity Plant Artifact Document Food …… Money drinks, edible, meals, vegetable, … Location Location_Part Location Group …… institution, army, corporation, … Event Natural Phenomena Happening Activity …… Process chase, cut, pass, split, cheat, … process, BecomeLess, StateChange, disappear, …. Top level Intermediate level Hownet/Rocling categories Figure 1. Part of the 3-level Semantic Taxonomy for Chinese Nouns (other top-level nodes are Time, Human, Animal and State) the definition of sense groups. The manually defined sense groups are used to evaluate the model’s performance on the five verbs. 
The model was trained on an unannotated corpus, People’s Daily News (PDN), and tested on the manually sense-tagged Chinese Treebank (with some additional sense-tagged PDN data).10 We parsed the training and test data using a Maximum Entropy parser and extracted the features from the parsed data automatically. The number of clusters used by the model is set to the number of the defined senses or sense groups of each target verb. For each verb, we ran the EM clustering algorithm ten times. Table 2 shows the average performance and the standard deviation for each verb. Table 1 summarizes the data used in the experiments, where we also give the normalized sense perplexity11 of each verb in the test data. 5.1 Evaluation Methods We use two external quality measures, purity and normalized mutual information (NMI) (Strehl. 2002) to evaluate the clustering performance. Assuming a verb has l senses, the clustering model assigns n instances of the verb into k clusters, in is the size of the ith cluster, j n is the number of instances hand-tagged with the jth sense, and j in is the number of instances with the jth sense in the ith cluster, purity is defined in equation (4): ∑ = = k i j i j n n purity 1 max 1 (4) 10 The sense-tagged PDN data we used here are the same as in (Dang et al., 2002). 11 It is calculated as the entropy of the sense distribution of a verb in the test data divided by the largest possible entropy, i.e., log2 (the number of senses of the verb in the test data). It can be interpreted as classification accuracy when for each cluster we treat the majority of instances that have the same sense as correctly classified. The baseline purity is calculated by treating all instances for a target verb in a single cluster. The purity measure is very intuitive. In our case, since the number of clusters is preset to the number of senses, purity for verbs with two senses is equal to classification accuracy defined in supervised WSD. However, for verbs with more than 2 senses, purity is less informative in that a clustering model could achieve high purity by making the instances of 2 or 3 dominant senses the majority instances of all the clusters. Mutual information (MI) is more theoretically well-founded than purity. Treating the verb sense and the cluster as random variables S and C, the MI between them is defined in equation (5): ∑∑ ∑ = = = = l j k i j i j i j i c s n n n n n n c p s p c s p c s p C S MI 1 1 , log ) ( ) ( ) , ( log ) , ( ) , ( (5) MI(S,C) characterizes the reduction in uncertainty of one random variable S (or C) due to knowing the other variable C (or S). A single cluster with all instances for a target verb has a zero MI. Random clustering also has a zero MI in the limit. In our experiments, we used [0,1]normalized mutual information (NMI) (Strehl. 2002). A shortcoming of this measure, however, is that the best possible clustering (upper bound) evaluates to less than 1, unless classes are balanced. Unfortunately, unbalanced sense distribution is the usual case in WSD tasks, which makes NMI itself hard to interpret. Therefore, in addition to NMI, we also give its upper bound (upper-NMI) and the ratio of NMI and its upper bound (NMI-ratio) for each verb, as shown in columns 6 to 8 in Table 2. Senses for “到|dao4” Sense groups for “到|dao4” 1. to go to, leave for 2. to come 3. to arrive 4. to reach a particular stage, condition, or level 5. marker for completion of activities (after a verb) 6. marker for direction of activities (after a verb) 7. to reach a time point 8. 
up to, until (prepositional usage) 9. up to, until, (from …) to … (conjunctional usage) 1, 2 4,7,8,9 5 3 6 Figure 2. Sense groups for the Chinese verb “到|dao4” Verb| Pinyin Sample senses of the verb # Senses in test data # Sense groups in test data Sense perplexity # Clusters # Training instances # Test instances 出 |chu1 go out /produce 16 7 0.68 8 399 157 到 |dao4 come /reach 9 5 0.72 6 1838 186 见 |jian4 see /show 8 5 0.68 6 117 82 想 |xiang3 think/suppose 6 4 0.64 6 94 228 要 |yao4 Should/intend to 8 4 0.65 7 2781 185 表示|biao3shi4 Indicate /express 2 0.93 2 666 97 发现|fa1xian4 discover /realize 2 0.76 2 319 27 发展|fa1zhan3 develop /grow 3 0.69 3 458 130 恢复|hui1fu4 resume /restore 4 0.83 4 107 125 说 |shuo1 say /express by written words 7 0.40 7 2692 307 投入|tou2ru4 to input /plunge into 2 1.00 2 136 23 为 |wei2_4 to be /in order to 6 0.82 6 547 463 Verb Sense perplexity Baseline Purity (%) Purity (%) Std. Dev. of purity (%) NMI Upper- NMI NMI- ratio (%) Std. Dev. of NMI ratio (%) 出 0.68 52.87 63.31 1.59 0.2954 0.6831 43.24 1.76 到 0.72 40.32 90.48 1.08 0.4802 0.7200 75.65 0.00 见 0.68 58.54 72.20 1.61 0.1526 0.6806 22.41 0.66 想 0.64 68.42 79.39 3.74 0.2366 0.6354 37.24 8.22 要 0.65 69.19 69.62 0.34 0.0108 0.6550 1.65 0.78 表示 0.93 64.95 98.04 1.49 0.8670 0.9345 92.77 0.00 发现 0.76 77.78 97.04 3.87 0.7161 0.7642 93.71 13.26 发展 0.69 53.13 90.77 0.24 0.4482 0.6918 64.79 2.26 恢复 0.83 45.97 65.32 0.00 0.1288 0.8234 15.64 0.00 说 0.40 80.13 93.00 0.58 0.3013 0.3958 76.13 4.07 投入 1.00 52.17 95.65 0.00 0.7827 0.9986 78.38 0.00 为 0.82 32.61 75.12 0.43 0.4213 0.8213 51.30 2.07 Average 0.73 58.01 82.50 1.12 0.4088 0.7336 54.41 3.31 5.2 Experimental Results Table 2 summarizes the experimental results for the 12 Chinese verbs. As we see, the EM clustering model performs well on most of them, except the verb “要|yao4”.12 The NMI measure NMI-ratio turns out to be more stringent than purity. A high purity does not necessarily mean a high NMI-ratio. Although intuitively, NMI-ratio should be related to sense perplexity and purity, it is hard to formalize the relationships between them from the results. In fact, the NMI-ratio for a particular verb is eventually determined by its concrete sense distribution in the test data and the model’s clustering behavior for that verb. For example, the verbs “出|chu1” and “见|jian4” have the same sense perplexity and “见|jian4” has a higher purity than “出|chu1” (72.20% vs. 63.31%), but the NMIratio for “见|jian4” is much lower than “出|chu1” (22.41% vs. 43.24%). An analysis of the 12 For all the verbs except “要|yao4”, the model’s purities outperformed the baseline purities significantly (p<0.05, and p<0.001 for 8 of them). classification results for “见|jian4” shows that the clustering model made the instances of the verb’s most dominant sense the majority instances of three clusters (of total 5 clusters), which is penalized heavily by the NMI measure. Rich linguistic features turn out to be very effective in learning Chinese verb sense distinctions. Except for the two verbs, “发现|fa1xian4” and “表示|biao3shi4”, the sense distinctions of which can usually be made only by syntactic alternations,13 features such as semantic features or combinations of semantic features and syntactic alternations are very beneficial and sometimes even necessary for learning sense distinctions of other verbs. 
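Restated concretely, the Section 5.1 measures take one verb's clustered instances as (cluster, gold sense) pairs: purity is the fraction of instances falling in the majority sense of their cluster, and NMI divides MI(S, C) by a normalizer. The sketch below assumes that the [0,1] normalization of Strehl (2002) divides by the geometric mean sqrt(H(S) H(C)) and uses log base 2 throughout; upper-NMI and NMI-ratio as reported in Table 2 would be derived from the same quantities.

    import math
    from collections import Counter, defaultdict

    def purity(pairs):
        # pairs: one verb's instances as (cluster_id, gold_sense); equation (4)
        by_cluster = defaultdict(Counter)
        for c, s in pairs:
            by_cluster[c][s] += 1
        return sum(max(cnt.values()) for cnt in by_cluster.values()) / len(pairs)

    def _entropy(counts, n):
        return -sum(c / n * math.log2(c / n) for c in counts if c)

    def nmi(pairs):
        # [0,1]-normalized mutual information between cluster C and sense S
        n = len(pairs)
        joint = Counter(pairs)
        clusters = Counter(c for c, _ in pairs)
        senses = Counter(s for _, s in pairs)
        mi = sum(n_cs / n * math.log2(n_cs * n / (clusters[c] * senses[s]))
                 for (c, s), n_cs in joint.items())      # equation (5)
        h_c, h_s = _entropy(clusters.values(), n), _entropy(senses.values(), n)
        return mi / math.sqrt(h_c * h_s) if h_c > 0 and h_s > 0 else 0.0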
For example, the verb “见|jian4” has one sense see, in which the verb typically takes a Human subject and a sentential complement, while in another sense show, the verb typically takes an Entity subject and a State object. An inspection of the classification results shows 13 For example, the verb “发现|fa1xian4” takes an object in one sense discover and a sentential complement in the other sense realize. Table 1. A summary of the training and test data used in the experiments Table 2. The performance of the EM clustering model on 12 Chinese verbs measured by purity and normalized mutual information (NMI) that the EM clustering model has indeed learned such combinatory patterns from the training data. The experimental results also indicate that the semantic taxonomy we built is beneficial for the task. For example, the verb “投入|tou1ru4” has two senses, input and plunge into. It typically takes an Event object for the second sense but not for the first one. A single feature obtained from our semantic taxonomy, which tests whether the verb takes an Event object, captures this property neatly (achieves purity 95.65% and NMI-ratio 78.38% when using 2 clusters). Without the taxonomy, the top-level category Event is split into many finegrained Hownet or Rocling categories, which makes it very difficult for the EM clustering model to learn sense distinctions for this verb. In fact, in a preliminary experiment only using the Hownet and Rocling categories, the model had the same purity as the baseline (52.17%) and a low NMI-ratio (4.22%) when using 2 clusters. The purity improved when using more clusters (70.43% with 4 clusters and 76.09% with 6), but it was still much lower than the purity achieved by using the semantic taxonomy and the NMI-ratio dropped further (1.19% and 1.20% for the two cases). By looking at the classification results, we identified three major types of errors. First, preprocessing errors create noisy data for the model. Second, certain sense distinctions depend heavily on global contextual information (crosssentence information) that is not captured by our model. This problem is especially serious for the verb “要|yao4”. For example, without global contextual information, the verb can have at least three meanings want, need or should in the same clause, as shown in (3). (3) 他 要 马上 /he /want/need/should /at once 读完 这本 书 /finish reading /this /book. “He wants to/needs to/should finish reading this book at once.” Third, a target verb sometimes has specific types of NP arguments or co-occurs with specific types of verbs in verb compounds in certain senses. Such information is crucial for distinguishing these senses from others, but is not captured by the general semantic taxonomy used here. We did further experiments to investigate how much improvement the model could gain by capturing such information, as discussed in Section 5.3. 5.3 Experiments with Lexical Sets As discussed by Patrick Hanks (1996), certain senses of a verb are often distinguished by very narrowly defined semantic classes (called lexical sets) that are specific to the meaning of that verb sense. For example, in our case, the verb “恢复|hui1fu4” has a sense recover in which its direct object should be something that can be recovered naturally. 
A typical set of object NPs of the verb for this particular sense is partially listed in (4), (4) Lexical set for naturally recoverable things 体力 身体 健康 { /physical strength, /body, /health, 精力 听力 /mental energy, /hearing 知觉 , /feeling, 记忆力/memory, ……} Most words in this lexical set belong to the Hownet category attribute and the top-level category State in our taxonomy. However, even the lower-level category attribute still contains many other words irrelevant to the lexical set, some of which are even typical objects of the verb for two other senses, resume and regain, such as “邦交/diplomatic relations” in “恢复/resume 邦交/diplomatic relations” and “名誉/reputation” in “恢复/regain名誉/reputation”. Therefore, a lexical set like (4) is necessary for distinguishing the recover sense from other senses of the verb. It has been argued that the extensional definition of lexical sets can only be done using corpus evidence and it cannot be done fully automatically (Hanks, 1997). In our experiments, we use a bootstrapping approach to obtain five lexical sets semi-automatically for three verbs “出|chu1”, “见|jian4” and “恢复|hui1fu4” that have both low purity and low NMI-ratio in the first set of experiments. 14 We first extracted candidates for the lexical sets from the training data. For example, we extracted all the direct objects of the verb “恢复|hui1fu4” and all the verbs that combined with the verb “出|chu1” to form verb compounds from the automatically parsed training data. From the candidates, we manually selected words to form five initial seed sets, each of which contains no more than ten words. A simple algorithm was used to search for all the words that have the same detailed Hownet semantic definitions (semantic category plus certain supplementary information) as the seed words. We did not use Rocling because its semantic definitions are so general that a seed word tends to extend to a huge set of irrelevant words. Highly relevant words were manually selected from all the words found by the searching algorithm and added to the initial seed sets. The enlarged sets were used as lexical sets. The enhanced model first uses the lexical sets to obtain the semantic category of the NP arguments 14 We did not include “要|yao4”, since its meaning rarely depends on local predicate-argument structure information. of the three verbs. Only when the search fails does the model resort to the general semantic taxonomy. The model also uses the lexical sets to determine the types of the compound verbs that contain the target verb “出|chu1” and uses them as new features. Table 3 shows the model’s performance on the three verbs with or without using lexical sets. As we see, lexical sets improves the model’s performance on all of them, especially on the verb “出|chu1”. Although the results are still preliminary, they nevertheless provide us hints of how much a WSD model for Chinese verbs could gain from lexical sets. w/o lexical sets (%) with lexical sets (%) Verb Purity NMI-ratio Purity NMI-ratio 出 63.61 43.24 76.50 52.81 见 72.20 22.41 77.56 34.63 恢复 65.32 15.64 69.03 19.71 6 Conclusion We have shown that an EM clustering model that uses rich linguistic features and a general semantic taxonomy for Chinese nouns generally performs well in learning sense distinctions for 12 Chinese verbs. In addition, using lexical sets improves the model’s performance on three of the most challenging verbs. Future work is to extend our coverage and to apply the semantic taxonomy and the same types of features to supervised WSD in Chinese. 
Since the experimental results suggest that a general semantic taxonomy and more constrained lexical sets are both beneficial for WSD tasks, we will develop automatic methods to build large-scale semantic taxonomies and lexical sets for Chinese, which reduce human effort as much as possible but still ensure high quality of the obtained taxonomies or lexical sets. 7 Acknowledgements This work has been supported by an ITIC supplement to a National Science Foundation Grant, NSF-ITR-EIA-0205448. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. References Jinying Chen, Nianwen Xue and Martha Palmer. 2004. Using a Smoothing Maximum Entropy Model for Chinese Nominal Entity Tagging. In Proceedings of the 1st Int. Joint Conference on Natural Language Processing. Hainan Island, China. Hoa Trang Dang, Ching-yi Chia, Martha Palmer, and Fu-Dong Chiou. 2002. Simple Features for Chinese Word Sense Disambiguation. In Proceedings of COLING-2002 Nineteenth Int. Conference on Computational Linguistics, Taipei, Aug.24–Sept.1. Christiane Fellbaum. 1998. WordNet – an Electronic Lexical Database. The MIT Press, Cambridge, Massachusetts, London. Daniel Gildea and Daniel Jurafsky. 2002. Automatic Labeling of Semantic Roles. Computational Linguistics, 28(3): 245-288, 2002. Patrick Hanks. 1996. Contextual dependencies and lexical sets. The Int. Journal of Corpus Linguistics, 1:1. Patrick Hanks. 1997. Lexical sets: relevance and probability. in B. Lewandowska-Tomaszczyk and M. Thelen (eds.) Translation and Meaning, Part 4, School of Translation and Interpreting, Maastricht, The Netherlands. Thomas Hofmann and Puzicha Jan. 1998. Statistical models for co-occurrence data, MIT Artificial Intelligence Lab., Technical Report AIM-1625. Adam Kilgarriff and Martha Palmer. 2000. Introduction to the sepcial issue on SENSEVAL. Computers and the Humanities, 34(1-2): 15-48. Martha Palmer, Hoa Trang Dang, and Christiane Fellbaum. To appear. Making fine-grained and coarse-grained sense distinctions, both manually and automatically. Natural Language Engineering. Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1998. EM-based clustering for NLP applications. AIMS Report 4(3).Institut für Maschinelle Sprachverarbeitung. Sabine Schulte im Walde. 2000. Clustering verbs semantically according to their alternation behaviour. In Proceedings of the 18th Int. Conference on Computational Linguistics, 747753. Hinrich Schütze. 1998. Automatic Word Sense Discrimination. Computational Linguistics, 24 (1): 97-124. Alexander Strehl. 2002. Relationship-based Clustering and Cluster Ensembles for Highdimensional Data Mining. Dissertation. The University of Texas at Austin. http://www.lans. ece.utexas.edu/~strehl/diss/. Table 3. Clustering performance with and without lexical sets for three Chinese verbs | 2004 | 38 |
Relieving The Data Acquisition Bottleneck In Word Sense Disambiguation Mona Diab Linguistics Department Stanford University [email protected] Abstract Supervised learning methods for WSD yield better performance than unsupervised methods. Yet the availability of clean training data for the former is still a severe challenge. In this paper, we present an unsupervised bootstrapping approach for WSD which exploits huge amounts of automatically generated noisy data for training within a supervised learning framework. The method is evaluated using the 29 nouns in the English Lexical Sample task of SENSEVAL2. Our algorithm does as well as supervised algorithms on 31% of this test set, which is an improvement of 11% (absolute) over state-of-the-art bootstrapping WSD algorithms. We identify seven different factors that impact the performance of our system. 1 Introduction Supervised Word Sense Disambiguation (WSD) systems perform better than unsupervised systems. But lack of training data is a severe bottleneck for supervised systems due to the extensive labor and cost involved. Indeed, one of the main goals of the SENSEVAL exercises is to create large amounts of sense-annotated data for supervised systems (Kilgarriff&Rosenzweig, 2000). The problem is even more challenging for languages which possess scarce computer readable knowledge resources. In this paper, we investigate the role of large amounts of noisily sense annotated data obtained using an unsupervised approach in relieving the data acquisition bottleneck for the WSD task. We bootstrap a supervised learning WSD system with an unsupervised seed set. We use the sense annotated data produced by Diab’s unsupervised system SALAAM (Diab&Resnik, 2002; Diab, 2003). SALAAM is a WSD system that exploits parallel corpora for sense disambiguation of words in running text. To date, SALAAM yields the best scores for an unsupervised system on the SENSEVAL2 English All-Words task (Diab, 2003). SALAAM is an appealing approach as it provides automatically sense annotated data in two languages simultaneously, thereby providing a multilingual framework for solving the data acquisition problem. For instance, SALAAM has been used to bootstrap the WSD process for Arabic as illustrated in (Diab, 2004). In a supervised learning setting, WSD is cast as a classification problem, where a predefined set of sense tags constitutes the classes. The ambiguous words in text are assigned one or more of these classes by a machine learning algorithm based on some extracted features. This algorithm learns parameters from explicit associations between the class and the features, or combination of features, that characterize it. Therefore, such systems are very sensitive to the training data, and those data are, generally, assumed to be as clean as possible. In this paper, we question that assumption. Can large amounts of noisily annotated data used in training be useful within such a learning paradigm for WSD? What is the nature of the quality-quantity trade-off in addressing this problem? 2 Related Work To our knowledge, the earliest study of bootstrapping a WSD system with noisy data is by Gale et. al., (Gale et al. , 1992). Their investigation was limited in scale to six data items with two senses each and a bounded number of examples per test item. Two more recent investigations are by Yarowsky, (Yarowsky, 1995), and later, Mihalcea, (Mihalcea, 2002). Each of the studies, in turn, addresses the issue of data quantity while maintaining good quality training examples. 
Both investigations present algorithms for bootstrapping supervised WSD systems using clean data based on a dictionary or an ontological resource. The general idea is to start with a clean initial seed and iteratively increase the seed size to cover more data. Yarowsky starts with a few tagged instances to train a decision list approach. The initial seed is manually tagged with the correct senses based on entries in Roget’s Thesaurus. The approach yields very successful results — 95% — on a handful of data items. Mihalcea, on the other hand, bases the bootstrapping approach on a generation algorithm, GenCor (Mihalcea&Moldovan, 1999). GenCor creates seeds from monosemous words in WordNet, Semcor data, sense tagged examples from the glosses of polysemous words in WordNet, and other hand tagged data if available. This initial seed set is used for querying the Web for more examples and the retrieved contexts are added to the seed corpus. The words in the contexts of the seed words retrieved are then disambiguated. The disambiguated contexts are then used for querying the Web for yet more examples, and so on. It is an iterative algorithm that incrementally generates large amounts of sense tagged data. The words found are restricted to either part of noun compounds or internal arguments of verbs. Mihalcea’s supervised learning system is an instance-based-learning algorithm. In the study, Mihalcea compares results yielded by the supervised learning system trained on the automatically generated data, GenCor, against the same system trained on manually annotated data. She reports successful results on six of the data items tested. 3 Empirical Layout Similar to Mihalcea’s approach, we compare results obtained by a supervised WSD system for English using manually sense annotated training examples against results obtained by the same WSD system trained on SALAAM sense tagged examples. The test data is the same, namely, the SENSEVAL 2 English Lexical Sample test set. The supervised WSD system chosen here is the University of Maryland System for SENSEVAL 2 Tagging ( ) (Cabezas et al. , 2002). 3.1 The learning approach adopted by is based on Support Vector Machines (SVM). uses SVM
by Joachims (Joachims, 1998).1 For each target word, where a target word is a test item, a family of classifiers is constructed, one for each of the target word senses. All the positive examples for a sense are considered the negative examples of , where
! "$# .(Allwein et al., 2000) In , each target word is considered an independent classification problem. The features used for are mainly contextual features with weight values associated with each feature. The features are space delimited units, 1http://www.ai.cs.uni.dortmund.de/svmlight. tokens, extracted from the immediate context of the target word. Three types of features are extracted: % Wide Context Features: All the tokens in the paragraph where the target word occurs. % Narrow Context features: The tokens that collocate in the surrounding context, to the left and right, with the target word within a fixed window size of & . % Grammatical Features: Syntactic tuples such as verb-obj, subj-verb, etc. extracted from the context of the target word using a dependency parser, MINIPAR (Lin, 1998). Each feature extracted is associated with a weight value. The weight calculation is a variant on the Inverse Document Frequency (IDF) measure in Information Retrieval. The weighting, in this case, is an Inverse Category Frequency (ICF) measure where each token is weighted by the inverse of its frequency of occurrence in the specified context of the target word. 3.1.1 Manually Annotated Training Data The manually-annotated training data is the SENSEVAL2 Lexical Sample training data for the English task, (SV2LS Train).2 This training data corpus comprises 44856 lines and 917740 tokens. There is a close affinity between the test data and the manually annotated training data. The Pearson (' correlation between the sense distributions for the test data and the manually annotated training data, per test item, ranges between )+*-,/.10 .3 3.2 SALAAM SALAAM exploits parallel corpora for sense annotation. The key intuition behind SALAAM is that when words in one language, L1, are translated into the same word in a second language, L2, then those L1 words are semantically similar. For example, when the English — L1 — words bank, brokerage, mortgage-lender translate into the French — L2 — word banque in a parallel corpus, where bank is polysemous, SALAAM discovers that the intended sense for bank is the financial institution sense, not the geological formation sense, based on the fact that it is grouped with brokerage and mortgage-lender. SALAAM’s algorithm is as follows: % SALAAM expects a word aligned parallel corpus as input; 2http://www.senseval.org 3The correlation is measured between two frequency distributions. Throughout this paper, we opt for using the parametric Pearson 2 correlation rather than KL distance in order to test statistical significance. % L1 words that translate into the same L2 word are grouped into clusters; % SALAAM identifies the appropriate senses for the words in those clusters based on the words senses’ proximity in WordNet. The word sense proximity is measured in information theoretic terms based on an algorithm by Resnik (Resnik, 1999); % A sense selection criterion is applied to choose the appropriate sense label or set of sense labels for each word in the cluster; % The chosen sense tags for the words in the cluster are propagated back to their respective contexts in the parallel text. Simultaneously, SALAAM projects the propagated sense tags for L1 words onto their L2 corresponding translations. 3.2.1 Automatically Generated SALAAM Training Data Three sets of SALAAM tagged training corpora are created: % SV2LS TR: English SENSEVAL2 Lexical Sample trial and training corpora with no manual annotations. It comprises 61879 lines and 1084064 tokens. 
% MT: The English Brown Corpus, SENSEVAL1 (trial, training and test corpora), Wall Street Journal corpus, and SENSEVAL 2 All Words corpus. All of which comprise 151762 lines and 37945517 tokens. % HT: UN English corpus which comprises 71672 lines of 1734001 tokens The SALAAM-tagged corpora are rendered in a format similar to that of the manually annotated training data. The automatic sense tagging for MT and SV2LS TR training data is based on using SALAAM with machine translated parallel corpora. The HT training corpus is automatically sense tagged based on using SALAAM with the EnglishSpanish UN naturally occurring parallel corpus. 3.3 Experimental Conditions Experimental conditions are created based on three of SALAAM’s tagging factors, Corpus, Language and Threshold: % Corpus: There are 4 different combinations for the training corpora: MT+SV2LS TR; MT+HT+SV2LS TR; HT+SV2LS TR; or SV2LS TR alone. % Language: The context language of the parallel corpus used by SALAAM to obtain the sense tags for the English training corpus. There are three options: French (FR), Spanish (SP), or, Merged languages (ML), where the results are obtained by merging the English output of FR and SP. % Threshold: Sense selection criterion, in SALAAM, is set to either MAX (M) or THRESH (T). These factors result in 39 conditions.4 3.4 Test Data The test data are the 29 noun test items for the SENSEVAL 2 English Lexical Sample task, (SV2LSTest). The data is tagged with the WordNet 1.7pre (Fellbaum, 1998; Cotton et al. , 2001). The average perplexity for the test items is 3.47 (see Section 5.3), the average number of senses is 7.93, and the total number of contexts for all senses of all test items is 1773. 4 Evaluation In this evaluation, is the system trained with SALAAM-tagged data and is the system trained with manually annotated data. Since we don’t expect to outperform human tagging, the results yielded by , are the upper bound for the purposes of this study. It is important to note that is always trained with SV2LS TR as part of the training set in order to guarantee genre congruence between the training and test sets.The scores are calculated using scorer2.5 The average precision score over all the items for is 65.3% at 100% Coverage. 4.1 Metrics We report the results using two metrics, the harmonic mean of precision and recall, ( ) score, and the Performance Ratio (PR), which we define as the ratio between two precision scores on the same test data where precision is rendered using scorer2. PR is measured as follows: "
PR = precision of the system trained on SALAAM-tagged data / precision of the system trained on manually annotated data (1)
4Originally, there are 48 conditions, 9 of which are excluded due to extreme sparseness in training contexts. 5From http://www.senseval.org, all scorer2 results are reported in fine-grain mode. 4.2 Results Table 1 shows the F scores for the upper bound UMH. UMSb is the condition of the SALAAM-trained system that yields the highest overall score over all noun items. UMSm is the maximum score achievable, if we know which condition yields the best performance per test item, therefore it is an oracle condition.6 Since our approach is unsupervised, we also report the results of other unsupervised systems on this test set. Accordingly, the last seven row entries in Table 1 present state-of-the-art SENSEVAL2 unsupervised systems performance on this test set.7 System and F score (%): UMH 65.3, UMSb 36.02, UMSm 45.1, ITRI 45, UNED-LS-U 40.1, CLRes 29.3, IIT2(R) 24.4, IIT1(R) 23.9, IIT2 23.2, IIT1 22. Table 1: F scores on SV2LS Test for UMH, UMSb, UMSm, and state-of-the-art unsupervised systems participating in the SENSEVAL2 English Lexical Sample task. All of the unsupervised methods including UMSb and UMSm are significantly below the supervised method, UMH. UMSb is the third in the unsupervised methods. It is worth noting that the average score across the 39 conditions is & &*.-0/ , and the lowest is &+0 * 01- . The five best conditions of the SALAAM-trained system, that yield the highest average across all test items, use the HT corpus in the training data, four of which are the result of merged languages in SALAAM indicating that evidence from different languages simultaneously is desirable. UMSm is the maximum potential among all unsupervised approaches if the best of all the conditions are combined. One of our goals is to automatically determine which condition or set of conditions yield the best results for each test item. Of central interest in this paper is the performance ratio (PR) for the individual nouns. Table
sorted in descending order by + , PR scores. A 0 * ) ) PR indicates an equivalent performance between and . The highest PR values are highlighted in bold. Nouns #Ss UMH% UMSb UMSm detention 4 65.6 1.00 1.05 chair 7 83.3 1.02 1.02 bum 4 85 0.14 1.00 dyke 2 89.3 1.00 1.00 fatigue 6 80.5 1.00 1.00 hearth 3 75 1.00 1.00 spade 6 75 1.00 1.00 stress 6 50 0.05 1.00 yew 3 78.6 1.00 1.00 art 17 47.9 0.98 0.98 child 7 58.7 0.93 0.97 material 16 55.9 0.81 0.92 church 6 73.4 0.75 0.77 mouth 10 55.9 0 0.73 authority 9 62 0.60 0.70 post 12 57.6 0.66 0.66 nation 4 78.4 0.34 0.59 feeling 5 56.9 0.33 0.59 restraint 8 60 0.2 0.56 channel 7 62 0.52 0.52 facility 5 54.4 0.32 0.51 circuit 13 62.7 0.44 0.44 nature 7 45.7 0.43 0.43 bar 19 60.9 0.20 0.30 grip 6 58.8 0.27 0.27 sense 8 39.6 0.24 0.24 lady 8 72.7 0.09 0.16 day 16 62.5 0.06 0.08 holiday 6 86.7 0.08 0.08 Table 2: The number of senses per item, in column #Ss, precision performance per item as indicated in column UMH, PR scores for )* in column UMSb and + , in column UMSm on SV2LS Test + , yields PR scores 2$)+*-,+0 for the top 12 test items listed in Table 2. Our algorithm does as well as supervised algorithm, , on 41.6% of this test set. In , 31% of the test items, (9 nouns yield PR scores 2 )+*-,43 ), do as well as . This is an improvement of 11% absolute over state-of-the-art bootstrapping WSD algorithm yielded by Mihalcea (Mihalcea, 2002). Mihalcea reports high PR scores for six test items only: art, chair, channel, church, detention, nation. It is worth highlighting that her bootstrapping approach is partially supervised since it depends mainly on hand labelled data as a seed for the training data. Interestingly, two nouns, detention and chair, yield better performance than , as indicated by the PRs 0 * ) and 0 * ) 3 , respectively. This is attributed to the fact that SALAAM produces a lot more correctly annotated training data for these two words than that provided in the manually annotated training data for . Some nouns yield very poor PR values mainly due to the lack of training contexts, which is the case for mouth in )* , for example. Or lack of coverage of all the senses in the test data such as for bar and day, or simply errors in the annotation of the SALAAM-tagged training data. If we were to include only nouns that achieve acceptable PR scores of )+*.- — the first 16 nouns in Table 2 for , — the overall potential precision of is significantly increased to 63.8% and the overall precision of is increased to 68.4%.8 These results support the idea that we could replace hand tagging with SALAAM’s unsupervised tagging if we did so for those items that yield an acceptable PR score. But the question remains: How do we predict which training/test items will yield acceptable PR scores? 5 Factors Affecting Performance Ratio In an attempt to address this question, we analyze several different factors for their impact on the performance of quanitified as PR. In order to effectively alleviate the sense annotation acquisition bottleneck, it is crucial to predict which items would be reliably annotated automatically using . Accordingly, in the rest of this paper, we explore 7 different factors by examining the yielded PR values in + , . 5.1 Number of Senses The test items that possess many senses, such as art (17 senses), material (16 senses), mouth (10 senses) and post (12 senses), exhibit PRs of 0.98, 0.92, 0.73 and 0.66, respectively. Overall, the correlation between number of senses per noun and its PR score is an insignificant ' " )+*-&+0 , / 0
3 " 3*-, 2 )+* 0 . Though it is a weak negative correlation, it does suggest that when the number of senses increases, PR tends to decrease. 5.2 Number of Training Examples This is a characteristic of the training data. We examine the correlation between the PR and the num8A PR of
is considered acceptable since achieves an overall score of
! in the WSD task. ber of training examples available to for each noun in the training data. The correlation between the number of training examples and PR is insignificant at ' "" )+* 0 , / 0
3 " )+*.&# 2 )+* / . More interestingly, however, spade, with only 5 training examples, yields a PR score of 0 * ) . This contrasts with nation, which has more than 4200 training examples, but yields a low PR score of )+*$ , . Accordingly, the number of training examples alone does not seem to have a direct impact on PR. 5.3 Sense Perplexity This factor is a characteristic of the training data. Perplexity is 3%'& )(+*,. Entropy is measured as follows: /. "10 243 /5 6 7 /5 (2) where 5 is a sense for a polysemous noun and . is the set of all its senses. Entropy is a measure of confusability in the senses’ contexts distributions; when the distribution is relatively uniform, entropy is high. A skew in the senses’ contexts distributions indicates low entropy, and accordingly, low perplexity. The lowest possible perplexity is 0 , corresponding to ) entropy. A low sense perplexity is desirable since it facilitates the discrimination of senses by the learner, therefore leading to better classification. In the SALAAMtagged training data, for example, bar has the highest perplexity value of ,*$8 over its 19 senses, while day, with 16 senses, has a much lower perplexity of 0 *-& . Surprisingly, we observe nouns with high perplexity such as bum (sense perplexity value of &* ) & ) achieving PR scores of 0 * ) . While nouns with relatively low perplexity values such as grip (sense perplexity of )+*$ & ) yields a low PR score of )+*.34- . Moreover, nouns with the same perplexity and similar number of senses yield very different PR scores. For example, examining holiday and child, both have the same perplexity of 3* 0 /4/ and the number of senses is close, with 6 and 7 senses, respectively, however, the PR scores are very different; holiday yields a PR of )+* )8 , and child achieves a PR of )+*-, . Furthermore, nature and art have the same perplexity of 3*.3 , ; art has 17 senses while nature has 7 senses only, nonetheless, art yields a much higher PR score of ( )+*-,8 ) compared to a PR of )+* /4/ for nature. These observations are further solidified by the insignificant correlation of ' " )+* 013 , / 0
3 " )+* /9 2 )+*$ between sense perplexity and PR. At first blush, one is inclined to hypothesize that, the combination of low perplexity associated with a large number of senses — as an indication of high skew in the distribution — is a good indicator of high PR, but reviewing the data, this hypothesis is dispelled by day which has 16 senses and a sense perplexity of 0 *-& , yet yields a low PR score of )+* )8 . 5.4 Semantic Translation Entropy Semantic translation entropy (STE) (Melamed, 1997) is a special characteristic of the SALAAMtagged training data, since the source of evidence for SALAAM tagging is multilingual translations. STE measures the amount of translational variation for an L1 word in L2, in a parallel corpus. STE is a variant on the entropy measure. STE is expressed as follows: "" 0 2 ( * 6 7 ( (3) where is a translation in the set of possible translations in L2; and is L1 word. The probability of a translation is calculated directly from the alignments of the test nouns and their corresponding translations via the maximum likelihood estimate. Variation in translation is beneficial for SALAAM tagging, therefore, high STE is a desirable feature. Correlation between the automatic tagging precision and STE is expected to be high if SALAAM has good quality translations and good quality alignments. However, this correlation is a low ' " )+*-& & . Consequently, we observe a low correlation between STE and PR, ' " )+*.343 , / 0
3 " 0 *-&+0 2 )+*.34 . Examining the data, the nouns bum, detention, dyke, stress, and yew exhibit both high STE and high PR; Moreover, there are several nouns that exhibit low STE and low PR. But the intriguing items are those that are inconsistent. For instance, child and holiday: child has an STE of )+* )8 and comprises 7 senses at a low sense perplexity of 0 *., , yet yields a high PR of )+*-, . As mentioned earlier, low STE indicates lack of translational variation. In this specific experimental condition, child is translated as enfant, enfantile, ni˜no, ni˜no-peque˜no , which are words that preserve ambiguity in both French and Spanish. On the other hand, holiday has a relatively high STE value of )+*.-4- , yet results in the lowest PR of )+* )8 . Consequently, we conclude that STE alone is not a good direct indicator of PR. 5.5 Perplexity Difference Perplexity difference (PerpDiff) is a measure of the absolute difference in sense perplexity between the test data items and the training data items. For the manually annotated training data items, the overall correlation between the perplexity measures is a significant ' " )+*-,4which contrasts to a low overall correlation of ' " )+* / & between the SALAAMtagged training data items and the test data items. Across the nouns in this study, the correlation between PerpDiff and PR is ' " )+* / . It is advantageous to be as similar as possible to the training data to guarantee good classification results within a supervised framework, therefore a low PerpDiff is desirable. We observe cases with a low PerpDiff such as holiday (PerpDiff of )+* ) ), yet the PR is a low )+* )8 . On the other hand, items such as art have a relatively high PerpDiff of 3*.-43 , but achieves a high PR of )+*-, . Accordingly, PerpDiff alone is not a good indicator of PR. 5.6 Sense Distributional Correlation Sense Distributional Correlation (SDC) results from comparing the sense distributions of the test data items with those of SALAAM-tagged training data items. It is worth noting that the correlation between the SDC of manually annotated training data and that of the test data ranges from ' " )+*-, 0 * ) . A strong significant correlation of ' " )+*$8 , / 0
3 " 8 ) )+* ) ) )0 between SDC and PR exists for SALAAM-tagged training data and the test data. Overall, nouns that yield high PR have high SDC values. However, there are some instances where this strong correlation is not exhibited. For example, circuit and post have relatively high SDC values, )+*7 ,0/ and )+*$8 , , respectively, in + , , but they score lower PR values than detention which has a comparatively lower SDC value of )+*7 - . The fact that both circuit and post have many senses, 13 and 12, respectively, while detention has 4 senses only is noteworthy. detention has a higher STE and lower sense perplexity than either of them however. Overall, the data suggests that SDC is a very good direct indicator of PR. 5.7 Sense Context Confusability A situation of sense context confusability (SCC) arises when two senses of a noun are very similar and are highly uniformly represented in the training examples. This is an artifact of the fine granularity of senses in WordNet 1.7pre. Highly similar senses typically lead to similar usages, therefore similar contexts, which in a learning framework detract from the learning algorithm’s discriminatory power. Upon examining the 29 polysemous nouns in the training and test sets, we observe that a significant number of the words have similar senses according to a manual grouping provided by Palmer, in 2002.9 For example, senses 2 and 3 of nature, meaning trait and quality, respectively, are considered similar by the manual grouping. The manual grouping does not provide total coverage of all the noun senses in this test set. For instance, it only considers the homonymic senses 1, 2 and 3 of spade, yet, in the current test set, spade has 6 senses, due to the existence of sub senses. 26 of the 29 test items exhibit multiple groupings based on the manual grouping. Only three nouns, detention, dyke, spade do not have any sense groupings. They all, in turn, achieve high PR scores of 0 * ) . There are several nouns that have relatively high SDC values yet their performance ratios are low such as post, nation, channel and circuit. For instance, nation has a very high SDC value of )+*-,4-43 , a low sense perplexity of 0 *-& — relatively close to the 0 *.- sense perplexity of the test data — a sufficient number of contexts (4350), yet it yields a PR of )+*$ , . According to the manual sense grouping, senses 1 and 3 are similar, and indeed, upon inspection of the context distributions, we find the bulk of the senses’ instance examples in the SALAAMtagged training data for the condition that yields this PR in + , are annotated with either sense 1 or sense 3, thereby creating confusable contexts for the learning algorithm. All the cases of nouns that achieve high PR and possess sense groups do not have any SCC in the training data which strongly suggests that SCC is an important factor to consider when predicting the PR of a system. 5.8 Discussion We conclude from the above exploration that SDC and SCC affect PR scores directly. PerpDiff, STE, and Sense Perplexity, number of senses and number of contexts seem to have no noticeable direct impact on the PR. Based on this observation, we calculate the SDC values for all the training data used in our experimental conditions for the 29 test items. Table 3 illustrates the items with the highest SDC values, in descending order, as yielded from any of the SALAAM conditions. We use an empirical cut-off value of )+*7 for SDC. 
The SCC values are reported as a boolean Y/N value, where a Y indicates the presence of a sense confusable context. As shown a high SDC can serve as a means of auto9http://www.senseval.org/sense-groups. The manual sense grouping comprises 400 polysemous nouns including the 29 nouns in this evaluation. Noun SDC SCC PR dyke 1 N 1.00 bum 1 N 1.00 fatigue 1 N 1.00 hearth 1 N 1.00 yew 1 N 1.00 chair 0.99 N 1.02 child 0.99 N 0.95 detention 0.98 N 1.0 spade 0.97 N 1.00 mouth 0.96 Y 0.73 nation 0.96 N 0.59 material 0.92 N 0.92 post 0.90 Y 0.63 authority 0.86 Y 0.70 art 0.83 N 0.98 church 0.80 N 0.77 circuit 0.79 N 0.44 stress 0.77 N 1.00 Table 3: Highest SDC values for the test items associated with their respective SCC and PR values.11 matically predicting a high PR, but it is not sufficient. If we eliminate the items where an SCC exists, namely, mouth, post, and authority, we are still left with nation and circuit, where both yield very low PR scores. nation has the desirable low PerpDiff of )+*.343 . The sense annotation tagging precision of the 3 in this condition which yields the highest SDC — Spanish UN data with the 3 for training — is a low & )+* / and a low STE value of )+* 013 , . This is due to the fact that both French and Spanish preserve ambiguity in similar ways to English which does not make it a good target word for disambiguation within the SALAAM framework, given these two languages as sources of evidence. Accordingly, in this case, STE coupled with the noisy tagging could have resulted in the low PR. However, for circuit, the STE value for its respective condition is a high )+*.3 ,+0 , but we observe a relatively high PerpDiff of 0 *$ & compared to the PerpDiff of ) for the manually annotated data. Therefore, a combination of high SDC and nonexistent SCC can reliably predict good PR. But the other factors still have a role to play in order to achieve accurate prediction. It is worth emphasizing that two of the identified factors are dependent on the test data in this study, SDC and PerpDiff. One solution to this problem is to estimate SDC and PerpDiff using a held out data set that is hand tagged. Such a held out data set would be considerably smaller than the required size of a manually tagged training data for a classical supervised WSD system. Hence, SALAAMtagged training data offers a viable solution to the annotation acquisition bottleneck. 6 Conclusion and Future Directions In this paper, we applied an unsupervised approach within a learning framework for the sense annotation of large amounts of data. The ultimate goal of is to alleviate the data labelling bottleneck by means of a trade-off between quality and quantity of the training data. is competitive with state-of-the-art unsupervised systems evaluated on the same test set from SENSEVAL2. Moreover, it yields superior results to those obtained by the only comparable bootstrapping approach when tested on the same data set. Moreover, we explore, in depth, different factors that directly and indirectly affect the performance of quantified as a performance ratio, PR. Sense Distribution Correlation (SDC) and Sense Context Confusability (SCC) have the highest direct impact on performance ratio, PR. However, evidence suggests that probably a confluence of all the different factors leads to the best prediction of an acceptable PR value. 
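A minimal sketch of three of the factors examined above, assuming each distribution is available as a dictionary of counts (sense -> count for perplexity and SDC, translation -> count for STE); log base 2 is assumed for entropy, and SDC is computed as Pearson's r over per-sense counts, which yields the same value as using relative frequencies.

    import math

    def _entropy(counts):
        n = sum(counts.values())
        return -sum(c / n * math.log2(c / n) for c in counts.values() if c)

    def sense_perplexity(sense_counts):
        # Section 5.3: perplexity 2**H of a noun's sense distribution
        return 2 ** _entropy(sense_counts)

    def translation_entropy(translation_counts):
        # Section 5.4 (STE): entropy over the L2 translations aligned to an L1 noun,
        # with p(t|w) taken as the maximum likelihood estimate from the alignments
        return _entropy(translation_counts)

    def sdc(train_sense_counts, test_sense_counts):
        # Section 5.6: Pearson correlation between a noun's sense distribution in the
        # SALAAM-tagged training data and its sense distribution in the test data
        senses = sorted(set(train_sense_counts) | set(test_sense_counts))
        x = [train_sense_counts.get(s, 0) for s in senses]
        y = [test_sense_counts.get(s, 0) for s in senses]
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

Combining a high SDC with the absence of sense context confusability, as suggested in the discussion, then amounts to a simple filter over these per-noun scores, with SCC itself requiring a sense-group resource rather than a formula.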
An investigation into the feasibility of combining these different factors with the different attributes of the experimental conditions for SALAAM to automatically predict when the noisy training data can reliably replace manually annotated data is a matter of future work. 7 Acknowledgements I would like to thank Philip Resnik for his guidance and insights that contributed tremendously to this paper. Also I would like to acknowledge Daniel Jurafsky and Kadri Hacioglu for their helpful comments. I would like to thank the three anonymous reviewers for their detailed reviews. This work has been supported, in part, by NSF Award #IIS0325646. References Erin L. Allwein, Robert E. Schapire, and Yoram Singer. 2000. Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113-141. Clara Cabezas, Philip Resnik, and Jessica Stevens. 2002. Supervised Sense Tagging using Support Vector Machines. Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Systems (SENSEVAL-2). Toulouse, France. Scott Cotton, Phil Edmonds, Adam Kilgarriff, and Martha Palmer, ed. 2001. SENSEVAL-2: Second International Workshop on Evaluating Word Sense Disambiguation Systems. ACL SIGLEX, Toulouse, France. Mona Diab. 2004. An Unsupervised Approach for Bootstrapping Arabic Word Sense Tagging. Proceedings of Arabic Based Script Languages, COLING 2004. Geneva, Switzerland. Mona Diab and Philip Resnik. 2002. An Unsupervised Method for Word Sense Tagging Using Parallel Corpora. Proceedings of 40th meeting of ACL. Pennsylvania, USA. Mona Diab. 2003. Word Sense Disambiguation Within a Multilingual Framework. PhD Thesis. University of Maryland College Park, USA. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press. William A. Gale and Kenneth W. Church and David Yarowsky. 1992. Using Bilingual Materials to Develop Word Sense Disambiguation Methods. Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation. Montr´eal, Canada. Thorsten Joachims. 1998. Text Categorization with Support Vector Machines: Learning with Many Relevant Features. Proceedings of the European Conference on Machine Learning. Springer. A. Kilgarriff and J. Rosenzweig. 2000. Framework and Results for English SENSEVAL. Journal of Computers and the Humanities. pages 15—48, 34. Dekang Lin. 1998. Dependency-Based Evaluation of MINIPAR. Proceedings of the Workshop on the Evaluation of Parsing Systems, First International Conference on Language Resources and Evaluation. Granada, Spain. Dan I. Melamed. 1997. Measuring Semantic Entropy. ACL SIGLEX, Washington, DC. Rada Mihalcea and Dan Moldovan. 1999. A method for Word Sense Disambiguation of unrestricted text. Proceedings of the 37th Annual Meeting of ACL. Maryland, USA. Rada Mihalcea. 2002. Bootstrapping Large sense tagged corpora. Proceedings of the 3rd International Conference on Languages Resources and Evaluations (LREC). Las Palmas, Canary Islands, Spain. Philip Resnik. 1999. Semantic Similarity in a Taxonomy: An Information-Based Measure and its Application to Problems of Ambiguity in Natural Language. Journal Artificial Intelligence Research. (11) p. 95130. David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. Proceedings of the 33rd Annual Meeting of ACL. Cambridge, MA. | 2004 | 39 |
Analysis of Mixed Natural and Symbolic Language Input in Mathematical Dialogs Magdalena Wolska Ivana Kruijff-Korbayová Fachrichtung Computerlinguistik Universität des Saarlandes, Postfach 15 11 50 66041 Saarbrücken, Germany {magda,korbay}@coli.uni-sb.de Abstract Discourse in formal domains, such as mathematics, is characterized by a mixture of telegraphic natural language and embedded (semi-)formal symbolic mathematical expressions. We present language phenomena observed in a corpus of dialogs with a simulated tutorial system for proving theorems as evidence for the need for deep syntactic and semantic analysis. We propose an approach to input understanding in this setting. Our goal is a uniform analysis of inputs of different degrees of verbalization: ranging from symbolic alone to fully worded mathematical expressions. 1 Introduction Our goal is to develop a language understanding module for a flexible dialog system tutoring mathematical problem solving, in particular, theorem proving (Benzmüller et al., 2003a).1 As empirical findings in the area of intelligent tutoring show, flexible natural language dialog supports active learning (Moore, 1993). However, little is known about the use of natural language in a dialog setting in formal domains, such as mathematics, due to the lack of empirical data. To fill this gap, we collected a corpus of dialogs with a simulated tutorial dialog system for teaching proofs in naive set theory. An investigation of the corpus reveals various phenomena that present challenges for such input understanding techniques as shallow syntactic analysis combined with keyword spotting, or statistical methods, e.g., Latent Semantic Analysis, which are commonly employed in (tutorial) dialog systems. The prominent characteristics of the language in our corpus include: (i) tight interleaving of natural and symbolic language, (ii) varying degree of natural language verbalization of the formal mathematical content, and (iii) informal and/or imprecise reference to mathematical concepts and relations. These phenomena motivate the need for deep syntactic and semantic analysis in order to ensure correct mapping of the surface input to the underlying proof representation. An additional methodological desideratum is to provide a uniform treatment of the different degrees of verbalization of the mathematical content. By designing one grammar which allows a uniform treatment of the linguistic content on a par with the mathematical content, one can aim at achieving a consistent analysis void of example-based heuristics. We present such an approach to analysis here. The paper is organized as follows: In Section 2, we summarize relevant existing approaches to input analysis in (tutorial) dialog systems on the one hand and analysis of mathematical discourse on the other. Their shortcomings with respect to our setting become clear in Section 3 where we show examples of language phenomena from our dialogs. In Section 4, we propose an analysis methodology that allows us to capture any mixture of natural and mathematical language in a uniform way. We show example analyses in Section 5. In Section 6, we conclude and point out future work issues. 1This work is carried out within the DIALOG project: a collaboration between the Computer Science and Computational Linguistics departments of the Saarland University, within the Collaborative Research Center on Resource-Adaptive Cognitive Processes, SFB 378 (www.coli.uni-sb.de/sfb378).
2 Related work Language understanding in dialog systems, be it with text or speech interface, is commonly performed using shallow syntactic analysis combined with keyword spotting. Tutorial systems also successfully employ statistical methods which compare student responses to a model built from preconstructed gold-standard answers (Graesser et al., 2000). This is impossible for our dialogs, due to the presence of symbolic mathematical expressions. Moreover, the shallow techniques also remain oblivious of such aspects of discourse meaning as causal relations, modality, negation, or scope of quantifiers which are of crucial importance in our setting. When precise understanding is needed, tutorial systems either use menu- or template-based input, or use closed-questions to elicit short answers of little syntactic variation (Glass, 2001). However, this conflicts with the preference for flexible dialog in active learning (Moore, 1993). With regard to interpreting mathematical texts, (Zinn, 2003) and (Baur, 1999) present DRT analyses of course-book proofs. However, the language in our dialogs is more informal: natural language and symbolic mathematical expressions are mixed more freely, there is a higher degree and more variety of verbalization, and mathematical objects are not properly introduced. Moreover, both above approaches rely on typesetting and additional information that identifies mathematical symbols, formulae, and proof steps, whereas our input does not contain any such information. Forcing the user to delimit formulae would reduce the flexibility of the system, make the interface harder to use, and might not guarantee a clean separation of the natural language and the non-linguistic content anyway. 3 Linguistic data In this section, we first briefly describe the corpus collection experiment and then present the common language phenomena found in the corpus. 3.1 Corpus collection 24 subjects with varying educational background and little to fair prior mathematical knowledge participated in a Wizard-of-Oz experiment (Benzm¨uller et al., 2003b). In the tutoring session, they were asked to prove 3 theorems2: (i)
; (ii) ! "# ; (iii) $&%(' *)+-,
.0/1' 2 . To encourage dialog with the system, the subjects were instructed to enter proof steps, rather than complete proofs at once. Both the subjects and the tutor were free in formulating their turns. Buttons were available in the interface for inserting mathematical symbols, while literals were typed on the keyboard. The dialogs were typed in German. The collected corpus consists of 66 dialog logfiles, containing on average 12 turns. The total number of sentences is 1115, of which 393 are student sentences. The students’ turns consisted on average of 1 sentence, the tutor’s of 2. More details on the corpus itself and annotation efforts that guide the development of the system components can be found in (Wolska et al., 2004). 2 3 stands for set complement and 4 for power set. 3.2 Language phenomena To indicate the overall complexity of input understanding in our setting, we present an overview of common language phenomena in our dialogs.3 In the remainder of this paper, we then concentrate on the issue of interleaved natural language and mathematical expressions, and present an approach to processing this type of input. Interleaved natural language and formulae Mathematical language, often semi-formal, is interleaved with natural language informally verbalizing proof steps. In particular, mathematical expressions (or parts thereof) may lie within the scope of quantifiers or negation expressed in natural language: A auch 57698;:1< [ =?>@ ACBD5?EF8HG1< ] A I B ist J von C K (A I B) [... is J of . . . ] (da ja A I B= L ) [(because A I B= L )] B enthaelt kein x J A [B contains no x J A] For parsing, this means that the mathematical content has to be identified before it is interpreted within the utterance. Imprecise or informal naming Domain relations and concepts are described informally using imprecise and/or ambiguous expressions. A enthaelt B [A contains B] A muss in B sein [A must be in B] where contain and be in can express the domain relation of either subset or element; B vollstaendig ausserhalb von A liegen muss, also im Komplement von A [B has to be entirely outside of A, so in the complement of A] dann sind A und B (vollkommen) verschieden, haben keine gemeinsamen Elemente [then A and B are (completely) different, have no common elements] where be outside of and be different are informal descriptions of the empty intersection of sets. To handle imprecision and informality, we constructed an ontological knowledge base containing domain-specific interpretations of the predicates (Horacek and Wolska, 2004). Discourse deixis Anaphoric expressions refer deictically to pieces of discourse: der obere Ausdruck [the above term] der letzte Satz [the last sentence] Folgerung aus dem Obigen [conclusion from the above] aus der regel in der zweiten Zeile [from the rule in the second line] 3As the tutor was also free in wording his turns, we include observations from both student and tutor language behavior. In the presented examples, we reproduce the original spelling. In our domain, this class of referring expressions also includes references to structural parts of terms and formulae such as “the left side” or “the inner parenthesis” which are incomplete specifications: the former refers to a part of an equation, the latter, metonymic, to an expression enclosed in parenthesis. Moreover, these expressions require discourse referents for the sub-parts of mathematical expressions to be available. Generic vs. 
specific reference Generic and specific references can appear within one utterance: Potenzmenge enthaelt alle Teilmengen, also auch (A I B) [A power set contains all subsets, hence also(A I B)] where “a power set” is a generic reference, whereas “ ” is a specific reference to a subset of a specific instance of a power set introduced earlier. Co-reference4 Co-reference phenomena specific to informal mathematical discourse involve (parts of) mathematical expressions within text. Da, wenn 5
698;:*< sein soll, Element von 698 : < sein muss. Und wenn : 5
698 < sein soll, muss auch Element von 698 < sein. [Because if it should be that 5
698 : < , must be an element of 698;:*< . And if it should be that : 5
698 < , it must be an element of 698 < as well.] Entities denoted with the same literals may or may not co-refer: DeMorgan-Regel-2 besagt: 698
I : < = 698 < K 698;: < In diesem Fall: z.B. 698 H< = dem Begriff 698 K&: ) 698;:*< = dem Begriff 698K < [DeMorgan-Regel-2 means: 698
I : ) 698 < K 698;:*< In this case: e.g. 698 H< = the term 698 K : < 698;:*< = the term 698K < ] Informal descriptions of proof-step actions Sometimes, “actions” involving terms, formulae or parts thereof are verbalized before the appropriate formal operation is performed: Wende zweimal die DeMorgan-Regel an [I’m applying DeMorgan rule twice] damit kann ich den oberen Ausdruck wie folgt schreiben:. . . [given this I can write the upper term as follows:. . . ] The meaning of the “action verbs” is needed for the interpretation of the intended proof-step. Metonymy Metonymic expressions are used to refer to structural sub-parts of formulae, resulting in predicate structures acceptable informally, yet incompatible in terms of selection restrictions. Dann gilt fuer die linke Seite, wenn ! #"$% &'(% , der Begriff A B dann ja schon dadrin und ist somit auch Element davon [Then for the left hand side it holds that..., the term A B is already there, and so an element of it] 4To indicate co-referential entities, we inserted the indices which are not present in the dialog logfiles. where the predicate hold, in this domain, normally takes an argument of sort CONST, TERM or FORMULA, rather than LOCATION; de morgan regel 2 auf beide komplemente angewendet [de morgan rule 2 applied to both complements] where the predicate apply takes two arguments: one of sort RULE and the other of sort TERM or FORMULA, rather than OPERATION ON SETS. In the next section, we present our approach to a uniform analysis of input that consists of a mixture of natural language and mathematical expressions. 4 Uniform input analysis strategy The task of input interpretation is two-fold. Firstly, it is to construct a representation of the utterance’s linguistic meaning. Secondly, it is to identify and separate within the utterance: (i) parts which constitute meta-communication with the tutor, e.g.: Ich habe die Aufgabenstellung nicht verstanden. [I don’t understand what the task is.] (ii) parts which convey domain knowledge that should be verified by a domain reasoner; for example, the entire utterance ) *(! + ist laut deMorgan-1 ) , & ) [. . . is, according to deMorgan-1,. . . ] can be evaluated; on the other hand, the domain reasoner’s knowledge base does not contain appropriate representations to evaluate the correctness of using, e.g., the focusing particle “also”, as in: Wenn A = B, dann ist A auch ) und B ) , . [If A = B, then A is also ) and B ) , .] Our goal is to provide a uniform analysis of inputs of varying degrees of verbalization. This is achieved by the use of one grammar that is capable of analyzing utterances that contain both natural language and mathematical expressions. Syntactic categories corresponding to mathematical expressions are treated in the same way as those of linguistic lexical entries: they are part of the deep analysis, enter into dependency relations and take on semantic roles. The analysis proceeds in 2 stages: 1. After standard pre-processing,5 mathematical expressions are identified, analyzed, categorized, and substituted with default lexicon entries encoded in the grammar (Section 4.1). 5Standard pre-processing includes sentence and word tokenization, (spelling correction and) morphological analysis, part-of-speech tagging. = A B C D A B C D Figure 1: Tree representation of the formula 7 ) 2. Next, the input is syntactically parsed, and a representation of its linguistic meaning is constructed compositionally along with the parse (Section 4.2). 
The obtained linguistic meaning representation is subsequently merged with discourse context and interpreted by consulting a semantic lexicon of the domain and a domain-specific knowledge base (Section 4.3). If the syntactic parser fails to produce an analysis, a shallow chunk parser and keyword-based rules are used to attempt partial analysis and build a partial representation of the predicate-argument structure. In the next sections, we present the procedure of constructing the linguistic meaning of syntactically well-formed utterances. 4.1 Parsing mathematical expressions The task of the mathematical expression parser is to identify mathematical expressions. The identified mathematical expressions are subsequently verified as to syntactic validity and categorized. Implementation Identification of mathematical expressions within word-tokenized text is performed using simple indicators: single character tokens (with the characters and standing for power set and set complement respectively), mathematical symbol unicodes, and new-line characters. The tagger converts the infix notation used in the input into an expression tree from which the following information is available: surface sub-structure (e.g., “left side” of an expression, list of sub-expressions, list of bracketed sub-expressions) and expression type based on the top level operator (e.g., CONST, TERM, FORMULA 0 FORMULA (formula missing left argument), etc.). For example, the expression ) is represented by the formula tree in Fig. 1. The bracket subscripts indicate the operators heading sub-formulae enclosed in parenthesis. Given the expression’s top node operator, =, the expression is of type formula, its “left side” is the expression F , the list of bracketed sub-expressions includes: A B, C D, " , etc. Evaluation We have conducted a preliminary evaluation of the mathematical expression parser. Both the student and tutor turns were included to provide more data for the evaluation. Of the 890 mathematical expressions found in the corpus (432 in the student and 458 in the tutor turns), only 9 were incorrectly recognized. The following classes of errors were detected:6 1. P((A K C) I (B K C)) =PC K (A I B) P((A K C) I (B K C))=PC K (A I B) 2. a. (A 5 U und B 5 U) b. (da ja A I B= L ) ( A 5 U und B 5 U ) (da ja A I B= L ) 3. K((A K B) I (C K D)) = K(A ? B) ? K(C ? D) K((A K B) I (C K D)) = K(A ? B) ? K(C ? D) 4. Gleiches gilt mit D (K(C I D)) K (K(A I B)) Gleiches gilt mit D (K(C I D)) K (K(A I B)) [The same holds with . . . ] The examples in (1) and (2) have to do with parentheses. In (1), the student actually omitted them. The remedy in such cases is to ask the student to correct the input. In (2), on the other hand, no parentheses are missing, but they are ambiguous between mathematical brackets and parenthetical statement markers. The parser mistakenly included one of the parentheses with the mathematical expressions, thereby introducing an error. We could include a list of mathematical operations allowed to be verbalized, in order to include the logical connective in (2a) in the tagged formula. But (2b) shows that this simple solution would not remedy the problem overall, as there is no pattern as to the amount and type of linguistic material accompanying the formulae in parenthesis. We are presently working on ways to identify the two uses of parentheses in a pre-processing step. In (3) the error is caused by a non-standard character, “?”, found in the formula. 
In (4) the student omitted punctuation causing the character “D” to be interpreted as a nonstandard literal for naming an operation on sets. 4.2 Deep analysis The task of the deep parser is to produce a domainindependent linguistic meaning representation of syntactically well-formed sentences and fragments. By linguistic meaning (LM), we understand the dependency-based deep semantics in the sense of the Prague School notion of sentence meaning as employed in the Functional Generative Description 6Incorrect tagging is shown along with the correct result below it, following an arrow. (FGD) (Sgall et al., 1986; Kruijff, 2001). It represents the literal meaning of the utterance rather than a domain-specific interpretation.7 In FGD, the central frame unit of a sentence/clause is the head verb which specifies the tectogrammatical relations (TRs) of its dependents (participants). Further distinction is drawn into inner participants, such as Actor, Patient, Addressee, and free modifications, such as Location, Means, Direction. Using TRs rather than surface grammatical roles provides a generalized view of the correlations between domain-specific content and its linguistic realization. We use a simplified set of TRs based on (Hajiˇcov´a et al., 2000). One reason for simplification is to distinguish which relations are to be understood metaphorically given the domain sub-language. In order to allow for ambiguity in the recognition of TRs, we organize them hierarchically into a taxonomy. The most commonly occurring relations in our context, aside from the inner participant roles of Actor and Patient, are Cause, Condition, and ResultConclusion (which coincide with the rhetorical relations in the argumentative structure of the proof), for example: Da [A ) gilt] CAUSE , alle x, die in A sind sind nicht in B [As A ) applies, all x that are in A are not in B] Wenn [A ) ] COND , dann A B= [If A ) ! , then A B= ] Da ) gilt, [alle x, die in A sind sind nicht in B] RES Wenn A ) ! , dann [A B= ] RES Other commonly found TRs include NormCriterion, e.g. [nach deMorgan-Regel-2] NORM ist ) + & =...) [according to De Morgan rule 2 it holds that ...] ) *(! + ist [laut DeMorgan-1] NORM ( ) , ) ! ) [. . . equals, according to De Morgan rule1, . . . ] We group other relations into sets of HasProperty, GeneralRelation (for adjectival and clausal modification), and Other (a catch-all category), for example: dann muessen alla A und B [in C] PROP-LOC enthalten sein [then all A and B have to be contained in C] Alle x, [die in B sind] GENREL . . . [All x that are in B...] alle elemente [aus A] PROP-FROM sind in ) enthalten [all elements from A are contained in ) ! ] Aus A - U B folgt [mit A B= ] OTHER , B - U A. [From A - U B follows with A B= , that B - U A] 7LM is conceptually related to logical form, however, differs in coverage: while it does operate on the level of deep semantic roles, such aspects of meaning as the scope of quantifiers or interpretation of plurals, synonymy, or ambiguity are not resolved. where PROP-LOC denotes the HasProperty relation of type Location, GENREL is a general relation as in complementation, and PROP-FROM is a HasProperty relation of type Direction-From or From-Source. More details on the investigation into tectogrammatical relations that build up linguistic meaning of informal mathematical text can be found in (Wolska and Kruijff-Korbayov´a, 2004a). 
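To make the first analysis stage concrete, the following minimal Python sketch mimics what the mathematical expression tagger of Section 4.1 provides: it turns a simplified infix expression into a tree from which the top-level operator, the expression type and the "left side" can be read off. It is an illustration only, not the tagger used in the system: identifiers are restricted to single letters, only a few binary set operators are covered, and unary operators such as complement and power set are omitted.

PREC = {"=": 1, "∈": 2, "⊆": 2, "∪": 3, "∩": 3}   # binary operators, by binding strength

def tokenize(expr):
    # single-character tokens; identifiers are assumed to be single letters
    return [ch for ch in expr if not ch.isspace()]

def parse(tokens):
    # precedence-climbing parser; nodes are (operator, [children]) or ("id", name)
    pos = 0
    def parse_expr(min_prec):
        nonlocal pos
        node = parse_atom()
        while pos < len(tokens) and PREC.get(tokens[pos], 0) >= min_prec:
            op = tokens[pos]
            pos += 1
            right = parse_expr(PREC[op] + 1)
            node = (op, [node, right])
        return node
    def parse_atom():
        nonlocal pos
        if tokens[pos] == "(":
            pos += 1
            node = parse_expr(1)
            assert tokens[pos] == ")", "unbalanced parentheses"
            pos += 1
            return node
        token = tokens[pos]
        pos += 1
        return ("id", token)
    return parse_expr(1)

def expr_type(tree):
    # categorize by the top-level operator, as the tagger does
    if tree[0] in ("=", "∈", "⊆"):
        return "FORMULA"
    if tree[0] in ("∪", "∩"):
        return "TERM"
    return "CONST"

def left_side(tree):
    # the surface sub-structure "left side", available for later deictic reference
    return tree[1][0] if tree[0] in PREC else None

tree = parse(tokenize("(A ∪ B) ∩ C = C"))
print(expr_type(tree))   # FORMULA
print(left_side(tree))   # ('∩', [('∪', [('id', 'A'), ('id', 'B')]), ('id', 'C')])

Once such a tree is available, the whole expression can be substituted by the generic lexical entry of the corresponding type (FORMULA, TERM, CONST) before syntactic parsing, as described above.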
Implementation The syntactic analysis is performed using openCCG8, an open source parser for Multi-Modal Combinatory Categorial Grammar (MMCCG). MMCCG is a lexicalist grammar formalism in which application of combinatory rules is controlled though context-sensitive specification of modes on slashes (Baldridge and Kruijff, 2003). The linguistic meaning, built in parallel with the syntax, is represented using Hybrid Logic Dependency Semantics (HLDS), a hybrid logic representation which allows a compositional, unification-based construction of HLDS terms with CCG (Baldridge and Kruijff, 2002). An HLDS term is a relational structure where dependency relations between heads and dependents are encoded as modal relations. The syntactic categories for a lexical entry FORMULA, corresponding to mathematical expressions of type “formula”, are , , and . For example, in one of the readings of “B enthaelt " ”, “enthaelt” represents the meaning contain taking dependents in the relations Actor and Patient, shown schematically in Fig. 2. enthalten:contain FORMULA: ACT FORMULA:
PAT Figure 2: Tectogrammatical representation of the utterance “B enthaelt ” [B contains ]. FORMULA represents the default lexical entry for identified mathematical expressions categorized as “formula” (cf. Section 4.1). The LM is represented by the following HLDS term: @h1(contain ACT (f1 FORMULA:B) PAT (f2 FORMULA: ) where h1 is the state where the proposition contain is true, and the nominals f1 and f2 represent dependents of the head contain, which stand in the tectogrammatical relations Actor and Patient, respectively. It is possible to refer to the structural sub-parts of the FORMULA type expressions, as formula subparts are identified by the tagger, and discourse ref8http://openccg.sourceforge.net erents are created for them and stored with the discourse model. We represent the discourse model within the same framework of hybrid modal logic. Nominals of the hybrid logic object language are atomic formulae that constitute a pointing device to a particular place in a model where they are true. The satisfaction operator, @, allows to evaluate a formula at the point in the model given by a nominal (e.g. the formula @ evaluates at the point i). For discourse modeling, we adopt the hybrid logic formalization of the DRT notions in (Kruijff, 2001; Kruijff and Kruijff-Korbayov´a, 2001). Within this formalism, nominals are interpreted as discourse referents that are bound to propositions through the satisfaction operator. In the example above, f1 and f2 represent discourse referents for FORMULA:B and FORMULA: 1 , respectively. More technical details on the formalism can be found in the aforementioned publications. 4.3 Domain interpretation The linguistic meaning representations obtained from the parser are interpreted with respect to the domain. We are constructing a domain ontology that reflects the domain reasoner’s knowledge base, and is augmented to allow resolution of ambiguities introduced by natural language. For example, the previously mentioned predicate contain represents the semantic relation of Containment which, in the domain of naive set theory, is ambiguous between the domain relations ELEMENT, SUBSET, and PROPER SUBSET. The specializations of the ambiguous semantic relations are encoded in the ontology, while a semantic lexicon provides interpretations of the predicates. At the domain interpretation stage, the semantic lexicon is consulted to translate the tectogrammatical frames of the predicates into the semantic relations represented in the domain ontology. More details on the lexical-semantic stage of interpretation can be found in (Wolska and KruijffKorbayov´a, 2004b), and more details on the domain ontology are presented in (Horacek and Wolska, 2004). For example, for the predicate contain, the lexicon contains the following facts: contain( ,
,
) (SUBFORMULA , embedding ) [’a Patient of type FORMULA is a subformula embedded within a FORMULA in the Actor relation with respect to the head contain’] contain( ,
!#"%$ ,
!#"%$ ) CONTAINMENT(container , containee ) [’the Containment relation involves a predicate contain and its Actor and Patient dependents, where the Actor and Patient are the container and containee parameters respectively’] Translation rules that consult the ontology expand the meaning of the predicates to all their alternative domain-specific interpretations preserving argument structure. As it is in the capacity of neither sentence-level nor discourse-level analysis to evaluate the correctness of the alternative interpretations, this task is delegated to the Proof Manager (PM). The task of the PM is to: (A) communicate directly with the theorem prover;9 (B) build and maintain a representation of the proof constructed by the student;10 (C) check type compatibility of proof-relevant entities introduced as new in discourse; (D) check consistency and validity of each of the interpretations constructed by the analysis module, with the proof context; (E) evaluate the proof-relevant part of the utterance with respect to completeness, accuracy, and relevance. 5 Example analysis In this section, we illustrate the mechanics of the approach on the following examples. (1) B enthaelt kein [B contains no ] (2) A B & A B ' (3) A enthaelt keinesfalls Elemente, die in B sind. [A contains no elements that are also in B] Example (1) shows the tight interaction of natural language and mathematical formulae. The intended reading of the scope of negation is over a part of the formula following it, rather than the whole formula. The analysis proceeds as follows. The formula tagger first identifies the formula ( x A ) and substitutes it with the generic entry FORMULA represented in the lexicon. If there was no prior discourse entity for “B” to verify its type, the type is ambiguous between CONST, TERM, and FORMULA.11 The sentence is assigned four alternative readings: (i) “CONST contains no FORMULA”, (ii) “TERM contains no FORMULA”, (iii) “FORMULA contains no FORMULA”, (iv) “CONST contains no CONST 0 FORMULA”. The last reading is obtained by partitioning an entity of type FORMULA in meaningful ways, taking into account possible interaction with preceding modifiers. Here, given the quantifier “no”, the expression ( x A ) has been split into its surface parts 9We are using a version of * MEGA adapted for assertionlevel proving (Vo et al., 2003). 10The discourse content representation is separated from the proof representation, however, the corresponding entities must be co-indexed in both. 11In prior discourse, there may have been an assignment B := + , where + is a formula, in which case, B would be known from discourse context to be of type FORMULA (similarly for term assignment); by CONST we mean a set or element variable such as A, x denoting a set A or an element x respectively. enthalten:contain FORMULA: ACT no RESTR FORMULA: PAT Figure 3: Tectogrammatical representation of the utterance “B enthaelt kein ( ) ” [B contains no ]. enthalten:contain CONST: ACT no RESTR CONST: PAT 0 FORMULA: GENREL Figure 4: Tectogrammatical representation of the utterance “B enthaelt kein ( ) ” [B contains no ( ) ]. as follows: ( [x][ A] ) .12 [x] has been substituted with a generic lexical entry CONST, and [ A] with a symbolic entry for a formula missing its left argument (cf. Section 4.1). The readings (i) and (ii) are rejected because of sortal incompatibility. The linguistic meanings of readings (iii) and (iv) are presented in Fig. 3 and Fig. 4, respectively. 
The corresponding HLDS representations are:13 — for “FORMULA contains no FORMULA”: s:(@k1(kein RESTR f2 BODY (e1 enthalten ACT (f1 FORMULA) PAT f2)) @f2(FORMULA)) [‘formula B embeds no subformula x A’] — for “CONST contains no CONST 0 FORMULA”: s:(@k1(kein RESTR x1 BODY (e1 enthalten ACT (c1 CONST) PAT x1)) @x1(CONST HASPROP (x2 0 FORMULA))) [‘B contains no x such that x is an element of A’] Next, the semantic lexicon is consulted to translate these readings into their domain interpretations. The relevant lexical semantic entries were presented in Section 4.3. Using the linguistic meaning, the semantic lexicon, and the ontology, we obtain four interpretations paraphrased below: — for “FORMULA contains no FORMULA”: (1.1) ’it is not the case that PAT , the formula, x A, is a subformula of ACT , the formula B’; — for “CONST contains no CONST 0 FORMULA”: 12There are other ways of constituent partitioning of the formula at the top level operator to separate the operator and its arguments: [x][
][A] and [x
][A] . Each of the partitions obtains its appropriate type corresponding to a lexical entry available in the grammar (e.g., the [x
] chunk is of type FORMULA 0 for a formula missing its right argument). Not all the readings, however, compose to form a syntactically and semantically valid parse of the given sentence. 13Irrelevant parts of the meaning representation are omitted; glosses of the hybrid formulae are provided. enthalten:contain CONST: ACT no RESTR elements PAT in GENREL ACT CONST: LOC Figure 5: Tectogrammatical representation of the utterance “A enthaelt keinesfalls Elemente, die auch in B sind.” [A contains no elements that are also in B.]. (1.2a) ’it is not the case that PAT , the constant x, ACT , B, and x A’, (1.2b) ’it is not the case that PAT , the constant x, ACT , B, and x A’, (1.2c) ’it is not the case that PAT , the constant x, ACT , B, and x A’. The interpretation (1.1) is verified in the discourse context with information on structural parts of the discourse entity “B” of type formula, while (1.2a-c) are translated into messages to the PM and passed on for evaluation in the proof context. Example (2) contains one mathematical formula. Such utterances are the simplest to analyze: The formulae identified by the mathematical expression tagger are passed directly to the PM. Example (3) shows an utterance with domainrelevant content fully linguistically verbalized. The analysis of fully verbalized utterances proceeds similarly to the first example: the mathematical expressions are substituted with the appropriate generic lexical entries (here, “A” and “B” are substituted with their three possible alternative readings: CONST, TERM, and FORMULA, yielding several readings “CONST contains no elements that are also in CONST”, “TERM contains no elements that are also in TERM”, etc.). Next, the sentence is analyzed by the grammar. The semantic roles of Actor and Patient associated with the verb “contain” are taken by “A” and “elements” respectively; quantifier “no” is in the relation Restrictor with “A”; the relative clause is in the GeneralRelation with “elements”, etc. The linguistic meaning of the utterance in example (3) is shown in Fig. 5. Then, the semantic lexicon and the ontology are consulted to translate the linguistic meaning into its domain-specific interpretations, which are in this case very similar to the ones of example (1). 6 Conclusions and Further Work Based on experimentally collected tutorial dialogs on mathematical proofs, we argued for the use of deep syntactic and semantic analysis. We presented an approach that uses multimodal CCG with hybrid logic dependency semantics, treating natural and symbolic language on a par, thus enabling uniform analysis of inputs with varying degree of formal content verbalization. A preliminary evaluation of the mathematical expression parser showed a reasonable result. We are incrementally extending the implementation of the deep analysis components, which will be evaluated as part of the next Wizard-of-Oz experiment. One of the issues to be addressed in this context is the treatment of ill-formed input. On the one hand, the system can initiate a correction subdialog in such cases. On the other hand, it is not desirable to go into syntactic details and distract the student from the main tutoring goal. We therefore need to handle some degree of ill-formed input. Another question is which parts of mathematical expressions should have explicit semantic representation. 
We feel that this choice should be motivated empirically, by systematic occurrence of natural language references to parts of mathematical expressions (e.g., “the left/right side”, “the parenthesis”, and “the inner parenthesis”) and by the syntactic contexts in which they occur (e.g., the partitioning ( [x][ A] ) seems well motivated in “B contains no x A”; [x ] is a constituent in “x of complement of B.”) We also plan to investigate the interaction of modal verbs with the argumentative structure of the proof. For instance, the necessity modality is compatible with asserting a necessary conclusion or a prerequisite condition (e.g., “A und B muessen disjunkt sein.” [A and B must be disjoint.]). This introduces an ambiguity that needs to be resolved by the domain reasoner. References J. M. Baldridge and G.J. M. Kruijff. 2002. Coupling CCG with hybrid logic dependency semantics. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia PA. pp. 319–326. J. M. Baldridge and G.J. M. Kruijff. 2003. Multi-modal combinatory categorial grammar. In Proc. of the 10th Annual Meeting of the European Chapter of the Association for Computational Linguistics (EACL’03), Budapest, Hungary. pp. 211–218. J. Baur. 1999. Syntax und Semantik mathematischer Texte. Diplomarbeit, Fachrichtung Computerlinguistik, Universit¨at des Saarlandes, Saarbr¨ucken, Germany. C. Benzm¨uller, A. Fiedler, M. Gabsdil, H. Horacek, I. KruijffKorbayov´a, M. Pinkal, J. Siekmann, D. Tsovaltzi, B. Q. Vo, and M. Wolska. 2003a. Tutorial dialogs on mathematical proofs. In Proc. of IJCAI’03 Workshop on Knowledge Representation and Automated Reasoning for E-Learning Systems, Acapulco, Mexico. C. Benzm¨uller, A. Fiedler, M. Gabsdil, H. Horacek, I. KruijffKorbayov´a, M. Pinkal, J. Siekmann, D. Tsovaltzi, B. Q. Vo, and M. Wolska. 2003b. A Wizard-of-Oz experiment for tutorial dialogues in mathematics. In Proc. of the AIED’03 Workshop on Advanced Technologies for Mathematics Education, Sydney, Australia. pp. 471–481. M. Glass. 2001. Processing language input in the CIRCSIMTutor intelligent tutoring system. In Proc. of the 10th AIED Conference, San Antonio, TX. pp. 210–221. A. Graesser, P. Wiemer-Hastings, K. Wiemer-Hastings, D. Harter, and N. Person. 2000. Using latent semantic analysis to evaluate the contributions of students in autotutor. Interactive Learning Environments, 8:2. pp. 129–147. E. Hajiˇcov´a, J. Panevov´a, and P. Sgall. 2000. A manual for tectogrammatical tagging of the Prague Dependency Treebank. TR-2000-09, Charles University, Prague, Czech Republic. H. Horacek and M. Wolska. 2004. Interpreting Semi-Formal Utterances in Dialogs about Mathematical Proofs. In Proc. of the 9th International Conference on Application of Natural Language to Information Systems (NLDB’04), Salford, Manchester, Springer. To appear. G.J.M. Kruijff and I. Kruijff-Korbayov´a. 2001. A hybrid logic formalization of information structure sensitive discourse interpretation. In Proc. of the 4th International Conference on Text, Speech and Dialogue (TSD’2001), ˇZelezn´a Ruda, Czech Republic. pp. 31–38. G.J.M. Kruijff. 2001. A Categorial-Modal Logical Architecture of Informativity: Dependency Grammar Logic & Information Structure. Ph.D. Thesis, Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic. J. Moore. 1993. What makes human explanations effective? In Proc. of the 15th Annual Conference of the Cognitive Science Society, Hillsdale, NJ. 
pp. 131–136. P. Sgall, E. Hajiˇcov´a, and J. Panevov´a. 1986. The meaning of the sentence in its semantic and pragmatic aspects. Reidel Publishing Company, Dordrecht, The Netherlands. Q.B. Vo, C. Benzm¨uller, and S. Autexier. 2003. Assertion Application in Theorem Proving and Proof Planning. In Proc. of the International Joint Conference on Artificial Intelligence (IJCAI). Acapulco, Mexico. M. Wolska and I. Kruijff-Korbayov´a. 2004a. Building a dependency-based grammar for parsing informal mathematical discourse. In Proc. of the 7th International Conference on Text, Speech and Dialogue (TSD’04), Brno, Czech Republic, Springer. To appear. M. Wolska and I. Kruijff-Korbayov´a. 2004b. LexicalSemantic Interpretation of Language Input in Mathematical Dialogs. In Proc. of the ACL Workshop on Text Meaning and Interpretation, Barcelona, Spain. To appear. M. Wolska, B. Q. Vo, D. Tsovaltzi, I. Kruijff-Korbayov´a, E. Karagjosova, H. Horacek, M. Gabsdil, A. Fiedler, C. Benzm¨uller, 2004. An annotated corpus of tutorial dialogs on mathematical theorem proving. In Proc. of 4th International Conference On Language Resources and Evaluation (LREC’04), Lisbon, Portugal. pp. 1007–1010. C. Zinn. 2003. A Computational Framework for Understanding Mathematical Discourse. In Logic Journal of the IGPL, 11:4, pp. 457–484, Oxford University Press. | 2004 | 4 |
Enriching the Output of a Parser Using Memory-Based Learning Valentin Jijkoun and Maarten de Rijke Informatics Institute, University of Amsterdam jijkoun, mdr @science.uva.nl Abstract We describe a method for enriching the output of a parser with information available in a corpus. The method is based on graph rewriting using memorybased learning, applied to dependency structures. This general framework allows us to accurately recover both grammatical and semantic information as well as non-local dependencies. It also facilitates dependency-based evaluation of phrase structure parsers. Our method is largely independent of the choice of parser and corpus, and shows state of the art performance. 1 Introduction We describe a method to automatically enrich the output of parsers with information that is present in existing treebanks but usually not produced by the parsers themselves. Our motivation is two-fold. First and most important, for applications requiring information extraction or semantic interpretation of text, it is desirable to have parsers produce grammatically and semantically rich output. Second, to facilitate dependency-based comparison and evaluation of different parsers, their outputs may need to be transformed into specific rich dependency formalisms. The method allows us to automatically transform the output of a parser into structures as they are annotated in a dependency treebank. For a phrase structure parser, we first convert the produced phrase structures into dependency graphs in a straightforward way, and then apply a sequence of graph transformations: changing dependency labels, adding new nodes, and adding new dependencies. A memory-based learner trained on a dependency corpus is used to detect which modifications should be performed. For a dependency corpus derived from the Penn Treebank and the parsers we considered, these transformations correspond to adding Penn functional tags (e.g., -SBJ, -TMP, -LOC), empty nodes (e.g., NP PRO) and non-local dependencies (controlled traces, WHextraction, etc.). For these specific sub-tasks our method achieves state of the art performance. The evaluation of the transformed output of the parsers of Charniak (2000) and Collins (1999) gives 90% unlabelled and 84% labelled accuracy with respect to dependencies, when measured against a dependency corpus derived from the Penn Treebank. The paper is organized as follows. After providing some background and motivation in Section 2, we give the general overview of our method in Section 3. In Sections 4 through 8, we describe all stages of the transformation process, providing evaluation results and comparing our methods to earlier work. We discuss the results in Section 9. 2 Background and Motivation State of the art statistical parsers, e.g., parsers trained on the Penn Treebank, produce syntactic parse trees with bare phrase labels, such as NP, PP, S, although the training corpora are usually much richer and often contain additional grammatical and semantic information (distinguishing various modifiers, complements, subjects, objects, etc.), including non-local dependencies, i.e., relations between phrases not adjacent in the parse tree. While this information may be explicitly annotated in a treebank, it is rarely used or delivered by parsers.1 The reason is that bringing in more information of this type usually makes the underlying parsing model more complicated: more parameters need to be estimated and independence assumptions may no longer hold. 
Klein and Manning (2003), for example, mention that using functional tags of the Penn Treebank (temporal, location, subject, predicate, etc.) with a simple unlexicalized PCFG generally had a negative effect on the parser’s performance. Currently, there are no parsers trained on the Penn Treebank that use the structure of the treebank in full and that are thus 1Some notable exceptions are the CCG parser described in (Hockenmaier, 2003), which incorporates non-local dependencies into the parser’s statistical model, and the parser of Collins (1999), which uses WH traces and argument/modifier distinctions. capable of producing syntactic structures containing all or nearly all of the information annotated in the corpus. In recent years there has been a growing interest in getting more information from parsers than just bare phrase trees. Blaheta and Charniak (2000) presented the first method for assigning Penn functional tags to constituents identified by a parser. Pattern-matching approaches were used in (Johnson, 2002) and (Jijkoun, 2003) to recover non-local dependencies in phrase trees. Furthermore, experiments described in (Dienes and Dubey, 2003) show that the latter task can be successfully addressed by shallow preprocessing methods. 3 An Overview of the Method In this section we give a high-level overview of our method for transforming a parser’s output and describe the different steps of the process. In the experiments we used the parsers described in (Charniak, 2000) and (Collins, 1999). For Collins’ parser the text was first POS-tagged using Ratnaparkhi’s maximum enthropy tagger. The training phase of the method consists in learning which transformations need to be applied to the output of a parser to make it as similar to the treebank data as possible. As a preliminary step (Step 0), we convert the WSJ2 to a dependency corpus without losing the annotated information (functional tags, empty nodes, non-local dependencies). The same conversion is applied to the output of the parsers we consider. The details of the conversion process are described in Section 4 below. The training then proceeds by comparing graphs derived from a parser’s output with the graphs from the dependency corpus, detecting various mismatches, such as incorrect arc labels and missing nodes or arcs. Then the following steps are taken to fix the mismatches: Step 1: changing arc labels Step 2: adding new nodes Step 3: adding new arcs Obviously, other modifications are possible, such as deleting arcs or moving arcs from one node to another. We leave these for future work, though, and focus on the three transformations mentioned above. The dependency corpus was split into training (WSJ sections 02–21), development (sections 00– 2Thoughout the paper WSJ refers to the Penn Treebank II Wall Street Journal corpus. 01) and test (section 23) corpora. For each of the steps 1, 2 and 3 we proceed as follows: 1. compare the training corpus to the output of the parser on the strings of the corpus, after applying the transformations of the previous steps 2. identify possible beneficial transformations (which arc labels need to be changed or where new nodes or arcs need to be added) 3. train a memory-based classifier to predict possible transformations given their context (i.e., information about the local structure of the dependency graph around possible application sites). 
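As an illustration of what the comparison in the first two items can produce, the sketch below (our own representation, chosen for illustration; the actual system works with richer graph structures) encodes a dependency graph as a set of (head, dependent, label) triples and collects, for dependencies that the parser got right structurally but labelled differently from the treebank, the pairs of labels involved — the raw material for step 1.

from collections import namedtuple

# a dependency arc: head and dependent are token positions, label is e.g. 'S|NP-SBJ'
Dep = namedtuple("Dep", "head dep label")

def label_changes(parsed, gold):
    # dependencies with the correct head/dependent pair but a different label
    gold_by_pair = {(d.head, d.dep): d.label for d in gold}
    changes = []
    for d in parsed:
        gold_label = gold_by_pair.get((d.head, d.dep))
        if gold_label is not None and gold_label != d.label:
            changes.append((d.label, gold_label))
    return changes

parsed = [Dep(1, 0, "S|NP"), Dep(1, 3, "VP|NP")]
gold   = [Dep(1, 0, "S|NP-SBJ"), Dep(1, 3, "VP|NP")]
print(label_changes(parsed, gold))   # [('S|NP', 'S|NP-SBJ')]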
While the definitions of the context and application site and the graph modifications are different for the three steps, the general structure of the method remains the same at each stage. Sections 6, 7 and 8 describe the steps in detail. In the application phase of the method, we proceed similarly. First, the output of the parser is converted to dependency graphs, and then the learners trained during the steps 1, 2 and 3 are applied in sequence to perform the graph transformations. Apart from the conversion from phrase structures to dependency graphs and the extraction of some linguistic features for the learning, our method does not use any information about the details of the treebank annotation or the parser's output: it works with arbitrary labelled directed graphs.

4 Step 0: From Constituents to Dependencies

To convert phrase trees to dependency structures, we followed the commonly used scheme (Collins, 1999). The conversion routine,3 described below, is applied both to the original WSJ structures and the output of the parsers, though the former provides more information (e.g., traces) which is used by the conversion routine if available. First, for the treebank data, all traces are resolved and corresponding empty nodes are replaced with links to target constituents, so that syntactic trees become directed acyclic graphs. Second, for each constituent we detect its head daughters (more than one in the case of conjunction) and identify lexical heads. Then, for each constituent we output new dependencies between its lexical head and the lexical heads of its non-head daughters. The label of every new dependency is the constituent's phrase label, stripped of all functional tags and coindexing marks, conjoined with the label of the non-head daughter, with its functional tags but without coindexing marks.

3Our converter is available at http://www.science.uva.nl/~jijkoun/software.

Figure 1: Example of (a) the Penn Treebank WSJ annotation, (b) the output of Charniak's parser, and the results of the conversion to dependency structures of (c) the Penn tree and of (d) the parser's output

Figure 1 shows an example of the original Penn annotation (a), the output of Charniak's parser (b) and the results of our conversion of these trees to dependency structures (c and d). The interpretation of the dependency labels is straightforward: e.g., the label S|NP-TMP corresponds to a sentence (S) being modified by a temporal noun phrase (NP-TMP). The core of the conversion routine is the selection of head daughters of the constituents. Following (Collins, 1999), we used a head table, but extended it with a set of additional rules, based on constituent labels, POS tags or, sometimes, actual words, to account for situations where the head table alone gave unsatisfactory results. The most notable extension is our handling of conjunctions, which are often left relatively flat in WSJ and, as a result, in a parser's output: we used simple pattern-based heuristics to detect conjuncts and mark all conjuncts as heads of a conjunction.
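A minimal sketch of the core of such a conversion routine is given below. The head table fragment and the recursive head propagation are simplifications invented for illustration (the real routine uses the full Collins head table, direction-sensitive search, the conjunction heuristics just described, and trace resolution); the sketch only shows how the "parent|daughter" dependency labels of Figure 1(c,d) come about.

# a tiny, hypothetical fragment of a head table: constituent label -> preferred daughters
HEAD_TABLE = {
    "S":  ["VP", "S", "SBAR"],
    "VP": ["VBD", "VBZ", "VBP", "VB", "VBN", "VBG", "VP"],
    "NP": ["NN", "NNS", "NNP", "NP"],
}

def strip_tags(label):
    # NP-SBJ-1 -> NP : drop functional tags and coindexing for head finding
    return label.split("-")[0].split("=")[0]

def drop_indices(label):
    # NP-SBJ-1 -> NP-SBJ : keep functional tags, drop coindexing marks
    return "-".join(p for p in label.split("-") if not p.isdigit())

def head_child(label, children):
    # pick the first daughter matching the priority list; default to the last one
    for wanted in HEAD_TABLE.get(strip_tags(label), []):
        for i, (child_label, _) in enumerate(children):
            if strip_tags(child_label) == wanted:
                return i
    return len(children) - 1

def to_dependencies(tree, deps):
    # tree: (label, [children]) for constituents, (pos_tag, word) for leaves;
    # returns the lexical head word and appends (head, dependent, label) triples to deps
    label, children = tree
    if isinstance(children, str):        # a leaf
        return children
    h = head_child(label, children)
    head_word = to_dependencies(children[h], deps)
    for i, child in enumerate(children):
        if i != h:
            dep_word = to_dependencies(child, deps)
            deps.append((head_word, dep_word, strip_tags(label) + "|" + drop_indices(child[0])))
    return head_word

deps = []
tree = ("S", [("NP-SBJ", [("NNS", "directors")]),
              ("VP", [("VBD", "planned"), ("NP", [("NNS", "seats")])])])
to_dependencies(tree, deps)
print(deps)   # [('planned', 'seats', 'VP|NP'), ('planned', 'directors', 'S|NP-SBJ')]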
After the conversion, every resulting dependency structure is modified deterministically: auxiliary verbs (be, do, have) become dependents of corresponding main verbs (similar to modal verbs, which are handled by the head table); to fix a WSJ inconsistency, we move the -LGS tag (indicating logical subject of passive in a by-phrase) from the PP to its child NP.

5 Dependency-based Evaluation of Parsers

After the original WSJ structures and the parsers' outputs have been converted to dependency structures, we evaluate the performance of the parsers against the dependency corpus. We use the standard precision/recall measures over sets of dependencies (excluding punctuation marks, as usual) and evaluate Collins' and Charniak's parsers on WSJ section 23 in three settings: on unlabelled dependencies; on labelled dependencies with only bare labels (all functional tags discarded); on labelled dependencies with functional tags. Notice that since neither Collins' nor Charniak's parser outputs WSJ functional labels, all dependencies with functional labels in the gold parse will be judged incorrect in the third setting. The evaluation results are shown in Table 1, in the row "step 0".4 As explained above, the low numbers for the dependency evaluation with functional tags are expected, because the two parsers were not intended to produce functional labels. Interestingly, the ranking of the two parsers is different for the dependency-based evaluation than for PARSEVAL: Charniak's parser obtains a higher PARSEVAL score than Collins' (89.0% vs. 88.2%), but slightly lower f-score on dependencies without functional tags (82.9% vs. 83.4%).

4For meaningful comparison, the Collins' tags -A and -g are removed in this evaluation.

                                       unlabelled          labelled            with func. tags
Evaluation            Parser      P     R     f        P     R     f        P     R     f
after conversion      Charniak   89.9  83.9  86.8     85.9  80.1  82.9     68.0  63.5  65.7
(step 0, Section 4)   Collins    90.4  83.7  87.0     86.7  80.3  83.4     68.4  63.4  65.8
after relabelling     Charniak   89.9  83.9  86.8     86.3  80.5  83.3     83.8  78.2  80.9
(step 1, Section 6)   Collins    90.4  83.7  87.0     87.0  80.6  83.7     84.6  78.4  81.4
after adding nodes    Charniak   90.1  85.4  87.7     86.5  82.0  84.2     84.1  79.8  81.9
(step 2, Section 7)   Collins    90.6  85.3  87.9     87.2  82.1  84.6     84.9  79.9  82.3
after adding arcs     Charniak   90.0  89.7  89.8     86.5  86.2  86.4     84.2  83.9  84.0
(step 3, Section 8)   Collins    90.4  89.4  89.9     87.1  86.2  86.6     84.9  83.9  84.4

Table 1: Dependency-based evaluation of the parsers after different transformation steps

To summarize the evaluation scores at this stage, both parsers perform with f-score around 87% on unlabelled dependencies. When evaluating on bare dependency labels (i.e., disregarding functional tags) the performance drops to 83%. The new errors that appear when taking labels into account come from different sources: incorrect POS tags (NN vs. VBG), different degrees of flatness of analyses in gold and test parses (JJ vs. ADJP, or CD vs. QP) and inconsistencies in the Penn annotation (VP vs. RRC). Finally, the performance goes down to around 66% when taking into account functional tags, which are not produced by the parsers at all.

6 Step 1: Changing Dependency Labels

Intuitively, it seems that the 66% performance on labels with functional tags is an underestimation, because much of the missing information is easily recoverable. E.g., one can think of simple heuristics to distinguish subject NPs, temporal PPs, etc., thus introducing functional labels and improving the scores.
Developing such heuristics would be a very time consuming and ad hoc process: e.g., Collins’ -A and -g tags may give useful clues for this labelling, but they are not available in the output of other parsers. As an alternative to hardcoded heuristics, Blaheta and Charniak (2000) proposed to recover the Penn functional tags automatically. On the Penn Treebank, they trained a statistical model that, given a constituent in a parsed sentence and its context (parent, grandparent, head words thereof etc.), predicted the functional label, possibly empty. The method gave impressive performance, with 98.64% accuracy on all constituents and 87.28% f-score for non-empty functional labels, when applied to constituents correctly identified by Charniak’s parser. If we extrapolate these results to labelled PARSEVAL with functional labels, the method would give around 87.8% performance (98.64% of the “usual” 89%) for Charniak’s parser. Adding functional labels can be viewed as a relabelling task: we need to change the labels produced by a parser. We considered this more general task, and used a different approach, taking dependency graphs as input. We first parsed the training part of our dependency treebank (sections 02–21) and identified possible relabellings by comparing dependencies output by a parser to dependencies from the treebank. E.g., for Collins’ parser the most frequent relabellings were S NP S NP-SBJ, PP NP-A PP NP, VP NP-A VP NP, S NP-A S NP-SBJ and VP PP VP PP-CLR. In total, around 30% of all the parser’s dependencies had different labels in the treebank. We then learned a mapping from the parser’s labels to those in the dependency corpus, using TiMBL, a memory-based classifier (Daelemans et al., 2003). The features used for the relabelling were similar to those used by Blaheta and Charniak, but redefined for dependency structures. For each dependency we included: the head ( ) and dependent ( ), their POS tags; the leftmost dependent of and its POS; the head of ( ), its POS and the label of the dependency ; the closest left and right siblings of (dependents of ) and their POS tags; the label of the dependency ( ) as derived from the parser’s output. When included in feature vectors, all dependency labels were split at ‘ ’, e.g., the label S NP-A resulted in two features: S and NP-A. Testing was done as follows. The test corpus (section 23) was also parsed, and for each dependency a feature vector was formed and given to TiMBL to correct the dependency label. After this transformation the outputs of the parsers were evaluated, as before, on dependencies in the three settings. The results of the evaluation are shown in Table 1 (the row marked “step 1”). Let us take a closer look at the evaluation results. Obviously, relabelling does not change the unlabelled scores. The 1% improvement for evaluation on bare labels suggests that our approach is capable not only of adding functional tags, but can also correct the parser’s phrase labels and partof-speech tags: for Collins’ parser the most frequent correct changes not involving functional labels were NP NN NP JJ and NP JJ NP VBN, fixing POS tagging errors. A very substantial increase of the labelled score (from 66% to 81%), which is only 6% lower than unlabelled score, clearly indicates that, although the parsers do not produce functional labels, this information is to a large extent implicitly present in trees and can be recovered. 
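The classifier at the heart of this step is conceptually simple. The experiments use TiMBL; the following stand-in (our own, much reduced: plain overlap metric, majority vote over the k nearest stored instances, no feature weighting) sketches the kind of memory-based prediction involved, with a toy relabelling example in which the features are the head and dependent POS tags and the two halves of the parser's label.

from collections import Counter

class MemoryBasedClassifier:
    # a tiny IB1-style learner: keep all training instances and classify by
    # majority vote among the k nearest under the overlap (mismatch-count) metric
    def __init__(self, k=1):
        self.k = k
        self.memory = []                     # list of (feature_tuple, class_label)

    def train(self, instances):
        self.memory.extend(instances)

    def classify(self, features):
        nearest = sorted(self.memory,
                         key=lambda inst: sum(a != b for a, b in zip(features, inst[0])))
        votes = Counter(label for _, label in nearest[:self.k])
        return votes.most_common(1)[0][0]

clf = MemoryBasedClassifier(k=3)
clf.train([
    (("VBD", "NNS", "S", "NP"),  "S|NP-SBJ"),
    (("VBD", "NN",  "S", "NP"),  "S|NP-SBJ"),
    (("VB",  "NNS", "VP", "NP"), "VP|NP"),
])
print(clf.classify(("VBD", "NNP", "S", "NP")))   # S|NP-SBJ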
6.1 Comparison to Earlier Work One effect of the relabelling procedure described above is the recovery of Penn functional tags. Thus, it is informative to compare our results with those reported in (Blaheta and Charniak, 2000) for this same task. Blaheta and Charniak measured tagging accuracy and precision/recall for functional tag identification only for constituents correctly identified by the parser (i.e., having the correct span and nonterminal label). Since our method uses the dependency formalism, to make a meaningful comparison we need to model the notion of a constituent being correctly found by a parser. For a word we say that the constituent corresponding to its maximal projection is correctly identified if there exists , the head of , and for the dependency the right part of its label (e.g., NP-SBJ for S NP-SBJ) is a nonterminal (i.e., not a POS tag) and matches the right part of the label in the gold dependency structure, after stripping functional tags. Thus, the constituent’s label and headword should be correct, but not necessarily the span. Moreover, 2.5% of all constituents with functional labels (246 out of 9928 in section 23) are not maximal projections. Since our method ignores functional tags of such constituents (these tags disappear after the conversion of phrase structures to dependency graphs), we consider them as errors, i.e., reducing our recall value. Below, the tagging accuracy, precision and recall are evaluated on constituents correctly identified by Charniak’s parser for section 23. Method Accuracy P R f Blaheta 98.6 87.2 87.4 87.3 This paper 94.7 90.2 86.9 88.5 The difference in the accuracy is due to two reasons. First, because of the different definition of a correctly identified constituent in the parser’s output, we apply our method to a greater portion of all labels produced by the parser (95% vs. 89% reported in (Blaheta and Charniak, 2000)). This might make the task for out system more difficult. And second, whereas 22% of all constituents in section 23 have a functional tag, 36% of the maximal projections have one. Since we apply our method only to labels of maximal projections, this means that our accuracy baseline (i.e., never assign any tag) is lower. 7 Step 2: Adding Missing Nodes As the row labelled “step 1” in Table 1 indicates, for both parsers the recall is relatively low (6% lower than the precision): while the WSJ trees, and hence the derived dependency structures, contain non-local dependencies and empty nodes, the parsers simply do not provide this information. To make up for this, we considered two further tranformations of the output of the parsers: adding new nodes (corresponding to empty nodes in WSJ), and adding new labelled arcs. This section describes the former modification and Section 8 the latter. As described in Section 4, when converting WSJ trees to dependency structures, traces are resolved, their empty nodes removed and new dependencies introduced. Of the remaining empty nodes (i.e., non-traces), the most frequent in WSJ are: NP PRO, empty units, empty complementizers, empty relative pronouns. To add missing empty nodes to dependency graphs, we compared the output of the parsers on the strings of the training corpus after steps 0 and 1 (conversion to dependencies and relabelling) to the structures in the corpus itself. 
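A sketch of what that comparison can yield as training material is given below: for every word, the class is either 'none' or a compact encoding of the empty node and dependency label that the gold graph attaches to it but the parser's graph lacks. The triple-based graph representation, the reserved marker for empty nodes and the 'symbol@label' class encoding are our own illustrative conventions, not those of the actual system.

EMPTY = "-NONE-"   # reserved marker for empty nodes in this sketch

def empty_node_classes(parsed, gold):
    # parsed, gold: lists of (head, dependent, label) triples over word positions;
    # empty dependents in the gold graph are (EMPTY, symbol) pairs, e.g. (EMPTY, '*')
    parsed_deps = {(h, d) for h, d, _ in parsed}
    classes = {}
    for h, d, label in gold:
        if isinstance(d, tuple) and d[0] == EMPTY and (h, d) not in parsed_deps:
            classes[h] = d[1] + "@" + label      # e.g. '*@S|NP-SBJ' for an NP PRO
    return classes                               # words not listed get class 'none'

gold   = [(3, 1, "VP|NP"), (3, (EMPTY, "*"), "S|NP-SBJ")]
parsed = [(3, 1, "VP|NP")]
print(empty_node_classes(parsed, gold))   # {3: '*@S|NP-SBJ'}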
We trained a classifier which, for every word in the parser’s output, had to decide whether an empty node should be added as a new dependent of the word, and what its symbol (‘*’, ‘*U*’ or ‘0’ in WSJ), POS tag (always -NONE- in WSJ) and the label of the new dependency (e.g., ‘S|NP-SBJ’ for NP PRO and ‘VP|SBAR’ for empty complementizers) should be. This decision is conditioned on the word itself and its context. The features used were: the word and its POS tag, whether the word has any subject and object dependents, and whether it is the head of a finite verb group; the same information for the word’s head (if any) and also the label of the corresponding dependency; the same information for the rightmost and leftmost dependents of the word (if they exist) along with their dependency labels. In total, we extracted 23 symbolic features for every word in the corpus. TiMBL was trained on sections 02–21 and applied to the output of the parsers (after steps 0 and 1) on the test corpus (section 23), producing a list of empty nodes to be inserted in the dependency graphs. After insertion of the empty nodes, the resulting structures were evaluated against section 23 of the gold dependency treebank. The results are shown in Table 1 (the row “step 2”). For both parsers the insertion of empty nodes improves the recall by 1.5%, resulting in a 1% increase of the f-score. 7.1 Comparison to Earlier Work A procedure for empty node recovery was first described in (Johnson, 2002), along with an evaluation criterion: an empty node is correct if its category and position in the sentence are correct. Since our method works with dependency structures, not phrase trees, we adopt a different but comparable criterion: an empty node should be attached as a dependent to the correct word, and with the correct dependency label. Unlike the first metric, our correctness criterion also requires that possible attachment ambiguities are resolved correctly (e.g., as in the number of reports 0 they sent, where the empty relative pronoun may be attached either to number or to reports). For this task, the best published results (using Johnson’s metric) were reported by Dienes and Dubey (2003), who used shallow tagging to insert empty elements. Below we give the comparison to our method. Notice that this evaluation does not include traces (i.e., empty elements with antecedents): recovery of traces is described in Section 8.

              This paper              Dienes & Dubey
Type          P      R      f         P      R      f
PRO-NP        73.1   63.89  68.1      68.7   70.4   69.5
COMP-SBAR     82.6   83.1   82.8      93.8   78.6   85.5
COMP-WHNP     65.3   40.0   49.6      67.2   38.3   48.8
UNIT          95.4   91.8   93.6      99.1   92.5   95.7

For comparison we use the notation of Dienes and Dubey: PRO-NP for uncontrolled PROs (nodes ‘*’ in the WSJ), COMP-SBAR for empty complementizers (nodes ‘0’ with dependency label VP|SBAR), COMP-WHNP for empty relative pronouns (nodes ‘0’ with dependency label X|SBAR, where X ≠ VP) and UNIT for empty units (nodes ‘*U*’). It is interesting to see that for empty nodes except for UNIT both methods have their advantages, showing better precision or better recall. Yet shallow tagging clearly performs better for UNIT.
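As an illustration of the per-word decision just described, the sketch below shows a simplified version of the feature extraction for the empty-node classifier. Only a handful of the 23 features mentioned above are included, and the graph encoding, feature names and the crude finite-verb and object tests are our own assumptions.

# A simplified sketch of the per-word feature extraction used to decide whether
# an empty node should be added as a dependent of the word. Only a subset of
# the 23 features mentioned in the text is shown; the dependency-graph
# encoding (head -> list of (dependent, label)) is an illustrative assumption.
def word_features(word, graph, pos, heads):
    """graph: head -> [(dependent, label), ...]; heads: dependent -> (head, label)."""
    deps = graph.get(word, [])
    labels = [lab for _, lab in deps]
    feats = {
        "word": word,
        "pos": pos.get(word, "UNK"),
        "has_subj": any("SBJ" in lab for lab in labels),          # rough subject test
        "has_obj": any(lab.endswith("|NP") for lab in labels),    # rough object test
        "finite_verb": pos.get(word, "").startswith("VB")
                       and pos.get(word) not in ("VB", "VBG", "VBN"),
    }
    head, label = heads.get(word, (None, None))
    feats["head_word"] = head
    feats["head_pos"] = pos.get(head, "NONE") if head else "NONE"
    feats["dep_label"] = label or "NONE"
    if deps:
        feats["leftmost_dep"], feats["leftmost_label"] = deps[0]
        feats["rightmost_dep"], feats["rightmost_label"] = deps[-1]
    return feats

# Toy example: "to leave" with no overt subject -- a typical site for an NP PRO.
pos = {"leave": "VB", "to": "TO"}
graph = {"leave": [("to", "VP|TO")]}
heads = {"to": ("leave", "VP|TO")}
print(word_features("leave", graph, pos, heads))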
8 Step 3: Adding Missing Dependencies We now get to the third and final step of our transformation method: adding missing arcs to dependency graphs. The parsers we considered do not explicitly provide information about non-local dependencies (control, WH-extraction) present in the treebank. Moreover, newly inserted empty nodes (step 2, Section 7) might also need more links to the rest of a sentence (e.g., the inserted empty complementizers). In this section we describe the insertion of missing dependencies. Johnson (2002) was the first to address recovery of non-local dependencies in a parser’s output. He proposed a pattern-matching algorithm: first, from the training corpus the patterns that license nonlocal dependencies are extracted, and then these patterns are detected in unseen trees, dependencies being added when matches are found. Building on these ideas, Jijkoun (2003) used a machine learning classifier to detect matches. We extended Jijkoun’s approach by providing the classifier with lexical information and using richer patterns with labels containing the Penn functional tags and empty nodes, detected at steps 1 and 2. First, we compared the output of the parsers on the strings of the training corpus after steps 0, 1 and 2 to the dependency structures in the training corpus. For every dependency that is missing in the parser’s output, we find the shortest undirected path in the dependency graph connecting the head and the dependent. These paths, connected sequences of labelled dependencies, define the set of possible patterns. For our experiments we only considered patterns occurring more than 100 times in the training corpus. E.g., for Collins’ parser, 67 different patterns were found. Next, from the parsers’ output on the strings of the training corpus, we extracted all occurrences of the patterns, along with information about the nodes involved. For every node in an occurrence of a pattern we extracted the following features: the word and its POS tag; whether the word has subject and object dependents; whether the word is the head of a finite verb cluster. We then trained TiMBL to predict the label of the missing dependency (or ‘none’), given an occurrence of a pattern and the features of all the nodes involved. We trained a separate classifier for each pattern. For evaluation purposes we extracted all occurrences of the patterns and the features of their nodes from the parsers’ outputs for section 23 after steps 0, 1 and 2 and used TiMBL to predict and insert new dependencies. Then we compared the resulting dependency structures to the gold corpus. The results are shown in Table 1 (the row “step 3”). As expected, adding missing dependencies substantially improves the recall (by 4% for both parsers) and allows both parsers to achieve an 84% f-score on dependencies with functional tags (90% on unlabelled dependencies). The unlabelled f-score 89.9% for Collins’ parser is close to the 90.9% reported in (Collins, 1999) for the evaluation on unlabelled local dependencies only (without empty nodes and traces). Since as many as 5% of all dependencies in WSJ involve traces or empty nodes, the results in Table 1 are encouraging. 8.1 Comparison to Earlier Work Recently, several methods for the recovery of nonlocal dependencies have been described in the literature. Johnson (2002) and Jijkoun (2003) used pattern-matching on local phrase or dependency structures. Dienes and Dubey (2003) used shallow preprocessing to insert empty elements in raw sentences, making the parser itself capable of finding non-local dependencies. Their method achieves a considerable improvement over the results reported in (Johnson, 2002) and gives the best evaluation results published to date.
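Returning to the pattern-extraction step at the start of this section, the sketch below shows how the shortest undirected path between the head and dependent of a missing dependency can be computed with a breadth-first search. The direction markers on the labels and the toy graph are our own conventions for illustration.

# A minimal sketch of the pattern-extraction step: for a dependency that is
# missing from the parser's output, find the shortest undirected path between
# its head and dependent in the parser's dependency graph.
from collections import deque

def shortest_undirected_path(edges, source, target):
    """edges: list of (head, dependent, label). Returns the label sequence of
    the shortest undirected path from source to target, or None."""
    adj = {}
    for h, d, lab in edges:
        adj.setdefault(h, []).append((d, lab + ">"))   # traversed head -> dependent
        adj.setdefault(d, []).append((h, "<" + lab))   # traversed dependent -> head
    queue = deque([(source, [])])
    seen = {source}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for neighbour, lab in adj.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, path + [lab]))
    return None

# Parser output for "... the company plans to leave ...": "leave" is attached
# below "plans", but the gold corpus also has a (non-local) subject dependency
# from "leave" to "company".
edges = [("plans", "company", "S|NP-SBJ"),
         ("plans", "leave", "VP|S"),
         ("leave", "to", "VP|TO")]
print(shortest_undirected_path(edges, "leave", "company"))
# -> ['<VP|S', 'S|NP-SBJ>']  -- this label sequence is the extracted pattern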
To compare our results to Dienes and Dubey’s, we carried out the transformation steps 0–3 described above, with a single modification: when adding missing dependencies (step 3), we only considered patterns that introduce nonlocal dependencies (i.e., traces: we kept the information whether a dependency is a trace when converting WSJ to a dependency corpus). As before, a dependency is correctly found if its head, dependent, and label are correct. For traces, this corresponds to the evaluation using the head-based antecedent representation described in (Johnson, 2002), and for empty nodes without antecedents (e.g., NP PRO) this is the measure used in Section 7.1. To make the results comparable to other methods, we strip functional tags from the dependency labels before label comparison. Below are the overall precision, recall, and f-score for our method and the scores reported in (Dienes and Dubey, 2003) for antecedent recovery using Collins’ parser.

Method             P      R      f
Dienes and Dubey   81.5   68.7   74.6
This paper         82.8   67.8   74.6

Interestingly, the overall performance of our postprocessing method is very similar to that of the pre- and in-processing methods of Dienes and Dubey (2003). Hence, for most cases, traces and empty nodes can be reliably identified using only local information provided by a parser, using the parser itself as a black box. This is important, since making parsers aware of non-local relations need not improve the overall performance: Dienes and Dubey (2003) report a decrease in PARSEVAL f-score from 88.2% to 86.4% after modifying Collins’ parser to resolve traces internally, although this allowed them to achieve high accuracy for traces. 9 Discussion The experiments described in the previous sections indicate that although statistical parsers do not explicitly output some information available in the corpus they were trained on (grammatical and semantic tags, empty nodes, non-local dependencies), this information can be recovered with reasonably high accuracy, using pattern matching and machine learning methods. For our task, using dependency structures rather than phrase trees has several advantages. First, after converting both the treebank trees and parsers’ outputs to graphs with head–modifier relations, our method needs very little information about the linguistic nature of the data, and thus is largely corpus- and parser-independent. Indeed, after the conversion, the only linguistically informed operation is the straightforward extraction of features indicating the presence of subject and object dependents, and finiteness of verb groups. Second, using a dependency formalism facilitates a very straightforward evaluation of the systems that produce structures more complex than trees. It is not clear whether the PARSEVAL evaluation can be easily extended to take non-local relations into account (see (Johnson, 2002) for examples of such extension). Finally, the independence from the details of the parser and the corpus suggests that our method can be applied to systems based on other formalisms, e.g., (Hockenmaier, 2003), to allow a meaningful dependency-based comparison of very different parsers. Furthermore, with the fine-grained set of dependency labels that our system provides, it is possible to map the resulting structures to other dependency formalisms, either automatically in case annotated corpora exist, or with a manually developed set of rules.
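The dependency-based scores used throughout this comparison reduce to precision, recall and f-score over (head, dependent, label) triples; a minimal sketch, with an invented toy example, is given below.

# Precision, recall and f-score over (head, dependent, label) triples.
def prf(gold, predicted):
    gold, predicted = set(gold), set(predicted)
    correct = len(gold & predicted)
    p = correct / len(predicted) if predicted else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = {("plans", "company", "S|NP-SBJ"),
        ("plans", "leave", "VP|S"),
        ("leave", "company", "S|NP-SBJ")}      # non-local subject dependency
pred = {("plans", "company", "S|NP-SBJ"),
        ("plans", "leave", "VP|S")}            # parser output: no trace recovered
print([round(x, 2) for x in prf(gold, pred)])  # [1.0, 0.67, 0.8]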
Our preliminary experiments with Collins’ parser and the corpus annotated with grammatical relations (Carroll et al., 2003) are promising: the system achieves 76% precision/recall f-score, after the parser’s output is enriched with our method and transformed to grammatical relations using a set of 40 simple rules. This is very close to the performance reported by Carroll et al. (2003) for the parser specifically designed for the extraction of grammatical relations. Despite the high-dimensional feature spaces, the large number of lexical features, and the lack of independence between features, we achieved high accuracy using a memory-based learner. TiMBL performed well on tasks where structured, more complicated and task-specific statistical models have been used previously (Blaheta and Charniak, 2000). For all subtasks we used the same settings for TiMBL: simple feature overlap measure, 5 nearest neighbours with majority voting. During further experiments with our method on different corpora, we found that quite different settings led to a better performance. It is clear that more careful and systematic parameter tuning and the analysis of the contribution of different features have to be addressed. Finally, our method is not restricted to syntactic structures. It has been successfully applied to the identification of semantic relations (Ahn et al., 2004), using FrameNet as the training corpus. For this task, we viewed semantic relations (e.g., Speaker, Topic, Addressee) as dependencies between a predicate and its arguments. Adding such semantic relations to syntactic dependency graphs was simply an additional graph transformation step. 10 Conclusions We presented a method to automatically enrich the output of a parser with information that is not provided by the parser itself, but is available in a treebank. Using the method with two state-of-the-art statistical parsers and the Penn Treebank allowed us to recover functional tags (grammatical and semantic), empty nodes and traces. Thus, we are able to provide virtually all information available in the corpus, without modifying the parser, viewing it, indeed, as a black box. Our method allows us to perform a meaningful dependency-based comparison of phrase structure parsers. The evaluation on a dependency corpus derived from the Penn Treebank showed that, after our post-processing, two state-of-the-art statistical parsers achieve 84% accuracy on a fine-grained set of dependency labels. Finally, our method for enriching the output of a parser is, to a large extent, independent of a specific parser and corpus, and can be used with other syntactic and semantic resources. 11 Acknowledgements We are grateful to David Ahn and Stefan Schlobach and to the anonymous referees for their useful suggestions. This research was supported by grants from the Netherlands Organization for Scientific Research (NWO) under project numbers 22080-001, 365-20-005, 612.069.006, 612.000.106, 612.000.207 and 612.066.302. References David Ahn, Sisay Fissaha, Valentin Jijkoun, and Maarten de Rijke. 2004. The University of Amsterdam at Senseval-3: semantic roles and logic forms. In Proceedings of the ACL-2004 Workshop on Evaluation of Systems for the Semantic Analysis of Text. Don Blaheta and Eugene Charniak. 2000. Assigning function tags to parsed text. In Proceedings of the 1st Meeting of NAACL, pages 234–240. John Carroll, Guido Minnen, and Ted Briscoe. 2003. Parser evaluation using a grammatical relation annotation scheme.
In Anne Abeillé, editor, Building and Using Parsed Corpora, pages 299–316. Kluwer. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st Meeting of NAACL, pages 132–139. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2003. TiMBL: Tilburg Memory Based Learner, version 5.0, Reference Guide. ILK Technical Report 03-10. Available from http://ilk.kub.nl/downloads/pub/papers/ilk0310.ps.gz. Péter Dienes and Amit Dubey. 2003. Antecedent recovery: Experiments with a trace tagger. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 33–40. Julia Hockenmaier. 2003. Parsing with generative models of predicate-argument structure. In Proceedings of the 41st Meeting of ACL, pages 359–366. Valentin Jijkoun. 2003. Finding non-local dependencies: Beyond pattern matching. In Proceedings of the ACL-2003 Student Research Workshop, pages 37–43. Mark Johnson. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of the 40th Meeting of ACL, pages 136–143. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Meeting of ACL, pages 423–430.
Long-Distance Dependency Resolution in Automatically Acquired Wide-Coverage PCFG-Based LFG Approximations Aoife Cahill, Michael Burke, Ruth O’Donovan, Josef van Genabith, Andy Way National Centre for Language Technology and School of Computing, Dublin City University, Dublin, Ireland {acahill,mburke,rodonovan,josef,away}@computing.dcu.ie Abstract This paper shows how finite approximations of long distance dependency (LDD) resolution can be obtained automatically for wide-coverage, robust, probabilistic Lexical-Functional Grammar (LFG) resources acquired from treebanks. We extract LFG subcategorisation frames and paths linking LDD reentrancies from f-structures generated automatically for the Penn-II treebank trees and use them in an LDD resolution algorithm to parse new text. Unlike (Collins, 1999; Johnson, 2002), in our approach resolution of LDDs is done at f-structure (attribute-value structure representations of basic predicate-argument or dependency structure) without empty productions, traces and coindexation in CFG parse trees. Currently our best automatically induced grammars achieve 80.97% f-score for f-structures parsing section 23 of the WSJ part of the Penn-II treebank and evaluating against the DCU 105 (manually constructed f-structures for 105 randomly selected trees from Section 23 of the WSJ section of the Penn-II Treebank) and 80.24% against the PARC 700 Dependency Bank (King et al., 2003), performing at the same or a slightly better level than state-of-the-art hand-crafted grammars (Kaplan et al., 2004). 1 Introduction The determination of syntactic structure is an important step in natural language processing as syntactic structure strongly determines semantic interpretation in the form of predicate-argument structure, dependency relations or logical form. For a substantial number of linguistic phenomena such as topicalisation, wh-movement in relative clauses and interrogative sentences, however, there is an important difference between the location of the (surface) realisation of linguistic material and the location where this material should be interpreted semantically. Resolution of such long-distance dependencies (LDDs) is therefore crucial in the determination of accurate predicate-argument structure, deep dependency relations and the construction of proper meaning representations such as logical forms (Johnson, 2002). Modern unification/constraint-based grammars such as LFG or HPSG capture deep linguistic information including LDDs, predicate-argument structure, or logical form. Manually scaling rich unification grammars to naturally occurring free text, however, is extremely time-consuming, expensive and requires considerable linguistic and computational expertise. Few hand-crafted, deep unification grammars have in fact achieved the coverage and robustness required to parse a corpus of say the size and complexity of the Penn treebank: (Riezler et al., 2002) show how a deep, carefully hand-crafted LFG is successfully scaled to parse the Penn-II treebank (Marcus et al., 1994) with discriminative (log-linear) parameter estimation techniques. The last 20 years have seen continuously increasing efforts in the construction of parse-annotated corpora. Substantial treebanks (or dependency banks) are now available for many languages (including English, Japanese, Chinese, German, French, Czech, Turkish), others are currently under construction (Arabic, Bulgarian) or near completion (Spanish, Catalan).
Treebanks have been enormously influential in the development of robust, state-of-the-art parsing technology: grammars (or grammatical information) automatically extracted from treebank resources provide the backbone of many state-of-the-art probabilistic parsing approaches (Charniak, 1996; Collins, 1999; Charniak, 1999; Hockenmaier, 2003; Klein and Manning, 2003). Such approaches are attractive as they achieve robustness, coverage and performance while incurring very low grammar development cost. However, with few notable exceptions (e.g. Collins’ Model 3, (Johnson, 2002), (Hockenmaier, 2003)), treebank-based probabilistic parsers return fairly simple “surfacey” CFG trees, without deep syntactic or semantic information. The grammars used by such systems are sometimes referred to as “half” (or “shallow”) grammars (Johnson, 2002), i.e. they do not resolve LDDs but interpret linguistic material purely locally where it occurs in the tree. Recently (Cahill et al., 2002) showed how wide-coverage, probabilistic unification grammar resources can be acquired automatically from f-structure-annotated treebanks. Many second generation treebanks provide a certain amount of deep syntactic or dependency information (e.g. in the form of Penn-II functional tags and traces) supporting the computation of representations of deep linguistic information. Exploiting this information (Cahill et al., 2002) implement an automatic LFG f-structure annotation algorithm that associates nodes in treebank trees with f-structure annotations in the form of attribute-value structure equations representing abstract predicate-argument structure/dependency relations. From the f-structure annotated treebank they automatically extract wide-coverage, robust, PCFG-based LFG approximations that parse new text into trees and f-structure representations. The LFG approximations of (Cahill et al., 2002), however, are only “half” grammars, i.e. like most of their probabilistic CFG cousins (Charniak, 1996; Johnson, 1999; Klein and Manning, 2003) they do not resolve LDDs but interpret linguistic material purely locally where it occurs in the tree. In this paper we show how finite approximations of long distance dependency resolution can be obtained automatically for wide-coverage, robust, probabilistic LFG resources automatically acquired from treebanks. We extract LFG subcategorisation frames and paths linking LDD reentrancies from f-structures generated automatically for the Penn-II treebank trees and use them in an LDD resolution algorithm to parse new text. Unlike (Collins, 1999; Johnson, 2002), in our approach LDDs are resolved on the level of f-structure representation, rather than in terms of empty productions and coindexation on parse trees. Currently we achieve f-structure/dependency f-scores of 80.24 and 80.97 for parsing section 23 of the WSJ part of the Penn-II treebank, evaluating against the PARC 700 and DCU 105 respectively. The paper is structured as follows: we give a brief introduction to LFG. We outline the automatic f-structure annotation algorithm, PCFG-based LFG grammar approximations and parsing architectures of (Cahill et al., 2002). We present our subcategorisation frame extraction and introduce the treebank-based acquisition of finite approximations of LFG functional uncertainty equations in terms of LDD paths. We present the f-structure LDD resolution algorithm, provide results and extensive evaluation. We compare our method with previous work. Finally, we conclude.
2 Lexical Functional Grammar (LFG) Lexical-Functional Grammar (Kaplan and Bresnan, 1982; Dalrymple, 2001) minimally involves two levels of syntactic representation: c-structure and f-structure (LFGs may also involve morphological and semantic levels of representation). C(onstituent)-structure represents the grouping of words and phrases into larger constituents and is realised in terms of a CFPSG grammar. F(unctional)-structure represents abstract syntactic functions such as SUBJ(ect), OBJ(ect), OBL(ique), closed and open clausal COMP/XCOMP(lement), ADJ(unct), APP(osition) etc. and is implemented in terms of recursive feature structures (attribute-value matrices). C-structure captures surface grammatical configurations, f-structure encodes abstract syntactic information approximating to predicate-argument/dependency structure or simple logical form (van Genabith and Crouch, 1996). C- and f-structures are related in terms of functional annotations (constraints, attribute-value equations) on c-structure rules (cf. Figure 1).

Figure 1: Simple LFG C- and F-Structure for “U.N. signs treaty”. The annotated rules are S → NP VP (with ↑SUBJ=↓ on the NP and ↑=↓ on the VP) and VP → V NP (with ↑=↓ on the V and ↑OBJ=↓ on the NP), and the lexical entries are NP → U.N. (↑PRED=U.N.) and V → signs (↑PRED=sign); the resulting f-structure is [SUBJ [PRED U.N.], PRED sign, OBJ [PRED treaty]].

Uparrows point to the f-structure associated with the mother node, downarrows to that of the local node. The equations are collected with arrows instantiated to unique tree node identifiers, and a constraint solver generates an f-structure. 3 Automatic F-Structure Annotation The Penn-II treebank employs CFG trees with additional “functional” node annotations (such as -LOC, -TMP, -SBJ, -LGS, . . . ) as well as traces and coindexation (to indicate LDDs) as basic data structures. The f-structure annotation algorithm of (Cahill et al., 2002) exploits configurational, categorial, Penn-II “functional”, local head and trace information to annotate nodes with LFG feature-structure equations. A slightly adapted version of (Magerman, 1994)’s scheme automatically head-lexicalises the Penn-II trees. This partitions local subtrees of depth one (corresponding to CFG rules) into left and right contexts (relative to head). The annotation algorithm is modular with four components (Figure 2): left-right (L-R) annotation principles (e.g. leftmost NP to right of V head of VP type rule is likely to be an object etc.); coordination annotation principles (separating these out simplifies other components of the algorithm); traces (translates traces and coindexation in trees into corresponding reentrancies in f-structure (the index 1 in Figure 3)); catch all and clean-up. Lexical information is provided via macros for POS tag classes.

L/R Context ⇒ Coordination ⇒ Traces ⇒ Catch-All
Figure 2: Annotation Algorithm

The f-structure annotations are passed to a constraint solver to produce f-structures. Annotation is evaluated in terms of coverage and quality, summarised in Table 1. Coverage is near complete with 99.82% of the 48K Penn-II sentences receiving a single, connected f-structure. Annotation quality is measured in terms of precision and recall (P&R) against the DCU 105. The algorithm achieves an F-score of 96.57% for full f-structures and 94.3% for preds-only f-structures (full f-structures measure all attribute-value pairs including “minor” features such as person, number etc.; the stricter preds-only measure captures only paths ending in PRED:VALUE).
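To illustrate how functional annotations are resolved into an f-structure, the sketch below builds the f-structure for the Figure 1 example from a tree whose nodes carry their annotations. It is a deliberately minimal stand-in for real LFG constraint solving (no reentrancy, no detection of conflicting values), and the tree encoding is our own assumption.

# A toy resolution of functional annotations into an f-structure: every node
# gets a dict, "up=down" shares the daughter's dict with the mother, and a GF
# annotation embeds it under that grammatical function.
def build_fstructure(node):
    """node is either ('pred', value) for a lexical entry, or a list of
    (annotation, child) pairs where annotation is 'up=down' or a GF name."""
    if isinstance(node, tuple) and node[0] == "pred":
        return {"PRED": node[1]}
    fstruct = {}
    for annotation, child in node:
        child_f = build_fstructure(child)
        if annotation == "up=down":
            fstruct.update(child_f)     # head daughter: identify with mother
        else:
            fstruct[annotation] = child_f
    return fstruct

# "U.N. signs treaty", following the annotated rules shown in Figure 1.
s = [("SUBJ", [("up=down", ("pred", "U.N."))]),
     ("up=down", [("up=down", ("pred", "sign")),
                  ("OBJ", [("up=down", ("pred", "treaty"))])])]
print(build_fstructure(s))
# {'SUBJ': {'PRED': 'U.N.'}, 'PRED': 'sign', 'OBJ': {'PRED': 'treaty'}}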
Figure 3: Penn-II style tree with LDD trace and corresponding reentrancy in f-structure: for “U.N. signs treaty, the headline said”, the topicalised S-TPC-1 is coindexed with a trace under said, and in the f-structure the TOPIC value [PRED sign, SUBJ [PRED U.N.], OBJ [PRED treaty]] (index 1) is reentrant with the COMP of say.

# frags   # sent    percent
0         85        0.176
1         48337     99.820
2         2         0.004

      all      preds
P     96.52    94.45
R     96.63    94.16
Table 1: F-structure annotation results for DCU 105

4 PCFG-Based LFG Approximations Based on these resources (Cahill et al., 2002) developed two parsing architectures. Both generate PCFG-based approximations of LFG grammars. In the pipeline architecture a standard PCFG is extracted from the “raw” treebank to parse unseen text. The resulting parse-trees are then annotated by the automatic f-structure annotation algorithm and resolved into f-structures. In the integrated architecture the treebank is first annotated with f-structure equations. An annotated PCFG is then extracted where each non-terminal symbol in the grammar has been augmented with LFG f-equations: NP[↑OBJ=↓] → DT[↑SPEC=↓] NN[↑=↓]. Nodes followed by annotations are treated as a monadic category for grammar extraction and parsing. Post-parsing, equations are collected from parse trees and resolved into f-structures. Both architectures parse raw text into “proto” f-structures with LDDs unresolved, resulting in incomplete argument structures as in Figure 4.

Figure 4: Shallow-Parser Output with Unresolved LDD and Incomplete Argument Structure (cf. Figure 3): the fronted S is simply a sister of the main clause, and the corresponding f-structure contains the TOPIC [PRED sign, SUBJ [PRED U.N.], OBJ [PRED treaty]] but no COMP argument for say.

5 LDDs and LFG FU-Equations Theoretically, LDDs can span unbounded amounts of intervening linguistic material as in [U.N. signs treaty]1 the paper claimed . . . a source said []1. In LFG, LDDs are resolved at the f-structure level, obviating the need for empty productions and traces in trees (Dalrymple, 2001), using functional uncertainty (FU) equations. FUs are regular expressions specifying paths in f-structure between a source (where linguistic material is encountered) and a target (where linguistic material is interpreted semantically). To account for the fronted sentential constituents in Figures 3 and 4, an FU equation of the form ↑TOPIC = ↑COMP* COMP would be required. The equation states that the value of the TOPIC attribute is token identical with the value of the final COMP argument along a path through the immediately enclosing f-structure along zero or more COMP attributes. This FU equation is annotated to the topicalised sentential constituent in the relevant CFG rules as follows: S → S NP VP, where the fronted S is annotated ↑TOPIC=↓ and ↑TOPIC=↑COMP* COMP, the NP ↑SUBJ=↓, and the VP ↑=↓. This generates the LDD-resolved proper f-structure in Figure 3 for the traceless tree in Figure 4, as required. In addition to FU equations, subcategorisation information is a crucial ingredient in LFG’s account of LDDs. As an example, for a topicalised constituent to be resolved as the argument of a local predicate as specified by the FU equation, the local predicate must (i) subcategorise for the argument in question and (ii) the argument in question must not be already filled. Subcategorisation requirements are provided lexically in terms of semantic forms (subcat lists) and coherence and completeness conditions (all GFs specified must be present, and no others may be present) on f-structure representations.
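To give a concrete feel for the path side of this account, the sketch below unfolds the functional-uncertainty pattern COMP* COMP over an f-structure encoded as nested dictionaries, enumerating candidate target positions for a fronted TOPIC. The depth bound, the dictionary encoding and the toy f-structure are our own assumptions; the finite approximation developed in the next section instead uses the concrete paths observed in training data.

# Enumerate the positions reachable via COMP* COMP in a (proto) f-structure,
# unfolding the regular path up to a fixed depth bound.
def comp_star_comp_targets(fstruct, max_depth=3):
    """Yield (path, sub_fstructure) pairs reachable via COMP* COMP."""
    frontier = [([], fstruct)]
    for _ in range(max_depth):
        new_frontier = []
        for path, f in frontier:
            if "COMP" in f:
                target_path = path + ["COMP"]
                yield target_path, f["COMP"]
                new_frontier.append((target_path, f["COMP"]))
        frontier = new_frontier

# Toy f-structure with a TOPIC and a chain of COMPs (invented example).
f = {"TOPIC": {"PRED": "sign", "SUBJ": {"PRED": "U.N."}, "OBJ": {"PRED": "treaty"}},
     "PRED": "say", "SUBJ": {"PRED": "headline", "SPEC": "the"},
     "COMP": {"PRED": "claim", "COMP": {"PRED": "win"}}}
for path, sub in comp_star_comp_targets(f):
    print(path, "->", sub["PRED"])
# ['COMP'] -> claim
# ['COMP', 'COMP'] -> win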
Semantic forms specify which grammatical functions (GFs) a predicate requires locally. For our example in Figures 3 and 4, the relevant lexical entries are: V → said (↑PRED=say⟨↑SUBJ, ↑COMP⟩) and V → signs (↑PRED=sign⟨↑SUBJ, ↑OBJ⟩). FU equations and subcategorisation requirements together ensure that LDDs can only be resolved at suitable f-structure locations. 6 Acquiring Lexical and LDD Resources In order to model the LFG account of LDD resolution we require subcat frames (i.e. semantic forms) and LDD resolution paths through f-structure. Traditionally, such resources were hand-coded. Here we show how they can be acquired from f-structure annotated treebank resources. LFG distinguishes between governable (arguments) and nongovernable (adjuncts) grammatical functions (GFs). If the automatic f-structure annotation algorithm outlined in Section 3 generates high quality f-structures, reliable semantic forms can be extracted (reverse-engineered): for each f-structure generated, for each level of embedding we determine the local PRED value and collect the governable, i.e. subcategorisable grammatical functions present at that level of embedding. For the proper f-structure in Figure 3 we obtain sign([subj,obj]) and say([subj,comp]). We extract frames from the full WSJ section of the Penn-II Treebank with 48K trees. Unlike many other approaches, our extraction process does not predefine frames, fully reflects LDDs in the source data-structures (cf. Figure 3), discriminates between active and passive frames, computes GF-, GF:CFG-category-pair- as well as CFG-category-based subcategorisation frames and associates conditional probabilities with frames. Given a lemma l and an argument list s, the probability of s given l is estimated as P(s|l) := count(l, s) / Σ_i count(l, s_i), summing over all argument lists s_i observed with l. Table 2 summarises the results. We extract 3586 verb lemmas and 10969 unique verbal semantic form types (lemma followed by non-empty argument list). Including prepositions associated with the subcategorised OBLs and particles, this number goes up to 14348. The number of unique frame types (without lemma) is 38 without specific prepositions and particles, 577 with. F-structure annotations allow us to distinguish passive and active frames. Table 3 shows the most frequent semantic forms for accept. Passive frames are marked p. We carried out a comprehensive evaluation of the automatically acquired verbal semantic forms against the COMLEX Resource (Macleod et al., 1994) for the 2992 active verb lemmas that both resources have in common. We report on the evaluation of GF-based frames for the full frames with complete prepositional and particle information. We use relative conditional probability thresholds (1% and 5%) to filter the selection of semantic forms (Table 4). (O’Donovan et al., 2004) provide a more detailed description of the extraction and evaluation of semantic forms.

                       Without Prep/Part   With Prep/Part
Lemmas                 3586                3586
Sem. Forms             10969               14348
Frame Types            38                  577
Active Frame Types     38                  548
Passive Frame Types    21                  177
Table 2: Verb Results

Semantic Form                  Occurrences   Prob.
accept([obj,subj])             122           0.813
accept([subj],p)               9             0.060
accept([comp,subj])            5             0.033
accept([subj,obl:as],p)        3             0.020
accept([obj,subj,obl:as])      3             0.020
accept([obj,subj,obl:from])    3             0.020
accept([subj])                 2             0.013
accept([obj,subj,obl:at])      1             0.007
accept([obj,subj,obl:for])     1             0.007
accept([obj,subj,xcomp])       1             0.007
Table 3: Semantic forms for the verb accept.

        Threshold 1%                Threshold 5%
        P       R       F-Score     P       R       F-Score
Exp.    73.7%   22.1%   34.0%       78.0%   18.3%   29.6%
Table 4: COMLEX Comparison
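A minimal sketch of this extraction step is given below: it walks an f-structure (as nested dictionaries), records the governable grammatical functions at each PRED-bearing level as a semantic form, and estimates P(s|l) as a relative frequency. The set of governable GFs and the encoding are simplifying assumptions.

# Extract semantic forms (subcat frames) with conditional probabilities
# P(s|l) = count(l,s) / sum_i count(l,s_i) from nested-dict f-structures.
from collections import Counter

GOVERNABLE = {"SUBJ", "OBJ", "OBJ2", "OBL", "COMP", "XCOMP"}

def extract_frames(fstruct, frames):
    if not isinstance(fstruct, dict):
        return
    if "PRED" in fstruct:
        args = tuple(sorted(gf.lower() for gf in fstruct if gf in GOVERNABLE))
        if args:
            frames[(fstruct["PRED"], args)] += 1
    for value in fstruct.values():
        extract_frames(value, frames)

def frame_probability(frames, lemma, args):
    total = sum(c for (l, _), c in frames.items() if l == lemma)
    return frames[(lemma, args)] / total if total else 0.0

frames = Counter()
extract_frames({"PRED": "say", "SUBJ": {"PRED": "headline"},
                "COMP": {"PRED": "sign", "SUBJ": {"PRED": "U.N."},
                         "OBJ": {"PRED": "treaty"}}}, frames)
print(dict(frames))                                        # say([comp,subj]), sign([obj,subj])
print(frame_probability(frames, "sign", ("obj", "subj")))  # 1.0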
We further acquire finite approximations of FU-equations by extracting paths between co-indexed material occurring in the automatically generated f-structures from sections 02-21 of the Penn-II treebank. We extract 26 unique TOPIC, 60 TOPIC-REL and 13 FOCUS path types (with a total of 14,911 token occurrences), each with an associated probability. We distinguish between two types of TOPIC-REL paths, those that occur in wh-less constructions, and all other types (cf. Table 5). Given a path p and an LDD type t (either TOPIC, TOPIC-REL or FOCUS), the probability of p given t is estimated as P(p|t) := count(t, p) / Σ_i count(t, p_i). In order to get a first measure of how well the approximation models the data, we compute the path types in section 23 not covered by those extracted from 02-21: 23/(02–21). There are 3 such path types (Table 6), each occurring exactly once. Given that the total number of path tokens in section 23 is 949, the finite approximation extracted from 02-21 covers 99.69% of all LDD paths in section 23.

wh-less TOPIC-REL path   #       wh-less TOPIC-REL path    #
subj                     5692    adjunct                   1314
xcomp:adjunct            610     obj                       364
xcomp:obj                291     xcomp:xcomp:adjunct       96
comp:subj                76      xcomp:subj                67
Table 5: Most frequent wh-less TOPIC-REL paths

            02–21   23   23/(02–21)
TOPIC       26      7    2
FOCUS       13      4    0
TOPIC-REL   60      22   1
Table 6: Number of path types extracted

7 Resolving LDDs in F-Structure Given a set of semantic forms s with probabilities P(s|l) (where l is a lemma), a set of paths p with P(p|t) (where t is either TOPIC, TOPIC-REL or FOCUS) and an f-structure f, the core of the algorithm to resolve LDDs recursively traverses f to: find a TOPIC|TOPIC-REL|FOCUS:g pair; retrieve the TOPIC|TOPIC-REL|FOCUS paths; for each path p of the form GF1:. . .:GFn:GF, traverse f along GF1:. . .:GFn to the sub-f-structure h; retrieve the local PRED:l; add GF:g to h iff (i) GF is not present at h and (ii) h together with GF is locally complete and coherent with respect to a semantic form s for l; and rank the resolution by P(s|l) × P(p|t). The algorithm supports multiple, interacting TOPIC, TOPIC-REL and FOCUS LDDs. We use P(s|l) × P(p|t) to rank a solution, depending on how likely the PRED takes semantic frame s, and how likely the TOPIC, FOCUS or TOPIC-REL is resolved using path p. The algorithm also supports resolution of LDDs where no overt linguistic material introduces a source TOPIC-REL function (e.g. in reduced relative clause constructions). We distinguish between passive and active constructions, using the relevant semantic frame type when resolving LDDs. 8 Experiments and Evaluation We ran experiments with grammars in both the pipeline and the integrated parsing architectures. The first grammar is a basic PCFG, while A-PCFG includes the f-structure annotations. We apply a parent transformation to each grammar (Johnson, 1999) to give P-PCFG and PA-PCFG. We train on sections 02-21 (grammar, lexical extraction and LDD paths) of the Penn-II Treebank and test on section 23. The only pre-processing of the trees that we do is to remove empty nodes, and remove all Penn-II functional tags in the integrated model. We evaluate the parse trees using evalb. Following (Riezler et al., 2002), we convert f-structures into dependency triple format. Using their software we evaluate the f-structure parser output against:
1. the DCU 105 (Cahill et al., 2002);
2. the full 2,416 f-structures automatically generated by the f-structure annotation algorithm for the original Penn-II trees, in a CCG-style (Hockenmaier, 2003) evaluation experiment;
3. a subset of 560 dependency structures of the PARC 700 Dependency Bank, following (Kaplan et al., 2004).
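Before turning to the results, the sketch below illustrates the core resolution step of Section 7 for a single TOPIC: each candidate LDD path is traversed, the landing site is checked for local completeness and coherence against a semantic form for its PRED, and the surviving resolutions are ranked by P(s|l) × P(p|t). The path and frame tables are toy values, and the nested-dictionary encoding is our own assumption.

# Toy LDD resolution for one TOPIC over a nested-dict proto f-structure.
GOVERNABLE = {"SUBJ", "OBJ", "COMP", "XCOMP"}

def resolve_topic(fstruct, paths, frames):
    topic = fstruct.get("TOPIC")
    candidates = []
    for path, p_path in paths.items():
        f = fstruct
        *prefix, last_gf = path
        ok = True
        for gf in prefix:                        # traverse GF1:...:GFn
            f = f.get(gf)
            if not isinstance(f, dict):
                ok = False
                break
        if not ok or last_gf in f:               # landing GF already filled
            continue
        lemma = f.get("PRED")
        for frame, p_frame in frames.get(lemma, []):
            local = (set(f) & GOVERNABLE) | {last_gf}
            if local == set(frame):              # locally complete and coherent
                candidates.append((p_frame * p_path, f, last_gf))
    if candidates:
        score, f, gf = max(candidates, key=lambda c: c[0])
        f[gf] = topic                            # resolve the dependency
    return fstruct

paths = {("COMP",): 0.7, ("COMP", "COMP"): 0.2, ("OBJ",): 0.1}          # toy P(p|t)
frames = {"say": [({"SUBJ", "COMP"}, 0.6), ({"SUBJ", "OBJ"}, 0.3)]}     # toy P(s|l)
proto = {"TOPIC": {"PRED": "sign", "SUBJ": {"PRED": "U.N."},
                   "OBJ": {"PRED": "treaty"}},
         "PRED": "say", "SUBJ": {"PRED": "headline"}}
print(resolve_topic(proto, paths, frames))   # TOPIC value also appears as COMP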
                                                  Pipeline             Integrated
                                                  PCFG      P-PCFG     A-PCFG    PA-PCFG
2416 Section 23 trees
  # Parses                                        2416      2416       2416      2414
  Lab. F-Score                                    75.83     80.80      79.17     81.32
  Unlab. F-Score                                  78.28     82.70      81.49     83.28
DCU 105 F-Strs
  All GFs F-Score (before LDD resolution)         79.82     79.24      81.12     81.20
  All GFs F-Score (after LDD resolution)          83.79     84.59      86.30     87.04
  Preds only F-Score (before LDD resolution)      70.00     71.57      73.45     74.61
  Preds only F-Score (after LDD resolution)       73.78     77.43      78.76     80.97
2416 F-Strs
  All GFs F-Score (before LDD resolution)         81.98     81.49      83.32     82.78
  All GFs F-Score (after LDD resolution)          84.16     84.37      86.45     86.00
  Preds only F-Score (before LDD resolution)      72.00     73.23      75.22     75.10
  Preds only F-Score (after LDD resolution)       74.07     76.12      78.36     78.40
PARC 700 Dependency Bank
  Subset of GFs following (Kaplan et al., 2004)   77.86     80.24      77.68     78.60
Table 7: Parser Evaluation

The results are given in Table 7. The parent-transformed grammars perform best in both architectures. In all cases, there is a marked improvement (2.07-6.36%) in the f-structures after LDD resolution. We achieve between 73.78% and 80.97% preds-only and 83.79% to 87.04% all GFs f-score, depending on gold-standard. We achieve between 77.68% and 80.24% against the PARC 700 following the experiments in (Kaplan et al., 2004). For details on how we map the f-structures produced by our parsers to a format similar to that of the PARC 700 Dependency Bank, see (Burke et al., 2004). Table 8 shows the evaluation result broken down by individual GF (preds-only) for the integrated model PA-PCFG against the DCU 105.

DEP.       PRECISION        RECALL           F-SCORE
adjunct    717/903 = 79     717/947 = 76     78
app        14/15 = 93       14/19 = 74       82
comp       35/43 = 81       35/65 = 54       65
coord      109/143 = 76     109/161 = 68     72
det        253/264 = 96     253/269 = 94     95
focus      1/1 = 100        1/1 = 100        100
obj        387/445 = 87     387/461 = 84     85
obj2       0/1 = 0          0/2 = 0          0
obl        27/56 = 48       27/61 = 44       46
obl2       1/3 = 33         1/2 = 50         40
obl ag     5/11 = 45        5/12 = 42        43
poss       69/73 = 95       69/81 = 85       90
quant      40/55 = 73       40/52 = 77       75
relmod     26/38 = 68       26/50 = 52       59
subj       330/361 = 91     330/414 = 80     85
topic      12/12 = 100      12/13 = 92       96
topicrel   35/42 = 83       35/52 = 67       74
xcomp      139/160 = 87     139/146 = 95     91
OVERALL    83.78            78.35            80.97
Table 8: Preds-only results of PA-PCFG against the DCU 105

In order to measure how many of the LDD reentrancies in the gold-standard f-structures are captured correctly by our parsers, we developed evaluation software for f-structure LDD reentrancies (similar to Johnson’s (2002) evaluation to capture traces and their antecedents in trees). Table 9 shows the results with the integrated model achieving more than 76% correct LDD reentrancies. 9 Related Work (Collins, 1999)’s Model 3 is limited to wh-traces in relative clauses (it doesn’t treat topicalisation, focus etc.). Johnson’s (2002) work is closest to ours in spirit. Like our approach he provides a finite approximation of LDDs. Unlike our approach, however, he works with tree fragments in a post-processing approach to add empty nodes and their antecedents to parse trees, while we present an approach to LDD resolution on the level of f-structure.
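For the f-structure-level comparison that follows (and for the dependency-based evaluation above), f-structures are flattened into dependency triples; a minimal sketch of such a conversion over nested dictionaries is given below. The exact triple format produced for the evaluation software differs in detail, and the encoding here is our own assumption.

# Flatten a nested-dict f-structure into (local PRED, grammatical function,
# value) triples, representing embedded f-structures by their PRED.
def to_triples(fstruct, triples=None):
    if triples is None:
        triples = set()
    pred = fstruct.get("PRED")
    for attr, value in fstruct.items():
        if attr == "PRED":
            continue
        if isinstance(value, dict):
            triples.add((pred, attr.lower(), value.get("PRED")))
            to_triples(value, triples)
        else:
            triples.add((pred, attr.lower(), value))
    return triples

resolved = {"TOPIC": {"PRED": "sign", "SUBJ": {"PRED": "U.N."},
                      "OBJ": {"PRED": "treaty"}},
            "PRED": "say", "SUBJ": {"PRED": "headline", "SPEC": "the"},
            "COMP": {"PRED": "sign", "SUBJ": {"PRED": "U.N."},
                     "OBJ": {"PRED": "treaty"}}}
for t in sorted(to_triples(resolved), key=str):
    print(t)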
It seems that the f-structure-based approach is more abstract (99 LDD path types against approximately 9,000 tree-fragment types in (Johnson, 2002)) and fine-grained in its use of lexical information (subcat frames). In contrast to Johnson’s approach, our LDD resolution algorithm is not biased. It computes all possible complete resolutions and order-ranks them using LDD path and subcat frame probabilities. It is difficult to provide a satisfactory comparison between the two methods, but we have carried out an experiment that compares them at the f-structure level. We take the output of Charniak’s parser (Charniak, 1999) and, using the pipeline f-structure annotation model, evaluate against the DCU 105, both before and after LDD resolution. Using the software described in (Johnson, 2002) we add empty nodes to the output of Charniak’s parser, pass these trees to our automatic annotation algorithm and evaluate against the DCU 105. The results are given in Table 10.

                         Pipeline              Integrated
                         PCFG       P-PCFG     A-PCFG     PA-PCFG
TOPIC       Precision    (11/14)    (12/13)    (12/13)    (12/12)
            Recall       (11/13)    (12/13)    (12/13)    (12/13)
            F-Score      0.81       0.92       0.92       0.96
FOCUS       Precision    (0/1)      (0/1)      (0/1)      (0/1)
            Recall       (0/1)      (0/1)      (0/1)      (0/1)
            F-Score      0          0          0          0
TOPIC-REL   Precision    (20/34)    (27/36)    (34/42)    (34/42)
            Recall       (20/52)    (27/52)    (34/52)    (34/52)
            F-Score      0.47       0.613      0.72       0.72
OVERALL                  0.54       0.67       0.75       0.76
Table 9: LDD Evaluation on the DCU 105

              Charniak −LDD res.   Charniak +LDD res.   (Johnson, 2002)
All GFs       80.86                86.65                85.16
Preds Only    74.63                80.97                79.75
Table 10: Comparison at f-structure level of LDD resolution to (Johnson, 2002) on the DCU 105

Our method of resolving LDDs at f-structure level results in a preds-only f-score of 80.97%. Using (Johnson, 2002)’s method of adding empty nodes to the parse-trees results in an f-score of 79.75%. (Hockenmaier, 2003) provides CCG-based models of LDDs. Some of these involve extensive clean-up of the underlying Penn-II treebank resource prior to grammar extraction. In contrast, in our approach we leave the treebank as is and only add (but never correct) annotations. Earlier HPSG work (Tateisi et al., 1998) is based on independently constructed hand-crafted XTAG resources. In contrast, we acquire our resources from treebanks and achieve substantially wider coverage. Our approach provides wide-coverage, robust, and – with the addition of LDD resolution – “deep” or “full”, PCFG-based LFG approximations. Crucially, we do not claim to provide fully adequate statistical models. It is well known (Abney, 1997) that PCFG-type approximations to unification grammars can yield inconsistent probability models due to loss of probability mass: the parser successfully returns the highest ranked parse tree but the constraint solver cannot resolve the f-equations (generated in the pipeline or “hidden” in the integrated model) and the probability mass associated with that tree is lost. This case, however, is surprisingly rare for our grammars: only 0.18% (85 out of 48424) of the original Penn-II trees (without FRAGs) fail to produce an f-structure due to inconsistent annotations (Table 1), and for parsing section 23 with the integrated model (A-PCFG), only 9 sentences do not receive a parse because no f-structure can be generated for the highest ranked tree (0.4%). Parsing with the pipeline model, all sentences receive one complete f-structure. Research on adequate probability models for unification grammars is important. (Miyao et al., 2003) present a Penn-II treebank based HPSG with log-linear probability models.
They achieve coverage of 50.2% on section 23, as against 99% in our approach. (Riezler et al., 2002; Kaplan et al., 2004) describe how a carefully hand-crafted LFG is scaled to the full Penn-II treebank with log-linear based probability models. They achieve 79% coverage (full parse) and 21% fragment/skimmed parses. By the same measure, full parse coverage is around 99% for our automatically acquired PCFG-based LFG approximations. Against the PARC 700, the hand-crafted LFG grammar reported in (Kaplan et al., 2004) achieves an f-score of 79.6%. For the same experiment, our best automatically-induced grammar achieves an f-score of 80.24%. 10 Conclusions We presented and extensively evaluated a finite approximation of LDD resolution in automatically constructed, wide-coverage, robust, PCFG-based LFG approximations, effectively turning the “half” (or “shallow”) grammars presented in (Cahill et al., 2002) into “full” or “deep” grammars. In our approach, LDDs are resolved in f-structure, not trees. The method achieves a preds-only f-score of 80.97% for f-structures with the PA-PCFG in the integrated architecture against the DCU 105 and 78.4% against the 2,416 automatically generated f-structures for the original Penn-II treebank trees. Evaluating against the PARC 700 Dependency Bank, the P-PCFG achieves an f-score of 80.24%, an overall improvement of approximately 0.6% on the result reported for the best hand-crafted grammars in (Kaplan et al., 2004). Acknowledgements This research was funded by Enterprise Ireland Basic Research Grant SC/2001/186 and IRCSET. References S. Abney. 1997. Stochastic attribute-value grammars. Computational Linguistics, 23(4):597–618. M. Burke, A. Cahill, R. O’Donovan, J. van Genabith, and A. Way. 2004. The Evaluation of an Automatic Annotation Algorithm against the PARC 700 Dependency Bank. In Proceedings of the Ninth International Conference on LFG, Christchurch, New Zealand (to appear). A. Cahill, M. McCarthy, J. van Genabith, and A. Way. 2002. Parsing with PCFGs and Automatic F-Structure Annotation. In Miriam Butt and Tracy Holloway King, editors, Proceedings of the Seventh International Conference on LFG, pages 76–95. CSLI Publications, Stanford, CA. E. Charniak. 1996. Tree-Bank Grammars. In AAAI/IAAI, Vol. 2, pages 1031–1036. E. Charniak. 1999. A Maximum-Entropy-Inspired Parser. Technical Report CS-99-12, Brown University, Providence, RI. M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA. M. Dalrymple. 2001. Lexical-Functional Grammar. San Diego, CA; London: Academic Press. J. Hockenmaier. 2003. Parsing with Generative Models of Predicate-Argument Structure. In Proceedings of the 41st Annual Conference of the Association for Computational Linguistics, pages 359–366, Sapporo, Japan. M. Johnson. 1999. PCFG models of linguistic tree representations. Computational Linguistics, 24(4):613–632. M. Johnson. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 136–143, Philadelphia, PA. R. Kaplan and J. Bresnan. 1982. Lexical Functional Grammar, a Formal System for Grammatical Representation. In The Mental Representation of Grammatical Relations, pages 173–281. MIT Press, Cambridge, MA. R. Kaplan, S. Riezler, T. H. King, J. T. Maxwell, A. Vasserman, and R. Crouch. 2004. Speed and accuracy in shallow and deep stochastic parsing.
In Proceedings of the Human Language Technology Conference and the 4th Annual Meeting of the North American Chapter of the Association for Computational Linguistics, pages 97–104, Boston, MA. T.H. King, R. Crouch, S. Riezler, M. Dalrymple, and R. Kaplan. 2003. The PARC700 dependency bank. In Proceedings of the EACL03: 4th International Workshop on Linguistically Interpreted Corpora (LINC-03), pages 1–8, Budapest. D. Klein and C. Manning. 2003. Accurate Unlexicalized Parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03), pages 423–430, Sapporo, Japan. C. Macleod, A. Meyers, and R. Grishman. 1994. The COMLEX Syntax Project: The First Year. In Proceedings of the ARPA Workshop on Human Language Technology, pages 669–703, Princeton, NJ. D. Magerman. 1994. Natural Language Parsing as Statistical Pattern Recognition. PhD thesis, Stanford University, CA. M. Marcus, G. Kim, M.A. Marcinkiewicz, R. MacIntyre, A. Bies, M. Ferguson, K. Katz, and B. Schasberger. 1994. The Penn Treebank: Annotating Predicate Argument Structure. In Proceedings of the ARPA Workshop on Human Language Technology, pages 110–115, Princeton, NJ. Y. Miyao, T. Ninomiya, and J. Tsujii. 2003. Probabilistic modeling of argument structures including non-local dependencies. In Proceedings of the Conference on Recent Advances in Natural Language Processing (RANLP), pages 285–291, Borovets, Bulgaria. R. O’Donovan, M. Burke, A. Cahill, J. van Genabith, and A. Way. 2004. Large-Scale Induction and Evaluation of Lexical Resources from the Penn-II Treebank. In Proceedings of the 42nd Annual Conference of the Association for Computational Linguistics (ACL-04), Barcelona. S. Riezler, T.H. King, R. Kaplan, R. Crouch, J. T. Maxwell III, and M. Johnson. 2002. Parsing the Wall Street Journal using a Lexical-Functional Grammar and Discriminative Estimation Techniques. In Proceedings of the 40th Annual Conference of the Association for Computational Linguistics (ACL-02), pages 271–278, Philadelphia, PA. Y. Tateisi, K. Torisawa, Y. Miyao, and J. Tsujii. 1998. Translating the XTAG English Grammar to HPSG. In 4th International Workshop on Tree Adjoining Grammars and Related Frameworks, Philadelphia, PA, pages 172–175. J. van Genabith and R. Crouch. 1996. Direct and Underspecified Interpretations of LFG f-Structures. In Proceedings of the 16th International Conference on Computational Linguistics (COLING), pages 262–267, Copenhagen.
Deep dependencies from context-free statistical parsers: correcting the surface dependency approximation Roger Levy Department of Linguistics Stanford University [email protected] Christopher D. Manning Departments of Computer Science and Linguistics Stanford University [email protected] Abstract We present a linguistically-motivated algorithm for reconstructing nonlocal dependency in broad-coverage context-free parse trees derived from treebanks. We use an algorithm based on loglinear classifiers to augment and reshape context-free trees so as to reintroduce underlying nonlocal dependencies lost in the context-free approximation. We find that our algorithm compares favorably with prior work on English using an existing evaluation metric, and also introduce and argue for a new dependency-based evaluation metric. By this new evaluation metric our algorithm achieves 60% error reduction on gold-standard input trees and 5% error reduction on state-of-the-art machine-parsed input trees, when compared with the best previous work. We also present the first results on nonlocal dependency reconstruction for a language other than English, comparing performance on English and German. Our new evaluation metric quantitatively corroborates the intuition that in a language with freer word order, the surface dependencies in context-free parse trees are a poorer approximation to underlying dependency structure. 1 Introduction While parsers are being used for other purposes, the primary motivation for syntactic parsing is as an aid to semantic interpretation, in pursuit of broader goals of natural language understanding. Proponents of traditional ‘deep’ or ‘precise’ approaches to syntax, such as GB, CCG, HPSG, LFG, or TAG, have argued that sophisticated grammatical formalisms are essential to resolving various hidden relationships such as the source phrase of moved wh-phrases in questions and relativizations, or the controller of clauses without an overt subject. Knowledge of these hidden relationships is in turn essential to semantic interpretation of the kind practiced in the semantic parsing (Gildea and Jurafsky, 2002) and QA (Pasca and Harabagiu, 2001) literatures. However, work in statistical parsing has for the most part put these needs aside, being content to recover surface context-free (CF) phrase structure trees. This perhaps reflects the fact that context-free phrase structure grammar (CFG) is in some sense at the heart of the majority of both formal and computational syntactic research. Although, upon introducing it, Chomsky (1956) rejected CFG as an adequate framework for natural language description, the majority of work in the last half century has used context-free structural descriptions and related methodologies in one form or another as an important component of syntactic analysis. CFGs seem adequate to weakly generate almost all common natural language structures, and also facilitate a transparent predicate-argument and/or semantic interpretation for the more basic ones (Gazdar et al., 1985). Nevertheless, despite their success in providing surface phrase structure analyses, if statistical parsers and the representations they produce do not provide a useful stepping stone to recovering the hidden relationships, they will ultimately come to be seen as a dead end, and work will necessarily return to using richer formalisms.
In this paper we attempt to establish to what degree current statistical parsers are a useful step in analysis by examining the performance of further statistical classifiers on non-local dependency recovery from CF parse trees. The natural isomorphism from CF trees to dependency trees induces only local dependencies, derived from the head-sister relation in a CF local tree. However, if the output of a context-free parser can be algorithmically augmented to accurately identify and incorporate nonlocal dependencies, then we can say that the context-free parsing model is a safe approximation to the true task of dependency reconstruction. We investigate the safeness of this approximation, devising an algorithm to reconstruct non-local dependencies from context-free parse trees using loglinear classifiers, tested on treebanks of not only English but also German, a language with much freer word order and correspondingly more discontinuity than English. This algorithm can be used as an intermediate step between the surface output trees of modern statistical parsers and semantic interpretation systems for a variety of tasks. (Many linguistic and technical intricacies are involved in the interpretation and use of non-local annotation structure found in treebanks; a more complete exposition of the work presented here can be found in Levy (2004).)

Figure 1: Example of empty and nonlocal annotations from the Penn Treebank of English for the sentence “Farmers was quick yesterday to point out the problems it sees”, including null complementizers (0), relativization (*T*-1), right-extraposition (*ICH*-2), and syntactic control (*-3).

1.1 Previous Work Previous work on nonlocal dependency has focused entirely on English, despite the disparity in type and frequency of various non-local dependency constructions for varying languages (Kruijff, 2002). Collins (1999)’s Model 3 investigated GPSG-style trace threading for resolving nonlocal relative pronoun dependencies. Johnson (2002) was the first post-processing approach to non-local dependency recovery, using a simple pattern-matching algorithm on context-free trees. Dienes and Dubey (2003a,b) and Dienes (2003) approached the problem by pre-identifying empty categories using an HMM on unparsed strings and threaded the identified empties into the category structure of a context-free parser, finding that this method compared favorably with both Collins’ and Johnson’s. Traditional LFG parsing, in both non-stochastic (Kaplan and Maxwell, 1993) and stochastic (Riezler et al., 2002; Kaplan et al., 2004) incarnations, also divides the labor of local and nonlocal dependency identification into two phases, starting with context-free parses and continuing by augmentation with functional information. 2 Datasets The datasets used for this study consist of the Wall Street Journal section of the Penn Treebank of English (WSJ) and the context-free version of the NEGRA (version 2) corpus of German (Skut et al., 1997b). Full-size experiments on WSJ described in Section 4 used the standard sections 2-21 for training, 24 for development, and trees whose yield is under 100 words from section 23 for testing.
Experiments described in Section 4.3 used the same development and test sets but files 200-959 of WSJ as a smaller training set; for NEGRA we followed Dubey and Keller (2003) in using the first 18,602 sentences for training, the last 1,000 for development, and the previous 1,000 for testing. Consistent with prior work and with common practice in statistical parsing, we stripped categories of all functional tags prior to training and testing (though in several cases this seems to have been a limiting move; see Section 5). Nonlocal dependency annotation in Penn Treebanks can be divided into three major types: unindexed empty elements, dislocations, and control. The first type consists primarily of null complementizers, as exemplified in Figure 1 by the null relative pronoun 0 (cf. aspects that it sees); these do not participate in (though they may mediate) nonlocal dependency. The second type consists of a dislocated element coindexed with an origin site of semantic interpretation, as in the association in Figure 1 of WHNP-1 with the direct object position of sees (a relativization), and the association of S-2 with the ADJP quick (a right dislocation). This type encompasses the classic cases of nonlocal dependency: topicalization, relativization, wh-movement, and right dislocation, as well as expletives and other instances of non-canonical argument positioning. The third type involves control loci in syntactic argument positions, sometimes coindexed with overt controllers, as in the association of the NP Farmers with the empty subject position of the S-2 node. (An example of a control locus with no controller would be [S NP-* [VP Eating ice cream ]] is fun.) Controllers are to be interpreted as syntactic (and possibly semantic) arguments both in their overt position and in the position of loci they control. This type encompasses raising, control, passivization, and unexpressed subjects of to-infinitive and gerund verbs, among other constructions. (Four of the annotation errors in WSJ lead to uninterpretable dislocation and sharing patterns, including failure to annotate dislocations corresponding to marked origin sites, and mislabelings of control loci as origin sites of dislocation that lead to cyclic dislocations (which are explicitly prohibited in WSJ annotation guidelines). We corrected these errors manually before model testing and training.) NEGRA’s original annotation is as dependency trees with phrasal nodes, crossing branches, and no empty elements. However, the distribution includes a context-free version produced algorithmically by recursively remapping discontinuous parts of nodes upward into higher phrases and marking their sites of origin. (For a detailed description of the algorithm for creating the context-free version of NEGRA, see Skut et al. (1997a).) The resulting “traces” correspond roughly to a subclass of the second class of Penn Treebank empties discussed above, and include wh-movement, topicalization, right extrapositions from NP, expletives, and scrambling of subjects after other complements.
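Coming back to the three WSJ annotation types distinguished above, the rough sketch below shows how they can be told apart in a bracketed Penn Treebank string (here a simplified version of the tree in Figure 1). The mapping from empty-element symbols to types is a simplification of the full WSJ annotation guidelines.

# Spot empty elements in a bracketed Penn Treebank string and classify them
# roughly as dislocation traces (*T*, *ICH*, *EXP*, *RNR*), control loci
# (bare *, possibly indexed), or unindexed empties such as null
# complementizers (0).
import re

EMPTY = re.compile(r"\(-NONE-\s+([^)\s]+)\)")

def classify_empties(tree_string):
    out = []
    for symbol in EMPTY.findall(tree_string):
        if re.match(r"\*(T|ICH|EXP|RNR)\*", symbol):
            kind = "dislocation origin"
        elif re.match(r"\*(-\d+)?$", symbol):
            kind = "control locus"
        else:
            kind = "unindexed empty"
        out.append((symbol, kind))
    return out

tree = ("(S (NP-3 (NNP Farmers)) (VP (VBD was) (ADJP (JJ quick) "
        "(S (-NONE- *ICH*-2))) (NP (NN yesterday)) "
        "(S-2 (NP (-NONE- *-3)) (VP (TO to) (VP (VB point) "
        "(SBAR (WHNP-1 (-NONE- 0)) (S (NP (PRP it)) (VP (VBZ sees) "
        "(NP (-NONE- *T*-1)))))))))")
print(classify_empties(tree))
# [('*ICH*-2', 'dislocation origin'), ('*-3', 'control locus'),
#  ('0', 'unindexed empty'), ('*T*-1', 'dislocation origin')]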
“The RMV will not begin to be formed for a long time.” Figure 2: Nonlocal dependencies via right-extraposition (*T1*) and topicalization (*T2*) in the NEGRA corpus of German, before (top) and after (bottom) transformation to context-free form. Dashed lines show where nodes go as a result of remapping into context-free form. jects after other complements. The positioning of NEGRA’s “traces” inside the mother node is completely algorithmic; a dislocated constituent C has its trace at the edge of the original mother closest to C’s overt position. Given a context-free NEGRA tree shorn of its trace/antecedent notation, however, it is far from trivial to determine which nodes are dislocated, and where they come from. Figure 2 shows an annotated sentence from the NEGRA corpus with discontinuities due to right extraposition (*T1*) and topicalization (*T2*), before and after transformation into context-free form with traces. 3 Algorithm Corresponding to the three types of empty-element annotation found in the Penn Treebank, our algorithm divides the process of CF tree enhancement into three phases. Each phase involves the identification of a certain subset of tree nodes to be operated on, followed by the application of the appropriate operation to the node. Operations may involve the insertion of a category at some position among a node’s daughters; the marking of certain nodes as dislocated; or the relocation of dislocated nodes to other positions within the tree. The content and ordering of phases is consistent with the syntactic theory upon which treebank annotation is based. For example, WSJ annotates relative clauses lacking overt relative pronouns, such as the SBAR in Figure 1, with a trace in the relativization site whose antecedent is an empty relative pronoun. This requires that empty relative pronoun insertion precede dislocated element identification. Likewise, dislocated elements can serve as controllers of control loci, based on their originating site, so it is sensible to return dislocated nodes to their originating sites before identifying control loci and their controllers. For WSJ, the three phases are: 1. (a) Determine nodes at which to insert null COMPlementizers4 (IDENTNULL) (b) For each COMP insertion node, determine position of each insertion and insert COMP (INSERTNULL) 2. (a) Classify each tree node as +/- DISLOCATED (IDENTMOVED) (b) For each DISLOCATED node, choose an ORIGIN node (RELOCMOVED) (c) For each pair ⟨DISLOCATED,origin⟩, choose a position of insertion and insert dislocated (INSERTRELOC) 3. (a) Classify each node as +/- control LOCUS (IDENTLOCUS) (b) For each LOCUS, determine position of insertion and insert LOCUS (INSERTLOCUS) (c) For each LOCUS, determine CONTROLLER (if any) (FINDCONTROLLER) Note in particular that phase 2 involves the classification of overt tree nodes as dislocated, followed by the identification of an origin site (annotated in the treebank as an empty node) for each dislocated element; whereas phase 3 involves the identification of (empty) control loci first, and of controllers later. This approach contrasts with Johnson (2002), who treats empty/antecedent identification as a joint task, and with Dienes and Dubey (2003a,b), who always identify empties first and determine antecedents later. 
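To make the phase structure concrete, the following is a minimal Python sketch (ours, not the authors' published code) that walks a tree through the three WSJ phases. The Node class and the classifier interface are illustrative assumptions only; each entry of `clf` stands in for one of the trained loglinear classifiers named above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Minimal tree representation for illustration only; the paper does not publish
# code, so Node, the clf dict of callables, and the insertion conventions below
# are all our own assumptions.

@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)
    controller: Optional["Node"] = None

    def nodes(self):
        yield self
        for c in self.children:
            yield from c.nodes()

def enhance_wsj_tree(tree: Node, clf: dict) -> Node:
    # Phase 1: insert null complementizers (COMP).
    for n in list(tree.nodes()):
        if clf["IDENTNULL"](n, tree):                    # 1(a): where to insert
            pos = clf["INSERTNULL"](n, tree)             # 1(b): position among daughters
            n.children.insert(pos, Node("COMP-NULL"))

    # Phase 2: mark dislocated nodes, choose an origin, insert a trace there.
    dislocated = [n for n in tree.nodes() if clf["IDENTMOVED"](n, tree)]   # 2(a)
    for n in dislocated:
        origin = clf["RELOCMOVED"](n, tree)              # 2(b): competition over nodes
        pos = clf["INSERTRELOC"](n, origin)              # 2(c)
        origin.children.insert(pos, Node("*TRACE*"))

    # Phase 3: insert control loci, then look for controllers (possibly none).
    for n in list(tree.nodes()):
        if clf["IDENTLOCUS"](n, tree):                   # 3(a)
            pos = clf["INSERTLOCUS"](n, tree)            # 3(b)
            locus = Node("*")
            n.children.insert(pos, locus)
            locus.controller = clf["FINDCONTROLLER"](locus, tree)   # 3(c): may be None
    return tree
```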
Our motivation is that it should generally be easier to determine whether an overt element is dislocated than whether a given position is the origin of some yet unknown dislocated element (particularly in the absence of a sophisticated model of argument expression); but control loci are highly predictable from local context, such as the subjectless non-finite S in Figure 1’s S-2.5 Indeed this difference seems to be implicit in the nonlocal feature templates used by Dienes and Dubey (2003a,b) in their empty element tagger, in particular lookback for wh- words preceding a candidate verb. As described in Section 2, NEGRA’s nonlocal annotation schema is much simpler, involving no 4The WSJ contains a number of SBARs headed by empty complementizers with trace S’s. These SBARs are introduced in our algorithm as projections of identified empty complementizers as daughters of non-SBAR categories. 5Additionally, whereas dislocated nodes are always overt, control loci may be controlled by other (null) control loci, meaning that identifying controllers before control loci would still entail looking for nulls. IDENTMOVED S NP⟨it/there⟩ VP S/SBAR Expletive dislocation IDENTLOCUS S VP ⟨⟩ VP-internal context to determine null subjecthood INSERTNULLS S VP Possible null complementizer (records syntactic path from every S in sentence) Figure 3: Different classifiers’ specialized tree-matching fragments and their purposes uncoindexed empties or control loci. Correspondingly, our NEGRA algorithm includes only phase 2 of the WSJ algorithm, step (c) of which is trivial for NEGRA due to the deterministic positioning of trace insertion in the treebank. In each case we use a loglinear model for node classification, with a combination of quadratic regularization and thresholding by individual feature count to prevent overfitting. In the second and third parts of phases 2 and 3, when determining an originating site or controller for a given node N, or an insertion position for a node N′ in N, we use a competition-based setting, using a binary classification (yes/no for association with N) on each node in the tree, and during testing choosing the node with the highest score for positive association with N.6 All other phases of classification involve independent decisions at each node. In phase 3, we include a special zero node to indicate a control locus with no antecedent. 3.1 Feature templates Each subphase of our dependency reconstruction algorithm involves the training of a separate model and the development of a separate feature set. We found that it was important to include both a variety of general feature templates and a number of manually designed, specialized features to resolve specific problems observed for individual classifiers. We developed all feature templates exclusively on the training and development sets specified in Section 2. Table 1 shows which general feature templates we used in each classifier. The features are 6The choice of a unique origin site makes our algorithm unable to deal with right-node raising or parasitic gaps. Cases of right-node raising could be automatically transformed into single-origin dislocations by making use of a theory of coordination such as Maxwell and Manning (1996), while parasitic gaps could be handled with the introduction of a secondary classifier. Both phenomena are low-frequency, however, and we ignore them here. 
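At test time the competition-based setting reduces to an argmax over candidate nodes. A minimal sketch, assuming a binary classifier that returns a positive-association score (the function and variable names are ours, not the authors'):

```python
from typing import Callable, Iterable

def pick_associate(target, candidates: Iterable, score_positive: Callable,
                   allow_none: bool = False):
    """Competition-based selection: score every candidate for positive
    association with `target` and return the highest-scoring one.

    `score_positive(target, candidate)` is assumed to return the binary
    loglinear classifier's score for "yes".  With allow_none=True a null
    candidate competes as well, mirroring the special zero node used for
    control loci with no antecedent."""
    pool = list(candidates) + ([None] if allow_none else [])
    return max(pool, key=lambda c: score_positive(target, c))

# Toy usage with an invented scoring function, purely for illustration.
if __name__ == "__main__":
    cands = ["NP-subj", "NP-obj", "S-root"]
    toy_score = lambda t, c: {"NP-subj": 0.7, "NP-obj": 0.2, "S-root": 0.1, None: 0.05}[c]
    print(pick_associate("*locus*", cands, toy_score, allow_none=True))   # -> "NP-subj"
```

Treating the no-antecedent case as one more candidate keeps FINDCONTROLLER inside the same competition as RELOCMOVED.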
Feature type IdentNull InsertNull IdentMoved RelocMoved InsertReloc IdentLocus InsertLocus FindController TAG ✓ ✓ HD ✓ CAT×MCAT ⊗ ✓ CAT×MCAT×GCAT ✓ ✓ ✓ CAT×HD×MCAT×MHD ⊗ CAT×TAG×MCAT×MTAG ⊗ CAT×TAG ✓ ✓ CAT×HD ⊗ (FIRST/LAST)CAT ✓ ✓ (L/RSIS)CAT ✓ ✓ DPOS×CAT ✓ PATH ✓ ✓ CAT×RCAT ✓ TAG×RCAT ✓ CAT×TAG×RCAT ✓ CAT×RCAT×DPOS ✓ HD×RHD ⊗ CAT×HD×RHD ✓ CAT×DCAT ✓ ✓ ✓ ✓ MHD×HD ⊗ # Special 9 0 11 0 0 12 0 3 Table 1: Shared feature templates. See text for template descriptions. # Special is the number of special templates used for the classifier. ⊗denotes that all subsets of the template conjunction were included. coded as follows. The prefixes {∅,M,G,D,R} indicate that the feature value is calculated with respect to the node in question, its mother, grandmother, daughter, or relative node respectively.7 {CAT,POS,TAG,WORD} stand for syntactic category, position (of daughter) in mother, head tag, and head word respectively. For example, when determining whether an infinitival VP is extraposed, such as S-2 in Figure 1, the plausibility of the VP head being a deep dependent of the head verb is captured with the MHD×HD template. (FIRST/LAST)CAT and (L/RSIS)CAT are templates used for choosing the position to insert insert relocated nodes, respectively recording whether a node of a given category is the first/last daughter, and the syntactic category of a node’s left/right sisters. PATH is the syntactic path between relative and base node, defined as the list of the syntactic categories on the (inclusive) node path linking the relative node to the node in question, paired with whether the step on the path was upward or downward. For example, in Figure 2 the syntactic path from VP-1 to PP is [↑-VP,↑S,↓-VP,↓-PP]. This is a crucial feature for the relativized classifiers RELOCATEMOVED and FINDCONTROLLER; in an abstract sense it mediates the gap-threading information incorporated into GPSG7The relative node is DISLOCATED in RELOCMOVED and LOCUS in FINDCONTROLLER. Gold trees Parser output Jn Pres Jn DD Pres NP-* 62.4 75.3 55.6 (69.5) 61.1 WH-t 85.1 67.6 80.0 (82.0) 63.3 0 89.3 99.6 77.1 (48.8) 87.0 SBAR 74.8 74.7 71.0 73.8 71.0 S-t 90 93.3 87 84.5 83.6 Table 2: Comparison with previous work using Johnson’s PARSEVAL metric. Jn is Johnson (2002); DD is Dienes and Dubey (2003b); Pres is the present work. style (Gazdar et al., 1985) parsers, and in concrete terms it closely matches the information derived from Johnson (2002)’s connected local tree set patterns. Gildea and Jurafsky (2002) is to our knowledge the first use of such a feature for classification tasks on syntactic trees; they found it important for the related task of semantic role identification. We expressed specialized hand-coded feature templates as tree-matching patterns that capture a fragment of the content of the pattern in the feature value. Representative examples appear in Figure 3. The italicized node is the node for which a given feature is recorded; underscores indicate variables that can match any category; and the angle-bracketed parts of the tree fragment, together with an index for the pattern, determine the feature value.8 4 Evaluation 4.1 Comparison with previous work Our algorithm’s performance can be compared with the work of Johnson (2002) and Dienes and Dubey (2003a) on WSJ. Valid comparisons exist for the insertion of uncoindexed empty nodes (COMP and ARB-SUBJ), identification of control and raising loci (CONTROLLOCUS), and pairings of dislocated and controller/raised nodes with their origins (DISLOC,CONTROLLER). 
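As an illustration of the PATH template defined in Section 3.1, the up/down category path between two nodes can be computed along the following lines. The node interface (.parent, .label) and the function itself are our own reconstruction, not the paper's feature extractor.

```python
def syntactic_path(relative, base):
    """PATH feature between `relative` and `base`: the inclusive node path as
    (direction, category) steps, e.g. from VP-1 to PP in Figure 2 this yields
    [('up','VP'), ('up','S'), ('down','VP'), ('down','PP')]."""
    def ancestors(n):                      # the node itself plus all ancestors, bottom-up
        chain = []
        while n is not None:
            chain.append(n)
            n = n.parent
        return chain

    up, down = ancestors(relative), ancestors(base)
    lca = next(n for n in up if n in down)                       # lowest common ancestor
    upward = [("up", n.label) for n in up[:up.index(lca) + 1]]
    downward = [("down", n.label) for n in reversed(down[:down.index(lca)])]
    return upward + downward

class N:                                    # tiny node class for the example only
    def __init__(self, label, parent=None):
        self.label, self.parent = label, parent

# Toy configuration mirroring Figure 2: S dominates VP-1 and a VP containing a PP.
s = N("S"); vp1 = N("VP", s); vp = N("VP", s); pp = N("PP", vp)
print(syntactic_path(vp1, pp))   # [('up', 'VP'), ('up', 'S'), ('down', 'VP'), ('down', 'PP')]
```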
In Table 2 we present comparative results, using the PARSEVAL-based evaluation metric introduced by Johnson (2002) – a correct empty category inference requires the string position of the empty category, combined with the left and right boundaries plus syntactic category of the antecedent, if any, for purposes of comparison.9,10 Note that this evaluation metric does not require correct attachment of the empty category into the parse tree. In Figure 1, for example, WHNP-1 could be erroneously remapped to the right edge of any S or VP node in the sentence without resulting in error according to this metric. We therefore abandon this metric in further evaluations as it is not clear whether it adequately approximates performance in predicate-argument structure recovery.11
8A complete description of feature templates can be found at http://nlp.stanford.edu/˜rog/acl2004/templates/index.html
9For purposes of comparability with Johnson (2002) we used Charniak's 2000 parser as P.
10Our algorithm was evaluated on a more stringent standard for NP-* than in previous work: control loci-related mappings were done after dislocated nodes were actually relocated by the algorithm, so an incorrect dislocation remapping can render incorrect the indices of a correct NP-* labeled bracketing. Additionally, our algorithm does not distinguish the syntactic category of null insertions, whereas previous work has; as a result, the null complementizer class 0 and WH-t dislocation class are aggregates of classes used in previous work.
          PCF    P     A◦P   J◦P   D     G     A◦G   J◦G
Overall   91.2   87.6  90.5  90.0  88.3  95.7  99.4  98.5
NP        91.6   89.9  91.4  91.2  89.4  97.9  99.8  99.6
S         93.3   83.4  91.2  89.9  89.2  89.0  98.0  96.0
VP        91.2   87.3  90.2  89.6  88.0  95.2  99.0  97.7
ADJP      73.1   72.8  72.9  72.8  72.5  99.7  99.6  98.8
SBAR      94.4   66.7  89.3  84.9  85.0  72.6  99.4  94.1
ADVP      70.1   69.7  69.5  69.7  67.7  99.4  99.4  99.7
Table 3: Typed dependency F1 performance when composed with statistical parser. PCF is parser output evaluated by context-free (shallow) dependencies; all others are evaluated on deep dependencies. P is parser, G is string-to-context-free-gold-tree mapping, A is present remapping algorithm, J is Johnson 2002, D is the COMBINED model of Dienes 2003.
4.2 Composition with a context-free parser If we think of a statistical parser as a function from strings to CF trees, and the nonlocal dependency recovery algorithm A presented in this paper as a function from trees to trees, we can naturally compose our algorithm with a parser P to form a function A◦P from strings to trees whose dependency interpretation is, hopefully, an improvement over the trees from P. To test this idea quantitatively we evaluate performance with respect to recovery of typed dependency relations between words. A dependency relation, commonly employed for evaluation in the statistical parsing literature, is defined at a node N of a lexicalized parse tree as a pair ⟨wi, wj⟩ where wi is the lexical head of N and wj is the lexical head of some non-head daughter of N. Dependency relations may further be typed according to information at or near the relevant tree node; Collins (1999), for example, reports dependency scores typed on the syntactic categories of the mother, head daughter, and dependent daughter, plus on whether the dependent precedes or follows the head. We present here dependency evaluations where the gold-standard dependency set is defined by the remapped tree, typed
11Collins (1999) reports 93.8%/90.1% precision/recall in his Model 3 for accurate identification of relativization site in noninfinitival relative clauses. This figure is difficult to compare directly with other figures in this section; a tree search indicates that non-infinitival subjects make up at most 85.4% of the WHNP dislocations in WSJ. Performance on gold trees Performance on parsed trees ID Rel Combo ID Combo P R F1 Acc P R F1 P R F1 P R F1 WSJ(full) 92.0 82.9 87.2 95.0 89.6 80.1 84.6 34.5 47.6 40.0 17.8 24.3 20.5 WSJ(sm) 92.3 79.5 85.5 93.3 90.4 77.2 83.2 38.0 47.3 42.1 19.7 24.3 21.7 NEGRA 73.9 64.6 69.0 85.1 63.3 55.4 59.1 48.3 39.7 43.6 20.9 17.2 18.9 Table 4: Cross-linguistic comparison of dislocated node identification and remapping. ID is correct identification of nodes as +/– dislocated; Rel is relocation of node to correct mother given gold-standard data on which nodes are dislocated (only applicable for gold trees); Combo is both correct identification and remapping. by syntactic category of the mother node.12 In Figure 1, for example, to would be an ADJP dependent of quick rather than a VP dependent of was; and Farmers would be an S dependent both of to in to point out . . . and of was. We use the head-finding rules of Collins (1999) to lexicalize trees, and assume that null complementizers do not participate in dependency relations. To further compare the results of our algorithm with previous work, we obtained the output trees produced by Johnson (2002) and Dienes (2003) and evaluated them on typed dependency performance. Table 3 shows the results of this evaluation. For comparison, we include shallow dependency accuracy for Charniak’s parser under PCF. 4.3 Cross-linguistic comparison In order to compare the results of nonlocal dependency reconstruction between languages, we must identify equivalence classes of nonlocal dependency annotation between treebanks. NEGRA’s nonlocal dependency annotation is quite different from WSJ, as described in Section 2, ignoring controlled and arbitrary unexpressed subjects. The natural basis of comparison is therefore the set of all nonlocal NEGRA annotations against all WSJ dislocations, excluding relativizations (defined simply as dislocated wh- constituents under SBAR).13 Table 4 shows the performance comparison between WSJ and NEGRA of IDENTDISLOC and RELOCMOVED, on sentences of 40 tokens or less. For this evaluation metric we use syntactic category and left & right edges of (1) dislocated nodes (ID); and (2) originating mother node to which dislocated node is mapped (Rel). Combo requires both (1) and (2) to be correct. NEGRA is smaller than WSJ (∼350,000 words vs. 1 million), so for fair 12Unfortunately, 46 WSJ dislocation annotations in this testset involve dislocated nodes dominating their origin sites. It is not entirely clear how to interpret the intended semantics of these examples, so we ignore them in evaluation. 13The interpretation of comparative results must be modulated by the fact that more total time was spent on feature engineering for WSJ than for NEGRA, and the first author, who engineered the NEGRA feature set, is not a native speaker of German. comparison we tested WSJ using the smaller training set described in Section 2, comparable in size to NEGRA’s. Since the positioning of traces within NEGRA nodes is trivial, we evaluate remapping and combination performances requiring only proper selection of the originating mother node; thus we carry the algorithm out on both treebanks through step (2b). 
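The typed dependency extraction used in these evaluations can be approximated as follows. This is a simplified reconstruction for illustration only, not the evaluation code actually used: `head_child` stands in for a Collins-style head-finding rule, the node interface (.label, .children, with word strings as leaf labels) is our convention, and null complementizer nodes would simply be skipped before extraction.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class T:                       # minimal constituent-tree node for illustration
    label: str
    children: List["T"] = field(default_factory=list)

def lexical_head(node, head_child):
    """Percolate lexical heads bottom-up: a leaf is its own head, otherwise the
    head of the daughter selected by the (assumed) head-finding rule."""
    if not node.children:                      # terminal: the word itself
        return node.label
    return lexical_head(head_child(node), head_child)

def typed_dependencies(node, head_child, deps=None):
    """Collect <w_i, w_j> pairs where w_i heads `node` and w_j heads a non-head
    daughter, typed by the syntactic category of the mother node."""
    if deps is None:
        deps = []
    if node.children:
        head = head_child(node)
        w_i = lexical_head(node, head_child)
        for d in node.children:
            if d is not head:
                deps.append((node.label, w_i, lexical_head(d, head_child)))
            typed_dependencies(d, head_child, deps)
    return deps
```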
This is adequate for purposes of our typed dependency evaluation in Section 4.2, since typed dependencies do not depend on positional information. State-of-the-art statistical parsing is far better on WSJ (Charniak, 2000) than on NEGRA (Dubey and Keller, 2003), so for comparison of parser-composed dependency performance we used vanilla PCFG models for both WSJ and NEGRA trained on comparably-sized datasets; in addition to making similar types of independence assumptions, these models performed relatively comparably on labeled bracketing measures for our development sets (73.2% performance for WSJ versus 70.9% for NEGRA). Table 5 compares the testset performance of algorithms on the two treebanks on the typed dependency measure introduced in Section 4.2.14 5 Discussion The WSJ results shown in Tables 2 and 3 suggest that discriminative models incorporating both nonlocal and local lexical and syntactic information can achieve good results on the task of non-local dependency identification. On the PARSEVAL metric, our algorithm performed particularly well on null complementizer and control locus insertion, and on S node relocation. In particular, Johnson noted that the proper insertion of control loci was a difficult issue involving lexical as well as structural sensitivity. We found the loglinear paradigm a good one in which to model this feature combination; when run in isolation on gold-standard development trees, our model reached 96.4% F1 on control locus insertion, reducing error over the Johnson model’s 89.3% 14Many head-dependent relations in NEGRA are explicitly marked, but for those that are not we used a Collins (1999)style head-finding algorithm independently developed for German PCFG parsing. PCF P A ◦P G A ◦G WSJ(full) 76.3 75.4 75.7 98.7 99.7 WSJ(sm) 76.3 75.4 75.7 98.7 99.6 NEGRA 62.0 59.3 61.0 90.9 93.6 Table 5: Typed dependency F1 performance when composed with statistical parser. Remapped dependencies involve only non-relativization dislocations and exclude control loci. by nearly two-thirds. The performance of our algorithm is also evident in the substantial contribution to typed dependency accuracy seen in Table 3. For gold-standard input trees, our algorithm reduces error by over 80% from the surface-dependency baseline, and over 60% compared with Johnson’s results. For parsed input trees, our algorithm reduces dependency error by 23% over the baseline, and by 5% compared with Johnson’s results. Note that the dependency figures of Dienes lag behind even the parsed results for Johnson’s model; this may well be due to the fact that Dienes built his model as an extension of Collins (1999), which lags behind Charniak (2000) by about 1.3-1.5%. Manual investigation of errors on English goldstandard data revealed two major issues that suggest further potential for improvement in performance without further increase in algorithmic complexity or training set size. First, we noted that annotation inconsistency accounted for a large number of errors, particularly false positives. VPs from which an S has been extracted ([SShut up,] he [VP said t]) are inconsistently given an empty SBAR daughter, suggesting the cross-model low-70’s performance on null SBAR insertion models (see Table 2) may be a ceiling. Control loci were often under-annotated; the first five development-set false positive control loci we checked were all due to annotation error. And why-WHADVPs under SBAR, which are always dislocations, were not so annotated 20% of the time. 
Second, both control locus insertion and dislocated NP remapping must be sensitive to the presence of argument NPs under classified nodes. But temporal NPs, indistinguishable by gross category, also appear under such nodes, creating a major confound. We used customized features to compensate to some extent, but temporal annotation already exists in WSJ and could be used. We note that Klein and Manning (2003) independently found retention of temporal NP marking useful for PCFG parsing. As can be seen in Table 3, the absolute improvement in dependency recovery is smaller for both our and Johnson’s postprocessing algorithms when applied to parsed input trees than when applied to gold-standard input trees. It seems that this degradation is not primarily due to noise in parse tree outputs reducing recall of nonlocal dependency identification: precision/recall splits were largely the same between gold and parsed data, and manual inspection revealed that incorrect nonlocal dependency choices often arose from syntactically reasonable yet incorrect input from the parser. For example, the gold-standard parse right-wing whites . . . will [VP step up [NP their threats [S [VP * to take matters into their own hands ]]]] has an unindexed control locus because Treebank annotation specifies that infinitival VPs inside NPs are not assigned controllers. Charniak’s parser, however, attaches the infinitival VP into the higher step up . . . VP. Infinitival VPs inside VPs generally do receive controllers for their null subjects, and our algorithm accordingly yet mistakenly assigns right-wing-whites as the antecedent. The English/German comparison shown in Tables 4 and 5 is suggestive, but caution is necessary in its interpretation due to the fact that differences in both language structure and treebank annotation may be involved. Results in the G column of Table 5, showing the accuracy of the context-free dependency approximation from gold-standard parse trees, quantitatively corroborates the intuition that nonlocal dependency is more prominent in German than in English. Manual investigation of errors made on German gold-standard data revealed two major sources of error beyond sparsity. The first was a widespread ambiguity of S and VP nodes within S and VP nodes; many true dislocations of all sorts are expressed at the S and VP levels in CFG parse trees, such as VP1 of Figure 2, but many adverbial and subordinate phrases of S or VP category are genuine dependents of the main clausal verb. We were able to find a number of features to distinguish some cases, such as the presence of certain unambiguous relativeclause introducing complementizers beginning an S node, but much ambiguity remained. The second was the ambiguity that some matrix S-initial NPs are actually dependents of the VP head (in these cases, NEGRA annotates the finite verb as the head of S and the non-finite verb as the head of VP). This is not necessarily a genuine discontinuity per se, but rather corresponds to identification of the subject NP in a clause. Obviously, having access to reliable case marking would improve performance in this area; such information is in fact included in NEGRA’s morphological annotation, another argument for the utility of involving enhanced annotation in CF parsing. As can be seen in the right half of Table 4, performance falls off considerably on vanilla PCFGparsed data. 
This fall-off seems more dramatic than that seen in Sections 4.1 and 4.2, no doubt partly due to the poorer performance of the vanilla PCFG, but likely also because only non-relativization dislocations are considered in Section 4.3. These dislocations often require non-local information (such as identity of surface lexical governor) for identification and are thus especially susceptible to degradation in parsed data. Nevertheless, seemingly dismal performance here still provided a strong boost to typed dependency evaluation of parsed data, as seen in A ◦P of Table 5. We suspect this indicates that dislocated terminals are being usefully identified and mapped back to their proper governors, even if the syntactic projections of these terminals and governors are not being correctly identified by the parser. 6 Further Work Against the background of CFG as the standard approximation of dependency structure for broadcoverage parsing, there are essentially three options for the recovery of nonlocal dependency. The first option is to postprocess CF parse trees, which we have closely investigated in this paper. The second is to incorporate nonlocal dependency information into the category structure of CF trees. This was the approach taken by Dienes and Dubey (2003a,b) and Dienes (2003); it is also practiced in recent work on broad-coverage CCG parsing (Hockenmaier, 2003). The third would be to incorporate nonlocal dependency information into the edge structure parse trees, allowing discontinuous constituency to be explicitly represented in the parse chart. This approach was tentatively investigated by Plaehn (2000). As the syntactic diversity of languages for which treebanks are available grows, it will become increasingly important to compare these three approaches. 7 Acknowledgements This work has benefited from feedback from Dan Jurafsky and three anonymous reviewers, and from presentation at the Institute of Cognitive Science, University of Colorado at Boulder. The authors are also grateful to Dan Klein and Jenny Finkel for use of maximum-entropy software they wrote. This work was supported in part by the Advanced Research and Development Activity (ARDA)’s Advanced Question Answering for Intelligence (AQUAINT) Program. References Charniak, E. (2000). A Maximum-Entropy-inspired parser. In Proceedings of NAACL. Chomsky, N. (1956). Three models for the description of language. IRE Transactions on Information Theory, 2(3):113– 124. Collins, M. (1999). Head-Driven Statistical Models for Natural Language Parsing. PhD thesis, University of Pennsylvania. Dienes, P. (2003). Statistical Parsing with Non-local Dependencies. PhD thesis, Saarland University. Dienes, P. and Dubey, A. (2003a). Antecedent recovery: Experiments with a trace tagger. In Proceedings of EMNLP. Dienes, P. and Dubey, A. (2003b). Deep processing by combining shallow methods. In Proceedings of ACL. Dubey, A. and Keller, F. (2003). Parsing German with sisterhead dependencies. In Proceedings of ACL. Gazdar, G., Klein, E., Pullum, G., and Sag, I. (1985). Generalized Phrase Structure Grammar. Harvard. Gildea, D. and Jurafsky, D. (2002). Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. Hockenmaier, J. (2003). Data and models for Statistical Parsing with Combinatory Categorial Grammar. PhD thesis, University of Edinburgh. Johnson, M. (2002). A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of ACL, volume 40. Kaplan, R., Riezler, S., King, T. H., Maxwell, J. 
T., Vasserman, A., and Crouch, R. (2004). Speed and accuracy in shallow and deep stochastic parsing. In Proceedings of NAACL. Kaplan, R. M. and Maxwell, J. T. (1993). The interface between phrasal and functional constraints. Computational Linguistics, 19(4):571–590. Klein, D. and Manning, C. D. (2003). Accurate unlexicalized parsing. In Proceedings of ACL. Kruijff, G.-J. (2002). Learning linearization rules from treebanks. Invited talk at the Formal Grammar’02/COLOGNET-ELSNET Symposium. Levy, R. (2004). Probabilistic Models of Syntactic Discontinuity. PhD thesis, Stanford University. In progress. Maxwell, J. T. and Manning, C. D. (1996). A theory of nonconstituent coordination based on finite-state rules. In Butt, M. and King, T. H., editors, Proceedings of LFG. Pasca, M. and Harabagiu, S. M. (2001). High performance question/answering. In Proceedings of SIGIR. Plaehn, O. (2000). Computing the most probable parse for a discontinuous phrase structure grammar. In Proceedings of IWPT, Trento, Italy. Riezler, S., King, T. H., Kaplan, R. M., Crouch, R. S., Maxwell, J. T., and Johnson, M. (2002). Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques. In Proceedings of ACL, pages 271– 278. Skut, W., Brants, T., Krenn, B., and Uszkoreit, H. (1997a). Annotating unrestricted German text. In Fachtagung der Sektion Computerlinguistik der Deutschen Gesellschaft fr Sprachwissenschaft, Heidelberg, Germany. Skut, W., Krenn, B., Brants, T., and Uszkoreit, H. (1997b). An annotation scheme for free word order languages. In Proceedings of ANLP. | 2004 | 42 |
A Study on Convolution Kernels for Shallow Semantic Parsing Alessandro Moschitti University of Texas at Dallas Human Language Technology Research Institute Richardson, TX 75083-0688, USA [email protected] Abstract In this paper we have designed and experimented novel convolution kernels for automatic classification of predicate arguments. Their main property is the ability to process structured representations. Support Vector Machines (SVMs), using a combination of such kernels and the flat feature kernel, classify PropBank predicate arguments with accuracy higher than the current argument classification stateof-the-art. Additionally, experiments on FrameNet data have shown that SVMs are appealing for the classification of semantic roles even if the proposed kernels do not produce any improvement. 1 Introduction Several linguistic theories, e.g. (Jackendoff, 1990) claim that semantic information in natural language texts is connected to syntactic structures. Hence, to deal with natural language semantics, the learning algorithm should be able to represent and process structured data. The classical solution adopted for such tasks is to convert syntax structures into flat feature representations which are suitable for a given learning model. The main drawback is that structures may not be properly represented by flat features. In particular, these problems affect the processing of predicate argument structures annotated in PropBank (Kingsbury and Palmer, 2002) or FrameNet (Fillmore, 1982). Figure 1 shows an example of a predicate annotation in PropBank for the sentence: "Paul gives a lecture in Rome". A predicate may be a verb or a noun or an adjective and most of the time Arg 0 is the logical subject, Arg 1 is the logical object and ArgM may indicate locations, as in our example. FrameNet also describes predicate/argument structures but for this purpose it uses richer semantic structures called frames. These latter are schematic representations of situations involving various participants, properties and roles in which a word may be typically used. Frame elements or semantic roles are arguments of predicates called target words. In FrameNet, the argument names are local to a particular frame. Predicate Arg. 0 Arg. M S N NP D N VP V Paul in gives a lecture PP IN N Rome Arg. 1 Figure 1: A predicate argument structure in a parse-tree representation. Several machine learning approaches for argument identification and classification have been developed (Gildea and Jurasfky, 2002; Gildea and Palmer, 2002; Surdeanu et al., 2003; Hacioglu et al., 2003). Their common characteristic is the adoption of feature spaces that model predicate-argument structures in a flat representation. On the contrary, convolution kernels aim to capture structural information in term of sub-structures, providing a viable alternative to flat features. In this paper, we select portions of syntactic trees, which include predicate/argument salient sub-structures, to define convolution kernels for the task of predicate argument classification. In particular, our kernels aim to (a) represent the relation between predicate and one of its arguments and (b) to capture the overall argument structure of the target predicate. Additionally, we define novel kernels as combinations of the above two with the polynomial kernel of standard flat features. Experiments on Support Vector Machines using the above kernels show an improvement of the state-of-the-art for PropBank argument classification. 
On the contrary, FrameNet semantic parsing seems to not take advantage of the structural information provided by our kernels. The remainder of this paper is organized as follows: Section 2 defines the Predicate Argument Extraction problem and the standard solution to solve it. In Section 3 we present our kernels whereas in Section 4 we show comparative results among SVMs using standard features and the proposed kernels. Finally, Section 5 summarizes the conclusions. 2 Predicate Argument Extraction: a standard approach Given a sentence in natural language and the target predicates, all arguments have to be recognized. This problem can be divided into two subtasks: (a) the detection of the argument boundaries, i.e. all its compounding words and (b) the classification of the argument type, e.g. Arg0 or ArgM in PropBank or Agent and Goal in FrameNet. The standard approach to learn both detection and classification of predicate arguments is summarized by the following steps: 1. Given a sentence from the training-set generate a full syntactic parse-tree; 2. let P and A be the set of predicates and the set of parse-tree nodes (i.e. the potential arguments), respectively; 3. for each pair <p, a> ∈P × A: • extract the feature representation set, Fp,a; • if the subtree rooted in a covers exactly the words of one argument of p, put Fp,a in T + (positive examples), otherwise put it in T − (negative examples). For example, in Figure 1, for each combination of the predicate give with the nodes N, S, VP, V, NP, PP, D or IN the instances F”give”,a are generated. In case the node a exactly covers Paul, a lecture or in Rome, it will be a positive instance otherwise it will be a negative one, e.g. F”give”,”IN”. To learn the argument classifiers the T + set can be re-organized as positive T + argi and negative T − argi examples for each argument i. In this way, an individual ONE-vs-ALL classifier for each argument i can be trained. We adopted this solution as it is simple and effective (Hacioglu et al., 2003). In the classification phase, given a sentence of the test-set, all its Fp,a are generated and classified by each individual classifier. As a final decision, we select the argument associated with the maximum value among the scores provided by the SVMs, i.e. argmaxi∈S Ci, where S is the target set of arguments. - Phrase Type: This feature indicates the syntactic type of the phrase labeled as a predicate argument, e.g. NP for Arg1. - Parse Tree Path: This feature contains the path in the parse tree between the predicate and the argument phrase, expressed as a sequence of nonterminal labels linked by direction (up or down) symbols, e.g. V ↑VP ↓NP for Arg1. - Position: Indicates if the constituent, i.e. the potential argument, appears before or after the predicate in the sentence, e.g. after for Arg1 and before for Arg0. - Voice: This feature distinguishes between active or passive voice for the predicate phrase, e.g. active for every argument. - Head Word: This feature contains the headword of the evaluated phrase. Case and morphological information are preserved, e.g. lecture for Arg1. - Governing Category indicates if an NP is dominated by a sentence phrase or by a verb phrase, e.g. the NP associated with Arg1 is dominated by a VP. - Predicate Word: This feature consists of two components: (1) the word itself, e.g. gives for all arguments; and (2) the lemma which represents the verb normalized to lower case and infinitive form, e.g. give for all arguments. 
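A schematic rendering of this instance generation and of the ONE-vs-ALL decision is given below. The helper names (extract_features, words_covered) and the data layout are our own placeholders for the steps described above, not an actual released implementation; `arguments` is assumed to map a (predicate, word-span) pair to its gold argument label.

```python
def build_training_sets(sentences, arguments, extract_features, words_covered):
    """For every <p, a> pair build F_{p,a}; a pair is positive for argument type i
    when the subtree rooted at node a covers exactly the words of an argument of
    type i of p, and negative otherwise.  `words_covered(a)` is assumed to return
    a hashable span (e.g. a tuple of word indices)."""
    T_pos, T_neg = {}, []          # positives per argument type, shared negatives
    for tree, predicates in sentences:
        for p in predicates:
            for a in tree.nodes():
                F = extract_features(p, a, tree)
                arg_type = arguments.get((p, words_covered(a)))   # None if no match
                if arg_type is not None:
                    T_pos.setdefault(arg_type, []).append(F)
                else:
                    T_neg.append(F)
    # For each argument i, positives of the other arguments are also reusable as
    # negatives when training the individual ONE-vs-ALL classifiers.
    return T_pos, T_neg

def classify_argument(F_pa, classifiers):
    """Final decision: argmax_i C_i(F_{p,a}) over the per-argument SVM scores."""
    return max(classifiers, key=lambda i: classifiers[i](F_pa))
```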
Table 1: Standard features extracted from the parse-tree in Figure 1.
2.1 Standard feature space The discovery of relevant features is, as usual, a complex task; nevertheless, there is a common consensus on the basic features that should be adopted. These standard features, firstly proposed in (Gildea and Jurasfky, 2002), refer to flat information derived from parse trees, i.e. Phrase Type, Predicate Word, Head Word, Governing Category, Position and Voice. Table 1 presents the standard features and exemplifies how they are extracted from the parse tree in Figure 1. For example, the Parse Tree Path feature represents the path in the parse-tree between a predicate node and one of its argument nodes. It is expressed as a sequence of nonterminal labels linked by direction symbols (up or down), e.g. in Figure 1, V↑VP↓NP is the path between the predicate to give and the argument 1, a lecture. Two pairs <p1, a1> and <p2, a2> have two different Path features even if the paths differ only by a node in the parse-tree. This prevents the learning algorithm from generalizing well on unseen data. In order to address this problem, the next section describes a novel kernel space for predicate argument classification.
[Figure 2 (parse trees not reproduced): Structured features for Arg0, Arg1 and ArgM.]
2.2 Support Vector Machine approach Given a vector space in ℜn and a set of positive and negative points, SVMs classify vectors according to a separating hyperplane, H(⃗x) = ⃗w · ⃗x + b = 0, where ⃗w ∈ ℜn and b ∈ ℜ are learned by applying the Structural Risk Minimization principle (Vapnik, 1995). To apply the SVM algorithm to Predicate Argument Classification, we need a function φ : F → ℜn to map our feature space F = {f1, .., f|F|} and our predicate/argument pair representation, Fp,a = Fz, into ℜn, such that: Fz → φ(Fz) = (φ1(Fz), .., φn(Fz)). From the kernel theory we have that: H(⃗x) = Σi=1..l αi ⃗xi · ⃗x + b = Σi=1..l αi φ(Fi) · φ(Fz) + b, where Fi, ∀i ∈ {1, .., l}, are the training instances and the product K(Fi, Fz) = ⟨φ(Fi) · φ(Fz)⟩ is the kernel function associated with the mapping φ. The simplest mapping that we can apply is φ(Fz) = ⃗z = (z1, ..., zn) where zi = 1 if fi ∈ Fz and zi = 0 otherwise, i.e. the characteristic vector of the set Fz with respect to F. If we choose as a kernel function the scalar product we obtain the linear kernel KL(Fx, Fz) = ⃗x · ⃗z. Another function, which is the current state-of-the-art of predicate argument classification, is the polynomial kernel: Kp(Fx, Fz) = (c + ⃗x · ⃗z)d, where c is a constant and d is the degree of the polynomial.
3 Convolution Kernels for Semantic Parsing We propose two different convolution kernels associated with two different predicate argument sub-structures: the first includes the target predicate with one of its arguments. We will show that it contains almost all the standard feature information. The second relates to the sub-categorization frame of verbs. In this case, the kernel function aims to cluster together verbal predicates which have the same syntactic realizations. This provides the classification algorithm with important clues about the possible set of arguments suited for the target syntactic structure.
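Before turning to the structural kernels, note that the flat-feature kernels of Section 2.2 amount to only a few lines of code. In the sketch below the binary encoding and the default values of c and d are illustrative choices of ours (d = 3 happens to be the degree the experiments later find best).

```python
def encode(F_z, feature_index):
    """Characteristic vector of the feature set F_z over the full feature space."""
    return [1.0 if f in F_z else 0.0 for f in feature_index]

def linear_kernel(x, z):
    return sum(xi * zi for xi, zi in zip(x, z))          # K_L(F_x, F_z) = x . z

def poly_kernel(x, z, c=1.0, d=3):
    return (c + linear_kernel(x, z)) ** d                # K_p(F_x, F_z) = (c + x . z)^d

# Example: two predicate/argument pairs sharing one flat feature (toy feature names).
feats = ["PhraseType=NP", "Voice=active", "Position=after", "HeadWord=lecture"]
x = encode({"PhraseType=NP", "Position=after"}, feats)
z = encode({"PhraseType=NP", "Voice=active"}, feats)
print(linear_kernel(x, z), poly_kernel(x, z))            # 1.0 and (1 + 1)^3 = 8.0
```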
3.1 Predicate/Argument Feature (PAF) We consider the predicate argument structures annotated in PropBank or FrameNet as our semantic space. The smallest sub-structure which includes one predicate with only one of its arguments defines our structural feature. For example, Figure 2 illustrates the parse-tree of the sentence "Paul delivers a talk in formal style". The circled substructures in (a), (b) and (c) are our semantic objects associated with the three arguments of the verb to deliver, i.e. <deliver, Arg0>, <deliver, Arg1> and <deliver, ArgM>. Note that each predicate/argument pair is associated with only one structure, i.e. Fp,a contains only one of the circled sub-trees. Other important properties are the following: (1) The overall semantic feature space F contains sub-structures composed of syntactic information embodied by parse-tree dependencies and semantic information under the form of predicate/argument annotation. (2) This solution is efficient as we have to classify as many nodes as the number of predicate arguments. (3) A constituent cannot be part of two different arguments of the target predicate, i.e. there is no overlapping between the words of two arguments. Thus, two semantic structures Fp1,a1 and Fp2,a2,1 associated with two different arguments, cannot be included one in the other. This property is important because a convolution kernel would not be effective in distinguishing between an object and its sub-parts.
1Fp,a was defined as the set of features of the object <p, a>. Since in our representations we have only one element in Fp,a, with an abuse of notation we use it to indicate the objects themselves.
[Figure 3 (parse tree not reproduced): Sub-Categorization Features for two predicate argument structures.]
3.2 Sub-Categorization Feature (SCF) The above object space aims to capture all the information between a predicate and one of its arguments. Its main drawback is that important structural information related to inter-argument dependencies is neglected. In order to solve this problem we define the Sub-Categorization Feature (SCF). This is the sub-parse tree which includes the sub-categorization frame of the target verbal predicate. For example, Figure 3 shows the parse tree of the sentence "He flushed the pan and buckled his belt". The solid line describes the SCF of the predicate flush, i.e. Fflush, whereas the dashed line tailors the SCF of the predicate buckle, i.e. Fbuckle. Note that SCFs are features for predicates (i.e. they describe predicates) whereas PAF characterizes predicate/argument pairs. Once semantic representations are defined, we need to design a kernel function to estimate the similarity between our objects. As suggested in Section 2 we can map them into vectors in ℜn and evaluate implicitly the scalar product among them.
3.3 Predicate/Argument structure Kernel (PAK) Given the semantic objects defined in the previous section, we design a convolution kernel in a way similar to the parse-tree kernel proposed in (Collins and Duffy, 2002). We divide our mapping φ in two steps: (1) from the semantic structure space F (i.e. PAF or SCF objects) to the set of all their possible sub-structures
F′ = {f′1, .., f′|F′|}, and (2) from F′ to ℜ|F′|. An example of features in F′ is given in Figure 4, where the whole set of fragments, F′deliver,Arg1, of the argument structure Fdeliver,Arg1 is shown (see also Figure 2).
[Figure 4 (tree fragments not reproduced): All 17 valid fragments of the semantic structure associated with Arg 1 of Figure 2.]
It is worth noting that the allowed sub-trees contain the entire (not partial) production rules. For instance, the sub-tree [NP [D a]] is excluded from the set of Figure 4 since only a part of the production NP → D N is used in its generation. However, this constraint does not apply to the production VP → V NP PP along with the fragment [VP [V NP]] as the subtree [VP [PP [...]]] is not considered part of the semantic structure. Thus, in step 1, an argument structure Fp,a is mapped into a fragment set F′p,a. In step 2, this latter is mapped into ⃗x = (x1, .., x|F′|) ∈ ℜ|F′|, where xi is equal to the number of times that f′i occurs in F′p,a.2
2A fragment can appear several times in a parse-tree, thus each fragment occurrence is considered as a different element in F′p,a.
In order to evaluate K(φ(Fx), φ(Fz)) without evaluating the feature vectors ⃗x and ⃗z, we define the indicator function Ii(n) = 1 if the sub-structure i is rooted at node n and 0 otherwise. It follows that φi(Fx) = Σn∈Nx Ii(n), where Nx is the set of Fx's nodes. Therefore, the kernel can be written as: K(φ(Fx), φ(Fz)) = Σi=1..|F′| (Σnx∈Nx Ii(nx)) (Σnz∈Nz Ii(nz)) = Σnx∈Nx Σnz∈Nz Σi Ii(nx) Ii(nz), where Nx and Nz are the nodes in Fx and Fz, respectively. In (Collins and Duffy, 2002), it has been shown that Σi Ii(nx) Ii(nz) = ∆(nx, nz) can be computed in O(|Nx| × |Nz|) by the following recursive relation: (1) if the productions at nx and nz are different then ∆(nx, nz) = 0; (2) if the productions at nx and nz are the same, and nx and nz are pre-terminals, then ∆(nx, nz) = 1; (3) if the productions at nx and nz are the same, and nx and nz are not pre-terminals, then ∆(nx, nz) = ∏j=1..nc(nx) (1 + ∆(ch(nx, j), ch(nz, j))), where nc(nx) is the number of children of nx and ch(n, i) is the i-th child of the node n. Note that as the productions are the same, ch(nx, i) = ch(nz, i). This kind of kernel has the drawback of assigning more weight to larger structures while the argument type does not strictly depend on the size of the argument (Moschitti and Bejan, 2004). To overcome this problem we can scale the relative importance of the tree fragments using a parameter λ for the cases (2) and (3), i.e. ∆(nx, nz) = λ and ∆(nx, nz) = λ ∏j=1..nc(nx) (1 + ∆(ch(nx, j), ch(nz, j))), respectively. It is worth noting that even if the above equations define a kernel function similar to the one proposed in (Collins and Duffy, 2002), the substructures on which it operates are different from the parse-tree kernel. For example, Figure 4 shows that structures such as [VP [V] [NP]], [VP [V delivers] [NP]] and [VP [V] [NP [DT] [N]]] are valid features, but these fragments (and many others) are not generated by a complete production, i.e. VP → V NP PP. As a consequence they would not be included in the parse-tree kernel of the sentence.
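The ∆ recursion translates almost line-for-line into code. The sketch below is our rendering, not the authors' implementation: nodes are assumed to expose a label and their children (so a "production" can be compared as the node label plus the ordered child labels), λ is the decay parameter of cases (2) and (3) with an arbitrary default (the paper tunes it on a validation set), and the final helper shows the kernel normalization used later when combining kernels.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)

    def nodes(self):
        yield self
        for c in self.children:
            yield from c.nodes()

def production(n):
    """A node's production: its label plus the ordered labels of its children."""
    return (n.label, tuple(c.label for c in n.children))

def delta(nx, nz, lam=0.4, cache=None):
    """Delta(nx, nz) with decay lam: 0 for different productions, lam for matching
    pre-terminals, lam * prod_j (1 + Delta over the j-th children) otherwise."""
    cache = {} if cache is None else cache
    key = (id(nx), id(nz))
    if key in cache:
        return cache[key]
    if production(nx) != production(nz):
        result = 0.0
    elif all(not c.children for c in nx.children):      # pre-terminal level
        result = lam
    else:
        result = lam
        for cx, cz in zip(nx.children, nz.children):
            result *= 1.0 + delta(cx, cz, lam, cache)
    cache[key] = result
    return result

def pak(Fx, Fz, lam=0.4):
    """K(F_x, F_z): sum Delta over all internal node pairs, O(|Nx| * |Nz|)."""
    cache = {}
    return sum(delta(nx, nz, lam, cache)
               for nx in Fx.nodes() if nx.children
               for nz in Fz.nodes() if nz.children)

def normalized(K, Fx, Fz):
    """K(x, z) / sqrt(K(x, x) * K(z, z)), the normalization used in Section 4."""
    return K(Fx, Fz) / (K(Fx, Fx) * K(Fz, Fz)) ** 0.5

# Toy usage on the fragment [VP [V delivers] [NP [D a] [N talk]]].
vp = Node("VP", [Node("V", [Node("delivers")]),
                 Node("NP", [Node("D", [Node("a")]), Node("N", [Node("talk")])])])
print(pak(vp, vp))               # self-kernel of the structure
print(normalized(pak, vp, vp))   # 1.0 for identical structures
```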
3.4 Comparison with Standard Features In this section we compare standard features with the kernel based representation in order to derive useful indications for their use: First, PAK estimates a similarity between two argument structures (i.e., PAF or SCF) by counting the number of sub-structures that are in common. As an example, the similarity between the two structures in Figure 2, F”delivers”,Arg0 and F”delivers”,Arg1, is equal to 1 since they have in common only the [V delivers] substructure. Such low value depends on the fact that different arguments tend to appear in different structures. On the contrary, if two structures differ only for a few nodes (especially terminals or near terminal nodes) the similarity remains quite high. For example, if we change the tense of the verb to deliver (Figure 2) in delivered, the [VP [V delivers] [NP]] subtree will be transformed in [VP [VBD delivered] [NP]], where the NP is unchanged. Thus, the similarity with the previous structure will be quite high as: (1) the NP with all sub-parts will be matched and (2) the small difference will not highly affect the kernel norm and consequently the final score. The above property also holds for the SCF structures. For example, in Figure 3, KP AK(φ(Fflush), φ(Fbuckle)) is quite high as the two verbs have the same syntactic realization of their arguments. In general, flat features do not possess this conservative property. For example, the Parse Tree Path is very sensible to small changes of parse-trees, e.g. two predicates, expressed in different tenses, generate two different Path features. Second, some information contained in the standard features is embedded in PAF: Phrase Type, Predicate Word and Head Word explicitly appear as structure fragments. For example, in Figure 4 are shown fragments like [NP [DT] [N]] or [NP [DT a] [N talk]] which explicitly encode the Phrase Type feature NP for the Arg 1 in Figure 2.b. The Predicate Word is represented by the fragment [V delivers] and the Head Word is encoded in [N talk]. The same is not true for SCF since it does not contain information about a specific argument. SCF, in fact, aims to characterize the predicate with respect to the overall argument structures rather than a specific pair <p, a>. Third, Governing Category, Position and Voice features are not explicitly contained in both PAF and SCF. Nevertheless, SCF may allow the learning algorithm to detect the active/passive form of verbs. Finally, from the above observations follows that the PAF representation may be used with PAK to classify arguments. On the contrary, SCF lacks important information, thus, alone it may be used only to classify verbs in syntactic categories. This suggests that SCF should be used in conjunction with standard features to boost their classification performance. 4 The Experiments The aim of our experiments are twofold: On the one hand, we study if the PAF representation produces an accuracy higher than standard features. On the other hand, we study if SCF can be used to classify verbs according to their syntactic realization. Both the above aims can be carried out by combining PAF and SCF with the standard features. For this purpose we adopted two ways to combine kernels3: (1) K = K1 · K2 and (2) K = γK1 + K2. The resulting set of kernels used in the experiments is the following: • Kpd is the polynomial kernel with degree d over the standard features. • KP AF is obtained by using PAK function over the PAF structures. • KP AF +P = γ KP AF |KP AF | + Kpd |Kpd|, i.e. 
the sum between the normalized4 PAF-based kernel and the normalized polynomial kernel. • KP AF ·P = KP AF ·Kpd |KP AF |·|Kpd|, i.e. the normalized product between the PAF-based kernel and the polynomial kernel. • KSCF +P = γ KSCF |KSCF | + Kpd |Kpd|, i.e. the summation between the normalized SCF-based kernel and the normalized polynomial kernel. • KSCF ·P = KSCF ·Kpd |KSCF |·|Kpd|, i.e. the normalized product between SCF-based kernel and the polynomial kernel. 4.1 Corpora set-up The above kernels were experimented over two corpora: PropBank (www.cis.upenn.edu/∼ace) along with Penn TreeBank5 2 (Marcus et al., 1993) and FrameNet. PropBank contains about 53,700 sentences and a fixed split between training and testing which has been used in other researches e.g., (Gildea and Palmer, 2002; Surdeanu et al., 2003; Hacioglu et al., 2003). In this split, Sections from 02 to 21 are used for training, section 23 for testing and sections 1 and 22 as developing set. We considered all PropBank arguments6 from Arg0 to Arg9, ArgA and ArgM for a total of 122,774 and 7,359 arguments in training and testing respectively. It is worth noting that in the experiments we used the gold standard parsing from Penn TreeBank, thus our kernel structures are derived with high precision. For the FrameNet corpus (www.icsi.berkeley 3It can be proven that the resulting kernels still satisfy Mercer’s conditions (Cristianini and Shawe-Taylor, 2000). 4To normalize a kernel K(⃗x, ⃗z) we can divide it by p K(⃗x, ⃗x) · K(⃗z, ⃗z). 5We point out that we removed from Penn TreeBank the function tags like SBJ and TMP as parsers usually are not able to provide this information. 6We noted that only Arg0 to Arg4 and ArgM contain enough training/testing data to affect the overall performance. .edu/∼framenet) we extracted all 24,558 sentences from the 40 frames of Senseval 3 task (www.senseval.org) for the Automatic Labeling of Semantic Roles. We considered 18 of the most frequent roles and we mapped together those having the same name. Only verbs are selected to be predicates in our evaluations. Moreover, as it does not exist a fixed split between training and testing, we selected randomly 30% of sentences for testing and 70% for training. Additionally, 30% of training was used as a validation-set. The sentences were processed using Collins’ parser (Collins, 1997) to generate parse-trees automatically. 4.2 Classification set-up The classifier evaluations were carried out using the SVM-light software (Joachims, 1999) available at svmlight.joachims.org with the default polynomial kernel for standard feature evaluations. To process PAF and SCF, we implemented our own kernels and we used them inside SVM-light. The classification performances were evaluated using the f1 measure7 for single arguments and the accuracy for the final multi-class classifier. This latter choice allows us to compare the results with previous literature works, e.g. (Gildea and Jurasfky, 2002; Surdeanu et al., 2003; Hacioglu et al., 2003). For the evaluation of SVMs, we used the default regularization parameter (e.g., C = 1 for normalized kernels) and we tried a few costfactor values (i.e., j ∈{0.1, 1, 2, 3, 4, 5}) to adjust the rate between Precision and Recall. We chose parameters by evaluating SVM using Kp3 kernel over the validation-set. Both λ (see Section 3.3) and γ parameters were evaluated in a similar way by maximizing the performance of SVM using KP AF and γ KSCF |KSCF | + Kpd |Kpd| respectively. These parameters were adopted also for all the other kernels. 
4.3 Kernel evaluations To study the impact of our structural kernels we firstly derived the maximal accuracy reachable with standard features along with polynomial kernels. The multi-class accuracies, for PropBank and FrameNet using Kpd with d = 1, .., 5, are shown in Figure 5. We note that (a) the highest performance is reached for d = 3, (b) for PropBank our maximal accuracy (90.5%) 7f1 assigns equal importance to Precision P and Recall R, i.e. f1 = 2P ·R P +R. is substantially equal to the SVM performance (88%) obtained in (Hacioglu et al., 2003) with degree 2 and (c) the accuracy on FrameNet (85.2%) is higher than the best result obtained in literature, i.e. 82.0% in (Gildea and Palmer, 2002). This different outcome is due to a different task (we classify different roles) and a different classification algorithm. Moreover, we did not use the Frame information which is very important8. 0.82 0.83 0.84 0.85 0.86 0.87 0.88 0.89 0.9 0.91 1 2 3 4 5 d Accuracy FrameNet PropBank Figure 5: Multi-classifier accuracy according to different degrees of the polynomial kernel. It is worth noting that the difference between linear and polynomial kernel is about 3-4 percent points for both PropBank and FrameNet. This remarkable difference can be easily explained by considering the meaning of standard features. For example, let us restrict the classification function CArg0 to the two features Voice and Position. Without loss of generality we can assume: (a) Voice=1 if active and 0 if passive, and (b) Position=1 when the argument is after the predicate and 0 otherwise. To simplify the example, we also assume that if an argument precedes the target predicate it is a subject, otherwise it is an object 9. It follows that a constituent is Arg0, i.e. CArg0 = 1, if only one feature at a time is 1, otherwise it is not an Arg0, i.e. CArg0 = 0. In other words, CArg0 = Position XOR Voice, which is the classical example of a non-linear separable function that becomes separable in a superlinear space (Cristianini and Shawe-Taylor, 2000). After it was established that the best kernel for standard features is Kp3, we carried out all the other experiments using it in the kernel combinations. Table 2 and 3 show the single class (f1 measure) as well as multi-class classifier (accuracy) performance for PropBank and FrameNet respectively. Each column of the two tables refers to a different kernel defined in the 8Preliminary experiments indicate that SVMs can reach 90% by using the frame feature. 9Indeed, this is true in most part of the cases. previous section. The overall meaning is discussed in the following points: First, PAF alone has good performance, since in PropBank evaluation it outperforms the linear kernel (Kp1), 88.7% vs. 86.7% whereas in FrameNet, it shows a similar performance 79.5% vs. 82.1% (compare tables with Figure 5). This suggests that PAF generates the same information as the standard features in a linear space. However, when a degree greater than 1 is used for standard features, PAF is outperformed10. Args P 3 PAF PAF+P PAF·P SCF+P SCF·P Arg0 90.8 88.3 90.6 90.5 94.6 94.7 Arg1 91.1 87.4 89.9 91.2 92.9 94.1 Arg2 80.0 68.5 77.5 74.7 77.4 82.0 Arg3 57.9 56.5 55.6 49.7 56.2 56.4 Arg4 70.5 68.7 71.2 62.7 69.6 71.1 ArgM 95.4 94.1 96.2 96.2 96.1 96.3 Acc. 90.5 88.7 90.2 90.4 92.4 93.2 Table 2: Evaluation of Kernels on PropBank. Roles P 3 PAF PAF+P PAF·P SCF+P SCF·P agent 92.0 88.5 91.7 91.3 93.1 93.9 cause 59.7 16.1 41.6 27.7 42.6 57.3 degree 74.9 68.6 71.4 57.8 68.5 60.9 depict. 
52.6 29.7 51.0 28.6 46.8 37.6 durat. 45.8 52.1 40.9 29.0 31.8 41.8 goal 85.9 78.6 85.3 82.8 84.0 85.3 instr. 67.9 46.8 62.8 55.8 59.6 64.1 mann. 81.0 81.9 81.2 78.6 77.8 77.8 Acc. 85.2 79.5 84.6 81.6 83.8 84.2 18 roles Table 3: Evaluation of Kernels on FrameNet semantic roles. Second, SCF improves the polynomial kernel (d = 3), i.e. the current state-of-the-art, of about 3 percent points on PropBank (column SCF·P). This suggests that (a) PAK can measure the similarity between two SCF structures and (b) the sub-categorization information provides effective clues about the expected argument type. The interesting consequence is that SCF together with PAK seems suitable to automatically cluster different verbs that have the same syntactic realization. We note also that to fully exploit the SCF information it is necessary to use a kernel product (K1 · K2) combination rather than the sum (K1 + K2), e.g. column SCF+P. Finally, the FrameNet results are completely different. No kernel combinations with both PAF and SCF produce an improvement. On 10Unfortunately the use of a polynomial kernel on top the tree fragments to generate the XOR functions seems not successful. the contrary, the performance decreases, suggesting that the classifier is confused by this syntactic information. The main reason for the different outcomes is that PropBank arguments are different from semantic roles as they are an intermediate level between syntax and semantic, i.e. they are nearer to grammatical functions. In fact, in PropBank arguments are annotated consistently with syntactic alternations (see the Annotation guidelines for PropBank at www.cis.upenn.edu/∼ace). On the contrary FrameNet roles represent the final semantic product and they are assigned according to semantic considerations rather than syntactic aspects. For example, Cause and Agent semantic roles have identical syntactic realizations. This prevents SCF to distinguish between them. Another minor reason may be the use of automatic parse-trees to extract PAF and SCF, even if preliminary experiments on automatic semantic shallow parsing of PropBank have shown no important differences versus semantic parsing which adopts Gold Standard parse-trees. 5 Conclusions In this paper, we have experimented with SVMs using the two novel convolution kernels PAF and SCF which are designed for the semantic structures derived from PropBank and FrameNet corpora. Moreover, we have combined them with the polynomial kernel of standard features. The results have shown that: First, SVMs using the above kernels are appealing for semantically parsing both corpora. Second, PAF and SCF can be used to improve automatic classification of PropBank arguments as they provide clues about the predicate argument structure of the target verb. For example, SCF improves (a) the classification state-of-theart (i.e. the polynomial kernel) of about 3 percent points and (b) the best literature result of about 5 percent points. Third, additional work is needed to design kernels suitable to learn the deep semantic contained in FrameNet as it seems not sensible to both PAF and SCF information. Finally, an analysis of SVMs using polynomial kernels over standard features has explained why they largely outperform linear classifiers based-on standard features. In the future we plan to design other structures and combine them with SCF, PAF and standard features. In this vision the learning will be carried out on a set of structural features instead of a set of flat features. 
Other studies may relate to the use of SCF to generate verb clusters. Acknowledgments This research has been sponsored by the ARDA AQUAINT program. In addition, I would like to thank Professor Sanda Harabagiu for her advice, Adrian Cosmin Bejan for implementing the feature extractor and Paul Morărescu for processing the FrameNet data. Many thanks to the anonymous reviewers for their invaluable suggestions. References Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In proceedings of ACL-02. Michael Collins. 1997. Three generative, lexicalized models for statistical parsing. In proceedings of the ACL-97, pages 16–23, Somerset, New Jersey. Nello Cristianini and John Shawe-Taylor. 2000. An introduction to Support Vector Machines. Cambridge University Press. Charles J. Fillmore. 1982. Frame semantics. In Linguistics in the Morning Calm, pages 111–137. Daniel Gildea and Daniel Jurasfky. 2002. Automatic labeling of semantic roles. Computational Linguistics. Daniel Gildea and Martha Palmer. 2002. The necessity of parsing for predicate argument recognition. In proceedings of ACL-02, Philadelphia, PA. R. Jackendoff. 1990. Semantic Structures, Current Studies in Linguistics series. Cambridge, Massachusetts: The MIT Press. T. Joachims. 1999. Making large-scale SVM learning practical. In Advances in Kernel Methods Support Vector Learning. Paul Kingsbury and Martha Palmer. 2002. From treebank to propbank. In proceedings of LREC-02, Las Palmas, Spain. M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics. Alessandro Moschitti and Cosmin Adrian Bejan. 2004. A semantic kernel for predicate argument classification. In proceedings of CoNLL-04, Boston, USA. Kadri Hacioglu, Sameer Pradhan, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2003. Shallow Semantic Parsing Using Support Vector Machines. TR-CSLR-2003-03, University of Colorado. Mihai Surdeanu, Sanda M. Harabagiu, John Williams, and John Aarseth. 2003. Using predicate-argument structures for information extraction. In proceedings of ACL-03, Sapporo, Japan. V. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag New York, Inc.
Combining Acoustic and Pragmatic Features to Predict Recognition Performance in Spoken Dialogue Systems Malte Gabsdil Department of Computational Linguistics Saarland University Germany [email protected] Oliver Lemon School of Informatics Edinburgh University Scotland [email protected] Abstract We use machine learners trained on a combination of acoustic confidence and pragmatic plausibility features computed from dialogue context to predict the accuracy of incoming n-best recognition hypotheses to a spoken dialogue system. Our best results show a 25% weighted f-score improvement over a baseline system that implements a “grammar-switching” approach to context-sensitive speech recognition. 1 Introduction A crucial problem in the design of spoken dialogue systems is to decide for incoming recognition hypotheses whether a system should accept (consider correctly recognized), reject (assume misrecognition), or ignore (classify as noise or speech not directed to the system) them. In addition, a more sophisticated dialogue system might decide whether to clarify or confirm certain hypotheses. Obviously, incorrect decisions at this point can have serious negative effects on system usability and user satisfaction. On the one hand, accepting misrecognized hypotheses leads to misunderstandings and unintended system behaviors which are usually difficult to recover from. On the other hand, users might get frustrated with a system that behaves too cautiously and rejects or ignores too many utterances. Thus an important feature in dialogue system engineering is the tradeoff between avoiding task failure (due to misrecognitions) and promoting overall dialogue efficiency, flow, and naturalness. In this paper, we investigate the use of machine learners trained on a combination of acoustic confidence and pragmatic plausibility features (i.e. computed from dialogue context) to predict the quality of incoming n-best recognition hypotheses to a spoken dialogue system. These predictions are then used to select a “best” hypothesis and to decide on appropriate system reactions. We evaluate this approach in comparison with a baseline system that combines fixed recognition confidence rejection thresholds with dialogue-state dependent recognition grammars (Lemon, 2004). The paper is organized as follows. After a short relation to previous work, Section 3 introduces the WITAS multimodal dialogue system, which we use to collect data (Section 4) and to derive baseline results (Section 5). Section 6 describes our learning experiments for classifying and selecting from nbest recognition hypotheses and Section 7 reports our results. 2 Relation to Previous Work (Litman et al., 2000) use acoustic-prosodic information extracted from speech waveforms, together with information derived from their speech recognizer, to automatically predict misrecognized turns in a corpus of train-timetable information dialogues. In our experiments, we also use recognizer confidence scores and a limited number of acousticprosodic features (e.g. amplitude in the speech signal) for hypothesis classification. (Walker et al., 2000) use a combination of features from the speech recognizer, natural language understanding, and dialogue manager/discourse history to classify hypotheses as correct, partially correct, or misrecognized. Our work is related to these experiments in that we also combine confidence scores and higherlevel features for classification. 
However, both (Litman et al., 2000) and (Walker et al., 2000) consider only single-best recognition results and thus use their classifiers as “filters” to decide whether the best recognition hypothesis for a user utterance is correct or not. We go a step further in that we classify n-best hypotheses and then select among the alternatives. We also explore the use of more dialogue and task-oriented features (e.g. the dialogue move type of a recognition hypothesis) for classification. The main difference between our approach and work on hypothesis reordering (e.g. (Chotimongkol and Rudnicky, 2001)) is that we make a decision regarding whether a dialogue system should accept, clarify, reject, or ignore a user utterance. Furthermore, our approach is more generally applicable than preceding research, since we frame our methodology in the Information State Update (ISU) approach to dialogue management (Traum et al., 1999) and therefore expect it to be applicable to a range of related multimodal dialogue systems. 3 The WITAS Dialogue System The WITAS dialogue system (Lemon et al., 2002) is a multimodal command and control dialogue system that allows a human operator to interact with a simulated “unmanned aerial vehicle” (UAV): a small robotic helicopter. The human operator is provided with a GUI – an interactive (i.e. mouse clickable) map – and specifies mission goals using natural language commands spoken into a headset, or by using combinations of GUI actions and spoken commands. The simulated UAV can carry out different activities such as flying to locations, following vehicles, and delivering objects. The dialogue system uses the Nuance 8.0 speech recognizer with language models compiled from a grammar (written using the Gemini system (Dowding et al., 1993)), which is also used for parsing and generation. 3.1 WITAS Information States The WITAS dialogue system is part of a larger family of systems that implement the Information State Update (ISU) approach to dialogue management (Traum et al., 1999). The ISU approach has been used to formalize different theories of dialogue and forms the basis of several dialogue system implementations in domains such as route planning, home automation, and tutorial dialogue. The ISU approach is a particularly useful testbed for our technique because it collects information relevant to dialogue context in a central data structure from which it can be easily extracted. (Lemon et al., 2002) describe in detail the components of Information States (IS) and the update procedures for processing user input and generating system responses. Here, we briefly introduce parts of the IS which are needed to understand the system’s basic workings, and from which we will extract dialogue-level and task-level information for our learning experiments: • Dialogue Move Tree (DMT): a tree-structure, in which each subtree of the root node represents a “thread” in the conversation, and where each node in a subtree represents an utterance made either by the system or the user. 1 • Active Node List (ANL): a list that records all “active” nodes in the DMT; active nodes indi1A tree is used in order to overcome the limitations of stackbased processing, see (Lemon and Gruenstein, 2004). cate conversational contributions that are still in some sense open, and to which new utterances can attach. • Activity Tree (AT): a tree-structure representing the current, past, and planned activities that the back-end system (in this case a UAV) performs. 
• Salience List (SL): a list of NPs introduced in the current dialogue ordered by recency. • Modality Buffer (MB): a temporary store that registers click events on the GUI. The DMT and AT are the core components of Information States. The SL and MB are subsidiary data-structures needed for interpreting and generating anaphoric expressions and definite NPs. Finally, the ANL plays a crucial role in integrating new user utterances into the DMT. 4 Data Collection For our experiments, we use data collected in a small user study with the grammar-switching version of the WITAS dialogue system (Lemon, 2004). In this study, six subjects from Edinburgh University (4 male, 2 female) had to solve five simple tasks with the system, resulting in 30 complete dialogues. The subjects’ utterances were recorded as 8kHz 16bit waveform files and all aspects of the Information State transitions during the interactions were logged as html files. Altogether, 303 utterances were recorded in the user study (≈10 user utterances/dialogue). 4.1 Labeling We transcribed all user utterances and parsed the transcriptions offline using WITAS’ natural language understanding component in order to get a gold-standard labeling of the data. Each utterance was labeled as either in-grammar or out-ofgrammar (oog), depending on whether its transcription could be parsed or not, or as crosstalk: a special marker that indicated that the input was not directed to the system (e.g. noise, laughter, self-talk, the system accidentally recording itself). For all in-grammar utterances we stored their interpretations (quasi-logical forms) as computed by WITAS’ parser. Since the parser uses a domain-specific semantic grammar designed for this particular application, each in-grammar utterance had an interpretation that is “correct” with respect to the WITAS application. 4.2 Simplifying Assumptions The evaluations in the following sections make two simplifying assumptions. First, we consider a user utterance correctly recognized only if the logical form of the transcription is the same as the logical form of the recognition hypothesis. This assumption can be too strong because the system might react appropriately even if the logical forms are not literally the same. Second, if a transcribed utterance is out-of-grammar, we assume that the system cannot react appropriately. Again, this assumption might be too strong because the recognizer can accidentally map an utterance to a logical form that is equivalent to the one intended by the user. 5 The Baseline System The baseline for our experiments is the behavior of the WITAS dialogue system that was used to collect the experimental data (using dialogue context as a predictor of language models for speech recognition, see below). We chose this baseline because it has been shown to perform significantly better than an earlier version of the system that always used the same (i.e. full) grammar for recognition (Lemon, 2004). We evaluate the performance of the baseline by analyzing the dialogue logs from the user study. With this information, it is possible to decide how the system reacted to each user utterance. We distinguish between the following three cases: 1. accept: the system accepted the recognition hypothesis of a user utterance as correct. 2. reject: the system rejected the recognition hypothesis of a user utterance given a fixed confidence rejection threshold. 3. ignore: the system did not react to a user utterance at all. 
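How a logged interaction is scored against these three cases can be written down directly. Below is a minimal sketch (the data layout and names are ours, not the WITAS logging format) that maps each gold-standard utterance label to the reaction the system should have shown and tallies the logged reactions into the kind of matrix reported in Table 1 below:

from collections import Counter

# Gold-standard utterance label -> the reaction the system should have shown.
EXPECTED = {"in-grammar": "accept",
            "out-of-grammar": "reject",
            "crosstalk": "ignore"}

def turn_is_correct(gold_label, system_reaction, gold_lf=None, accepted_lf=None):
    # A turn is handled correctly if the reaction matches the expectation; for
    # accepted in-grammar turns the accepted logical form must also equal the
    # logical form of the transcription (the 154 vs. 22 split in Table 1).
    if EXPECTED[gold_label] != system_reaction:
        return False
    if gold_label == "in-grammar":
        return gold_lf == accepted_lf
    return True

def tally(logged_turns):
    # logged_turns: iterable of (gold_label, system_reaction) pairs
    # extracted from the dialogue logs.
    return Counter(logged_turns)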
These three classes map naturally to the goldstandard labels of the transcribed user utterances: the system should accept in-grammar utterances, reject out-of-grammar input, and ignore crosstalk. 5.1 Context-sensitive Speech Recognition In the the WITAS dialogue system, the “grammarswitching” approach to context-sensitive speech recognition (Lemon, 2004) is implemented using the ANL. At any point in the dialogue, there is a “most active node” at the top of the ANL. The dialogue move type of this node defines the name of a language model that is used for recognizing the next user utterance. For instance, if the most active node is a system yes-no-question then the appropriate language model is defined by a small context-free grammar covering phrases such as “yes”, “that’s right”, “okay”, “negative”, “maybe”, and so on. The WITAS dialogue system with contextsensitive speech recognition showed significantly better recognition rates than a previous version of the system that used the full grammar for recognition at all times ((Lemon, 2004) reports a 11.5% reduction in overall utterance recognition error rate). Note however that an inherent danger with grammar-switching is that the system may have wrong expectations and thus might activate a language model which is not appropriate for the user’s next utterance, leading to misrecognitions or incorrect rejections. 5.2 Results Table 1 summarizes the evaluation of the baseline system. System behavior User utterance accept reject ignore in-grammar 154/22 8 4 out-of-grammar 45 43 4 crosstalk 12 9 2 Accuracy: 65.68% Weighted f-score: 61.81% Table 1: WITAS dialogue system baseline results Table 1 should be read as follows: looking at the first row, in 154 cases the system understood and accepted the correct logical form of an in-grammar utterance by the user. In 22 cases, the system accepted a logical form that differed from the one for the transcribed utterance.2 In 8 cases, the system rejected an in-grammar utterance and in 4 cases it did not react to an in-grammar utterance at all. The second row of Table 1 shows that the system accepted 45, rejected 43, and ignored 4 user utterances whose transcriptions were out-of-grammar and could not be parsed. Finally, the third row of the table shows that the baseline system accepted 12 utterances that were not addressed to it, rejected 9, and ignored 2. Table 1 shows that a major problem with the baseline system is that it accepts too many user utterances. In particular, the baseline system accepts the wrong interpretation for 22 in-grammar utterances, 45 utterances which it should have rejected as outof-grammar, and 12 utterances which it should have 2For the computation of accuracy and weighted f-scores, these were counted as wrongly accepted out-of-grammar utterances. ignored. All of these cases will generally lead to unintended actions by the system. 6 Classifying and Selecting N-best Recognition Hypotheses We aim at improving over the baseline results by considering the n-best recognition hypotheses for each user utterance. Our methodology consists of two steps: i) we automatically classify the n-best recognition hypotheses for an utterance as either correctly or incorrectly recognized and ii) we use a simple selection procedure to choose the “best” hypothesis based on this classification. 
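As a concrete reference point for what follows, the baseline figures just given can be reproduced from Table 1 alone. A short numpy check (with the 22 acceptances of a wrong logical form counted as accepted out-of-grammar utterances, following footnote 2) yields exactly the reported 65.68% accuracy and 61.81% weighted f-score:

import numpy as np

# Rows: what the system should have done (accept, reject, ignore);
# columns: what it actually did.
cm = np.array([[154,      8, 4],   # in-grammar, correct logical form
               [45 + 22, 43, 4],   # out-of-grammar, incl. the 22 wrong LFs
               [12,       9, 2]])  # crosstalk

accuracy = np.trace(cm) / cm.sum()            # 199/303 = 0.6568

tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)
recall = tp / cm.sum(axis=1)
f1 = 2 * precision * recall / (precision + recall)
weighted_f = np.average(f1, weights=cm.sum(axis=1))   # = 0.6181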
In order to get multiple recognition hypotheses for all utterances in the experimental data, we re-ran the speech recognizer with the full recognition grammar and 10best output and processed the results offline with WITAS’ parser, obtaining a logical form for each recognition hypothesis (every hypothesis has a logical form since language models are compiled from the parsing grammar). 6.1 Hypothesis Labeling We labeled all hypotheses with one of the following four classes, based on the manual transcriptions of the experimental data: in-grammar, oog (WER ≤ 50), oog (WER > 50), or crosstalk. The in-grammar and crosstalk classes correspond to those described for the baseline. However, we decided to divide up the out-of-grammar class into the two classes oog (WER ≤50) and oog (WER > 50) to get a more finegrained classification. In order to assign hypotheses to the two oog classes, we compute the word error rate (WER) between recognition hypotheses and the transcription of corresponding user utterances. If the WER is ≤50%, we label the hypothesis as oog (WER ≤50), otherwise as oog (WER > 50). We also annotate all misrecognized hypotheses of in-grammar utterances with their respective WER scores. The motivation behind splitting the out-ofgrammar class into two subclasses and for annotating misrecognized in-grammar hypotheses with their WER scores is that we want to distinguish between different “degrees” of misrecognition that can be used by the dialogue system to decide whether it should initiate clarification instead of rejection.3 We use a threshold (50%) on a hypothesis’ WER as an indicator for whether hypotheses should be 3The WITAS dialogue system currently does not support this type of clarification dialogue; the WER annotations are therefore only of theoretical interest. However, an extended system could easily use this information to decide when clarification should be initiated. clarified or rejected. This is adopted from (Gabsdil, 2003), based on the fact that WER correlates with concept accuracy (CA, (Boros et al., 1996)). The WER threshold can be set differently according to the needs of an application. However, one would ideally set a threshold directly on CA scores for this labeling, but these are currently not available for our data. We also introduce the distinction between out-ofgrammar (WER ≤50) and out-of-grammar (WER > 50) in the gold standard for the classification of (whole) user utterances. We split the out-ofgrammar class into two sub-classes depending on whether the 10-best recognition results include at least one hypothesis with a WER ≤50 compared to the corresponding transcription. Thus, if there is a recognition hypothesis which is close to the transcription, an utterance is labeled as oog (WER ≤ 50). In order to relate these classes to different system behaviors, we define that utterances labeled as oog (WER ≤50) should be clarified and utterances labeled as oog (WER > 50) should be rejected by the system. The same is done for all in-grammar utterances for which only misrecognized hypotheses are available. 6.2 Classification: Feature Groups We represent recognition hypotheses as 20dimensional feature vectors for automatic classification. The feature vectors combine recognizer confidence scores, low-level acoustic information, information from WITAS system Information States, and domain knowledge about the different tasks in the scenario. The following list gives an overview of all features (described in more detail below). 1. 
Recognition (6): nbestRank, hypothesisLength, confidence, confidenceZScore, confidence-StandardDeviation, minWordConfidence 2. Utterance (3): minAmp, meanAmp, RMS-amp 3. Dialogue (9): currentDM, currentCommand, mostActiveNode, DMBigramFrequency, qaMatch, aqMatch, #unresolvedNPs, #unresolvedPronouns, #uniqueIndefinites 4. Task (2): taskConflict, #taskConstraintConflict All features are extracted automatically from the output of the speech recognizer, utterance waveforms, IS logs, and a small library of plan operators describing the actions the UAV can perform. The recognition (REC) feature group includes the position of a hypothesis in the n-best list (nbestRank), its length in words (hypothesisLength), and five features representing the recognizer’s confidence assessment. Similar features have been used in the literature (e.g. (Litman et al., 2000)). The minWordConfidence and standard deviation/zScore features are computed from individual word confidences in the recognition output. We expect them to help the machine learners decide between the different WER classes (e.g. a high overall confidence score can sometimes be misleading). The utterance (UTT) feature group reflects information about the amplitude in the speech signal (all features are extracted with the UNIX sox utility). The motivation for including the amplitude features is that they might be useful for detecting crosstalk utterances which are not directly spoken into the headset microphone (e.g. the system accidentally recognizing itself). The dialogue features (DIAL) represent information derived from Information States and can be coarsely divided into two sub-groups. The first group includes features representing general coherence constraints on the dialogue: the dialogue move types of the current utterance (currentDM) and of the most active node in the ANL (mostActiveNode), the command type of the current utterance (currentCommand, if it is a command, null otherwise), statistics on which move types typically follow each other (DMBigramFrequency), and two features (qaMatch and aqMatch) that explicitly encode whether the current and the previous utterance form a valid question answer pair (e.g. yn-question followed by yn-answer). The second group includes features that indicate how many definite NPs and pronouns cannot be resolved in the current Information State (#unresolvedNP, #unresolvedPronouns, e.g. “the car” if no car was mentioned before) and a feature indicating the number of indefinite NPs that can be uniquely resolved in the Information State (#uniqueIndefinites, e.g. “a tower” where there is only one tower in the domain). We include these features because (short) determiners are often confused by speech recognizers. In the WITAS scenario, a misrecognized determiner/demonstrative pronoun can lead to confusing system behavior (e.g. a wrongly recognized “there” will cause the system to ask “Where is that?”). Finally, the task features (TASK) reflect conflicting instructions in the domain. The feature taskConflict indicates a conflict if the current dialogue move type is a command and that command already appears as an active task in the AT. #taskConstraintConflict counts the number of conflicts that arise between the currently active tasks in the AT and the hypothesis. For example, if the UAV is already flying somewhere the preconditions of the action operator for take off (altitude = 0) conflict with those for fly (altitude ̸= 0), so that “take off” would be an unlikely command in this context. 
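For concreteness, the twenty features can be carried around as one record per recognition hypothesis. The container below is a hypothetical sketch; the field names mirror the list above, but the class itself and the vectorization are ours:

from dataclasses import dataclass, fields

@dataclass
class HypothesisFeatures:
    # Recognition (6)
    nbest_rank: int
    hypothesis_length: int
    confidence: float
    confidence_zscore: float
    confidence_std: float
    min_word_confidence: float
    # Utterance (3)
    min_amp: float
    mean_amp: float
    rms_amp: float
    # Dialogue (9)
    current_dm: str
    current_command: str
    most_active_node: str
    dm_bigram_frequency: float
    qa_match: bool
    aq_match: bool
    n_unresolved_nps: int
    n_unresolved_pronouns: int
    n_unique_indefinites: int
    # Task (2)
    task_conflict: bool
    n_task_constraint_conflicts: int

    def to_vector(self):
        # 20-dimensional feature vector in a fixed order, as fed to the learners.
        return [getattr(self, f.name) for f in fields(self)]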
6.3 Learners and Selection Procedure We use the memory based learner TiMBL (Daelemans et al., 2002) and the rule induction learner RIPPER (Cohen, 1995) to predict the class of each of the 10-best recognition hypotheses for a given utterance. We chose these two learners because they implement different learning strategies, are well established, fast, freely available, and easy to use. In a second step, we decide which (if any) of the classified hypotheses we actually want to pick as the best result and how the user utterance should be classified as a whole. This task is decided by the following selection procedure (see Figure 1) which implements a preference ordering accept > clarify > reject > ignore.4 1. Scan the list of classified n-best recognition hypotheses top-down. Return the first result that is classified as accept and classify the utterance as accept. 2. If 1. fails, scan the list of classified n-best recognition hypotheses top-down. Return the first result that is classified as clarify and classify the utterance as clarify. 3. If 2. fails, count the number of rejects and ignores in the classified recognition hypotheses. If the number of rejects is larger or equal than the number of ignores classify the utterance as reject. 4. Else classify the utterance as ignore. Figure 1: Selection procedure This procedure is applied to choose from the classified n-best hypotheses for an utterance, independent of the particular machine learner, in all of the following experiments. Since we have a limited amount experimental data in this study (10 hypotheses for each of the 303 user utterances), we use a “leave-one-out” crossvalidation setup for classification. This means that we classify the 10-best hypotheses for a particular utterance based on the 10-best hypotheses of all 302 other utterances and repeat this 303 times. 4Note that in a dialogue application one would not always need to classify all n-best hypotheses in order to select a result but could stop as soon as a hypothesis is classified as correct, which can save processing time. 7 Results and Evaluation The middle part of Table 2 shows the classification results for TiMBL and RIPPER when run with default parameter settings (the other results are included for comparison). The individual rows show the performance when different combinations of feature groups are used for training. The results for the three-way classification are included for comparison with the baseline system and are obtained by combining the two classes clarify and reject. Note that we do not evaluate the performance of the learners for classifying the individual recognition hypotheses but the classification of (whole) user utterances (i.e. including the selection procedure to choose from the classified hypotheses). The results show that both learners profit from the addition of more features concerning dialogue context and task context for classifying user speech input appropriately. The only exception from this trend is a slight performance decrease when task features are added in the four-way classification for RIPPER. Note that both learners already outperform the baseline results even when only recognition features are considered. The most striking result is the performance gain for TiMBL (almost 10%) when we include the dialogue features. As soon as dialogue features are included, TiMBL also performs slightly better than RIPPER. 
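Stepping back briefly to the selection step: the procedure of Figure 1 translates almost line for line into code. A sketch, assuming the classifier output is available as a list of (hypothesis, predicted class) pairs ordered from the 1-best hypothesis downwards:

def select_hypothesis(classified_nbest):
    # classified_nbest: [(hypothesis, label), ...] in n-best order, where
    # label is one of "accept", "clarify", "reject", "ignore".
    # Returns (utterance_class, chosen_hypothesis_or_None), implementing the
    # preference ordering accept > clarify > reject > ignore of Figure 1.
    for hyp, label in classified_nbest:
        if label == "accept":
            return "accept", hyp
    for hyp, label in classified_nbest:
        if label == "clarify":
            return "clarify", hyp
    n_reject = sum(1 for _, label in classified_nbest if label == "reject")
    n_ignore = sum(1 for _, label in classified_nbest if label == "ignore")
    return ("reject", None) if n_reject >= n_ignore else ("ignore", None)

As footnote 4 notes, a deployed system could stop scanning as soon as the first hypothesis is classified as accept.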
Note that the introduction of (limited) task features, in addition to the DIAL and UTT features, did not have dramatic impact in this study. One aim for future work is to define and analyze the influence of further task related features for classification. 7.1 Optimizing TiMBL Parameters In all of the above experiments we ran the machine learners with their default parameter settings. However, recent research (Daelemans and Hoste, 2002; Marsi et al., 2003) has shown that machine learners often profit from parameter optimization (i.e. finding the best performing parameters on some development data). We therefore selected 40 possible parameter combinations for TiMBL (varying the number of nearest neighbors, feature weighting, and class voting weights) and nested a parameter optimization step into the “leave-oneout” evaluation paradigm (cf. Figure 2).5 Note that our optimization method is not as sophisticated as the “Iterative Deepening” approach 5We only optimized parameters for TiMBL because it performed better with default settings than RIPPER and because the findings in (Daelemans and Hoste, 2002) indicate that TiMBL profits more from parameter optimization. 1. Set aside the recognition hypotheses for one of the user utterances. 2. Randomly split the remaining data into an 80% training and 20% test set. 3. Run TiMBL with all possible parameter settings on the generated training and test sets and store the best performing settings. 4. Classify the left-out hypotheses with the recorded parameter settings. 5. Iterate. Figure 2: Parameter optimization described by (Marsi et al., 2003) but is similar in the sense that it computes a best-performing parameter setting for each data fold. Table 3 shows the classification results when we run TiMBL with optimized parameter settings and using all feature groups for training. System Behavior User Utterance accept clarify reject ignore in-grammar 159/2 11 16 0 out-of-grammar 0 25 5 0 (WER ≤50) out-of-grammar 6 6 50 0 (WER > 50) crosstalk 2 5 0 16 Acc/wf-score (3 classes): 86.14/86.39% Acc/wf-score (4 classes): 82.51/83.29% Table 3: TiMBL classification results with optimized parameters Table 3 shows a remarkable 9% improvement for the 3-way and 4-way classification in both accuracy and weighted f-score, compared to using TiMBL with default parameter settings. In terms of WER, the baseline system (cf. Table 1) accepted 233 user utterances with a WER of 21.51%, and in contrast, TiMBL with optimized parameters (Ti OP) only accepted 169 user utterances with a WER of 4.05%. This low WER reflects the fact that if the machine learning system accepts an user utterance, it is almost certainly the correct one. Note that although the machine learning system in total accepted far fewer utterances (169 vs. 233) it accepted more correct utterances than the baseline (159 vs. 154). 7.2 Evaluation The baseline accuracy for the 3-class problem is 65.68% (61.81% weighted f-score). Our best results, obtained by using TiMBL with parameter opSystem or features used Acc/wf-score Acc/wf-score Acc/wf-score Acc/wf-score for classification (3 classes) (4 classes) (3 classes) (4 classes) Baseline 65.68/61.81% TiMBL RIPPER REC 67.66/67.51% 63.04/63.03% 69.31/69.03% 66.67/65.14% REC+UTT 68.98/68.32% 64.03/63.08% 72.61/72.33% 70.30/68.61% REC+UTT+DIAL 77.56/77.59% 72.94/73.70% 74.92/75.34% 71.29/71.62% REC+UTT+DIAL+TASK 77.89/77.91% 73.27/74.12% 75.25/75.61% 70.63/71.54% TiMBL (optimized params.) 
86.14/86.39% 82.51/83.29% Oracle 94.06/94.17% 94.06/94.18% Table 2: Classification Results timization, show a 25% weighted f-score improvement over the baseline system. We can compare these results to a hypothetical “oracle” system in order to obtain an upper bound on classification performance. This is an imaginary system which performs perfectly on the experimental data given the 10-best recognition output. The oracle results reveal that for 18 of the in-grammar utterances the 10-best recognition hypotheses do not include the correct logical form at all and therefore have to be classified as clarify or reject (i.e. it is not possible to achieve 100% accuracy on the experimental data). Table 2 shows that our best results are only 8%/12% (absolute) away from the optimal performance. 7.2.1 Costs and χ2 Levels of Significance We use the χ2 test of independence to statistically compare the different classification results. However, since χ2 only tells us whether two classifications are different from each other, we introduce a simple cost measure (Table 4) for the 3-way classification problem to complement the χ2 results.6 System behavior User utterance accept reject ignore in-grammar 0 2 2 out-of-grammar 4 2 2 crosstalk 4 2 0 Table 4: Cost measure Table 4 captures the intuition that the correct behavior of a dialogue system is to accept correctly recognized utterances and ignore crosstalk (cost 0). The worst a system can do is to accept misrecognized utterances or utterances that were not addressed to the system. The remaining classes are as6We only evaluate the 3-way classification problem because there are no baseline results for the 4-way classification available. signed a value in-between these two extremes. Note that the cost assignment is not validated against user judgments. We only use the costs to interpret the χ2 levels of significance (i.e. as an indicator to compare the relative quality of different systems). Table 5 shows the differences in cost and χ2 levels of significance when we compare the classification results. Here, Ti OP stands for TiMBL with optimized parameters and the stars indicate the level of statistical significance as computed by the χ2 statistics (∗∗∗indicates significance at p = .001, ∗∗at p = .01, and ∗at p = .05).7 Baseline RIPPER TiMBL Ti OP Oracle −232∗∗∗−116∗∗∗−100∗∗∗ −56 Ti OP −176∗∗∗ −60∗ −44 TiMBL −132∗∗∗ −16 RIPPER −116∗∗∗ Table 5: Cost comparisons and χ2 levels of significance for 3-way classification The cost measure shows the strict ordering: Oracle < Ti OP < TiMBL < RIPPER < Baseline. Note however that according to the χ2 test there is no significant difference between the oracle system and TiMBL with optimized parameters. Table 5 also shows that all of our experiments significantly outperform the baseline system. 8 Conclusion We used a combination of acoustic confidence and pragmatic plausibility features (i.e. computed from dialogue context) to predict the quality of incoming recognition hypotheses to a multi-modal dialogue system. We classified hypotheses as accept, (clarify), reject, or ignore: functional categories that 7Following (Hinton, 1995), we leave out categories with expected frequencies < 5 in the χ2 computation and reduce the degrees of freedom accordingly. can be used by a dialogue manager to decide appropriate system reactions. The approach is novel in combining machine learning with n-best processing for spoken dialogue systems using the Information State Update approach. 
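As an aside on the evaluation above, the cost totals behind Table 5 can be recovered from the published confusion matrices. The sketch below applies Table 4 element-wise; counting the 22 wrong-logical-form acceptances as accepted out-of-grammar turns and folding clarify into reject for the 3-way costs are our assumptions, but under them the baseline total (452) and the Ti OP total (276) differ by exactly the 176 reported in Table 5:

import numpy as np

# Table 4: rows = gold class (in-grammar, out-of-grammar, crosstalk),
# columns = system behaviour (accept, reject, ignore).
COST = np.array([[0, 2, 2],
                 [4, 2, 2],
                 [4, 2, 0]])

def total_cost(cm):
    return int((np.asarray(cm) * COST).sum())

baseline = [[154,      8, 4],     # Table 1, with the wrong-LF acceptances
            [45 + 22, 43, 4],     # moved to the accepted out-of-grammar cell
            [12,       9, 2]]
print(total_cost(baseline))       # 452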
Our best results, obtained using TiMBL with optimized parameters, show a 25% weighted f-score improvement over a baseline system that uses a “grammar-switching” approach to context-sensitive speech recognition, and are only 8% away from the optimal performance that can be achieved on the data. Clearly, this improvement would result in better dialogue system performance overall. Parameter optimization improved the classification results by 9% compared to using the learner with default settings, which shows the importance of such tuning. Future work points in two directions: first, integrating our methodology into working ISU-based dialogue systems and determining whether or not they improve in terms of standard dialogue evaluation metrics (e.g. task completion). The ISU approach is a particularly useful testbed for our methodology because it collects information pertaining to dialogue context in a central data structure from which it can be easily extracted. This avenue will be further explored in the TALK project8. Second, it will be interesting to investigate the impact of different dialogue and task features for classification and to introduce a distinction between “generic” features that are domain independent and “application-specific” features which reflect properties of individual systems and application scenarios. Acknowledgments We thank Nuance Communications Inc. for the use of their speech recognition and synthesis software and Alexander Koller and Dan Shapiro for reading draft versions of this paper. Oliver Lemon was partially supported by Scottish Enterprise under the Edinburgh-Stanford Link programme. References M. Boros, W. Eckert, F. Gallwitz, G. G¨orz, G. Hanrieder, and H. Niemann. 1996. Towards Understanding Spontaneous Speech: Word Accuracy vs. Concept Accuracy. In Proc. ICSLP-96. Ananlada Chotimongkol and Alexander I. Rudnicky. 2001. N-best Speech Hypotheses Reordering Using Linear Regression. In Proceedings of EuroSpeech 2001, pages 1829–1832. William W. Cohen. 1995. Fast Effective Rule Induction. In Proceedings of the 12th International Conference on Machine Learning. 8EC FP6 IST-507802, http://www.talk-project.org Walter Daelemans and V´eronique Hoste. 2002. Evaluation of Machine Learning Methods for Natural Language Processing Tasks. In Proceedings of LREC-02. Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2002. TIMBL: Tilburg Memory Based Learner, version 4.2, Reference Guide. In ILK Technical Report 02-01. John Dowding, Jean Mark Gawron, Doug Appelt, John Bear, Lynn Cherny, Robert Moore, and Douglas Moran. 1993. GEMINI: a natural language system for spoken-language understanding. In Proceedings of ACL-93. Malte Gabsdil. 2003. Classifying Recognition Results for Spoken Dialogue Systems. In Proceedings of the Student Research Workshop at ACL03. Perry R. Hinton. 1995. Statistics Explained – A Guide For Social Science Students. Routledge. Oliver Lemon and Alexander Gruenstein. 2004. Multithreaded context for robust conversational interfaces: context-sensitive speech recognition and interpretation of corrective fragments. ACM Transactions on Computer-Human Interaction. (to appear). Oliver Lemon, Alexander Gruenstein, and Stanley Peters. 2002. Collaborative activities and multitasking in dialogue systems. Traitement Automatique des Langues, 43(2):131–154. Oliver Lemon. 2004. Context-sensitive speech recognition in ISU dialogue systems: results for the grammar switching approach. 
In Proceedings of the 8th Workshop on the Semantics and Pragmatics of Dialogue, CATALOG'04. Diane J. Litman, Julia Hirschberg, and Marc Swerts. 2000. Predicting Automatic Speech Recognition Performance Using Prosodic Cues. In Proceedings of NAACL-00. Erwin Marsi, Martin Reynaert, Antal van den Bosch, Walter Daelemans, and Véronique Hoste. 2003. Learning to predict pitch accents and prosodic boundaries in Dutch. In Proceedings of ACL-03. David Traum, Johan Bos, Robin Cooper, Staffan Larsson, Ian Lewin, Colin Matheson, and Massimo Poesio. 1999. A Model of Dialogue Moves and Information State Revision. Technical Report D2.1, Trindi Project. Marilyn Walker, Jerry Wright, and Irene Langkilde. 2000. Using Natural Language Processing and Discourse Features to Identify Understanding Errors in a Spoken Dialogue System. In Proceedings of ICML-2000.
Predicting Student Emotions in Computer-Human Tutoring Dialogues Diane J. Litman University of Pittsburgh Department of Computer Science Learning Research and Development Center Pittsburgh PA, 15260, USA [email protected] Kate Forbes-Riley University of Pittsburgh Learning Research and Development Center Pittsburgh PA, 15260, USA [email protected] Abstract We examine the utility of speech and lexical features for predicting student emotions in computerhuman spoken tutoring dialogues. We first annotate student turns for negative, neutral, positive and mixed emotions. We then extract acoustic-prosodic features from the speech signal, and lexical items from the transcribed or recognized speech. We compare the results of machine learning experiments using these features alone or in combination to predict various categorizations of the annotated student emotions. Our best results yield a 19-36% relative improvement in error reduction over a baseline. Finally, we compare our results with emotion prediction in human-human tutoring dialogues. 1 Introduction This paper explores the feasibility of automatically predicting student emotional states in a corpus of computer-human spoken tutoring dialogues. Intelligent tutoring dialogue systems have become more prevalent in recent years (Aleven and Rose, 2003), as one method of improving the performance gap between computer and human tutors; recent experiments with such systems (e.g., (Graesser et al., 2002)) are starting to yield promising empirical results. Another method for closing this performance gap has been to incorporate affective reasoning into computer tutoring systems, independently of whether or not the tutor is dialogue-based (Conati et al., 2003; Kort et al., 2001; Bhatt et al., 2004). For example, (Aist et al., 2002) have shown that adding human-provided emotional scaffolding to an automated reading tutor increases student persistence. Our long-term goal is to merge these lines of dialogue and affective tutoring research, by enhancing our intelligent tutoring spoken dialogue system to automatically predict and adapt to student emotions, and to investigate whether this improves learning and other measures of performance. Previous spoken dialogue research has shown that predictive models of emotion distinctions (e.g., emotional vs. non-emotional, negative vs. nonnegative) can be developed using features typically available to a spoken dialogue system in real-time (e.g, acoustic-prosodic, lexical, dialogue, and/or contextual) (Batliner et al., 2000; Lee et al., 2001; Lee et al., 2002; Ang et al., 2002; Batliner et al., 2003; Shafran et al., 2003). In prior work we built on and generalized such research, by defining a three-way distinction between negative, neutral, and positive student emotional states that could be reliably annotated and accurately predicted in human-human spoken tutoring dialogues (ForbesRiley and Litman, 2004; Litman and Forbes-Riley, 2004). Like the non-tutoring studies, our results showed that combining feature types yielded the highest predictive accuracy. In this paper we investigate the application of our approach to a comparable corpus of computerhuman tutoring dialogues, which displays many different characteristics, such as shorter utterances, little student initiative, and non-overlapping speech. 
We investigate whether we can annotate and predict student emotions as accurately and whether the relative utility of speech and lexical features as predictors is the same, especially when the output of the speech recognizer is used (rather than a human transcription of the student speech). Our best models for predicting three different types of emotion classifications achieve accuracies of 66-73%, representing relative improvements of 19-36% over majority class baseline errors. Our computer-human results also show interesting differences compared with comparable analyses of human-human data. Our results provide an empirical basis for enhancing our spoken dialogue tutoring system to automatically predict and adapt to a student model that includes emotional states. 2 Computer-Human Dialogue Data Our data consists of student dialogues with ITSPOKE (Intelligent Tutoring SPOKEn dialogue system) (Litman and Silliman, 2004), a spoken dialogue tutor built on top of the Why2-Atlas conceptual physics text-based tutoring system (VanLehn et al., 2002). In ITSPOKE, a student first types an essay answering a qualitative physics problem. ITSPOKE then analyzes the essay and engages the student in spoken dialogue to correct misconceptions and to elicit complete explanations. First, the Why2-Atlasback-end parses the student essay into propositional representations, in order to find useful dialogue topics. It uses 3 different approaches (symbolic, statistical and hybrid) competitively to create a representation for each sentence, then resolves temporal and nominal anaphora and constructs proofs using abductive reasoning (Jordan et al., 2004). During the dialogue, student speech is digitized from microphone input and sent to the Sphinx2 recognizer, whose stochastic language models have a vocabulary of 1240 words and are trained with 7720 student utterances from evaluations of Why2-Atlas and from pilot studies of ITSPOKE. Sphinx2’s best “transcription”(recognition output) is then sent to the Why2-Atlas back-end for syntactic, semantic and dialogue analysis. Finally, the text response produced by Why2-Atlas is sent to the Cepstral text-to-speech system and played to the student. After the dialogue, the student revises the essay, thereby ending the tutoring or causing another round of tutoring/essay revision. Our corpus of dialogues with ITSPOKE was collected from November 2003 - April 2004, as part of an evaluation comparing ITSPOKE, Why2-Atlas, and human tutoring (Litman et al., 2004). Subjects are University of Pittsburgh students who have never taken college physics, and who are native English speakers. Subjects first read a small document of background physics material, then work through 5 problems (dialogues) with ITSPOKE. The corpus contains 100 dialogues (physics problems) from 20 subjects, with a total of 2445 student turns and 398 unique words. 15 dialogues have been annotated for emotion as described in Section 3. On average, our dialogues last 19.4 minutes and contain 25 student turns. While ITSPOKE’s word error rate on this corpus is 31.2%, semantic accuracy is more useful for dialogue evaluation as it does not penalize for unimportant word errors. Semantic analysis based on speech recognition is the same as based on perfect transcription 92.4% of the time. An emotionannotated corpus example is shown in Figure 1. 
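The word error rate quoted above (31.2% for ITSPOKE) is the standard alignment-based measure. A self-contained sketch of its computation (ours, not the scoring tool actually used on the corpus):

def word_error_rate(reference, hypothesis):
    # WER = (substitutions + insertions + deletions) / #reference words,
    # computed with dynamic-programming edit distance over words.
    ref, hyp = reference.split(), hypothesis.split()
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return 100.0 * dp[len(ref)][len(hyp)] / max(len(ref), 1)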
3 Annotating Student Turns In our data, student “emotions”1 can only be identified indirectly: via what is said and/or how it is 1We use the term “emotion” loosely to cover both affects and attitudes that can impact student learning. ...dialogue excerpt at 18.3 min. into session... ITSPOKE : What is the magnitude of the acceleration of the packet in the horizontal direction? STUDENT : i don’t know (NEGATIVE) ITSPOKE : Ok, let’s consider how forces in the vertical direction affect the motion of a body in a horizontal direction in a different scenario. Say an apple falls from a tree. What force(s) are acting on the apple as it falls? STUDENT : gravity (NEGATIVE) ITSPOKE : Yeah. Are there any other forces acting on the apple as it falls? STUDENT : no (NEGATIVE) ITSPOKE : Fine. The only force acting on the apple as it falls to the earth is the earth’s gravitational force. In which direction does gravitational force act? STUDENT : down (POSITIVE) ITSPOKE : Yep. Therefore, what is the magnitude of this gravitational force in the horizontal direction? STUDENT : in the direction of the airplane (NEUTRAL) Figure 1: Annotated Spoken Dialogue Excerpt said. In (Litman and Forbes-Riley, 2004), we discuss a scheme for manually annotating student turns in a human-human tutoring dialogue corpus for intuitively perceived emotions.2 These emotions are viewed along a linear scale, shown and defined as follows: negative
≺ neutral ≺
positive. Negative: a student turn that expresses emotions such as confused, bored, irritated. Evidence of a negative emotion can come from many knowledge sources such as lexical items (e.g., “I don’t know” in student in Figure 1), and/or acoustic-prosodic features (e.g., prior-turn pausing in student ). Positive: a student turn expressing emotions such as confident, enthusiastic. An example is student , which displays louder speech and faster tempo. Neutral: a student turn not expressing a negative or positive emotion. An example is student , where evidence comes from moderate loudness, pitch and tempo. We also distinguish Mixed: a student turn expressing both positive and negative emotions. To avoid influencing the annotator’s intuitive understanding of emotion expression, and because particular emotional cues are not used consistently 2Weak and strong expressions of emotions are annotated. or unambiguously across speakers, our annotation manual does not associate particular cues with particular emotion labels. Instead, it contains examples of labeled dialogue excerpts (as in Figure 1, except on human-human data) with links to corresponding audio files. The cues mentioned in the discussion of Figure 1 above were elicited during post-annotation discussion of the emotions, and are presented here for expository use only. (Litman and Forbes-Riley, 2004) further details our annotation scheme and discusses how it builds on related work. To analyze the reliability of the scheme on our new computer-human data, we selected 15 transcribed dialogues from the corpus described in Section 2, yielding a dataset of 333 student turns, where approximately 30 turns came from each of 10 subjects. The 333 turns were separately annotated by two annotators following the emotion annotation scheme described above. We focus here on three analyses of this data, itemized below. While the first analysis provides the most fine-grained distinctions for triggering system adaptation, the second and third (simplified) analyses correspond to those used in (Lee et al., 2001) and (Batliner et al., 2000), respectively. These represent alternative potentially useful triggering mechanisms, and are worth exploring as they might be easier to annotate and/or predict. Negative, Neutral, Positive (NPN): mixeds are conflated with neutrals. Negative, Non-Negative (NnN): positives, mixeds, neutrals are conflated as nonnegatives. Emotional, Non-Emotional (EnE): negatives, positives, mixeds are conflated as Emotional; neutrals are Non-Emotional. Tables 1-3 provide a confusion matrix for each analysis summarizing inter-annotator agreement. The rows correspond to the labels assigned by annotator 1, and the columns correspond to the labels assigned by annotator 2. For example, the annotators agreed on 89 negatives in Table 1. In the NnN analysis, the two annotators agreed on the annotations of 259/333 turns achieving 77.8% agreement, with Kappa = 0.5. In the EnE analysis, the two annotators agreed on the annotations of 220/333 turns achieving 66.1% agreement, with Kappa = 0.3. In the NPN analysis, the two annotators agreed on the annotations of 202/333 turns achieving 60.7% agreement, with Kappa = 0.4. This inter-annotator agreement is on par with that of prior studies of emotion annotation in naturally occurring computer-human dialogues (e.g., agreement of 71% and Kappa of 0.47 in (Ang et al., 2002), Kappa of 0.45 and 0.48 in (Narayanan, 2002), and Kappa ranging between 0.32 and 0.42 in (Shafran et al., 2003)). 
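These agreement figures follow directly from the annotator-versus-annotator confusion matrices given in Tables 1-3 below. A short sketch of the standard Cohen's kappa computation, applied to the NnN matrix of Table 1:

import numpy as np

def cohens_kappa(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    observed = np.trace(cm) / n
    # Chance agreement from the marginal label distributions of each annotator.
    expected = (cm.sum(axis=1) @ cm.sum(axis=0)) / n ** 2
    return (observed - expected) / (1 - expected)

nnn = [[89, 36],     # rows: annotator 1, columns: annotator 2 (Table 1)
       [38, 170]]
print(cohens_kappa(nnn))   # ~0.53, i.e. the Kappa = 0.5 reported above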
A number of researchers have accommodated for this low agreement by exploring ways of achieving consensus between disagreed annotations, to yield 100% agreement (e.g (Ang et al., 2002; Devillers et al., 2003)). As in (Ang et al., 2002), we will experiment below with predicting emotions using both our agreed data and consensuslabeled data. negative non-negative negative 89 36 non-negative 38 170 Table 1: NnN Analysis Confusion Matrix emotional non-emotional emotional 129 43 non-emotional 70 91 Table 2: EnE Analysis Confusion Matrix negative neutral positive negative 89 30 6 neutral 32 94 38 positive 6 19 19 Table 3: NPN Analysis Confusion Matrix 4 Extracting Features from Turns For each of the 333 student turns described above, we next extracted the set of features itemized in Figure 2, for use in the machine learning experiments described in Section 5. Motivated by previous studies of emotion prediction in spontaneous dialogues (Ang et al., 2002; Lee et al., 2001; Batliner et al., 2003), our acousticprosodic features represent knowledge of pitch, energy, duration, tempo and pausing. We further restrict our features to those that can be computed automatically and in real-time, since our goal is to use such features to trigger online adaptation in ITSPOKE based on predicted student emotions. F0 and RMS values, representing measures of pitch and loudness, respectively, are computed using Entropic Research Laboratory’s pitch tracker, get f0, with no post-correction. Amount of Silence is approximated as the proportion of zero f0 frames for the turn. Turn Duration and Prior Pause Duration are computed Acoustic-Prosodic Features 4 fundamental frequency (f0): max, min, mean, standard deviation 4 energy (RMS): max, min, mean, standard deviation 4 temporal: amount of silence in turn, turn duration, duration of pause prior to turn, speaking rate Lexical Features human-transcribed lexical items in the turn ITSPOKE-recognized lexical items in the turn Identifier Features: subject, gender, problem Figure 2: Features Per Student Turn automatically via the start and end turn boundaries in ITSPOKE logs. Speaking Rate is automatically calculated as #syllables per second in the turn. While acoustic-prosodic features address how something is said, lexical features representing what is said have also been shown to be useful for predicting emotion in spontaneous dialogues (Lee et al., 2002; Ang et al., 2002; Batliner et al., 2003; Devillers et al., 2003; Shafran et al., 2003). Our first set of lexical features represents the human transcription of each student turn as a word occurrence vector (indicating the lexical items that are present in the turn). This feature represents the “ideal” performance of ITSPOKE with respect to speech recognition. The second set represents ITSPOKE’s actual best speech recognition hypothesisof what is said in each student turn, again as a word occurrence vector. Finally, we recorded for each turn the 3 “identifier” features shown last in Figure 2. Prior studies (Oudeyer, 2002; Lee et al., 2002) have shown that “subject” and “gender” can play an important role in emotion recognition. “Subject” and “problem” are particularly important in our tutoring domain because students will use our system repeatedly, and problems are repeated across students. 5 Predicting Student Emotions 5.1 Feature Sets and Method We next created the 10 feature sets in Figure 3, to study the effects that various feature combinations had on predicting emotion. 
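Before turning to the feature sets, the acoustic-prosodic measurements of Figure 2 can be made concrete. The following is a hypothetical numpy sketch of how the 12 features could be aggregated from a per-frame f0/RMS track and the logged turn boundaries; the real pipeline used Entropic's pitch tracker and the ITSPOKE logs, and taking the f0 statistics over voiced frames only is our assumption:

import numpy as np

def acoustic_prosodic_features(f0, rms, turn_start, turn_end, prev_turn_end,
                               n_syllables):
    # f0, rms: per-frame pitch and energy tracks for the turn (zero-f0 frames
    # are treated as silence; at least one voiced frame is assumed).
    f0 = np.asarray(f0, dtype=float)
    rms = np.asarray(rms, dtype=float)
    voiced = f0[f0 > 0]
    duration = turn_end - turn_start
    return {
        "f0_max": voiced.max(), "f0_min": voiced.min(),
        "f0_mean": voiced.mean(), "f0_std": voiced.std(),
        "rms_max": rms.max(), "rms_min": rms.min(),
        "rms_mean": rms.mean(), "rms_std": rms.std(),
        "silence_ratio": float(np.mean(f0 == 0)),
        "turn_duration": duration,
        "prior_pause": turn_start - prev_turn_end,
        "speaking_rate": n_syllables / duration,
    }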
We compare an acoustic-prosodic feature set (“sp”), a humantranscribed lexical items feature set (“lex”) and an ITSPOKE-recognized lexical items feature set (“asr”). We further compare feature sets combining acoustic-prosodic and either transcribed or recognized lexical items (“sp+lex”, “sp+asr”). Finally, we compare each of these 5 feature sets with an identical set supplemented with our 3 identifier features (“+id”). sp: 12 acoustic-prosodic features lex: human-transcribed lexical items asr: ITSPOKE recognized lexical items sp+lex: combined sp and lex features sp+asr: combined sp and asr features +id: each above set + 3 identifier features Figure 3: Feature Sets for Machine Learning We use the Weka machine learning software (Witten and Frank, 1999) to automatically learn our emotion prediction models. In our humanhuman dialogue studies (Litman and Forbes, 2003), the use of boosted decision trees yielded the most robust performance across feature sets so we will continue their use here. 5.2 Predicting Agreed Turns As in (Shafran et al., 2003; Lee et al., 2001), our first study looks at the clearer cases of emotional turns, i.e. only those student turns where the two annotators agreed on an emotion label. Tables 4-6 show, for each emotion classification, the mean accuracy (%correct) and standard error (SE) for our 10 feature sets (Figure 3), computed across 10 runs of 10-fold cross-validation.3 For comparison, the accuracy of a standard baseline algorithm (MAJ), which always predicts the majority class, is shown in each caption. For example, Table 4’s caption shows that for NnN, always predicting the majority class of non-negative yields an accuracy of 65.65%. In each table, the accuracies are labeled for how they compare statistically to the relevant baseline accuracy ( = worse, = same, = better), as automatically computed in Weka using a two-tailed t-test (p .05). First note that almost every feature set significantly outperforms the majority class baseline, across all emotion classifications; the only exceptions are the speech-only feature sets without identifier features (“sp-id”) in the NnN and EnE tables, which perform the same as the baseline. These results suggest that without any subject or task specific information, acoustic-prosodic features alone 3For each cross-validation, the training and test data are drawn from utterances produced by the same set of speakers. A separate experiment showed that testing on one speaker and training on the others, averaged across all speakers, does not significantly change the results. are not useful predictors for our two binary classification tasks, at least in our computer-human dialogue corpus. As will be discussed in Section 6, however, “sp-id” feature sets are useful predictors in human-human tutoring dialogues. Feat. Set -id SE +id SE sp 64.10 0.80 70.66 0.76 lex 68.20 0.41 72.74 0.58 asr 72.30 0.58 70.51 0.59 sp+lex 71.78 0.77 72.43 0.87 sp+asr 69.90 0.57 71.44b 0.68 Table 4: %Correct, NnN Agreed, MAJ (nonnegative) = 65.65% Feat. Set -id SE +id SE sp 59.18 0.75 70.68 0.89 lex 63.18 0.82 75.64 0.37 asr 66.36 0.54 72.91 0.35 sp+lex 63.86 0.97 69.59 0.48 sp+asr 65.14 0.82 69.64 0.57 Table 5: %Correct, EnE Agreed, MAJ (emotional) = 58.64% Feat. 
Table 6: %Correct, NPN Agreed, MAJ (neutral) = 46.52%

  Feat. Set    -id    SE     +id    SE
  sp          55.49  1.01   62.03  0.91
  lex         52.66  0.62   67.84  0.66
  asr         57.95  0.67   65.70  0.50
  sp+lex      62.08  0.56   63.52  0.48
  sp+asr      61.22  1.20   62.23  0.86

Further note that adding identifier features to the "-id" feature sets almost always improves performance, although this difference is not always significant4; across tables the "+id" feature sets outperform their "-id" counterparts across all feature sets and emotion classifications except one (NnN "asr"). Surprisingly, while (Lee et al., 2002) found it useful to develop separate gender-based emotion prediction models, in our experiment, gender is the only identifier that does not appear in any learned model. Also note that with the addition of identifier features, the speech-only feature sets (sp+id) now do outperform the majority class baselines for all three emotion classifications.

4 For any feature set, the mean +/- 2*SE = the 95% confidence interval. If the confidence intervals for two feature sets are non-overlapping, then their mean accuracies are significantly different with 95% confidence.

With respect to the relative utility of lexical versus acoustic-prosodic features, without identifier features, using only lexical features ("lex" or "asr") almost always produces statistically better performance than using only speech features ("sp"); the only exception is NPN "lex", which performs statistically the same as NPN "sp". This is consistent with others' findings, e.g., (Lee et al., 2002; Shafran et al., 2003). When identifier features are added to both, the lexical sets don't always significantly outperform the speech set; only in NPN and EnE "lex+id" is this the case. For NnN, just as using "sp+id" rather than "sp-id" improved performance when compared to the majority baseline, the addition of the identifier features also improves the utility of the speech features when compared to the lexical features.

Interestingly, although we hypothesized that the "lex" feature sets would present an upper bound on the performance of the "asr" sets, because the human transcription is more accurate than the speech recognizer, we see that this is not consistently the case. In fact, in the "-id" sets, "asr" always significantly outperforms "lex". A comparison of the decision trees produced in either case, however, does not reveal why this is the case; words chosen as predictors are not very intuitive in either case (e.g., for NnN, an example path through the learned "lex" decision tree says predict negative if the utterance contains the word "will" but does not contain the word "decrease"). Understanding this result is an area for future research.

Within the "+id" sets, we see that "lex" and "asr" perform the same in the NnN and NPN classifications; in EnE "lex+id" significantly outperforms "asr+id". The utility of the "lex" features compared to "asr" also increases when combined with the "sp" features (with and without identifiers), for both NnN and NPN. Moreover, based on results in (Lee et al., 2002; Ang et al., 2002; Forbes-Riley and Litman, 2004), we hypothesized that combining speech and lexical features would result in better performance than either feature set alone. We instead found that the relative performance of these sets depends both on the emotion classification being predicted and the presence or absence of "id" features.
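Footnote 4's significance heuristic can be stated in a few lines. The helpers below are our own illustration (the names are not from the paper), applied to the "sp" row of Table 4 as a usage example.

```python
def confidence_interval(mean, se):
    """95% confidence interval as mean +/- 2*SE (footnote 4)."""
    return mean - 2 * se, mean + 2 * se

def significantly_different(mean_a, se_a, mean_b, se_b):
    """True if the two 95% confidence intervals do not overlap."""
    lo_a, hi_a = confidence_interval(mean_a, se_a)
    lo_b, hi_b = confidence_interval(mean_b, se_b)
    return hi_a < lo_b or hi_b < lo_a

# Table 4, NnN agreed, "sp" feature set: -id (64.10 +/- 0.80 SE) vs. +id (70.66 +/- 0.76 SE).
# The intervals [62.50, 65.70] and [69.14, 72.18] do not overlap, so adding the
# identifier features gives a significant improvement by this test.
print(significantly_different(64.10, 0.80, 70.66, 0.76))  # True
```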
Although, consistent with prior research, we find that the combined feature sets usually outperform the speech-only feature sets, the combined feature sets frequently perform worse than the lexical-only feature sets. However, we will see in Section 6 that combining knowledge sources does improve prediction performance in human-human dialogues.

Finally, the bolded accuracies in each table summarize the best-performing feature sets with and without identifiers, with respect to both the %Corr figures shown in the tables and the relative improvement in error reduction over the baseline (MAJ) error5, after excluding all the feature sets containing "lex" features. In this way we give a better estimate of the best performance our system could accomplish, given the features it can currently access from among those discussed. These best-performing feature sets yield relative improvements over their majority baseline errors ranging from 19% to 36%. Moreover, although the NPN classification yields the lowest raw accuracies, it yields the highest relative improvement over its baseline.

5 Relative improvement over the baseline (MAJ) error for feature set x = (error(MAJ) - error(x)) / error(MAJ), where error(x) is 100 minus the %Corr(x) value shown in Tables 4-6.

5.3 Predicting Consensus Turns

Following (Ang et al., 2002; Devillers et al., 2003), we also explored consensus labeling, both with the goal of increasing our usable data set for prediction, and to include the more difficult annotation cases. For our consensus labeling, the original annotators revisited each originally disagreed case, and through discussion, sought a consensus label. Due to consensus labeling, agreement rose across all three emotion classifications to 100%. Tables 7-9 show, for each emotion classification, the mean accuracy (%correct) and standard error (SE) for our 10 feature sets.

Table 7: %Corr., NnN Consensus, MAJ = 62.47%

  Feat. Set    -id    SE     +id    SE
  sp          59.10  0.57   64.20  0.52
  lex         63.70  0.47   68.64  0.41
  asr         66.26  0.71   68.13  0.56
  sp+lex      64.69  0.61   65.40  0.63
  sp+asr      65.99  0.51   67.55  0.48

Table 8: %Corr., EnE Consensus, MAJ = 55.86%

  Feat. Set    -id    SE     +id    SE
  sp          56.13  0.94   59.30  0.48
  lex         52.07  0.34   65.37  0.47
  asr         53.78  0.66   64.13  0.51
  sp+lex      60.96  0.76   63.01  0.62
  sp+asr      57.84  0.73   60.89  0.38

Table 9: %Corr., NPN Consensus, MAJ = 48.35%

  Feat. Set    -id    SE     +id    SE
  sp          48.97  0.66   51.90  0.40
  lex         47.86  0.54   57.28  0.44
  asr         51.09  0.66   53.41  0.66
  sp+lex      53.41  0.62   54.20  0.86
  sp+asr      52.50  0.42   53.84  0.42

A comparison with Tables 4-6 shows that overall, using consensus-labeled data decreased the performance across all feature sets and emotion classifications. This was also found in (Ang et al., 2002). Moreover, it is no longer the case that every feature set performs as well as or better than their baselines6; within the "-id" sets, NnN "sp" and EnE "lex" perform significantly worse than their baselines. However, again we see that the "+id" sets do consistently better than the "-id" sets and moreover always outperform the baselines. We also see again that using only lexical features almost always yields better performance than using only speech features. In addition, we again see that the "lex" feature sets perform comparably to the "asr" feature sets, rather than outperforming them as we first hypothesized. And finally, we see again that while in most cases combining speech and lexical features yields better performance than using only speech features, the combined feature sets in most cases perform the same or worse than the lexical feature sets.

6 The majority class for EnE Consensus is non-emotional; all others are unchanged.
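To make footnote 5 concrete, the short sketch below (a minimal illustration; the helper name is ours) computes the relative error reduction for a feature set from its accuracy and the majority baseline accuracy, and checks one value from Table 4.

```python
def relative_error_reduction(acc, baseline_acc):
    """Footnote 5: (error(MAJ) - error(x)) / error(MAJ), where error = 100 - %Corr."""
    baseline_error = 100.0 - baseline_acc
    error = 100.0 - acc
    return (baseline_error - error) / baseline_error

# Example from Table 4 (NnN agreed): "asr" without identifiers vs. the majority baseline.
# Baseline error = 100 - 65.65 = 34.35; feature-set error = 100 - 72.30 = 27.70.
print(round(100 * relative_error_reduction(72.30, 65.65), 1))  # ~19.4% relative improvement
```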
As above, the bolded accuracies summarize the best-performing feature sets from each emotion classification, after excluding all the feature sets containing "lex" to give a better estimate of actual system performance. The best-performing feature sets in the consensus data yield an 11%-19% relative improvement in error reduction compared to the majority class prediction, which is a lower error reduction than seen for agreed data. Moreover, the NPN classification yields the lowest accuracies and the lowest improvements over its baseline.

6 Comparison with Human Tutoring

While building ITSPOKE, we collected a corresponding corpus of spoken human tutoring dialogues, using the same experimental methodology as for our computer tutoring corpus (e.g. same subject pool, physics problems, web and audio interface, etc.); the only difference between the two corpora is whether the tutor is human or computer. As discussed in (Forbes-Riley and Litman, 2004), two annotators had previously labeled 453 turns in this corpus with the emotion annotation scheme discussed in Section 3, and performed a preliminary set of machine learning experiments (different from those reported above). Here, we perform the experiments from Section 5.2 on this annotated human tutoring data, as a step towards understanding the differences between annotating and predicting emotion in human versus computer tutoring dialogues.

With respect to inter-annotator agreement, in the NnN analysis, the two annotators had 88.96% agreement (Kappa = 0.74). In the EnE analysis, the annotators had 77.26% agreement (Kappa = 0.55). In the NPN analysis, the annotators had 75.06% agreement (Kappa = 0.60). A comparison with the results in Section 3 shows that all of these figures are higher than their computer tutoring counterparts.

Table 10: Human-Human %Correct, NnN MAJ = 72.21%; EnE MAJ = 50.86%; NPN MAJ = 53.24%

                   NnN                        EnE                        NPN
  FS         -id    SE    +id    SE     -id    SE    +id    SE     -id    SE    +id    SE
  sp        77.46  0.42  77.56  0.30   84.71  0.39  84.66  0.40   73.09  0.68  74.18  0.40
  lex       80.74  0.42  80.60  0.34   88.86  0.26  86.23  0.34   78.56  0.45  77.18  0.43
  sp+lex    81.37  0.33  80.79  0.41   87.74  0.36  88.31  0.29   79.06  0.38  78.03  0.33

With respect to predictive accuracy, Table 10 shows our results for the agreed data. A comparison with Tables 4-6 shows that overall, the human-human data yields increased performance across all feature sets and emotion classifications, although it should be noted that the human-human corpus is over 100 turns larger than the computer-human corpus. Every feature set performs significantly better than their baselines. However, unlike the computer-human data, we don't see the "+id" sets performing better than the "-id" sets; rather, both sets perform about the same. We do see again the "lex" sets yielding better performance than the "sp" sets. However, we now see that in 5 out of 6 cases, combining speech and lexical features yields better performance than using either "sp" or "lex" alone. Finally, these feature sets yield a relative error reduction of 42.45%-77.33% compared to the majority class predictions, which is far better than in our computer tutoring experiments. Moreover, the EnE classification yields the highest raw accuracies and relative improvements over baseline error.
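Since inter-annotator agreement is reported throughout as percent agreement and Kappa, the sketch below (our own illustration; the function name is not from the paper) computes Cohen's Kappa from a 2x2 annotator-agreement matrix such as Table 1.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's Kappa from an annotator-agreement confusion matrix."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_observed = np.trace(confusion) / total
    # Chance agreement from the two annotators' marginal label distributions.
    p_chance = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
    return (p_observed - p_chance) / (1 - p_chance)

# Table 1 (NnN analysis of the computer-human corpus): rows = annotator 1, columns = annotator 2.
nnn = [[89, 36],
       [38, 170]]
print(round(cohens_kappa(nnn), 2))  # observed agreement 259/333 ~ 77.8%, Kappa ~ 0.53
```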
We hypothesize that such differences arise in part due to differences between the two corpora: 1) student turns with the computer tutor are much shorter than with the human tutor (and thus contain less emotional content, making both annotation and prediction more difficult), 2) students respond to the computer tutor differently and perhaps more idiosyncratically than to the human tutor, 3) the computer tutor is less "flexible" than the human tutor (allowing little student initiative, questions, groundings, contextual references, etc.), which also affects student emotional response and its expression.

7 Conclusions and Current Directions

Our results show that acoustic-prosodic and lexical features can be used to automatically predict student emotion in computer-human tutoring dialogues. We examined emotion prediction using a classification scheme developed for our prior human-human tutoring studies (negative/positive/neutral), as well as using two simpler schemes proposed by other dialogue researchers (negative/non-negative, emotional/non-emotional). We used machine learning to examine the impact of different feature sets on prediction accuracy. Across schemes, our feature sets outperform a majority baseline, and lexical features outperform acoustic-prosodic features. While adding identifier features typically also improves performance, combining lexical and speech features does not. Our analyses also suggest that prediction in consensus-labeled turns is harder than in agreed turns, and that prediction in our computer-human corpus is harder and based on somewhat different features than in our human-human corpus.

Our continuing work extends this methodology with the goal of enhancing ITSPOKE to predict and adapt to student emotions. We continue to manually annotate ITSPOKE data, and are exploring partial automation via semi-supervised machine learning (Maeireizo-Tokeshi et al., 2004). Further manual annotation might also improve reliability, as understanding systematic disagreements can lead to coding manual revisions. We are also expanding our feature set to include features suggested in prior dialogue research, tutoring-dependent features (e.g., pedagogical goal), and other features available in our logs (e.g., semantic analysis). Finally, we will explore how the recognized emotions can be used to improve system performance. First, we will label human tutor adaptations to emotional student turns in our human tutoring corpus; this labeling will be used to formulate adaptive strategies for ITSPOKE, and to determine which of our three prediction tasks best triggers adaptation.

Acknowledgments

This research is supported by NSF Grants 9720359 & 0328431. Thanks to the Why2-Atlas team and S. Silliman for system design and data collection.

References

G. Aist, B. Kort, R. Reilly, J. Mostow, and R. Picard. 2002. Experimentally augmenting an intelligent tutoring system with human-supplied capabilities: Adding Human-Provided Emotional Scaffolding to an Automated Reading Tutor that Listens. In Proc. Intelligent Tutoring Systems.

V. Aleven and C. P. Rose, editors. 2003. Proc. AI in Education Workshop on Tutorial Dialogue Systems: With a View toward the Classroom.

J. Ang, R. Dhillon, A. Krupski, E. Shriberg, and A. Stolcke. 2002. Prosody-based automatic detection of annoyance and frustration in human-computer dialog. In Proc. International Conf. on Spoken Language Processing (ICSLP).

A. Batliner, K. Fischer, R. Huber, J. Spilker, and E. Nöth. 2000.
Desperately seeking emotions: Actors, wizards, and human beings. In Proc. ISCA Workshop on Speech and Emotion.

A. Batliner, K. Fischer, R. Huber, J. Spilker, and E. Nöth. 2003. How to find trouble in communication. Speech Communication, 40:117-143.

K. Bhatt, M. Evens, and S. Argamon. 2004. Hedged responses and expressions of affect in human/human and human/computer tutorial interactions. In Proc. Cognitive Science.

C. Conati, R. Chabbal, and H. Maclaren. 2003. A study on using biometric sensors for monitoring user emotions in educational games. In Proc. User Modeling Workshop on Assessing and Adapting to User Attitudes and Affect: Why, When, and How?

L. Devillers, L. Lamel, and I. Vasilescu. 2003. Emotion detection in task-oriented spoken dialogs. In Proc. IEEE International Conference on Multimedia & Expo (ICME).

K. Forbes-Riley and D. Litman. 2004. Predicting emotion in spoken dialogue from multiple knowledge sources. In Proc. Human Language Technology Conf. of the North American Chap. of the Assoc. for Computational Linguistics (HLT/NAACL).

A. Graesser, K. VanLehn, C. Rose, P. Jordan, and D. Harter. 2002. Intelligent tutoring systems with conversational dialogue. AI Magazine.

P. W. Jordan, M. Makatchev, and K. VanLehn. 2004. Combining competing language understanding approaches in an intelligent tutoring system. In Proc. Intelligent Tutoring Systems.

B. Kort, R. Reilly, and R. W. Picard. 2001. An affective model of interplay between emotions and learning: Reengineering educational pedagogy building a learning companion. In International Conf. on Advanced Learning Technologies.

C. M. Lee, S. Narayanan, and R. Pieraccini. 2001. Recognition of negative emotions from the speech signal. In Proc. IEEE Automatic Speech Recognition and Understanding Workshop.

C. M. Lee, S. Narayanan, and R. Pieraccini. 2002. Combining acoustic and language information for emotion recognition. In International Conf. on Spoken Language Processing (ICSLP).

D. Litman and K. Forbes-Riley. 2004. Annotating student emotional states in spoken tutoring dialogues. In Proc. 5th SIGdial Workshop on Discourse and Dialogue.

D. Litman and K. Forbes. 2003. Recognizing emotion from student speech in tutoring dialogues. In Proc. IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).

D. Litman and S. Silliman. 2004. ITSPOKE: An intelligent tutoring spoken dialogue system. In Companion Proc. of the Human Language Technology Conf. of the North American Chap. of the Assoc. for Computational Linguistics (HLT/NAACL).

D. J. Litman, C. P. Rosé, K. Forbes-Riley, K. VanLehn, D. Bhembe, and S. Silliman. 2004. Spoken versus typed human and computer dialogue tutoring. In Proc. Intelligent Tutoring Systems.

B. Maeireizo-Tokeshi, D. Litman, and R. Hwa. 2004. Co-training for predicting emotions with spoken dialogue data. In Companion Proc. Assoc. for Computational Linguistics (ACL).

S. Narayanan. 2002. Towards modeling user behavior in human-machine interaction: Effect of errors and emotions. In Proc. ISLE Workshop on Dialogue Tagging for Multi-modal Human Computer Interaction.

P-Y. Oudeyer. 2002. The production and recognition of emotions in speech: Features and algorithms. International Journal of Human Computer Studies, 59(1-2):157-183.

I. Shafran, M. Riley, and M. Mohri. 2003. Voice signatures. In Proc. IEEE Automatic Speech Recognition and Understanding Workshop.

K. VanLehn, P. W. Jordan, C. P. Rosé, D. Bhembe, M. Böttner, A. Gaydos, M. Makatchev, U. Pappuswamy, M. Ringenberg, A. Roque, S. Siler, R. Srivastava, and R. Wilson.
2002. The architecture of Why2-Atlas: A coach for qualitative physics essay writing. In Proc. Intelligent Tutoring Systems.

I. H. Witten and E. Frank. 1999. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations.